I have, over the years, been involved in a number of standards groups and efforts, from the W3C to the IEEE to US and Canadian health care standards, sometimes devoting hundreds or even thousands of hours attempting to craft something that will improve interoperability. Sometimes these efforts have succeeded well. Sometimes we were able to craft a reasonably good solution only to fail to see it adopted, and sometimes things never even coalesced to the point where we could reach a compromise. While there are many reasons why one standard may flourish and another one flounder, there are usually a few core causes when things go wrong.
Too Much Invested Ego
This is a guaranteed standardization killer. Someone within an organization wants their name at the top of a standard, whether because they see it as a way of establishing a legacy, they believe it gives their company a competitive edge, they are convinced that their views are better than everyone else's, they see it as a great marketing vehicle, or for some related form of vanity.
You need a strong personality to put together a standard, because that person is in effect bucking the status quo, recognizing that there is a strong need for consensus and interoperability that is not currently being met. However, for every person who was lionized for establishing a standard, there are hundreds who did the hard work of building consensus, testing hypotheses, and ensuring the standard was actually useful, and who remain known only to a handful of people in very narrow fields, even when the standard itself has proven a success. For instance, it's quite likely that you have never heard of Philo Farnsworth, even though he had a profound impact on billions of people globally (and if you don't know who he is, Google him).
Standards are, by their very nature, consensual – an agreement by different parties to put aside their own self-interests in order to improve a technology or a field of knowledge. The moment ego gets in the way, standards efforts almost invariably fall apart, because people and organizations are no longer willing to compromise if they get nothing out of it.
Lack Of Motivation
There's a principle called the Power Law that plays out in technology: one party or company tends to dominate a given space most of the time, effectively dictating how a given technology is implemented (and frequently who gets paid for it), with a second player making up all but a small sliver of the remaining space. In this scenario, the dominant player has no compelling reason to give up a competitive advantage in the market.
It's unusual for a company to maintain dominance for more than a couple of decades, primarily because as companies become more established, they invite more competition, and every so often an event occurs that upends the status quo. At that point, multiple organizations (both corporate and governmental) will often use the opportunity to call for an agreed-upon standard for interoperability. Especially when regulatory bodies are involved, this can force the development of a standard, one in which the wounded incumbent agrees to cede its monopoly position in exchange for agreements in other areas (the closest analogy here may be King John and the Barons in the England of 1215, which ultimately led to the establishment of Parliament).
Sometimes these standards succeed, and sometimes they are Pyrrhic victories – the benefit of standardization didn't prove worth the fight.
Technological Obsolescence
In the late 1990s, the newly emergent cellular phone industry agreed upon a standard called Wireless Markup Language (WML) that would provide a way to create simpler user interfaces for mobile applications, on the assumption that none of the primary players could implement a full HTML-based browser in a phone interface. The group came up with a very watered-down version of HTML with significant limitations, taking quite a few years to reach consensus even on this subset. Yet in less than five years, the engineering of smartphones had reached a stage where such a standard became superfluous, because the reason for the standard in the first place no longer held. The standard was abandoned shortly after being published.
Standards can take months, or even years, to pull together, long enough that obsolescence can be a real problem. Standards that try to be too all-encompassing may find that the ground rules have changed by the time the standards are published, with new technologies solving a problem faster than the standards can. This can be especially true when the standard lacks flexibility or was oriented towards one particular solution that failed to gain traction in the industry.
Green Field Syndrome
Sometimes standards organizations propose a solution where implementers have to give up what they have already built in order to accommodate it. In effect, the standards provider says it is worth starting over from a blank slate, or greenfield, to build up a superior infrastructure. The problem with this is that the deeper the infrastructure, the more complex and costly it becomes to change. Even relatively simple changes (such as the move from IPv4 to IPv6 addresses) can take years of testing and implementation, and even a new, mostly backward-compatible solution can consume huge resources to bring about.
In general, any standards developer's latitude will be defined primarily by the degree to which a niche already exists for that particular standard. HTML and HTTP are prime examples of standards that succeeded because there was a great deal of potential (need) and very little established solution. By the time XML came along a few years later (pushed by the SAME standards group), it was competing against HTML, and an XML-based version of HTML (XHTML) failed to gain much traction within the browser vendor market, which had optimized around the HTML standard only a few years before. Despite the considerable benefit of HTML being XML-compliant, the greenfield approach failed because the entrenched industry (browser vendors) didn't want to lose its hold on the market.
This is one of the reasons why, when developing standards, it usually behooves the authors to ask how much of the proposed standard can already be accomplished with existing standardized technology. This is especially true when the differences come down to syntactic sugar or to metadata that can be expressed externally to existing solutions.
The Adoption Factor
In 2006, I gave a keynote address at the SVG Conference in the Netherlands, a gathering of a few hundred of us who had been working hard to get the Scalable Vector Graphics standard out the door and into use. It was a tumultuous and frustrating time. The standard, proposed by the W3C, provided a way to specify graphical documents declaratively using XML, making it possible to create graphics that could scale to any size without loss of fidelity, that could be changed programmatically, and that could even be distributed across multiple components.
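To give a sense of that declarative approach, a complete SVG document can be as small as the following sketch (the element and attribute names are from the SVG specification; the particular shapes are just an illustration):

```xml
<!-- A minimal SVG document. The viewBox establishes an abstract
     coordinate system, so the image scales to any rendered size
     without loss of fidelity. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100" width="200" height="200">
  <circle cx="50" cy="50" r="40" fill="steelblue"/>
  <!-- Each shape is an ordinary XML element, so it can be created,
       styled, or modified programmatically (e.g., from script in a browser). -->
  <rect x="30" y="30" width="40" height="40" fill="none" stroke="black"/>
</svg>
```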
The challenge, ultimately, was that the browser vendors at the time were pushing their own competing graphical standards, and when one of the last holdouts, which had created a viewer for SVG graphics that was largely compliant with the specification, finally pulled the plug on that project, SVG as a standard was mostly dead for the next several years.
However, a funny thing happened in 2009. One of the Linux distros built a new component into its platform to support SVG, and then a major browser vendor implemented a modest but functional SVG renderer. For a while, SVG became one of the compromise formats for clip-art, because it was scalable, and it proved remarkably useful for building masks and filters for compositing imagery. As the ECMAScript revolution revised JavaScript over the course of nearly a decade, SVG ended up becoming the go-to language for rendering graphs, charts, and information graphics, even over the alternative canvas component, because it DID handle many of the edge cases that canvas couldn't.
The lesson there is that a standards-based solution can succeed, but primarily because it addresses a need that only emerges when cross-environment behavior matters. SVG is far more prevalent on the mobile internet today than on the desktop for precisely that reason, but in 2006, mobile was a fairly minor use case for the people evangelizing the standard.
Failure to Interoperate
Most standards that succeed do so largely because they make it easier to provide interoperability between different systems. During the early days of the railroad expansion in the nineteenth century, each rail company built its tracks to a different gauge (track width) as it built out its network. Eventually, however, it became obvious that the cost of each company laying its own tracks in the same area was significant, and, reluctantly, they agreed to standardize on a single consistent gauge, allowing trains from one transport company to run on others' lines.
The use of standards to force development has had a fairly mixed track record. Far more usual is the case where a standard exists as an agreement between industry vendors, governmental agencies, and interested NGOs to give up a competitive interest in technology to build out a platform that provides a potentially larger playing field for all concerned.
Linux was an audacious attempt by one person (Linus Torvalds) to create an operating system that no single corporation controlled. From its inception in 1991, Linux became the darling of computer science geeks everywhere. Still, it had difficulty gaining commercial traction on the desktop, which Microsoft effectively dominated through the late 1990s. Linux instead made inroads in servers, because most servers did not need a complex GUI layer so much as a platform for interoperability between vendors, especially given the comparatively small margins on server hardware. When smartphones emerged in the late aughts, Google used a Linux base to build Android, which became one of the most pervasive operating systems globally. By the mid-2010s, Microsoft's new CEO had thrown in the towel and begun adapting Windows to run at least a subset of the Linux operating system.
Interoperability is key to standardization. It is usually initially an abstraction layer that sits on top of an existing software stack, providing ways for either data or functionality to work across systems. Once such interoperability exists, subsequent versions of the underlying technologies can strengthen and optimize that abstraction to perform better on both systems. Standards that fail to get to this level generally fail to thrive.
Failure to Test
Most standards bodies have adopted a practice called viability testing, which ensures that an organization can implement a given standard, given sufficient resources. In some organizations this takes the form of a requirement that at least two different organizations (vendors or open source projects) implement the standard as proposed. This isn't always the case, though in many respects it should be.
Multiple implementations provide what amounts to scientific validation. Is the standard implementable without proprietary software? Does it have a dependency upon particular (and limited) hardware? Are specific assertions too ambiguous, opening the way for multiple, potentially contradictory behaviors? Are there assumptions about underlying models that are likely to differ between implementations? Without multiple independent implementations to test viability, a technology can become standardized with significant holes that open up room for forking as different organizations try to resolve the discrepancies in their own ways.
Failure to Evolve
Standards-driven technology can also fail even by being successful. XSLT, a standard for transforming XML content, is by many measures a successful technology. However, most XSLT processors were built around Java, and Java defaulted to the use of Xalan in the early days of the XML revolution, around 2000. One of the most consistently powerful implementations of XSLT, the Saxon processor, can be used in place of Xalan with a two-line change in a Java implementation, but because this wasn't the default, later versions of XSLT (which Saxon incorporated but Xalan didn't) have seen only modest growth over time. It's worth noting that Saxon's full-featured editions are neither free nor open source (though they are modestly priced), and as such they run afoul of requirements for fully open source stacks, requirements that failed to take into account the middle ground of "mildly licensed" products.
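The "two-line change" mentioned above can be sketched as follows. This is an illustration based on the standard JAXP mechanism for selecting a `TransformerFactory`; the property name is part of JAXP, and `net.sf.saxon.TransformerFactoryImpl` is Saxon's published factory class (the Saxon jar must be on the classpath for the factory to actually load):

```java
// Sketch: switching a Java application from the default XSLT engine
// (historically Xalan) to Saxon via the standard JAXP lookup property.
public class SaxonSwitch {
    public static void main(String[] args) {
        // Line 1: tell JAXP which TransformerFactory implementation to use.
        System.setProperty("javax.xml.transform.TransformerFactory",
                           "net.sf.saxon.TransformerFactoryImpl");
        // Line 2 (requires Saxon on the classpath, so shown commented out here):
        // javax.xml.transform.TransformerFactory factory =
        //         javax.xml.transform.TransformerFactory.newInstance();
        // Every subsequent transform then runs through Saxon instead of Xalan.
        System.out.println(System.getProperty("javax.xml.transform.TransformerFactory"));
    }
}
```

Because the switch lives in application code rather than in the standard's defaults, each project had to opt in individually, which is exactly why the better engine saw only modest uptake.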
This highlights another important point – following open standards does not mean that you are also building an open source solution. These are not the same thing. Open standards provide a common framework for communication and processing, but they say nothing about whether an implementation is free (open source) or commercial in nature.
Conclusion
Standards are important for communication between organizations, whether at the human or the machine level. Despite this, simply asserting that something is a standard does not make it so: there has to be a consensus that all parties involved will adhere to it, something that can be difficult to achieve in a highly charged and competitive climate. Understanding this can go a long way towards helping the parties involved make a standard a success.