Evolving Internet standards beyond ‘rough consensus and running code’

One of the more useful things about science fiction (and I am enamored of science fiction) is not just the science or the fiction but the more thoughtful plot devices used to drive the larger narrative. One of the most remarkable is the “three laws of robotics” construct invented by the science fiction maestro Isaac Asimov. Today saying ‘robots’ is so passé; it’s 2014, after all. But when Asimov came up with the concept in 1942 or earlier, it was nothing short of a monumental leap in imagination – there had been no silicon revolution, no computer revolution (ENIAC, UNIVAC and their ilk), and we had barely moved from the steam economy to the combustion-engine economy.

So it is striking that Asimov did not simply explore the still-wonderful idea of mechanical machines doing our bidding. He also laid out a set of ethical and philosophical ‘laws’ that had to be satisfied before any of those machines came into existence or were manufactured. This was a recognition that what was at stake was not just a matter of technical construction finesse, but a matter of purpose and principle.

In contrast, when the pioneers of the current internet began thinking about and building it, they turned to Requests for Comments (RFCs) as a way to build common functional consensus. An RFC, according to Wikipedia, is “a memorandum describing methods, behaviours, research, or innovations applicable to the working of the Internet and Internet-connected systems. … The IETF adopts some of the proposals published as RFCs as Internet standards.” Thus RFCs are much closer to functional specs than anything else; in fact they often serve as unadulterated input into developer work items – when I was working at Wind River Systems, the developer I worked with wrote PIM-SM entirely from the RFC (the correctness of the implementation remains to be seen).

This stems directly from the ethos of the early work of the Internet Engineering Task Force (IETF), one of the main standards-making bodies of the Internet. Its earliest organizing mantra was: “We reject: kings, presidents, and voting. We believe in: rough consensus and running code.”1 This basically libertarian credo has been the main ideological underpinning of internet systems for a very long time. But it is not just libertarian; it is also a very low-level, functional point of view, and it summarily dispenses with the goals and principles of the complex systems these standards create. In short, the creators and sustainers of the Internet’s systems have given very short shrift to weighing in on the purposes their creation can be put to.

I don’t want to sound judgmental. By taking an essentially tabula rasa view of the internet, its progenitors have allowed marvelous things to evolve from the primordial slime of TCP/IP. To use another analogy: toddlers are told what to do; it is only as they grow into adulthood that goals are introduced, giving their complex ingenuity the scope to choose among multiple alternative courses of action.

The internet has reached adulthood. We have very complex emergent behavior connected to potentially billions of devices, which in turn touch the lives of billions of people. Creating standards that mandate certain principles and goals, and that integrate ethics, is crucial if humanity is to maintain some kind of control over the direction of global innovation in internet-connected technology. We are already living in one of many possible realities of innovation. In this freewheeling reality we live in constant fear of the intelligence community’s (IC) snooping and subversion, the threat of 0-day attacks, corporate co-option of internet commerce, employee surveillance, and so on. Yet these are all basically architectural choices. Privacy protections can be embedded in the way protocols are approved as standards, subversion of agreed ethics can carry sanctions, and embedded nodes can be required to integrate automatic update processes to improve our defenses against vulnerable internet-of-things devices.

My biggest pet peeve concerns the emergence of connected, unattended devices: your router, your Fitbit, your smart weight scale, and so on. Once these things are released into the market, no one seems to take responsibility for updating their firmware and software to patch security issues. We should have rules along the lines of: a) if you sell a connected device, you are liable for any harm that comes from its connectivity functions; b) keeping its software patched against the latest 0-days is also your responsibility; c) the update system on each device should be isolated and resistant to compromise; and many more besides.
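To make point (c) a little more concrete, here is a minimal, hypothetical sketch of what a device-side update check could look like. It assumes the vendor ships each firmware image with a detached Ed25519 signature and a monotonically increasing version number, and that the device holds only the vendor’s public key; the function, parameters and file paths are illustrative, not taken from any particular product or standard.

```python
# Hypothetical sketch only: refuse to apply a firmware image that is not
# signed by the vendor, or that would roll the device back to an older,
# possibly vulnerable build. Uses the 'cryptography' package for Ed25519.

from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_and_stage_update(image: bytes,
                            signature: bytes,
                            vendor_public_key: bytes,
                            new_version: int,
                            installed_version: int,
                            staging_path: Path = Path("/updates/staged.img")) -> bool:
    """Return True only if the image is authentic and strictly newer than the installed build."""
    # 1. Authenticity: the detached signature must verify against the
    #    vendor's public key, which is baked into the device at manufacture.
    public_key = Ed25519PublicKey.from_public_bytes(vendor_public_key)
    try:
        public_key.verify(signature, image)
    except InvalidSignature:
        return False  # tampered or unsigned image: do not touch it

    # 2. Anti-rollback: never "update" to an older build, since an attacker
    #    could otherwise reintroduce a known vulnerability.
    if new_version <= installed_version:
        return False

    # 3. Hand the verified image to the isolated updater, e.g. a staging
    #    partition the main application firmware has no write access to.
    staging_path.write_bytes(image)
    return True
```

Nothing in the sketch is technically hard; a signature check and a rollback check are a few dozen lines. What is missing today is a standard, with teeth, that obliges vendors to ship and maintain exactly this kind of machinery.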

We live in a more complex world, and new kinds of standards are required to make the internet work for all of us. We are way beyond “rough consensus and running code”. We need standards that are as ambitious and expansive as Asimov’s three laws of robotics, which don’t just describe what it takes to build a system but also articulate the limits of what it can legitimately be employed to do. Or else.

1. http://www.niso.org/about/documents/strategic_plan/strategic_dir_preview.pdf
