Access to fast, affordable, and open broadband for users and developers alike is, I believe, the single most important driver of innovation in our business. The FCC will likely vote next week on a framework for net neutrality. We got aspects of this wrong ten years ago; we can’t afford to be wrong again. For the reasons I outline below, we are at an important juncture in the evolution of how we connect to the Internet and how services are delivered on top of the platform. The lack of basic “rules of the road” for what network providers and others can and can’t do is starting to hamper innovation and growth. The proposals aren’t perfect, but now is the time for the FCC to act.
Brad Burnham stopped by our office earlier this week to talk about his proposal for the future of net neutrality. The FCC has circulated a draft set of rules on neutrality that the Commission will likely vote on next week. Though the rules are not public, Chairman Genachowski outlined their substance last week. From a combination of the Chairman’s talk, the Waxman proposal, and the Google/Verizon proposal, one can piece together the substance of the issue and understand its opportunities and risks. I strongly support much of what the Chairman has proposed, and I support the clarifications that Burnham outlines. But before discussing this further, I have to ask: why does this matter now? Over the past few years there has been a lot of discussion, a lot of promises, and some proposals with regard to net neutrality. Here are three reasons why this matters now:
1. The Internet and how we build things on the network is undergoing meaningful change as we transition to broadband and wireless access.
Network providers are making significant capital commitments that will shape access to networks in coming years. Despite this, the US is behind in both broadband and wireless connectivity. Only 65% of American households have broadband access, compared to 90% of households in South Korea. It is important to note that not all access is created equal. A study from earlier this year puts the US in 18th place with an average of 3.8 Mbps downstream, compared to an average of 14.6 Mbps in South Korea. The US is now 22nd in terms of downstream broadband speed, behind Latvia and the Czech Republic. The story is the same on a price-per-megabit basis: in the US, we pay $40 per month for an average of 3.9 Mbps, compared to $45 per month in France for a 20-30 Mbps connection (plus VoIP service and HDTV with DVR to boot).
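To make the price-per-megabit comparison concrete, here is a quick back-of-the-envelope calculation using the figures above. These are rough advertised averages, not measured throughput, and the French figure takes the midpoint of the 20-30 Mbps range:

```python
# Rough figures from the comparison above: advertised averages,
# not measured throughput. The French speed is the midpoint of
# the quoted 20-30 Mbps range.
plans = {
    "US": {"monthly_usd": 40, "mbps": 3.9},
    "France": {"monthly_usd": 45, "mbps": 25},
}

for country, plan in plans.items():
    per_mbit = plan["monthly_usd"] / plan["mbps"]
    print(f"{country}: ${per_mbit:.2f} per Mbps per month")

# US: $10.26 per Mbps per month
# France: $1.80 per Mbps per month
```

By this measure a megabit of downstream capacity costs roughly five to six times more in the US than in France, before counting the bundled services.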
As I said at the outset, access to fast, affordable broadband for users and developers is, I believe, the single most important driver of innovation in our market. We got this wrong ten years ago—we don’t have a competitive market for broadband today, access is inconsistent, prices are high and speeds are often anemic—and we can’t afford to be wrong again. The structural separation approach that the Europeans took a decade ago yielded cheap, fast access in their market. I believe this access has been the most significant factor in the advancement of European Internet innovation. Even so, the European approach is now reaching its limits. The transition to wireless Internet access provides an opportunity, and as the network becomes more diverse, common technical standards become essential. An uneven experience across various platforms will fragment innovation and promote gatekeepers’ ability to tax applications. Match this situation with the embedded conflicts of interest in the delivery of video over DOCSIS, or wireless vs. over-the-top IPTV, and you get a sense of the network complexities at hand. As Chairman Genachowski pointed out, we need “rules of the road,” and now is the time to act.
2. Most of the innovation that has taken place online over the past 15 years was born out of a handful of architectural decisions. Two of these decisions are now being challenged.
Non-discriminatory handling of bits and the clear definition of layers (i.e. the logical separation of conduit and content) that make up the Internet stack are two of the key architectural foundations of the network. The fact that bits containing applications, images, text or videos are handled in the same manner is central to how the Internet works. Network providers can shape or manage traffic on an aggregate, best-effort basis, but singling out an application, or any content within an application or page, will change the way the network is used. Specifically, it will hamper innovation by end-users: individuals, developers and new or existing companies. Similarly, the layers are building blocks that are vital to how we develop and build Internet companies. This goes back to seminal pieces of Internet literature like David Isenberg’s “The Rise of the Stupid Network.” I agree that, in the short term, tightly coupled systems can provide more efficient means to drive end-to-end innovation when you know precisely what you want to build. But I fundamentally believe that the essence of innovation is that you don’t usually know exactly what you want to build.
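The distinction between managing traffic in the aggregate and singling out an application can be made concrete with a toy sketch. The packet labels and policies below are hypothetical illustrations, not any provider’s actual traffic-management code:

```python
from dataclasses import dataclass


@dataclass
class Packet:
    size_kb: int
    app: str  # hypothetical label, e.g. "video", "voip", "web"


def agnostic_policy(packet: Packet, congested: bool) -> str:
    """Application-agnostic management: the decision depends only on
    aggregate network conditions, never on what the bits contain."""
    return "defer" if congested else "forward"


def discriminatory_policy(packet: Packet, congested: bool) -> str:
    """Application-specific discrimination: the network singles out a
    class of traffic (here, video that competes with the provider's
    own service) -- the behavior the rules would prohibit."""
    if packet.app == "video":
        return "throttle"
    return "forward"


p = Packet(size_kb=1500, app="video")
print(agnostic_policy(p, congested=False))       # forward
print(discriminatory_policy(p, congested=False))  # throttle
```

The first policy can still manage congestion; it just cannot see, and therefore cannot punish, a particular application. The second inspects what the packet carries, which is exactly the discrimination at issue.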
Innovators aim to solve problems—they start in one place and then they iterate. All too often real innovation is simply stumbled upon. Ideas and companies evolve (or pivot, as we now call it) as they better understand the problem they are seeking to solve. The Internet has demonstrated time and time again that loosely coupled systems and edge-based innovation is what drives the kind of massive change we have seen over the past two decades. This freedom to create “on the edge”, and to evolve ideas, is what gets me up in the morning and keeps me up late at night.
Like all good architecture, these structural principles are remarkably resilient to change and scale. There have been continual challenges to them over the past few decades, but this has all been part of the persistent tension in a network between centralization and decentralization. Today, given the transition to wireless and broadband access, the challenges are more fundamental, as network providers attempt to change these building blocks as preconditions to future investment. Conflating access (and control of access) with control of the stack of the open Internet is wrong.
3. Edge-based innovation has been the driver of change and creativity online, yet the edge has no single representative.
The edge-based innovation I speak of is predicated on access to a handful of things, and the persistent tension between centralization and decentralization is a hallmark of a healthy web, evident in debates going back to Napster, CompuServe and AOL and, more recently, Facebook and Wikileaks. We have many more native Internet companies than we did ten years ago. Though these native Internet companies come from the edge, no single company represents the edge. Moreover, as companies scale, they become increasingly misaligned with the edge. Google, Amazon, Facebook, eBay, and Yahoo, for example, all came from edge-based innovation but no longer represent the edge. Despite intentions to the contrary, there is a natural evolutionary path through which a large company becomes less likely to let edge-based innovations flourish and more likely to preserve the status quo. The center is over-represented in Washington DC, and the edge needs a louder voice. That’s up to us and, most likely, also up to you.
So what to do? As Burnham outlines, there are a handful of areas that merit attention. The key points are:
Application discrimination and specialized services
Burnham advocates Barbara van Schewick’s approach of banning all application-specific discrimination. I believe this approach can work because, in effect, it describes how the network works today. It is hard to know exactly where to draw lines here, but we know what we think when a network provider discriminates against a specific application or specific content: we know it when we see it. Van Schewick proposes a generalized rule to ensure that this discrimination does not happen. If you doubt this approach, read the Zedevia letter as evidence that companies hesitate to invest without clarity; companies need clearly drawn lines. How much edge-based telephony (i.e. voice-based communication) innovation have you seen on the iPhone? Not a lot. Today the list of issues and examples of discrimination is growing, as the adoption of over-the-top services (IPTV, etc.) places pressure on the cable companies’ video-based revenues and the wireless companies’ voice and data revenues. Network management should be application-agnostic, and the definition of an application should include apps, sites, and web services. To the extent that network providers want to bring specialized services to market, they should be able to do so, but those services need to be distinguished from the open Internet.
The arguments that wireless should be treated separately from wireline are, to my mind, specious at best. Though wireless network providers manage their networks differently than wireline providers do (given the need to share a limited resource among varying densities of users), wireless providers, like wireline providers, should not have the ability to discriminate against specific content, sites or applications.
Furthermore, application developers need uniformity of standards at the lower levels of the stack to be able to build products and services in a seamless manner at the higher levels. For example, we are currently building a social reading service that will ship as an iPad application. It includes an interface that distills content streams that should be of interest to you, the reader. The content is then displayed inline, regardless of whether it is text, images or videos. Imagine you use this iPad application on your home network. All images, text, and videos are displayed and usable. Now imagine that you take your iPad to the park and fire up the same application over a 3G or 4G wireless connection, and all of a sudden the videos won’t work. Not that they are slow; they just won’t work, given the plan you are on.
Increasingly, users expect experiences to be the same regardless of connection type. Devices like the iPad are designed to be used in many environments; the idea that connectivity should dictate experience is becoming antiquated. Distinctions that network providers make between wireline and wireless should be limited to the physical layer of the stack. People who are creating companies should not have to build for two different networks. Commissioner Clyburn got this right when she recently said: “We should ensure that, while there are two kinds of networks, we don’t cause the development of two kinds of Internet worlds.” She continues, “Some have raised the issue that different rules are needed in the wireless arena because it is more competitive than the wired world. But I believe we cannot ignore the fact that there are many features of the wireless market that create high switching costs, such as exorbitant ETFs and a lack of handset compatibility across carriers.” Wireless will be the primary means of Internet access for most users; carving it out of an agreement, or limiting the rules to Internet websites (vs. websites, applications, or services, as Waxman proposed), would, I believe, be a mistake.
Since my work years ago on the Microsoft Antitrust trial, I have been an adamant believer in minimizing the role of government as it relates to technology policy. Nonetheless, if government has a role in technology policy it is right here. Our business at betaworks is predicated on a thriving market for early-stage tech innovation at the content and application layer. Most of the businesses we have built or funded would not exist without the assumed freedoms that formed the platform we call the Internet.
We now have the same opportunity that we faced a decade ago. We can support the FCC in putting in place “rules of the road” to enforce basic tenets, or we can continue down a path that de facto leaves these decisions in the hands of large companies with limited oversight, no transparency, and no means of enforcement. The pace of innovation today is staggering, yet there are walled gardens that are becoming increasingly difficult for small startups to surmount. I hope the FCC and the Chairman will take a bold step forward, and that the result is something we can work with to scale the next decade of innovation in this sector.
If you agree that net neutrality is worth fighting for, do something about it, starting with making some noise.
Tell your friends on Facebook why this is important
Post this on Tumblr
Retweet this to your followers on Twitter
John Borthwick is CEO of betaworks. betaworks is a technology company that operates as a studio. betaworks builds new products, runs companies and seed invests. Prior to betaworks John was Senior Vice President of Alliances and Technology Strategy for Time Warner Inc. John’s company, WP-Studio, founded in 1994, was one of the first content studios in New York’s Silicon Alley. John holds an MBA from Wharton (1994) and an undergraduate degree BA...
Founded in 2008, betaworks is a company of builders. A tightly linked network of ideas, people, capital, products and data brought together in imaginative ways to build out a more connected world. At first glance we seem to do many things. But first and foremost, we’re builders, seeking to create a more sustainable innovation model. The more we build, the more we learn, the more we get ideas for peripheral things, all related, connected – in a loosely...