The question of broadband metering is becoming more important by the day. And while there’s much to be discussed regarding the cost of bandwidth, the trends of consumption, the public money involved in the infrastructure, and so on, one basic fact today is this: AT&T wants to put caps on your bandwidth, but they can’t be trusted to measure it correctly. That’s not a situation consumers should take without protest.
Readers over at Broadband Reports are noticing marked differences between AT&T’s measurements and their own. One user found differences of several orders of magnitude. Now, if AT&T (and of course Comcast and others) are unwilling to allow for wiggle room in their GB caps (fees start at the first byte over 250GB), why should we allow wiggle room in their measurements? After all, we don’t let grocers use poorly (or maliciously) calibrated scales.
If we’re going to be paying by the byte, we need real legal protections against being taken advantage of by companies that have their customers over a barrel. The average AT&T customer would likely recognize if their electricity bill was far more than they expected, and of course at the grocery store, they’d be surprised and concerned to find that a single apple tips the scale at ten pounds. But if they were told that they’d exceeded their bandwidth limit (uncommon today, but bandwidth use is growing as streaming video becomes more accessible), what could they say? Unlike the readers of Broadband Reports, they don’t know how to tell their router to track packets, or set up a software monitor — many would be hard-pressed to access the online meter provided by AT&T.
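For the technically inclined, the accounting itself is trivial; here’s a minimal sketch of what a do-it-yourself monitor might look like. The counter source is an assumption on my part: any cumulative per-interface byte counter would do, such as the third-party psutil library’s net_io_counters() or /proc/net/dev on Linux.

```python
# A minimal sketch of a do-it-yourself usage monitor. The accounting is
# separated from the counter source so it can read either a real
# interface counter (e.g. the third-party psutil library's
# net_io_counters(), or /proc/net/dev on Linux) or canned test data.

def usage_deltas(read_counter, samples):
    """Yield bytes transferred between successive counter readings.

    read_counter must return a cumulative byte count (sent + received),
    the way OS interface counters do; each yield is one interval's delta.
    In a real monitor you would sleep between readings.
    """
    prev = read_counter()
    for _ in range(samples):
        cur = read_counter()
        yield cur - prev
        prev = cur

def billing_period_total(deltas):
    """Sum interval deltas into a billing-period total, in bytes."""
    return sum(deltas)
```

Of course, this only counts what your machine sees, which is exactly the point: whether that number should match the one on your bill is the open question.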
Without a legally standardized, reliable, and understandable way to track the bandwidth we’re using, we’re completely at the mercy of the telecoms. There are legitimate questions as to how traffic should be tracked. Is it measured before or after the router? Should it require separate hardware? Will there be exceptions for “promotional” packets, overhead introduced by the service, bits we didn’t request, or Wi-Fi hijacking? If the service is down, will we be reimbursed? At what rate, and by whose measurements? These aren’t trifling technicalities or rounding errors. They’re essential regulatory questions that mean the difference between being charged for what you use and being charged whatever they say. It’s a fundamental conflict of interest that the telecoms are the ones tracking this usage.
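To see why the “where do we count” question isn’t academic, here’s a back-of-the-envelope calculation. The header sizes are textbook values, not AT&T’s actual accounting, which we don’t know:

```python
# Back-of-the-envelope: how much "usage" depends on where you count.
# Header sizes are textbook values (Ethernet II + FCS: 18 bytes,
# IPv4: 20, TCP: 20) -- illustrative assumptions, not AT&T's accounting.

ETH_HEADER = 18
IP_HEADER = 20
TCP_HEADER = 20
PER_PACKET_OVERHEAD = ETH_HEADER + IP_HEADER + TCP_HEADER  # 58 bytes

def wire_to_payload_ratio(payload_bytes, mss=1460):
    """Ratio of bytes on the wire to application payload bytes,
    assuming full-size TCP segments with the given MSS."""
    packets = -(-payload_bytes // mss)  # ceiling division
    wire_bytes = payload_bytes + packets * PER_PACKET_OVERHEAD
    return wire_bytes / payload_bytes

# Measuring a 250GB month on the wire vs. at the application:
print(round(wire_to_payload_ratio(250 * 10**9), 3))  # ~1.04
```

With full-size packets the wire count runs roughly 4% above the application count, and smaller packets push it higher. A 4% disagreement on a 250GB cap is 10GB, which under a per-byte scheme is real money.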
How best to proceed isn’t really clear, but here’s what I’m thinking: a few pioneering cities or counties (depending on the jurisdiction required) should implement pilot programs with simple, certified, publicly-developed hardware designed to count bits accurately and report them securely (a big university would love to design this). Work out the kinks with a study of a town or neighborhood, make the device (or integration of one into a router or cable modem, or central cable box) required by law, and with luck others will follow suit. Yeah, it’s rather optimistic, and the money will have to come from somewhere, but it’s not complicated and it is necessary.
AT&T has, predictably, attempted to account for the huge discrepancies by suggesting user error. They say they’re working tirelessly to ensure accuracy. Yes, but whose accuracy, AT&T? Your accuracy or mine?
Update: AT&T has reached out with some information, which I include here in the interest of fairness. AT&T was the subject of this post, and their methods of measuring bandwidth may or may not be accurate (they tell us they are, but they would say that either way), but the criticism in this post is intended for the other companies as well. I stand by my assertion that it is a conflict of interest for the same company that charges for usage, and sets its own rules and measures, to also serve as the impartial observer of that usage. Here is the email I was sent, with contact information removed:
When I reached out to Karl Bode yesterday, I told him that we’re already addressing ways we can make the labels and information on the online tool more clear for customers between now and May… but assured him that our team is performing checks everyday to ensure accuracy. We believe it’s an accurate system.
One reason for any discrepancy could be due to the software the customer is using to measure their usage. Other tools may be measuring different periods of time than we are, and most likely do not take into account the standard network protocols (e.g. Ethernet, IP) that are used to provide applications and content to our customers via the Internet. As you know, this is fairly standard to incorporate when measuring broadband traffic and is applied by other ISPs who measure usage.
Worth noting that we factored all of the above into our allowance settings and into our trials – so they are baked into our data that indicates that less than 2 percent of customers should see an impact from the new policy.
These changes affect less than 2 percent of customers. From our own year-long trial of this model, we validated that a very small group of subscribers – 2 percent – are using about 20 percent of the bandwidth on our network, which risks driving up the cost of providing service to all our customers. (Our average DSL customer uses 18 GB/mo.)
Customers have had direct input in designing this approach. For example, customers said it’s our responsibility to make it easy and convenient for them to know how much bandwidth they use.
· We heard them…and will send alerts when they’ve used 65% of their data plan. If needed, we’ll send out another alert at 90%, and then another if they reach 100%.
· Customers also can check their use—anytime—on line
· If a customer exceeds the allowance a second time, we’ll notify them and provide a grace period.
· The third time a customer exceeds the allowance, we’ll alert them, and they’ll be charged $10 for each additional 50 GB.
· We also have an informational website – http://www.att.com/internet-usage — where customers can learn more about broadband usage, how the allowance works, and why we’re making this change.
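Incidentally, the overage schedule described above is easy to model. Here’s a sketch; the rounding behavior is my assumption, since AT&T doesn’t say how a partial 50GB block is billed:

```python
# The overage schedule AT&T describes: $10 for each additional 50GB.
# How a partial block is billed isn't stated; rounding up to a full
# block is my assumption (consistent with fees starting at the first
# byte over the cap).

CAP_GB = 250           # DSL cap discussed in the post
BLOCK_GB = 50
FEE_PER_BLOCK = 10     # dollars

def overage_charge(usage_gb, cap_gb=CAP_GB):
    """Dollar charge for a month's usage under the described schedule."""
    over = max(0, usage_gb - cap_gb)
    blocks = -(-over // BLOCK_GB)  # ceiling division
    return blocks * FEE_PER_BLOCK

print(overage_charge(240))  # under the cap: 0
print(overage_charge(251))  # 1GB over: one block, $10
print(overage_charge(301))  # 51GB over: two blocks, $20
```

Which brings us back to the original point: the only number that feeds this formula is the one AT&T measures.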
I believe this is what’s called “softly, softly, catchee monkey.”