Guest post: Cleaning up privacy for a Facebook generation

This is a guest post by Julian Ranger (@rangerj) an Angel investor and founder of Surrey, UK-based innovation hub iBundle. Current iBundle projects include SocialSafe, mifiction and DAD. Julian has been an investor since 2007. Previously he was founder and Managing Director of STASYS Ltd, which he sold to Lockheed Martin in 2005.

Privacy is something we jealously guard in real life. We lock our doors, protect our bank details – we’re in control. But online it’s become a different story. Hardly a week goes by without a major government hack, social network outage or search engine breach: accusations of fault and blame are levied, and our trust is further eroded.

The debate has two camps: those that care and those that don’t.

This is due to an underlying issue: for some reason, being online has shifted the definition of privacy. Two forms of privacy have emerged, along with two sets of ‘best practice’ rules: privacy online and privacy offline (and the term ‘best practice’ is used loosely).

‘Privacy: freedom from public attention’ (Oxford English Dictionary) should be respected by businesses on and offline. And those interacting with businesses should expect the following principles, rightly or wrongly:

A.      We are clearly told at what privacy level a service operates

B.      The privacy level cannot be changed on us without us knowing

C.      We have the ability to have our information deleted should we so wish

Many detailed and vast research papers and drafts of legislation contend what privacy best practice should be. However, the majority are not accessible to the average Internet user. Simplification and accessibility must be the order of the day when communicating the privacy level at which sites such as Facebook operate.

Privacy online should have a standard system of easily understood levels:

1.    Me: what I keep totally to myself

2.    Family: what I share with family and close friends

3.    Friends: what I share with wider friends and acquaintances

4.    Business: what I share with a business, which is not shared onwards

5.    Business to Business: what I’ve shared with a business and that business then shares with other businesses

6.    Public: information in the public domain, found by anyone

There are multiple subgroups within these levels. For example, at Level 2 / Family, there are things I choose to share with my wife but not with my parents or children, and at Level 3 / Friends, there are things I share with the friends I party with that I don’t share with those I work with. These sub-groups are an inherent part of who we are and what we do in the physical world; they are often impossible to define and ring-fence in the digital world.
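The six levels behave like an ordered scale: information shared at one level is implicitly visible to everyone at a ‘closer’ level, and hidden from anyone further out. A minimal sketch of that idea (the enum names and the `may_view` helper are hypothetical, purely for illustration of the scale, not any real site’s API):

```python
from enum import IntEnum


class PrivacyLevel(IntEnum):
    """One value per level proposed above; a higher number means a wider audience."""
    ME = 1
    FAMILY = 2
    FRIENDS = 3
    BUSINESS = 4
    BUSINESS_TO_BUSINESS = 5
    PUBLIC = 6


def may_view(item_level: PrivacyLevel, viewer_level: PrivacyLevel) -> bool:
    """A viewer sees an item only if it is shared at the viewer's level
    or more widely, i.e. the item's audience includes the viewer."""
    return item_level >= viewer_level


# A photo shared at 'Friends' is visible to family and friends,
# but not to a business or to the public at large.
photo = PrivacyLevel.FRIENDS
assert may_view(photo, PrivacyLevel.FAMILY)
assert not may_view(photo, PrivacyLevel.PUBLIC)
```

The sub-groups described above are exactly where this tidy ordering breaks down: ‘party friends’ and ‘work friends’ both sit at Level 3, so a real system would need named groups within a level rather than a single number, which is the complexity most sites fail to expose clearly.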

The fragile contract of trust is often broken by a deliberate and convenient lack of clarity. Two recent examples involve Facebook and an online dictionary site, whose users have been misled into believing they are operating at Levels 2 / Family and 3 / Friends, when actually their precious information has been sold in a firehose to businesses at Levels 4 / Business and 5 / Business to Business, and on to Level 6 / Public.

Facebook has a history of breaching principles A and B (A: we are clearly told at what privacy level a service operates; B: the privacy level cannot be changed on us without us knowing). Since May’s public outcry, it has promised not to breach B again, saying that it will not change the privacy levels that we choose to set. However, in my opinion it is still breaching principle A by being deliberately opaque about its privacy levels.

At first glance, Facebook’s recommended settings look reasonable, with three items shared with everyone, three with Friends of Friends, and three with Friends Only. However, we are recommended to share our status updates, photos, posts, bio and family relationships – in other words, virtually all our information – with everyone. As individuals we derive no value from this; for Facebook, however, this information is the gold at the end of the rainbow.

In addition, Facebook, in common with most sites and services, does not follow principle C (that we can choose to delete our data at any time) at all. Accounts can be deleted from view, but Facebook reserves the right to retain users’ data, and old information will still show up in Google’s infinite memory box of search data.

Quoting from the Wall Street Journal on the business practices of one online dictionary site: “A visit to the online dictionary site resulted in 234 files or programs being downloaded onto the Journal’s test computer, 223 of which were from companies that track Web users”. This is clearly a Level 5 / Business to Business use, particularly pernicious because the user is not aware that it is happening at all – no consent, implied or otherwise, is given. As the diagram of levels shows, the information flows ‘underground’ from the site to others hidden from view.

There is no reason why privacy and trust should be handled any differently online from the tight restrictions and respect they are afforded in the offline world. If we don’t get privacy right, the online consumer will revolt, which will negatively impact everyone involved in online business.

Discussions must be held at an international level – it is the world wide web, after all – to agree clearly defined privacy levels (either those proposed above, or some other widely adopted definition). This would be an important first step in helping users: the general public should not have to be experts in privacy law every time they go online.

This should be followed by a mandate whereby sites and services must be explicitly clear about what privacy level they operate at. Opt-outs must be as easy as opt-ins for the sharing of data, and retracting permission retrospectively should be possible.

Above all, privacy in the digital world must be about informed consent, as it is in the physical world.

  • Ben Werdmuller von Elgg

    It’s great to see an investor highlighting these issues.

    I’m constantly astonished to see people in the Internet industry, particularly in Silicon Valley, continue to discuss posting on web applications in terms of “publishing”. While the web certainly has those origins, we’ve gone beyond the publication model – it’s a fully-fledged application platform, with enormous benefits over traditional software application models.

    One thing that developers should consider is that many markets – notably the ones who will pay, like enterprises and government – require privacy to be built-in at a deep level. Additionally, even on the public sites, people like danah boyd have done great research to show that people (particularly teenagers) are creating their own privacy levels by carefully cultivating personas. Perhaps counter-intuitively, adding extra privacy encourages users to share more, and put more time into using your application.

  • Edward Asiedu


    This is a very good write-up. I’ve blogged about this quite a bit, most recently on the lack of transparency about what information gets sold and to whom.

    You’re right that the separate compartments of real life are finding no counterparts online. Mark Zuckerberg has said he doesn’t understand sharing different things with different people because he’s the same awkward person to everyone :)

    I like the structure given above (A-C, 1-6). I think it would work well as a framework for online businesses, but would face opposition because entire business models and blogger’s incomes depend on users not knowing what’s really going on.

    Ben makes a good point above about some users being happy to make specific info public when they know they have control, which explains Twitter’s success to some extent.

  • Julian Ranger

    Edward – I agree re Ben’s point that people are likely to do more on sites which have a clearly defined privacy policy and which stick to the lower privacy levels. Even Level 4 is not a great issue for most, I would contend, if there is trust in the provider – this is analogous to supermarket loyalty cards. Google is a good example of a generally trusted Level 4 provider, though reports suggest they are pushing their boundaries now in response to others. Jules


  • Jack Repenning

    Your point that the groupings of visibility should be based on real-world relationships is excellent. Ah, but it’s all so complicated: for example, if I want a site to enforce a border between “friends” and “family,” then I have to identify who actually are in each group–which becomes not only another bit of data to be managed into the spectrum, but also another path of potential leakage, if communications pass among users with different level settings.

  • Julian Ranger

    Jack – complications can arise if a site has to be all things to everyone – clearly most (all?) cannot be. The main point though is to know what a site does with your data – then the user can decide whether that is acceptable for them or not. So it doesn’t stop, for example, a social site taking all user status data and broadcasting to the world – by announcing that as their privacy level, some users will be OK and others won’t.

    As a further example, I am happy (at the moment) with the way Google uses the data it collects about me to help me, as they and I see it, but many don’t even know what Google does, which is wrong – someone who is avowedly against junk mail, say, would likely be concerned about Google’s practices if they knew.

    Businesses are, and should be, free to decide what model of privacy they wish to implement; but users need to be able to have an informed understanding of the privacy level that applies when deciding to use a service or not.

    • Jack Repenning

      Julian – The notion that users “can decide whether that is acceptable” is currently hidden behind a smoke screen composed of privacy policies that span tens of pages, that demonstrably no one reads, and that are “controlled” by configuration pages whose complexity defies comprehension, while silently obscuring the sharing that remains outside end-user control.

      If your purpose is to fix that problem, I’m all in favor.

      But I remain profoundly doubtful that this can be fixed voluntarily, or bit-by-bit.

      • Julian Ranger

        Jack – you are so right – the current endless, jargon-filled privacy policies are essentially meaningless to the average user.

        The debate centres on whether a fix is needed or not – I contend that one is needed, as eventually users will become more aware and it will adversely affect the whole community. If I am right (some argue strongly that I am not) then either we fix it as a community (accepting that the less scrupulous will continue to ignore any fix) or the governments of the world will step in to fix it. If governments step in, we are likely to get different rules in different jurisdictions – that would be such an unholy mess that trying to be a world wide web would be nigh on impossible (it’s already bad enough with data protection laws in Europe being at odds with the US). So my contention is that we have to fix the issue as a community, before the problem gets any worse.

        This requires some acceptance of a) the problem, b) how to define privacy and the principles to follow (hence my post) and c) some method of implementation if there is general agreement at b).

        I am sure there are many potential methods of implementation, but one could be a certification process analogous to site certification. An existing body could gain agreement to a well publicised and clear set of privacy definitions and principles, websites can adhere to these and get a “Privacy Badge/Certificate” to display indicating that they are ‘clean’ with respect to privacy (which can mean they operate at any level as long as they say which and say so clearly). Then users, if they are concerned, will be able to select sites/services that are certified with respect to privacy over those that are not.

        Idealistic maybe? Are there other routes to fix the privacy issue? An open debate, without going down all the esoteric rat holes, would be a good start.

      • Jack Repenning

        So this is not a shapeless sea of directionless discourse. There’s a pretty straight-forward decision tree available:

        1. Is there a problem? If not: stop, we’re done. In answering this question, many people say, roughly, “the present situation doesn’t bother me.” That’s fine, bully for them, but that’s not how we assess such questions at a social level. Rather, if there *are* people who are bothered, we notice. We certainly don’t require 100% of people to be inconvenienced before we act; in fact, we don’t even require a majority: a crucial part of the function of community is to protect minorities and the down-trodden. So, “is there a problem?” Well, there’s a hue and cry; that’s a pretty strong argument that there’s a problem.

        2. OK, so can the community fix it? In this regard, it’s important to bear in mind that the situation did not start with Facebook. On-line privacy protection has been an area of active difficulty for at least a decade. Email spam is a part of the same problem. Browser cookies are a part of the same problem. Opt-in/out webvertising is a part of the same problem. Has the community managed those? Clearly not.

        3. Well, then, that leaves only the last-recourse solution, government. It’s no help pointing out that government does everything badly–totally true, sure, but no help, because other means have had their day and done the job not at all. That’s why we leave “government intervention” as the last recourse. But when there’s immense profit to be made in violating the public trust, and insufficient commercial value in protecting the public trust, then the “free market” social solution fails.

      • Julian Ranger

        I still hold out hope that in the decision tree the “free market” will see sense and not fail us – but optimism may not be enough I grant you.

  • Cleaning up privacy for a Facebook generation |

    […] for a Facebook generation Posted on September 12, 2010 by Julian Ranger My guest post for TechCrunch on the need for transparency and standardisation of privacy settings for all services that we trust with our personal information.  I conclude that the following […]
