Dumb things companies do with user security

After iterating on a few ideas, you’ve found something people are interested in. Users are signing up! You’ve got traction! People with money want to give you that money! Excellent.

In the rush to rapid growth, it can be easy to get caught up in what’s next, like the next new layout, feature launch or product release — the next thing that will make users happy.

Equally important to keep in mind — really, more important — is what makes users mad: getting hacked.

It’s advice we’ve heard from just about every security expert who has ever been onstage at Disrupt: Take security seriously from the start. As soon as anyone cares about your company, it’s a target, and the bigger you get, the bigger that target becomes. The more users you acquire, the more valuable your database becomes. Adding features and pushing code creates more things for hackers to poke at.

Last week, we took a look at some things you can do to help keep your employees from getting hacked. This week, we’re looking at some of what you can do to keep your users safe. It’s by no means exhaustive — but for growing teams, it’s the sort of stuff you need to have in the back of your brain, always.

Screwing up password storage

Don’t store passwords in plaintext.

It’s something that, many decades into this whole internet thing, we feel like we really shouldn’t have to say. And yet… companies still do it. Huge companies!

If an attacker pokes into your database and all the passwords are sitting there in plaintext, their job is done. Compounding the problem is that people reuse passwords on other sites, no matter how much you tell them not to. Now the attacker has access to your users’ accounts on your site and anywhere else they may have used that same password. If that’s their email account, game over.

Hashing helps. A hashing algorithm takes what the user punches in as a password and number-crunches the hell out of it, with the password emerging on the other side as something vastly different and entirely unrecognizable. Hashing only goes in one direction: a password can become a hash, but a hash (hopefully) can’t be converted back into a password. If a hacker tries to copy/paste the hash as the user’s password, it won’t work — the pasted hash gets crunched into a different hash, which won’t match up with what’s in the database.

The “best” hashing algorithm is something of a religious debate, with the whole thing evolving every few years as people find weaknesses, write new algorithms and new, specialized hash-cracking hardware hits the market. You want something purpose-built for passwords and computationally expensive, because unlike much of computing, slow is good here! Making it slow and expensive to go from a candidate password to a hash means that brute-forcing a stolen database of 10,000 hashed passwords takes that much longer. So do your research, stay up to date and don’t try to roll your own algorithms. Learn how to further improve on hashing through techniques like salting and key stretching.
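
To make that concrete, here’s a minimal sketch of sane password storage in Python, assuming the third-party bcrypt library (one reasonable purpose-built option; Argon2 and scrypt are others). It salts and key-stretches for you, and the work factor is tunable:

```python
import bcrypt  # third-party library: pip install bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() produces a fresh random salt for every password; rounds is
    # the work factor, and each +1 doubles the cost of computing (and
    # therefore cracking) a hash.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash using the salt embedded in stored_hash
    # and compares the results in constant time.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("Tr0ub4dor&3", stored)
```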

And really, never store things in plaintext. Make it a rule from day one. Don’t even do it in pre-production code, because stuff sneaks through. Build systems that scan for planted dummy account passwords being stored in plaintext. Or, hell, just don’t store passwords at all if you don’t have to.
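
What might that scanning look like? A toy sketch, assuming you’ve already seeded test accounts with known fake passwords; the canary values and the dump path here are made up:

```python
# Seed fake accounts with these passwords, then periodically grep database
# dumps, logs and backups for them. If one ever turns up verbatim, a
# password is being stored or logged in plaintext somewhere.
CANARY_PASSWORDS = [
    "canary-7f3a9b2e-do-not-reuse",
    "canary-1c44d0aa-do-not-reuse",
]

def scan_for_plaintext(path: str) -> list[int]:
    """Return the line numbers in a text dump that contain a canary."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if any(canary in line for canary in CANARY_PASSWORDS):
                hits.append(lineno)
    return hits

for lineno in scan_for_plaintext("db_dump.sql"):
    print(f"ALERT: plaintext canary password on line {lineno}")
```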

Making it hard to create a strong password

In an ideal world, everyone would use long (but easy to remember!) passwords stored in a deeply encrypted password manager for safekeeping.

In the real world, most people take their cat’s name, swap one letter for a number to meet the password complexity requirement and write it on a Post-it note.

While “M1tt3ns” or “M!ttens1” seems more secure than “Mittens” to a human brain, there’s not much difference to a computer. But hey, all of the checkboxes turned green!
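
A back-of-the-envelope entropy estimate makes the gap visible. The pool sizes below are illustrative assumptions rather than measurements, but the point stands:

```python
import math

# "M1tt3ns": a dictionary word (assume ~50,000 candidates) with a handful
# of predictable leet substitutions (assume ~16 variants per word). An
# attacker who knows the scheme searches that whole space easily.
leet_bits = math.log2(50_000 * 16)      # ~19.6 bits

# Four words drawn at random from a 7,776-word diceware-style list.
passphrase_bits = 4 * math.log2(7_776)  # ~51.7 bits

print(f"leet-speak cat name:  {leet_bits:.1f} bits")
print(f"four random words:    {passphrase_bits:.1f} bits")
```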

As Randall Munroe of XKCD so perfectly summed up nearly a decade ago: “through 20 years of effort, we’ve successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess.”

NIST agrees, with more words:

To address the resultant security concerns, online services have introduced rules in an effort to increase the complexity of these memorized secrets. The most notable form of these is composition rules, which require the user to choose passwords constructed using a mix of character types, such as at least one digit, uppercase letter, and symbol. However, analyses of breached password databases reveal that the benefit of such rules is not nearly as significant as initially thought, although the impact on usability and memorability is severe.

Jon Xavier of Fleetsmith published a great round-up of modern password practices here. The short version: most password “strengthening” requirements ultimately just make people use short, crappy passwords. Make it easy and encourage them to use long but memorable passphrases (such as a collection of random words) rather than stunted, cryptic strings.
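
Suggesting a passphrase at signup takes only a few lines. This sketch uses Python’s secrets module; a real version would swap the stub list for a large curated one, like the EFF’s roughly 7,776-word diceware list:

```python
import secrets

# Stub wordlist for illustration only; substitute a real diceware-style
# list of several thousand words.
WORDLIST = ["correct", "horse", "battery", "staple", "mittens",
            "orbit", "lantern", "pickle", "gravel", "tundra"]

def suggest_passphrase(num_words: int = 4) -> str:
    # secrets.choice draws from a cryptographically secure RNG,
    # unlike random.choice.
    return " ".join(secrets.choice(WORDLIST) for _ in range(num_words))

print(suggest_passphrase())  # e.g. "lantern mittens gravel orbit"
```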

(If you are going to make users work through a logic puzzle to create a password, put the requirements on the signup page right off the bat, not just in the error message after they fail to guess what you want. Hiding them may make your page prettier, but it only annoys the user.)

Lacking support for two-factor authentication!

Two-factor authentication helps keep users safe when things otherwise go wrong. By requiring a second piece of information that only they should have (like, say, a unique code generated by their phone) at a specific moment (like, say, exactly when they’re logging in), even someone who might’ve figured out their password won’t have what they need to fully log in.

Nest was founded in 2010, and Google bought it for $3.2 billion in 2014. Then Nest bought the security camera company Dropcam, but it wouldn’t add two-factor authentication for three years.

It might seem bonkers that something as sensitive as a camera in your home could go without two-factor authentication for years and years, but it’s hardly the outlier here. Plenty of huge banks and credit card companies still don’t support two-factor in any form.

In 2020, not offering any form of two-factor authentication is pretty much inexcusable. The bigger your audience, the more crucial this gets — and putting it off just makes it harder to get existing users on board.

There are three popular ways to do two-factor today: texting a code over SMS, generating a code via a dedicated app or requiring the user to plug in a dedicated piece of authentication hardware.

Not all methods are created equal; SMS, in particular, has been proven weak time and time again, with even Twitter CEO Jack Dorsey getting owned when hackers stole his phone number. But anything is better than nothing.

While plenty of people will opt to roll their own solutions here, open standards like TOTP (the scheme behind authenticator apps like Google Authenticator) and pre-built SDKs like Authy’s can help offload some of the work.
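
For the code-from-an-app flavor, the heavy lifting is the TOTP standard (RFC 6238). Here’s a minimal sketch using the third-party pyotp library; the account name and issuer are placeholders:

```python
import pyotp  # third-party library implementing RFC 6238 TOTP

# Enrollment: generate a per-user secret and hand it to the user,
# usually as a QR code built from this provisioning URI.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(
    name="user@example.com", issuer_name="YourStartup"
)

# Login: the user submits the six-digit code their app currently shows.
totp = pyotp.TOTP(secret)
code = input("Enter your six-digit code: ")
# valid_window=1 tolerates one 30-second step of clock drift either way.
if totp.verify(code, valid_window=1):
    print("Second factor OK")
else:
    print("Bad code, reject the login")
```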

Using the user’s phone number for dumb reasons

If you choose to offer SMS as a two-factor option, please don’t do stupid things with the phone numbers with which you’ve been entrusted.

Late last year, Twitter got caught using two-factor numbers for ad targeting. Facebook got caught doing the same in 2018.

It’s gross, obvious and erodes user trust in a system that’s meant to protect them.

Failing to keep shit configured properly

I can’t believe how many times we’ve written the same story on TechCrunch in the last few years: some company makes their cloud server public without a password and millions of banking documents/700,000 birth certificates/an absolute mountain of Facebook profile data is left out in the open.

All too often, these databases (generally private by default) end up being public as a result of administrative error and forgetfulness — someone says, “I only need it to be public for a second to test something,” and then forgets to set it back.

Limit who can make a private bucket public. Use things like Amazon’s S3 Bucket Permissions Check to monitor systems and ring the alarm if settings are changed.
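
Here’s what a scheduled version of that alarm might look like with boto3, the AWS SDK for Python: flag any bucket that lacks a full public-access block or whose ACL grants access to everyone. The alert itself is a placeholder print:

```python
import boto3
from botocore.exceptions import ClientError

# URI AWS uses in ACL grants that apply to everyone on the internet.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)
        fully_blocked = all(block["PublicAccessBlockConfiguration"].values())
    except ClientError:
        fully_blocked = False  # no public-access block configured at all
    grants = s3.get_bucket_acl(Bucket=name)["Grants"]
    open_acl = any(g.get("Grantee", {}).get("URI") == ALL_USERS for g in grants)
    if not fully_blocked or open_acl:
        # Stand-in for whatever actually pages a human.
        print(f"ALERT: bucket {name} may be publicly accessible")
```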

Forgetting to hire a hacker

Different companies have different names for this role — Chief Security Officer, Chief Information Security Officer or even sometimes just Chief Technology Officer.

Whatever you call it, have someone (or multiple someones!) within a company whose job it is to attack from within. Someone who is keeping up to date on the latest vulnerabilities in your stack, analyzing risks and building response plans for when someone inevitably sneaks through. It can be hard to think about hiring for a role like this early on — but the bigger your codebase gets, the harder it gets for a new hire to fully grok.

Trying to hide breaches

Your site got breached and you lost a bunch of user data? That sucks a lot — but trying to hide it won’t help anyone at this point. Part of the game plan that your CSO/CISO/etc. has figured out should be telling users and the authorities what they need to know as soon as you can.

If you’ve got a non-trivial number of users, many will know what’s going on as soon as they see suspicious account activity and/or forced password resets. One tweet will turn into 30, which turns into a tip in the TechCrunch inbox, which leads to your CEO (or PR team) writing “we take your privacy and security very seriously” two months after a breach — which leads to everyone finding out anyway and just being mad you didn’t tell them sooner.

Oh — and, depending on where you operate, it’s probably not legal. Under GDPR, trying to hide a breach can lead to your company getting slammed with massive fines.

Not being good to researchers!

If someone finds a security issue in your service and is attempting to report it to you in earnest, the proper response is rarely to shove cease and desist letters or legal threats in their face. And yet, it happens.

Consider the alternatives: They could’ve not told you. They could’ve used it maliciously or sold the information to an interested party.

Most researchers wouldn’t expect up-and-coming startups to be able to cough up the wild million-dollar bug bounties that Google might — but you should reward researchers as best you can while you grow and, at the very least, offer a dedicated means for them to flag things for your security team. Give clear guidelines they can follow (what’s in scope? what’s not?) without worrying about you siccing lawyers on them. It’ll be a noisy inbox — you’ll inevitably get people writing in because some column is five pixels too wide, or the dude who thinks tweaking a page’s HTML client-side is a hack. But if there’s one worthwhile heads-up hidden amongst every hundred misfires, it’s worth it.
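
One lightweight way to offer that dedicated channel is the security.txt convention: a small plain-text file served from /.well-known/security.txt that tells researchers exactly where to report. The addresses below are placeholders:

```
Contact: mailto:security@yourstartup.example
Policy: https://yourstartup.example/security/disclosure-policy
Acknowledgments: https://yourstartup.example/security/thanks
```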