If you even slightly care about the for-your-eyes-only stuff on your phone and laptop staying, you know, for your eyes only, it’s been a bad couple of months. The Vault 7 WikiLeaks data dump released earlier in March revealed that the CIA has been using off-the-shelf and homebrew hacks to get at Americans’ smartphones, and that they’re not too picky about where they pick up those hacks. The exploit that the CIA might use to record keystrokes on a compromised Android smartphone is pretty much the same as one that would be used by a teenager in a former Eastern Bloc state just for shits and giggles.
Meanwhile, since the new administration took control of Customs and Border Protection, there’ve been an increasing number of reports of phones and laptops being seized from American citizens at borders — citizens, it must be said, who are either of Muslim-American descent or who might “scan” as Muslim-American to someone working the border. Per data provided by the Department of Homeland Security, just 5,000 devices were seized in 2015. That number grew fivefold, to 25,000, in 2016 — and 5,000 phones were seized in February 2017 alone, putting the DHS on track to seize more than 60,000 phones by the end of the year.
One more cheery piece of news that’s recently come to light: The government has been hoarding extremely powerful “zero-day” exploits of consumer devices for years. To use a somewhat strained metaphor, zero-day exploits are the equivalent of someone having a skeleton key to your home that they know about — and you don’t. The exploits are known as “zero day” because the manufacturer has had zero days to fix them: nobody but the attacker knows the hole exists. Standard information-security procedure is for any researcher who finds a zero-day exploit to alert the manufacturer about the gaping holes in its software so they can be patched up; the government has chosen a different course of action.
And that’s not even getting into the stunning rise of ransomware attacking both large businesses and, say, your mom’s old Dell. Or that the rise of the “Internet of Things” is not only making your home much more vulnerable (hackable baby monitor, anyone?), but also allowing the creation and deployment of vast networks of infected “botnets” on a scale simply not seen in history. The government would like to take a peek at plenty of people’s information — but so would plenty of other bad actors out there.
So what is there to do? Computer security is necessarily complicated, in part because everyone has different setups, different needs, and different worries. But security advice doesn’t need to be. We’ve tried to boil it all down to two simple sentences that cover the security needs of 90 percent of consumers (with some explanation, if you want it): One, use Signal on an up-to-date iPhone. Two, use multifactor authentication and a good password manager. If you need better security than this, you probably already know.
First, a Brief Note About “Threat Modeling”
In information security, there’s something people call “threat modeling,” which sounds very Jason Bourne–esque exciting but is actually kind of dull. The Electronic Frontier Foundation has a simple list of five questions you should ask yourself if you want to secure your information:
(1) What do you want to protect?
(2) Who do you want to protect it from?
(3) How likely is it that you will need to protect it?
(4) How bad are the consequences if you fail?
(5) How much trouble are you willing to go through in order to try to prevent those?
The answers to these questions are obviously going to vary both objectively and subjectively from person to person.
Take me, for instance. Very few people are interested in my digital data. My assets are minimal (brag); the amount of confidential or compromising information I have about other people is almost nonexistent; and there simply aren’t many organizations with the time, effort, and desire to crack open my digital life. I mean, don’t get me wrong: The consequences of every single one of my emails, Gchats, and text messages being hacked and exposed would be personally devastating, but nothing in them would bring down a corporation, alter the course of an international election, or place anybody in personal danger (except myself, depending on how mad some people were about some of the shit I’ve talked over IM). But even I’m still willing to jump through some hoops to keep that stuff relatively safe.
But let’s say you work at a hospital and are responsible for keeping information about patients. In that case, some of the information on your phone or your laptop could, if it fell into the wrong hands, disrupt the lives of thousands; that information could even be held for ransom by people inclined to do so (it happened 14 times in 2016). The consequences of failing to keep that information safe are more drastic — and therefore you’re going to accept a bit more hassle to keep it safe. You may need to present a few forms of authentication to log into your system, and what you do while logged in may be more closely monitored than it would be for your average office drone.
And finally, let’s say you work for the government. Not for the government, per se — you just work for a company like Raytheon or Lockheed Martin and design things for them, whether that’s guidance systems or a liquid propellant to help things go even faster. In that case, the information on your computer and smartphone is both monetarily valuable — other countries and other organizations would pay top dollar for that info — and a national-security risk. At that point, you’re hitting high scores on every single question on that threat matrix, and your ability to access and manipulate your personal data is going to be restricted. It’s just the way these things work.
So, as you think about your security choices, you might want to consider your threat model: Who are you worried about, what are you worried about them stealing, where, and why? Still, whether you’re a low-level lackey like me, a hospital admin, or in the employ of a government skunkworks, we think the following two steps will keep about 90 percent of us safe. Ready? Here’s the first one:
Step One: Use Signal on an up-to-date iPhone.
That’s it. Use an iPhone capable of running iOS 10 (that’s an iPhone 5 or later as of March 2017), install the secure-messaging app Signal (or, if you want to make life slightly easier, WhatsApp), and use that as your main communications device.
Both Signal and WhatsApp use something called end-to-end encryption. This essentially means that even if a message I send to another user on Signal is intercepted midway through its journey, it’s encrypted — it’s just a jumble of data — until it’s decoded by the intended recipient’s phone. As of right now, end-to-end encryption is by far the most secure form of digital communication we have.
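For the curious, the core idea can be sketched in a few lines of toy Python. This is deliberately fake cryptography (a simple one-time pad, not the Double Ratchet protocol Signal actually uses), but it illustrates the property that matters: without the key, an intercepted message is just noise.

```python
# Toy illustration of the end-to-end idea: the key lives only on the
# two endpoints, so anyone intercepting the ciphertext in transit sees
# only a jumble of bytes. NOT real cryptography -- Signal's actual
# protocol (the Double Ratchet) is vastly more sophisticated.
import os

def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with a key byte (a one-time pad)."""
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet me at noon"
key = os.urandom(len(message))      # shared secret, known only to the endpoints

ciphertext = encrypt(message, key)  # what an eavesdropper sees: random bytes
plaintext = decrypt(ciphertext, key)
assert plaintext == message         # only the key holder recovers the message
```

The hard part in a real protocol is getting that shared key onto both phones without anyone in the middle learning it, which is what the key-exchange machinery in Signal and WhatsApp handles for you.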
Feel free to use email for those endless sushi dates that Apple keynotes are always going on about, but beyond that, never put any sensitive information anywhere besides Signal or WhatsApp. Done!
What’s that? You have questions? Okay, let’s get into them.
So why should I get an iPhone and not an Android?
iPhones are overpriced, lack 3.5-mm headphone jacks, and once forced everyone to download a U2 album because Tim Cook really likes U2. They’re also generally the most secure consumer device out there, simply because Apple’s autocratic view on software and hardware means that nearly everyone updates to the newest version of iOS — that is, the one that has the most up-to-date security patches — much more quickly. Starting this year, Apple is also requiring all apps to connect over HTTPS, the secure protocol that means not even your mobile carrier can tell what data you’re submitting to a website. The Android market is not only huge but remains much more fragmented when it comes to OS-update adoption, and is therefore much more vulnerable.
What if I need to use an Android?
If you’re going to use an Android phone, shell out for a Google Pixel and then immediately install Signal or WhatsApp. Not only is the Pixel a very decent phone, but it also gets the newest versions of Android before other phones, pushes monthly updates protecting against security exploits automatically, has active scanning from Google for possible threats, and therefore is generally much more secure than your average HTC or LG smartphone.
Okay, so what’s Signal?
Signal is an app developed by Open Whisper Systems that uses end-to-end encryption to allow people to communicate over text, voice, or video. (It gained a fair amount of popularity during the 2016 election cycle and its immediate aftermath.) You’ll need the other people you want to communicate with to also download Signal, but once they’re on there it’s about as simple to use as iMessage or any other chat app.
What about WhatsApp?
WhatsApp is a bit easier to use, already has about 1 billion people using it, and uses the same encryption protocol as Signal. That said, Signal retains far less metadata about its users.
WhatsApp, for instance, while not able to turn over the content of messages to authorities, will turn over whom you were messaging with and when. Signal won’t even do that — it will only confirm when someone registered the app and the last time the user opened Signal, as it proved during a recent case in Kentucky. (There’s also the fact that WhatsApp is owned by Facebook, which may give you pause.)
But weren’t Signal and WhatsApp cracked by the CIA?
Thanks to a badly worded WikiLeaks press release and an overly credulous press, it was widely reported that Signal and its ilk had been “cracked.” They weren’t.
What did happen, per WikiLeaks, is that malware installed on certain Android devices allowed the government to see everything happening on a phone, thereby “bypassing” these encryption apps. The distinction between Signal being “cracked” and “bypassed” is an important one.
Signal being “cracked” would be like someone being able to intercept all your email, text messages, and phone calls in transit, read and listen to them, and then pass it along to you, with you being none the wiser. (This is, incidentally, basically what the Snowden leaks revealed the NSA was doing to the American public, and one of the reasons end-to-end encryption apps like Signal, which prevent this sort of information gathering, have become more popular.)
The way the CIA “bypassed” Signal, however, is more like breaking into your home and rifling through your credit-card offers, Lands’ End catalogues, and two-for-one coupons from ShopRite. It’s doable, but it costs a lot (the FBI spent around $1 million to crack one iPhone) and it’s just much more of a pain in the ass. Unless you’re starring in your own personal episode of Homeland, your phone is probably not getting attacked this way — and therefore your encryption apps would not be bypassed.
Why can’t I just use iMessage on my iPhone? That’s encrypted, right?
Well, yes, but there are some issues there. First off, iMessage (or, as Apple now dubs it, just plain Messages) has had numerous security flaws exposed over the years. This can mainly be blamed on Apple’s insistence on keeping its encryption proprietary: while any security researcher can review Signal’s encryption techniques and spot potential problems, that’s not possible with Apple, which chooses not to make its code open-source.

There’s also the issue that Apple stores part of the encryption key needed to decrypt a message on its own servers. Cracking those servers and then cracking those keys would be tremendously difficult — it would require the resources of a national-security program, not a couple of kids in an IRC chat room — but it’s still possible. The government has made it clear that it’s frustrated by its inability to read what people send to each other on their iPhones, meaning those Apple servers may be hard targets, but also tempting ones. The only place Signal’s encryption keys are held is on each user’s device; unless someone seizes your actual phone, your messages are always going to be encrypted.

Finally, there’s the fact that iMessages are only encrypted if you’re chatting with other iPhone users — anything you send to some schlub using an Android device is perfectly readable to anyone who wants to intercept it. So if you want to stay secure, you’ll need to stay in your bougie blue bubble.
Fine, fine, but I need to use a desktop or laptop — I can’t use a Bluetooth keyboard and an iPhone 7.
Like we said above, an up-to-date iPhone and Signal will satisfy the “threat matrix” for nearly everyone — your information, even if it is valuable, will be so difficult to get at that in all but the rarest of cases you’re going to be safe. But many of us don’t just use our phone — I’m tapping out this little piece of security advice on a laptop, for instance. So what can you do if you need to use a laptop or desktop? Again, just eight simple words:
Step Two: Use multifactor authentication and a good password manager.
First off, let’s be clear: Desktops and laptops are just generally much more insecure than your average mobile device. Windows PCs have an enormous install base and have been targets for viruses and malware since Seinfeld was on the air. Macs used to be better (and still have far fewer exploits floating around in the wild), but there’s more and more malware out there even for MacBook users.
But the real issue here is that the PC revolution happened almost four decades ago. Smartphones really only came on the scene in the past ten years. During that time, security measures like sandboxing apps to contain damage from malware, tighter restriction of user permissions, and more stringent app-store certification (or, hell, any app certification) have created an ecosystem for both Android and iOS that’s, on average, more secure than what you get on desktop.
Yeah, yeah, but I have to use a desktop or laptop for work. I can’t just work off my iPhone or Pixel. Can I still use Signal?
Yep! There’s a Signal Chrome client you can run off your desktop; it runs through your mobile connection and remains very secure. (WhatsApp and Telegram, another end-to-end encryption program, offered similar services but recently saw their desktop clients severely compromised — I’d recommend staying away from them for the time being.)
Same as you would over your cell phone, you should also try to avoid sending anything truly sensitive over email. Clients like ProtonMail offer a layer of security, and Gmail (despite Google’s past peccadillos with the NSA) is seen as somewhat trustworthy, but neither is fully secure. Treat your email like you would if you worked at a high-powered law firm or a government agency — just assume anything you put into email form could, at some point, come to light, even if you don’t get Podesta-spearphished. Making dinner plans for Lucali’s? Unless you’re Jay Z or Beyoncé, you’re in the clear. Want to send your Social Security number to your significant other to finalize a rental application? Use Signal, whether on desktop or on your phone.
But, again, there’re two really important things you should do before heading down that path: Use a password manager and turn on multifactor authentication.
What’s a password manager? And which one should I use?
A password manager does what it says on the tin: manages your passwords. It’s nice because it’s one of the few things you can do to make your life more secure while also making it more convenient. If you’re like me, you can maybe remember, like, four strong passwords — that is, long passwords that use a mixture of uppercase and lowercase letters, numbers, and symbols — at any given time in your head, so you end up reusing them a lot. Which means that when a site you use inevitably gets hacked (and one you use almost certainly has — check here for yourself), that unhackable strong password “ActorBarryPepperIsUnderrated!” is now out there in the wild, accompanied by your username or email address, ready to be typed into every major bank and email site by a smart hacker.
A password manager will let you set a unique strong password for every site you visit, and then do the remembering for you. You’ll just need to remember one, single strong password (and a second factor of authentication — see below) to access your passwords. Setting it all up takes about 30 minutes — I use LastPass, but 1Password and Dashlane are also good. Here’s a guide to getting started.
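To get a feel for what “unique strong password” means in practice, here’s a minimal sketch of the kind of generator a password manager runs for you. It uses Python’s secrets module, which (unlike the everyday random module) is built for security-sensitive randomness; the alphabet and 20-character length here are just illustrative defaults.

```python
# Sketch of per-site strong-password generation, the core trick a
# password manager performs for you on every new account.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password every call
```

A 20-character password from a 70-symbol alphabet has far more possible values than anything a credential-stuffing attack will ever enumerate, and because each site gets its own, one breached site no longer unlocks the rest.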
Okay, I have a password manager. What about multifactor authentication?
Even if you have a password manager, there’s still a chance someone could break into, well, your password manager. Or a hack could reveal your password in plaintext, which has been known to happen before. Multifactor authentication (MFA) adds one more layer of security here. (It goes without saying, hopefully, that you should turn on MFA for your password manager as well.)
If someone attempts to log on to one of your accounts from an unusual source (usually an IP address you’ve never used, or a browser or OS you’ve never touched), MFA will require a second form of identification. In fact, you’ve probably been using a form of MFA for most of your life — when you present both your bank card and your PIN at the ATM, you’re providing two separate ways of authenticating your ID. With log-in management, the factors are usually something you know — your password — and something you have — generally speaking, your phone, operating an authentication app like the one Google makes.
So, for example, let’s say I’m some nefarious hacker, and I manage to swipe the username and password to your email account, even after you set up a password manager. Your email provider will notice that I’m using a different IP, different computer, and different browser than the one you normally use, and ask for a second way to prove that I am who I say I am. This can be something like a printed-out onetime code you store somewhere safe, or an authentication app you keep on your smartphone, or even a physical USB key (more on that later).
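If you’re wondering where those rolling six-digit authenticator-app codes come from, they’re generated by the TOTP algorithm (RFC 6238): your phone and the server share a secret, and each independently derives the same short-lived code from it using HMAC and the current 30-second window. Here’s a minimal sketch in standard-library Python, checked against a test vector published in the RFC.

```python
# Minimal TOTP (RFC 6238) sketch. This is how "something you have"
# works: phone and server share a secret, and both derive the same
# short-lived six-digit code from it, so knowing your password alone
# isn't enough to log in.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Derive the one-time code for the current (or given) time window."""
    now = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(now // step))       # 30-second window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at Unix time 59 yields code 287082.
print(totp(b"12345678901234567890", timestamp=59))
```

Because the code changes every 30 seconds, a stolen one is useless almost immediately — which is exactly the property SMS codes lack once someone has hijacked your phone number.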
How do I turn it on?
To turn on MFA for Google, head here.
To do the same for Facebook, head here.
To turn it on for many of the other online services you use, Lifehacker has you covered for nearly everything you’d ever possibly use.
One quick thing to note here: In general, you should avoid any form of MFA that uses text messaging to send out your confirmation codes. It’s relatively easy to bypass this form of MFA (and it has been done quite successfully in the past). It’s also, disturbingly, the only form of MFA available for Twitter users (get ready for @RealDonaldTrump to get hacked — can’t wait for our first international thermonuclear war to be started by bored teenagers).
And finally, even if you’re already taking all these precautions with your laptop, you should really, really get a physical security key.
Like, what, a house key at Home Depot?
Kinda! Again, your laptop or desktop is just much more vulnerable than your phone. A physical key — basically a little USB stick that acts as another way to authenticate your identity — adds one more layer of security. Some popular ones include the YubiKey and the Nitrokey. You stick it into the side of your computer, and a lot of the authentication you may have handled via a smartphone app or onetime token is done for you. Plus, it’ll make you kinda feel like a spy, though more like this kind of spy than this kind of spy.
Good god, okay, done. Is there anything else I need to do?
Yep! If you want to really go the extra mile, consider full-disk encryption. Your laptop most likely requires a password to log in, but that password only guards the front door: if your machine were ever seized, even the strongest log-in password in the world would only buy a matter of time (think, like, hours or days) before forensic tools could brute-force a way in.
Full-disk encryption is different. It converts everything on your hard drive into unreadable code for anyone who doesn’t have the proper password. (You can learn how to set it up for macOS here, and for Windows machines here.) While someone may be able to brute-force your log-in password, brute-forcing the encryption on your machine is one of those mind-bogglingly complex tasks that defy human comprehension. A brute-force attack on a Mac’s AES 256-bit full-disk encryption key, for instance, would barely have gotten started by the eventual heat death of our universe. It’s pretty dang secure.
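The back-of-the-envelope math behind that claim is easy to check yourself. Assume an absurdly generous attacker who can test a trillion AES-256 keys per second (far beyond any real hardware) and who, on average, has to search half the keyspace before finding yours:

```python
# Expected time to brute-force a 256-bit key, under generous assumptions.
keyspace = 2 ** 256                    # number of possible AES-256 keys
guesses_per_second = 10 ** 12          # one trillion keys/sec (hypothetical)
seconds_per_year = 60 * 60 * 24 * 365

expected_years = keyspace // 2 // guesses_per_second // seconds_per_year
print(f"{expected_years:.1e} years")   # on the order of 10**57 years
```

For comparison, the universe is roughly 1.4 × 10¹⁰ years old, so the search really does outlast it by dozens of orders of magnitude.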
Sigh. Okay, but I can at least travel with this stuff, right?
I mean, you can, but … Even if you’re an American citizen, your Fourth Amendment rights are curtailed at the border, so if a customs or border-patrol agent decides they want to grab your smartphone or laptop and try to copy it, they can. They can also force you to use your fingerprint to unlock the device (a very, very good reason to turn fingerprint log-in off). If you refuse to give up your passwords, or have full-disk encryption, they can choose to detain the device for as long as they like until they can complete a “routine search” of it. They do not need a warrant or court order to do this. Our best advice? If you’re worried about privacy and security, travel with a dummy Chromebook and a burner phone, and delete everything before crossing back over.
This seems like an awful lot of work. How much of this do I really need to do?
Remember back at the top, when we said “use Signal on an up-to-date iPhone” and “use multifactor authentication and a password manager”? That really is all that the vast majority of people need to do (even if you consider yourself a real rebel).
Most of us, even super-brave journalists like myself, willing to speak truth to power about how Donald Trump has a very large rear end, can go through that threat matrix mentioned at the top and realize that we’re just not worth the time. The reason? As the Vault 7 WikiLeaks dump exposed, the rise of good end-to-end encryption apps means that the vast net of data collection the NSA was using during the pre-Snowden days has been rendered, to a degree, useless. It’s very expensive, in both time and money, to get a good exploit onto someone’s phone or laptop. You need to rise to a certain level before you’re worth the government’s time.
That said, it’s true that the full extent of the government’s surveillance capabilities is unknown. And iPhones are very secure — until the FBI decides it’s willing to spend whatever it takes to crack one open. End-to-end encryption certainly appears to be secure, but there’s no way to be certain — and government agencies are currently seeking ways to circumvent it.
So if you’re truly worried about government surveillance, it’s best to ditch electronics altogether. If you want to communicate with someone securely, only use handwritten notes and onetime pads. Only meet in public places. Read up on tradecraft. Watch Three Days of the Condor a bunch. But, considering you’re reading this on a screen of some sort via an internet connection, you understand there’s a middle ground. Use Signal. Keep your phone up to date. Keep your truly sensitive data off email and cloud services, and (ideally) keep it encrypted. It may seem like an unnerving time out there for digital security — but the tools to protect privacy are advancing pretty quickly as well. It may be a scary time, but it’s also one pretty much anyone out there can manage.