It’s a disaster. That’s the expert consensus around the massive hack of Ashley Madison, the cheater-centric dating website popular with millions of Americans. Not only does the data dump — a stunning ten gigabytes compressed — promise strife and embarrassment for the millions who joined the site, it’s also a credibility-destroying humiliation for Avid Life Media, Ashley Madison’s parent company. And it’s a rude awakening for all internet users, leaving even non-adulterers wondering whether our own sensitive online information is — or can be — well-protected.
The Ashley Madison debacle probably could have been prevented if Avid Life Media had been following up-to-date information-security standards. The company claimed its data was highly secure, yet it failed to use systems capable of detecting insider threats. That’s why, as Dtex Systems CEO Mohan Koo told industry news site SiliconANGLE, the breach’s source “is largely believed to have been a third-party contractor with privileged access to the company’s systems.” Basically, Avid Life Media firewalled its information from the outside world, without setting up the layered internal defenses that could monitor, log, and defeat malicious activity inside its trusted circles. (Pretty ironic for a company that helps people cheat.)
Since the hackers could operate from the inside out, they were able to access “PayPal accounts used by Ashley Madison executives, Windows domain credentials for employees,” plus “huge numbers of internal documents, memos, org charts, contracts, sales techniques, and more,” as Ars Technica noted. This kind of jackpot doesn’t happen by accident. The colossal scope of the Ashley Madison hack strongly suggests its entire security system was weak and outdated.
Consumer brands ought to be on notice: Going forward, customers are likely to weigh risk far more heavily before handing over highly private details. They’ll increasingly demand specific assurances that their data will be kept private — and that the information security behind those promises is strong enough to defeat hackers like those who penetrated Ashley Madison’s systems almost completely.
But weren’t we worried about this kind of stuff years ago? How did we get here?
Certainly Avid Life Media bears much of the blame for the Ashley Madison hack. But the roots of the trouble go much deeper, to a culture of convenience that dovetails neatly with the careless attitude that has developed among web users over the past decade or so.
In the web’s early days, the threat of hacks was exotic and unfamiliar enough to be terrifying. But as email filters and e-commerce standards improved, most of us felt we were sufficiently protected and began conducting business over the internet. Consumers grew more and more comfortable with handing over financial secrets, and this same attitude carried over to personal details. As younger users flooded into social media, the shared sense grew that one’s entire existence belonged online — friends, shopping habits, biographical details, and, of course, dating. Suddenly it was cool to trust companies with the equivalent of your FBI file. Not doing so, in fact, seemed like a foolish protest against a very cool, very useful future. With a vague but powerful confidence, many of us assumed we were best off giving companies the benefit of the doubt.
The average Ashley Madison user clearly didn’t worry too much about handing over potentially life-destroying information to a relatively small and shady company. Rather than poking around for details about its information-security standards or procedures, people jumped right in — partly blinded by lust, but, even more so, blinded by habit. However imperfect, we all seemed to agree that transactional privacy had survived the transition into a world full of hackers well enough. Those bad guys might have an eye out for cash or corporate data, but their agenda never seemed to intersect with our petty personal secrets.
That point of confidence began to look a bit tenuous after last year’s iCloud hack, colloquially known as the Fappening, which released hundreds of private nudes into the wild. Right then and there, it was clear that, for hackers, personal secrets were desirable and obtainable. But the victims were disproportionately female celebrities storing nude pictures. Ashley Madison’s disproportionately male nobodies making “discreet” connections didn’t think of themselves as the same kind of target — or even a target at all.
But the threat became more visible with China’s massive heist of data involving tens of millions of government employees. Included in the trove of data were employees’ answers to lie-detector tests, which revealed intimate psychological and romantic histories. Again, however, it was possible for average consumers to write this one off — in this case, chalking it all up to the exigencies of global politics and America’s ongoing cyberwar with the Chinese.
With the Ashley Madison hack, the new reality is now unmistakable: Hackers can, and do, upend average lives. It also brings into focus a new principle that consumers (of information, of goods, of social-networking services) should apply before disclosing anything online: Personal secrets that can potentially wreck major elements of our lives — our jobs, marriages, and reputations — are qualitatively different from other kinds of shared information we might simply prefer to keep private (say, family photos or consumer preferences).
The stark reality is that when our personal secrets are out, the entity that failed to keep them secure can’t compensate us. Secrets like this can’t be quarantined or deleted. They won’t be lost in the shuffle; as Ashley Madison users are discovering, they’re searchable. After our secrets are out, even the most contrite and resourceful businesses can’t make us whole.
A basic heuristic emerges for putting personal information online: your risk tolerance should shrink as the potential for catastrophe grows. If a hack wouldn’t cause tangible damage to your life, it’s probably worth proceeding with business as usual. But if a hack could bring on that kind of disaster, don’t trust that little voice telling you to bumble along and chance it.
That’s not to say sensitive material should never go online — that’s unrealistic advice, and there are plenty of reasons to trust that it can be handled well. Online companies responsible for the protection of “big” medical and financial data are part of a sophisticated system of safeguards. They’re professionally certified by guidance-issuing federal agencies like the National Institute of Standards and Technology, or compliant with the world’s largest standards developer, the International Organization for Standardization. The cybersecurity teams they hire to monitor and audit their systems keep up with guidelines issued by an alphabet soup of governmental and nongovernmental organizations, ranging from regulations promulgated by the U.K.’s Financial Conduct Authority to the Project Control Index, issued by systems-efficiency firm Independent Project Analysis.
There are certifications for employees, too, including information-systems security professionals, information-systems auditors, information-security managers, forensic analysts, and even so-called “ethical hackers.”
Nevertheless, figuring out which companies are actually up to snuff can take some work on your part. See if you can find a jobs page that includes listings for certified InfoSec personnel, or a cybersecurity expert on the board of directors. Because InfoSec certifications aren’t widely advertised to customers — yet — your best indicator is still whether a company is publicly traded. If so, it’ll be held to the industry-standard laundry list of regulations, rules, and best practices. But remember that inside the industry, smart critics warn that companies can fall back on a box-checking approach that technically meets requirements, but leaves weak legacy systems or other vulnerabilities in place.
Until companies start sharing their cybersecurity credentials, sites like Ashley Madison ought to raise red flags. Not only do they lack the legal duty of care that publicly traded companies possess, they also likely haven’t staffed up on cybersecurity at the level of a company that would shoulder that burden. If a business isn’t woven into the regulatory fabric of corporate America — like a bank, a hospital, or a national merchant — it’s all too easy for that business to scrimp on cybersecurity and get a free ride off of our general sense of trust in the system.
The best possible outcome of the Ashley Madison hack is a new level of competence among all companies that store sensitive information, and a new level of confidence among those of us who do business with them. But it’ll take some work to get there. That’s why, going forward, companies compiling personal secrets will — and should — have to overspend on information security. Companies that can advertise their certifications should. Companies that can’t should offer clear disclaimers — and potential users should think about Ashley Madison before accepting those terms.