Lily Hay Newman 03.28.19 06:00 AM
WIDESPREAD ADOPTION OF the web encryption scheme HTTPS has added a lot of green padlocks—and corresponding data protection—to the web. All of the popular sites you visit every day likely offer this defense, called Transport Layer Security, or TLS, which encrypts data between your browser and the web servers it communicates with to protect your travel plans, passwords, and embarrassing Google searches from prying eyes. But new findings from researchers at Ca’ Foscari University of Venice in Italy and TU Wien in Austria indicate that a surprising number of encrypted sites still leave these connections exposed.
In an analysis of the web’s top 10,000 HTTPS sites—as ranked by Amazon-owned analytics company Alexa—the researchers found that 5.5 percent had potentially exploitable TLS vulnerabilities. These flaws were caused by a combination of issues in how sites implemented TLS encryption schemes and failures to patch known bugs (of which there are many) in TLS and its predecessor, Secure Sockets Layer. Worst of all, these flaws are subtle enough that the green padlock still appears.
“We assume in the paper that the browser is up to date, but the things that we found are not spotted by the browser,” says Riccardo Focardi, a network security and cryptography researcher at Ca’ Foscari University of Venice, who also cofounded the auditing firm Cryptosense. “These are things that are not fixed and are not even noticed. We wanted to identify these problems with sites’ TLS that are not yet pointed out on the user side.”
The researchers, who will present their full findings at the IEEE Symposium on Security and Privacy, to be held in May in San Francisco, developed TLS analysis techniques and also used some from existing cryptographic literature to crawl and vet the top 10,000 sites for TLS issues. And they developed three categories for the types of vulnerabilities found.
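The paper describes a purpose-built scanner, and its exact probes aren’t reproduced here, but the basic first step of any such crawler—connecting to a host and inspecting what the TLS handshake negotiates—can be sketched with Python’s standard library. The version cutoff used below is an illustrative assumption, not the researchers’ actual criteria:

```python
import socket
import ssl

def check_tls(hostname, port=443, timeout=5):
    """Connect to a host and report the negotiated TLS version and cipher.

    A real vulnerability scanner probes for specific known flaws; this
    sketch only shows the basic handshake-inspection step.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return {
                "protocol": tls.version(),  # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],  # negotiated cipher suite name
            }

def looks_outdated(result):
    # Illustrative heuristic: flag anything older than TLS 1.2
    # as worth a closer look.
    return result["protocol"] in ("SSLv3", "TLSv1", "TLSv1.1")
```

A crawler would run a check like this (plus many finer-grained probes) across each site and its related hosts, then bucket the findings by severity.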
Some flaws represent a risk, but would be difficult for an attacker to rely on alone, because they involve initiating the same query multiple times to slowly extrapolate information from small tells. These “partially leaky” bugs might help an attacker decrypt something like a session cookie, since the cookie is likely sent along with every site query, but they would be less effective for grabbing, say, passwords that a user generally only sends once in a given session.
The other two categories are more sinister. Vulnerabilities that are full-on “leaky” involve more deeply flawed encryption channels between browsers and web servers that would enable an attacker to decrypt all the traffic passing through them. Worst of all are the “tainted” channels the researchers observed, which would potentially allow an attacker not only to decrypt traffic but also to modify or manipulate it. These are precisely the kinds of “man in the middle” attacks that HTTPS encryption was created to defeat.
In practice, the flaws the researchers found are not necessarily critical vulnerabilities, according to Kenn White, a security engineer and director of the Open Crypto Audit Project. Many of them are potentially exploitable, but might not be appealing targets for hackers, because they would take more effort and be more conspicuous to abuse in an attack than other common vulnerabilities. But he emphasizes that the findings are still important as part of larger initiatives to clean up the web.
“While ‘don’t handle cookies on your web server like it’s 2005’ and ‘use decent TLS’ is sort of obvious, this research highlights that those basic things are still a struggle for a surprisingly large number of high-traffic sites,” White says. “It’s vital that web developers employ modern HTTP antitampering techniques.”
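The “antitampering techniques” White alludes to include standard response headers such as HTTP Strict Transport Security (HSTS), which tells browsers to refuse plain-HTTP connections, and the Secure/HttpOnly cookie attributes, which keep session cookies off unencrypted and script-visible channels. A minimal, framework-agnostic sketch (the helper name and values are illustrative, not from the paper):

```python
def hardened_headers(session_id):
    """Build response headers that apply common HTTP anti-tampering defenses.

    Illustrative sketch only; real sites set these via their web server
    or framework configuration.
    """
    return {
        # HSTS: force HTTPS for one year, including subdomains.
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        # Secure: only send the cookie over TLS.
        # HttpOnly: hide it from JavaScript.
        # SameSite: limit cross-site sending.
        "Set-Cookie": f"session={session_id}; Secure; HttpOnly; SameSite=Lax",
    }
```

Headers like these don’t fix a broken TLS deployment, but they blunt the downgrade and cookie-theft tricks that make “tainted” channels exploitable.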
The researchers say that beyond specific assessments of how many sites have TLS vulnerabilities, a crucial concept in this project has to do with the fundamental interconnectedness of the web and how small TLS flaws on one page have potential ramifications for many others. Example.com’s homepage may have solid HTTPS, but if mail.example.com has problems and the two interact, the encrypted connections between them will be undermined.
“When you have domains that are related to each other, sensitive data and things like cookies may be shared between them, which means that when one of the hosts is weak the vulnerability may propagate,” Ca’ Foscari’s Focardi says. “On the web you have a lot of dependencies and relationships between URLs and hosts that can produce an amplification of a TLS vulnerability.”
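The propagation Focardi describes follows from how browsers scope cookies: a cookie set for a parent domain is sent to every subdomain, so a TLS flaw on any one of those hosts can expose it. A simplified model of that domain-matching rule (a stripped-down version of the RFC 6265 behavior, for illustration only):

```python
def cookie_sent_to(cookie_domain, host):
    """Return True if a cookie scoped to cookie_domain is sent to host.

    Simplified model of browser cookie domain matching: a cookie with
    Domain=example.com is sent to example.com and all of its subdomains.
    """
    return host == cookie_domain or host.endswith("." + cookie_domain)

# A session cookie scoped to example.com also travels to mail.example.com,
# so weak TLS on the mail host can expose traffic meant for the main site.
```

This is why the researchers counted related hosts at all: a single weak subdomain widens the attack surface of every site that shares state with it.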
The researchers identified almost 91,000 related domains that are either subdomains of or share resources with the top 10,000 sites. TLS vulnerabilities in these dependent sites could create a ripple of exposure across the overall population. So the 5.5 percent of the top 10,000 sites with flaws actually breaks down into 292 of the top 10,000 sites that have direct TLS vulnerabilities and 5,282 related sites that, through their own TLS bugs, create potential exposures for the main 10,000. Of this total, more than 4,800 of the flaws are the most severe “tainted” vulnerabilities, 733 are the “leaky” bugs that allow for decryption but not manipulation, and 912 are the lower-severity “partially leaky” bugs.
The idea that interdependencies can create vulnerabilities is well known in web security research—it essentially boils down to “you’re only as strong as your weakest link.” And the findings from Ca’ Foscari fit into a broader body of research looking at how to detect and mitigate these types of exposures. The Ca’ Foscari researchers say they are working to develop a tool based on their findings that can help developers identify frequently overlooked TLS vulnerabilities.
Given that the whole point of the web is interconnectivity at a massive scale, it’s increasingly vital to be able to catch small oversights and weaknesses that could have an outsize impact on overall security.