Encryption has become a hot topic in recent years, and many popular messaging apps such as WhatsApp, Viber and Signal now protect conversations with end-to-end encryption.
While this has been a positive development for security and privacy, it has also brought new challenges for law enforcement and often made investigations more difficult. When data is properly encrypted, the authorities can’t access it without the key.
If terrorists and criminal gangs use these secure communication methods, police can’t set up a wiretap and spy on the connection like they used to. It’s protected by a lot of complex math, which won’t yield to warrants or threats, making the information essentially inaccessible.
The problems that law enforcement agencies face have been brought to the attention of national legislators in many countries, with advocates either pushing for or passing laws that aim to break encryption and help the authorities access data. The push has been particularly prominent in the Five Eyes partners of Australia, Canada, New Zealand, the UK and the USA.
This movement is problematic, because as we noted above, encryption’s only master is the mathematics that it’s composed of. While most of us would like to help the authorities chase after their criminal targets, there’s also a conflict between the needs of the authorities and the interests of global information security.
The central dilemma is that we can’t just break encryption or insert a backdoor in a way that only the authorities can take advantage of – doing so would weaken the entire system, making it possible for attackers to hijack the backdoor, allowing them to access everyone’s communications.
Because we all rely on encryption to keep our online lives safe, such a move would endanger everyone, and the potential negatives far outweigh the assistance that such measures would give law enforcement.
After all, criminals have an interest in not getting caught, so they would move on to other systems that would protect their communications. Meanwhile, the rest of us are likely to continue using the weakened systems, leaving us more vulnerable to attack.
The world of encryption and data security can be complex, so let’s first take a step back and look at why we need encryption, how it works, what a backdoor is in technical terms, as well as some examples of why inserting a backdoor is a terrible idea.
See also: Common encryption types
Why do we need to protect our data in the first place?
Life is full of sensitive and valuable information that we don’t want others to know about. This isn’t a new phenomenon – many of us have embarrassing childhood secrets that we would prefer others not to know, while safe-owners generally want to keep their combinations hidden.
Now that much of our personal and work lives are conducted online, it follows that the digital world also involves significant amounts of sensitive and valuable information that we need to protect. For example, many people now do their banking online. If no protection measures were in place, anyone who could make their way to your account would easily be able to access your money and make transfers.
It’s the same if you send a secret in an online message to your friend. If there weren’t any safeguards, essentially anyone with a little time and know-how could intercept it, then either use the details for their own gain or expose them to the world.
In the physical world, you can write your safe combination down and hide it in your house. Not only would someone have to break into your home to get it, but they would also have to know where it is. Together, this is a relatively secure system.
If you were telling your friend a secret in a face-to-face conversation, you could look around and make sure that no one was listening in. If you’re a high value target who is actively being monitored, you could both walk out to a random place in the forest and have your conversation there. Taking these steps would mean that you could safely tell the secret without any major threat of being overheard.
In contrast, things don’t really work that way on the internet. If no safeguards or monitoring tools were being used, you wouldn’t know if someone is listening in, and you would have no way to prevent attackers from scooping up your data.
As an example, you might think that you are safely talking to a friend online, when in reality you have been sending messages to an attacker instead. Attackers can secretly insert themselves into the middle of a conversation in what’s known as a man-in-the-middle attack.
Hackers take advantage of the internet’s cobbled-together structure, intercepting whatever communication they can to uncover data that they can sell or use to commit further crimes. Because of this threat, we use measures such as encryption and authentication to protect ourselves and keep our data secure. Without them, the internet would be a bloodbath and no one could use it safely.
How is our data protected?
Technology facilitates the new tools and platforms that have made our lives easier, but its progress also drives the advancement of new attacks. In our somewhat-twisted world, technology also provides many of the solutions, although it’s constantly battling to keep up with the latest threats.
When we type in our bank credentials or visit many major websites, the connection is encrypted with a security protocol known as TLS. This is essentially a set of standards that tells computers and servers how they should authenticate and encrypt data with one another, so that both parties are communicating in an interoperable and secure way.
TLS locks up the data that is transferred between parties, turning it into ciphertext. When data is encrypted this way, attackers can’t see the actual data that’s being sent across the connection, preventing them from collecting passwords and other confidential information.
Similarly, messaging platforms like WhatsApp feature end-to-end encryption, which means that even if someone intercepts the data, they can’t access the juicy details you’re sending to your friend. Everything gets sent as ciphertext, which is only decrypted at the end of the journey in your recipient’s application.
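As a toy illustration of what “everything gets sent as ciphertext” means, the sketch below XORs a message with a keystream derived from a shared key. This is a simplified stand-in for the real ciphers TLS and messaging apps use, not an implementation of either:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + nonce + counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret between the two apps"
nonce = b"unique-per-message"
message = b"meet me at noon"

ciphertext = encrypt(key, nonce, message)
print(ciphertext.hex())                 # unreadable noise to an eavesdropper
print(encrypt(key, nonce, ciphertext))  # decrypts back to the plaintext
```

Anyone intercepting `ciphertext` sees only noise; only a party holding `key` and `nonce` can reverse the XOR. Real protocols use vetted ciphers such as AES-GCM or ChaCha20-Poly1305 rather than a hand-rolled keystream like this.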
There are a range of other protocols, algorithms and techniques involved in securing various parts of our digital world. These center on the processes of authentication and encryption, but what exactly are they?
The crucial aspects of data security
If we want to protect information and keep it out of the hands of adversaries, then it needs to be made confidential. In this context, it simply means that the data has to be kept private, so that only authorized individuals can access it.
In data security, confidentiality is achieved through encryption, which is essentially a complex code that changes data into a mishmash of unreadable characters known as ciphertext.
When data is encrypted with a sufficiently secure cryptographic algorithm, it can’t be accessed by any person or entity unless they have the key that was used to encrypt it (to keep things simple, we’ll ignore more complex schemes like public-key encryption in this article). It’s easiest to think of keys as long and complex passwords. Although there are some differences, explaining them would take us off on a tangent.
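To make the “keys are like long, complex passwords” analogy concrete: a real key is just a long random byte string, and when a human-memorable password is involved, it is stretched into a key with a key-derivation function. A minimal sketch using only Python’s standard library:

```python
import hashlib
import secrets

# A real encryption key is simply a long random byte string.
key = secrets.token_bytes(32)   # 256 bits of randomness

# A human-memorable password is stretched into key material with a KDF.
password = b"correct horse battery staple"
salt = secrets.token_bytes(16)
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print(len(derived))  # 32 bytes – the same size as the random key above
```

The salt and the high iteration count are what make password-derived keys tolerable in practice: they slow down attackers who try to guess the password by brute force.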
When encryption is used, it’s important to make sure that only authorized people can access the data. In the case of encrypted folders on a computer, the owner may want to be the only authorized party.
In other situations, such as messaging applications, both sides of the communication need to be authorized to access it. There is also a wide range of circumstances where many people need to be authorized to access certain systems or data.
In each of these situations, access is controlled through authentication. It can involve users inputting their keys directly, or using a number of mechanisms linked to their encryption key. They are divided into the following categories:
- Knowledge – Things that you know, such as passwords, PINs and security questions.
- Ownership – Things that you have. A good example is your phone, which can be part of the process through SMS authentication or authenticator apps. Physical security tokens are another common type.
- Inherence – Things that you are. These mostly include biometric factors, such as fingerprints, vocal patterns and facial recognition.
Authentication measures allow us to confirm the identity of someone who is attempting to make their way into a system or view data. If the individual can’t provide the required information, item or feature, then they will be denied access and the data will be kept confidential.
These processes prevent attackers or any other unauthorized personnel from entering protected systems and accessing encrypted data. Multi-factor authentication systems combine several of these processes, making it more difficult for attackers to gain entry into a system, which increases its security.
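As a hypothetical sketch of how two of these factors combine in multi-factor authentication, the code below checks a stored password hash (knowledge) together with an RFC 4226-style one-time code generated on a device (ownership). The helper names are illustrative, not from any particular library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code in the style of RFC 4226 (an 'ownership' factor)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password: str, otp: str,
                 stored_hash: bytes, otp_secret: bytes, counter: int) -> bool:
    """Require BOTH a knowledge factor and an ownership factor."""
    knowledge = hmac.compare_digest(
        hashlib.sha256(password.encode()).digest(), stored_hash)
    ownership = hmac.compare_digest(otp, hotp(otp_secret, counter))
    return knowledge and ownership

stored_hash = hashlib.sha256(b"hunter2").digest()
otp_secret = b"device-provisioned secret"

code = hotp(otp_secret, counter=1)
print(verify_login("hunter2", code, stored_hash, otp_secret, 1))  # True
print(verify_login("hunter2", "000000", stored_hash, otp_secret, 1))
```

An attacker who steals the password alone still fails the second check, which is exactly why combining factors raises the bar. (Production systems should use a proper password hash such as bcrypt or Argon2 rather than bare SHA-256.)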
Other important aspects of data security include integrity, which indicates whether data maintains its original form, or if it has been tampered with or corrupted. There is also non-repudiation, which is a characteristic that makes it impossible for the author of data to deny their involvement.
These other properties can also be addressed with cryptography, but we won’t cover them in detail, because they aren’t as crucial for understanding how backdoors work.
It’s worth noting that none of these measures are foolproof, and attackers can often find their way around them. However, when they are implemented properly, with best security practices, you can be reasonably confident that these mechanisms are protecting your systems and data adequately.
What exactly is a backdoor?
Now that you have some background about why we need to protect our data, as well as the basics of how it is done, we can look at what backdoors actually are in more depth. A backdoor is essentially any deliberately placed mechanism that allows someone to get around the authentication or encryption measures we use to keep our data and systems safe.
A backdoor may be known to the developer who built it or to the attacker who inserted it. What separates backdoors from ordinary exploits is that they have been put in place intentionally.
Sometimes they are inserted purposefully during development. Often, this will be done for seemingly legitimate reasons, such as to help developers when troubleshooting problems. At other times, backdoors can be placed in systems for more sinister purposes, like gaining access to encrypted user data. They aren’t always built under official guidance – sometimes they are inserted secretly by malicious insiders.
On occasion, backdoors may begin their lives as unintentional exploits. In these scenarios, they may be discovered by the developers, then left in place to allow continued access. Sometimes backdoors may appear as though they were accidental, when in reality they were put there on purpose.
This gives the perpetrator plausible deniability – they can pretend that they didn’t knowingly insert the backdoor and can publicly deny that they have been taking advantage of it.
Hackers can also create backdoors through Trojan horses and other more advanced techniques. If they are well-resourced or have nation-state backing, these attacks can be incredibly sophisticated.
Why are backdoors dangerous?
If there’s a backdoor and an attacker knows about it, or someone unwittingly stumbles across it, it can give them access to the systems or data that are supposed to be protected by authentication and encryption.
Obviously, this is disastrous, because it sidesteps all of the effort that was made to secure the information, leaving the data out in the open and vulnerable to anyone who goes through the backdoor. Backdoors can come in a number of forms and give varying degrees of access depending on the situation.
Backdoors can be used to:
- Gain remote access to systems.
- Install malware.
- Access, steal, copy, alter or delete sensitive or valuable data.
- Change system settings.
There are some additional tasks that backdoors can be used for, but the most worrying situations in the above list are when they are used to gain unauthorized access to accounts or systems, and when they are leveraged to steal data such as company secrets, credit card details, and passwords.
If attackers attain this kind of unrestricted access, they can cause huge amounts of damage, or use any data they find to mount further attacks and criminal campaigns.
Where can backdoors be placed?
Backdoors can be inserted into both hardware and software. If they are placed in hardware, this can happen during manufacture, at some point in the supply chain, or surreptitiously later on. Once a person or organization owns a device, a backdoor may also be inserted by anyone who has physical access to it.
Backdoors are a significant threat in software as well. They can be inserted at any level, from compilers and firmware all the way up to applications. Software backdoors can be placed during the initial development stage, pushed out as part of updates, or even installed by an attacker with a Trojan.
The problem with legally mandated backdoors
To understand why legally mandated backdoors are a bad idea, we have to look at how such a system might be set up. One of the most practical solutions would be for each provider of encrypted services to hold a master key that can unlock the individual keys protecting each user’s data. Even if such a system were implemented in an ideal way, it would still create unreasonable security risks.
While much of the system could be automated, employees would have to be involved at some stage, and this is where major risks from errors or corruption could occur. Let’s say that a large tech company has to set up such a system, and it gets hundreds or thousands of requests to access encrypted user data each day, from various branches of law enforcement.
To handle this kind of volume, multiple employees would have to be involved in handling the master key and dealing with the private keys of the relevant individuals. When such a complex system needs to be accessed constantly, it’s not hard to see how mistakes could be made, which may end up allowing unauthorized access.
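To see why a master-key escrow is such a concentrated risk, consider the hypothetical toy model below, where XOR stands in for real key-wrapping. Compromising the single master key exposes every user’s key at once:

```python
import secrets

# Hypothetical sketch: each user's key sits in escrow, "wrapped" under
# a single provider-held master key (XOR stands in for real wrapping).
master_key = secrets.token_bytes(32)

def wrap(user_key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(user_key, master_key))

users = ["alice", "bob", "carol"]
user_keys = {u: secrets.token_bytes(32) for u in users}
escrow_db = {u: wrap(k) for u, k in user_keys.items()}

# An attacker who steals ONLY the master key recovers every user key:
stolen = master_key
recovered = {u: bytes(a ^ b for a, b in zip(w, stolen))
             for u, w in escrow_db.items()}
print(recovered == user_keys)  # the whole database falls together
```

In a system without escrow, an attacker has to compromise users one at a time; with it, one breach or one corrupt insider unlocks everything.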
On top of the risk that comes from human error, we also have to consider the incredible value of such a repository. If it held the private keys for millions, or even billions of users and was a gateway to their data, every criminal gang and nation-state hacking group would be incredibly tempted to bust it open. They would be willing to spend hundreds of millions of dollars on attacks against the database.
With such a huge amount of resources being thrown at the problem, it may not take long before motivated groups defeat whatever security measures are in place. This could be done through either bribery, coercion or technical attacks, but the result would be the same – unbridled access to the treasure trove of data.
The world is already inundated with seemingly constant data breaches, so why should we insert backdoors that may lead to even more incursions?
Global security isn’t the only issue at play here. We also have to consider the potential for these systems to be abused. Australia’s recently passed Assistance and Access Bill is one of the most concerning pieces of legislation that could lead to the insertion of backdoors.
The bill itself is quite ambiguous, which is one of its most worrying aspects. On top of this, the procedure for demanding backdoors involves very limited oversight. The authorities don’t even need a specific warrant for these demands (although a warrant must already have been issued in the underlying case), so no judge determines whether the desired access measures are reasonable.
It gets worse, because much of the process is shrouded in confidentiality. Companies aren’t legally allowed to go public if they have been forced to insert a backdoor.
Australia is a relatively democratic country, so its approach is hardly a worst-case scenario. But what if authoritarian regimes ended up demanding this kind of access? If companies are forced by these regimes to decrypt user data, it could lead to human rights abuses being committed against the targets.
Examples of backdoors
Backdoors come in a variety of different configurations. They can be hardware or software-based, inserted for seemingly legitimate purposes such as developer access, hidden inside the code to enable spying, or even inserted by hackers to steal data or launch other cyberattacks.
The following are some of the most brazen backdoor examples that have been discovered, covering a broad variety of situations:
The Clipper chip
One of the earliest backdoor controversies surrounded the NSA’s attempted introduction of the Clipper chip in the nineties. The chip was designed to encrypt voice communications and other data; however, it purposely included a backdoor that allowed the authorities to decrypt messages.
Each chip had its own unique key that could unlock the encrypted communications, and these keys would be collected and stored by the government in escrow. Supposedly the keys would only be accessed with court approval, however many cypherpunks and civil libertarians were skeptical.
Adding to this unease was the secretive nature of the chip’s underlying security. While the chip and its backdoor were publicly known, it relied on an algorithm called Skipjack, which was classified at the time, preventing researchers from analyzing it for security holes. The only details that were made public were that the algorithm was similar to DES, symmetric, and had an 80-bit key.
Some outside researchers were eventually brought in to give an independent assessment of the chip. They found the algorithm to be relatively secure for its time period, without any glaring holes. Even academics who studied the algorithm after it was declassified didn’t discover any outrageous vulnerabilities.
Despite these assurances, the algorithm’s detractors still had a right to their skepticism. After all, the NSA has a long history of undermining cryptographic schemes and skirting around their edges. The surrounding secrecy certainly didn’t help to allay any fears.
While the encryption scheme turned out to be secure, the key escrow system was vulnerable instead. Named the Law Enforcement Access Field (LEAF), it required the correct 16-bit hash to gain access. The hash was too short, and other suitable hash values could be brute-forced with relative ease.
This led to the possibility of other hash values being accepted by the receiving chip, which could ultimately end up denying access to law enforcement. In essence, this meant that those with enough determination may have been able to disable the escrow capability, allowing them to encrypt their data through the chip, but preventing the authorities from being able to access it.
Of course, this played right into the hands of serious criminals and terrorists, who had the time and resources to get around the security mechanisms. If they were the main targets of the escrow system in the first place, this vulnerability made the entire system pointless. The escrow system would only be useful against those who didn’t have such capabilities or resources, such as normal, law-abiding citizens.
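The arithmetic behind the attack is easy to demonstrate. The toy below uses a truncated SHA-256 as a stand-in for the LEAF’s actual checksum; with only 16 bits, a bogus value that passes the check turns up after roughly 33,000 random tries on average:

```python
import hashlib

def checksum16(data: bytes) -> int:
    """Toy 16-bit checksum, standing in for the LEAF's 16-bit hash."""
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

target = checksum16(b"genuine LEAF field")

# Only 2**16 = 65,536 checksum values exist, so a brute-force search
# is over in a fraction of a second on any modern machine.
attempts = 0
while True:
    attempts += 1
    forged = b"bogus-" + attempts.to_bytes(8, "big")
    if checksum16(forged) == target:
        break

print(f"matching bogus field found after {attempts:,} attempts")
```

A 16-bit check simply isn’t a meaningful barrier; each additional bit doubles the attacker’s work, which is why real integrity tags are 128 bits or more.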
A number of other attacks on the escrow system were published, showing that it was neither secure, nor suitable for its supposed purpose. This culminated in 1997’s paper, The Risks of Key Recovery, Key Escrow, and Trusted Third Party Encryption, which attacked key escrow and exceptional access systems on principle, rather than solely in the Clipper chip’s implementation.
The paper argued, “Key recovery systems are inherently less secure, more costly and more difficult to use than similar systems without a recovery feature.” It elaborated on the security sacrifices and the convenience issues that result from these systems, contending that the scale and complexity would lead to “ultimately unacceptable risks and costs.”
This and other issues resulted in widespread resistance to the Clipper chip, and the chips were never adopted in any significant manner by either manufacturers or consumers. At the same time, emerging security systems such as PGP meant that safer encryption options were on the market, making a compromised system like the Clipper chip unnecessary.
Although the most damning paper against key escrow systems was published more than 20 years ago, many of its principles still hold true. With the resurgence of calls for encryption backdoors over the last few years, the paper’s original authors released a follow-up report, outlining why such systems were still a bad idea.
Bloomberg’s spy chip story
Backdoors can be slipped into the design of various components at a wholesale level, or they can be planted individually by adversaries. Inserting backdoors in either way is challenging, and one of the most prominent examples in recent years was probably a fabrication.
In October of 2018, Bloomberg published a cover story called The Big Hack, in which its reporters alleged that Chinese suppliers were planting spy chips in their products, which went on to be used by some of the world’s biggest tech companies.
The report was followed by strong denials from each of the companies involved, including Supermicro, Apple, and Amazon. There has since been an extensive review, and still not a single spy chip has turned up.
Bloomberg has stuck to its guns and refuses to retract the story, but at this stage, the most likely conclusions are that the story was made up, or that the journalists were fed misinformation by their sources.
NSA hardware backdoors
The NSA has an entire team devoted to sneakily accessing communications, known as the Tailored Access Operations (TAO) division. According to Spiegel, the division even has a catalog for its handiwork. Its hardware-related offerings include a rigged monitor cable that allows operatives to see “what is displayed on the targeted monitor”.
Another product is an “active GSM base station” that can be used to monitor mobile phones, while disguised bugs for listening in to computers are also available. The above-mentioned products can be purchased for $30, $40,000 and in packs of 50 for $1 million, respectively.
The catalog also included a number of different hardware backdoors that could get around the security of products from a variety of manufacturers. These included BIOS implants that could be used to undermine HP servers, devices that could be used against Cisco and ASA firewalls, and backdoors that worked around the security of Huawei routers.
Other hardware backdoors
Apart from the above instances, examples of hardware backdoors aren’t especially common. However, they remain a particular concern in military, intelligence and secret government contexts. There is some argument about how practical it is to use these kinds of modifications in spying, especially when considering the relative ease of alternatives like software backdoors.
In light of the extensive caution taken by organizations in the above-mentioned fields, the two most probable conclusions are either that these sectors are overly cautious, or that they don’t release much information regarding any hardware-based espionage attempts that have been discovered.
The latter conclusion is certainly plausible since there is a tendency for intelligence agencies not to display their knowledge of an opponent’s capabilities. This kind of bluffing can be advantageous because it gives the agencies leverage that helps them to closely monitor their adversaries, feed them disinformation and secretly build up defenses against the known capabilities.
Publicly disclosing any backdoors that have been discovered by government agencies would result in the information getting back to the adversary, so such a policy would take away the above-mentioned advantages. For these reasons, it’s not unreasonable to think that hardware backdoors are more common than the public is led to believe.
However, given the complexity and expense of these types of attacks, it’s likely that they are done in a targeted manner, instead of in a widespread fashion. A larger quantity of targets would also increase the chances of the hardware backdoors being discovered, so limiting the range of hardware-based attacks would assist in keeping them covert.
Back Orifice
Back Orifice is a famed software backdoor that was released in the late nineties. It was originally unleashed to show the inherent security issues in Windows 95 and 98, but it could also serve as legitimate remote access software.
Alongside these applications, it could be used in a more nefarious way – as a Trojan that can take over targeted systems. When hackers tricked their victims into installing the program, it created a backdoor that enabled them to remotely access their machines, log keystrokes, steal passwords and control the various processes on the computer. Back Orifice was followed by Back Orifice 2000, which targeted Windows NT, Windows 2000 and Windows XP in a similar manner.
The NSA’s backdoor in a random number generator
One of the most audacious backdoor examples in recent times was planted by the NSA in the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG). It was purported to be a cryptographically secure pseudorandom number generator, and it went on to be made an industry standard by the National Institute of Standards and Technology (NIST).
Let’s backtrack a little to give you a deeper understanding of the issue. Random number generators are a crucial part of cryptography, and are used in a range of applications. Many of our algorithms rely on this type of generator to produce numbers that are sufficiently random – if the random number generator produces numbers that are too predictable, then hackers may be able to decrypt any data that was encrypted with the algorithm.
In the early 2000s, the NSA led an effort to standardize a new random number generator that uses elliptic curve cryptography, dubbed Dual_EC_DRBG. The agency first pushed to have it adopted by the American National Standards Institute (ANSI). According to one of the standard’s authors, John Kelsey, questions regarding a potential backdoor were first raised in a meeting in 2005.
The main issue was that some of the numbers had been carefully chosen in a way that made the output of the random number generator predictable. This setup enabled the NSA to decrypt data that was encrypted by protocols that used Dual_EC_DRBG.
To allay these fears, the authors of the standard made it possible for implementers to choose their own numbers, which would supposedly neutralize the backdoor. However, the fine print of the standard stated that choosing other values would not actually end up being compliant with the standard. In effect, implementers were forced to use the compromised numbers, enabling the backdoor.
Another study found that the random number generator was insecure for separate reasons. Cryptanalysis showed that the output from the generator was not truly random, leaving it vulnerable to other attacks.
Criticism and speculation continued in the following years; however, the flawed algorithm still saw some mainstream adoption, notably as part of RSA’s BSAFE cryptography library. It wasn’t until September 2013 that the intentional nature of the backdoor was confirmed, as part of Edward Snowden’s leaked NSA documents.
Among the vast amount of leaked data was evidence of the NSA’s Bullrun program, which aimed to break encryption algorithms, insert backdoors and decrypt data through a variety of other means. One of the many revelations in the data dump was that the NSA had actively worked to insert a backdoor into the Dual_EC_DRBG standard.
Following the leak, NIST, the NSA and RSA released statements distancing their organizations from any involvement. Despite these denials, the subversion that had taken place was clear to those in the cryptography community.
At the end of the year, a Reuters report revealed that the NSA had orchestrated a secret $10 million payment to RSA so that Dual_EC_DRBG would be included in BSAFE. Despite the damning reports, the organizations involved continued to give carefully worded statements that downplayed their roles, along with general recommendations to implement more secure random number generators instead.
Up until 2015, Juniper Networks was still using Dual_EC_DRBG in the operating system for its NetScreen VPN routers. There were supposedly countermeasures in place to disable the backdoor, but toward the end of the year it was discovered that code which voided these defenses had been inserted by unknown attackers. This vulnerability made it possible to decrypt traffic that was encrypted on the NetScreen VPN routers.
WordPress plugin backdoor
In May 2019, researchers from Defiant found a backdoor in a WordPress plugin called Slick Popup. The flaw affected all versions up to 1.71 and could be used by attackers to access websites that ran the plugin.
In the affected versions of the plugin, the credentials were hardcoded, with a username of slickpopupteam and a password of OmakPass13#. Anyone with modest technical skills could dig these values out of the code, rendering the login check useless.
Hackers could use these values to log in to the websites of their targets, then build other backdoors and launch further attacks. It’s worth emphasizing that the flaw in the plugin could grant access to the entire site of anyone who deployed it. It’s another reminder that users always need to be cautious, because all of their defenses can be compromised by just one unreliable plugin.
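The flaw boils down to credentials baked into the source, something like the hypothetical sketch below. The username and password are the real values reported by Defiant; the surrounding code is illustrative, not the plugin’s actual PHP:

```python
# Toy sketch of the flaw: credentials baked straight into the source.
HARDCODED_USER = "slickpopupteam"   # values reported by Defiant
HARDCODED_PASS = "OmakPass13#"

def login(username: str, password: str) -> bool:
    # The hidden check accepts the hardcoded pair on ANY site running it.
    return username == HARDCODED_USER and password == HARDCODED_PASS

# Anyone who reads the code (or a public writeup) can walk right in:
print(login("slickpopupteam", "OmakPass13#"))   # True, on every install
print(login("admin", "guess"))                  # False
```

Because the same pair works on every installation, a single disclosure instantly exposes all sites running the vulnerable plugin.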
Initially, the developer released a fix for the paid version of the plugin, but not the free version. Although the developer made downloads of the free version unavailable, those who were already using the plugin remained vulnerable.
Recent government demands for backdoors
Despite backdoors being regarded as a bad idea by the vast majority of those in the industry, various governments continue to push for special access to encrypted data. Over time, the public seems to have become more aware of the inherent risks involved; however, a worrying trend has emerged in which politicians demand the same type of access under different names.
No matter what politically palatable terms are used, any proposition that resembles a backdoor involves subverting the existing security mechanisms of encryption and authentication, which in turn endangers the entire security ecosystem.
One example of the trend comes from FBI Director Chris Wray’s 2018 speech, in which he specifically stated, “We’re not looking for a ‘back door’”, only to follow the remark with, “What we’re asking for is the ability to access the device once we’ve obtained a warrant from an independent judge, who has said we have probable cause.”
Despite Wray’s best efforts, his two statements are contradictory. A backdoor system is the only reasonable way that the authorities could be granted special access to encrypted communications, whether the process involves judges and warrants or not.
The FBI isn’t alone in its conflicting demands. The UK’s signals intelligence agency, GCHQ, has taken a different approach with the same kind of potential ramifications for global security. In an essay posted on the Lawfare blog, two GCHQ technical directors argued for a system seemingly analogous to the old trick of placing alligator clips on a phone wire and listening in.
While their proposal may seem promising, it breaks down under scrutiny. The first major point of contention is that end-to-end encryption in our online world is quite different to the phone lines of yore. Physical access to a phone line is needed in order to bug it, whereas digital communications that haven’t been properly secured can theoretically be breached from anywhere in the world.
This makes an insecure online communication channel far more vulnerable than an insecure phone wire. A Mongolian hacker is hardly going to bother making their way across mountains and seas just so they can listen in on a phone line, but the geographical distance becomes irrelevant when internet-based attacks come into play.
The GCHQ plan advocates pressuring tech companies to “silently add a law enforcement participant to a group chat or call.” The essay asserts that such a setup would allow the authorities to listen in while maintaining end-to-end encryption, but it’s not that simple.
A group of tech companies, including Apple, Google and Microsoft, as well as leading cryptographers, summed it up best in their open letter protesting the proposal.
“The GCHQ’s ghost protocol creates serious threats to digital security: if implemented, it will undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused.
“These cybersecurity risks mean that users cannot trust that their communications are secure, as users would no longer be able to trust that they know who is on the other end of their communications, thereby posing threats to fundamental human rights, including privacy and free expression. Further, systems would be subject to new potential vulnerabilities and risks of abuse.”
Despite the essay’s careful argument to the contrary, the GCHQ’s proposal would bypass the normal authentication mechanisms in communication platforms. That makes it a backdoor by another name, and it would carry the same risks.
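To see why a silently added participant subverts authentication, it helps to remember that in an end-to-end encrypted group chat, the sender’s client encrypts each message for every member on the roster the server provides. The following toy model (not real cryptography – the names and simplified delivery are purely illustrative) shows how a hidden “ghost” member can read along while the sender’s interface shows only the legitimate participants:

```python
# Toy model, NOT real cryptography: in an E2EE group chat, the sender's
# client encrypts each message for every member on the roster it is given.
# If the server silently appends a "ghost" member, the ghost receives a
# readable copy while the UI still shows only the legitimate members.

class Member:
    def __init__(self, name):
        self.name = name
        self.inbox = []

def send_group_message(sender, ui_roster, server_roster, plaintext):
    # A real client would wrap a fresh message key for each recipient's
    # public key; here we simply deliver the plaintext to model who can read.
    for member in server_roster:      # the client trusts the server's roster
        member.inbox.append((sender.name, plaintext))
    # The sender's UI only displays ui_roster, so the ghost stays hidden.
    return [m.name for m in ui_roster], [m.name for m in server_roster]

alice, bob = Member("Alice"), Member("Bob")
ghost = Member("ghost")               # silently added by the server
shown, actual = send_group_message(alice, [alice, bob],
                                   [alice, bob, ghost], "meet at noon")
print("recipients shown:", shown)       # ['Alice', 'Bob']
print("recipients who can read:", actual)  # ['Alice', 'Bob', 'ghost']
```

The mismatch between the two lists is exactly the authentication failure the open letter describes: the user can no longer trust that they know who is on the other end of their communications.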
The Germans haven’t left themselves out of the anti-encryption circus, mulling their own laws that would force tech companies to hand over the plaintext of encrypted conversations whenever they are ordered to by a court.
This is currently impossible in any worthwhile encrypted messaging system, so if such laws are enacted, the companies would have to modify their services and weaken their security. As we have discussed repeatedly, such a move would be detrimental to global security.
Australia has recently positioned itself as a world leader in poorly thought-out security laws. Late last year, it passed the Assistance and Access Bill, a vague and unpolished piece of legislation that may also force tech companies to weaken their security.
Unfortunately, the processes behind such demands are shrouded in secrecy, so the public may never know if or how the powers are being used, and what ramifications they may have.
How can we help to minimize the risks of backdoors?
Backdoors are a hot topic in the current era of technology, both as modes of attack and as part of proposals that supposedly aim to protect our societies from criminal threats. While there is no sign of either type going away, there are a number of things that we can do to minimize the risks of backdoors.
Oppose any laws that mandate backdoors
The most obvious move is to oppose any legislation that aims to add backdoors or otherwise compromise our security systems. No matter what kind of justification a government tries to use, whether it’s terrorism or crime, these proposals are likely to do far more harm to general security than they can ever compensate for in catching bad guys.
If your country is proposing laws that could be detrimental to security, you may want to get involved in the political process and demand that the laws be scrapped. This could involve going to protests, writing letters to your representatives or taking other political actions.
Despite your best efforts, such opposition may not always be effective – just look at Australia. Out of the 343 submissions made in response to the publication of the Assistance and Access Bill’s draft, all but one were critical of the bill. Even with such heavy opposition, somehow the legislation was still passed.
Despite the challenges, political action against such proposals is worth trying. Public opposition has won the debate before – you just need to look at the Clipper chip example from above for reassurance that it’s not always a hopeless endeavor.
Be wary of hardware backdoors
Hardware backdoors can be incredibly difficult to detect, especially if they are introduced as part of a sophisticated attack. This leaves us vulnerable, especially when you consider that much of the technology supply chain is based in countries where adversaries have numerous opportunities to insert backdoors.
Relocating the supply chain is hardly a pragmatic approach. It would take decades to establish the necessary infrastructure, and tech products would become significantly more expensive due to increased labor costs. However, it’s still worth considering as a long-term option, especially for critical infrastructure and hardware used in sensitive processes.
There are a variety of checks in place that seem relatively effective when it comes to high-value targets such as military and government systems. While these don’t extend to regular users, everyday consumers shouldn’t be too worried about the possibility of hardware backdoors in their computers and devices, because most people don’t deal with information valuable enough to justify the costs of inserting wide-scale backdoors.
Software backdoors are more of a concern, because it’s much cheaper and easier to insert them. The risks of these backdoors can be partially minimized by using open-source software where possible. The open nature of the source code means that many people can independently look over it, making it much more likely for backdoors to be discovered.
Backdoors can also be limited by compiling software as reproducible builds. This process essentially establishes a chain of trust between the source code that humans can read and the binary code that machines actually execute.
Building software in this manner makes it possible to verify that a distributed binary was genuinely produced from the published source code, making it easier to spot dangers such as backdoors.
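The verification side of a reproducible build comes down to a simple comparison: if anyone can rebuild the published source and get a byte-identical binary, the digests will match. A hedged sketch of that check, using placeholder byte strings in place of real build artifacts:

```python
# Hedged sketch: a reproducible build means identical source yields a
# byte-identical binary, so anyone can rebuild it and compare digests.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_official(official_binary: bytes, rebuilt_binary: bytes) -> bool:
    # Matching hashes mean the shipped binary really came from the
    # published source; a mismatch is a red flag (possible tampering).
    return sha256_hex(official_binary) == sha256_hex(rebuilt_binary)

vendor_release = b"compiled-output"       # stand-in for a shipped binary
my_rebuild = b"compiled-output"           # stand-in for an independent rebuild
tampered = b"compiled-output+backdoor"    # stand-in for a modified binary

print(matches_official(vendor_release, my_rebuild))  # True
print(matches_official(vendor_release, tampered))    # False
```

In practice, projects publish the expected digests and toolchain details so that independent parties can run exactly this comparison.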
Another important step for minimizing the risks is to update software as soon as possible. When a backdoor is discovered and made public, the next software update often contains a fix. If you hold off on updating, you become more vulnerable, because the publicity surrounding the backdoor may prompt other hackers to launch attacks through it.
What will happen if backdoors are introduced by governments?
If laws mandating backdoors are introduced, the odds are that it will be on a country-by-country basis. Let’s say that the US forces all of its companies to introduce backdoors that allow the authorities to access previously encrypted user data. If a company wanted to avoid such demands, it may be able to move its operations to another jurisdiction where it isn’t forced to comply.
Alternatively, users could migrate to a messaging service that’s based in another country that isn’t subject to such laws, allowing them to avoid the potential compromises that come from backdoors. If every country mandated backdoors, then decentralized encryption apps could pop up that aren’t based in any jurisdiction.
We have to be realistic and understand that there will always be ways to get around any possible encryption backdoor laws. The problem is that only those with the strongest motivations – terrorists, criminals and the like – will bother to use these avenues.
In effect, legislating backdoors will only serve to compromise general security. Because criminals will move on to more sophisticated ways of securing their communications, such laws won’t deliver the results the authorities are chasing in any significant way.
A large majority of experts in the field agree that purposefully inserting backdoors into our systems makes everyone more vulnerable. It’s hard to justify the potential dangers against whatever minimal benefits the authorities might gain. So why aren’t we listening to the experts?