Encryption Uncoded: A Consumer’s Guide

Concerned by reports of hacking, data breaches and government spying, companies and consumers are looking for better ways to protect their data. Many are turning to encryption, a method of encoding messages that goes back millennia. Encryption is commonly used to secure online banking sessions and to protect credit-card data. But for the average computer user, it remains a mystery.

Here’s a brief guide to help readers unlock its secrets.

How does encryption work?

If you saw the recent movie “The Imitation Game,” you’ve seen a form of encryption that is rudimentary by modern standards. During World War II, the Germans used a machine to turn military messages into coded strings of symbols. These days, computers running complex mathematical formulas can do the same thing much faster, and the codes are much harder to crack.

What’s it used for?

If you’ve ever done banking online, you may have noticed a “lock” icon in the address bar, or that the bar turned green. That means the browser session is encrypted by your bank.
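That lock icon signals a TLS-encrypted session. A minimal sketch of how a program sets up the same kind of verified connection, using Python’s standard ssl module (the commented-out connection and hostname are illustrative only):

```python
import ssl

# Build a client context with the same defaults a browser relies on:
# certificate verification and hostname checking are both enabled.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates must validate
print(context.check_hostname)                    # names must match the cert

# Connecting would then look like this (network call, not run here;
# the hostname is a made-up example):
# import socket
# with socket.create_connection(("bank.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="bank.example.com") as tls:
#         print(tls.version())
```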

Consumers can download a growing crop of encryption tools for texting, browsing sessions and video and phone calls. Users usually must download an app or install software that scrambles messages as they are sent. (The recipient needs to be using the same app or software to unscramble the message.)

Apple has started encrypting personal data on its latest mobile operating system, iOS 8. This means an outsider who hacks into a device or into Apple’s servers would see a string of unreadable characters instead of actual messages or FaceTime videos.

Can I encrypt email messages?

Yes, but it’s tricky. Sender and receiver must use the same type of encryption. If you have encryption switched on, but the friend you’re emailing doesn’t have it, he or she won’t be able to read your message.

Since the revelations of former National Security Agency contractor Edward Snowden about electronic eavesdropping by the NSA, big tech companies have made moves to add encryption. Yahoo Inc. and Google Inc. both have announced plans to begin encrypting emails of users of their services, but the projects are moving slowly.

Can encryption really protect me from getting hacked?

Maybe. If a hacker obtains the encryption keys, or the formula that unlocks the code, all that encrypting was for naught. And that happens all the time in corporate data breaches, says Avivah Litan, a vice president and senior analyst focusing on security issues at market-research firm Gartner Inc. For example, as part of the 2007 breach at TJX Cos., hackers stole a TJX point-of-sale card-reader system and brought it home. The hackers were able to break the code used to encrypt card transactions and stole data from tens of millions of customer accounts.

How can I get started?

In addition to Apple’s built-in encryption in its new mobile devices, Android users can download WhatsApp, which encrypts text messages. WhatsApp, a company owned by Facebook Inc., says it is working on offering encryption for all communication sent between WhatsApp users, including images, audio and text.

A number of vendors—including Voltage Security Inc., Protegrity and RSA Security, a unit of EMC Corp.—offer encryption of corporate data, including email and credit-card records. Silent Circle’s Blackphone is a phone for corporate users that can send encrypted voice calls, text, emails and other data—if both parties are using a Blackphone.

Why isn’t everything encrypted?

There are plenty of reasons. Encryption is time-consuming and difficult to implement. It’s hard to properly manage who has access to encryption keys, and it slows system performance.

Online Extortionists Are Using Encryption as a Ransom Weapon

Most of the time we discuss encryption as a way to protect ourselves online, but an increasingly popular form of digital attack uses it as an extortion tool. Criminals are stealing personal files, encrypting them, and holding them hostage until their targets pay for the decryption key.

A report from security firm Symantec details a sharp rise in “crypto-ransomware,” its term for this devious form of online crime, noting that these incidents were 45 times more common in 2014 than in 2013, with over 340,000 people and organizations unable to access files that had been encrypted by extortionists. Usually the extortionists ask their targets to pay in Bitcoin on a website accessible via Tor.

To infect computers, would-be criminals will send malicious e-mail attachments that look like bills or invoices. If you are foolish enough to open the attachment, you’re snared. It’s possible we’re seeing a rise in crypto-ransomware attacks because phishing emails, where you’re tricked into opening a malware attachment or bad link, are a major way that people get hacked.

There’s a growing underground economy devoted to carrying out crypto-ransomware attacks, with groups like Cryptolocker and Cryptowall selling their services. Your main line of defense is backing up all your files, since you won’t need to pay to get them back if you can just restore them. There are also services popping up to thwart crypto-ransomware, like Decryptolocker, which used a version of Cryptolocker to figure out how to decrypt files that Cryptolocker holds hostage. A service called Cryptoprevent is designed to stop this type of ransomware from a variety of different attackers.
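Since restoring from backup is the main line of defense, here is a minimal sketch of an automated file backup using only Python’s standard library (the paths in the usage comment are hypothetical):

```python
import zipfile
from pathlib import Path

def backup_folder(source: str, archive: str) -> int:
    """Copy every file under `source` into a compressed zip archive.

    Returns the number of files backed up. A ransomware victim with a
    recent archive can simply restore it instead of paying.
    """
    count = 0
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(source).rglob("*"):
            if path.is_file():
                zf.write(path, path.relative_to(source))
                count += 1
    return count

# Usage (hypothetical paths):
# backup_folder("/home/me/photos", "/mnt/external/photos-backup.zip")
```

For this to help against ransomware, the archive must live somewhere the malware can’t reach, such as a drive that is disconnected after each backup.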

Ransomware is still a relatively rare and aggressive cybercrime, so the likelihood of someone crypto-ransoming your vacation photos is low. No need to panic. Much more common: Phishing attacks of all kinds. A security report released by Verizon today underlines how often people fall for them. With phishing attacks, prevention is even simpler than backing up your files: Just don’t click on sketchy shit!

Encryption not the way to tackle DStv: DOC

Government should make better use of regulatory tools and legislation to foster a more competitive environment in South Africa’s pay-television industry rather than requiring that conditional access technology be included in state-subsidised set-top boxes.

That’s the view of Solly Mokoetle, the head of the digital migration project at the department of communications (DOC).

“The issue of control access is that of pay-TV operators,” says Mokoetle.

Government’s role in the digital migration process, he says, is to ensure that it happens as fast as possible so that the “digital dividend” spectrum can be released to telecommunications operators for the roll-out of broadband.

South Africa’s digital migration project has ground to a halt as broadcasters MultiChoice and the SABC on one side and e.tv on the other battle each other over whether the set-top boxes government intends subsidising for 5m poorer households contain an access control system based on encryption.

E.tv and many black-owned prospective set-top manufacturers are in favour of encryption. The broadcaster says it’s needed to ensure that free-to-air players can get access to the latest content to compete more effectively with MultiChoice’s dominant DStv platform; MultiChoice argues it’s the wrong choice for South Africa and would amount to unfair competition as it would allow pay-TV players an easier entry into the market.

Earlier this month, government abandoned its commitment to access control, saying broadcasters could use encryption but that it would not be a standard feature of the subsidised boxes.

Mokoetle tells TechCentral that the main priorities for digital migration are ensuring that concerns with interference on South Africa’s border areas are dealt with; expediting the manufacture of set-top boxes; ensuring that the Post Office is able to deliver boxes timeously; making certain that installers are trained to install antennae and boxes; and making sure that those who have the capacity to manufacture set-top boxes are appointed.

Mokoetle says the policy agreed to by cabinet in December 2013 — under former communications minister Yunus Carrim — was not the final policy.

That policy was put out for comment for 30 days and the comments received were meant to be taken into consideration in drawing up a policy to be sent to cabinet for approval, says Mokoetle.

The amended policy was gazetted last Wednesday by new communications minister Faith Muthambi and is final, says Mokoetle.

He says government has erred by focusing on the issue of set-top boxes for so long. “We are going to miss the 17 June deadline.”

In terms of that deadline, South Africa agreed with the International Telecommunication Union (ITU) that it would terminate analogue TV broadcasts by that date. After 17 June, the ITU will no longer protect South Africa from radio frequency spectrum interference from neighbouring countries.

“We are trying to understand the implications of the ITU directives. Practically, we have established that the spectrum plan on analogue will no longer be protected — it will be wiped out. If you have any services running on that frequency you may interfere with your neighbours’ signal or vice versa,” Mokoetle says.

“South Africa cannot do anything about this but they [our neighbours] will have recourse with the ITU. However, the truth of the matter is that many of those countries themselves are not ready to move on digital migration. The problem is not from government, but will come from mobile operators wanting to launch LTE broadband services. We have established that one of the mobile operators in Lesotho will affect our transmitter network.”

Mokoetle was appointed as chief operating officer of the SABC in 2001 and has been involved in the digital migration process since 2004.

He was initially behind the SABC’s support of an encryption system (to collect licence fees), but this was later slapped down.

Mokoetle was appointed chief content operator of Telkom Media in 2007 and CEO of SABC in 2010. Since then, he has worked within the digital migration environment across Africa, having been involved in projects in Ghana, Uganda and Lesotho.

Encryption today: how safe is it really?

When checking your email over a secure connection, or making a purchase from an online retailer, have you ever wondered how your private information or credit card data is kept secure?

Our information is kept away from prying eyes thanks to cryptographic algorithms, which scramble the message so no-one else can read it but its intended recipient. But what are these algorithms, how did they come to be widely used, and how secure really are they?

Coded messages

The first cryptographic methods actually go back thousands of years to the time of ancient Greece. Indeed, the word “cryptography” is a combination of the Greek words for “secret” and “writing”.

For example, the Spartans famously used a system in which they wrapped a strip of papyrus around a staff of a certain girth and wrote their message down the length of the staff. When the papyrus was unravelled, the message appeared jumbled, and remained so until it reached its destination and was wrapped around another staff of the correct circumference.

Early encryption algorithms like these had to be applied manually by the sender and receiver. They typically consisted of simple letter rearrangement, such as transposition or substitution.

The most famous one is the “Caesar cipher”, which was used by the military commanders of the Roman emperor Julius Caesar. Each letter in the message was replaced in the encrypted text – the ciphertext – by another letter, which was shifted several places forward in the alphabet.
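The Caesar cipher is simple enough to sketch in a few lines of Python; shifting forward encrypts, and shifting back by the same amount decrypts:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter `shift` places forward in the alphabet (A-Z)."""
    result = []
    for ch in text.upper():
        if ch.isalpha():
            # Wrap around the 26-letter alphabet with modular arithmetic.
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)  # → "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)        # shifting back recovers the message
```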

But over time such simple methods have proved to be insecure, since eavesdroppers – called cryptanalysts – could exploit simple statistical features of the ciphertext to easily recover the plaintext and even the decryption key, allowing them to easily decipher any future messages using that system.
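That statistical weakness is easy to demonstrate. In English text the most frequent letter is almost always “E”, so a Caesar shift can be guessed by finding the ciphertext’s most common letter. A sketch, assuming the ciphertext is long enough for the statistics to hold:

```python
from collections import Counter

def guess_caesar_shift(ciphertext: str) -> int:
    """Guess the shift by assuming the most frequent letter encrypts 'E'."""
    letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord("E")) % 26
```

No key is needed: the frequency pattern of the underlying language leaks straight through the substitution.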

Modern computing technology has made it practical to use far more complex encryption algorithms that are harder to “break” by cryptanalysts. In parallel, cryptanalysts have adopted and developed this technology to improve their ability to break cryptosystems.

This is illustrated by the story of the Enigma cryptosystem used by the German military during the Second World War, as dramatised most recently in the movie The Imitation Game.

Enigma’s relatively complex encryption algorithm was implemented using electromechanical computing technology to make it practical for German military communications. An extension of the same technology was used by the “bombe” machines of the British cryptanalysts to make it practical to break the cipher.

Current cryptosystems

The cryptosystems in wide use today have their origins in the 1970s, as modern electronic computers started to come into use. The Data Encryption Standard (DES) was designed and standardised by the American government in the mid 1970s for industry and government use. It was intended for implementation on digital computers, and used a relatively long sequence of transposition and substitution operations on binary strings.

But DES suffered a major problem: it had a relatively short secret key length (56 bits). From the 1970s to the 1990s, the speed of computers increased by orders of magnitude, making “brute force” cryptanalysis (simply searching through all possible keys until the correct decryption key is found) increasingly practical as a threat to this system.
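The gap between a 56-bit and a 128-bit key is easy to quantify. A back-of-the-envelope sketch, assuming an attacker who can test one billion keys per second (the speed is an assumption for illustration):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 10**9  # assumed attacker speed

def years_to_search(key_bits: int) -> float:
    """Worst-case years to try every key of the given length."""
    return 2**key_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"56-bit DES key:  {years_to_search(56):,.1f} years")
print(f"128-bit AES key: {years_to_search(128):.2e} years")
```

Each extra bit doubles the search, so 72 extra bits multiply the work by 2^72: a 56-bit key falls in a couple of years on this hypothetical machine, while a 128-bit key takes longer than the age of the universe.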

Its successor, the Advanced Encryption Standard (AES), by contrast uses keys of at least 128 bits, and is currently the most popular cryptosystem used to protect internet communications.

Key problem

The AES also has limitations. Like all earlier cryptosystems, it is known as a symmetric-key cryptosystem, where the secret key is known to both the sender who encrypts the message (let’s call her Alice) and the receiver who decrypts it (let’s call him Bob).

The secret key, being secret, cannot simply be exchanged over a public communication channel like the internet. If the key were intercepted, all future encrypted messages would be compromised. And if you want to encrypt the key, that just creates another problem: how do you secure that encryption?

So, Alice and Bob must first use a private communication channel, such as a private in-person meeting, to exchange the secret key before they can use the cryptosystem to communicate privately. This is a significant practical hurdle for internet communications, where Alice and Bob often have no such private communication means.

To overcome this hurdle – known as the key distribution problem – an ingenious different type of cryptosystem, called an asymmetric-key, or public-key, cryptosystem was devised in the 1970s.

In a public-key cryptosystem, the receiver Bob generates two keys: one is a secret key that Bob keeps to himself for decryption; while the second is a public encryption key that Bob sends to Alice over a public channel. Alice can use the public encryption key to encrypt her messages to Bob. But only Bob can decrypt it with his private key. It thus provides a solution to the key distribution problem of symmetric-key cryptosystems.

In practical applications, due to the higher computational demands of public-key systems compared to symmetric-key systems, both types of cryptosystems are used. A public-key cryptosystem is used only to distribute a key for a symmetric-key system like AES, and then the symmetric-key system is used to encrypt all subsequent messages.

Consequently, the resulting privacy depends on the security of both the symmetric and public-key cryptosystems in use. The most commonly used public-key cryptosystems today were devised in the 1970s by researchers from Stanford and MIT. They are known as the RSA cryptosystem (from the initials of its designers, Ron Rivest, Adi Shamir and Len Adleman) and the Diffie-Hellman system, and make use of techniques from an area of mathematics known as number theory.
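The Diffie-Hellman idea can be demonstrated with deliberately tiny numbers (real deployments use primes thousands of bits long; every value below is toy-sized for illustration only):

```python
# Public parameters, agreed in the open.
p = 23   # a small prime modulus
g = 5    # a generator

alice_secret = 6    # Alice keeps this private
bob_secret = 15     # Bob keeps this private

alice_public = pow(g, alice_secret, p)  # sent to Bob over the public channel
bob_public = pow(g, bob_secret, p)      # sent to Alice over the public channel

# Each side combines the other's public value with its own secret.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared  # both arrive at the same key
```

An eavesdropper sees p, g and both public values, but recovering a secret exponent from them (the discrete logarithm problem) is believed to be infeasible at real key sizes; the shared result can then seed a symmetric cipher like AES.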

New bugs uncovered in encryption software

New bugs in the widely used encryption software known as OpenSSL were disclosed on Thursday, though experts say they do not pose a threat as serious as the “Heartbleed” vulnerability that surfaced in the same technology a year ago.

“Heartbleed” triggered panic throughout the computer industry when it was reported in April 2014. That bug forced dozens of computer, software and networking equipment makers to issue patches for hundreds of products, and their customers had to scour data centers to identify vulnerable equipment.

Cybersecurity watchers had feared the new round of bugs would be as serious as “Heartbleed,” according to experts who help companies identify vulnerabilities in their networks. The concerns surfaced after the OpenSSL Project, which distributes OpenSSL software, warned several days ago that it planned to release a batch of security patches.

“You need to take all vulnerabilities seriously, but I’m kind of disappointed. There’s been a week building up to this,” said Cris Thomas, a strategist with cybersecurity firm Tenable Network Security Inc.

The OpenSSL Project released updates for four versions of the software, covering 12 security fixes for vulnerabilities reported to it in recent months by several cybersecurity researchers. According to Ivan Ristic, director of application security at Qualys Inc, the threats include one that leaves affected systems vulnerable to so-called denial-of-service attacks that disrupt Web traffic, though none threaten the “crypto” technology used to encrypt data.

Ristic said he was not too concerned about the new bugs because most involve programming errors in a new version of OpenSSL that is not yet widely used.

“It doesn’t seem a big story,” Ristic said. “I think people feared it would be bad, which is where all the hype came from.”

Can software-based POS encryption improve PCI compliance?

In the wake of the recent Verizon report showing that 80 percent of companies fall out of PCI DSS compliance between audits, some vendors are urging the PCI Council to consider approving software-based point-to-point encryption (P2PE), in addition to the current hardware-based standard.

PCI-approved, hardware-based P2PE allows merchants to drastically shrink the systems subject to compliance, reducing both risks and costs, and making it easier to stay compliant.

Self-destructing hardware is a “security bonus,” but in general, hardware-based P2PE technology is not as useful for merchants, says Shift4 CEO Dave Oder, whose company is one of the largest software-based P2PE providers.

MORE ON CSO: What is wrong with this picture? The NEW clean desk test

“The vast majority of retailers who have P2PE in use today are using a software-based decryption method provided by Shift4 or one of our competitors,” he said.

According to Oder, software-based P2PE, combined with tokenization, is a secure alternative to hardware-based encryption, and should be allowed under the PCI DSS standard. “The trouble is, PCI is refusing to validate certain types of security solutions even though they are more secure and more useful to merchants than what is currently validated,” he said.

Hardware-based encryption creates a potential single point of failure and is not designed to handle the level of transaction volume and uptime required in the payments industry, he said.

“The PCI Council has not released a software-based P2PE standard that would allow for both decryption and key management outside of a hardware security module,” he said. “Much of the industry is waiting for that and the delay is harming merchants.”

According to Shift4 marketing manager Nathan Casper, merchants with no encryption at all have a self-assessment questionnaire with more than 280 requirements. Merchants with hardware-based encryption have one with just 19 questions. Merchants with software-based encryption get the 280-question form — but only answer those same 19 and put “not applicable” to the rest.

“The part that makes this frustrating to these large merchants is that they are almost always required to employ the assistance of a Qualified Security Assessor to oversee their assessment,” he said. That’s tens of thousands of dollars, or more, spent on someone checking the same “N/A” box 261 times.

Another vendor promoting a software-based encryption alternative is Irvine, Calif.-based Secure Channels, Inc., which offers both hardware and software-based solutions.

“There are software based solutions where the decryption key is hidden in the packet,” said Secure Channels CEO Richard Blech. “There are means contained in the software to have a secure key exchange that completely bypasses the need for a hardware security module. Merchants are being harmed without this solution.”

However, according to Sam Pfanstiel, director of solutions at Atlanta-based Bluefin Payment Systems LLC, there is an excellent reason to stick with the hardware-based requirement.

“Through software-based encryption, you’re performing encryption in memory, and that memory is highly susceptible to memory scraping,” he said. “That is a vector of attack that has been used in almost every cardholder data breach of the last 18 months.”

Hardware-based encryption, by comparison, puts the encryption mechanism — the plain text data — inside a hardware security module that self-destructs if tampered with.

“Bluefin stands firmly on the belief that only hardware-based encryption provides adequate controls to address the attack vectors prevalent in the industry today,” he said.

Bluefin used to be on the other side, he added.

“When the PCI standard was first released, we had a software-based solution in place, and had to look at what PCI was recommending,” he said. “We decided that the new standard represented better cardholder protection.”

Two and a half years and several million dollars of investment later, Bluefin has replaced its software-based encryption with hardware.

“Ease of deployment is only a concern for encryption providers who fail to comply with the new standards and continue to use older technology to perform their encryption and decryption,” said Pfanstiel.

Today, there are over 160 validated devices that support hardware-based encryption, he said. “And the list grows every day.”

Computer-stored encryption keys are not safe from side-channel attacks

Figure A: Tel Aviv University researchers built this self-contained PITA receiver.

Not that long ago, grabbing information from air-gapped computers required sophisticated equipment. In my TechRepublic column Air-gapped computers are no longer secure, researchers at Georgia Institute of Technology explain how simple it is to capture keystrokes from a computer just using spurious electromagnetic side-channel emissions emanating from the computer under attack.

Daniel Genkin, Lev Pachmanov, Itamar Pipman and Eran Tromer, researchers at Tel Aviv University, agree the process is simple. However, the scientists have upped the ante, figuring out how to exfiltrate complex encryption data using side-channel technology.

The process

In the paper Stealing Keys from PCs using a Radio: Cheap Electromagnetic Attacks on Windowed Exponentiation (PDF), the researchers explain how they determine decryption keys for mathematically-secure cryptographic schemes by capturing information about secret values inside the computation taking place in the computer.

“We present new side-channel attacks on RSA and ElGamal implementations that use the popular sliding-window or fixed-window (m-ary) modular exponentiation algorithms,” the team writes. “The attacks can extract decryption keys using a low measurement bandwidth (a frequency band of less than 100 kHz around a carrier under 2 MHz) even when attacking multi-GHz CPUs.”

If that doesn’t mean much, this might help: the researchers can extract keys from GnuPG in just a few seconds by measuring side-channel emissions from computers. “The measurement equipment is cheap, compact, and uses readily-available components,” add the researchers. Using that philosophy, the university team developed the following attacks.

Software Defined Radio (SDR) attack: This consists of a shielded loop antenna to capture the side-channel signal, which is then recorded by an SDR program installed on a notebook.

Portable Instrument for Trace Acquisition (PITA) attack: The researchers, using available electronics and food items (who says academics don’t have a sense of humor?), built the self-contained receiver shown in Figure A. The PITA receiver has two modes: online and autonomous.

Online: PITA connects to a nearby observation station via Wi-Fi, providing real-time streaming of the digitized signal.

Autonomous: Similar to online mode, PITA first measures the digitized signal, then records it on an internal microSD card for later retrieval by physical access or via Wi-Fi.

Consumer radio attack: To make an even cheaper version, the team leveraged knowing that side-channel signals modulate at a carrier frequency near 1.7 MHz, which is within the AM radio frequency band. “We used a plain consumer-grade radio receiver to acquire the desired signal, replacing the magnetic probe and SDR receiver,” the authors explain. “We then recorded the signal by connecting it to the microphone input of an HTC EVO 4G smartphone.”

Cryptanalytic approach

This is where the magic occurs. I must confess that paraphrasing what the researchers accomplished would be a disservice; I felt it best to include their cryptanalysis description verbatim:

“Our attack utilizes the fact that, in the sliding-window or fixed window exponentiation routine, the values inside the table of ciphertext powers can be partially predicted. By crafting a suitable ciphertext, the attacker can cause the value at a specific table entry to have a specific structure.

“This structure, coupled with a subtle control flow difference deep inside GnuPG’s basic multiplication routine, will cause a noticeable difference in the leakage whenever a multiplication by this structured value has occurred. This allows the attacker to learn all the locations inside the secret exponent where the specific table entry is selected by the bit pattern in the sliding window. Repeating this process across all table indices reveals the key.”
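The attack hinges on key-dependent control flow inside modular exponentiation. GnuPG’s actual routine is a windowed variant, but the effect can be illustrated with the basic square-and-multiply algorithm: a multiplication happens only where the exponent has a 1 bit, so an observer who can distinguish squarings from multiplications reads off the secret exponent. A sketch that records what such an observer would see:

```python
def square_and_multiply(base: int, exponent: int, modulus: int):
    """Modular exponentiation that also records its operation sequence.

    The returned trace shows what a side channel could observe:
    'M' appears exactly where the exponent has a 1 bit.
    """
    result = 1
    trace = []
    for bit in bin(exponent)[2:]:            # scan exponent bits, high to low
        result = (result * result) % modulus
        trace.append("S")                    # a squaring happens every step
        if bit == "1":
            result = (result * base) % modulus
            trace.append("M")                # multiply only on a 1 bit: leakage
    return result, "".join(trace)

value, trace = square_and_multiply(7, 0b1011, 1000)
# value == pow(7, 11, 1000); the trace "SMSSMSM" exposes the bits 1011,
# since each "S" followed by "M" marks a 1 and a lone "S" marks a 0.
```

Hardened implementations avoid this by making the operation sequence independent of the key, which is exactly the countermeasure the researchers discuss below.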

Figure B is a spectrogram displaying measured power as a function of time and frequency for a recording of GnuPG decrypting the same ciphertext using different randomly generated RSA keys. The research team’s explanation:

“It is easy to see where each decryption starts and ends (yellow arrow). Notice the change in the middle of each decryption operation, spanning several frequency bands. This is because, internally, each GnuPG RSA decryption first exponentiates modulo the secret prime p and then modulo the secret prime q, and we can see the difference between these stages.

“Each of these pairs looks different because each decryption uses a different key. So in this example, by observing electromagnetic emanations during decryption operations, using the setup from this figure, we can distinguish between different secret keys.”

Figure B: A spectrogram

Any way to prevent the leakage?

One solution, albeit unwieldy, is operating the computer in a Faraday cage, which prevents any spurious emissions from escaping. “The cryptographic software can be changed, and algorithmic techniques used to render the emanations less useful to the attacker,” mentions the paper. “These techniques ensure the behavior of the algorithm is independent of the inputs it receives.”
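The “behavior independent of the inputs” idea appears even in everyday code. A naive equality check returns as soon as two byte strings differ, so its running time leaks how much of a secret an attacker has guessed correctly; Python’s standard library ships a comparison whose duration does not depend on the data. This sketch contrasts the two (it illustrates the general principle, not the specific countermeasures in the paper):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Leaky: returns at the first mismatch, so timing reveals match length."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False   # early exit = data-dependent timing
    return True

secret = b"correct horse battery staple"
guess = b"correct horse battery stapl!"

print(naive_equal(secret, guess))          # False, but timing leaks
# hmac.compare_digest examines every byte no matter where mismatches occur.
print(hmac.compare_digest(secret, guess))  # False, in constant time
```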

Interestingly, the research paper tackles a question about side-channel attacks that TechRepublic readers commented on in my earlier article, “It’s a hardware problem, so why not fix the equipment?”

Basically, the researchers note that the emissions are at such a low level that prevention is impractical because:

Any leakage remnants can often be amplified by suitable manipulation as we do in our chosen-ciphertext attack;

Leakage is often an inevitable side effect of essential performance-enhancing mechanisms.

Something else of interest: the National Institute of Standards and Technology (NIST) considered resistance to side-channel attacks an important evaluation criterion in its SHA-3 competition.


Encryption is gone, communications minister Muthambi restates

Government-provided set-top boxes for digital terrestrial television will not contain conditional access based on encryption, and prospective pay-television operators wanting to use such a system will have to deploy their own boxes to subscribers.

That’s according to a statement, issued at the weekend by communications minister Faith Muthambi, in which she makes it clear that conditional access will not feature in the final amended policy on broadcasting digital migration.

The move appears to be a victory for MultiChoice and the SABC, which have opposed encryption in the free-to-air boxes that consumers will need to receive digital terrestrial broadcasts.

The set-top boxes, which will be provided free of charge to as many as 5m households (previously the plan was to provide a subsidy), will still contain a control system. But it won’t employ conditional access and so can’t be used by pay-TV operators. Instead, the minister says, it’s simply a security mechanism that, among other things, will prevent set-top boxes from being used outside South Africa’s borders.

It appears, although it’s not completely clear yet, that the decision means that there will be no restriction on the use of internationally manufactured set-top boxes in South Africa and that modern TVs with integrated digital receivers — those based on the DVB-T2 digital broadcasting standard — will work in South Africa.

In her statement, Muthambi says the control system agreed to by cabinet “does not mean a conditional access system … [or] an encryption of the signal to control access to content by viewers”.

Rather, it is a “security feature to encourage the local electronics manufacturing sector”.

“The set-top box must have minimal switching (on/off) security features to protect the subsidised set-top boxes from theft or leaving South Africa’s borders,” she says.

It must have capabilities to provide government information and services, she adds.

“The new policy position does not in any way prohibit any broadcaster who will want to include conditional access in the provision of broadcasting services to its customers. It is the firm view of the department that broadcasters who will want to do that should make their own investment in the acquisition of a conditional access system.”

MultiChoice, which owns DStv, has long argued that providing a conditional access system in government-subsidised set-top boxes would amount to unfair competition as it would allow prospective pay-TV rivals to launch services without the heavy upfront investment associated with building such a platform.

It has argued, too, that encryption in free-to-air set-top boxes is complex and ultimately runs counter to consumers’ interests.

But rival e.tv has argued, among other things, that encryption is vital to ensure free-to-air broadcasters can secure the latest international content to compete more effectively with DStv.

In her statement, Muthambi also confirms government’s new position is that set-top boxes will be provided free of charge to 5m poor television households. Previously, a partial subsidy had applied.

Distribution of the free boxes will prioritise households in border regions to minimise signal interference from neighbouring countries. After 17 June, the International Telecommunication Union, an agency of the United Nations, will no longer protect countries that have not completed their migration projects from this interference.

Building backdoors into encryption isn’t only bad for China, Mr President

Want to know why forcing tech companies to build backdoors into encryption is a terrible idea? Look no further than President Obama’s stark criticism on Tuesday of China’s plan to do exactly that. If only he would tell the FBI and NSA the same thing.

In a stunningly short-sighted move, the FBI – and more recently the NSA – have been pushing for a new US law that would force tech companies like Apple and Google to hand over the encryption keys or build backdoors into their products and tools so the government would always have access to our communications. It was only a matter of time before other governments jumped on the bandwagon, and China wasted no time in demanding the same from tech companies a few weeks ago.

As President Obama himself described to Reuters, China has proposed an expansive new “anti-terrorism” bill that “would essentially force all foreign companies, including US companies, to turn over to the Chinese government mechanisms where they can snoop and keep track of all the users of those services.”

Obama continued: “Those kinds of restrictive practices I think would ironically hurt the Chinese economy over the long term because I don’t think there is any US or European firm, any international firm, that could credibly get away with that wholesale turning over of data, personal data, over to a government.”

Bravo! Of course these are the exact arguments for why it would be a disaster for the US government to force tech companies to do the same. (Somehow Obama left that part out.)

As Yahoo’s top security executive Alex Stamos told NSA director Mike Rogers in a public confrontation last week, building backdoors into encryption is like “drilling a hole into a windshield.” Even if it’s technically possible to produce the flaw – and we, for some reason, trust the US government never to abuse it – other countries will inevitably demand access for themselves. Companies will no longer be in a position to say no, and even if they did, intelligence services would find the backdoor unilaterally – or just steal the keys outright.

For an example of how this works, look no further than last week’s Snowden revelation that the UK’s intelligence service and the NSA stole the encryption keys for millions of Sim cards used by many of the world’s most popular cell phone providers. It’s happened many times before, too. As security expert Bruce Schneier has documented with numerous examples, “Back-door access built for the good guys is routinely used by the bad guys.”

Stamos repeatedly (and commendably) pushed the NSA director for an answer on what happens when China or Russia also demand backdoors from tech companies, but Rogers didn’t have an answer prepared at all. He just kept repeating “I think we can work through this”. As Stamos insinuated, maybe Rogers should ask his own staff why we actually can’t work through this, because virtually every technologist agrees backdoors just cannot be secure in practice.

(If you want to further understand the details behind the encryption vs. backdoor debate and how what the NSA director is asking for is quite literally impossible, read this excellent piece by surveillance expert Julian Sanchez.)

It’s downright bizarre that the US government has been warning of the grave cybersecurity risks the country faces while, at the very same time, arguing that we should pass a law that would weaken cybersecurity and put every single citizen at more risk of having their private information stolen by criminals, foreign governments, and our own.

Forcing backdoors would also be as disastrous for the US economy as it would be for China’s. US tech companies – which have already suffered billions of dollars of losses overseas because of consumer distrust over their relationships with the NSA – would lose all credibility with users around the world if the FBI and NSA succeed with their plan.

The White House is supposedly coming out with an official policy on encryption sometime this month, according to the New York Times – but the President can save himself a lot of time and just apply his comments about China to the US government. If he knows backdoors in encryption are bad for cybersecurity, privacy, and the economy, why is there even a debate?

QR codes with advanced imaging and photon encryption protect computer chips

QR, or Quick Response, codes — those familiar black-and-white boxes that people scan with a smartphone to learn more about something — have been used to convey information about everything from cereals to cars and new homes.

But, University of Connecticut (UConn) researchers think the codes have a greater potential: protecting national security.

Using advanced 3-D optical imaging and extremely low light photon counting encryption, Board of Trustees Distinguished Professor Bahram Javidi and his research team have taken the ordinary QR code and transformed it into a high-end cybersecurity application that can be used to protect the integrity of computer microchips. The findings were published in IEEE Photonics Journal.

“An optical code or QR code can be manufactured in such a way that it is very difficult to duplicate,” said Javidi, whose team is part of UConn’s Center for Hardware Assurance, Security, and Engineering (CHASE) in the School of Engineering. “But if you have the right keys, not only can you authenticate the chip, but you can also learn detailed information about the chip and what its specifications are.

“And, that is important to the person using it.”

Corrupted and recycled integrated circuits or microchips pose a significant threat to the international electronics supply chain. Bogus or used computer chips may not matter much when they cause poor cell phone reception or an occasional laptop computer crash in personal use. But the problem becomes exponentially more serious when counterfeit or hacked chips turn up in the U.S. military.

The problem has been exacerbated in recent years by the fact that much of the national production of microcircuits has moved offshore, where prices are lower but ensuring quality control is more difficult.

In 2012, a Senate Armed Services Committee report found that more than 100 cases of suspected counterfeit electronics parts from China had made their way into the Department of Defense supply chain. In one notable example, officials said counterfeit circuits were used in a high-altitude missile meant to destroy incoming missiles. Fixing the problem cost the government $2.675 million, the report said.

Unlike commercial QR codes, Javidi’s little black and white boxes can be scaled as small as microns or a few millimeters and would replace the electronic part number that is currently stamped on most microchips.

Javidi says he can compress vital information about a chip — its functionality, capacity, and part number — directly into the QR code so it can be obtained by the reader without accessing the Internet. This is important in cybersecurity circles, because linking to the Internet greatly increases vulnerability to hacking or corruption.

To further protect the information in the QR code, Javidi applies an optical imaging “mask” that scrambles the QR code design into a random mass of black-and-white pixels resembling the snowy image one might see on a broken TV. He then adds yet another layer of security through a random phase photon-based encryption that turns the snowy image into a darkened nighttime sky with just a few random stars or dots of pixelated light.

The end result is a self-contained, highly secure, information-laden microscopic design that is nearly impossible to duplicate. Only individuals who have the special corresponding codes could decrypt the QR image.
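The scrambling Javidi describes happens in optical hardware, but the core idea — that a key-derived mask turns a readable code into apparent noise, and the same key recovers it — can be sketched digitally. The toy Python snippet below is a loose analogy, not the team’s actual photon-based method: it XORs a QR code’s pixels with a pseudo-random mask derived from a secret key, so without the key the output looks random, and with it the original pattern is restored.

```python
import random

def make_mask(size, key):
    """Derive a deterministic pseudo-random bit mask from a secret key."""
    rng = random.Random(key)
    return [rng.randint(0, 1) for _ in range(size)]

def scramble(bits, key):
    """XOR the code's pixels with the key-derived mask.

    XOR is its own inverse, so calling this twice with the same
    key restores the original bits.
    """
    mask = make_mask(len(bits), key)
    return [b ^ m for b, m in zip(bits, mask)]

# A tiny stand-in for a QR code's black (1) and white (0) pixels.
qr_bits = [1, 0, 1, 1, 0, 0, 1, 0]

noise = scramble(qr_bits, "secret-key")      # looks like random static
restored = scramble(noise, "secret-key")     # same key undoes the mask
assert restored == qr_bits
```

Only a holder of the key can regenerate the mask, which is the digital analogue of needing “the right keys” to authenticate the chip.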

And that is important to all of us.