Iran blocks encrypted messaging apps amid nationwide protests

For the past six days, citizens have taken to the streets across Iran, protesting government oppression and the rising cost of goods. Video broadcasts from the country have shown increasingly intense clashes between protesters and riot police, with as many as 21 people estimated to have died since the protests began. But a complex fight is also raging online, as protesters look for secure channels where they can organize free of government interference.

Even before the protests, Iran’s government blocked large portions of the internet, including YouTube, Facebook, and any VPN services that might be used to circumvent the block. The government enforced the block through a combination of centralized censorship by the country’s Supreme Council of Cyberspace and local ISP interference to carry out more specific orders. The end result is a sometimes haphazard system that can still have devastating effects on any service the regime sees as a threat.

For years, Iran’s most popular encrypted messenger has been Telegram. While some cryptographers have criticized Telegram’s homebrew cryptography, local Iranian users have cared more about the app’s independence from the United States. (The app’s core development team is based in Russia, making it less vulnerable to US government requests.) The app’s massive group chats proved popular, and the government was content to target individual users, occasionally hacking accounts by intercepting account reset messages sent to the user’s phone number.

As protests intensified, Telegram has become both a tool for organizers and a target for the regime. On Saturday, Telegram suspended the popular Amad News channel for violating the service’s policy against calls to violence. One conversation was publicly called out by Iran’s Minister of Technology for recommending protesters attack police with Molotov cocktails. According to Telegram founder Pavel Durov, the government also requested suspensions for a number of other channels that had not violated the policy on violence. When Telegram refused, the government placed a nationwide block on the app.

The government also banned Instagram, although government representatives insist both bans are temporary and will be lifted once protests subside.

The most popular alternative among US activists is Signal, which offers similar group chat features with more robust encryption — but Signal is blocked in Iran for an entirely different reason. The app relies on Google App Engine to disguise its traffic through a process called “domain fronting.” The result makes it hard to detect Signal traffic amid the mess of Google requests — but it also means that wherever Google is unavailable, Signal is unavailable too.
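
The mechanics of domain fronting can be sketched in a few lines: the outer TLS connection (and its SNI field) names an innocuous host, while the real destination travels in the encrypted inner HTTP request. This is an illustrative sketch only; the hostnames are hypothetical placeholders, not Signal’s real endpoints, and the request is built but never sent:

```python
# Domain fronting sketch: a censor watching the wire sees only the outer
# (fronted) hostname; the true destination rides inside the encrypted layer.
# Hostnames below are hypothetical placeholders.

FRONT_DOMAIN = "www.google.com"             # visible in DNS and the TLS SNI field
HIDDEN_SERVICE = "example-app.appspot.com"  # visible only inside the TLS tunnel

def build_fronted_request(path: str) -> str:
    """Build the inner HTTP request that rides inside TLS to FRONT_DOMAIN."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN_SERVICE}\r\n"  # the fronting CDN routes on this header
        "Connection: close\r\n"
        "\r\n"
    )

request = build_fronted_request("/v1/messages")
censor_view = FRONT_DOMAIN  # all the censor can block is the front domain itself
```

Blocking the hidden service therefore requires blocking the front domain, which is exactly why Signal becomes collateral damage wherever Google itself is unreachable.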

At the same time, Google appears to have blocked Iranian access to App Engine to comply with US sanctions. After years of diplomatic pressure, US companies face significant regulations on any technology exported to Iran, and it’s often unclear how those rules extend to cloud services like App Engine. Still, researchers like Collin Anderson say Google could find a way to whitelist Signal in Iran if the company wanted to. (Google declined to comment when reached by The Verge.)

Still, the blocks leave organizers in a difficult place, with no clear way to coordinate activity across groups that often sprawl to hundreds of thousands of people. WhatsApp is still available in the country, although bans on the service have been proposed in the past.

Bitcoin Exchange Has Been Forced to Close After Second Cyber-Attack

A South Korean Bitcoin exchange has been forced to close after suffering another major cyber-attack.

Youbit said it was “very sorry” and has filed for bankruptcy after it suffered the cyber-attack, less than eight months after the first.

In a statement in Korean on its homepage, the firm said it had lost 17% of its assets in the raid, with all deposits and withdrawals now halted.

However, customers will get back the majority of their investments — with the firm promising to use cyber-insurance cover and money gleaned from selling its operating rights to pay them back.

It explained in the translated statement:

“Due to bankruptcy, the settlement of cash and coins will be carried out in accordance with all bankruptcy procedures. However, in order to minimize the damage to our members, we will arrange for the withdrawal of approximately 75% of the balance at 4:00 a.m. on Dec 19. The rest of the unpaid portion will be paid after the final settlement is completed.”

The incident highlights the increasing attention being paid to crypto-currency exchanges by cyber-criminals keen to make a fast buck.

In April, Youbit lost 4,000 Bitcoins ($73m) to hackers, with South Korea’s Internet and Security Agency (Kisa) blaming the rogue nation over the border for the raid.

North Korean hackers are also thought to have been targeting crypto-currency insiders in London in a bid to steal credentials.

The hermit nation sees crypto-currency as one way to keep funds flowing into the country in the face of tightening sanctions put in place as a result of its continued nuclear testing.

Leigh-Anne Galloway, cyber-resilience lead at Positive Technologies, argues that Bitcoin exchanges need to get the basics right when it comes to cybersecurity.

“Firstly, server infrastructure and the applications that host cryptocurrencies need to be seen as a security risk — as this is a vector for attack we have seen time and time again. No matter how secure a currency is, if the web application, mobile application, server or network the currency operates on is vulnerable, the contents are at risk,” she explained.

“Secondly, there needs to be a greater focus on preventing social engineering attacks — protecting against website clones and educating users to avoid malicious websites and apps as quick as possible.”

It is difficult for the FBI to crack most smartphone encryption

The FBI is struggling to decode private messages on phones and other mobile devices that could contain key criminal evidence, and the agency failed to access data more than half of the times it tried during the last fiscal year, FBI Director Christopher Wray told House lawmakers.

Wray will testify at the House Judiciary Committee Thursday morning on the wide range of issues the FBI faces. One of the issues hurting the FBI, he said, is the ability of criminals to “go dark,” or hide evidence electronically from authorities.

“The rapid pace of advances in mobile and other communication technologies continues to present a significant challenge to conducting lawful court-ordered access to digital information or evidence,” he said in his prepared remarks to the committee. “Unfortunately, there is a real and growing gap between law enforcement’s legal authority to access digital information and its technical ability to do so.”

Wray said criminals and terrorists are increasingly using these technologies. He added that the Islamic State is reaching potential recruits through encrypted messages, which are difficult for the FBI to crack.

“If we cannot access this evidence, it will have ongoing, significant effects on our ability to identify, stop, and prosecute these offenders,” he said.

He noted that in the last fiscal year, the FBI was unable to access data on about 7,800 mobile devices, even though it had the legal authority to try. He said that was a little more than half of the mobile devices the FBI tried to access in fiscal year 2017.

Wray said the FBI tries to develop workarounds to get at the data, but doesn’t always succeed.

Wray also made it clear that the FBI is not asking for more legal authority to access mobile devices, but said, without being specific, that new ways must be found to let the FBI access this data.

“When changes in technology hinder law enforcement’s ability to exercise investigative tools and follow critical leads, those changes also hinder efforts to identify and stop criminals or terrorists,” he said.

He added that the FBI is “actively engaged” with companies to discuss the problem that “going dark” has on law enforcement, and the agency is working with academics and technologists to find “solutions to this problem.”

Wray is likely to be questioned on a wide range of topics at Thursday’s hearing, including new complaints from Republicans that Wray and other Justice Department officials have ignored requests for information about their actions in the Russia election meddling probe.

Republicans this week started writing a contempt resolution against Wray and others after the Justice Department failed to answer questions from lawmakers about why a top FBI agent was removed from the Russia probe. It was later discovered that the agent sympathized with Hillary Clinton and opposed then-presidential candidate Donald Trump.

Texas Church Shooting: More Calls for Encryption Backdoors

US Deputy Attorney General, Rod Rosenstein, has decided to use the recent mass shooting at a Texas church to reiterate calls for encryption backdoors to help law enforcers.

The shooting took place at the First Baptist Church in Sutherland Springs, killing at least 26 people.

Deceased suspect Devin Kelley’s mobile phone is now in the hands of investigators, but they can’t access it — a similar situation to the one following the mass shooting in San Bernardino, which resulted in a courtroom standoff between Apple and the FBI.

It’s now widely understood that there’s no way for an Apple, Facebook or other tech provider to engineer backdoors in encrypted systems that would allow only police to access content in cases such as these, without putting the security of millions of law-abiding customers at risk.

However, that hasn’t prevented Rosenstein becoming the latest senior US government official to call on technology companies to implement backdoors.

“As a matter of fact, no reasonable person questions our right to access the phone. But the company that built it claims that it purposely designed the operating system so that the company cannot open the phone even with an order from a federal judge,” he told a meeting of local business leaders in Maryland.

“Maybe we eventually will find a way to access the data. But it costs a great deal of time and money. In some cases, it surely costs lives. That is a very high price to pay.”

For its part, Apple has maintained that it works closely with law enforcement every day, even providing training so that police better understand the devices and know how to quickly request information.

However, it is standing firm on the matter of backdoors, aware that breaking its own encrypted systems for US police would likely lead to a stream of requests from other regions including China.

It’s also been suggested that cyber-criminals or nation state actors could eventually get their hands on any backdoors, which would be catastrophic for Apple and its users.

Top10VPN.com head of research, Simon Migliano, called for cool heads on the issue.

“The US Deputy Attorney General bemoans ‘warrant-proof encryption’ but fails to understand that there is no other type of encryption. As all privacy and security experts agree, to undermine encryption with ‘backdoors’ is to open a Pandora’s Box that puts at risk the entire online – and therefore real-world – economy.

“End-to-end encryption secures our banking, online shopping and sensitive business activities. Any kind of ‘backdoor’ would fatally undermine security in these areas. As we learned to our cost with the leak of CIA tools earlier this year, once an exploit exists, it’s only a matter of time until it leaks and cybercriminals have yet another tool at their disposal.”

Three Defenses to Solve the Problem of Storing Passwords

One of the biggest concerns around managing the passwords of an organization’s employees lies in how to store those passwords on a computer.

Keeping every user’s password in a plain text file, for example, is too risky. Even if there are no bugs to recklessly leak the passwords to the console, there’s little to stop a disgruntled systems administrator taking a peek at the file for pleasure or profit. Another line of defense is needed.

Let’s hash it out

Back in the 1970s, Unix systems began to ‘hash’ passwords instead of keeping them in plain text. A hash function is used to calculate a value (like a number) for each password or phrase, in such a way that, while the calculation itself may be easy, carrying it out ‘in reverse’ – to find the original password – is hard.

By way of illustration, suppose we take an English word, and assign each letter a value: i.e. A=1, B=2, C=3 and so on. Each adjacent pair of letters in the word is then multiplied together, and the products added up. The “hash” of the word is this total. So, using this method, the word BEAD has a hash value of (B×E) + (E×A) + (A×D) = (2×5) + (5×1) + (1×4) = 19. FISH scores 377, LOWLY scores 1101, and so on.

Using this system, the password file would store a number for each user, rather than the password itself. Suppose, for example, the password file entry for me has the number 2017. When I log in, I type in my password, the computer carries out the calculation above and, if the result is 2017, it lets me in. If, however, the calculation results in another value, access is denied.

As all that’s stored in the password file is the value 2017, and not my actual password, it means that if a hacker steals the entire contents of the file, there is still a puzzle to solve before they can log in as me.
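
The toy scheme described above is easy to write out; a minimal sketch in Python, reproducing the worked examples from the text:

```python
def toy_hash(word: str) -> int:
    """Toy hash from the text: assign A=1 .. Z=26, multiply each
    adjacent pair of letters, and sum the products."""
    values = [ord(c) - ord("A") + 1 for c in word.upper()]
    return sum(a * b for a, b in zip(values, values[1:]))

print(toy_hash("BEAD"))   # (2*5) + (5*1) + (1*4) = 19
print(toy_hash("FISH"))   # 377
print(toy_hash("LOWLY"))  # 1101
```

A real system would use a cryptographic hash such as SHA-256 rather than this deliberately breakable illustration.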

Verbal attack

Although hashed passwords may be more secure than plaintext, a problem still remains. In a dictionary attack, the attacker obtains a list of all English words and calculates their hash values, one by one; if my word is in there, it will be found eventually. While this may sound like a painful amount of work, the point is that it won’t just crack my password – it will crack every password.

In such an attack, an index is created and sorted by hash value, with individual words added to the index as their hash values are calculated: BAP goes on page 18, for example, BUN on page 336, and CAT on page 23. ‘Reversing’ the hash function is then just a matter of looking up the word in the index – simply turn to page 2017 and you’ll find my password.
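
That index is straightforward to build in code. A sketch using the pair-product toy hash from the text, where the short word list stands in for a full dictionary:

```python
def toy_hash(word: str) -> int:
    """Pair-product toy hash from the text (A=1 .. Z=26)."""
    values = [ord(c) - ord("A") + 1 for c in word.upper()]
    return sum(a * b for a, b in zip(values, values[1:]))

# Precompute the index once; a real attack would feed in a full dictionary.
wordlist = ["BAP", "BUN", "CAT", "FISH", "LOWLY"]
index = {}
for word in wordlist:
    index.setdefault(toy_hash(word), []).append(word)

# 'Reversing' the hash is now a lookup, not a computation:
stolen_hash = 377
candidates = index.get(stolen_hash, [])  # ['FISH']
```

The expensive hashing work is done once up front and then amortized across every stolen password file the attacker ever sees; that asymmetry is what makes the attack worthwhile.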

During World War II, the cryptanalysts at Bletchley Park did literally that: they worked out every possible way in which the common German word ‘eins’ could be enciphered using the Enigma machine, and recorded the Enigma settings as they went. The results were then sorted alphabetically into the so-called ‘eins catalogue’ meaning that, if the codebreakers could guess which encrypted letters represented the plaintext ‘eins’, they were then able to simply rummage through a battered green filing cabinet and pull out the key.

Salt in the wound

The next layer of defense against a dictionary attack is to use what’s called salt. In a salted hash scheme, a random variation to the calculation is applied differently for each user’s password. One user could have A=17, B=5, C=13, and so on, for example, and another could have A=4, B=22, C=17. The password file would then store the salt (the A, B, C values) and the hash result. The computer could still carry out a quick calculation to check the password, but the variation means that the same password would have a different hash value for a different user.

It would therefore be impossible to compile a single dictionary that could successfully reverse the hash for everyone.
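
Extending the toy scheme from the text, a salt can be modeled as a per-user letter-value table; the same password then stores a different hash for each user. The tables below are arbitrary examples:

```python
def salted_toy_hash(word: str, salt: dict) -> int:
    """Pair-product toy hash, but letter values come from a per-user
    salt table instead of the fixed A=1 .. Z=26 assignment."""
    values = [salt[c] for c in word.upper()]
    return sum(a * b for a, b in zip(values, values[1:]))

# Two users, two arbitrary salt tables (only the letters used are listed):
salt_alice = {"B": 17, "E": 5, "A": 13, "D": 2}
salt_bob = {"B": 4, "E": 22, "A": 17, "D": 9}

h_alice = salted_toy_hash("BEAD", salt_alice)  # (17*5) + (5*13) + (13*2) = 176
h_bob = salted_toy_hash("BEAD", salt_bob)      # (4*22) + (22*17) + (17*9) = 615
# Same password, different stored hashes: one precomputed dictionary
# can no longer crack every account at once.
```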

Finally, the best modern systems use a so-called iterated hash. The idea of this is to make the hash function itself harder to calculate by re-hashing the data thousands of times. This does slow down the computer checking the passwords, but anyone trying to search for a password will also be slowed by the same factor. The end result is essentially a computing power arms race between system administrators and hackers although, if you’re Amazon or Microsoft, it’s a fight you’re well placed to win.
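
In practice, salting and iteration are combined in standard constructions such as PBKDF2, which is available in Python’s standard library. A minimal sketch follows; the iteration count is illustrative, and current guidance favors higher counts or memory-hard schemes such as scrypt:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware and threat model

def hash_password(password: str, salt=None):
    """Salted, iterated password hash using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt for every user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; the salt is not a secret

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

# Same password, two users: different salts give different stored digests.
salt1, digest1 = hash_password("correct horse")
salt2, digest2 = hash_password("correct horse")
```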

Protecting user passwords is critical to the security of an organization’s confidential files and information. It’s vital therefore that steps are taken to protect passwords, salting and hashing them to such a degree that even the most determined criminal will find it impossible to recover them.

Quantum Computing will not be able to crack Encryption Keys until the 2030s

In September, Satya Nadella announced that Microsoft is working on a quantum computer (QC) architecture. Since then, Intel also has announced it is working on a QC architecture. Microsoft and Intel join Alibaba, Google, IBM, Tencent and a host of academic and national research labs (including China, the European Commission, Russia and the US) in a quest to build working QC hardware and software that can solve real-world problems.

What is quantum computing and why will it make a difference?

Quantum Computing is a practical application of quantum physics using individual subatomic particles at sub-Kelvin temperatures as compute elements. It presents many research and development challenges, but the potential payoff is orders-of-magnitude faster compute acceleration for specific types of problems.

QC is like computing with a graphics processing unit (GPU) accelerator, in that GPUs and QC systems must be connected to a traditional processor that can run an operating system and schedule programs to run on the accelerator.

QC has the potential to quickly solve problems that are impossible to calculate in useful timeframes (or even human lifetimes) today.

One of the marquee potential applications for QC is breaking cryptographic keys—in other words, compromising security encryption that protects sensitive data. While a lot has been written about that, it is unlikely QC will be capable of cracking encryption keys until the 2030s. Here’s why it will take so long.

Challenge 1: Programming QC

QC architecture is based on “qubits” instead of binary computer bits. I am not a quantum physicist, so I’m not going to tell you how or why a qubit works. The analogy I use to describe how a QC program works is that multiple qubits interact like the waves generated by throwing a handful of small floating balls into a pool of water.

Assume that the distances between balls and the timing of when each ball hits the water are purposeful: the relative position of each ball and the order in which they hit the water is the program. The intersecting wave fronts between the balls then change the up/down position of each of the balls in interesting patterns. At some point the position of each ball is measured, and that collection of measurements is the result of a QC program.

My analogy is easy to visualize but far too simple. It doesn’t explain how to write a QC program, nor does it tell you how to interpret the results.

However, that lack of connection to real-world programming talent and domain experience is actually just like real QC architectures! I’m not joking. Look at IBM’s Quantum Experience Composer, as an example. It looks like a music staff. But I’m not a musician, nor, in this case, a quantum physicist who understands IBM’s QC system. For a mainstream software professional, it’s difficult to understand how to use IBM’s composer and how it is useful in solving a real-world problem. Programmers can place notes on the staff, but those notes won’t make any sense. Even after reading the detailed instructions, programmers will not be able to translate a problem in their real-world domain into a program in the QC domain.

The challenge in finding a quantum physicist who understands how to program a specific QC architecture and who understands the problem you want to solve is much worse than finding a Masters- or PhD-level data scientist to analyze all that big data you’ve been hoarding. It would be like trying to find a needle in a thousand haystacks.

Because of this challenge, QC ecosystems will have to create application programming interfaces (APIs) and then create libraries of useful functions with those APIs to hide QC complexity and enable programmers to use QC systems without knowing how QC systems work or how to compose programs for QC. For example, IBM’s QISKit enables QC acceleration through Python language APIs. However, those APIs still depend on programmers understanding quantum physics. The next step is to create libraries of useful QC acceleration functions.
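
As a pure-Python illustration of the linear algebra such abstraction layers hide (this is not any vendor’s API), a single qubit is just a pair of complex amplitudes, and a gate is a small matrix applied to them:

```python
import math

# A single-qubit state is two amplitudes [a, b]; |a|^2 + |b|^2 must equal 1.
zero = [1.0, 0.0]  # the |0> basis state

def apply_hadamard(state):
    """Apply the Hadamard gate, which turns a basis state into an
    equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are the squared amplitudes."""
    return [abs(amp) ** 2 for amp in state]

superposed = apply_hadamard(zero)
print(probabilities(superposed))  # ~[0.5, 0.5]: a fair coin flip on measurement
```

Simulating n qubits this way requires a vector of 2^n amplitudes, which is exactly why classical simulation gives out quickly and real quantum hardware is needed.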

Challenge 2: Getting a stable result from a QC program

One of the key challenges for QC is to make sure that the qubits are all working properly when a program starts and that they continue to work correctly until each qubit’s end-of-program state has been observed.

This is a lot harder than it sounds.

First, it requires freezing the qubits to nearly “absolute zero” just to have a fighting chance of keeping them in proper working order until a calculation is finished. Absolute zero (0 K / -459.67°F / -273.15°C) is an ideal absence of any heat or movement at all; it is impossible to achieve in our universe, due to fundamental laws of thermodynamics. Qubits require 0.01 K / -459.65°F / -273.14°C, vanishingly close to absolute zero. That is a lot colder than deep space and expensive to achieve.

Because it is so difficult to get qubits to behave properly for long enough to finish a program, even at these low temperatures, QC architectures need to design error detection and correction into each qubit. Qubits with error detection and correction are interchangeably called “fault-tolerant” qubits or “logical” qubits.

Directly observing a qubit ends a program. QC architectures must entangle extra qubits with a computing qubit, so a QC program can infer the state of a computing qubit without directly observing it (and thereby stopping a calculation). If an error is observed, then the erroneous qubit state can be corrected and the QC calculation completed.

Today, a lot of extra physical qubits are needed to create a logical qubit – on the order of tens to thousands of extra physical qubits, depending on the architecture. A single physical qubit can suffice if the structure of the qubit itself is fault-tolerant. Microsoft is claiming a breakthrough in materials-based fault-tolerant qubit design called “topological” qubits. Microsoft’s topological qubit contains only one physical qubit, based on a pair of Majorana fermion particles, but that breakthrough has not yet been confirmed by outside labs.

Challenge 3: Assembling and programming qubits as a QC accelerator

Today’s state-of-the-art is that no one has publicly shown even a single functional logical qubit. All demonstrations to-date have only used physical qubits. Public demonstrations are getting more complex as labs learn to orchestrate the manipulation and measurement of tens of physical qubits. For example, a Russian team implementing 51 physical qubits now leads the field.

Solving useful real-world problems, such as breaking 128-bit encryption keys, will require assembling and orchestrating thousands of logical qubits at near absolute zero temperatures. It will also require learning how to write complex programs for QC architectures. There are QC algorithmic frameworks for writing programs that can help speed up cracking encryption keys, such as Shor’s and Grover’s algorithms, but QC researchers still don’t understand how to frame those algorithms as an expression of qubit interactions (intersecting wave fronts in my example above).
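
For context, only the period-finding step of Shor’s algorithm needs quantum hardware; the surrounding arithmetic is classical. A sketch using the standard small worked example, with the period found by classical brute force standing in for the quantum step:

```python
from math import gcd

def period(a: int, n: int) -> int:
    """Brute-force the period r of a^x mod n. This is the one step
    Shor's algorithm delegates to quantum hardware for large n."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int):
    """Classical post-processing: an even period r with a^(r/2) != -1 mod n
    yields factors of n via two gcd computations."""
    r = period(a, n)
    if r % 2:
        return None  # odd period: retry with a different a
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None  # unlucky choice of a: retry
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_factor(15, 7))  # period of 7 mod 15 is 4, giving factors (3, 5)
```

For cryptographically sized n, the `period` step above is classically intractable; the quantum speedup applies only there, which is why everything hinges on building enough logical qubits to run it.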

Researchers are learning to build QC systems that can reliably orchestrate thousands of logical qubits. And they are learning how to usefully program those qubits. Then they must build a software ecosystem to commercialize their QC systems. Of course, it also requires building thousands of qubits.

Using graphics processing unit (GPU) computing as a model, QC must implement layers of software abstraction and easy-to-use development environments, so average programmers can use QC systems as compute accelerators without having to understand how to program any specific QC system.

Caution: QC objects through the looking glass are farther than they appear

There are some near-term applications for physical qubits: mostly solving optimization and quantum chemistry problems. Many of these problems can be solved using hundreds to thousands of physical qubits.

A raft of companies that are heavily invested in deep learning are also counting on physical qubits to accelerate deep learning training. Alibaba, Google, IBM, Microsoft and Tencent are all focused here. Integrating QC into the deep learning model creation process would be a neat way of side-stepping challenge #1 (programming), because QC programming would be hidden from human programmers by deep learning abstraction layers.

Many of the companies investing in physical qubits are striving to commercialize their QC architectures within the next five to ten years. This seems doable, given the level of investment by some of the larger competitors, but it still relies on several research breakthroughs, and breakthroughs cannot be scheduled.

All the QC researchers I have talked with say that shipping a commercial QC accelerator based on logical qubits is still at least 15 years away, pointing to commercialization in the early 2030s at the soonest. There is still a lot of fundamental science left to be done. Commercializing that science will take time. So too will building a programming ecosystem to make QC accelerators accessible to a wide range of programmers.

Breaking the code on quantum cryptography futures

The US National Institute of Standards and Technology (NIST) is working on detailed recommendations for a post-QC cryptography world. NIST issued a formal call for proposals last December; November 30, 2017 is the deadline for submissions. NIST’s intent is to issue draft standards on post-quantum cryptography in the 2023-2025 timeframe, about halfway through an industry consensus minimum 15-year QC development and commercialization period.

NIST has quantum physicists on staff. Many of its customers build and deploy systems that will spend decades in the field. To bridge the gap until its draft post-quantum cryptography standards arrive, NIST has published a concise summary of interim cryptographic safety measures.

QC will not break encryption keys this decade. Without massive research and development breakthroughs, the QC researchers I have talked with do not believe that QC will break encryption keys during the next decade, either.

It will happen at some point, but there are reasonable steps that can be taken now to keep data safe for at least a couple of decades. In a few years NIST, and presumably sibling governmental organizations across the globe, will publish stronger recommendations that will directly address post-quantum computing cryptographic safety.

Still confused? You are in good company. A key fact to remember is that QC is still at the beginning of a very long road to commercialization.

FBI couldn’t retrieve data from nearly 7000 mobile phones due to encryption

The head of the FBI has reignited the debate about technology companies continuing to protect customer privacy despite law enforcement having a search warrant.

The FBI says it hasn’t been able to retrieve data from nearly 7000 mobile phones in less than one year, as the US agency turns up the heat on the ongoing debate between tech companies and law enforcement officials.

FBI Director Christopher Wray says in the first 11 months of the fiscal year, US federal agents were blocked from accessing the content of 6900 mobile phones.

“To put it mildly, this is a huge, huge problem,” Wray said in a speech on Sunday at the International Association of Chiefs of Police conference in Philadelphia.

“It impacts investigations across the board – narcotics, human trafficking, counterterrorism, counterintelligence, gangs, organised crime, child exploitation.”

The FBI and other law enforcement officials have long complained about being unable to unlock and recover evidence from mobile phones and other devices seized from suspects even if they have a warrant. Tech firms maintain they must protect their customers’ privacy.

In 2016 the debate was on show when the Justice Department tried to force Apple to unlock an encrypted mobile phone used by a gunman in a terrorist attack in San Bernardino, California. The department eventually relented after the FBI said it paid an unidentified vendor who provided a tool to unlock the phone and no longer needed Apple’s assistance, avoiding a court showdown.

The Justice Department under US President Donald Trump has suggested it will be aggressive in seeking access to encrypted information from technology companies. But in a recent speech, Deputy Attorney General Rod Rosenstein stopped short of saying exactly what action it might take.

Wi-Fi’s Most Popular Encryption May Have Been Cracked

Your home Wi-Fi might not be as secure as you think. WPA2 — the de facto standard for Wi-Fi password security worldwide — may have been compromised, with huge ramifications for almost all of the Wi-Fi networks in our homes and businesses as well as for the networking companies that build them. Details are still sketchy as the story develops, but it’s looking like a new method called KRACK — for Key Reinstallation AttaCK — is responsible.

WPA stands for Wi-Fi Protected Access, but it might not be as protected as we’ve all been assuming. It looks like security researcher Mathy Vanhoef will present the (potentially) revelatory findings at around 10PM AEST Monday — although it’s been worked on for some time; Vanhoef first teased the revelations 49 days ago.

In the source code of a dormant website called Krack Attacks apparently belonging to Vanhoef, a description reads: “This website presents the Key Reinstallation Attack (KRACK). It breaks the WPA2 protocol by forcing nonce reuse in encryption algorithms used by Wi-Fi.” Vanhoef’s website also lists a paper to be released at CCS 2017 detailing the method for key reinstallation attacks, co-authored with security researcher Frank Piessens.
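
Why forced nonce reuse is so damaging can be shown with a toy stream cipher. Here a hash of key and nonce stands in for Wi-Fi’s real per-packet cipher; the point is only that encrypting two messages under the same keystream lets an attacker cancel the key out entirely:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream derived from key and nonce (a stand-in for a real
    stream cipher; do not use this construction for actual security)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"secret-key", b"nonce-0001"
p1, p2 = b"attack at dawn!!", b"meet me at noon."

c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))  # same nonce reused: the flaw

# Without ever learning the key, c1 XOR c2 equals p1 XOR p2,
# so knowing (or guessing) one plaintext reveals the other.
recovered_p2 = xor(xor(c1, c2), p1)
```

This is why a protocol-level trick that forces a nonce to be reinstalled and reused, as KRACK reportedly does, undermines the encryption without ever attacking the key itself.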

Part of the potential flaw in WPA could be that, as the researchers previously suggested in a 2016 paper, the random number generation used to create ‘group keys’ — the pre-shared encryption key shared on non-enterprise WPA/WPA2 wireless networks — isn’t random enough, and can be predicted.

With that prediction of not-so-random numbers in place, the researchers have demonstrated the ability to flood a network with authentication handshakes and determine a 128-bit WPA2 key through sheer volume of random number collection. Though it’s not yet clear, the re-use of a non-random key could allow an attacker to piggyback their way into a wireless network and then snoop on the data being transmitted within.

However, it may not be the apocalypse that some are suggesting. Given that the publication of this vulnerability has been withheld, a fix may already be in the works — or already completed — from major wireless vendors.

Most home and business wireless routers currently using WPA2 should be relatively easy to upgrade to address the potential security issue, but the millions of Internet of Things wireless devices already in the world will be hardest hit — devices that are un-upgradeable but will still need to connect to networks using insecure or soon-to-be-deprecated methods. This could get messy.

Back in the day, the original Wired Equivalent Privacy (WEP) encryption standard was cracked to the point of off-the-shelf tools breaking it in as little as a minute.

If you go war-driving around your city or town today, it’s still likely you’ll find wireless networks ‘protected’ by WEP, because end users don’t know it’s unsafe. WEP was superseded by WPA and WPA2 in later years, but we may be searching for a new Wi-Fi encryption method in the years to come: KRACK may mean that the fundamental privacy we expect of a network protected by WPA2 is no more.

North Korean Hackers Blamed for Bitcoin Attacks

North Korean state hackers are increasingly looking to steal cryptocurrency to fund the regime and circumvent tightening sanctions, according to FireEye.

The security vendor’s senior cyber threat intelligence analyst, Luke McNamara, revealed a spike in spear-phishing attacks targeting South Korean Bitcoin exchanges since May.

The timing is important because April saw the US announce increased economic sanctions against North Korea.

“The spear-phishing we have observed in these cases often targets personal email accounts of employees at digital currency exchanges, frequently using tax-themed lures and deploying malware (PEACHPIT and similar variants) linked to North Korean actors suspected to be responsible for intrusions into global banks in 2016”, he explained.

Those raids are thought to have been the work of sophisticated North Korean state group Lazarus.

“Add to that the ties between North Korean operators and a watering hole compromise of a bitcoin news site in 2016, as well as at least one instance of usage of a surreptitious cryptocurrency miner, and we begin to see a picture of North Korean interest in cryptocurrencies, an asset class in which bitcoin alone has increased over 400% since the beginning of this year”, said McNamara.

By compromising an exchange, the attackers could steal cryptocurrencies from online wallets, swap them for more anonymous digital currencies or send the funds to wallets on different exchanges to withdraw as fiat currencies.

The latter tactic takes advantage of the fact that, in some countries, anti-money laundering rules around online currencies may be relatively lax, McNamara argued.

The news comes as Kaspersky Lab revealed a huge increase in the number of computers attacked with malware designed to conscript them into a botnet and silently install cryptocurrency mining software.

Hackers are using two armies of botnet-controlled machines to mine Bitcoin and the like, with the Russian AV vendor observing criminals making off with more than £151,538 ($200,000) from a botnet of just 5,000 PCs.

In 2013 Kaspersky Lab protected around 205,000 users globally targeted by this type of threat. In 2014 the number jumped to 701,000, and it has more than doubled again in the first eight months of 2017 to reach 1.65 million.

“The major problem with malicious miners is that it is really hard to reliably detect such activity, because the malware is using completely legitimate mining software, which in a normal situation could also be installed by a legitimate user,” argued Evgeny Lopatin, malware analyst at Kaspersky Lab.

“Another alarming thing which we have identified while observing these two new botnets, is that the malicious miners are themselves becoming valuable on the underground market. We’ve seen criminals offering so-called miner builders: software which allows anyone who is willing to pay for full version, to create their own mining botnet. This means that the botnets we’ve recently identified are certainly not the last ones.”

Romanticizing Bugs Will Lead to Death of Information Security

Too much focus on vulnerabilities and their impact is leading information security into a slow death.

Speaking in the keynote address at 44CON in London, security researcher Don A. Bailey said that while “we’re getting good at reducing problems and addressing problems, information security is dying a death it has earned.”

Focusing on bugs and vulnerabilities, Bailey said that his initial perception of information security was about reducing risk for consumers, but that perception was “so off base as all we do is talk about bugs but we are blind to what they mean and are composed of.

“We see new technology coming out, the punditry reel starts spinning with a cool new ‘whatever’ and we ignore technology and where it comes from and how it is sold and what manufacturing looks like, and we ignore the engineers that put effort into building the technology.”

Calling the concept “bug fetishizing”, Bailey pointed to the BlueBorne vulnerability, which has received fresh attention this week after Microsoft issued a patch for it. Bailey argued that while the bug is massive, it has been around for a while and is easy to remediate.

“People use it to raise money, and we see it in the community all the time, not only from start-ups, creating an environment around how cool a vulnerability is,” he said.

“I get a bit tired of hearing about these issues over and over as there is nothing new about Bluetooth vulnerabilities, it is the same old crap as we found a couple of years ago. This is nothing new and not pushing things forward.”

Bailey highlighted what he called the “romantic nature of bugs” and their “reproduction”, saying that we “see vulnerabilities in the wild and they are reproduced a million times” which is not reducing vulnerabilities in any way.

He also said that we take extremely small issues and blow them out of proportion, and that we focus more on intricate vulnerabilities than on the defenses against them.

“Finding bugs that are useful is a great thing, but doing something with it is another thing; we want real models in information security and IoT that we can resolve.”

Bailey concluded by saying that information security is in a worse state than it was 10 years ago: back then there were probably 10 consultancies doing groundbreaking research, and now only a few organizations are.

“Companies say they specialize in information security but outsource for skills, and don’t feel like paying someone for expertise when they can hire, with reputable universities pumping out graduates with information security degrees. It is true we need more people, but who needs them: consultancies who break ground, or companies who need more people? A fraction of a percent are doing groundbreaking research, and that is why information security is dying.”