The Rise of Instant Messaging Apps and Their Use by Extremist Groups
In August 2017, the UK Home Secretary Amber Rudd stated that secure messaging apps should not be a hiding place for terrorists. This article explores the use of messaging apps by extremists, the challenges they pose for law enforcement, and the response of platform providers.
Instant messaging apps enable quick conversation between users over the internet. When selecting a messaging app, users are selecting a ‘client’ that handles each message by breaking it into packets. These packets pass through the client’s server and, when they reach the recipient, are reassembled into the intended message and displayed. Beyond simple text, these messages can contain locations, pictures, documents, postal addresses, phone numbers and online banking details.
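The split-and-reassemble step can be pictured with a short sketch. This is a deliberately simplified toy (fixed-size packets, a reliable in-order network assumed); real messaging protocols add sequence numbers, acknowledgements and retransmission:

```python
def to_packets(message: bytes, size: int = 4) -> list[bytes]:
    # The sender's client breaks the message into fixed-size chunks ("packets").
    return [message[i:i + size] for i in range(0, len(message), size)]

def reassemble(packets: list[bytes]) -> bytes:
    # The recipient's client joins the packets back into the original message.
    return b"".join(packets)

packets = to_packets(b"Meet at the station")
original = reassemble(packets)  # the intended message is displayed
```

Each packet travels through the client’s server independently; only once all of them arrive can the recipient’s app reconstruct and display the message.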
Encryption is used to encode these messages and protect them from unintended recipients, which makes it a desirable feature for certain users. Picture a safe: encryption means only those with a key can open it. In a typical messaging app, the key holders are the users (sender and receiver) and the client (the app provider itself). End-to-end encryption takes this a step further by limiting the key holders to the individual users alone; although the information passes through the client’s server, the provider has no access to the contents of the data packets.
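The safe analogy can be sketched in code. The XOR cipher below is a stand-in chosen only for brevity (real apps use vetted protocols such as Signal’s, never this); the point it illustrates is that a relaying server sees only unreadable ciphertext, while anyone holding the key can recover the message:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR each byte with the key. Applying it twice with the
    # same key returns the original bytes, so encrypting and decrypting
    # are the same operation. Illustration only -- not secure practice.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only by sender and receiver

ciphertext = xor_cipher(message, key)    # all the server ever relays
recovered = xor_cipher(ciphertext, key)  # only a key holder can do this
```

With end-to-end encryption, the `key` above never leaves the two users’ devices, which is exactly why the provider cannot read the packets passing through its server.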
Today there are a range of messaging apps users can choose from, and their popularity is growing; in January 2017 leading messaging app WhatsApp recorded one billion monthly users. The apps offer users a cheaper alternative to SMS messaging, with additional functions that surpass the offering of standard text messaging. File sharing, group chats, sending and receiving payments, video/voice calls and photo sending are all possible within apps such as WhatsApp, Line, Telegram, Kik, WeChat, Viber, and many others.
Since Edward Snowden’s revelations, the popularity and availability of messaging apps offering encryption to the general public have grown markedly. In 2015, then FBI Director James Comey recognised that the use of messaging apps by extremist groups was on the rise. An extremist ‘Going Dark’ by using an encrypted messaging service presented a real problem for authorities: terrorists (and criminals) can use these apps to communicate and coordinate without interception.
Encrypted messaging apps offer a ‘safe’ environment for extremist groups and individuals to engage in activities such as attack plotting, fundraising, recruiting, broadcasting, information sharing and logistics planning. Whilst Facebook and Twitter, once popular platforms for terrorist communication, have significantly reduced misuse by removing content, messaging apps now offer an appealing alternative. Authorities have few means of intercepting such communications unless the user’s mobile device is obtained.
Whilst many apps offer varying levels of encryption, Telegram has other features that make it more appealing. Its channels and broadcasting features have made it a popular choice for Daesh, enabling contact with a widespread audience for multiple purposes. Daesh users are able to counter most law enforcement interception by setting up multiple ‘mirror’ channels containing the same content; should one be suspended, a backup replaces it. It is also known that users will create a benign account or channel and allow it to build an audience of around 1,000 before switching the content to Daesh propaganda. This achieves a wide audience reach that is hard for law enforcement to keep track of.
There are few barriers to entry when it comes to using Telegram; the telephone number required to create an account does not have to be the number the account is used with. Users can therefore use one SIM card to create an account and another to operate it, posing obvious difficulties for law enforcement when it comes to tracing a particular user or group of users.
In their most recent statement on the issue, Facebook made their approach clear: “There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them”. Telegram’s response to misuse by extremist groups is perceived as more haphazard; it is seen as one of the least proactive platforms in its efforts to curb misuse. In a statement titled ‘Don’t Shoot the Messenger’, creator Pavel Durov expressed shock at the app’s use by extremists but stood by protecting people’s privacy, advising that banning encrypted messaging apps would only move extremist use elsewhere.
Durov’s point is valid; part of the reason Telegram is now popular with Daesh users is a direct result of larger social media platforms such as Twitter and Facebook becoming less accommodating. These platforms now take a more proactive approach, removing content and making it difficult for Daesh to operate. If encrypted apps were made more hostile to this activity, the concern is that extremist users would move into more specialised environments, making them even more challenging to monitor. Telegram’s features already make removing content difficult, even when that content is extremist in nature.
There have been calls for ‘back doors’ into encrypted apps to allow law enforcement to access and intercept messages; however, this is deemed high risk and could jeopardise the cybersecurity of all users, legitimate or not. An argument used by multiple tech companies, including Apple CEO Tim Cook, who likened the proposal to putting a key under the doormat, is that opening access for those it is intended for will inevitably open it to criminals who will exploit it. A similar argument points out that the millions of users who operate encrypted apps legally, some in order to avoid persecution or abuse by states, should not be exposed to the risk of their communications being illegally obtained. Tech companies looking to retain customers must consider the privacy of these users when tackling extremist misuse of their platforms.
The Home Secretary’s comments highlight the tension law enforcement and tech companies are experiencing: maintaining users’ privacy without creating the perfect environment for terrorist activity. The law surrounding the issue (the Investigatory Powers Act 2016) is vague as to how far tech companies must go in aiding law enforcement.
At present, one solution involves focusing on the devices users have rather than the messaging apps they are using. Following the San Bernardino shootings, the FBI made great efforts, including reportedly enlisting outside assistance, to access the phone used by the attacker. James Comey, FBI Director at the time, recognised that this approach might be a sign of things to come and that the FBI may require further help in the future to access information contained on mobile phones.
The challenge here is that removing content and accessing devices are reactive in nature; they involve obtaining information after an incident has occurred. A more proactive approach could see artificial intelligence and machine learning playing a role: algorithms have the potential to make objective assessments of individual or group behaviour and so identify risk. This solution, however, is likely to intensify the debate around surveillance and state monitoring of private communications, and it is constrained by the current limitations of algorithm development. Law enforcement and security agencies would benefit from engaging with this proactive solution and resolving the issues around AI-reinforced policing, rather than pursuing an uphill battle against increasingly robust and ubiquitous encryption.