Cambridge Analytica & Facebook, compromised data – more reminders!


Do we need yet another wake-up call about keeping our data safe?  The latest scandal, involving Cambridge Analytica’s mining of Facebook profiles, has been running for several weeks now and shows no sign of abating – a sign of rising public consciousness that personal data is important and valuable. The case highlights just how much social media companies seem to please themselves when it comes to who has access to what.  At the very least, social media companies take a commercial view that serves their own interests rather than those of their customers/users – and who can blame them – it’s how they make a profit.

While those who need to have sensitive and/or commercial conversations probably won’t be using Facebook to do so, they may well be using consumer-grade apps such as WhatsApp (owned by Facebook) or others.  The messages sent on these services are encrypted but, as we’ve said before, the associated metadata still gives away a lot of valuable information.  To illustrate the point: by profiling the metadata of a conversation between two people, it is possible to identify who is the more senior, i.e. boss and subordinate, simply from the frequency, length, number and response times of replies. Using these techniques it is possible to map a whole organisation!
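To make the idea concrete, here is a minimal sketch of this kind of metadata profiling. Everything here is invented for illustration – the `Message` record, the `likely_subordinate` heuristic and the toy conversation are our own, not taken from any real analysis tool – and it uses only one of the available signals (response time); message length and frequency could be folded in the same way.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical metadata record: who sent a message and when -- no content at all.
@dataclass
class Message:
    sender: str
    timestamp: float  # seconds since the conversation started

def response_delays(messages, party):
    """How long `party` takes to reply to the other participant."""
    return [curr.timestamp - prev.timestamp
            for prev, curr in zip(messages, messages[1:])
            if curr.sender == party and prev.sender != party]

def likely_subordinate(messages):
    """Heuristic: the faster responder is more likely the subordinate."""
    a, b = sorted({m.sender for m in messages})
    avg = {p: mean(response_delays(messages, p) or [float("inf")])
           for p in (a, b)}
    return min((a, b), key=lambda p: avg[p])

# Toy conversation: "boss" takes minutes to reply, "aide" replies in seconds.
convo = [Message("boss", 0), Message("aide", 5),
         Message("boss", 600), Message("aide", 608)]
print(likely_subordinate(convo))  # aide
```

Note that nothing here ever reads a message body: timing alone is enough to suggest the hierarchy, which is exactly why metadata deserves the same protection as content.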

This is a timely reminder that if you’d rather keep your sensitive communications private, you need to be aware of where your metadata is held and who might have access to it. Relying on social media companies that make their money from third parties advertising to their user base is never going to be good for users – it is the price you pay for a ‘free’ service.

Services provided by security vendors don’t rely on selling advertising to make a profit; they are in business to protect their customers’ data, and their reputation lives or dies by their ability to do so.  Something worth remembering next time you need to send a work or business-related communication.

Built-in versus bolt-on – why security should never be an afterthought


We are all looking to do more, be more productive, efficient and organised. With a plethora of unified communication solutions promising to boost productivity by using time in a smarter way, it’s easy to see how these applications are appealing. But are they secure?

Not all applications are created equally

We often hear of high profile security breaches and the resulting financial and reputational issues they cause. This alone should be motivation for product creators to implement adequate security controls into their solutions. However, speed to market and functionality improvements can often take precedence over security.

When purchasing a new car, we take for granted that safety features have been built in, we don’t ask whether we need to retrofit seatbelts and air bags. Car manufacturers have reinvented the way cars are designed, with passenger safety at the heart of the critical thinking design process. The net result is a product that is secure by design with features that work in unison.

Education not blame

Too often employees are cited as the ‘weakest link’ and blamed for causing security incidents. In reality, these incidents are often caused by users just trying to get their work done; faced with complex and poorly designed applications, they are put in the position of having to understand and make security decisions beyond their realm of expertise. Secure communications should be just that: secure by default. Security should be there without the user having to think about it – users are not experts, and we should not expect them to make decisions as if they were.

For example, a secure messaging application might be required to block pasting text out of the app and perhaps even pasting in. However, from a usability point of view, if the message is a phone number or email address, the user probably wants to be able to paste that across into their dialler or email app, rather than having to retype it. Security and usability have to be carefully balanced.
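One way such a balance might be struck is a whitelist filter on what may leave the app via the clipboard: free-form message text is blocked, but a bare phone number or email address is allowed out. This is an illustrative sketch only, assuming nothing about any real product; the function name and the regular expressions are our own.

```python
import re

# Patterns for the only content allowed out to the system clipboard.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")
PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,20}$")

def may_copy_out(selection: str) -> bool:
    """Allow pasting out of the secure app only when the selection is a
    bare phone number or email address, never free-form message text."""
    s = selection.strip()
    return bool(EMAIL_RE.match(s) or PHONE_RE.match(s))

print(may_copy_out("+44 20 7946 0958"))        # True  -- dial it elsewhere
print(may_copy_out("alice@example.com"))       # True  -- email it elsewhere
print(may_copy_out("Meet at the safe house"))  # False -- stays in the app
```

The security decision is made once, by the developer, instead of being pushed onto the user every time they copy something.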

Businesses need to ensure their employees have the right tools to carry out the job. If users need to have conversations whose content must remain confidential, then organisations need to provide an appropriate solution that enables this transparently – one that by default removes the burden from the user and ensures that information is not put at risk.

The way forward

It’s time to stop apportioning blame and trying to ‘fix the user’, and instead design technology to fit the business process and the way people behave, rather than asking employees to adjust themselves.

Users shouldn’t have to be security experts, nor bear the burden of using solutions where security has been bolted on as an afterthought. Employees should take security seriously and be educated users – but they shouldn’t need cyber security credentials to do their day job.

Choosing a secure communications solution such as an Armour product is a positive way to address this issue. Armour Mobile solutions are cost-effective and easy to use, with technology designed from the outset to be secure to government grade – proven assurance for our customers that we take security seriously.

It’s time for the tech industry as a whole to step up and start thinking about the needs of the user and not hiding behind ‘user error’.

CallKit – the good, the bad and the ugly

CallKit integrates VoIP services with other call-related apps on an Apple device, using the same native interface, making life easier for users because they use the same dialler for all calls.  However, it’s not all plain sailing and CallKit does have its limitations.  Here’s our take on it…

The Good

CallKit provides a more typical Apple interface, which is great for the user experience and provides anonymity when receiving secure calls, particularly when in a public place, because all calls look the same.  It provides integration features with other types of incoming call, which means that Armour users are able to prioritise their secure calls over a standard call, and so avoid interruptions.

The Bad

Calls made with CallKit appear in the regular iOS call log, which historically was synced to iCloud.  The sync to iCloud can be turned off, but can you rely on users to remember to do that? Importantly, this means that metadata for secure calls also appears in the standard phone log – which is far from ideal.  To identify the incoming caller, their information would need to be included in the Apple push notification, which may require access to the secure contacts database and could result in call details being stored outside of the secure database – all of which would contravene a CPA certified solution, and could give away valuable metadata to an attacker.

CallKit presents the user with an incoming call interface on the lock screen; however, if your secure comms app sits behind a secure login, it may not launch for the incoming call.

The Ugly

The user interface is limited to Apple’s standard phone app, which means that additional functionality (e.g. buttons for messaging, video and conferencing controls) can’t easily be displayed.  CallKit also has limited ability to deal with video calls; for example, video needs to be enabled at both ends for the call to take place (whereas Armour Mobile allows one-way video calls, since this better fits the security and usability requirements of our customers).

Users may require the ability to disable CallKit.

Our overall take on CallKit is that while it can cause more problems than it solves, it does solve some specific issues in specialist use cases, and for this reason we will be including CallKit in an upcoming version of Armour Mobile, so that our clients have the choice.

In the midst of a Cyber Attack who you gonna call – and how?


Don’t rely on the very IP channel that has just been hacked, because your adversaries will be monitoring it!

If (when!) your organisation succumbs to a cyber-attack, the first thing you need to think about, when assessing the situation and putting together a plan for recovery and future mitigation, is exactly how you are going to communicate.  Whether it is the IT department discussing the technicalities, or updates to senior managers and the board to keep them abreast of events, the last thing you should do is use the very platform that has just been compromised, i.e. your corporate network.

In layman’s terms: if your email has been hacked, sending an email to your friends asking for help is nonsensical – it alerts the hackers to the fact that you’ve detected their presence, and you can’t tell whether any of the responses are genuinely from your friends or from the hackers messing with you.

It is very common, when hackers have compromised a system, for them to watch carefully for the response from any IT resources tasked with countering their attack. Typically this includes watching and subverting any communication channels that IT may be using.  It’s not unusual for hackers to send spoof messages to try to assess just how well the IT team understands the nature of the attack, to capture new passwords or other changes to security, and to prevent key messages from being delivered.

During the initial investigation phase of a cyber attack it is difficult to know what systems have been compromised, so it is best not to rely on any of them, if possible.

By protecting the communications of the IT and digital forensics team, you are blocking a very useful source of information from being intercepted or modified by the hackers. In addition, by using a secure communications platform, such as Armour Mobile, and having the secure comms hosted by a third party, you are further isolating the IT team’s comms from the potentially compromised systems that they are trying to recover.

For third-party ‘blue teams’ brought in to handle such hacking situations, it makes perfect sense to bring their own secure comms solution with them – and this is a question you should be asking any would-be supplier when tendering for such services.

Armour is now working with a number of organisations that can provide specialist technical consultancy and cyber advisory services, from penetration testing and assurance, to incident management and response, and technical security research.

When it’s sent, it’s out there, right? – Wrong!


With Message Burn you get to choose how long your messages last. 

When you send a sensitive message how can you be sure that only the intended recipient sees it, and that it is not lying around on a phone somewhere for others to find at a later date?

While for the majority of chitchat on consumer-grade messaging apps it really doesn’t matter, when you are sending more sensitive, work-related communications, who sees them and what happens afterwards can in some cases literally be a matter of life and death (for example, a journalist in an unfriendly regime meeting an interviewee, or covert operations).

With a facility like Message Burn, users can limit the life of their sensitive data at rest.  Users can set a time for their message to dissolve, disappear or as the name implies, ‘burn’. This can be either a future date and time, or an amount of time after the message has been read by the recipient. While some other enterprise apps allow one or the other, Armour Mobile provides the flexibility of both options for the user via an intuitive interface.  The ‘burn’ time can be set for each individual message.  So, for example, a user may send several low sensitivity messages without any burn time, and then one highly sensitive message regarding, say, a meeting time/venue, or a sensitive contact name, with a very short burn time.

The burn time can be applied to messages, and their attachments (which can be pictures and/or files), for one-to-one messages and to group chats. To ensure that messages to important users aren’t accidentally sent without appropriate ‘burn’ protection, you can also define default message destruction settings for any user or group, so that accidentally pressing the send button never results in sensitive data hanging around for any longer than it should – incidentally, this meets one of the key requirements for GDPR, should that be a concern for your organisation.
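The two expiry modes described above – an absolute burn time, or a countdown from first read – can be sketched roughly as follows. This is a hedged illustration only: the class and field names are invented for the example and say nothing about how Armour Mobile actually implements the feature.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BurnMessage:
    body: str
    burn_at: Optional[float] = None          # absolute expiry (epoch seconds)
    burn_after_read: Optional[float] = None  # seconds after first read
    _read_at: Optional[float] = field(default=None, repr=False)

    def expired(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if self.burn_at is not None and now >= self.burn_at:
            return True
        return (self.burn_after_read is not None
                and self._read_at is not None
                and now >= self._read_at + self.burn_after_read)

    def read(self, now: Optional[float] = None) -> Optional[str]:
        """Return the body if still live, else None; first read starts the clock."""
        now = time.time() if now is None else now
        if self._read_at is None:
            self._read_at = now
        return None if self.expired(now) else self.body

# A message that burns 60 seconds after first being read:
m = BurnMessage("RV at 21:00", burn_after_read=60)
t0 = 1_000_000.0
assert m.read(now=t0) == "RV at 21:00"   # readable on first open
assert m.read(now=t0 + 61) is None       # gone a minute later
```

A per-user or per-group default would then just pre-populate `burn_at`/`burn_after_read` on every outgoing message, so an accidental send never leaves sensitive data lingering.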

Message Burn will be incorporated into Armour Mobile in the next major release.  For more details contact us now: sales@armourcomms.com

 

Comparing ‘Consumer’ to ‘Enterprise’ Messaging apps is like comparing ‘road cars’ to ‘racing cars’

So what exactly are the dangers of consumer (i.e. free) apps?  And what do enterprise-grade apps provide that the free ones don’t?  When your end-users want to download a consumer app and start using it, it isn’t always clear what extra benefits enterprise-grade apps provide, so here we compare the two.

First, a note about Encryption

Free apps have encryption, and so do enterprise apps.  But there is so much more to security than encryption.  Encryption is (or should be) a given; it is rarely the weakest link, and therefore rarely the attack vector.  The dangers in using free apps for business revolve far more around how your sensitive data is managed, where it goes and who has access to it.

Secure Numbers

Consumer apps need a GSM number to use as the ‘secure number’.  This number is used to receive activation codes sent in clear text via SMS, which is easy to intercept and can compromise the security before it is even activated.

Enterprise apps can use GSM numbers as the secure number too, or a randomly assigned ‘secure number’.  Crucially, activation is NOT via insecure SMS; it can be done via a variety of secure activation methods, making it much harder to compromise.

Armour Mobile

We are able to utilise existing GSM numbers, or use another ‘secure number’. The process for activation and provisioning of Armour Mobile can be designed around the user’s specific requirements, using secure activation methods.

Harvesting your data

Consumer apps run only on the vendor’s infrastructure, and even if the content is protected, the metadata of each call or message is visible to the vendor. This can be cross-matched with other user IDs owned by the provider to build up a detailed picture of user habits, geolocation and common friends/contacts, which can be used for profiling and targeted advertising – or sold to third parties for a similar purpose.

Enterprise apps run on a subscription business model, so there is no need to harvest user metadata in order to make a profit.  Serious cyber security vendors have no interest in selling data or advertising, their emphasis is on security and maintaining their credibility and brand value.

Armour Mobile

As well as our secure Cloud option, for fast provisioning, Armour Mobile is also available as an ‘on-premises’ option, meaning that not only is the content of the calls/messages secure, but nobody outside of the organisation has access to the metadata.  This ensures complete security and privacy regarding when, where and who users are communicating with.

Sharing your Contacts

Consumer apps typically upload users’ native contact lists to their global database upon activation. This enables them to cross-match friends/contacts so that the user knows who else is using the same app. While this is certainly very user friendly, it means the vendor has your GSM number, and those of all your contacts, for potential marketing purposes. All of those users will also have had their details cross-matched with their social media profiles, so that the vendor can build up really detailed knowledge of each user: their contacts, what they like, and what they look like.  Yes, we are talking facial recognition here!

For more detail on this worrying scenario, read our blog Whose list are you on?

Enterprise apps do NOT need to upload the native phone directory.

Armour Mobile

With Armour Mobile you are able to import a bespoke directory of secure contacts for your users. In some cases real time integration between the app and the organisation’s internal Active Directory is possible. For certain public sector/government organisations there is also the option to link to address books of other departments that are also using Armour Mobile.

Securing your Communities

Consumer apps run on the vendor’s cloud and operate as a single global community in which anyone can call anyone whose number they know. This is great for private communication between friends, but it is less than ideal for enterprise users.  Furthermore, it can expose users to phishing scams sent from within the messaging app, perpetrated by anyone with access to a list of valid GSM numbers, whether obtained legally or from the dark web.

Even when running in the cloud, enterprise apps can offer cryptographically segregated user groups, or ‘communities’, that are ring-fenced from all other user groups.

Armour Mobile

We are able to offer the option for different communities to be whitelisted, enabling communication between communities for collaborative working.  For on-premises installations, communities can be used to segregate different departments or user groups, for increased security.

Third party certification

Consumer apps are rarely, if ever, subject to any independent certification of their security procedures.

Good enterprise apps are certified by Government cyber security experts or international bodies such as NATO.

Armour Mobile

Using a FIPS 140-2 validated crypto core, Armour Mobile has been awarded many other certifications, including CPA (Commercial Product Assurance) from the National Cyber Security Centre (NCSC), and is included in the NATO Information Assurance catalogue.

Intelligent Support vs Automation

Consumer apps typically have no human interaction during the activation process, which means no voice on the end of the phone for technical support if required.

Enterprise apps usually have an account manager assigned during the sales and trial process, with a technical support email and phone line available after the sale.  This is invaluable if, for example, a board level exec, senior manager or VIP user is having issues that need resolving quickly.

Armour Mobile

We provide a range of support services that enable organisations to be up and running with Armour Mobile secure communications within hours for our Cloud solution. We are also able to provide bespoke solutions tailored to specific high security requirements, based on individual use cases.

Management of sent and received files

Some consumer apps store sent and received files on the mobile device’s SD card, unencrypted, and don’t delete them later – sometimes even when the delete option has been set. The files may remain in unencrypted form even if the app is uninstalled.

Enterprise apps that focus on security keep sent and received files encrypted, exposing them in unencrypted form only briefly, to the third-party viewer that displays them. Any such files are then removed as soon as the user has finished viewing them.
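That lifecycle – decrypt only for viewing, then scrub and delete – can be sketched as below. The XOR keystream here is a deliberately toy placeholder so the example stays self-contained; a real app would use an authenticated cipher such as AES-GCM, and all of the names are our own inventions, not any vendor’s API.

```python
import hashlib
import os
import tempfile
from contextlib import contextmanager

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream for illustration ONLY -- not secure, never use in production.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Symmetric placeholder: applying it twice with the same key round-trips."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

@contextmanager
def view_decrypted(key: bytes, ciphertext: bytes):
    """Expose the plaintext in a temp file only while it is being viewed,
    then overwrite and delete it."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(xor_crypt(key, ciphertext))
        yield path
    finally:
        size = os.path.getsize(path)
        with open(path, "wb") as f:   # overwrite before unlinking
            f.write(b"\0" * size)
        os.remove(path)

key = b"demo-key"
ct = xor_crypt(key, b"attachment contents")
with view_decrypted(key, ct) as p:
    with open(p, "rb") as f:
        print(f.read())               # b'attachment contents'
print(os.path.exists(p))              # False -- removed after viewing
```

The point of the context manager is that cleanup is structural: the plaintext copy cannot outlive the viewing session, even if the viewer raises an error.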

Armour Mobile

All files are kept encrypted, with data encrypted at rest as well as in transit. In addition, Armour Mobile will not run on a jailbroken phone, meaning that the security checks performed by the app stores and the native built-in security remain intact.  Armour Mobile also isolates the microphone to prevent data leakage.

In Summary

When dealing with sensitive business communications of any type (voice, message, text, video, attachments) you need to be sure of exactly where your data and metadata are going, and who can see them.  You also need to think about what other information you may be giving away – for example, your contacts list, and other personal information from social media that can be used for profiling.

And one final thought: if you don’t want the world and his wife to see your corporate communications, you need to use an enterprise-grade app, like Armour Mobile, rather than a consumer app downloaded for free.  In this instance, you really do get what you pay for.

Security Spectre Causes Meltdown – What’s Going On?


You may be aware of significant security concerns raised in the last few days regarding the “Meltdown” and “Spectre” flaws identified in a variety of processors found in PCs, smartphones, servers and other products. This is an advisory to all our customers regarding Armour’s assessment of the effect of these issues.

What’s going on?

Firstly, a brief outline of these issues:*

“Meltdown” is the name given to a side-channel attack on memory isolation that affects most Intel chips since at least 2010, as well as a few Arm cores. “Meltdown” allows a normal (user) application to read (private) kernel memory, potentially allowing the app to steal passwords, cryptographic keys, and other secrets. It is easy to exploit, but easy to patch – and workarounds to kill the vulnerability are available for Windows and Linux, and are already in macOS High Sierra, for Intel parts. There are Linux kernel patches available for the Cortex-A75.

“Spectre” affects, to varying degrees, Intel, AMD, and Arm. Depending on your CPU, “Spectre” allows normal apps to potentially steal information from other apps, the kernel, or the underlying hypervisor. “Spectre” is difficult to exploit, but also difficult to fully patch, so could pose an ongoing threat for some time.

One always needs to ask whether a theoretical vulnerability can be exploited in the real world: in this case the (multiple) teams who reported these problems have proof-of-concept exploits to demonstrate the vulnerabilities so the threat is definitely real.

Although you might initially be concerned about the vulnerabilities this introduces to your personal computer or mobile phone, the wider danger is where data from many users is processed on the same machine, as happens in almost every cloud-based system where multiple applications (often from different companies) run alongside each other, but separated within ‘virtual’ environments (or ‘containers’). These vulnerabilities could allow a malicious application to examine the private data (e.g. customer passwords or cryptographic keys) for another company’s application when present on the same physical machine.

How does this affect Armour customers?

There are three key ways these vulnerabilities need to be addressed:

  • Vulnerable Devices – it’s common sense, but we recommend that all customers ensure that their individual devices (PCs, smartphones) have the latest operating system security updates – not all systems have fixes for “Meltdown” or “Spectre” yet, so keep an eye out for further updates.
  • Vulnerable Servers – follow the same principle as for other devices; make sure you apply the latest operating system updates. (It is possible that patching for these vulnerabilities may have some performance impact, but this has still to be fully evaluated.)
  • Virtualisation – Armour’s server components can be run in a virtual environment, which could be affected by these vulnerabilities; however, it’s important to note that the Armour security architecture already minimises any potential effects:

Customers running an on-premises Armour system have total control over how and where the Armour components are run: if there are no third-party applications or organisations running in the same virtual environments, then the Armour components can’t be attacked by these vulnerabilities.

The really sensitive data (e.g. cryptographic keys) in any Armour system are not exposed to the front-end servers (which is where an attacker might try to insert malware to exploit these vulnerabilities) because this information is stored in the ‘inner’ (more secure) servers.

* For more detail, we suggest you check your preferred technical website, as understanding of these issues, their effects and how to counter them is continually evolving; the formal vulnerability description is on the CERT web page under ID 584653, and the MITRE vulnerabilities are CVE-2017-5753 and CVE-2017-5715 (for “Spectre”) and CVE-2017-5754 (for “Meltdown”). Of course it’s obligatory for any cyber issue to be given its own web page and fancy icon, hence you could look at https://meltdownattack.com/ or https://spectreattack.com/, though these both direct you to the same joint page.

KRACK WiFi hacks and your mobile phone – How smart are you?


When news of the KRACK vulnerability in Wi-Fi networks protected by WPA2 hit the headlines a while back, there was widespread concern at just how many devices were affected, particularly those unloved, back-room (Internet of Things) devices that are often forgotten about and therefore rarely patched or managed.  While KRACK (Key Reinstallation Attack) is not as much of a problem as first reported (miscreants need to be within Wi-Fi range to execute the attack, and it mainly affects Android and Linux users, due to peculiarities in the way that Windows and iOS use WPA2), it does serve to highlight just how complex our networks and technology stack have become.

A couple of weeks later we heard about Eavesdropper, a vulnerability caused by software developers hardcoding credentials into mobile apps, which could potentially result in large-scale exposure of data and metadata in around 700 mobile apps.

Mobile Security = Enterprise Cybersecurity

All of this brings me to the point I made in my presentation at DSEI: with the escalation in complexity of technology, and the pervasive nature of wireless connectivity of all kinds, mobile devices are now a key part of enterprise cybersecurity.  Mobility increases productivity, communication and collaboration, but it also increases risk. Smartphones and tablets are the new endpoint, handling increasing amounts of sensitive corporate data – according to Gartner, 27% of corporate data traffic will bypass perimeter security by 2021.

Data is Valuable

There is far more valuable data held on mobile phones than most users would credit. Documents, chat messages, videos, voice calls and voicemails, the address book, calendar and location are all data, all valuable, and to the right criminal, well worth stealing.

For everyday users of Wi-Fi, KRACK is unlikely to pose much of a threat. However, for those who may be actively targeted because of the work they do – government officials, journalists, law enforcement, covert ops, board-level executives, high-net-worth individuals, royalty, celebrities – it could be an easy way to hijack sensitive, and therefore valuable, information.

For those in security-conscious positions, selecting the right apps and security solutions can make all the difference when a new vulnerability is uncovered.  In the case of Armour Mobile users, even if Wi-Fi traffic is intercepted using KRACK, all that can be seen is encrypted data. The most a hacker will be able to deduce is that the user has Armour; they certainly won’t be able to listen in.

Certified Apps, Additional Assurance

The WPA2 KRACK vulnerability is just one of a myriad of ways that mobile data can be intercepted. But if users have end-to-end encryption, and their apps come from a trusted, certified source – so that you know exactly who developed them and where the data sits and goes – most users will be protected from a lot of these issues. This also helps to minimise the likelihood of malware getting onto your mobile device, which matters because once a device is infected, even securely designed apps can be at risk of attack.

Knowing and trusting the provenance of your apps, and that the app developer follows industry best practice, should be another key consideration. Software that has been certified by an independent third party (such as NCSC) provides additional assurance that you are buying exactly what you think you are buying. It also provides a level of assurance that the app is being carefully monitored, and that should any vulnerabilities be found, you will be notified in good time and patches will be made available as soon as possible.

The mobile is the new endpoint; it has improved productivity immeasurably, but the risk has grown too. Your data is too valuable to trust to ‘free’ security. Be smart with your users’ smartphones and ensure you only use certified apps.

Is there an ‘eavesdropper’ in your mobile apps?

Just recently a story caught my eye that illustrated like no other the importance of trusting your software developers, and really checking the provenance of any apps that you use.

The story, broken by Appthority, was about a vulnerability dubbed ‘Eavesdropper’ that could have resulted in large-scale exposure of data and metadata in mobile apps. The vulnerability is caused by software developers carelessly hardcoding their credentials into mobile apps that use the Twilio REST API or SDK. Twilio responded quickly to news of the vulnerability and reached out to all the developers with affected apps, of which there are apparently 700, some 170 of which are still available on the app stores.

Appthority claims that, over a lifetime of poor coding practice, developers reusing the same credentials can expose massive amounts of sensitive data, including call records, minutes of call audio recordings, and SMS and MMS texts.  We’ve written before about the importance of protecting metadata, and once again, here is another instance where metadata has potentially been compromised.
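Flaws of this kind are straightforward to screen for before an app ever ships. The sketch below is a naive credential scanner of our own devising: the Twilio Account SID shape (the letters AC followed by 32 hex characters) is publicly documented, while the generic secret pattern is just an illustrative heuristic and would need tuning in practice.

```python
import re

# Naive detectors for credentials hardcoded into source or app packages.
PATTERNS = {
    "twilio_account_sid": re.compile(r"\bAC[0-9a-fA-F]{32}\b"),
    "hardcoded_secret": re.compile(
        r"(?i)(api[_-]?key|auth[_-]?token|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_source(text: str):
    """Return (label, matched_text) pairs for anything that resembles
    a hardcoded credential."""
    return [(label, m.group(0))
            for label, pat in PATTERNS.items()
            for m in pat.finditer(text)]

sample = '''
ACCOUNT_SID = "AC0123456789abcdef0123456789abcdef"
auth_token = "0123456789abcdef0123456789abcdef"
'''
for label, hit in scan_source(sample):
    print(label, hit)
```

A check like this belongs in the build pipeline; it is also exactly the kind of hygiene an independent certification review will probe for.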

While Apple is fairly aggressive at pushing security updates to end users, the same cannot be said for Android once devices have ceased to be the latest model. Android devices are notoriously under-patched and under-maintained – a headache for any IT department with users who insist on using older Android devices for business.

This is another example, if any were needed, of the advantages of using an app that is reviewed and certified by a recognised and trusted authority. This type of vulnerability, caused by poor practice, is exactly the kind of flaw that NCSC looks for during its certification process.

Unlike some other suppliers in the ‘secure communications’ space, Armour would never use any third-party analytics or tracking libraries and our app does not communicate with any such third-party servers. It’s for the same reason (the trust of our users) that we don’t outsource any of our development work and only use carefully selected third-party libraries (which are also constantly monitored for security updates). Nor will you find any bitcoin miners slipped into the app when you are not looking!

There is a reason why some of these apps are free to use.  It is worth keeping in mind that if you want genuine security, you do need to pay a little for it.