Mobile code is in practically everybody’s pocket these days: almost everyone carries a mobile phone. And because antivirus applications for mobile devices are scarce (practically nonexistent for iOS), many people believe that mobile devices don’t face as many risks as their desktop or laptop counterparts. But while the stakes may differ, the threats are quite real and may, in time, become even more widespread than those associated with traditional computing.
In this post, we look at the top eight mobile code threats and provide insights on mitigating them.
1 – Intentional or unintentional platform misuse
Whether your mobile device runs iOS or Android, both platforms provide development guidelines to app developers, many of them security-related. However, many app developers unintentionally violate those guidelines through human error, and some do so deliberately.
Whether intentional or not, this threat boils down to misusing a platform feature of iOS or Android, or failing to implement the platform’s security controls. That can lead to issues such as:
- Misuse of iOS’s Touch ID or Face ID features, potentially resulting in unauthorized access.
- Improper use of iOS’s secure keychain by storing session keys within the app’s storage (rather than in the keychain), which could compromise the user’s session.
- An app requesting excessive or improper permissions could result in privilege escalation, potentially granting the app access to more of the device’s data than it should.
- Android intents, which are used to prompt an action or to request data from another app on the device, could reveal sensitive information or enable unauthorized access to the device if they’re marked as public.
To mitigate this threat:
- Platforms should implement sandboxing, which restricts an app’s ability to communicate with other apps (this is already the case on iOS), and should ship with restrictive default file permissions.
- On the developer side of things, devs should apply the most restrictive keychain accessibility class on iOS (e.g., kSecAttrAccessibleWhenUnlockedThisDeviceOnly) and strictly adhere to the platform’s best practices to avoid weak implementations of any functionality.
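The permissions bullet above can be sketched in a few lines. This is a language-agnostic illustration in Python, not a real platform API: the permission names and the "minimal set" are hypothetical, standing in for whatever your app genuinely needs.

```python
# Illustrative sketch (not a real iOS/Android API): flag any permission an
# app requests beyond the minimal set it actually needs.

# Hypothetical minimal set for an app that only takes photos and uploads them.
NEEDED_PERMISSIONS = {"CAMERA", "INTERNET"}

def excessive_permissions(requested: set, needed: set = NEEDED_PERMISSIONS) -> set:
    """Return the permissions requested beyond what the app needs."""
    return requested - needed

# An app also requesting contacts and location is over-privileged and
# should be trimmed before release.
requested = {"CAMERA", "INTERNET", "READ_CONTACTS", "ACCESS_FINE_LOCATION"}
print(sorted(excessive_permissions(requested)))
# ['ACCESS_FINE_LOCATION', 'READ_CONTACTS']
```

Running a check like this against your manifest as part of the release process keeps permission creep from slipping in unnoticed.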
2 – Insecure data storage
Apps can store data. And if that data isn’t sufficiently protected, it could be accessed if your phone is ever lost or stolen. Even without losing your device, malware could end up on your device, and if your data isn’t properly secured, that malware may be able to funnel it to the attackers.
Of course, the bulletproof mitigation to insecure data storage is for the app not to store any data at all. But that isn’t feasible for many, if not most, apps. So, as a dev, having your app store user data is OK, as long as you follow the guidelines below.
- Assume every phone your app is installed on is jailbroken/rooted. While there’s nothing wrong, per se, with jailbreaking or rooting your phone (if you know what you’re doing), there are fewer benefits than ever to doing so, and less experienced users might not understand the security implications. Jailbreaking/rooting bypasses the operating system’s sandboxing and provides full file system access; in many cases, it also sidesteps the phone’s default encryption. In short, assume users may have file system access and build your apps accordingly.
- As you build your app, make sure you understand whether encryption is correctly applied in the file locations relevant to your app and that you also understand how the encryption keys are protected and where they’re stored.
- Try to harden your code against tampering by implementing obfuscation, protection against buffer overflows, etc.
- Avoid storing/caching data whenever possible.
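When your app does have to cache something locally, at minimum don’t rely on permissive platform defaults. Here is a small Python sketch of the idea on a POSIX file system (the file name and contents are placeholders): create the file with owner-only permissions from the start, rather than creating it and tightening it afterwards.

```python
# Sketch: if the app must cache data locally, create the file with
# owner-only permissions (0o600) instead of relying on platform defaults.
import os
import stat
import tempfile

def write_private(path: str, data: bytes) -> None:
    """Write data to a file readable/writable only by the owner (0o600)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)

path = os.path.join(tempfile.mkdtemp(), "session.bin")
write_private(path, b"not-a-real-secret")
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600 on POSIX systems
```

On iOS and Android the analogous controls are the platform’s file-protection classes and internal (non-shared) storage, but the principle is the same: the most restrictive permissions the app can live with.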
3 – Insecure communication and data transfer
Insecure communication can leak sensitive information even if the app itself was built according to the platform guidelines. Attacks like man-in-the-middle and authentication bypass become possible if the app communicating with a remote server doesn’t have the proper authentication and encryption safeguards in place. While these safeguards are crucial in essentially any context, they become absolutely critical for shopping and banking apps, for example.
- Implement TLS/HTTPS for any communication over the web.
- It’s also recommended to implement certificate pinning, an additional method of validating the server certificate that adds an extra layer of security. On top of the default verifications of the certificate presented by the server, such as validating the certificate chain up to a trusted root, a pinning-enabled app also verifies specific characteristics of the certificate, such as its public key or fingerprint. Certificate pinning is more robust than the traditional method insofar as you no longer rely solely on certificate authorities (CAs) to validate the certificate.
- Code your app to notify users if an invalid SSL/TLS certificate is detected or if the certificate chain verification process fails.
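To make the pinning idea concrete, here is a minimal sketch using Python’s standard `ssl` module: on top of normal chain and hostname validation, the client hashes the server’s DER-encoded certificate and compares it to a fingerprint shipped with the app. The pinned value below is a placeholder, and a real deployment would also plan for pin rotation.

```python
# Sketch of certificate pinning: besides normal chain validation, compare
# the SHA-256 fingerprint of the server's DER-encoded certificate against
# a fingerprint bundled with the app.
import hashlib
import socket
import ssl

PINNED_SHA256 = "0" * 64  # placeholder; replace with your server's fingerprint

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as lowercase hex."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()  # normal chain + hostname validation
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if fingerprint(der) != PINNED_SHA256:
                raise ssl.SSLError("certificate pin mismatch")
```

On mobile, the same effect is usually achieved declaratively (e.g., Android’s network security configuration) or via the HTTP client library, but the verification logic is the one sketched here.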
4 – Failure to sanitize user input
If your application enables user input that interacts with your backend (remote server) without proper sanitization applied to that input, you’re opening the door to a variety of attacks:
- Cross-site scripting (XSS) attacks
- Remote code execution (RCE) attacks
- Path traversal attacks
- Full path disclosure attacks
- And more
If user content can be uploaded to your server without validation – think user comments or reviews – then malicious actors could embed malicious scripts in those comments/reviews. Once uploaded, the scripts would be fed back to unsuspecting users reaching that page. Their browsers would execute the scripts automatically, considering them to come from a trusted source, and your users would be at risk of falling victim to the attacks listed above.
- The principal mitigation will be somewhat obvious: if you allow user input in your app, sanitize it.
- Consider all user input to be untrusted. Treat input from all users the same way, whether authenticated users, internal users, or public users: don’t trust it.
- Make sure to set up whitelist checks when working with files or directories coming from user-controlled input.
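The sanitization and whitelist bullets above can be sketched as follows. This is an illustrative Python fragment using only the standard library; the allowlist contents are hypothetical.

```python
# Sketch: escape user-supplied text before echoing it back (blocks stored
# XSS), and allowlist file names derived from user input (blocks traversal).
import html
import os

ALLOWED_FILES = {"avatar.png", "resume.pdf"}  # hypothetical allowlist

def sanitize_comment(text: str) -> str:
    """Neutralize HTML so a stored comment can't execute as a script."""
    return html.escape(text)

def safe_filename(user_value: str) -> str:
    """Strip directory components, then reject anything off the allowlist."""
    name = os.path.basename(user_value)  # drops any ../ traversal prefix
    if name not in ALLOWED_FILES:
        raise ValueError(f"disallowed file: {user_value!r}")
    return name

print(sanitize_comment("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
print(safe_filename("../../avatar.png"))  # resolves to the allowed avatar.png
```

The same pattern applies regardless of backend language: encode on output, validate against a known-good list on input, and do both on the server side.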
5 – Insecure authentication
It’s critical to get authentication right when coding an app. Before granting access, an app (mobile or otherwise) needs to authenticate the user. But a host of vulnerabilities can enable an attacker to bypass the authentication mechanism: allowing backend API requests without requiring an access token, storing passwords locally on the device, or allowing weak passwords can all render a mobile app’s authentication vulnerable to bypass. An attacker exploiting those vulnerabilities could perform a privilege escalation attack, leading to sensitive data theft and other unpleasant outcomes.
- Steer clear of local authentication methods. It’s better to delegate this responsibility to the remote server, configured to allow the download of application data only after successful authentication.
- Don’t use weak authentication methods, such as device identity. Force the use of multi-factor authentication (MFA).
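A minimal sketch of the "require an access token on every API request" point, assuming an HMAC-signed opaque token (the key, token format, and user names here are illustrative only; a real backend would typically use an established scheme such as OAuth 2.0 bearer tokens with expiry):

```python
# Sketch: the server issues a signed token at login and verifies it on
# every subsequent request, instead of trusting the client's claims.
import hashlib
import hmac
from typing import Optional

SERVER_KEY = b"demo-key-do-not-use-in-production"  # illustrative secret

def issue_token(user_id: str) -> str:
    sig = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str) -> Optional[str]:
    """Return the user id if the token's signature checks out, else None."""
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

token = issue_token("alice")
print(verify_token(token))           # alice
print(verify_token("alice.forged"))  # None
```

The key point is that the check lives on the server: a tampered or absent token yields no access, no matter what the mobile client asserts.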
6 – Weak or poorly implemented cryptography
The title of this section pretty much tells the story. The two principal factors that can compromise your device’s encryption and reveal sensitive information are:
- The use of weak encryption algorithms
- Flaws in the implementation of the cryptographic process
The above can occur for many different reasons, including:
- Encryption keys or parameters are hard-coded in the application, allowing anyone who extracts them to bypass the encryption.
- The encryption keys are improperly managed.
- The use of custom (not peer-reviewed) encryption, or deprecated encryption and hashing algorithms, such as MD5 and SHA-1.
To mitigate:
- Only use strong cryptographic standards and encryption protocols recommended by the National Institute of Standards and Technology (NIST).
- Only store sensitive data (such as encryption keys) in the device’s secure enclave, which is only accessible to protected processes. Barring that, simply avoid storing sensitive information on the device whenever possible.
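As a concrete contrast with deprecated hashes like MD5 and SHA-1, here is a Python sketch of deriving a password verifier with a salted, deliberately slow KDF from the standard library. The iteration count is kept low for the demo and should be raised substantially in production.

```python
# Sketch: derive a password verifier with salted PBKDF2 (standard library)
# instead of a fast, deprecated hash such as MD5 or SHA-1.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # demo value; use a much higher, current recommendation

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest) for storage; a fresh random salt per password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The salt defeats precomputed-table attacks and the iteration count slows brute force, which is exactly what a bare MD5 or SHA-1 digest fails to do.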
7 – Insecure authorization
This point is tied to point 5: insecure authentication. Authentication and authorization are two different things, though they’re linked: authentication establishes who you are, while authorization determines what you’re permitted to do. Once you authenticate, your user is assigned permissions to access specific network locations and files. But, as we know, not all users are equal: access control lists (ACLs) are used to set each user’s permissions on the network. Your receptionist’s user account probably doesn’t need access to payroll-related files.
However, poorly implemented authorization schemes can legitimately verify a user’s identity while failing to validate that user’s permissions level. Your authorization scheme needs to enforce identity as well as permissions. Failure to do so will grant legitimate users and hackers alike access to sensitive information and open the door to privilege escalation attacks.
Validate and enforce the permissions granted to an authenticated user by referencing the information present on the backend systems (the authorization server(s)) rather than relying on identifiers supplied by the mobile device. And make sure this check happens for every request.
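The receptionist/payroll example can be sketched as a server-side check. The role store and permission map below are illustrative stand-ins for a real directory or database; the essential point is that the role comes from the backend, keyed by the authenticated identity, never from the client.

```python
# Sketch: resolve the user's role from server-side data keyed by the
# authenticated identity; never trust a role claimed by the mobile client.

ROLE_STORE = {"alice": "admin", "bob": "receptionist"}  # backend data (illustrative)
PERMISSIONS = {
    "admin": {"payroll", "calendar"},
    "receptionist": {"calendar"},
}

def authorize(authenticated_user: str, resource: str) -> bool:
    """True only if the server-side role grants access to the resource."""
    role = ROLE_STORE.get(authenticated_user)      # server-side lookup
    return resource in PERMISSIONS.get(role, set())

print(authorize("bob", "payroll"))    # False: receptionist lacks payroll access
print(authorize("alice", "payroll"))  # True
```

Running this check on every request (not just at login) closes the gap described above, where a legitimately authenticated user is never re-checked against their actual permission level.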
8 – General code quality
The eighth point is a bit of an umbrella category for various mobile code issues regarding improper coding of the client apps we install on our smartphones and tablets.
Coding an application is complex and detailed work. A single extra character in the code can lead to unwanted behavior within the app. And some of that unintended behavior may be dangerous and could expose your sensitive data to malicious actors. Poor coding practices can open the door to all sorts of exploitable vulnerabilities.
On top of that, many app developers (most of them?) rely on third-party libraries and SDKs to build their applications more quickly and easily. This can save time insofar as third-party SDKs natively integrate functionality without requiring the developer to code it explicitly – the SDK takes care of it.
However, these SDKs and libraries may well contain some bugs, and they may not have been adequately tested. You’re at the mercy of their vendors’ quality control when you use third-party tools because you don’t own the code. And most of the time, when an application is found to harbor code-level bugs, the only solution is to rewrite some of the code and push an update of the app to users.
Code-level bugs in mobile applications can lead to crashes, unintended behavior, and exploitable vulnerabilities such as buffer overflows and memory leaks.
The mitigations here are simply general common-sense measures for any development environment:
- Make sure to test for buffer overflows, memory leaks, etc., using automated tools.
- Perform source code reviews, endeavor to write code that’s easy to understand and consistent across the organization, and document your code properly.
So there you have it. Mobile code vulnerabilities are a prevalent threat, and that threat will only grow as mobile devices continue to supplant more traditional computing devices. They deserve attention all the more because mobile devices tend to give users less control than traditional computing platforms, so you need to be aware of what you’re exposing yourself to when using one.
Hopefully, the mitigations listed above will help you in that regard.
As always, stay safe.