We love our apps, but can we trust them?

Apps augment our lifestyles and enhance the way we live and work. We use them every day across platforms and devices, and we may even love some of these apps. There is little question that apps have become an indispensable aspect of our daily lives. Yet, we often overlook the fundamental fact that we should not trust them.

Earlier this year, a hacker going by the handle Jmaxxz discovered a flaw in the MyCar app for iOS and Android devices. The app lets users lock and unlock their cars remotely, or locate them in a crowded parking lot. Jmaxxz had purchased the app for his girlfriend so she could pre-heat her car during the freezing winter. However, like many apps, this one came with a vulnerability he could not overlook: it had administrator credentials hard-coded within the app as well as on the server. Essentially, any black hat who desired a new car could get one by exploiting this vulnerability.
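Hard-coded credentials are a well-known vulnerability class (CWE-798): anyone who decompiles a shipped app can extract whatever secrets were baked into it. The MyCar code is not public, so the Python sketch below is purely illustrative — the names and values are hypothetical — contrasting the anti-pattern with loading secrets from the environment at runtime:

```python
import os

# Anti-pattern: secrets compiled into the shipped app. Every copy of the
# binary carries them, and a decompiler will reveal them in minutes.
ADMIN_USER = "admin"      # hypothetical values, for illustration only
ADMIN_PASS = "s3cret"

# Safer: resolve credentials at runtime from the environment (or a
# dedicated secrets manager), so they never ship inside the artifact.
def get_credentials() -> tuple[str, str]:
    user = os.environ.get("CAR_API_USER")
    password = os.environ.get("CAR_API_PASS")
    if user is None or password is None:
        raise RuntimeError("credentials not configured")
    return user, password
```

A server-side secrets manager or per-user tokens would be better still; the point is simply that credentials should never live in the client binary.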

This incident illustrates the problem with the app landscape we rely on so heavily today: users adopt an ignorance-is-bliss attitude, using apps without regard for their own security and privacy until it is too late. That is precisely why the question of trust is relevant here.

This is what happens when users trust apps unconditionally.

The disheartening road to tech without trust

The New York Times read and analyzed 150 privacy policies and found that the app landscape is filled with legalese that reads like gobbledygook, designed to exhaust consumers until they resignedly click "agree to terms" or delete the app altogether. The paper found that Facebook's privacy policy took about 18 minutes to read and demanded a sophisticated level of reading comprehension to understand.

Part of the complexity of the language is a result of the evolving landscape. Tech companies are being scrutinized more than ever, with regulations like the California Consumer Privacy Act of 2018 and the General Data Protection Regulation (GDPR) putting pressure on them to be more transparent in their terms of use. GDPR, for instance, requires companies worldwide to review their privacy statements and deliver them in "transparent and intelligible form, using clear and plain language." As a result, companies are also introducing layers of detail into these consumer-facing documents. For instance, the NYT found that Google's policy had grown from a two-minute read in 1999 to a 30-minute one in 2018.

For far too long, we as users have been forgiving about the kinds of personal data we share with brands and developers in exchange for ostensibly free experiences or services, not realizing that we have become the product.

Keep alert with zero trust

In recent years, a growing movement across organizations has made the zero-trust model the first line of policy. The irony is that while major brands are losing public trust, companies likewise find it increasingly difficult to trust the public. After all, malicious actors hide in a sea of legitimate users walking the same digital aisles and doorways.

This is the phenomenon of the age of trust we find ourselves in today. So how should we view the apps we love and use daily when they are being weaponized — when what we use and what we fear are simply two sides of the same coin?

The American writer Suzanne Massie taught the late President Ronald Reagan the Russian proverb Doveryai, no proveryai: trust, but verify. As users of the Internet, we often forget to verify the veracity of what we come across. In truth, verification needs to happen at both the organizational and user levels.

For organizations and developers, one way is to view the public as your ultimate auditor — especially since compliance rests on producing evidence that documents the consistent and thorough performance of security over time. Doing so provides transparency and ongoing assurance to users, which can help mitigate reputational damage when breaches and vulnerabilities do occur.

Concurrently, organizations and developers alike will need to master the balance between user experience and security. Over the years, password requirements have become more stringent, and multi-factor authentication is being implemented across more systems. Yet security must stop placing the burden of trust on the user. If you want to earn users' trust, you may wish to update your trust model before customers and governments do it for you.
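Multi-factor authentication commonly relies on time-based one-time passwords (TOTP, standardized in RFC 6238): a code derived from a shared secret and the current time, valid for only about 30 seconds, so a stolen password alone is not enough. As a rough sketch of the mechanism, here is a minimal TOTP generator using only the Python standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    # The moving factor is the number of `step`-second intervals elapsed.
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, `totp(secret, 59, digits=8)` yields the published test vector "94287082". Because the code rotates every interval, phished credentials expire almost immediately — which is part of why MFA shifts risk without eliminating the usability cost the paragraph above describes.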

On the flip side, users should take the trouble to read the terms and conditions of use, validate the services, and verify the degree of control they are delegating to the app before clicking "agree". If an app requires device management privileges, challenge and validate that requirement before granting permission. It may also be wise to turn off automatic updates and avoid pressing "update all", because updates can sometimes come at a high price, particularly when they introduce bugs. Read the release notes of any given update to see how impactful the fixes are before you install it. You may even wish to search the web to see whether the latest version of an app has significant issues that could affect you or your organization. The bottom line: you can love your apps, but you should only trust them conditionally.