How AI protects data and identities

Last updated: 09 October 2019

In today’s world we are constantly surrounded by a huge range of digital services that bring us countless benefits in just about all areas of daily life. But alongside those benefits comes another output: a truly tremendous explosion in the volume of data.

That’s not a surprise given the nature of today’s constantly connected lives – but the rate at which it is growing is staggering. Analysts at IDC predict that the global datasphere will grow from 23 zettabytes (ZB, with 1ZB the equivalent of a trillion gigabytes) in 2017 to 175ZB by 2025, with almost 30% of that being real-time information from connected devices. Harnessing this data, and making sense of it, is a huge challenge that humans alone may never be able to meet – which is where Artificial Intelligence (AI) technology comes in.

Thales’ technology helps thousands of organizations make critical decisions in a split second, whether that’s approving an online payment or re-directing a plane to a better flight path. We’ve been investing in AI for the past few years and it’s already enhancing many of our solutions.

Preventing identity theft

One of the key factors driving AI’s rise to prominence within my part of the business is the evolving nature of identity, and in particular the convergence of our online/digital selves with our physical selves. In this context, AI can ensure coherence and consistency – that people are who they say they are, and that their documents are real – while also providing a better (and safer) online experience for consumers.

In the civilian world, there are many scenarios – such as renting a car, opening a new bank account or getting on a plane – when we need to prove our identity. AI can speed this process up, make ID verification more accurate by linking ID documents with their owners, and make it much more convenient. Imagine, for example, that you want to rent a car and make an online booking from your couch at home. Rather than wait for a manual ID check when you pick up the car, the rental firm can instead ask you to upload two photos: one of your ID document and one selfie. AI can be used to match your selfie with the photograph on your ID, protecting against identity theft. On arrival at the rental firm, another quick facial recognition check is all that’s needed.
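To make that idea concrete, here is a minimal sketch of the selfie-to-ID matching step, using the open-source face_recognition library purely as an illustration – it is not Thales’ implementation, and a production identity verification service would add liveness detection, document checks and far more robust models.

```python
# Illustrative sketch only – not Thales' implementation. Compares the face in a
# selfie with the photo on an ID document using face embeddings.
import face_recognition

def selfie_matches_id(id_photo_path, selfie_path, tolerance=0.6):
    """Return True if the selfie and the ID photo appear to show the same person."""
    id_image = face_recognition.load_image_file(id_photo_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    id_faces = face_recognition.face_encodings(id_image)
    selfie_faces = face_recognition.face_encodings(selfie_image)

    # If no face is detected in either image, defer to a manual check.
    if not id_faces or not selfie_faces:
        return False

    # Lower distance means a closer match; the tolerance is a tunable threshold.
    distance = face_recognition.face_distance([id_faces[0]], selfie_faces[0])[0]
    return distance <= tolerance
```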

Protecting air passengers

Similar processes can be adopted in other industries, such as air travel. When checking in online from home, airlines can again ask for a picture of your passport and a selfie. AI can ensure the photos match, as well as determining whether the passport is a genuine document. Once at the airport, AI embedded within facial recognition systems can help to manage and speed up queues at security checkpoints. From a safety perspective, AI could also be used to detect anyone carrying a weapon into the terminal, or to spot someone collapsing from a medical problem.
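As an aside, one of the simplest document-consistency checks – long predating AI – is validating the check digits in a passport’s machine-readable zone (MRZ). The sketch below shows that calculation as defined in ICAO Doc 9303; genuine-document detection in practice goes much further, examining security features, fonts and chip signatures.

```python
# Illustrative sketch: the ICAO Doc 9303 check-digit calculation used in a
# passport's machine-readable zone (MRZ). A real genuineness check involves far
# more than this, but an inconsistent check digit is an immediate red flag.
def mrz_check_digit(field):
    """Compute the check digit for an MRZ field such as the document number."""
    def value(ch):
        if ch.isdigit():
            return int(ch)
        if ch.isalpha():
            return ord(ch.upper()) - ord("A") + 10
        return 0  # the '<' filler character counts as zero
    weights = (7, 3, 1)
    return sum(value(ch) * weights[i % 3] for i, ch in enumerate(field)) % 10

# The ICAO specimen passport number "L898902C3" has check digit 6.
assert mrz_check_digit("L898902C3") == 6
```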

Analyzing telecom networks

Even those who don’t work in the telecom industry will recognize that mobile operators and handset manufacturers alike collect a vast amount of data from their networks and users. AI can bring huge value by giving them the ability to correlate and analyze that data in real time – and then use that insight to make strategic decisions, automate manual processes, reduce costs and anticipate network behavior.

AI can also be used to plan and operate networks more effectively by automating key portions of the incident management process. This allows a network operator to identify network issues in real time before they affect customers, resolving them faster and with fewer resources. Imagine that, instead of having to complain about a network blackspot or outage, your operator saw you’d had a dropped call, proactively addressed the network issue, and offered you a small rebate by way of apology. It could represent a whole new wave of more customer-centric behavior, supported by AI, and it’s one of the services our sister company Guavus is developing.
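As a rough illustration of the idea – and only that, since Guavus’ real pipelines are far more sophisticated – an operator could baseline each cell’s dropped-call rate and flag sudden deviations before customers notice:

```python
# Illustrative sketch only: flag cells whose latest dropped-call rate deviates
# sharply from their recent baseline, so engineers can act proactively.
from statistics import mean, stdev

def flag_anomalous_cells(history, latest, z_threshold=3.0):
    """history: {cell_id: [recent hourly drop rates]}, latest: {cell_id: current rate}."""
    alerts = []
    for cell_id, rates in history.items():
        if len(rates) < 2 or cell_id not in latest:
            continue  # not enough data to establish a baseline
        mu, sigma = mean(rates), stdev(rates)
        if sigma == 0:
            continue
        z_score = (latest[cell_id] - mu) / sigma
        if z_score > z_threshold:
            alerts.append((cell_id, latest[cell_id], round(z_score, 1)))
    return alerts
```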

Reducing financial fraud

Some banks are already using this technology to accelerate customer onboarding for financial services (as part of a “Know Your Customer” process), by checking documentation against photo or video selfies in more or less real time. The result is instant, secure access to the next generation of financial services, wherever the customer happens to be.

AI is also set to play an increasing role in fighting fraudulent banking activity. As I wrote in an earlier post, AI can be linked to risk analysis services to create a personalized risk assessment for each individual, for each authentication need. This helps genuine transactions go through while identifying fraudulent behavior, by continuously learning from different data sets such as the location of the payment, the device used and even behavior such as how the person types on their mobile screen. If everything looks normal for this person, at this moment, with this transaction, then the customer needs to do less to be authenticated (the magic happens silently in the background). However, if there is an unusual amount, time of day, payee or some other factor linked to that individual, the system can dynamically trigger a secondary or tertiary authentication measure. And integrating biometric authentication into the service makes the whole process a lot simpler and more secure than passwords.
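A toy example makes the principle clearer. In the hypothetical scoring function below, each risk signal nudges a score upwards, and only an unusual combination triggers a step-up check such as a fingerprint or face scan; the weights and thresholds are invented purely for illustration.

```python
# Illustrative sketch of risk-based ("step-up") authentication. All weights and
# thresholds are hypothetical; real systems learn them from data, per individual.
def assess_transaction(amount, usual_max_amount, device_known,
                       location_usual, payee_known, typing_anomaly):
    """Return 'allow', 'step_up' or 'block' for a payment attempt.
    typing_anomaly ranges from 0.0 (matches the user's habits) to 1.0 (very unusual)."""
    risk = 0.0
    if amount > usual_max_amount:
        risk += 0.4
    if not device_known:
        risk += 0.2
    if not location_usual:
        risk += 0.2
    if not payee_known:
        risk += 0.1
    risk += 0.3 * typing_anomaly

    if risk < 0.3:
        return "allow"      # authentication happens silently in the background
    if risk < 0.7:
        return "step_up"    # e.g. prompt for a fingerprint or face scan
    return "block"
```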

Fit for the future

There are of course challenges to AI’s development. Firstly, there is a scarcity of relevant talent to continue the production and refinement of AI algorithms. Secondly, we are experiencing a rise in the number and quality of attacks looking to subvert these fast-emerging algorithms – most notably in the field of ‘deepfake’ images and other biometric manipulations. Right now these are largely restricted to jokes and pranks, but criminals are starting to wise up. Just this month, the first ever case of AI-based voice fraud saw a company lose $243,000 after a faked voice message purporting to be from the CEO tricked other senior executives. As this technology evolves, AI will need to get smarter to keep up.

Lastly, and perhaps most critically, there are challenges in considering how AI is developed and used. We must ensure its ethical use, and we must ensure it works without bias. This is why Thales has developed the TrUE AI approach. This stands for Transparent AI, where users can see the data used to arrive at a conclusion; Understandable AI, which can explain and justify its results; and Ethical AI, which follows objective standards, protocols, laws and human rights.

When developing our solutions, we follow several core principles, including security (ensuring appropriate levels of data encryption and access control for any data), privacy (using algorithms which anonymize data) and what we call ‘frugal AI’ – systems that can be trained with less data. We’re also exploring how more and more data can be processed locally – such as within a video camera, or onboard a connected car – rather than in the cloud, to avoid creating data honeypots that could be very attractive to criminals.
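To illustrate the privacy principle – and only as a simplified example – identifiers can be replaced with keyed-hash pseudonyms before data ever leaves the device, so downstream analytics never sees the raw values. (True anonymization requires further measures, such as aggregation; the key and field names below are hypothetical.)

```python
# Simplified illustration of pseudonymization: replace identifiers with keyed
# hashes before sharing data for analysis. The key and field names are
# hypothetical; real deployments also aggregate or add noise to the data.
import hashlib
import hmac

SECRET_KEY = b"device-held-secret"  # would be stored securely on the device

def pseudonymize(record, id_fields=("user_id", "phone_number")):
    """Return a copy of the record with identifier fields replaced by pseudonyms."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable, not reversible without the key
    return out
```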

Staying on top of these issues as we continue to develop AI will be crucial to building trust, among both businesses and the customers they serve. One thing is certain: without AI, we will struggle to make sense of the huge explosion in data from our connected world – the volume is just too vast. But by working sensibly with AI, we can make everyday life more convenient and secure for all.


This article originally appeared on Philippe Vallée’s LinkedIn profile.
