Weird AI: five cool use cases for artificial intelligence

Artificial intelligence is finding its way into more and more aspects of our lives, particularly in enterprises. Many AI use cases are fairly well known – from chatbots and RPA to cashierless grocery stores and autonomous vehicles. But AI tech is manifesting in other interesting – and sometimes unexpected – ways. Here are five of our current favorites.


Whisky recipes

Sweden-based Mackmyra Whisky recently announced it was collaborating with Microsoft and Finnish tech firm Fourkind to create the world’s first AI-developed whisky. That may sound odd – not to mention unappealing, whether you’re a whisky drinker or not – but it’s actually an interesting illustration of how AI helps enterprises crunch existing data to arrive, faster, at results they may not even have been looking for.

In the case of whisky, it’s all about the blend – the flavor, color and aroma of a given whisky are the result not just of the specific ingredients, but also of the wooden cask it’s stored in, how old the cask is, what else was stored in it previously, etc. Master Distillers spend a lot of time tweaking and experimenting with these variables to come up with different flavors.

Mackmyra is taking its existing recipes, cask types, sales data and customer preferences, and feeding it all into cloud-based machine learning models. According to Microsoft, the AI can produce “more than 70 million recipes that it predicts will be popular,” including new combinations that human Master Distillers might never have thought up.
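Mackmyra’s actual models are proprietary, but the basic idea – enumerate candidate recipes as combinations of attributes, then rank them with a popularity model learned from sales data – can be sketched in a few lines. Everything below (attribute names, weights, the scoring function) is an illustrative stand-in, not Mackmyra’s data:

```python
from itertools import product

# Illustrative attribute values -- not Mackmyra's actual data.
MALTS = ["smoky", "fruity", "spicy"]
CASKS = ["bourbon", "sherry", "new_oak"]
AGES = [3, 6, 9]  # years in cask

# Hypothetical "learned" preference weights, standing in for a model
# trained on sales data and customer feedback.
WEIGHTS = {
    "smoky": 0.6, "fruity": 0.9, "spicy": 0.4,
    "bourbon": 0.7, "sherry": 0.8, "new_oak": 0.5,
}

def popularity_score(malt, cask, age):
    """Toy stand-in for a trained model's predicted popularity."""
    return WEIGHTS[malt] + WEIGHTS[cask] + 0.05 * age

def top_recipes(n=5):
    """Enumerate every attribute combination and return the n best."""
    candidates = product(MALTS, CASKS, AGES)
    ranked = sorted(candidates,
                    key=lambda r: popularity_score(*r),
                    reverse=True)
    return ranked[:n]

print(top_recipes(3))
```

Even this toy version shows why the approach scales: with real ingredient lists, cask histories and ages, the combination space quickly runs into the millions – far more than a human distiller could ever evaluate by hand.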

Of course, Mackmyra still uses human Master Distillers to vet and test the results, but the recipe remains an AI concoction. Fourkind reckons the same techniques can be applied to other beverages, as well as things like perfumes and candies. As for the AI whisky, the first batch goes on sale in Q3 this year, if you want to have a go.

Predicting sports injuries

At the recent Rise tech conference in Hong Kong, I attended a keynote from Marta Plana, the director of Barca Innovation Hub (BIHUB), which is the technology innovation arm of FC Barcelona. The fact that a football club would even have a tech arm was interesting enough – even more interesting was Plana’s demonstration of how BIHUB is using AI and wireless technology to predict and prevent player injuries.

The AI algorithm processes data collected from wireless tracking devices worn by players during games and workouts, and uses metrics – meters covered at high speed, the number of accelerations and decelerations, total distance travelled – to calculate the likelihood of muscle injury for each player. According to a research paper that tested the algorithm on a professional football team over a full season, it correctly predicted more than 50% of muscle injuries. The algorithm also recommends prevention strategies to athletic trainers, physicians and coaches as they plan player workloads.
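BIHUB’s actual model and coefficients aren’t public, but a common approach in the sports-science literature is to feed workload features like these into a simple classifier. A minimal sketch, with invented weights and a made-up acute:chronic workload ratio feature:

```python
import math

def injury_risk(hs_meters, accels, decels, acwr):
    """Toy logistic model of muscle-injury probability.

    Features echo the article: metres covered at high speed,
    acceleration/deceleration counts, plus an acute:chronic
    workload ratio. The weights are invented for illustration --
    BIHUB's actual model is not public.
    """
    z = (-4.0
         + 0.002 * hs_meters        # high-speed running load
         + 0.01 * (accels + decels)
         + 1.5 * (acwr - 1.0))      # spikes above chronic load add risk
    return 1 / (1 + math.exp(-z))   # probability in (0, 1)

def flag_players(sessions, threshold=0.5):
    """Return the players whose estimated risk exceeds the threshold."""
    return [name for name, feats in sessions.items()
            if injury_risk(**feats) >= threshold]

week = {
    "player_a": dict(hs_meters=600, accels=40, decels=35, acwr=1.0),
    "player_b": dict(hs_meters=1500, accels=90, decels=80, acwr=1.6),
}
print(flag_players(week))  # player_b's workload spike flags him
```

The output of a model like this isn’t a diagnosis – it’s a prompt for the trainers and physicians to ease a flagged player’s workload before a muscle gives out.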

Initially BIHUB relied on GPS trackers to collect data, but recently the hub partnered with wireless startup Wimu to use ultra-wideband technology, which is more accurate than GPS. BIHUB says the technology can also help optimize player performance.

That’s just one of several ways BIHUB is leveraging big data analytics in the name of sports performance. Others range from obvious things like determining which attack strategies are most effective against a given team to personalized hydration profiles in the form of “Gatorade smart bottles” that tell players when they need to drink.

(Relatively) painless insurance claims

Insurance is one of those strange businesses where you pay for a service you hope you never have to use – partly because it involves something bad happening to you, and partly because the claims process is a major, time-consuming headache.

For some time, AI has been touted as a way to solve numerous pain points in the insurance sector, and automation of claims processing is emerging as one of the most promising use cases. For example, if you’re involved in a fender bender today, you have to take the car to a repair shop for a damage estimate, then wait for an insurance agent to come inspect the car, evaluate the shop’s estimate, fill out the paperwork and take everything back to the office before anyone decides whether the claim is valid.

With AI, all of this can be automated via a chatbot that can not only process all of the submitted information, but even detect whether the claim is fraudulent. In India, ICICI Lombard says it’s working on a feature for its mobile app that enables policyholders to take photos of car damage with their smartphone and upload them to the cloud, after which the AI algorithm analyzes the photos and makes a judgment on the repair claim in minutes.
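ICICI Lombard hasn’t published its internals, but a generic version of such a pipeline chains a photo-based damage-estimation model with a fraud detector, then applies business rules to the scores. A sketch with stubbed model outputs and made-up thresholds:

```python
def triage_claim(damage_estimate, fraud_score,
                 auto_limit=2000, fraud_threshold=0.8):
    """Decide a motor claim from (stubbed) model outputs.

    damage_estimate: repair cost predicted from the uploaded photos
    fraud_score: model's estimated probability the claim is fraudulent
    Thresholds are illustrative; a real insurer would tune them.
    """
    if fraud_score >= fraud_threshold:
        return "refer_to_investigator"   # suspicious claims go to a human
    if damage_estimate <= auto_limit:
        return "auto_approve"            # settled in minutes, no adjuster visit
    return "manual_review"               # large claims still get human eyes

print(triage_claim(850, 0.05))
```

The point of the design is that the common case – a small, clearly legitimate claim – never waits on an adjuster at all, while anything unusual still escalates to a person.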

That’s not only good for customers, but also for the insurance companies that adopt the technology – Juniper Research estimates that introducing AI in the claims process will generate annual cost savings across property, health, life and motor insurance of over $1.2 billion by 2023, a five-fold increase over 2018.

Speech-to-text transcription

For years, the digital Holy Grail for journalists has arguably been a speech-to-text app that magically transcribes digital recordings of interviews into text. Now, thanks to AI and the cloud, it exists – and it’s not just journalists using it. Otter, for example, started off as a live transcription app for conference calls. The audio is livestreamed to the cloud, where Otter’s software listens, transcribes the words and sends them back to your screen. The AI algorithms behind Otter’s speech recognition are trained to deal with things like different accents and background noise.

When I spoke to founder and CEO Sam Liang last year, he said the idea was to create an app that could not only transcribe conference calls in real time, but also make the transcript searchable with tags highlighting repeated keywords. Since then, Otter has expanded the service to include MP3s – just upload the audio, and after a few minutes Otter posts a transcription. I’ve tried both the live transcription and the MP3 feature, and while the accuracy is never 100%, it’s typically around 90%, depending on how thick the speaker’s accent is, how far they are from the microphone, etc.
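That keyword-tagging step – surfacing the words a conversation keeps coming back to – is easy to approximate naively, assuming you already have a plain-text transcript. A toy version (Otter’s real pipeline is of course far more sophisticated):

```python
from collections import Counter
import re

# Minimal stopword list for the demo; a real system would use a
# much larger one (or learn which words are informative).
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is",
             "it", "that", "we", "you", "i", "for", "on", "so", "our"}

def keyword_tags(transcript, min_count=2, top_n=5):
    """Tag a transcript with its most-repeated content words.

    A naive stand-in for Otter's keyword highlighting: tokenise,
    drop stopwords, and keep words recurring at least min_count times.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, c in counts.most_common(top_n) if c >= min_count]

demo = ("Our roadmap covers the cloud migration. The cloud rollout "
        "starts next quarter, and the migration team owns the rollout.")
print(keyword_tags(demo))
```

Running this on the demo transcript surfaces “cloud”, “migration” and “rollout” – exactly the kind of tags that make a long meeting transcript skimmable and searchable.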

Meanwhile, Otter is hoping to be more than a glorified transcription service. According to Forbes, the company is positioning itself as a collaboration partner in the same domain as Slack, Zoom and Dropbox. In fact, Zoom now uses Otter for its transcription feature, and the producers of the Rise conference in Hong Kong used Otter to transcribe the event’s Centre Stage keynotes.

Universal translators

Ten years ago, the idea of a universal translator device – literally a device that hears a spoken language and translates it into your own language, and vice versa – was strictly in the realm of science fiction. In 2019, we’re actually spoiled for choice. At this year’s CES event in Las Vegas, a number of translation devices were competing for attention in various form factors, from headphones (such as Waverly Labs’ Pilot headphones and TimeKettle’s WT2 earbuds) to dedicated devices from companies like Travis, iFlytek and Sourcenext.

Translators vary in terms of price and how many languages they support, but the technology behind them is similar to what Otter does for its transcription app – audio is sent to the cloud and processed by AI algorithms, which send the translated text back to the device. The AI is also capable of learning as it goes to better understand pronunciation and the like. According to an AFP report, most translation-device sales come from businesses such as hotels, restaurants and taxi operators.

As cool as universal translators are, there are a few drawbacks. Apart from the limited number of languages supported and accuracy issues, a chief problem is latency – not just in terms of the connection speed between your device and the cloud, but the time it takes to convert speech to text and back again. Even though it only takes a few seconds, that’s more than enough delay to make a live face-to-face conversation awkward.
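The awkwardness becomes obvious if you add up a plausible latency budget for the cascaded pipeline – two network hops plus speech recognition, machine translation and speech synthesis in sequence. The per-stage numbers below are illustrative guesses, not measurements of any particular device:

```python
# Illustrative per-stage latencies (seconds) for a cascaded cloud
# translator; guesses for illustration, not measured figures.
PIPELINE = {
    "uplink_to_cloud": 0.3,
    "speech_to_text": 1.2,
    "machine_translation": 0.4,
    "text_to_speech": 0.8,
    "downlink_to_device": 0.3,
}

def total_latency(stages):
    """End-to-end delay is the sum of every stage in the cascade --
    which is why collapsing the middle three stages into a single
    speech-to-speech model is such an attractive idea."""
    return round(sum(stages.values()), 2)

print(total_latency(PIPELINE))
```

Even with generous per-stage estimates, the stages sum to a few seconds per utterance – short on paper, but a long, awkward silence in a face-to-face conversation.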

Google hopes to fix that with a project called “Translatotron”, which ditches the intermediate text step entirely and translates speech to speech using a single neural network. Even more ambitiously, Translatotron can purportedly replicate your own voice when repeating your words in the translated language. Google researchers recently published a report showing that while the process is far from perfect, it’s feasible enough to keep working on.