Smarter Typing  


Texting is one of the most popular mobile services, but typing on the go can still be a drag. What if there were an engine that already knows what you want to say before you do? SwiftKey is an Android app, and the underlying technology in many mobile devices, that makes fast and accurate typing easy. Thanks to an artificially intelligent engine, it learns to predict the words you want to type next. We talked with Ben Medlock, co-founder and CTO of SwiftKey, about advances in machine learning and language learning, the tricky parts of human intelligence and what lies ahead in the fields of mobile and AI.

SwiftKey learns from the messages you have written, but it's already pretty good at guessing your next word right after you install it. Can you explain how the technology beneath the surface works?

SwiftKey uses predictive models that are trained on very large quantities of text gathered from many different sources. On top of that we have a personalization mechanism that learns from the way you use language. It analyses texts you have written in the past, such as emails or Facebook posts, if you allow it to. The accuracy of the predictions is the result of unifying and blending these different models, which work together to predict the most likely thing you want to say.
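The blending Medlock describes can be sketched as a linear interpolation of component language models. The following is a minimal illustration with made-up probabilities and weights; SwiftKey's actual models and blending scheme are not public:

```python
# Sketch: blending a background model with a personalized model via
# linear interpolation. All probabilities and weights are illustrative.
def blend(models, weights, context, word):
    """P(word | context) as a weighted sum over component models."""
    return sum(w * m.get((context, word), 0.0) for m, w in zip(models, weights))

# Toy next-word probabilities given the previous word (hypothetical values).
background = {("good", "morning"): 0.30, ("good", "luck"): 0.25}
personal   = {("good", "morning"): 0.05, ("good", "luck"): 0.60}

# Weights sum to 1; a real system might give the personal model more
# weight as it accumulates data about the user.
p = blend([background, personal], [0.7, 0.3], "good", "luck")
print(round(p, 3))  # 0.7*0.25 + 0.3*0.60 = 0.355
```

In this sketch the personal model pulls "luck" ahead of "morning" for this user, even though the background model prefers "morning".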

Is this kind of machine learning in any way comparable to the way a child picks up a language?

It's interesting because it's quite different to the type of learning that a child goes through. A child learns from lots of different types of stimuli. It learns to use language by correlating visual inputs with oral as well as written inputs, whereas for text processing using machine learning we tend to use very large quantities of text data, at least at the moment. So we are biased towards learning from text, but then the machine has a lot of text to learn from.

Is it harder or easier for a machine to learn a language?

I think humans are still much better at learning languages than machines. Getting to a stage where we are able to mimic the language learning ability of the human brain will still take quite a bit of time.

Speaking of smartphones and artificial intelligence: Are the systems we use today truly intelligent, or is it more precise to say they are programmed to process data in a way that seems intelligent?

That's a perennial question, isn't it? It depends on how you define intelligence. I think it is a very difficult term to define, because we have so many intuitions about what it means in different contexts. If the way you define intelligence is by human understanding, then we are a very long way off. The fundamental question in understanding is the question of self-awareness. There are some theories about what self-awareness is and what it means, but the reality is we are nowhere near creating it at the moment.

On the other hand, a more practical way of defining intelligence is essentially the accuracy with which machines can perform a task that benefits their human users. So you can say that a machine is unintelligent when it does something that creates more work for the user. That's a potentially useful way of thinking about intelligence because it's much more measurable. An intelligent keyboard, then, is one that supports the user in the creation of text. If you define intelligence in that more practical way, we are making really good progress. But there's still a long way to go.

What about creative thinking? Would it be possible, for instance, to build a SwiftKey version for creative writing?

Yeah, that's something we have thought about, and over the last few years we have actually built a number of language models that capture the way different people use language. For example, we trained the engine on the sonnets of Shakespeare, and one of our staff then used it to help him write a new sonnet. So you can definitely use these statistical models to enhance creativity. In the product we are mostly focused on enhancing efficiency and functionality, but creativity is a really interesting area as well.

SwiftKey Keyboard

Speaking of efficiency and functionality – when software works so flawlessly, people tend not to realize how complex the processes in the background are. Can you share some examples that illustrate the complexity beneath the surface?

That's true. A good example of something SwiftKey does that people don't necessarily understand is that every time you tap on the screen, it collects a sample and constantly retrains the probability distributions that represent the way you perceive the keyboard to be. So, for instance, if you are always tapping to the left of and below the visual symbol for a certain key, we learn that from the key presses. For every character on the keyboard there's a different probability distribution with different characteristics, a bit like a fingerprint for the way you interact with the keyboard. The analysis of these probability distributions works alongside the language modeling and improves accuracy, but of course, most people are completely unaware that this is happening. What you see is just a static keyboard view, but the actual position of the characters and how they are skewed is constantly evolving.
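The per-key distributions Medlock describes can be sketched as a two-dimensional Gaussian per key, updated online from tap samples. This is only an illustration of the idea, not SwiftKey's implementation; the coordinates, prior and update rule are assumptions:

```python
import math

class KeyModel:
    """Per-key 2D Gaussian over tap positions, updated online.
    A sketch of the idea described in the interview, not real SwiftKey code."""
    def __init__(self, cx, cy):
        # Start centred on the key's visual position, with a unit-variance prior.
        self.mx, self.my, self.vx, self.vy, self.n = cx, cy, 1.0, 1.0, 1

    def observe(self, x, y):
        # Incremental (Welford-style) update of mean and variance per tap.
        self.n += 1
        dx, dy = x - self.mx, y - self.my
        self.mx += dx / self.n
        self.my += dy / self.n
        self.vx += (dx * (x - self.mx) - self.vx) / self.n
        self.vy += (dy * (y - self.my) - self.vy) / self.n

    def likelihood(self, x, y):
        # Independent-axis Gaussian density of a tap at (x, y).
        gx = math.exp(-(x - self.mx) ** 2 / (2 * self.vx)) / math.sqrt(2 * math.pi * self.vx)
        gy = math.exp(-(y - self.my) ** 2 / (2 * self.vy)) / math.sqrt(2 * math.pi * self.vy)
        return gx * gy

# A user who consistently taps left of and below the key's visual centre:
key_a = KeyModel(cx=10.0, cy=10.0)
for _ in range(50):
    key_a.observe(9.0, 9.2)

# The learned centre drifts towards the user's habitual tap point.
print(key_a.mx < 10.0 and key_a.my < 10.0)  # True
```

In a full decoder, this touch likelihood would be combined with the language model's prediction, so an ambiguous tap between two keys is resolved by which word is more probable.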

What are further applications or developments in the fields of AI you are currently excited about?

There's an area called deep learning, which is essentially the field of stacking artificial neural networks in such a way that they can learn representations of the world for use in particular tasks. I think this is really the frontier of applied machine learning at the moment: in order to build machines that are significantly more intelligent than the ones we have today, we need to learn how to represent the world in ways that help us solve problems more accurately. The field of machine learning has been successful because it allows machines to break away from the kind of explicit programming that most of computing has been based on for the last 50 years. Deep learning takes that a step further. So I am pretty excited about what it will lead to in terms of helping us solve specific inference problems more effectively.

Are there any examples of applications of deep learning for your area?

Yes, the one we are particularly interested in is language modeling. That means we learn mathematical representations for words that go beyond the character strings words have traditionally been represented by in language modeling. You can already see some quite significant improvements to things like voice recognition through the use of this kind of technology. And of course we are interested in bringing that into typing as well.

DLD turns 10 this year. As we always like to look forward, which trends and developments do you see coming within the next decade or the near future within your field?

Within the field of mobile technology, there's a big trend towards systems and interfaces that adapt to users rather than just the other way around. The technologies we use, such as machine learning and AI, are able to adapt to the way an individual user behaves without the user having to explicitly instruct the machine. This should lead to a new generation of interfaces that are much more dynamic. We would like to see that trend spread in bigger and more significant ways across the software people use on their mobiles.

The same is true for the devices. We have been through a process of hardware homogenization. I think a trend for the next ten years is for devices to increasingly become objects that people can identify with, that are more personalized and fit the body better, such as curved screens that wrap around wrists. As soon as we have flexible screens and fewer technical constraints, it will be interesting to explore different geometries like circles, spheres and individual forms people are drawn to.

Ben Medlock will talk about AI at DLD14. Tune in to the beat of our community on DLDpulse and find regular updates on the DLD14 programme and speakers here.

Mentioned in this article

Ben Medlock
Co-founder & CTO