Just a day after I wrote the below, Facebook revealed it is getting into the bot creation ecosystem with bots for Messenger.


This is a quick follow-on from my previous blog post Your colleague the chatbot.

In addition to Siri, Cortana and Google Voice, we need to add Amazon’s Echo to the list of voice-driven systems. Echo is an example of an ambient listening system with a number of built-in commands, but it can also be extended via IFTTT to potentially interrogate and control a large number of services.

In addition, since I last wrote about this, Microsoft has published an open source toolkit for producing new software bots.

I’ve also discovered the Bot Builder wiki and heard about a convention in London for bot writers.

I’ve also been thinking about my statement about this area of technology being stuck in the uncanny valley. My proposed solution in my last post was to avoid the problem by retreating to a text-only medium like Slack. I’ve come to the conclusion that, while that will still be a very strong development area over the next few years, the prize of spoken “conversational AI” is so huge that there are massive incentives for organisations to get it right.

The question is: how do we move from the current lo-fi bots, which are purely action-oriented and stateless, to something that feels more like having a conversation with a person? Having pondered that thought for a bit, I’ve come to only one conclusion. The answer is “painfully”.
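To make that contrast concrete, here is a toy sketch (all the names and replies are hypothetical, purely for illustration) of the difference between a stateless, action-oriented bot and one that keeps a little per-user conversation state:

```python
# A stateless bot maps each message to an action with no memory of the exchange.
def stateless_bot(message):
    if message == "weather":
        return "It is sunny."
    return "Sorry, I don't understand."


# A stateful bot keeps a small per-user context, so replies can build on
# earlier turns -- the first toddler-step towards something conversational.
class StatefulBot:
    def __init__(self):
        self.context = {}  # user id -> last topic discussed

    def reply(self, user, message):
        if message == "weather":
            self.context[user] = "weather"
            return "It is sunny."
        # A follow-up question only makes sense given the remembered topic.
        if message == "and tomorrow?" and self.context.get(user) == "weather":
            return "Rain is forecast for tomorrow."
        return "Sorry, I don't understand."


bot = StatefulBot()
print(bot.reply("alice", "weather"))        # It is sunny.
print(bot.reply("alice", "and tomorrow?"))  # Rain is forecast for tomorrow.
```

The stateless version would fail on “and tomorrow?” every time; the gap between the two is exactly the gap between issuing commands and holding a conversation.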

People have become used to interacting with technology that is well designed and slick. The intermediate steps between now and decently conversational AI are going to be littered with the rocks and boulders of the uncanny valley. However, there are some things that will speed the pace of development.

For one, any such system will have the ability to interact massively in parallel with millions of potential users. As it will be delivered via the web, it can be updated extremely quickly, and A/B testing will be easy to arrange. As I implied above, I think it is likely that many organisations will compete in this area, meaning that, even if their code is not open source, they will still learn from the design mistakes their competitors make.
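One common way to arrange that kind of A/B testing is to bucket users deterministically by hashing their identifier, so each user always meets the same variant of the bot. A minimal sketch, assuming a hypothetical experiment name and user ids:

```python
import hashlib

def ab_variant(user_id, experiment="greeting-style", split=0.5):
    """Deterministically assign a user to variant 'A' or 'B'.

    Hashing the user id together with the experiment name gives a stable,
    roughly even spread of users across buckets, with no database needed."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash prefix to [0, 1]
    return "A" if bucket < split else "B"

# The same user always lands in the same bucket for a given experiment:
assert ab_variant("user-42") == ab_variant("user-42")
```

Because the assignment is a pure function of the ids, every server handling the millions of parallel conversations agrees on who sees what, which is what makes this kind of experimentation cheap to run at web scale.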

The idea of a self-improving, multi-layered, neural-net-based system learning from a vast number of parallel interactions actually makes me almost start to consider the possibility of a “hard take-off” singularity – if we choose to define that as achieving something that can pass the Turing Test in less than 20 years. Of course, that’s before we’ve even discussed the fact that it will be plugged into Wikipedia, Google Translate and Google’s scanned-book repository, as well as potentially vast numbers of cameras and other sensors.

Until recently I thought that the most interesting areas in technology for the next few years would all be robotics based – drones, self-driving cars, Boston Dynamics humanoid robots, etc. Now, I’m seriously starting to think that the toddler-like steps of a number of new “minds”, aided by massive numbers of people all over the world, may instead be the dominant theme of the early 21st century.