As every good geek knows, in the beginning was the command-line. Back before these young ‘uns and their ‘windows’ and their ‘desktops’, everything was done by typing commands into a lovely green-screen terminal. As is so often the case, everything old is new again – but this time round it’s thanks to the ever-increasing adoption of Slack.

I’m a big fan of Slack – while its ability to quickly bring together people from widespread groups is nothing particularly new, the interface is great and the company ethos is reflected in the upbeat, quirky feel of the tool.

But it’s beyond the standard chat-room features that Slack starts to come into its own. It’s been designed from the ground up to be highly extensible and already has an impressive list of third-party tools that integrate quickly and easily with it.

At the moment the power of these tools is relatively limited. They can pull in data from multiple sources (Twitter, GitHub, Salesforce, Google Analytics, etc.), handle simple direct commands (add a card to Trello, start a Skype call, etc.), or even hook into broader cloud building-block services such as IFTTT or Zapier. However, I think we’re only at the very beginning of what these integrations will be able to do.
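
To make that concrete, here’s a minimal sketch of the simplest kind of integration: a script posting a message into a channel via one of Slack’s incoming webhooks. The webhook URL is a placeholder – Slack generates a real one per integration – and the build-server message is invented for illustration.

```python
import requests

# Placeholder URL - Slack generates a real webhook URL for each integration.
WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

def post_to_slack(text):
    """Post a plain-text message into a channel via an incoming webhook."""
    response = requests.post(WEBHOOK_URL, json={"text": text})
    response.raise_for_status()

# e.g. a build server announcing a broken build
post_to_slack("Build #142 failed on master - last commit by alice")
```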

Back when the command-line was the only way to interface with computers, Joseph Weizenbaum invented the first chatbot – ELIZA. ELIZA is a simple program and is leagues away from passing the Turing Test. However, in a very real way it’s the great-grandparent of our ubiquitous modern companions: Google Now, Cortana and Siri.
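
To give a flavour of quite how simple: ELIZA works by matching keywords in the user’s input against a script of patterns and reflecting fragments back as questions. The toy rules below are my own illustration of the technique, not Weizenbaum’s original script.

```python
import re

# A tiny, invented subset of ELIZA-style rules: match a pattern and
# reflect part of the input back as a question. No understanding involved.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(user_input):
    """Return the first matching reflected response, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I am worried about the build"))
# -> How long have you been worried about the build?
```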

Obviously those tools work via voice recognition and generation. We’re not (yet) at the stage of developing that at the level seen in the film Her (or any of the many other recent films and TV shows on similar themes). The technology is still definitely stuck in the uncanny valley.

But that problem goes away when we go back to a text-only interface.

I predict that, as the capabilities of these software bots increase, we will eventually find it difficult to tell them apart from a real person – especially if our interactions are constrained to a particular domain.

The tools enabling this are already freely available, whether developers choose Google’s TensorFlow, IBM’s Watson or one of the large number of open-source alternatives.

Many of these systems are based around neural networks or other kinds of deep learning that require lots of input data to build their underlying models from – exactly the kind of data they could gather by being plumbed into a busy, domain-specific chat room.
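
As a rough sketch of what that might look like (using scikit-learn rather than TensorFlow or Watson, purely for brevity – the messages and intent labels below are invented), a harvested chat log could be used to train a classifier that maps a message to the intent behind it:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples of harvested chat-room traffic, each labelled
# with the intent behind the message.
messages = [
    "who broke the build?",
    "the build is red again",
    "where is my tax rebate?",
    "I still haven't had my rebate",
]
intents = ["build_failure", "build_failure", "tax_query", "tax_query"]

# Bag-of-words features plus a linear classifier - a toy stand-in for
# the deep-learning models the large platforms actually train.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, intents)

print(model.predict(["why has the build gone red?"]))  # expected: ['build_failure']
```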

In addition, speaking as someone who works for a very large organisation that is seriously considering introducing web chat, doing the same thing with customer and citizen conversations in order to replace call-centre operatives seems like a no-brainer.

So, five years from now, when you’re arguing over a command-line interface about who broke the software build or where your tax rebate has gone, the combination of a 50-year-old style of interface and state-of-the-art technology might mean that, unbeknown to you, you’re not speaking to a person at all.

Update: Since publishing this today I’ve already seen more evidence, in the form of this online lawyer-bot.
