Right now, I’m exploring the use of chatbots in the library. As with any technology, there are pros and cons. Essentially, chatbots simulate human conversation in text or voice. Today, you’re most likely to encounter them at large banks and other major corporations.
Most chatbots are designed to handle specific tasks, not full conversations. For example, a bank chatbot might be built to check a balance, look up recent transactions, or deposit a check. Make a request outside the chatbot’s specialized tasks, though, and the system breaks down, which leads to customer frustration. Better-designed bots hand the customer off to a real human agent after a certain point. This is the chatbot frustration most people are familiar with today.
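To make that concrete, here’s a rough sketch of how a narrowly scoped bot like that might work. The intents and trigger phrases are invented for illustration, not taken from any real bank’s system: the bot matches a few known requests and hands everything else to a person.

```python
# A minimal sketch of a task-oriented bot: it only recognizes a few
# intents, and anything else falls through to a human agent.
# (The intents and trigger phrases here are invented for illustration.)

INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much money"],
    "recent_transactions": ["recent transactions", "last purchases"],
    "deposit_check": ["deposit", "check deposit"],
}

def respond(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return f"Sure, I can help with that ({intent})."
    # Outside the bot's narrow scope: hand off instead of guessing.
    return "I'm not sure about that. Let me connect you with a human agent."

print(respond("What's my balance?"))           # handled by the bot
print(respond("Can you refinance my house?"))  # escalated to a human
```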
That experience is changing, though. In the recent past, people could always tell when they encountered a bot in a text chat or a phone system. But natural language processing (NLP) is getting better. NLP is at the heart of modern human-computer interaction, and it will eventually let computers understand our requests without us having to phrase them in a specific format. For librarians, that means no more Boolean search strings, no hunting for just the right keyword to narrow down search results, and no digging for a particular navigation menu.
Humans naturally want to communicate using natural speech, including colloquialisms, slang, varied accents, and more. Computers are getting better at handling this because machine learning systems are improving; modern NLP is built on machine learning (ML).
Machine learning-based systems like chatbots learn from large sets of data. ML algorithms study that data, find patterns, and make predictions based on the patterns they find. It’s similar to how humans learn: we do the best we can with the information we have at the time, and our processes improve as the information improves.
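Here’s a toy illustration of that idea, using the scikit-learn library and a handful of made-up training questions. It isn’t how a production chatbot is built, just a small sketch of “study labeled examples, find patterns, predict”:

```python
# A toy example of "learning from data": the model studies a handful of
# labeled questions, finds patterns in the words, and predicts a topic
# for a question it hasn't seen. (The training examples are invented;
# a real system would need far more data.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

questions = [
    "When does the library close tonight?",
    "What are your weekend hours?",
    "How do I renew a book online?",
    "Can I extend my loan period?",
]
labels = ["hours", "hours", "renewal", "renewal"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(questions, labels)

# The model predicts based on word patterns it has seen before.
print(model.predict(["Are you open on Sunday?"]))  # likely ['hours']
```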
As you can imagine, machine learning systems are improving because our available data sets are improving. Google’s natural language processing systems, for example, have access to text conversations, voice assistant archives, email threads, and more. Is it any surprise that they are among the best?
With more data, machine learning algorithms have better information and can make a more informed guess about what a person means by a given request. For this reason, we can all expect chatbots to become more common and take over more routine tasks; taking orders at a restaurant or scheduling meetings are prime examples, and bots are already being refined for both.
The chatbot I’m working on now does not use the more advanced machine learning approach. It uses a rule-based system: I build a decision tree and manually tell the system which pieces of information to draw on to answer specific questions. With a rule-based system, I have to make much better guesses about what a customer will ask and how people will want to interact with the bot to accomplish specific tasks.
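To give a sense of what that looks like, here’s a small hypothetical sketch of a hand-built decision tree. The library questions and answers are invented, not pulled from my actual bot, but the structure is the point: every branch has to be anticipated and written out by hand.

```python
# A minimal sketch of a hand-built decision tree: each node either asks a
# follow-up question or gives a final answer. (The questions and answers
# here are hypothetical library examples.)

DECISION_TREE = {
    "question": "What do you need help with? (hours / renewals / rooms)",
    "branches": {
        "hours": {"answer": "We're open 9am-9pm weekdays, 10am-6pm weekends."},
        "renewals": {
            "question": "Renew online or in person? (online / in person)",
            "branches": {
                "online": {"answer": "Log in to 'My Account' and click Renew."},
                "in person": {"answer": "Bring the item to the front desk."},
            },
        },
        "rooms": {"answer": "Study rooms can be booked at the front desk."},
    },
}

def run_tree(node: dict) -> None:
    """Walk the tree, prompting at each branch until we reach an answer."""
    while "answer" not in node:
        choice = input(node["question"] + " ").strip().lower()
        if choice in node["branches"]:
            node = node["branches"][choice]
        else:
            # Anything I didn't anticipate dead-ends here -- the core
            # limitation of a rule-based bot.
            print("Sorry, I don't have a branch for that.")
            return
    print(node["answer"])

# run_tree(DECISION_TREE)  # uncomment to try it interactively
```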
After practicing with the rule-based system, I can explore a more advanced ML-based system. When starting with a new technology, it always helps to walk before running; the fall hurts less when you’re not going at breakneck speed. If you’re curious, I’m using Dialogflow, Google’s chatbot platform.
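If you want a rough idea of what talking to a Dialogflow agent from code looks like, here’s a sketch based on Google’s Python client library. The project name and session ID are placeholders, and you’d need your own agent and Google Cloud credentials configured before anything like this would actually run:

```python
# A rough sketch of querying a Dialogflow agent from Python, using the
# google-cloud-dialogflow client library. "my-library-project" and the
# session ID are placeholders.
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str) -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Wrap the user's text so Dialogflow can match it to an intent.
    text_input = dialogflow.TextInput(text=text, language_code="en-US")
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # Dialogflow returns the matched intent plus the reply I configured.
    return response.query_result.fulfillment_text

# print(detect_intent("my-library-project", "test-session-1", "When do you close?"))
```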