The Tech Panda asked DJ Das, Founder and CEO of ThirdEye Data, who is currently working on a hybrid intelligence chatbot to help people tackle depression and anxiety in times of social distancing: how do the ethics of Artificial Intelligence (AI) apply to hybrid intelligence and its human-in-the-loop model?

AI has been under scrutiny for years. While an AI is a fast learner that emulates what it is taught and executes at machine speed, there are concerns about it taking over human jobs completely, inheriting its trainers’ biases, and compromising data privacy. Hybrid intelligence, which keeps humans in the loop while the AI learns on the job, has emerged as a middle path in this sector, promising a system that is both efficient and sensitive. How do the ethical concerns of AI apply here?

I do not believe that humans can be taken out by chatbots. There will always be a need to talk to another fellow human being

For now, it is planned as a basic chatbot with a human interface and the ability to bring in a person. Going forward, as the AI learns how humans deal with questions, it will draw on thousands of parameters that will help it become more ‘human’. Das believes it will take about a year, but the chatbot will become more like a real person.

Does that mean the chatbot could take over the task completely, replacing human involvement? Das says complete dependence on the AI will never happen; only the degree of dependence on humans will lessen.

“Though I’m a technologist, I do not believe that humans can be taken out by chatbots. There will always be a need to talk to another fellow human being. Humans can never get out of the AI equation. And neither do I want them to. I just want to have a tool which can serve many more people than is humanly possible,” he says.

The idea is that with technology able to converse with thousands of people at the same time, humans will spend much less time engaging with each person, while the chatbot does most of the work. So over time, the dependency on humans will decrease, but they will never be replaced.

People Always Click ‘Contact Us’

Das and his wife have worked in analytics around big data since 2011. In the last three years, they have also ventured into AI and chatbots created to provide customer service to store visitors and answer questions contextually. That’s when they observed a particular aspect of user behaviour.

We kept seeing that people would always click the contact us button. Even though the chatbot would answer pretty well and was getting very good results, people would like to talk to humans. That’s when we got thinking that probably people seek a feature that includes a human

“We kept seeing that people would always click the contact us button. Even though the chatbot would answer pretty well and was getting very good results, people would like to talk to humans. That’s when we got thinking that probably people seek a feature that includes a human,” Das explains.

That’s why they built a feature into the tool that allowed a human to join the conversation whenever the user asked for one.

A Solution for COVID-Born Social Isolation

COVID has brought social isolation through remote working and social distancing. Das, who lives in the Bay Area of California, thought about the psychological repercussions of this.

“I’ve been working from home for the last four months now. It’s not easy. Everyone is craving for human interaction or to talk to somebody. And it has actually given rise to a number of mental issues too. There have been detailed reports showing that not only in the Bay Area, but all over the world, people are suffering from mental issues because of less contact with humans,” he says.

I’ve been working from home for the last four months now. It’s not easy. Everyone is craving for human interaction or to talk to somebody.

The chatbot, a mix of AI and human, starts talking to a user, but if the conversation reaches a point that requires human intervention, it brings a human into the loop. They plan to launch a new site that will take their chatbot technologies into the hybrid zone.

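In rough terms, that handoff can be sketched in a few lines of Python. The example below is purely illustrative and not ThirdEye’s implementation: the `Turn` structure, the `risk_score` signal, the trigger phrases, and the escalation threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch of a human-in-the-loop handoff, not ThirdEye's actual code.
# The bot answers on its own until the user asks for a person or an assumed
# upstream risk signal crosses a threshold, at which point a human is brought in.

from dataclasses import dataclass


@dataclass
class Turn:
    user_text: str
    risk_score: float  # assumed output of some upstream classifier, 0.0 to 1.0


def needs_human(turn: Turn, risk_threshold: float = 0.7) -> bool:
    """Escalate if the user explicitly asks for a person or the risk is high."""
    asked_for_human = any(
        phrase in turn.user_text.lower()
        for phrase in ("talk to a human", "real person", "contact us")
    )
    return asked_for_human or turn.risk_score >= risk_threshold


def respond(turn: Turn) -> str:
    if needs_human(turn):
        # Hand the session over to an anonymous human responder.
        return "Connecting you with a person now."
    # Otherwise the AI keeps the conversation going on its own.
    return "I'm here with you. Tell me more about how you're feeling."
```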

When people come to talk on the site, they will initially be talking with a chatbot. While the project is still at the early stages of backend development of the AI model, Das hopes the chatbot will be able to communicate with an understanding of human concerns, thoughts, and emotions.

To support this, they are using technologies like sentiment analysis to understand where the person is coming from. Users can also invite humans, who are anonymous, regular people, into the conversation. While these conversations take place, ThirdEye can train the AI on them.

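As an illustration of how sentiment might gate that handoff, the sketch below uses NLTK’s off-the-shelf VADER analyzer as a stand-in for whatever sentiment technology ThirdEye actually uses; the `should_invite_human` threshold and the simple training log are assumptions, not details from the project.

```python
# Illustration only: VADER sentiment (via NLTK) stands in for whatever sentiment
# model ThirdEye actually uses. Strongly negative messages are flagged so an
# anonymous human responder can be invited in, and every turn is logged so the
# model can later be trained on real conversations.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

training_log = []  # anonymised turns kept for later training (assumption)


def should_invite_human(message: str, threshold: float = -0.5) -> bool:
    """Flag messages whose compound sentiment score is strongly negative."""
    scores = analyzer.polarity_scores(message)  # keys: neg, neu, pos, compound
    return scores["compound"] <= threshold


def handle(message: str) -> str:
    training_log.append(message)
    if should_invite_human(message):
        return "Would you like to talk to a person? I can bring someone in."
    return "Thanks for sharing that. What's been on your mind today?"
```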

“The system is observing how a human is actually conducting the conversation. Over time it will start to emulate that. So that’s the Holy Grail. That’s what I’m trying to get to,” says Das.

Human or AI: Can’t Escape Bias

With humans playing a role, will we see more bias seeping into hybrid intelligence than in just AI?

Das says one can’t actually escape bias, whether it is human or AI.

“Bias exists in both worlds, in natural intelligence as well as artificial. How the AI model running behind the chatbot gets trained determines the bias,” he says.

Human bias is created by our upbringing, our education, the people we meet, and our behaviour, and we transfer it to an AI the longer we train it. That’s why Das is trying to launch the chatbot in the market sooner rather than later: he doesn’t want to taint or bias the model.

Bias exists in both worlds, in natural intelligence as well as artificial. How the AI model running behind the chatbot gets trained determines the bias

He wants the model to be exposed to as many different kinds of people as possible. Through interactions, conversations, and exposure to different kinds of emotions, he expects the AI to end up with lower bias.

“Through the diversity of training, I want to lower the risk of bias in the AI model,” he says.

Privacy: ‘We Are Actually Reducing the Risk’

Privacy issues come up in any AI conversation sooner or later. So how do they apply to hybrid intelligence? Is data privacy more at risk?

“We are actually reducing the risk,” says Das.

Since both chatters and responders in ThirdEye’s project are anonymous, privacy risks are reduced; it is an issue the company takes quite seriously.

“We work in the field of data, so we understand data privacy issues at its core. With data customers like Microsoft, Southern California Edison, Amgen, and IAD Bank, handling data in the right manner is what we have learned in the industry. So we are deploying those learnings to ensure that the data is completely kept private. We lower the risk that we have of infringement and privacy violations,” he says.

Where Human and AI Meet

Hybrid intelligence is the area where human and AI meet. It draws on the human strength of sensitivity as well as the artificial strength of efficiency. But what of their weaknesses: inherited bias, the handling of large amounts of user data, and the risk of giving over completely to the artificial?

If handled responsibly, hybrid intelligence can keep all of these in check and grow healthily into different sectors.

This post originally appeared in The Tech Panda.
