
Part 1: In Search of the Soul – Finding Human in AI

As digital interactions become more seamless and intangible, the differences between technology and its human users become more pronounced; the more we learn about what it is capable of, the more we become aware of everything it is not.

When Marvin Minsky called "learning" a "suitcase word," he was pointing at a term stuffed with mistaken extrapolations, restrained imagination and a series of common mistakes, all of which distract us from a productive future. That is precisely why Artificial Intelligence (AI) was born in the first place.

 

According to an October 2017 report by Gartner, AI was predicted to create at least 2.3 million jobs by the end of 2020. What made the scenario depressing was that only 17 percent of developers in forward-thinking organizations got a chance to work with the technology last year. True to the nature of any emerging technology, AI too went through its growing pains, leaving technologists wondering whether intelligence can really be engineered.

 

Human or Not So Human

Remember Alice and Bob, the famous Facebook chatbots? Ever wondered why they were shut down? Reportedly, the two AI-driven bots had, since their inception, been carrying on a conversation in a language of their own. The transcripts were nothing alarming, but Facebook thought it wise to shut them down for good. In another instance, a self-driving shuttle launched in Las Vegas with a great deal of fanfare, only to be involved in an accident within a couple of hours when a semi truck backed into it. The truck driver was cited by the police, but it turned out that the shuttle had simply stopped once its sensors detected the truck backing towards it, and did nothing more. The crash could have been avoided had the sensors been programmed to react differently in such a scenario, the way a human driver would have.

 

Ever partied alone? Well, Alexa did. Amazon's Echo gave German police a sleepless night when, following a complaint from the neighbours, they barged into an apartment where Alexa, reportedly in a party mood, was blasting music with nobody home. As AI is woven into mainstream business processes and key drivers of success, such dents in the very fabric of its nature are entirely uncalled for and raise serious concerns about its arguably rewarding benefits. How far does it stand today, and will it ever show its human side at all? The greatest challenge with artificial intelligence and machine learning is that it is being compared against the most sophisticated, intelligent machine of all time: the human being.

 

So how do you humanize a concept like technology that is inherently perceived to be cold and inhuman? The design of AI experiences must always be rooted in the most fundamental quality that defines us as humans: empathy. It is empathy that enables every other core human ability: the ability to learn, to adapt, to compensate, to troubleshoot and problem-solve. 

 

Creating successful AI experiences that can boast of similar abilities calls for tools and frameworks that understand human decision and response matrices, and that clearly demarcate the tasks that need automation from those that need augmentation.

 

  • AI that Automates – An article on TechCrunch talks about Adobe Photoshop’s new “Select Subject” tool that uses AI to accurately identify and select objects within an image. While it can be written off as just another useful feature integrated into an already useful application, its core is deeply empathetic to the typical Photoshop user who has spent hours painstakingly contouring an object with the Wand or Lasso tool.

 

  • AI that Augments – For human-centered AI design, consider Google's intelligent camera, Google Clips. As per this article, with the ability to focus on the people who matter and the intelligence to decide what makes for a memorable photograph, Clips lets users be part of their captured moments instead of staying behind the camera.

 

  • Troubleshooting Like a Human – The current problem with AI, regardless of its medium, is that it often doesn't make it very far past the first stumbling block or breakpoint. This is an inherently anti-human trait, because humans can assess an unexpected problem and solve it on the go; a minimal sketch of a more graceful pattern follows this list.

 

  • Maintain Transparency – No user likes the idea that someone, somewhere, has a database of what books they read, what brand of shampoo they use or by how many inches their waistband may have expanded in the last few months. E-commerce sites often tell you why they are making a particular recommendation: because of a similar item you bought, or because other users like you have bought it too. While seemingly trivial, such a move demonstrates transparency and gives the user a greater degree of control; a sketch of this pattern also follows the list.
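
To make the troubleshooting point above concrete, here is a minimal Python sketch of a graceful-fallback loop. Nothing in it comes from a real assistant; the `interpret` stand-in, the confidence threshold and the canned clarifying question are assumptions chosen only to illustrate the pattern of asking instead of stopping, and handing off instead of failing twice.

```python
# Hypothetical sketch: an assistant that degrades gracefully instead of
# failing at the first input it cannot confidently interpret.
from typing import Optional, Tuple

CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off, not taken from any real system


def interpret(utterance: str) -> Tuple[Optional[str], float]:
    """Stand-in for an intent classifier: returns (intent, confidence)."""
    known = {
        "track my order": "order_status",
        "cancel my order": "order_cancel",
    }
    intent = known.get(utterance.lower().strip())
    return (intent, 0.95) if intent else (None, 0.2)


def respond(utterance: str, clarified_once: bool = False) -> str:
    intent, confidence = interpret(utterance)

    if intent and confidence >= CONFIDENCE_THRESHOLD:
        return f"Sure - handling '{intent}' for you."

    if not clarified_once:
        # First stumbling block: ask a clarifying question, as a person would,
        # instead of returning a dead-end error.
        return "I didn't quite catch that. Are you asking about an existing order?"

    # Last resort: hand the conversation off rather than fail a second time.
    return "Let me connect you with a teammate who can help."


if __name__ == "__main__":
    print(respond("track my order"))
    print(respond("where is my stuff??"))
    print(respond("where is my stuff??", clarified_once=True))
```

The design choice worth copying is the escalation path itself: every failure branch still moves the user forward instead of dead-ending the conversation.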
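
On the transparency point, a similarly minimal sketch, with hypothetical data and a made-up `recommend` helper rather than any real e-commerce API: every recommendation carries the plain-language reason it was made, so the user can see why it appeared.

```python
# Hypothetical sketch: recommendations that always ship with a human-readable
# explanation, mirroring the "because you bought X" pattern described above.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Recommendation:
    item: str
    reason: str  # the explanation shown to the user, never hidden


def recommend(purchases: List[str], also_bought: Dict[str, List[str]]) -> List[Recommendation]:
    """Pair every suggestion with the purchase that triggered it."""
    recs: List[Recommendation] = []
    for bought in purchases:
        for suggestion in also_bought.get(bought, []):
            recs.append(Recommendation(item=suggestion, reason=f"Because you bought '{bought}'"))
    return recs


if __name__ == "__main__":
    history = ["trail running shoes"]
    co_purchases = {"trail running shoes": ["merino socks", "hydration vest"]}
    for rec in recommend(history, co_purchases):
        print(f"{rec.item}: {rec.reason}")
```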

 

The core philosophy is simple: always come back to what a human tries to get out of his interactions with another human. Detect or predict how the user feels, and create mechanisms to counteract those feelings in as human a way as possible. 
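
As a rough illustration of detecting how the user feels and counteracting it, here is one more small sketch. The keyword-based frustration score is only a stand-in; in practice a trained sentiment model would supply that signal.

```python
# Hypothetical sketch: choose a response strategy from a crude estimate of how
# the user feels. The keyword heuristic stands in for a real sentiment model.
FRUSTRATION_CUES = {"again", "still", "useless", "ridiculous", "broken"}


def estimate_frustration(message: str) -> float:
    """Very rough 0..1 frustration score based on cue words."""
    words = set(message.lower().replace(",", " ").split())
    return min(1.0, len(words & FRUSTRATION_CUES) / 2)


def choose_response(message: str) -> str:
    if estimate_frustration(message) >= 0.5:
        # Counteract the feeling first, then move on to the fix.
        return "I'm sorry this is still not working. Let's sort it out together right now."
    return "Happy to help - what would you like to do next?"


if __name__ == "__main__":
    print(choose_response("This is the third time, still broken"))
    print(choose_response("Can you update my shipping address?"))
```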

 

Consider the first wave of AI a worldwide prototype. The ground has now been tested, giving us data to learn from, and with that, Round 1 is done. Our next step is to focus on what we can learn from how people currently respond to AI. Stay tuned for my next blog (Round 2), coming this week: it will look at how to use those insights to come back to 'empathy' and start all over again.

 

You can also read this article featured in SiliconIndia Magazine. If you'd like to share your thoughts, leave me a message here or on LinkedIn/Twitter.