Vince Lynch is CEO of IV.AI, a global AI company that helps businesses better connect with their customers. It processes 150 languages so all customers can be heard regardless of their origin. Clients include SMBs and larger companies such as Netflix, Toyota, Estée Lauder, Telefonica, and Capital One.

When we were asked to help promote the “Resident Evil” film franchise for Sony Pictures a couple of years ago, we came up with the idea of turning the fictional artificial intelligence character (The Red Queen) into a real AI character with which fans could interact. It was a fun concept that proved quite successful, but it created some serious challenges and reminded us how hard it is to build truly meaningful AI.

Creating AI, including smart speakers like Alexa and smartphone assistants like Siri, is challenging. These devices offer a helpful utility function and are good for amusement, but they are created and trained by humans, which can introduce biases and a power dynamic that should be addressed.

The Red Queen AI

Engagement was what we were aiming for when we started on the Red Queen AI. We began by collecting all the scripts that had been written for the films in the series. We trained the AI to learn the character using natural language processing techniques and then generated new dialogue written entirely by the AI to see how it would work.

The first few AI outputs were a nightmare. There wasn’t enough training data in the model, so the new AI version of the character was overly aggressive. We needed more data to soften the harsh villain character and enable it to work for a wider audience.

The film character’s catchphrase was “You’re all going to die down here,” but the first version of the AI couldn’t quite get it right.
It gave us some pretty funny results, including “You must die” and “Your death is here.” As you might imagine, those lines could be a bit heavy out of context and could have hindered our ability to reach a new audience that hadn’t seen the previous films.

To add more training data and make the AI smarter, we decided to tap into literature by authors like Charles Dickens and Shakespeare so the AI could learn from the gentler communication styles of classic villains. Then we added real conversations from police engagements with criminals to provide more realism and modern communication, as well as examples of people on psychoactive drugs recounting the things they saw, which ended up providing some rather creative dialogue.

We trained and retrained, and finally settled on the AI’s output: “I’m not sure I’m done playing with you yet.” This statement reads as more playful and less murderous. Plus, it worked for the context of the engagement, which drew people back into the game.

Everyone was happy with the end result, and the game was a hit. But it’s important to note that our decisions about which training data to use had biases. The writers’ decisions about what made a good villain had biases. All of those biased slants can be OK when the aim is entertainment, but they should be approached with caution for more serious conversations managed by voice assistants, such as those about healthcare, finances, and nutrition.

The Challenges of AI Assistants

The creators of AI assistants are often a small, homogenized group of people behind the curtain who decide what answers are true (or the most accurate) for billions of people. These arbitrary decisions create a distorted view of reality that users of AI assistants might take as gospel.

For instance, more than a year ago, Alexa was accused of a liberal bias. And last January, a video went viral when someone asked Google Home who Jesus was and the device couldn’t answer, even though it could tell users who Buddha was.
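The workflow described above (pool scripts and other sources into one corpus, train a language model on it, then sample new character lines) can be illustrated with a toy word-level Markov chain. This is a deliberately simplified stand-in for the NLP models a team would actually use, and the corpus lines below are invented placeholders, not the real script data:

```python
import random
from collections import defaultdict

def build_model(corpus_lines, order=2):
    """Map each pair of consecutive words to the words that follow it."""
    model = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            model[key].append(words[i + order])
    return model

def generate(model, seed, max_words=12):
    """Walk the chain from a two-word seed to produce a new line."""
    out = list(seed)
    while len(out) < max_words:
        next_words = model.get(tuple(out[-2:]))
        if not next_words:
            break  # no continuation seen in training data
        out.append(random.choice(next_words))
    return " ".join(out)

# Placeholder corpus standing in for film scripts plus the added sources.
corpus = [
    "you are all going to die down here",
    "you are all going to regret this",
    "i am not sure i am done playing with you yet",
]
model = build_model(corpus)
print(generate(model, ("you", "are")))
```

With only a handful of lines, the chain can do little more than parrot near-verbatim dialogue, which mirrors the point above: too little training data produces output that clings to the harshest source material, and broadening the corpus is what widens the model’s range.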
The company explained that it didn’t allow the device to answer the question because some answers come from the web, which might prompt a response a Christian would find disrespectful.

As the use of smart speakers continues to climb, so do expectations. The number of smart speakers in U.S. homes increased 78% from December 2017 to December 2018, reaching a whopping 118.5 million, according to “The Smart Audio Report.” But users need to be mindful of the way these AI platforms work.

Digital assistants have the potential to limit the scope of what products and platforms we use. After all, when one device (and, therefore, one company) owns the road to external knowledge, that company can act unethically in its own interest. For example, if I ask Siri to play a song by The Beatles, the device might automatically play the song from Apple Music instead of my Spotify library. Or I might ask Alexa to order AA batteries, and Alexa could happily order Amazon’s own brand.

Combatting the Limited Scope of AI Devices

In free markets, where competition is supposed to benefit consumers, these flaws can present significant obstacles. The companies that own the speakers could conceivably gain even more control over commerce than they already have.

To combat this, users should be as explicit as possible in their requests to AI devices. “Play The Beatles on Spotify” or “Order the cheapest AA batteries,” for instance, are more thorough instructions. The more aware users are of how companies engage with them, the more they can enjoy the benefits of AI assistants while maintaining control of their environment.

You can also ask an AI device to communicate with a specific company when you are buying items. For instance, Best Buy offers exclusive deals that you can only get when ordering through your smart speaker.
You can also get updates on your orders, help with customer service needs, and updates on new releases.

Users should remember that AI assistants are tools, and they need to think about how they manage those tools in order to have a good experience. And users should report responses that make them feel uncomfortable so the makers of these devices and skills can improve the experience for everyone. Natural language processing requires a considered focus, as the potential benefits are just as significant as the liability of things going wrong.

As for our natural language processing and the Red Queen, we discovered that some users were signing off at night with “Good night, Red Queen,” which means she clearly wasn’t too aggressive in the end.
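The advice above about explicit requests can be illustrated with a toy router that defaults to the platform’s own service unless the user names a provider. This is a hypothetical sketch of the behavior described, not any vendor’s real API, and the provider names are just examples:

```python
def route_request(utterance, default_provider="Apple Music",
                  known_providers=("Spotify", "Apple Music", "Amazon Music")):
    """Return the provider named in the request, else the platform default.

    Mimics the bias described above: a vague request falls through to
    the device maker's own service.
    """
    for provider in known_providers:
        if provider.lower() in utterance.lower():
            return provider
    return default_provider

print(route_request("Play The Beatles"))             # platform default wins
print(route_request("Play The Beatles on Spotify"))  # explicit request wins
```

Spelling out the provider removes the router’s discretion entirely, which is exactly what a request like “Play The Beatles on Spotify” accomplishes on a real device.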
Non-Muslims excluded from the National Register of Citizens (NRC) in Assam would not immediately or directly benefit from the Citizenship (Amendment) Bill, a senior Home Ministry official said.

The comment by the official, who spoke on the condition of anonymity, comes as the Centre faces a backlash in the northeast, including in BJP-ruled Assam, over the Bill. Those vehemently opposed to the Bill fear that it would make it possible for the government to grant Indian citizenship mostly to illegal Hindu migrants from Bangladesh in Assam who came after March 1971, in violation of the Assam Accord, 1985.

Almost 40 lakh people were excluded from Assam’s final draft of the NRC, which was published on July 30 last year. The NRC is a Supreme Court-monitored exercise that was carried out in the backdrop of the Assam Accord. Almost 30 lakh of those excluded from the NRC have filed claims to be included in the list of citizens. Government officials would now examine these claims, and the final NRC would be published later.

The future of those people whose nationality was “indeterminate” was yet to be decided, the official said.

“Those who will not make it to the final NRC, does not mean they will immediately get citizenship,” the official asserted. “There will be legal hurdles because in their application for NRC they claimed to be Indians. You cannot suddenly change your stand.
There won’t be a blanket citizenship offer.”

The Intelligence Bureau (IB) told a joint parliamentary committee on the Citizenship Bill that those who came to India from the three countries under reference due to religious persecution, but did not declare so on arrival, “will have to prove that they came to India due to religious persecution, if they had not declared so at that time of their arrival in India.”

The law seeks to grant Indian citizenship to members of six communities — Hindus, Christians, Parsis, Buddhists, Jains and Sikhs — who came to India up to December 31, 2014. It also reduces the mandatory requirement of 12 years’ stay in India to seven years to be eligible for citizenship if they do not possess any documents.

“The Bill is not only for Assam, it’s for the entire country. There are many people who came from the three countries due to religious persecution,” said the official.

The official added that an application for citizenship would be approved only after the State government concerned cleared it.