Why AI needs human understanding
Despite continued advancements in AI, when it comes to analysing social media conversations, AI still struggles to accurately rate content for sentiment and topics.
Today’s newspaper columns and keynote conference addresses are filled with hot takes on the future of work. It’s hard to keep up with the number of times we’ve been told that our desk jobs will soon be replaced by AI. AI is regularly presented to the public as an ominous job-killing machine that will achieve new heights of efficiency and accuracy without the interference of human error.
There’s no doubt that in many sectors the threat to human jobs posed by AI is real. And indeed, some jobs will make way for the processing power of machine-learning systems. Algorithms already tell us what we want to buy, help us pay for it, and in some places they’re delivering our purchases to our front doors. But as Foteini Agrafioti, head of Royal Bank of Canada’s AI research arm, recently told the Financial Times, too many people are making claims about big cost savings and job impacts. Agrafioti says “there’s still a long way to go and many challenges we need to solve before a machine can operate even near the human mind.”
And while algorithms can crunch numbers that not even the brightest actuarial graduates could contemplate, there’s one thing that even the best of them can’t do as well as we can: understand fellow humans.
Machines still cannot accurately analyze and understand human conversation. Sentiment analysis, the automated algorithm-driven task of determining the meaning and intent of written text – be it a tweet or a product review on Amazon – is still far from producing consistently accurate outcomes. Algorithms can pick up on keywords and identify trends in the text we write. But when it comes to analyzing text to understand how people feel and what’s making them feel that way, machines perform quite poorly. The natural language processing algorithms used in sentiment analysis often produce accuracy results of around 50%. These algorithms are not able to grasp the nuanced way we communicate in written text, with tonal subtleties like sarcasm, local slang and emojis.
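To see why shallow keyword matching falls short, consider a minimal sketch of a keyword-counting sentiment scorer. The word lists and scoring rule below are purely illustrative, not any production system's approach; the point is that a sarcastic complaint full of positive-sounding words gets rated positive.

```python
# Illustrative keyword-based sentiment scoring: count positive and
# negative words and compare. Word lists here are made up for the demo.
POSITIVE = {"great", "love", "excellent", "happy", "thanks"}
NEGATIVE = {"terrible", "hate", "awful", "angry", "broken"}

def keyword_sentiment(text: str) -> str:
    """Rate text positive/negative/neutral by counting keywords."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A sarcastic complaint: the counter sees "great" and "love" and calls
# it positive, while any human reader would rate it negative.
print(keyword_sentiment("Great, another outage. I just love waiting on hold."))
# prints "positive"
```

A human contributor, by contrast, reads the tone of the whole message, which is exactly the gap that keyword and bag-of-words approaches leave open.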
These inaccuracies are part of the reason that some decision-makers in business and government remain reluctant to seriously explore sentiment analysis. For a while now, executives have been told that they should complement their traditional sources of public opinion data, polls and focus groups, with public social media data from platforms like Twitter. Social media is unique as a source of public opinion in that it offers listeners real-time data: unlike traditional but lagging indicators, it lets organisations quickly gauge the public’s response to unfolding events. Until now, though, the technology on its own has simply not been able to accurately deliver meaningful insights.
The solution to these inaccuracies lies in augmenting AI with humans. Tech giants like Facebook and Google already employ thousands of human verifiers to improve accuracy, from labeling photographs to removing terrorist content. The human-verified data is then fed back into their algorithms, providing them with more accurate data to learn from.
At BrandsEye, we analyze sentiment contained in public social media posts by distributing the data from our algorithms to a crowd of human contributors who verify and enrich it. First, our crowd ensures the data is indeed relevant; second, that it’s accurately rated as negative or positive. The human contributors are also able to go a step further and categorize the topics in the data, so that clients can understand not just how the public feel but what issues are driving them to feel that way, be it queuing times in a retail bank or delivery service in e-commerce.
Although different in focus from most, our crowd platform is part of a growing trend in the tech world where companies large and small employ large groups of humans to improve the accuracy and ability of their AI algorithms. For now, instead of replacing human jobs, AI systems’ flaws have generated new work opportunities for everyone from cash-strapped students to retirees looking for a flexible source of income.