Artificial Intelligence Learns Human Sentiment
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): Conatus News/Uncommon Ground Media Inc.
Publication Date (yyyy/mm/dd): 2017/04/19
According to OpenAI, artificial intelligence (AI) has learned sentiment. It cannot express sentiment, but it can read it.
The system has been termed an “unsupervised sentiment neuron.” It develops a good representation of sentiment solely by predicting the next character in the text of Amazon reviews.
“A linear model using this representation achieves state-of-the-art sentiment analysis accuracy on a small but extensively-studied dataset, the Stanford Sentiment Treebank (we get 91.8% accuracy versus the previous best of 90.2%), and can match the performance of previous supervised systems using 30-100x fewer labelled examples.”
There appears to be a sentiment neuron within the system that contains most of the signal relevant to sentiment. It is reported as a by-product of the large neural network, a property that emerges from the structure and nature of such networks.
“We first trained a multiplicative LSTM with 4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text. Training took one month across four NVIDIA Pascal GPUs, with our model processing 12,500 characters per second.”
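To make “predicting the next character” concrete, here is a minimal sketch of a character-level LSTM language model in PyTorch. It is an illustration only, not OpenAI’s implementation: their model was a multiplicative LSTM with 4,096 units trained for a month on 82 million reviews, whereas this toy version uses a small standard LSTM and a short example string.

```python
# Toy character-level next-character prediction with a standard LSTM (PyTorch).
# Assumption: this is a small stand-in for OpenAI's much larger multiplicative LSTM.
import torch
import torch.nn as nn

text = "this product is great. this product is terrible. "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 16)
        self.lstm = nn.LSTM(16, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        # Returns logits over the next character plus the hidden states,
        # which later serve as the "representation" of the text read so far.
        return self.head(h), h, state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ids = torch.tensor([[stoi[c] for c in text]])   # shape (1, length)
inputs, targets = ids[:, :-1], ids[:, 1:]       # target = the next character

for step in range(200):
    logits, _, _ = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the hidden state the model carries while reading a review is the learned representation on which the sentiment classifier is then built.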
These learned representations were used as the foundation for a sentiment classifier: the model’s units were combined linearly, each with its own weight. Weighting means giving more or less value to something; if X is weighted more heavily than Y, then X contributes more to the result than Y.
“While training the linear model with L1 regularisation, we noticed it used surprisingly few of the learned units. Digging in, we realised there actually existed a single “sentiment neuron” that’s highly predictive of the sentiment value.”
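As a rough illustration of that second stage, the sketch below fits an L1-regularised linear classifier on synthetic stand-in features using scikit-learn. The feature and label arrays are placeholders invented for the example, not the paper’s data; the point is that the L1 penalty drives most weights to exactly zero, which is how a single dominant “sentiment neuron” can reveal itself.

```python
# L1-regularised linear probe over language-model hidden units (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))        # stand-in for LSTM hidden states
labels = (features[:, 3] > 0).astype(int)    # pretend unit 3 carries sentiment

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(features, labels)

# The L1 penalty zeroes out most coefficients, so the few units the
# classifier actually relies on stand out.
print("units with non-zero weight:", np.flatnonzero(clf.coef_[0]))
```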
Sentiment, in other words, became largely predictable from a single value. This neuron can classify Amazon reviews as positive or negative, and its value updates dynamically, “on a character-by-character basis,” as the model reads the text.
The sentiment neuron within their model can classify reviews as negative or positive, even though the model is trained only to predict the next character in the text. (Image: blog.openai.com)
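To make the idea of a single, character-by-character sentiment value concrete, here is a small self-contained sketch. The array of hidden states is random stand-in data and the chosen unit index is arbitrary; in the real system the states would come from the trained language model and the unit would be the one identified by the L1 probe above.

```python
# Reading sentiment off one hidden unit, character by character (illustrative).
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the model's hidden state after each of 40 characters (64 units).
hidden_states = np.cumsum(rng.normal(size=(40, 64)), axis=0)
SENTIMENT_UNIT = 3   # placeholder index; the real unit is found empirically

trace = hidden_states[:, SENTIMENT_UNIT]   # one sentiment value per character
label = "positive" if trace[-1] > 0.0 else "negative"
print(label, round(float(trace[-1]), 3))
```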
Typically, computers, algorithms, and AIs need big, labelled datasets to sift through for self-learning. Unsupervised learning is different, and this AI can do it: it learns a good representation of a dataset, which can then be used to “solve tasks using only a few labelled examples.”
According to the researchers, the finding “implies that simply training large unsupervised next-step-prediction models on large amounts of data may be a good approach to use when creating systems with good representation learning capabilities.”
The researchers concluded that, beyond this particular case of unsupervised learning, the capacity for “general unsupervised representation learning” could become a reality. “Our results suggest that there exist settings where very large next-step-prediction models learn excellent unsupervised representations. Training a large neural network to predict the next frame in a large collection of videos may result in unsupervised representations for object, scene, and action classifiers.”
License
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.
Copyright
© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
