IQ tests usually have three categories of questions:
1. Logic questions;
2. Mathematical questions; and
3. Verbal reasoning questions.
The authors claim their custom-built deep learning system outperforms the average human at answering verbal reasoning questions.
Of course, data mining techniques have long been used to analyze massive corpora of text, finding word links and producing statistics of word patterns. Words are translated into vectors so that they can be mathematically compared, added, and subtracted like other vectors; Google, for example, uses this method in automatic language translation.
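The vector arithmetic described above can be sketched in a few lines. This is a toy illustration, not the authors' model: the 3-dimensional embeddings below are hand-made values chosen to make the example work, whereas real systems learn vectors with hundreds of dimensions from large corpora.

```python
import math

# Hypothetical toy embeddings (hand-made, purely illustrative).
emb = {
    "king":  [0.8, 0.9, 0.1],
    "queen": [0.8, 0.1, 0.9],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# The classic example: "king" - "man" + "woman" should land near "queen".
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max((w for w in emb if w != "king"), key=lambda w: cosine(emb[w], target))
print(best)  # queen
```

The point is only that once words are vectors, semantic questions become geometric ones: comparison is cosine similarity, and analogy becomes vector addition and subtraction.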
The problem with this approach is that it assumes each word has a single meaning represented by a single vector, which is not always the case. The authors' solution is to take each word, find the other words that often appear near it in a corpus of text, run a clustering algorithm over those neighboring words, and then look up the word's different meanings in a dictionary to match each cluster to a meaning. This method therefore recognizes the multiple senses that some words can have.
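The sense-clustering idea can be sketched as follows. The data and the clustering step here are both simplified stand-ins (a one-pass assignment to seed contexts rather than a real clustering algorithm), chosen only to show the shape of the approach for an ambiguous word like "bank".

```python
# Hypothetical contexts in which the ambiguous word "bank" was observed.
contexts = [
    {"money", "deposit", "loan"},      # financial sense
    {"loan", "interest", "account"},   # financial sense
    {"river", "water", "fishing"},     # geographical sense
    {"water", "shore", "river"},       # geographical sense
]

def jaccard(a, b):
    # Overlap between two context word sets.
    return len(a & b) / len(a | b)

# Seed each cluster with one context, then assign every context to the
# more similar seed -- a crude stand-in for a real clustering algorithm.
seeds = [contexts[0], contexts[2]]
clusters = {0: [], 1: []}
for ctx in contexts:
    nearest = max((0, 1), key=lambda i: jaccard(ctx, seeds[i]))
    clusters[nearest].append(ctx)

# Each cluster can now be matched against dictionary glosses, e.g.
# "financial institution" vs. "side of a river", giving one vector per sense.
print(len(clusters[0]), len(clusters[1]))  # 2 2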
Proposed is a novel framework consisting of three components:
(i) the first component is a classifier that aims to recognize the specific type of a verbal comprehension question;
(ii) the second component leverages a novel deep learning technique to co-learn the representations of both word-sense pairs and relations among words (or their senses);
(iii) the last component is comprised of dedicated solvers, based on the obtained word-sense pair representations and relation representations, for addressing each of the specific types of questions.
See paper here.
Intelligence Quotient (IQ) Test is a set of standardized questions designed to evaluate human intelligence. Verbal comprehension questions appear very frequently in IQ tests, which measure human's verbal ability including the understanding of the words with multiple senses, the synonyms and antonyms, and the analogies among words. In this work, we explore whether such tests can be solved automatically by artificial intelligence technologies, especially the deep learning technologies that are recently developed and successfully applied in a number of fields. However, we found that the task was quite challenging, and simply applying existing technologies (e.g., word embedding) could not achieve a good performance, mainly due to the multiple senses of words and the complex relations among words. To tackle this challenge, we propose a novel framework consisting of three components. First, we build a classifier to recognize the specific type of a verbal question (e.g., analogy, classification, synonym, or antonym). Second, we obtain distributed representations of words and relations by leveraging a novel word embedding method that considers the multi-sense nature of words and the relational knowledge among words (or their senses) contained in dictionaries. Third, for each specific type of questions, we propose a simple yet effective solver based on the obtained distributed word representations and relation representations. According to our experimental results, our proposed framework can not only outperform existing methods for solving verbal comprehension questions but also exceed the average performance of human beings. The results are highly encouraging, indicating that with appropriate uses of the deep learning technologies, we could be a further step closer to the true human intelligence.