Emotion recognition technology is a code-based system that purports to identify a person's emotion by photographing their facial expressions and to act on that identification. Common users of this technology include staffing companies, rating companies, and security companies. The market is currently valued at about $20 billion and is expected to reach about $25 billion within roughly five years. The main problem with this technology is that it relies on a controversial theory about the concept of “emotion”.

Emotion is the way we interpret the reality around us; expressing emotion is the way we communicate those emotions to the people around us. Two main psychological approaches deal with the question of what emotions are: the first claims that emotions are evolutionary in origin and therefore universal and identical in all human beings. The second argues that emotion is culture- and context-dependent, and therefore not universal but perceived differently in different cultures.

American psychologist Paul Ekman is the main representative of the first approach. He defined six universal emotions based on studies he conducted among a remote, isolated tribe, arguing that these emotions do not depend on place or culture, and that we express them instinctively through micro-expressions: involuntary facial muscle movements lasting up to about a fifth of a second. The six emotions are joy, fear, disgust, surprise, sadness, and anger.
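To make the reductiveness concrete, here is a minimal Python sketch of how such a system typically encodes Ekman's categories: each emotion becomes a checklist of facial Action Units (the FACS muscle-movement codes this text returns to below), and every face must fall into one of the six boxes. The AU combinations are commonly cited EMFACS-style mappings, and the matching rule is an illustrative simplification, not any vendor's actual algorithm.

```python
# Commonly cited EMFACS-style mappings from FACS Action Units (AUs)
# to Ekman's six basic emotions (exact AU sets vary between sources).
EKMAN_EMOTIONS = {
    "joy":      {6, 12},                  # cheek raiser + lip corner puller
    "sadness":  {1, 4, 15},               # inner brow raiser, brow lowerer, lip corner depressor
    "surprise": {1, 2, 5, 26},            # brow raisers, upper lid raiser, jaw drop
    "fear":     {1, 2, 4, 5, 7, 20, 26},
    "anger":    {4, 5, 7, 23},            # brow lowerer, lid tightener, lip tightener
    "disgust":  {9, 15, 16},              # nose wrinkler, lip depressors
}

def classify(active_aus: set[int]) -> str:
    """Return the Ekman label whose AU checklist best matches the face."""
    def coverage(label: str) -> float:
        target = EKMAN_EMOTIONS[label]
        return len(active_aus & target) / len(target)
    return max(EKMAN_EMOTIONS, key=coverage)

# Whatever the face shows, the answer is one of six words.
print(classify({1, 2, 5, 26}))  # -> surprise
```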

The use of emotion recognition technology is becoming more and more common. It is in the process of being implemented in autonomous vehicles and digital assistants so that they can understand the mood of whoever is talking to them. The main concern is that Ekman's theory will eventually become the consensus on what emotion is.
The ‘Openface’ project is based on texts by Jacques Lacan describing the symbolic order, in which the subject defines his world through a set of categories given by language. The project tries to break through the framework of language and teach the machine a greater number of emotion categories.

By building a repository of facial expressions that expands every time someone sits in front of the project's camera, the system produces new definitions for emotions that do not exist in our language. The new emotions are composites of the facial expressions of different people of different races, genders, and ages, created as collages that teach the machine what the new emotion is.
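A hedged sketch of how such a growing repository could yield categories beyond the six labels: rather than matching each new face against a fixed emotion list, the system can cluster expression vectors and treat each dense cluster as a candidate emotion that has no name yet. The feature representation (Action Unit intensities), the clustering method, and all thresholds below are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each sitting is assumed to arrive as a vector of Action Unit
# intensities; the repository simply grows by one row per visitor.
repository: list[np.ndarray] = []

def add_sitting(au_intensities: np.ndarray) -> None:
    repository.append(au_intensities)

def discover_emotions(min_faces: int = 5) -> np.ndarray:
    """Cluster the repository; every dense cluster is a candidate
    emotion with no name in our language (label -1 means noise)."""
    X = np.stack(repository)
    return DBSCAN(eps=0.8, min_samples=min_faces).fit_predict(X)

# Two imaginary expression "types" that the six-label scheme
# would never distinguish:
rng = np.random.default_rng(0)
for center in (0.2, 0.8):
    for _ in range(20):
        add_sitting(center + 0.02 * rng.standard_normal(17))  # 17 tracked AUs

labels = discover_emotions()
print(len(set(labels) - {-1}), "unnamed emotion clusters")  # -> 2
```

The design point is that the label set stays open-ended: each new sitter can shift the clusters, so the machine's emotional vocabulary keeps growing instead of being fixed at six.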

In the combinations I asked the technology to perform, merging different faces into a single face expressing a certain emotion, I tried to demonstrate another limitation of the technology: it attends only to what is visible (the muscle movements it codes as 'Action Units') and not to the essence behind the physiological muscle action. It thus combines micro-expressions from the upper part of one person's face with micro-expressions from the lower part of another person's face into a certain emotion, with no reference to the original emotional state of either person behind the muscle actions.
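The collage logic can be written in a few lines, which is part of the point: once an emotion is reduced to a set of Action Units, nothing stops the system from splicing two people's muscle movements together. The sketch below continues the hypothetical classify example above; the upper/lower split of AUs follows the usual FACS grouping.

```python
# FACS groups Action Units roughly into upper-face (brows, lids)
# and lower-face (nose, lips, jaw) actions.
UPPER_FACE_AUS = {1, 2, 4, 5, 6, 7}
LOWER_FACE_AUS = {9, 10, 12, 15, 16, 17, 20, 23, 25, 26}

def collage(face_a: set[int], face_b: set[int]) -> set[int]:
    """Splice person A's upper-face actions onto person B's lower face."""
    return (face_a & UPPER_FACE_AUS) | (face_b & LOWER_FACE_AUS)

person_a = {1, 2, 5, 26}   # a genuinely surprised face
person_b = {6, 12}         # a genuinely joyful face

hybrid = collage(person_a, person_b)   # -> {1, 2, 5, 12}
print(classify(hybrid))                # -> surprise
# The classifier returns a confident label, with no reference to
# either person's original emotional state.
```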