Concern has surrounded Amazon’s Rekognition technology, with fears that it could be used in an authoritarian and inaccurate manner by governments and police. However, new research from The Center for Data Innovation, a nonprofit and nonpartisan research institute, indicates that Americans are opening up to the use of facial recognition technology, but only if it’s accurate. “People are often suspicious of new technologies, but in this case, they seem to have warmed up to facial recognition technology quite quickly,” the center’s director, Daniel Castro, told Nextgov.
Read More: Shareholders tell Amazon to stop selling Rekognition facial recognition tech to govt
With so many uses, it appears that facial recognition can do a lot more than unlock iPhones and fight crime. This rising technology is now on track to transform customer experiences through hyper-personalization. In 2018, Coty created a VR experience to help users pick perfume, while its Clairol brand partnered with Snapchat to allow customers to try on different hair colors. Tech and beauty are now collaborating, with a lot of opportunity on the horizon.
We spoke to Taleb Alashkar, an AI and facial recognition wizard and CTO of Algoface.ai, who is currently working on a platform that uses AI to give makeup recommendations based on your skin tone, eyes, and facial shape. The technology is trained by professional makeup artists, creating an accurate and hyper-personalized experience. We spoke with Alashkar to get a better understanding of what makes our faces and our expressions unique, and of the most difficult aspects of conducting this research.
I understand you have “Experience in Face Analysis, Face recognition, Facial Expression classification.” Technically, what is the hardest part of classifying facial expressions?
The hardest problem in correctly classifying facial expressions isn’t very far from the common AI bias problem, which is diversity and inclusion.
Since facial expression recognition is a supervised learning problem, the diversity and richness of the dataset used for training are really important. Including faces from different ethnic and cultural backgrounds is vital for a universal facial expression recognition system.
Besides the issue of diversity, which is a common problem in AI, facial expression recognition has its own challenges, such as giving the AI system the common sense to distinguish the right emotion from an expression in context. For example, some people might start crying when they are happy; other people may smile when they hear very painful news, or they might smile at something that doesn’t make any sense as a type of sarcasm.
I think we need to move from recognizing facial expressions to interpreting emotions, which is a different problem.
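Alashkar’s framing of expression recognition as supervised learning can be illustrated with a minimal sketch. The example below is purely hypothetical, not Algoface.ai’s method: it labels toy feature vectors (made-up stand-ins for real facial measurements such as mouth-corner lift or eye openness) with expression labels, then classifies a new face by its nearest labeled examples. It also shows why dataset composition matters: the classifier can only ever predict expressions that are represented in its training data.

```python
from math import dist

# Toy labeled dataset: (feature_vector, expression_label) pairs.
# The two features are hypothetical stand-ins for real facial
# measurements, scaled to [0, 1].
TRAIN = [
    ((0.9, 0.7), "happy"),
    ((0.8, 0.6), "happy"),
    ((0.1, 0.2), "sad"),
    ((0.2, 0.1), "sad"),
    ((0.5, 0.9), "surprised"),
    ((0.6, 0.8), "surprised"),
]

def classify(features, k=3):
    """k-nearest-neighbors: label a face by majority vote among
    the k training samples closest to its feature vector."""
    neighbors = sorted(TRAIN, key=lambda s: dist(s[0], features))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(classify((0.85, 0.65)))  # close to the 'happy' samples
```

A real system would use a deep network over face images rather than hand-picked features, but the supervised structure is the same, which is why faces missing from the training set lead directly to the bias problem Alashkar describes.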
Following on from this, from what you have learned, do you believe that facial expressions are heavily influenced by culture or are they generally universal?
This has been an open question in AI for a long time. Even though I believe there is some similarity in how certain facial expressions are conveyed globally, which is the basis on which Black and Yacoob (1997) built their encoding system for the six standard facial expressions [1], the emotion behind such an expression might be different. People from different cultures might attach different meanings to a smiling face depending on the context, for example.
So if we need an AI-based system that can interact with humans, we need to think beyond classifying the facial expression, which is just how our facial muscles move, toward the right interpretation of that expression.
Some aspects of this technology are likely to attract controversy; however, there are still many practical applications that go far beyond conventional uses of facial recognition. As the public begins to warm up to the idea of this technology, we are likely to see a growing number of startups innovating in this area.
Disclosure: This article includes a client of an Espacio portfolio company