A new study published in the Proceedings of the National Academy of Sciences finds AI-synthesized faces are indistinguishable from real faces and humans actually find them slightly more trustworthy. The results highlight not only the risks of deepfakes, but also a tremendous opportunity for researchers and businesses.

The study comprised three experiments. In the first, 315 participants classified 128 faces, drawn from a set of 800, as either real or synthesized. Their accuracy was 48%, essentially no better than a 50-50 guess.

In a second experiment, 219 new participants were trained to identify deepfakes and given feedback as they classified faces. This group classified 128 faces drawn from the same set of 800 – but despite the training, accuracy improved to only 59%.
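As a rough sanity check on those accuracy figures, a one-sample z-test against chance shows how far 48% and 59% sit from the 50% baseline. This is a back-of-the-envelope sketch, not part of the study itself: it assumes every classification is an independent trial (in reality, judgments by the same participant are correlated, so the true significance is weaker), and the trial counts simply multiply participants by the 128 faces each saw.

```python
from math import sqrt

def z_vs_chance(accuracy, n_trials, chance=0.5):
    """Normal-approximation z-score for an observed accuracy
    against a chance baseline, assuming independent trials."""
    se = sqrt(chance * (1 - chance) / n_trials)  # standard error under the null
    return (accuracy - chance) / se

# Experiment 1: 315 participants x 128 faces each (independence assumed)
z1 = z_vs_chance(0.48, 315 * 128)   # about -8: slightly *below* chance
# Experiment 2: 219 participants x 128 faces each
z2 = z_vs_chance(0.59, 219 * 128)   # about +30: training helps, but only to 59%
print(z1, z2)
```

Even under this generous independence assumption, the takeaway holds: untrained viewers perform at (or just below) chance, and training moves accuracy only modestly above it.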

Finally, a third group of 223 participants rated a selection of 128 of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy). This group rated synthetic faces slightly higher on average, 4.82, compared with 4.48 for real faces.

“We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley.

The results make clear that human intuition and discernment alone will not effectively combat deepfakes. Researchers should proactively focus on countermeasures and other tools and techniques that can help detect them.

While the results suggest deepfakes can be highly effective when used for nefarious purposes, they also highlight how effectively marketers can use AI-synthesized faces and tools to promote their services and products. The fact that humans find AI-synthesized faces more trustworthy than photos of actual humans suggests marketers might leverage this attribute to form a stronger bond between consumers and their marketing message. Already, companies like LG are using synthetic humans to promote their products. The study results also highlight the potential role AI-synthesized humans might play in the metaverse.

Nielsen recently announced that the majority of teens now own smartphones. Fifty-eight percent of teens now own a smartphone – up from 36 percent in 2011. What is perhaps most interesting is the strength of Android devices among this audience. According to the same data, 59 percent of teens adopting smartphones in the last three months acquired an Android smartphone. That compares with 33 percent for iOS.


Last month Phil Fersht, CEO of HfS Research, wrote about the changing (or in need of change) industry analyst business. Gideon Gartner’s post yesterday brought this to my attention, and it is certainly something I’ve thought a lot about. IIAR is holding a teleconference on this topic later this month. Here are a few thoughts on the points Phil makes:

1) Research needs to be dished in bite-size chunks: I’ve written frequently about the vast amount of information overrunning us all. This is a pronounced problem for industry analysts. One of the core value-adds industry analysts provide is a different view that allows research buyers to look outside of the box at the world in which they live. Today there are thousands of “voices” coming across blogs, Twitter, and other media-rich sources, which allow analysts within firms to hear different opinions without needing to buy from a research vendor. Analysts must add value above and beyond a fresh perspective.

2) There is nothing new in today’s research: Traditional research was/is largely focused on sizing an addressable market. But the companies I work with seem less concerned with sizing an addressable market perfectly – meaning they are increasingly comfortable working with some uncertainty around unit and revenue volume. The rise of several large research firms whose estimates are always very similar likely drives this. Moreover, such estimates feel distant in a world of instant information. A five-year horizon is great for large capital investments, but these have become less impactful today. The analysts who survive (and thrive) will spend increasingly more time teaching their clients what the world will resemble in shape as opposed to in size. This will involve more modeling and tools like scenario planning. The world is demanding a more analytical industry analyst.

3) The Courage (or Lack Thereof) to be Non-consensus: Related to #2. There is minimal upside for these firms to stray far from consensus – but this ultimately dilutes the value analysts bring to their clients. I cringe when I hear analysts say, “I’m trying to be conservative in my estimate.” My personal goal as an analyst is to be right – all the time, every time. When I find myself far from consensus, I flag the view as contrarian. I’m careful not to take a contrarian view simply to be non-consensus, but I do believe the ROI on non-consensus views will be more pronounced over the next five years.

4) Buyers don’t read research: The time of selling thick research reports is over. As point #1 already made clear, buyers want condensed, actionable information. A thick research report is the opening comment of a long conversation. While it might help the analyst formalize their thinking on a topic, clients today want open-ended conversation. The value-add of the successful industry analyst of tomorrow will be counted in minutes/hours, not pages. The best analysts are morphing into strategic advisors.

5) Research Needs Personality: I completely agree that successful research will be driven by personalities, not products.

Here are a few additional thoughts:

6) A Tidal Wave of Data: Hard data was once scarce, but that world is changing rapidly. We are increasingly faced with the opposite problem, and this problem of data availability (as opposed to the lack of data) will only get more pronounced over the next decade. Today there is a plethora of “things” analysts can measure and from which they can create metrics. Many of these metrics are answers in search of a problem. Some analysts will make a living “explaining” these metrics. To be sure, buyers will need much of this new data “interpreted,” but many of these new data sources will be little more than noise. That noise will lower the value of all metrics and data, which in turn will make some of the above points more important. The successful analyst will use data and metrics to provide a view their clients didn’t see elsewhere. There will be an important subtlety in how these data are used.

7) The Macro for the Micro: Analysts have specialized themselves into corners. The “lack of vision” discussed by Phil and others in the comments is fundamentally a lack of understanding about the broader surroundings. Too many firms show hockey-stick forecasts or tout an understanding of the hottest trends, but fail to explain where this growth is coming from or what the implications are for adjacent categories.

In passing, Phil writes, “At the end of the day, research is discretionary spend.”  Too many analysts overlook this simple tenet.

The demand for “metrics” is increasing. At the same time, data availability is accelerating. Moreover, survey software like SurveyMonkey has driven down the cost of, and increased access to, survey tools. In economic parlance, we’ve seen both supply and demand shift out. As the chart shows, the end result is a lower price and a much higher quantity.
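That supply-and-demand claim can be made concrete with a toy model. The numbers below are made up for illustration, not from the post; with linear curves, shifting both curves outward always raises equilibrium quantity, while price falls only when the supply shift dominates, which is the case described here.

```python
def equilibrium(a, b, c, d):
    """Equilibrium (price, quantity) for linear demand Qd = a - b*P
    and linear supply Qs = c + d*P; setting Qd = Qs gives P = (a-c)/(b+d)."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Baseline survey market (illustrative coefficients)
p0, q0 = equilibrium(a=100, b=2, c=10, d=3)   # P = 18, Q = 64
# Demand shifts out (a: 100 -> 120); supply shifts out more (c: 10 -> 60),
# reflecting cheap tools like SurveyMonkey flooding the market
p1, q1 = equilibrium(a=120, b=2, c=60, d=3)   # P = 12, Q = 96
assert p1 < p0 and q1 > q0  # lower price, much higher quantity
```

If the demand shift instead outpaced the supply shift, price would rise; the post's "lower price, higher quantity" conclusion implicitly assumes the tooling-driven supply expansion is the larger force.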

This is evident everywhere. Political and social-issue polling has increased with a 24-hour news cycle, cable news channels, more independent research institutions, and think tanks. Surveys have become commonplace. I receive a survey invite each time I stay in a hotel, attend an event, close an account, or do any number of other activities. These invites enter my inbox with subject lines like “your opinion counts,” “please share your feedback with us,” “your recent stay at Renaissance,” or “would you recommend Hertz?”