Amazon’s foray into streaming live sporting events is about digital data. It was never about audience size, new activations, or subscriber retention.

Last night Amazon streamed their second NFL Thursday Night Football game. Bloomberg reported that Amazon's first NFL live stream last week attracted 372,000 viewers who watched the game for an average of 55 minutes, while 1.6 million viewers initiated streaming of the game at some point. That may feel like a paltry sum compared to the 14.6 million viewers who tuned in to watch the game on CBS or the nearly 500,000 who watched on other digital platforms like NFL.com, but Amazon's investment was never about audience size.

Amazon paid more than $50 million to air 11 Thursday night games this season. This was a significant increase from the $10 million Twitter paid to live stream 10 NFL Thursday Night Football games on their platform last year. Many have written that Amazon’s strategy here is to diversify content, gain new subscribers, and retain current subscribers. I argue this investment is purely about data.

Traditional media measures the success of these investments on a per-viewer basis, but Amazon’s investment is decidedly different. This isn’t about Amazon “catching-up” in terms of viewers tuning in. It isn’t about the lack of exclusivity “hurting” Amazon. This isn’t the litmus test to see if NFL games will move away from traditional broadcasting platforms. They won’t. Not anytime soon. The traditional broadcasting deals will be renewed when they begin to expire in 2021. Google, Facebook and other digital platforms won’t likely take over for ESPN, CBS, NBC, and Fox.

The games are only available to Amazon Prime members. Critics have suggested this limits Amazon’s potential audience size whereas Twitter made last season’s games available to individuals both with and without an account. But this experiment isn’t about driving new Prime memberships. Most Prime member households likely have access to some type of paid TV service so streaming the game isn’t their only watching option. And certainly not their default option. By making the games available exclusively to Prime members, Amazon is able to focus their data collection on those already using the platform.

Amazon can now build information about live sporting event audiences – something they've lacked until now. Amazon can now know what these households were watching before and what they turned to after. This information will help inform other content investment decisions. Not only will it make existing content investment decisions more data driven, but it might also help investment decisions in adjacent markets. For example, this could help inform Amazon's potential entry into the $2 billion annual U.S. pay-per-view (PPV) market. A market dominated by sporting events, mind you.

This is also an investment to hone Amazon's advertising platform. You don't hear much about it today because it's small compared to others like Google and Facebook. But in the coming years Amazon will seek to grow its ad revenue significantly. Barclays estimates Amazon generates around $1.4 billion a year in advertising revenue. As part of the agreement with the NFL, Amazon can sell a "limited number of advertisements" in addition to those aired by CBS and NBC. Amazon can charge millions for these 30-second spots. The nature of Amazon's digital platform enables them to specify who sees which commercials – the ultimate in ad customization.

Sure, Amazon is bringing other things to the digital table. The game will be available in 200 countries. There will be audio feeds in Portuguese, Spanish, and even an English-language feed positioned specifically for the UK market. Amazon also tied in Alexa with features like football trivia. But this is less about driving new audiences to the platform, connecting with cord-cutting millennials, or diversifying content. This is first and foremost an investment in data.

With a growing list of CEOs losing their jobs because of cyber attacks and data breaches, it's time for corporations to change how they operate. Escalating cyber risks necessitate two core corporate structural changes. First, the role of the CIO should be elevated, reporting directly to the CEO and gaining corporate board visibility. Second, corporate boards must form cyber risk audit committees. Today, boards maintain financial audit committees to monitor company financial risk, internal control processes, and oversight of financial reporting and disclosure processes. They must follow that model again in order to gain oversight for monitoring and disclosing relevant cyber threats.

The rise of cyber threats and attacks is a direct result of changing business environments. These changes are impacting every organization. As I wrote about in Digital Destiny, all businesses are digitizing and this changes business as usual. As Ginni Rometty, IBM's Chairman, President and CEO, put it, "data is the phenomenon of our time. It is the world's new natural resource. It is the new basis of competitive advantage, and it is transforming every profession and industry. If all of this is true – even inevitable – then cyber crime, by definition, is the greatest threat to every profession, every industry, every company in the world." But despite this sea change, corporations have not changed the way they operate. While organizations are making investments to mitigate cyber threats, they have yet to make sufficient operational changes. Consider research from the Ponemon Institute, which found corporations have 62 percent of their potential loss from Property, Plant & Equipment (PP&E) assets covered by insurance while insuring only 16 percent of the potential loss of information assets. Organizations have yet to operationalize cyber risk like they have other business risks.

The first structural change involves elevating the CIO to report to the CEO and gain direct visibility to the Board of Directors. CEOs are waking up to the fact that a major data breach will cost them their job and their reputation, and they will increasingly want more direct oversight of the CIO.

Historically, companies employed CIOs in one of two primary ways. They either focused on technology development and R&D initiatives within companies pursuing a product differentiation strategy OR they supported the CFO by helping to lower costs and gain efficiencies. The latter is the approach of companies seeking competitive advantage by lowering their cost structure. Management often has a dual mandate to find innovative ways to deliver value to customers AND to deliver value at a lower cost. But in almost all cases the latter mandate is stronger. Companies benefit by delivering value at a lower cost, and CIOs' roles evolved to focus on corporate efficiency and productivity gains. Reporting to the CFO forces CIOs to adopt a narrower financial focus and quantify the success of technological investment. Having a CFO-CIO reporting relationship tempers IT investment decisions, controls costs, and makes the organization IT-conservative relative to its peers. This is not the corporate philosophy you want when cyber risks are operational risks too.

Research from Gartner and the Financial Executives Research Foundation (FERF) estimates 42 percent of CIOs report to the CFO. For smaller corporations with revenue between $50 million and $250 million, the CIO or IT organization reports to the CFO 60 percent of the time. Past research finds CIOs are generally not included in strategic planning initiatives. In many cases, CIOs were brought on to implement and manage enterprise software solutions that most directly benefited the CFO, and as a result the CIO's primary strategic role developed as support for other departments within the organization.

Mounting cyber risks introduce new challenges for organizations. They must now maintain focus on risks outside of their core business. As a result, corporations need to operationalize cyber threats because of the direct impact they can have on the business and the management team. Moreover, cyber threats are outside the domain knowledge and professional interests of most CFOs. The role of the CIO is to support the strategic focus of the organization. The growing prevalence of cyber risks alters the strategic focus of every corporation, and the CIO will increasingly be tasked with oversight of this now strategically important silo. As companies digitize, cyber risks are viable threats to every business even though they are not direct operational risks. CIOs will need board visibility to request funding and outline strategic initiatives. Over time, the CIO report will become a standard component of quarterly conference calls that have historically focused on financial reporting and operational information. Managing cyber threats and risks will become a core business competency for every corporation. Few CIOs become CEOs, but the path to CEO will change in the future as a result of these new operational realities.

The second major structural change organizations must undertake is to institutionalize cyber threat monitoring at the corporate board level. Similar to today's audit committees, boards need to gain ongoing visibility and oversight for measuring and monitoring the company's cyber risk profile, internal cyber control processes, and cyber reporting and disclosure processes. This change will drive additional focus and funding. In the coming years the board of directors will gain greater oversight and greater influence.

Rising cyber risks necessitate changing corporate structures. Escalating risks make it time to elevate the CIO and to add direct monitoring and oversight by the board of directors. These are simple corporate hacks that will improve corporate management as we press further into new digital directions.

We are introducing the internet to more and more diverse places through the wide deployment of digitized, ‘sensor’ized, connected objects. Each new introduction is an experiment. And the question we are asking is “does the internet make sense here and if so, what are the use-case scenarios?”

When Apple announced the Apple Watch in September 2014, and released it in April 2015, most observers focused on the purported killer applications of the watch. The broader and more relevant question centered on the new use-case scenarios that would materialize as a result of having the internet on our wrists. After all, most users would have their smartphone with them so the question really was, “what does extending internet access by the length of our arm net us?”

With the introduction of cellular connectivity in the newest Apple Watch, announced earlier this month, the fundamental question we are now asking is, "what does having the internet on your wrist mean when you can be further separated from your phone?" Now, two years and several iterations removed from the original Apple Watch release, Apple has a much better sense of the use-case scenarios that are becoming relevant and meaningful.

Consider the first commercial for the Apple Watch.

https://youtu.be/AQZTVbmLZ8w

Apple highlighted the following applications in that first commercial spot:

  • Clock, Alarm, and time features of course
  • Maps
  • Airline tickets
  • Weather
  • Information status from other connected devices (electric car charging status)
  • Movie tickets
  • Payments
  • Calendar and appointments
  • Alerts for incoming calls, texts and other notifications
  • Messaging
  • Localized control for things like music
  • Fitness
  • Style
  • And a portfolio of diverse apps too numerous to list individually

Unsurprisingly, Apple threw everything at you in that first commercial to see what might resonate with you most. Fitness is actually one of the last shown in a long series of possible applications you might want to employ. Follow-on commercials in the months and years since, like the one below, highlight the device as little more than a highly customizable, stylistic wardrobe decision.

https://youtu.be/32Dzb5yxjL4


So how should we think about cellular service bringing internet access to the wrist? What are the use-case scenarios that will emerge? Make no mistake, cellular service on watches isn't for making calls despite what you might have seen. That Apple chose to demo this was simply to highlight what was now possible. But cellular connectivity in the watch, like cellular connectivity in an increasing array of digital devices, isn't about phone calls. We barely use our smartphones to make calls today. Why would we think that we are going to start making cellular calls from our wrists? Here are the data. While we spend some four hours a day on average on our smartphones, we spend the vast majority of that time using apps and a very small fraction making and taking calls.

[table id=1 /]

Now consider Apple’s most recent Apple Watch commercial. Gone are many of the use-case scenarios from their first commercial. Their most recent commercial focuses on health care and wellness related topics almost exclusively.

https://youtu.be/Kc0c_jQeo5E

The cellular connectivity capabilities of the Apple Watch exist to send medical information to health-related services and health care providers. That application isn't materializing immediately, but it will come. Apple is becoming a health care company. Others will follow. The entire industry will sway in that direction. It's Clay Christensen's "law of conservation of attractive profits." When attractive profits disappear at one stage in the value chain because it is commoditized, attractive profits will emerge elsewhere in the value stack. As profits are squeezed out of hardware, they move to adjacent areas like services.

More still needs to come to fruition. Battery life, for one, must be extended significantly. Sleep monitoring will be an important element of a holistic health assessment and the current battery life doesn't sufficiently support that use-case today. Not everything will be health care related immediately. And not everything needs to be health care related. There will be other features to bridge us in that direction, but over time health care related services and applications will become the dominant killer app for this suite of devices. Apple is moving us in that direction.

This week the FDA approved the first continuous blood sugar monitor for diabetics that doesn’t require backup finger prick tests. Now imagine throwing that into something that looks like a watch. This is our future. And this is why cellular connectivity will be meaningful.

We are starting to see viable applications of Augmented Reality (AR) that are improving operational efficiency and financial metrics. These applications will begin to scale across the industry in the coming years. Consider DHL's experience. After a series of pilots over the last three years, DHL is expanding the use of AR in warehouses. DHL is arming warehouse workers with smart glasses capable of providing visual displays of detailed order-picking instructions, including where the items are within the warehouse and where they need to be placed on the cart. Delivering relevant information through smart glasses frees warehouse workers from having to hold and manage physical pick lists printed on paper. Through the use of this "vision picking" technology, DHL saw productivity improvements of 15 percent, higher accuracy rates, and reduced training times.

DHL reports that about 50 percent of the total work hours spent in a typical warehouse are spent picking inventory. Roughly a third of DHL's costs are staff costs like wages, salaries, and compensation. Cutting staff costs by 20 percent would improve EBITDA margin by 80 percent, operating margins by 112 percent, and net profit margins by nearly 150 percent. According to the Bureau of Labor Statistics, there are some 950,000 U.S. workers employed in warehousing and logistics, so AR technology like this could drive significant industry-wide productivity gains while improving operating metrics for myriad companies.
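
The margin math works roughly like this. Below is a minimal sketch in Python; the baseline margins are illustrative assumptions chosen so the output roughly matches the improvements cited above, not DHL's reported figures.

```python
# Illustrative sketch: how a staff-cost cut flows through to profit margins.
# Baseline margins are assumptions, not DHL's reported figures.

revenue = 100.0                 # index revenue to 100
staff_costs = 32.0              # roughly a third of an assumed ~96 cost base
savings = 0.20 * staff_costs    # a 20 percent cut in staff costs

for name, baseline in [("EBITDA", 8.0), ("operating", 5.7), ("net", 4.3)]:
    improved = baseline + savings            # savings fall straight to each profit line
    relative_gain = (improved - baseline) / baseline * 100
    print(f"{name} margin: {baseline:.1f}% -> {improved:.1f}% "
          f"(~{relative_gain:.0f}% relative improvement)")
```

The point is simply that when margins are thin, even a modest absolute cost saving produces a large relative improvement.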

Over time, AR applications like "vision picking" will likely extend into other areas of the supply chain as well. Estimates suggest drivers spend between 40 and 60 percent of their time outside of distribution centers trying to locate the correct boxes within their truck instead of driving. AR applications could cut delivery times down significantly by providing drivers with necessary information at the time of delivery. We have only begun to scratch the surface of AR applications revolutionizing the supply chain.

You can see how DHL is implementing their “vision picking” technology in this video:  

https://youtu.be/CMwgXcPVAR8

To truly understand Artificial Intelligence, we have to understand where we are assigning it too much potential and where we aren’t giving it enough credit.

Artificial intelligence, or AI for short, began as a field of study in the 1950s, not long after the introduction of the first digital computer in 1942. In these early days we sought to develop intelligence that could rival that of humans. A superhuman digital intelligence that could reason and make decisions as we do but without the shortcomings inherent in human thought. We wanted to strip out the emotions, inconsistencies, and errors of human decision-making while accelerating the rate at which decisions could be made. Moreover, this type of omnipresent artificial intelligence would be applied broadly to the full range of decisions confronted by humankind. Today we refer to this type of artificial intelligence as "General AI" or "Artificial General Intelligence," but in those early days it was simply the type of intelligence we wanted to build.

In 1965 Herbert Simon, one of the early AI scientists, predicted “machines will be capable, within twenty years, of doing any work a man can do.” Simon missed the mark and this type of AI overzealousness at the time led to the long AI Winter of the 1970s and 1980s. Unrealizable expectations, an inability to commercialize and monetize General AI, and its seemingly slow progress all contributed. Technological innovation tends to move very slowly until suddenly it doesn’t.

Developments over the last decade in “deep learning,” using massive amounts of data to optimize decision engines with incredible accuracy, have reinvigorated interest in AI. AI research today primarily focuses on applying large amounts of data and computing power to narrowly defined domains. Deep learning is used to optimize single objective functions like “achieve checkmate,” “win Go,” or “maximize speech recognition accuracy.” In recent years we’ve seen significant progress in “Narrow AI” and that’s got us excited, and a little scared, that general AI, and specifically untethered general AI, is just around the corner. We find ourselves in one of those periods of seemingly sudden progress. But AI has been progressing since those early days in the 1950s. We’ve just tended to discount many of the advances made over the last six decades.

The earliest forms of narrow AI could do one thing really well, but couldn't learn to do anything better until they were reprogrammed and updated with new capabilities. In this way, your computer, myriad software, and even a basic calculator were very simple narrow AI systems. But we often overlook these developments as AI. We don't like to think that basic calculators are all that special or deserve to be considered intelligent. As roboticist Rodney Brooks put it, "every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" Or as computer scientist Larry Tesler noted, "Intelligence is whatever machines haven't done yet." This has become known as Tesler's Theorem. The "AI Effect" paraphrases this idea to say that "Artificial Intelligence is whatever hasn't been done yet." In other words, AI doesn't get any of the credit for advances made, but holds all of the expectations of developments yet to come.
The massive growth of digital information has created a situation ripe for AI applications. Today's AI advances are driven by machine learning coupled with massive computational processing power. We've shifted from trying to perfectly duplicate the logic of an expert to a probabilistic approach that offers some flexibility. Rather than defining all of the rules ex-ante, we apply statistical techniques to uncover the rules embedded in data.
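
A minimal sketch of that shift, using a hypothetical transaction-flagging task with synthetic data (the threshold, the labels, and the choice of scikit-learn are purely illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Ex-ante "expert" rule: flag any transaction over $500 as risky.
def expert_rule(amount: float) -> bool:
    return amount > 500

# Statistical approach: recover the boundary from labeled examples instead.
rng = np.random.default_rng(0)
amounts = rng.uniform(0, 1000, size=(200, 1))                        # synthetic transactions
labels = (amounts[:, 0] + rng.normal(0, 50, 200) > 500).astype(int)  # noisy labels

model = LogisticRegression().fit(amounts, labels)

print(expert_rule(480), expert_rule(520))            # brittle, hand-coded cutoff
print(model.predict_proba([[480], [520]])[:, 1])     # learned, probabilistic estimates
```

The task is the same in both cases, but the second approach uncovers the rule, and a measure of confidence, from the data rather than hard-coding it ex-ante.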

Machine learning is enabling previously simple AI systems to learn (i.e., improve) within their programmed field. Algorithms can identify new information from the inputs they receive and create new outputs. These AI systems get better at what they were intended to do, but they don't jump out of that domain. Digital assistants, for example, get better at deciphering speech, but that won't enable them to drive your car. While we call it learning, it is a narrow form of learning. So I can teach Alexa to play my favorite band or deliver me the nightly news, but I can't get Alexa to have a favorite band of her own or have an opinion on what to do about North Korea.

We are building discrete AI systems to be very good at discrete problems. These systems are inherently poor, by design, at autonomously learning new skills, adapting to changing environments, and ultimately outwitting humans. While general AI is still the goal for some, commercial forces will keep narrow AI applications the focus for many decades. Moreover, general AI will never be an outgrowth of narrow AI applications. Narrow AI applications are not designed to adapt their knowledge to other problems. These systems have programmed logic and parameters to make them best in class at solving a discrete set of problems. Narrow AI systems lack the context required of general AI environments. Interact with any digital assistant today and this is immediately and abundantly clear. And while the perception of context appears to be improving, we are far from general AI.

As Rodney Brooks noted, there is “a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.”

In practicality we don’t want AI systems that think like humans. We want hyper focused AI applications that are extremely proficient within a narrowly defined field. We want to solve discrete problems. The AI systems that will flourish in the years to come will decipher large amounts of data to solve previously difficult, but discrete problems. We’ve given too much potential to general AI, and not enough to the narrow AI applications that are changing how we live, work, and communicate.

Last week was the iPhone's 10th anniversary and there were dozens, maybe hundreds, of articles written articulating how the iPhone changed everything. Most narratives build off the now famous line in Steve Jobs's iPhone keynote:

…today, we’re introducing three revolutionary products of this class. The first one is a widescreen iPod with touch controls. The second is a revolutionary mobile phone. And the third is a breakthrough Internet communications device. So, three things: a widescreen iPod with touch controls; a revolutionary mobile phone; and a breakthrough Internet communications device. An iPod, a phone, and an Internet communicator. An iPod, a phone…are you getting it? These are not three separate devices, this is one device, and we are calling it iPhone. Today, Apple is going to reinvent the phone…

And reinvent the phone they did. But they did so much more than that. I was surprised not to see coverage of the paradigm shift that took place in the telecom industry as a result of the iPhone. This was Apple's true impact.

With the iPhone, Apple became the first to build a phone for the consumer. Prior to the iPhone, manufacturers built phones for the telecommunication carriers. Manufacturers built the hardware wireless carriers wanted. Prior to the iPhone, carriers held tremendous control over myriad aspects of hardware design – from the features included to the processing power offered. Carriers controlled pricing, hardware and software distribution, and service offerings from initial activation to replacement and repair.

Prior to the iPhone, carriers would develop services from the hardware specs for which they would charge consumers a premium. Most forget that the original iPhone was the first mobile phone with visual voicemail. Prior to the iPhone users had to call into their voicemail. With the iPhone we could now see voicemail messages visually on the device. This was a feature Cingular had to build for the iPhone. And for the first time, the carrier would not charge consumers extra for the new feature.

With the iPhone, Apple became the first to retain full control over the design of the hardware and its features. Apple decoupled hardware design from the carriers and the broader telecommunications industry. While Apple worked with Cingular for over two years leading up to the launch of the iPhone, the Wall Street Journal reported at the time that only three executives at Cingular saw the iPhone before it was announced. Look back at mobile phones prior to 2007 and in almost all cases you'll see the carrier's brand on the phone, but in the case of the original iPhone, Cingular agreed to leave their brand off the device. Circa 2007, carriers normally insisted mobile phone hardware include the carrier's software for things like ringtones (often a premium up-sell at the time) and surfing the Web, but that would not be the case with the original iPhone. Apple decoupled software distribution from the carriers and the telecommunications industry and would go on to sell music and other applications to consumers without going through the carriers' portals and platforms.

Apple also changed the distribution and activation of mobile phones. The original iPhone was available only through Apple and Cingular stores. At the time, this cut out carrier retail distribution partners like RadioShack. The original iPhone could be activated directly within iTunes as opposed to requiring activation within a wireless carrier's physical store or that of one of its distribution partners. Apple also wanted sole discretion over the decision to replace or repair phones – inserting themselves between the carrier and the customer on service support.

Apple broke the control carriers had over what hardware was made, what features were available, how both hardware and software were distributed to consumers and how much it would all cost. Apple built a product that consumers loved as opposed to one that carriers would approve – that’s what really changed the industry.


Last month Chief Justice John Roberts gave the commencement speech at Cardigan Mountain School, an independent junior boarding school in New Hampshire for boys in grades 6 through 9. The entire 12-minute speech was posted to YouTube and I've included the video below. The full talk is worth a quick listen. As someone who writes physical letters each week, it was nice to hear his advice to write letters. I most appreciated his advice to remain humble and grounded and to use life's diverse experiences to gain empathy over bitterness:

…commencement speakers will typically also wish you good luck and extend good wishes to you. I will not do that, and I’ll tell you why. From time to time in the years to come, I hope you will be treated unfairly, so that you will come to know the value of justice. I hope that you will suffer betrayal because that will teach you the importance of loyalty. Sorry to say, but I hope you will be lonely from time to time so that you don’t take friends for granted. I wish you bad luck, again, from time to time so that you will be conscious of the role of chance in life and understand that your success is not completely deserved and that the failure of others is not completely deserved either. And when you lose, as you will from time to time, I hope every now and then, your opponent will gloat over your failure. It is a way for you to understand the importance of sportsmanship. I hope you’ll be ignored so you know the importance of listening to others, and I hope you will have just enough pain to learn compassion. Whether I wish these things or not, they’re going to happen. And whether you benefit from them or not will depend upon your ability to see the message in your misfortunes.


https://www.youtube.com/watch?v=Gzu9S5FL-Ug


Bill Gates writes on Yuval Noah Harari's most recent book Homo Deus:

I am more interested in what you might call the purpose problem. Assume we maintain control [of robots and artificial intelligence]. What if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then? What challenges would we be inspired to solve?

In this version of the future, our biggest worry is not an attack by rebellious robots, but a lack of purpose.

Yes, all the current signs are promising. Despite the perception we might get from mass media, violence and war are at historical lows (and declining). Technological innovation should on the whole improve our quality of life and make significant progress in reducing inequalities in medical care, access to food, and education opportunities.

And still I struggle with the idea that we might solve the world's problems and reach a utopian state. While I don't subscribe to an apocalyptic future ruled by robotic overlords, I'm less convinced than Gates that we reach, let alone maintain, the higher state he imagines. Certainly the trajectory looks better than ever, but I wonder if we can ever close the gap without new chasms opening elsewhere.

The greatest force on this planet is our own agency and I believe one of the great life-long tasks before each of us is to control and direct our agency for good. In many ways, our purpose in life is simple. No amount of innovation will solve this "purpose problem." While the definition of "good" might change and evolve in subtle ways over time and we might use different tools to implement good, the basic precept will remain largely intact. Being more self-aware and less selfish will never be "solved."

Moreover, I'm not convinced we can successfully unite everyone along a shared and collective use of their individual agency. It would seem that there will always be those who seek outcomes that differ from what collective social norms might suggest is optimal. These might be age-old inclinations like selfishness and pride or could be more nefarious actions and acts. I think there will always be private deals designed to advance some while disadvantaging others. The tools we employ will evolve but inequalities of varying sorts will seemingly remain.

For me, the most interesting questions center on how to fulfill our purpose here while our lives become more mechanized and digitized. In all our discussion on a future filled with robots, automation, and artificial intelligence, this one seems to be missing the most. 

The idea of voice-first commerce is nothing new but it’s about to evolve in meaningful ways. 

We've long engaged in voice-first commercial transactions. Drive-through restaurants were built around voice-first commerce. While there is some debate on who was first, In-N-Out Burger's first restaurant, which opened in Baldwin Park, California in 1948, helped define the modern drive-through experience. That first drive-through-only restaurant had the now-familiar intercom ordering system and lacked both parking and an inside seating area. The model exploded and changed social norms in the process. Several years ago NPD estimated there were some 12 billion annual visits to U.S. drive-through windows and that figure has likely only gone up in the years since. For fast food burger restaurants, drive-throughs represent nearly 60 percent of total customer visits.

The early drive-through experience wasn't perfect. But it was convenient and as a nation we embraced it wholeheartedly. Over time we added displays and screens to improve the experience. The first displays depicted the options available. Secondary screens were added in more recent years displaying and confirming the order. Visually displaying the order overcomes the problems inherent in confirming it by reading it back to the consumer.

One could argue mail-order catalogs during the 1980s and 1990s, when voice was the primary method of placing orders, were an important voice commerce channel. They were arguably the second true voice commerce channel after drive-throughs, though technically not voice-first. Voice gave way to websites over the last decade. Shopping networks like QVC have also been an important voice commerce channel. But for the most part, from fast food drive-throughs to mail-order catalogs for part of their tenure to television shopping networks and infomercials, voice-first commerce has changed little over the last 70 years. With the release of the original Echo in 2014, Amazon ushered in the next iteration of voice-first commerce.

With the Echo, voice-first commerce is making greater inroads into the home. I see the modern era of voice-first commerce evolving in much the same way it did over the last few decades. Today's release of the Echo Show will play an important role in defining this evolution.

Last week I asked Alexa to order a refrigerator water filter for me. I needed a specific part number and Alexa picked it up perfectly. But given the complexity and exactness of the order, the voice confirmation wasn’t quite sufficient for me so I turned to my phone to ensure Amazon had identified the correct item. In my case there was no hand off from the Echo to the phone so I essentially had to search anew to confirm the item was the one I needed and then place the order. 

I imagine the Amazon Echo Show will help tremendously in a use-case scenario like the one I had last week. It should help drive up close rates and generally improve the user experience. As in my fast food example above, adding additional displays facilitated the voice-first commerce experience. There will be other use-case scenarios that emerge, of course. But I'm most interested to see the impact on voice-first commerce.

With the April jobs report out last Friday, there are a number of characteristics of the current labor market worth noting. Most of the headlines last week focused on stronger-than-expected employment gains and a decline in the unemployment rate that takes it down to 4.4 percent – the lowest since May 2007. The unemployment rate was also that low in March 2007, December 2006, and October 2006. The lowest the unemployment rate ever got during the last expansionary period (November 2001 through December 2007) was 4.4 percent. To find a lower unemployment rate one has to go back to the 1999-2000 period.

But beyond the headlines, the April report was weak around the edges. Much of the job growth materialized in low-wage sectors, cyclical areas like manufacturing and construction added few jobs, and growth in hourly earnings is slowing, at least temporarily.


Slowing wage growth in the face of extremely low unemployment is perplexing, if not concerning. There are a few possible reasons wage growth hasn’t been more robust even in the face of declining unemployment.

First, there might still be slack in the labor market. And more precisely, there might be slack in specific segments of the labor market. Broader measures of unemployment remain above pre-recession levels despite significant improvements through the economic recovery. The U-6 measure of unemployment includes not only those who are seeking work and can't find it but also workers who are marginally attached and those who are working part-time for economic reasons and would prefer to be working full-time.
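
For reference, here is a rough sketch of how the headline (U-3) rate and the broader U-6 rate are constructed. The counts below are made-up round numbers for illustration, not April's actual figures.

```python
# Illustrative construction of U-3 vs. U-6 (all counts in millions, made up).
labor_force = 160.0
unemployed = 7.0                       # actively looking, unable to find work
marginally_attached = 1.5              # want a job but not currently searching
part_time_economic_reasons = 5.3       # working part-time, would prefer full-time

u3 = unemployed / labor_force
u6 = (unemployed + marginally_attached + part_time_economic_reasons) / \
     (labor_force + marginally_attached)

print(f"U-3: {u3:.1%}   U-6: {u6:.1%}")   # U-6 is always the larger figure
```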

As of the April jobs report, 78.6 percent of prime-aged Americans are working – an improvement from the recession fallout but not quite back to pre-recession levels. This metric might also suggest slack remains in the labor market. The employment-to-population ratio of prime-aged women has almost returned to pre-recession levels. As the charts below illustrate, the employment-to-population ratio of prime-aged men fell most significantly during the recession and, while it has recovered significantly, it still remains about two percentage points below its pre-recession level.

[Charts: employment-to-population ratios for prime-aged men and women]

The split between part-time and full-time jobs depends to some degree on how you look at the statistics. We've added 2.5 million part-time jobs since December 2007. That's over ten percent of total part-time employment, but essentially all of that growth came during the recession. We've added 4.4 million net full-time jobs since December 2007, or roughly three percent of total full-time jobs. But as the chart below illustrates, we lost about 9 million jobs during the recession and we've gained over 13 million jobs since the recession ended. Most of the jobs we've added since the end of the recession are full-time jobs. The number of part-time jobs has changed little since the end of the recession. However, the number of full-time jobs in the labor market is about five times larger than the number of part-time jobs. We've added about 6.9 million jobs since December 2007, but over a third of them are part-time jobs. This mix could be one reason for slow wage growth and slow labor productivity gains.
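
The arithmetic behind that mix, using only the figures quoted above (a quick sketch, nothing more):

```python
# Net change in jobs since December 2007, in millions (figures from the post).
part_time_added = 2.5
full_time_added = 4.4
total_added = part_time_added + full_time_added   # ~6.9 million net new jobs

part_time_share = part_time_added / total_added
print(f"Total added: {total_added:.1f}M, part-time share: {part_time_share:.0%}")
# -> roughly 36 percent, i.e. over a third of the net jobs added are part-time
```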

[Chart: jobs lost during the recession and jobs gained during the recovery]

A second reason for slow wage growth is muted labor productivity. As workers become more productive, they become more valuable to potential employers and can therefore demand higher wages. Lackluster productivity growth is likely stymieing wage growth.

Finally, a third reason for slow wage growth could be that corporations just aren't paying out as much of their corporate gains to employees as they have in the past. Corporations are paying out a historically low share of their value-add. For much of the post-war period, corporate compensation as a share of corporate GDP hovered around 63 percent. It began to decline in the early aughts and has yet to fully recover. In recent months it has improved slightly; however, corporate compensation as a share of corporate value-add remains muted. As a result, employees are garnering a smaller share of corporate value-add.


Digitization and automation might also be at play in some of what we are seeing materialize in the labor market. Research by David Autor suggests high-skill, high-wage employment growth decelerated in the 2000s. The historical U-shaped growth profile exhibited in prior periods gave way to a downward sloping curve. Autor also found that occupations losing employment share increasingly come from higher ranks of the skill distribution. For example, the highest ranked occupation to lose employment share during the 1980s was at the 45th percentile but moved to the 75th percentile in subsequent time periods. This phenomenon suggests displaced middle-skill employment is moving into higher-skilled areas. If workers who were historically in higher-skill, higher-wage jobs weren't able to gain comparable employment because large components of their jobs were digitized and automated, they would fall to low-skill, low-wage jobs. Increased competition for low-skill, low-wage jobs would keep wage growth down. However, if this were the case, then increases in digitization and automation would be accompanied by higher growth rates in capital expenditures as well as labor productivity – both of which remain relatively muted.