Archive for November, 2017

Zero Look and the upcoming tectonic shift in ad tech

Tuesday, November 21st, 2017

We are at the beginning of a tectonic shift in the ad tech ecosystem. For much of the last decade, ad tech companies have been fighting for “First Look”. This is about to change.

When a user launches an app, their presence creates an impression, which typically triggers an ad request. These days most ad requests end up in an auction environment, where thousands of advertisers can see the impression and bid for the ability to show the user an ad.
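The mechanics described above can be sketched in a few lines. This is a minimal illustration, not any particular exchange's implementation; it assumes a second-price rule, which is common in programmatic auctions (the highest bidder wins but pays the runner-up's price):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float  # bid price per thousand impressions

def run_auction(bids):
    """Second-price auction: the highest bidder wins the impression
    but pays the second-highest bid (or its own bid if unopposed)."""
    if not bids:
        return None  # no demand for this impression
    ranked = sorted(bids, key=lambda b: b.cpm, reverse=True)
    winner = ranked[0]
    clearing_price = ranked[1].cpm if len(ranked) > 1 else winner.cpm
    return winner.advertiser, clearing_price

# One impression arrives; three (hypothetical) advertisers bid on it.
bids = [Bid("acme", 2.40), Bid("globex", 3.10), Bid("initech", 1.75)]
print(run_auction(bids))  # → ('globex', 2.4)
```

The advertiser names and prices are invented for illustration; real exchanges layer floors, deals, and timeouts on top of this core loop.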

“First Look” refers to the chance to see an ad impression before anyone else. Conventional wisdom holds that First Look impressions are more valuable than impressions that have been grazed over by multiple buyers/advertisers. The value comes from timing efficiency and from the fact that the first buyer can snatch up valuable users before other advertisers even have a chance to show an ad.

Over the years, ad tech vendors have deployed multiple strategies to see impressions before the competition: static mediation chain adjustments, guarantees, private auctions, private deals, and now header bidding.

I believe that tomorrow’s confrontation is not going to be about “First Look”, but rather about what happens before the first ad request. Perhaps we can call this the “zero look” challenge. Publishers will make critical inventory decisions before initiating ad requests altogether.

As machine learning becomes more prevalent, publishers will be able to recognize their incoming users, create micro audience segments, and dynamically adjust the content and experience just for them. The result is active and intelligent user lifecycle management.

Publishers will be able to create bespoke experiences that are specifically tailored to particular users. They might change the design, manage game actions, adjust the content, add levels, and even manipulate pricing. At its core, this will also include choosing among the various monetization options: In-App-Purchase events, advertisements, and subscriptions.
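A minimal sketch of the idea in the two paragraphs above: recognize an incoming user, place them in a micro segment, and route them to a monetization option before any ad request is made. The segment names, thresholds, and monetization mapping here are entirely hypothetical; a real system would learn them from behavioral data rather than hand-code them:

```python
def segment(user):
    """Hypothetical rule-based micro-segmentation. A production system
    would replace these hand-picked thresholds with a learned model
    (clustering, propensity scoring, etc.)."""
    if user["purchases"] > 0:
        return "payer"
    if user["sessions_last_7d"] >= 5:
        return "engaged_non_payer"
    if user["sessions_last_7d"] == 0:
        return "at_risk"
    return "casual"

# Illustrative mapping from segment to monetization strategy — the
# "zero look" decision made before any ad request goes out.
MONETIZATION = {
    "payer": "in_app_purchase_offers",
    "engaged_non_payer": "rewarded_ads",
    "at_risk": "re_engagement_push",
    "casual": "interstitial_ads",
}

user = {"purchases": 0, "sessions_last_7d": 6}
seg = segment(user)
print(seg, MONETIZATION[seg])  # → engaged_non_payer rewarded_ads
```

The point of the sketch is the ordering: the publisher decides how to monetize this user first, and only then (if at all) initiates an ad request.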

Making these real-time decisions will bring publishers closer to the ultimate goal of holistic lifetime value management, and probably represents the next frontier for supply side platforms.


PS: Here are a few startups trying to tackle broad zero look challenges. Some approach it from a dynamic pricing perspective, and others focus on the fundamental building blocks of CRM and audience segmentation: mParticle, Wappier, Gamesparks, Clevertap, DeltaDNA, Game of Whales & Scientific Revenue

‘It’s able to create knowledge itself’: Google unveils AI that learns on its own

Sunday, November 5th, 2017

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go … with no human help.

Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, with no human help beyond being told the rules. In games against the 2015 version, which famously beat the South Korean grandmaster Lee Sedol the following year, AlphaGo Zero won 100 to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.

Match 3 of AlphaGo vs Lee Sedol in March 2016. Photograph: Erikbenson

“For us, AlphaGo wasn’t just about winning the game of Go,” said Demis Hassabis, CEO of DeepMind and a researcher on the team. “It was also a big step for us towards building these general-purpose algorithms.” Most AIs are described as “narrow” because they perform only a single task, such as translating languages or recognising faces, but general-purpose AIs could potentially outperform humans at many different tasks. In the next decade, Hassabis believes that AlphaGo’s descendants will work alongside humans as scientific and medical experts.

Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

David Silver describes how the Go playing AI program, AlphaGo Zero, discovers new knowledge from scratch. Credit: DeepMind

“It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,” said David Silver, AlphaGo’s lead researcher.

The program amasses its skill through a procedure called reinforcement learning. It is the same method by which balance on the one hand, and scuffed knees on the other, help humans master the art of bike riding. When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win. When it makes a bad move, it edges closer to a loss.
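The reward-driven loop described above can be illustrated with a toy example. This is not AlphaGo Zero's actual architecture (which pairs a deep neural network with tree search); it is a minimal tabular sketch of the reinforcement principle the article describes, with invented moves and win rates:

```python
import random

# Toy reinforcement learning: the agent keeps a preference score per
# move and nudges it toward the outcomes it observes, so moves that
# lead to wins are chosen more often over time.
prefs = {"move_a": 0.5, "move_b": 0.5}  # start with no opinion
ALPHA = 0.1  # learning rate

def play_and_learn(reward_for):
    # Pick a move in proportion to current preferences.
    move = random.choices(list(prefs), weights=prefs.values())[0]
    reward = reward_for(move)  # 1 for a win, 0 for a loss
    # Reinforce: shift the preference toward the observed outcome.
    prefs[move] += ALPHA * (reward - prefs[move])

# Suppose (hypothetically) move_a wins 80% of the time, move_b 20%.
random.seed(0)
for _ in range(2000):
    play_and_learn(
        lambda m: int(random.random() < (0.8 if m == "move_a" else 0.2))
    )

print(prefs)  # the better move ends up with the higher preference
```

After enough games the preferences approximate the true win rates, which is the sense in which a good move "is more likely to be rewarded with a win."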

At the heart of the program is a group of software “neurons” that are connected together to form an artificial neural network. For each turn of the game, the network looks at the positions of the pieces on the Go board and calculates which moves might be made next and the probability of them leading to a win. After each game, it updates its neural network, making it a stronger player for the next bout. Though far better than previous versions, AlphaGo Zero is a simpler program and mastered the game faster despite training on less data and running on a smaller computer. Given more time, it could have learned the rules for itself too, Silver said.
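The step where the network turns per-move scores into win probabilities is commonly done with a softmax. The snippet below is illustrative only; AlphaGo Zero's real network is a deep residual network, and the three scores here are made up:

```python
import math

def move_probabilities(scores):
    """Softmax: convert raw per-move scores into a probability
    distribution over candidate moves (higher score -> higher
    probability, all probabilities summing to 1)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores the network might assign to three candidate moves.
probs = move_probabilities([2.0, 1.0, 0.1])
print([round(p, 2) for p in probs])  # → [0.66, 0.24, 0.1]
```

Updating the network after each game means adjusting its weights so that these probabilities line up better with the moves that actually led to wins.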


Writing in the journal Nature, the researchers describe how AlphaGo Zero started off terribly, progressed to the level of a naive amateur, and ultimately deployed highly strategic moves used by grandmasters, all in a matter of days. It discovered one common play, called a joseki, in the first 10 hours. Other moves, with names such as “small avalanche” and “knight’s move pincer”, soon followed. After three days, the program had discovered brand new moves that human experts are now studying. Intriguingly, the program grasped some advanced moves long before it discovered simpler ones, such as a pattern called a ladder that human Go players tend to grasp early on.

AlphaGo Zero starts with no knowledge, but progressively gets stronger and stronger as it learns the game of Go. Credit: DeepMind

“It discovers some best plays, josekis, and then it goes beyond those plays and finds something even better,” said Hassabis. “You can see it rediscovering thousands of years of human knowledge.”

Eleni Vasilaki, professor of computational neuroscience at Sheffield University, said it was an impressive feat. “This may very well imply that by not involving a human expert in its training, AlphaGo discovers better moves that surpass human intelligence on this specific game,” she said. But she pointed out that, while computers are beating humans at games that involve complex calculations and precision, they are far from even matching humans at other tasks. “AI fails in tasks that are surprisingly easy for humans,” she said. “Just look at the performance of a humanoid robot in everyday tasks such as walking, running and kicking a ball.”

Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh, called AlphaGo Zero an “outstanding engineering accomplishment”. He added: “It closes the book on whether humans are ever going to catch up with computers at Go. I guess the answer is no. But it opens a new book, which is where computers teach humans how to play Go better than they used to.”

David Silver describes how the AI program AlphaGo Zero learns to play Go. Credit: DeepMind

While AlphaGo Zero is a step towards a general-purpose AI, it can only work on problems that can be perfectly simulated in a computer, making tasks such as driving a car out of the question. AIs that match humans at a huge range of tasks are still a long way off, Hassabis said. More realistic in the next decade is the use of AI to help humans discover new drugs and materials, and crack mysteries in particle physics. “I hope that these kinds of algorithms and future versions of AlphaGo-inspired things will be routinely working with us as scientific experts and medical experts on advancing the frontier of science and medicine,” Hassabis said.