Artificial intelligence and machine learning are bringing about fundamental changes to the ways in which humans interact. It’s an exciting future in which digital layering can enhance human connectivity. But how can we use it to fuel creative experiences, and what benefits does it hold for customer experiences?
AI is already great at performing specific tasks, such as facial and voice recognition, object tracking, or even transposing your face onto someone else’s body. Thanks to advances in deep learning, computers are starting to learn and frame reality visually in the same way humans perceive it.
AI is typically deep rather than broad, so the trick is combining these skills effectively. As part of a creative toolkit, it can augment an existing idea or technique, or make mundane tasks more efficient.
Fascinating examples of what is possible include DeepMind’s work using AI to learn to write programs that generate images; Microsoft’s Starship Commander, which uses voice and intent to control the narrative; and Intel’s sensational light shows, which place drones in the sky like pixels on a screen.
Powerful creative tool
More controversial was the use of TensorFlow, Google’s machine learning framework open-sourced in 2015, to analyse a person’s face from multiple images on a social feed and map the likeness onto another video, frame by frame. A similar technique was used in Star Wars to bring back the 1977 version of the late Carrie Fisher, a young Sean Young for Blade Runner 2049, and a young Kurt Russell in Guardians of the Galaxy Vol. 2.
With an individual’s and content owner’s explicit permission, this could be a powerful creative tool for live experiences that would wow audiences. Imagine being able to appear in a classic Bullitt scene, with or as Steve McQueen, as Neo or Agent Smith in The Matrix, performing a death-defying stunt in Mission Impossible, or starring as a superhero in Guardians of the Galaxy. It doesn’t have to be as personal as your face: the same technique could map your own designs or characteristics onto a physical product, such as a pair of trainers or an item of clothing.
The physical experience itself doesn’t have to include any visible technology at all. The Google Arts & Culture app, for example, helps museum visitors find their lookalike in nearby paintings, sculptures and artworks.
Another example is using the iPhone X’s depth-sensing camera for live facial capture, coupled with an Xsens suit for full-body motion capture. This strange but entertaining demo shows the potential. Technology previously available only to blockbuster movie studios is suddenly available to everyone.
Techies are experimenting with voice to personalise experiences and with natural language to improve chatbot engagement. They are also using AI to prototype and build physical experiences in virtual and augmented reality as part of ideation, literally walking through the experience, which fast-tracks decision-making time-frames and client sign-off.
Until now, the discussion of AI for events has focused on marketing and customer engagement. Even more fascinating is augmenting the creative process itself: building a powerful, accessible toolkit to inspire live experiences and content that is truly immersive and personalised, as well as genuinely entertaining.
We are on the cusp of a revolution in what we can achieve in the field of amazing, immersive, personalised experiences. In the future, the ‘intelligence’ may be artificial and the ‘reality’ virtual, but the impact on creativity is very real indeed.
Glenn van Eck is CEO of events company, Magnetic Storm.
Want to continue this conversation on The Media Online platforms? Comment on Twitter @MediaTMO or on our Facebook page. Send us your suggestions, comments, contributions or tip-offs via e-mail to firstname.lastname@example.org.