Google @Stadia / @ATAP
Technical Art Director / Senior Technical Artist

I worked as a Technical Art Director, first through Massive Black and later as a contractor for Google directly, on a number of research projects run by the Stadia Starlab and Google's Advanced Technology and Projects (ATAP) teams.

These projects focused on applications of mobile AR, machine learning, and streaming within the gaming sphere. The projects below are the few that have been shown publicly.

PROJECT VERBWORLD


Project VERBWORLD allowed the player to communicate with a fox avatar through speech or text. Using semantic AI, the fox and its environment would then respond to the player.

I established the visual style for this project and created all of the 3D assets, particle effects, dynamic weather system, and post-processing materials in Unreal. An external partner created the fox animations under my direction.

Read more about this project here:
https://stadia.dev/blog/creating-game-ai-using-mostly-english/

PROJECT CHIMERA

Years before diffusion models became all the rage, PROJECT CHIMERA let players combine almost any animals imaginable, in unlimited combinations, to create new, unique hybrid creatures.

To accomplish this, I created a custom dataset of hand-authored animal models in various poses. I then created a system in Unreal Blueprint to automate rendering those animals and poses from a variety of camera angles and lighting conditions that emulated the rules of composition one would traditionally follow for an illustration. Each image was also rendered with a paired ‘segmentation map’ that labeled areas as ‘wings’ or ‘arms’, etc. A rough sketch of this batching logic follows below.
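For illustration only, here is a minimal Python sketch of how a batch-rendering loop like this might be organized. The render_view and render_segmentation functions, the example animal and pose names, and the specific camera and lighting values are all hypothetical stand-ins for the actual Unreal Blueprint logic, not the real pipeline.

import itertools
import json
from pathlib import Path

def render_view(model, pose, yaw, pitch, lighting, out_path):
    # Hypothetical stand-in for the Blueprint render call; not the actual pipeline code.
    ...

def render_segmentation(model, pose, yaw, pitch, out_path):
    # Hypothetical stand-in for rendering the paired segmentation map (wings, arms, etc.).
    ...

MODELS = ["lion", "eagle", "koi"]        # example entries; the real set was hand-authored
POSES = ["stand", "crouch", "leap"]
CAMERA_YAWS = [0, 45, 90, 135, 180]      # example angles meant to mimic illustration framing
CAMERA_PITCHES = [-10, 0, 20]
LIGHTING_SETUPS = ["key_left", "key_right", "rim"]

def build_dataset(out_dir="dataset"):
    # Enumerate every model/pose/camera/lighting combination and render a
    # beauty image plus its paired segmentation map, recording both in a manifest.
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    manifest = []
    combos = itertools.product(MODELS, POSES, CAMERA_YAWS, CAMERA_PITCHES, LIGHTING_SETUPS)
    for i, (model, pose, yaw, pitch, light) in enumerate(combos):
        beauty = out / f"{i:07d}_beauty.png"
        seg = out / f"{i:07d}_segmentation.png"
        render_view(model, pose, yaw, pitch, light, beauty)
        render_segmentation(model, pose, yaw, pitch, seg)
        manifest.append({
            "image": beauty.name, "segmentation": seg.name,
            "model": model, "pose": pose,
            "yaw": yaw, "pitch": pitch, "lighting": light,
        })
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    build_dataset()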


Once the dataset was complete, it was handed off to a team of AI specialists to train ML models. The final dataset used 800,000 image pairs. Throughout the project, we generated over 4 million image pairs.


Read more about this project here:
https://ai.googleblog.com/2020/11/using-gans-to-create-fantastical.html
