Over several years I worked with a team at Microsoft Research to add an animated, expressive face to the artificial intelligence they were developing. The goal was to use facial expression to enhance communication while avoiding the "uncanny valley." I built the models, created all the micro expressions such as eyebrow lifts and squeezes, and wrote the C# code to combine these into expressions that could be driven by MSR's digital assistant code.
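The production code was written in C#, but the core idea of combining micro expressions can be sketched briefly. In this hypothetical Python example, each micro expression is a set of blend-shape weights, and a full expression is a clamped, weighted mix of several micro expressions (the names and weights here are invented for illustration):

```python
# Hypothetical micro-expression library: each maps blend-shape names
# to target weights in [0, 1].
MICRO = {
    "brow_lift":   {"browInnerUp": 0.8, "browOuterUp": 0.6},
    "brow_squeeze": {"browDown": 0.7, "noseSneer": 0.2},
    "smile":       {"mouthSmile": 0.9, "cheekSquint": 0.3},
}

def blend(recipe):
    """Combine micro expressions (name -> intensity 0..1) into one pose."""
    pose = {}
    for name, intensity in recipe.items():
        for shape, weight in MICRO[name].items():
            pose[shape] = pose.get(shape, 0.0) + weight * intensity
    # Clamp each blend-shape weight to the valid [0, 1] range so that
    # overlapping micro expressions never over-drive a shape.
    return {s: min(1.0, max(0.0, w)) for s, w in pose.items()}

# A "concerned" look mixes a partial brow lift with a brow squeeze.
concerned = blend({"brow_lift": 0.5, "brow_squeeze": 0.6})
```

Driving code (the digital assistant, in the original system) would then only need to name an expression recipe rather than manipulate individual blend shapes.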
This is "Natural Communication about Uncertainties in Situated Interaction," a research project by Tomislav Pejsa that I collaborated on at Microsoft Research (co-authors include Michael Cohen, Dan Bohus, Nick Saw, and Eric Horvitz). The paper was presented at the ACM International Conference on Multimodal Interaction.
Much of the early work was writing code to create a sense of intelligence behind the character, even when she was just idling...
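One common way to get that sense of life during idle time is to schedule small, randomized behaviors such as blinks and gaze shifts at humanlike intervals. This is a hypothetical Python sketch of that idea, not the original C# implementation; the timing ranges and action names are assumptions:

```python
import random

def idle_schedule(duration, seed=None):
    """Return a list of (time_sec, action) events for `duration` seconds
    of idling: mostly blinks, with occasional gaze shifts mixed in."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < duration:
        # Assumed humanlike spacing: one idle behavior every 2-6 seconds.
        t += rng.uniform(2.0, 6.0)
        action = "blink" if rng.random() < 0.7 else "gaze_shift"
        events.append((round(t, 2), action))
    return events
```

Layering a few independent schedules like this (blinks, gaze, weight shifts, breathing) is enough to keep a character from looking frozen between interactions.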
Mike 1.0 was our first version: the tricky part was making him look somewhat realistic, yet cartoonish enough not to fall into the dreaded uncanny valley.