“Design is not just what it looks like and feels like. Design is how it works.”

– Steve Jobs


With the ability to quickly create working AI prototypes that intelligently move, see, hear, and speak, designers can go beyond the clichés of AI and bring collaborative systems into people’s lives that are more humane, more personal, maybe even more inspiring.

Research Fellowship in Delft, Netherlands

In Fall of 2017, I’ll be in residence at TU Delft on a research fellowship to develop concepts and prototypes of a tool for designing smart, collaborative things. Funded by Design United, I’ll be working with Professor Elisa Giaccardi (department of Industrial Design Engineering) to start developing this visual programming environment for the interaction design of AI.

This summer I’m conducting a research project on new approaches for AI in a Mixed Reality context. The project extends my work with Animistic Design, but takes a different approach to AI embodiment, integrating virtual AI entities with “Real Reality.”

I’m working with five ArtCenter students from our Media Design Practices program: Stephanie Cedeño, Xing Lu, Godiva Reisenbichler, Nan H Tsai, Nicci Yin.

Note: The project was completed in the summer of 2017. Initial work is summarized here:

Project Description

This 2017 summer research project will explore how AI-based, non-anthropomorphic animistic entities could work as colleagues and collaborators in Mixed Reality.

In a newly published paper, “Animistic design: how to reimagine digital interaction between the human and the nonhuman” (Digital Creativity – Special Issue: Post-Anthropocentric Creativity), my co-author Betti Marenko and I argue that a new model is needed for design in an Internet of Things world. We think it’s time to rethink the standards of Human Centered Design, AI, and interaction design, especially for open-ended, creative contexts, whether that’s directing a self-driving car, planning a vacation, or solving a hard legal problem.

See my Medium post “Rethink IxD” on this topic.


Technology as the designer’s material

I would argue that the modern designer’s primary material is technology. And to effectively design and make digital things, you need to deeply understand technology’s affordances, characteristics, and limits – i.e. the grain of the material. Immerse yourself in serious making with technology, and you will become a better designer, able to invent new approaches and designs through your understanding of the material.


SIGCHI 2013 paper

AniThings: Animism and Heterogeneous Multiplicity

Joshua McVeigh-Schultz and I presented our paper at SIGCHI 2013 in Paris. It presents ideas and a project on using animism as a metaphor for interaction design, something I’ve been exploring for the last few years.

Go here for a link to a free download of the paper.

I wrote this article on interactive storytelling in 1992 for Interactive Expression, a group of us in the Los Angeles area who worked at Philips and other early “multimedia” companies and got together to discuss the new medium of Interactive Media.

The NETLab Toolkit is a system for integrating tangible interaction and media. Designed for project sketching and production, the toolkit enables novices and experts to integrate hardware, media and interactive behaviors for products, installations, and research.

This free collection of software makes it easy to integrate all kinds of media with microcontrollers like the Arduino. Using a simple drag-and-drop interface, you can create interactive projects that combine sensors, video, text, graphics, sound, lighting, motors and more. These projects can be created quickly, without programming, using the smart widgets included with the Toolkit.

I initiated the project in 2004, and have continued developing it since. It is used all over the world by schools, individuals, and agencies.

This spring’s New Ecology of Things course in the Media Design Program had the theme of animism, and explored how interaction design can utilize the natural tendency to imagine that inanimate objects and spaces have motivation, intention and/or consciousness.

Recently a few of my students from the Media Design Program at Art Center and I created an interactive installation for the 10th anniversary of the Architecture+Design Museum. The A+D is a growing institution in the Los Angeles area, and they were having a party for their board and major donors.

Nokia Research recently gave me a small grant to conduct a research project in Summer, 2011. Here’s the basic description:

This project explores the design opportunities in objects that seem to have inner lives through their expressive behavior.


An Emerging Landscape in The New Ecology of Things

An updated, illustrated, and edited version of this post was published in the magazine about Interaction Design.

With the Apple iPad launched and scores of other tablets and e-readers hitting the market, I think it’s important to step back and look at the larger trends. We’re in the middle of a major shift towards ubiquitous computing, cloud based personal storage, and tangible interaction. It’s a shift away from the generic computation typified by the “personal computer,” which never really achieved the individuality or specificity implied by the term “personal.” In short, we’re experiencing the emergence of The New Ecology of Things, where a network of heterogeneous, smart objects and spaces create opportunities for a more personal and meaningful landscape. This is what I’d like to explore:


The NETLab Toolkit has a new website: The old site was on a free wiki service that’s being discontinued, and we decided that this was a good time to reorganize and improve the content.
For those of you unfamiliar with the toolkit:

The NETLab Toolkit is a free system for tangible interaction sketching and production. It enables novices and experts to quickly integrate hardware, media and interactive behaviors for products, installations, and research. It integrates with micro-controllers including the Arduino, and through its Flash widgets provides a drag-and-drop environment for hardware and media sketching with no programming required.

There’s also a new version of the Widgets on the download page with these new features:

If you have been using the Arduino with a previous version of the widgets, you’ll need to update your Arduino with the newer Firmata released with Arduino 18.

Please send us your feedback on the new website on our contact page.

The School Performance Dashboard web project, developed for USC’s Center on Educational Governance, provides an easy way for educators and others to explore a range of performance metrics for charter schools in California. Each charter school is rated on 12 indicators assessing financial resources and investment; school quality; student performance; and academic productivity. Users can evaluate individual schools; compare the performance of multiple schools; review the performance of a single school across several years; or download the entire data set.

The challenge with this project was to make access to thousands of database entries easy to use for school administrators, researchers, parents, and the press. By creating a clean and clear visual and interactive system, users can explore the data in a very direct and productive way.

Working with Liz Burrill and Jamie Cavanaugh, I developed the interaction design and functional capabilities, and then built out the database and software. The system allows users to do incremental searches based on county and/or school name, showing immediate results as the user types. When a school is selected, it animates into the Schools of Interest List, building the user’s personal list of schools to analyze. The list can then be customized by narrowing the set of indicators to focus on, and then printed or downloaded as an Excel file. Alternatively, the user can select any single school, and analyze it over several years to see how performance has changed.
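The incremental search described above can be sketched as a simple substring filter run on every keystroke. This is an illustrative Python sketch, not the system’s actual code (which was built for the web); the data and function names are hypothetical:

```python
# Illustrative sketch of incremental search: the data and names
# here are hypothetical, not the actual dashboard's implementation.

def incremental_search(schools, query):
    """Return schools whose name or county contains the query.

    Intended to be called on every keystroke, so the result list
    narrows immediately as the user types.
    """
    q = query.strip().lower()
    if not q:
        return []
    return [s for s in schools
            if q in s["name"].lower() or q in s["county"].lower()]

schools = [
    {"name": "Alta Vista Charter", "county": "Los Angeles"},
    {"name": "Bayview Academy", "county": "San Francisco"},
    {"name": "Altadena Arts Charter", "county": "Los Angeles"},
]

print([s["name"] for s in incremental_search(schools, "alt")])
# → ['Alta Vista Charter', 'Altadena Arts Charter']
```

Because each keystroke just re-runs the filter, partial queries like “alt” immediately show every matching school or county, and the list narrows as the user keeps typing.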

Quick Introduction to Sound

This workshop is a quick introduction to working with digital audio. It uses the free, open-source Audacity software. Digital audio software like Audacity shows sound as a waveform, which is a visual representation of the audio over time. Sound can be edited in much the same way that words are edited in a word processing program – i.e. by copying, cutting, and pasting, as well as modifying the sound (e.g. EQ or changing volume) which is similar to styling text.
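The word-processing analogy holds right down to the data: cutting, copying, and pasting digital audio amounts to slicing and concatenating the underlying array of samples. A toy sketch (purely illustrative, not Audacity’s implementation):

```python
# Toy sketch: editing digital audio is slicing and concatenating
# the underlying list of samples (not Audacity's implementation).

def cut(samples, start, end):
    """Remove samples[start:end]; return (clipboard, remainder)."""
    return samples[start:end], samples[:start] + samples[end:]

def paste(samples, clip, position):
    """Insert clipboard samples at the given position."""
    return samples[:position] + clip + samples[position:]

audio = list(range(10))        # stand-in for ten audio samples
clip, rest = cut(audio, 2, 5)  # cut samples 2 through 4
print(clip)                    # → [2, 3, 4]
print(paste(rest, clip, 0))    # → [2, 3, 4, 0, 1, 5, 6, 7, 8, 9]
```

Modifications like EQ or volume changes are, analogously, functions applied over a selected slice of samples, much like styling a selected run of text.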


A lot of doubters are making a classic mistake in evaluating Apple’s iPad. They did the same thing after the initial announcement for the iPhone, or for that matter the Toyota Prius. The mistake is thinking in terms of existing categories and value propositions. For the iPad, the doubt seems to boil down to: “I don’t like it because it doesn’t fit my ideal for a great laptop.” The critiques don’t always state it in those terms, but I think that’s where it’s coming from. No camera, no keyboard, no multi-tasking, no Flash (okay, actually Safari on the iPad really does need that), etc. – these are standard expectations for a laptop.


This page has moved to:

I’m back from London and the Sketching09 conference that focused on the practice of “sketching in hardware,” i.e. making quick interactive hardware prototypes as a way to explore a design direction. Lots of great ideas and work presented. A few highlights:

  • Conference organizer and ThingM partner Mike Kuniavsky’s talk “Read Write Material Culture” proposed that only the 20th century was mostly read-only; before that, and again in the 21st century, production can be local and accessible to many makers. The economics of industrial production pushed individuals away from making, but the emergence of new technologies and tools (e.g. web-based distribution, 3D printing, open-source hardware and software toolkits) makes it once again possible for individuals to produce things and make a living at it.
  • Ed Baafi of Learn 2 Teach, Teach 2 Learn and the Boston FabLab demoed a web-based visual programming system for putting code on the Arduino. Using the same approach as Scratch, users can drag-and-drop programming structures and watch them run while the hardware responds. Once the code is finished, the system will download compiled code to the Arduino so it can run un-tethered. He hopes to release a beta version soon.
  • Along these same lines, David Zicarelli founder of MAX/MSP maker Cycling74 demoed a project where users can create a patch in MAX, and it will run on the Arduino, either tethered or downloaded and un-tethered.
  • André Knörig demoed Fritzing, a web-based system for visualizing hardware prototypes with the Arduino and other microcontrollers. Once diagrammed, the circuit can be shared, and most importantly, Fritzing will generate the layout for a printed circuit board (PCB), so you can turn your idea into a more formal project that can be manufactured.
  • Jan Borchers of The Media Computing Group at RWTH Aachen University showed his Luminet project, which is a system of intelligent nodes that talk to each other, and are programmed by infecting the network of Luminet nodes, where the code jumps from one node to the next.

Slides for many of the presentations are here.

A couple of days ago, RISD president John Maeda tweeted that “Design is a solution to a problem. Art is a question to a problem.” Perhaps he was kidding, but I have to object. To me, good design raises new questions. If designers simply solve problems, we deaden design and culture by making things that operate at the most mundane level. Instead, we should create things that inspire, challenge, provoke, surprise, satisfy, engage and open up opportunities. The best design changes the context around it and allows people to see and feel the world in a new way. What problem did the Porsche 356 solve? What is the impact of the new Seattle Public Library? Why is the iPhone important? What’s interesting about Paula Scher’s posters? What makes a great hammer?

Each of these plays a role in people’s lives with broad effects in terms of activities, emotions, thinking, tactility, social interactions, creativity, work, play, and more. Even the “functional” hammer does more than solve the problem of putting nails into wood – it feels right in the hand, it gains a patina over time that makes it personal, in a pinch it will open a beer bottle, and you can use it to repair a church after Katrina.

In particular, if we think about Interactive Design, the highest goal should be to empower people to create their own meaning spaces, not solve pre-determined problems or even make great experiences. As I’ve discussed in my Productive Interaction paper and in The New Ecology of Things, design plays a greater role than serving tasks and solving problems. The things in our lives communicate, create social exchanges, and enable us to manipulate both the tangible and the idea. They afford creative abuse and invention. Forget solving problems, design things to be productive, embodied, mythological, meaningful.

I just read a couple of interesting posts on something called The Implicit Web, which relates ideas of the Semantic Web, social computing, “clickstreams,” folksonomies, sophisticated search systems, intelligent software assistants, crowdsourcing, etc. By tracking the activity of people and analyzing semantic content on the web, the Implicit Web can automatically discover networks of people and interests without the explicit kind of work one does in Twitter, Facebook, or Google search.

In other words, by tracking what you and others do and create (emails, blog entries, tweets, browsing activity, shopping, etc.), and by scouring the web and analyzing its content, these systems make sense of the web in a much more sophisticated way than the brute force kind of searching that Google does. So it could find correlations, generate connections, optimize searches, make you aware of implicit networks of interest, and generally act on your behalf to both filter the incoming avalanche of data, and provide better/faster means to get to interesting information that you might not otherwise find.

While this idea is related to the kinds of recommendations that Amazon and other sites do, it is stronger because it aggregates a lot more activity and content beyond the silo of a single site. Plus, the ultimate expression of the implicit web (I hope) is that the user will have more control, and can “dial-in” the criteria of a search or automated task to their specific interests at that moment, rather than being stuck with some company’s idea of your interests. This idea relates to my essay on Productive Interaction, where the design of these systems is not about creating enveloping, persuasive experiences (as experience design dictates), but designing contexts where users are empowered to create their own meaning spaces.
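The kind of implicit correlation-finding described above can be sketched in miniature: treat each person’s activity as a set of visited sites and link people whose sets overlap. This is a toy illustration under my own assumptions (the data and function names are hypothetical, not any real Implicit Web system):

```python
# Toy sketch of implicit interest discovery: infer links between
# people purely from overlap in their browsing activity.
# Data and function names are illustrative, not a real system's API.
from collections import defaultdict
from itertools import combinations

clickstreams = {
    "ana":  {"arduino.cc", "processing.org", "nytimes.com"},
    "ben":  {"arduino.cc", "processing.org", "espn.com"},
    "cory": {"nytimes.com", "espn.com", "weather.com"},
}

def implicit_network(streams, threshold=2):
    """Link each pair of users sharing at least `threshold` sites."""
    links = defaultdict(int)
    for a, b in combinations(sorted(streams), 2):
        overlap = len(streams[a] & streams[b])
        if overlap >= threshold:
            links[(a, b)] = overlap
    return dict(links)

print(implicit_network(clickstreams))  # → {('ana', 'ben'): 2}
```

The `threshold` parameter is the “dial-in” idea in its simplest form: lower it and weaker, more speculative connections surface; raise it and only strong shared interests remain, with the user rather than the company setting the criteria.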

Related LINKS below

I just wrapped up my The New Ecology of Things class at Art Center’s Media Design Program. The class addressed the design of ubiquitous, massively networked systems – i.e. emerging ecologies of things. Our topic this term was “anti-homogenous” and we looked at heterogeneous alternatives to the mouse, keyboard, screen for specific work and play activities. This continues the idea mentioned in my Microsoft Future 2019 video post, where interactions should adapt to the type of activity, rather than the person adapting to the same type of interaction for every task. The 13 students designed and prototyped projects ranging from a special table for art directors to a lamp that receives and projects video messages from your friends. The projects addressed different affordances as well as the relationships between tangible, embodied things and their meta-data/meta-content. More details and links to project websites below the photos.

[Project photos: netdesk, wisperstones, booknpen, memoryapparatus, postgeheimnis, shopconsious2, projector, netcreators]

All projects are working interactive demos that use the Make Controller in combination with our NET Lab Toolkit (Pen & Book didn’t use the Make).


The Microsoft Office Labs Vision 2019 video recently shown at the Wharton Business Technology Conference, by Microsoft’s Business Division president Stephen Elop (text of speech), does a good job of showing potential modes of interacting with embedded and ubiquitous multi-touch displays. But how original is it? My students in Art Center College of Design’s graduate Media Design Program have been working on ideas like this for many years, and have made speculative videos like this, as well as working prototypes and real projects. See below for several examples, as well as some thoughts on where future interfaces should go – is Microsoft just proposing another version of windows?

Update: Behind the scenes of the making of the video


Microsoft Office Lab’s Vision 2019 video


MDP Alumnus Sebastian Bettencourt’s Beyond The Fold Newspaper Project


Here are links to several of my students’ past projects:


I’m very interested in how tangible objects can be used in interesting ways to interact with information on screens. This video collects together a series of experiments on the use of a range of object prototypes. In making these, I imagined a screen in front of me (in some cases a standard size screen, in other cases a wall sized screen), and manipulated the various objects as if I was controlling and interacting with content on the screen. It was more an experiment in the affordance of the objects in relation to screens than thinking of specific applications.

This project was aided by my graduate students Jonathan Jarvis and Parker Kuncl.


View Video (95 MB)


NET Lab Widgets



There’s a new version of the NET Lab Toolkit. This release adds a new skin, single keystroke to make widgets invisible, play/pause function for VideoControl and several bug fixes. This is in addition to support for Xbee wireless sensors, the Wii Remote, and DMX lighting control that came with the ALPHA version released in July ’08.

I’ll be speaking about The New Ecology of Things and our NET Lab tools at the flashbelt conference that runs from June 8th to June 11th, 2008 in Minneapolis, MN. This conference focuses on the in-depth issues of designing and developing real interactive applications. Sessions range from experience design from Motion Theory‘s perspective, to animation design, sound design, developing in Adobe’s AIR, programming in processing, physical computing, to working with the Papervision3D library in Flash.

On April 25th 2008, Anne Burdick (MDP Department Chair), Nik Hafermaas (Dean of Communication Design @ Art Center) and I gave a talk at the USC Interactive Media Arts and Practice Program to discuss the MDP’s New Ecology of Things research initiative. This talk was webcast, and the web recording of it can be seen on Adobe’s education site.

This project created two touch screen learning stations installed in the renovated Huntington House on the grounds of the Huntington Library in San Marino, CA. Working with an education specialist at the Huntington, we developed projects around exhibits on Silver and Porcelain, targeted at 8-12 year-olds. These touch projects focused on the history and craft of the objects displayed, and involved children in interactive activities such as selecting the kiln temperature to fire the porcelain, or putting their stamp on a silver vase.

In collaboration with Liz Burrill (visual design), and Jamie Cavanaugh (technical production & animation), I developed the interaction design and wrote the software.



American Honda and George P. Johnson have donated one of their Oracle Multi-touch Tables to the Media Design Program. We now have it permanently in our graduate studio where it is available for faculty and students to develop new applications. In particular, we’re interested in exploring how large sets of text and image content can be explored in a collaborative way with multiple users.

More on the original project

How can we make computational design and code understandable to design students, and how can they define the designer’s role in regard to coding? I was recently explaining to a student the importance of timing when a project responds to a user – a difference in milliseconds can make a big impact. We were also talking about how designing and developing code requires a different way of thinking and abstraction compared to visual design. In interactive design, the 4th dimension of time and the definition of behavior in code is very different from the see-it-all gestalt one can get from looking at and refining a 2D visual design.
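The timing point is easy to demonstrate: the same response reads very differently depending on a delay of tens or hundreds of milliseconds. A minimal, illustrative sketch (the thresholds in the comments are rough rules of thumb I use, not fixed standards):

```python
# Illustrative sketch: the same response feels different depending on
# a delay of tens or hundreds of milliseconds before it happens.
import time

def respond(delay_ms, action):
    """Wait delay_ms milliseconds, then perform the response.

    Rough rules of thumb: under ~100 ms reads as instantaneous,
    a few hundred ms reads as the object "thinking," and much
    longer makes the response feel disconnected from the user.
    """
    time.sleep(delay_ms / 1000.0)
    action()

start = time.monotonic()
respond(150, lambda: print("light fades up"))
print(f"responded after ~{(time.monotonic() - start) * 1000:.0f} ms")
```

Trying the same `action` with delays of 50, 300, and 2000 milliseconds makes the point viscerally in a way a static visual design never can.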

I think the way to go is to cast it in terms of designing behavior.  There are many principles and concepts of designing interesting, rich, meaningful behavior that I think could be developed, some of which is instantiated in code, other aspects in the mechanical design (the turning of a doorknob or the page of a book for example), and others in the conceptual design.  This shift to behavior design as an overarching concept that encompasses computation may make it more interesting and relevant to designers.

The Media Design Program’s new transmedia publication, The New Ecology of Things, is complete. The book, website, poster and mobile phone content address the design and educational issues related to ubiquitous computing, and together form an ecology of essays, glossary, forum, interactive works, video, and a short story by Bruce Sterling. You can order the book here: The New Ecology of Things (NET).

My visit to the Maker Faire was briefly covered on