This summer I’m conducting a research project on new approaches for AI in a Mixed Reality context. The project extends my work with Animistic Design, but takes a different approach to AI embodiment, integrating virtual AI entities with “Real Reality.”
I’m working with five ArtCenter students from our Media Design Practices program: Stephanie Cedeño, Xing Lu, Godiva Reisenbichler, Nan H Tsai, Nicci Yin.
This 2017 summer research project will explore how AI-based, non-anthropomorphic animistic entities could work as colleagues and collaborators in Mixed Reality.
In a newly published paper, “Animistic design: how to reimagine digital interaction between the human and the nonhuman” (Digital Creativity – Special Issue: Post-Anthropocentric Creativity), my co-author Betti Marenko and I argue that a new model is needed for design in an Internet of Things world. We think it’s time to rethink the standards of Human Centered Design, AI, and interaction design, especially for open-ended, creative contexts, whether that’s directing a self-driving car, planning a vacation, or solving a hard legal problem.
See my Medium post “Rethink IxD” on this topic.
Technology as the designer’s material
I would argue that the modern designer’s primary material is technology. And to effectively design and make digital things, you need to deeply understand technology’s affordances, characteristics, and limits – i.e. the grain of the material. Immerse yourself in serious making with technology, and you will become a better designer, able to invent new approaches and designs through your understanding of the material.
SIGCHI 2013 paper
AniThings: Animism and Heterogeneous Multiplicity
Joshua McVeigh-Schultz and I presented our paper at SIGCHI 2013 in Paris. It presents ideas and a project on using animism as a metaphor for interaction design, something I’ve been exploring for the last few years.
I wrote this article on interactive storytelling in 1992 for Interactive Expression, a group of us in the Los Angeles area who worked at Philips and other early “multimedia” companies. We got together to discuss the new medium of Interactive Media.
This spring’s New Ecology of Things course in the Media Design Program had the theme of animism, and explored how interaction design can utilize the natural tendency to imagine that inanimate objects and spaces have motivation, intention and/or consciousness.
Recently a few of my students from the Media Design Program at Art Center and I created an interactive installation for the 10th anniversary of the Architecture+Design Museum. The A+D is a growing institution in the Los Angeles area, and they were having a party for their board and major donors.
Nokia Research recently gave me a small grant to conduct a research project in Summer, 2011. Here’s the basic description:
This project explores the design opportunities in objects that seem to have inner lives through their expressive behavior.
An Emerging Landscape in The New Ecology of Things
An updated, illustrated, and edited version of this post was published in the JohnnyHolland.org magazine about Interaction Design.
With the Apple iPad launched and scores of other tablets and e-readers hitting the market, I think it’s important to step back and look at the larger trends. We’re in the middle of a major shift towards ubiquitous computing, cloud based personal storage, and tangible interaction. It’s a shift away from the generic computation typified by the “personal computer,” which never really achieved the individuality or specificity implied by the term “personal.” In short, we’re experiencing the emergence of The New Ecology of Things, where a network of heterogeneous, smart objects and spaces create opportunities for a more personal and meaningful landscape. This is what I’d like to explore:
- Where we’ve been and how the personal computer has made us soulless
- Where we’re about to be #1 with the emergence of digital slabs
- Where we’re about to be #2 with a new form of design that’s a hybrid of software and product
- Where we may be going and the future of the designer in an era of bespoke objects
The NETLab Toolkit has a new website: http://netlabtoolkit.org. The old site was on a free wiki service that’s being discontinued, and we decided that this was a good time to reorganize and improve the content.
For those of you unfamiliar with the toolkit:
The NETLab Toolkit is a free system for tangible interaction sketching and production. It enables novices and experts to quickly integrate hardware, media and interactive behaviors for products, installations, and research. It integrates with micro-controllers including the Arduino, and through its Flash widgets provides a drag-and-drop environment for hardware and media sketching with no programming required.
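Conceptually, a user chains widgets that take a raw sensor reading, tame it, and map it onto a media parameter. The sketch below is a purely illustrative Python analogy of that pipeline (hypothetical names; the actual toolkit does all of this visually in Flash, with no code):

```python
# Hypothetical sketch of a NETLab-style widget chain: sensor -> smooth -> scale.
# The real toolkit wires these steps together by drag-and-drop, not code.

def smooth(values, factor=0.8):
    """Exponential smoothing: tames jittery analog sensor readings."""
    out, current = [], values[0]
    for v in values[1:]:
        current = factor * current + (1 - factor) * v
        out.append(current)
    return out

def scale(value, in_min, in_max, out_min, out_max):
    """Map a sensor range (e.g. 0-1023) onto a media range (e.g. 0-100)."""
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

# e.g. raw 10-bit analog readings -> smoothed -> video playback speed 0-100
raw = [512, 530, 900, 510, 505]
smoothed = smooth(raw)
speeds = [scale(v, 0, 1023, 0, 100) for v in smoothed]
```

In the toolkit itself, each of these steps is a widget, and the "wiring" between them is drag-and-drop rather than function calls.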
There’s also a new version of the Widgets on the download page with these new features:
- Arduino support now requires Firmata 2.1 which comes with the Arduino 18 software
- Insert widget now has data logging features
- Added IfThenElse widget
If you have been using the Arduino with a previous version of the widgets, you’ll need to update your Arduino with the newer Firmata released with Arduino 18.
Please send us your feedback on the new website on our contact page.
A lot of doubters are making a classic mistake in evaluating Apple’s iPad. They did the same thing after the initial announcement for the iPhone, or for that matter the Toyota Prius. The mistake is thinking in terms of existing categories and value propositions. For the iPad, the doubt seems to boil down to: “I don’t like it because it doesn’t fit my ideal for a great laptop.” The critiques don’t always state it in those terms, but I think that’s where it’s coming from. No camera, no keyboard, no multi-tasking, no Flash (okay, actually Safari on the iPad really does need that), etc. – these are standard expectations for a laptop.
I’m back from London and the Sketching09 conference that focused on the practice of “sketching in hardware,” i.e. making quick interactive hardware prototypes as a way to explore a design direction. Lots of great ideas and work presented. A few highlights:
- Conference organizer and ThingM partner Mike Kuniavsky’s talk “Read Write Material Culture” proposed that the 20th century was largely Read-only: before it, and again in the emerging 21st century, production can be local and accessible to many makers. The economics of industrial production pushed individuals away from making, but new technologies and tools (e.g. web-based distribution, 3D printing, open-source hardware and software toolkits) make it once again possible for individuals to produce things and make a living at it.
- Ed Baafi of Learn 2 Teach, Teach 2 Learn and the Boston FabLab demoed a web-based visual programming system for putting code on the Arduino. Using the same approach as Scratch, users can drag-and-drop programming structures and watch them run while the hardware responds. Once the code is finished, the system will download compiled code to the Arduino so it can run un-tethered. He hopes to release a beta version soon.
- Along these same lines, David Zicarelli, founder of MAX/MSP maker Cycling74, demoed a project where users can create a patch in MAX and run it on the Arduino, either tethered or downloaded and un-tethered.
- André Knörig demoed Fritzing, a system for visualizing hardware prototypes with the Arduino and other microcontrollers. Once diagrammed, the circuit can be shared, and most importantly, Fritzing will generate the layout for a printed circuit board (PCB), so you can turn your idea into a more formal project that can be manufactured.
- Jan Borchers of The Media Computing Group at RWTH Aachen University showed his Luminet project, which is a system of intelligent nodes that talk to each other, and are programmed by infecting the network of Luminet nodes, where the code jumps from one node to the next.
A couple days ago, RISD president John Maeda tweeted that “Design is a solution to a problem. Art is a question to a problem.” Perhaps he was kidding, but I have to object. To me, good design raises new questions. If designers simply solve problems, we deaden design and culture by making things that operate at the most mundane level. Instead, we should create things that inspire, challenge, provoke, surprise, satisfy, engage and open up opportunities. The best design changes the context around it and allows people to see and feel the world in a new way. What problem did the Porsche 356 solve? What is the impact of the new Seattle Public Library? Why is the iPhone important? What’s interesting about Paula Scher’s posters? What makes a great hammer?
Each of these plays a role in people’s lives with broad effects in terms of activities, emotions, thinking, tactility, social interactions, creativity, work, play, and more. Even the “functional” hammer does more than solve the problem of putting nails into wood – it feels right in the hand, it gains a patina over time that makes it personal, in a pinch it will open a beer bottle, and you can use it to repair a church after Katrina.
In particular, if we think about Interactive Design, the highest goal should be to empower people to create their own meaning spaces, not solve pre-determined problems or even make great experiences. As I’ve discussed in my Productive Interaction paper and in The New Ecology of Things, design plays a greater role than serving tasks and solving problems. The things in our lives communicate, create social exchanges, and enable us to manipulate both the tangible and the idea. They afford creative abuse and invention. Forget solving problems, design things to be productive, embodied, mythological, meaningful.
I just read a couple of interesting posts on something called The Implicit Web, which relates ideas of the Semantic Web, social computing, “clickstreams,” folksonomies, sophisticated search systems, intelligent software assistants, crowdsourcing, etc. By tracking the activity of people and analyzing semantic content on the web, the Implicit Web can automatically discover networks of people and interests without the explicit kind of work one does in Twitter, Facebook, or Google search.
In other words, by tracking what you and others do and create (emails, blog entries, tweets, browsing activity, shopping, etc.), and by scouring the web and analyzing its content, these systems make sense of the web in a much more sophisticated way than the brute force kind of searching that Google does. So it could find correlations, generate connections, optimize searches, make you aware of implicit networks of interest, and generally act on your behalf to both filter the incoming avalanche of data, and provide better/faster means to get to interesting information that you might not otherwise find.
While this idea is related to the kinds of recommendations that Amazon and other sites do, it is stronger because it aggregates a lot more activity and content beyond the silo of a single site. Plus, the ultimate expression of the implicit web (I hope) is that the user will have more control, and can “dial-in” the criteria of a search or automated task to their specific interests at that moment, rather than being stuck with some company’s idea of your interests. This idea relates to my essay on Productive Interaction, where the design of these systems is not about creating enveloping, persuasive experiences (as experience design dictates), but designing contexts where users are empowered to create their own meaning spaces.
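To make the mechanism concrete, here is a toy sketch (made-up data and names, not any real service’s API) of the core idea: implicit links between interests can be inferred purely from co-occurrence in people’s activity, with no explicit friending, tagging, or linking:

```python
from collections import Counter
from itertools import combinations

# Hypothetical activity streams: the topics each person touched through
# browsing, emailing, shopping, etc. -- nobody declared any connections.
streams = [
    {"arduino", "sensors", "flash"},
    {"arduino", "sensors", "processing"},
    {"cooking", "travel"},
    {"arduino", "flash", "sensors"},
]

# Count how often each pair of topics co-occurs within one person's activity.
pairs = Counter()
for topics in streams:
    for a, b in combinations(sorted(topics), 2):
        pairs[(a, b)] += 1

# The strongest implicit link emerges from behavior alone.
strongest, count = pairs.most_common(1)[0]
```

Real implicit-web systems aggregate vastly more signals than this, but the principle is the same: the structure is mined from what people do, not from what they explicitly declare.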
I just wrapped up my The New Ecology of Things class at Art Center’s Media Design Program. The class addressed the design of ubiquitous, massively networked systems – i.e. emerging ecologies of things. Our topic this term was “anti-homogenous” and we looked at heterogeneous alternatives to the mouse, keyboard, screen for specific work and play activities. This continues the idea mentioned in my Microsoft Future 2019 video post, where interactions should adapt to the type of activity, rather than the person adapting to the same type of interaction for every task. The 13 students designed and prototyped projects ranging from a special table for art directors to a lamp that receives and projects video messages from your friends. The projects addressed different affordances as well as the relationships between tangible, embodied things and their meta-data/meta-content. More details and links to project websites below the photos.
The Microsoft Office Labs Vision 2019 video recently shown at the Wharton Business Technology Conference, by Microsoft’s Business Division president Stephen Elop (text of speech), does a good job of showing potential modes of interacting with embedded and ubiquitous multi-touch displays. But how original is it? My students in Art Center College of Design’s graduate Media Design Program have been working on ideas like this for many years, and have made speculative videos like this, as well as working prototypes and real projects. See below for several examples, as well as some thoughts on where future interfaces should go – is Microsoft just proposing another version of windows?
Microsoft Office Lab’s Vision 2019 video
Here are links to several of my students’ past projects:
- 2004 – wall – scott nazarian, nikolai cornell
- 2005,2006 – mirror – in search of identity, infiniti interactive – nikolai cornell, phil van allen, others
- 2006 – interactive table – acura oracle – nikolai cornell, jonathan jarvis, phil van allen, others
- 2006 – handheld augmented reality – telepath – matt mcbride
- 2007 – e-paper newspaper – beyond the fold – sebastian bettencourt
There’s a new version of the NET Lab Toolkit. This release adds a new skin, a single keystroke to make widgets invisible, a play/pause function for VideoControl, and several bug fixes. This is in addition to support for XBee wireless sensors, the Wii Remote, and DMX lighting control that came with the ALPHA version released in July ’08.
I’ll be speaking about The New Ecology of Things and our NET Lab tools at the flashbelt conference that runs from June 8th to June 11th, 2008 in Minneapolis, MN. This conference focuses on the in-depth issues of designing and developing real interactive applications. Sessions range from experience design from Motion Theory’s perspective, to animation design, sound design, developing in Adobe’s AIR, programming in Processing, physical computing, and working with the Papervision3D library in Flash.
On April 25th 2008, Anne Burdick (MDP Department Chair), Nik Hafermaas (Dean of Communication Design @ Art Center) and I gave a talk at the USC Interactive Media Arts and Practice Program to discuss the MDP’s New Ecology of Things research initiative. This talk was webcast, and the web recording of it can be seen on Adobe’s education site.
American Honda and George P. Johnson have donated one of their Oracle Multi-touch Tables to the Media Design Program. We now have it permanently in our graduate studio where it is available for faculty and students to develop new applications. In particular, we’re interested in exploring how large sets of text and image content can be explored in a collaborative way with multiple users.
How can we make computational design and code understandable to design students, and how can they define the designer’s role in regard to coding? I was recently explaining to a student the importance of timing when a project responds to a user – a difference in milliseconds can make a big impact. We were also talking about how designing and developing code requires a different way of thinking and abstraction compared to visual design. In interactive design, the 4th dimension of time and the definition of behavior in code is very different from the see-it-all gestalt one can get from looking at and refining a 2D visual design.
I think the way to go is to cast it in terms of designing behavior. There are many principles and concepts of designing interesting, rich, meaningful behavior that I think could be developed, some of which is instantiated in code, other aspects in the mechanical design (the turning of a doorknob or the page of a book for example), and others in the conceptual design. This shift to behavior design as an overarching concept that encompasses computation may make it more interesting and relevant to designers.
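The milliseconds point above can be shown concretely. This sketch (made-up numbers, purely illustrative) samples the same eased response over two different durations; the short one reads as a mechanical snap, the long one as an organic settle, even though both end in the same place:

```python
def ease_out(t):
    """Cubic ease-out: fast start, gentle settle -- tends to read as 'organic'."""
    return 1 - (1 - t) ** 3

def response_curve(duration_ms, step_ms=50):
    """Sample the eased response (0.0 to 1.0) every step_ms milliseconds."""
    steps = int(duration_ms / step_ms)
    return [round(ease_out(i / steps), 3) for i in range(steps + 1)]

# A 100 ms snap vs. a 400 ms settle: same endpoint, very different character.
snap = response_curve(100)
settle = response_curve(400)
```

For a designer, the interesting variables here are not the code mechanics but the behavioral ones: the duration and the shape of the curve are what make a response feel eager, lazy, nervous, or calm.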
The Media Design Program’s new transmedia publication, The New Ecology of Things, is complete. The book, website, poster and mobile phone content address the design and educational issues related to ubiquitous computing and is an ecology of essays, glossary, forum, interactive works, video, and a short story by Bruce Sterling. You can order the book here: The New Ecology of Things (NET).
I’ll be at the Maker Faire this weekend (May 19th & 20th 2007) demoing the new version of the NET Connect software. This software allows designers to create interactive objects and spaces projects without programming, using the familiar environment of Flash. I’ll be in a booth with Moto Development, and we’ll also be showing re.moto, an open-source wireless sensor system. For more info on NET Connect, see the website:
See you at the show!
Yesterday my Uncle James Van Allen died in Iowa City, Iowa at the age of 91. He leaves an amazing legacy as a space scientist, having discovered the Van Allen Radiation Belts through experiments starting with a scientific package on Explorer 1, the first US satellite, launched in 1958. Following this work, he and his group at the University of Iowa continued to have major experiments on robotic space flights, participating in the Voyager, Pioneer, and many other spacecraft. He was also a vocal opponent of human spaceflight, arguing (rightly I think) that putting people into space was a waste of resources when the exploration of space could be done better and more efficiently by machines.
An article about him: Daily Iowan
On a personal note, I visited my family in Iowa a year ago and stayed at my Uncle Jim’s house. At the time, I was working on the Infiniti Interactive Mirror project and could not remember enough trig to work out some of the math needed for the sensor system. Almost as a joke, I drew up a description of the problem and left it on his dining room table before I went out for the day to visit other relatives. When I arrived home that night, he had not only written out the solution, but (ever the teacher), diagramed all of the geometry and math he used to arrive at the final equation, complete with examples. Not only was it kind of amazing to have a world famous astrophysicist help with the project, it was a small way to reconnect with my late Father, who worked with Uncle Jim during WWII on the invention of the proximity fuse at Johns Hopkins University.
I will remember him as a thoughtful and kind man whose passion for science and the pursuit of knowledge will always be an inspiration.
One of the online outlets of Communication Arts, designinteract.com (site dead), just covered the Infiniti Interactive Mirror — a life-sized video and sound installation where people interact with three 8-foot-high screens by simply reaching out to different areas of the display interface. This project grew out of one of our graduates’ thesis projects and work he did in my Interactive Objects and Spaces class with fellow student Scott Nazarian (MDP, 2004). Nikolai Cornell (MDP, 2004) conceived of and managed the project, which was built for carmaker Infiniti by George P. Johnson in collaboration with the Designory, Mindflood, and my company, Commotion. The installation is part of the traveling Infiniti auto show exhibit and was displayed at the Detroit and Chicago auto shows. It will soon be in New York City, and in December it will be in Los Angeles.
The designinteract.com article is especially interesting in that it covers the entire process of the project, from concept through completion: www.designinteract.com/features/mirrors/ (site dead – instead please view a PDF of site rescued from the wayback machine at archive.org)
For an overview of the project and video of people using it, see the dedicated site for it here: www.interactivemirror.net
An installation for Infiniti at the International Auto Shows, the Infiniti Interactive Mirror is a three-screen interface that uses mirrored glass and rear projection to create a seamless large-screen touch interface. George P. Johnson created the project with Nikolai Cornell (MDP alum) as creative director, based on his thesis project. Nikolai worked with The Designory, MindFlood, and my company Commotion.
On this project, I provided consulting on interaction design and approaches for the project, and designed and built the sensor system that detects a person’s hand position in front of a flat surface without any sensors on the sides of the display or behind the user.
Recent alum of the Grad ID program and former IOS student Sean Sullivan died in a tubing accident on the Kern River last weekend. There will be a memorial service for Sean this coming Sunday (Aug 28th) at the “War Memorial” in Pasadena from 2 PM to 4PM — War Memorial Hall, 435 S. Fair Oaks Ave., South Pasadena, CA
Sean was a highly creative student who always found his own path. He came up with a fantastically evocative project in IOS. We miss him already.
Another sad passing occurred this weekend when Bob Moog died of brain cancer at the age of 71. Moog introduced the Moog Modular Synthesizer in 1964 and forever changed the face of popular music. He was happily still involved in music, and was at the NAMM show in Anaheim last January. More on him can be found at www.moogmusic.com
It’s official, we have a sponsored course for this fall. The New Ecology of Things will be funded by Sun Microsystems Labs. The class will address design issues in the coming environment when every object and space has a life of its own. Submit your applications by Wednesday, July 27th.
The New Ecology of Things
Thursdays, 1-5, Design Based Learning Lab, South Campus
Update: Following this course, the Media Design Program initiated a transmedia publication on the topic of The New Ecology of Things. The official website for this publication is newecologyofthings.net.