A Brief Introduction to Mouthwash

June 2, 2011 - Mouthwash

Previously, I’ve talked about conversation systems and my discontent with them. Talk is cheap; so in addition to complaining about it, I’ve been working on an engine called Mouthwash to make these ideas more concrete. Eventually I do plan to develop some games with it, but for now it’s a lot of notes and a rough testing framework with some basics implemented.  I’m going to start blogging my progress here, both to collect my thoughts and in hopes of some feedback along the way.

Here’s the rough overview: Mouthwash is about defining conversation acts in terms of their function. Those functions include changes in four domains: emotions, relationships, viewpoints, and intentions. Speech acts range from basic to specialized and advanced, like RPG skills. The success and failure of speech skills are determined by something like D&D mechanics, including rolls based on character stats. NPCs choose skills based on their current intentions.
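To make that overview concrete, here is a minimal sketch of how a stat-based speech skill might work. None of these names come from Mouthwash itself; the classes, stats, and numbers are all illustrative guesses at the D&D-style mechanic described above.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    stats: dict = field(default_factory=dict)  # e.g. {"charm": 3, "logic": 1}

@dataclass
class SpeechAct:
    name: str
    stat: str          # which character stat the roll keys off
    difficulty: int    # target number to beat
    effects: dict      # domain -> delta applied to the conversation state on success

    def attempt(self, speaker: Character, state: dict) -> bool:
        """Roll d20 + stat modifier against the difficulty; apply effects on success."""
        roll = random.randint(1, 20) + speaker.stats.get(self.stat, 0)
        if roll >= self.difficulty:
            for domain, delta in self.effects.items():
                state[domain] = state.get(domain, 0) + delta
            return True
        return False

# A hypothetical skill: flattery keys off charm and nudges two domains.
flatter = SpeechAct("flatter", stat="charm", difficulty=12,
                    effects={"emotion": +2, "relationship": +1})
```

The point of the sketch is just that a speech act bundles a check (roll vs. stat) with a set of domain effects, so success and failure can both feed back into the conversation state.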

The longer version: Mouthwash is mostly inspired by pragmatics, the study of how language derives meaning from context. In practice, pragmatics theorists talk a lot about the function of speech, because that has a lot to do with how listeners resolve ambiguities. Theorists have come up with various lists of “speech acts” and what they do to the context of a conversation. For example, you might create an intention in someone else with a request; commit yourself to an intention with a promise; express an attitude with an opinion; and so on. These acts are remembered by your listeners, who use the newly changed context to determine their own speech acts.

This perspective is really useful if you want to abstract conversation in a way that lends itself to tactical gameplay. What you need is a set of variables that define the state of the conversation. They should be general and abstract, but also meaningful enough to remind you of real conversation. The Sims does some of this, but the only real variable is the “Like” score between two sims, plus some relationship stuff. That’s a start, but once you include more semantic things like intentions and viewpoints, things start getting really interesting. You can actually start playing with an NPC’s AI by changing their goals, for example. That sounds like fun. At least I think so.
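As a rough illustration of the “state variables plus goals” idea, here is a tiny sketch of an NPC scoring candidate speech acts by how well their effects advance its current intention. Everything here, the act names, the effect numbers, the selection rule, is a hypothetical example, not anything from Mouthwash.

```python
def choose_act(intention: str, candidate_acts: list) -> dict:
    """Pick the candidate act whose effects most increase the intended domain."""
    return max(candidate_acts, key=lambda act: act["effects"].get(intention, 0))

# Illustrative acts, loosely following the request/promise/opinion examples above.
acts = [
    {"name": "request", "effects": {"intention": 2, "relationship": -1}},
    {"name": "promise", "effects": {"intention": 1, "relationship": 1}},
    {"name": "opinion", "effects": {"viewpoint": 2}},
]
```

Changing the NPC’s `intention` input changes which act it picks, which is the kind of goal-tweaking gameplay the paragraph above gestures at.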

There are a lot of moving parts here, and much of this hinges on whether I can come up with a good set of abstractions for the four domains. Ideally, they’d be as simplified as possible while still leaving lots of hooks for good gameplay. Figuring that out will probably be a pretty agonizing process. But everyone needs a hobby.


Related Posts

  • Mouthwash: Emotions and Stats — The time has come to check in again on the Mouthwash conversation engine. Earlier I mentioned that speech acts in Mouthwash have effects in four domains: goals, emotions, viewpoints, and […]
  • Mouthwash: Viewpoints — In previous discussions of Mouthwash, I've mentioned that there are four domains in which actions can take place: goals, emotions, viewpoints, and relationships. I've talked a bit about […]
  • Conversation Games in the Wild: Deadwood — One of the things I've been doing in the course of developing the Mouthwash conversation gameplay is to analyze transcripts of TV shows with good dialogue. The idea is to have some good […]

tags: dialogue


  1. onefinemess says:

    That sounds like a solid hobby to me. Like game modding, but much less specific :).

    Looking forward to seeing what you make of it. It sounds like the kind of thing that could be pretty interesting just to mess with sans game… provide dialogue options and starting points and see what happens.

    It kind of reminds me of the ai bots that they do the Turing testing with… Something like that with conversational goal maybe?

  2. Line Hollis says:

    Yeah, totally! Even if I never get around to the games, I’m looking forward to playing with the prototypes.

    Interesting you should bring up chatbots. I actually messed around with that kind of thing for a while back in grad school, as part of a supposedly serious research project. I ended up really frustrated with the whole idea. The problem is, as soon as you make something resemble a human being, people get the most insanely high expectations for it. Consciously, they might be perfectly aware that AI isn’t that great, and the thing is pretty much just searching for canned responses based on the text they type in. But there’s still this gut sense that it’s all or nothing: the second it responds with something slightly nonsensical, people give up on it ever working again. It’s kinda fascinating! But frustrating, as a programmer.

    But maybe if the agent has goals other than “appear like a human,” you actually could sidestep some of that? Maybe part of what makes chatbots seem weird to people is that they don’t seem to have minds of their own? It’d be interesting to see whether you can create the illusion of a goal-directed speaker without too much crazy AI.

  3. […] touched in quite a while*, and reading other people’s thoughts on the matter (like this one from Line Hollis) has sparked my interest again on the […]
