Saturday, December 1, 2012

CEE (Computer Emotion Engine)


I've been thinking for many months now about what makes us (human beings) so different from computers (machines).  One of the main things I had previously identified was the massive divide between biological and non-biological organisms.  To be more precise:

  • I saw "emotion" (or some form thereof) in virtually every biological organism I looked at, from the smallest single-celled organism struggling to "get to the light".
  • At the same time, I was struck by the complete (100%) lack of "emotion" in all of the various (non-biological) machines around us.

Further thinking on this duality led me to the question of whether "life" itself might be the source of what we call emotion, and therefore whether it was possible that machines would simply never "feel" as we do.  Period.

This idea was further reinforced by the fact that, when trying to imagine how I would go about programming a computer to "feel", I couldn't come up with even the remotest idea of where to start.  I imagined having virtually unlimited resources and hundreds of developers at my disposal, and was disappointed with myself that I wouldn't even know where to begin work on such a project.

Today that all changed.  After a discussion with my friend Matt Cherwin last week, I realized that there "might" be a path from where we are now to where I've been trying to get.  In our discussion, Matt maintained that what we call "emotions" are really just the interactions of a number of complex systems.  I maintained that IBM's Watson, for example, seemed like the kind of complex system he was describing, and yet didn't display anything that even began to approach what we refer to as emotion.  At the same time, I could think back on our own evolution and see even the simplest of organisms that did seem to display such characteristics.  This seemed like an unbridgeable divide!

Over the weekend I continued to ponder our discussion, and today I think I might have made a breakthrough in my own thinking on the subject.  Specifically, if I redefine what I mean by "emotion" as follows, I think we might very quickly end up with a system that, surprisingly to me, very much starts to feel like human emotion.

Basic System Design
So, here are some overarching concepts behind this CEE (Computer Emotion Engine - pronounced SEE, see? :)

The "System" would be comprised of a base class called EmotionBase that defined some basic characteristics of all emotions in "the system".  At the very least, these properties would include the name of the emotion and an Emotion Indicator (EI) scale - for example 1-100 which would indicate the extent of this emotion.

This class would further allow for weighted input sources and an algorithm that would combine these input sources into one final EI number for that emotion.
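To make this concrete, here's a minimal sketch of what EmotionBase might look like in Python.  The name, the EI scale, the weighted input sources, and the combining algorithm come straight from the description above; the choice of zero-argument callables as input sources and a simple weighted average as the combining algorithm are my own assumptions for illustration.

```python
# Minimal sketch of EmotionBase.  Input sources are assumed to be
# zero-argument callables returning a value on the 0-100 EI scale, and the
# combining algorithm is assumed to be a weighted average.

class EmotionBase:
    """Base class for all emotions in "the system"."""

    def __init__(self, name):
        self.name = name            # e.g. "Happiness"
        self.inputs = []            # list of (weight, source) pairs

    def add_input(self, weight, source):
        """Register a weighted input source (a callable returning 0-100)."""
        self.inputs.append((weight, source))

    def ei(self):
        """Combine all weighted inputs into one final EI number (0-100)."""
        if not self.inputs:
            return 50               # neutral default when nothing feeds in
        total = sum(weight for weight, _ in self.inputs)
        score = sum(weight * source() for weight, source in self.inputs) / total
        return max(0, min(100, score))
```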

Specific Use-Case
So, with this simple starting point, let's take a specific use case.  The goal of this use case is to allow our computer to interact with us at an "emotional level".  In other words, I want to be able to ask my computer how happy it is and have it respond, not randomly, but based on a variety of specific "inputs".  To begin with, I might just have a little emoticon icon in the start bar that either smiled or frowned based on how "happy" the computer was.

So, the emotion created would be called Happiness, with EI values mapping as follows:
  •  100 = :) 
  •  50 = :|
  •  0 = :(


Now - what dictates happiness on our computer?  Here's where the various inputs come into play.  Some factors that could play into the computer's "happiness" include:

  • Free disk space (gets unhappy as disk space runs out)
  • CPU usage (happy as long as the CPU isn't totally and consistently pinned)
  • CPU heat (gets unhappy as the CPU temperature climbs)
  • Available system resources
  • etc.
So - our "happiness" emotion would have 4-5 input values, fed in on a regular basis, and the computer's happiness would change as those values change.
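Here's a sketch of what that Happiness emotion might look like on top of the EmotionBase sketch above.  psutil is a real library (disk_usage, cpu_percent, and virtual_memory all exist as called here), but the weights, the scaling, and the emoticon thresholds are arbitrary assumptions on my part, and CPU heat is skipped because temperature-sensor support varies by platform.

```python
# A hypothetical Happiness emotion fed by live system readings.
import psutil

happiness = EmotionBase("Happiness")

# More free disk space -> happier (disk_usage().percent is percent *used*).
happiness.add_input(3, lambda: 100 - psutil.disk_usage("/").percent)

# A CPU that isn't pinned -> happier.
happiness.add_input(2, lambda: 100 - psutil.cpu_percent(interval=0.1))

# Free memory as a stand-in for "available system resources" -> happier.
happiness.add_input(2, lambda: 100 - psutil.virtual_memory().percent)

def emoticon(ei):
    """Map the EI scale to the emoticons from the table above."""
    return ":)" if ei >= 67 else (":|" if ei >= 34 else ":(")

score = happiness.ei()
print("Happiness EI = %.0f %s" % (score, emoticon(score)))
```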

Other examples of emotions that could be built on a similar type of model include:
  • Boredom
    If the computer isn't "doing anything", it could get more and more bored.  If it's got 15 programs open, is running a backup routine in the background, and doing a virus scan while also downloading a 300 MB service pack, it would not be bored at all (0).  If it's just sitting there, it would be quite "bored" (100).
  • Nervousness
    If the virus scanner has recently detected a number of problems, it's running out of disk space on drive F, the CPU is running at 95%, and 3 new, unidentified processes are running - this could make the computer quite "nervous".
  • Anger
    Hundreds of new, unidentified errors in the system event log could cause the computer to get angry!
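As one concrete example, Boredom fits the same EmotionBase pattern.  Treating an idle CPU and a short process list as proxies for "not doing anything" is purely my own assumption here:

```python
# A toy Boredom emotion: low activity -> high boredom.
import psutil

boredom = EmotionBase("Boredom")

# An idle CPU reads as "just sitting there".
boredom.add_input(2, lambda: 100 - psutil.cpu_percent(interval=0.1))

# So does a short process list (the scaling here is arbitrary).
boredom.add_input(1, lambda: max(0, 100 - len(psutil.pids()) / 3))
```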
The benefits of such a system
So - the first question you might be inclined to ask is - what's the benefit of such an "emotion engine"?  I can think of a number of them.  To begin with, it would allow us to change how the computer "chooses" what to do, and potentially how we interact with it, to be based on such emotions.  Since computers don't have this kind of emotional programming at this point, this will seem quite odd - but if such a system were in place, here are some examples of how things might look different.

Right now, we run virus scans at 3:00 AM Monday through Friday, and again on Sunday night.  We do this because we assume these will be times of inactivity.  Instead, we could program our virus scans to run "whenever you're bored, or if you are nervous about new viruses for any reason".
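That scheduling rule might reduce to something like the following sketch, assuming a Nervousness emotion built the same way as the others above; start_virus_scan() and both EI thresholds are placeholders, not real APIs.

```python
# Hypothetical emotion-driven scheduler replacing the fixed 3:00 AM job.
import time

nervousness = EmotionBase("Nervousness")   # inputs omitted for brevity

def start_virus_scan():
    print("Starting virus scan...")        # stand-in for a real scanner call

while True:
    # "Whenever you're bored, or if you're nervous for any reason..."
    if boredom.ei() > 80 or nervousness.ei() > 60:
        start_virus_scan()
    time.sleep(300)                        # re-check the machine's mood every 5 minutes
```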

We could program the system to look for and clean up unused disk space when it gets "concerned" about low disk space.  Now, what makes the computer "nervous" about disk space could vary from system to system in a way that is harder to express in our current "precise", "non-emotion-based" systems.  For example, if one system has 50 GB of free space, maybe it's not "nervous" at all, because it's only a 70 GB drive.  On the other hand, a 5 TB drive with only 50 GB of free space should make the system very nervous.  Similarly, if the system has 50 GB of free space and knows it has a project coming up that is going to need 60 GB, that could make it anxious - causing it to seek out a remedy to this anxiousness, e.g. freeing up some more space.
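A minimal sketch of that context-sensitive nervousness follows.  shutil.disk_usage is a real standard-library call; the linear scaling and the projected_need_gb hint are my own assumptions.

```python
# Context-sensitive disk nervousness: 50 GB free on a 70 GB drive is calm,
# while 50 GB free on a 5 TB drive is alarming.  projected_need_gb is a
# hypothetical hint about upcoming work.
import shutil

def disk_nervousness(path="/", projected_need_gb=0):
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    nervous = (1 - free_fraction) * 100          # fuller drive -> more nervous
    # A known upcoming job that won't fit maxes out the anxiety.
    if projected_need_gb * 1024 ** 3 > usage.free:
        nervous = 100
    return nervous
```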

From our perspective, it would change the "language" that we use to communicate with our machines to match our own "emotional" verbiage.

Now - all this said, there are limits to where these benefits begin and end.  We don't ever want to ask our word processor to print a document and get back the response "No, I don't really 'feel' like it...".  So - this would not be a simple thing to "get right".  That being said, in my limited thinking on this so far, I think that A) it could be done, and B) it could provide some powerful benefits.

How the system could be extended
To really be helpful, CEE emotions would have to be programmed carefully.  In addition, the "decision engines" of applications and larger systems would need to evolve to take advantage of these emotions.  As with people, this would be a delicate balancing act that would need to be managed carefully.

That being said, it seems like new emotions could be designed to serve specific purposes.  In addition, common "system emotions" could be extended by third parties.  In other words, the system could allow third parties to write "plugins" of sorts that could influence the emotional state of the machine.
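Assuming a plugin is just another weighted input source registered with an existing emotion, the hook could be as small as this sketch (register_plugin and the backup example are hypothetical):

```python
# Hypothetical plugin hook: a third party influences an emotion by
# contributing one more weighted input source to it.
def register_plugin(emotion, weight, source):
    """Let a third-party `source` (a 0-100 callable) feed `emotion`."""
    emotion.add_input(weight, source)

# e.g. a backup tool that makes the machine more nervous as backups go stale:
# register_plugin(nervousness, 2, backup_staleness_ei)
```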

This "emotion model" might need to evolve so that certain emotions were kept "private", while others were shared more freely.

To begin with, the emotion state of the machine might be more global, but might eventually get quite specific and granular for specific sub-systems to take advantage of.  In other words, overall, the system might be quite happy, while at the same time being very nervous about the length of time since its last backup had run.

The emotion systems would also need to feed back on each other in quite a sophisticated way that would be difficult to "nail" on a first pass (version 1) of such a system.  For example, if the overall system has a happiness quotient, this overall happiness index should really (eventually at least) be influenced by the individual (local) emotions of the following (a sketch follows the list):

  • The various applications running
  • The hardware, and the state of each of the individual hardware components
  • Various performance indicators
  • etc.
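Conveniently, on the EmotionBase sketch above this composition falls out almost for free: a global emotion can take the ei methods of local emotions as its own input sources.  The subsystem names and weights below are made up.

```python
# Hypothetical composition: a global emotion fed by local, subsystem-level
# emotions (each itself an EmotionBase with its own inputs, omitted here).
app_mood = EmotionBase("ApplicationMood")       # the various applications
hardware_mood = EmotionBase("HardwareMood")     # individual hardware components
perf_mood = EmotionBase("PerformanceMood")      # performance indicators

system_happiness = EmotionBase("SystemHappiness")
system_happiness.add_input(2, app_mood.ei)      # bound methods work as sources
system_happiness.add_input(3, hardware_mood.ei)
system_happiness.add_input(1, perf_mood.ei)
```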
The interplay of these "emotions" together would be difficult to "get right" - that's for sure.

In addition to systems being able to make decisions based on these emotions, "the system" could also evolve to better and better answer questions like "Why are you sad?", "Why are you scared?", etc.

Two types of emotions
It seems like there are at least two different types of emotions: singular and bidirectional.  There may be others, but these are the only two I've thought of so far.  Examples of singular emotions/feelings are:

  • Fear
  • Hunger
Most emotions/feelings, however, seem to (consistently) have two opposing ends of the spectrum:
  • Happy/Sad
  • Love/Hate
  • Energetic/Tired
  • Good/Bad
It's possible that all of these could be unified into one "frame" that works for all of them, but I have so far been unable to think of the opposite of "afraid".
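One way the distinction might be captured in the model is as two subclasses of EmotionBase; the class names and pole labels here are my own invention.

```python
# Sketch of the two emotion types.
class SingularEmotion(EmotionBase):
    """One-ended scale: 0 = absent, 100 = intense (e.g. Fear, Hunger)."""

class BidirectionalEmotion(EmotionBase):
    """Two-ended scale with a named pole at each end (e.g. Sad/Happy)."""

    def __init__(self, low_pole, high_pole):
        super().__init__("%s/%s" % (low_pole, high_pole))
        self.low_pole = low_pole        # label for EI = 0
        self.high_pole = high_pole      # label for EI = 100
```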

Conclusion
The bottom line is that - for me at least, for the first time - I feel like I have a model that could produce a machine (computer) that could possibly pass a Turing test specifically designed to detect emotion - one that would not be "fake" (i.e., not just a rule-set that dictates "20% of the time, say you're 'happy'" or "if it's dark, say you're 'scared'") - and, further, could actually provide a different way for us (human beings) to interact with machines/computers/programs/etc.
