Most of us have been taught that certain words should be kept out of our vocabulary, or at least used sparingly.  As unsavory as those words are, they're nothing compared to the ordinary words we say that actually carry semantic weight.

What I mean is that with silly slips of the tongue or keyboard, we can say things we don't mean, hurt feelings we don't want to hurt, and so on.  It's a sentiment I'm sure I'm not alone in feeling; all of you have experienced it.  We say one thing, without a thought, not knowing who might be paying attention.  Pretty soon, we're up to our eyeballs in guilt.

Note to myself and everyone else: learn to watch your joking language until you know EXACTLY who you are talking to.

More generally, always watch your tongue.  You never know who might be listening.
 
In preparation for future work with the Polyscheme cognitive framework developed by Prof. Nick Cassimatis at RPI, I've been studying up on the theory that underlies the framework.

In particular, the Cognitive Substrate Theory: "There is a relatively small set of computational problems such that once the problems of artificial intelligence are solved for these, that is to say, once a machine called here a "cognitive substrate," is created that effectively solves these problems, then the rest of Human-level intelligence can be achieved by the relatively simpler problem of adapting the cognitive substrate to solve other problems."
~N. Cassimatis

The theory is rooted partly in theories concerning the evolution of mind, and it is motivated by the problems of knowledge and procedural profusion that plague many AI research paradigms.  Allow me to attempt to define those for you.
  1. Knowledge Profusion Problem: There is a large amount of information that is required for intelligent systems in even a small domain.
  2. Procedural Profusion Problem: Most domains involve computational problems that require many different, difficult-to-integrate computational methods.

In short, the CogSub Theory proposes an elegant solution to these problems.  From what I can gather from the writing, the theory would help us move toward that solution in three ways:
  1. Making the problems smaller: progress on the substrate constitutes progress across many domains of human-level intelligence.
  2. Quicker intelligent-system development: development would be accelerated because domain-specific intelligent systems would be based on the same or similar mechanisms.
  3. Easier integration across domains: since the same mechanisms form the basis of intelligent systems in multiple domains, those systems will be easier to integrate.
So, as you can see, the knowledge profusion problem is perhaps addressed by a graceful scaling of the knowledge base, together with the ability to access and reuse knowledge that has already been accumulated across several different domains.  I hope I'm correct in this assessment: that this is a way of maximizing the mileage we get out of each piece of knowledge, so as to minimize the problem of massive knowledge requirements.

Procedural profusion is then addressed by improving the ease with which different computational methods across domains can be integrated with each other.  In particular, the third of the CogSub Theory's strengths above attacks this problem directly.

One thing I'd like to look into in more depth is the set of mechanisms we take as primitive, and their evolutionary basis.  There is evidence that the framework for human cognition was built by evolution long before we had things as complicated as computers and complex languages, but exactly what was there is still somewhat mysterious to me.  In this set of papers, physical reasoning about the world is proposed as the mechanism that can carry understanding across several domains, including natural language, since reasoning about the physical world is extremely important for any being with a physical existence.  It is quite intuitive to see this form of reasoning as having evolved first and as providing the framework for other domains that were incorporated later in human development.

The question I have for myself and others, then, is: how confident are we that physical reasoning was the first form of reasoning to evolve in humans?  If it was, are we representing it internally in the correct way?  Are the mechanisms of physical reasoning complete and intricate enough to express the wide range of human language's versatility, usage, and ambiguity?  If we rely on the same mechanisms for language that we do for physical reasoning, how do we account for ambiguities in language?  For each ambiguity or linguistic phenomenon, is there a parallel confusion in our physical reasoning skills, i.e., what's the physical equivalent of a "garden path" sentence?  Or vice versa: for each confusion in physical reasoning, is there a corresponding confusion in our understanding of natural language?

As you can see, I feel there are a lot of questions that still need to be answered.
 
People often complain that AI is not developing as well as expected.
They say, "Progress was quick in the early years of AI, but now it is
not growing so fast." I find this funny, because people have been
saying the same thing as long as I can remember. In fact we are
still rapidly developing new useful systems for recognizing patterns
and for supervising processes. Furthermore, modern hardware is so
fast and reliable that we can employ almost any programs we can
create. Good new systems appear every year, for different "expert"
applications.


However, progress has been slow in other areas, for example, in the
field of understanding natural language. This is because our
computers have no access to the meanings of most ordinary words and
phrases. To see the problem, consider a word like "string" or
"rope." No computer today has any way to understand what those
things mean. For example, you can pull something with a string, but
you cannot push anything. You can tie a package with string, or fly
a kite, but you cannot eat a string or make it into a balloon. In a
few minutes, any young child could tell you a hundred ways to use a
string -- or not to use a string -- but no computer knows any of
this. The same is true for ten thousand other common words. Every
expert must know such things.
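
To make this concrete, here is a minimal sketch in Python of the kind of
"uses and non-uses" knowledge a machine would need about a word like
"string." The word lists and helper names are illustrative assumptions,
not any existing knowledge base.

# A toy store of commonsense affordances (illustrative only).
AFFORDANCES = {
    "string": {
        "can":    {"pull", "tie a package", "fly a kite", "cut", "knot"},
        "cannot": {"push", "eat", "inflate into a balloon"},
    },
    "rope": {
        "can":    {"pull", "tie", "climb", "tow"},
        "cannot": {"push", "eat"},
    },
}

def knows_use(word, action):
    """True/False if the use is known; None if we simply have no knowledge."""
    entry = AFFORDANCES.get(word)
    if entry is None:
        return None
    if action in entry["can"]:
        return True
    if action in entry["cannot"]:
        return False
    return None

print(knows_use("string", "pull"))    # True
print(knows_use("string", "push"))    # False
print(knows_use("string", "juggle"))  # None -- the hard part: most uses are missing

The point of the sketch is not the data structure but the sheer number of
entries a child effortlessly knows and a machine does not.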


This is why our "expert systems" cannot communicate. We have
programs to play chess, and programs to recognize characters. But no
chess program can read text, and no OCR system can play chess. It is
almost impossible today to get any two different programs to
cooperate. I think that this is because we have not yet developed
any systems that can use "common sense." This involves several
different kinds of problems. Once we solve them, we will all benefit
from the great AI systems of the future.


The first problem of common sense is how to build up that
"commonsense knowledge-base." It will have to contain information
about strings, roads, tools, energy, books, houses, clothing -- all
the knowledge that most children know. This will be hard because
much of that knowledge is so "obvious" that people don't need to have
words for them. Also, our future commonsense systems will need to
understand the "functions" or "uses" of all those things because, to
solve real problems, our machine must know which tools or ideas are
useful for each kind of purpose. This also means that those
knowledge-machines will need to know a lot about human psychology, to
understand what kinds of solutions are acceptable. Some research is
already under way in this area. One such project is at CM in the
United States, and there are other attempts in Japan. Generally,
there is so little research in this area that the best applications
of AI are still waiting.


Another problem is that you cannot put knowledge into a computer
until you find a way to "represent" that knowledge. Technically
speaking, for each fragment of knowledge we must first choose some
sort of "data-structure" or other way to build the knowledge into
memory. It is here that I believe the Computer Science community has
used good reasons to make bad decisions! Everywhere I go I find
people arguing about which representation to use. One person says,
"It is best to use Logic." The next person says, "No, logic is too
inflexible. Use Neural Networks." The third person says, "No,
Neural Nets are even less flexible, because you have to reduce
everything to mere numbers. Instead, you should use Semantic
Networks. Then, the different kinds of things can be linked by
concepts instead of mere numbers!" But then the first person might
complain, "No, Semantic Nets are too arbitrary and undefined. If you
use Formal Logic, that will remove those ambiguities."
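
To see why this argument is so hard to settle, here is a minimal sketch
(illustrative only, not any real system) of one small fact -- "you can
pull with a string but not push" -- expressed in three of the styles
mentioned above. The predicates, feature positions, and numbers are
invented for the example.

# 1. Logic: a rule as a (premises, conclusion) pair over symbolic predicates.
logic_rule = (["is_string(x)", "attached(x, y)"], "can_pull(y)")

# 2. "Neural" style: everything reduced to numbers -- a feature vector
#    (flexible, rigid, edible) scored against a learned weight vector.
string_features = [1.0, 0.0, 0.0]
pullable_weights = [0.9, 0.1, -0.5]
pullable_score = sum(f * w for f, w in zip(string_features, pullable_weights))

# 3. Semantic network: concepts linked by labeled relations.
semantic_net = {
    ("string", "used_for"):     ["pulling", "tying", "kite-flying"],
    ("string", "not_used_for"): ["pushing", "eating"],
    ("string", "is_a"):         ["flexible-object"],
}

print(logic_rule)
print(round(pullable_score, 2))              # 0.9
print(semantic_net[("string", "used_for")])

Each version captures part of the fact and loses something the others
keep, which is exactly what the arguments above are about.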


What is the answer? My opinion is that we can make versatile AI
machines only by using several different kinds of representations in
the same system! This is because no single method works well for all
problems; each is good for certain tasks but not for others. Also
different kinds of problems need different kinds of reasoning. For
example, much of the reasoning used in computer programming can be
logic-based. However, most real-world problems need methods that are
better at matching patterns and constructing analogies, making
decisions based on previous experience with examples, or using types
of explanations that have worked well on similar problems in the
past. How can we encourage people to make systems that use multiple
methods for representing and reasoning? First we'll have to change
some present-day ideas. For example, many students like to ask, "Is
it better to represent knowledge with Neural Nets, Logical Deduction,
Semantic Networks, Frames, Scripts, Rule-Based Systems or Natural
Language?" My teaching method is to try to get them to ask a
different kind of question. "First decide what kinds of reasoning
might be best for each different kind of problem -- and then find out
which combination of representations might work well in each case."
A trick that might help them to start doing this is to begin by
asking, for each problem, "How many different factors are involved,
and how much influence does each factor have?" This leads to a sort
of "theory-matrix."



                        NUMBERS OF CAUSES
SIZES OF
EFFECTS     few     <------>    moderate    <------>    many
-----------------------------------------------------------------
Small| TRIVIAL      | Analytic, Linear   | Neural-Connectionist |
     | Table Lookup | Statistical        | Fuzzy                |
-----------------------------------------------------------------
Mid- | Logic        | Heuristic-Symbolic | Commonsense          |
Size |              | ("Classical AI")   | Knowledge-Based,     |
     |              |                    | Analogy-Based        |
-----------------------------------------------------------------
Large| Rule-Based   | Case-Based         | INTRACTABLE          |
     | Symbolic     | Explanation-Based  | (Reformulate)        |
-----------------------------------------------------------------


When there are only a few causes, each with small effects, the problem
will be trivial.


When there are many causes, each with a small effect, then
statistical methods and neural networks may work well.


But when we have only a few large-effect causes, that might be a good
domain for logical and symbolic methods.


Then, between those extremes, we may be able to reason by using
semantic networks. Or we might be able to recognize the important
patterns by using techniques like Toshiba's multiple similarity
methods, or other ways to discover which combinations of features are
the most important ones, when there are too many possibilities to
explore them all.


When there are too many causes with large effects, problems tend to
become intractable. Yet even when that happens, we may be able to
find useful solutions by using learning and analogy.
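
As a rough illustration, the matrix above can be read as a dispatch
table: estimate how many causes a problem has and how large their
effects are, then pick a family of methods. The thresholds and labels
in this Python sketch are invented; only the shape of the decision
matters.

def suggest_method(num_causes, effect_size):
    """effect_size is one of 'small', 'mid', 'large'; thresholds are arbitrary."""
    if num_causes <= 3:
        causes = "few"
    elif num_causes <= 20:
        causes = "moderate"
    else:
        causes = "many"

    matrix = {
        ("small", "few"):      "trivial / table lookup",
        ("small", "moderate"): "analytic, linear, statistical",
        ("small", "many"):     "neural-connectionist, fuzzy",
        ("mid",   "few"):      "logic",
        ("mid",   "moderate"): "heuristic-symbolic ('classical AI')",
        ("mid",   "many"):     "commonsense knowledge-based, analogy-based",
        ("large", "few"):      "rule-based symbolic",
        ("large", "moderate"): "case-based, explanation-based",
        ("large", "many"):     "intractable -- reformulate the problem",
    }
    return matrix[(effect_size, causes)]

print(suggest_method(2, "large"))    # rule-based symbolic
print(suggest_method(100, "small"))  # neural-connectionist, fuzzy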


In the field of theoretical physics, the researchers have a well-
founded hope to discover a "theory of everything" that has some
simple and elegant form -- like one of those "unified" theories.
However, AI faces an "everything" that includes much more than the
physicists face. Because of this, we cannot expect always to
simplify and unify our knowledge. Instead, we shall have to learn
how to combine and organize our ever-growing theories -- and we ought
to undertake that adventure with a positive view. Instead of envying
the physicists, we could recognize that ours is a harder and,
ultimately, an even deeper and more interesting problem. Our job is
to find out how to make "unified theories" about how to combine non-
unified theories.

(Revised 95/07/10)
 
It borders on tired cliché: when people grow up, the only thing that really changes is how expensive their toys are.  Of course, size changes as well (we prefer real cars over Hot Wheels, and watching real men beat on each other rather than letting our action figures and imaginations run wild, etc.), but the point is that, beyond superficial changes, adults and kids aren't all that distant from each other.

There is also another idea that has a little bit of traction in my mind: that there are some things kids are just better at than adults.  The plasticity of a youthful mind (allegedly) allows the young to pick up new lessons and techniques quicker than their more aged and supposedly "wiser" counterparts.  Language, for instance, is a popular example of this.

While many of these points have been made pessimistically in the past, I tend to view them in a slightly different light.
What's wrong with being immature?
One of the things that I came to question here was the claim that somebody was "mature for their age."  It came off disturbingly like posturing: acting in a meretricious manner to create an air of being somehow more "advanced" than those who, by implication, were considered somehow less mature.
What's wrong with being a kid sometimes?
As far as I could tell, truly mature people had no need to say it; they just knew it, and it showed in their actions.  What's more, they also knew that there was nothing wrong with being a kid.

After all, it takes only a little bit of foresight to realize that someday we'll all wish we could be kids again.
 
An excellent quotation from Daniel Dennett in Darwin's Dangerous Idea:

Scientists sometimes deceive themselves into thinking that philosophical ideas are only, at best, decorations or parasitic commentaries on the hard, objective triumphs of science, and that they themselves are immune to the confusions that philosophers devote their lives to dissolving. But there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.

Exceptionally profound.
 
And now time for something completely unacademic!
Well, at least unrelated to my primary research goals...
So, as some of you know, the NHL playoffs are upon us...and that I am a hockey fan...well, more so a hockey player.
Anyways, it is the season for octopus' (or is it octopi?), late nights staying up for overtime sudden death marathon sessions of big men and sticks hitting and sweating and...
Okay, now I'm all hot and bothered.
But you get the point.  The playoffs are here.  My picks?
Most likely team to disappoint: San Jose Sharks (...history's on their side)
Most likely player to disappoint: Alex Ovechkin (teams are just gonna focus on him...which means I think Backstrom's gonna have a good post season)
Most surprising team: Ottawa Senators.  I got a feeling they're gonna take down the Pens.
Most surprising player: Pekka Rinne.  While I don't see the Preds beating the Blackhawks, he's gonna make a name for himself.
Eastern Conference Champs: Buffalo Sabres
Western Conference Champs: Chicago Blackhawks
Stanley Cup Champs: Buffalo Sabres.  Goaltending, as it always is, is the difference this time of year.  And the disparities between the Sabres and the Blackhawks in this department are too great to overcome.  No contest.
 
A thought occurred to me a while ago...
Consider HAL 9000 from that one famous movie by Stanley Kubrick whose title I can't seem to remember (was it 2000? was it 2001?).  In it, the spaceship's onboard AI computer goes awry, displaying behavior that was disturbingly misguided, coming about as close to attacking the ship's denizens as it could without actually striking them down.  Ultimately, HAL is shut down in response ("What are you doing, Dave?").
So anyway, I was considering just how we, as AI researchers, can code morality into a machine.  At the same time, were HAL's actions immoral?  Do you think HAL had any direct notion of morals at all?
So I came up with a couple ideas...
1) The main cause of evil is ignorance.
2) We cannot/should not encode morality into our artificial intelligence entities.
Of course, I would need to expand on these greatly, as the ideas as stated are incomplete and probably ill-formed.  I'll keep brainstorming.

My general point is that I think we can learn a lot about the way we approach ethics, how it develops in societies, and how an innate sense of morality is born in humans by studying how we could instill it in an artificially intelligent entity, and how we would then treat such an entity born of our own mental image.
 
Hey all.
So recently a thought occurred to me.  What I and many others are interested in developing are cognitive architectures.  Why do we call them architectures?
I suppose it's because these architectures are supposed to be models that underlie cognition.  That is to say, an architecture forms the structure of cognition at the human level of intelligence.
The ideal is that these architectures are in some way models that are representative of the way humans actually process and respond to the environment and dynamic situations, but whether our models are adequate representations is a question that we've wrangled with for centuries.
So given that we are making "structures of cognition," that we call "architectures," does that make us architects?
Perhaps not in the traditional sense that we're making buildings with an eye towards aesthetics, but in a new sense, that we're making minds with an eye towards functionality.
However, I believe that what we do is as much an art as it is a science.  I think about this in terms of modules.  Modules form an integral part of cognitive architectures, each module specialized for a particular process (spatial reasoning, temporal reasoning, etc.).  Which modules are needed to form human-level cognition is a very difficult question, and one I suspect has no easy answers.  On top of that, there are many ways to implement a module, to define how modules interact with each other, and to decide what kinds of information hiding occur between them.
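
To make the "module" idea concrete, here is a minimal sketch of the plumbing I have in mind.  The class names and interfaces are my own invention for illustration; they are not Polyscheme's, SOAR's, or any other architecture's actual API.

# A toy modular architecture: each module declares which queries it is
# specialized for, and the architecture routes a query to every module
# that claims it. Purely illustrative.

class Module:
    def handles(self, query):
        raise NotImplementedError

    def answer(self, query):
        raise NotImplementedError

class SpatialReasoner(Module):
    def handles(self, query):
        return query.get("kind") == "spatial"

    def answer(self, query):
        return f"{query['object']} is next to {query['landmark']}"

class TemporalReasoner(Module):
    def handles(self, query):
        return query.get("kind") == "temporal"

    def answer(self, query):
        return f"{query['event']} happened before {query['reference']}"

class Architecture:
    def __init__(self, modules):
        self.modules = modules

    def ask(self, query):
        # Information hiding: modules see only the query, never each other.
        return [m.answer(query) for m in self.modules if m.handles(query)]

arch = Architecture([SpatialReasoner(), TemporalReasoner()])
print(arch.ask({"kind": "spatial", "object": "the cup", "landmark": "the plate"}))

Even in a toy like this, the interesting decisions are which modules exist, what the query language between them looks like, and how much each module is allowed to see, which is where the "art" comes in.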
With so much variability in views on the modularity of mind, it's a wonder we're able to come up with any functional architecture at all.  It's at this point that I believe we cease to be simply philosophers, computer scientists, cognitive scientists, psychologists, etc., and become artists as well.  Each architecture we use defines our style, our signature.  Just as architect Frank Gehry's work has a flow and tone all its own, each architecture we build has a feel and build unique to itself, as defined by its creator.  Each purpose we dedicate the architecture to is essentially the development of a new building.  These buildings carry the style and signature of the author and of the architecture on which they were built, now made concrete for a particular purpose, be it an art gallery, an office building, or a hotel.
Cognitive Architecture development at some point is no longer just a science, but an art as well.
 
I spoke with Professor Richard Lewis here at the University of Michigan today about some work in cognitive science and architectures.  As a quick introduction: Prof. Lewis is a member of the Psychology department here and has done work with the SOAR cognitive architecture.  Specifically, his doctoral dissertation project used the SOAR cognitive architecture as a human language processing system.  Basically, he used the modules and tools present in an early version of SOAR to derive a semantic representation of natural language.

We discussed the details and differences between Polyscheme (a cognitive architecture I will be working on at RPI) and the SOAR architecture, as well as ACT-R.  Ultimately, I had two major questions:
1) Is it possible to make a cognitive architecture free of modules?
2) Is it possible to make a stateless cognitive architecture?

The short answer to #2 was no... which is perhaps indicative of a direct objection to the embodied cognition project originally headed up by Prof. Rodney Brooks at MIT.

As for #1, there was no easy answer.  Perhaps it's possible, but there is a biological basis for modularity in cognition.  While that glosses over many details that could be debated, it seems that there are simply specialized regions of the brain dedicated to JUST vision processing, or JUST sensorimotor skills.  Jerry Fodor apparently has something very interesting to say about modularity.
Maybe we can make a module-free architecture.  How that might work or what it might resemble, however, is an open question.
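
For what it's worth, here is the toy picture I have in mind for question #2.  It is purely my own sketch of the distinction, not how SOAR, ACT-R, or Polyscheme actually work; the function names and the "obstacle" example are invented.

# Two toy agent step functions, to make the "stateless" question concrete.

def stateless_step(observation):
    # Purely reactive: the action depends only on the current percept,
    # and nothing survives between calls (a Brooks-style flavor).
    return "retreat" if observation == "obstacle" else "advance"

def stateful_step(observation, memory):
    # The action can depend on accumulated internal state, and the
    # updated state is carried forward to the next step.
    memory = memory + [observation]
    if memory.count("obstacle") >= 3:
        action = "replan"
    elif observation == "obstacle":
        action = "retreat"
    else:
        action = "advance"
    return action, memory

memory = []
for obs in ["clear", "obstacle", "obstacle", "obstacle"]:
    print(stateless_step(obs), end="  vs  ")
    action, memory = stateful_step(obs, memory)
    print(action)

A fully stateless architecture would have to live entirely on the left-hand side of that output, which is roughly why the short answer I got was "no."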
 
Hey everyone.
I've made my choice to begin my PhD at RPI's Cognitive Science department in September of 2010.  I was very impressed with Prof. Cassimatis' work, and the incredibly innovative and friendly team he has managed to put together.
I look forward to being a solid contributor to the team in a few short months.
Until then, I'll keep you all posted.  :-)