For an AI (artificial intelligence) system to be able to reflect on itself, it must be equipped with a model of a world that includes the system itself. Some of the problems involved in constructing a model that is itself part of the reality it models can be given simple visualisations. Assume we want to put up a picture on the notice board in the room depicted to the left. We might come up with the idea that the picture should depict the room itself.

In the picture to the left, a picture of the room hangs in the room itself. So it might seem that we now have a model (the picture on the board) which is itself part of the world it models (the room). But actually, we do not. The small grey board within the picture represents the empty board as it looked before we added the picture, not the board as it looks now, with the picture on it. So something is missing.

In the picture to the left we are getting closer, but something is still missing: the small grey board in the picture should be a representation of the picture within the picture, not just of the empty board.

To the left we finally have a picture on the board that is a complete representation (model) of the room in which it hangs. Note that this gives an infinite nesting of pictures within pictures (as in the Legoland example).

Whenever we try to build a model that sits inside the world it is supposed to model, we end up in a situation like this, with an infinite nesting of submodels, sub-submodels, and so on.
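The two halves of the picture example can be sketched in a few lines of code. This is only an illustrative sketch (the names `room`, `add_picture`, and `nesting_depth` are my own, not from the text): hanging a snapshot of the room on the board always lags one step behind reality, while a model that literally contains itself corresponds to a cyclic reference, which realises the infinite nesting in finite memory.

```python
import copy

def add_picture(room):
    """Hang a snapshot of the room, as it looks right now, on the board."""
    room["board"] = copy.deepcopy(room)
    return room

def nesting_depth(room):
    """Count how many pictures-within-pictures the board holds."""
    depth = 0
    while isinstance(room["board"], dict):
        room = room["board"]
        depth += 1
    return depth

# Each snapshot is taken *before* it is hung, so the innermost board is
# always empty: the model lags one step behind the world, just like the
# grey board in the first picture.
room = {"board": "empty"}
for _ in range(3):
    add_picture(room)
print(nesting_depth(room))  # → 3: still finite, still incomplete

# A truly complete model must contain itself. As a data structure this
# is a cyclic reference, giving the infinite nesting in finite memory:
room["board"] = room
assert room["board"]["board"]["board"] is room
```

The loop makes the regress concrete: no matter how many snapshots we take, the deepest board is empty, so the model is never complete. Only the self-referential assignment closes the gap, at the price that naive traversal (like `nesting_depth`) would never terminate on it.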
