I have a thought to share, and it's only a little one, but it hurts a little to think about. I want to read a book about the structures and abstractions we use to hold ideas in our heads. That is too vague a definition given the level of mummery around thought and consciousness and all of that other crap, so I'll try another: I want to read about the categories of symbols and shapes we manipulate in our heads to simulate abstract representations of ideas. Still too vague.
Consider this thought exercise. The aim is to hold the entire picture of what's being said in your head until you can no longer comprehend the meaning of the statement. What you want to do is consider the relationship between two people's perspectives on which colour the first person is thinking of, and it goes like this:
- Level One: Daniel is picturing the colour red.
- Level Two: Shari thinks Daniel is picturing the colour orange.
- Level Three: Daniel thinks, Shari thinks Daniel is picturing the colour yellow.
- Level Four: Shari thinks, Daniel thinks, Shari thinks Daniel is picturing the colour green.
- Level Five: Daniel thinks, Shari thinks, Daniel thinks, Shari thinks Daniel is picturing the colour blue.
- Level Six: And so on...
The first level of this game is trivial. We can hold a direct picture of the relationship between "red" and "Daniel" in our heads, and intuitively this is the basis of higher-level thought. The second level is also easy, as we consider only two perspectives and can hold a picture of Daniel's and Shari's relationship, even though it means holding three objects (Daniel, Shari, the colour orange) and two relationships (thinks, picturing).
When I'm tired, the third level is barely accessible. At peak awareness I struggle to imagine level four, and the levels past it are entirely out of reach. By that stage the sentences are just words standing in for some relationship between Daniel and Shari; the stack of levels and relationships is too deep to model directly. Most of us are reduced to creating a mental algorithm that can generate the sentence when we need it. The point of this game is to show that at least some types of higher-level thought can be held as mental pictures, while others require us to create symbols and algorithms that represent those structures.
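That mental algorithm is easy enough to write down. Here is a minimal sketch in Python of how I imagine it, assuming the two perspectives simply alternate the way they do in the list above (the name `level_sentence` is my own invention, not established terminology):

```python
def level_sentence(n, people=("Daniel", "Shari"), colour="blue"):
    """Build the level-n sentence of the colour game."""
    # Level One is the bare statement...
    sentence = f"{people[0]} is picturing the colour {colour}"
    # ...and each further level wraps it in the other person's "thinks".
    for i in range(1, n):
        sentence = f"{people[i % 2]} thinks {sentence}"
    return sentence

for n in range(1, 6):
    print(f"Level {n}: {level_sentence(n)}")
```

The algorithm is trivial to state and execute, yet the picture it generates at level five is not one most of us can actually hold. That gap between "can generate" and "can picture" is exactly what the game exposes.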
Now here is the problem. What are the categories for the different symbolic tools and shapes we use to picture ideas in our day-to-day lives? There are a whole bunch of easily conceived patterns we use that, as far as I know, don't have formal categories. Here are a few:
- Picturing a single relationship between entities. (Level One)
- Picturing a list of relationships that can be applied to a pair of entities.
- Picturing a pair of entities and a set of actions that can be performed to derive relationships.
- Picturing an algorithm that can recursively form a sentence representing potentially infinite recursive relationships between two entities. (Level Six and beyond)
- Visually simulating an entity's relationships with other entities in a two-dimensional space whilst applying an internal list of transformations based on collisions with other entities. (Interestingly, this is easier to imagine than Level Four, yet contains seemingly more complex abstractions.)
- A tree of rules that apply transformations to an entity.
Some category names are obvious, such as stealing the term "Decision Tree" from computer science, and intuitively, every computer science data structure can be used to categorise our different thought abstractions. My primary aim with this post is to raise the possibility that we can define useful abstracting techniques for dealing with difficult, multi-dimensional thought structures. If we can identify our own thought structures, we may be able to consciously optimise difficult ideas into more consumable chunks, and then communicate our internal macros to other people in something other than our usual metaphors, to articulate ideas that are currently difficult to express.
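To make the mapping concrete, here is a toy sketch of the last pattern on the list, a tree of rules that apply transformations to an entity. Everything here is illustrative: the `RuleNode` name, the dict-based entity, and the idea that a "rule" is just a predicate plus a transformation are my assumptions, not an established formalism.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RuleNode:
    """One node in a tree of rules: a test plus a transformation."""
    test: Callable[[dict], bool]        # does this rule apply to the entity?
    transform: Callable[[dict], dict]   # how the rule changes the entity
    children: List["RuleNode"] = field(default_factory=list)

def apply_rules(node: RuleNode, entity: dict) -> dict:
    """Walk the tree, transforming the entity wherever a test passes."""
    if node.test(entity):
        entity = node.transform(entity)
        for child in node.children:
            entity = apply_rules(child, entity)
    return entity

# "If it's a ball, paint it red; if that ball is also large, double its size."
root = RuleNode(
    test=lambda e: e["kind"] == "ball",
    transform=lambda e: {**e, "colour": "red"},
    children=[RuleNode(
        test=lambda e: e["size"] > 10,
        transform=lambda e: {**e, "size": e["size"] * 2},
    )],
)

print(apply_rules(root, {"kind": "ball", "size": 12}))
# {'kind': 'ball', 'size': 24, 'colour': 'red'}
```

A few lines of code capture a structure that, held purely as a mental picture, would already be pushing against the limits described earlier.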
Problems with consciously analysing our own thought structures
And there most definitely are some. Here is my list so far:
- Analysing your own thought abstractions adds more complexity, possibly making it harder to hold the abstraction itself in your head.
- This might be another path of philosophical redundancy for the usual party of self-interested non-contributors to explore.
- If thought abstractions are categorised poorly, we could be restricting ourselves from the ones that remain unnamed.
- Thought is deeply personal to each of us, so our differing perspectives may each be true only for ourselves, rendering any shared categorisation redundant.
There is no conclusion to this thought yet. It is something to be played around with, and I hope it is interesting enough for other people to play with too. If this field already exists, I'd be deeply interested in learning about it, though I have strong fears that, as a meta-science, it will be defended by an impenetrable wall of arcane jargon. I'd love it if there were a practical application for this idea: to better understand and describe overly complex systems in more human terms.