In my opinion, the current practice of tools for thought is the visualization of cognitive processes.
Writing systems: the visualization of knowledge.
Bi-directional links: the visualization of associative thinking.
Andy Matuschak went further, embedding applied cognitive psychology (spaced repetition) into the visualization of cognitive processes to make the interface more useful and more humane.
Bret Victor takes a more radical moonshot approach: he wants to make every object (not only the computer) a tool for thought. In the past, we had different objects corresponding to different mental processes, such as the map or the post-it note. Now we have a universal thinking tool: the computer.
This is my current understanding of tools for thought. I hope to learn more so I can decide whether I should work in this field after I graduate.
If anyone has a detailed understanding of the tools-for-thought landscape, please comment and let me know!
I think tools for thought = intelligence augmentation. That is to say, it’s about evolving the tools that help us think, which act like a prosthesis that improves our innate ability to think.
In the history of civilisation there’s a parallel history of media and tooling that improved our ability to think individually and in groups (see this very good summary by Gordon Brander). Now, in the era of the computer, the idea is still the same, but we’ve got these machines we can leverage.
The idea, in fact, is to know how we’re wired (how we learn, imagine, create, resolve conflicting thoughts, get tired, get energised, etc.) and to design systems that assist us in those innate abilities and traits.
I like to divide the field “tools for thought” in two distinct areas that see lots of research and experimentation today:
Automation, with the goal of offloading certain cognitive processes completely from human to machine; think machine learning. This might not be considered part of “Tools for thought”, although I do think it has a place there.
Augmentation, in the Doug Engelbart sense. Or “Bicycle for the mind”. The human is in charge, the machine provides support to make the human’s cognitive process or abilities “better”, whatever that means.
I think we’re struggling in this field, because:
We don’t understand our cognitive processes well enough to know how to augment them with technology. Yes, we came up with some good ideas, but they seem mostly like lucky guesses, built either without any consideration of cognitive science or on top of now-outdated knowledge (“The brain works like a computer”; no, it does not).
We don’t consider how to split cognitive workloads between human and machine well. A computer is much faster at precise calculation and symbol manipulation at scale, and computers can store precisely encoded information more reliably. A lot of resources are poured into making computers good at stuff humans are better at; not enough is invested in figuring out how to split the work so that humans and machines each do what they excel at, and in reaping the benefits of that clever division.
Product development in this space today is mostly commercial, and the quest for becoming a profitable business is incompatible with long-term research for groundbreaking solutions that are orders of magnitude “better” (again, whatever that means). This is different from the huge (military) budgets and open goals that gave researchers like Engelbart, Kay, etc. a lot more room to figure things out. Any of Kay’s talks from the last two decades explains this in detail.
The people who would be best served by products in this space are experts in fields outside technology, who might not know that much about technology. Most people thinking about this, however, appear to be technologists: they understand how to build technology, but either lack deep knowledge of other domains or, if lucky, have an interest in a different domain yet likely not enough experience to fully understand the requirements and practices needed to develop tools driven not by technology, but by the needs of the target domain.
I’ve written this with your stated goal of “i hope to learn more to decide whether should i work in this field after i graduate“ in mind. I think it would be fabulous to have people coming into this field who are willing to consider new approaches to bridge the gap of domain knowledge and don’t come at this from a purely technological perspective. There’s lots of work to do, and I can’t imagine a world where this isn’t just becoming more and more important in the future.
I’m going to be blunt on this: there is only one tool for thought, and that is language. And by language I mean: an extensible and composable reality-to-concept symbolic mapping function.
Without language, the best you can get with a complex brain is imitation and basic pattern recognition. Both require experience, and through them no prediction about the future can be made.
If we interpret an idea (a particular unit of thought) to be a ‘knowledge candidate’, then it becomes clear that the ability to make predictions about the future is essential to creating knowledge. Neither imitation nor pattern recognition alone is composable, so neither can be used to make predictions about the future; they can only assert approximate facts about the present through past experiences. That makes language the essential tool, since it can model futures by composing concepts. A new experience can then be validated through testing and mapped again to new concepts, which can be further composed ad infinitum.
That is why computer languages work. The language itself does not dictate which software can be created, yet any software that can be conceived can be created with it: it is composable and extensible. In this context, reality-to-concept mapping refers to bit-to-abstraction mapping.
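To make the composability claim concrete, here is a minimal sketch in Python (my own illustrative example, not something from the discussion): small “concepts” are modeled as functions, and a generic `compose` helper (a name I made up) combines them into new concepts ad infinitum.

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Two tiny "concepts" (reality-to-concept mappings over numbers):
double = lambda x: x * 2
increment = lambda x: x + 1

# Composing them yields a new concept the language never had to anticipate:
double_then_increment = compose(increment, double)
print(double_then_increment(5))  # 11
```

The language prescribes none of these composites in advance; it only guarantees that any composite you can express is valid, which is the extensibility described above.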
Expanding on this: some orangutans have learned to use tools (sticks) to harvest ants from tree holes. According to my own definition, in order to do that they somehow had to make a series of assumptions like ‘stick is finger-like’, which is a sign of a symbolic reality-to-concept mapping. But can they compose and extend this knowledge?
[Edit] Just to illustrate my thought process, which may be confusing at first glance: in the case of the orangutan, reality equals ‘this finger of mine’ and ‘this stick object here’; concept equals ‘finger-like’; and ‘some stick’ becomes the symbol for ‘finger-like’.
Through this lens, the matter of tools for thought becomes really simple: any tool that enhances any part in the process of language is a tool for thought. There is nothing special about computers in this scenario.
What is interesting about computers is their capacity for virtualization, since speed is just a factor of the medium they operate in. Through virtualization, computers become extensible, just like language, so computers can be composed and extended atop themselves.
So, there can be an infinite number of tools for thought. There is no better or worse, in this case, just adequate or inadequate for a given task.
The way to really empower human activity through computers is then simply a matter of democratizing computer re-programming. If software can be made to generate any software at will, then that software will possess the same fundamental attributes that make language the ultimate, stellar tool for thought.
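As a toy illustration of “software that generates software” (my own sketch, not a proposal from this thread): a Python program can build new source code as plain text and then execute it, giving the running program the same extend-itself property attributed to language above. The `make_adder_source` helper is hypothetical, invented for this example.

```python
# A program that writes, and then runs, another program.
def make_adder_source(n):
    # Generate the source text of a brand-new function.
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
exec(make_adder_source(3), namespace)  # bring the generated concept to life
add_3 = namespace["add_3"]             # retrieve and use it like any other
print(add_3(10))  # 13
```

The generator is itself ordinary software, so the output of one round can feed the next, mirroring the ad-infinitum composition of concepts described earlier.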
I disagree with the idea that innate human activities should, somehow, be offloaded to machines. If we are to keep our humanhood in the process of evolution, then machines are supposed to stay machines. In this context, what many see as an aid, I see as interference.
For a computer to make any theory about our reality, such reality would need to be constantly streamed to them. They would need to process such stream in real-time as we do, store them as we do, be able to reason on them as we do, and be able to recollect them at will as we do. If not impossible, that would be a sin. Computers are much better at being computers and making theories about their own reality. The better they are at being computers, the better we will be at being people – it must be symbiotic.
If we change context and imagine a world where a child offloads its innate tasks to an adult, how would this not have a negative impact on its ability to learn and become an authentic individual? This can be observed in wildlife: if you assist an animal with tasks that should ultimately be its own, it will forever depend on that assistance, since its innate abilities will never fully develop. I think the same applies to us: the more we offload, the more dependent we become, and the further we stray from realizing our potential, individually or collectively.
That said, I am all in for this:
Unrelated to @stefanlesser, but while in this topic, there is one aspect that most people tend to overlook when they read Doug: He was as emphatic about the humane aspect as he was about the computer aspect. Let us never forget that.
This decade will surely be the stage of a race for more humane tools-for-thought – the swelling privacy crisis and the current state of technology are setting such a stage.
I’m biased, of course, but it is hard to imagine a more urgent and fundamental problem to work on. Especially if you think about how this area has plenty of great visions just waiting to become reality.
Maybe I didn’t explain myself. I talked about augmenting human capacities, the same way that a pencil and a piece of paper would help a psychotherapist write down interesting topics and bring them up later to the patient (instead of forgetting).
This is different from the current trend of “AI everything”, which is as thoughtless as the ’60s idea that computers were only useful for numerical calculations.
That being said, using any tool will have the consequence of letting some human skills atrophy. If you bike everywhere you’ll most likely develop some muscles and under-develop others. If you use a GPS to orient yourself in the forest, you may develop your pathfinding abilities, or your spatial awareness, less.
But maybe you want to play the different game you play once you use that tool. Or maybe not. I prefer not to be very opinionated on what the innate human abilities are.
I was not strictly disagreeing with you. I just used the mention as an illustration.
Strictly speaking, it is indeed rational not to be too opinionated in such open-ended matters.
But whatever the outcomes of the future state of HCI, I believe the top priority should be education: we should allow people to make educated choices about which role they want machines to play in their lives.
I tend to assume a more conservative bias on this matter in an attempt to raise a little awareness. The indiscriminate use of computers we witness today is already wreaking havoc across the world, with severe negative outcomes in many nation states, and the positive aspects are clearly not enough to balance the odds in the big picture of things; damage seems to escalate quicker than progress in this arena. And I’m an idealist, believe it or not.
Computers are just very handy tools and, sadly, it’s far too common to see people treating them like some sort of religion.
I have, like many people who took the same path, a real story to tell about how decreased computer exposure and dependency improved, to say the least, my cognition, attention, and social skills throughout the years of “unlearning” the thing. The surprising thing is that by unlearning it, you make so much better use of it.
That said, I really have this utter belief that it will be only when we reconcile with our own nature that we will be able to make computers realize their destinies - one step back, thousands ahead.
Machines like computers tend not to be such single-purpose tools. They have the power to capture our imagination and emotions. That’s the whole point. The innate abilities they can interfere with are our most valuable.
I do not think that it is pure accident that the only significant advances in computer science happened before computers became prevalent. Think about that.
While I’m certainly also more interested in augmentation over automation, I do think both have their uses and ultimately a good solution will likely be about a good balance of the two.
I’ve seen this pattern play out so many times: We identify one position as an extreme, just to jump on the other end of the spectrum. Opposition! That’s great for clear communication and emotional discussions. In the end, it’s almost never really obvious which extreme is The Right One™ — a much better question is: what is a good balance?
I agree. Overall, this pattern reveals the importance of accountability from the parties that develop and distribute end-user technology. Until we have educated users (which might take decades) capable of finding such a balance for themselves, perhaps those parties should adopt a paternalist model of public interaction, to protect people from their own ignorance in a given domain.
What do you think of this, @stefanlesser? Can paternalism be a path to balance?
A revelation I had recently is that successful tools for thought design is and must be emergent. There are a lot of monolithic memex concepts out there that start from the top down, while writing, programming, number systems, emerged from the bottom up.
I’m partial these days to “juicing” atomic interactions that feel nice and symbiotic, and playing with what systems can emerge from those building blocks.
What is nice about this approach is that the line between design and discovery seems to blur. When working with atomic building blocks, it is not uncommon to stumble upon emergent properties that you were not looking for. It feels refreshing.
Another great aspect is that you can design with a tech-agnostic mindset. No need to worry about programming languages, frameworks, etc.; just pure reason.