Designing the cognitive future, part V: Reasoning and problem solving

Authors: Juan Hourcade
Posted: Tue, October 14, 2014 - 3:38:54

I have been writing about how computers are affecting and are likely to affect cognitive processes. In previous posts I have touched on perception, memory, attention, and learning. In this post, I discuss reasoning and problem solving.

Computers are quite adept at deductive reasoning. If all relevant facts are known (at least those we would use to make a decision), computers can easily use logic to make deductions without mistakes. Because of this, computers are likely to become increasingly involved in helping us make decisions and guiding our daily lives through deductive reasoning. We can already see this happening, for example, with services that tell us when to leave home for a flight based on our current location and the traffic on the way to the airport.
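The flight example amounts to a simple backward deduction from known facts. Here is a minimal sketch in Python (the function name, buffer value, and times are hypothetical, not drawn from any real service):

```python
from datetime import datetime, timedelta

def departure_time(flight_departure: datetime,
                   travel_minutes: int,
                   airport_buffer_minutes: int = 90) -> datetime:
    """Deduce when to leave home by working backward from the flight
    time through the known facts: travel time plus a check-in buffer."""
    total = travel_minutes + airport_buffer_minutes
    return flight_departure - timedelta(minutes=total)

flight = datetime(2014, 10, 14, 18, 0)        # a 6:00 PM flight
leave = departure_time(flight, travel_minutes=45)
print(leave.strftime("%H:%M"))                # 15:45 -- leave by 3:45 PM
```

Given complete inputs, the deduction is mechanical and error-free; the hard part, as the rest of this post argues, is what happens when the facts are incomplete or the rules are hidden.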

These trends could go further, with many other activities that involve often-problematic human decision-making moving to the realm of computers. This includes driving cars, selecting what to eat, scheduling our days, and so forth. In all these cases, computers would be able to process larger amounts of information in real time than people can, and provide optimal solutions based on our goals.

So what will be left for us to do? One important reasoning skill will involve understanding the rules these systems use to determine optimal outcomes, and how those rules relate to personal goals. People who are better able to do this, or who go further and determine their own sets of rules, are likely to derive greater benefits from these systems. One of the bigger challenges in this space comes from systems that could be thrown off balance by selfish users (e.g., traffic routing). People who are able to game these systems could gain unfair advantages. There are design choices to be made, including whether to make rules and goals transparent or to hide them due to their complexity.

What is clear is that the ability to make the most of the large amounts of available data relevant to our decision-making will become a critical reasoning skill. Negative consequences could follow if system recommendations are opaque and rely solely on user trust, as such opacity could facilitate large-scale manipulation of decision-making.

The other role left for people is reasoning when information is incomplete. In these situations, we usually make decisions based on heuristics we have developed from past outcomes. In other words, based on our experiences, we are likely to notice patterns and develop basic “rules of thumb.” Our previous experiences are therefore likely to go a long way in determining whether we develop useful and accurate heuristics. The closer these experiences are to a representative sample of all applicable events we could have experienced, the better our heuristics will be. On the other hand, being exposed to a biased sample of experiences is likely to lead to poorer decision-making and problem solving.
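The effect of a biased sample on a rule of thumb can be illustrated with a small simulation. Assuming a simple frequency-based heuristic and a made-up event rate (both are illustrative choices, not claims from the post), oversampling the event distorts the rule the learner takes away:

```python
import random

def learned_heuristic(experiences):
    """A simple 'rule of thumb': estimate an event's likelihood
    as its observed frequency in past experiences."""
    return sum(experiences) / len(experiences)

random.seed(42)
TRUE_RATE = 0.3  # hypothetical true frequency of some event

# Representative experiences: drawn from the real distribution.
representative = [random.random() < TRUE_RATE for _ in range(10_000)]

# Biased experiences: the event is encountered twice as often as it
# really occurs (e.g., a feed that over-delivers confirming cases).
biased = [random.random() < min(1.0, 2 * TRUE_RATE) for _ in range(10_000)]

print(round(learned_heuristic(representative), 2))  # close to 0.3
print(round(learned_heuristic(biased), 2))          # close to 0.6, a distorted rule of thumb
```

The learner applying the same honest procedure to both samples ends up with very different heuristics; only the one fed representative experiences recovers something near the truth.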

Computers could help or hurt in providing us with experiences from which we can derive useful rules of thumb. One area of concern is that information is increasingly delivered to us based on our personal likes and dislikes. If we are less likely to come across any information that challenges our biases, these are likely to become cemented, even if they are incorrect. Indeed, nowadays it is easier than ever for people with extreme views to find plenty of support and confirmation for their views online, something that would have been difficult if they were only interacting with family, friends, and coworkers. Not only that, but even people with moderate but somewhat biased views could be led into more extreme views by not seeing any information challenging those small biases, and instead seeing only information that confirms them.

There is a better way, and it involves delivering information that may sometimes make people uncomfortable by challenging their biases. This may not be the shortest path toward creating profitable information or media applications, but people using such services could reap significant long-term benefits from having access to a wider variety of information and better understanding other people’s biases.

How would you design the cognitive future for reasoning and problem solving? Do you think we should let people look “under the hood” of systems that help us make decisions? Would you prefer to experience the comfort of only seeing information that confirms your biases, or would you rather access a wider range of information, even if it sometimes makes you feel uncomfortable?


Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.
