Common sense for humans and machines: Making decisions in the absence of information

Artificial intelligence, deep neural networks, and machine learning systems all make use of data, lots of data. But what about the data or information that is not present: the absent?


People make everyday decisions while lacking information about the available options and their outcomes. To do that, we engage in abductive inference: taking the best guess. This type of thinking is often referred to as common sense, and it works for humans. But what about machines? Can we make algorithms use human-level common sense? Should we?

At this seminar, we will discuss what we can learn from human intelligence, rationality, and types of decision-making, and how this knowledge can inform the development of algorithmic decision-making. Specifically, we will focus on the productive force of absence in decision-making processes.

Attendance is free but requires registration. Register for the seminar.

Programme

Day 1

Time         Activity
9.30-10.15   Arrival, coffee, and croissants
10.15-10.30  Welcome and introduction
10.30-12.00  Keynote: Kira Vrist Rønn (University of Southern Denmark), "A new competition for insight: Democratizing intelligence?"
12.00-13.00  Lunch
13.00-14.30  Presentations and Q&A
             Ekaterina Pashevich: Absence-based inference in human and algorithmic decision-making
             Stefano Calzati: Designing Insightful Machines: From Absence of Information to Fundamental Uncertainty
14.30-15.00  Coffee break
15.00-16.30  Presentations and Q&A
             Sebastian Felix Schwemer: Humans-in-the-loop in the EU: from lip-service to concept?
             Abhishek Gupta: Bridging the Gap: Combining Human Intuition and Machine Logic for Optimal Decision-Making
16.30-16.45  Wrap-up for the day
18.00-21.00  Dinner at Il Buco, Njalsgade 19C, 2300 Copenhagen S

 

Day 2

Time         Activity
8.30-9.00    Arrival, coffee, and croissants
9.00-10.30   Keynote: Michael P. Lynch (University of Connecticut), "Truth, AI and the Epistemic Condition"
10.30-10.45  Coffee break
10.45-12.15  Presentations and Q&A
             Tanja Anna Wiehn: Synthetic (Data) Universality
             Jens Ulrik Hansen: Dealing with missing data in deep learning and machine learning
12.15-12.30  Closing remarks
12.30-14.00  Working Lunch: Advisory Board

 

Abstracts

 

Kira Vrist Rønn, University of Southern Denmark
A new competition for insight: Democratizing intelligence?

Sophisticated intelligence capabilities have become commonplace within civil society during the past decade. The amount and types of information that can now be retrieved from open sources are ever-expanding. As a result, open-source intelligence (OSINT) has grown in both volume and value, from the perspective of traditional intelligence services and of wider civil society alike.

The war in Ukraine serves as an urgent example of the role played by OSINT and social media information in warfare, e.g. in geolocating troops, identifying war crimes, and sourcing information from the local public.

This emphasis on OSINT has led prominent scholars to announce a "democratization of intelligence", referring to the entry of civil society into the otherwise sealed walls of the intelligence services. In my presentation, I will flesh out what this means and what it implies, and examine the potentials and pitfalls connected to this 'democratization' tendency, challenging the common-sense notion that intelligence is "inherently governmental".

 

 

Michael P. Lynch, University of Connecticut
Truth, AI and the Epistemic Condition

This past year, the richest man in the world, Elon Musk, asserted that he was aiming to create TruthGPT, a “maximum truth-seeking” generative AI system.

This should alarm us for many reasons, not least the context of the announcement: an interview with the far-right political commentator Tucker Carlson. But it should also prompt us to stop and ask some basic questions about the epistemological impact that AI will have on democracies, especially democracies, like my own, that are in crisis. In this talk I ask, and attempt to answer, some of those questions, three in particular:

(1) In what sense is it even possible to design generative AI to seek truth?

(2) Even if that, or something like it, is possible, what effects might actually using AI have on human epistemic agency?

(3) How might this in turn impact democracy?