[Institute of Philosophy] Invited Lectures at SNU by a Visiting Scholar in Philosophy of Science (Prof. Florian Boge)

2024.08.28. ~ 2024.08.29.

The Institute of Philosophy at Seoul National University, together with the Institute for Data Innovation in Science and the AI Institute's Center for AI ELSI, is hosting a series of invited lectures by a visiting scholar, as detailed below. The speaker is Prof. Florian Boge of TU Dortmund University in Germany. Prof. Boge is one of the leading researchers in the philosophy of science concerned with artificial intelligence, and his work focuses on the impact of AI on scientific understanding. In these lectures, he will clarify what the opacity of machine learning models, and deep learning models in particular, amounts to; examine whether AI can possess understanding; and characterize the goals of developing explainable AI (XAI) in relation to understanding. We welcome your interest and participation.

If you wish to attend, please fill out the following form:

https://forms.gle/VbkiWBf8EuMHAUo27


==
2024-Summer | Seoul National University

Special Lectures on Philosophy of Science


Theme: Machine Learning, Black Box, and Understanding

Speaker: Prof. Dr. Florian Boge (TU Dortmund University)

Moderator: Hyundeuk Cheon (Seoul National University)




Program


1. Special Lectures

Date & Time: Wednesday, August 28, 2024, 3:00 - 6:00 PM

Venue: Room 302, Sinyang Academic Information Center, College of Humanities, Seoul National University

Lecture 1. What is Special About Deep Learning Opacity?

Deep Learning systems, also called Deep Neural Networks (DNNs), are the state of the art in Artificial Intelligence (AI). It is well known that these systems are in some sense “black boxed” or opaque, roughly meaning that it is not easy to understand details about their functioning on various levels and in various respects. However, similar things have long been known to be true of more traditional scientific devices, such as simulation models. Hence, why is there such a big fuss about Deep Learning opacity, and is there anything special about it? In this lecture, I am going to elaborate on an independent dimension of the opacity of DNNs, which is unlike the opacity associated with computer simulations. As I will show, it is this second dimension that makes DNNs special devices, at least within scientific research.
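
As a minimal illustration of this contrast, assuming scikit-learn and a toy dataset of my own choosing: every parameter of a small trained network can be printed and inspected, yet the printout does not state what any unit has come to represent, in the way the hand-written equations of a simulation model do.

from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Toy task and a small fully connected network (illustrative choices only).
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)

# Access is not the problem: every learned weight matrix is available for inspection...
for i, W in enumerate(net.coefs_):
    print(f"layer {i} weights, shape {W.shape}:")
    print(W.round(2))

# ...but nothing in these numbers says what the learned features are; that
# interpretive gap is one way to gloss the opacity discussed above.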

Lecture 2. Re-Assessing Machine Cognition in the Age of Deep Learning

How seriously should we take the “I” in AI? Do ChatGPT and co. literally understand our prompts? This question has long puzzled philosophers and scientists alike, with verdicts ranging from outright enthusiasm to profound pessimism. In this lecture, I will re-address the issue from two vantage points. First, I will suggest that Searle’s classic “Chinese Room Argument” can be revived in the age of Deep Learning, but in ways quite different from those Searle himself envisioned. Combining a more careful approach to Deep Learning theory with a slight alteration of the original scenario, which I call “The Chinese Library”, I will show that, insofar as Searle’s arguments were applicable in the 1980s, they are still applicable today. Second, I will suggest a close connection between understanding and the possession of concepts and, based on evidence from the technical literature, argue that we should not assume that present-day DNNs have concepts, and hence that they understand anything.


2. Philosophy of Science Mini-Workshop

Date & Time: Thursday, August 29, 2024, 1:00 - 3:30 PM

Venue: Room 309, Building 7, College of Humanities, Seoul National University

Lecture 3. Understanding (and) Machine Learning's Black Box Explanation Problems

Practitioners in eXplainable Artificial Intelligence (XAI) view themselves as addressing a range of problems they call ‘black box explanation problems’ (Guidotti et al., 2018): Problems either related to rendering a Machine Learning (ML) model transparent or to rendering its outputs transparent. Many (Páez, 2019; Langer et al., 2021; Zednik, 2021) have argued that standards of explanation in XAI vary with the stakeholder. Buchholz (2023) extends this idea into a means-ends approach: Different stakeholders use different instruments of XAI to render different aspects of ML transparent, and with different goals in mind. In my talk, I shall argue for a more unified view within the context of scientific application. In particular, I suggest that we need to antecedently distinguish between two sets of aims in deploying XAI methods: proximate and ultimate aims. While the proximate aim of deploying XAI methods within the context of a scientific application may be to render either the model or its outputs understandable, the ultimate aim here is to increase one’s understanding of a given subject matter. Furthermore, building on the literature on objectual understanding (Elgin, 2017; Dellsén, 2019), and following a number of suggestions from other philosophers of science (Sullivan, 2019; Knüsel & Baumberger, 2020; Meskhidze, 2021; Räz & Beisbart, 2022), I ask whether the ultimate aim cannot also be pursued by means of ML but without any explanations.
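
As a minimal sketch of the two proximate aims just distinguished, assuming scikit-learn (the dataset, the black-box model, and the occlusion-by-mean attribution are illustrative choices only, not methods from the talk): a global surrogate tree is fit to a black box's own predictions to render the model transparent, and a single prediction is explained by occluding each feature and recording the shift in predicted probability to render an output transparent.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# (1) Rendering the model transparent: fit an interpretable surrogate to the
# black box's own predictions and read off its rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(data.feature_names)))

# (2) Rendering an output transparent: explain one prediction by replacing each
# feature with its training mean and recording the change in predicted probability.
x = X[:1]
base = black_box.predict_proba(x)[0, 1]
means = X.mean(axis=0)
shifts = []
for j in range(X.shape[1]):
    x_occ = x.copy()
    x_occ[0, j] = means[j]
    shifts.append(base - black_box.predict_proba(x_occ)[0, 1])
for j in np.argsort(np.abs(shifts))[::-1][:5]:
    print(f"{data.feature_names[j]}: {shifts[j]:+.3f}")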

Presentations by Early-Career Researchers


Hosted by: Institute for Data Innovation in Science, Institute of Philosophy, and AI Institute ELSI Center, Seoul National University

Inquiries: Koo Bonjin, assistant (koobon1998@snu.ac.kr)