Making AI intelligible : philosophical foundations / Herman Cappelen and Josh Dever.

Can humans and artificial intelligences share concepts and communicate? This book shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications.

Bibliographic Details
Superior document: Oxford scholarship online
Author: Cappelen, Herman
Contributor: Dever, Josh
Place / Publishing House: Oxford : Oxford University Press, 2021.
Year of Publication:2021
Edition:First edition.
Language:English
Series:Oxford scholarship online.
Physical Description:1 online resource (192 pages).
Notes:
  • This edition also issued in print: 2021.
  • "This is an open access publication, available online and distributed under the terms of a Creative Commons Attribution - Non Commercial - No Derivatives 4.0 International licence (CC BY-NC-ND 4.0)"--Home page.
id 993545041804498
ctrlnum (CKB)5590000000462657
(StDuBDS)EDZ0002526633
(MiAaPQ)EBC6568369
(Au-PeEL)EBL6568369
(OCoLC)1249471030
(oapen)https://directory.doabooks.org/handle/20.500.12854/70055
(EXLCZ)995590000000462657
collection bib_alma
record_format marc
spelling Cappelen, Herman, author.
Making AI intelligible : philosophical foundations / Herman Cappelen and Josh Dever.
First edition.
Oxford : Oxford University Press, 2021.
1 online resource (192 pages).
text rdacontent
computer rdamedia
online resource rdacarrier
Oxford scholarship online
Specialized.
Cover -- Making AI Intelligible Philosophical Foundations: Philosophical Foundations -- Copyright -- Contents -- Part I: Introduction and Overview -- Chapter 2: Alfred (the Dismissive Sceptic): Philosophers, Go Away! -- A Dialogue with Alfred (the Dismissive Sceptic) -- Part II: A Proposal for how to Attribute Content to AI -- Chapter 3: Terminology: Aboutness, Representation, and Metasemantics -- Loose Talk, Hyperbole, or 'Derived Intentionality'? -- Aboutness and Representation -- AI, Metasemantics, and the Philosophy of Mind -- Chapter 4: Our Theory: De-Anthropocentrized Externalism -- First Claim: Content for AI Systems Should Be Explained Externalistically -- Second Claim: Existing Externalist Accounts of Content Are Anthropocentric -- Third Claim: We Need Meta-Metasemantic Guidance -- A Meta-Metasemantic Suggestion: Interpreter-centric Knowledge-Maximization -- Chapter 5: Application: The Predicate 'High Risk' -- The Background Theory: Kripke-Style Externalism -- Starting Thought: SmartCredit Expresses High Risk Contents Because of its Causal History -- Anthropocentric Abstraction of 'Anchoring' -- Schematic AI-Suitable Kripke-Style Metasemantics -- Complications and Choice Points -- Taking Stock -- Appendix to Chapter 5: More on Reference Preservation in ML Systems -- Chapter 6: Application: Names and the Mental Files Framework -- Does SmartCredit Use Names? -- The Mental Files Framework to the Rescue? -- Epistemically Rewarding Relations for Neural Networks? -- Case Studies, Complications, and Reference Shifts -- Taking Stock -- Chapter 7: Application: Predication and Commitment -- Predication: Brief Introduction to the Act Theoretic View -- Turning to AI and Disentangling Three Different Questions -- The Metasemantics of Predication: A Teleofunctionalist Hypothesis -- Some Background: Teleosemantics and Teleofunctional Role.
Predication in AI -- AI Predication and Kinds of Teleology -- Why Teleofunctionalism and Not Kripke or Evans? -- Teleofunctional Role and Commitment (or Assertion) -- Theories of Assertion and Commitment for Humans and AI -- Part III: Conclusion -- Chapter 8: Four Concluding Thoughts -- Dynamic Goals -- A Story of Neural Networks Taking Over in Ways We Cannot Understand -- Why This Story is Disturbing and Relevant -- Taking Stock and General Lessons -- The Extended Mind and AI Concept Possession -- Background: The Extended Mind and Active Externalism -- The Extended Mind and Conceptual Competency -- From Experts Determining Meaning to Artificial Intelligences Determining Meaning -- Some New Distinctions: Extended Mind Internalist versus Extended Mind Externalists -- Kripke, Putnam, and Burge as Extended Mind Internalists -- Concept Possession, Functionalism, and Ways of Life -- Implications for the View Defended in This Book -- An Objection Revisited -- Reply to the Objection -- What Makes it a Stop Sign Detector? -- Adversarial Perturbations -- Explainable AI and Metasemantics -- Bibliography -- Index.
English
This edition also issued in print: 2021.
"This is an open access publication, available online and distributed under the terms of a Creative Commons Attribution - Non Commercial - No Derivatives 4.0 International licence (CC BY-NC-ND 4.0)"--Home page.
Includes bibliographical references and index.
Can humans and artificial intelligences share concepts and communicate? This book shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications.
Description based on online resource; title from home page (viewed on April 22, 2021).
Open access.
Artificial intelligence Philosophy.
0-19-289472-2
Dever, Josh, author.
Oxford scholarship online.
language English
format eBook
author Cappelen, Herman,
Dever, Josh,
spellingShingle Cappelen, Herman,
Dever, Josh,
Making AI intelligible : philosophical foundations /
Oxford scholarship online
author_facet Cappelen, Herman,
Dever, Josh,
Dever, Josh,
author_variant h c hc
j d jd
author_role VerfasserIn
VerfasserIn
author2 Dever, Josh,
author2_role TeilnehmendeR
author_sort Cappelen, Herman,
title Making AI intelligible : philosophical foundations /
title_sub philosophical foundations /
title_full Making AI intelligible : philosophical foundations / Herman Cappelen and Josh Dever.
title_fullStr Making AI intelligible : philosophical foundations / Herman Cappelen and Josh Dever.
title_full_unstemmed Making AI intelligible : philosophical foundations / Herman Cappelen and Josh Dever.
title_auth Making AI intelligible : philosophical foundations /
title_new Making AI intelligible :
title_sort making ai intelligible : philosophical foundations /
series Oxford scholarship online
series2 Oxford scholarship online
publisher Oxford University Press,
publishDate 2021
physical 1 online resource (192 pages).
edition First edition.
isbn 0-19-264756-3
0-19-191560-2
0-19-264755-5
0-19-289472-2
callnumber-first Q - Science
callnumber-subject Q - General Science
callnumber-label Q334
callnumber-sort Q 3334.7
illustrated Not Illustrated
dewey-hundreds 000 - Computer science, information & general works
dewey-tens 000 - Computer science, knowledge & systems
dewey-ones 006 - Special computer methods
dewey-full 006.301
dewey-sort 16.301
dewey-raw 006.301
dewey-search 006.301
oclc_num 1249471030
work_keys_str_mv AT cappelenherman makingaiintelligiblephilosophicalfoundations
AT deverjosh makingaiintelligiblephilosophicalfoundations
status_str n
ids_txt_mv (CKB)5590000000462657
(StDuBDS)EDZ0002526633
(MiAaPQ)EBC6568369
(Au-PeEL)EBL6568369
(OCoLC)1249471030
(oapen)https://directory.doabooks.org/handle/20.500.12854/70055
(EXLCZ)995590000000462657
hierarchy_parent_title Oxford scholarship online
is_hierarchy_title Making AI intelligible : philosophical foundations /
container_title Oxford scholarship online
author2_original_writing_str_mv noLinkedField
_version_ 1804866486909534208
fullrecord <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>02133nam a2200397 i 4500</leader><controlfield tag="001">993545041804498</controlfield><controlfield tag="005">20230125203726.0</controlfield><controlfield tag="006">m|||||o||d||||||||</controlfield><controlfield tag="007">cr#|||||||||||</controlfield><controlfield tag="008">210305s2021 enk fob 001|0|eng|d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0-19-264756-3</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0-19-191560-2</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">0-19-264755-5</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(CKB)5590000000462657</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(StDuBDS)EDZ0002526633</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)EBC6568369</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL6568369</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1249471030</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(oapen)https://directory.doabooks.org/handle/20.500.12854/70055</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(EXLCZ)995590000000462657</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">StDuBDS</subfield><subfield code="b">eng</subfield><subfield code="c">StDuBDS</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">Q334.7</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.301</subfield><subfield code="2">23</subfield></datafield><datafield 
tag="100" ind1="1" ind2=" "><subfield code="a">Cappelen, Herman,</subfield><subfield code="e">author.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Making AI intelligible :</subfield><subfield code="b">philosophical foundations /</subfield><subfield code="c">Herman Cappelen and Josh Dever.</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">First edition.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Oxford :</subfield><subfield code="b">Oxford University Press,</subfield><subfield code="c">2021.</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (192 pages).</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="1" ind2=" "><subfield code="a">Oxford scholarship online</subfield></datafield><datafield tag="521" ind1=" " ind2=" "><subfield code="a">Specialized.</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Cover -- Making AI Intelligible Philosophical Foundations: Philosophical Foundations -- Copyright -- Contents -- Part I: Introduction and Overview -- Chapter 2: Alfred (the Dismissive Sceptic): Philosophers, Go Away! -- A Dialogue with Alfred (the Dismissive Sceptic) -- Part II: A Proposal for how to Attribute Content to AI -- Chapter 3: Terminology: Aboutness, Representation, and Metasemantics -- Loose Talk, Hyperbole, or 'Derived Intentionality'? 
-- Aboutness and Representation -- AI, Metasemantics, and the Philosophy of Mind -- Chapter 4: Our Theory: De-Anthropocentrized Externalism -- First Claim: Content for AI Systems Should Be Explained Externalistically -- Second Claim: Existing Externalist Accounts of Content Are Anthropocentric -- Third Claim: We Need Meta-Metasemantic Guidance -- A Meta-Metasemantic Suggestion: Interpreter-centric Knowledge-Maximization -- Chapter 5: Application: The Predicate 'High Risk' -- The Background Theory: Kripke-Style Externalism -- Starting Thought: SmartCredit Expresses High Risk Contents Because of its Causal History -- Anthropocentric Abstraction of 'Anchoring' -- Schematic AI-Suitable Kripke-Style Metasemantics -- Complications and Choice Points -- Taking Stock -- Appendix to Chapter 5: More on Reference Preservation in ML Systems -- Chapter 6: Application: Names and the Mental Files Framework -- Does SmartCredit Use Names? -- The Mental Files Framework to the Rescue? -- Epistemically Rewarding Relations for Neural Networks? -- Case Studies, Complications, and Reference Shifts -- Taking Stock -- Chapter 7: Application: Predication and Commitment -- Predication: Brief Introduction to the Act Theoretic View -- Turning to AI and Disentangling Three Different Questions -- The Metasemantics of Predication: A Teleofunctionalist Hypothesis -- Some Background: Teleosemantics and Teleofunctional Role.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Predication in AI -- AI Predication and Kinds of Teleology -- Why Teleofunctionalism and Not Kripke or Evans? 
-- Teleofunctional Role and Commitment (or Assertion) -- Theories of Assertion and Commitment for Humans and AI -- Part III: Conclusion -- Chapter 8: Four Concluding Thoughts -- Dynamic Goals -- A Story of Neural Networks Taking Over in Ways We Cannot Understand -- Why This Story is Disturbing and Relevant -- Taking Stock and General Lessons -- The Extended Mind and AI Concept Possession -- Background: The Extended Mind and Active Externalism -- The Extended Mind and Conceptual Competency -- From Experts Determining Meaning to Artificial Intelligences Determining Meaning -- Some New Distinctions: Extended Mind Internalist versus Extended Mind Externalists -- Kripke, Putnam, and Burge as Extended Mind Internalists -- Concept Possession, Functionalism, and Ways of Life -- Implications for the View Defended in This Book -- An Objection Revisited -- Reply to the Objection -- What Makes it a Stop Sign Detector? -- Adversarial Perturbations -- Explainable AI and Metasemantics -- Bibliography -- Index.</subfield></datafield><datafield tag="546" ind1=" " ind2=" "><subfield code="a">English</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">This edition also issued in print: 2021.</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">"This is an open access publication, available online and distributed under the terms of a Creative Commons Attribution - Non Commercial - No Derivatives 4.0 International licence (CC BY-NC-ND 4.0)"--Home page.</subfield></datafield><datafield tag="504" ind1=" " ind2=" "><subfield code="a">Includes bibliographical references and index.</subfield></datafield><datafield tag="520" ind1="8" ind2=" "><subfield code="a">Can humans and artificial intelligences share concepts and communicate? This book shows that philosophical work on the metaphysics of meaning can help answer these questions. 
Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications.</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on online resource; title from home page (viewed on April 22, 2021).</subfield></datafield><datafield tag="506" ind1="0" ind2=" "><subfield code="a">Open access.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Artificial intelligence</subfield><subfield code="x">Philosophy.</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">0-19-289472-2</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Dever, Josh,</subfield><subfield code="e">author.</subfield></datafield><datafield tag="830" ind1=" " ind2="0"><subfield code="a">Oxford scholarship online.</subfield></datafield><datafield tag="906" ind1=" " ind2=" "><subfield code="a">BOOK</subfield></datafield><datafield tag="ADM" ind1=" " ind2=" "><subfield code="b">2024-07-17 23:16:27 Europe/Vienna</subfield><subfield code="f">system</subfield><subfield code="c">marc21</subfield><subfield code="a">2021-05-11 07:57:52 Europe/Vienna</subfield><subfield code="g">false</subfield></datafield><datafield tag="AVE" ind1=" " ind2=" "><subfield code="i">DOAB Directory of Open Access Books</subfield><subfield code="P">DOAB Directory of Open Access Books</subfield><subfield code="x">https://eu02.alma.exlibrisgroup.com/view/uresolver/43ACC_OEAW/openurl?u.ignore_date_coverage=true&amp;portfolio_pid=5337835070004498&amp;Force_direct=true</subfield><subfield code="Z">5337835070004498</subfield><subfield code="b">Available</subfield><subfield 
code="8">5337835070004498</subfield></datafield></record></collection>