Large Language Models in Cybersecurity : Threats, Exposure and Mitigation / edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, Vincent Lenders.

This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks of the increased availability of powerful large language models (LLMs) and how those risks can be mitigated. It attempts to stay ahead of malicious attackers by anticipating what they could do. It also...


Bibliographic Details
Contributors: Plancherel, Octave; Mulder, Valentin; Mermoud, Alain; Lenders, Vincent.
Place / Publishing House: Cham : Springer Nature Switzerland : Imprint: Springer, 2024.
Year of Publication:2024
Edition:1st ed. 2024.
Language:English
Physical Description:1 online resource (249 pages)
id 993679264604498
ctrlnum (MiAaPQ)EBC31361132
(Au-PeEL)EBL31361132
(CKB)32221860000041
(DE-He213)978-3-031-54827-7
(EXLCZ)9932221860000041
collection bib_alma
record_format marc
spelling Kucharavy, Andrei.
Large Language Models in Cybersecurity : Threats, Exposure and Mitigation / edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, Vincent Lenders.
1st ed. 2024.
Cham : Springer Nature Switzerland : Imprint: Springer, 2024.
1 online resource (249 pages)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks of the increased availability of powerful large language models (LLMs) and how those risks can be mitigated. It attempts to stay ahead of malicious attackers by anticipating what they could do. It also helps LLM developers understand the cybersecurity risks of their work and provides them with tools to mitigate those risks. The book starts in Part I with a general introduction to LLMs and their main application areas. Part II describes the most salient threats LLMs pose in cybersecurity, whether as tools for cybercriminals or as novel attack surfaces when integrated into existing software. Part III forecasts the exposure and the development of the technologies and science underpinning LLMs, as well as the macro-level levers available to regulators to further cybersecurity in the age of LLMs. Finally, Part IV presents mitigation techniques that should allow safe and secure development and deployment of LLMs. The book concludes with two final chapters in Part V, one speculating on what a secure design and integration of LLMs from first principles would look like and the other summarizing the dual role of LLMs in cybersecurity. This book is the second in a series published by the Technology Monitoring (TM) team of the Cyber-Defence Campus. The first book, entitled "Trends in Data Protection and Encryption Technologies", appeared in 2023. The series provides technology and trend anticipation for government, industry, and academic decision-makers as well as technical experts.
Part I: Introduction -- 1. From Deep Neural Language Models to LLMs -- 2. Adapting LLMs to Downstream Applications -- 3. Overview of Existing LLM Families -- 4. Conversational Agents -- 5. Fundamental Limitations of Generative LLMs -- 6. Tasks for LLMs and their Evaluation -- Part II: LLMs in Cybersecurity -- 7. Private Information Leakage in LLMs -- 8. Phishing and Social Engineering in the Age of LLMs -- 9. Vulnerabilities Introduced by LLMs through Code Suggestions -- 10. LLM Controls Execution Flow Hijacking -- 11. LLM-Aided Social Media Influence Operations -- 12. Deep(er)Web Indexing with LLMs -- Part III: Tracking and Forecasting Exposure -- 13. LLM Adoption Trends and Associated Risks -- 14. The Flow of Investments in the LLM Space -- 15. Insurance Outlook for LLM-Induced Risk -- 16. Copyright-Related Risks in the Creation and Use of ML/AI Systems -- 17. Monitoring Emerging Trends in LLM Research -- Part IV: Mitigation -- 18. Enhancing Security Awareness and Education for LLMs -- 19. Towards Privacy Preserving LLMs Training -- 20. Adversarial Evasion on LLMs -- 21. Robust and Private Federated Learning on LLMs -- 22. LLM Detectors -- 23. On-Site Deployment of LLMs -- 24. LLMs Red Teaming -- 25. Standards for LLM Security -- Part V: Conclusion -- 26. Exploring the Dual Role of LLMs in Cybersecurity: Threats and Defenses -- 27. Towards Safe LLMs Integration.
Open Access
Artificial intelligence.
Data protection.
Artificial Intelligence.
Data and Information Security.
Plancherel, Octave.
Mulder, Valentin.
Mermoud, Alain.
Lenders, Vincent.
3-031-54826-4
language English
format eBook
author Kucharavy, Andrei.
spellingShingle Kucharavy, Andrei.
Large Language Models in Cybersecurity : Threats, Exposure and Mitigation /
Part I: Introduction -- 1. From Deep Neural Language Models to LLMs -- 2. Adapting LLMs to Downstream Applications -- 3. Overview of Existing LLM Families -- 4. Conversational Agents -- 5. Fundamental Limitations of Generative LLMs -- 6. Tasks for LLMs and their Evaluation -- Part II: LLMs in Cybersecurity -- 7. Private Information Leakage in LLMs -- 8. Phishing and Social Engineering in the Age of LLMs -- 9. Vulnerabilities Introduced by LLMs through Code Suggestions -- 10. LLM Controls Execution Flow Hijacking -- 11. LLM-Aided Social Media Influence Operations -- 12. Deep(er)Web Indexing with LLMs -- Part III: Tracking and Forecasting Exposure -- 13. LLM Adoption Trends and Associated Risks -- 14. The Flow of Investments in the LLM Space -- 15. Insurance Outlook for LLM-Induced Risk -- 16. Copyright-Related Risks in the Creation and Use of ML/AI Systems -- 17. Monitoring Emerging Trends in LLM Research -- Part IV: Mitigation -- 18. Enhancing Security Awareness and Education for LLMs -- 19. Towards Privacy Preserving LLMs Training -- 20. Adversarial Evasion on LLMs -- 21. Robust and Private Federated Learning on LLMs -- 22. LLM Detectors -- 23. On-Site Deployment of LLMs -- 24. LLMs Red Teaming -- 25. Standards for LLM Security -- Part V: Conclusion -- 26. Exploring the Dual Role of LLMs in Cybersecurity: Threats and Defenses -- 27. Towards Safe LLMs Integration.
author_facet Kucharavy, Andrei.
Plancherel, Octave.
Mulder, Valentin.
Mermoud, Alain.
Lenders, Vincent.
author_variant a k ak
author2 Plancherel, Octave.
Mulder, Valentin.
Mermoud, Alain.
Lenders, Vincent.
author2_variant o p op
v m vm
a m am
v l vl
author2_role TeilnehmendeR
TeilnehmendeR
TeilnehmendeR
TeilnehmendeR
author_sort Kucharavy, Andrei.
title Large Language Models in Cybersecurity : Threats, Exposure and Mitigation /
title_sub Threats, Exposure and Mitigation /
title_full Large Language Models in Cybersecurity : Threats, Exposure and Mitigation / edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, Vincent Lenders.
title_fullStr Large Language Models in Cybersecurity : Threats, Exposure and Mitigation / edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, Vincent Lenders.
title_full_unstemmed Large Language Models in Cybersecurity : Threats, Exposure and Mitigation / edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, Vincent Lenders.
title_auth Large Language Models in Cybersecurity : Threats, Exposure and Mitigation /
title_new Large Language Models in Cybersecurity :
title_sort large language models in cybersecurity : threats, exposure and mitigation /
publisher Springer Nature Switzerland : Imprint: Springer,
publishDate 2024
physical 1 online resource (249 pages)
edition 1st ed. 2024.
contents Part I: Introduction -- 1. From Deep Neural Language Models to LLMs -- 2. Adapting LLMs to Downstream Applications -- 3. Overview of Existing LLM Families -- 4. Conversational Agents -- 5. Fundamental Limitations of Generative LLMs -- 6. Tasks for LLMs and their Evaluation -- Part II: LLMs in Cybersecurity -- 7. Private Information Leakage in LLMs -- 8. Phishing and Social Engineering in the Age of LLMs -- 9. Vulnerabilities Introduced by LLMs through Code Suggestions -- 10. LLM Controls Execution Flow Hijacking -- 11. LLM-Aided Social Media Influence Operations -- 12. Deep(er)Web Indexing with LLMs -- Part III: Tracking and Forecasting Exposure -- 13. LLM Adoption Trends and Associated Risks -- 14. The Flow of Investments in the LLM Space -- 15. Insurance Outlook for LLM-Induced Risk -- 16. Copyright-Related Risks in the Creation and Use of ML/AI Systems -- 17. Monitoring Emerging Trends in LLM Research -- Part IV: Mitigation -- 18. Enhancing Security Awareness and Education for LLMs -- 19. Towards Privacy Preserving LLMs Training -- 20. Adversarial Evasion on LLMs -- 21. Robust and Private Federated Learning on LLMs -- 22. LLM Detectors -- 23. On-Site Deployment of LLMs -- 24. LLMs Red Teaming -- 25. Standards for LLM Security -- Part V: Conclusion -- 26. Exploring the Dual Role of LLMs in Cybersecurity: Threats and Defenses -- 27. Towards Safe LLMs Integration.
isbn 3-031-54827-2
3-031-54826-4
callnumber-first Q - Science
callnumber-subject Q - General Science
callnumber-label Q334-342
callnumber-sort Q 3334 3342
illustrated Not Illustrated
dewey-hundreds 000 - Computer science, information & general works
dewey-tens 000 - Computer science, knowledge & systems
dewey-ones 006 - Special computer methods
dewey-full 006.3
dewey-sort 16.3
dewey-raw 006.3
dewey-search 006.3
work_keys_str_mv AT kucharavyandrei largelanguagemodelsincybersecuritythreatsexposureandmitigation
AT planchereloctave largelanguagemodelsincybersecuritythreatsexposureandmitigation
AT muldervalentin largelanguagemodelsincybersecuritythreatsexposureandmitigation
AT mermoudalain largelanguagemodelsincybersecuritythreatsexposureandmitigation
AT lendersvincent largelanguagemodelsincybersecuritythreatsexposureandmitigation
status_str n
ids_txt_mv (MiAaPQ)EBC31361132
(Au-PeEL)EBL31361132
(CKB)32221860000041
(DE-He213)978-3-031-54827-7
(EXLCZ)9932221860000041
carrierType_str_mv cr
is_hierarchy_title Large Language Models in Cybersecurity : Threats, Exposure and Mitigation /
author2_original_writing_str_mv noLinkedField
noLinkedField
noLinkedField
noLinkedField
_version_ 1803515085068435456
fullrecord <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>05217nam a22005535i 4500</leader><controlfield tag="001">993679264604498</controlfield><controlfield tag="005">20240601125507.0</controlfield><controlfield tag="006">m o d | </controlfield><controlfield tag="007">cr cnu||||||||</controlfield><controlfield tag="008">240601s2024 sz | o |||| 0|eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">3-031-54827-2</subfield></datafield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/978-3-031-54827-7</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)EBC31361132</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL31361132</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(CKB)32221860000041</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)978-3-031-54827-7</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(EXLCZ)9932221860000041</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">MiAaPQ</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">MiAaPQ</subfield><subfield code="d">MiAaPQ</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">Q334-342</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">UYQ</subfield><subfield code="2">bicssc</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">COM004000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">UYQ</subfield><subfield code="2">thema</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">006.3</subfield><subfield code="2">23</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Kucharavy, Andrei.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Large Language Models in Cybersecurity :</subfield><subfield code="b">Threats, Exposure and Mitigation /</subfield><subfield code="c">edited by Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud, Vincent Lenders.</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">1st ed. 
2024.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham :</subfield><subfield code="b">Springer Nature Switzerland :</subfield><subfield code="b">Imprint: Springer,</subfield><subfield code="c">2024.</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (249 pages)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks of the increased availability of powerful large language models (LLMs) and how they can be mitigated. It attempts to outrun the malicious attackers by anticipating what they could do. It also alerts LLM developers to understand their work's risks for cybersecurity and provides them with tools to mitigate those risks. The book starts in Part I with a general introduction to LLMs and their main application areas. Part II collects a description of the most salient threats LLMs represent in cybersecurity, be they as tools for cybercriminals or as novel attack surfaces if integrated into existing software. Part III focuses on attempting to forecast the exposure and the development of technologies and science underpinning LLMs, as well as macro levers available to regulators to further cybersecurity in the age of LLMs. Eventually, in Part IV, mitigation techniques that should allowsafe and secure development and deployment of LLMs are presented. The book concludes with two final chapters in Part V, one speculating what a secure design and integration of LLMs from first principles would look like and the other presenting a summary of the duality of LLMs in cyber-security. This book represents the second in a series published by the Technology Monitoring (TM) team of the Cyber-Defence Campus. The first book entitled "Trends in Data Protection and Encryption Technologies" appeared in 2023. This book series provides technology and trend anticipation for government, industry, and academic decision-makers as well as technical experts.</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Part I: Introduction -- 1. From Deep Neural Language Models to LLMs -- 2. Adapting LLMs to Downstream Applications -- 3. Overview of Existing LLM Families -- 4. Conversational Agents -- 5. Fundamental Limitations of Generative LLMs -- 6. Tasks for LLMs and their Evaluation -- Part II: LLMs in Cybersecurity -- 7. Private Information Leakage in LLMs -- 8. Phishing and Social Engineering in the Age of LLMs -- 9. Vulnerabilities Introduced by LLMs through Code Suggestions -- 10. LLM Controls Execution Flow Hijacking -- 11. LLM-Aided Social Media Influence Operations -- 12. Deep(er)Web Indexing with LLMs -- Part III: Tracking and Forecasting Exposure -- 13. LLM Adoption Trends and Associated Risks -- 14. The Flow of Investments in the LLM Space -- 15. Insurance Outlook for LLM-Induced Risk -- 16. Copyright-Related Risks in the Creation and Use of ML/AI Systems -- 17. 
Monitoring Emerging Trends in LLM Research -- Part IV: Mitigation -- 18. Enhancing Security Awareness and Education for LLMs -- 19. Towards Privacy Preserving LLMs Training -- 20. Adversarial Evasion on LLMs -- 21. Robust and Private Federated Learning on LLMs -- 22. LLM Detectors -- 23. On-Site Deployment of LLMs -- 24. LLMs Red Teaming -- 25. Standards for LLM Security -- Part V: Conclusion -- 26. Exploring the Dual Role of LLMs in Cybersecurity: Threats and Defenses -- 27. Towards Safe LLMs Integration.</subfield></datafield><datafield tag="506" ind1="0" ind2=" "><subfield code="a">Open Access</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Artificial intelligence.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Data protection.</subfield></datafield><datafield tag="650" ind1="1" ind2="4"><subfield code="a">Artificial Intelligence.</subfield></datafield><datafield tag="650" ind1="2" ind2="4"><subfield code="a">Data and Information Security.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Plancherel, Octave.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mulder, Valentin.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mermoud, Alain.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lenders, Vincent.</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">3-031-54826-4</subfield></datafield><datafield tag="906" ind1=" " ind2=" "><subfield code="a">BOOK</subfield></datafield><datafield tag="ADM" ind1=" " ind2=" "><subfield code="b">2024-07-03 00:37:33 Europe/Vienna</subfield><subfield code="f">system</subfield><subfield code="c">marc21</subfield><subfield code="a">2024-06-05 14:00:35 Europe/Vienna</subfield><subfield code="g">false</subfield></datafield><datafield tag="AVE" ind1=" " ind2=" "><subfield code="i">DOAB Directory of Open Access Books</subfield><subfield code="P">DOAB Directory of Open Access Books</subfield><subfield code="x">https://eu02.alma.exlibrisgroup.com/view/uresolver/43ACC_OEAW/openurl?u.ignore_date_coverage=true&amp;portfolio_pid=5356559210004498&amp;Force_direct=true</subfield><subfield code="Z">5356559210004498</subfield><subfield code="b">Available</subfield><subfield code="8">5356559210004498</subfield></datafield></record></collection>