| Definition | Author | Link | Nationality | Year | Keywords |
|---|---|---|---|---|---|
| AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful | EU AI Act | https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#ai-act-different-rules-for-different-risk-levels-0 | European Union | 2024 | AI-generated, manipulated image, audio, video, content, resembles, existing persons, objects, places, entities, events, falsely, authentic, truthful |
| Multimedia that has either been synthetically created or manipulated using some form of machine or deep learning (artificial intelligence) technology. | NSA, U.S. federal agencies | https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3523329/nsa-us-federal-agencies-advise-on-deepfake-threats/ | United States | 2023 | multimedia, synthetically created, manipulated |
| ‘Deepfakes’ and synthetic media are new forms of audiovisual manipulation that allow people to create realistic simulations of someone’s face, voice or actions. | WITNESS Lab | https://lab.witness.org/brazil-deepfakes-prepare-now/ | Brazil | No data | synthetic media, audiovisual manipulation, realistic simulations, face, voice, actions |
| Increasingly powerful deep learning algorithms—a subset of machine learning—accompanied by rapid advances in computing power have enabled the generation of hyper-realistic synthetic media; malicious synthetic media is commonly referred to as “deepfakes” (a portmanteau of deep learning and fakes). Deepfakes include all forms of digital content—video, text, images, or audio—that have been either manipulated or created from scratch using deep learning algorithms to primarily mislead, deceive, or influence audiences. | UNIDIR, UN | https://unidir.org/wp-content/uploads/2023/05/UNIDIR_2021_Innovations_Dialogue.pdf | No data | 2021 | hyper-realistic, synthetic media, malicious, digital content, video, text, images, audio, manipulated, scratch, mislead, deceive, influence audiences |
| A deepfake is a fraudulent piece of content—typically audio or video—that has been manipulated or created using artificial intelligence. This content replaces a real person’s voice, image, or both with similar looking and sounding artificial likenesses. | Microsoft | https://www.microsoft.com/en-us/microsoft-365-life-hacks/privacy-and-safety/deepfakes | United States | 2022 | fraudulent content, audio, video, manipulated, artificial intelligence, real person, voice, image, similar looking, artificial likenesses |
| Deepfakes (stemming from “deep learning” and “fake”) are created by techniques that can superimpose face images of a target person onto a video of a source person to make a video of the target person doing or saying things the source person does [...] In a broader definition, deepfakes are artificial intelligence-synthesized content that can also fall into two other categories, i.e., lip-sync and puppet-master | Thanh Thi Nguyen, Quoc Viet Hung Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen, Thien Huynh-The, Saeid Nahavandi, Thanh Tam Nguyen, Quoc-Viet Pham, Cuong M. Nguyen | https://arxiv.org/pdf/1909.11573 | No data | 2022 | fake, face images, target person, video, synthesized content |
| A deepfake, coming from the words deep learning and fake, is synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Deepfakes make use of powerful techniques from machine learning and artificial intelligence to manipulate and generate visual and audio content with a high potential to deceive. The underlying mechanism for deepfake creation is deep learning models such as autoencoders and generative adversarial networks (GANs). These models are used to examine facial expressions and movements of a person and synthesize facial images of another person making analogous expressions and movements. | Jaume Clave | https://colab.research.google.com/github/JaumeClave/deepfakes_first_order_model/blob/master/first_order_model_deepfakes.ipynb#scrollTo=cdO_RxQZLahB | No data | 2020 | fake, synthetic media, person, image, video, likeness, manipulate, generate, visual content, audio content, deceive, autoencoders, generative adversarial networks (GAN), facial expressions, facial movements, facial images |
| Representing a specific type of synthetic media, deepfakes are created using sophisticated AI algorithms to depict individuals in highly realistic, and often deceptive, situations that never occurred. These advanced AI models are capable of creating highly convincing visual and audio content, often featuring individuals in fabricated scenarios or expressing statements they never made | INTERPOL | https://www.interpol.int/content/download/21179/file/BEYOND%20ILLUSIONS_Report_2024.pdf | No data | 2024 | synthetic media, individuals, realistic situations, never occurred, convincing content, video, audio, fabricated scenarios, statements |
| Manipulated or synthetic audio or visual media that seem authentic, and which feature people that appear to say or do something they have never said or done, produced using artificial intelligence techniques, including machine learning and deep learning | European Parliament | https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690039/EPRS_STU(2021)690039_EN.pdf | European Union | 2021 | manipulated, synthetic media, audio, visual, authentic, people, artificial intelligence |
| Deepfakes refer to synthetic or doctored media digitally manipulated and altered to convincingly misrepresent or impersonate someone using a form of artificial intelligence or AI | Khurana & Khurana | https://www.khuranaandkhurana.com/2024/04/19/the-use-of-deep-fake/#_ftnref2 | India | 2024 | synthetic media, doctored media, digitally manipulated, altered, misrepresent, impersonate |
| Deepfakes are a type of synthetic media mostly disseminated with malicious intent, although they are now often used for positive applications too [...] Deepfake technology applies the power of deep learning to audio and audio-visual content. Employed properly, these models can produce content that convincingly shows people saying or doing things they never did, or create people that never existed in the first place. The rise of the application of AI to generate deepfakes is already having, and will have, further implications for the way people treat recorded media. | Europol | https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf | European Union | 2022 | synthetic media, disseminated, malicious intent, deep learning, audio, audio-visual content, convincingly, generated content, people, never existed, artificial intelligence, recorded media |
| The term “deepfakes” is derived from the fact that the technology involved in creating this particular style of manipulated content (or “fakes”) involves the use of deep learning techniques [...] All of these types of deepfake media – image, video, audio, and text – could be used to simulate or alter a specific individual or the representation of that individual. This is the primary threat of deepfakes. However, this threat is not restricted to deepfakes alone, but incorporates the entire field of “Synthetic Media” and their use in disinformation. | Homeland Security | https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf | United States | 2019 | manipulated content, media, image, video, audio, text, simulate, alter, specific individual, representation, threat, synthetic media, disinformation |
| Deepfakes are a type of synthetic media—commonly generated using artificial intelligence/machine learning (AI/ML)—presenting plausible and realistic videos, pictures, audio or text of events which never happened | Homeland Security | https://www.dhs.gov/sites/default/files/2022-10/AEP%20DeepFake%20PHASE2%20FINAL%20corrected20221006.pdf | United States | 2021 | synthetic media, machine learning, realistic, videos, pictures, audio, text, events, never happened |
| Compositional deepfakes are a newly emerging threat and are defined as the combination of multiple fabricated media sources that seem disparate but corroborate each other, leading to synthetic histories that are very believable [...] is the generation of realistic audio, video, text [...] and images that reinforce the story. Compositional deepfake plans could be used to monitor real-time world events, introduce specific deepfake media stories that influence the narrative enough to lead to an engineered real-world event that is a direct consequence of the ‘fake’ media stories. Similarly concerning is the ability to create interactive deepfakes [...] a convincing real-time [simulation] created without an individual’s consent. | UNESCO (Chowdhury, Rumman and Lakshmi, Dhanya) | https://unesdoc.unesco.org/ark:/48223/pf0000387483 | No data | 2023 | emerging threat, fabricated media sources, synthetic histories, disparate, believable, generation, realistic, audio, video, text, images, real-time, world events, influence, narrative, fake media |
| Deepfake is an artificial intelligence technique that allows you to edit fake videos of apparently real people, using unsupervised learning algorithms and existing videos or images. The result of this technique is a very realistic, albeit fictitious, video. | UNESCO (Zuazo, Natalia) | https://unesdoc.unesco.org/ark:/48223/pf0000388124_eng?posInSet=17&queryId=cb0cf487-db86-4f81-9127-89fb45448eb0 | No data | 2023 | edit, fake, videos, real people, unsupervised learning algorithms, existing videos, images, realistic, fictitious video |
| Deepfakes – realistic AI-generated audio, video, or images that can recreate a person's likeness – are one of the most pressing challenges posed by generative AI, given the potential for bad actors to use it to undermine democracy, exploit artists and performers, and harass and harm everyday people. | IBM | https://newsroom.ibm.com/Blog-Heres-What-Policymakers-Can-Do-About-Deepfakes | United States | 2024 | realistic, AI-generated, video, audio, images, recreate, person's likeness, challenges, generative AI, bad actors, undermine democracy, exploit artists, harass, harm, everyday people |
| Deepfakes are highly realistic digital forgeries created using artificial intelligence technology. These manipulated videos or audio recordings can make it appear that someone is saying or doing something they never actually said or did, posing significant challenges for distinguishing truth from fiction in digital media. | IBM | https://www.ibm.com/new/announcements/deepfake-detection | United States | 2024 | realistic, digital forgeries, manipulated videos, audio recordings, someone, saying, doing, challenges, truth, fiction, digital media |
| The word "deepfake" is a combination of "deep learning" and "fake". Deepfakes are falsified images, videos or audio recordings. Sometimes the people who appear in them are computer-generated fake identities that look and sound as if they could be real people. Other times the people are real, but their images and voices are manipulated into doing and saying things they never did or said. For example, a fake video could be used to recreate a celebrity or a politician saying something they never said. Using these realistic forgeries, attackers can create an alternate reality in which you cannot always trust your own eyes and ears | Security Awareness | https://www.seguridad.unam.mx/sites/default/files/ouch_march_2022_spanish_latin_america_learn_a_new_survival_skill_spotting_deepfake.pdf | Mexico | 2022 | fake, images, videos, audio recordings, falsified, people, fake identities, computer-generated, real people, manipulated, recreate, realistic forgeries, attackers, alternate reality |
| A deepfake is a photo, video, or audio track created using artificial intelligence techniques to realistically simulate or alter people's faces, movements, and voices, among other simulations. | Spot the Deepfake | https://www.spotdeepfakes.org/en-US/quiz | United States | No data | photo, video, audio track, realistically simulate, people's faces, movements, voices, simulations |
| A deepfake is content generated with artificial intelligence that can imitate a person's appearance and voice. | Guardia Nacional | https://www.gob.mx/gncertmx/articulos/recomendaciones-ante-falsos-mensajes-creados-con-inteligencia-artificial#:~:text=El%20deepfake%20es%20el%20contenido,uso%20indebido%20de%20tu%20informaci%C3%B3n. | Mexico | 2025 | content, imitate, appearance, voice, person |
| It uses an AI technique to generate realistic but fake videos based on real people. Its uses range from comic scenes (a Tom Cruise impersonator on TikTok) and film (a de-aged Luke Skywalker in Star Wars) to performances (an unofficial musical collaboration between Drake and The Weeknd), crimes (identity theft and blackmail) and anti-democratic activities (spreading fake news). | UNESCO (Jeongki Lim) | https://www.un.org/es/cr%C3%B3nica-onu/inteligencia-artificial-generativa-qu%C3%A9-es-qu%C3%A9-no-es-y-qu%C3%A9-puede-significar-para-naciones | No data | 2023 | realistic videos, fake, real people, scenes, performances, crimes, anti-democratic activities |
| Newly emerging technology and know-how in analysing automatically generated fake content (synthetic media) across audio, text, images and video | UNESCO (Gregory, Sam; Bontcheva, Kalina; Meyer, Trisha and Teyssou, Denis: 169) | No data | No data | No data | automatically generated, content, synthetic media, audio, text, images, video |
| Synthetic media applications (e.g. videos or sound recordings) that alter a person’s appearance or voice in an attempt to deceive viewers or listeners that what they are seeing or hearing is real (Somers, 2020[86]). Like fake news, deepfakes can be a mixture of real and unreal elements or completely fabricated. | Organisation for Economic Co-operation and Development | https://one.oecd.org/document/DSTI/CDEP(2021)19/FINAL/en/pdf | No data | 2022 | synthetic media, videos, sound recordings, person's appearance, voice, deceive, real, fake news, unreal, fabricated |
| Manipulated or synthetic audio or visual media that seem authentic, and which feature people that appear to say or do something they have never said or done, produced using artificial intelligence techniques, including machine learning and deep learning. | Panel for the Future of Science and Technology (STOA) | https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690039/EPRS_STU(2021)690039_EN.pdf | European Union | 2021 | manipulated, synthetic, audio, visual, media, authentic, never said, never done |
| Content (video, audio or otherwise) that is wholly or partially fabricated or existing content (video, audio or otherwise) that has been manipulated. Several technologies can be used for this purpose, but the most popular is based on what is known as Generative Adversarial Networks (GAN) | Bart van der Sloot, Yvette Wagensveld, Bert-Jaap Koops | https://repository.wodc.nl/bitstream/handle/20.500.12832/3134/3137-deepfakes-summary.pdf | Netherlands | 2021 | content, video, audio, fabricated, manipulated, Generative Adversarial Networks |
| AI-generated image, video, or audio files that can often create convincingly realistic yet deceptive content. | Jack Clark, Loredana Fattorini, Amelia Hardy, Katrina Ligett, Nestor Maslej, Vanessa Parli, Ray Perrault, Anka Reuel, Andrew Shi | https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2024-smaller2.pdf | United States | 2024 | image, video, audio, files, convincingly, realistic, deceptive, content |
| Deepfakes are manipulated video files, or sometimes sound files. The term is a combination of the terms 'deep learning' and 'fake.' Artificial intelligence learns to imitate targeted people from existing material. As a result, a deceptively real-looking Tom Cruise can fascinate an audience of millions on TikTok – or actors from the fields of politics and business can become victims of this dangerous cybercrime weapon. | Lauralie Mylène Schweiger | https://www.deutschland.de/en/topic/culture/deepfakes-in-germany-fake-news | Germany | 2022 | manipulated, video, files, sound, fake, imitate, targeted people, real-looking, victims, cybercrime, weapon |
| For a long time, it was very time-consuming to produce high-quality manipulations of dynamic media such as videos or audio recordings. Methods from the field of artificial intelligence (AI) have now made this much easier, and high-quality fakes can be created with comparatively little effort and expertise. Because they use deep neural networks, such methods are often referred to as 'deep fakes'. Methods for manipulating media identities can be divided into three forms of media: video/images, audio and text. | Federal Office for Information Security | https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Kuenstliche-Intelligenz/Deepfakes/deepfakes_node.html | Germany | No data | high-quality, media, manipulations, videos, audio, fakes, manipulating, media identities, text |
| Artificially created or altered photos as well as video or voice recordings that look and sound deceptively real. | Stiftung Wissenschaft und Politik | https://www.swp-berlin.org/publikation/deepfakes-when-we-can-no-longer-believe-our-eyes-and-ears | Germany | 2023 | artificially created, altered photos, video, voice recordings, look, sound, deceptively, real |
| Image manipulation and disinformation have always existed, especially in the run-up to elections. But creating deceptively realistic fake content has never before been as easy as it is today: with the right prompts and a little adjustment, photos of situations that never occurred can be created en masse. | Fraunhofer Magazine | https://www.fraunhofer.de/en/research/current-research/deepfake.html | Germany | 2024 | image, manipulation, disinformation, deceptively, realistic, fake content, adjustment, photos, situations, never occurred, created, en masse |
| A deepfake is a digital photo, video or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say. Deepfakes are created using artificial intelligence software that currently draws on a large number of photos or recordings of the person to model and create content. | eSafety Commissioner | https://www.esafety.gov.au/sites/default/files/2022-01/Deepfake-position-statement%20_v2.pdf | Australia | 2022 | photo, video, sound file, real person, edited, realistic, false depiction, doing, saying, photos, recordings |
| Deepfakes refer to images, video or audio of a real person that have been edited, usually with AI technology, to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say. It can also include wholly computer-generated images of humans that do not exist in real life. | Government of South Australia | https://www.agd.sa.gov.au/law-and-justice/consultation/deepfakes#:~:text=Deepfakes%20refer%20to%20images%2C%20video,not%20exist%20in%20real%20life. | Australia | 2024 | images, video, audio, real person, edited, realistic, false, depiction, computer-generated, do not exist, doing, saying |
| A 'deepfake' can be defined as a digital photo, video or sound file of a real person that has been edited to create an extremely realistic, but false depiction of them doing or saying something that they did not actually do or say. | Australian Human Rights Commission | https://humanrights.gov.au/our-work/legal/submission/criminalising-deepfake-sexual-material | Australia | 2024 | photo, video, sound file, real person, edited, realistic, false depiction, doing, saying |
| Deepfakes are digital photos, videos or voices of real people that have either been synthetically created or manipulated using artificial intelligence (AI) and can be hard to distinguish from the real thing. | Candy Gibson | https://unisa.edu.au/connect/enterprise-magazine/articles/2024/what-are-deepfakes-and-should-we-be-worried/ | Australia | 2024 | photos, videos, voices, real people, synthetically created, manipulated, hard to distinguish, real thing |
| Deepfakes are videos, audio, or images that seem real but have been manipulated with AI. They've been used to try to influence elections and to create non-consensual pornography | U.S. Government Accountability Office | https://www.gao.gov/products/gao-24-107292 | United States | 2024 | videos, audio, images, seem real, manipulated, influence, elections, non-consensual, pornography |
| A video or sound recording that replaces someone's face or voice with that of someone else, in a way that appears real | Cambridge Dictionary | https://dictionary.cambridge.org/us/dictionary/english/deepfake | United Kingdom | No data | video, sound recording, replaces, someone's face, someone's voice, appears real |
| The term deepfake is used to describe audio, video, or an image that has been edited or created using artificial intelligence (AI) to simulate a real person. Deepfakes can be used in negative ways such as to commit fraud or trick people, or positive ways such as education or entertainment. | Western Carolina University | https://researchguides.wcu.edu/fakenews/deepfakes | United States | 2024 | audio, video, image, edited, created, simulate, real person, negative, fraud, trick, positive, education, entertainment |
| Deepfakes refer to forged or fake videos created via deep learning, a form of artificial intelligence, where a person’s likeness, including their face and voice, can be realistically swapped with someone else’s. | Organization for Social Media Safety | https://www.socialmediasafety.org/advocacy/deepfake-technology/ | United States | No data | forged, fake, videos, created, person's likeness, face, voice, realistically, swapped, someone else |
| Deepfakes are fake news on steroids. By using Artificial Intelligence (AI), videos can be created by swapping out one person's face and/or voice, for another person's. While the ability to alter videos isn't new, AI has automated the process, and as the technology improves, deepfakes are getting more realistic, and are easier and cheaper to make. | Milwaukee Area Technical College Library | https://guides.matc.edu/fakenews/deepfakes | United States | No data | fake news, videos, created, swapping, person's face, person's voice, alter, realistic |
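Several of the definitions above name autoencoders and generative adversarial networks (GANs) as the underlying mechanism, and in particular the shared-encoder/two-decoder autoencoder scheme used for face swapping: one encoder compresses any face into identity-independent features (expression, pose), and a separate decoder per identity reconstructs that person's face from those features. The sketch below is a deliberately trivial, hypothetical illustration of that data flow only; real systems use trained deep convolutional networks, and every function and field name here is invented for the example.

```python
# Toy sketch of the shared-encoder / two-decoder face-swap scheme.
# Each component is a trivial stand-in (dicts instead of images,
# no learning) so only the data flow is shown.

def encoder(face):
    # Compress a face to identity-independent features (expression, pose).
    return {"expression": face["expression"], "pose": face["pose"]}

def make_decoder(identity):
    # Each decoder is trained (here: hard-coded) to render one
    # person's face from the shared feature representation.
    def decoder(features):
        return {"identity": identity, **features}
    return decoder

decode_a = make_decoder("person_A")
decode_b = make_decoder("person_B")

source = {"identity": "person_A", "expression": "smiling", "pose": "left"}

# Normal reconstruction: A's decoder on A's features returns A's face.
reconstruction = decode_a(encoder(source))

# The face-swap step: feed A's features into B's decoder, yielding
# B's face performing A's expression and pose.
swapped = decode_b(encoder(source))
print(swapped)  # {'identity': 'person_B', 'expression': 'smiling', 'pose': 'left'}
```

In deployed systems the two decoders are trained jointly against the shared encoder on footage of each person, and GANs are often used to refine the decoder output until it is hard to distinguish from real frames, which is the "high potential to deceive" the definitions describe.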