Deepfake: The Challenge of the Digital Age and Why Media Literacy is More Important than Ever

In a world where information moves faster than we can check it, deepfake technology is becoming one of the biggest challenges of modern information exchange. Deepfakes are artificially generated video and audio content so convincing that many people can no longer distinguish truth from manipulation (Masood, Nawaz, Malik, Javed & Irtaza, 2021).

And that’s where media literacy comes into play.

Deepfakes are not just a technological innovation. They change the way we build trust in, verify, and understand the content we watch.

A Brief History

The term ‘deepfake’ appeared around 2017, when a Reddit user began sharing AI-manipulated explicit videos in which the faces of celebrities were superimposed onto the bodies of actors in adult films. Since then, the technology has developed rapidly, becoming more accessible and more sophisticated. Initially confined to niche internet communities, deepfakes are now present in mainstream media, politics, and the entertainment industry (Maras & Alexandrou, 2019).

From Innovation to Manipulation

Deepfake technology is built on deep-learning models capable of recreating someone’s appearance, movements, or voice (Cetinski, 2024). Although born out of research curiosity and the creative needs of the film industry, it quickly showed its dark side.

  • It can show a politician uttering sentences they never said, which poses a serious threat to democracy and public discourse (Misirlis & Munawar, 2023).
  • It can plausibly imitate the voice of a CEO and be used to initiate a fraudulent financial transaction (Cherifi, 2025).
  • It can place someone in a compromising video without consent, endangering the privacy and safety of that person.

Therefore, understanding this technology has become part of modern ‘digital survival’.

Why is Media Literacy Crucial?

Today, media literacy does not only mean knowing how to use technology; it means recognising what is real and what is manipulation. Deepfakes teach us to no longer trust blindly.

A person literate in media:

  • checks sources,
  • recognises unusual behaviour, tone and pace of speech,
  • understands how content can be manipulated,
  • knows that even ‘perfect’ shots can be fake,
  • thinks critically before sharing content.


But be careful! Since the technology advances every day, these criteria may soon change, and all of us must stay alert. It would be irresponsible to think we are literate for good just because we are literate at this moment.

Deepfake brings us back to the basic questions: Who is the author? How do I know the footage is authentic? Is there other evidence? This ability to analyse content is the essence of media literacy in the digital age (Drobnjak, 2025).

How to Recognise Deepfakes? A Practical Guide for Every Day

Although deepfake recordings are increasingly sophisticated, there are still visible traces of manipulation. Here are the signs to look out for:

  1. Eyes and blinking
    Earlier generations of deepfakes blinked unnaturally infrequently; even today one can notice a fixed gaze or a mismatch between eye and head movements (The Guardian, 2024).
  2. Mouth and speech synchronisation
    Pay attention to lip edges that ‘spill’ over, a mismatch between the intonation of the voice and the facial expressions, or an unusual rhythm of speech (The Guardian, 2024).
  3. Lighting and shadows
    Deepfakes sometimes fail to render shadows realistically, so unnatural changes in the lighting of the face can be seen (Cetinski, 2024).
  4. Artifacts around the edges of the face
    When inserting a face onto another body, there may be a blurred line around the chin or cheek, sliding of the face during movements, or flickering of the image (Cherifi, 2025).
  5. Uncoordinated body movements
    Even a more advanced deepfake can display microexpressions or neck movements in a way that does not match the speech (Misirlis & Munawar, 2023).
  6. The sound that ‘doesn’t sit’
    Deepfake audio may have uneven noise, overly clean sound with no background ambience, or abrupt pitch changes (Masood et al., 2021).
  7. Checking the source
    Before you believe the recording, check if it is published by the relevant media, search for the same event from several sources and try to find the original recording or context (Drobnjak, 2025).
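Some of these signs can even be checked programmatically. As a toy illustration of point 6, the following Python sketch flags abrupt jumps in the dominant pitch of an audio signal. It runs on a synthetic tone; the window size and jump threshold are arbitrary assumptions, and real deepfake-audio detection relies on far more sophisticated models (Masood et al., 2021).

```python
import numpy as np

SAMPLE_RATE = 16_000  # samples per second (assumed)
WINDOW = 1_024        # analysis window size in samples (assumed)

def dominant_freq(window: np.ndarray) -> float:
    """Return the strongest frequency component of one window, in Hz."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1 / SAMPLE_RATE)
    return float(freqs[np.argmax(spectrum)])

def abrupt_pitch_jumps(signal: np.ndarray, threshold_hz: float = 100.0) -> list[int]:
    """Indices of windows whose dominant pitch jumps by more than threshold_hz."""
    pitches = [
        dominant_freq(signal[i : i + WINDOW])
        for i in range(0, len(signal) - WINDOW, WINDOW)
    ]
    return [
        i for i in range(1, len(pitches))
        if abs(pitches[i] - pitches[i - 1]) > threshold_hz
    ]

# Synthetic example: a 200 Hz tone that suddenly jumps to 800 Hz halfway through.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.where(t < 0.5, np.sin(2 * np.pi * 200 * t), np.sin(2 * np.pi * 800 * t))
print(abrupt_pitch_jumps(tone))  # should flag the window where the pitch jumps
```

A natural voice also changes pitch, of course; the point of the sketch is only that a sudden, discontinuous jump with no matching change in the speaker’s face or context is worth a second look.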

The best weapon against deepfakes is still critical thinking.

How to Protect Yourself?

  • Do not share content before verification. A large number of manipulations spread because people share ‘shocking’ footage without thinking (NUNS, 2025).
  • Check for traces of manipulation – pay attention to visual and sound irregularities (Cherifi, 2025).
  • Use detection tools. There are tools and forensic methods that analyse video and audio and detect anomalies (Springer, 2025).
  • Work on your own media literacy. Understanding how content is created – and misused – is the best long-term protection (Drobnjak, 2025).
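One concrete verification step, when a source publishes an official copy of a recording, is comparing the file you received against a checksum published alongside the original. A minimal Python sketch, assuming the broadcaster publishes a SHA-256 hash (the file name and hash below are hypothetical placeholders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: Path, expected_hex: str) -> bool:
    """True if the local file is byte-identical to the published original."""
    return sha256_of(path) == expected_hex.lower()

# Usage (hypothetical): the broadcaster publishes the hash alongside the clip.
# matches_published_hash(Path("statement.mp4"), "3a7b...dd4f")
```

A matching hash only proves the file was not altered after publication; it says nothing about whether the original itself is genuine, which is why source checking and critical thinking remain essential.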

Conclusion: A New Era Requires New Skills

Deepfake technology is not going away. It will be better, faster and more accessible. As expert literature warns, this brings a significant ethical, social and security dilemma (Masood et al., 2021; Cherifi, 2025).

But that doesn’t mean we are helpless. The more we understand how audio-visual content is manipulated, the more resilient we are to misinformation, fraud and abuse. Media literacy becomes digital immunisation, and everyone should develop it.

References:

  • Cetinski, A. (2024). DEEPFAKE tehnologija – predstavitvene tehnike.
  • Cherifi, H. (2025). Deepfake media forensics: Status and future challenges. Journal of Imaging, 11(3), 73. https://doi.org/10.3390/jimaging11030073
  • Drobnjak, J. (2025, June 10). Deepfake tehnologija – izazovi, prijetnje i prilike – tko kontrolira stvarnost? Točno.hr.
  • Maras, M.-H., & Alexandrou, A. (2019). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos. The International Journal of Evidence & Proof, 23(3), 255–262.
  • Masood, M., Nawaz, M., Malik, K. M., Javed, A., & Irtaza, A. (2021). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. arXiv.
  • Misirlis, N., & Munawar, H. B. (2023). From deepfake to deep useful: Risks and opportunities through a systematic literature review. arXiv.
  • Springer, A. (2025). A survey on multimedia-enabled deepfake detection: State-of-the-art tools and techniques, emerging trends, current challenges & limitations, and future directions. Discover Computing.
  • Udruženje novinara Srbije & Centar za razvoj omladinskog aktivizma (2025). Većina ljudi ne može da prepozna deepfake sadržaj. Euronews.rs.
  • NUNS (2025, 05/06). Dipfejk – sofisticirana tehnologija za kreiranje dezinformacija. NUNS.