Generative AI Overview
Generative AI refers to
artificial intelligence systems that can create new content, such as text,
images, and music. It is based on machine learning, particularly deep learning
techniques involving neural networks. These neural networks are trained on
large datasets to generate outputs that mimic the training data. For example,
in text generation, models like GPT (Generative Pre-trained Transformer) analyze
vast amounts of text to produce coherent and contextually relevant content.
Similarly, in image generation, models like Generative Adversarial Networks
(GANs) create realistic images by learning from a dataset of existing images.
In other words, they are a “class of very powerful AI models that can be used
as the basis for other models: they can be specialized, or retrained, or
otherwise modified for specific applications” (Loukides, 2023, p. 2).
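As a brief illustration of how such text generation works in practice, the short Python sketch below prompts a small pretrained transformer model to continue a sentence. It relies on the open-source Hugging Face transformers library and the publicly available GPT-2 model; the prompt and generation settings are illustrative assumptions rather than a description of any particular production system.

    # A minimal sketch of prompt-based text generation with a pretrained
    # transformer (illustrative; model choice and settings are assumptions).
    from transformers import pipeline  # requires: pip install transformers torch

    # Load a small, openly available generative language model.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Generative AI refers to systems that"
    outputs = generator(
        prompt,
        max_new_tokens=40,       # length of the generated continuation
        num_return_sequences=1,  # produce a single continuation
        do_sample=True,          # sample rather than always picking the likeliest token
    )

    # The model extends the prompt with text that mimics its training data.
    print(outputs[0]["generated_text"])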
The transformative
potential of generative AI as a technological force is profound, promising to
redefine creativity, enhance productivity, and reshape ethical norms. This
technology, built on advanced machine learning algorithms and neural networks,
is not only automating tasks but also creating novel content, thus challenging
our traditional understanding of human creativity.
Historical Context
Early AI Development: The evolution of AI
from its inception in the mid-20th century includes several key milestones.
Alan Turing's computational theories, especially his concept of the Turing Test
(Turing, 1950, pp. 433-460), laid foundational ideas for AI. The development of
neural networks, a significant leap in AI, began with the perceptron in the
1950s. Over the decades, advancements in computational power and data
availability have led to more sophisticated neural networks, enabling modern AI
capabilities.
Early technological
advancements like the internet and personal computing have significantly
influenced societal structures, cultural norms, and economic models (Castells, 2000, p. 1). The advent of the internet enabled unprecedented connectivity, reshaping communication and fostering the rise of digital culture. Personal computing
democratized access to technology, changing how people work, learn, and
interact. These technologies also catalyzed the emergence of new economic
models, such as e-commerce and the gig economy (which is a labor market
characterized by the prevalence of short-term contracts or freelance work).
AI's Growing Influence: The trajectory of AI's
impact on society has evolved from basic automation to complex decision-making.
Initially, AI was used for routine tasks, like calculations and data processing
(Kaplan, 2016, p. 144). Over time, advancements in machine learning and neural
networks have enabled AI to tackle more sophisticated tasks, including pattern
recognition, natural language processing, and predictive analytics. This
evolution has led to AI systems that can make decisions in complex
environments, influencing fields like healthcare, finance, and transportation.
Technological Determinism and Generative AI
Through the lens of technological determinism, generative AI can be seen as a force that significantly shapes societal norms and behaviors, influencing various facets of life.
1. Creativity and Art:
The emergence of AI in creative fields is redefining the concept of creativity,
traditionally seen as a uniquely human trait. AI-generated art and literature
challenge our understanding of creativity and originality (Boden, 2016, pp.
57-59). This technological advancement is not only creating new forms of art
but is also influencing how people perceive and interact with artistic works.
2. Work and Employment:
Generative AI is transforming the workplace. It automates tasks, thus shifting
the nature of jobs and required skills (Brynjolfsson and McAfee, 2014). This
shift could lead to job displacement in certain sectors while creating new
opportunities in others, fundamentally altering employment landscapes.
3. Ethics and Society:
The capabilities of generative AI bring forth ethical dilemmas, especially
concerning data privacy, the authenticity of information, and the potential for
misuse in creating deepfakes. These concerns necessitate a reevaluation of
ethical frameworks and legal standards to keep pace with technological
advancements.
4. Media and
Communication: In media, generative AI's ability to produce realistic content
is transforming how information is created and consumed. This raises concerns
about the authenticity of information and the potential for spreading
misinformation (Tufekci, 2017). As AI becomes more involved in content
creation, it also influences the way narratives are shaped and disseminated.
Current Generative AI Applications
Generative AI
is increasingly being used in journalism, although its impact and the extent to
which it can replace human authors are subjects of ongoing debate and
exploration.
Several news
organizations are experimenting with AI for various aspects of journalism. The
Associated Press, for instance, has issued specific guidelines for using AI, employing it for tasks like compiling digests of stories for newsletters and creating short news stories from sports scores or corporate earnings reports (Bauder, 2023). AP stated
that any item produced by AI must be “carefully vetted - just like material
from any other source, and that a photo, video, or audio segment generated by
AI should not be used unless that segment is the subject of a story itself”
(Hurst, 2023).
The Guardian, on the other hand, has outlined its approach to generative AI (Viner and Bateson, 2023), focusing on using the technology to assist journalists in managing large data sets, with strict human oversight and a senior editor's permission required for any editorial use of AI. Similarly,
local newsrooms are exploring AI to publish a high volume of local stories on
topics such as weather, fuel prices, and traffic conditions, as seen with News
Corp Australia's production of 3,000 articles a week using generative AI.
However, the role of AI
in journalism is not without challenges. Ethical considerations and the
potential for factual inaccuracies are major concerns. For instance, tech outlet CNET faced criticism for publishing AI-generated content without clear disclosure, which prompted it to update its processes for greater transparency. The content itself was also flawed: CNET published “articles generated by artificial intelligence, on topics such as personal finance, that proved to be riddled with errors” (Harrington, 2023).
Ethical guidelines
suggest that AI-generated content should be clearly disclosed to audiences and
not presented as human-written. Additionally, there are challenges related to
the accuracy of information, especially in breaking news reporting, as AI models
often struggle with generating accurate and factual information regarding
current events or real-time data.
Overall, while generative AI presents opportunities for enhancing productivity in journalism, it also requires careful consideration of ethical, human, and editorial implications. The technology is viewed not as a replacement for human journalists but as a tool to augment their capabilities, allowing them to focus on tasks that require human judgment and creativity. Therefore, the complete elimination of authors by AI in journalism is not currently foreseeable, given the technology's limitations and the value placed on human insight and analysis in the field.
Societal and Ethical Implications
The implications of
AI-generated content on concepts like authorship, intellectual property, and
truth in media are complex and multifaceted.
In terms of authorship
and intellectual property, the rise of AI-generated content has led to
significant legal challenges. A fundamental issue is determining the actual
creator of AI-generated works and, consequently, who owns the copyright. Under
U.S. copyright law, generally, the creator of the content owns the copyright,
but this becomes complicated when an AI algorithm creates the work. For
instance, in the case of Thaler v. Perlmutter et al., the court upheld the
United States Copyright Office’s decision that human authorship is a
prerequisite for valid copyright protection. This decision underscores the
importance of human creativity in copyright law but leaves unresolved how to
handle content created from both AI and human input (Clarida and Kjellberg, 2023).
There's also the concern of potential plagiarism or copyright infringement with AI-generated content. “The instances of academic plagiarism have escalated in educational settings, as it has been identified in various student work, encompassing reports, assignments, projects, and beyond” (Elkhatat, Elsaid and Almeer, 2023). AI writing assistants, while designed to generate original content, could inadvertently produce work substantially similar to existing material, potentially leading to accusations of plagiarism or copyright infringement. In such cases, "I didn't know" is not a viable defense, as most forms of infringement are strict liability torts. This highlights the inherent risk in using AI for content creation, as users often cannot verify the source of information or ensure content originality.
Regarding the
truth factor in the media, the use of AI in journalism raises ethical and
factual accuracy concerns. While AI can assist in synthesizing information and
informing reporting, it currently lacks the originality, analytical skill, and developed voice that are essential for quality journalism. Another factor that should be stressed is that AI “is a ‘language machine…not a truth machine’, so the human factor is still a vital element in producing journalism”
(Hurst, 2023). Moreover, AI models often struggle with generating accurate and
factual information, particularly in real-time or current events, posing a
challenge for breaking news reporting. Thus, while AI has a role in journalism, it cannot be relied upon alone, especially for complex and nuanced reporting.
In conclusion, while
AI-generated content offers many opportunities for innovation and efficiency,
it also brings significant challenges in authorship, intellectual property, and
maintaining the integrity of information. These challenges necessitate careful
consideration and adaptation of legal and ethical frameworks in the digital
age.
Case Study: The Associated Press
A specific example of
generative AI impacting journalism is the use of AI-driven tools in the
newsroom of the Associated Press (AP). The AP has integrated AI into its
journalistic processes, primarily for automating the creation of
straightforward news reports, especially in areas like sports and finance.
The AP uses AI to
automatically generate news stories from structured data. This began with their
use of a tool called Wordsmith, developed by Automated Insights, to produce
news stories on corporate earnings reports. By inputting data into Wordsmith,
AP was able to automate the creation of earnings-report articles, a task that
was previously time-consuming for human reporters (Lewis-Kraus, 2016).
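To make the idea of generating stories from structured data concrete, the Python sketch below renders a toy earnings summary from a handful of fields. The field names, figures, and wording are hypothetical; this is only a minimal illustration of template-driven generation under those assumptions, not a reconstruction of the proprietary Wordsmith system.

    # A toy illustration of turning structured earnings data into a short story.
    # Field names and phrasing are hypothetical; this is not the Wordsmith system.

    def earnings_story(data: dict) -> str:
        """Render a one-paragraph earnings summary from structured fields."""
        direction = "rose" if data["eps"] > data["eps_prior"] else "fell"
        return (
            f'{data["company"]} reported quarterly earnings of '
            f'${data["eps"]:.2f} per share, which {direction} from '
            f'${data["eps_prior"]:.2f} a year earlier. Revenue came in at '
            f'${data["revenue_m"]:,} million.'
        )

    sample = {
        "company": "Example Corp",  # hypothetical company and figures
        "eps": 1.42,
        "eps_prior": 1.10,
        "revenue_m": 5200,
    }

    print(earnings_story(sample))

Coupling templates of this kind with live data feeds is what allows a newsroom to scale formulaic coverage, while nuanced reporting still requires human judgment.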
This automation
significantly increased productivity. Before implementing AI, AP reporters
wrote about 300 earnings-report stories per quarter. After the adoption of AI,
this number increased to over 3,000, demonstrating a tenfold increase in output
without sacrificing accuracy (Philips, 2013). Moreover, this automation freed
journalists to focus on more complex, investigative stories where human insight
and analysis are irreplaceable.
However, the
implementation of AI in journalism also raises concerns regarding job
displacement and the potential for errors in automated content. While AI has
enhanced efficiency in news production, it has also led to debates about the
evolving role of journalists in an increasingly automated news environment
(Graefe, 2016).
The AP's approach to AI
in journalism reflects a broader trend in the industry: leveraging AI for
routine, data-heavy tasks, while retaining human journalists for more nuanced
and analytical work. This strategy underscores the complementary role of AI in
journalism, augmenting human capabilities rather than replacing them entirely.
Benefits:
- Increased Productivity: The use of AI has
allowed the AP to increase the volume of content produced. For instance, their
earnings reports coverage expanded from 300 to over 3,000 articles per quarter
after implementing AI (Philips, 2013).
- Resource Allocation: By automating routine
reports, AI frees up journalistic resources, allowing human reporters to
dedicate more time to in-depth, qualitative reporting (Graefe, 2016).
Challenges:
- Accuracy and Reliability: While AI improves
efficiency, there are concerns regarding the accuracy of the generated content,
especially in complex or nuanced reporting scenarios.
- Ethical and Employment Concerns: The
integration of AI in journalism also raises ethical questions about
transparency and the potential for job displacement in the industry
(Lewis-Kraus, 2016).
Broader Impact:
- Impacts on Journalism: AI is transforming the
journalism industry by changing how news is produced and consumed. It
encourages a shift towards more data-driven journalism and may change the skill
sets required for future journalists.
- Societal Implications: The widespread use of
AI in media can influence public perception and understanding of news,
underscoring the need for clear guidelines and ethical standards in
AI-generated content (Graefe, 2016).
Conclusion:
The adoption of generative AI by organizations like the AP highlights both the potential and challenges of this technology in journalism. While it enhances efficiency and allows journalists to focus on more complex and high-quality reporting, it also brings up questions about the future of the human factor, the accuracy of information, and ethical considerations in media. As this technology evolves, its integration into journalism will likely continue to influence both the industry and societal perceptions of news and information.
When considering the use of generative AI in journalism, exemplified by the Associated Press and its adoption of automated news generation, the case can be analyzed through the lens of technological determinism, which holds that technological advancements play a primary role in shaping societal structures, cultural norms, and human behavior (McLuhan, 1964, pp. 7-8). In the context of journalism, the integration of AI technologies aligns with this theory in several ways:
- Shaping News Production: The adoption of AI for routine news generation signifies a technology-driven reshaping of journalistic processes. The increased efficiency and capacity for producing large volumes of content demonstrate how technology can redefine industry practices (Philips, 2013).
- Influencing
Journalistic Roles: As AI takes over more routine and data-driven tasks, the
role of journalists in the field evolves. This aligns with technological
determinism, where technology influences human roles and skills required in a
profession (Graefe, 2016).
- Impacting News
Consumption: The way audiences consume news can also be influenced by the
presence of AI-generated content, potentially leading to changes in how people
interact with and perceive news media. This is a direct implication of
technological change influencing societal behavior, a core concept of
technological determinism.
- Ethical and Societal
Considerations: The ethical concerns and the potential for misinformation with
AI in journalism highlight the broader societal impacts of technology. These
implications reflect technological determinism's assertion that technology not
only changes practices but also raises new ethical and societal questions
(Lewis-Kraus, 2016).
In summary, the
application of generative AI in journalism and its subsequent effects on the
industry and society exemplify the principles of technological determinism. The
technology is not merely a tool but a transformative force that reshapes
industry norms, professional roles, and societal interactions with news media.
Bibliography:
Elkhatat, A.M., Elsaid, K. and Almeer, S. (2023) Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Available at: https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-00140-5/ (Accessed: 17 November 2023).
Bauder, D.
(2023) AP, other news organizations develop standards for use of
artificial intelligence in newsrooms. Available at:
https://apnews.com/article/artificial-intelligence-guidelines-ap-news-532b417395df6a9e2aed57fd63ad416a/ (Accessed: 28 November 2023).
Boden, M. (2016) AI: Its Nature and Future. Oxford, UK: Oxford University Press.
Brynjolfsson, E. and McAfee, A. (2014) The Second Machine Age [Ebook]. New York: W. W. Norton & Company.
Castells, M. (2000) The Rise of the Network Society. Blackwell Publishers.
Clarida, R. and Kjellberg, T. (2023) ‘Thaler v.
Perlmutter’: AI Output is Not Copyrightable. Available at: https://www.law.com/newyorklawjournal/2023/09/14/thaler-v-perlmutter-ai-output-is-not-copyrightable/
(Accessed: 22 November 2023).
Graefe, A. (2016) Guide to Automated Journalism, Tow Center for Digital Journalism. New York: Tow Foundation. Available at: https://academiccommons.columbia.edu/doi/10.7916/D8QZ2P7C/download/ (Accessed: 14 November 2023).
Harrington, C.
(2023) CNET Published AI-Generated Stories. Then Its Staff Pushed Back. Available at: https://www.wired.com/story/cnet-published-ai-generated-stories-then-its-staff-pushed-back/
(Accessed: 20 November 2023).
Hurst, L.
(2023) Robot reporters? Here’s how news organisations are using AI in
journalism. Available
at:
https://www.euronews.com/next/2023/08/24/robot-reporters-heres-how-news-organisations-are-using-ai-in-journalism/
(Accessed: 20 November 2023).
Kaplan, J. (2016) Artificial Intelligence: What Everyone Needs to Know. Oxford, UK: Oxford University Press.
Lewis-Kraus, G. (2016) The Great A.I. Awakening. Available at: https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html/ (Accessed: 15 November 2023).
Loukides, M. (2023) What Are ChatGPT and Its Friends? California, USA: O’Reilly Media.
McLuhan, M. (1964) Understanding Media: The Extensions of Man. New York: McGraw-Hill, pp. 7-8.
Philips, M.
(2013) How the Robots Lost: High-Frequency Trading's Rise and Fall. Available at:
https://www.bloomberg.com/news/articles/2013-06-06/how-the-robots-lost-high-frequency-tradings-rise-and-fall/
(Accessed: 19 November 2023).
Tufekci, Z. (2017) Twitter and Tear Gas: The Power and Fragility of Networked Protest. Connecticut, USA: Yale University Press.
Turing, A.M. (1950) ‘Computing Machinery and Intelligence’, Mind, 59(236), pp. 433-460.
Viner, K. and
Bateson, A. (2023) The Guardian’s approach to
Generative AI. Available
at:
https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai/
(Accessed: 24 November 2023).