On the Need for Critical Neuro-Aesthetic Data and an Interdisciplinary Approach to the Impact of State-of-the-Art AI Tools on Human Cognition


Isil Ezgi Celik*

Director, capitArt; Co-Founder, capitArtX

*Corresponding author: Isil Ezgi Celik, Director, capitArt; Co-Founder, capitArtX

Citation: Celik IE, On the Need for Critical Neuro-Aesthetic Data and an Interdisciplinary Approach to the Impact of State-of-the-Art AI Tools on Human Cognition. J Neurol Sci Res. 5(1):1-3.

Received: December 28, 2024 | Published: January 19, 2025

Copyright © 2025 Genesis Pub by Celik IE. CC BY-NC-ND 4.0 DEED. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which allows others to share the work, provided they credit the authors for the original creation and do not modify it or use it commercially.

DOI: http://doi.org/10.52793/JNSRR.2025.5(1)-43

Abstract

Artificial Intelligence (AI) technologies have been evolving for over half a century, with the advent of deep learning and quantum computing marking a significant technological leap. These advancements profoundly influence human social organisation and cognition. While AI holds promise for innovation in neuroscience and neurology—facilitating advanced research, treatments, and diagnostic tools—it also raises concerns about neurological marginalisation, algorithmic control, and cultural standardisation. This review, adopting a socio-critical perspective, argues for the necessity of neuro-aesthetic studies to critically examine AI’s cognitive impacts, particularly in the digital art field, and calls for an interdisciplinary approach to addressing these challenges.

Keywords

Artificial intelligence; Algorithmic control; Cultural homogenisation; Neuro-aesthetics; Critical epistemologies

Algorithmic Control, Cultural Homogenisation, and Cognitive Health

The increasing prevalence of algorithmic governance on digital platforms has significant implications for human cognition and creativity. Neuro-aesthetic digital engagements are often mediated by algorithmic systems optimised for profit and efficiency, which prioritise neurotypical patterns and suppress marginalised perspectives. The result is cultural homogenisation, eroding the diversity that underpins human creativity and cognitive vitality.

Gilles Deleuze (1992), in Postscript on the Societies of Control, highlights a transition from disciplinary societies, characterised by rigid institutional structures such as schools and prisons, to societies of control, which operate through continuous modulation, surveillance, and self-regulation. These mechanisms profoundly affect neurological and mental health by promoting standardisation and suppressing inclusivity and creativity. More recently, Matteo Pasquinelli (2023) has critiqued the conceptualisation of AI as akin to human intelligence, framing it instead as a computational model of social relations. This perspective highlights that biases and systemic inequities in AI are reflections of broader societal issues. Addressing these biases requires not only technological intervention but also socio-political change.

While AI technologies promise innovations in fields such as personalised medicine, their reliance on biased datasets risks deepening inequities. In mental health, for example, diagnostic disparities disproportionately affect underrepresented populations, perpetuating systemic marginalisation. The commodification of creativity and the instrumentalisation of cognitive health for profit and social governance further necessitate ethical scrutiny of these technologies and their impact on cognition.

Neuro-Aesthetics, Critical Epistemologies, and Digital Art

AI-assisted digital art offers a unique platform to explore and critique the cognitive impacts of algorithmic governance. Certain generative AI artworks encourage reflective engagement, disrupting passive consumption and problematising societal control and standardisation. By engaging with critical epistemologies, these works question normative models of cognition and computational rationality, inspiring alternative frameworks for understanding human thought.

However, there remains a notable lack of neuro-aesthetic studies examining the transformative relationship between reflective AI-assisted artworks and audiences’ cognitive responses. For instance, AI-generated works depicting marginalised experiences may activate empathy-related neural circuits and promote social awareness, providing neurological evidence of art’s capacity to inspire societal change and support mental health.

A significant challenge in this field lies in defining concepts such as “reflective engagement,” whose conceptualisation varies across disciplines. Neuroscience, for example, may approach the term through measurable cognitive and emotional responses, while critical epistemologies would emphasise its socio-political dimensions and resistance to normative frameworks. This divergence highlights the importance of interdisciplinary collaboration in developing a comprehensive understanding of reflective engagement. Only through such an approach can studies bridge the gap between the neurological, aesthetic, and socio-political dimensions of AI’s impact on cognition and creativity.

Safeguarding cognitive diversity remains vital for maintaining creative vitality and mental well-being. Comprehensive and diverse neuroscientific data are essential not only to substantiate these claims but also to guide policies that respect the multiplicity of human experience and thought.

Epilogue: The Dual Potential of AI

AI technologies embody a dual potential: they can perpetuate discrimination and oppression by reflecting existing social configurations, yet they also serve as powerful tools for innovation. On one hand, AI tools designed for neurological inclusion can offer tailored therapeutic interventions and empower marginalised populations, particularly in mental health “management”. On the other hand, insights from Deleuze and Pasquinelli caution against the co-optation of these technologies into systems of oppressive “governance”.

Ethical considerations must remain central to AI development. Balancing its innovative potential with its risks requires ongoing critical engagement and interdisciplinary collaboration. Reliable and critically interpreted neuroscientific data must inform policymaking to mitigate the risks.

The intersections of neurology and AI present transformative possibilities. While AI offers opportunities for societal and techno-scientific innovation, it also risks standardising cognitive processes and instrumentalising neurological health for societal control. This review has pointed to the importance of integrating perspectives from neurology, ethics, and aesthetics into AI research and development. Moving forward, policymakers must adopt a critical, ethical approach to AI, grounded in interdisciplinary and critical studies. Neuroscience, as a discipline central to understanding AI’s impact on cognition and social relations, has a pivotal role in shaping the ethical and aesthetic future of these technologies.

References

  1. Deleuze G. (1992) Postscript on the societies of control. October. 59:3–7.
  2. Pasquinelli M. (2023) The eye of the master: A social history of artificial intelligence. New York: Verso Books.