Examining culture allows us to discern existing strengths and patterns of resistance, rather than fixating solely on risks. Societies and their members exercise agency in navigating AI, as with any novel technology, drawing on collective and personal cultural resources to adapt to these transformations. However, it can be tempting to treat technology choices, particularly those involving the new but opaque developments driven by Artificial Intelligence (AI) and Big Data, as disconnected individual or institutional decisions. As a society we have grown accustomed to viewing technologies as products we buy (even when they are free of cost), rather than as doors into complex ecosystems of data collection and processing whose effects extend far beyond our personal view of how a system functions. While this framing has proven effective for driving technology innovation and adoption, it obscures the wider communal impact of these systems, and the ways in which our individual decisions shape our perceptions of, and ultimately the lives of, the others with whom we are interdependent, sharing our communities and natural resources.
The rapid progress of these technologies is in this way outpacing the capacity of individuals, societies, and democracies to assimilate them. The resulting damage to collective sense-making occurs largely at the level of culture: the mostly tacit, but vital, web of values, practices, beliefs and norms that underpins stable and productive individual and shared identities. Because this incursion into cultural sense-making comes not only from the technologies themselves, but also from the corporate interests that profit from them, democratic processes and civic engagement have come increasingly under threat as these technologies integrate seamlessly into our daily lives, families, and social networks, bringing with them filter bubbles, disinformation and polarisation.
Culture may be more varied and sensitive than more visible critical systems (such as water mains or medical records), yet it equally merits, and requires, protection from threats such as hacking and manipulation. Protecting the cultural dimension from potential AI harms will require a proactive policy approach, one that not only addresses the evolving technological landscape but also seeks to foster a harmonious relationship between technological advancement and the preservation of democratic principles and civic participation. Adopting a culture-centric approach offers a more comprehensive understanding of societal responses to AI development, moving beyond the narrow focus on risk adopted by the AI Act. As Lisa Gitelman writes: “new media are less points of epistemic rupture than they are socially embedded sites for the ongoing negotiation of meaning” (2006: 6). These cultural responses must be acknowledged, supported, and protected by policymakers in their regulatory efforts, not only because this is the ethical thing to do, but also because they are invaluable tools for developing effective and fair regulation.
This policy brief, produced by the Knowledge Technologies for Democracy (KT4D) project, illustrates some of the current gaps in the policy landscape with regard to technology and cultural risk, and suggests preliminary measures to address them.