As the use of AI grows and new models are introduced, calls for the ethical use of artificial intelligence are growing just as quickly. But the question is: can ethical use even exist if AI is constantly shaping our behaviour and values?
Cultivation theory and cumulative effects theory suggest that “using” AI may already be reshaping what we consider ethical.
First developed by George Gerbner, cultivation theory argues that long-term exposure to media gradually shapes our perceptions of reality. Over time, what we see, read or experience repeatedly becomes normal, not because it’s right, but because it’s familiar.
The same principle applies to AI. Each time we turn to ChatGPT for ideas or Midjourney for visuals, we reinforce not just convenience, but trust and dependence.
Cumulative effects theory adds another layer: subtle, repeated exposure compounds over time. In the AI context, this means ethical boundaries don’t break overnight; they blur slowly, until we no longer notice when “assistance” becomes authority.
From an Ubuntu perspective
The slogan “use AI ethically” assumes humans remain in control and unaffected by AI. But according to a 2025 study published in the International Journal of Economic Behaviour and Organization, 68% of respondents across sectors said that AI tools influence their ethical judgement.
This suggests that as we integrate these technologies into our everyday work, we adapt our ethics to their logic of efficiency and automation. The more we rely on AI, the more we internalise its assumptions, which ultimately puts us in conflict with traditional human-centred ethics.
Importantly, the word “ethical” is itself subjective, as different people hold different beliefs, behaviours and morals. From an Ubuntu perspective, ethics are relational, centring community and empathy, whereas AI’s logic is often utilitarian, optimising outcomes over relationships.
Slow shifts
This clash highlights how imported algorithmic ethics can dilute local moral reasoning. Just as the media’s constant reporting on violence eventually stops shocking us, our ethical stance on AI shifts slowly, without our realising it, as repeated behaviour becomes the norm.
A 2025 Gartner report finds that 77% of companies worldwide use AI in some form, and that 90% are either using or exploring it, suggesting near-universal adoption is on the horizon. This means that not using AI can no longer be seen as an ethical solution.
Avoiding AI doesn’t protect us from its influence; it only raises the question of how to regulate it. The real challenge, therefore, isn’t whether we should use AI, but how we ensure that its systems reflect, rather than reshape, human ethics.
This issue has created strong advocacy around AI ethics, with governments, academia and influential organisations across the globe implementing AI guidelines and policies as a way to mitigate the risks.
Not legally binding
An example of this is the South African National Artificial Intelligence Policy Framework adopted in 2024; like most policies, it’s not legally binding.
These guidelines focus only on user behaviour, producing surface-level conversations that significantly reduce their effectiveness. Some policies, such as Canada’s Directive on Automated Decision-Making, have shown partial progress by requiring algorithmic impact assessments, yet they remain exceptions, not norms.
Equally, most AI technologies are developed privately and in silos, far from public scrutiny. Although open-source initiatives such as Hugging Face and emerging frameworks such as the EU AI Act aim for transparency, their effect remains limited by uneven global enforcement and private-sector dominance.
The behaviour of these models depends largely on those who create them. Furthermore, AI learns from datasets that reflect human inequality and is designed under capitalist and efficiency pressures. This makes unethical outcomes all but inevitable, regardless of user intent.
Shaping ethical reflexes
Though AI exposure shapes our ethical reflexes, it doesn’t eliminate human agency. Critical AI literacy can help users question and resist automated assumptions. Yet, without transparent and accountable design from creators, even the most informed users operate within systems that quietly normalise bias.
If anything, real ethics should be built into AI’s architecture through accountability, transparent design and value-driven development. Until creators are held to the same moral standards we demand of users, AI will continue to redefine what ethics means rather than reflect it, ultimately rewriting it altogether.
Ethical AI use may not be impossible, but it demands shared responsibility, where creators embed values transparently, policymakers enforce them, and users stay critically aware.

Rethabile Molehe is a third-year public relations student at Vaal University of Technology.