Ethics in AI: Yet Another Call for Human Dignity and Responsibility
The ethical governance of artificial intelligence is not a technical problem to be solved but a sustained civic challenge. It requires the continual reassertion of human values in an increasingly machine-mediated society. This article first identifies a few core ethical foundations and abstracts them into a civic framework of shared moral principles, which are then applied to AI, particularly large language models, culminating in a set of policy recommendations that frame the ethical use of AI as an ongoing, collective responsibility.
We start with religion because it has played a foundational role in human history, shaping moral systems, social structures, and conceptions of meaning across cultures and eras. It has provided frameworks for understanding existence, regulating behavior, and fostering cohesion within communities, while serving as a vehicle for expressing shared values, confronting mortality, and situating the individual within a larger moral and cosmic order. Religion has been a persistent mode through which humans have grappled with what it means to live well and live together. Christian theology is used here because of its familiarity, with Biblical passages illustrating the foundational nature of the ethical principles.
Christian theology is rooted in several foundational claims, among which three are particularly relevant to how Christians understand the nature of human life and ethical responsibility: that human beings are created in the image of God, that God is present in all things, and that the ultimate purpose of life is union with God. These tenets establish a framework of profound relationality and interconnectedness through God’s continuous presence in the world and in each person. If, as Jesus teaches, we are to love one another (John 13:34) and to love our neighbors as ourselves (Mark 12:31), then seeking unity with God must entail seeking unity with each other.
The claim that human beings are made in the image of God (Imago Dei) originates in Genesis 1:26–27, when God declares “Let us make man in our image, after our likeness.” This concept implies that every person possesses intrinsic worth and reflects something essential about the divine nature of creation. Genesis 9:6 reaffirms the enduring nature of our intrinsic worth even after humanity’s moral failure and alienation from God (the Fall), grounding the moral prohibition against murder in the sanctity imparted by God’s image. This idea is further developed in the New Testament: Colossians 3:10 describes believers as being “renewed in knowledge after the image of the Creator,” and 2 Corinthians 3:18 portrays the spiritual life as a progressive transformation “into the same image from one degree of glory to another.” Romans 8:29 identifies the purpose of humanity as conformity to the image of Christ.
The belief that God is present in all things (Divine immanence) is similarly grounded in scripture. Psalm 139 describes God as inescapably near: “If I ascend to heaven, you are there! If I make my bed in Sheol, you are there!” (Ps. 139:8). Jeremiah 23:24 declares, “Do I not fill heaven and earth?” The New Testament reinforces this view. Paul, speaking to the Athenians, asserts that in God “we live and move and have our being” (Acts 17:28), while Colossians 1:17 states that “in him all things hold together.” These passages suggest that the created order is not merely a product of divine will, but a domain in which God remains actively present. The cosmos is not spiritually neutral, but is infused with the presence of its Creator.
The idea that the ultimate aim of human life is union with God (Teleology) is central to Christian thought. Augustine writes in his Confessions that “our hearts are restless until they rest in you,” expressing the soul’s innate orientation toward its source. But this goal is not merely eschatological, a matter of the Last Judgment; it begins in the present life through moral, intellectual, and spiritual formation. In John 17:21, Jesus prays “that they may all be one; just as you, Father, are in me, and I in you, that they also may be in us,” indicating that the unity of believers with each other and with God is a single, integrated reality. Union with God is not individualistic or abstract; it is realized in concrete acts of relational unity and mutual love.
From these three premises (humanity’s creation in God’s image, the divine presence in all things, and the call to union with God) follows a relatively simple, theologically rooted ethical conclusion: all persons are connected through God, and this connection imposes mutual obligations. The commandment to love one’s neighbor as oneself (Mark 12:31; cf. John 13:34, Luke 6:31) becomes not merely a moral ideal, but a practical pathway toward unity with God. To treat another with love and dignity is to acknowledge and honor the divine presence in them. Conversely, to violate that dignity is to obscure the path to union with the divine.
These theological concepts are easily reframed in purely secular terms, yielding a coherent civic ethic. Imago Dei is a commitment to human dignity and universal rights. Divine immanence is a recognition of ecological and social interdependence. Teleology is the pursuit of meaning through solidarity and mutual understanding. The commandment to love others as oneself is a principle of reciprocity and relational ethics. Together, these form a seemingly self-evident civic philosophy: every person has intrinsic worth, we inhabit a shared world, and we flourish through cooperative, compassionate coexistence.
Artificial intelligence increasingly shapes the conditions under which human beings act, decide, and relate to one another. It influences outcomes in education, healthcare, law, labor, and communication, domains in which the stakes are high and the consequences often irreversible. These systems now mediate access to opportunity, determine the distribution of resources, and structure the terms of social interaction, outcomes that carry substantial moral weight. Yet despite its name, artificial intelligence is not intelligence in any meaningful sense. It is a human-created tool, one that lacks consciousness, agency, intentionality, and moral awareness.
Large language models, in particular, operate by processing vast quantities of human language data. They internalize patterns in how people speak, write, and reason, thereby encoding not only our knowledge but also our assumptions, biases, normative judgments, and even our social cues. Their outputs are persuasive precisely because they mimic the form and fluency of human discourse. As a result, users may begin to treat them as conversational partners or intellectual collaborators. But these systems do not understand the words they generate. They produce output tokens by calculating statistical probabilities in response to input sequences. No cognition. No comprehension.
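A minimal sketch may make this concrete. The snippet below (in Python, with a hypothetical three-word vocabulary and invented scores) performs the same final step a large language model performs at scale: converting scores into a probability distribution and sampling the next token. Real models compute these scores with billions of learned parameters, but nothing in the procedure understands anything.

```python
import math
import random

# Hypothetical raw scores ("logits") for the next token after some prompt.
# In a real LLM these come from a learned network; here they are invented.
logits = {"dog": 2.1, "cat": 1.9, "justice": 0.3}

# Softmax: convert raw scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Sample the next token in proportion to its probability.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```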
This disjunction between appearance and capacity has serious implications. Because these tools increasingly operate in contexts that involve trust, interpretation, and discretion, their deployment cannot be governed solely by technical standards or performance metrics. They require ethical evaluation by and for humans.
Several core principles follow from this ethical foundation. First, AI systems must be designed to support humanity. Their role is to augment human reasoning and creativity, not to replace or obscure them. We are not merely data points or profiles abstracted from context. Wherever automated systems are used to make or inform consequential decisions, we must be recognized as full moral agents, knowable within the system and by the system, and we must retain the ability to understand and contest how decisions are made about us.
Second, responsibility for the design, deployment, and impact of AI must remain human. Moral accountability cannot be assigned to a statistical model or delegated to an automated process. Developers, institutions, and decision-makers must retain clear lines of obligation for the choices embedded in the tools they use or authorize. This includes ensuring that the systems in question are interpretable, auditable, and subject to meaningful forms of oversight.
Third, the social function of AI must be evaluated relationally. These tools do not operate in isolation; they are embedded within networks of communication, labor, governance, and care. Their legitimacy depends on whether they sustain or erode the conditions of mutual recognition and shared responsibility. An AI system that fragments attention, polarizes discourse, or entrenches inequity may succeed technically while failing ethically.
Fourth, AI must be just in both design and effect. Statistical neutrality is not ethical neutrality. These tools embody the priorities, tradeoffs, and assumptions of those who build and implement them. Every parameter selected, dataset curated, and threshold set reflects normative judgments about what matters and what is permissible. These judgments have consequences; tools trained on biased data or deployed in structurally unequal contexts will replicate and often exacerbate existing forms of exclusion. If AI systems sort, filter, or allocate, they must do so in ways that are fair, contestable, and aligned with broader commitments to inclusion and due process.
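To make this concrete, consider a deliberately simplified sketch (all numbers invented): a single approval threshold, applied to scores learned from skewed historical data, produces sharply different approval rates for two groups. The threshold looks like a neutral parameter; its effect is a normative judgment.

```python
# Hypothetical model scores for two groups; imagine the model was trained
# on historical data in which group_b was systematically under-approved,
# depressing its scores. All numbers here are invented for illustration.
scores = {
    "group_a": [0.62, 0.71, 0.80, 0.55, 0.68],
    "group_b": [0.48, 0.59, 0.52, 0.66, 0.44],
}

THRESHOLD = 0.6  # a single "neutral" design choice

for group, vals in scores.items():
    rate = sum(s >= THRESHOLD for s in vals) / len(vals)
    print(f"{group}: approval rate {rate:.0%}")

# Output: group_a 80%, group_b 20% -- identical rule, unequal effect.
```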
Fifth, AI systems must be situated within sustainable ecological and institutional frameworks. Their development and use consume material and cognitive resources, often in ways that are opaque to end users. These environmental, psychological, and infrastructural costs must be measured, disclosed, and minimized. No technology that compromises planetary viability or institutional trust can be justified on the grounds of convenience or innovation alone.
Finally, and most importantly, the long-term legitimacy of AI will depend on whether it enables human beings to become more fully themselves: not just more efficient or informed, but more attentive, responsible, and capable of acting in relation to others. A system that undermines these capacities, no matter how advanced, represents a failure of ethical imagination.
Taken together, these principles offer a framework for the governance of artificial intelligence, one that resists both the mystification of technical systems and the reduction of ethical human life to calculable outcomes. They should remind us that our task is not to make machines moral, but to ensure that their use remains accountable to human beings, human values, and the shared conditions of human existence.
To be useful, these principles must be operationalized within regulatory, professional, and civic frameworks. Several suggestions follow. First, the deployment of AI systems in public or high-stakes domains should be preceded by independent impact assessments addressing not only technical performance but also social, ethical, and ecological implications. These assessments should include input from affected communities, and their findings must be made publicly accessible.
Second, legal and institutional structures must ensure that responsibility for AI outcomes remains traceable. This includes requirements for human oversight in systems used for critical decisions as well as clearly defined liability for harm or error. Black-box systems that preclude accountability should not be authorized for use where lives, rights, or livelihoods are at stake.
Third, all AI systems used in public institutions or on commercial platforms of significant scale should be publicly documented in disclosures that describe the system’s purpose, training data provenance, update history, and known limitations. Transparency is a precondition for human oversight and public trust.
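What might such a disclosure look like in practice? One possible shape (every value below is a hypothetical placeholder, not a description of any real system) is a structured record whose fields mirror the requirements named above, so that regulators and the public can compare systems on like terms.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    purpose: str                  # what the system is for and where it is used
    data_provenance: list[str]    # sources and licensing of training data
    update_history: list[str]     # dated record of retraining and changes
    known_limitations: list[str]  # documented failure modes and gaps

# Entirely hypothetical example values.
example = AIDisclosure(
    purpose="Advisory triage of benefit applications (a human decides)",
    data_provenance=["2015-2023 case records, anonymized", "public statutes"],
    update_history=["2024-06 initial release", "2025-01 retrained on corrected labels"],
    known_limitations=["unreliable on applications in languages other than English"],
)
print(example.purpose)
```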
Fourth, data protection and digital rights must be treated as fundamental civic guarantees. This includes the right to privacy, the right to opt out of automated decision-making, and the right to explanation when automated systems are used. Individuals must retain meaningful agency over how they are represented, classified, and acted upon by AI systems.
Fifth, governments and educational institutions should invest in broad-based AI literacy initiatives. These programs should equip individuals not only to use AI tools, but to understand their limitations, interrogate their assumptions, and assess their social effects. Ethical discernment must become a widespread civic competency.
Finally, we need to support interdisciplinary ethics bodies tasked with evaluating emerging AI applications, to ensure that new systems are not only feasible, but justifiable. These bodies must include not only technical experts but also ethicists, legal scholars, educators, labor representatives, and members of affected communities.
"I use both personal and company ChatGPT accounts regularly for writing, research assistance, and promotional activity. I edit outputs for tone and content, verify claims independently, and correct errors as needed. I take full responsibility for all published content."