Nuance, moral judgment and equity: AI must not replace critical human role: SC
ISLAMABAD: The Supreme Court has ruled that Artificial Intelligence (AI) must not replace the critical human role of considering nuance, moral judgment and equity.
The judgment, authored by Justice Syed Mansoor Ali Shah in a rent case, affirmed that AI is not, and must never become, a substitute for judicial decision-making.
The apex court observed that while AI tools such as ChatGPT and DeepSeek can significantly enhance judicial capacity, they cannot replace human reasoning, discretion, and empathy, which are central to justice delivery.
He stated that for any justice system to remain fair and just in the age of AI, certain core universal values must be preserved as non-negotiable ethical foundations. First and foremost, human dignity and compassion must remain central, ensuring that algorithms never override mercy or individualised consideration in judgments.
Fairness and anti-discrimination principles must be hardwired into AI systems to prevent the replication of historical biases, guaranteeing equal treatment under the law.
The rule of law must always prevail over the rule of data, with human judges retaining ultimate authority to interpret evolving legal and moral standards.
Due process protections, including the presumption of innocence and the right to confront evidence, cannot be compromised by automation.
Finally, the system must preserve space for restorative justice and rehabilitation, recognizing that punishment should serve societal healing rather than mere efficiency.
The Court recommended that the National Judicial (Policy Making) Committee, in collaboration with the Law and Justice Commission of Pakistan, consider developing comprehensive guidelines on the permissible uses of AI within the judiciary.
These must delineate clear boundaries, ensuring that AI is used only as a facilitative tool and never in a manner that compromises human judicial autonomy, constitutional fidelity, or public trust in the justice system.
The SC Office is directed to dispatch a copy of the judgment to the Law and Justice Commission of Pakistan and the National Judicial (Policy Making) Committee for preparing guidelines to regulate this emerging intersection of law and AI.
The judgment noted that while AI may enhance efficiency and consistency, it cannot replicate the normative judgment, ethical reflection, or contextual sensitivity essential to the act of judging.
While AI can support judicial functions, it cannot replace the human conscience that animates the judicial role. AI remains merely an auxiliary resource; human judgment and individualised discernment remain indispensable and paramount in judicial decision-making. “As we explore AI’s role in the judicial system, we must continue to be vigilant in ensuring that these advancements do not undermine the core principles of justice, fairness, and impartiality,” it said.
The judgment noted that in AI, a hallucination is a fabricated or unfounded output that arises from a purely statistical pattern-matching process rather than an internal fact-checking mechanism. Therefore, hallucinations present both a technical and judicial challenge. Technically, models like GPT-4 have shown a significant incidence of fabricating legal references (58 per cent in one study), underscoring their reliance on statistical predictions over fact verification.
Contemporary deep learning AI models often function as “black boxes”: systems whose inputs and outputs are visible, but whose internal decision-making process is unclear or too complex to interpret. This lack of transparency makes it difficult to understand how or why a model reaches its conclusions. The absence of “explainability” mechanisms further undermines accountability: AI-generated outcomes may lack adequate justification, compromising transparency and public trust.
Accountability is a central principle of judicial proceedings, especially when AI may inform or influence a ruling. It requires that identifiable human authorities hold final responsibility and that mechanisms exist to question and correct any uncertain or objectionable outcomes arising from AI outputs. Judges retain sole accountability for rulings, ensuring that AI acts only as supportive guidance rather than dictating final outcomes.
The Court reiterated the necessity of human oversight, emphasising that judges should examine and, when necessary, override AI-generated recommendations to preserve judicial integrity.
The right to a fair trial before a competent, independent, and impartial judge is a fundamental principle of due process. AI must not overshadow the core guarantee of judicial autonomy. While AI has the potential to improve consistency and efficiency in legal processes, it also carries the risk of introducing biases and limiting judicial discretion.
The judgment emphasised that fairness and transparency must apply equally to AI-assisted rulings, in alignment with Article 14 of the International Covenant on Civil and Political Rights and General Comment No 32 of the United Nations Human Rights Committee.
Copyright Business Recorder, 2025