Prof. Yucheng Zhang

1 SEMINAR: CASE ANALYSIS OF AI ETHICS IN MANAGEMENT

1.1 Overview

1.1.1 Case Selection: Each group chooses one of the two cases below

  • Case 1: Large model assistant in recruitment: Efficiency improvement and invisible discrimination

  • Case 2: Employee monitoring AI system: Efficiency management or “Digital surveillance”?

1.1.2 Process

The seminar proceeds in four stages:

Grouping and case selection (5 min)

- Students form groups freely

- Each group selects one of the two cases as its analysis object

Group reading and role division (5 min)

- Group members read the case content together

- Clarify each member's role in the group (e.g., recorder, speaker)

Group discussion and analysis (20 min)

- Discuss the case using the task sheet and guiding questions:

① Identify ethical issues and risks

② Apply ethical frameworks (fairness, accountability, trustworthy AI, etc.)

③ Propose technical and organizational improvement plans

④ Prepare presentation content

Group presentation and demonstration (15-20 min)

- Each group presents for 3-5 minutes:

① Problem diagnosis

② Ethical analysis

③ Improvement suggestions

- Other students ask brief questions or offer additions

1.2 Two Cases

1.2.1 Case 1: Large model assistant in recruitment: Efficiency improvement and invisible discrimination

1.2.1.1 Background

  • You are a human resources manager at a multinational retail group with approximately 60,000 employees worldwide. To improve recruitment efficiency for technical personnel, the company recently launched a generative-AI-driven recruitment assistant (AI Recruitment Assistant, AIRA) that automates initial resume screening, generates interview outlines, and helps HR write post-interview evaluation reports. The system is built on a multimodal model similar to GPT and fine-tuned on the company's recruitment and performance data from the past decade

  • The typical process of AIRA is as follows:

    • Resume Screening Stage: The model automatically selects the top 50 “most suitable candidates” from 1,000 resumes based on the job description

    • Interview Preparation Stage: The model automatically generates personalized interview questions (e.g., “Please talk about how you deal with high-intensity project pressure”)

    • Interview Summary Stage: The model generates a comprehensive evaluation based on interview records (e.g., “This candidate shows strong leadership potential but has a conservative communication style”)

    • HR Decision Stage: Finally, human recruiters review only the top 10 candidates recommended by AIRA

  • In the early stage of the project, the system delivered significant efficiency gains for technical positions (such as software engineers and data analysts): the recruitment cycle shortened by 40%, and candidate satisfaction also rose. However, three months later, the Employee Committee and the company's ethics advisor discovered abnormal data: the pass rate of female candidates for technical positions dropped significantly compared with the period before AIRA was introduced, while that of male candidates rose sharply

1.2.1.2 Current challenges

  • Data and Algorithm Aspects

    • Training Data Bias: AIRA's corpus comes mainly from the company's annual recruitment documents and “high-performing employee cases”, in which about 80% of the engineer samples are male

    • Implicit Gender Signals: The textual style of “successful candidates” learned by the system often resembles male expression patterns (such as using a more assertive and direct tone)

    • Question Bias: AI-generated questions often include items such as “Can you accept business trips or overtime?”, which puts applicants with family-care responsibilities (especially women) at an implicit disadvantage

  • Organizational and Ethical Aspects

    • Fairness Risk: Systemic bias has amplified gender inequality
    • Lack of Transparency: AIRA's generation logic is opaque, and the human resources team cannot fully explain the basis for the model's recommendations
    • Unclear Accountability: In the event of a discrimination complaint, who should be held responsible: the algorithm development team, the HR department, or senior management?
    • Employee Trust Crisis: Candidates question the fairness of AI initial screening, and internal employees worry that AI will eventually replace HR's judgment role

1.2.1.3 Illustrative questions for discussion

  • The questions below illustrate possible directions for discussion; they are examples only, and your group's discussion should not be limited to them.

  • Illustrative questions:

    • Bias source identification

      • Which specific parts (data selection, prompt design, evaluation criteria) may lead to systemic bias?
      • How would you diagnose these biases (e.g., gendered-language analysis, statistical differences in outputs)? A minimal diagnostic sketch follows this list.
    • Fairness intervention strategies

      • If you want to correct the bias, do you prefer to intervene in the data stage, model training stage, or post-result processing stage? Why?
      • When introducing “quota adjustment” or “threshold re-ranking”, how would you explain that this is not equivalent to “reverse discrimination”?
    • Accountability and transparency mechanisms

      • If a candidate asks to “explain why they didn’t pass the screening”, what should your explanation template include?
      • How should responsibility be divided between model developers and HR users?
    • Trust and governance

      • Among the four principles of Trustworthy AI (respect for human autonomy, prevention of harm, fairness, explicability), which one do you think is the hardest to achieve? Why?

      • If you were on the team, how would you restore candidates’ trust without sacrificing efficiency?
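
As one concrete way to run the “statistical differences in outputs” diagnosis above, the minimal Python sketch below computes selection rates by gender and checks their ratio against the four-fifths rule, a common red-flag threshold from US EEOC guidance. The candidate records and field names are hypothetical; a real audit would also control for job family, seniority, and intersectional groups.

```python
# Minimal sketch of an output-statistics bias check for an AIRA-style
# screener. Candidate records and field names are hypothetical.

def selection_rate(candidates, group):
    """Fraction of candidates in `group` that passed screening."""
    members = [c for c in candidates if c["gender"] == group]
    return sum(c["passed"] for c in members) / len(members) if members else 0.0

# Toy screening outcomes (in practice, pull these from the system's logs).
candidates = (
    [{"gender": "female", "passed": p} for p in [True, False, False, False]]
    + [{"gender": "male", "passed": p} for p in [True, True, True, False]]
)

rate_f = selection_rate(candidates, "female")  # 0.25
rate_m = selection_rate(candidates, "male")    # 0.75

# Disparate impact ratio: values below 0.8 violate the "four-fifths rule".
ratio = rate_f / rate_m
print(f"female={rate_f:.2f}, male={rate_m:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Red flag: selection-rate disparity exceeds the four-fifths threshold")
```

A parallel check on AI-generated question wording (for instance, how often availability-probing questions appear by candidate group) would cover the gendered-language direction.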

1.2.1.4 Group presentation requirements

  • Each group should complete a concise presentation within 5 minutes based on its discussion, covering the following three core aspects:

    • Problem diagnosis: Briefly describe the bias phenomena of the AIRA recruitment system and their potential consequences (supported by data or logic)

    • Ethical analysis: Use an ethical framework to explain why this is a “moral issue” and identify the core value conflicts involved

    • Improvement plan: Propose a bias-correction and governance plan with at least three measures (spanning the technical, organizational, and policy levels)

1.2.2 Case 2: Employee monitoring AI system: Efficiency management or “Digital surveillance”?

1.2.2.1 Background

  • You are a human resources manager at a multinational financial services company (FinTrust Group). After the pandemic, the company adopted a hybrid work model. To monitor the productivity of remote employees, the company launched an AI-driven employee behavior analysis system (WorkSight AI) in 2023

  • The system collects and analyzes data in the following ways:

    • Computer activity records (keystroke rate, active window duration, application usage frequency)
    • Video conference participation analysis (speech duration, facial expression recognition, concentration score)
    • Email and communication volume
    • Regular generation of “efficiency scores” for supervisors to review (see the illustrative sketch after this list)
  • The management believes that this system has improved productivity and made performance management “more objective”. However, six months later, the labor union and employee representatives protested, stating that WorkSight AI constitutes “digital surveillance” and “erosion of trust”
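
To make the discussion concrete, here is a deliberately naive sketch of how a WorkSight-style “efficiency score” might be aggregated. The metrics and weights are invented for illustration and are not the actual system's formula; the point is how much contestable value judgment hides in the choice of weights.

```python
# Hypothetical aggregation of a WorkSight-style "efficiency score".
# Metric names, normalizations, and weights are invented for illustration.

WEIGHTS = {
    "keystroke_rate": 0.3,        # penalizes thinking time away from the keyboard
    "active_window_share": 0.3,   # penalizes off-screen work (calls, paper, reading)
    "meeting_speech_share": 0.2,  # penalizes introverted communication styles
    "focus_score": 0.2,           # facial-expression proxy of doubtful validity
}

def efficiency_score(metrics: dict) -> float:
    """Weighted sum of metrics normalized to [0, 1], scaled to 0-100."""
    return 100 * sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)

employee = {"keystroke_rate": 0.4, "active_window_share": 0.7,
            "meeting_speech_share": 0.2, "focus_score": 0.5}
print(f"efficiency score: {efficiency_score(employee):.1f}")  # 47.0
```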

1.2.2.2 Current challenges

  • Privacy and Autonomy Risks

    • The system's reasons for automatically flagging employees as “low efficiency” are vague, such as “excessively long time away from the keyboard” or “insufficiently focused expression”

    • The system collects data around the clock, making employees feel continuously monitored and causing psychological stress

    • Data is used for performance evaluation without explicit consent

  • Fairness and Bias Risks

    • “Active performance in front of the camera” has become a de facto standard, disadvantaging introverted employees and those who care for family members at home
  • Governance and Accountability Dilemmas

    • Managers regard AI reports as “objective facts”, undermining human judgment

    • The data retention period is unclear, posing security and abuse risks

    • Employees lack channels to refuse monitoring or to explain and correct the data or evaluations

1.2.2.3 Illustrative questions for discussion

  • The questions below illustrate possible directions for discussion; they are examples only, and your group's discussion should not be limited to them.

  • Illustrative questions:

    • How should the boundary of employee data collection be defined? Which data belongs to the “private domain”?

    • Can the “efficiency score” generated by the system be used for performance evaluation? What conditions need to be met?

    • Which principle of Trustworthy AI (respect for human autonomy, prevention of harm, fairness, explicability) is most threatened in this scenario?

    • If the company still insists on using the system, how would you design the notification and consent process?

    • How can the company prevent “surveillance panic” and maintain trust within its organizational culture?

1.2.2.4 Group presentation requirements

  • Each group should complete a concise presentation within 5 minutes based on its discussion, covering the following three aspects:

    • Problem diagnosis: Identify the main ethical risks of the system and the affected groups

    • Ethical analysis: Select a framework for analysis (e.g., consequentialism or Trustworthy AI)

    • Improvement plan: Propose three policies that balance efficiency and trust

1.3 Analytic Guide for Tutors

1.3.1 Case 1: Large Model Assistant in Recruitment – Efficiency and Invisible Discrimination

1.3.1.1 Problem Diagnosis

AIRA, an AI-powered recruitment assistant, uses large language models to screen résumés and generate candidate scores. However, because it was trained on biased historical data, the system significantly under-represents women and applicants with disabilities.

  • Observed Bias: Male candidates’ selection rate is much higher.
  • Consequences: Damaged workforce diversity, potential legal risks, and loss of employer reputation.

Core Issue: The algorithm reproduces and amplifies existing social inequalities.

1.3.1.2 Ethical Analysis

Using utilitarian and deontological ethics:

  • Utilitarian view: Although AIRA improves efficiency, overall social welfare decreases if fairness is sacrificed.
  • Deontological view: Firms have a moral duty to treat individuals equally; excluding groups for convenience violates ethical responsibility.

Value Conflicts:

  • Efficiency vs. Fairness: Rapid automation may undermine equal opportunity.
  • Automation vs. Human Agency: Over-reliance on AI weakens human accountability.

1.3.1.3 Improvement Plan

Three-level corrective actions:

  1. Technical: Apply re-sampling and fairness-constrained algorithms to balance gender and disability representation (a minimal re-sampling sketch follows this list).
  2. Organizational: Maintain human-in-the-loop review and establish ethical oversight in recruitment.
  3. Policy: Introduce AI fairness audits and transparent accountability standards.
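
A minimal sketch of the re-sampling idea in measure 1, assuming training examples carry a group label; the data and names are illustrative. Fairness-constrained training (e.g., adding a demographic-parity penalty to the loss) would be the in-processing counterpart.

```python
import random

# Sketch of pre-processing re-sampling: duplicate minority-group training
# examples until every group is equally represented. The "group" field and
# the toy data are illustrative placeholders.

def oversample_minority(examples, group_key="group"):
    """Return a copy of `examples` with minority groups oversampled to parity."""
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex[group_key], []).append(ex)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Randomly duplicate members to reach the target count (0 for the majority).
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

data = ([{"group": "male", "text": "resume ..."}] * 80
        + [{"group": "female", "text": "resume ..."}] * 20)
print(len(oversample_minority(data)))  # 160: 80 + 80 after oversampling
```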

1.3.2 Case 2: Employee Monitoring AI System – Efficiency Management or “Digital Surveillance”?

1.3.2.1 Problem Diagnosis

The company deploys an AI monitoring system that tracks employees’ computer use, keystrokes, webcam activity, and work duration to optimize productivity.

Main ethical risks:

  • Privacy intrusion: Continuous tracking blurs the boundary between work and private life.
  • Autonomy loss: Employees feel constantly observed, leading to self-censorship and stress.
  • Fairness and bias: Monitoring metrics (e.g., idle time) may penalize certain job types or employees with health or caregiving needs.

Affected groups:

  • Office employees and remote workers under constant data capture.
  • HR and line managers facing moral pressure between compliance and empathy.

Core tension: Efficiency-driven management vs. human dignity and trust.

1.3.2.2 Ethical Analysis

Framework: Trustworthy AI (EU Ethics Guidelines for Trustworthy AI, 2019)

Relevant principles:

  1. Respect for human autonomy – Monitoring should not undermine employee agency.
  2. Prevention of harm – Excessive data surveillance may cause psychological harm.
  3. Fairness – Performance data must not lead to discrimination or unequal treatment.
  4. Explicability – Employees should understand what data is collected and how it’s used.

Ethical insight: Productivity gains lose legitimacy if they erode trust, fairness, and dignity.

1.3.2.3 Improvement Plan

Three balanced policy actions:

  1. Transparent data policy – Clearly communicate what is monitored, for what purpose, and who can access the data.
  2. Human-centered oversight – Involve employee representatives or ethics committees in monitoring design and review.
  3. Purpose limitation & opt-out options – Use monitoring data only for improvement, not punishment; allow limited non-tracked zones or times (a machine-checkable policy sketch follows below).
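
One way to operationalize measures 1 and 3 is to make the data policy itself machine-checkable, so that every downstream use of a metric is validated against its declared purpose and retention limit. The sketch below is a hypothetical illustration under those assumptions, not a feature of any existing product.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical machine-readable data policy for a monitoring system.
# Metric names, purposes, and retention periods are illustrative assumptions.

@dataclass(frozen=True)
class DataPolicy:
    metric: str
    purpose: str            # the only use this metric may be put to
    retention: timedelta    # hard deletion deadline
    employee_visible: bool  # can the employee view and contest the record?

POLICIES = [
    DataPolicy("active_window_share", "team workload planning", timedelta(days=30), True),
    DataPolicy("meeting_speech_share", "meeting format improvement", timedelta(days=14), True),
]

def allowed(metric: str, use: str) -> bool:
    """Purpose limitation: a metric may be used only for its declared purpose."""
    return any(p.metric == metric and p.purpose == use for p in POLICIES)

print(allowed("active_window_share", "performance evaluation"))  # False
print(allowed("active_window_share", "team workload planning"))  # True
```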