Prof. Yucheng Zhang
Case 1: Large model assistant in recruitment: Efficiency improvement and invisible discrimination
Case 2: Employee monitoring AI system: Efficiency management or "digital surveillance"?
| Stage | Time | Content description |
|---|---|---|
| Grouping and case selection | 5 min | Students form groups freely; each group selects one of the two cases as its object of analysis |
| Group reading and role division | 5 min | Group members read the case together and agree on roles (e.g., recorder, speaker) |
| Group discussion and analysis | 20 min | Discuss using the case task sheet and guiding questions: ① identify ethical issues and risks; ② apply ethical frameworks (fairness, accountability, trustworthy AI, etc.); ③ propose technical and organizational improvements; ④ prepare the presentation |
| Group presentation and demonstration | 15-20 min | Each group presents for 3-5 minutes (① problem diagnosis, ② ethical analysis, ③ improvement suggestions); other students ask brief questions or add comments |
You are a human resources manager at a multinational retail group with approximately 60,000 employees worldwide. To speed up the recruitment of technical personnel, the company recently launched a generative-AI recruitment assistant (AI Recruitment Assistant, AIRA) that automates initial resume screening, generates interview outlines, and helps HR draft post-interview evaluation reports. The system is built on a GPT-like multimodal model and fine-tuned on the company's recruitment and performance data from the past decade.
The typical process of AIRA is as follows:
Resume Screening Stage: from 1,000 resumes, the model automatically selects the 50 "most suitable candidates" for the job description.
Interview Preparation Stage: the model generates personalized interview questions (e.g., "Please talk about how you handle high-intensity project pressure").
Interview Summary Stage: the model produces a comprehensive evaluation from the interview records (e.g., "This candidate shows strong leadership potential but a conservative communication style").
HR Decision Stage: human recruiters review only the list of the top 10 candidates recommended by AIRA.
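Read structurally, this is a four-stage funnel in which each stage narrows the pool on model output alone until the final human review of ten names. A minimal sketch, assuming hypothetical helpers `llm_score` and `llm_generate` that stand in for the fine-tuned model (the case gives no code):

```python
# A minimal sketch of the funnel above. `llm_score` and `llm_generate` are
# hypothetical stand-ins for the fine-tuned model; the case gives no code.

def screen_resumes(resumes, job_description, llm_score, top_n=50):
    """Stage 1: rank ~1,000 resumes by a model-generated fit score."""
    ranked = sorted(resumes, reverse=True,
                    key=lambda r: llm_score(r["text"], job_description))
    return ranked[:top_n]

def run_funnel(resumes, job_description, llm_score, llm_generate):
    shortlist = screen_resumes(resumes, job_description, llm_score)      # Stage 1
    questions = {r["id"]: llm_generate(r["text"]) for r in shortlist}    # Stage 2
    # Stages 3-4 (post-interview summaries, human review of the top 10)
    # repeat the pattern: model output in, a shorter ranked list out.
    return shortlist, questions
```

Note that by the HR Decision Stage, humans see only candidates the model has already passed twice, which is what makes upstream bias hard to notice.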
Early in the project, the system delivered significant efficiency gains for technical positions (such as software engineer and data analyst): the recruitment cycle shortened by 40% and candidate satisfaction rose. Three months later, however, the Employee Committee and the company's ethics advisor discovered abnormal data: since AIRA's introduction, the pass rate of female candidates for technical positions had dropped significantly, while that of male candidates had risen sharply.
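One standard way to quantify such "abnormal data" is the adverse-impact ratio (the "four-fifths rule" from US employment practice): the selection rate of the protected group divided by that of the highest-selected group. A minimal sketch; the counts below are illustrative placeholders, not data from the case:

```python
# Quantifying the disparity: selection rates by gender and the
# adverse-impact ratio. The counts are placeholders, not case data.

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

female_rate = selection_rate(selected=12, applied=400)  # 3.0% pass rate
male_rate = selection_rate(selected=48, applied=600)    # 8.0% pass rate

impact_ratio = female_rate / male_rate                  # 0.375
if impact_ratio < 0.8:  # below the four-fifths threshold
    print(f"Adverse impact detected: ratio = {impact_ratio:.2f}")
```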
Data and Algorithm Aspects
Training Data Bias: AIRA's corpus comes mainly from the company's annual recruitment documents and "high-performing employee cases", in which about 80% of the engineer samples are male.
Implicit Gender Signals: the textual style of "successful candidates" that the system has learned tends to resemble male expression patterns (such as a more assertive, direct tone).
Question Bias: AI-generated questions often include probes such as "Can you accept business trips or overtime?", which implicitly disadvantage applicants with family-care responsibilities (especially women); an audit of the sample skew and question wording is sketched below.
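A group working on bias-source identification might start with a first-pass audit like the following sketch, which checks the 80/20 sample skew and flags availability-probing questions. The sample schema and keyword list are illustrative assumptions, not artifacts of the real system:

```python
# A first-pass audit for two of the bias sources above: skewed training
# samples and availability-probing question wording. The sample schema
# and keyword list are illustrative assumptions.

from collections import Counter

def audit_sample_balance(training_samples):
    """Share of each gender among 'high-performing employee' cases."""
    counts = Counter(s["gender"] for s in training_samples)
    total = sum(counts.values())
    return {gender: n / total for gender, n in counts.items()}  # e.g. {'M': 0.8, 'F': 0.2}

AVAILABILITY_PROBES = ("business trip", "overtime", "relocate", "on call")

def flag_availability_questions(questions):
    """Return generated questions that probe availability rather than skill."""
    return [q for q in questions
            if any(term in q.lower() for term in AVAILABILITY_PROBES)]
```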
Organizational and Ethical Aspects
The questions below illustrate possible directions for discussion; they are examples only, and your group's discussion should not be limited to them.
Illustrative questions:
Bias source identification
Fairness intervention strategies
Accountability and transparency mechanisms
Trust and governance
Among the four principles of Trustworthy AI (respect for autonomy, prevention of harm, fairness, explainability), which one do you think is the hardest to achieve? Why?
If you were on the team, how would you restore candidates’ trust without sacrificing efficiency?
| Focus | Specific Requirements |
|---|---|
| Problem diagnosis | Briefly describe the bias phenomena of the AIRA recruitment system and their potential consequences (supported by data/logic) |
| Ethical analysis | Use an ethical framework to explain why this is a “moral issue” and identify the involved core value conflicts |
| Improvement plan | Propose a bias-correction and governance plan with at least three measures, spanning the technical, organizational, and policy levels (one illustrative technical measure is sketched below) |
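As one concrete technical-level measure a group might propose: blind screening, i.e., redacting direct gender signals before a resume reaches the scoring model. A minimal sketch with hypothetical field names; note that redaction does not remove proxy signals such as writing style, so it complements rather than replaces the audits above:

```python
# One illustrative technical measure: blind screening. Direct gender signals
# are redacted before a resume reaches the scoring model. Field names are
# assumptions; redaction does not remove proxy signals such as writing style.

import re

REDACTED_FIELDS = {"name", "gender", "photo_url", "date_of_birth"}
PRONOUNS = re.compile(r"\b(he|she|him|her|his|hers)\b", re.IGNORECASE)

def blind_resume(resume: dict) -> dict:
    """Return a copy of the resume with direct gender signals removed."""
    blinded = {k: v for k, v in resume.items() if k not in REDACTED_FIELDS}
    blinded["text"] = PRONOUNS.sub("[REDACTED]", blinded.get("text", ""))
    return blinded
```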
You are a human resources manager at a multinational financial services company (FinTrust Group). After the pandemic, the company adopted a hybrid work model, and in 2023 it launched an AI-driven employee behavior analysis system (WorkSight AI) to monitor the productivity of remote employees.
The system collects and analyzes data in several ways: it logs computer use and keystroke frequency, samples webcam activity, and records working duration, then distills these signals into per-employee "efficiency" ratings.
Management believes the system has improved productivity and made performance management "more objective". Six months later, however, the labor union and employee representatives protested that WorkSight AI constitutes "digital surveillance" and an "erosion of trust".
Privacy and Autonomy Risks
The reasons the system gives for judging an employee as "low efficiency" are vague, such as "excessively long time away from the keyboard" or "insufficiently focused expression"
The system collects data around the clock, making employees feel continuously monitored and causing psychological stress
Data is used for performance evaluation without explicit consent
Fairness and Bias Risks
Governance and Accountability Dilemmas
Managers regard AI reports as “objective facts”, undermining human judgment
The data retention period is unclear, posing risks of security and abuse
Employees lack channels to refuse collection, or to explain and correct the data and the resulting evaluations (two collection-time safeguards are sketched below)
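Two of the dilemmas above (no explicit consent, unclear retention) can be made concrete with a collection-time gate. A minimal sketch, assuming hypothetical metric names and a 90-day retention window chosen purely for illustration:

```python
# Two governance controls made concrete: no metric is recorded without
# explicit consent, and every record carries an explicit expiry date.
# Metric names and the 90-day window are assumptions for this sketch.

from datetime import datetime, timedelta, timezone

CONSENTED_METRICS = {"active_hours"}   # would be granted per employee in reality
RETENTION = timedelta(days=90)         # retention period must be stated, not open-ended

def collect(metric: str, value, consented=CONSENTED_METRICS):
    """Refuse to record any metric the employee has not consented to."""
    if metric not in consented:
        raise PermissionError(f"No consent on record for metric '{metric}'")
    return {"metric": metric, "value": value,
            "expires": datetime.now(timezone.utc) + RETENTION}
```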
The questions below illustrate possible directions for discussion; they are examples only, and your group's discussion should not be limited to them.
Illustrative questions:
How should the boundary of employee data collection be defined? Which data belongs to the “private domain”?
Can the “efficiency score” generated by the system be used for performance evaluation? What conditions need to be met?
Which principle of Trustworthy AI (autonomy, fairness, transparency, harm prevention) is most threatened in this scenario?
If the company still insists on using the system, how would you design the notification and consent process?
How can the company prevent "surveillance panic" and maintain trust in its organizational culture?
| Focus | Content description |
|---|---|
| Problem diagnosis | Identify the main ethical risks of the system and the affected groups |
| Ethical analysis | Select a framework for analysis (e.g., consequentialism or Trustworthy AI) |
| Improvement plan | Propose three policies to balance efficiency and trust |
AIRA, an AI-powered recruitment assistant, uses large language models to screen résumés and generate candidate scores. However, because it was trained on biased historical data, the system significantly under-represents women and applicants with disabilities.
Core Issue: The algorithm reproduces and amplifies existing social inequalities.
Using Utilitarianism and Deontological ethics: a utilitarian reading weighs the company's efficiency gains against the aggregate harm of systematically excluding qualified candidates; a deontological reading holds that screening applicants by proxies for gender or disability violates the duty to give each person equal respect, whatever the efficiency payoff.
Value Conflicts:
| Tension | Description |
|---|---|
| Efficiency vs. Fairness | Rapid automation may undermine equal opportunity. |
| Automation vs. Human Agency | Over-reliance on AI weakens human accountability. |
Three-level corrective actions: technical (rebalance the training data and audit outputs for adverse impact), organizational (human review of AI shortlists, diverse hiring panels), and policy (disclose the use of AI to candidates and provide an appeal channel).
The company deploys an AI monitoring system that tracks employees’ computer use, keystrokes, webcam activity, and work duration to optimize productivity.
Main ethical risks: continuous and opaque surveillance, use of personal data without explicit consent, vague and uncontestable "efficiency" judgments, and over-reliance on AI reports in performance decisions.
Affected groups: all monitored employees, especially remote workers and anyone the system misclassifies as "low efficiency".
Core tension: Efficiency-driven management vs. human dignity and trust.
Framework: Trustworthy AI (EU Ethics Guidelines for Trustworthy AI, 2019)
Relevant principles: respect for human autonomy (employees cannot meaningfully refuse), prevention of harm (psychological stress from round-the-clock monitoring), fairness (opaque, potentially biased scoring), and explicability (vague grounds for "low efficiency" judgments).
Ethical insight: Productivity gains lose legitimacy if they erode trust, fairness, and dignity.
Three balanced policy actions: ① define and disclose strict data-collection boundaries, with explicit and revocable consent; ② restrict "efficiency scores" to advisory or aggregated use, with mandatory human review before any performance consequence; ③ give employees channels to view, contest, and correct their data, under a fixed retention period (a data-minimization sketch follows).
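To make policies ① and ② concrete, one hedged sketch of data minimization by aggregation: productivity is reported only as team-level averages with a minimum group size, so no individual activity trace reaches managers. The record schema and the threshold of five are assumptions, not part of WorkSight AI:

```python
# Data minimization by aggregation: productivity is reported only as team
# averages with a minimum group size, so no individual activity trace
# reaches managers. Schema and the k = 5 threshold are assumptions.

from statistics import mean

def team_report(records, min_group=5):
    """records: [{'team': str, 'active_hours': float}, ...] for one period."""
    by_team = {}
    for r in records:
        by_team.setdefault(r["team"], []).append(r["active_hours"])
    # Suppress teams too small to aggregate anonymously.
    return {team: round(mean(hours), 1)
            for team, hours in by_team.items() if len(hours) >= min_group}
```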