AI in personnel planning – what is allowed in the EU and what is not?

In many industries, there is increasing pressure to make staff deployment more efficient and flexible – whether due to a shortage of skilled workers, cost targets or rapidly changing demand. The use of artificial intelligence (AI) in personnel planning is therefore increasingly coming into focus: AI systems can evaluate large amounts of data from time management, sales forecasts and employee profiles and use this data to automatically create schedules. AI promises to reduce planning costs and increase planning quality, for example by better compensating for unforeseen absences and complying with legal requirements. But is this legally permitted?

Advantages of AI in personnel planning

AI systems can bring the following specific benefits to workforce scheduling:

  • More precise demand forecasts: AI models analyze historical data (e.g. patient numbers, order volumes or sales figures) and identify patterns. This allows future staffing requirements to be predicted.
  • More flexible shift planning: AI also reacts to changes at short notice. If an employee is unexpectedly absent or demand changes (e.g. in retail or logistics), an AI system can calculate alternative schedules and automatically reallocate employees.
  • Efficiency and cost reduction: Automated schedules minimize idle time and overload. This allows companies to reduce overtime, avoid expensive temporary work and cut personnel costs without jeopardizing service quality. At the same time, legal regulations are better complied with (e.g. rest periods, qualification requirements) because the system automatically takes these requirements into account.
  • Precise matching of qualifications: AI assigns employees according to their skills. This results in higher quality work results and greater satisfaction. The “demand-competence mapping” can reduce errors that are overlooked during manual planning.
  • Industry diversity: These benefits apply to many industries. AI is particularly worthwhile where staffing requirements fluctuate greatly, for example in the event industry.
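To make the forecasting idea in the first bullet concrete, here is a minimal sketch: a moving-average forecast over past headcounts. The window size and the round-up rule are illustrative assumptions, not any vendor's actual model (real systems use far richer features such as sales data or seasonality):

```python
import math
from statistics import mean

def forecast_staff(history: list[int], window: int = 3) -> int:
    """Forecast the next staffing need as a moving average of recent events.

    `history` holds headcounts from past comparable events. We round up
    because understaffing is usually costlier than one extra person.
    """
    recent = history[-window:] if len(history) >= window else history
    return math.ceil(mean(recent))

# Past headcounts for a recurring trade-fair booth:
print(forecast_staff([8, 10, 9, 11, 12]))  # → 11
```

Even a toy model like this shows the principle: the forecast is only as good as the history it is fed, which is exactly why data collection matters so much in practice.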

All these advantages sound as if AI should be used immediately for all types of personnel planning. However, for these benefits to be realized, AI must be able to access huge amounts of data. This must include not only historical data, but also employee assessments, preferences and qualifications. In the event industry in particular, many of these criteria exist primarily in the heads of the planners – they need to be digitized and made available to the AI.

And once this hurdle has been overcome, a huge pile of mostly personal and sometimes highly sensitive data remains. For this reason, the European Union has already created a legal framework for the use of AI with the AI Act.

Regulation through the EU AI Act

The new EU AI Act came into force on August 1, 2024. It follows a risk-based approach: AI applications in human resources are generally classified as high-risk systems, as decisions on recruitment, promotion, transfer or assignment of tasks affect fundamental aspects of the employment relationship. For example, “AI tools for employment, employee management and access to self-employment (e.g. software for selecting CVs for recruitment)” are explicitly considered high-risk. The same applies to AI systems that monitor or evaluate employee performance or behavior.

High-risk AI is subject to strict requirements: companies must introduce comprehensive risk management and ensure data quality and sound data governance. Specific obligations include, for example, impact assessments on fundamental rights, regular anti-discrimination tests, complete logging of all AI decisions and detailed technical documentation. Such AI systems are also to be recorded in an EU register. The AI Act further stipulates that there must always be human oversight: an AI may only support decisions; final responsibility lies with the human.

In addition to the high-risk rules, there are transparency obligations for lower risk classes. For example, when engaging in direct dialog with AI (e.g. chatbots or virtual assistants in HR), employers must disclose that it is an AI system.

What is allowed and what is not?

There are clear limits depending on the use case:

  • Classic planning tools, chatbots and forecasting systems that work without sensitive data are permitted. For example, scheduling software can use AI to calculate shift plans or evaluate availabilities. Pure information AI (e.g. an internal chatbot for vacation inquiries) is also only subject to transparency obligations – in this case, it is sufficient to tell users “I am an AI tool”. In principle, a company may evaluate its working time and absence data using AI and, for example, automatically take shift requests into account (as long as data protection and company agreements are complied with).
  • AI systems that intervene in recruitment or personnel decisions are subject to restrictions. For example, CV scanners or applicant pre-selection are permitted in principle, but are considered high-risk. These may categorize applications or make initial assessments – however, the system must be documented and a final human decision must be made. Fully automated rejections without a human review would be inadmissible (according to Art. 22 GDPR, no computer decision alone may “significantly affect” an applicant). In the event industry in particular, “applications” are often not just initial, but are used by companies in such a way that casual employees have to apply for every single job. Accordingly, the requirements of the AI Act must always be observed if this process is supported by AI. Tools for measuring performance or assessing employee behavior are also permitted, but only with strict controls. AI that collects productivity-related data or makes predictions about an employee’s performance is a high-risk application and therefore requires accountability and regular bias testing. Companies must also comply with the General Data Protection Regulation (GDPR) and may only process data that is necessary for the respective purpose.
  • AI applications with an “unacceptable risk” are prohibited. This includes, in particular, emotion recognition in the workplace – the law expressly prohibits the use of AI tools in the workplace that analyze employees’ feelings, moods or inner attitudes. Social scoring is also prohibited: an AI system may not evaluate or rank the personal or social characteristics of employees in order to make bonus payment or termination decisions, for example. It is also forbidden to exploit people using manipulative AI techniques – such as automatic behavior control or exploiting weaknesses (age, disability, social situation). In general, AI may not evaluate biometric or sensitive health data of employees without an explicit legal basis (this also falls under serious data protection and discrimination risks).
  • Practical examples: A permissible example would be an AI-supported demand forecast or roster adjustment based on anonymous usage data. A chatbot that coordinates shift swaps is permitted – as long as it is recognizable as AI. In contrast, fully automated applicant selection (selecting candidates without human intervention) would not be permitted. Video surveillance with AI evaluation (e.g. recognizing who goes into the office) is also illegal if it is used permanently and without good reason. Every new AI application in the HR sector should therefore be legally reviewed. In Germany, employers must also inform and generally involve the works council before using such systems.

Opportunities for the use of AI in event personnel planning

Despite all caution, the smart use of AI offers significant opportunities for event agencies and HR teams. Here are the most important potentials at a glance:

  • Efficiency and time savings: Routine tasks such as creating shift schedules or assigning staff to events can be significantly accelerated. Planners need to invest less time in Excel lists and phone calls and can generate complex duty rosters in seconds. This drastically reduces planning effort – freed-up capacity can be used for more important tasks.
  • Cost optimization: AI planning helps to reduce personnel costs by avoiding overstaffing and minimizing idle time. At the same time, understaffing (which leads to overtime or loss of quality) can be proactively prevented. Automatic compliance with pay scale rules (e.g. optimal distribution of overtime) can also prevent costly errors. Overall, staff deployment becomes more efficient, which has a direct impact on margins.
  • More accurate forecasts & better decisions: A major advantage of AI is data analysis. Algorithms recognize patterns that may remain hidden to humans. For example, seasonal fluctuations in demand, weather effects or booking trends can be incorporated into staff planning and more precise forecasts can be generated. Decisions such as “How many crew members do I need for event X?” are therefore based less on gut feeling and more on facts. This increases planning reliability. AI provides those responsible for planning with recommendations for action (such as warnings in the event of bottlenecks or suggestions for rearrangements), which leads to more informed decisions – ultimately enabling better management of staffing requirements.
  • Flexibility and rapid response: In the event industry, plans often change at short notice – be it due to additional guests, program changes or sick leave in the team. AI-supported systems can react to such changes in real time. For example, in the event of a sudden loss of staff, the software can immediately suggest a replacement or alert additional staff in the event of an unplanned high number of visitors. Overall, AI increases the resilience of planning: you are better prepared for surprises and can reschedule more quickly than would be possible manually. For companies, this means being able to keep a cool head even in stressful phases because the tools provide suggestions.
  • Fairness and employee satisfaction: When used correctly, AI can improve the workforce experience. Algorithms treat employees neutrally according to the criteria entered – nepotism or unintentional disadvantages by the planner are eliminated. Employee preferences can also be systematically taken into account: For example, an AI could be programmed to rotate the distribution of unpopular shifts fairly. Objectively better and balanced duty rosters increase satisfaction. The option of submitting requests (which the AI then takes into account as far as possible) also gives employees a greater say. Ultimately, both employers and employees benefit: AI can help to reduce overload, increase the ability to plan for the workforce and reduce conflicts.
  • Adherence to rules & compliance: An often underestimated advantage: AI never forgets a rule. Working time laws, rest period regulations, maximum shifts – all these compliance requirements can be stored in the system so that the planning tool automatically generates only compliant schedules. This significantly reduces the risk of violations. Qualification requirements (who is allowed to operate which machine, who needs which safety certificate) can also be checked algorithmically to ensure that no one is deployed incorrectly. AI acts like a “digital co-pilot” here, warning the planner of rule violations. This is an important advantage, especially in the EU, where violations of working time rules can be expensive. It also relieves HR of having to manually check every schedule.
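The compliance point above can be sketched in a few lines: checking consecutive shifts against a minimum rest period. The 11-hour value reflects the common EU working-time rule, but the code is a simplified illustration; real rules (exceptions, averaging periods, collective agreements) are considerably more complex:

```python
from datetime import datetime, timedelta

MIN_REST = timedelta(hours=11)  # common EU minimum rest period (simplified)

def rest_violations(shifts):
    """Return pairs of consecutive shifts whose gap is below the minimum rest.

    `shifts` is a list of (start, end) datetimes, assumed sorted by start time.
    """
    violations = []
    for (s1, e1), (s2, e2) in zip(shifts, shifts[1:]):
        if s2 - e1 < MIN_REST:
            violations.append(((s1, e1), (s2, e2)))
    return violations

shifts = [
    (datetime(2024, 6, 1, 14), datetime(2024, 6, 1, 23)),
    (datetime(2024, 6, 2, 8),  datetime(2024, 6, 2, 16)),  # only 9h after the previous shift
]
print(len(rest_violations(shifts)))  # → 1
```

A planning tool that runs such checks on every generated roster is what “AI never forgets a rule” means in practice: the hard constraints are encoded once and enforced automatically.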

Risks and challenges: Why you can’t just “throw AI at it”

Despite all the opportunities, the use of AI must not be taken lightly. Numerous challenges must be overcome to ensure that the AI project does not turn into a failure. Here are the biggest risks associated with AI in HR planning – and why you can’t just “throw AI at it”:

  • Data quality and quantity: AI is only as good as the database. In the event industry, which is often characterized by individual projects and highly varying requirements, there is often no large, clean data history. If historical staffing requirements are incomplete or unstructured, AI forecasts remain unreliable. The principle of “garbage in, garbage out” applies here. Small event agencies may not even have enough past events to recognize sound patterns. Without valid data, you can “throw” as much AI at it as you like – but the result is little added value. Companies must therefore first invest in data maintenance and collection before AI can work its magic.
  • Complexity of implementation: Introducing an AI solution is not plug-and-play. The systems must be integrated into existing software landscapes, linked to calendar and booking systems and adapted to the specific business rules. This requires effort, expertise and often external support. In addition, planners should be trained to use the AI tools and interpret the results correctly. If AI is configured incorrectly (e.g. unsuitable parameters for the event industry) or there is a lack of internal understanding, errors can occur – from nonsensical shift suggestions to incorrect data evaluation. These initial hurdles cost time and money. Simply “switching on and running” is utopian.
  • Acceptance and change management: Even the best AI is useless if employees and managers don’t accept it. There are sometimes reservations: Some fear that AI could replace or monitor humans. According to experts, AI is often viewed negatively in the HR department because people think of questionable applications in recruitment or dismissals. Transparency and involvement are crucial here. The workforce (and the works council) must be brought on board from the outset, otherwise there is a risk of mistrust and resistance. The introduction of AI planning therefore requires change management: explaining, training, taking concerns seriously. Without this, planners may ignore AI suggestions or employees may boycott duty rosters because they are “made by a computer”. The technology is only as good as the acceptance within the team.
  • Bias and discrimination: AI is not automatically objective. If a system is trained on past data, it may reproduce existing prejudices. Example: If a planner has always selected certain people for coveted jobs in the past, an AI could adopt this tendency – and disadvantage women, older employees or certain groups without intending to. This is particularly well known in recruiting (keyword Amazon recruiting AI), but can also happen with shift assignments (e.g. employees who have often said “no” are less likely to be asked for a job). It is difficult to recognize such distorted results, as complex algorithms are difficult to explain. Developers and HR need to be vigilant here and constantly check the results. In addition, of course, equal treatment rules still apply: An AI must not differentiate according to gender, origin, age, etc. – that would be illegal. But even indirect discrimination (e.g. via proxy data) is a danger. In short: without careful monitoring, there is a risk that AI will inadvertently reinforce injustices.
  • Lack of transparency and black box problems: Many AI models (such as neural networks) work like a black box – the decision-making processes are difficult to understand. This is problematic in the HR area: if an employee asks why he/she is given fewer shifts than others, the employer must be able to explain this to some extent. “That’s what the algorithm spit out” is not enough. A lack of traceability can destroy trust and become a real problem in the event of a dispute. Legally, it could even be mandatory under the GDPR to provide an explanation (for automated decisions). This lack of transparency requires countermeasures: either use rules that are easier to explain (even if the AI potential is then lower) or technical solutions for explainable AI. Until then, always check AI results with common sense. Blind trust in an opaque algorithm is dangerous.
  • Data protection and ethical boundaries: As described above, data protection is an obstacle to the extensive use of data by AI. An AI that collects “everything” – from sickness levels to productivity and GPS data – quickly overshoots the mark. Apart from legal sanctions, such an all-round monitoring approach would also destroy employee motivation. The risk of data misuse is real: AI tempts companies to incorporate more and more data points (“because we can”), for example to improve forecasts. Companies need to rein themselves in and set ethical guidelines. Data security is also an issue: AI systems are IT systems and potentially hackable. Central personnel planning databases with all shift data are sensitive – a leak could reveal business secrets or personal information. Introducing AI therefore also means investing in cybersecurity.
  • No 100% reliability: As advanced as AI is, events are often characterized by unique circumstances that no algorithm can predict. An AI model only knows the past. Unprecedented situations (e.g. a new event format, a pandemic, a sudden VIP visit with special requests) can cause forecasts to falter. Humans, on the other hand, can react to new situations with creativity and experience. If you rely too much on AI, you run the risk of overlooking unusual circumstances. Algorithms can also make mistakes. There have been cases where shift planning AIs have suggested completely impractical schedules because a parameter was set incorrectly. Human supervision remains essential. AI takes away routine work, but makes planning more challenging overall because the role of the human shifts to controlling and questioning the AI. A deceptive “lean back and let the AI do its thing” would be fatal.
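The bias point above implies a concrete, recurring audit: compare how desirable assignments are distributed across groups. A minimal sketch (the grouping key and what counts as “desirable” are illustrative assumptions; a real audit would also test for indirect, proxy-based effects):

```python
from collections import Counter

def shift_share(assignments):
    """Share of desirable shifts per group, to spot skewed distributions.

    `assignments` is a list of (group, is_desirable) tuples.
    """
    total = Counter()
    desirable = Counter()
    for group, good in assignments:
        total[group] += 1
        desirable[group] += int(good)
    return {g: desirable[g] / total[g] for g in total}

data = [("A", True), ("A", True), ("A", False),
        ("B", False), ("B", False), ("B", True)]
shares = shift_share(data)
# A large gap between groups is a signal to review the planning criteria.
print(shares)
```

Simple descriptive checks like this will not prove discrimination, but run regularly they surface the skews that the AI Act's bias-testing obligations are aimed at, before they harden into practice.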

Conclusion

AI can significantly improve workforce scheduling – it creates transparency about future requirements and relieves planners. At the same time, the EU legal framework requires companies to take great care: In addition to GDPR and equal treatment law, the AI Act in particular brings many new obligations. High-risk applications (recruiting, employee monitoring, etc.) must be strictly checked, documented and monitored by humans. AI applications that entail unacceptable risks – such as emotion recognition or unfair social assessments – are prohibited. Overall, companies can benefit from AI in HR planning, but they must proceed responsibly and pursue a well-thought-out strategy so that AI can be used in a targeted and legally compliant manner.
