We are one of the first workforce management software providers to release our MCP integration. This is great news for everyone who wants to use CrewBrain with an LLM of their choice – a further step we’re taking towards smarter planning.
How do I set it up?
We explain the setup and technical requirements in a separate wiki article.
Background: What's behind our MCP integration?
With the launch of our MCP server, CrewBrain introduces an important building block for connecting AI (e.g., large language models, LLMs) and workforce management. In short: The MCP server provides the interface through which AI clients can interact with the CrewBrain system, e.g., to create jobs or customers.
Some technical highlights:
- Authentication can be done via OAuth with Dynamic Client Registration (DCR).
- Alternatively, token-based authentication can be used, or (less recommended) Basic Auth with username and password.
- The MCP server supports essentially the same feature set as the existing CrewBrain API v2 and is actively being developed further.
- Currently, communication takes place via SSE (Server-Sent Events). If you use a client that only supports HTTP Streamable Transport, please contact us. We will check whether a timely integration of Streamable HTTP is possible.
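To make this concrete, here is a minimal sketch of an MCP client connecting over SSE with a personal access token, using the official MCP Python SDK (`pip install mcp`). The server URL and the token are placeholders, not real values; the actual endpoint is documented in the wiki article mentioned above.

```python
# Minimal sketch: connecting to the CrewBrain MCP server over SSE with a
# personal access token. URL and token are placeholders, not real values.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://YOUR-INSTANCE.crewbrain.example/mcp/sse"  # placeholder
TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder

async def main() -> None:
    # The token travels as a standard Bearer header on the SSE transport.
    headers = {"Authorization": f"Bearer {TOKEN}"}
    async with sse_client(SERVER_URL, headers=headers) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server exposes to AI clients.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```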
All this demonstrates our commitment to modern technologies without locking you into a specific AI provider. We’re creating the technical infrastructure so that our customers can use CrewBrain with their preferred AI.
What can you do with it? Examples & use cases
The MCP integration opens up various exciting possibilities, such as:
Reporting & Insights: AI assistants could analyze planning and deployment data to identify trends ("In the last 3 months there were an average of 12 overtime hours per week in Department B") and make suggestions for improvements ("Consider scheduling an extra shift or hiring more part-time staff").
Chat-based planning assistance: A chat interface (e.g., an LLM like ChatGPT or another model) asks: "What does the duty roster look like for next week?" and the system provides a summary based on CrewBrain's data. Or: "Create a job 'Return vehicle' on Saturday from 10 am to 12 pm at Location Leipzig." The LLM generates the technical call, the MCP server accepts the task and creates the entry in CrewBrain (a sketch of such a call follows the next example).
Integration with other systems: For example, if a chatbot is embedded in the employee portal, employees could express change requests via voice or text ("I want to take next Tuesday off") and the system automatically checks and, if possible, creates the request or suggests alternative dates. The MCP server enables access to CrewBrain data.
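To illustrate the job-creation example above: once an MCP session is established (see the connection sketch in the Background section), the client ends up issuing a tool call like the one below. The tool name `create_job` and its argument names are hypothetical; the authoritative schema is whatever the CrewBrain MCP server actually advertises.

```python
# Sketch of the tool call an LLM might generate for "Create a job
# 'Return vehicle' on Saturday from 10 am to 12 pm at Location Leipzig".
# "create_job" and the argument names are hypothetical, for illustration only.
result = await session.call_tool(
    "create_job",
    arguments={
        "title": "Return vehicle",
        "location": "Leipzig",
        "start": "2025-12-20T10:00:00+01:00",  # a Saturday, 10 am
        "end": "2025-12-20T12:00:00+01:00",    # 12 pm (noon)
    },
)
print(result.content)  # the server's confirmation of the created entry
```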
When choosing and implementing your AI integration, please keep the EU AI Act in mind if you are located in the EU or work for a company based in the EU. The EU AI Act restricts certain AI integrations and sets requirements for the traceability of AI decisions, especially if they affect your staff. See also our article "AI in HR planning – what is and isn't allowed in the EU?"
Why this is an important step for CrewBrain
- Flexibility & openness: With the MCP interface, no one is limited to a predefined AI tool; customers can integrate the AI assistant of their choice. Even fully self-hosted AIs, which are particularly privacy-friendly, can be used in conjunction with CrewBrain.
- Future-proof planning: AI can simplify many repetitive tasks, such as updating customers and locations from an Excel list.
- Increased efficiency & relief: Routine inquiries (e.g., "Do I have an early shift next week?") can be answered automatically for employees.
- Security & standardization: OAuth/DCR and token authentication provide a modern security framework.
Data protection implications
With the MCP integration, it is mainly the way you interact with CrewBrain that changes. A few important points:
- Acting on behalf of the user: A connected AI assistant always operates in your user context: it can only access the data and perform actions that your CrewBrain account is authorized for. Authentication is done, for example, via OAuth with Dynamic Client Registration or a personal access token.
- Data flow to the AI: Each AI integration decides for itself which data it sends to the respective LLM. This can range from highly minimized, abstracted information ("Give me an overview of next week's assignments") to detailed data sets. It's worth defining the following in advance (a minimization sketch follows this list):
- Which fields are truly necessary
- Whether sensitive data (e.g., names) can be pseudonymized or omitted
- Which actions the AI is allowed to trigger (read-only vs. write access)
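As a concrete illustration of data minimization, here is a small, hypothetical filter that could sit between CrewBrain and the LLM: it keeps only the fields the use case needs and replaces real names with stable pseudonyms. The field names are invented examples, not the actual CrewBrain schema.

```python
# Sketch: minimizing and pseudonymizing records before they go to an LLM.
# Field names are hypothetical examples, not the actual CrewBrain schema.
from typing import Any

ALLOWED_FIELDS = {"job_title", "start", "end", "department"}  # only what the use case needs

def minimize(record: dict[str, Any], pseudonyms: dict[str, str]) -> dict[str, Any]:
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # Replace the real name with a stable pseudonym so the LLM can still
    # reason about "the same person" without learning who it is.
    if "employee_name" in record:
        out["employee"] = pseudonyms.setdefault(
            record["employee_name"], f"Employee {len(pseudonyms) + 1}"
        )
    return out

pseudonyms: dict[str, str] = {}
record = {"employee_name": "Jane Doe", "job_title": "Stage setup",
          "start": "2025-12-22T08:00", "end": "2025-12-22T16:00",
          "department": "B", "phone": "+49 170 000000"}
print(minimize(record, pseudonyms))
# -> the phone number is dropped and the name becomes "Employee 1"
```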
- Freedom of choice in hosting: With MCP you can
- use cloud LLMs from major providers (e.g., ChatGPT with OAuth integration)
- or connect fully self-hosted models that run exclusively within your own infrastructure. The latter is especially attractive for many when it comes to compliance and data protection requirements.
- Contract & compliance topics: If you integrate external LLM providers, their terms of use and data protection regulations apply in addition to those of CrewBrain. In particular, check:
- Are the sent data used for training?
- Where are the servers located (EU / third country)?
- Is there a data processing agreement (DPA)?
- Best practices:
- Maintain roles & permissions in CrewBrain carefully; the AI inherits them.
- Only provide the AI with as much data as is necessary for the use case (data minimization).
- Document internally which AI services are connected and for what purpose.
- Involve data protection officers early, especially for new, data-intensive use cases.
- Obtain user consent before using the system.
Please remember: LLMs have been known to perform unintended actions, and providers may use chats for training purposes. In case of doubt, you are giving a third party (e.g., OpenAI) the opportunity to access your CrewBrain system's data. To revoke access for a client registered via OAuth, go to Administration -> API -> OAuth and delete the respective client. From that point on, its login is no longer valid and it can no longer access data.
Your next steps
We look forward to the integrations you and your team will build using AI and the MCP protocol! We recommend working through the following checklist before deploying an AI system:
- Who on the team is allowed to test or configure AI integrations?
- If you use a public, non-self-hosted LLM provider (such as OpenAI), obtain your users' consent before using it, since their data will be processed by this provider.
- If you use your own LLM, you have more control over the data. Even so, depending on usage, a closer review of the exact use case may be necessary. For example, AI used for performance measurement or monitoring is classified as a high-risk application under the EU AI Act and therefore requires bias testing. Such use should definitely be discussed with the data protection officer beforehand.
- Which tools may my MCP client use? Usually, the clients let you select or deselect which tools are allowed. For example, you could restrict it to absences only if you only want to create vacation requests (a simple guard for self-built clients is sketched after this checklist).
- Ideally, set up a CrewBrain user with restricted permissions. This way, the LLM cannot "go rogue" and won't have access to information it shouldn't see.
- Bring your team on board and discuss together whether and how you want to use AI, and regularly check whether it actually brings you benefits; for some tasks, an experienced CrewBrain user is still faster than any computer.
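For self-built MCP clients, the tool restriction from the checklist can also be enforced in code. A minimal sketch, assuming hypothetical tool names for absence management:

```python
# Sketch: restricting a self-built MCP client to an explicit tool allowlist.
# The tool names are hypothetical; check the server's tool list for real ones.
ALLOWED_TOOLS = {"list_absences", "create_absence"}  # vacation requests only

async def call_tool_guarded(session, name: str, arguments: dict):
    # Refuse any call outside the allowlist before it reaches the server.
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{name}' is not allowed in this integration")
    return await session.call_tool(name, arguments=arguments)
```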
A reality check: What can AI actually do?
Of course, we did extensive testing in preparation for our launch. AI that is supposed to interact with CrewBrain must be especially good at the discipline of "tool-calling." Tool-calling is essentially the process of converting natural language into a computer-readable format that the MCP server can then handle.
What worked well in our tests¹
In general, AI works well with CrewBrain when only a few "tool-calls" are required, such as simple querying, creating, editing, or deleting data records. AI also generally finds it easier to work with systems that have only a small to medium number of records (e.g., dozens of jobs, dozens of employees). So far, we have not observed any unwanted behavior in ChatGPT during our tests (e.g., accidental deletion of records). Here are some requests where we already got good results with ChatGPT in testing:
- Create customer XYZ (for example, by pasting the customer's email signature)
- Create multiple customers from this Excel list (with attached Excel file).
- Employee B is on vacation from 12/23/2025 to 12/30/2025. Create the vacation entry.
- Employee A has called in sick. Create the sick leave entry.
- Create employee Firstname Lastname, Street 5, ZIP City. The email address is xyz@gmail.com.
- Create multiple employees from this Excel list (with attached Excel file).
- Create the job “Event” at Location.
- Change the phone number of employee/customer XYZ.
- Which jobs from this Excel list have already been created? (with attached Excel file)
- Assign employee XYZ to job “Event”.
- Change “Event” from 10:00–19:00 to 09:00–19:00.
- Who is on vacation on 12/23/2025?
- Which jobs are scheduled for this week?
¹ Of course, we conducted the tests on our own test system. This means that the LLM could behave differently in connection with your real data.
…and what didn't work so well
In systems with thousands of records, AI still struggles, as a lot of filtering is required. Calculations are often incorrect or the LLM gets stuck. However, we can imagine that you could equip a specially trained model with additional capabilities (e.g., running small Python code to perform calculations; a sketch follows the list below).
- How many hours did an employee work in October?
- Assign people to my job “Event”.
- Create a payroll period for an employee. (This is currently not supported by our interface)
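As an illustration of the calculation idea mentioned above: instead of letting the LLM add up times itself, it could hand the fetched records to a small, deterministic helper like this. The record layout is an invented example, not the actual CrewBrain data format.

```python
# Sketch: letting deterministic code, not the LLM, do the arithmetic.
# Assumes assignment records with ISO start/end timestamps were already
# fetched via a (hypothetical) read tool and parsed into dicts.
from datetime import datetime

def hours_in_month(assignments: list[dict], year: int, month: int) -> float:
    total = 0.0
    for a in assignments:
        start = datetime.fromisoformat(a["start"])
        end = datetime.fromisoformat(a["end"])
        if start.year == year and start.month == month:
            total += (end - start).total_seconds() / 3600
    return total

assignments = [
    {"start": "2025-10-02T08:00", "end": "2025-10-02T16:30"},
    {"start": "2025-10-15T09:00", "end": "2025-10-15T17:00"},
]
print(hours_in_month(assignments, 2025, 10))  # 16.5
```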
Reality check: Our conclusion
We also tested with LLMs from other providers and noticed that, at present, Chinese and some smaller open-source models tend to perform better in tool-calling, such as:
- Kimi K2 (modified MIT License; Moonshot AI)
- GLM 4.6 (MIT License; Z.ai)
- Qwen3 Coder (Apache 2.0 License; Qwen)
- GPT-OSS 120B (Apache 2.0 License; OpenAI)
You can find a list of models that are especially good at tool-calling at OpenRouter.
Especially in the early stages, you'll often need to work on the optimal wording for the AI so it understands exactly what you want to do. If you or the AI get stuck, feel free to check here to see what content the tool expects. With explicit hints, the LLM often knows how to proceed.
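One way to see what a tool expects is to inspect the JSON Schema it publishes. Continuing the Python session sketch from earlier (with the hypothetical `create_job` tool):

```python
# Sketch: inspecting a tool's input schema to see exactly which arguments
# it expects. "create_job" is the hypothetical tool from the earlier example.
tools = await session.list_tools()
for tool in tools.tools:
    if tool.name == "create_job":
        print(tool.inputSchema)  # JSON Schema describing the expected arguments
```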
Have you found a great use case? Then share it with us using the hashtag #crewbrainai – we look forward to your ideas!