With the new AI summaries feature, Meisterplan helps project managers evaluate information more quickly and prepare it in a structured way. The feature uses a modern generative language model to summarize project data. Transparency, data protection, and the secure processing of all inputs are top priorities. In this Q&A, we answer the most common questions about how the feature works, how data is processed, and what protective measures are in place for the AI in use.
General
Question: What type of AI technology is it?
Answer: A generative language model (Large Language Model, LLM) from Anthropic’s Claude family is used. Currently, the Claude Haiku 4.5 model is deployed. The model is utilized as a managed foundation model via Amazon Bedrock and supports users in structuring project information as well as creating project status reports and notes.
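For illustration, requests to Claude models on Amazon Bedrock follow Anthropic's Messages API format. The sketch below only builds such a request body; the model ID, prompt wording, and helper name are assumptions for this example, not Meisterplan's actual implementation:

```python
import json

# Hypothetical model identifier; the exact Bedrock model ID for
# Claude Haiku 4.5 depends on region and release (assumption).
MODEL_ID = "anthropic.claude-haiku-4-5"

def build_summary_request(project_notes: str, max_tokens: int = 1024) -> str:
    """Build a Bedrock Messages API request body that asks the model to
    summarize only the project data explicitly passed in."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Summarize the following project status data. "
                    "Use only the data provided; do not invent facts.\n\n"
                    + project_notes
                ),
            }
        ],
    }
    return json.dumps(body)

# The actual invocation would go through the bedrock-runtime client, e.g.:
# client = boto3.client("bedrock-runtime", region_name="us-west-2")
# response = client.invoke_model(modelId=MODEL_ID,
#                                body=build_summary_request(notes))
```

Constraining the prompt to the supplied project data mirrors the design described in this Q&A: the model summarizes what it is given rather than generating new project facts.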
---
Question: What is the main purpose of the AI?
Answer: The main purpose of the AI is to create and update project status notes (progress, deadlines, costs, risks, issues) based on the project data provided in each case. The AI is used to summarize and present existing information in a structured way and does not generate any “new” project data of its own.
---
Question: What type of AI language model does Meisterplan use?
Answer: It is a generative AI language model (Large Language Model, LLM) that is used for automated text generation and summarization.
---
Question: Where is the AI hosted?
Answer: The AI model provided by Anthropic is operated via AWS Bedrock. Request processing takes place within the infrastructure provided by AWS and exclusively in the configured AWS region, either in the United States (Oregon region) or within the European Union.
AWS documents that user inputs and the model outputs are not shared with the model providers (source).
---
Question: Which functions is the AI used for?
Answer: The AI is used for supportive functions in the project context, including:
- Creating project status notes,
- Generating structured summaries of project KPIs such as time, cost, progress, and risks,
- Preparing project information in a quality‑assured manner for management reports.
Only the project data provided for each case, as well as relevant role and permission information, are taken into account. Data outside the selected project is not used.
---
Question: Is it clearly communicated that the user is interacting with AI?
Answer: Yes. The use of AI features is clearly identifiable for the user. AI features are only displayed and made available if they have been explicitly activated beforehand.
Furthermore, interaction with AI is clearly marked. The feature name "AI Summaries" explicitly indicates the use of AI. Additionally, consistent AI symbols are used both in the information panel and on the generate button to make it transparent that the content is generated by AI.
---
Question: What is Meisterplan AI?
Answer: Meisterplan AI is a supportive AI feature integrated into the Meisterplan system that automatically analyzes and prepares project‑related content. It uses a generative language model to structure, summarize, and clearly present project data such as status notes, KPIs, or comments. The AI works exclusively with the project data provided in each case and helps teams create meaningful project information and management reports more quickly.
---
Question: How is Meisterplan AI activated and what requirements must be met?
Answer: Meisterplan AI can be switched on or off at any time using a toggle. To use the AI features, beta access must be enabled both system‑wide (by administrators) and in the user’s own profile. In addition, the Meisterplan AI toggle under Integrations > Meisterplan AI must be switched on. Once this is activated, all users have access to the AI features.
To use Meisterplan AI, the user’s group also needs the permission Configure Meisterplan AI under Manage > User Groups > System Administration > System Administration.
---
Question: Why is this feature labeled as beta?
Answer: The feature is in beta because its scope and final design are still evolving. While the core functionality is already stable, details such as user guidance, scope, or behavioral logic may still change. The beta label makes it transparent that the feature is not yet fully finalized and will continue to be refined based on user feedback.
Integrity
Question: Which datasets are the AI based on?
Answer: The AI model used, Claude Haiku 4.5, was fully trained by the model provider Anthropic. Meisterplan does not develop or retrain the model.
The training data consists of a mix of licensed data, human-created training examples, and publicly available texts. Specific individual datasets are not disclosed by the provider. A detailed description of the training data is available from Anthropic in their official System Card (System Card).
---
Question: What data was used for training purposes?
Answer: The training data for the AI model used comes exclusively from the AI provider Anthropic. It consists of a mix of licensed data, human-created training examples, and publicly available texts.
Project or customer data from Meisterplan is not used for training purposes and is not incorporated into either the training or further development of the AI models.
---
Question: What time period does the training data cover?
Answer: The training data for the Claude Haiku 4.5 model comes from sources extending to July 2025. Information generated after this date is not part of the training dataset and therefore cannot be reliably reflected in the model's knowledge (source).
---
Question: How are biases and "hallucinations" in AI models checked?
Answer: To reduce biases and "hallucinations", several measures are combined. The AI is used in such a way that it relies exclusively on the explicitly provided project data. This reduces the likelihood of unfounded or speculative statements.
Furthermore, the model provider Anthropic has specifically trained the AI to be cautious in situations of uncertainty. The goal is for the model to recognize missing or unclear information and communicate it transparently, instead of generating false or fabricated content. This approach is part of their "Constitutional AI" training and is verified through regular evaluations and tests.
Anthropic explicitly points out that outputs may be incorrect, incomplete, or misleading and should not be used as a factual basis without independent verification (source).
---
Question: How do you ensure that no harmful or discriminatory content is generated?
Answer: To generate AI content, an established AI model (Claude from Anthropic) is used. It has integrated safety and alignment mechanisms. The model is trained according to the "Constitutional AI" approach.
In this approach, the AI is guided by a written "constitution" that defines binding principles for generating responses. These include, among others:
- No discrimination or derogatory content towards protected groups
- Avoidance of racist, sexist, or hateful language
- Respect for human dignity and equality
These principles are an integral part of the training and evaluation process and serve to systematically prevent harmful, discriminatory, or inappropriate output (Source, Safeguards and harmlessness).
---
Question: How often is the AI retrained?
Answer: Training and the release of new model versions are handled exclusively by the AI provider Anthropic. No project-specific retraining or fine-tuning by Meisterplan takes place.
Anthropic releases new versions of the Claude Haiku models. For example, Claude Haiku 3.5 was released in October 2024 and Claude Haiku 4.5 in October 2025. Anthropic does not publish a binding release schedule, so no fixed update frequency can be stated.
---
Question: Are third-party rights protected (intellectual property, etc.)?
Answer: Yes. Third-party rights are protected because project-related content is processed exclusively for the duration of the respective AI request. The transmitted content is not used for training, further development, or updating the AI models and is not shared with the model provider.
Processing is carried out for a specific purpose within the AWS Bedrock infrastructure using contractual, technical, and organizational safeguards. This ensures that the customers' intellectual property remains entirely under their control (source).
Data Usage
Question: Is customer data used for AI? What data?
Answer: Yes, selected project-related customer data is used at runtime as input for AI functions (e.g., to generate summaries). This includes content available in the respective project, such as project data and associated comments.
This data is used exclusively to process the specific request to the AI model. It is not used for training, updating, or improving the AI models. Only explicitly selected project data is transmitted; data from other projects or contexts is not included.
---
Question: Will the data be used to train or update the AI?
Answer: No. The provided project and input data will only be processed at runtime within the scope of the request. The data will not be used for training, fine-tuning, or improving AI models.
---
Question: How do you guarantee that data won't be used to train or update the AI?
Answer: The guarantee is provided by the architecture and terms of service of AWS Bedrock. Amazon Bedrock processes the data entirely within the AWS infrastructure. Prompts and model responses are not passed to the model provider (Anthropic) and are not used for training Foundation Models for current or future model versions (source).
---
Question: How is the data protected?
Answer: Data processing takes place via AWS Bedrock using established security and compliance mechanisms.
Prompts or model responses are not shared with the model provider (Anthropic). Anthropic has no access to requests or generated results. The data processed via AWS Bedrock is not used to train Foundation Models for current or future model versions – not even in anonymized form.
Data transmission is encrypted using TLS.
AWS Bedrock meets relevant security and compliance standards, including GDPR requirements and certifications such as ISO 27001 and SOC 1/2/3. The service is suitable for use in regulated environments, such as the financial or public sector (source).
---
Question: Are custom prompts/conversations saved?
Answer: Custom prompts can optionally be saved within Meisterplan to be reused. Complete conversation histories are not permanently stored for end users.
---
Question: Are there audit logs for the generated data?
Answer: Yes, audit logs exist for the use of the AI interface. These logs record technical metadata such as timestamps, the service called, the model used, project or account assignment, and the success or failure status of a request. In addition, our AWS infrastructure performs model invocation logging automatically, recording all requests made to the AI model in Amazon S3 or CloudWatch. This log captures when the model was called and with which request. However, the associated user information is pseudonymized, so it is not possible to trace which specific individual submitted the request.
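For illustration, Amazon Bedrock's model invocation logging is configured through the Bedrock control-plane API. The sketch below shows the shape of such a logging configuration; the bucket name, log group name, and role ARN are hypothetical placeholders, not Meisterplan's actual setup:

```python
# Hedged sketch: field names follow the Bedrock
# PutModelInvocationLoggingConfiguration API; all resource names
# below are hypothetical examples.
logging_config = {
    "cloudWatchConfig": {
        "logGroupName": "/example/bedrock/invocations",  # hypothetical
        "roleArn": "arn:aws:iam::123456789012:role/ExampleBedrockLogging",  # hypothetical
    },
    "s3Config": {
        "bucketName": "example-bedrock-invocation-logs",  # hypothetical
        "keyPrefix": "invocations/",
    },
    # Deliver the text of prompts and responses to the log targets.
    "textDataDeliveryEnabled": True,
}

# The configuration would be applied via the bedrock control-plane client:
# boto3.client("bedrock").put_model_invocation_logging_configuration(
#     loggingConfig=logging_config
# )
```

With this configuration, every model invocation is delivered to CloudWatch Logs and/or S3, which is the mechanism behind the automatic request logging described above.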