Salary and performance data is category-one sensitive
A prompt asking ChatGPT to 'rewrite this offer letter' can inadvertently include the candidate's name, role, and compensation. That's a data incident under the AVG (the GDPR in the Netherlands).
Spot salary, leave, and employee identifiers before they enter AI tools or email.
Who this is for
HR staff handle more personal data than anyone else in the organisation, and increasingly draft it with AI. Salary figures, performance notes, leave balances, and BSN numbers find their way into prompts and emails. BeeSensible shows the risk before it becomes a breach.
Where it goes wrong today
Ask ChatGPT to 'rewrite this offer letter' and the prompt can carry the candidate's name, role, and compensation along with it. That's a data incident under the AVG.
HR staff use AI to draft termination letters, performance reviews, and onboarding notes. These are exactly the documents that contain the most personal data.
CAO negotiations and works council discussions increasingly expect you to demonstrate that controls are in place for how personal data is used in AI tools.
Inline guidance, not a blocker
BeeSensible makes the risk visible at the exact moment the data is being typed, so HR staff can decide what to remove before it goes anywhere.
How BeeSensible helps
The product covers the surface where the risk actually lives: the compose area of the AI tools and communication apps your people already use.
Detects salary figures, BSN, employment contract terms, leave and sickness records, performance categories, and candidate identifiers.
BeeSensible shows what's sensitive before the message is sent. HR staff decide what to remove, replace, or mask. Nothing is blocked automatically.
HR often leads internal AI policy. BeeSensible gives you operational data to show what's actually happening, not just what the policy says.
Detection statistics and handled rates provide evidence that personal data processing in AI tools is controlled and documented.
BeeSensible runs where HR staff already work, without adding friction to every message.
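One of the identifiers listed above, the BSN, can be validated deterministically: a nine-digit number is only a plausible BSN if it passes the Dutch 'elfproef' (11-test), in which the digits are weighted 9 down to 2 and the last digit is weighted -1, and the sum must be divisible by 11. A minimal sketch of that check on plain text (the function names are illustrative, not BeeSensible's actual detection logic):

```python
import re

def is_valid_bsn(value: str) -> bool:
    """Return True if value is a nine-digit number passing the BSN elfproef.

    The weighted sum 9*d1 + 8*d2 + ... + 2*d8 - 1*d9 must be divisible by 11.
    """
    digits = re.sub(r"\D", "", value)
    if len(digits) != 9:
        return False
    weights = [9, 8, 7, 6, 5, 4, 3, 2, -1]
    total = sum(w * int(d) for w, d in zip(weights, digits))
    return total % 11 == 0

def find_bsn_candidates(text: str) -> list[str]:
    """Scan free text for nine-digit runs that pass the elfproef."""
    return [m for m in re.findall(r"\b\d{9}\b", text) if is_valid_bsn(m)]
```

The checksum filters out most ordinary nine-digit numbers (invoice numbers, phone fragments), which is what keeps false positives low enough for inline guidance rather than blanket warnings.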