Children deserve better protection online.

Guardiobot is a volunteer-run safety and moderation platform for Discord and Roblox communities, built to create a safer environment on two of the most popular platforms for young people.

300M+
children affected by online exploitation annually
Childlight Global Child Safety Institute, 2024

Platform moderation is not enough.

Roblox employs 3,000 moderators for a platform with over 100 million daily active users. Discord's native tools are reactive — they respond after harm has already occurred. Neither platform can watch every server, every message, every join.

The most common pattern is well documented: contact made on Roblox, conversation moved to Discord, exploitation follows. Predators rely on the gap between platforms. Guardiobot is built to close it.

Why we built this

The gap every predator exploits.

Roblox and Discord are separate platforms with separate moderation systems. Predators exploit that and move children off-platform to private, unmonitored conversations.

01

Contact on Roblox

A child is playing a Roblox game. A stranger joins, starts a conversation. It begins with in-game chat — compliments, shared interests, offers to trade items or help.

02

Moved to Discord

The conversation shifts. "Let's talk on Discord where it's easier." Roblox's moderation ends. The child is now in a private channel, a server, or a DM — watched by no one.

03

Escalation

Without cross-platform awareness, Discord servers have no way to know a user arrived from Roblox, what their behaviour there looked like, or whether they've done this before.

Guardiobot's response

We utilise data from partner projects such as Rotector, which track known offenders, and give communities the tools they need to take action and protect themselves.

Three systems. One platform.

Guardiobot combines automated detection, human review, and cross-platform intelligence into a single service so server owners don't have to choose between them or wire them together themselves.

Full feature list

01

Server Protection

Every message scored for severity. Every join checked against our databases of predators. Threats are acted on before a human has to see them.

  • Rule-based detection — keyword, regex, hash, URL
  • Anti-raid with VPN and alt-account detection
  • Two-factor member verification on join
  • Cross-server ban propagation across the network
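The rule-based layer described above can be pictured as a scoring pass over each message. The sketch below is purely illustrative: the rule patterns, severity weights, and threshold are hypothetical, not Guardiobot's actual configuration.

```python
import re

# Hypothetical rule list: each rule pairs a pattern with a severity weight.
# Real deployments would also include hash and URL rules (see feature list).
RULES = [
    {"pattern": re.compile(r"\bhow old are you\b", re.I), "severity": 3},
    {"pattern": re.compile(r"\b(add|message) me on \w+\b", re.I), "severity": 4},
    {"pattern": re.compile(r"\bfree robux\b", re.I), "severity": 2},
]

ACTION_THRESHOLD = 4  # illustrative: scores at or above this trigger action


def score_message(text: str) -> int:
    """Sum the severity of every rule that matches the message."""
    return sum(rule["severity"] for rule in RULES if rule["pattern"].search(text))


def should_act(text: str) -> bool:
    """True when the combined score crosses the automatic-action threshold."""
    return score_message(text) >= ACTION_THRESHOLD
```

Scoring rather than hard-matching lets several weak signals in one message add up to an automatic action, which is what allows threats to be handled before a human sees them.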

02

Human Review

Flagged reports go to trained Trust & Safety volunteers who review and make a decision on each case.

  • Anonymous in-server reporting for members
  • Automatic action based on offence history and severity
  • NSFW content reported to NCMEC and relevant authorities
  • Full audit trail and appeals system

03

User Intelligence

Safety history persists across servers and platforms. A bad actor can't start fresh by hopping to a new server, switching from Roblox to Discord or creating a new account.

  • Classification A–D based on verified offence history
  • Rotector and TASE integration for Roblox and Discord data
  • GDPR compliant with full data export and erasure
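One way to picture the A–D classification is as a function from verified offence history to a letter grade. The thresholds and record shape below are invented for illustration; Guardiobot's real criteria are not published here.

```python
def classify_user(offences: list) -> str:
    """Map a verified offence history to a letter classification.

    Hypothetical scheme: A = no verified offences, D = most severe.
    Each offence record carries a 'severity' from 1 (minor) to 5 (critical).
    """
    if not offences:
        return "A"
    max_severity = max(o["severity"] for o in offences)
    # Either one serious offence or a long history escalates the class.
    if max_severity >= 4 or len(offences) >= 5:
        return "D"
    if max_severity == 3 or len(offences) >= 3:
        return "C"
    return "B"
```

Because the classification is derived from the persistent cross-platform history rather than the current account, a new account or a platform switch does not reset it.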

Our impact, in numbers.

Updated every 15 minutes from our live systems.

Servers Protected

Discord communities actively running Guardiobot.

Reports Handled

Cases reviewed by volunteers in the last 30 days.

Threats Blocked

Harmful matches automatically stopped in the last 30 days.

The people behind it

Real people protecting real communities.

Guardiobot isn't 100% automated, and that's intentional. Automating report review with AI would be faster, but AI struggles to understand context, and it would be costly for a small project like ours to run. We believe decisions should be made with understanding, not just speed.

Our volunteers aren't paid. They give their time every week because they share the same passion we do: creating a safer internet.

Trust & Safety
Reviews reports, works with law enforcement and other organisations to report content and moderates Discord communities along with Roblox experiences.
Human Resources
Manages volunteers, ensures policy compliance and responds to any user complaints or tickets.
Awareness Team
Handles partnerships, community outreach and content creation for Guardiobot.
Development Team
Builds and maintains all of our systems.

What we detect and stop.

Purpose-built detection for threats, common and rare, in communities where younger audiences are present.

Grooming & Exploitation

Requests for personal information, attempts to move conversations off-platform, age and location probing, and gradual boundary-testing patterns — all flagged before they escalate.

Escalation patterns · Off-platform redirect · Cross-platform

Harassment & Threats

Direct threats, targeted harassment, hate speech across character encodings and obfuscation attempts, and self-harm encouragement — detected and actioned automatically.

Slurs & hate speech · Obfuscation-resistant · Per-server config
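Detecting hate speech "across character encodings and obfuscation attempts" generally means folding look-alike characters and separators before matching. The sketch below shows the idea with Python's standard `unicodedata` module; the substitution table and blocklist handling are illustrative assumptions, not Guardiobot's actual pipeline.

```python
import unicodedata

# Illustrative leetspeak folding table; a real system would use a much
# larger confusables map.
LEET = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}
)


def normalize(text: str) -> str:
    """Fold a message into a canonical form resistant to common obfuscation."""
    # NFKD splits styled/accented characters into base letter + combining marks
    text = unicodedata.normalize("NFKD", text)
    # Drop the combining marks, then fold case and common leetspeak digits
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = text.lower().translate(LEET)
    # Remove spaces and punctuation used to split words ("b a d-w o r d")
    return "".join(ch for ch in text if ch.isalnum())


def matches_blocklist(text: str, blocklist: set) -> bool:
    """Check the normalized message against a set of blocked terms."""
    folded = normalize(text)
    return any(term in folded for term in blocklist)
```

Normalizing first means one blocklist entry covers accented, spaced-out, and digit-substituted variants without enumerating them.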

Scams & Raids

Phishing links checked against safety databases, cryptocurrency scams, mass-mention spam, and coordinated join floods — stopped before your members ever see them.

URL scanning · Anti-raid · Rate limiting

What communities say.

From the server owners and moderators using Guardiobot every day.

Our Partners

Collaborating with other safety initiatives to give your community stronger protection.

Ready to Protect Your Community?

Join Guardiobot and help us build a future with safer platforms for everyone.