Moderation
Aria uses a combination of AI-powered moderation and human review to keep the platform safe.
How it works
Real-time monitoring
Live sessions are monitored in real time for harmful content, including hate speech, threats, and sexually explicit material. Detected content is flagged for review.
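A minimal sketch of this flag-for-review flow, assuming a pluggable classifier that returns category labels for a transcript chunk (the category names and the `monitor_chunk` interface are illustrative assumptions, not Aria's actual pipeline):

```python
from dataclasses import dataclass

# Hypothetical category labels for illustration only.
HARMFUL_CATEGORIES = {"hate_speech", "threat", "sexual_content"}

@dataclass
class Flag:
    session_id: str
    category: str
    excerpt: str  # short snippet attached for the reviewer

def monitor_chunk(session_id: str, transcript: str, classify) -> list[Flag]:
    """Run a classifier over one transcript chunk and emit flags for review."""
    return [
        Flag(session_id, category, transcript[:80])
        for category in classify(transcript)
        if category in HARMFUL_CATEGORIES
    ]
```

The key design point is that detection only produces flags; it never takes enforcement action on its own, which keeps every automated decision reviewable.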
DM screening
Direct messages are screened for toxic or harassing content to keep private conversations respectful.
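One common way to implement this kind of screening is a score threshold over a toxicity model's output. The sketch below assumes a score in [0, 1] and invented cutoffs; Aria's actual thresholds and outcomes are not public:

```python
# Hypothetical cutoffs for illustration; not Aria's real values.
BLOCK_THRESHOLD = 0.8
REVIEW_THRESHOLD = 0.5

def screen_message(text: str, toxicity_score: float) -> str:
    """Decide how to handle a DM given a toxicity score in [0, 1]."""
    if toxicity_score >= BLOCK_THRESHOLD:
        return "blocked"    # withheld and flagged for review
    if toxicity_score >= REVIEW_THRESHOLD:
        return "flagged"    # delivered, but queued for human review
    return "delivered"
```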
PII detection
Personal information shared in live audio (like phone numbers or addresses) is detected and flagged to protect participants.
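As a rough illustration of what PII detection over a live transcript can look like, here is a regex-based sketch. The patterns are simplified assumptions; a production detector would use far more robust techniques than these toy regexes:

```python
import re

# Simplified patterns for illustration only.
PII_PATTERNS = {
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def detect_pii(transcript: str) -> list[str]:
    """Return the PII categories found in a transcript snippet."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(transcript)]
```

Note that the detector flags categories rather than redacting content itself, matching the flag-and-review model described above.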
Music detection
Recorded sessions are checked for unauthorized copyrighted music to help creators stay compliant.
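Music detection systems typically work by matching audio fingerprints against a catalog of known tracks. The toy lookup below stands in for that matching step; real systems derive fingerprints from spectrogram features rather than exact string keys, and the catalog entries here are invented:

```python
# Hypothetical catalog mapping fingerprint hashes to track titles.
KNOWN_TRACKS = {
    "a1b2c3": "Example Song (Example Label)",
}

def check_recording(fingerprints: list[str]) -> list[str]:
    """Return titles of cataloged tracks whose fingerprints appear in a recording."""
    return [KNOWN_TRACKS[f] for f in fingerprints if f in KNOWN_TRACKS]
```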
Human review
Automated moderation may produce false positives, so every automated action can be appealed: each flagged case is available for human review through the Appeals process.
Host moderation
Session hosts also play a role in moderation. Hosts can:
- Mute disruptive speakers
- Remove participants from their session
- Set content warnings appropriate to their material
Transparency
Aria is committed to transparent moderation:
- All enforcement actions include a written explanation
- Every action is logged in our moderation audit trail
- Users always know what action was taken and why
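The transparency guarantees above imply that every enforcement record carries both the action taken and its written explanation. A minimal sketch of such an audit record follows; the field names and `log_action` helper are assumptions for illustration, since Aria's actual audit schema is not public:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record shape for illustration only.
@dataclass
class AuditEntry:
    action: str       # e.g. "mute", "remove_participant", "content_removed"
    target_user: str
    reason: str       # the written explanation shown to the user
    timestamp: str    # ISO 8601, UTC

def log_action(action: str, target_user: str, reason: str) -> AuditEntry:
    """Create an audit-trail entry; the reason field is mandatory by design."""
    if not reason:
        raise ValueError("every enforcement action requires a written explanation")
    return AuditEntry(action, target_user, reason,
                      datetime.now(timezone.utc).isoformat())
```

Making the explanation a required field is one way to enforce, at the data-model level, that no action is ever logged without a reason the user can see.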