CONTENT MODERATION AND SAFETY POLICY
Effective Date: 11 February 2024
This Policy explains how Flipdare maintains a safe environment for all users.
Prohibited Content
The following content is strictly prohibited:
- Child sexual abuse material (CSAM)
- Violence, threats, or terrorism
- Sexual assault or exploitation
- Content promoting self‑harm or suicide
- Hate speech
- Harassment or bullying
- Criminal activity
- Spam or malicious content
- Any content illegal in the user’s jurisdiction
Reporting Mechanism
Users may report any content directly within the Application.
Reports may be filed under one of the following categories (an illustrative report structure follows this list):
- Death or self‑harm
- Child abuse
- Violence
- Sexual crimes
- Bullying
- Spam
- Criminal activity
- Legal issues
- Other serious issues
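For illustration only, the following is a minimal sketch of how a client might represent an in-app report. The `Report` and `ReportCategory` names and fields are hypothetical assumptions, not Flipdare's actual data model; the categories mirror the list above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReportCategory(Enum):
    """Report categories offered in-app (mirrors the list above)."""
    DEATH_OR_SELF_HARM = "death_or_self_harm"
    CHILD_ABUSE = "child_abuse"
    VIOLENCE = "violence"
    SEXUAL_CRIMES = "sexual_crimes"
    BULLYING = "bullying"
    SPAM = "spam"
    CRIMINAL_ACTIVITY = "criminal_activity"
    LEGAL_ISSUES = "legal_issues"
    OTHER = "other"


@dataclass
class Report:
    """Hypothetical report payload; all field names are illustrative."""
    content_id: str
    reporter_id: str
    category: ReportCategory
    note: str = ""
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```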
Automated and Manual Review
Flipdare may use:
- Google Content Safety API
- Google CSAI Match
- Google Sentiment Analysis
- Internal classification tools
Content reported as severe is reviewed by a human moderator within 5 days.
All other reported content is reviewed by a human moderator within 31 days.
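The triage rule above can be sketched as follows. The `SEVERE_CATEGORIES` set is an assumption, since the Policy does not enumerate which report categories count as severe; the category strings simply mirror the values used in the report sketch earlier.

```python
from datetime import timedelta

# Categories triaged as "severe" (an assumption for illustration;
# the Policy does not enumerate which categories are severe).
SEVERE_CATEGORIES = {"child_abuse", "death_or_self_harm",
                     "sexual_crimes", "violence"}


def review_deadline(category: str) -> timedelta:
    """Human-review SLA per the Policy: 5 days for severe reports,
    31 days for all other reported content."""
    days = 5 if category in SEVERE_CATEGORIES else 31
    return timedelta(days=days)


print(review_deadline("child_abuse"))  # 5 days, 0:00:00
print(review_deadline("spam"))         # 31 days, 0:00:00
```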
Law‑Enforcement Escalation
Severe content may be forwarded to:
- AFP (Australian Federal Police)
- FBI (Federal Bureau of Investigation)
- INHOPE (International Association of Internet Hotlines)
- GIFCT (Global Internet Forum to Counter Terrorism)
- ACCCE (Australian Centre to Counter Child Exploitation)
- NCMEC (National Center for Missing & Exploited Children)
- Other authorized agencies
Flipdare complies with all lawful requests for information.
Reputation System
Each user has a Reputation score (0–100).
A low Reputation score triggers, as sketched below:
- increased content analysis
- automated restrictions
- potential suspension
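One hedged way to picture these triggers, assuming cutoff scores of 50, 25, and 10; the Policy defines only the 0–100 range, so the thresholds are illustrative.

```python
def reputation_triggers(score: int) -> list[str]:
    """Map a Reputation score (0-100) to escalating responses.
    The cutoffs 50 / 25 / 10 are illustrative assumptions."""
    if not 0 <= score <= 100:
        raise ValueError("Reputation score must be between 0 and 100")
    triggers = []
    if score < 50:
        triggers.append("increased content analysis")
    if score < 25:
        triggers.append("automated restrictions")
    if score < 10:
        triggers.append("suspension review")
    return triggers


print(reputation_triggers(40))  # ['increased content analysis']
```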
Automated Restrictions
Depending on severity, users may lose access to:
- profile editing
- chat
- Task creation
- Promise creation
- other features
Restrictions last from one day to permanent, depending on severity and the user's Reputation score.
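A minimal sketch of that mapping, assuming four severity tiers and a rule that doubles the base duration at a Reputation score of 0; neither the tiers nor the scaling rule is specified by the Policy.

```python
from datetime import timedelta
from typing import Optional

# Base duration per severity tier (1 = lowest). These values are
# assumptions; the Policy states only that restrictions range from
# one day to permanent.
BASE_DURATION = {
    1: timedelta(days=1),
    2: timedelta(days=7),
    3: timedelta(days=30),
    4: None,  # None means permanent
}


def restriction_duration(severity: int,
                         reputation: int) -> Optional[timedelta]:
    """Compute a restriction duration, or None for permanent.
    Assumed rule: lower Reputation lengthens temporary restrictions,
    up to double the base duration at a score of 0."""
    base = BASE_DURATION[severity]
    if base is None:
        return None
    return base * (1 + (100 - reputation) / 100)


print(restriction_duration(2, 100))  # 7 days, 0:00:00
print(restriction_duration(2, 0))    # 14 days, 0:00:00
```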
Dispute Resolution
Restricted users may dispute a restriction via:
- Flipdare Support
- the Admin > Disputes screen
If access to the account is blocked, a temporary account may be created to lodge the dispute.
A mediator may be requested; the mediator's identity remains confidential.
Final decisions are issued within 7–8 days.