Safety is a top priority at Bumble For Friends. To minimise the length of time that content which may violate our Community Guidelines or Terms and Conditions, or which is otherwise harmful or unlawful, remains on Bumble For Friends (referred to as ‘Violating Content’), we deploy a combination of:
- Automated systems – these identify and, in some cases, take action against Violating Content in real time.
- Human-led moderation – to ensure escalated cases are reviewed with appropriate context.
- Member controls – we encourage all our members to use our in-app tools to report, block and unmatch suspicious accounts.
We use these measures with a particular focus on certain types of Violating Content, including (but not limited to):
1. Terrorism Content
- Our Community Guidelines on Dangerous Organisations and Individuals prohibit individuals or organisations that proclaim, glorify, condone, or support violent, dangerous, or terrorist-based missions from having a presence on the app.
- Our automated text-based detection flags keywords, phrases, and behaviour patterns associated with extremist content for human review.
- Additionally, photos and videos uploaded to the app may undergo human and/or automated moderation to detect imagery containing firearms, explosives, or hate symbols.
- In the unlikely event that Violating Content in this category is identified, it is swiftly removed and the member’s access to the app is revoked.
2. Child Sexual Exploitation and Abuse (CSEA) Content
- As per our Terms of Service and Community Guidelines, Bumble For Friends is exclusively for adults (18+) and we have a zero tolerance policy towards any form of child sexual exploitation and abuse.
- All members are encouraged to report any suspected underage members using the specific reporting reasons in our in-app tool.
- We deploy age assurance measures designed to detect individuals under 18 and revoke their access to the app, significantly reducing the risk of child exploitation.
- Our proactive automated text-based detection scans for keywords and behavioural indicators of underage users on the app, which also helps us detect potential child grooming attempts.
- We also deploy proactive Child Sexual Abuse Material (CSAM) detection technology consisting of automated hash-matching against industry-standard databases to identify and remove known CSAM. Confirmed CSAM results in immediate removal and account blocking.
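As a rough illustration of how hash-matching works in general (this is not Bumble For Friends’ actual implementation, and the hash set and function names below are hypothetical), an uploaded image can be hashed and compared against a database of hashes of known material:

```python
import hashlib

# Hypothetical set of hashes of known material, as supplied by an
# industry-standard database. Production systems typically use perceptual
# hashes (robust to resizing and re-encoding) rather than plain SHA-256.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Hash the uploaded image and compare it against the known-hash set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def handle_upload(image_bytes: bytes) -> str:
    # A confirmed match leads to immediate removal and account blocking,
    # mirroring the policy described above.
    return "remove_and_block" if matches_known_hash(image_bytes) else "allow"
```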
3. Fraud and Financial Offences (including ‘Romance Scams’)
- Our Community Guidelines prohibit any scam or theft activity intended to defraud or manipulate members out of financial or material resources. We also prohibit Inauthentic Profiles and expect all our members to represent themselves accurately on their profile.
- All members are encouraged to report any suspected scam or fake profiles using the specific reporting reasons in our in-app tool.
- We employ a combination of automated and manual checks to detect and remove fraudulent activity, including:
- Deception Detector™ – a proprietary AI-driven fraud detection system which analyses profile data, engagement patterns, and account activity to proactively detect deception before scams occur. This technology is used in conjunction with dedicated human support to prioritise a safe and empowering community.
- Financial solicitation detection – AI-driven and keyword-based scanning that flags messages containing financial solicitations and escalates them for human review (see the illustrative sketch after this list).
- Photo & video moderation – automated and human image review to detect and escalate imagery containing celebrities, QR codes or text.
- Additionally, our dedicated Anti-Spam team continuously analyses profile data, photo authenticity, and behavioural patterns to detect fraudulent users.
- Profiles flagged as suspicious are either blocked immediately if fraudulent behaviour is confirmed or required to complete additional verification before regaining access to the app.
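To illustrate the keyword-scanning step in a very simplified form (the pattern list below is hypothetical and not Bumble For Friends’ actual rule set; real detection also relies on machine-learning models), a message can be checked against solicitation patterns and queued for a human moderator rather than removed automatically:

```python
import re

# Hypothetical patterns indicative of financial solicitation; a production
# system would combine such rules with ML-based scoring of behaviour.
SOLICITATION_PATTERNS = [
    r"\bwire (me )?money\b",
    r"\bgift ?cards?\b",
    r"\bcrypto(currency)? (investment|wallet)\b",
]

def flag_for_human_review(message: str) -> bool:
    """Return True if the message should be escalated to a human moderator."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SOLICITATION_PATTERNS)

print(flag_for_human_review("Could you send me gift cards for my sick aunt?"))  # True
print(flag_for_human_review("Want to grab coffee this weekend?"))               # False
```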
4. Other Violating Content
- Threats, abuse, and harassment – as stated in our Community Guidelines policy on Bullying and Abusive Conduct, we don’t allow content or behaviour that makes any individual or group feel harassed, bullied, or targeted.
- Hate speech and harmful misinformation – our Identity-Based Hate policy prohibits content or behaviour that promotes or condones hate, dehumanisation, degradation, or contempt against marginalised or minoritised communities based on a wide range of protected attributes, including race/ethnicity and religion/belief.
- Breaches of these and our other Community Guidelines policies are detected through member reports or AI-driven tools and flagged for human moderation.
Acting on reports of illegal content
Our members play a critical role in the safety of Bumble For Friends by reporting content or behaviour that may violate our Terms of Use, Community Guidelines or the law. To ensure swift action, we encourage all members to Block & Report anyone who makes them feel uncomfortable or unsafe. See this article for more info on what happens when members report something to Bumble For Friends. Automated tools and human reviewers analyse reported content to determine the appropriate action.
In the UK, our members and members of the public can also report any content on Bumble For Friends that they think violates our Terms of Use, our Community Guidelines, or UK law via our Reporting Form, outlining the type of violation. This form can also be used to make a complaint about any aspect of Bumble For Friends’ compliance with the UK Online Safety Act (OSA). All reports of illegal content and non-compliance complaints submitted via the Reporting Form are reviewed by human moderators.
If we determine that reported content violates our Terms of Use, Community Guidelines, or the law, we decide the appropriate penalty by considering several factors. For example, we may:
- Remove the content
- Issue a warning
- Ban the offending member from some or all Bumble Inc. apps
When necessary, we may also cooperate with law enforcement to assist in potential criminal investigations related to member conduct.
If we determine that a non-compliance complaint is well-founded, we will review our systems and processes to ensure compliance with the relevant OSA obligations.
Use of proactive technology
We employ advanced detection technologies to identify and remove Violating Content proactively, including:
- Private Detector™ - an AI-driven tool that automatically detects and blurs explicit images before they reach users, to prevent unwanted sexual content (also known as ‘cyberflashing’). Users can choose to view or report blurred images instantly.
- Deception Detector™ - uses machine learning to detect inauthentic profiles (see more information in the section on Fraud & Financial Offences above).
- CSAM detection - automated hash-matching against industry-standard databases to identify known CSAM (see more information in the section on Child Sexual Exploitation and Abuse above).
- AI-powered, text-based machine learning models which detect and escalate suspected Violating Content.
- Anti-spam systems - analyse a wide range of profile, device, and behavioural data to detect patterns of suspicious behaviour and proactively remove fraudulent users.
All our detection technologies operate alongside human verification teams to ensure accurate enforcement.
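For context, proactive detection pipelines of this kind commonly follow a score-and-route pattern: an automated model assigns a confidence score, high-confidence detections trigger automated action (such as blurring an explicit image before delivery), and lower-confidence detections are escalated to human reviewers. The thresholds and labels in this sketch are hypothetical, not Bumble For Friends’ actual configuration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "explicit_image", "financial_solicitation", "spam_profile"
    score: float  # model confidence between 0.0 and 1.0

# Hypothetical thresholds; real systems tune these per model and per harm type.
AUTO_ACTION_THRESHOLD = 0.98
HUMAN_REVIEW_THRESHOLD = 0.70

def route(detection: Detection) -> str:
    """Route a detection to automated action, human review, or no action."""
    if detection.score >= AUTO_ACTION_THRESHOLD:
        return "automated_action"  # e.g. blur the image before it is shown
    if detection.score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"      # escalate to the moderation team
    return "no_action"

print(route(Detection("explicit_image", 0.99)))  # automated_action
print(route(Detection("spam_profile", 0.80)))    # human_review
```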