OpenAI Moderation, Profanity, NSFW Filter

Official Plugin by Brandarrow for OpenAI Moderation, Profanity, NSFW Filter - Bubble.io Plugin


Vitals

Description

OpenAI Moderation by Brandarrow is a free, AI-powered content moderation plugin designed for Bubble.io applications. It leverages OpenAI’s moderation API to help developers filter harmful, explicit, or unsafe content in real time, promoting a safer, more ethical digital environment.

This plugin provides automatic detection of sensitive or inappropriate content, ensuring your platform remains compliant and user-friendly with minimal effort.

Price

Free

Installation

Add the plugin to your Bubble app via the official plugin marketplace.

Demo & Editor

Demo Editor


Instructions

You can use the plugin either as a workflow action or as a data source in conditionals:

1. Workflow Action

Add the action: Brandarrow - OpenAI - Moderation Check or Brandarrow - OpenAI - Moderation Check (with Image).

Use it in any workflow where user input or data needs to be checked.

2. API Data Source

Select “Get data from an external API”.

Choose: Brandarrow - OpenAI - Moderation Check or Brandarrow - OpenAI - Moderation Check (with Image).

Bind the returned data to a condition, display element, or database save logic.
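Both the action and the data call wrap OpenAI's moderation endpoint, so the data you bind follows that response shape: a list of results, each with a top-level `flagged` boolean. As a rough sketch of how you might interpret that structure outside Bubble (the sample payload below is illustrative, not a real API response):

```python
# Sketch: interpreting the shape of a moderation response like the one
# the plugin's data call returns. The payload here is an illustrative
# sample; field names follow OpenAI's moderation API response format.

sample_response = {
    "id": "modr-example",
    "model": "omni-moderation-latest",
    "results": [
        {
            "flagged": True,
            "categories": {"hate": False, "violence": True, "sexual": False},
            "category_scores": {"hate": 0.01, "violence": 0.91, "sexual": 0.002},
        }
    ],
}

def is_flagged(response: dict) -> bool:
    """Return True if any result in the moderation response was flagged."""
    return any(r.get("flagged", False) for r in response.get("results", []))

print(is_flagged(sample_response))  # True for this sample
```

In Bubble, the same check is just a conditional on the call's `flagged` field; the helper above mirrors what that conditional evaluates.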


Key Features

The moderation tool checks for content in the following categories:

  • Hate: Content promoting hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability, or caste.

  • Hate/Threatening: Hateful content combined with threats of violence or serious harm.

  • Harassment: Harassing language toward individuals or groups.

  • Harassment/Threatening: Harassment with threats of violence or serious harm.

  • Self-harm: Promotion or depiction of self-harm, such as suicide, cutting, or eating disorders.

  • Self-harm/Intent: Expressed intent to engage in self-harm.

  • Self-harm/Instructions: Encouragement or instructions for self-harm.

  • Sexual: Sexually explicit or arousing content (excluding educational content).

  • Sexual/Minors: Any sexual content involving individuals under 18.

  • Violence: Depictions or threats of death, violence, or injury.

  • Violence/Graphic: Graphic, detailed depictions of violence or injury.
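In the moderation response, each category above appears as a boolean under the result's `categories` object, keyed in lowercase slash notation (e.g. `self-harm/intent`). A small helper to list which categories tripped, assuming that response shape (the `result` fragment here is illustrative):

```python
def flagged_categories(result: dict) -> list[str]:
    """List the category names marked True in a single moderation result."""
    return sorted(name for name, hit in result.get("categories", {}).items() if hit)

# Illustrative result fragment; keys follow OpenAI's category naming.
result = {
    "categories": {
        "hate": False,
        "hate/threatening": False,
        "harassment": True,
        "violence": True,
        "violence/graphic": False,
    }
}

print(flagged_categories(result))  # ['harassment', 'violence']
```

This is useful when you want to save or display *why* content was blocked, not just that it was.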


Components

Element

None

Action
  • Brandarrow - OpenAI - Moderation Check

  • Brandarrow - OpenAI - Moderation Check (with Image)

Event

None

Data calls
  • Brandarrow - OpenAI - Moderation Check

  • Brandarrow - OpenAI - Moderation Check (with Image)


Example Use Cases

  • Flag user-submitted content in forums, reviews, or chats.

  • Prevent posting of inappropriate or violent text/images.

  • Enforce community guidelines and compliance requirements.

⚠️ Tip: For best accuracy, break long text inputs into smaller chunks under 2,000 characters each.
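If you pre-process text outside Bubble (for example in a backend workflow or an external script), the chunking tip above can be sketched as follows. The 2,000-character limit is this guide's accuracy guideline, not a documented hard cap:

```python
def chunk_text(text: str, limit: int = 2000) -> list[str]:
    """Split text into pieces no longer than `limit` characters,
    breaking on whitespace where possible so words stay intact."""
    chunks = []
    while len(text) > limit:
        # Prefer the last whitespace before the limit; fall back to a hard cut.
        cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut].rstrip())
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

parts = chunk_text("word " * 1000)          # roughly 5,000 characters of input
print(max(len(p) for p in parts) <= 2000)   # True
```

Each chunk can then be sent through the moderation action separately, and the content treated as flagged if any chunk is flagged.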


Common Issues & Troubleshooting

  • No moderation data returned
    Likely cause: the text input is too long.
    Fix: split the text into chunks of fewer than 2,000 characters.

  • False positives or mismatches
    Likely cause: ambiguous or borderline language.
    Fix: consider using context-aware conditionals.

  • Plugin action not working in a workflow
    Likely cause: the action is not connected properly.
    Fix: ensure the workflow uses the “Brandarrow - OpenAI - Moderation Check” action with valid input.

  • Conditional check shows no results
    Likely cause: the API call was not triggered or returned null.
    Fix: confirm the API is connected and the response is parsed properly.

  • Plugin returns an error when using an image
    Likely cause: the image URL is not publicly viewable.
    Fix: install the File Sharer plugin (free) and use its result as the image URL input in this plugin.


Changelog

1.0.0 (Shipped, October 3, 2024): Initial release. Full category support using OpenAI’s moderation API.

1.1.0 (Shipped, July 12, 2025): Added image moderation support.


Need Help?

Help & Support
