Ethical aspects of using AI in Drupal

Created on 3 July 2025, 3 days ago

Topic raised by @bisonbleu and @matthews during 🌱 Drupal AI Contribution meetup 2025.10 Active :

Is there anything happening to address ethics? Here’s an example. I’m sure you all have seen/read articles about the bots situation, where small websites are taking a bit hit because bots don’t always act/behave in a nice way: 1, 2, 3. Does Drupal AI need a bot defender/moderator module?

@matthews:

I've done quite a lot of thinking and writing around ethical use of AI. Many of the issues aren't specific to any one way of using or building AI - but many would and should apply to how we build things.
Here are two blog posts I've written so far: https://www.jamesmatthewsaunders.ai/post/the-principles-of-ethical-ai-de...
https://www.jamesmatthewsaunders.ai/post/ai-ethics-and-open-source-why-w...

(not sure if the title is correct. Feel free to change)

🌱 Plan
Status

Active

Version

1.2

Component

Discussion

Created by

🇧🇬Bulgaria valthebald Sofia

Live updates comments and jobs are added and updated live.
Sign in to follow issues

Comments & Activities

  • Issue created by @valthebald
  • 🇩🇪Germany marcus_johansson

    I think there are better people then me to take a stance here, but one thing that is an important foundation.

    Any and all models has biases in them and will always have that and the only way you can steer it to your bias (because you also have it) is testing and having guardrails. This is extra important in agents, as opposed to pure LLM calls, because agents are by design taking their own decisions.

    Testing we have in https://www.drupal.org/project/ai_agents_test - this means that you can setup tests against decision making and produced text. With this you can test any and all models for not just efficacy, but also if it follws your wanted bias and also find the model with the smallest echological footprint that handles the task at hand. We don't need to throw OpenAI at everything, when there are tasks even a CPU server model could handl.

    Guardrails is coming up Create the concept of Guardrail agents Active , and while the main reason for this is to protect against prompt injection, this can also be used to make sure that the LLM answers in neutral ways, without to much bias in one or the other direction.

  • 🇯🇴Jordan Rajab Natshah Jordan

    Yesterday, I had a look at this paper ( only looked at their methods for testing AI models).
    Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers

  • 🇨🇦Canada RobLoach Earth

    While I'm also likely not the best to answer this, one of the things I appreciate about the approach in the AI module is how it's AI provider agnostic. If you have ethical concerns about using the OpenAI provider , you are able to switch to one of the various other ones out there, including a completely open source solution with the Ollama Provider . Then you have complete control over which model you're using, where the hardware is running, and how.

    The choice of which AI solution you put together is completely in your hands.

  • 🇩🇪Germany jurgenhaas Gottmadingen

    Does Drupal AI need a bot defender/moderator module?

    I'd say probably not. Being hit by bots on a Drupal site is due to unethical behaviour of some other AI solution, not the one in Drupal. While it's important to find some good defence, that's outside the scope of Drupal AI, in my view.

    The other way round, when Drupal is crawling remote resources, this is what we are controlling, and here we should provide measures that respect guardrails set by the remote side.

  • 🇨🇦Canada _randy

    I agree with what @robloach said and also agree what was noted in the blog post: https://www.jamesmatthewsaunders.ai/post/ai-ethics-and-open-source-why-w... in that:

    The agnostic approach for choosing the AI provider means you can choose which provider most aligns with your business ethics.

    In the blog post, the following quote resonates: "AI shouldn’t be about replacing people. It should be a force multiplier. It’s here to help us work more efficiently and solve bigger problems faster" Our Maestro module AI extensions do exactly that: Use AI to offload the mundane tasks humans do so that they can focus on more productive tasks. However, if a business automates a business process and that replaces people, is that an ethics problem that Drupal must guard against?

    The idea of guardrail agents to cleanse inputs is the new-normal approach when dealing with safeguarding your AI-enabled sites. I think it also comes down to education of the community when using new AI tools: Cleansing inputs and creating prompts which do not leak, override/harm your website or otherwise provide faulty information. It's not good enough to assume everyone/everything is a good actor when it comes to using AI. It's a new version of SQL injection for our age.

  • 🇩🇪Germany breidert

    AI bots are becoming a real issue. Many of our clients face massive bot hits and we have to spend a lot of energy on traffic shaping on the CDN/WAF level.

    Note that CDN providers such as Cloudflare make it really easy (or even default settings) to block AI scrapers:

    I think the traffic part is not what we can influence, that happens before traffic hits Drupal websites.

    But what we can do is provide functionality to make sure the AI(s) we use (in our agentic systems) can be restricted to deliver ethical results.

    IMO the way to go is to provide string guardrail functionality with sensible defaults. There is already an issue for guardrails Create the concept of Guardrail agents Active , and we are currently looking for contributors that are willing to take this on.

Production build 0.71.5 2024