Ethical aspects of using AI in Drupal

Created on 3 July 2025, 24 days ago

Topic raised by @bisonbleu and @matthews during 🌱 Drupal AI Contribution meetup 2025.10 Active :

Is there anything happening to address ethics? Here’s an example. I’m sure you all have seen/read articles about the bots situation, where small websites are taking a bit hit because bots don’t always act/behave in a nice way: 1, 2, 3. Does Drupal AI need a bot defender/moderator module?

@matthews:

I've done quite a lot of thinking and writing around ethical use of AI. Many of the issues aren't specific to any one way of using or building AI - but many would and should apply to how we build things.
Here are two blog posts I've written so far: https://www.jamesmatthewsaunders.ai/post/the-principles-of-ethical-ai-de...
https://www.jamesmatthewsaunders.ai/post/ai-ethics-and-open-source-why-w...

(not sure if the title is correct. Feel free to change)

🌱 Plan
Status

Active

Version

1.2

Component

Discussion

Created by

🇧🇬Bulgaria valthebald Sofia

Live updates comments and jobs are added and updated live.
Sign in to follow issues

Comments & Activities

  • Issue created by @valthebald
  • 🇩🇪Germany marcus_johansson

    I think there are better people then me to take a stance here, but one thing that is an important foundation.

    Any and all models has biases in them and will always have that and the only way you can steer it to your bias (because you also have it) is testing and having guardrails. This is extra important in agents, as opposed to pure LLM calls, because agents are by design taking their own decisions.

    Testing we have in https://www.drupal.org/project/ai_agents_test - this means that you can setup tests against decision making and produced text. With this you can test any and all models for not just efficacy, but also if it follws your wanted bias and also find the model with the smallest echological footprint that handles the task at hand. We don't need to throw OpenAI at everything, when there are tasks even a CPU server model could handl.

    Guardrails is coming up Create the concept of Guardrail agents Active , and while the main reason for this is to protect against prompt injection, this can also be used to make sure that the LLM answers in neutral ways, without to much bias in one or the other direction.

  • 🇯🇴Jordan Rajab Natshah Jordan

    Yesterday, I had a look at this paper ( only looked at their methods for testing AI models).
    Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers

  • 🇨🇦Canada RobLoach Earth

    While I'm also likely not the best to answer this, one of the things I appreciate about the approach in the AI module is how it's AI provider agnostic. If you have ethical concerns about using the OpenAI provider , you are able to switch to one of the various other ones out there, including a completely open source solution with the Ollama Provider . Then you have complete control over which model you're using, where the hardware is running, and how.

    The choice of which AI solution you put together is completely in your hands.

  • 🇩🇪Germany jurgenhaas Gottmadingen

    Does Drupal AI need a bot defender/moderator module?

    I'd say probably not. Being hit by bots on a Drupal site is due to unethical behaviour of some other AI solution, not the one in Drupal. While it's important to find some good defence, that's outside the scope of Drupal AI, in my view.

    The other way round, when Drupal is crawling remote resources, this is what we are controlling, and here we should provide measures that respect guardrails set by the remote side.

  • 🇨🇦Canada _randy

    I agree with what @robloach said and also agree what was noted in the blog post: https://www.jamesmatthewsaunders.ai/post/ai-ethics-and-open-source-why-w... in that:

    The agnostic approach for choosing the AI provider means you can choose which provider most aligns with your business ethics.

    In the blog post, the following quote resonates: "AI shouldn’t be about replacing people. It should be a force multiplier. It’s here to help us work more efficiently and solve bigger problems faster" Our Maestro module AI extensions do exactly that: Use AI to offload the mundane tasks humans do so that they can focus on more productive tasks. However, if a business automates a business process and that replaces people, is that an ethics problem that Drupal must guard against?

    The idea of guardrail agents to cleanse inputs is the new-normal approach when dealing with safeguarding your AI-enabled sites. I think it also comes down to education of the community when using new AI tools: Cleansing inputs and creating prompts which do not leak, override/harm your website or otherwise provide faulty information. It's not good enough to assume everyone/everything is a good actor when it comes to using AI. It's a new version of SQL injection for our age.

  • 🇩🇪Germany breidert

    AI bots are becoming a real issue. Many of our clients face massive bot hits and we have to spend a lot of energy on traffic shaping on the CDN/WAF level.

    Note that CDN providers such as Cloudflare make it really easy (or even default settings) to block AI scrapers:

    I think the traffic part is not what we can influence, that happens before traffic hits Drupal websites.

    But what we can do is provide functionality to make sure the AI(s) we use (in our agentic systems) can be restricted to deliver ethical results.

    IMO the way to go is to provide string guardrail functionality with sensible defaults. There is already an issue for guardrails Create the concept of Guardrail agents Active , and we are currently looking for contributors that are willing to take this on.

  • 🇺🇸United States matthews Colorado

    Thanks for citing my blog post. I've been thinking a lot about this space and am happy to nerd out with anybody that wants to hack out a path that helps our community sort out what we stand for ethically.

  • 🇨🇦Canada bisonbleu

    +1 @jurgenhaas

    The other way round, when Drupal is crawling remote resources, this is what we are controlling, and here we should provide measures that respect guardrails set by the remote side.

    If devices/agents created with Drupal AI can scrape external content, then we need to educate, inform and possibly contain the appetite of such devices/agents.

  • 🇩🇪Germany D34dMan Hamburg

    Ethical aspects of using technology (including AI) often fall under broad topics such as:

    - Transparency
    - Trust
    - Privacy
    - Bias / Fairness
    - Security
    - Misuse

    However, each of these can have a wide range of tolerance depending on factors like where you live, who you are, who is paying for the work, how much money is involved, your level of debt, oobligations to authorities...

    Because of this, I’m not interested in discussions around these subjective or context-driven variations here. Ultimately, the decision on how technology is used lies in the hands of users and implementers.

    What I am interested in (and see as technically feasible)

    I would like to propose that we focus on implementing the following core principles when developing tools in Drupal, whether they are powered by AI or not. (Any tool not directly used by AI today could still be repurposed by AI tomorrow.)

    1. Observability

    - All tools must make it clear what they are doing.
    - It must be possible to know the inputs and outputs at each step of the process.
    - This information should be observable and alterable by other modules.

    2. Control

    - All tools must expose controls to modify the inputs and outputs.

    3. Opinionated Defaults

    - All tools must ship with opinionated defaults that use the minimal set of information needed to complete a task.

    This is not meant to be a comprehensive set of guidelines, but rather a starting point to define a standard approach to building modules and features in Drupal that are ethically robust by design through "observability, control, and sensible defaults", regardless of whether they directly involve AI.

    The analogy I would use with opinionated defaults would be, these are like "locks". If you use the key, you can un-lock them and access the services inside. It is important to understand that, locks are not to deter bad actors. It exists to prevent a user from unknowingly making a decision.

  • 🇺🇸United States Kristen Pol Santa Cruz, CA, USA

    I haven't caught up on all of the above, but my 2 cents:

    • We should have a page off of d.o/ai that is focused on AI ethics for the general audience specifically and ties into the "responsible AI policy" that Dries wrote about (which I didn't find anywhere so does it exist?)
    • We need responsible Drupal AI development/contribution guidelines and/or policies
    • At minimum, we should have contribution-focused documentation that outlines what contributors can do to support responsible Drupal AI

    Meaning, that we have multiple audiences, and how we convey this information needs to be relevant to them:

    1. Business decision makers
    2. Drupal end users (marketers, content creators, etc)
    3. Drupal designers, site builders, and developers
    4. Drupal contributors
  • 🇯🇴Jordan Rajab Natshah Jordan

    We could follow up with the TUF methods ( on documentation, Spec and TAPS)

    The Update Framework
    A framework for securing software update systems

    It could be ~ The Ethical AI Framework (TEAIF) ~ in General AI integrations in Websites/Webapps
    PHP/Drupal could have implemented AI Specs and AIAPs - AI Augmentation Proposals
    Which could be followed in projects.

Production build 0.71.5 2024