Marking submissions using a rubric

Created on 27 January 2025

We have been using this module on our educational site to help teachers create and edit rich text content via CKEditor, and it has proved very useful so far.

However, we think we are only scratching the surface and would like to start looking into more state-of-the-art uses; in particular, using AI to mark (as in teacher-assess) free text supplied by students.

Our thinking is along the lines of 'long' examination questions, where students write a rich text passage as their 'answer'.

We would like to explore the idea of supplying a 'model' answer containing all the key details for a 100% mark, and then having AI (possibly in conjunction with ECA) analyse the students' answers and 'mark' them, giving feedback in relation to the model answer.

Is this something we can expect to do with the module?

We would like to strike up a conversation with other interested teachers/drupallers.

Thank you.

💬 Support request
Status: Active
Version: 1.0
Component: Discussion
Created by: 🇬🇧United Kingdom SirClickALot Somerset

Comments & Activities

  • Issue created by @SirClickALot
  • 🇬🇧United Kingdom MrDaleSmith

    It sounds to me as though what you are looking for is too specific an implementation for this general-purpose module and probably needs to be its own module.

    If I'm understanding you correctly, what you're looking to do is provide an AI with a known "good" paper so it can "mark" students' papers against that. This might be something that a general AI could do if provided with example data, but for more robust results you'd really need to train a custom LLM on a large number of "good" papers, which would definitely be out of scope for this module.

    Your first step would be to do some testing against a few of the existing LLMs with your own prompts defining what you want it to do, and assess the quality of the results you get back. That would then give you some basis for deciding which way to go next.
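
    For that kind of quick test you don't even need Drupal: a short PHP script against an OpenAI-compatible chat endpoint is enough to judge the marking quality by hand. This is only a sketch under assumptions: the endpoint, the model name and the two text files (model_answer.txt, student_answer.txt) are placeholders, not anything provided by this module.

      <?php
      // Hypothetical prompt test: mark a student answer against a model answer.
      $apiKey = getenv('OPENAI_API_KEY');
      $prompt = "You are a teacher. Compare the student answer to the model answer.\n"
        . "Return a mark out of 100 and brief feedback on what is missing.\n\n"
        . "Model answer:\n" . file_get_contents('model_answer.txt') . "\n\n"
        . "Student answer:\n" . file_get_contents('student_answer.txt');

      $payload = json_encode([
        'model' => 'gpt-4o-mini',
        'temperature' => 0.2,
        'messages' => [['role' => 'user', 'content' => $prompt]],
      ]);

      $ch = curl_init('https://api.openai.com/v1/chat/completions');
      curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_POST => TRUE,
        CURLOPT_HTTPHEADER => [
          'Content-Type: application/json',
          'Authorization: Bearer ' . $apiKey,
        ],
        CURLOPT_POSTFIELDS => $payload,
      ]);
      $result = json_decode(curl_exec($ch), TRUE);
      curl_close($ch);

      // Print the marked result so its quality can be judged by hand.
      echo $result['choices'][0]['message']['content'] ?? 'No response';

    Running the same script over a handful of real student answers should quickly show whether the feedback is good enough to build on.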

  • 🇧🇪Belgium wouters_f Leuven

    Hey SirClickALot, great idea.
    If you're interested, I'm willing to set something up.
    Maybe there's already a solution for you with Automators, or with ECA and the ECA AI plugins.
    With ECA you could use the submission event, and then trigger an AI node with your fancy prompt (and the assignment as input).
    The output can then be put in a separate field (or wherever).

    But obviously we could also make a separate module with a CKEditor plugin that triggers a prompt. So many options :D
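
    For anyone who prefers code to ECA configuration, roughly the same flow can be sketched as a presave hook in a tiny custom module: read the answer, send it to whatever LLM call you settle on, and store the feedback in another field. The content type, the field names and the llm_mark_answer() helper below are made up for illustration; they are not part of this module or of ECA.

      <?php
      use Drupal\node\NodeInterface;

      /**
       * Implements hook_ENTITY_TYPE_presave() for node entities.
       */
      function mymodule_node_presave(NodeInterface $node) {
        // Only react to the hypothetical 'exam_answer' content type.
        if ($node->bundle() !== 'exam_answer' || !$node->hasField('field_feedback')) {
          return;
        }
        $model_answer = $node->get('field_model_answer')->value;
        $student_answer = $node->get('field_student_answer')->value;
        // llm_mark_answer() stands in for however the LLM is called
        // (AI module, an ECA action, or a plain HTTP client).
        $feedback = llm_mark_answer($model_answer, $student_answer);
        $node->set('field_feedback', $feedback);
      }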

  • 🇩🇪Germany marcus_johansson

    Automators together with the Custom Field module can do this (https://www.drupal.org/project/custom_field).

    Create a custom field that has a rating (integer) and a reasoning (text) sub-field. Give one-shot examples in each of them for the automator, and then give the model answer in the prompt along with instructions on how to fill out the values.

    But verify everything, and make sure that you write a prompt that stays within an x% error margin when you run it 20-30 times on the same input. Setting a low temperature might make it more consistent. Then you have to test whether it is consistent enough for your own needs.

    See here: https://www.youtube.com/watch?v=agO-1e6iCJ4&ab_channel=DrupalAIVideos
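
    A minimal sketch of that consistency check, assuming a grade_once() helper that returns the integer rating for the same model answer / student answer pair (the helper and file names are placeholders): run it 20-30 times and look at the spread of the scores.

      <?php
      // Hypothetical repeatability test for the rating.
      $model_answer = file_get_contents('model_answer.txt');
      $student_answer = file_get_contents('student_answer.txt');

      $scores = [];
      for ($i = 0; $i < 25; $i++) {
        $scores[] = grade_once($model_answer, $student_answer);
      }

      $mean = array_sum($scores) / count($scores);
      $variance = 0;
      foreach ($scores as $score) {
        $variance += ($score - $mean) ** 2;
      }
      $std_dev = sqrt($variance / count($scores));

      // A small standard deviation relative to the 0-100 scale suggests the
      // prompt and temperature are consistent enough to trust.
      printf("mean: %.1f, std dev: %.1f\n", $mean, $std_dev);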

  • 🇬🇧United Kingdom SirClickALot Somerset

    Hi @wouters_f,

    Yes, we are extremely interested in helping develop such a feature.

    We would be able to allocate significant time to help with trialling/testing, since one of us has a lot of time on our hands!

    Why?...

    Well, the background to all this is that we have been building a community education site for Computer Science learning.

    The site aims to help both students and teachers by providing a single port of call for an entire free course for teaching and learning Computer Science.

    It is very important to point out that the intention is to build a community of teachers who contribute to, and of course then use, the site as the core content of their teaching. Hence our insistence on doing it in Drupal.

    While we are quite happy with the instant feedback from our 'quick self-test' material using H5P, there is a very clear need to give students instant feedback on the free text that they write, by matching it up with 'model answers' and commenting back to help them improve on anything they have missed or got not quite right.

    We have put an extraordinarily large amount of time and effort into developing this Drupal-based education suite because we think it will make a huge difference to young folk trying to get into Computer Science at school level but who don't have the luxury of access to high-quality teaching.

    It's here if you are interested in having a quick look…
    Bit-by-Bit.org

    We also have a short Ladybird-book-style 5-minute read on the project for busy educators to give them an idea of the vision…
    The Bit-by-Bit project

    Actually, we don't see this as necessarily education-specific either; more generically, it could be a sub-module that enables the site builder to identify one field (maybe even across content types) as 'for critique', another as the 'critique context', and a final one as 'critical comments' in which to put the results.

    That generic approach would leave the door open to all sorts of use-case-specific 'bells and whistles', such as (for example) a designated field of 'keywords' to focus on when writing the critique.
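
    Purely as a sketch of that generic idea (the mymodule.settings config object and the field names below are invented for illustration, not an existing API of this or any other module), the mapping could be as small as three configured field names that the sub-module checks before critiquing an entity:

      <?php
      use Drupal\Core\Entity\FieldableEntityInterface;

      /**
       * Checks whether an entity carries the three configured critique fields.
       */
      function mymodule_is_critiquable(FieldableEntityInterface $entity): bool {
        // Hypothetical mapping read from the sub-module's settings.
        $settings = \Drupal::config('mymodule.settings');
        $critique_field = $settings->get('critique_field');   // e.g. 'field_student_answer'
        $context_field = $settings->get('context_field');     // e.g. 'field_model_answer'
        $comments_field = $settings->get('comments_field');   // e.g. 'field_critical_comments'

        // Any entity with all three fields is eligible, which keeps the
        // feature generic rather than tied to one content type or to education.
        return $entity->hasField($critique_field)
          && $entity->hasField($context_field)
          && $entity->hasField($comments_field);
      }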

    Just ideas at this point but very keen to get a dialogue going ;-)

    Thank you.

  • 🇩🇪Germany marcus_johansson

    The thing you describe is possible to set up with the AI Automators in less than 10 minutes without writing a single line of code.

    Try this:

    1. Install AI Automators
    2. Create a node type.
    3. Add the fields you describe.
    4. When you come to the Response field, at the bottom of the field configuration there is a checkbox "Enable AI Automator" - click this.
    5. Choose "LLM: Text (Simple)" as the Automator Type
    6. Choose "Advanced Mode (token)" as the Automator Input Mode
    7. In the Automator Prompt, write something along the lines of the following, but with your own tokens; be more expressive and experiment until you are happy:

      You are a school teacher who is great at assessing papers. You will be given a model response for a task and then an actual response from a student. Your task is to give back a score of 0 to 100, where 100 means a model response, and also a few paragraphs on how you came to that conclusion.

      Take the following into consideration:
      [node:my_taxonomy_field]

      The model paper:
      [node:model_paper]

      The actual paper:
      [node:actual_paper]


    Save the field, create a node of the new type and fill out everything except the response. Click save, and the response is filled out for you.

    Also note that if you want the AI Automators to run on some condition, or you want this rule to be controlled by ECA, that is possible if you install the AI ECA module.

    If you want full control over your workflow, this is also very doable in pure ECA with the AI ECA plugins.

    I'm oversimplifying here; check https://workflows-of-ai.com/ for different workflows with Automators (or Interpolator, as it was called before).
