Case 1. Input/Output-Based Grading

Add the following folders and files to the default file structure.

  • input folder / output folder: Define test cases for grading.

  • testcases.py file: Set partial scores and correct/incorrect messages for each test case.

1. Writing Test Cases for Input/Output Grading

Create files to store input/output values for grading, and write test cases for each one.

  • input folder

    • Save the input values for each test case as .txt files, one test case per file.

    • Name the files 1.txt, 2.txt, and so on; the grader expects consecutively numbered files.

  • output folder

    • Save the expected output values for each test case as .txt files.

    • Each output file must have the same name as the input file it corresponds to (e.g., output/1.txt holds the expected output for input/1.txt).
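The files can be written by hand or scripted. A minimal sketch, assuming a hypothetical exercise where the student's program reads a number and prints its double (the test-case values are illustrative only):

```python
import os

# Hypothetical (input, expected output) pairs for a program
# that reads a number and prints its double.
testcases = [("3", "6"), ("10", "20")]

os.makedirs("input", exist_ok=True)
os.makedirs("output", exist_ok=True)

# Write 1.txt, 2.txt, ... into both folders; the grader matches
# input and output files by name.
for i, (input_value, expected_output) in enumerate(testcases, start=1):
    with open(os.path.join("input", f"{i}.txt"), "w") as f:
        f.write(input_value + "\n")
    with open(os.path.join("output", f"{i}.txt"), "w") as f:
        f.write(expected_output + "\n")
```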

2. Writing Grading Messages

In testcases.py, define a correct and an incorrect message for each test case, and set the partial score awarded for each test case.

# Correct messages for each test case
correct_messages = [
    "testcase 1. The answer is correct!",
    "testcase 2. The answer is correct!"
]

# Incorrect messages for each test case
wrong_message = [
    "testcase 1. The answer is incorrect!",
    "testcase 2. The answer is incorrect!"
]

# Partial scores for each test case
scores = [50, 50]
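Because the grader indexes these lists by test-case number, all three lists must have one entry per test case, and the partial scores should sum to the exercise's full score (100 in this example). A quick sanity check you can run locally (a sketch, not part of the platform files):

```python
# Same definitions as in testcases.py above.
correct_messages = [
    "testcase 1. The answer is correct!",
    "testcase 2. The answer is correct!"
]
wrong_message = [
    "testcase 1. The answer is incorrect!",
    "testcase 2. The answer is incorrect!"
]
scores = [50, 50]

# Every test case needs one entry in each list, and the partial
# scores should add up to the full score of 100.
assert len(correct_messages) == len(wrong_message) == len(scores)
assert sum(scores) == 100
```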

3. Writing Grading Code in grader.py

Execute the student's code for each test case and check the output of the code. Then, compare it with the expected output value (output file) to determine if the exercise is correct or incorrect.

You can provide partial scores for each test case, and send the final grading result to the platform (LXP) using elice_utils.secure_send_score(total_score).
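The core comparison in the grader is a whitespace-trimmed string match: both the student's output and the expected output are stripped of leading and trailing whitespace before being compared. In isolation, the check looks like this (a sketch of the comparison only, not the full grader):

```python
def is_correct(student_output: str, expected_output: str) -> bool:
    """Return True when the outputs match after trimming
    leading/trailing whitespace, as grader.py does."""
    return student_output.strip() == expected_output.strip()

print(is_correct("42\n", "42"))  # True: trailing newline is ignored
print(is_correct("4 2", "42"))   # False: internal whitespace still matters
```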

Here is example code:

import os
import subprocess
import sys
from testcases import *
sys.path.append(os.getcwd())
from grader_elice_utils import EliceUtils  # isort:skip

elice_utils = EliceUtils()
elice_utils.secure_init()

SUM_TESTCASE_SCORES = 100
INPUT_DIR = '.elice/input/'
OUTPUT_DIR = '.elice/output/'

try:
    total_score = 0

    # 0. Load input and output files
    input_data = [x for x in os.listdir(INPUT_DIR) if x.endswith('.txt')]
    output_data = [x for x in os.listdir(OUTPUT_DIR) if x.endswith('.txt')]

    # 1. Check the number of test cases
    if len(input_data) != len(output_data):
        sys.exit(1)
    NUM_TESTCASES = len(input_data)

    # 2. Check the file names
    matching = True
    for i in range(1, NUM_TESTCASES + 1):
        input_file = '%d.txt' % i
        output_file = '%d.txt' % i

        if input_file not in input_data:
            matching = False
        if output_file not in output_data:
            matching = False
    if not matching:
        sys.exit(1)

    # 3. Grading for each test case
    for i in range(NUM_TESTCASES):
        testcase_score = scores[i]
        input_file = '%d.txt' % (i+1)

        with open('%s%s' % (INPUT_DIR, input_file), 'rb') as f:
            input_text = f.read()
        result = subprocess.run(['/bin/bash', '.elice/runner.sh'],
                                input=input_text,
                                stdout=subprocess.PIPE)
        student_result = result.stdout.decode('utf-8')

        with open('%s%s' % (OUTPUT_DIR, input_file)) as f:
            answer = f.read()

        student_result = student_result.strip()
        answer = answer.strip()

        # Determine whether the test case is correct or incorrect
        if answer == student_result:
            total_score += testcase_score # Add partial score for the test case
            elice_utils.secure_send_grader('✅ {} \n'.format(correct_messages[i]))

        else:
            elice_utils.secure_send_grader('❌ {} \n'.format(wrong_message[i]))

    # 4. Calculate the final score
    total_score = int(total_score)
    elice_utils.secure_send_score(total_score)

except Exception:
    elice_utils.secure_send_grader('An error occurred during grading. Please check if the code runs correctly.\n')
    elice_utils.secure_send_score(0)
    sys.exit(1)
