Evaluating Student Code

Does anybody know of any grading scripts for Processing code? Perhaps something like an AST could be used to evaluate student code.


Hi,

What parameter(s) do you want to evaluate? Is it time complexity, text output, graphic output or features, object-oriented structure, or flexibility?


Graphic output, but I realize that this might be hard, especially with animations that interact with the mouse. I have started working on a test suite that parses the student code. Instructors add the code they expect to see in each function. The test passes if the code is there. This, however, is not the best solution. So I am curious if anybody in the community has tried to tackle this problem.
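
Roughly, the idea is something like this: a minimal sketch of that kind of snippet check (not the actual suite; the path and expected snippets are placeholders, and it matches against the whole file rather than scoping to individual functions):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public class SnippetGrader {
  public static void main(String[] args) throws Exception {
    // Read the student's sketch source (path is a placeholder).
    String source = new String(Files.readAllBytes(Paths.get("student/sketch.pde")));

    // Instructor-defined snippets the sketch is expected to contain.
    Map<String, String> expected = Map.of(
      "calls background() in draw()", "background(",
      "draws with ellipse()",         "ellipse("
    );

    expected.forEach((label, snippet) -> {
      System.out.println((source.contains(snippet) ? "PASS: " : "FAIL: ") + label);
    });
  }
}
```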


Hi @pfe1223,

I haven't looked at it yet, but Prince Steven Annor developed a Processing 3 auto-grade module called "AutoGrad" for SuaCode. However, I believe that it is not AST-based, and that each test class is custom-designed for its specific assignment module.

You might also be interested in this:

Rather than AST-based syntax evaluation, I have in the past used visual e2e testing, in which each learner is assigned a target such as an output image (assignment generation) and is then able to self-grade whether or not they have accurately produced that target. See prior discussion here:

…as an aside, e2e visual testing is also used for regression testing on the Processing.R mode: https://github.com/processing-r/Processing.R/blob/master/hack/generate-e2e-test.py
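
To make the target-image comparison concrete, here is a minimal Processing sketch (my illustration, not the tooling from either link above): press 'g' to count how many canvas pixels differ from the target. It assumes data/target.png exists and matches the canvas size; the filename and drawing code are placeholders.

```java
PImage target;

void setup() {
  size(400, 400);
  target = loadImage("target.png");  // placeholder target image
}

void draw() {
  background(255);
  ellipse(width/2, height/2, 100, 100);  // stand-in for the learner's code
}

void keyReleased() {
  if (key == 'g') {
    loadPixels();
    target.loadPixels();
    int diff = 0;
    for (int i = 0; i < pixels.length; i++) {
      if (pixels[i] != target.pixels[i]) diff++;
    }
    println(diff + " / " + pixels.length + " pixels differ from the target");
  }
}
```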

@jeremydouglass Thanks so much, this looks promising.


Regarding animation and visual testing:

For random processes, give students a template that sets a required randomSeed(). The PRNG comes from Java, so the output is deterministic across platforms.
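
For example (a minimal template; the seed value and drawing code here are just placeholders):

```java
// Fixing the seed makes every run, on every platform, produce the same
// "random" output, so it can still be compared to a target image.
void setup() {
  size(400, 400);
  randomSeed(42);  // required, instructor-specified seed
  noLoop();
}

void draw() {
  for (int i = 0; i < 50; i++) {
    ellipse(random(width), random(height), 10, 10);  // stand-in for student code
  }
}
```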

For deterministic animations, you can reduce them to the static image comparison case by giving students a template that saves a frame on a given frameCount – e.g. if (frameCount == 300) saveFrame().
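
For example (again a minimal template; the frame number, filename, and animation code are placeholders):

```java
// The animation runs as usual, and one canonical frame is saved
// for image comparison.
void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  ellipse(frameCount % width, height/2, 40, 40);  // stand-in for student animation
  if (frameCount == 300) {
    saveFrame("frame-300.png");
  }
}
```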

For animations with mouse interaction and a deterministic state (e.g. the image output is always the same if the mouse is at x,y), you can again reduce them to the static image comparison case by giving students a template that saves a frame with if (mouseX == x && mouseY == y) saveFrame(). They have to hunt-the-pixel a bit to generate their image(s), but it shouldn't be too bad. A slightly fancier template would take the x,y input from mouseX/mouseY, but on e.g. keyReleased() override it with a set x,y and then fire saveFrame() – so, make your interactive sketch, but press space to save.
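
A minimal sketch of that fancier template (the fixed test point, filename, and drawing code are placeholders; it assumes the student draws from x,y rather than mouseX/mouseY directly):

```java
int x, y;
boolean saveNow = false;

void setup() {
  size(400, 400);
}

void draw() {
  if (saveNow) {
    x = 200;  // instructor-chosen test coordinates
    y = 150;
  } else {
    x = mouseX;
    y = mouseY;
  }
  background(255);
  ellipse(x, y, 40, 40);  // stand-in for the student's drawing code
  if (saveNow) {
    saveFrame("output.png");  // this frame was drawn at the fixed point
    saveNow = false;
  }
}

void keyReleased() {
  if (key == ' ') saveNow = true;  // press space to save on the next frame
}
```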

For non-deterministic interactive output, visual comparison isn't usually a good fit – there are workarounds, but they are specific to certain cases or add complexity (e.g. check deterministic pixel areas only, check for certain elements or certain properties of the output image such as its histogram, or draw a deterministic version of the contents to a second buffer, etc.).
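
To illustrate the second-buffer idea, a sketch might look something like this (both layers and the save trigger are placeholders):

```java
// Non-deterministic decoration is drawn only to the screen, while the
// gradable layer is mirrored into an offscreen buffer that can be saved
// and compared on its own.
PGraphics gradable;

void setup() {
  size(400, 400);
  gradable = createGraphics(400, 400);
}

void draw() {
  // non-deterministic layer: screen only
  ellipse(random(width), random(height), 5, 5);

  // deterministic layer: drawn into the buffer, then shown on screen
  gradable.beginDraw();
  gradable.rect(100, 100, 200, 200);
  gradable.endDraw();
  image(gradable, 0, 0);

  if (frameCount == 300) {
    gradable.save("gradable.png");  // compare only the deterministic layer
  }
}
```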

Is this suite something that you are able to share / interested in sharing? I'm not currently using something like this, but I'm interested and might be able to contribute in the future.

One further note: I found this fascinating master's thesis project:

It also mentions a Processing static analysis tool:

In addition to PMD, a tool part of a research presented at CSEDU 2018 [8] is a static analysis tool that was used for research on Processing code. This tool currently has no real name, so it will henceforth be referred to as 'CSEDU'.
