Does anybody know of any grading scripts for Processing code? Perhaps something like an AST could be used to evaluate student code.
Hi,
What parameter(s) do you want to evaluate? Is it time complexity, text output, graphic output or features, object-oriented structure, or flexibility?
Graphic output, but I realize that this might be hard, especially with animations that interact with the mouse. I have started working on a test suite that parses the student code. Instructors add the code they expect to see in each function. The test passes if the code is there. This, however, is not the best solution. So I am curious if anybody in the community has tried to tackle this problem.
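At its core it is a simple string match. A minimal sketch of the idea (the file name and expected snippet here are placeholders):

```
// Minimal string-based check: load the student's source and test
// whether an expected call appears in it. File name and expected
// snippet are placeholders.
String[] lines = loadStrings("student_sketch.pde");
String src = join(lines, "\n");
String expected = "ellipse(";  // code the instructor expects to see
if (src.contains(expected)) {
  println("PASS: found " + expected);
} else {
  println("FAIL: missing " + expected);
}
```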
Hi @pfe1223,
I haven't looked at it yet, but Prince Steven Annor developed a Processing 3 auto-grading module called "AutoGrad" for SuaCode. However, I believe that it is not AST-based, and that each test class is custom-designed for its specific assignment module.
You might also be interested in this:
Rather than AST-based syntax evaluation, I have in the past used visual e2e testing, in which each learner is assigned a target such as an output image (assignment generation) and can then self-grade whether or not they have accurately produced that target. See prior discussion here:
…as an aside, e2e visual testing is also used for regression testing on the Processing.R mode: https://github.com/processing-r/Processing.R/blob/master/hack/generate-e2e-test.py
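For reference, a minimal sketch of the image-comparison step (file names are placeholders, and it assumes both images have the same dimensions):

```
// Count pixels that differ between the student's saved output
// and the target image. Assumes both images are the same size.
PImage target = loadImage("target.png");
PImage output = loadImage("student_output.png");
target.loadPixels();
output.loadPixels();
int diff = 0;
for (int i = 0; i < target.pixels.length; i++) {
  if (target.pixels[i] != output.pixels[i]) diff++;
}
println(diff + " of " + target.pixels.length + " pixels differ");
```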
@jeremydouglass Thanks so much, this looks promising.
Regarding animation and visual testing:
For random processes, give students a template with a required randomSeed() call. The PRNG used is Java's, so the output is deterministic across platforms.
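A minimal template sketch along those lines (the seed value is arbitrary):

```
// Template with a fixed seed so that random() output is
// reproducible across runs and machines.
void setup() {
  size(400, 400);
  randomSeed(42);  // required fixed seed; value is arbitrary
}

void draw() {
  // student drawing code using random() goes here
}
```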
For deterministic animations, you can reduce them to the static image comparison case by giving students a template that saves a frame at a given frameCount, e.g. if (frameCount == 300) saveFrame().
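Such a template might look like this (frame count and file name are arbitrary):

```
// Reduce an animation to a single comparable image: after 300
// frames, save the canvas and stop looping.
void draw() {
  // student animation code goes here
  if (frameCount == 300) {
    saveFrame("output.png");
    noLoop();
  }
}
```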
For animations with mouse interaction and a deterministic state (e.g. the image output is always the same if the mouse is at x,y), you can again reduce them to the static image comparison case by giving students a template that saves a frame on if (mouseX == x && mouseY == y) saveFrame(). They have to hunt-the-pixel a bit to generate their image(s), but it shouldn't be too bad. A slightly fancier template would take its x,y input from mouseX/mouseY but, on e.g. keyReleased(), override it with a set x,y and then fire saveFrame(): make your interactive sketch, but press space to save.
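A minimal sketch of that fancier template (the fixed coordinates and the space key are arbitrary choices):

```
// Draw from live mouse input, but pressing space redraws the next
// frame at a fixed x,y and saves it for comparison.
int x, y;
boolean saving = false;

void draw() {
  x = mouseX;
  y = mouseY;
  if (saving) {
    x = 200;  // fixed test position
    y = 150;
  }
  // student drawing code using x,y goes here
  if (saving) {
    saveFrame("output.png");
    saving = false;
  }
}

void keyReleased() {
  if (key == ' ') saving = true;
}
```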
For non-deterministic interactive output, visual comparison isn't usually a good fit. There are workarounds, but they are specific to certain cases or fairly involved (e.g. check deterministic pixel areas only, check for certain elements or certain properties of the output image such as its histogram, or draw a deterministic version of the contents to a second buffer, etc.).
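As a sketch of the second-buffer workaround (the deterministic content here is just a stand-in):

```
// Draw the deterministic part of the output into an offscreen
// PGraphics so it can be saved and compared independently of any
// non-deterministic on-screen content.
PGraphics pg;

void setup() {
  size(400, 400);
  pg = createGraphics(400, 400);
}

void draw() {
  // non-deterministic / interactive drawing on the main canvas here

  pg.beginDraw();
  pg.background(255);
  pg.ellipse(200, 200, 100, 100);  // deterministic content only
  pg.endDraw();
  image(pg, 0, 0);
}

void keyReleased() {
  if (key == ' ') pg.save("deterministic.png");
}
```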
Is this suite something that you are able to share / interested in sharing? I'm not currently using something like this, but I'm interested and might be able to contribute in the future.
One further note: I found this fascinating master's thesis project:
- ZITA - A Self Learning Tutoring Assistant http://essay.utwente.nl/77910/1/Blok_MA_EEMCS.pdf
The thesis also mentions a Processing static analysis tool:
In addition to PMD, a tool part of a research presented at CSEDU 2018 [8] is a static analysis tool that was used for research on Processing code. This tool currently has no real name, so it will henceforth be referred to as "CSEDU".