Looking forward to contributing to open source this GSoC season, and I think the Screen Reader for p5js canvas project could be extremely helpful for many people with special needs. If I've understood it correctly, the project aims to develop a suite of accessibility functions that can describe the contents of the p5 canvas, much like a screen reader does.
I wanted to know whether there are any additional features this project aims to implement. I have experience working with ML libraries such as TensorFlow and tfjs, and I think this is a project I could implement well. Awaiting further input from the community.