I have a .csv file with a column for time in milliseconds and another for speed in kmph. I would like to create a visualization for a speedometer using processing.
My framework for the solution was to create a counter that would emit a number every millisecond. If a time value on my list matched the number in the millisecond counter, the corresponding speed value would be looked up; otherwise, the counter would move ahead.
The problem is that I am unable to understand how Processing works with time, especially with stored data. I cannot seem to read data from the .csv file inside the draw() function in a way that controls the rate of iteration. Instead, I use TableRow to iterate through the table within the setup() function, but then I have no control over the rate at which it iterates, and around 3 minutes' worth of data gets processed in a few seconds.
Is there something painfully obvious that I am missing? I have a feeling this is something to do with CPU time and realtime. I’d appreciate any leads.
A millisecond is a very short period of time: 1/1000 of a second.
draw() runs only about 60 times per second.
This means that between two consecutive iterations of draw(), roughly 16 milliseconds have already passed!
Hence a comparison against your data set using == is worthless - the exact millisecond will almost never line up with a frame.
Instead use if (millis() >= previousTimeFromDataSet && millis() < timeFromDataSet) …
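The window check above can be sketched in plain Java; the helper name and the sample timestamps are made up for illustration, and in a real sketch `now` would be the value of millis():

```java
public class WindowCheck {
    // true when `now` lies in the half-open window [prevTime, nextTime),
    // i.e. the sample logged at prevTime is the one currently in effect
    public static boolean inWindow(int now, int prevTime, int nextTime) {
        return now >= prevTime && now < nextTime;
    }

    public static void main(String[] args) {
        // timestamps 0, 100, 200 ms: an elapsed time of 160 ms falls in the
        // second window, so the speed logged at 100 ms is the one to display
        System.out.println(inWindow(160, 100, 200)); // true
        System.out.println(inWindow(160, 0, 100));   // false
    }
}
```

Because the window is half-open, every millisecond belongs to exactly one row, even though draw() only samples the clock ~60 times per second.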
Right now you read your data set line by line in setup() and ignore the millis values in the process.
Then, when you plot the next point in your graph (or draw a line to the next point):
see the x axis as the time that has passed and the y axis as the speed at that time.
Calculate the x and y values using map() - see the reference:
x = map(timeFromDataSet, 0, 12000, 0, width);
y = map(speed, 0,310,height-20, 0);
or something similar
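To show what those two map() calls compute, here is a plain-Java stand-in for Processing's map(); the 12000 ms span, 310 km/h top speed, and the 600 x 400 canvas are assumptions carried over from the example above:

```java
public class MapDemo {
    // Plain-Java equivalent of Processing's map(): linearly rescales
    // `value` from the range [start1, stop1] to the range [start2, stop2]
    public static float map(float value, float start1, float stop1,
                            float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    public static void main(String[] args) {
        // halfway through a 12 s recording lands mid-canvas horizontally
        float x = map(6000, 0, 12000, 0, 600);   // 300.0
        // 155 km/h out of 310 lands mid-height; note the flipped output
        // range (height-20 down to 0) because y grows downward on screen
        float y = map(155, 0, 310, 400 - 20, 0); // 190.0
        System.out.println(x + " " + y);
    }
}
```

The flipped output range in the y call is what makes higher speeds appear nearer the top of the window.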
Are your measurements from instruments, or simulated? Are they regularly spaced? Share a few rows of your sample data.
If your data is slow / sparse, then you need to find the two nearest times and the current time position between them, then interpolate the speeds, as @Chrisir suggested.
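A minimal sketch of that interpolation in plain Java, assuming sorted parallel arrays times[] and speeds[] loaded from the CSV (the names and sample values are made up):

```java
public class SpeedInterp {
    // Linearly interpolate the speed at time `now` (ms) between the two
    // samples that bracket it; clamps to the first/last sample outside
    // the recorded range
    public static float speedAt(int now, int[] times, float[] speeds) {
        if (now <= times[0]) return speeds[0];
        for (int i = 1; i < times.length; i++) {
            if (now < times[i]) {
                // fraction of the way from sample i-1 to sample i
                float t = (now - times[i - 1]) / (float) (times[i] - times[i - 1]);
                return speeds[i - 1] + t * (speeds[i] - speeds[i - 1]);
            }
        }
        return speeds[speeds.length - 1];
    }

    public static void main(String[] args) {
        int[] times = {0, 1000, 2000};
        float[] speeds = {0f, 50f, 100f};
        // halfway between the 0 ms and 1000 ms samples
        System.out.println(speedAt(500, times, speeds)); // 25.0
    }
}
```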
If your data is fast / dense, then it is probably good enough to simply display the single most recent measurement. You can keep the current row id in a global variable and march it up to the next row each frame with a while() loop.
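A sketch of that cursor-and-while() approach in plain Java, with `row` standing in for the global row id; in a real sketch `now` would come from millis() and `times` from the loaded Table:

```java
public class RowMarch {
    // Global cursor into the sorted timestamp list; persists between frames
    static int row = 0;

    // March the cursor forward until it points at the latest sample
    // whose timestamp is at or before `now`, then return its index
    public static int advance(int now, int[] times) {
        while (row + 1 < times.length && times[row + 1] <= now) {
            row++;
        }
        return row;
    }

    public static void main(String[] args) {
        int[] times = {0, 16, 33, 50, 66};
        System.out.println(advance(40, times)); // 2 -> sample at 33 ms
        System.out.println(advance(70, times)); // 4 -> sample at 66 ms
    }
}
```

Because the cursor only ever moves forward, each frame does a handful of comparisons at most, no matter how long the recording is.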