I am well aware of the built-in time functions in Processing, the ones that get you the computer's clock time.
However, I have a situation where I get large amounts of milliseconds from timed processes, and I would like to convert those into a human-readable format.
Java has a library for this, something like:
long minutes = TimeUnit.MILLISECONDS.toMinutes(millis);
But that does not work for us.
I wonder if someone wrote at least a function to take care of this.
Thanks a lot,
Mitch
Since you don't say something like "I want to show (nicely) the time since the start of the program,"
you are possibly already aware that that is what millis() means?
And yes, I'm also not aware of a ready-made "date time" tool for this,
but try:
int milli;
int hours;
int minutes;
int seconds;
int days;

void setup() {
  size(600, 400);
}

void draw() {
  background(0);
  get_time();
  text("millis from start = " + nf(days, 3) + "_" + nf(hours, 2) + ":" + nf(minutes, 2) + ":" + nf(seconds, 2) + ":" + nf(milli, 3), 10, 20);
  text("Time: " + year() + "/" + nf(month(), 2) + "/" + nf(day(), 2) + "_" + nf(hour(), 2) + ":" + nf(minute(), 2) + ":" + nf(second(), 2), width - 170, 20);
}

void get_time() {
  milli = millis();
  seconds = milli / 1000;
  minutes = seconds / 60;
  hours = minutes / 60;
  days = hours / 24;
  // At this point each variable is a running total that never resets:
  // millis should go to 999 and then back to 0, seconds should wrap at 60,
  // minutes at 60, and hours at 24. So each one must be reduced to its own
  // "looping" range by subtracting the larger units back out:
  hours = hours - days * 24;
  minutes = minutes - days * 24 * 60 - hours * 60;
  seconds = seconds - days * 24 * 60 * 60 - hours * 60 * 60 - minutes * 60;
  milli = milli - days * 24 * 60 * 60 * 1000 - hours * 60 * 60 * 1000 - minutes * 60 * 1000 - seconds * 1000;
}
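The same breakdown can be written more compactly with the modulo operator, so each unit wraps without the backward subtraction. A minimal sketch in plain Java (the helper name msToClock is made up here), runnable outside Processing:

```java
public class MsToClock {
    // Hypothetical helper: same days_hh:mm:ss:mmm breakdown as get_time(),
    // but modulo gives each unit's remainder directly.
    public static String msToClock(long ms) {
        long days    = ms / 86_400_000L;        // full days (24 * 60 * 60 * 1000 ms)
        long hours   = (ms / 3_600_000L) % 24;  // hours within the day
        long minutes = (ms / 60_000L) % 60;     // minutes within the hour
        long seconds = (ms / 1_000L) % 60;      // seconds within the minute
        long milli   = ms % 1_000L;             // ms within the second
        return String.format("%03d_%02d:%02d:%02d:%03d",
                             days, hours, minutes, seconds, milli);
    }

    public static void main(String[] args) {
        // 1 day + 1 h + 1 min + 1 s + 1 ms
        System.out.println(msToClock(90_061_001L)); // prints 001_01:01:01:001
    }
}
```

Inside a Processing sketch the same body works as a function taking millis() as its argument.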
…and when I imported TimeUnit into a simple sketch, the code from the top answer worked just fine.
import java.util.concurrent.TimeUnit;

void draw() {
  int now = millis();
  String time = String.format("%d hrs, %d min, %d sec",
    TimeUnit.MILLISECONDS.toHours(now),
    TimeUnit.MILLISECONDS.toMinutes(now),
    TimeUnit.MILLISECONDS.toSeconds(now) -
      TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(now))
  );
  println(time);
}
16 hrs, 963 min, 6 sec
Of course, you have to do the same subtraction math for each step that you do in a native Processing sketch (look at the minutes above: 963 is the running total, not a remainder), based on which is the higher time unit and which is a remainder. To me this means that your native solution msConversion is more readable for a beginner than using TimeUnit.
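To show what that subtraction looks like when applied to every unit, here is a standalone plain-Java sketch (the format helper name is made up). With a running total of 963 minutes, the minute remainder after subtracting 16 hours works out to 3:

```java
import java.util.concurrent.TimeUnit;

public class TimeUnitRemainders {
    // Hypothetical formatter: subtract the larger unit at each step
    // so minutes and seconds wrap instead of accumulating.
    public static String format(long millis) {
        long hrs = TimeUnit.MILLISECONDS.toHours(millis);
        long min = TimeUnit.MILLISECONDS.toMinutes(millis)
                 - TimeUnit.HOURS.toMinutes(hrs);
        long sec = TimeUnit.MILLISECONDS.toSeconds(millis)
                 - TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(millis));
        return String.format("%d hrs, %d min, %d sec", hrs, min, sec);
    }

    public static void main(String[] args) {
        // 57,786,000 ms = 963 total minutes
        System.out.println(format(57_786_000L)); // prints 16 hrs, 3 min, 6 sec
    }
}
```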
Thanks a lot,
Did you have to install the “TimeUnit” library? Manually?
While on this subject:
How many days of millis do I get before Processing resets it to 0?
I have a long-running application. I wonder if there's a way to reset millis() without making the user restart the sketch?
My other option is to write my algorithms based on the computer date - time.
That seems really hard. During the same day, I could subtract a start time from an end time,
such as (end) 23:33:55 minus (start) 18:55:33. I would convert everything to seconds and take the difference.
But what do I do for timing a process that starts on one date and ends the next day?
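One way around the across-midnight problem is not to do the clock arithmetic yourself: record full timestamps and let java.time.Duration compute the difference, which handles the date rollover. A sketch with made-up timestamps for illustration (in a real sketch you would capture start and end with LocalDateTime.now() or System.currentTimeMillis()):

```java
import java.time.Duration;
import java.time.LocalDateTime;

public class CrossDayTiming {
    public static void main(String[] args) {
        // Hypothetical process: starts one evening, ends the next night.
        LocalDateTime start = LocalDateTime.of(2020, 1, 1, 18, 55, 33);
        LocalDateTime end   = LocalDateTime.of(2020, 1, 2, 23, 33, 55);

        // Duration.between spans the date boundary automatically.
        Duration d = Duration.between(start, end);
        System.out.printf("%d h %d min %d sec%n",
                d.toHours(), d.toMinutes() % 60, d.getSeconds() % 60);
        // prints 28 h 38 min 22 sec
    }
}
```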
import java.util.concurrent.TimeUnit;
// Max millis() in days:
println(TimeUnit.MILLISECONDS.toDays(MAX_INT)); // 24
// Max frameCount in days (60 FPS):
println(TimeUnit.SECONDS.toDays(MAX_INT/60)); // 414
exit();
As an alternative, you could keep your own higher-resolution millis timer (e.g. a long or a double) and update it before millis() overflows, every 24 days or so. Or before frameCount overflows, if you plan to run for over a year (about 414 days at 60 FPS).
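A sketch of that idea, assuming a tick() method called from draw() with the current millis() reading; the class name and API are made up. Because Java int subtraction wraps around, (now - last) stays correct even across the millis() overflow, as long as ticks arrive more often than once per ~24 days:

```java
public class ElapsedTimer {
    private int last;      // previous millis() reading
    private long elapsed;  // total elapsed ms; a long won't overflow in practice

    public ElapsedTimer(int startMillis) {
        last = startMillis;
    }

    // Call with the current millis() reading, e.g. once per draw().
    public void tick(int now) {
        // Wraparound-safe: the int delta (now - last) is correct
        // even when 'now' has wrapped past MAX_INT into negatives.
        elapsed += (now - last);
        last = now;
    }

    public long elapsedMillis() {
        return elapsed;
    }

    public static void main(String[] args) {
        // Simulate a tick that crosses the int overflow boundary.
        ElapsedTimer t = new ElapsedTimer(Integer.MAX_VALUE - 500);
        t.tick(Integer.MAX_VALUE - 500 + 1000); // wraps past MAX_INT
        System.out.println(t.elapsedMillis()); // prints 1000
    }
}
```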
Note that, realistically, if you plan on having a personal computer – or server – that executes a continuous process with no interruption for even 20 years: the chances are incredibly small that you will achieve that runtime, even with good hardware and an infrastructure plan. You will probably have a forced software update, soft reboot, hack, software failure, hardware failure, power failure, natural disaster, theft, et cetera. Odds are your hardware alone just won't last ten years without a stop.
If you DID want a multi-decade continuous process, PDE probably isn't a good way to do it. Most approaches (outside exceptional requirements like space-probe engineering) don't even try to do single-process, single-machine – they instead start by making the process distributed, so that hardware and software can be replaced incrementally over time to keep the process alive as things break and become rapidly obsolete.