I used the de.bezier.data.sql and processing.serial libraries to get data from an Arduino and save it to a MySQL database. The transmission speed I get from the Arduino is 0.02 seconds per record; however, I can only insert about 10 records per second from Processing into MySQL using `msql.query("INSERT INTO datatest(data) VALUES('" + s + "')");`. Has anyone else encountered this problem, and what is the solution?
There might be ideas for speedups here and there,
but it all ends somewhere, and you need to talk about the data flow concept.
-a- you use a sampling rate of 50 Hz, ? how many channels?
An Arduino analog read / serial print over USB could go up to ?500 Hz? max.
Above that you need to work with a batch concept:
analog read into an array ( ?300? samples ), then send as a batch via USB.
( For one analog channel I reached a sampling speed of
- 8 kHz on an Arduino UNO
- 500 kHz on a MAX32
for an oscilloscope-like tool. )
But that would not be a continuous data stream.
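As a sketch of what the receiving end of such a batch could look like in Python (the 300-sample batch size and the 2-byte little-endian sample format are assumptions; `io.BytesIO` stands in for the real serial port, which pyserial would provide):

```python
import io
import struct

BATCH_SIZE = 300          # samples per batch (assumed)
SAMPLE_BYTES = 2          # one unsigned 16-bit value per analogRead()

def read_batch(port):
    """Read one batch of little-endian uint16 samples from a port-like object."""
    raw = port.read(BATCH_SIZE * SAMPLE_BYTES)
    return list(struct.unpack("<%dH" % BATCH_SIZE, raw))

# Demo with an in-memory stand-in for the serial port:
fake_port = io.BytesIO(struct.pack("<300H", *range(300)))
samples = read_batch(fake_port)
print(len(samples), samples[:3])  # 300 [0, 1, 2]
```

With pyserial you would pass a `serial.Serial` object as `port` and call `read_batch` in a loop; the byte layout just has to match what the Arduino sketch sends.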
-b- for data viewing, like Processing with its 60 Hz frameRate,
I would say it should not be "its job" to handle a high-speed serial data stream,
so possibly use one program ( Python… ) to handle the serial interface,
and Processing for the later data visualization ( looking at the database ).
-c- but even more, the file handling of tools like SQL…
? How do they work? Do they open and close the file for each record stored?
If you write to a file in stream mode it might be much faster
( but if it is not closed properly you lose everything, and you cannot read it for visualization at the same time ).
But Processing does not allow opening a file for append ( like Python does… ).
You might get a speedup by writing to a file on a RAM disk;
I do that on the Raspberry Pi ( via Python ) to protect my system disk ( which is a µSD card ) from wearing out.
But you could possibly use a batch file concept:
collect the lines from USB and save them to a table…
after one second, write the table to a new CSV file ( with a timestamp in the filename ),
and process those files into the database later…
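A rough Python sketch of that batch-file idea (the filename pattern and the one-second flush interval are made up for illustration):

```python
import csv
import os
import time

def flush_batch(rows, out_dir="."):
    """Write the rows collected so far to a new CSV file with a timestamp in the name."""
    fname = os.path.join(out_dir, "batch_%d.csv" % int(time.time()))
    with open(fname, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return fname

# Pretend these lines arrived over the USB serial link during the last second:
table = [(i, i * 10) for i in range(5)]
path = flush_batch(table)
print(path)
```

A separate importer job can then pick up the finished `batch_*.csv` files and insert them into the database at its own pace, decoupled from the sampling loop.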
-d- somewhere along the way
- from data sampling
- data transport
- data visualization
- data storage
you need to think about data concentration / reduction.
So again, a tool reading the serial interface could sample into an array first
and compute ( average / min / max ) for a certain time period, and save that to the database.
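A minimal sketch of that ( average / min / max ) reduction step (the sample values are invented):

```python
def reduce_period(samples):
    """Concentrate one time period of raw samples into (average, min, max)."""
    return (sum(samples) / len(samples), min(samples), max(samples))

period = [512, 498, 530, 501]            # one period of analog readings (made up)
avg, lo, hi = reduce_period(period)
print(avg, lo, hi)  # 510.25 498 530
```

Instead of one database row per raw sample, you store one row per period, which cuts the insert rate by the period length.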
Also, in industrial applications ( process control systems / historic data collection )
there are very interesting concepts for reducing file size…
you save a datetime stamp and value ONLY after a change of value beyond a deadband
( ? it was called "swinging door" ? ).
Imagine the coding you need to bring that into a graph over time later,
but it is very effective.
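A plain deadband filter (a simpler relative of the swinging-door algorithm; the deadband width and the sample data here are arbitrary) could be sketched like this:

```python
def deadband_filter(records, deadband):
    """Keep a (timestamp, value) record only when the value moved
    more than `deadband` away from the last stored value."""
    stored = [records[0]]                 # always keep the first point
    for ts, v in records[1:]:
        if abs(v - stored[-1][1]) > deadband:
            stored.append((ts, v))
    return stored

data = [(0, 20.0), (1, 20.1), (2, 20.2), (3, 21.0), (4, 21.1), (5, 19.0)]
print(deadband_filter(data, 0.5))  # [(0, 20.0), (3, 21.0), (5, 19.0)]
```

Six samples shrink to three stored points; plotting later means drawing a step or interpolated line between the surviving timestamps, which is the extra coding effort mentioned above.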
Also there are concepts like
- a current trend that stores, say, every second for 30 minutes
( with more detail, like PV / SP / OUT / MD for a control loop )
- a historic trend that stores every minute ( an average? or more strongly filtered ) for a year…
All of them follow the thinking of a ring buffer.
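In Python, `collections.deque` gives that ring-buffer behavior directly (30 minutes at 1 Hz is just the assumed example size):

```python
from collections import deque

# A "current trend" ring buffer: keep the last 30 minutes at 1 Hz (assumed size).
trend = deque(maxlen=30 * 60)

for second in range(30 * 60 + 100):       # feed more samples than it can hold
    trend.append(second)

print(len(trend), trend[0], trend[-1])    # 1800 100 1899
```

Old samples fall off the front automatically once the buffer is full, so memory use stays fixed no matter how long the stream runs.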
Just did a little benchmarking.
system: Raspberry Pi 3B+, OS: Raspbian
BezierSQLib 0.3.1 ( Florian Jenett )
Create an SQL db file on a RAM disk,
save millis and a counter to the SQL db,
read back and check:
// for 100 records need 975 millis
// for 1000 records need 5982 .. 8057 millis
So yes, the SQL DB seems kind of slow, with varying speed, but 160 Hz was possible
under these test conditions.
But no serial link, no draw loop used.
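For comparison, the difference between committing every record and batching records into one transaction can be seen with Python's built-in sqlite3 (an in-memory database here instead of a RAM-disk file; the table name and record count are made up):

```python
import sqlite3
import time

def bench(batched, n=1000):
    """Time n inserts: one commit per row, or one transaction for all rows."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE datatest (ms INTEGER, counter INTEGER)")
    t0 = time.perf_counter()
    if batched:
        con.executemany("INSERT INTO datatest VALUES (?, ?)",
                        [(i, i) for i in range(n)])
        con.commit()                      # one transaction for all rows
    else:
        for i in range(n):
            con.execute("INSERT INTO datatest VALUES (?, ?)", (i, i))
            con.commit()                  # one transaction per row: slow
    elapsed = time.perf_counter() - t0
    assert con.execute("SELECT COUNT(*) FROM datatest").fetchone()[0] == n
    con.close()
    return elapsed

print("per-row: %.4fs  batched: %.4fs" % (bench(False), bench(True)))
```

The commit-per-record pattern is usually where most of the time goes, which is the same effect the benchmark above runs into.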
I'm trying to set up a Processing sketch on a Raspberry Pi which connects to a MySQL DB with the BezierSQLib library.
When I run it, however, I get this error message:
SQLtest.pde:20:0:20:0: NoSuchMethodError: processing.core.PApplet.registerDispose(Ljava/lang/Object;)V
NoSuchMethodError: You may be using a library that's incompatible with this version of Processing.
I'm running Processing 3.4 on the Pi and BezierSQLib 0.2 ( didn't find any 0.3.1 ).
Would the SQLib version be the source of the problem?
Yes, but possibly the old Bezier lib is also not capable of using the current MySQL.
Why not just run the current Processing version and
install the newest library for it?
curl https://processing.org/download/install-arm.sh | sudo sh
and use the Library Manager to install the BezierSQL lib.
I must admit that since the last big Debian upgrade I have not used MySQL anymore;
I am very happy just using SQLite3 ( which uses ( auto-creates ) a .db file ),
and I run, for example, a Python CMS with it.
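A minimal example of that auto-create behavior with Python's built-in sqlite3 (the filename and table are made up):

```python
import sqlite3

con = sqlite3.connect("sensors.db")       # the .db file is created if it does not exist
con.execute("CREATE TABLE IF NOT EXISTS log (ts REAL, value REAL)")
con.execute("INSERT INTO log VALUES (?, ?)", (0.0, 42.0))
con.commit()
print(con.execute("SELECT COUNT(*) FROM log").fetchone()[0])
con.close()
```

No server process to install or configure; the whole database lives in that one file.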
But I understand that you already have that database and the ?tools? to fill it,
and from Processing you just want to read / use the data?