One of the first questions I would ask is: why a database at all? What is it about the collected data that makes it necessary to put it into an indexable structure?
My experience has been that most data acquisition work does just fine with a flat file accessed via a seek to each block of data. It is fast, and each file can be uniquely identified by the data-collection event it came from. Such a layout is also very well suited to loading a block straight into memory, where it can be treated as a multi-dimensional array with no further manipulation of the input data. I would note that within PHP, arrays are really nice for some applications, but you pay a lot of overhead to have column numbers and associative names for each data element.
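For what it's worth, here is a minimal sketch of that approach in PHP, assuming a hypothetical flat file laid out as fixed-size blocks of 500 samples with two doubles each; the file name, block layout, and read_block() helper are all made up for illustration:

```php
<?php
// Minimal sketch: a flat file "capture.dat" (hypothetical) laid out as
// fixed-size blocks, each holding 500 samples of 2 doubles (8,000 bytes).
const SAMPLES_PER_BLOCK = 500;
const BYTES_PER_SAMPLE  = 16;                       // 2 doubles per sample
const BLOCK_SIZE        = SAMPLES_PER_BLOCK * BYTES_PER_SAMPLE;

function read_block(string $path, int $blockIndex): array
{
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        throw new RuntimeException("cannot open $path");
    }
    fseek($fh, $blockIndex * BLOCK_SIZE);           // jump straight to the block
    $raw = fread($fh, BLOCK_SIZE);
    fclose($fh);

    // Unpack into a flat 1-based array of doubles, then regroup as [sample][channel].
    $flat  = unpack('d*', $raw);
    $block = [];
    for ($i = 0; $i < SAMPLES_PER_BLOCK; $i++) {
        $block[$i] = [$flat[2 * $i + 1], $flat[2 * $i + 2]];
    }
    return $block;
}

// Usage: treat block 42 as a 500 x 2 array.
$data = read_block('capture.dat', 42);
echo $data[0][1], "\n";
```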
If you believe a database is the way to go, then I think a fully normalized schema is a mistake, i.e., one where 1,000,000 data points become 1,000,000 rows. Instead, build large records that each hold multiple points.
Let's say you have two collected values and an identifying integer. That is 12 bytes per point (or around 20 with double precision). Put 500 of them into a record and you get a 10,000-byte record, not at all unreasonable for today's machines.
At 500 points per record, you have a total of 2,000 records instead of 1,000,000. I guarantee it is far, far faster to read 2,000 10 KB records than 1,000,000 20-byte records.
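Here is roughly what that packing might look like in PHP, assuming each point is a 32-bit identifier plus two doubles (20 bytes); pack_record()/unpack_record() are just illustrative names, and the resulting blob could go into a BLOB column or be appended to a flat file:

```php
<?php
// Sketch of packing 500 points into one ~10,000-byte record, assuming each
// point is a 32-bit identifier plus two doubles (20 bytes).

function pack_record(array $points): string
{
    $blob = '';
    foreach ($points as [$id, $a, $b]) {
        $blob .= pack('ldd', $id, $a, $b);          // 'l' = int32, 'd' = double
    }
    return $blob;                                   // 500 points -> 10,000 bytes
}

function unpack_record(string $blob): array
{
    $points = [];
    foreach (str_split($blob, 20) as $chunk) {
        $p = unpack('lid/da/db', $chunk);           // id, a, b
        $points[] = [$p['id'], $p['a'], $p['b']];
    }
    return $points;
}

// One row (or one file append) per 500 points instead of 500 separate rows.
```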
As for the calculation, I wouldn't bother to store it if it is a simple divide; on a modern Intel processor a divide is cheap. I've never benchmarked it, but the tradeoff here is bandwidth (I/O channel speed) versus calculation speed. If you were doing FFTs, storing the result might make sense, but for a simple divide I can't see it.
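If you do compute on the fly, it is literally one divide per point at read time; a sketch, reusing the hypothetical unpack_record() from above and assuming the two stored values are a numerator and a denominator:

```php
<?php
// Compute the derived value at read time instead of storing it.
foreach (unpack_record($blob) as [$id, $num, $den]) {
    $ratio = ($den != 0.0) ? $num / $den : NAN;     // one divide per point, no extra I/O
    // ... use $id and $ratio ...
}
```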