No worries, thanks for the update! I'll for sure be researching an affordable alternative this week, and maybe make a purchase. If I do, I'll update this thread too!
Just read through this thread. I've been doing my own reverse engineering of the ECU maps purely through massive OBD/CAN data collection and analysis; I hadn't seen any of the ECU maps before this. I'm particularly interested in the part about calculating/using load (in g/rev) as the main x-axis for the 2D lookup tables. In Excel, I've built 2D tables of all my data logs with load expressed both in g/rev and as Volumetric Efficiency (assuming a constant air density at STP of 1.18 kg/m^3), which is how the ECU's calculated load value, the one that can go above 100%, is derived. In my findings there is a discrepancy between the 2D lookup table values shown in this thread/on RomRaider and the ones I have collated, but only when I assume that the load value in the ECU tables is in g/rev. When I instead interpret the load value as Volumetric Efficiency, everything matches up near perfectly, except at lower rpm where knock control comes in pretty heavily. I'll quickly go over how my Excel sheet works and then show some examples of the discrepancies.
The first step is the data logging, which was done in various conditions, driving styles, climates, etc. to cover a wide variety of use cases. Most of the logging was done through an ELM327-based adapter at around 4-10 Hz, depending on the number of parameters being logged. In total so far, I have collected around 1.4 million data points.
The main thing I use in this sheet is what I'll call the map generator, which works as follows: you set up a square table with an x-axis and y-axis of any of the logged parameters, choose the range of those values, and then specify the target parameter you want to generate the table for. For example, the x-axis would be rpm, the y-axis would be load, and the Z value, the actual value in each cell, would be leading timing advance. For each cell, the Excel function goes through all of the data points and searches for when the conditions of that cell are met. For example, at 4000 rpm and 80% VolEf, it will find all values of ignition timing that occurred when rpm was between 4000 +/- 100 rpm (adjustable buffer) and VolEf was between 80% +/- 2%. The function then takes the average, median, standard deviation, total count, or whatever you want of the Z parameter and puts that in the cell. It does this for each x and y value and then generates the full table.
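The cell-binning logic above is simple enough to sketch outside of Excel. Here's a rough Python equivalent of that map generator; the function and field names are my own shorthand for the description above, not anything from the ECU or the spreadsheet:

```python
import statistics

def generate_map(log, x_key, y_key, z_key, x_vals, y_vals,
                 x_buf, y_buf, agg=statistics.mean):
    """Bin logged samples into a 2D table, like the Excel sheet does.

    log    -- list of dicts, one per logged sample
    x_vals -- axis breakpoints (e.g. rpm values along the x-axis)
    x_buf  -- half-width of the bin around each breakpoint
    agg    -- statistic applied to the matching Z samples
    """
    table = {}
    for y in y_vals:
        for x in x_vals:
            hits = [s[z_key] for s in log
                    if abs(s[x_key] - x) <= x_buf
                    and abs(s[y_key] - y) <= y_buf]
            table[(x, y)] = agg(hits) if hits else None
    return table

# e.g. timing advance at 4000 rpm +/- 100 and 80% VolEf +/- 2:
log = [{"rpm": 3950, "volef": 79.0, "adv": 18.0},
       {"rpm": 4050, "volef": 81.0, "adv": 18.5},
       {"rpm": 6000, "volef": 90.0, "adv": 20.0}]
t = generate_map(log, "rpm", "volef", "adv", [4000], [80], 100, 2)
print(t[(4000, 80)])   # 18.25
```

Swapping `agg` for `statistics.median`, `len`, or `statistics.stdev` gives the other per-cell statistics the sheet produces.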
Shown below is part of a table generated with the X-axis being rpm, the Y-axis being load in terms of Volumetric Efficiency, and the Z value being leading timing advance.
Cell values are color coded to easily show trends. Keep in mind, this table assumes Volumetric Efficiency, not g/rev, is the load factor. Notice at 4000 rpm and around 67.5-75% we see the timing go from ~18 to ~14. If we convert the Y-axis to g/rev, then the change from 18 to 14 deg occurs at 0.52-0.58 g/rev. Now, let's compare this to the US 6-port MT table in RomRaider:
Here we see that with the load axis in g/rev, we expect the change from 18 to 14 degrees of timing to occur from 0.69-0.75 g/rev of load, which doesn't match the expected 0.52-0.58 g/rev from my logs. However, if we assume that the axis is in terms of volumetric efficiency instead of g/rev, then we should expect the transition to occur around 69-75% Volumetric Efficiency, which lines up nearly exactly with the 67.5-75% value from my data logging.
Below is a graph of ignition timing vs. load at 4000 rpm for three different sets: directly from RomRaider with load in g/rev, from datalogs with load in g/rev, and from datalogs with load in Volumetric Efficiency.
As you can see, the purple and light blue data sets are a very good match, while the orange does not fit the trend. This shows that when you assume that the Load is in terms of Volumetric Efficiency, and not in g/rev, the datalog values match the RomRaider values almost perfectly.
The point I'm trying to make is that, in my experience, the load axis in the 2D lookup tables in RomRaider does not make sense in units of g/rev, and only seems to in units of Volumetric Efficiency. I saw in the Ghidra decompilation how it appears to be calculated, which does indicate that it is g/rev based, but the experimental findings show otherwise. The g/rev and VolEf units are separated by a factor of about 0.772 at 100% VolEf:
100% VolEf = (0.654 L)/(0.654 L) = 1; 1 × (0.654 L/rev) × (1.18 g/L) = 0.77172 g/rev. So 100% volumetric efficiency would be 0.77 g/rev, and 1 g/rev would be 129.6% volumetric efficiency.
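For anyone who wants to flip between the two conventions, the math above is a one-liner each way. The constants here are the ones quoted in this post (0.654 L per combustion chamber, 1.18 g/L air), not ROM values:

```python
# Converting between the two load conventions from the math above.
DISP_L = 0.654   # swept volume per combustion chamber, liters
RHO_AIR = 1.18   # air density assumed in this post, g/L

def ve_to_g_per_rev(ve_frac):
    """VE as a fraction (1.0 = 100%) -> load in g/rev."""
    return ve_frac * DISP_L * RHO_AIR

def g_per_rev_to_ve(load_g_per_rev):
    """Load in g/rev -> VE as a fraction."""
    return load_g_per_rev / (DISP_L * RHO_AIR)

print(round(ve_to_g_per_rev(1.0), 5))   # 0.77172 g/rev at 100% VE
print(round(g_per_rev_to_ve(1.0), 4))   # 1.2958, i.e. ~129.6% VE
```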
Let me know your thoughts on this. I greatly appreciate the work you've done so far, as well as making it open source. I'm a mechanical engineer so coding stuff is mostly black magic to me once you get further down than C. I've used Ghidra a little bit before to decompile the ROM from a MicroTech ECU, but that was with the help of a Computer Science major friend so I can appreciate how hard this stuff is. Also in case you're interested, I have a huge collection on everything technical about rotaries from the 10A to the 8c, including some very hard to find research papers, training manuals, etc. Thanks again for contributing!!!
So, at risk of sounding coarse, I'll just say this: The units of engine load in the axis are g/rev.
Without that being the case, the entire control strategy for all ECU fueling falls apart (literally all of it: batch injection, idle, cranking, power enrich, the cat temperature model, the whole bitch) as the math doesn't math, and that is the most important part of this. The same variable being used for the fuel calculation is the exact same variable in RAM that is used as a lookup pointer for these tables.
One flaw in your logging data is that you are comparing a final ignition timing value to a base ignition timing table. More things affect ignition timing at a speed/load point than just that base table, so I would expect those values not to match up perfectly.
Final Timing (°BTDC) = (Base Timing Final × Cranking Angle Multiplier) + ((Idle Base Timing Final + Idle Speed Compensation + Cranking Angle Timing) × (1 − Cranking Angle Multiplier)) + Coolant Temperature Compensation − Intake Air Temperature Compensation
There are actually quite a few other aspects that go into final timing values that I don't have encompassed in this equation, as they were mostly transient conditions and didn't really make sense to include given the context of values you'd want to tune and change, and also probably weren't well understood at the time of writing. For example, this equation doesn't take into account knock retard (whose data points you seem to have thrown out as well), and other timing compensations during partial or liftoff throttle, to name a few.
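To make the blending in that equation concrete, here's a small sketch of it in Python. The argument names are my paraphrase of the terms quoted above, not actual ROM symbols, and the example numbers are made up:

```python
# Sketch of the final-timing blend quoted above. cranking_mult blends
# between the running base table (1.0) and the idle/cranking path (0.0).
def final_timing_btdc(base_timing, cranking_mult, idle_base,
                      idle_speed_comp, cranking_angle_timing,
                      ect_comp, iat_comp):
    running = base_timing * cranking_mult
    idle = (idle_base + idle_speed_comp + cranking_angle_timing) * (1.0 - cranking_mult)
    return running + idle + ect_comp - iat_comp

# Fully warmed and running (multiplier = 1): just the base table plus temp comps.
print(final_timing_btdc(18.0, 1.0, 10.0, 0.0, 0.0, 0.5, 1.0))   # 17.5
```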
There are also engine load limit saturation cals that the ECU uses (usually only a factor at WOT) to deal with the parabolic nature of the MAF sensor's voltage output. While kind of cheating the math, this sensor saturation can lead to some false positives about what true VE may look like when your math tries to convert a VE to a g/rev read from the ECM. That math also doesn't account for compression losses due to spark gap and other things I don't totally understand on the rotary engine. I'm not an ME by degree, so it's all wizardry to me. This is also why I think there is a big global compensation table for scudging in the fueling that software vendors love to call a VE table. They get away with it because enriching and leaning that table acts like a speed-density-type VE tuning table, which works in the software because it's effectively just a multiplier in the chain of fuel calculation, much like a VE table is.
Also, depending on which engine load PID you choose to query, it will give you g/rev or absolute load in % (with some scaling factor of 100, I believe?). I'm not sure which you were using, but if it was in RomRaider it was likely g/rev. The RomRaider logger defs are a cluster **** right now, to be honest, so I can't recall exactly what is what.
I think this discussion always gets brought up because there are multiple ways to measure load from the ECM over OBD, and the range of airflow in this engine is quite close to what a VE value would be... couple that with vendors calling a table people use a VE table when it is not.. and it's just a bad recipe.
I went through a lot of ECU disassembly under the guise of it being a VE value, but it simply is not possible for it to be.
Also, I'd love to see a rotary at 130% VE without a turbo, overlap, or a resonating intake.
Thanks for the response! I don't mean to sound argumentative at all, or to insult you; I sometimes come off that way, but that is not my intention at all. This is all in the pursuit of knowledge for me. I'm curious what exactly led you to the conclusion that the load axis is in g/rev for sure. In my opinion, the clearest tell is the 0.6345 factor applied to the value: that factor is nearly exactly equal to 1/(displacement × density) at STP, 0.6345 vs. 0.6344. All of this logging was done reading from the PCM's outputs, not values passed over OBD. The Volumetric Efficiency value I'm referencing is one of the calculated load outputs from the PCM, which uses the "g/rev" value to calculate it. I also calculated volumetric efficiency from the MAF, IAT, and BARO values, and the two volumetric efficiency calculation methods were identical. Below is a graph of volumetric efficiency and g/rev vs. rpm. The y-axes are scaled by the 0.6345 value (the g/rev here is double what the ECU lookup table shows; I believe the ECU table is per rotor, and this is both rotors). Yes, there are two data sets here, with g/rev in blue and VolEf in orange. You can't see the blue dots because, with the scaling applied, they are identical for every single data point.
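The 0.6345 vs. 1/(displacement × density) claim is easy to check numerically. The inputs here are my assumptions for what the post's STP values are (1.308 L for both rotors, 1.205 g/L for air):

```python
# Checking the claim that 0.6345 ~= 1/(displacement * air density).
DISP_TOTAL_L = 1.308    # both rotors, liters (assumed)
RHO_AIR_STP = 1.205     # air density at STP, g/L (assumed)

factor = 1.0 / (DISP_TOTAL_L * RHO_AIR_STP)
print(f"{factor:.5f}")   # 0.63446, right next to the ROM's 0.6345
```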
Peak volumetric efficiency is around 105% at STP, which is very reasonable for this engine given the advanced intake design. Mazda actually overstated the volumetric efficiency of the engine in their research development paper, which is the whole reason I went down this rabbithole of data analysis in the first place lol.
Verified by an experimental approach as well:
A PP engine with the right intake can, and has, hit 125% VolEf before. Here's an example from the R26B Le Mans engine, from its development paper:
As for the load saturation limits, I am aware of them, but I have only hit them a few times. It's pretty noticeable in the data, as the VolEf goes from slightly jittery to a flat line when it's above the max load limit.
I do agree with your assessment of certain vendors selling software with an editable "VE table" for the RX-8; it's complete nonsense to have that on a MAF car.
Another thing that led me to this conclusion was looking at the torque lookup tables. I used those, along with the lambda tables, to calculate a BSFC/thermal efficiency table, and compared that to experimental data taken by people with much fancier equipment than me. Assuming the g/rev interpretation leads to a calculated minimum BSFC of 217 g/kWh at 2000 rpm and 0.6875 g/rev, or 139.1 N·m. Published data, by comparison, gives a minimum BSFC of 257.4 g/kWh at 2000 rpm and 6 bar BMEP / 125 N·m. If you instead assume that the load is a normalized VolEf, you get a BSFC of 255 g/kWh at the same points.
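For reference, here is the BSFC arithmetic I'm describing, as a standalone function. How the ECU's load/torque/lambda tables map onto these arguments is my own reading, and the example below uses stoich 14.7:1 rather than the actual lambda table value, so its output won't reproduce the 217 g/kWh figure:

```python
import math

# BSFC from load (g/rev of air), rpm, torque, and AFR. The 14.7:1 in
# the example is illustrative, not the ROM's lambda table value.
def bsfc_g_per_kwh(load_g_per_rev, rpm, torque_nm, afr):
    fuel_g_per_h = load_g_per_rev * rpm * 60.0 / afr       # air flow / AFR
    power_kw = torque_nm * rpm * 2.0 * math.pi / 60.0 / 1000.0
    return fuel_g_per_h / power_kw

print(round(bsfc_g_per_kwh(0.6875, 2000, 139.1, 14.7), 1))   # ~192.6
```

A richer commanded lambda at that cell would push the figure up toward the numbers quoted above, which is why the table lambda matters here.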
Let me know what you think; my brain is already fried from looking at the code. Another weird thing I noticed is that the hard-coded stoichiometric air-fuel ratio is set to 13.9:1, which is strange because pure iso-octane/gasoline is 14.7:1 and E10 gasoline is about 14.1:1. It doesn't matter too much anyway, as the lambda sensor will adjust for it.
The biggest tell for me was in some of the calculations for batch injection. Using the injector flow rate, the software (paraphrasing a bit) calculates the maximum amount of fuel each set of injectors can provide in cc/rev. This is compared to the fuel request, which comes out of math using the engine load variable, to know when to turn on the different injector sets, so the two must be in the same units. Working the math backwards, the load units have to be g/rev for this to hold. Grams of air being used in the lambda and equivalence ratio calculations also confirms this, as does the grams-of-air-per-"stroke" variable you mentioned that is upstream of the engine load calc. I am just doing a logical deduction based on the software that is 100% running on the ECM. The way I tend to reverse engineer is to assume something is correct based on context clues and then go down the rabbit hole of trying to prove it incorrect. I wasn't able to prove that wrong for this variable.
As for the 13.9:1 ratio, this is the ratio for E15 gas, which (in the US at least) is likely the worst-case amount of ethanol in pump gasoline, and like you said, the fuel trims take care of the rest. This is also the value I modify to maintain proper fueling in the FlexFuel patch I wrote, so we know the math works in that regard too; because it's all air-per-stroke based math, everything just works regardless of the base ratio, batch injection included.
This matters far too little for so much talk. Simply calling it engine load, as a percentage of how much air the engine would pump at 100% VE and standard ambient conditions, is plenty of information for the scope of tuning.
It's easy to overcomplicate and overlook things while staring at a computer, only to find out on the road/dyno that it matters 1% or less.
Okay, that makes sense to me; after all, it's logical to have all the math done in g/rev, since that seems to be the industry standard for MAF-based control. As for the grams-per-"stroke" variable being upstream of the engine load calc, do you mean the flow goes something like: MAF sensor V --> MAF sensor g/s --> MAF sensor filtered g/s --> filtered g/rev --> filtered g/rev × 0.6345 --> engine load (g/rev)? I tried going through the whole calculation flow in Ghidra using one of your archives on GitHub, but a lot of the variables were unnamed. Which .gar file would you say is the most "complete"?
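Here's that hypothesized chain written out as straight-line code, just to make sure I'm reading it right. The MAF transfer curve and the filter constant are placeholders of my own, not ROM values:

```python
# The hypothesized MAF -> load chain as straight-line code.
def maf_v_to_gs(volts):
    # stand-in for the calibrated MAF voltage -> g/s lookup curve
    return 4.0 * volts ** 3

def lowpass(prev, new, alpha=0.3):
    # first-order low-pass: needs the previous-loop value
    return prev + alpha * (new - prev)

def engine_load(maf_volts, rpm, prev_filtered_gs):
    gs = maf_v_to_gs(maf_volts)                 # MAF V -> g/s
    gs_filt = lowpass(prev_filtered_gs, gs)     # filtered g/s
    g_per_rev = gs_filt * 60.0 / rpm            # g/s -> g/rev
    return g_per_rev * 0.6345                   # scaled value used as table pointer

# Steady state: filter already settled at the current reading.
print(round(engine_load(2.0, 3000, prev_filtered_gs=32.0), 5))   # 0.40608
```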
Also, I tried downloading my ROM through an ELM327 device using ELM327-to-J2534 spoof software. I was able to get rx8man's tool to recognize the ECU calibration and VIN, but couldn't pull a dump, probably due to latency issues with the ELM327 and it needing to ping a security code or something. I have an actual J2534 device coming soon, so I should be able to upload a dump from a 2007 US MT model "N3M5EK000", which is the one that has been patched by TSB campaign MSP16; that patch looks like it affects the tables for timing and fueling at idle under high IAT and ambient temps.
In the grand scheme of things, yeah, it doesn't really matter, but there's always that urge to understand something 100% lol. The main thing is that the difference isn't just 1%; it seems to be on the order of 15%, which I'd call significant. OEM engineers would KILL for 15% more airflow on an NA engine. But yes, for just tuning, load is load and it doesn't matter too much. I'm trying to go a step further, though, and do a lot of derivative calculations that are sensitive to the exact amount of airflow.
That math train is basically correct. Some of the naming is odd because the first-order low-pass function essentially needs a "previous loop" value, so while the filter applies to the variable, the name change can be kind of confusing. The latest uploaded archive file should be the most up to date. I'd say the vast majority of variables will be unnamed in those files.
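If it helps, the "previous loop" thing looks like this in any first-order low-pass; the alpha here is an arbitrary example value, not the ROM's filter constant:

```python
# A generic first-order low-pass with its "previous loop" state held
# in the object, which is what the odd variable naming is carrying.
class LowPass:
    def __init__(self, alpha):
        self.alpha = alpha
        self.prev = None      # the extra "previous loop" variable

    def update(self, x):
        if self.prev is None:
            self.prev = x     # seed on the first loop iteration
        else:
            self.prev += self.alpha * (x - self.prev)
        return self.prev

f = LowPass(0.25)
out = [f.update(x) for x in (10.0, 10.0, 20.0, 20.0)]
print(out)   # [10.0, 10.0, 12.5, 14.375]
```

So the filtered variable and its previous-loop copy are the same quantity one iteration apart, which is why the names look like duplicates.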
Honestly, I'm not sure the ELM327 supports the upload/download routines or has an onboard buffer for the data, so I sort of wouldn't expect it to work.
I was using a newer version of Ghidra, which erased some of the labels; opening it in the correct version brought everything back. After going through the whole calculation chain, the 0.6345 multiplier really stands out. I see you have it labeled as "displacement/2", but if it truly were half the displacement, it should be 0.654 liters. It's strange to me that Mazda would have this wrong, and it's not rounding, as it has more sig figs than the true displacement.
I sought out where it is used and found it in the "calculateAirVolume??" function. There are some weird constants in there, notably 353.016 and 38.07.
That 353.0163, coupled with the fact that it sits in an equation with 101.32 (standard pressure in kPa) and 273 (for temperature in K), leads me to believe it must be some combination of ideal-gas-law terms. It turns out to equal rho × T at standard conditions (roughly 1.293 g/L × 273 K). So the equation for "air_density??" reads: var = P/T (kPa/K) × (353.0163/101.32) (g·K/(L·kPa)). The units are then kPa·g·K/(K·L·kPa), which simplifies to g/L, so this just calculates air density given pressure and temperature.
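As a sanity check on that reconstruction, here's the density formula with the 353.0163 constant plugged in; evaluating it at a couple of temperatures gives the familiar textbook air densities:

```python
# air_density?? as reconstructed above: var = (P/T) * (353.0163/101.32),
# with P in kPa and T in K, giving g/L.
def air_density_g_per_l(pressure_kpa, temp_k):
    return (pressure_kpa / temp_k) * (353.0163 / 101.32)

print(round(air_density_g_per_l(101.32, 273.0), 3))   # 1.293 g/L at 0 C
print(round(air_density_g_per_l(101.32, 293.0), 3))   # 1.205 g/L at 20 C
```

The 20 °C result landing on 1.205 g/L, the same density the 0.6345 factor seems to assume, is a nice consistency check.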
Now, we see that engine_air_density?? = number × 0.6345 × density / rpm, presumably with units of g·h/rev, gram-hours per revolution. Strange.
This value is then divided by the load and filtered by comparing against "secondary_air_pump_volume??", which is determined by the minimum of either the engine load or rpm/60. That's strange, because if it's the minimum of two values, you'd expect both to have the same units when compared.
Now this is gonna get a little crazy, so hold on; lots of assumptions and stuff.
Let's assume my theory that engine_load_g_rev is instead a volumetric efficiency, so 1/rev, or effectively unitless. And let's also assume that the 0.6345 constant has units of 1/(density × volume). That makes the unit calculations go like:
engine_air_density?? = (1/g) × (g/L) × (hour/rev) = hours/(rev·L)
Then, in the filter, it is divided by the load, which in this case gives hours/L, which is then compared against secondary_air..., comparing two values with units of 1/rev (for the load) and rev/hour. Hmm, those still don't match.
Okay, let's go back and assume something weird. The value big_number_battery... is the product of fvar2 and fvar3, which depend on rpm × load and battery voltage respectively. To me this smells like something to do with the secondary air injection motor, so I'm going to assume it's a correlated value with units of volume/time, since it is pumping air. Now this cascades through the calculations:
engine_air_density = (L/hr) × (1/g) × (g/L) × (hours/rev) = 1/rev, and ta-da, it is now unitless!!
My best guess is that this function calculates how much air to send through the secondary air pump, via the secondary_air_pump_volume variable, which I suspect is actually a duty cycle or something. The whole calculation chain only seems to make sense if load is unitless, in 1/rev, at least to me.
Also, looking at the calcFuelVolumeRequest function and the fueling_request_cc_rev value, I'm not sure how the units get to cc. Lambda is unitless since it's just a ratio, the fuel-air ratio is also unitless, so that leaves just engine_load_g_rev, which means the end result should come out in the same units of g/rev.
I see in the fuel PW function that it expects fuel_request to be in cc, since it divides by the injector flow rate, which makes sense. So in fuel_volume_req_cc the units must obviously be cc (duh lol), but nothing in the calculation chain indicates that they actually are.
Now, what if that 0.07196 didn't actually represent 1/AFR and wasn't unitless? To make the units work out so that fuel_request comes out in cc, and assuming load is 1/rev, the 0.07196 would need units of cc/rev. Well, at load = 1, V = 0.654 L, and using the density of air at STP of 1.205 g/L, we get 0.78807 g of air per combustion chamber at 100% VE (load = 1). Divide that by 14.7 and we get 0.05361 grams of fuel required. Divide that by the density of gasoline at STP and you get 0.07196 cc of fuel required for stoich combustion at load = 1. This brings it all together: the 0.07196 has units of cc. This has to be where fuel_request gets its volume unit from.
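Working that derivation step by step in code (the 0.745 g/cc gasoline density is back-solved to make the numbers land where mine do above, so treat it as an assumption, not a known ROM constant):

```python
# The 0.07196 derivation from above, step by step.
DISP_L = 0.654          # swept volume per combustion chamber, liters
RHO_AIR = 1.205         # air density at STP, g/L
AFR_STOICH = 14.7
RHO_FUEL = 0.745        # gasoline density, g/cc (assumed, back-solved)

air_g = DISP_L * RHO_AIR          # g of air at load = 1 (100% VE)
fuel_g = air_g / AFR_STOICH       # g of fuel for stoich combustion
fuel_cc = fuel_g / RHO_FUEL       # volume of that fuel
print(round(air_g, 5), round(fuel_g, 5), round(fuel_cc, 5))
# 0.78807 0.05361 0.07196
```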
Very long thread and very confusing, sorry, I suck at wording and explaining sometimes. TL;DR: I think load is in dimensionless units of VE per rev, with 1 representing a completely full combustion chamber at STP, and 0.07196 is not 1/AFR but actually the cc of fuel required for stoich combustion at load = 1.