Gas gauge calibration unit

As I have likely explained in my earlier posts, the calibrator separates the measurement of the tank sender from the driving of the gauge.

To measure the sender, a constant current of 20 mA is supplied and the voltage across it is measured using the A/D in the microcontroller. Ohm's law gives V = I x R, or R = V/I, so reading the voltage and dividing by the known current yields the resistance. The important part is knowing the resistance at empty, at full, and in between. The exact values vary by sender and float travel, but the calibrator can adjust for that.
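As a rough illustration, here is how that resistance calculation might look in firmware. This is only a sketch; the constant-current value, A/D reference, and resolution below are assumptions, not the calibrator's actual numbers.

```c
#include <stdint.h>

#define SENSE_CURRENT_A  0.020f    /* assumed 20 mA constant-current source */
#define ADC_VREF_V       5.0f      /* assumed A/D reference voltage         */
#define ADC_FULL_SCALE   1023.0f   /* assumed 10-bit A/D converter          */

/* Convert a raw A/D reading of the sender voltage into ohms using R = V/I. */
float sender_resistance_ohms(uint16_t adc_counts)
{
    float volts = (adc_counts / ADC_FULL_SCALE) * ADC_VREF_V;
    return volts / SENSE_CURRENT_A;
}
```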

About gain and offset: you may remember slope from the equation of a line, y = mx + b. The m is the gain, slope, or gradient (the steepness of the incline), and b is the intercept.
The intercept is also called the bias or offset. Here y is the PWM value, x is the sender resistance, b is the empty PWM, and m is the gain.
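To make the mapping concrete, here is the same line equation written with the gauge's quantities. The names are purely illustrative, not the calibrator's actual firmware.

```c
/* y = m*x + b with the calibrator's quantities substituted in:
 *   pwm    - duty cycle sent to the gauge driver (y)
 *   r      - measured sender resistance in ohms  (x)
 *   gain   - m, how much the PWM changes per ohm of sender change
 *   offset - b, the bias that places the empty end of the scale
 */
float gauge_pwm(float r, float gain, float offset)
{
    return gain * r + offset;
}
```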

To put it in perspective, the offset is the PWM value that makes the gauge read empty, and with the correct gain the PWM will drive the gauge to full: /

The slope is actually negative, or \, because the sender resistance is lower at full than at empty.

Imagine a plot of resistance versus sender travel, and a plot of PWM versus gauge travel.
Then imagine a way to tie the endpoints together, and a function that generates the desired PWM from the sender resistance.
There is a change of resistance from empty to full and a change of PWM from empty to full; m is the ratio of those two changes.
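Worked through in code with purely hypothetical endpoint numbers (say 240 ohms at empty, 33 ohms at full, and PWM values of 50 at empty and 200 at full), the derivation might look like the sketch below. The negative gain falls out automatically because the resistance drops from empty to full.

```c
#include <stdio.h>

/* Hypothetical calibration endpoints, not real sender or gauge values. */
#define R_EMPTY    240.0f   /* sender ohms at empty */
#define R_FULL      33.0f   /* sender ohms at full  */
#define PWM_EMPTY   50.0f   /* PWM that reads empty */
#define PWM_FULL   200.0f   /* PWM that reads full  */

int main(void)
{
    /* m: change in PWM divided by change in resistance, empty to full */
    float gain = (PWM_FULL - PWM_EMPTY) / (R_FULL - R_EMPTY);

    /* b: back out the intercept from either endpoint */
    float offset = PWM_EMPTY - gain * R_EMPTY;

    printf("gain = %.3f PWM/ohm, offset = %.1f PWM\n", gain, offset);
    /* With these numbers: gain = 150 / -207, about -0.725; offset about 223.9 */
    return 0;
}
```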

So the calibrator works by knowing the sender resistance at empty and the PWM value that drives the gauge to empty, plus the resistance at full and the PWM value that drives the gauge to full. The user pushes the buttons to set the values for the gauge endpoints. The simple equation of a line, learned by most of us in about 5th grade and long forgotten, is then used to drive the gauge for every reading between empty and full.
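Put together, that interpolation might look something like the sketch below. The structure, the names, and the clamping at the endpoints are all my assumptions for illustration, not the actual firmware.

```c
/* Calibration endpoints stored when the user pushes the buttons
 * (illustrative structure only). */
typedef struct {
    float r_empty, pwm_empty;   /* sender ohms / PWM at empty */
    float r_full,  pwm_full;    /* sender ohms / PWM at full  */
} gauge_cal_t;

/* Point-slope form of the line equation, anchored at the empty endpoint,
 * giving the PWM for any sender resistance. The result is clamped so
 * out-of-range readings stop at the calibrated endpoints. */
float pwm_from_resistance(const gauge_cal_t *cal, float r)
{
    float gain = (cal->pwm_full - cal->pwm_empty) / (cal->r_full - cal->r_empty);
    float pwm  = cal->pwm_empty + gain * (r - cal->r_empty);

    /* pwm_full may be above or below pwm_empty, so order the limits first. */
    float lo = cal->pwm_empty < cal->pwm_full ? cal->pwm_empty : cal->pwm_full;
    float hi = cal->pwm_empty < cal->pwm_full ? cal->pwm_full  : cal->pwm_empty;
    if (pwm < lo) pwm = lo;
    if (pwm > hi) pwm = hi;
    return pwm;
}
```

The four stored numbers from the button-driven calibration step are all a function like this needs to drive the gauge for any tank level.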

If this post is less than clear, I am sure more can be found by entering "line equation" in a browser search field.