Wednesday, June 19, 2019

And, for completeness' sake:

A short Variations Too documentation video from Currents 2019 in Santa Fe:


(many thanks to my spokes-model, Raina Wellman!)

Monday, April 29, 2019

Variations Too -- The Program

I seem to have (finally) come to rest in the development cycle of the Variations project, so I thought I should post the actual Arduino code, for completeness' sake....

VariationsToo.zip

You will also need all (or most) of my previously described libraries:

schipArduino.zip

I had to give up on the HCSR04 sonar distance detector because it was unreliable, noisy, and sensitive to vibrations when the arm servo motors were running. I replaced it with my old favorite, the GP2Y0A02YK (or equivalent) IR distance sensor that triggers out at about 2 meters. This only needs an ADC channel to interface, so it's a lot simpler anyway.
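For reference, the whole sensor interface boils down to something like the sketch below. The pin assignment and threshold value are just illustrative guesses on my part, not the actual Variations wiring or calibration:

  // Minimal presence detector using a GP2Y0A02YK on an ADC pin.
  // A0 and NEAR_THRESHOLD are illustrative, not the real wiring.
  const int IR_PIN = A0;
  const int NEAR_THRESHOLD = 200;   // raw ADC counts; higher = closer target

  void setup() {
    Serial.begin(9600);
  }

  void loop() {
    int raw = analogRead(IR_PIN);   // 0..1023 at the 5V reference
    if (raw > NEAR_THRESHOLD) {     // Sharp IR output voltage rises as
      Serial.println("triggered");  // the target gets closer
    }
    delay(50);                      // the sensor updates roughly every 40ms
  }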

Variations Too will be in:

CURRENTS NEW MEDIA 2019

in Santa Fe, NM, from June 7 - 23. So come on by!

...You and I both have to wait until the piece is installed in the gallery before I can make a decent video because I have run out of room for clean backdrops in my home/studio...

Tuesday, March 19, 2019

Random Thought

I have to say that, in general, I do not believe in randomness. I'm sure there are some Quantum Mechanics (Maniacs?) out there who will beg to differ and provide supporting arguments, but until then....

Let's say I flip a coin. This particular flip comes up HEADS. Can you provide me with a proof that it could have been TAILS? Sure, sure, you can show that the next few flips might have different outcomes, and further that the next 1 billion flips will average dangnabbitedly close to 50% each. But that's not what I asked. I want proof that the original action might have taken a turn to the T-side. Since that has already (not) happened, it is in the -- still apparently -- inviolable past and cannot be changed. So maybe it wasn't random at all?

Don't get me wrong, I'm not trying to argue that we can predict the future. Both complicatedness (many moving parts) and complexity (intersecting feedback loops) make that practically and theoretically impossible.

I'm just saying that we can predict the past.

Monday, February 4, 2019

Fixed Point Failure

A Fixed Point math library and Neural Net demo
for the Arduino...

Or: Multiple cascading failures all in one place!


Last year I found a simple self-contained Artificial Neural Net demo written for the Arduino at: robotics.hobbizine.com/arduinoann.html and spent a goodly amount of time futzing around with it. I now, almost, understand HOW they work, but have only a glimmering of insight into WHY. The demo does something really silly: The inputs are an array of bit patterns used to drive a 7-segment numeric display and the outputs are the binary bit pattern for that digit (basically the reverse of a binary to 7-segment display driver). Someone not totally under the influence of ANNs could do this with a simple 10 byte lookup table. But that is not us. On the plus side it _learns_ how to do the decoding by torturous example, so we don't have to bother our tiny brains with the task of designing the lookup table.
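For the record, the lookup table the net so laboriously imitates might look like this (the segment bit order is my assumption; any consistent encoding works):

  // The "simple 10 byte lookup table": map a 7-segment pattern back
  // to its digit by linear search. Bit order (gfedcba) is assumed.
  const uint8_t SEG_PATTERNS[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66,   // digits 0..4
    0x6D, 0x7D, 0x07, 0x7F, 0x6F    // digits 5..9
  };

  int8_t segToDigit(uint8_t pattern) {
    for (uint8_t d = 0; d < 10; d++) {
      if (SEG_PATTERNS[d] == pattern) return d;
    }
    return -1;  // not a valid digit pattern
  }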

HOW ANNs work on the Arduino is:
  • a) Extremely slowly, because they use a metric shit-ton of floating point arithmetic; and,
  • b) Not very interestingly, because each weight takes up 4 bytes of RAM and there is only about 1KB kicking around after the locals, stack, and whatever else are accounted for -- the simple demo program illustrated here uses about half of that 1KB just for the forwardProp() node-weights, and then the backProp() demo uses the other half for temporary storage, leaving just about nothing to implement an actually interesting network.
But. I thought I could make a small contribution by replacing the floating point -- all emulated in software -- with an integer-based Fixed Point implementation -- whose basic arithmetic is directly supported by the ATMEGA hardware. This would also halve the number of bytes used by each weight value. Brilliant, yes?

And in fact. My FPVAL class works (see below for zip file).  Except, err, well, it doesn't save any execution time. But more on that later....

Anyway. The FPVAL implementation uses a 2-byte int16_t as the basic storage element (half the size of a float) and pays for this with a very limited range and resolution. The top byte of the int16 is used as the "integer" portion of the value -- so the range is about +/-128. The bottom byte is used as the fraction portion -- so the resolution is 1/256, or about .0039 per step. At first blush, and seemingly also in fact, this is just about all you need for ANN weights.
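In the usual notation that's a Q8.8 format. A bare-bones sketch of just the representation (not the actual FPVAL class):

  // Q8.8 fixed point: stored value = real value * 256.
  typedef int16_t q88_t;

  q88_t q88_fromFloat(float f) { return (q88_t)(f * 256.0f); }
  float q88_toFloat(q88_t q)   { return (float)q / 256.0f; }

  // q88_fromFloat(1.0)  == 256 (0x0100)
  // smallest step       == 1   (1/256, about .0039)
  // range               == -128.0 .. +127.996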

As it turns out, simple 16-bit integer arithmetic Just Works(TM) to manipulate the values, with the proviso that some judicious up and down shifting is used to maintain Engineering Tolerances. This is wrapped in a C++ class which overloads all the common arithmetic and logic operators such that FPVALs can be dropped into slots where floats were used without changing (much of) the program syntax. This is illustrated in the neuralNetFP.cpp file, where you can switch between using real floats and FPVALs with the "USEFLOATS" define in netConfig.h.
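The switch amounts to something along these lines (paraphrased; see the zip for the real netConfig.h):

  // Pick the weight type once; the rest of the net code
  // compiles against either one unchanged.
  #ifdef USEFLOATS
    typedef float weight_t;
  #else
    typedef FPVAL weight_t;   // drop-in thanks to the overloaded operators
  #endif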

Unfortunately it appears that a lot of buggering around is also needed to do the shifting, check for overflow, and handle rounding errors. This can all be seen in the fpval.cpp implementation file. An interesting(?) aside: I found that I had to do value rounding in the multiply and divide methods -- otherwise the backProp() functions just hit the negative rail without converging.
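The multiply is where most of that lives. The core of it, sketched (not the exact fpval.cpp code):

  // Q8.8 multiply with round-to-nearest. The 32-bit product is Q16.16;
  // plain truncation (just >> 8) rounds toward negative infinity, which
  // is the sort of bias that drives backProp() into the negative rail.
  q88_t q88_mul(q88_t a, q88_t b) {
    int32_t prod = (int32_t)a * (int32_t)b;  // Q16.16 intermediate
    prod += 0x80;                            // add half an LSB to round
    return (q88_t)(prod >> 8);               // back down to Q8.8
  }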

I also replaced the exponential in the ANN sigmoid activation function with a stepwise linear approximation, which rids the code of float dependencies.
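Something in this spirit, though the real version in the zip may use more breakpoints:

  // Piecewise-linear stand-in for sigmoid(x) = 1/(1 + exp(-x)), in Q8.8.
  // The crudest 3-segment version; no floats anywhere.
  q88_t q88_sigmoid(q88_t x) {
    const q88_t ONE  = 256;     // 1.0 in Q8.8
    const q88_t HALF = 128;     // 0.5
    const q88_t FOUR = 1024;    // 4.0
    if (x <= -FOUR) return 0;   // saturate low
    if (x >=  FOUR) return ONE; // saturate high
    return HALF + (x >> 3);     // central segment: 0.5 + x/8
  }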

I forged ahead and got the danged ANN demo to work with either floats or FPVALs. And that's when I found that I wasn't saving any execution time.  (Except, for some as yet unexplained reason, the number of FPVAL backprop learning cycles seems to be about 1/4 of that needed when using floats[??]).

After a lot of quite painful analysis I determined that calling the functions which implement the FPVAL arithmetic entails enough overhead that they are almost equal in execution time to the optimized GCC float library used on the ATMEGA. Most of the painful part of the analysis was in fighting the optimizer, tooth and nail, but I will not belabor that process.

On the other hand, if you are careful NOT to use any floating point values or functions, you can save two bytes per value and around 1KB of program space. Which might be useful, to someone, sometime.


So. What's in this bolus then is the result of all this peregrination. It is not entirely coherent because I just threw in the towel as described above. But. Here it is:

http://www.etantdonnes.com/DATA/schipAANN.zip

Thursday, January 31, 2019

Some Driveline Enhancements

Variations Too, again


So. I've been dragging my feet -- once again -- because everything just seemed too hard over the holidays, but I have made progress nonetheless....

While doing limited in-camera demos I found that the Variations second arm linkage just tore itself apart pretty consistently. This was due to there being nothing but a bit of stickiness holding the axle into the arm. I originally used two pins through the whole sandwich to keep the gear from spinning, but I didn't have anything really holding the layers together, and there was too much torque for the sticky to manage.

This has, perhaps, been remedied:
Improved(?) axle mounting

After the arm was all re-assembled, I drilled a hole longitudinally through the circular backing plate and the axle, and glued a 1" long by ~1/32" diameter nail into the hole. This of course requires solid drill-press (or mill) mounting, and careful attention to not breaking the (@$!@#) miniature drill. Here you can also see the two pins (little brass brads, also about 1/32" dia.) that pierce the entire sandwich to prevent the gear from spinning on its own.

Compare to the previous layout, where the above photo is looking straight on from the bottom:
So. After assembling and gluing all the little bits into their sandwich, one needs to fire up the machine shop and drill two transverse holes almost through all of the plate-arm-gear layers -- the "almost" part being that we don't want to completely pierce the gear itself, thus the pins need to be shorter than the full thickness (which may vary according to the arm material). Then rotate the arm and drill a longitudinal hole through the backing plate and axle -- basically straight down, centered where the "Plexiglas backing plate" arrow points in the above Gear Linkage photo. THEN glue the relevant pins into the holes. I've tried both Goop, which is a bit hard to get schmushed into the holes but sticks to the pins, and filled acrylic-solvent glue, which can be squirted into the holes but only sticks to the pins in an advisory way. Fortunately the sticky doesn't need to provide much mechanical strength; it only has to keep the pins in place.

I did this for the two lower arm linkages and made the executive decision that the torque on the smallest, upper, arm did not merit the extra effort. YMMV...


So.

I think this may be the end of the mechanical portion of our time together, save perhaps for cable routing, which is still rather ad hoc.