
09 Aug 13

Going to Small Sat

Tomorrow begins the Small Sat conference in Logan, Utah. It is 6 days immersed in space science, technology, policy, and inspiration. The format looks to be a new presentation every 15 minutes! There are several people, businesses, and presentations that have my attention. I’m hoping to meet Zac Manchester from the KickSat project, hear from Interorbital about their summer’s work, have a few laughs again with Fred of Team FredNet, look for mission control software best practices for Part-Time Scientists, and talk with other ion engine researchers. With good cell reception, I hope to tweet cool facts during the conference (@iondragonfly).

09 Aug 13

KickSat Sprite CGP

How’s that for a cryptic title? The KickSat project will deploy 200 satellites, called “Sprites”, each the size of a business card, into low Earth orbit in December 2013. Along with Don Smith and my colleagues at Part-Time Scientists, I am writing software for one of these Sprites. The most interesting challenge is communications. At roughly 1 byte per second and with no battery to power a radio through a solar outage, getting a lot of data out of the communication stream is difficult. We humans are pursuing the classic route of performing trade studies to select a communication protocol. Meanwhile, the Cartesian Genetic Programming (CGP) machinery grinds away looking for an even better protocol. I’ve reached out to the creator of CGP, Julian Miller, who has also brought in Simon Harding to assist. I cannot overstate the value of their input on tuning parameters and setting up the problem. Final results will be posted here in early September. In the meantime, here is the structure of the CGP system.

The Sprite has a 3-axis magnetometer, a 3-axis gyro, a temperature sensor, and a small persistent memory. Counting just the sensor readings themselves, we have 7 telemetry points to be sent to the ground. Adding values for system uptime, reboot count, min/max readings, sensor noise, and sensor correlations raises this count further.

The generic structure is to reduce the resolution to 8 bits per telemetry point and to send points in either a round-robin or random sequence, with an indicator telling us which data point follows. Pairs of telemetry values are grouped under a single ID and sent together. Success is defined by recovering the sensor data from both complete and corrupted packets. With the amount of redundancy built into the Sprite’s low-level protocol, we aren’t really concerned with byte-level corruption so much as with packet truncation due to loss of power.
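
As a rough sketch of that generic structure (the pair IDs, value names, and Python framing below are my own illustration, not the actual Sprite flight code):

```python
# Illustrative only: the pair assignments are assumptions, not the real
# Sprite telemetry map.
TELEMETRY_PAIRS = {
    0: ("mag_x", "mag_y"),
    1: ("mag_z", "gyro_x"),
    2: ("gyro_y", "gyro_z"),
    3: ("temp", "uptime"),
}

def frame_pair(pair_id, a, b):
    """Group two 8-bit telemetry values under one ID so the ground
    station knows which data points follow."""
    assert 0 <= a <= 255 and 0 <= b <= 255
    return [pair_id, a, b]

def round_robin(readings):
    """Yield frames in a fixed rotation; a random sequence works the same
    way, since the ID travels with the data."""
    while True:
        for pair_id, (name_a, name_b) in TELEMETRY_PAIRS.items():
            yield frame_pair(pair_id, readings[name_a], readings[name_b])
```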

Each “world” consists of a CGP individual to encode a packet and another individual to decode a packet. The encoder is presented with 16 separate inputs, one for each bit in the two 8-bit sensor readings. It produces 16 outputs, each interpreted as a binary value (>0 -> 1, <=0 -> 0). These constitute the packet that will be sent.

A packet is made from the encoder output and presented to the decoder. The decoder takes 16 inputs, each a tri-state bit: 1, 0, or -1, with -1 meaning “unknown”, as can happen with truncated packets from the satellite. The decoder’s 16 outputs are also tri-state bits (>0 -> 1, ==0 -> 0, <0 -> -1 unknown) and are mapped back to the two 8-bit sensor readings.
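
A minimal sketch of that interface, assuming the CGP individuals are exposed as callables mapping 16 numeric inputs to 16 numeric outputs (the helper names are mine):

```python
def to_bits(a, b):
    """Split two 8-bit readings into 16 bit inputs, MSB first for each reading."""
    return [(a >> i) & 1 for i in range(7, -1, -1)] + \
           [(b >> i) & 1 for i in range(7, -1, -1)]

def encode(encoder, a, b):
    """Run the encoder individual and threshold its 16 outputs: >0 -> 1, <=0 -> 0."""
    return [1 if x > 0 else 0 for x in encoder(to_bits(a, b))]

def decode(decoder, packet_bits):
    """Run the decoder individual on tri-state inputs (1, 0, or -1 for unknown)
    and threshold its outputs: >0 -> 1, ==0 -> 0, <0 -> -1 (unknown)."""
    return [1 if x > 0 else (0 if x == 0 else -1) for x in decoder(packet_bits)]
```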

The high-order bits of each 8-bit sensor reading are the most valuable. In a corrupted packet, the reward for getting the highest-order bit of a sensor reading correct is 128. The second is 64, and so on down. In a normal packet, the rewards are 3x the corrupted reward schedule. This successfully leads to perfection being achieved on bits 0 and 8 first (the MSBs of the two sensor readings) and to normal packets working before truncated packets (though only slightly). If all test cases are handled perfectly, then the total size of the encoder and decoder is used to drive the solution toward smaller programs.
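
In code, that reward schedule looks roughly like this (reusing the to_bits helper from the sketch above; the function is my paraphrase, not the actual fitness code):

```python
def reward(decoded_bits, true_a, true_b, corrupted):
    """Score 16 decoded tri-state bits against the true readings.
    Within each 8-bit reading the MSB is worth 128, the next 64, and so on;
    normal (uncorrupted) packets score 3x the corrupted schedule."""
    truth = to_bits(true_a, true_b)
    scale = 1 if corrupted else 3
    total = 0
    for i, (got, want) in enumerate(zip(decoded_bits, truth)):
        if got == want:
            total += scale * (128 >> (i % 8))   # 128, 64, ..., 1 per reading
    return total
```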

100 random 8-bit pairs are used for each generation in each world. Validation is done with 1000 random pairs, and there is strong correlation between generation test scores and validation scores. Testing the full set of 65,536 pairs has not been worth it for validation. Each encoder and decoder pair is given a new set of random test cases with each generation. Each random pair is presented normally (input -> encoder -> packet -> decoder -> reward) as well as with truncation (input -> encoder -> packet -> set last 8 bits to -1 -> decoder -> reward).
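
Put together with the sketches above, one generation’s scoring looks approximately like this (an illustration only, with truncation modeled by setting the last 8 bits to -1):

```python
import random

def evaluate(encoder, decoder, n_cases=100):
    """Score an encoder/decoder pair on fresh random 8-bit pairs, each
    presented both normally and truncated (last 8 bits unknown)."""
    score = 0
    for _ in range(n_cases):
        a, b = random.randrange(256), random.randrange(256)
        packet = encode(encoder, a, b)
        score += reward(decode(decoder, packet), a, b, corrupted=False)
        score += reward(decode(decoder, packet[:8] + [-1] * 8), a, b, corrupted=True)
    return score
```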

Though the encoder and decoder are separate individuals, the final reward score is assigned to both, causing them to advance or fail together. This is based on my past experience with CGP protocol simulation, which showed very, very, very slow progress when each was ranked separately.

The population structure is 1+7. That is, the single best individual (or a child that is not worse than the parent) is promoted and 7 mutant children are created. There is a 1+7 population for encoders and a 1+7 population for decoders.
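
A simplified version of that loop might look like the following. For brevity it mutates the encoder and decoder as one coupled pair and leans on the evaluate sketch above, whereas the real setup keeps a separate 1+7 population for each; mutate stands in for CGP’s genome mutation.

```python
def evolve(encoder, decoder, mutate, generations=10000):
    """1+7 evolution strategy: promote the parent or any child that is not
    worse, spawning 7 mutants each generation on a fresh random test set."""
    for _ in range(generations):
        parent_score = evaluate(encoder, decoder)   # re-scored: test cases change
        for _ in range(7):
            child_enc, child_dec = mutate(encoder), mutate(decoder)
            child_score = evaluate(child_enc, child_dec)
            if child_score >= parent_score:         # "not worse than the parent"
                encoder, decoder, parent_score = child_enc, child_dec, child_score
    return encoder, decoder
```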

09 Aug 13

CGP, Compressed Sensing, and Data Collection

We’re doing vacuum chamber tests of the FRETS1 satellite’s ion engine and building an embedded data collection system. Naturally, I want every nuance of every signal trace at several million samples per second on lots of channels. Even after being talked down to 4 channels, 2 MS/s per channel, and 8-bit samples, it’s still a lot of data for an embedded system that must be the size of an Arduino in the prototyping stage and even smaller on the satellite.

Lots and lots of searches later, it is clear we aren’t going to have that system. We’re going to have one better. To start with, the incoming data can be fed into a flexible triggering system and threshold events counted in small time slices. The data storage and sampling requirements are 3-6 orders of magnitude lower with this approach, removing large amounts of add-on SRAM and engineering headache. The downside is that the data is essentially compressed in the hardware before it gets to the collection CPU. Now it gets interesting. Such data can be reconstructed using the Compressed Sensing GPU library I presented at NVIDIA’s GTC this year. Further, the recovered signal can be modeled with an equation using Cartesian Genetic Programming (CGP). The equation is fit to the compressed data, to the recovered data from the Compressed Sensing library, and to physical truths of continuity.
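
As a rough sketch of the front end (the function, slice length, and single fixed threshold are illustrative assumptions; the real triggering system is more flexible than this):

```python
def count_threshold_events(samples, threshold, slice_len):
    """Count rising-edge threshold crossings in each fixed-length time slice,
    instead of storing the raw samples themselves."""
    counts = []
    prev_above = False
    for start in range(0, len(samples), slice_len):
        events = 0
        for s in samples[start:start + slice_len]:
            above = s > threshold
            if above and not prev_above:
                events += 1
            prev_above = above
        counts.append(events)
    return counts

# At 2 MS/s per channel, 1 ms slices reduce 2000 raw samples to a single
# count, roughly 3 orders of magnitude less data before anything else is done.
```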

Admittedly, that’s a lot of math for an embedded controller to be doing. However, we can build basic engine controls using just the compressed data. The engine controller will have even better data over time as the CGP and Compressed Sensing solutions converge. And that is what I mean by a “better” data collection system – it has a series of fallbacks needed for a space mission and still works just fine on the bench.

18 Jul 12

Thinking with laser cutters

I finally made my first part using a laser cutter.  There’s a period on that sentence because the most exciting event happened 30 minutes later, after the second and third parts came out perfectly.  The background: the ion engine needs a test stand for use inside a vacuum chamber.  I decided to make this stand out of clear plastic so all internal parts (especially unwanted sparking) could still be observed.  A laser cutter at the nearby TechShop was the obvious choice.  After some fussing around with 2D drawing programs, I was thankfully nudged into using Inventor to make real 3D parts and to confirm how they fit.  When cutting day finally came, the process was flawless and the parts came out so easily that I had 90 minutes left in my laser reservation.

There was a class running in the woodshop, which I needed in order to make holders for the vacuum chamber’s thick-walled glass cylinder.  Looking at the woodshop, looking at the laser, looking at the woodshop, looking at the laser – I had a “2001” moment!  It was suddenly clear that the 1/4″ acrylic sitting right there was perfect for cradling the glass cylinder and that a few very simple pieces of 1″ dowel would give all the 3D-ness that was actually needed.  10 minutes of drafting later, parts were coming out.  15 minutes of wood cuts and drilling later, it was done.  That “2001” moment when the mental barriers fell really emphasizes the importance of putting yourself into new situations.

Here are the parts for the support and chamber partially assembled.  Enjoy!

30 Dec 11

Evolving Custom Communication Protocols

I once again spoke at 28C3 in Berlin, Germany for Part-Time Scientists.  Today, the topic was Evolving Custom Communication Protocols.  We used Cartesian Genetic Programming to create the finite state machines of a transceiver, including all packet flow control logic.  This only works when there is at least some trivial adversary, such as Outer Space for our moon mission.  The technique works even better when an attacker is included, given full access to all data, and coevolved with the transceiver.  In our runs, transceivers evolved with and without an attacker both handle a validation case equally well, but the transceivers evolved with an attacker are accelerating their evolution and will soon outperform their less-stressed cousins.

Code, data, and the presentation PDF/PPT are online here:

http://dl.dropbox.com/u/5001972/EvolvingCustomCommunicationProtocols.zip

The YouTube video is here:

http://www.youtube.com/watch?v=zoOOpiMJQ8s

 

29 Dec 11

Algorithm Invention with Cartesian Genetic Programming

A couple of days ago, I had the honor of presenting at 28C3 in Berlin, Germany for the Part-Time Scientists!  The topic was “Automatic Algorithm Invention” and featured my foray into Cartesian Genetic Programming.  The topic is actually exciting and much easier to get started with than you might think!  To prove it, I’ve posted the presentation materials and code:

http://dl.dropbox.com/u/5001972/AutomaticAlgorithmInvention.zip

There is also a YouTube video of the presentation:

http://www.youtube.com/watch?v=tQvFZVlM2Gk

For me, the heart of the technique is the addition of “state” to the input and output variables.  Chromosomes each get private state and share common state.  They are evolved to cooperate and share information via this state.
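
A minimal sketch of that idea; the representation below is my own illustration, not the code in the linked archive:

```python
def step(program, inputs, private_state, shared_state):
    """Evaluate one evolved program. Its inputs include its own private state
    and the state shared by all chromosomes; its outputs include updated
    copies of both, so information persists across calls and can be passed
    between cooperating programs."""
    raw = program(inputs + private_state + shared_state)
    n_out, n_priv = len(inputs), len(private_state)   # illustrative output split
    outputs = raw[:n_out]
    new_private = raw[n_out:n_out + n_priv]
    new_shared = raw[n_out + n_priv:]
    return outputs, new_private, new_shared
```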

Danke to everyone who turned out, and I look forward to hearing about your adventures with evolving algorithms!

04 Oct 11

CompSci says optimism works

I’ve been doing some reading for the upcoming ai-class.com course and have some philosophy to share as a result.

Searches are a mainstay of optimization algorithms, and a good search needs a good estimator.  At every step of the way, the algorithm knows the real cost to get to that point and needs estimates of the cost of each possible next step.  But, in general, what policy suits an estimation algorithm best: optimism or pessimism?  (Note: I ruled out “realistic” as it implies we already know the answer to the problem.)

An optimistic estimator underestimates the true cost of a solution.  The final, optimal solution will always cost more than the optimistic estimate.  If we, as humans, were to give optimistic estimates, we’d always be wrong and would underfund our endeavors.  Sounds bad, right?  It turns out, though, that this optimistic estimate always leads us to the optimal, lowest real cost.  Why?  Because with an unattainably low estimate, an algorithm keeps searching.  Eventually, it finds the lowest cost, concludes that all remaining possibilities cost more, and “settles” for the optimal point.

A pessimistic estimator overestimates the real cost.  When we humans overestimate and then find a solution that costs less, we’re heroes.  When an algorithm follows a pessimistic estimator, it stops when it finds a solution that costs less than the estimate.  This may or may not be the actual optimum.  Dangerously, pessimism teaches us that pessimism works, training us to double our estimates and to only trust those who overestimate.  It rarely shows us an optimal solution, and we never really find out what we’re missing.
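
Here is a small illustration of both behaviors using A* search on a made-up four-node graph (the graph, costs, and heuristic numbers are my own example, not from the course material):

```python
import heapq

# Toy graph: the cheapest route to the goal is start -> a -> goal (cost 2).
graph = {
    "start": {"a": 1, "b": 3},
    "a": {"goal": 1},
    "b": {"goal": 1},
    "goal": {},
}

def a_star(heuristic):
    """Plain A*: always expand the node with the lowest cost-so-far + estimate."""
    frontier = [(heuristic["start"], 0, "start", ["start"])]
    while frontier:
        _, cost_so_far, node, path = heapq.heappop(frontier)
        if node == "goal":
            return cost_so_far, path
        for nxt, step in graph[node].items():
            g = cost_so_far + step
            heapq.heappush(frontier, (g + heuristic[nxt], g, nxt, path + [nxt]))

optimistic  = {"start": 1, "a": 0.5, "b": 0.5, "goal": 0}  # never overestimates
pessimistic = {"start": 4, "a": 5,   "b": 2,   "goal": 0}  # overestimates, worst at "a"

print(a_star(optimistic))   # (2, ['start', 'a', 'goal']) -- keeps searching, finds the optimum
print(a_star(pessimistic))  # (4, ['start', 'b', 'goal']) -- stops early on a costlier route
```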

So, what are we to do?  Both options are usually wrong, though the pessimistic one might actually find a high-cost solution that matches its estimate, and being correct sure feels good.  It’s no wonder people question the results of optimization algorithms – the algorithms are just trying to find the best answer while the people are just trying to be right.

25 Jul 11

Maker Faire Detroit 2011

It’s that time of year again, time for Maker Faire Detroit!

This year, I’m showing the FRETS1 satellite.  We’ve been hard at work on the design of the satellite’s systems, the ion engine, and, frankly, learning a lot about space.  I also learned quite a bit from just talking with people a couple of months ago at the Ann Arbor Mini Maker Faire.  It was hot, well over 100°F in the open-air buildings at AAMMF.  Several of the builders had to pack up early because of the heat.  Throughout it all, people kept coming to our display and talking a lot about the satellite project.  Everyone was uncomfortable on the concrete, in the heat, with no air flow, and yet the enthusiasm for a satellite project they could relate to never stopped.  I don’t have words for just how inspiring that experience was, and I hope Maker Faire Detroit brings the same energy – just more comfortably for everyone.

There is now a separate blog about the satellite and engine’s progress, at frets1.com.  Check it out, especially the gallery of high resolution satellite prototype photos.

I hope to see everyone at the Maker Faire Detroit in just a few days (July 30-31 in Dearborn, Michigan at The Henry Ford).

 

25 Jan 11

T-11 Months

We’re building and flying an actual Earth-orbiting satellite.  It’s a TubeSat from Interorbital Systems and will fly in late 2011.

The primary mission is to test one of 3 satellite thrusters on the drawing boards: an ion engine, a low-thrust field-effect engine, and a plasma-based magnetorquer.  The ion engine has the best chance of good thrust but is the most difficult to build and power.  Special techniques are needed to build something not yet tried, exploiting quantum tunneling and laser effects on the Debye sheath.  A field-effect engine is theoretically possible, but the thrust is so low that no one has tried it in space yet.  We’re hoping to use the power of modern GPUs to get the absolute best thrust through FDTD simulation and genetic algorithms.  The last idea isn’t technically an engine per se.  It uses RF waves to push electrons around and around a loop of plasma.  The resulting current is exceedingly large and interacts strongly with Earth’s magnetic field to provide strong torque for attitude control.

The next milestone is a satellite kit kickoff party, complete with a cake in the shape of an astronaut (red velvet cake – more interesting to cut into).  There might even be tin foil hats.

22 Jan 11

3D part two

Ok, ok, ok.  It’s been a while.  How about a status update on the 3D project?  Don and I did in fact present at the Detroit Maker Faire.  We even wrote about it, honest.  Over a couple of days, we talked to nearly 1,500 people and even won an Editor’s Choice blue ribbon!  The great people at Evil Mad Scientist Laboratories who made the Peggy2 board stopped by and took this amazing photo.  Note the jaw-dropped stare.  Kids weren’t the only ones with that look, myself included!  Some of the best scenes were fathers getting giddy and pulling their whole families over to check it out.  Here’s our display in action:

As if the Maker Faire wasn’t awesome enough, we were later contacted by Jonathan Gordon, a student at Johns Hopkins University!  He had an amazing semester and did a very nice version of the display complete with animation.



