2014

January 2014

1/13/14 Hell’s teeth! Another year! At the very least I need to split this diary into separate pages - I guess it’s going to get a lot longer yet… Not my highest priority though. Back to work today after a trip. Changed GShorts to a new genetic data type, GNorm, with normalized values from -1 to +1 and guaranteed 4-digit precision with no rounding errors. Changed map params[] to use these. Then wrote a new kind of map that can contain small genetically defined programs written in the param list, to perform simple arithmetic operations on yin and yang vectors. This should give me a lot of flexibility for clamping, scaling and rotating values. I need this so that I can feed distance and egocentric direction to the WALK map, whose own coordinate frame is speed and rate of turn. Then the creature can see a target and will automatically speed up to a walk or run (depending on arousal level, etc.) and slow to a stop when it gets there. Without modulation the creature would always run if the distance is large, and would slow to a halt too gently. Kind of pleased that I’ve got at least one (albeit trivial) little machine code language in there already - it’s sort of a trademark.
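
How might a type like that work? One plausible reading is fixed-point storage: keep the value as a whole number of ten-thousandths, so every representable value is exact to four decimal places. A minimal sketch of the idea (my guess at the representation, not the actual GNorm code):

```csharp
using System;

// Hypothetical sketch of a normalized fixed-point genetic value.
// Storing ten-thousandths in an integer makes -1.0000..+1.0000 exact
// to 4 decimal digits, with no float rounding errors.
public struct GNorm
{
    private const int Scale = 10000;
    private readonly short raw;            // -10000..+10000

    public GNorm(double v)
    {
        // Clamp to the normalized range, then round to the nearest 1/10000.
        v = Math.Max(-1.0, Math.Min(1.0, v));
        raw = (short)Math.Round(v * Scale);
    }

    public double Value { get { return raw / (double)Scale; } }
}
```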

1/14/14 One of those days when a snag became an opportunity, sort of. Turns out the new MapBias class can replace MapMixer, MapXY and possibly MapGaze. I almost combined it with MapPosture (soon to be MapPattern) but decided that was going too far. MapPattern can handle static transforms via a tissue while MapBias handles simple dynamic modulation. Also turns out that egocentric distance/bearing is more use than Cartesian location for navigating, so that cuts out a map.

1/16/14 Fixed gait control using new MapMixers. Started on the higher postural control but got waylaid trying to stick the legs out sideways to save balance (attitude control is not the right way after all). Sensor math not working yet. Too tired to fix it tonight.

1/17/14 Damn, I thought it was Wednesday! Got hung up on balance control. Sideslip didn’t work as a suitable sensor, so I’ve written a sensor that measures CG relative to the feet. IRL we’d probably use relative foot pressure, but I can’t access that. Either way, it kind of works - if the creature gets close to toppling, it splays out its uphill legs and uses their weight to shift the CG. Fore and aft the idea is to parallelogram so that we try to keep our CG central when climbing/descending. Still needs some work but I think it’s ok. Got other things to do over the w/e so hopefully I’ll remember this on Monday.
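
For illustration, a “CG relative to the feet” sensor could be computed roughly like this in Unity terms (a sketch under assumed names and structure, not the project’s code):

```csharp
using UnityEngine;

// Hypothetical balance sensor: project the whole-body center of gravity
// onto the ground plane and measure its offset from the centroid of the
// feet. A large offset means we're leaning toward a topple in that direction.
public static class BalanceSensor
{
    public static Vector2 CgOffset(Rigidbody[] bodyParts, Transform[] feet)
    {
        Vector3 cg = Vector3.zero;
        float totalMass = 0f;
        foreach (var rb in bodyParts)
        {
            cg += rb.worldCenterOfMass * rb.mass;   // mass-weighted average
            totalMass += rb.mass;
        }
        cg /= totalMass;

        Vector3 support = Vector3.zero;
        foreach (var foot in feet) support += foot.position;
        support /= feet.Length;

        // Height doesn't matter here; only the horizontal lean does.
        return new Vector2(cg.x - support.x, cg.z - support.z);
    }
}
```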

1/20/14 The new anti-topple stuff is just too much heartache - too many signals to mix, etc. It’s just not worth it for what it is. Kept the sensor but dumped the brain maps for it. Must move on. Some bugs to fix and then on to postures and local navigation.

1/23/14 Google Docs was screwed up for a few days but now it’s back, so I’ve finally got round to splitting this diary into several pages! Anti-topple is back, in a new guise. Walking is close to complete now, so I’m at last moving on to connect up gaze to it. Added new map class that can selectively route yin from one map into the yang of another. This will allow various nav targets to be sent to the walk system, etc. Added transients to visual appearance objects and created a multi-color flashlight that I can use to attract gloop’s attention. Salience of this is detected ok by the visual system, but there’s a problem. Vision currently projects rays to find objects, but small objects fall through the gaps in the rays, so I need to revamp the eye to choose the nearest object in each sector. The eye’s really complex, and I can’t remember why I chose to do it this way. Might get messy! Nice to be finally moving beyond the walk system though.
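
The sector idea in miniature (an illustrative sketch only - names and structure are assumptions, and the real eye presumably does much more):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical "nearest object per sector" vision pass. Instead of casting
// discrete rays that small objects can slip between, bin every candidate
// object by its bearing and keep only the closest one in each angular sector.
public static class SectorEye
{
    public static Transform[] NearestPerSector(Transform eye,
        IEnumerable<Transform> candidates, int sectors, float fovDegrees)
    {
        var nearest = new Transform[sectors];
        var bestDist = new float[sectors];
        for (int i = 0; i < sectors; i++) bestDist[i] = float.MaxValue;

        foreach (var obj in candidates)
        {
            // Bearing of the object in the eye's own frame.
            Vector3 local = eye.InverseTransformPoint(obj.position);
            float bearing = Mathf.Atan2(local.x, local.z) * Mathf.Rad2Deg;
            if (Mathf.Abs(bearing) > fovDegrees * 0.5f) continue;  // outside FOV

            int sector = Mathf.Clamp(
                (int)((bearing + fovDegrees * 0.5f) / fovDegrees * sectors),
                0, sectors - 1);
            float dist = local.magnitude;
            if (dist < bestDist[sector])
            {
                bestDist[sector] = dist;
                nearest[sector] = obj;
            }
        }
        return nearest;
    }
}
```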

1/24/14 Phew! Exhausted. Massive rewrite of the eye. New methods for peripheral vision, foveal vision, obstacle mapping and landmarks. All written but not yet debugged. I like it a lot more than the previous method and it’s much more efficient.

1/27/14 Debugged most of the new eye. It probably needs tweaking eventually but it’s close enough that I can get on with the gaze control tomorrow.

February 2014

2/5/14 Trip to see Dean. Before I left I’d debugged the eye and started on the gaze hierarchy, but I got very muddled about the amplitudes and switching rules. In Pittsburgh I decided a daemon-based approach would help. At least it slices the cake the other way - encapsulating all aspects of inter-map signaling and separating out the use cases, instead of separating the different parts of goal-selection, feedback, reflection, etc. Today I started to code it and it helped me see the patterns somewhat better. But it’s sent me off thinking much harder about emotions and drives. ATDR depends on understanding this, so I’m going to figure it out before coding. I can feel some new but surprising symmetry emerging. And maybe some chemistry at last!

2/6/14 More signal handling and reflection. Still murky about a lot of it but I’ve almost got enough to be able to move forward again.

2/7/14 By Jove I think I’ve got it! Affect management and reflection, that is. It’s close to my original idea - no distinct ATDR and DTDR, just buildup within the affect layer itself. It fell out of the observation that object-recognition maps etc. need to reflect even weak attentional signals - no threshold like for saccades. The new system achieves this - we simply choose not to have the leaves of that sub-tree marked as BURs. In the absence of TD attentional signals, we’ll attend to the most salient stimulus and report down how we feel about its constituent features when assembled as a group. The whole thing seems nicely resonant and symmetrical. Partly coded it now but I’ve run out of steam for tonight.

2/10/14 Still coding intrinsic decisions. I think I’ve got a handle on it but it’s tortuous. Maybe get to the end of it tomorrow.

2/11/14 Or maybe not. Still ripping out the old goal-selection code and replacing it with the new. FB rules need sorting out for maps that have specialized IO signals. I’ve split Map into partial classes to make things tidier.

2/12/14 Finished first cut of new signal handler, but boy is it going to be fun to debug!

2/14/14 Gloop looked at me! A bit cock-eyed, but at least he looked when I flashed a light at him or moved in his field of view. Lots of scaling and dynamics issues to sort out yet, but the new goal-selection system works well enough to replicate the way the creature behaved before I rewrote it - walking, guidance, balancing, etc. and the gaze maps are vaguely working too. That’s the first time I’ve had buildup towards a saccade via the salience/desirability mechanism. Haven’t tried buildup of a speculative goal powered top-down by drives yet, but that’s more likely to work, I think. I’ve certainly spent enough months thinking about it… No idea how many bugs lie between here and gloop chasing me around, but at least the signaling rules seem to be holding up. Maybe not TOO far now from being able to say he’s playing out options in his imagination.
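
The buildup can be pictured as a leaky integrator racing toward a threshold - a toy illustration of the principle, not the actual salience/desirability mechanism:

```csharp
// Toy saccade buildup: salience charges an accumulator, leakage discharges
// it, and crossing the threshold fires the saccade and resets the race.
// Gains and threshold are invented for illustration.
public class SaccadeBuildup
{
    public float Gain = 2.0f;        // how strongly salience charges the node
    public float Leak = 1.0f;        // passive decay per second
    public float Threshold = 1.0f;

    private float activity;

    // Returns true on the tick the buildup crosses threshold.
    public bool Update(float salience, float dt)
    {
        activity += (salience * Gain - activity * Leak) * dt;
        if (activity < Threshold) return false;
        activity = 0f;               // reset after the saccade wins
        return true;
    }
}
```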

2/19/14 TOO many variables! Way too many. I wake up in the middle of the night screaming their names.

2/21/14 Yay, beachballs!!! Gloop watches the beachballs (or me) pretty well now. Lots of tweaks to constants to be done yet but the code’s close enough. So the next step is to attach the fovea to the locomotion system so that he chases the balls. Need to think about how best to do that - I just want to kludge it for demo purposes but it’s best if I do it in a flexible way. It would be neat to give him an instinctive like of blue balls and fear of red ones, for instance. But one step at a time - this is just a proof of concept. He looks so cute! I’ll get him chasing the balls and then let everyone have a play with it. I’d better document all this before I stop - there’s a lot to the gaze hierarchy, even though it’s only one organ and three maps. I think I’ve done it in a way that allows hearing and smell, and possibly even touch, to compete for attention, but they can wait. Lots to do yet, and little or no learning is switched on, but at last I’m close to closing the loop between the visual attention system and the locomotion/posture system.

2/25/14 Bad boy. I could just split azimuth and declination, add in egocentric distance and reflect it all down to the APPR map and the critters would chase beachballs, but I feel an aside coming on… Approaching something can usually be done in egocentric space, but things like navigation are going to require local space targets, so I’m stepping back to think about the next layers up, because I don’t want to wire things up in a way that won’t work later. Local space is a bit weird.

2/26/14 Bit miserable about it all today. Not really sure why I’m busting a gut trying to develop something that’s bound to get misunderstood, like last time. But I think I’m just tired, plus writing my TEDx talk brought out my introversion, because it requires me to shoot my mouth off to a thousand people and pretend that I actually enjoy doing that sort of thing. I’d much rather talk to my compiler. I speak pretty good C# but no Portuguese whatsoever. Spent most of the day on LucidChart drawing zillions of boxes. It’s sorting itself out fairly well, but I’m not there yet. Local space doesn’t seem necessary after all, except implicitly within the obstacle-avoidance map.  The bidirectional nature of the gaze transforms doesn’t seem to apply to egocentric<->absolute bearing frames, which might be a hint that I’ve not come up with the right architecture for guidance. Not sure. More boxes tomorrow, but I can dummy-in the maps for obstacles and target routing for now and start to think about getting some code out to people. Might add something akin to “lobes” so that I can isolate functional networks from each other a bit in the editors, etc. - it’s all getting pretty complicated.

2/27/14 Slight aside to write a compiler for the mixer language. It wouldn’t be mine if it didn’t have at least one little language and a compiler in it. It already had a gene compiler, of course, but it’s been a while. I needed another fix.

2/28/14 Ok, so Gloop chases me and the beachballs, sort of. Quite a few things to tweak yet but by the time I get back from England I can hopefully share the code with beta testers. Not that it’s even alpha yet, but it does at least show some behavior at long last. 69 maps in it so far.

March 2014

3/25/14 Back to work after Portugal. Starting to tidy up what I have for people to look at. Moved joint code into FixedUpdate() and tweaked force/damper scaling and PIDs for better physics - walking suffers if there’s any drag on limb movements. Next up: tweak some of the instinct genes. I’d like locomotion to be reliable, b/c it’s hard to test further stuff otherwise.

3/26/14 more tidying of locomotion instincts. But now I’m stuck on speed/distance control (modulated by things like fear, curiosity, arousal). Not sure how best to do it and it’s called into question some things about servo maps and sequencer maps. Don’t want to get waylaid by this, because I’m trying to get a demo out, but equally I don’t want to hamstring myself. I’ll give it a day.

3/27/14 Approaching objects sensibly is much more complicated than it looks! And yet people are confidently expecting AIs to surpass humans in intelligence any day now… One of those days when trying to solve an easy problem only succeeded in making it into a hard problem. “Approach target” is easy; “Approach target at a speed dependent on emotional cues, while avoiding obstacles, to end up an emotionally and behaviorally appropriate distance from it without over- or under-shooting, and facing directly towards the front face of the object, which may itself be moving” requires a bit more brainpower.
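
Even a stripped-down version of the “hard” problem shows why: the commanded speed has to fall smoothly to zero at an appropriate stopping distance while emotional cues scale the pace. A minimal sketch, with every name and constant invented:

```csharp
using UnityEngine;

// Hypothetical distance-to-speed controller with emotional modulation.
public static class ApproachController
{
    public static float DesiredSpeed(float distance, float stopDistance,
        float maxSpeed, float arousal /*0..1*/, float fear /*0..1*/)
    {
        // Proportional slow-down into the stopping distance avoids overshoot.
        float remaining = Mathf.Max(0f, distance - stopDistance);
        float speed = Mathf.Min(maxSpeed, remaining * 0.5f);

        // Arousal raises the pace (amble -> run); fear of the target lowers it.
        return speed * Mathf.Lerp(0.4f, 1f, arousal) * (1f - fear);
    }
}
```

Tracking a moving target, avoiding obstacles en route, and ending up facing the object’s front all layer further control on top of this.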

3/28/14 Mustn’t get sidetracked into approaching - do it later. Started work towards making a demoable version - changing file structure, pondering GUI layout exceptions, etc.

April 2014

4/1/14 April Fools’ Day! Should anyone stop by here to check, my server is down ATM and Grandroids is offline. Can’t get into it by any means known to man, so I may have to reimage it. Watch this space…

4/2/14 After much faffing around, the server is back together with shiny new versions of everything. I bought TerrainComposer for Unity, so I’m starting to learn that. Pretty steep learning curve but the results look really promising and will make a more interesting demo than just a flat field.

4/4/14 TerrainComposer is pretty good, although the dev is not so hot at explaining it. He tells us exactly how to attach the flanges to the widget, without ever mentioning what a widget is or why one might want to attach a flange to it. But I managed a decent test terrain, so now I’m accumulating better textures, etc. to make an environment for the demo. Cleaning up the project and dependencies. Got a bit sidetracked into making a sun that can go around the planet and provide dawn and sunset. Not bad, but making the skybox dynamic will be a bit of effort. Shouldn’t be doing that really. Might buy someone else’s solution or shelve it for later.

4/7/14 Nothing on the asset store did what I wanted, so I went ahead and wrote a placeholder sky controller. Can pretty it up later.
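
A placeholder along these lines, perhaps - a guessed-at minimal Unity day/night driver, not the actual script:

```csharp
using UnityEngine;

// Hypothetical sun controller: revolve a directional light around the
// world once per in-game day and dim it near the horizon.
public class SunController : MonoBehaviour
{
    public Light sun;                      // the directional light to drive
    public float dayLengthSeconds = 600f;  // one full day-night cycle

    void Update()
    {
        // One full revolution per in-game day.
        float angle = (Time.time / dayLengthSeconds) * 360f;
        sun.transform.rotation = Quaternion.Euler(angle, 0f, 0f);

        // Fade intensity as the sun dips below the horizon.
        float height = Vector3.Dot(-sun.transform.forward, Vector3.up);
        sun.intensity = Mathf.Clamp01(height);
    }
}
```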

4/8/14 Quite pleased with my day/night cycle and new terrain. They’ll do for a demo, anyway. Now I need to add some objects and then try to merge the scenes for the editors.

4/10/14 Coming along. I’ve rejigged the editor system so that editors and other GUIs can be dynamically loaded into the one scene, instead of having to use a different scene for each editor. Moved some of the GUI functions (especially controlling the turntable) out into the world. Now I’m starting to make the GUI styles more compact and a bit easier to see than the default. Gloop had a look round his new world for the first time.

4/11/14 New GUI - more compact, so editors should fit on sub-HD screens. Tweaked editors. Strange bug - Gloop doesn’t want to leave the lab. Some more housekeeping to do, and some bugs to fix, but MAYBE a PC build by the end of next week. Really tired, though, so I’ll take a bit of time off for a hike. Hopefully I won’t meet any bears this time!

4/16/14 One of those days where every bug I fix reveals three more. But I’m getting there now. There are just a few “things that would be nice to fix” left. By the weekend it should be ready to test on a different PC, and if it works ok I can put it up on my server for my backers to play with. It’s really not much to look at yet, despite being the most complex software I’ve ever written, but at least it’s a starting point for future updates, which should come at a reasonable lick now. It’ll be easier for people to understand progress when they’ve got this as a baseline. Over the weekend I had a great idea (association layer) that seems to kill several birds with one stone. That might be the thing to work on once the demo is out.

4/17/14 The only thing more stupid than speculating on a target date is opening 3DSMax and Mudbox while thinking, “this shouldn’t take long.” Max: “Yep, perfect mesh. No problems at all. You’re good to go.” Mudbox: “What are you sending me this crap for. There are T-vertices all over the place.” Me: “Well where are they then?” Mudbox: “Try vertex 32,429 for a start. And no, I’m not going to tell you where that is.” Max: “Don’t look at me - I still think it’s perfect. Vertex 32,429? Try prodding a few and I’ll tell you whether you’ve found it or not.” This is what you get when you spend $5,000 on software - it comes with an attitude...

4/20/14 Had to kludge the graphics in the end. I know what I want to do but it means rescaling the creature and hence re-rigging it, so it’ll have to wait. Back to the ToDo list tomorrow. Physics seems slightly messed up again. It just changes all by itself, I swear. I thought it might be due to changing frame rate as I add new code, but everything’s definitely on the FixedUpdate() now and anyway I’ve only fiddled with the graphics. It’s not like textures get any heavier when you paint them… I’ve a horrible feeling that as soon as other people try it on their machines it’ll behave differently on each one. But there’s only one way to find out.

4/24/14 Phew! Demo is up on the site and all three versions seem to work, thank goodness. I shouldn’t call it a demo. I’m not really sure what to call it. It’s the place where I spend half my working day and I’ve just added to it so that people have a UI where they can play around without having to learn how to be a geneticist, a biochemist and a neuroscientist, just to get anything to happen. I guess I should just call it the software. I’m starting to think it may gradually evolve into some sort of game-like experience and never really cease to be my development editor suite and research tool at the same time. Maybe we’ll just get to a point where people can join in and follow along with the whole Alife experiment over the years to come, without having too much of a learning curve. We’ll see. For now it’s a relief to get some usable software out there for my backers at long, long last.

4/28/14 Working on a new version to trap the crashes some people are having. New logging system, new config screen, better tracing, etc. Thought I’d get it out today but it still needs a bit of work and I want to try a better way to manage the cortex thread, in case that’s causing trouble. Maybe tomorrow. It’s quite hard to debug something you can’t make crash. And also hard to debug new debugging code. Wasted half the day trying to get a release build to email me logfiles - turned out to require a switch in Unity I’d never have guessed. Thank goodness for forums.

May 2014

5/2/14 Almost got a new demo version out this evening but got stymied by a bug in Unity. Tomorrow, I hope. New error logging, single-threaded (and much better for it), selectable creatures, creature’s-eye view (if I can work around this bug), new config scene, screenshots, new rendering effects, etc. No new biology, but hopefully more stable. Did an interview for a documentary today, so not much work got done, but it made a nice change to talk about my work instead of doing it.

5/3/14 Dammit, every time I go to do a build I find a bug that I think I should fix first. Creature’s-eye view works, though - it’s quite fun. Added halos on lights, incl flashlight. Fixed some scaling bugs, made the shore shallower, that sort of thing.

5/5/14 When I go to bed at nights I usually have just one bug left on my list to sort out in the morning, but overnight some helpful pixies invariably pop by and give me some more. I definitely had only one bug last night, but tonight I have five. And I can’t believe it but I’m having to redo the leg maps again. Do 3D meshes stretch when the weather changes or something?

5/7/14 Bugfix version 0.1.1 released on PC. Mac and Linux tomorrow. Actually it’ll be 0.1.2 tomorrow, because I made a few changes after the first bug report email, to make sure people can’t create broken species and thus inundate me with error logs. So far it looks much more stable, and it’s certainly a hell of a lot faster. Won’t know for sure until more people have downloaded it. But maybe I can get back to the biology now!

5/13/14 Back to hard thinking. Having been a normal programmer for a while it’s difficult to get my brain back up to the right pitch - it’s like taking a break after running a series of marathons and then having to start training again. I’m working on navigation, at both the territory and local space levels. It’s a surprisingly complex and odd problem, once you break it down. I feel like there’s a neat answer lurking just around the corner. It’s taking me off into bidirectional indexable memory, feature binding and dynamic coordinate frames. It suggests a different slant on imagery too. But I can’t see the big picture yet.

5/15/14 Got a fairly general solution now, for scene stability, obstacle avoidance and other indexed stuff like “where did I last see a…” but my longstanding design for territory navigation is suddenly looking a bit shaky. This is hard stuff to think through. My brain aches.

5/16/14 Still stuck on the higher-level nav, so I’m going to work up to it via obstacles and scene stab. Been drawing up the connectome as it stands, to figure out where to insert the new subsystems. By the time I finish it’ll be practically as complex as the human connectome!

5/22/14 Think I’ve got obstacles and nav sorted out now. I think I’ll continue doing circuit design for a bit before I start implementing. The “what” pathway is next.

5/28/14 Man! This is hard! Getting there, but very slowly. Seeing some changes in the basic map structure - new layer for truth value (how close am I to my goal), plus more widespread use of inhibitory connections. Possibly split the bottom layer into pattern and control layers. Indexable memory is shaping up to be consistent. Lots of problems left to solve yet, though. Doesn’t help that I’m choking on wildfire smoke…

June 2014

6/4/14 Damn! I thought I’d worked out all the details of a whole new gaze and saccade system that works much better for obstacle avoidance, navigation and attention. But now I’m coding it I realize it means nowhere in the brain actually knows where our gaze is pointed any more! It could be done, but not with a single yin channel. Back to the drawing board…

...I tell a lie: I’ve just thought of a solution!

6/6/14 D-day and some actual progress at last. Still some tweaking to do, but the new BU control of saccades looks pretty good. Gloop really seems to be sitting up and taking notice now, and I think he’s probably getting good obstacle data and salience too. He looks more curious and perky now, but then he would - I’ve just given him the first stage in having some curiosity. Over the w/e I’ll try the modified salience assessment method and add a bit of anti-jitter, but then I can go on to add GAZC and the ability to reorient the body. This can be overridden when navigating, I think. If that part works then I’ll commit to BU saccades and redraw the connectome. After that comes obstacle avoidance, which requires reinstating Generic-style learning and planning.

6/10/14 Heavy debugging day, but finally Gloop can follow a ball all the way round, turning his body as necessary, as well as head and eyes. He can also look towards a top-down “compass” direction via internal attention, so now I can go ahead and work on the obstacle avoidance and scene memory. Nobody would guess that “looking at stuff” was so complex, but he’s a lot more alive now.

6/17/14 Working on obstacle avoidance and nav generally. A lot of the stuff at the moment is little Mixer programs written using genes. They’re equivalent to short code snippets, so it feels weird to be doing simple code in such a ridiculously roundabout way, but at least it’s genetic and so can be bred/modified. Anyway, I can drive Gloop to walk to waypoints now, as well as visual goals. Obstacle memory is written but not tested yet.

6/19/14 Out of the blue I figured out a way to document the genome and have it show up in editors, etc. that is fairly robust to changes, mutations, rebuilding, alleles, etc. So I did that. Also modded chemoreceptors and made the speed at which gloop walks sensitive to things like adrenaline, tiredness and fear. No actual chemicals yet, but the loci are now accessible through mixer scripts, etc. from brain maps.

6/21/14 Yesterday was a bust - Windows died - but luckily I got it working again. I don’t need a complete reinstall right now! Today I just documented some of the maps with my new doc system. It helps a lot to see the info right there with the gene it relates to.

July 2014

7/1/14 Debugging GRID cell conversions & loco dist->speed.

7/7/14 Still working on the nav circuit. Gloop can go to a location in space just fine now, but the circuitry is messy. I think I can see a more symmetrical structure that might work better. Did some simple chemistry for the first time - improved chemoemitters, etc. Getting there.

7/8/14 Mostly a good day. The new gaze chain works nicely, and gloop can now walk to a given distance from a waypoint or a visual object. It means he can chase balls again. Works pretty well, all things considered, and he’s pretty curious. I now have a GAZD for absolute vector space to local space transforms. Almost ready to move into obstacle avoidance, but there’s an irritating problem with adding hysteresis to the GAZC level.

7/15/14 One of those periods when I open lots of parentheses and don’t expect to close them again for ages. The obstacle-avoidance map raises questions about spatial memory and planning in general, and I’d like OBST to be a minor variant of a much more powerful class if possible, so I’m trying to code the general case while using the particular case simply as a sandbox. Might take some time, but planner maps and their interaction with rehearsal and hence cognition are kind of core to the exercise. Getting there, I think, but now I’ve opened the neurochemistry brace too, so I’m nesting deep…

7/18/14 Pretty unsuccessful week. There’s nothing intrinsically hard about OBST, it’s just thinking far enough ahead that’s the problem. Kind of got a mental block about it. A day off is probably the solution. A nice hike to let it soak in. The snag with OBST is that it’s very like a planning map, very like a spatial indexed memory, and very like a GAZ map. But it’s too unlike any of them to be a variant of one, and I can’t yet unify those three. Mostly it’s just me hoping for elegance, but if I just hard-code it uniquely, then it’ll probably turn out to tell me something useful about one of the archetypes.

7/22/14 Getting some of the issues to settle down now, but fixation is a problem. Fixating on a small object at a distance while we’re wobbling all over the place is hard for the gaze chain to cope with. The old way of establishing the object of gaze might help, but then I’ll get incorrect distance/obstacle data. Need to find a way out of this. I am starting to get past the mental block / rush of interconnected problems now, though. OBST is just going to have to be a very specialized map, despite its theoretically archetypal qualities.

P.S. Ha! Got it! It’s a tiny bit of a cheat but it makes use of a cheat I’d already had to include anyway, on account of the fact that the eyes are not really receiving photons. I think it’ll work…

7/23/14 Turned out the cheat wasn’t even needed: fixation got a lot better when I reduced the eye joint’s damping to 0. On a roll again now! New saccade/fixation system; new mechanism for tightening GAZ joints when navigating (no need for chemistry after all); custom inspector for Appearance scripts; adjustable visible range for objects. One small bug left and then I can tackle OBST tomorrow.

August 2014

8/8/14 Had a few days away with Dean, which did me good. But the damned obstacle avoidance was still there when I got back… I’m on version 7 of it now and STILL have two big flaws. A lot of the trouble was that although the maps either side of this have a good logical relation to this one (whether I see it as part of the gaze/locomotion chains or part of the scene memory system), they had slightly conflicting needs, making it all rather specialized. Also, there are disadvantages to all the possible forms of local space and I’ve kept vacillating between them as new snags arose. I’m on fixed but resettable space now, which seems to be the best compromise in terms of making sure all the surrounding maps source and sink the right kinds of signal (e.g. goal changes due to updated data don’t seem like whole new journeys to the obstacle map, and the NAVI map will be able to learn by observation about how to get around the territory, even when the individual transitions were caused by visually-guided choices). It’s been a nightmare of “if I do A and B it solves C and D but creates new problems E and F.” I’m getting there, but now there are subtle snags in the decision buildup process. They don’t apply to the more generalized planning scheme but do show up in local route planning, which is ironic given that the generalized scheme was invented by thinking about local route planning. This is a REALLY irritating part of the development, because my basic theories are sound but the devil is in the detail. After about version 5 I started to lose track of what I’d tried, what the problems were, etc. So I’ve gone quiet on the community site etc. because I have nothing to report and I need to focus hard. But my brain isn’t playing along, and keeps opening facebook and finding other ways to distract me. I think I need a long solitary hike this weekend, to think it through without distractions. But maybe not too close to a cliff edge…

8/12/14 Making some progress. Stymied for a long time by a weird bug that was sending unexpected signals to the decision layer. Turned out I’d repurposed this layer for forming intrinsic goals by default, and completely forgotten I’d done it. Fixed that, but now it has raised all sorts of issues about BUR behavior, feedback to the NAVI route planning, feedback to centers above the SCEN memory, etc. It can all be fixed, but this damn map refuses to fit in with the general scheme of things. Not sure which is most at fault - the map or the scheme.

8/14/14 KIND of getting there. He builds an obstacle map now and finds a reasonably sensible route around them, but there are a lot of false “radar” blips, which turn out largely to be due to slop and propagation delays in the gaze chain. I’m worried that saccades can’t be fast enough to build up a map in good time not to walk into walls anyway. There’s nothing wrong with the principle, but this is just a home computer and I can’t afford to update the brain fast enough. There are 4096 neurons in the obstacle map already and that may not be enough. Oh, for ten million neurons running in true parallel! Our own microsaccades run a lot faster than I can simulate and we have parallel depth processing too. I may have to find another mechanism altogether. I’ll sleep on it. Some level of cheating is feasible, just not desirable.

8/15/14 Alright! I cheated, but not by much. I had to gather a scatter of depth values from around our gaze via a separate route (which meant they also had to be converted to local XY in code, but that’s a good thing as it makes them much more reliable). I’ve kept the overall circuitry the same as I would have used, but just patched on an extra sensory map, so I may be able to improve on it later, and things will at least progress from here as if I’d done it all the right way. Painting the map now works nicely and route-finding seems to work, but I have a bunch of tweaking and bug-fixes to do. Didn’t get it done by the end of the week, but I’m close. It WILL work. Then comes territory navigation and at that point a new demo. Or maybe before.
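
For flavor, “painting” depth samples into a local-space grid might look roughly like this (names and sizes are assumptions; 64x64 happens to match the 4096 neurons mentioned above):

```csharp
using UnityEngine;

// Hypothetical occupancy grid in creature-local XY space. Each depth
// sample bumps the cell it lands in; all cells decay slowly so stale
// obstacles fade out as the scene changes.
public class ObstacleGrid
{
    public const int Size = 64;            // 64 x 64 = 4096 cells
    public float MetersPerCell = 0.25f;

    private readonly float[,] occupancy = new float[Size, Size];

    public void Paint(Vector2 localXY)
    {
        int x = (int)(localXY.x / MetersPerCell) + Size / 2;
        int y = (int)(localXY.y / MetersPerCell) + Size / 2;
        if (x < 0 || x >= Size || y < 0 || y >= Size) return;
        occupancy[x, y] = Mathf.Min(1f, occupancy[x, y] + 0.3f);
    }

    public void Decay(float dt)
    {
        for (int x = 0; x < Size; x++)
            for (int y = 0; y < Size; y++)
                occupancy[x, y] *= Mathf.Max(0f, 1f - 0.05f * dt);
    }
}
```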

8/18/14 Reading a manuscript and doing an interview over the w/e, so not much got done. Obstacle-avoidance DOES work, though - Gloop falls over far less often when he doesn’t bump into things! He even looks quite purposeful as he steps around obstacles. Still a few irritating bugs and I need to rethink gaze tightness when walking, because he sways his head too much, but almost done. Then I think we’ll probably go hunting throgs - that involves object recognition and scene memory. Plus TD visual attention and possibly BU auditory attention. Might do that before territory learning. It would be a good point at which to cut a demo (although I also have to fix the weird Unity Destroy() problems before I can do that).

8/20/14 Went backwards today. It would be really good if the obstacle map could treat SCEN as a conventional goal source (and hence provide it with feedback) AND also send it saccade targets. But that means two channels of yin. So I’m revising the whole link system, because it was written right at the start and a lot of it doesn’t make sense any more. It might be too messy to fix, but it’s worth a couple of days trying. Tried upgrading Unity too, but it crashes with my project. 4.5.3 and the new 4.6 both crash but 4.5.0 doesn’t. Reported the bug but I may have to narrow it down.

8/22/14 Tough week, but finally I think I’ve got the obstacle stuff done well enough to move on. I had to add a secondary yin channel, so while I was at it I completely revamped the link system. The old broadcast/multicast mechanism had outlived its usefulness. Things are faster now, and hopefully more logical (if I can remember how it works). Might tackle the Destroy() bug before I move on to SCEN, and perhaps have a quick look to see why my project crashes the new Unity.

8/25/14 Turns out Unity has already fixed one of my bugs and the other was my silly misunderstanding, so that’s good. Runs cleanly now. SCEN was going to be next - a five-minute job - but it’s dependent on object recognition, and that raises various learning and association issues, so it’ll be a five-minute job in a week or so….

8/27/14 Back to basics (again). Neural migration, etc. Hard on the brain, but nice to be implementing some learning again. Watch this space.

September 2014

9/3/14 Kind of took myself by surprise by getting object recognition learning pretty easily. I MIGHT have a unified mechanism for both conventional (!) yin learning and planning, but it’s a bit early to tell. I’ll play around with it for a bit but then I need to make use of it for scene memory and navigation. Then I’ll have two forms of short-term memory and two systems entirely learned. Nice to see gloop knowing whether he’s looking at a throg or a ball (and even a blue ball or a green one).

9/5/14 Fused “posture” map class with pattern maps, and it all seems nicely general-purpose now. But there are some emerging ambiguities about yin amplitudes - sometimes they mean salience/valency and hence valid feedback to parents, and sometimes they discriminate between meaningful signals and “no signal” (or levels of confidence). Yang amplitude is similarly dual-meaning, but above and below a BUR it can mean different things with impunity. I don’t see an equivalent watershed in yin. No big deal yet, but something to think about.

9/11/14 Completely revamped amplitude handling. Now I specify the “cognitive domain” of each map and that determines how amplitudes are handled. Wrote an indexed memory map class, but I’m still unable to find a way in which OBST can avoid trying to send two different kinds of data to the same parent (our grid cell location and the location of the current saccade target). So I decided to write a new map script language that can be used in ANY map, not just mixer maps. Using a RISC architecture it should be efficient and it will allow me to genetically route signals and perform mods on any class of map. It’s a big step and will take several days (what with assembler, disassembler, gene encoder, editor changes, etc.) but I think it’ll pay off better than adding successive kludges. There are bound to be more examples where the yin/yang parent/child structure doesn’t quite work, and this is a generalized solution. The new language will be more powerful and easier to read than the mixer scripts, which just kind of “evolved.”

9/13/14 Shelved the language for now, because I found a better way to deal with the two-channel o/p problem. Finished writing the scene memory maps and it ALMOST works. I can stick an electrode in it and cause gloop to go to where he last saw an object of a given type, but the journey ends too soon because of the loose tolerance of ‘success detection.’ Probably some other awkward issues too, to do with having relatively small numbers of neurons, but it’s getting there.

9/15/14 Phew! Gloop doesn’t have the best memory in the world, but then neither do I. Nevertheless, he can now learn to recognize different objects, keep track of where they are in his local environment, and plan a route to get to them. Most of the time. It’ll get better when the visually-guided navigation joins up with this obstacle avoidance and scene memory. But the third part of the puzzle is learning about his overall territory and planning routes from one locale to another. Might do that next. I need to get a new demo out, but at the moment it’s very hard to explain anything that people can experiment with. Especially as he’s learning to classify objects now, and we can’t know in advance how he’ll arrange them. I’ll have a quick look at territory nav and how to connect them all up into a coherent system, but if I get too bogged down I’ll stop and cut a demo.

9/16/14 Hmm… Got sidetracked. Hopefully it’ll be useful. Trying to figure out the wiring of the system that learns how to do stuff to an object (especially go and find one, navigate to it and then walk up to touch it) got really messy, so I decided in a fit of impulsiveness to add neuromodulators to the system. I.e. a simplified version of chemistry (which can interface with the real thing on the other side of the ‘blood-brain barrier’) that allows signals in one map to modulate those entering another, indirectly and globally. I shouldn’t overuse it because it would get complicated and hard to track, but it’ll probably kill several birds with one stone, such as inhibiting certain reflexes when navigating to invisible waypoints, replacing the current affect-based inhibition that I used to stop gloop from trying to walk, etc. after he’s fallen over, causing anxiety when a map realizes it’s not likely to achieve its goal, and a bunch of other stuff. It involves two new gene types and hence some fiddling with the brain editor, gene expression, etc. but it’ll probably pay for itself.
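
In toy form, the core of the idea might be sketched like this (all names hypothetical): emitters raise a global chemical level, and a receptor on a link scales the signal passing through it as a function of that level.

```csharp
// Hypothetical neuromodulator channel.
public class Neuromodulator
{
    public float Concentration;        // global, slowly decaying level
    public float HalfLife = 5f;        // seconds

    // An emitter somewhere in the brain raises the level.
    public void Emit(float amount) { Concentration += amount; }

    // Exponential decay back toward zero.
    public void Decay(float dt)
    {
        Concentration *= (float)System.Math.Pow(0.5, dt / HalfLife);
    }

    // A receptor on a wired link multiplies the electrical signal by a
    // gain derived from the chemical level (a simple linear curve here).
    public float Modulate(float signal, float sensitivity)
    {
        return signal * (1f + sensitivity * Concentration);
    }
}
```

A negative sensitivity gives the inhibitory uses mentioned above (suppressing certain reflexes while navigating to invisible waypoints, etc.).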

9/19/14 The neuromodulator thing has turned into a pretty major exercise, but it’s going ok. Not finished it yet - it needs a new editor mode. There are some rough edges involved (e.g. attach a receptor to a link that’s wired to something else and it will modulate the electrical signal, as intended, but attach it to an un-wired link and it needs to behave like a simple receptor and produce a signal. Except electrical signals are vector3’s and chemicals are scalar, so then what amplitude do I supply, given that the receptor has no idea of where it is in the brain hierarchy...). So I can see myself (or evolution) getting a bit confused about the various exceptions and twiddly bits, but basically I think it’s going to solve a bunch of otherwise awkward problems. Hope so.

9/21/14 Neurochemistry is in, new editor mode written, blood-brain barrier done. Nicely intuitive response curves - I might replace the standard chemoreceptors with this method. Also cleaned up auto-editable gene fields, so I should replace the code in the gene editor. Then I can test the neuromodulation and remove the previous inhibition method.

9/22/14 There was me thinking I had no chemistry in the genome. There are two places, in fact, so I have to replace those with the new kind. Shouldn’t take too long. I do need a way to see which map links have receptors & emitters on them, though, or I’ll get confused. I also need to get rid of the old locusIndex system and repurpose some mixer language instrs. NEARLY back to what I was supposed to be doing next.

9/24/14 Never, ever say that something won’t take very long! All day tracking down a single bug in the GUI code. Got there in the end, which is always satisfying, but what a waste of time! Anyway, Gloopy now walks at a speed that can be controlled chemically, so I’m getting there. The physics is crap again - it’s like it drifts over time! Might be the weight of his head, now that he looks around so excitedly as he’s walking. Need to reduce unwanted neck movements somehow. The point of all this sidetracking was to be able to inhibit VGN when navigating round obstacles, so I should sort that out first and then maybe have a tuning session.

9/29/14 Back on the main path again now, working on the terrain mapping / route planning map. The details are not working out quite as I’d imagined them four years ago, but I’ll get there.

October 2014

10/1/14 Man! I’ve had this NAVI map in my mind since the very beginning, yet it’s not turning out to be as straightforward as I’d hoped. As usual, the basic idea is fine but the details are awkward. Even with 4,000 neurons it’s tight for resolution. And hoping that pattern-matching of landmarks would be enough to organize the space turns out to have been naive. I have to do a bit of dead-reckoning to get things to organize. That’s biologically legitimate and I have the necessary data coming into the map; it’s just not as general as I was hoping for. But it’ll work out, and we’ll certainly get absolute (approximate) positioning without any cheaty access to a hidden GPS sensor, so it’s genuine enough for now. I thought I had a way for the map to rescale as more territory gets explored, but given the resolution limit that may not happen, which means I have to presume the size of the world. I really don’t like that, especially if I’m not allowing myself to know where the creature is when it’s born, but maybe something will occur to me later. Or maybe 16,000 neurons will do the trick. It’s really hard to visualize the way things will interpolate. Experience will tell, I guess.

10/2/14 Had some good ideas overnight. I think I know what to do now, and it extends into thoughts on how to learn about good TYPES of place (to eat, to go when frightened, etc.) as well as understanding weather, communicating expressions and other stuff. But I had to add some terrain data handling to make it possible, so today I took the opportunity to complete obstacle avoidance with the ability to avoid steep slopes, cliffs, etc. Works nicely. The trouble is, that huge flat plain that I made for developing walking when gloop was too stupid to avoid obstacles is now way too huge, too flat and too boring. I could really use a more varied, small-scale terrain but I don’t want to nest too many more brackets - I’ll forget how NAVI was supposed to work. Might just kludge that for now. Or maybe it’s a nice weekend job.

10/11/14 That nice weekend job took all week. Surprise! Although I did take the weekend off, to be fair, and I managed to add a few fixes and features along the way. Much happier with this nice little island - it has enough zones and obstacles to test the stuff I’m doing. Unfortunately Gloop walks into the sea sometimes, and often crosses the stream instead of taking the bridge, so I need to rethink local route-finding - the visual scan is too sparse - but it’s good enough for now and self-contained enough not to have knock-on effects, so I’ll come back to it. And at least I can pick him up and carry him now, so experiments take less time. Now it’s back to ze little hippocampal place cells…

10/14/14 Ok, I think I have a plan for NAVI now, which uses dead reckoning for accuracy, landscape features for resetting our beliefs due to d/r drift or forgetting, plus using the same features to drive mental imagery of the kinds of place we have to cross when considering a route (and hence how we feel about that prospective route compared to alternatives or compared to not going at all). And I think I can get all of this out of simple parameterization of the pattern-matching and retuning mechanisms (as long as I rewrite them), instead of lots of specialized map code. Plus I think I see how NAVI fits into a common scheme as both a verb (go to X) and a noun (a type of X to go to, as in “go home” or “go to the beach”). And besides places, this noun-type mechanism also supports objects (go to where we last saw a carrot), other creatures (find mom) and habitats (go to the nearest source of water or shade). There’s a lot to it, but I think I have a plan at least. Now I just have to implement it all…
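
Structurally, dead reckoning plus landmark resets resembles a complementary filter (that framing is editorial). A toy sketch:

```csharp
using UnityEngine;

// Hypothetical position belief: integrate self-motion every tick (which
// drifts), and whenever a familiar landmark is recognized, nudge the
// belief toward the position the landmark implies.
public class PositionBelief
{
    public Vector2 Believed;           // estimated position in territory space

    public void DeadReckon(Vector2 velocity, float dt)
    {
        Believed += velocity * dt;     // accurate short-term, drifts long-term
    }

    public void LandmarkFix(Vector2 impliedPosition, float confidence)
    {
        // confidence in 0..1: strong, familiar landmarks pull harder.
        Believed = Vector2.Lerp(Believed, impliedPosition, confidence);
    }
}
```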

10/15/14 Today I built a thousand-piece jigsaw of the brain, only to discover that the last piece didn’t fit. So I started again, beginning with the problem piece. It went great for 999 pieces, but then the new last piece wouldn’t fit. So I cleverly split the puzzle into two separate 500-piece parts, and now I have two last pieces that won’t fit. It’s days like this that shoot-em-ups were designed for. Tomorrow I’ll try a jackhammer on the damn thing.

10/23/14 Debugging new NAVI map, which is slow going because gloop has to learn his way around a bit before I can ask him to go anywhere. It’s basically ok, but the Devil is in the details, and the details are hellish! Didn’t get much sleep last night for thinking about those details, and tonight I think I might need to dream up a completely new approach, because of what seemed at first like a niggling problem. But it’s only when you try this stuff that the issues become clear, and old Gloopy is getting pretty complex now. Almost like he has a mind of his own…

10/24/14 It’s been another of those ‘seismic shift’ days, when I thought I’d almost got everything sewn up, even though I didn’t like the way I’d done it, and then in the middle of the night I had a thought that created a cascade of other thoughts that change everything!  The scientist in me loves these times - they’re ‘aha!’ moments - but the developer in me loathes them because of the effect on what I wryly like to call ‘the schedule’! But it’s the weekend, so I’ll allow myself the luxury of thinking this new stuff through for a couple of days (a return to migration, but this time based on temporal contiguity as inferred from transition memories). It has a nice correspondence to what possibly happens when we sleep, which is ironic, given that I was up half the night thinking about it.

10/25/14 The seismic shift is actually feeling pretty good. It involves a lot of changes but they shouldn’t take very long - hopefully no more than a week, but my time estimates have been way off lately. Migration based on transition memories is such a good idea that I might extend it to self-organization even in things that should be clustered purely by similarity - we just treat the ‘transition’ memory as a fiber leading to the next-most-active neuron instead of the start state. And thinking about that made me realize how useful it would be to include ‘Q’ in each fiber (equivalent to a dendritic arbor of variable size and spatial distribution), to include the idea of statistical variance in learned experiences. I did some tests and it’s only twice as slow as the present square-of-the-distance method used in Match(), if I use an extension of my Envelope class to make a series of LUTs. And I can recoup some of that by using the same method to speed up the existing places where I use a lot of double curves calculated mathematically (2xPow() and a Sqrt() each!). Whether this is all going to help with NAVI remains to be seen, but it will certainly help with more general planning maps.
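
The lookup-table trick in miniature (my own toy version, not the Envelope class):

```csharp
using System;

// Precompute a bell curve once, then replace per-fiber Pow/Sqrt math
// with a single indexed read. Resolution and sigma are arbitrary here.
public class CurveLUT
{
    private readonly float[] table;
    private readonly float maxInput;

    public CurveLUT(int resolution, float maxInput, double sigma)
    {
        this.maxInput = maxInput;
        table = new float[resolution];
        for (int i = 0; i < resolution; i++)
        {
            double x = (double)i / (resolution - 1) * maxInput;
            table[i] = (float)Math.Exp(-(x * x) / (2 * sigma * sigma));
        }
    }

    public float Evaluate(float x)
    {
        if (x < 0f) x = -x;                   // symmetric curve
        if (x >= maxInput) return 0f;         // beyond the table: flat tail
        return table[(int)(x / maxInput * (table.Length - 1))];
    }
}
```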

10/26/14 Converted Envelope back to 2D and wrote a new class for fast lookups of 2D curved surfaces. I think the latter could actually replace the former.

10/29/14 Totally wasted day, because the server fell over. Plesk, as usual. I think I’ve got it all back to rights but we’ll see. So no coding today at all, although yesterday I finished the conversion from envelopes to the new lookup tables, which gives me control of the Q of dendritic arbors, among other things. It took some fiddling but everything seems to be working again with the new code, so tomorrow (server willing) I’ll start on the new migration code. Fingers crossed it works, after all this fuss.

10/31/14 First draft of the migration code. Bits of it work quite well, but my lens idea is a bit dodgy and there’s a hell of a lot of processing involved. If it works it’ll need splitting up so that it can be done in tiny increments during sleep. Not sure the theory is right yet, though. It’s been a really hard week. I didn’t take any time off last weekend, so I’d better have a break. I hope I can remember what the heck it all means when I come back to it. Spent a seriously scary $600 on Unity 5 yesterday, before the price goes up. I haven’t tried converting the project yet, because there are a lot of breaking changes - I might save that until a later Unity beta. No rush. Fingers crossed that the new PhysX works better - maybe I can even go back to proper tensors! I thought this project would be long finished before 4 became obsolete. It’d better be done before 6 comes out!

November 2014

11/2/14 So much for taking the weekend off - gales yesterday, rain and snow today. So instead I devised yet another migration mechanism. I’m fighting the way the real world is massively parallel and continuous, while computers hugely prefer nice neat arrays and iterators, but I think I might have a method now. It’s hard to visualize the limiting cases so I’ll have to experiment, but it seems reasonable and it’s easier to time-slice than the other version. It still has to be done during sleep, for efficiency’s sake, but that’s not a bad thing - sleep needs to be important, not just a conceit. Sadly it doesn’t involve actually firing off cells, so it doesn’t in itself lead to dreaming, but there may be a need for another mechanism yet that involves the full resonant yin/yang thing and hence would be ‘conscious’ in the right way. Not sure.

11/7/14 Thanks to a bunch of ****ers in St. Petersburg I didn’t get much done this week.

11/11/14 The snag with emergence is that it’s emergent.

December 2014

12/05/14 DAMMIT! Looks like yet another idea is about to bite the dust. Almost TWO MONTHS I’ve been trying to get this self-org to work. Every idea I try seems fine until some completely unexpected and bizarre dynamical twist comes along late in the process and ruins everything. I’ve been through 8 or 9 different ideas now, plus variants. It’s the associative memories that cause most of the problem. Maintaining an organization by similarity is easy enough, but transitions need to cluster by similarity and yet remain highly distinct over short distances. And trying to do it with just a few thousand nodes stored in a grid doesn’t help, either - it would be easier in real life. It’s frustrating beyond belief. And it’s not like anyone else has had the same problem, so there’s no existing technique. The devil’s in the detail. Maybe this method can be salvaged, but I think there’s another issue looming. I reluctantly made some big changes that might mean it’s worth revisiting an earlier idea now, but heck! When I set out two months ago I thought I’d have this done in a day or two.

Hmm… Ok, as long as I’m willing to let go of a couple of fondly held hopes, I have another idea that’s now possible because of the columnar structure. Watch this space. I’ve said this before…