28 November 2002: Well it looks like botepidemic.com karked it, so I'm moving this page to my personal webspace.

If you want to see some other stuff I have coded since I quit the neuralbot, go here.

15/September/2000: Hi all, today I have a fantastical treat for you all: some C++ genetic algorithm and neural net code. Unlike the neuralbot code, this code is written in clean object-oriented C++ with a clearly defined interface. You should be able to use it relatively easily in just about anything, as opposed to the neuralbot code.

The main part of the code is a general purpose GA library which can be used to optimise just about anything. There is also some code for a multi-layer feed-forward NN which can be encoded and optimised by the GA. There is a small example driver program that trains a net to do the XOR problem using the GA. Here it is:

GANNcode.zip (61 KB)

hehe apparently 4 people have downloaded it even before I put this link up :)
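
For anyone who just wants the gist without grabbing the zip, here's a rough standalone sketch of the same idea. The names and structure below are made up for illustration (they're not the library's actual interface): a tiny 2-2-1 feed-forward net whose weights are evolved by a bare-bones GA until it gets XOR right.

// A minimal sketch only: evolve the 9 weights of a 2-2-1 sigmoid net
// with a crude mutate-the-best GA until it solves XOR.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// 2 inputs -> 2 hidden -> 1 output, with a bias on each non-input neuron.
static double runNet(const std::vector<double>& w, double a, double b)
{
    double h0 = sigmoid(w[0]*a + w[1]*b + w[2]);
    double h1 = sigmoid(w[3]*a + w[4]*b + w[5]);
    return sigmoid(w[6]*h0 + w[7]*h1 + w[8]);
}

// The GA only needs one number per genome: here, how close the net gets
// to the four XOR targets (4.0 is a perfect score).
static double fitness(const std::vector<double>& w)
{
    const double in[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double target[4] = {0,1,1,0};
    double err = 0.0;
    for (int i = 0; i < 4; ++i) {
        double d = runNet(w, in[i][0], in[i][1]) - target[i];
        err += d * d;
    }
    return 4.0 - err;
}

int main()
{
    const int popSize = 50, numWeights = 9, generations = 2000;
    std::vector<std::vector<double> > pop(popSize, std::vector<double>(numWeights));
    for (int i = 0; i < popSize; ++i)
        for (int j = 0; j < numWeights; ++j)
            pop[i][j] = (std::rand() / (double)RAND_MAX) * 4.0 - 2.0;

    for (int gen = 0; gen < generations; ++gen) {
        int best = 0;
        for (int i = 1; i < popSize; ++i)
            if (fitness(pop[i]) > fitness(pop[best])) best = i;

        std::vector<double> parent = pop[best];
        for (int i = 1; i < popSize; ++i) {          // breed mutated copies
            pop[i] = parent;
            for (int j = 0; j < numWeights; ++j)
                if (std::rand() % 10 == 0)           // 10% mutation chance per weight
                    pop[i][j] += (std::rand() / (double)RAND_MAX) - 0.5;
        }
        pop[0] = parent;                             // keep the parent (elitism)

        if (gen % 200 == 0)
            std::printf("gen %d: best fitness %f\n", gen, fitness(parent));
    }
    return 0;
}

The real library has proper population, crossover and selection classes; this is just the bare bones of the approach.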

24/August/2000: Hey all!
Bill Patrick kindly pointed out that every single download link on this site has been busted for god knows how long. They should all work again now.

19/March/2000: A little while ago I did a talk on my bots to the AI group at my Uni. I made some slides for it, so here they are if anyone wants to look at them. Being slides, they're somewhat devoid of content, and they also assume a knowledge of neural nets and genetic algorithms. They're in Microsoft Word format. Here they are:

slides.zip

6/March/2000: 'Sup all. OK, big update this time. First things first: I've got a coding job with a German game company called Crytek. This is the same company that theFatal, author of the Jumbot, is working for. The contract expressly forbids me from working on projects outside of the game :( So this means I can't do the Quake3 port of neuralbot I would have liked to do, and I won't be able to make big changes to the bot.

However, it would be extremely cool if someone else could port my bots to Quake3. The reason I wanna see a port to q3 so much is that q3 comes with those fantastic built-in bots. What I was thinking is that instead of ascertaining the fitness of each bot by letting it play against other neuralbots with similar behaviour, fitness could be ascertained by letting the bot play against the built-in q3 bots. The advantage of this approach is that fitness scores should more accurately reflect how good a bot actually is. In q2, with all the bots playing against each other, high fitness scores could be gifted to bots that displayed gimmicky behaviour useful only for fighting bots with similar behaviour to their own. This is the problem I coded in reference bots to try to solve, but using the built-in q3 bots seems to me a much better solution than using reference bots. So if anyone is interested in porting the bots to q3, please contact me. I will provide full support to anyone willing to do the port.
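
To make the idea concrete, the only thing that really changes is where the fitness number comes from. Here's a very rough sketch; the types and function names are invented placeholders (not real q3 or neuralbot code), and the trial function is a dummy so the snippet stands alone:

#include <cstdio>
#include <cstdlib>
#include <vector>

struct BotNet { std::vector<double> weights; };   // an evolved bot brain

// Stand-in for "drop this net into a bot body and let it play the built-in
// q3 bots for a fixed trial period, then return the frags it scored".
// The real thing would be a timed trial inside the game; this dummy just
// returns a random number so the sketch compiles and runs on its own.
int fragsAgainstBuiltinBots(const BotNet& /*candidate*/)
{
    return std::rand() % 10;
}

// Every candidate is measured against the same fixed, competent opponents,
// so gimmicks that only work against other half-evolved neuralbots
// (the old co-evolution problem) don't earn any fitness.
std::vector<double> evaluatePopulation(const std::vector<BotNet>& population)
{
    std::vector<double> fitness(population.size());
    for (std::size_t i = 0; i < population.size(); ++i)
        fitness[i] = fragsAgainstBuiltinBots(population[i]);
    return fitness;
}

int main()
{
    std::vector<BotNet> population(8);
    std::vector<double> fitness = evaluatePopulation(population);
    for (std::size_t i = 0; i < fitness.size(); ++i)
        std::printf("bot %d: fitness %f\n", (int)i, fitness[i]);
    return 0;
}
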

I've been getting some pretty fat behaviour with the latest evolution of the code (not released yet). As well as the good aimers I posted a demo of below, I also trained up some bots that will very deliberately run to and collect items, even though itemscore is set to 0. In other words, they learnt for themselves that getting items is beneficial for fragging people.
I'll see about releasing this new version soon.

18/Feb/2000: Hello. Here are some demos: the first zip contains some hilarious demos that I recorded back when my bots had more bugs than neurons. The second zip is a demo of some bots I trained up recently with a new version of the code. They have pretty much mastered the art of aiming in 3 dimensions, at least with the machinegun.

olddemos.zip

goodaim.zip

To play the demos, unzip them to Quake2/baseq2/demos, and then type

map goodaim.dm2

or whatever the demo is called.

12/Jan/2000: Dammit where's that Q3 source?!

Well I was just browsing back through some old messages from the neuralbot mailing list, and noticed some stuff about back-propagation. I implemented back-prop learning for my bots a while ago, so I thought I'd tell you what happened.

Aahh, back-propagation. For those who don't know, back-propagation is short for back-propagation of error. Apparently something like 80 or 90 percent of NNs used for various applications use back-prop in one form or another.

Back-prop is a supervised learning algorithm. This means that there must be a supervisor telling the NN exactly what it did wrong and what it did right. For backprop, you need to know what the outputs of every single output neuron were supposed to be, for a given set of inputs. Compare this with the genetic-algorithm learning I usually use, which only needs ONE number describing how well the NN did. For the genetic algorithm, you don't need to know what the output of each neuron should be. This is one of the reasons why I like GA learning better than back-prop learning.

So to teach a bot NN with back-prop I needed 'correct' sets of input and output values for all the input and output neurons. Such a set is called a training pair. Actually you need several hundred or thousand training pairs for a NN the size of my bot NNs. My approach was to sample a human player's behaviour to generate the training pairs. Here was my plan (there's a rough sketch of the sampling loop after the list):

* Set up a neural network for the human player with the same dimensions as the bots' NNs.

at a certain instant, say one frame in time:

* run the input functions on the human player as if it was a bot and record what they return.

wait for a little while (say human reaction time), and then:

* encode the human player's output at this time in terms of bot output functions (on/off for each one)

so now there would be a corresponding set of inputs and outputs.

* Train the neural network with backpropagation so that when the inputs are fed into the network, the recorded outputs result.

after a good solution has been generated:

* transfer the human player's NN weights to the bot NNs.
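
For what it's worth, here's a rough sketch of how the training-pair collection might look in code. The hooks are invented stand-ins with dummy bodies (the real versions would call the bot input functions and read the human's movement/firing state inside the game), and the 0.2 second reaction time is just an assumed value:

#include <cstdio>
#include <deque>
#include <vector>

// One back-prop training pair: what the bot input functions returned, and
// what the human was doing a reaction-time later, encoded as bot outputs.
struct TrainingPair {
    std::vector<double> inputs;
    std::vector<double> outputs;   // on/off for each output function
};

// Dummy stand-ins so the sketch is self-contained.
std::vector<double> sampleInputFunctions() { return std::vector<double>(20, 0.0); }
std::vector<double> encodeHumanOutputs()   { return std::vector<double>(8, 0.0); }

static const double FRAME_TIME    = 0.1;   // 10Hz game frames
static const double REACTION_TIME = 0.2;   // assumed human reaction time, seconds

// Called once per frame while the human plays: record what the bot would
// have 'seen', and pair it with what the human actually did a little later.
void collectTrainingPair(std::deque<std::vector<double> >& pendingInputs,
                         std::vector<TrainingPair>& pairs)
{
    pendingInputs.push_back(sampleInputFunctions());

    if (pendingInputs.size() * FRAME_TIME >= REACTION_TIME) {
        TrainingPair p;
        p.inputs  = pendingInputs.front();   // inputs from ~0.2s ago
        p.outputs = encodeHumanOutputs();    // what the human is doing now
        pendingInputs.pop_front();
        pairs.push_back(p);
    }
}

int main()
{
    std::deque<std::vector<double> > pending;
    std::vector<TrainingPair> pairs;
    for (int frame = 0; frame < 1000; ++frame)   // pretend 100 seconds of play
        collectTrainingPair(pending, pairs);
    std::printf("collected %d training pairs\n", (int)pairs.size());
    // Ordinary back-prop then runs over 'pairs' (forward pass, output error,
    // propagate it back, nudge the weights), and once the error is low the
    // trained weights get copied into the bots' nets.
    return 0;
}
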

Well it took me AGES to code, and it didn't really end up working well. The actual backprop algorithm was working fine; I think it was just that my human behaviour was too complex to be captured by such a simple neural network.

I could teach the bot simple things, like if I jumped around a lot then all the bots would jump around a lot, and I could even teach the bot simple associations, like firing whenever it was looking at someone. But stuff like level navigation and aiming they just wouldn't learn.

It would have been cool, but the thing about back-prop is that the neural-network model has to be of a certain sort, and not too big, for back-prop to work well. For instance, as far as I know, backprop cannot be used to train a fully interconnected (recurrent) neural network, where each neuron is connected to every other neuron.
Also, back-prop can't be used to train the net if it uses stuff like diffusible gas.

Back-prop is very un-biological (there ain't no back-prop in the brain), and I generally just like genetic-algorithms better :).

So there ends the tale of my attempt to use back-prop :)

24/December/99: Wow, nearly one month since I last updated the site. I'm off on a Christmas/New Year's holiday tomorrow, which will include dancing in the new year at the Gathering, a dance event over here in New Zealand. Oh yeah!

Well, I've been messing around with diffusible gas-nets; I've coded in the system described in this paper over at the diffusible gas-nets home page, with a few small modifications. One of the cool things described in this paper is how every property of the net is defined in terms of neurons - there is no synapse/connection weight matrix.

Another cool thing in the paper is the genetic algorithm scheme they use, which I also coded into my bots. Instead of the size of the GA population being the same as the number of bots, there is a large (like 100-strong) population, and a random selection of nets from a certain area of the population is 'inserted' into bot bodies for trialling. The modified nets are then inserted back into the population, and the cycle repeats. Kind of like a 'virtual' population.
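
Here's roughly how I picture one cycle of that scheme in code. This is just my reading of the idea, with made-up names and a dummy trial function so it stands alone:

#include <cstdlib>
#include <vector>

struct Genome {
    Genome() : fitness(0.0) {}
    std::vector<double> genes;   // the encoded net
    double fitness;
};

// Stand-in for "put this net in a bot body and let it play for a while".
// The dummy body just returns a random score so the sketch is self-contained.
double trialInBotBody(const Genome& /*g*/)
{
    return std::rand() / (double)RAND_MAX;
}

// One cycle of the 'virtual population' scheme: the GA population is much
// bigger than the number of bots in the game, so each cycle only a handful
// of randomly selected genomes get bodies, are trialled, and go back into
// the population with their new fitness. Breeding/mutation of the trialled
// genomes would also happen here before they are reinserted.
void runOneCycle(std::vector<Genome>& population, int numBots)
{
    for (int b = 0; b < numBots; ++b) {
        int pick = std::rand() % (int)population.size();
        population[pick].fitness = trialInBotBody(population[pick]);
    }
}

int main()
{
    std::vector<Genome> population(100);      // the large 'virtual' population
    for (int cycle = 0; cycle < 10; ++cycle)
        runOneCycle(population, 8);           // say 8 bot bodies in the game
    return 0;
}
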

I'm still to get a decent bot with this gas-net system though :).

Hopefully by the time I get back from holiday the Q3 source will be out, and the bots can be ported. Those built-in Q3 bots should be extremely useful as CPU-minimal reference bots :)

26/November/99: Yay here it is - version 0.6! Man that took long enough. Go download it in the download section. The download comes with a new custom map by Castle called nb01, which is a great map to train the bots on. There's also some new source-code to go with it.

So what's new from version 0.5? Too much stuff to list... I've rewritten most of the code now :) Just go download it!

Also while you're downloading scroll down and join the neuralbot mailing list.

24/November/99: Neuralbot discussion mailing list! This one is an interactive one, so anyone can send messages to it. Feel free to use it for general AI/neural-net discussions as well.


24/November/99: Yay version 0.6 is nearly ready. I'm just doing a bit of testing before I release it. Looks like the release will probably be overshadowed by the Q3arena release. Oh well :)

16/November/99: Yay my exams are over! The bot code is at that annoying stage where I want to add lots of cool stuff to it, but instead I have to just arduously mess round with the current code until it is in a releasable state.

OK how about this:
Once the new version comes out, people send in dna, and I run a bot tournament to see which dna is the best. I was thinking it could be a knockout tournament, where each match would be an 8-player FFA deathmatch on a (simple) custom map. In each match, 4 bots would have one dna file, and the other 4 would have another dna file. The frags of each group of 4 bots would be added together, and the winning dna file would be the one with the most frags.

I could record a demo of each match, and post the demo and the results on the website.

I guess for the tournament to be interesting the bots will have to be able to evolve some relatively interesting behaviour (not just spinning and shooting). This should (cross-fingers) be possible with the new version.

What do you guys think?

28/October/99: I'm right in the middle of exams now, which is never good. I think that after the exams, I'm gonna change the neural network architecture that my bots use to a much more biological form. I'd like to make the net fully recurrent; that is, I'd like every neuron connected to every other neuron in the net. This will allow for cool stuff like chaotic feedback :). I hope to also add in stuff like diffusible gas as a neurotransmitter, and perhaps even advance time in the NN at a greater frequency than 10Hz.
I hope to release version 0.6 before this however, if I can just get the bugs out of it :)
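
Anyway, to sketch what 'fully recurrent' means in code (this is just an illustration of the general idea, not what the bots actually use yet):

#include <cmath>
#include <cstddef>
#include <vector>

static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Every neuron's new activation depends on EVERY neuron's previous
// activation (including its own), so activity can feed back on itself
// from one update to the next.
struct RecurrentNet {
    std::vector<double> state;                  // one activation per neuron
    std::vector<std::vector<double> > weights;  // weights[i][j]: neuron j -> neuron i

    void step()
    {
        std::vector<double> next(state.size());
        for (std::size_t i = 0; i < state.size(); ++i) {
            double sum = 0.0;
            for (std::size_t j = 0; j < state.size(); ++j)
                sum += weights[i][j] * state[j];
            next[i] = sigmoid(sum);
        }
        state = next;
    }
};

// Advancing the net faster than the 10Hz game rate would just mean calling
// step() several times per game frame, e.g. 5 steps per frame for 50Hz.
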

Here's a little bit on diffusible gas nets:

Diffusible GasNets Home Page

16/October/99: Another very interesting piece of writing on the philosophy of AI, this time from the guy who created the game Creatures.

Three observations

Also here is a link to his homepage.

14/October/99: Time I updated the page I guess :)
I've coded in proper client emulation for the bot now, which means the bots can use all the weapons, crouch, swim (although I haven't tested this yet), slide along walls, jump in directions other than vertically up, have different skins/models, stuff like that. I'm working towards a release relatively soon. I've scrapped one of the new learning methods I'm working on and started it again in a 'new and improved' fashion. The other one is chugging along. These new learning methods probably won't be in the next release, unless by some stroke of luck everything falls into place and they start working perfectly.

A couple of websites:
The new speech recognition net
I imagine a lot more attention is going to be focused on more biologically plausible nets after this baby :)

Generation5: Artificial Intelligence Repository
This site looks VERY cool. Haven't really had a good look at it though. Thank you to cOre for this one.

29/September/99: Added a couple of new links to the sites section: The New Scientist AI and A-Life page and the RoboKoneko page.
The bots are coming along very nicely; they can beat my friend now - as long as he doesn't crouch :)

26/September/99: I found this very interesting article on a recent late-night internet-trawl:

How long before superintelligence?

Personally I think his conclusion is overly optimistic due to these assumptions:

"It does seems plausible, though, to assume that only a very limited set of different learning rules (maybe as few as two or three) are operating in the human brain."

"It seems like a good bet though, at least to the author, that the nodes could be strongly simplified and replaced with simple standardized elements. "

12/September/99: I ought to say there are a couple more things for the next release:

*    Fixed 'tried to cprintf to non-client' bug

*    Added server commands for time acceleration (normal, lotsfaster etc..)

10/September/99: Added new link to the NN FAQ in the sites section.

7/September/99: The links to both files didn't work, sorry about that! They should be fixed now. Thanks to those who alerted me.

7/September/99: Well the bot's going well: I've added in lotsa new smallish features and some new big features, which I'm still working on. I've also got a custom map and a couple of demos for download.

I'm using a sigma activation function now for the neurons, which seems to be helping quite a bit. Other new features:

*    Generally improved the quality of the input functions.

*    Generation counter saves and loads from save files.

*    Working on adding in Paul Jordan's chasecam ripped from the eraserbot source :)

*    Adaptive 'evolve_period' to get just the right amount of sumfitness.

*    'Crossoverwith xxxx.dna' command - crosses over the dna of the bots playing with the dna from a saved .dna file.

*    Optional (of course) fitness for inflicting damage

*    'Invisible to bots' mode - for non-interfering observation.

And the good shit:
*    A couple of exotic new Learning methods :)

*    Bots now run at player running speed.

Thanks to all the people who have suggested features.

I've got a map made just for neuralbots (nice and simple), constructed by Abraxis. Basically it's a big hollow box with a few weapons etc. sprinkled round. The bots seem to like it. Grab it here:

nbtraining.zip (37 KB)

I've also got a couple of demos recorded from bots I trained up on this level, using the sigma activation & improved input function code. You will need the map above to view the demos (nbtraining.bsp has to be in your /baseq2/maps directory). To play the demos, unzip them into your Quake2/baseq2/demos directory. If you don't have a demos directory, make one. Then type

map nb1.dm2

in the console to play the demo.

nbdemos.zip (431 KB)

Demo nb1 -
These bots were trained up with freerails, evolve period either 200 or 300, mutation chance 10%, num_synapses_to_mutate 400. I forget if I allocated them fitness for items: if not, then these bots have learnt that items are good, which is very cool. They got through 8741 generations.

Demo nb2 -
These bots were trained up without freerails, using the new adaptive evolve-period code - the period adapted so that the sumfitness per generation was always about 500. This turned out to be a period of about 65 secs. The fitness was 10 for a frag and 10 for an item. Mutation rate was as above. These bots were trained for 18240 generations (about 3 nights).
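
I won't go into exactly how the adaptive evolve-period code works here, but the basic idea of adapting the period to hit a target sumfitness could look something like this (purely a guess at one possible scheme, with made-up numbers for the clamps, not the actual neuralbot code):

#include <algorithm>

// Adjust the evolve period (seconds per generation) so that each generation
// collects roughly the target amount of sumfitness.
double adaptEvolvePeriod(double currentPeriod,
                         double lastSumFitness,
                         double targetSumFitness)   // e.g. 500
{
    if (lastSumFitness <= 0.0)
        return std::min(currentPeriod * 2.0, 600.0);   // nothing scored: wait longer

    // Scale the period in proportion to how far off the last generation was,
    // clamped so one lucky or unlucky generation can't swing it wildly.
    double scale = targetSumFitness / lastSumFitness;
    scale = std::max(0.5, std::min(2.0, scale));
    return std::max(10.0, std::min(600.0, currentPeriod * scale));
}
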

Catchya all later.

    

15/August/99: Wow, I've been receiving a ton of emails! Keep the suggestions coming...
    I've already implemented some of the suggestions in the next release of the bot. Also I'm working on something which should be mind-numbingly cool :)
   On another note, Nicholas Lawson informs me that:

"It's much easier to train bots of you run a dedicated server:

quake2.exe +set game nb +set dedicated 1 +set cheats 1 +set deathmatch 1 +map q2dm1

then run a client:

quake2.exe +set game nb +connect 127.0.0.1

to connect to the game, then in the client, type the usual commands like addbot, etc...
then the client can quit, leaving the dedicated server running (and the bots continue training in the background!)
It also uses up less CPU power."

Very cool..

9/August/99: Site up and running, hosted by the excellent botepidemic.
    I've been delaying the opening of this site so that I could get the code to a state where anyone who wants to train up a bot can do so. Hopefully I've succeeded.
    Please bear in mind that neuralbot is a project I'm still working on - it learns, but not particularly well. I'm talking about hours of training for a simple 'turn and shoot at opponent' behaviour - if you're lucky.
    Anyway, enough negativism. Have fun - and I hope the bot & sourcecode inspires and interests everyone out there.
