Kerneltrap has a discussion of proposed patches to the Linux kernel that would enable it to use genetic algorithms in the CPU and I/O schedulers. Genetic algorithms self-optimise over time for particular kinds of workloads, so this could be one route to smarter kinds of scheduling. An interesting idea, though it remains to be seen what will come of it.
It could yield interesting results. GAs were regarded as the best known optimizing metastrategy during the ’90s.
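For readers unfamiliar with the technique, the basic loop is easy to sketch. Here is a toy GA in Python that "evolves" a single hypothetical scheduler knob (a timeslice in milliseconds) against a made-up fitness function; the real patches tune actual kernel parameters against measured workload performance, so everything below is illustrative only:

```python
import random

# Hypothetical fitness: how well a given timeslice (in ms) suits a workload.
# The real kernel code would measure throughput/latency; here we simply
# pretend the optimum is at 20 ms.
def fitness(timeslice_ms):
    return -abs(timeslice_ms - 20)

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Start from a random population of candidate settings.
    pop = [rng.uniform(1, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2              # crossover: blend two parents
            child += rng.gauss(0, 2)         # mutation: small random nudge
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # converges close to the (pretend) optimum of 20 ms
```

The point of the sketch is that nothing in the loop knows *why* 20 ms is good; the population just drifts toward whatever the fitness function rewards, which is both the appeal and the danger discussed below.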
To be clear, the subject isn’t to claim that GAs are a beast; it’s a warning.
You must make absolutely sure a GA doesn’t turn into a beast, and that happens faster than you’d imagine. The more complex the GA code is, the more unexpected results can happen, turning the so-called optimised code into almost useless crap.
I’ll give an example:
Remember the game Creatures? A GA drives the virtual creatures, and you educate them.
That code base has been used for experiments. They put a virtual creature into a flight simulator and let it learn. Indeed, the creatures learned, and found the most secure way to fly.
The creatures learned that if they rotate during flight, the flight is at its most stable. And this is physically correct.
But such a result was never expected, since humans cannot rotate during flight.
Last but not least:
GA is good, but only if carefully used.
Not a very lively one.
As for the patches, they sound very promising! Another step towards the future of self-modifying, self-tuning kernels. *drool*
Who knows, in the far future a kernel may be able to learn not only how to better tune itself, but also how to use new hardware without necessarily needing a new driver and to actively defend itself against malicious attackers.
GAs are a strange – thus interesting – idea. But…
> Who knows, in the far future a kernel may be able to learn not only how to better tune itself, but also how to use new hardware without necessarily needing a new driver and to actively defend itself against malicious attackers.
What if the code is modified so it adapts itself to leave security holes or scrozzle the system?
add ‘and self-distributable’ to the list and there you go.
Wow, this is some really interesting stuff. What is this Creatures game you were talking about, sir unknown?
> Remember the game Creatures? A GA drives the virtual creatures, and you educate them.
> That code base has been used for experiments. They put a virtual creature into a flight simulator and let it learn. Indeed, the creatures learned, and found the most secure way to fly.
> The creatures learned that if they rotate during flight, the flight is at its most stable. And this is physically correct.
> But such a result was never expected, since humans cannot rotate during flight.
I do remember the game, but not the event you describe. Can you explain what you mean by “rotate”, please?
> Wow, this is some really interesting stuff. What is this Creatures game you were talking about, sir unknown?
http://www.google.com/search?q=Mindscape+Creatures
Nice idea, it’s good to see a bit of lateral thinking. The only issue I can see is the code becoming unmaintainable – I’ve heard stories of genetic algorithms evolving very efficient solutions, but ones that are totally incomprehensible to us. I think that was in relation to electronic circuits rather than code though.
drsmithy: I do remember the game, but not the event you describe. Can you explain what you mean by “rotate”, please?
I assume he means rotate along its length, like a bullet, and for the same effect.
boily: What if the code is modified so it adapts itself to leave security holes or scrozzle the system?
When using GAs you usually cull to the most successful organism and then introduce random mutations from there. The criteria for “successful” aren’t going to include security holes…
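The cull-and-mutate loop described above is essentially what the evolutionary-computation literature calls a (1+λ) strategy: keep only the single best organism each round, then breed mutated copies of it. A minimal Python sketch, with a toy fitness function and a hypothetical tuning parameter (nothing here is from the actual patches):

```python
import random

def cull_and_mutate(fitness, start, rounds=100, offspring=10, seed=1):
    """Keep only the most successful organism each round, then mutate it."""
    rng = random.Random(seed)
    best = start
    for _ in range(rounds):
        # Introduce random mutations from the survivor.
        brood = [best + rng.gauss(0, 1) for _ in range(offspring)]
        # Cull down to the most successful organism. The parent is included,
        # so the best-seen fitness can never regress.
        best = max(brood + [best], key=fitness)
    return best

# Toy fitness with its optimum at 42 (a stand-in for some scheduler knob).
result = cull_and_mutate(lambda x: -abs(x - 42), start=0.0)
```

Because the parent always survives the cull, this variant is monotonic: a mutation that makes things worse is simply discarded, which is exactly why "successful" criteria that ignore security would never *select for* a hole, though they also wouldn't select against one.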
This might be a good idea if things never changed. But when you change the kernel’s workload, the GA will also have to modify itself to accommodate the new workload behaviour. The problem with GAs is that they can be quite temperamental: when this was done with chip design, the evolved chip would work within a narrow temperature range (say 30–32 degrees), but if the environment fell outside of this the chip no longer operated. So the problem would be that if the GA has optimised itself 100% for its present workload and a new job is then added, efficiency could quite possibly drop dramatically.
Kernels use more general and less specific code for a reason, i.e. in most cases the general code is a good fit for all the tasks a system might have to process, while GAs are geared towards optimisation based on the current workload, not taking future requirements into account. Adapting to a new requirement will then carry a performance penalty that can’t be estimated unless you always keep the GA working within a set model of operational requirements.
The only problem I see with GAs is the adaptation process: how quickly can it adapt? What happens if it adapts too quickly to a one-off type of system processing, or is too slow to change?
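The specialisation worry is easy to make concrete with numbers. In this toy model (made-up quadratic cost curves, entirely hypothetical), a setting tuned perfectly for one workload is far worse than a compromise setting once the workload changes:

```python
# Hypothetical cost models for one tunable (a timeslice): lower is better.
# The I/O-bound workload's optimum is 10, the CPU-bound workload's is 60.
def cost_io_bound(timeslice):
    return (timeslice - 10) ** 2

def cost_cpu_bound(timeslice):
    return (timeslice - 60) ** 2

specialised = 10   # a GA that converged 100% on the I/O-bound workload
generalist = 35    # a hand-picked compromise setting

# On the workload it was tuned for, the specialist wins...
assert cost_io_bound(specialised) < cost_io_bound(generalist)
# ...but when a new kind of job arrives, its cost explodes.
assert cost_cpu_bound(specialised) > cost_cpu_bound(generalist)
```

The same shape of trade-off is why kernels favour general code: the generalist is never optimal, but it is never catastrophic either.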
Keep in mind that all we’re talking about in this case is a GA that makes small adjustments in kernel tuning. I imagine it will be decades at least before GAs are allowed to modify the kernel without boundaries, if ever, so there is probably not much cause for concern. I’m sure this will be approached very conservatively.
Everything is, in the end, limited by physical boundaries. It will be interesting to see, though, when the GA could live a life of its own, and what new tuning settings not in the initial matrix it would come up with. Similar to the Conway’s Game of Life clone we had to code in high school.
> The criteria for “successful” aren’t going to include security holes..
That’s a problem. It can’t be done, at least not in the general case. Have a look at the “Halting Problem” for details:
http://en.wikipedia.org/wiki/Halting_problem
It sounds difficult to make sure that OS parameters set by a GA are reliable in the long term…
It depends on too many variables, and as a result it could lead to an incomprehensible solution.
A GA in the kernel would create a “probabilistic operating system”.
Is that really what we want?
Olive
Myself, I found that Birmingham and Bristol universities have a good selection of resources for those interested:
http://www.cs.bris.ac.uk/Teaching/Resources/COMSM0302/index.html
http://www.cs.bham.ac.uk/~axk/teaching.htm
Use of Genetic Algorithms in Efficient Scheduling for Multi Service Classes
http://research.ac.upc.es/EW2004/papers/15.pdf
Scheduling parallel processes Using GAs
http://www.science.uva.nl/research/scs/papers/archive/Meijer2004a.p…
A Genetic Algorithm-Based Methodology for Optimizing Multiservice Convergence in a Metro WDM Network
http://www.fulton.asu.edu/~mre/metro_opt.pdf
A sensible idea is to test various workloads during the development of a kernel, find the various optimal settings, and then make these static in the code/compiled kernel.
As people have already mentioned, runtime switching of scheduler policies is a good thing, and this would allow the kernel to switch between various pre-computed optimal settings for various tasks.
>> The criteria for “successful” aren’t going to include security holes..
>That’s a problem. It can’t be done, at least not in the general case. Have a look at the “Halting Problem” for details:
The Halting problem is completely different. If my system only allows secure actions to take place, there is no way you can compromise it (there will be implementation flaws though). For example, if the kernel was evolved in Java (as an example) with the highest-security settings, the VM will not allow insecure operations to take place, such as buffer-overflows which are trivial to make in C/C++.
> For example, if the kernel was evolved in Java (as an example) with the highest-security settings, the VM will not allow insecure operations to take place, such as buffer-overflows which are trivial to make in C/C++.
No, this just shifts the likely point of failure. The writers of the VM still need to be as careful with their code as the kernel writers currently have to be with their C/ASM code. In fact by adding another layer of abstraction bugs leading to insecure operations will likely be more difficult to find (Is it the kernel code or the VM code?).
Just because you’ve shifted the point of failure, does not mean it’s wrong. This example means you can generate any Java byte-code you want and the byte-code will always be secure by definition because it won’t allow you to do insecure things. Yes, the VM might have flaws, but these can be found and fixed in one single place. If you find a byte-code flaw, it means every single byte-code program out there could potentially contain a flaw. C programs allow hundreds of ways to cause undefined and exploitable behaviour and you can never, ever be sure a C program does not contain an exploit. With the VM approach, you’ve just got to fix the VM once to fix, say, 1000 programs when you find a VM flaw.
It’s the right way to do it. The byte-code is designed to not have any security flaws or undefined behaviour. The VM is designed to run the byte-code. Of course, nothing is ever going to be bug free so the VM _will_ have flaws in it but it does isolate security exploits to one program (the VM) and not every single byte-code program you can ever produce. This is much better than the alternative.
By the way, I’m not saying to use Java in the kernel. I’m just giving an example that it is possible to write applications that are secure.
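The managed-runtime point is easy to demonstrate in any bounds-checked language; Python stands in here purely as an analogy for a VM with the kind of checks described above:

```python
# In C, writing past the end of a buffer silently corrupts memory (the
# classic buffer overflow). In a managed runtime, the same access is
# refused by the runtime itself before it can do harm.
buf = bytearray(4)

try:
    buf[10] = 0xFF           # attempt to write past the end of the buffer
    overflow_caught = False
except IndexError:
    overflow_caught = True   # the runtime rejected the insecure write
```

Whatever generated code (evolved or otherwise) runs on top of such a runtime, the out-of-bounds write is stopped in one place, which is exactly the "fix the VM once" argument made above.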
Depending on the problem a GA is asked to deal with, it may potentially wander through many useless solutions until it reaches a set that leads it to the correct offspring and solutions.
Convergence in GAs is a pain in the neck (at least). I suppose the problem of kernel optimisation cannot be optimally solved with any of the classic methods, and that is why people are starting to use GAs… Another good reason to set a GA on such a task is simply to have fun, no matter what the results are 🙂
It would be very nice if we had some graphs of fitness/generation….
Anyone knowing any links?
Hear my words: DO NOT PUT NON DETERMINISTIC CODE INTO KERNELS! Only terrible things will happen.
GAs and other non-deterministic algorithms ARE interesting and useful, but I think they don’t belong in kernels, for the same reason you won’t find preemptive systems in the critical control units of your car, a plane, etc.
That’s only my opinion, though.
> GA is used for the virtual creatures and you educate them.
> That code base has been used for experiments. They put a virtual creature into a flight simulator and let it learn. Indeed, the creatures learned, and found the most secure way to fly.
> The creatures learned that if they rotate during flight, the flight is at its most stable. And this is physically correct.
> But such a result was never expected, since humans cannot rotate during flight.
Well, the description of the experiment is not accurate, though it is amazing. If anyone is interested:
http://csci.mrs.umn.edu/UMMCSciWiki/pub/Evolutionaryrobotics/Evolve…
(See section 5.2, in page 4)
If there existed a driver with generic information about the hardware, chromosome types seeded with that best information could make the adaptation process more agile.