The Search Algorithms Package

This package contains the implementations of the Common Search API and of search algorithms such as Genetic Algorithms.
algorithms.gas depends on the following packages:

The api package contains the specification and base classes of the Search Algorithms Layer of the DGPF. The Search Algorithms Layer provides easy-to-use tools to build various multi-dimensional search algorithms and to implement simulation cores that can be shared among all of them. You can thus compare search algorithms directly and select the one that best suits your problem.
The search engine utilities provide a basic abstraction for search algorithms. SearchEngine defines the interface for starting, aborting, waiting for, and configuring a search, as well as an event API that allows you to supervise the search's progress.
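The engine/listener pattern described above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (UpdateListener, SimpleEngine), not the actual DGPF API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an engine that fires an event after each round
// ("update") and can be aborted; names are illustrative only.
interface UpdateListener {
    void onUpdate(int round, double bestFitness);
}

class SimpleEngine {
    private final List<UpdateListener> listeners = new ArrayList<>();
    private volatile boolean aborted = false;

    void addListener(UpdateListener l) { listeners.add(l); }
    void abort() { aborted = true; }

    // Runs a fixed number of rounds, notifying all listeners after each one.
    void run(int rounds) {
        for (int r = 1; r <= rounds && !aborted; r++) {
            double bestFitness = 1.0 / r; // placeholder for real search progress
            for (UpdateListener l : listeners) l.onUpdate(r, bestFitness);
        }
    }
}
```

A caller would register a listener before starting the run and could call abort() from another thread to stop the search early.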
SearchContext defines how the data representation of the solution candidates can be modified. SearchContext is the hard core that performs the search's work. It might be useful to instantiate more than one SearchContext on one machine, for example one for each processor. For one data type/problem domain, you provide some basic methods in the form of a class derived from SearchContext. Instances of that class can then be used by any search algorithm implementing our search API. Therefore, you only need to define your problem domain once and can then work on it using all available algorithms.
All search algorithms that implement our API are multi-dimensional by default, meaning that an individual's fitness is determined not by one single fitness function but by a finite set of them. We provide a blueprint of what such fitness functions should look like. With multi-dimensional fitnesses, you cannot compare individuals in the same manner as if the fitness were a single value. Therefore, you can provide user-defined individual comparators (or pick them from the set of predefined comparators) to direct the search.
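A common way to compare individuals under multi-dimensional fitness is Pareto dominance: one fitness vector dominates another if it is at least as good in every dimension and strictly better in at least one. The following is an illustrative sketch (assuming smaller values are better), not the DGPF comparator API:

```java
// Pareto-dominance comparison of two fitness vectors, smaller = better.
class ParetoComparator {
    /** @return -1 if a dominates b, 1 if b dominates a, 0 if incomparable or equal */
    static int compare(double[] a, double[] b) {
        boolean aBetter = false, bBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < b[i]) aBetter = true;
            else if (a[i] > b[i]) bBetter = true;
        }
        if (aBetter && !bBetter) return -1;
        if (bBetter && !aBetter) return 1;
        return 0; // equal, or mutually non-dominating
    }
}
```

Note that such a comparator only induces a partial order: two individuals can be incomparable, which is exactly why multi-objective searches need dedicated selection strategies.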
Searches implementing our search API are adaptable down to the bone. They are round-based (we call each round an "update"). For example, one update of a Genetic Algorithm is one generation. After each update, the parameters of a search can be modified manually or adapted automatically, according to the rich statistical data provided. Each search algorithm may provide additional adaptable functionality.
SearchEngine fires an event after each update, containing the current state of the search, including detailed statistical data. Each search algorithm may provide additional statistical data.
Utilities such as tasks that can be distributed over a network and then executed by a SearchContext, and queues that allow you to manage such tasks, help in implementing different parallel or distributed search algorithms.
There is also a peer-to-peer extension that can be integrated into arbitrary search algorithms. It enables them to exchange individuals dynamically, based on a policy. Therefore, search algorithms of different kinds can cooperate, creating a powerful heterogeneous search.
All search algorithms implemented using the Common Search API come automatically equipped with an implicit, auto-adaptive, configurable Tabu Search backend. It uses a cache to check whether new individuals have already been created before. If so, they are disposed of right away and the creation is repeated. Since this backend may execute on a search server, it reduces network traffic and can improve search speed significantly by enforcing a broader exploration of the problem space.
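The core of such a duplicate-rejecting cache can be sketched in a few lines. This is a minimal illustration of the idea (remember what was already seen, reject repeats), with a hypothetical class name, not the actual DGPF backend:

```java
import java.util.HashSet;
import java.util.Set;

// Toy duplicate-rejecting cache: an individual is accepted only the first
// time it is seen; repeats are rejected so their creation can be retried.
class TabuCache<T> {
    private final Set<T> seen = new HashSet<>();

    /** @return true if the individual is new and accepted, false if it is a repeat */
    boolean accept(T individual) {
        return seen.add(individual); // Set.add returns false for duplicates
    }
}
```

A real backend would additionally bound the cache size and adapt its policy, but the accept/reject decision shown here is the essential mechanism.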
algorithms.ga provides the basic ability to genetically evolve solutions for any type of problem. Multi-objective Genetic Algorithms are implemented using different distribution schemes in this package, which is based on the versatile search API provided by the Search package.
The idea behind genetically evolving problem solutions is the following: you have an idea of some qualities a thing should have, but you don't know what this thing is or how it should be created. The genetic approach might answer this question. At first, a set of random things is created. From this set we take those things that match your idea best (using the fitness function). These we modify slightly or combine with each other to obtain a new set of things. This we do again and again, until we have found the ideal thing or we are close enough to what we want. For more information, visit the Genetic Programming Group at http://groups.google.de/group/geneticprogramming/ or, for example, http://www.genetic-programming.org/.
Genetic algorithms can also be performed on a single computer; for this purpose, the LocalGeneticEngine is intended.
Genetic algorithms have a high potential for parallelization, and many common methods for parallelizing them are known. In "island hopping", for example, many populations of solution candidates, so-called individuals, are evolved on different machines. By exchanging their best individuals, the populations are loosely coupled. This mechanism is implemented in a genetic engine called P2PGeneticEngine, where P2P means peer-to-peer, because it basically realizes a peer-to-peer network.
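The migration step of the island model can be illustrated with a toy ring topology: each island sends a copy of its best individual to its neighbor. This is a purely local sketch with invented names (IslandRing, individuals reduced to their fitness values, smaller = better); the P2PGeneticEngine performs this exchange over a network:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy "island hopping": each island emigrates a copy of its best individual
// (here simply its smallest fitness value) to the next island in a ring.
class IslandRing {
    static void migrate(List<List<Double>> islands) {
        List<Double> emigrants = new ArrayList<>();
        for (List<Double> island : islands) {
            emigrants.add(Collections.min(island)); // pick the island's best
        }
        int n = islands.size();
        for (int i = 0; i < n; i++) {
            islands.get((i + 1) % n).add(emigrants.get(i)); // immigrate into neighbor
        }
    }
}
```

After a migration round, every population contains a promising individual from its neighbor, which is what loosely couples otherwise independent evolutions.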
We support four different types of distribution of computational load for our genetic algorithms: locally running, client/server, peer-to-peer, and a p2p-client/server hybrid.
algorithms.ga comes with very simple interfaces that allow you to unleash the potential of genetic algorithms without having to think about networking. Any sort of thing/problem can be evolved; you just need to make sure that your individuals are serializable, specify a fitness function, and provide reproduction facilities.
The GA package also allows you to customize the parameters of the evolution in a simple manner: you can adjust the mutation/crossover rate, the selection algorithm, the population size, the emigrant and immigrant ratios, and so on, in every generation, basing your decisions on the extensive statistical data gathered. You can also implement a class that performs those decisions automatically, or you can use the default values. You may restrict the generation count or the evolution time, as well as specify an ideal-threshold fitness value (the evolution stops when an individual with such a fitness, or an even better one, occurs).
The GA provides full support for multi-objective Genetic Algorithms. To be precise, one can specify an arbitrary count of fitness functions to be evaluated for each individual. This allows you to evolve individuals that have more than one desired property optimized. To support this, the evaluation function of the genetic environment is simulation-based. Each fitness function may create a data record for the individuals being evaluated. This record serves as a repository for information on the individual while it is simulated. A simulation consists of a specified count of steps. Each fitness function may specify a count of steps after which it will inspect the state of the simulation and update its records.
At the end of a simulation, each fitness function evaluates its records and determines one fitness value. To obtain stable results, each individual is simulated more than once. After all simulations of an individual are done, statistical records of the single fitness values assigned are provided to the fitness functions so that they can compute the final fitness value. By default, this will be the median of the different test results (or the average if only a few simulations have been performed).
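The default aggregation described above (median of repeated simulation results, average for very few samples) can be sketched as follows. The class name and the threshold of four samples are assumptions for illustration, not the DGPF defaults:

```java
import java.util.Arrays;

// Sketch of median-based aggregation of repeated simulation results,
// falling back to the average when only a few samples exist.
class FitnessAggregator {
    static double finalFitness(double[] samples) {
        if (samples.length < 4) { // few samples: the average is used
            double sum = 0;
            for (double s : samples) sum += s;
            return sum / samples.length;
        }
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int mid = sorted.length / 2;
        return (sorted.length % 2 == 1)
                ? sorted[mid]                          // odd count: middle value
                : 0.5 * (sorted[mid - 1] + sorted[mid]); // even count: mean of middle two
    }
}
```

The median is preferred over the average for larger sample counts because it is robust against single outlier simulations.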
The statistical evaluation is rich: for each distinct fitness function, there exist statistical records of the final fitness values of the population's individuals. After receiving all single final fitness values, the client calculates one overall fitness value per individual. This value then goes through different filters, allowing the evolution to be controlled or steered in a specific direction. Various distribution methods for Genetic Algorithms are available, like "island hopping", which would now perform emigration and immigration.
Nevertheless, the GA package can also be used for simple tasks such as finding the root of an arithmetic function, which is presented as an example, among others, in the Examples package, also available online.
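To make the root-finding idea concrete, here is a self-contained toy GA in the same spirit: it evolves a value x to minimize |x² − 2|, i.e. toward a root of f(x) = x² − 2. All names and parameters (population size, mutation width, elitism) are invented for this sketch and do not reflect the DGPF API:

```java
import java.util.Random;

// Toy GA: evolve x to minimize |x*x - 2|, approximating sqrt(2).
class RootGA {
    static double evolve(long seed) {
        Random rnd = new Random(seed);
        int popSize = 50;
        double[] pop = new double[popSize];
        for (int i = 0; i < popSize; i++) {
            pop[i] = rnd.nextDouble() * 4 - 2; // random start in [-2, 2)
        }
        for (int gen = 0; gen < 200; gen++) {
            // selection: find the best individual (lowest |x^2 - 2|)
            double best = pop[0];
            for (double x : pop) {
                if (Math.abs(x * x - 2) < Math.abs(best * best - 2)) best = x;
            }
            // reproduction: next generation are mutated copies of the best
            for (int i = 1; i < popSize; i++) {
                pop[i] = best + rnd.nextGaussian() * 0.1;
            }
            pop[0] = best; // elitism: keep the best unchanged
        }
        double best = pop[0];
        for (double x : pop) {
            if (Math.abs(x * x - 2) < Math.abs(best * best - 2)) best = x;
        }
        return best;
    }
}
```

This compresses the whole select-mutate-repeat cycle described earlier into a few lines; the real engines add crossover, configurable selection, statistics, and distribution on top of this skeleton.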
Like Genetic Algorithms, Hill Climbing and Simulated Annealing are each implemented in four distribution flavours: locally running, client/server, p2p, and p2p-client/server hybrid distributed.
Search algorithms are executed by SearchEngines. The names of the search engines are formed as follows: the distribution shortcut followed by the algorithm name (for example, LocalGeneticEngine or P2PGeneticEngine).
This open-source research project is licensed under the LGPL, a license even more liberal than the GPL.
See the rich example collection also available on the web for feature demonstrations.
One feature, especially important for developers, is the full documentation of the provided source code. Each method, class, and package is fully documented, down to private declarations. No parameter is undocumented. We are also happy about hints or requests for more documentation, because we want anyone to be able to use our code and, hopefully, understand it. This project also has an educational character.
You can find more information on the project's home page or on the project's page on SourceForge. The source code and examples can be downloaded at the downloads page. Useful information and news will frequently be published in the Genetic Programming Group.