Projects are listed here in alphabetical order so as not to play favorites. Although a project may be listed, it may no longer be active: projects tend to run out of work or disappear without warning for any number of reasons (completed work, lack of funds, hardware failure, alien abduction). Some projects are still in testing phases (test, alpha, beta), which may not be shown on the project page, and it is up to every user to determine whether a project is safe to run on their computer. Running test, alpha, or beta projects carries risks such as crashes and zero credit for completed work. Some projects have more than one application, and they may support any or all hardware types (CPUs, ATI GPUs, NVIDIA GPUs, accelerometers, hardware sensors) or operating system platforms (Mac, Linux, Windows, etc.). We can't show all versions and combinations here; e.g., there might be an NVIDIA app, but only for the Windows OS. Showing every combination would require too many columns to fit on any screen, or would make the text too small to be useful. You need to visit each project to determine whether it has applications for your hardware, memory and capabilities.

Project descriptions come mostly from the project pages. If you are interested in a certain subject or project, enter it in the search box at the top of the table to filter the table and see only matching entries.
Common searches might be Math, Primes, Conjecture, Protein, Disease, Cancer, Genes, Space.

Project Descriptions

Project NameDescription

More projects may be listed on the BOINC website.

More details can be found by visiting each project's website.


ABC@Home

  The ABC conjecture involves abc-triples: positive integers a,b,c such that a+b=c, a < b < c, a,b,c have no common divisors and c > rad(abc), the so-called radical of abc. The ABC conjecture says that there are only finitely many a,b,c such that log(c)/log(rad(abc)) > h for any real h > 1. The ABC conjecture is currently one of the greatest open problems in mathematics. If it is proven to be true, many other open problems can be answered directly from it.
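The quantity log(c)/log(rad(abc)) is often called the quality of a triple. A minimal brute-force sketch (illustrative only, not any project's code) that enumerates small triples with c > rad(abc):

```python
from math import gcd, log

def rad(n):
    """Product of the distinct prime factors of n (the 'radical')."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def abc_triples(limit):
    """Yield abc-triples (a, b, c) with c <= limit where c > rad(abc),
    together with the 'quality' q = log(c) / log(rad(abc))."""
    for a in range(1, limit // 2 + 1):
        for b in range(a + 1, limit - a + 1):
            if gcd(a, b) != 1:
                continue
            c = a + b
            r = rad(a * b * c)
            if c > r:
                yield (a, b, c), log(c) / log(r)

for triple, q in abc_triples(50):
    print(triple, round(q, 3))  # e.g. (1, 8, 9) with rad(72) = 6 and q ~ 1.226
```

The real search spaces are of course astronomically larger, which is exactly why distributed computing is used.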

Albert@home [alpha project]

  Albert@Home is a test project run by the Einstein@Home team. Most of the time it will have no work to run at all, and when it has, the applications might be unstable, unreliable and may even damage your computer.

AlmereGrid TestGrid [test project]

  This is the BOINC test grid of AlmereGrid. The goal is to experiment with BOINC test applications and new BOINC versions before they are put on the main Grid.


Asteroids@home

Asteroids are the most numerous objects in the solar system. So far, hundreds of thousands of asteroids are known, with hundreds of new discoveries every day. Although the total number of known asteroids is large, very little is known about the physical properties of individual objects. For a significant part of the population, only the size of the bodies is known. Other physical parameters (the shape, the rotation period, the direction of the rotation axis, ...) are known for only hundreds of objects.

Because asteroids have, in general, irregular shapes and they rotate, the amount of sunlight they scatter towards the observer varies with time. This variation of brightness with time is called a lightcurve. The shape of a lightcurve depends on the shape of the asteroid and also on the viewing and illumination geometry. If a sufficient number of lightcurves observed under various geometries is collected, a unique physical model of the asteroid can be reconstructed by the lightcurve inversion method.

The project Asteroids@home was started with the aim to significantly enlarge our knowledge of the physical properties of asteroids. The BOINC application uses photometric measurements of asteroids observed by big professional all-sky surveys as well as 'backyard' astronomers. The data is processed using the lightcurve inversion method, and a 3D shape model of an asteroid together with the rotation period and the direction of the spin axis are derived.

Because the photometric data from all-sky surveys are typically sparse in time, the rotation period is not directly 'visible' in the data and a huge parameter space has to be scanned to find the best solution. In such cases, the lightcurve inversion is very time-consuming and distributed computation is the only efficient way to deal with the photometry of hundreds of thousands of asteroids. Moreover, in order to reveal biases in the method and reconstruct the real distribution of physical parameters in the asteroid population, it is necessary to process large data sets of 'synthetic' populations.

BOINC Alpha Test [alpha project]

  BOINC Alpha Test allows volunteers to test new versions of BOINC client software on a wide range of computers, thereby increasing the reliability of software released to the public.

  This is not a computing project. Don't attach the BOINC client to this URL.

BURP (beta)

  BURP aims to develop a publicly distributed system for rendering 3D animations.

  Currently this is a BETA project which means that certain restrictions apply. Not all uploaded sessions will actually be rendered right away - and sometimes you will not be able to contact the schedulers for short periods of time.

  Please note that this project is still in its testing phase and does not yet provide the security and stability of a full-blown BOINC project.


  The objective of CAS@home is to encourage and assist scientists in China to adopt the technologies of volunteer computing and volunteer thinking for their research.

  The project has three parts:

  • Organizing hands-on volunteer computing workshops for researchers from the Chinese Academy of Sciences as well as other scientific institutions in China.
  • Launching pilot applications to exploit volunteer computing and volunteer thinking by scientists in China.
  • Creating a website with essential information about volunteer computing for Chinese scientists, which is this site.

  The project is led by researchers at the Computer Centre of the Institute of High Energy Physics (IHEP), Chinese Academy of Sciences (CAS).

  The project was officially launched in January 2010, with the support of the Chinese Academy of Sciences. A BOINC server was established at IHEP during the first quarter of 2010. A CAS@home master class with two international experts and over 20 participants was held at IHEP on 9 March 2010. In June, tests began of a volunteer computing project that will distribute Short-Cut Threading, an application for protein structure prediction developed by researchers at the Institute of Computing Technology, CAS. In July and August, researchers from IHEP and the international particle physics laboratory, CERN, will collaborate on developing a new volunteer computing application for simulating particle collisions at IHEP's particle accelerator, BEPC.

Chess960@home (alpha)

  Chess960 is a young, innovative chess variant. In Chess960, just before the start of every game, the initial configuration of the chess pieces is determined randomly: the king, queen, rooks, bishops and knights are not necessarily placed on the same home squares as in classical chess. For several years now, World Championships have been held at the "Chess Classic Mainz" event each August; GM Peter Svidler is the current champion. In this project we try to combine Chess960 with the idea of distributed computing. The BOINC software framework from the University of California, Berkeley provides the platform we want to use to perform these computationally intensive tasks, and with it we want to give this chess variant some basic opening theory. We know the fascination of this chess variant lies in its incredible number of variations. That will not change with this project, but some guidelines seem useful for each starting position.

climateprediction.net

  climateprediction.net is a distributed computing project to produce predictions of the Earth's climate up to 2100 and to test the accuracy of climate models. To do this, we need people around the world to give us time on their computers - time when they have their computers switched on, but are not using them to their full capacity.
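The randomized setup is easy to sketch: the two bishops must stand on opposite-coloured squares and the king must stand between the two rooks, which yields exactly 960 legal back ranks. A minimal illustration (not project code):

```python
import random

def chess960_back_rank(rng=random):
    """Generate a random Chess960 back rank: bishops on opposite colours,
    the king somewhere between the two rooks."""
    rank = [None] * 8
    # One bishop on a light square, one on a dark square
    rank[rng.choice(range(0, 8, 2))] = 'B'
    rank[rng.choice(range(1, 8, 2))] = 'B'
    # Queen and knights on any three of the remaining squares
    free = [i for i, p in enumerate(rank) if p is None]
    for piece in ('Q', 'N', 'N'):
        rank[free.pop(rng.randrange(len(free)))] = piece
    # The last three empty squares get rook, king, rook (king in the middle)
    for i, piece in zip([i for i, p in enumerate(rank) if p is None],
                        ('R', 'K', 'R')):
        rank[i] = piece
    return ''.join(rank)

print(chess960_back_rank())  # one random rank out of the 960 legal setups
```

The classical setup 'RNBQKBNR' is itself one of the 960 possibilities.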

Collatz Conjecture

  Collatz Conjecture is a research project that uses Internet-connected computers to do research in mathematics, specifically testing the Collatz Conjecture also known as 3x+1 or HOTPO (half or triple plus one). You can participate by downloading and running a free program on your computer.

  Collatz Conjecture is based in Wood Dale, Illinois, USA and continues the work of the previous 3x+1@home BOINC project which ended in 2008. It can run on an NVIDIA GPU, ATI GPU, or CPU.
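The HOTPO rule itself is simple to state in code; this toy sketch (nothing like the project's optimized GPU application) counts the steps a starting value takes to reach 1:

```python
def collatz_steps(n):
    """Count HOTPO steps ('half or triple plus one') until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```

The conjecture is that this loop terminates for every positive starting value; the project searches for a counterexample among enormously large numbers.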


  Constellation is a platform for research projects that use Internet-connected computers to do research in various aerospace related sciences and engineering. You can participate by downloading and running a free program on your computer.


  Genomes are fantastic keepers of genetic information and are the outcome of evolutionary replication, mutation and selection. Genomes organize functions from the cellular level, via the organismic level, up to the complex basis of mind. In human cells the genetic information controlling most processes, from the cellular level through embryogenesis to cognitive ability, manifests in a diploid set of 23 DNA molecules (chromosomes); combined, they consist of ~3x10^9 base pairs (bp) stored in ~2.80 GB of data. This whole genome, whose added molecular length totals ~2 m, is kept in comparably small cell nuclei with typical diameters of ~10 µm or volumes of 500 µm^3. The sequential organization of genomes, i.e. the relations between distant base pairs and regions within sequences, and its connection to the three-dimensional architectural organization of genomes is still a largely unresolved problem.

  Correlizer has been set up to unravel these mysteries, and we found long-range power-law correlations on almost the entire observable scale of 132 completely sequenced chromosomes of 0.5 x 10^6 to 3.0 x 10^7 bp, ranging from Archaea and Bacteria to Arabidopsis thaliana, Saccharomyces cerevisiae, Schizosaccharomyces pombe, Drosophila melanogaster, and Homo sapiens. The local correlation coefficients show a species-specific multi-scaling behavior: close to random correlations on the scale of a few base pairs, a first maximum from 40 to 3,400 bp (for Arabidopsis thaliana and Drosophila melanogaster divided in two submaxima), and often a region of one or more second maxima from 10^5 to 3 x 10^5 bp. Within this multi-scaling behavior, an additional fine-structure is present and attributable to codon usage in all except the human sequences, where it is related to nucleosomal binding.


  The goal of Cosmology@Home is to search for the model that best describes our Universe and to find the range of models that agree with the available astronomical and particle physics data. In order to achieve this goal, participants in Cosmology@Home (i.e. you!) will compute the observable predictions of millions of theoretical models with different parameter combinations. We will use the results of your computations to compare all the available data with these models. In addition, the results from Cosmology@Home can help design future cosmological observations and experiments, and prepare for the analysis of future data sets, e.g. from the Planck spacecraft.

  Light does not just have brightness and color. It can also be polarized. If you have read Contact, a science fiction novel by Carl Sagan, you know this because the message sent by the extraterrestrials contains information in its polarization. The human eye is not polarization sensitive, so we only notice the effects indirectly. If you could see this aspect of light it might look something like the background image of this text - a valiant attempt by graphic artist Nikita Sorokin to illustrate the notion of light with directionality in the plane perpendicular to its direction of propagation. Polarization has many important applications, the most common of which are polarizing sunglasses. They work because reflections of unpolarized light are polarized, so wearing sunglasses that filter out all but one polarization direction reduces reflective glare. For more about polarization, check out this item on Ben's website. Cosmology fact: the variations in the polarization directions in the CMB carry even more information about the beginning of the Universe than the variations in brightness!

CPDN [beta project] is the largest experiment yet to try to produce a forecast of the climate in the 21st century.
To do this, we need people around the world to give us time on their computers - time when they have their computers switched on, but are not using them to their full capacity.
You can participate by downloading and running a free program on your computer.


  distributedDataMining (dDM) is a research project that uses Internet-connected computers to perform research in the various fields of Data Analysis and Machine Learning. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) for the distribution of research-related tasks to several computers. The intent of BOINC is to enable researchers to tap into the enormous processing power of personal computers around the world. If you are willing to support our research challenges, please participate in the dDM project. During the last week, 269 project members contributed 26,088 hours of computing time on their 613 computers. We - the members of the scientific board - would like to thank all project members for their generous support of our research.

  All dDM applications use the open source framework RapidMiner. This data mining suite provides various machine learning methods for data analysis purposes. RapidMiner offers a convenient plug-in mechanism to easily add newly developed algorithms. This flexibility, combined with the processing power of BOINC, is an ideal foundation for scientific distributed data mining. The dDM project takes that opportunity and serves as a meta-project for different kinds of machine learning applications. Below, you will find an overview of our subprojects and the related scientific publications.


  The goal of this project is to give the world's security experts the best tools available for detecting weak hashes. This can help them force developers to use more secure methods of password protection. By distributing the generation of rainbow chains, we can generate HUGE rainbow tables that are able to crack longer passwords than ever seen before. Furthermore, we are also improving rainbow table technology, making our tables even smaller and faster than rainbow tables found elsewhere - and best of all, those tables are freely available for anyone to download from our site!


  The goal of DNA@Home is to discover what regulates the genes in DNA. Ever notice that skin cells are different from muscle cells, which are different from bone cells, even though every cell in your body has every gene in your genome? That's because not all genes are "on" all the time. Depending on the cell type and what the cell is trying to do at any given moment, only a subset of the genes are used, and the remainder are shut off. DNA@home uses statistical algorithms to unlock the key to this differential regulation, using your volunteered computers.

  The primary means by which genes are regulated is at the stage of "transcription", where a molecule called a polymerase reads along the DNA from the start of the gene to the end of the gene, creating an RNA messenger. Other molecules, called transcription factors, bind to the DNA near the beginning of the gene and can help to recruit the polymerase, or they can get in the way of, or inhibit, the polymerase. It is the presence or absence of the binding of these transcription factors that determines whether a gene is "on" or "off", but, for the most part, scientists do not know which transcription factors are responsible for regulating which genes.

  Transcription factors have "fingers" that prefer a certain short, sloppy pattern in the nucleotides "letters" of a DNA sequence, but in many cases we don't know what these patterns are. Our software looks for short sequences of nucleotides that appear more-or-less the same near multiple gene beginnings and which also appear more-or-less the same in the corresponding locations in the genomes of related species. As DNA sequences are huge, ranging from millions to billions of nucleotides, and these sequences are short and only approximately conserved from one site to the next, this is a real needle-in-the-haystack problem and requires lots of computational power. We hope that your computers can help.
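As a drastically simplified illustration of the underlying idea (a toy stand-in for the project's statistical algorithms, with made-up sequences), a brute-force consensus search scores each candidate length-k pattern by its best approximate match in every promoter region:

```python
from itertools import product

def best_motif(promoters, k):
    """Toy consensus search: return the length-k DNA pattern whose best
    match in every promoter has the smallest total Hamming distance.
    Brute force over all 4**k patterns, so only practical for tiny k."""
    def min_dist(pattern, seq):
        return min(sum(a != b for a, b in zip(pattern, seq[i:i + k]))
                   for i in range(len(seq) - k + 1))
    return min((''.join(p) for p in product('ACGT', repeat=k)),
               key=lambda pat: sum(min_dist(pat, s) for s in promoters))

promoters = ["TTACGGCA", "CCACGGTT", "GACGGATC"]  # hypothetical upstream regions
print(best_motif(promoters, 4))  # prints 'ACGG', shared by all three
```

Real genomes have millions to billions of nucleotides and longer, sloppier motifs, so exhaustive search like this is hopeless there - hence the statistical sampling methods and the distributed computing.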

  Our current plans involve tackling the Mycobacterium tuberculosis genome to thoroughly understand how tuberculosis accomplishes what it does -- so that others can use that information to stop this disease that kills millions every year. We also plan to tackle Yersinia pestis, the cause of bubonic plague.


  Docking@Home is a project which uses Internet-connected computers to perform scientific calculations that aid in the creation of new and improved medicines. The project aims to help cure diseases such as Human Immunodeficiency Virus (HIV). Docking@Home is a collaboration between the University of Delaware, The Scripps Research Institute, and the University of California - Berkeley. It is part of the Dynamically Adaptive Protein-Ligand Docking System project and is supported by the National Science Foundation.

  Before new drugs can be produced for laboratory testing, researchers must create molecular models and simulate their interactions to reveal possible candidates for effective drugs. This simulation is called docking. The combinations of molecules and their binding orientations are infinite in number. Simulating as many combinations as possible requires a tremendous amount of computing power. In order to reduce costs, researchers have decided that an effective means of generating this computing power is to distribute the tasks across a large number of computers.


  Donate@Home is a fundraising initiative. Donate@Home allows participants to donate towards research funding by using their GPUs to 'mine' for Bitcoins. This novel way of generating funding involves contributing within the Bitcoin experiment. Crunchers don't gain bitcoins in this project; the project converts them into standard currencies to raise enough funds, through the collective contribution, to give fellowships to research students for the GPUGRID project. The science and costs of this community-funded studentship will be reported and accounted for in these pages. You can also donate directly to GPUGRID via the donation page or make Bitcoin donations at 19b62wRL6hGEWa1bLbkdjaiWvZm1C56XuL. Crunchers receive credits, which represent participation and have a symbolic value for their contribution.
The project is at the alpha stage and experimental, and the site is still under development.


  DrugDiscovery@Home is a research project that uses Internet-connected computers to model the behavior of leading compounds that could be developed into new medicines. You can participate by downloading and running a free program on your computer.

  DrugDiscovery@Home is in an early alpha phase and does not have a formal relationship with academia or the pharmaceutical industry.


  The aim of the EDGeS@Home project is to support the execution of selected and validated scientific applications developed by the EGEE and EDGeS community.

  The project currently hosts the ISDEP - Integrator of Stochastic Differential Equations in Plasmas - application at production level.

  It also hosts several other applications at beta (experimental) level.

  The EDGeS@Home Desktop Grid and its applications are partly supported by the DEGISCO project. The work leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 261556. The experts of the International Desktop Grid Federation provide further support for the EDGeS@Home Desktop Grid infrastructure, its applications, and its integration into the DEGISCO infrastructure.


  Einstein@Home is a program that uses your computer's idle time to search for gravitational waves from spinning neutron stars (also called pulsars) using data from the LIGO gravitational wave detector. Learn about this search at Einstein Online and in our S3 report.

  Einstein@Home also searches for radio pulsars in binary systems, using data from the Arecibo Observatory in Puerto Rico. Read more about this search here.

  Einstein@Home is a World Year of Physics 2005 and an International Year of Astronomy 2009 project supported by the American Physical Society (APS) and by a number of international organizations.

  Einstein@Home is now carrying out a search of data from LIGO's first science run at design sensitivity (S5). The current analysis (S5GC1) uses 8898.5 hours of data from the entire S5 run. S5GC1 is the first analysis deploying the F-statistic plus global-correlations method, which is currently the most sensitive search technology known.


  Enigma@Home is a wrapper between BOINC and Stefan Krah's M4 Project. 'The M4 Project is an effort to break 3 original Enigma messages with the help of distributed computing. The signals were intercepted in the North Atlantic in 1942 and are believed to be unbroken.' [read more] You can participate by downloading and running a free program on your computer.

eOn: Long timescale dynamics

  A common problem in theoretical chemistry, condensed matter physics and materials science is the calculation of the time evolution of an atomic scale system where, for example, chemical reactions and/or diffusion occur. Generally the events of interest are quite rare (many orders of magnitude slower than the vibrational movements of the atoms), and therefore direct simulations, tracking every movement of the atoms, would take thousands of years of computer calculations on the fastest present day computer before a single event of interest can be expected to occur, hence the name eon, which is an indefinitely long period of time.

  Our research group is interested in calculating the long time dynamics of systems. We have developed a method for doing this through distributed computing, where a server sends out small data packets for calculation to clients, e.g. over the internet. So, instead of the entire calculation being done on a single processor, it is done on many client computers worldwide. After finishing its calculation, each client computer sends its results back to the server, which summarizes the results and sends out more jobs.



  Crowd-sourcing antimalarial drug discovery

  Goal: To discover novel targets for antimalarial drugs.

  Context: Malaria kills a child every 45 seconds. The disease is most prevalent in poorer countries, where it infects 216 million people and kills 650,000 each year, mostly African children under 5 years old [WHO]. And Plasmodium falciparum continues to evolve resistance to available medication. We therefore urgently need to discover new drugs to replace existing drugs. Importantly, these new drugs need to target NEW proteins in the parasite. The FightMalaria@Home project is aimed at finding these new targets.

  Resources: The Plasmodium falciparum genome has been sequenced, the proteome has been mapped, and protein expression has been confirmed at various stages in this apicoplexan's life cycle. Numerous crystal structures have been modelled using available structural templates. Excitingly, large research organisations (GSK, Novartis) have already tested millions of compounds and found nearly 19,000 hits that show promising activity against Plasmodium falciparum [MMV]. But they don't know which target protein is inhibited by these compounds. Drug discovery and development will be significantly enhanced by knowing the target protein for each of these hits.

  Problem: We plan to dock each of the 18,924 hits into structures of each of the 5,363 proteins in the malaria parasite. The computational power needed is enormous.
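Those two figures already convey the scale: docking every hit against every protein means over a hundred million docking jobs, before any repeats or refinements.

```python
# Figures taken from the project description above
hits, proteins = 18_924, 5_363
total_dockings = hits * proteins
print(f"{total_dockings:,} hit-protein docking jobs")  # 101,489,412
```

At even a few CPU-minutes per docking, this is far beyond any single machine, which motivates the volunteer-computing approach described next.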

  Solution: We aim to harness the donated computational power of the world's personal computers. Most computers only use a fraction of their available CPU power for day-to-day computation. We have built a BOINC server that distributes the docking jobs to donated 'client' computers, which then do the work in the background. By connecting thousands of computers this way, we'll be able to harness the equivalent power of large supercomputers. If you would like to get involved, please follow the very quick installation instructions here.


  The FreeHAL project's mission is to develop a computer program which very closely imitates human conversation. FreeHAL consists of a server application and several frontends, a cross-platform GUI and a web interface.

  FreeHAL@home is the corresponding BOINC project which generates and converts fact and relationship databases (semantic networks) for FreeHAL. Because these tasks take a lot of time and can be processed on many independent computers, they are sent to volunteers donating CPU time by running the BOINC software.

gerasim@home (alpha)

  The project uses Internet-connected computers to do research in discrete mathematics and logic control.

  The current goal: testing and comparison of heuristic methods for finding separations of parallel algorithms, working within the CAD system for designing logic control systems.

GPUGRID

  GPUGRID is a novel distributed supercomputing infrastructure made of many NVIDIA graphics cards joined together to deliver high-performance all-atom biomolecular simulations. The molecular simulations performed by our volunteers are some of the most common types performed by scientists in the field, but they are also some of the most computationally demanding and usually require a supercomputer.
Running GPUGRID on GPUs innovates volunteer computing by delivering supercomputing class applications on a cost effective infrastructure which will greatly impact the way biomedical research is performed.


  Ibercivis is a public computing platform that enables society to participate in scientific research directly and in real time.

  This is a pioneering initiative in Spain that seeks to involve the maximum number of citizens in volunteer computing, which harnesses the computing power of a computer in the times when it is idle to perform tasks derived from a research project.

  Ibercivis brings research closer to the public and makes citizens partners in the generation of scientific knowledge, while giving the scientific community a powerful computing tool. The computer becomes a window on science, creating a channel for direct dialogue between researchers and society.


  The aim of the project Ideologias@Home is to study how people in a certain region evolve ideologically over time with respect to an idea. When an idea is introduced in a society, the population is divided naturally into four groups:

  • Extremists: those who defend the idea extremely.
  • Moderates: those who defend the idea moderately.
  • Opponents: those who are against the idea.
  • Abstentionists: those who do not care, abstain, or have no opinion.

  People change their minds because of peer pressure, the influence of mass media, or because they come to a different conclusion of their own accord. Under this assumption, we propose dynamic models, determine the parameters, predict trends and analyse results.

La Red de Atrapa Sismos

The Quake Catcher Network (QCN) is a research project that uses Internet-connected computers to do research, education, and outreach in seismology. You can participate by downloading and running a free program on your computer. Currently only certain Mac (OS X) PPC and Intel laptops are supported -- recent ones which have a built-in accelerometer. You can also buy an external USB accelerometer.

The Lattice Project

  The Lattice Project is a Grid computing research effort conducted by the Laboratory of Molecular Evolution. It can also refer to the Grid computing system that is currently in production at the University of Maryland. Michael Cummings has directed The Lattice Project from its inception in late 2003 to the present, and Adam Bazinet has been the primary developer. During this time the system has been continually developed, improved, and used to complete many scientific analyses. We were initially motivated by the need for more computing power for our own research, but our development of the Grid system has always been with general, non-domain-specific use in mind. In fact, the majority of our users have been other researchers at the University of Maryland.

Leiden Classical

  Join in and help to build a Desktop Computer Grid dedicated to general Classical Dynamics for any scientist or science student!

LHC@home 1.0 Sixtrack

  SIXTRACK is a research project that uses Internet-connected computers to advance Accelerator Physics.

  SixTrack was developed by Frank Schmidt of the CERN Accelerators and Beams Department, based on an earlier program developed at DESY, the German Electron Synchrotron in Hamburg. SixTrack produces results that are essential for verifying the long term stability of the high energy particles in the LHC. Lyn Evans, head of the LHC project, stated that "the results from SixTrack are really making a difference, providing us with new insights into how the LHC will perform".

  Typically SixTrack simulates 60 particles at a time as they travel around the ring, and runs the simulation for 100000 loops (or sometimes 1 million loops) around the ring. That may sound like a lot, but it is less than 10s in the real world. Still, it is enough to test whether the beam is going to remain on a stable orbit for a much longer time, or risks losing control and flying off course into the walls of the vacuum tube. Such a beam instability would be a very serious problem that could result in the machine being stopped for repairs if it happened in real life.
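The "less than 10 s" figure follows directly from the ring geometry: taking the LHC circumference as roughly 26,659 m (an assumed value) and the protons as travelling at essentially the speed of light, 100,000 loops correspond to under nine seconds of beam time.

```python
# Assumed LHC parameters: circumference ~26,659 m, protons at essentially c
circumference_m = 26_659.0
speed_m_per_s = 2.998e8

seconds_per_loop = circumference_m / speed_m_per_s
print(f"{seconds_per_loop * 1e6:.1f} microseconds per loop")        # 88.9
print(f"{100_000 * seconds_per_loop:.1f} s for 100,000 loops")      # 8.9
```

So each work unit simulates only seconds of real beam time, yet that is enough to diagnose orbit stability.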

  Particle motion in an accelerator is at least four-dimensional, the coordinates being x (horizontal displacement), y (vertical displacement) and the two conjugate momenta.

  Nowadays, the LHC is producing a handful of events each day, and the machine performance should be compared with the results of old and new numerical simulations: all this needs CPU power! More than this, in spite of the impressive performance of the LHC machine, the physicists need even more. CERN plans to study an upgrade of the LHC aimed at increasing the machine's performance by a factor of ten! This will keep accelerator physicists at CERN busy for some years to come, pursuing new simulations to study the stability of protons in the upgraded LHC. This will be an ideal challenge for the physicists and volunteers of the SixTrack project within LHC@home!

LHC@home 2.0 Test4Theory [test project]

  This is a test project, to demonstrate the use of the CERN-developed CernVM and BOINCVM systems to harness volunteer cloud computing power for full-fledged LHC event physics simulation on volunteer computers.

  It is the first of what is expected to be a series of physics applications running on the LHC@home 2.0 platform. These applications will exploit virtual machine technology, enabling volunteers to contribute to the huge computational task of searching for new fundamental particles at CERN's LHC

  The Test4Theory@home project seeks to engage volunteers in theoretical physics computations for the Large Hadron Collider at CERN. The project is based on BOINC and allows participants to run simulations of particle collisions on their home computers. The results are submitted to a central database which is used as a common resource by both experimental and theoretical scientists working on LHC physics. For further details, please visit the following link.


  The project is an application that makes use of network computing for stochastic modelling of the clinical epidemiology and natural history of Plasmodium falciparum malaria.

  The fight against malaria was given a new impetus by the call for eradication at the Gates Malaria Forum in October 2007, making more but still limited resources available for research, development, and combating malaria. To inform decisions on which new or existing tools to prioritize, we have developed a general platform for comparing, fitting, and evaluating stochastic simulation models of Plasmodium falciparum malaria, programmed in C++ (openmalaria).

  We use this to inform the target product profiles for novel interventions like vaccines, addressing questions such as minimal efficacy and duration of effects needed for a vaccine to be worthwhile, and also to optimize deployment of established interventions and integrated strategies. Field trials of interventions consider effects over 1-2 years at most, but the dynamics of immunity and human demography also lead to longer term effects. We consider many different outcomes including transmission reduction or interruption, illness, hospitalization, or death, as well as economic aspects.

  Malaria occurs in an enormous variety of ecological settings, and interventions are not always universally applicable. For instance, indoor residual spraying works only with indoor-resting mosquitoes, and insecticide treated mosquito nets only with nocturnal vectors. The best combinations of interventions vary, as do optimal delivery approaches and their health system implications. There are trade-offs between high coverage and costs or feasibility of deployment. Indiscriminate deployment may lead to evolution of drug resistance or insensitivity to other interventions. To support the analysis of these elements we are assembling databases of health system descriptions, intervention costing, and vector bionomics across different malaria ecotypes.

  Uncertainties inherent in simulations of complex systems are addressed using probabilistic sensitivity analyses, fitting multiple different models, and basing predictions on model ensembles rather than single simulations. This requires super-computing, both for statistical fitting (which must simultaneously reproduce a wide range of outcomes across different settings) and for exploring predictions. We obtain this computing power over the internet from spare capacity on the computers of volunteers.

  Meetings with potential users are used to promote the models and their predictions to wider communities of malariologists, planners, and policy specialists. We are also developing web-based job submission and analysis systems to increase internet access to models.


  Mersenne@home is a Polish science project in the field of mathematics (number theory) which, thanks to the BOINC platform, uses Internet-connected computers to search for Mersenne primes. Mersenne primes are numbers of the form 2^p - 1 and, at the current stage of the search, run to millions of digits. Therefore, high computing power is needed to verify their primality.
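For exponents far smaller than the ones the project handles, Mersenne primality can be checked with the classical Lucas-Lehmer test; a minimal sketch (illustrative only, not the project's actual client code):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for an odd prime exponent p:
    2**p - 1 is prime iff s == 0 after p - 2 squarings."""
    if p == 2:
        return True               # 2**2 - 1 = 3 is prime (special case)
    m = (1 << p) - 1              # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Mersenne prime exponents up to 31: 2, 3, 5, 7, 13, 17, 19, 31
print([p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])
```

The reduction modulo 2^p - 1 keeps the intermediate value s to p bits, which is what makes the test feasible even for million-digit candidates.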


  Milkyway@Home uses the BOINC platform to harness volunteered computing resources, creating a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.

  In computer science, the project is investigating different optimization methods which are resilient to the fault-prone, heterogeneous and asynchronous nature of Internet computing, such as evolutionary and genetic algorithms, as well as asynchronous Newton methods. In astroinformatics, Milkyway@Home is generating highly accurate three dimensional models of the Sagittarius stream, which provide knowledge about how the Milky Way galaxy was formed and how tidal tails are created when galaxies merge.

  Milkyway@Home is a joint effort between Rensselaer Polytechnic Institute’s departments of Computer Science and Physics, Applied Physics and Astronomy.

MindModeling@home (beta)

  MindModeling@Home (Beta) is a research project that uses volunteer computing for the advancement of cognitive science. The research focuses on utilizing computational cognitive process modeling to better understand the human mind. We need your help to improve on the scientific foundations that explain the mechanisms and processes that enable and moderate human performance and learning. Please join us in our efforts! MindModeling@home is not for profit.

Moo! Wrapper

  Moo! Wrapper brings together the BOINC volunteer computing network and the distributed.net projects. It allows a BOINC client to participate in the RC5-72 challenge.

Najmanovich Research Group

Najmanovich Research Group (NRG) is a research project that uses Internet-connected computers to do research in molecular recognition and computational biology.

NRG is based at Université de Sherbrooke.

Neurona@home (beta)

  Neurona@Home is a BOINC-based project with the main aim of simulating the behavior of a large assembly of cellular automata neurons connected in a complex network.


  NFS@Home is a research project that uses Internet-connected computers to do the lattice sieving step in the Number Field Sieve factorization of large integers. As a young school student, you gained your first experience at breaking an integer into prime factors, such as 15 = 3 * 5 or 35 = 5 * 7. NFS@Home is a continuation of that experience, only with integers that are hundreds of digits long. Most recent large factorizations have been done primarily by large clusters at universities. With NFS@Home you can participate in state-of-the-art factorizations simply by downloading and running a free program on your computer.

  Integer factorization is interesting from both mathematical and practical perspectives. Mathematically, for instance, the calculation of multiplicative functions in number theory for a particular number require the factors of the number. Likewise, the integer factorization of particular numbers can aid in the proof that an associated number is prime. Practically, many public key algorithms, including the RSA algorithm, rely on the fact that the publicly available modulus cannot be factored. If it is factored, the private key can be easily calculated. Until quite recently, RSA-512, which uses a 512-bit modulus (155 digits), was commonly used but can now be easily broken.
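To see why factoring the modulus breaks RSA, here is a toy example with deliberately tiny primes (real moduli are hundreds of digits; the three-argument pow with exponent -1 requires Python 3.8+):

```python
# Toy RSA: once the public modulus n is factored into p*q,
# the private exponent d follows immediately from Euler's totient.
p, q = 61, 53
n = p * q                      # public modulus (3233)
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # computable only if p and q are known
d = pow(e, -1, phi)            # private exponent: modular inverse of e

msg = 42
cipher = pow(msg, e, n)        # encrypt with the public key
print(pow(cipher, d, n))       # decrypt with the recovered private key
```

The security of the scheme rests entirely on the attacker being unable to perform the `p, q = 61, 53` step for a large n.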

  The numbers we are factoring are chosen from the Cunningham project. Started in 1925, it is one of the oldest continuously ongoing projects in computational number theory. The third edition of the book, published by the American Mathematical Society in 2002, is available as a free download. All results obtained since, including those of NFS@Home, are available on the Cunningham project website.


  NumberFields@home is a research project that uses Internet-connected computers to do research in number theory. You can participate by downloading and running a free program on your computer.

  NumberFields@home searches for fields with special properties. The primary application of this research is in the realm of algebraic number theory. Number theorists can mine the data for interesting patterns to help them formulate conjectures about number fields. Ultimately, this research will lead to a deeper understanding of the profound properties of numbers, the basic building blocks of all mathematics.


The main idea of this project is the analysis of algorithms.


  OPTIMA@HOME is a research project that uses Internet-connected computers to solve challenging large-scale optimization problems. The goal of optimization is to find a minimum (or maximum) of a given function. The topic is well explained on the Internet; see, for example, the excellent overview by Arnold Neumaier. Many practical problems reduce to global optimization problems. At the moment the project runs an application aimed at solving the molecular conformation problem: a very challenging global optimization problem that consists in finding the atomic cluster structure with the minimal possible potential energy. Such structures play an important role in understanding the nature of different materials, chemical reactions and other fields. Details about the problem can be found on the project website. You can participate by downloading and running a free program on your computer.
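To illustrate the kind of objective function involved, the following sketch evaluates the Lennard-Jones potential energy of a small atomic cluster in reduced units (epsilon = sigma = 1); the project's actual potential and solver may differ:

```python
import itertools

def lj_energy(coords):
    """Total Lennard-Jones potential energy of a cluster,
    summed over all atom pairs: 4 * (r**-12 - r**-6) per pair."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(coords, 2):
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        inv6 = 1.0 / r2 ** 3          # r**-6 from the squared distance
        total += 4.0 * (inv6 ** 2 - inv6)
    return total

# Two atoms at the pair-equilibrium distance 2**(1/6) sit at the minimum, -1.
dimer = [(0.0, 0.0, 0.0), (2 ** (1 / 6), 0.0, 0.0)]
print(lj_energy(dimer))
```

The global-optimization difficulty comes from the number of local minima of this energy surface, which grows roughly exponentially with the number of atoms.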


  We present here the research carried out at the orbit@home project, which focuses on producing an optimized search strategy for dedicated astronomical surveys to search for near-Earth asteroids. This work is led by Pasquale Tricarico at the Planetary Science Institute (PSI), in collaboration with Ed Beshore, Steve Larson, Andrea Boattini at the Catalina Sky Survey (CSS), and Gareth Williams at the Minor Planet Center (MPC).

Pirates@home [test project]

  Pirates@Home is an ongoing test of BOINC, the Berkeley Open Infrastructure for Network Computing. At present Pirates@Home is not doing any real scientific computation; we are just having fun testing BOINC.


  Proteins are the nanoscale machinery of all known cellular life. Amazingly, these large biomolecules with up to 100,000 atoms fold into the unique three-dimensional shapes in which they function.
These functions include all cellular chemistry (metabolism), energy conversion (photosynthesis), transport (e.g. oxygen transport), signal processing in the brain (neurons), immune response and many others, often with an efficiency unmatched by any man-made process. Protein malfunction is often related to disease, and thousands of disease-related proteins have been identified to date, many with still unknown structure.

  To understand, control or even design proteins we need to study protein structure, which is experimentally much harder to obtain than the information about the chemical composition (sequence) of a specific protein.


  This project concerns itself with two hypotheses in number theory, both conjectures for the identification of prime numbers. The first conjecture (Agrawal's conjecture) was the basis for the formulation of the first deterministic polynomial-time primality test (the AKS algorithm). Hendrik Lenstra's and Carl Pomerance's heuristic for this conjecture suggests that there must be an infinite number of counterexamples; so far, however, no counterexamples are known. The hypothesis has been tested for n < 10^10 without finding a counterexample. The second conjecture (Popovych's conjecture) adds a further condition to Agrawal's conjecture and therefore logically strengthens it. If this hypothesis is correct, the running time of a deterministic primality test could be reduced from O((log N)^6) (the currently most efficient version of the AKS algorithm) to O((log N)^3).


  PrimeGrid’s primary goal is to bring the excitement of prime finding to the “everyday” computer user. By simply downloading and installing BOINC and attaching to the PrimeGrid project, participants can choose from a variety of prime forms to search. With a little patience, you may find a large or even record breaking prime and enter into Chris Caldwell’s The Largest Known Primes Database as a Titan!

  PrimeGrid’s secondary goal is to provide relevant educational materials about primes. Additionally, we wish to contribute to the field of mathematics.

  Lastly, primes play a central role in the cryptographic systems which are used for computer security. Through the study of prime numbers it can be shown how much processing is required to crack an encryption code and thus to determine whether current security schemes are sufficiently secure.


  Reactions between molecules are important for virtually all parts of our lives. The structure and reactivity of molecules can be predicted by quantum chemistry, but the solution of the vastly complex equations of quantum theory often requires huge amounts of computing power. In our project we want to acquire the necessary computing time to further develop the very promising Quantum Monte Carlo (QMC) method for general use in quantum chemistry.

Quake-Catcher Network Sensor Monitoring

  The Quake Catcher Network (QCN) is a research project that uses Internet-connected computers to do research, education, and outreach in seismology. You can participate by downloading and running a free program on your computer. Currently only certain Mac (OS X) PPC and Intel laptops are supported -- recent ones which have a built-in accelerometer. You can also buy an external USB accelerometer.

Quake-Catcher Network (Taiwan)

The Quake Catcher Network (QCN) is a research project that uses Internet-connected computers to do research, education, and outreach in seismology. You can participate by downloading and running a free program on your computer. Currently only certain Mac (OS X) PPC and Intel laptops are supported -- recent ones which have a built-in accelerometer. You can also buy an external USB accelerometer.


  Radioactive@Home is a Polish science project using the distributed computing capabilities of the BOINC platform. The main goal of the project is to create a free, constantly updated map of radiation available to everyone, by gathering information about gamma radiation using sensors connected to the computers of volunteers willing to participate in the project. The project uses a dedicated hardware sensor; without it the application does nothing and no credits are granted.

RALPH@home [alpha project]

  RALPH@home is the official alpha test project for Rosetta@home. New application versions, work units, and updates in general will be tested here before being used for production. The goal for RALPH@home is to improve Rosetta@home.

  Our volunteer computing network gives 3D artists access to near limitless free rendering power from anywhere, anytime. We enable people to create higher quality 3D art and encourage them to share it. By volunteering your computer today, you can be a part of the revolution and help make the next Sintel, Shrek or Toy Story!

RNA World (beta)

  RNA World (beta) is a distributed supercomputer that uses Internet-connected computers to advance RNA-related research. You can participate by downloading and running a free program on your computer.

Rioja Science

  Science Rioja is a volunteer distributed computing project, born from a collaboration between Knet Communications and the University of La Rioja's departments of Mathematics and Chemistry. It is intended to be a platform for biomedical research through volunteer distributed computing, where the computing infrastructure is in the hands of volunteers who donate the cycles their machines do not use. To achieve this, we rely on BOINC: the server manages the users who join the platform, the programs that belong to the project and are delivered to those users, and updates; finally, the work units are delivered to the end user to process, and the results are sent back to the server.

  RiojaScience@home is a joint initiative of the company Knet Communications and the University of La Rioja, with the aim of providing La Rioja's research groups with a computing platform much more powerful than what is currently available on their personal computers or workstations. The initiative has received funding from the Economic Development Agency of La Rioja (ADER) and the University of La Rioja. Over time, this platform could become the most powerful computer in the autonomous community.

  The computing power of this new infrastructure rests on two key ideas. First, the infrastructure is designed for home users who, on a voluntary basis, grant the use of their computers when not in use: the more users donate their computing power, the more power the platform will reach. Interested parties may subscribe to the project via the web.

  Second, the platform can exploit the computing power of graphics cards (GPUs), which are used primarily for image processing. On computers that have a GPU, the card performs the mathematical operations and can accelerate the calculations 5 to 20 times compared with a traditional central processing unit (CPU).

  The applications of RiojaScience@home are varied, ranging from the search for intelligent life in space (the objective of the SETI@home project) to research into new drugs, improving weather forecasting or simulating the behavior of proteins. The first scientific project to run on RiojaScience@home belongs to the realm of physics and chemistry: a project of the Research Group of Kinetics and Dynamics of Chemical Reactions at the University of La Rioja. Reactions will be studied in the gas phase and in solution; namely, we will study the molecular dynamics of narlaprevir, a potential drug for hepatitis C.


  Rosetta@home needs your help to determine the 3-dimensional shapes of proteins in research that may ultimately lead to finding cures for some major human diseases. By running the Rosetta program on your computer while you don’t need it you will help us speed up and extend our research in ways we couldn’t possibly attempt without your help. You will also be helping our efforts at designing new proteins to fight diseases such as HIV, Malaria, Cancer, and Alzheimer’s (See our Disease Related Research for more information). Please join us in our efforts! Rosetta@home is not for profit.

RSA Lattice Siever (2.0)

  RSA Lattice Siever is a research project that uses Internet-connected computers to help other factoring projects, such as MersenneForum or XYYXf, achieve their academic goals. You can participate by downloading and running a free program on your computer.


SAT@home is a research project that uses Internet-connected computers to solve hard and practically important problems (discrete function inversion, discrete optimization, bioinformatics, etc.) that can be effectively reduced to SAT. The project is currently solving inversion problems for some cryptographic functions used in keystream generators. All cryptographic algorithms under investigation are publicly available; the corresponding tasks are randomly generated and do not contain any confidential information, and we plan to publish the results obtained. In the near future we are going to launch an experiment within the project for solving the Quadratic Assignment Problem (as a SAT problem). The project was implemented using DC-API.
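For readers unfamiliar with SAT, here is a toy brute-force satisfiability checker over DIMACS-style clauses; it illustrates the problem statement only, not the solver the project actually uses:

```python
import itertools

def brute_force_sat(clauses, n_vars):
    """Try every assignment. A clause is a list of DIMACS-style literals:
    positive i means variable i is true, negative i means it is false.
    Returns a satisfying assignment as a tuple of booleans, or None."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))
```

Real SAT solvers prune this 2^n search space aggressively; the point of reducing a problem to SAT is precisely to reuse that machinery.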


  SETI (Search for Extraterrestrial Intelligence) is a scientific area whose goal is to detect intelligent life outside Earth. One approach, known as radio SETI, uses radio telescopes to listen for narrow-bandwidth radio signals from space. Such signals are not known to occur naturally, so a detection would provide evidence of extraterrestrial technology.

  Radio telescope signals consist primarily of noise (from celestial sources and the receiver’s electronics) and man-made signals such as TV stations, radar, and satellites. Modern radio SETI projects analyze the data digitally. More computing power enables searches to cover greater frequency ranges with more sensitivity. Radio SETI, therefore, has an insatiable appetite for computing power.

  Previous radio SETI projects have used special-purpose supercomputers, located at the telescope, to do the bulk of the data analysis. In 1995, David Gedye proposed doing radio SETI using a virtual supercomputer composed of large numbers of Internet-connected computers, and he organized the SETI@home project to explore this idea. SETI@home was originally launched in May 1999.

SETI@home Astropulse [beta project]

  Test project for apps of SETI@home


  SIMAP contains nearly all currently published protein sequences and is continuously updated. In BOINCSIMAP, we calculate every month the similarities and domains of newly imported proteins in order to keep the SIMAP database up to date. Protein similarities are computed using the FASTA algorithm, which provides optimal speed and higher sensitivity compared to the popular BLAST. Protein domains are calculated using the InterPro methods and databases.


  SLinCA (Scaling Laws in Cluster Aggregation) is a research project that uses Internet-connected computers to do research in the field of materials science. You can participate by downloading and running a free program on your computer.


Solar@home is a research project that uses Internet-connected computers to make more efficient solar cells.

Solar@home is based at the University of Houston and is part of the Bittner Research Group.


  Spinhenge@home uses the inactive processor resources of your computer; when the screensaver is active, one of our graphics will be displayed instead of the usual display. With your participation you will actively support the research of nano-magnetic molecules. In the future these molecules will be used in localised tumor chemotherapy and to develop tiny memory-modules.


  The Subset Sum problem is described as follows: given a set of positive integers S and a target sum t, is there a subset of S whose sum is t? It is one of the well-known, so-called “hard” problems in computing. It's actually a very simple problem, and the computer program to solve it is not extremely complicated. What's hard about it is the running time: all known exact algorithms have running time proportional to an exponential function of the number of elements in the set (for worst-case instances of the problem).

  Over the years, a large number of combinatorial problems have been shown to be in the same class as Subset Sum (called NP-complete problems). But, depending on how you measure the size of the problem instance, there is evidence that Subset Sum is actually an easier problem than most of the others in its class. The goal of this project is to strengthen the evidence that Subset Sum is an easier hard problem.

  Suppose we have a set of n positive whole numbers S whose maximum number is m. We will define the ratio n/m to be the density of the set and denote the sum of all elements in the set as ∑S. If you look at the list of sums produced by subsets of S, you notice that very few sums are missing if S is dense enough. In fact, it appears that there is an exact density threshold beyond which no sums between m and half the sum of S will be missing. Our preliminary experiments have led to the following hypothesis: A set of positive integers with maximum element m and size n > floor(m/2)+1 has a subset whose sum is t for every t in the range m < t < ∑S − m.
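The hypothesis is easy to check for small sets with a standard bitset dynamic program (illustrative only; the project's worker is a compiled application):

```python
def subset_sums(S):
    """Bitset dynamic program: bit t of the result is 1
    iff some subset of S sums to t."""
    bits = 1                      # only the empty sum 0 is reachable at first
    for x in S:
        bits |= bits << x         # every old sum can also be extended by x
    return bits

# Hypothesis check for S = {4,...,9}: m = 9, n = 6 > floor(9/2) + 1 = 5,
# so every t with m < t < sum(S) - m (i.e. 10..29) should be reachable.
S = [4, 5, 6, 7, 8, 9]
bits = subset_sums(S)
print(all(bits >> t & 1 for t in range(10, 30)))
```

The shift-or trick packs the whole reachable-sums table into one big integer, so each element costs a single shift of a sum(S)-bit word.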

  So here's where you can help. So far, we haven't been able to prove the hypothesis above. If you want to be really helpful, you can send us a proof (or show us where to find one in the research literature), and the project will be done. But if you want to be slightly less helpful and have more fun, you can volunteer your computer as a worker to see how far we can extend the empirical evidence. You will also be helping us figure out better ways to apply distributed computing to combinatorial problems.

  SubsetSum@Home is based at the Computer Science Department of the University of North Dakota.


  sudoku@vtaiwan is one of the V-Taiwan projects; it uses Internet-connected computers to do research on Sudoku.


  Superlink@Technion helps geneticists all over the world find disease-provoking genes causing some types of diabetes, hypertension (high blood pressure), cancer, schizophrenia and many others.


  Surveill@Home is a research project that uses Internet-connected computers to conduct end-to-end fine-grained monitoring of web sites.

SZTAKI Desktop Grid

  The SZTAKI Desktop Grid and its applications are partly supported by the DEGISCO and the EDGI projects. The work leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements n° RI-261561 and n° RI-261556. The experts of the International Desktop Grid Federation provide further support for the SZTAKI Desktop Grid infrastructure, its applications, and its integration into the DEGISCO infrastructure.

See the website for applications in the areas of linguistics, mathematics, and physics.

theSkyNet POGS - the PS1 Optical Galaxy Survey

theSkyNet POGS is a research project that uses Internet-connected computers to do research in astronomy. We will combine the spectral coverage of GALEX, Pan-STARRS1, and WISE to generate a multi-wavelength UV-optical-NIR galaxy atlas for the nearby Universe. We will measure physical parameters (such as stellar mass surface density, star formation rate surface density, attenuation, and first-order star formation history) on a resolved pixel-by-pixel basis using spectral energy distribution (SED) fitting techniques in a distributed computing mode.


  The μFluids project is a massively distributed computer simulation of two-phase fluid behavior in microgravity and microfluidics problems. Our goal is to design better satellite propellant management devices and address two-phase flow in microchannel and MEMS devices. Individual computer users like you can participate by donating idle computer time through the BOINC software.


  Most desktop computers are virtually idle most of the time. They represent an immense pool of unused computation, communication, and data storage capacity. Available compute power is expanding rapidly with the advent of multi-core systems. But these nodes are "volatile" as their owners can make them unavailable suddenly and without notice. The goal of Volpex is to address the challenge of parallel computing on such volatile nodes.

  This project is developing an implementation of MPI customized for robust execution on volatile nodes. Message-passing exchanges are converted to Put/Get operations executed asynchronously. The Volpex design is based on executing two or more replicas of each MPI process: the application progresses at the rate of the fastest replicas and continues seamlessly on failure, as long as at least one replica of each process is alive.
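The first-replica-wins Put/Get idea can be sketched as follows; the class and method names here are illustrative, not Volpex's actual API:

```python
class Dataspace:
    """Toy first-writer-wins dataspace. Replicas of the same logical MPI
    process all Put their message under the same tag; later duplicates are
    ignored, so readers see the result of the fastest replica."""
    def __init__(self):
        self.store = {}

    def put(self, tag, data):
        self.store.setdefault(tag, data)   # first replica to arrive wins

    def get(self, tag):
        return self.store.get(tag)         # None until some replica delivers

ds = Dataspace()
ds.put(("rank0", "step1"), "fast replica result")
ds.put(("rank0", "step1"), "slow replica result")   # duplicate, ignored
print(ds.get(("rank0", "step1")))
```

Because every Put is idempotent under its tag, a replica can crash or fall behind without stalling readers, which is the property the paragraph above describes.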

  Current systems for execution on volatile nodes, such as BOINC and Condor, primarily support execution of independent tasks. The central goal of the Volpex Dataspace is to enable execution of communicating parallel programs on volatile ordinary desktops. This requires a robust communication layer that facilitates data exchanges between tasks.

  The objective of this work is to simulate the performance of parallel applications on desktop grids by creating a virtual model of a real-world network, with all of the characteristics of a real network plugged in, and to obtain simulation results for different parallel applications under different combinations of network parameters and varieties of network configurations.

WEP-M+2 Project

  WEP-M+2 (wanless2) is a research project that uses Internet-connected computers to do research in number theory. You can participate by downloading and running a free program on your computer.


  Wildlife@Home is a joint effort between UND's Department of Computer Science and Department of Biology. The project is aimed at analyzing video gathered from various cameras recording wildlife. Currently the project will be looking at video of sharp-tailed grouse, Tympanuchus phasianellus, performing their mating dances (lekking), and then examining their nesting habits and ecology. The goal of the project is to use your volunteered computers to "sift" through the large amounts of video for interesting segments, and then letting you view this interesting video and help us analyze what is happening to the grouse and their nests.

   The nest cameras will be set up both near western North Dakota's oil fields and also within protected state lands. We hope that your participation will help us determine the impact of the oil development on the sharp-tailed grouse and other wildlife in North Dakota, as well as provide some interesting video for everyone to watch and discuss!

World Community Grid

  World Community Grid brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals - like you.


  WUProp@home is a non-intensive project that uses Internet-connected computers to collect workunit properties of BOINC projects, such as computation time, memory requirements, checkpointing interval and report limit. You can participate by downloading and running a free program on your computer.

YAFU [alpha project]

  YAFU is an alpha project whose main goal is to test the latest BOINC server code for bugs. Please report any errors you find in the web pages in the forum.

  In parallel, it factorizes numbers of 80-110 digits that need to be factored. You can participate by downloading and running a free program on your computer.


  yoyo@home brings existing distributed computing projects to the BOINC world using the BOINC Wrapper technology.


  • Euler (6,2,5) computes minimal equal sums of sixth powers. The project is dedicated to all those who are fascinated by powers and integers.
  • ECM is a program for Elliptic Curve Factorization, which is used by a couple of projects to find factors of different kinds of numbers.
  • Muon simulates and designs parts of a particle accelerator. You are simulating the part of the process where the proton beam hits the target rod and causes pions to be emitted, which decay into muons.
  • evolution@home represents the first and so far only distributed computing project addressing evolutionary research. It simulates different types of populations and focuses on the analysis of human mitochondrial DNA.
  • OGR runs the distributed.net client and processes OGR work units. This sub-project searches for the shortest optimal Golomb ruler with 27 marks.
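A Golomb ruler is a set of marks whose pairwise differences are all distinct; verifying that property is simple, even though finding the shortest ruler is extremely hard (illustrative sketch, unrelated to the distributed.net client):

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is a Golomb ruler iff all pairwise mark differences differ."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb([0, 1, 4, 9, 11]))   # a 5-mark Golomb ruler
print(is_golomb([0, 1, 2, 5]))       # not Golomb: difference 1 repeats
```

The search difficulty comes from the combinatorial explosion of candidate mark placements, not from the check itself.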

Biochemical Library

  BCL is a research project that uses Internet-connected computers to do research with the Biochemical Library. You can participate by downloading and running a free program on your computer.

CS@UH Virtual Prairie

  The project goes through long periods of inactivity (no workunits available). During these periods, we are extensively studying the outcomes of the previous workunits and feeding those results back into the application and the coming workunits.

  Environmental problems are growing and increasingly require taking human activity and preservation of the environment into account together (see for instance the European Common Agricultural Policy in 2005 and the French Grenelle de l'Environnement in 2007), leading to the concept of sustainable development (conference of Rio, 1992). Natural ecosystems have long been considered as supports for primary production for agricultural needs, so research focused on evaluating prairie productivity and the effect of management on the agronomical value of prairies. Other roles of prairies in ecosystem functioning were identified as prairies were degraded and converted into croplands, especially after 1950 (through erosion, eutrophication, and biodiversity loss) (Vitousek et al., 1994).

  International policies set new goals for these ecosystems because of their ecological services, in particular for the two major challenges of the coming decades: the availability of fresh unpolluted water and the regulation of carbon emissions (through carbon storage and energy sources with low carbon emissions). To achieve these goals, we now tend to create new natural systems as surrogates for the degraded natural systems to provide these ecological services (conference of Rio, 1992). Recent works have, for example, demonstrated the use of natural prairies to provide alternative carbon-positive biofuels (Tilman et al., 2006) and their role in carbon storage (Ni, 2002; Purakayastha et al., 2008).

  The design of these new grassland ecological systems has to respond efficiently to these ecological goals. The new systems are created by sowing mixed-species seed. This raises questions about the temporal evolution of the plants, which follow different life strategies and constantly interact with one another (Grime, 1977). Such systems are also managed by farmers and depend on the environment, so proposing a precise design requires taking all of these complex interactions into account. The urgent need for short-term answers makes it impossible to meet this societal demand through the classical experimental approach alone, which may require long-term surveys.

  Ecological problems depend on biological elements with complex interactions, whose responses may be delayed by a year or several years because of biological cycles. New tools are therefore needed to work around these biological constraints and capture the complexity of living ecosystems. The ViP project aims to use modelling (i) to provide extensive virtual experiments for testing solutions in a shorter time than would be possible through real experimentation, and (ii) to optimise experimental designs so as to obtain the most efficient results.

  For our application, we focus on an example from European agricultural policy, which compels farmers to establish herbal strips in agricultural landscapes. These linear systems play a key role in restoring water quality (to achieve this goal, they should be located by priority along water courses) (De Cauwer et al., 2006). An indirect effect of these systems is to maintain and restore biodiversity by providing ecological refuges for animal and plant species of high ecological value (Field et al., 2005; Reeder et al., 2005). These herbal strips must cover 3% of the farm's surface within crops such as cereals. The 2005 environmental policy does not provide precise technical guidelines for creating such buffer systems. Our ViP project aims to provide ecological guidelines for designing prairies with the best potential for water purification.


Magnetism@home

  Magnetism@home is a research project that uses Internet-connected computers to explore the equilibrium, metastable, and transient magnetization patterns (first and foremost in nano-scale magnetic elements and their arrays; later, other systems may be considered).


AQUA@home

  D-Wave’s AQUA (Adiabatic QUantum Algorithms) is a research project whose goal is to predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques. You can participate by running a free program on your computer.
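  AQUA@home's actual Quantum Monte Carlo code is far more elaborate, but as a loose illustration of the Monte Carlo family of techniques it belongs to, here is a minimal, hypothetical Metropolis sampler for a one-dimensional Ising spin ring (all names and parameters are illustrative, not taken from the project):

```python
import math
import random

def metropolis_ising(n=16, beta=1.0, steps=20000, seed=42):
    """Metropolis Monte Carlo on a 1-D Ising ring (coupling J = 1, no field).

    Toy sketch only -- not AQUA@home's algorithm; it just shows the
    accept/reject core common to Metropolis-style samplers.
    """
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change if spin i is flipped (nearest-neighbour coupling).
        d_e = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-beta * dE).
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i] = -spins[i]
    return spins

spins = metropolis_ising()
print(sum(spins) / len(spins))  # magnetization per spin, in [-1, 1]
```

  Lowering beta (higher temperature) makes uphill moves more likely and the spin configuration more disordered; raising it drives the ring toward aligned, low-energy states.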


DNETC@Home

  DNETC@Home is a wrapper between BOINC and the client. You can read more about it on the project website.


Project Capabilities

Project Name | CPU | AMD | NVIDIA | PS3 | Linux | Mac | Windows | Other
Albert@home [alpha project] | cpu (sse,sse2) | none | cuda32 | none | linux | mac | windows | none
AlmereGrid TestGrid [test project] | cpu | none | none | none | linux | mac | windows | none
BOINC Alpha Test [alpha project] | none | none | none | none | none | none | none | none
BURP (beta) | cpu (mt) | none | none | none | linux | none | windows | none
Chess960@home (alpha) | cpu | none | none | none | linux | none | windows | none
Collatz Conjecture | cpu (mmx,sse) | ati | | | | | |
CPDN [beta project] | cpu | none | none | none | linux | mac | windows | none
Einstein@home | cpu (sse2) | none | cuda32 | none | linux | mac | windows | none
eOn: Long timescale dynamics | cpu | none | none | none | linux | mac | windows | none
FreeHAL | cpu (nci) | none | none | none | linux | none | windows | none
gerasim@home (alpha) | cpu | none | none | none | none | none | windows | none
La Red de Atrapa Sismos | cpu (nci) | none | none | none | linux | mac | windows | none
The Lattice Project | cpu | none | none | none | linux | mac | windows | none
Leiden Classical | cpu | none | none | none | linux | mac | windows | freebsd
LHC@home 1.0 Sixtrack | cpu | planned (maybe in 2012) | planned (maybe in 2012) | none | linux | planned | windows | none
LHC@home 2.0 Test4Theory [test project] | cpu (requires VirtualBox installation) | none | none | none | linux | mac | windows | none
MilkyWay@home | cpu (mt) | ati14 | cuda_opencl | none | linux | mac | windows | freebsd
MindModeling@home (beta) | cpu | none | none | none | linux | mac | windows | none
Moo! Wrapper | cpu | ati14 | cuda31 | none | linux | mac | windows | none
Najmanovich Research Group | cpu | none | none | none | linux | mac | windows | none
Neurona@home (beta) | cpu | none | none | none | linux (x64 only) | none | windows (x64 only) | none
OProject@home | cpu | none | none | PS3 | linux | mac | windows | android, freebsd, solaris
Optima@home | cpu | under development | under development | none | linux | mac | under development | none
Pirates@home [test project] | cpu | none | none | none | linux | mac | windows | none
primaboinca | cpu | none | none | PS3 | linux | mac | windows | cell be
Quake-Catcher Network Sensor Monitoring | cpu (nci), hardware sensor required | none | none | none | linux | mac | windows | none
Quake-Catcher Network (Taiwan) | cpu (nci) | none | none | none | linux | mac | windows | none
Radioactive@home | cpu (nci), hardware sensor required | none | none | none | linux | none | windows | none
RALPH@home [alpha project] | cpu | none | none | none | linux | mac | windows | none
(beta) | cpu | none | none | none | linux | mac | windows | none
RNA World (beta) | cpu | none | none | none | linux | mac | windows | none
Rioja Science | cpu | none | none | none | linux | mac | windows | none
RSA Lattice Siever (2.0) | cpu | none | none | none | linux | none | windows | none
SETI@home Astropulse [beta project] | cpu | ati13ati | cuda | | | | |
 | cpu | none | none | PS3 (linux part only) | linux | mac | windows | see website
Surveill@home | cpu (nci) | none | none | none | linux | mac | windows | none
SZTAKI Desktop Grid | cpu | none | none | none | linux | mac | windows | none
theSkyNet POGS - the PS1 Optical Galaxy Survey | cpu | none | none | none | linux | mac | windows | none
WEP-M+2 Project | cpu | none | none | none | linux | none | none | none
World Community Grid | cpu | none | none | none | linux | mac | windows | none
WUProp@home | cpu (nci) | none | none | PS3 (linux part only) | linux | mac | windows | none
YAFU [alpha project] | cpu | none | none | none | linux | none | windows | none
Biochemical Library | cpu | none | none | none | none | none | windows | none
Virtual Prairie | cpu | none | none | none | linux | none | windows | none
Updated 2013-01-05 15:07:14.

March 2, 2011