It is more than 15 years since DNA microarrays were developed, and in that time they have been adored, attacked and, in an effort to look beyond the hype, appraised. One outcome of this 'soul searching' has been the realization that the flaws inherent to gene-expression arrays are similar to those of other high-throughput platforms. “I don't think microarrays are different from other technologies in that respect, and it's important for people to keep that in mind,” says Janet Warrington, vice-president of molecular diagnostics and emerging markets research and development for Affymetrix, based in Santa Clara, California. “I try to point out 'microarray exceptionalism' wherever I find it.”

John Quackenbush, a computational biologist at the Harvard School of Public Health, sees several fundamental errors in the way many researchers tackle microarray gene-expression studies. “People tend to go out blindly and do experiments, then go back and try to analyse them and figure out what the question is afterwards. I think that's the first thing you have to avoid,” he says. “It's also important to make sure you remove confounding factors from the experiment wherever possible.”

Affymetrix's tiling and exon arrays: two alternatives for in-depth screening of the human genome.

Systematically sorting out sources of error can be a daunting process, but a recent series of investigations by researchers such as Quackenbush into cross-experiment, cross-platform and cross-laboratory variability has helped to clarify some of the issues that were preventing comparisons between experiments. Several multi-institutional projects are now under way to develop more reliable experimental protocols and controls (see 'Standards and practices'). Meanwhile, many scientists, manufacturers and programmers are working to develop practical tools that could help eliminate unwanted variability from experiments and analyses.

The biggest problems often occur early on. Existing kits for RNA preparation are effective, but many are designed for a 'best-case scenario': large amounts of fresh biological source material. Researchers are increasingly interested in studying gene expression in small numbers of cells, which demands efficient systems that can work with limited samples.

Various systems have been developed, many of which are based on a linear-amplification procedure known as the Eberwine method. Labelling strategies typically fall into two main categories: direct and indirect. Direct methods, used in systems such as CyScribe, available from GE Healthcare Life Sciences of Little Chalfont, UK, and ChipShot, from Promega of Madison, Wisconsin, typically involve the incorporation of fluorescent-dye-conjugated nucleotides during complementary DNA (cDNA) synthesis.

One favoured indirect-labelling strategy involves incorporating an aminoallyl-modified nucleotide into cDNA or amplified RNA transcripts, and then labelling these with chemically functionalized fluorescent dyes. EPICENTRE Biotechnologies in Madison, Wisconsin, has incorporated this approach into its TargetAmp kits for RNA amplification. The company says these kits can deliver up to 5-million-fold amplification, and even allow the study of single cells. “We can get down to 10 picograms of starting RNA,” says Shervin Kamkar, a technical-sales specialist at the firm, “so it's really useful for people doing stem-cell work or laser capture.”

Designer labelling

Genisphere of Hatfield, Pennsylvania, uses a unique labelling approach for its 3DNA kits that is based on fluorescently tagged DNA-based dendrimers. These contain sequences complementary to 'capture' sequences added to cDNA during sample amplification. Different dendrimers are available with varying quantities of linked fluorophore molecules that determine the limits of detection, down to less than a microgram of starting material. Detection can be further augmented with the company's SenseAmp kits, which use one or two rounds of a non-Eberwine amplification strategy to produce sense-strand RNA. “We've gone as low as 0.1 nanograms of total RNA,” says Bob Getts, Genisphere's director of research and development, “which is typically from ten cells.”

Another problem is posed by the increasing use of microarrays for analysis of clinically prepared formalin-fixed paraffin-embedded (FFPE) samples, such as tumour biopsies. These may have been in storage for anything from months to decades, and the resulting RNA degradation is a serious challenge for reagent designers.

Several companies, including EPICENTRE and Genisphere, are working on this. According to Getts, SenseAmp is well-suited to FFPE work. “We routinely use samples that have been degraded to between 50 and 250 bases long,” he says. Array manufacturer Illumina of San Diego, California, offers an alternative with the DNA-mediated annealing, selection, extension and ligation (DASL) assay. This is an adaptation of its GoldenGate genotyping technology that takes advantage of a universal array consisting of large numbers of specific tag sequences in order to quantify PCR-amplified primer-extension products. “It's been shown to work on samples as old as 20 years,” says Shawn Baker, Illumina's scientific product manager for gene expression.

Super-sensitive: high-density arrays of long probes are offered by NimbleGen Systems.

Still further optimization will be needed as researchers continue to target smaller and more biologically relevant specimens. “There's just so much variation in the material that you're given,” says Kamkar, “and it's hard to know how effective a technology is until people have tried your approach on their own samples.”

When asked about concerns regarding the reliability of microarray experiments, many in the field are quick to defend the hardware. “The microarray instrument itself, if used correctly, is precise and accurate,” says Rafael Irizarry, a biostatistician at the Johns Hopkins University in Baltimore, Maryland. “The quality of commercial arrays is improving even as their price is dropping, so commercial arrays are supplanting the 'home-brew' approach more and more,” says Quackenbush. Different manufacturers have opted for various strategies to improve experimental quality and to minimize opportunities for human error (see 'Hands off!').

John Blume, vice-president of assay and application product development at Affymetrix, cites ever-increasing probe density and genome coverage as one secret of Affymetrix's success. The company's Human Genome U133 Plus 2.0 GeneChip microarrays feature 11 different oligonucleotide probes for each transcript, which confer a number of benefits. “As annotation shifts, people's bets on which sequence is useful for a gene can prove wrong,” says Blume. “This design philosophy of multiple sequence probes per gene provides a buffer that single sequences cannot.” It also protects against unexpected glitches at the hybridization stage. In addition, Affymetrix offers broadened coverage through its genomic-tiling arrays and, more recently, new exon arrays that allow users to assemble detailed, genome-wide exon-usage profiles for human, mouse and rat studies.
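
As a rough illustration of why probe redundancy helps, the sketch below collapses several probe-level intensities for one transcript using a median, so that a single mis-annotated or poorly hybridizing probe barely shifts the gene-level estimate. It is a toy under simple assumptions, not Affymetrix's actual summarization algorithm.

```python
import numpy as np

def summarize_probe_set(probe_intensities):
    """Collapse several probe-level intensities for one transcript into a
    single expression estimate, using the median so that one aberrant probe
    (bad annotation, cross-hybridization) has little influence."""
    return float(np.median(probe_intensities))

# Toy probe set: 11 probes for one transcript, one of which misbehaves.
probes = [220, 235, 228, 241, 230, 225, 238, 232, 227, 236, 950]
print(summarize_probe_set(probes))   # ~232, barely moved by the outlying probe
print(np.mean(probes))               # ~296, dragged upwards by the single bad probe
```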

Probing for answers

Illumina also uses redundancy to maintain experimental quality control in its bead-based arrays. “Each of our arrays has about 30 replicates of each probe,” says Baker. “Because these 30 measurements are spread randomly across the chip, we don't have to worry about little things like smudges on the array — any outlier measurements get removed.” These arrays also benefit from the combination of high probe density with the inclusion of multiple arrays on a given chip. This allows users to simultaneously profile a number of samples — up to 96 parallel arrays — in an 'array of arrays' format.
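
A minimal sketch of how replicate beads buy robustness is shown below: discard measurements that sit far from the median, then average the rest. The three-MAD cut-off and the simple averaging are illustrative assumptions, not Illumina's published algorithm.

```python
import numpy as np

def bead_average(values, n_mads=3.0):
    """Average replicate bead intensities after discarding outliers lying more
    than n_mads median-absolute-deviations from the median, so that a smudge
    covering a few beads does not distort the reported signal."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med)) or 1e-9   # guard against a zero MAD
    keep = np.abs(values - med) <= n_mads * mad
    return values[keep].mean()

# 30 replicate beads, three of them obscured by a smudge.
beads = np.concatenate([np.random.normal(500, 20, 27), [55, 60, 48]])
print(bead_average(beads))   # close to 500; the smudged beads are ignored
```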

Agilent Technologies of Santa Clara, California, touts the use of long probes — 60mers, compared with Affymetrix's 25mers — as an advantage for enhancing sensitivity to low-abundance transcripts, typically a weakness for microarray platforms. Agilent's instruments incorporate a multiple-scan approach, further extending the sensitivity of detection. “You can look at a broader range of transcripts and still get linearity with regard to the signal recorded,” explains Kevin Meldrum, director of genomics marketing. Agilent has also incorporated proprietary 'spike-in' controls into its platform, which allow monitoring of experimental quality.
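
The general idea behind merging scans of different sensitivity can be sketched as follows: keep the high-gain reading wherever it is on scale, and where it saturates substitute a rescaled low-gain reading. The saturation threshold and the median-ratio rescaling are illustrative assumptions, not Agilent's actual procedure.

```python
import numpy as np

SATURATION = 65000   # assumption: a 16-bit scanner tops out near 65,535 counts

def merge_scans(high_gain, low_gain, saturation=SATURATION):
    """Combine a sensitive (high-gain) scan with a low-gain scan of the same
    array.  Features that are on scale in both scans estimate the gain ratio;
    saturated high-gain features are replaced by rescaled low-gain readings,
    extending the linear range of the measurement."""
    high_gain = np.asarray(high_gain, float)
    low_gain = np.asarray(low_gain, float)
    on_scale = high_gain < saturation
    ratio = np.median(high_gain[on_scale] / low_gain[on_scale])
    return np.where(on_scale, high_gain, low_gain * ratio)

high = np.array([1200.0, 8300.0, 65535.0, 65535.0])   # two features saturated
low  = np.array([ 120.0,  830.0,  9100.0, 22000.0])
print(merge_scans(high, low))   # saturated spots recovered from the low-gain scan
```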

An efficient and cost-effective production process gives NimbleGen Systems of Madison, Wisconsin, particular flexibility in the generation of its arrays. These combine a maskless photolithography method with a proprietary chemical process for efficient and accurate in situ synthesis of high-density probe arrays. The company's latest-generation chips contain more than 2 million probes. NimbleGen also favours the use of long, typically 60mer, probes. “We are the only company that combines long oligomers with high density,” says vice-president of business development Emile Nuwaysir. NimbleGen's rapid production process also allows it to continually update its probe sequences to align with the latest genome-annotation data. Affymetrix is currently taking advantage of this process for the production of NimbleGen-manufactured NimbleExpress custom GeneChips.

A relatively recent entrant into the gene-expression-array field, Applied Biosystems of Foster City, California, has used years of experience in genomic work — and access to the proprietary genome databases of Celera Genomics, based in Rockville, Maryland — to good advantage in the design of its oligonucleotide arrays. “We've basically front-loaded all of the bioinformatics work,” says staff scientist Chris Streck. “We do all the curation and annotation of these particular genes, and we make sure we have the most comprehensive and complete view of the genome to begin with.” Applied Biosystems also benefits from a chemiluminescence-based approach to detection, with considerably reduced background noise relative to standard fluorescent systems.

The number crunch

However, high-quality samples and high-tech instrumentation alone won't save the microarray experiment. Some of the most fundamental challenges lie in gleaning biological significance from mounds of data and designing experiments with a statistically sound foundation.

David Allison, a biostatistician at the University of Alabama at Birmingham, remembers the early days of microarray work with horror. “The sample sizes were way too small, unjustified statements were made, and the analyses were primitive,” he says. Fortunately, he adds, “the field recognized this, and a lot of people started weighing in with their own methods”.
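
For a sense of what 'too small' means, a back-of-envelope sample-size calculation using the standard normal approximation is sketched below. The effect size and significance level are illustrative assumptions, and the multiple testing inherent in genome-wide experiments pushes the required numbers higher still.

```python
from scipy.stats import norm

def samples_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate number of arrays per group for a two-sample comparison,
    using the usual normal-approximation formula
    n = 2 * (z_{1-a/2} + z_{power})^2 / d^2, where d is the difference in mean
    log-expression divided by its standard deviation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# A one-unit log2 change against a standard deviation of one log2 unit:
print(round(samples_per_group(effect_size=1.0)))   # roughly 16 arrays per group
```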

According to Irizarry, an important first step for good analysis is the effective pre-processing of raw data, using algorithms that accurately convert spot fluorescence to gene-expression estimates. “Changing those algorithms can make a difference,” he says, “and you can turn an experiment that looks so-so into something that looks powerful and precise.” Irizarry has also called attention to the importance of data normalization, and has designed an online tool, Affycomp II, that allows users to benchmark their normalization methods using 'known' data sets from Affymetrix GeneChip experiments and makes the benchmark results publicly available — extending the community's ongoing trend towards greater data sharing (see 'Share and share alike').
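
One widely used normalization step, quantile normalization, forces every array in an experiment to share the same intensity distribution so that arrays can be compared on an equal footing. The sketch below is a bare-bones version for illustration, without the tie handling and probe-level modelling of production pipelines.

```python
import numpy as np

def quantile_normalize(matrix):
    """Quantile-normalize an (arrays x genes) intensity matrix: rank the values
    within each array, average across arrays rank by rank to build a reference
    distribution, then map each value back to its original position.  A common
    way of removing array-to-array technical variation."""
    matrix = np.asarray(matrix, dtype=float)
    ranks = np.argsort(np.argsort(matrix, axis=1), axis=1)   # rank of each value within its array
    mean_by_rank = np.sort(matrix, axis=1).mean(axis=0)      # reference distribution
    return mean_by_rank[ranks]

raw = np.array([[5.0, 2.0, 3.0],
                [4.0, 1.0, 6.0],
                [9.0, 7.0, 8.0]])
print(quantile_normalize(raw))   # each array (row) now draws from the same set of values
```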

Programs such as Ingenuity Pathways Analysis use extensive databases to assemble detailed pathway models.

Most major chip and instrument manufacturers also market software packages with which to analyse raw microarray data. Agilent offers the GeneSpring suite, whereas Illumina has developed BeadStudio, which is specifically designed for its array format and can also interact with other analytical tools. Affymetrix distributes a variety of programs, and has also established its 'GeneChip Compatible' program with various other companies. “It's not practical for us to be experts at everything,” says Blume. “By working with our partners, we can provide better solutions for a more diverse range of users.”

The open-source movement has also taken firm hold in this field, and a particularly strong contributor has been the Bioconductor project, now in its fifth year. Bioconductor was launched to make high-quality, community-developed and community-tested tools for statistical analysis freely available. Its foundation language is 'R', a language designed specifically for statistical computing. “When a statistician develops a method and wants people to use it, he or she will carefully create software for people to implement this method in R,” explains Irizarry. “And now, any method that's good, that people like and want, will be implemented in R and made available in Bioconductor.”

Pathfinders

Unfortunately, Bioconductor can be difficult for scientists lacking programming skills to use effectively. In an effort to bring R's analytical capabilities to these users, Quackenbush's group has developed the TM4 suite. “These programs are the biologist-friendly version of what people are doing at Bioconductor,” he explains. A user-friendly solution is also offered by Insightful of Seattle, Washington, in the form of its S+ArrayAnalyzer software, which ports the complete set of Bioconductor statistical tools.

But this is just scratching the surface, and the variety of analytical tools available can be confusing. “We're facing too many options for analysing the same data set,” says Leming Shi of the US Food and Drug Administration in Rockville, Maryland, “and there has not been adequate scientific vetting of the capabilities and limitations of available methods.”

Scientists may cringe at the most effective long-term solution to this problem — acquiring a solid grounding in practical statistics. Many in the field recognize that biologists have an unfortunate tendency to 'plug and play' with analytical methods without understanding the underlying principles, which results in the misuse of otherwise effective strategies. Ultimately, good maths may become the key to good science. “Soon, you're probably not going to be able to say that you're a molecular biologist if you don't understand some statistics or rudimentary data-handling technologies,” says Blume. “You're simply going to be a dinosaur if you don't.”

Of course, the objective of microarray experiments is not to generate endless spreadsheets and scatter-plots, but to produce data that can be used to formulate an understanding of biological events. This requires a way to predict the impact of gene-expression shifts on networks of interacting gene products, and this, in turn, requires detailed databases in which the function and behaviour of these individual gene products has been accurately defined and annotated.

Several such databases now exist, thanks to projects such as Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG). These resources serve as the foundation for a number of tools for 'second-order' microarray analysis. Some are academic in origin and freely available, such as Gene Set Enrichment Analysis (GSEA), which attempts to identify significant shifts in sets of interacting or associated gene products, and GenMAPP, a software tool for assembling interactive, graphical maps of biological pathways from gene-expression-array data. Others, such as Ingenuity Pathways Analysis, from Ingenuity Systems of Redwood City, California, PathwayStudio, from Ariadne Genomics of Rockville, Maryland, and BiblioSphere, from Genomatix of Munich, Germany, have been developed commercially.
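
The simplest form of such second-order analysis asks whether a list of differentially expressed genes overlaps a GO or KEGG category more often than chance would predict. A minimal hypergeometric version of that over-representation test is sketched below; GSEA itself uses a more sophisticated rank-based statistic, so this is only the basic idea.

```python
from scipy.stats import hypergeom

def overrepresentation_p(n_genome, n_category, n_hits, n_overlap):
    """P-value that a list of n_hits differentially expressed genes shares at
    least n_overlap members with a GO/KEGG category of size n_category, out of
    n_genome genes on the array (hypergeometric upper tail)."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_category, n_hits)

# 20,000 genes on the array, a 150-gene pathway, 300 significant genes,
# 12 of which fall in the pathway (about 2 would be expected by chance):
print(overrepresentation_p(20000, 150, 300, 12))   # small p-value -> enriched
```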

Ingenuity's efforts at pathway assembly complemented GO and KEGG with a large-scale 'knowledge base' that was assembled, from scratch, for its Pathways Analysis software. “It took us about four and a half years to reach critical mass in terms of ontology and content,” says chief technology officer Ramon Felciano, “and then go through a decade's worth of literature and manually structure millions of pathway relationships using those ontologies.” This knowledge base now incorporates definitional data from GO as well as a number of user-uploaded pathways defined from new or unpublished experimental data, and continues to be closely curated. “If you're building pathways based on biological-data models, the quality, accuracy, richness of detail and breadth of coverage of the biological content are critical,” says Felciano.

BiblioSphere uses a multi-pronged approach for its analysis that, as its name indicates, starts in the library. “The program's first line of analysis is literature,” explains Martin Seifert, vice-president of microarray business at Genomatix. “We build up a literature network from the co-citation of genes in abstracts from PubMed.” This is followed by an overlay of other lines of evidence, including a curated pathway database and information from ontology databases such as GO and the US National Library of Medicine's Medical Subject Headings. “The most important step,” says Seifert, “is finding the biological aspects that are buried in data from chips.”
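
The co-citation idea itself is simple to sketch: count how often pairs of gene symbols appear in the same abstract, and let those counts define the edges of a literature network. The toy below assumes the abstracts have already been retrieved and ignores the synonym resolution and curation that tools such as BiblioSphere layer on top.

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(abstracts, gene_symbols):
    """Count how often each pair of gene symbols is mentioned in the same
    abstract -- the raw material for a literature co-citation network.
    (A toy: real tools also resolve gene synonyms and curate the matches.)"""
    edges = Counter()
    for text in abstracts:
        mentioned = sorted(g for g in gene_symbols if g in text)
        for pair in combinations(mentioned, 2):
            edges[pair] += 1
    return edges

abstracts = [
    "TP53 and MDM2 form a negative feedback loop...",
    "Expression of MDM2 modulates TP53 stability and CDKN1A induction...",
    "CDKN1A is induced following DNA damage...",
]
print(cocitation_counts(abstracts, {"TP53", "MDM2", "CDKN1A"}))
# Counter({('MDM2', 'TP53'): 2, ('CDKN1A', 'MDM2'): 1, ('CDKN1A', 'TP53'): 1})
```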

Effective pathway-building systems may offer the promise of making gene-expression arrays a potent tool for performing detailed diagnostic analyses in fields such as toxicology and pathology. “Our customers see a lot of promise in generating de novo pathways that may not be exactly like the ones you see in your textbooks, but may be more specific to the disease or tumour stage that you are looking at,” says Felciano.

This approach may also serve as a model for further integration of microarray findings with other data collections, ranging from the combination of different sets of chip data — such as associating genome-wide expression patterns with transcription-factor binding and DNA methylation — to more ambitious syntheses with massive databases such as PubMed, OMIM and DrugBase. “The way to deal with the problem of big data is to beat it senseless with other big data,” says Quackenbush. “There's a host of information out there on how biological systems function that has been collected over the past 300 years. What we want to be in a position to do as a community is leverage that information, synthesize it and discover things that we couldn't discover using any technology on its own.”