
Jun 04


What Scientific Questions Should Cancer Epidemiology Address in the Next Decade to Impact Public Health?

Trends in 21st Century Epidemiology: From Scientific Discoveries to Population Health Impact on December 12-13, 2012

 

The Epidemiology and Genomics Research Program (EGRP) has initiated a strategic planning effort to develop scientific priorities for cancer epidemiology research in the next decade, amid a period of great scientific opportunity and constrained resources. EGRP would like to engage the research community and other stakeholders in a planning effort that will include a workshop in December 2012 to help shape new foci for cancer epidemiology research.

To facilitate this process, we invite the research community to join in an ongoing Web-based conversation to develop priorities and the next generation of high-impact studies. A recent commentary published in Cancer Epidemiology, Biomarkers & Prevention summarizes our efforts.

Success Stories From 20th Century Cancer Epidemiology

Cancer epidemiology has led to many success stories that have improved policy and practice. These include, among others, the identification of cigarette smoking as a cause of lung cancer and many other cancer types, the establishment of the role of HPV in cervical and other cancers, and the discovery of hundreds of genetic loci as risk factors for various cancers.

Future Opportunities and Challenges

We are now at a major crossroads in our understanding of cancer. The tools of molecular biology, genomics, and other high-throughput “omic” technologies are increasingly being integrated into epidemiologic investigations.

In addition, we are increasingly able to take social, behavioral, and environmental measurements at the individual, community, and health system levels. Moreover, research is increasingly supported by advances in bioinformatics and information technology, allowing us to collect, analyze, and synthesize information from multiple disciplines at an ever-increasing pace.

With these opportunities, however, comes the major challenge of dealing with the data deluge and distinguishing true causal relationships from the millions upon millions of observations that are background noise. Thus, we confront important choices about scientific direction in order to maximize the use of existing research infrastructures and plan wisely for new ones while working to fund cancer epidemiology studies with changing resources.

EGRP Invites Your Feedback

We would like the research community to contribute their thoughts on several areas which we will introduce throughout the summer.

This week, we ask the following fundamental question:

  • What are the major scientific questions that cancer epidemiology should address in the next decade to impact public health?

Please use the comment section below to share your perspectives.  We encourage you to be as specific as possible in your reply. You can use or be inspired by the NCI Provocative Questions exercise.  Comments provided through our blog will be used to shape the workshop discussion in December.  Ultimately, we will all benefit from a vibrant dialogue that will help shape the future of cancer epidemiology in the next decade.

 

Muin J. Khoury, M.D., Ph.D., serves as Acting Associate Director, Epidemiology and Genomics Research Program (EGRP), in NCI’s Division of Cancer Control and Population Sciences (DCCPS). Since 2007, Dr. Khoury has served NCI as a Senior Consultant in Public Health Genomics. He has helped integrate public health genomics research into the Division’s research portfolio, including comparative effectiveness research in genomics and personalized medicine. Dr. Khoury is also the founding Director of the Centers for Disease Control and Prevention’s (CDC) Office of Public Health Genomics.

Dr. Khoury received his B.S. degree in Biology/Chemistry from the American University of Beirut, Lebanon, and his medical degree and pediatrics training from the same institution. He received a Ph.D. in Human Genetics/Genetic Epidemiology and training in medical genetics from The Johns Hopkins University. Dr. Khoury is board certified in medical genetics and is internationally recognized for his expertise in genetic epidemiology and public health genomics.

 

Permanent link to this article: http://blog-epi.grants.cancer.gov/2012/06/04/what-scientific-questions-should-cancer-epidemiology-address-in-the-next-decade-to-impact-public-health/

15 comments


  1. Jaymie Meliker

    Under the premise that most major unique causes of cancer have been identified and interactions between multiple factors are what likely remain to be discovered:
    1. Development of statistical methods for investigating mixtures, be they GxE, ExE, ExExExG, etc.
    2. Development of national registries to link cancer outcomes with relevant data, including electronic medical records, pharmaceuticals, blood spots collected at birth, etc. Somehow we need to go beyond the limited cross-sectional or case-control studies; we need larger sample populations if we are going to investigate mixtures. (A minimal sketch of the basic interaction model such methods would extend appears below.)
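
    As a starting point for the interaction modeling mentioned above, a multiplicative gene-environment (GxE) term can be tested with ordinary logistic regression. The sketch below uses purely hypothetical simulated data and assumes the Python statsmodels package is available; it is an illustration, not part of the original comment.

        # Hypothetical sketch: testing a multiplicative GxE interaction with logistic
        # regression, the baseline that richer mixture methods would need to extend.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 5000
        df = pd.DataFrame({
            "G": rng.binomial(2, 0.3, n),   # risk-allele count (0, 1, or 2)
            "E": rng.binomial(1, 0.4, n),   # binary environmental exposure
        })
        # Simulate case status with a modest interaction effect on the log-odds scale
        log_odds = -2 + 0.1 * df["G"] + 0.2 * df["E"] + 0.3 * df["G"] * df["E"]
        df["case"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

        # The G:E coefficient estimates the multiplicative interaction on the odds scale
        model = smf.logit("case ~ G + E + G:E", data=df).fit(disp=False)
        print(model.summary().tables[1])

    Higher-order mixtures (ExE, ExExExG, and beyond) quickly outgrow this single-term framework, which is one reason the linked registries and larger samples called for above are needed.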

  2. Paul Brennan

    Most cancers are still identified at a late stage, and I would argue that epidemiologists have an essential role to play in determining the most appropriate strategy for screening and for identifying biomarkers for the early detection of cancer. This may include the use of proteomic, metabolomic, microRNA, circulating tumour DNA, and other biomarkers that are highly specific and sensitive, with the aim of detecting pre-clinical cancer. Epidemiologists have control of the large population biorepositories (both cohorts and large case series) that are needed to test potential biomarkers. We need to interact with molecular colleagues to devise a strategy for how best to prioritise potential candidates, e.g., agnostic screening of multiple markers in large case series, including tumour samples, prior to testing in pre-diagnostic blood samples.

  3. Gertraud Maskarinec

    Looking at the many large epidemiologic investigations in recent years, two methodologic topics deserve more attention for future planning. How can we return to a more hypothesis-based approach instead of processing large amounts of nutritional, genetic, or metabolomic data with the hope of finding a few significant associations? And within the molecular epidemiology field, how can we design smaller and more efficient intervention studies in humans to understand potential biologic mechanisms that were identified in experimental studies and that serve as rationale for large cohort studies or prevention trials?

  4. J. Bernabe

    Muin,
    With this awesome wave of big data technologies, we have found a way of dealing with volumes of data we couldn’t even dream of before. Cancer research could benefit immensely from this wave… If we convince people to share more information about their habits and whereabouts to help society, you could end up with many more data sources available and find more telling patterns.
    Just one example: let’s say you could register where a given cancer patient has been, hour by hour, for the last 20 years. Crossing this information with weather records, you could estimate exposure to sunlight… or the average distance to a place with a given positive influence on cancer development, etc.
    The technology is now there… we just need to ask people to share their data to bring cancer research to the next level!

    1. A. Bluredson

      It’s a good idea, but it raises questions about data security and interference with one’s privacy. Only a very small number of people would agree to such long observation, and the smaller this number, the higher the possibility of inadvertence.

  5. Eiliv Lund

    Moving into the post-GWAS era, we should pursue improved and extended designs for the traditional prospective study in order to study the carcinogenic process over time, not simply run risk assessments as we do today. The challenge for cancer epidemiology will be to incorporate into the design the potential for studies of metabolic pathways in observational studies of humans, as verification of results from cell culture and animal experiments. In order to use the upcoming technologies of functional genomics, we need to collect biological material, specifically blood samples for transcriptomic and epigenetic analyses, at time of entry, repeatedly during follow-up, and at time of diagnosis, when tumor tissue should also be included. This will create new challenges in biostatistics and bioinformatics, not to mention for the epidemiologists interpreting the results. Better knowledge of the carcinogenic process could improve public health advice and the understanding of causality.

  6. Mandy Toland

    One area of potential impact that has been missed thus far is the influence of epigenetic modifiers on disease state and transgenerational epigenetic effects. Work from animal models suggests that exposures experienced by parents and grandparents can impact the disease state of later generations independent of DNA sequence. How do exposures in previous generations alter the epigenome in such a manner as to impact risk? Similarly, are there parent-of-origin effects of risk variants that we are missing with our current approaches to study design and analysis?

  7. Jonathan M. Samet

    The December meeting is timely and does come at a time of “paradigm shift”, a shift that is, in fact, occurring far more rapidly than anticipated. The shift is driven by technological advances and the predominance of questions in many fields, including cancer, that require very large N studies if there is to be sufficient power to identify meaningful subgroups. There is also the promise of “personalized” or “precision” medicine; its fulfillment will require sifting through extremely large datasets to find signatures that are sufficiently predictive to be of use in individuals.

    Undoubtedly, other commenters will address the rise of “omics” data, the need for very large cohorts, and the debate as to whether there should be multiple cohorts or one very large cohort that can serve as a multipurpose research “laboratory”. This window of “paradigm shift” is uncomfortable, and some will lament a move to larger cohorts and the attendant implications for investigators, pointing to the need to cling to elements of the “old” paradigm. These topics will be well attended to, and here I focus on the organization of epidemiology as a discipline and its structures for collective decision-making.

    The meeting’s agenda gives attention to matters related to epidemiology, a scientific discipline, but future directions depend on decision-making by epidemiologists and associated institutions. The moment calls for collective reflection and discussion as to how research resources can best be allocated. The discussion needs to involve intramural and extramural researchers, and the reach needs to be global, not limited to the United States. One useful point for discussion at the meeting, which needs to be among the “recommendations”, relates to collective decision-making at a time of paradigm change. There are many “stakeholders” and some “decision-makers” and “key opinion leaders”, all groups being represented among the meeting attendees. One useful exercise would be to consider how a collective and optimal decision can be made and who will be involved in making it. To that end, the following needs to be considered:

    • Who are the “decision-makers” and what are the venues for decision-making?
    • How can decision-making involve the broad “community” of epidemiologists? What are the underlying networks? What is the role of professional organizations?
    • What resources are available for initiating large cohorts? What about the competition between resources for such cohorts and the funding of individual researchers?
    • How does population research directed at etiology and risk extend into the full continuum of clinical and translational research?
    • What about the various cohorts being assembled around the world, and particularly those associated with health care systems? Can there be networks of such cohorts? The development of a global network?
    • How will research institutions, particularly academic institutions, respond to the changing nature of epidemiological research?

    Some Relevant References
    1. Hoover RN. The evolution of epidemiologic research: from cottage industry to “big” science. Epidemiology. 2007; 18(1): 13-7.
    2. Manolio TA, Weis BK, Cowie CC, et al. New models for large prospective studies: is there a better way? Am J Epidemiol. 2012; 175(9): 859-66.
    3. Samet JM. “Big epidemiology for big problems”. 17th annual Robert Gordon lecture, National Institutes of Health. May 18, 2011. http://videocast.nih.gov/launch.asp?16677 [podcast]; http://wals.od.nih.gov/2010-2011/samet_abstract.html [abstract]
    4. Trochim WM, Cabrera DA, Milstein B, Gallagher RS, Leischow SJ. Practical challenges of systems thinking and modeling in public health. Am J Public Health. 2006; 96(3): 538-46.

  8. Christopher P. Wild

    Two-way translational cancer research: a better balance between prevention and treatment

    Advances in understanding the human genome are being translated into improved therapeutics, tailored to exploit the underlying tumor genetics of an individual patient. This personalized, or stratified, medicine promises improved survival for patients with tumors amenable to this therapy and for which there is a commercial imperative for drug development.

    Simultaneously, population growth and improved life expectancy will result in 70% more new cancer patients every year by 2030, with the greatest increases in developing countries (GLOBOCAN 2008). In many developing countries, access to cancer therapy of any sort remains limited, and personalized medicine is unlikely to make great inroads in the foreseeable future. Therefore, thinking on a global scale, one must ask how advances in the molecular sciences can be translated toward prevention in order to combat this impending epidemic.

    Identification of risk factors is one foundation for cancer prevention. Much is known already, with typical estimates that 30-50% of cancers are preventable based on current knowledge (Parkin et al., 2011). However, there remain a number of common cancers for which the etiology remains obscure (e.g., prostate, pancreas, kidney, brain, and hematological cancers) and others for which only a proportion is explained (e.g., colorectal and breast). In some cases interventions need to be evaluated, while for others there are barriers to implementation.

    Applying new technologies to epidemiology can help in a number of areas (Wild 2011; 2012a). First, biomarkers promise improved measurement of environmental and lifestyle exposures. Refinements in personal and environmental monitors, geographic information systems, and more sophisticated questionnaires provide complementary approaches. All need careful validation and adaptation to large numbers of subjects at low cost, to make best use of prospective cohort studies and associated biobanks. Second, molecular data across different “omics” platforms provide comprehensive portraits of cancer sub-types. Such molecular profiling allows exposure to be analyzed in genetically-defined sub-sets of cancers, possibly revealing new underlying associations. Third, recognition that exposures may act through epigenetic alterations is leading to new paradigms, biomarkers and fresh opportunities to investigate the biological plausibility of exposure-disease associations. Fourth, biomarkers may provide valid surrogate endpoints in evaluation of interventions. Finally, molecular tools will allow exposures throughout the life-course (including the perinatal period) to be linked to biological changes that may provide clues to subsequent cancer risk.

    The above are promising areas for the laboratory sciences to be applied to epidemiology. However, for major advances to be made, a number of critical changes are needed: 1) the development and promotion of inter-disciplinary research, underpinned by appropriate training; 2) prioritization by policymakers and funders of translational research into cancer prevention, recognizing that investment in this area is less attractive to the private sector than clinical research; and 3) clear advocacy to illustrate to the public, politicians, and non-governmental organizations the limitations of improved treatment in addressing the growing global cancer burden.

    In summary, translational cancer research stands at an exciting but critical point in time. What is needed is a concerted effort to drive the advances in basic science towards cancer prevention, with an eye to reducing global inequalities in health in the process (Wild 2012b).

    References
    GLOBOCAN 2008 v2.0. Cancer Incidence and Mortality Worldwide: IARC CancerBase No. 10. Lyon, France: IARC. http://globocan.iarc.fr, accessed 24/10/12.
    Parkin DM, et al. (2011). The fraction of cancer attributable to lifestyle and environmental factors in the UK in 2010. Br J Cancer, 105: S77-S81.
    Wild CP (2011). Future research perspectives on environment and health: the requirement for a more expansive concept of translational cancer research. Environ Health, 10(Suppl 1): S15.
    Wild CP (2012a). The exposome: from concept to utility. Int J Epidemiol, 41: 24-32.
    Wild CP (2012b). The role of cancer research in noncommunicable disease control. J Natl Cancer Inst, 104: 1051-1058.

  9. Margaret Spitz

    Identifying gaps and provocative questions in cancer epidemiology

    There are still gaps in our discovery of the mechanisms underlying associations between the two major lifestyle risk factors (obesity and tobacco use) and cancer risk. We need a coordinated, integrative, and interdisciplinary team science approach to extend the boundaries of cancer epidemiology and comprehensively explore the mechanistic underpinnings of epidemiologic observations. In this way, we can tackle these and other important questions, achieve economies of scale, respond nimbly to emerging technologies, and integrate and translate knowledge to impact public health.

    Smoking and Lung Cancer: Genome-wide association studies (GWAS) have identified novel genetic loci contributing to lung cancer risk, but these explain only a fraction of the inherited contribution, leading to the hotly debated issue of the ‘missing heritability’ of cancer. There is still uncertainty as to the degree to which risk for lung cancer is mediated through the genetic risk for nicotine dependence. Though more common variants are anticipated to contribute to risk, their effect sizes will be too small to reach significance in genome-wide screens, and there are diminishing returns in further evaluating these SNPs. Instead we need to focus on functional studies of the loci identified in GWAS, including in vitro and in vivo studies. Emerging metabolomic markers may provide useful biomarker dosimeters of smoking damage relevant to carcinogenesis. We should focus efforts on “smarter” study designs, either within existing cohorts or as ancillary studies. For example, to increase power and reduce sample sizes, we can select extreme phenotypes in which rare variants will be enriched (e.g., probands from high-risk families, young-onset cases, or light smokers). Lung cancer risk prediction has not been sufficiently robust to integrate replicated SNPs into disease prediction, and construction of clinically valid risk prediction models for current and former smokers remains a challenge for enrollment into screening trials. The answer may lie in genomic or epigenomic markers in the serum, sputum, or nasal epithelium.

    Obesity: Much obesity research has focused on attempts to disentangle the intertwined pathways between obesity and cancer risk (e.g., inflammation, sex hormones, insulin, insulin-like growth factors, Akt/mTOR, adipokines, and sirtuins). Epidemiologic studies, however, are unlikely to accelerate mechanistic understanding. We need to be asking different questions. What have we learned about the genetics of obesity and of physical activity? What is the role of activated brown fat? Is the association reversible if obesity is treated? How does the colon microbiome impact obesity? Can we use this information to impact the obesity epidemic?

    Demonstrating the reversibility of cancer risk with weight reduction (as in cohorts of bariatric surgery patients) could be a powerful behavioral incentive. We need well-designed prospective studies with “next generation” measures of diet; physical activity; body fat type and distribution; and extensive serial biospecimens for high-throughput multiplex measurements of metabolic hormones and inflammatory markers. Comparative studies in populations with wider and different ranges of body fat distribution and diets are recommended. Preclinical models mimicking Western-diet exposures to examine pathway perturbations (systems genetics) and nutri-epigenomics could contribute to mechanistic understanding. We need to investigate the role of brown adipose tissue (BAT) in energy expenditure and the obese gut microbiome, which can harvest more energy from the diet.

    Summary: There are exciting opportunities to collaborate across large observational studies and to forge new interdisciplinary collaborative ventures. However, we must never abandon the rigors of epidemiologic study design, pristine data collection and meticulous data analysis. In parallel, we need to ponder how academia rewards such team science.

    References
    Chung CC, Chanock SJ. Current status of genome-wide association studies in cancer. Hum Genet. 2011 Jul;130(1):59-78.
    Khoury MJ, Gwinn M, Yoon PW, Dowling N, Moore CA, Bradley L. The continuum of translation research in genomic medicine: how can we accelerate the appropriate integration of human genome discoveries into health care and disease prevention? Genet Med 2007;9(10):665-74.
    Spitz MR, Caporaso NE, Sellers TA. Integrative Epidemiology. Next Generation. Cancer Discov (in press).
    Sellers TA, Caporaso N, Lapidus S, Petersen GM, Trent J. Opportunities and barriers in the age of team science. Cancer, Causes & Control 2006 Apr; 17(3):229-37.
    Hursting SD, Dunlap SM. Obesity, metabolic dysregulation, and cancer: a growing concern and an inflammatory (and microenvironmental) issue. Ann N Y Acad Sci. 2012 Oct;1271:82-7.

  10. Ping Yang

    Epidemiology at a crossroad with more than four directions to go

    Epidemiology has evolved tremendously from its traditional scope and has developed close bonds with other scientific disciplines, especially biology, genetics, biostatistics, the social and behavioral sciences, engineering, bioinformatics, and clinical medicine. Each of these disciplines is focused on better understanding disease processes and causes, assessing harmful or beneficial exposures, making efficient and accurate use of the data, and informing policy and practice decisions in defined populations.

    Regardless of whether scientific advances in these fields occur gradually (e.g., biostatistics) or rather rapidly (e.g., bioinformatics), noticeable gaps have been created as epidemiology adapts to a new era while maintaining its identity as a discipline. Therefore, new challenges emerge frequently and demand new analytic tools and non-traditional study designs to reconcile seemingly incompatible principles or hypotheses.

    Multiple layers and dimensions of unharmonized development across the aforementioned special fields have led to the following eight challenges:

    In genomic epidemiology, p-values versus effect sizes, reflecting quantity versus quality, have sometimes been contradictory. GWAS – not limited to SNP-based results – have revolutionized the way we discover disease-associated common genomic variations, but they have also generated a huge number of “hits” with trivial effects despite magnificent p-values. This is particularly relevant for large-scale meta- and pooled analyses. New, meaningful criteria and thresholds are needed, requiring a timely expert workshop to develop practical recommendations.

    In clinical epidemiology, scientists face the dilemma of choosing an observational (natural) patient cohort versus randomized clinical trials to evaluate intervention effects. In order to balance the need for timely findings, population representativeness, and available funding, different “juries” should be called to judge the value of close clinical monitoring of new drugs or invasive regimens versus non-medication based interventions. Comparative effectiveness research should be applicable to both stringent clinical trials and adequately followed observational patient cohort studies, depending on the nature of the disease and the intervention, e.g., incurable and deadly diseases under new drugs versus improving quality of life using psychosocial and behavioral interventions.

    Etiology- and outcome-focused studies have inherent differences in study design and methodology, although both fit within the realm of “big epidemiology facing big problems.” Epidemiologists often find themselves in a double quandary: to be or not to be included in various consortia of cohorts, and to do or not to do both etiology and outcome studies at the same time. The difficulty arises from concerns about inadequate funding to maintain high-quality cohorts, the different set-ups and expertise required for disease risk assessment versus disease outcome cohorts, and inequality among cohorts in terms of comparability and combinability. Standards and harmonization models should be established to enable “big epidemiology to resolve big problems.”

    Biospecimen banks and registries with accurate, up-to-date annotation of the banked specimens do not often co-exist. Inconsistent rigor and disproportionate funding allocation between the two persist, mostly driven by the prejudice against detailed yet tedious data collection. Although electronic information records hold promise for rapid data loading, the dynamics of changing conditions and locations of study participants would require either continued data updates or high-cost database linking from multiple sources.

    Test-measured surrogates and self-reported patient outcomes should be studied simultaneously in clinical outcomes research, which requires improved study designs and analytic methods.

    Advice from biostatisticians and bioinformaticians to epidemiologists is invaluable, but their approaches can be hard to apply in chorus, especially between exploratory data mining and rigorous statistical tests with extraordinarily large numbers of data points for finite sample sizes. Issues of study design (e.g., sources and definitions of samples) and potential confounders are often paid little attention.

    Regarding sample sizes and published results in the era of whole-genome sequencing (be it DNA-, exome-, RNA-, epigenetic-, or mitochondrial-level sequencing), misalignment is no longer a technical term only for “omics” data interrogation. It happens more often than expected in biased reporting of results, i.e., questionable accuracy and repeatability, interpretations, extrapolations, and conclusions, all of which can mislead real-world application.

    Concerns about ethical issues and the representativeness of study samples can be on discordant paths, both driven by legitimate justifications and often unresolvable.

    In summary, epidemiology in the 21st century can be stimulated and strengthened by collaborating with multiple scientific disciplines.

  11. Timothy Rebbeck

    Multilevel Epidemiology for the 21st Century

    Epidemiology in the 20th century has been very successful in identifying factors that explain cancer risk and outcomes. Epidemiological studies have elucidated factors and processes that have improved the health of Americans. This research has been translated into cancer prevention strategies (e.g., smoking cessation and sun avoidance); has informed cancer biology (e.g., identification of cancer susceptibility genes and biomarkers); and has improved cancer clinical management (e.g., identification of cancer subtypes). Equally important have been the development and application of methods that allow for meaningful inferences and the translation of research into relevant populations. Epidemiology has led the development of large, multidisciplinary research consortia that have brought together scientists of diverse backgrounds to address critical cancer questions and provide the large sample sizes needed to address complex cancer problems.

    With these major achievements have come new challenges. Some of the more recently identified risk factors, including susceptibility loci, confer effects of very small magnitude on cancer risk or outcomes. Conceptual and statistical models have by necessity become highly complex to accommodate the many factors thought to be associated with cancer risk or outcome. Because of these and other challenges, the successful translation of some recently generated knowledge into improved clinical or public health strategies is still being realized.

    Despite the successes of 20th century epidemiology, the proportion of risk explained is incomplete and the prediction of clinical outcomes remains imperfect. As we continue to evolve thinking about the current array of complex models, interactions among multiple risk factors, population subsets, and tumor subtypes, the next generation of epidemiological research needs to explore new directions.

    What areas have yet to be exploited that may explain the “missing etiology” of cancer? First, novel classes of risk factors should be considered that capture cancer-associated exposures better (or at least differently) than those that have been traditionally studied. Among these factors, “macro-environmental” variables measured at the neighborhood or community level may be surrogates of socio-economic, environmental, demographic, or health care access factors that are not adequately captured by more traditional risk factors measured on the individual level. Second, these novel factors should be considered alongside traditional risk factors in integrative conceptual and statistical models. For example, macro-environmental factors may have meaning on their own, and have been studied in the field of social epidemiology and other disciplines. However, approaches that integrate multiple levels of risk factors provide a new opportunity to understand cancer risk and outcomes. For example, two women with breast cancer may have very different clinical outcomes depending on biological factors (e.g., tumor marker profiles), individual-level factors (e.g., individual educational attainment), and macro-environmental factors (e.g., the communities in which they live). Since combinations of these factors may have an impact on risk or outcome, an integrative, multilevel approach is needed to generate insights that have clinical or public health import. Epidemiology is the fundamental science upon which molecular, genetic, environmental, and social epidemiological investigations are built, and around which a “Multilevel Epidemiology” framework can be developed. Multilevel epidemiology represents a potentially important expansion of epidemiological thinking in the 21st century.

  12. Robert Hoover

    In the early years of systematic cancer epidemiology, high-impact studies were generally relatively small and simple, and were conducted by small study teams with the principal investigator personally responsible for doing or closely directing all aspects of the study. Over the most recent decade, high-impact epidemiology has frequently come from very large, complex studies conducted by large interdisciplinary teams, where the level of specialized knowledge needed required each component to be supervised by a different team member.

    In hindsight, the reasons for these changes in approach, from those of a cottage industry to those of big science, are fairly clear. For one, in the early days of classical epidemiology the focus was on identifying large risks associated with obvious and easy-to-assess exposures, with an interest only in main effects. More recently, the focus has shifted to relatively low-level risks and difficult-to-measure exposures, and to effect modification as well as main effects. The second reason is the introduction of molecular science into epidemiologic studies and the remarkable recent advances in molecular science and technologies. This has enabled us to overcome some of the weaknesses in classical approaches to measuring exposures and outcomes, assess susceptibility, and gain biologic insights into carcinogenic mechanisms in humans. It has also afforded us the opportunity to assess large numbers of biomarkers simultaneously.

    That this transition over time was neither systematic nor expeditious is well illustrated by two examples, one from classical and the other from molecular epidemiology. Following the first study to suggest that menopausal hormone therapy might cause breast cancer, we endured two decades of a plethora of underpowered studies providing a vast array of conflicting data that precluded any public health conclusions. A pooled analysis involving over 50,000 breast cancer cases, conducted in 1997, allowed the disentangling of correlated variables and the discovery of a strong interaction with adiposity, and firmly established the likely causal relationship. Within molecular epidemiology, genetic epidemiologists in the early 1990s began to assess the role of individual genetic variants in “candidate genes” for cancer susceptibility. The result was a “lost decade” of assessments of thousands of variants in tens of thousands of studies whose “hits” were almost universally not replicable. With the completion of the Human Genome Project and the development of high-density “SNP chips”, very large case-control studies were launched to agnostically allow the genome to inform us which variants were important. The result is that within 6 years we have moved from 6 established variants associated with cancers to over 300.

    These historic patterns hold distinct lessons for the future. First, we are not as smart as we think, and we will need to rely less on a priori hypotheses and more on listening to the data. In addition, remarkable opportunities are increasingly emerging from new science and technologies, and we need to work closely with our laboratory colleagues to bring these into prime-time shape for epidemiology. And, perhaps most importantly, many critical questions in biology and public health can only be addressed by aggregating large amounts of high-quality epidemiologic data.

  13. David Hunter

    The last five years have seen several thousand genetic variants newly associated with a wide variety of diseases and traits through the magic of GWAS.(1) Most of these variants are “weak” risk factors (relative risks of 1.05-1.50 per allele). The speed and robustness of these discoveries, made at very low relative risks previously thought to be undetectable due to selection, information, and confounding biases, contrasts with the previous decades of observational epidemiology research in non-communicable diseases, in which progress has been much slower. “New” discoveries in chronic disease epidemiology are rarely accepted as valid on first publication; indeed, Ioannidis has proposed that “Most Published Research Findings Are False”.(2) The single most important difference between GWAS and past approaches in epidemiology has been the large sample sizes needed to drive P values to genome-wide significance (now customarily P < 5×10⁻⁸), resulting in an unprecedented drive to form consortia to share data. Unlike past approaches to meta-analysis, most of the data contributed have not been subject to study-specific publication but are directly pooled into the large discovery datasets. This “Flight to Quantity”(3) has led to relaxed standards of methodologic rigor in component studies, without resulting in a flood of false positives. The false positives in the field were generated by underpowered studies, some of them of high methodologic rigor, that failed to acknowledge the multiple comparisons implicit in the approach. Although GWAS are somewhat privileged because the accuracy of exposure measurement (i.e., the genotyping) is very high, selection bias is less likely, and confounding can be controlled,(3) they still show us a better paradigm for success in establishing associations in epidemiology. Rather than publishing underpowered small studies and then arguing about whose study is right, we should attempt to establish structures in which as much as possible of the prevalent data are analyzed ab initio.(4) This would require substantial changes in funding policies and academic promotion norms. (A back-of-envelope illustration of the sample-size implications of the genome-wide threshold follows the references below.)

    1. Hindorff LA, Junkins HA, Hall PN, et al. A Catalog of Published Genome-Wide Association Studies. http://www.genome.gov/gwastudies
    2. Ioannidis, JP. Why most published research findings are false. PLoS Med. 2005 Aug; 2(8):e124. Epub 2005 Aug 30.
    3. Hunter DJ. Lessons for epidemiology from genome-wide association studies. Epidemiology. 2012 May;23(3):363-7.
    4. Thun MJ, Hoover RN, Hunter DJ. Bigger, better, sooner–scaling up for success. Cancer Epidemiol Biomarkers Prev. 2012 Apr;21(4):571-5. Epub 2012 Feb 28.
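
    As a rough, back-of-envelope illustration of the sample-size point above (illustrative numbers only, not from the comment): the genome-wide cutoff is approximately a Bonferroni correction for roughly one million independent common-variant tests, and under the usual normal approximation the required sample size for a fixed effect size is proportional to (z_{1-alpha/2} + z_{power}) squared.

        # Back-of-envelope sketch: the Bonferroni origin of the 5e-8 threshold and the
        # approximate sample-size inflation it implies relative to a nominal alpha of 0.05.
        from scipy.stats import norm

        def n_ratio(alpha_strict=5e-8, alpha_nominal=0.05, power=0.80):
            """Ratio of required sample sizes at the two alpha levels, holding the
            effect size fixed, using N proportional to (z_{1-alpha/2} + z_{power})^2."""
            z_power = norm.ppf(power)
            z_strict = norm.ppf(1 - alpha_strict / 2)
            z_nominal = norm.ppf(1 - alpha_nominal / 2)
            return ((z_strict + z_power) / (z_nominal + z_power)) ** 2

        print(f"Bonferroni threshold for ~1e6 independent tests: {0.05 / 1e6:.0e}")      # 5e-08
        print(f"Approximate sample-size multiplier vs. alpha = 0.05: {n_ratio():.1f}x")  # ~5x

    Combined with per-allele relative risks of only 1.05-1.50, that multiplier is one concrete reason the consortium-scale pooling described above became necessary.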

  14. Susan Pinney

    Future direction – to develop methods to integrate and interpret information from various “omics” studies on the same individuals. These types of studies should yield valuable information, but the challenge is first to develop good methods for interpreting the integrated information through the use of bioinformatics. (Integrating the data is not the challenge – well-developed methods exist to do that.) Otherwise, without well-developed and validated methods, these approaches may end up with a “bad rap,” as GWAS studies did.
