Frequently Asked Questions about HINTS

General questions

Questions about mode

Questions about trends

General questions

What is HINTS?

The National Cancer Institute sponsors the Health Information National Trends Survey (HINTS), which is a biennial, cross-sectional survey of a nationally representative sample of American adults that aims to assess information support needs in the population. HINTS examines how people access and use health and cancer information from multiple sources, how people use technology to manage health and health information, and how exposure to, use of, and perceptions about health information influence health behaviors.

How is HINTS different from other surveys?

HINTS is the only national surveillance vehicle devoted to monitoring changes in the health communication environment and assessing the impact of communication on key processes affecting health. Compared to other population-level health surveys, HINTS is unique in its emphasis on cancer, health communication, and health information technology. More information about what makes HINTS different from other national surveys is available on the HINTS website.

How many times has HINTS been fielded?

There have been 16 iterations of HINTS to date:

  • HINTS 1 (2003)
  • HINTS 2 (2005)
  • HINTS 3 (2008)
  • HINTS Puerto Rico (2009)
  • HINTS 4 Cycle 1 (2011)
  • HINTS 4 Cycle 2 (2012)
  • HINTS 4 Cycle 3 (2013)
  • HINTS 4 Cycle 4 (2014)
  • HINTS-FDA (2015)
  • HINTS-FDA Cycle 2 (2017)
  • HINTS 5 Cycle 1 (2017)
  • HINTS 5 Cycle 2 (2018)
  • HINTS 5 Cycle 3 (2019)
  • HINTS 5 Cycle 4 (2020)
  • HINTS-SEER (2021)
  • HINTS 6 (2022)

Do I need permission to use the data?

Most HINTS datasets are available for public use and can be downloaded from the HINTS website. However, some HINTS data are considered restricted-use and are only available by request. To request access to the restricted-use datasets that contain geocodes and other restricted variables, please submit the request form available on the HINTS website.

How are the survey instruments created?

Some items in HINTS are borrowed from existing national surveys (e.g., CDC's Behavioral Risk Factor Surveillance System), some come from smaller health-related surveys, and some are developed by staff at the National Cancer Institute and experts in the field. In all cases, items are carefully tested through cognitive interviewing before the surveys are fielded to ensure that the items are psychometrically sound. Please see the HINTS Instrument Development and Validation Procedures page for more information.

What are the response rates for HINTS?

Health Information National Trends Survey (HINTS) 2003-2022

Administration (Year)     | Data Collection Period | Mode                               | Sample Size (N) and Response Rate (RR)
HINTS 1 (2003)            | Oct 2002-Apr 2003      | Random digit dial                  | N=6369, RR=33%
HINTS 2 (2005)            | Feb 2005-Aug 2005      | Random digit dial (+ online pilot) | N=5586, RR=21%
HINTS 3 (2008)            | Jan 2008-Apr 2008      | Postal and random digit dial       | Mail: N=3582, RR=30.9%; RDD: N=4092, RR=24.2%
HINTS Puerto Rico (2009)  | Apr 2009-June 2009     | Random digit dial                  | N=639, RR=76%
HINTS 4 Cycle 1 (2011)    | Oct 2011-Feb 2012      | Postal                             | N=3565, RR=37.91%
HINTS 4 Cycle 2 (2012)    | Oct 2012-Jan 2013      | Postal                             | N=3630, RR=39.97%
HINTS 4 Cycle 3 (2013)    | Sept 2013-Nov 2013     | Postal                             | N=3185, RR=35.19%
HINTS 4 Cycle 4 (2014)    | Aug 2014-Nov 2014      | Postal                             | N=3677, RR=34.44%
HINTS-FDA (2015)          | May 2015-Sept 2015     | Postal                             | N=3738, RR=33.04%
HINTS-FDA Cycle 2 (2017)  | Jan 2017-May 2017      | Postal                             | N=1736, RR=34.05%
HINTS 5 Cycle 1 (2017)    | Jan 2017-May 2017      | Postal                             | N=3285, RR=32.4%
HINTS 5 Cycle 2 (2018)    | Jan 2018-May 2018      | Postal                             | N=3527, RR=32.85%
HINTS 5 Cycle 3 (2019)    | Jan 2019-May 2019      | Postal (+ web pilot)               | Mail: N=4573, RR=30.2%; Web: N=865, RR=30.6%; Overall: N=5438, RR=30.3%
HINTS 5 Cycle 4 (2020)    | Feb 2020-June 2020     | Postal                             | N=3865, RR=37%
HINTS-SEER (2021)*        | Jan 2021-Aug 2021      | Postal                             | N=1234, RR=12.6% (overall)
HINTS 6 (2022)            | Mar 2022-Nov 2022      | Postal and push to web             | N=6252, RR=28.07% (overall)

*Note: The sample for HINTS-SEER (2021) was drawn from 3 SEER registries.

What geographic units of analysis are available?

Weighted HINTS data provide nationally representative estimates, and the data sets include variables that allow researchers to compare rural versus urban metropolitan statistical areas (MSAs), as well as the four census regions and nine census divisions. There are typically not enough respondents from individual states to produce state-level estimates, but in certain cases data can be pooled across multiple iterations of HINTS to yield sufficient sample sizes for statistically appropriate state-based investigations (keeping in mind that HINTS does not sample at the state level). Restricted HINTS datasets containing state, county, and ZIP code information are also available; investigators interested in obtaining these geocoded data can submit a request form.

Can I use HINTS items in my own survey or research?

Yes. We encourage researchers to standardize measures and to utilize HINTS items for their own independent primary data collection efforts. However, the full HINTS instruments should not be copied or duplicated for studies that are not led by NCI.

What journals would you recommend for HINTS studies?

A full list of peer-reviewed journals that have published HINTS studies is available on the HINTS website.

Where can I find information on sampling procedures?

Each iteration of the survey has an associated methods report that is available on the HINTS website. These reports describe the sampling procedures in detail. Briefly, HINTS uses stratified address-based sampling with random selection of an adult within each household. See the HINTS Methods Reports for specific information about the sampling procedures used for each survey.

What are the limitations of the data?

Because HINTS is a cross-sectional survey, it is not possible to infer causal relationships between constructs or items in the survey. Additionally, while researchers can examine trends over time at the national level for outcomes included in multiple iterations of the survey, each iteration draws a new random sample from the same population rather than following the same individuals, so it is not possible to assess change over time at the individual level.

The response rate for some HINTS surveys is below 35%. Does that suggest the data are biased?

To compensate for non-response and coverage error, selection weights are calibrated using data from the American Community Survey conducted by the U.S. Census Bureau. HINTS non-response has historically been correlated with being male, young, belonging to a minority group, having less education, and being Hispanic. The calibration therefore uses age, gender, educational attainment, race, ethnicity, and Census region to adjust for this pattern. An analysis of earlier rounds of HINTS found that non-response is also related to health care access and health status: those with less access to health care services and those with fewer health problems were less likely to respond to the survey. To compensate for these patterns, insurance status and cancer status are used as additional calibration adjustments. The data for these adjustments were taken from the National Health Interview Survey.

What procedures are in place to increase the response rate?

To maximize the response rate and the representativeness of the sample, HINTS includes multiple non-response follow-up mailings, a pre-paid incentive at the first mailing, and express delivery as one of the non-response follow-up mailings. A Spanish version of the questionnaire is distributed to households in high minority census tracts. Further, each iteration of HINTS includes embedded experiments to test strategies to increase response rates and improve data quality. These experiments have included adding a “push-to-web” option to allow sampled respondents to complete the survey online, and “promised incentives,” which provide respondents with an additional incentive after they have completed the survey.

Can other groups conduct international or community versions of HINTS?

Researchers interested in conducting a HINTS-like health communication survey in their community or country are encouraged to use HINTS items from our publicly available survey instruments for their data collection efforts. The NCI HINTS program supports any opportunity to standardize measures and promote data harmonization for health communication research.

Members of the NCI HINTS management team are available to provide limited consultation for these types of data collection efforts. After reviewing our publicly available materials (including the methodology report, instrument, and data from the most recent HINTS cycle), please contact us directly, and someone from the NCI HINTS management team will get back to you about providing consultation on your study.

Due to limited resources and competing priorities, any further consultation and collaboration will be decided on a case-by-case basis after an initial meeting with you and your research team. Because your health communication survey is not initiated, led, monitored, or funded by NCI, neither the HINTS name (in full or as an acronym) nor the HINTS logo should be used for these unaffiliated projects. Moreover, while you are welcome to use HINTS items, the complete NCI HINTS instruments should not be duplicated for external use.

How do I contact the program if I have additional questions?

For information or questions about the HINTS program, please use our contact form or email us.

Questions about mode

What modes have been used to collect HINTS data?

HINTS 1 (2003) and HINTS 2 (2005) were administered by landline telephone using a random-digit-dial (RDD) sampling frame.

HINTS 3 (2008) was administered using two different modes: 1) by landline telephone, with an interviewer reading the questions, and 2) by mail, with a self-administered paper questionnaire. The telephone sample was drawn from an RDD frame, which involves randomly generating telephone numbers within the exchanges used for landline telephones. The sample for the mail survey was based on a list of all addresses to which the United States Postal Service delivers residential mail.

Both a mail and a telephone survey were implemented in 2007-2008 to allow HINTS to bridge between survey administrations. Since certain estimates may differ depending on the survey mode, it can be difficult to compare the mail results in 2008 to the telephone results from prior years. Including both an address frame and an RDD frame enables trend analyses while keeping the mode constant. See the HINTS methodology reports for more information regarding merging HINTS surveys with multiple modes.

All iterations of HINTS following HINTS 3 were conducted using a self-administered mail questionnaire, with HINTS 5 Cycle 3 and HINTS 6 including a push-to-web option for surveys to be completed online. All future iterations of HINTS will be dual mode: postal and push-to-web.

What are mode effects?

Mode effects are differences in results associated with the mode used to administer the survey. HINTS has been administered as a self-administered mailed paper survey, a telephone-based interview survey, and a web-based survey. Research has shown that when asking certain types of questions, results will differ depending on the mode. For example, self-administered surveys have been shown to yield more reports of socially sensitive information when compared to interviewer-administered surveys.

Do I need to consider mode effects if I'm looking at trends across HINTS years?

Mode effects need to be considered when combining data within an iteration that used multiple modes (e.g., HINTS 3, which used telephone and postal modes, or HINTS 5 Cycle 3, which used self-administered paper and web surveys) and when conducting trend analyses on data collected with different modes (e.g., HINTS 1, which was collected by telephone, and HINTS 4, which was collected through self-administered mailed surveys). HINTS datasets typically contain a variable that specifies the mode used by each respondent, which analysts can use to test for mode effects. If there is a significant mode effect, you may: 1) use only one mode (and the respective weights) in your analysis; or 2) control for mode in the analyses.

How can I know if I need to be concerned about mode effects?

Compare the estimates from the address-based sample to the estimates from the RDD sample. To do this, use the address-sample and RDD-sample weights to produce the two estimates. You can then conduct a formal test to see if the estimates are statistically different. Note that statistical significance is not particularly meaningful for samples as large as the ones used in HINTS: relatively small differences may be statistically significant but not substantively meaningful. See the Overview and Analytic Recommendations documents included with the data download for more information and example code for comparing modes. If there are no mode effects, then it is appropriate to combine the multiple samples into a single analysis.
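As a rough sketch, the comparison described above can be carried out with replicate weights. Everything below is illustrative: the data are simulated, the weight columns are invented stand-ins for the final and replicate weight variables shipped with the HINTS files, and the delete-one jackknife variance multiplier (R-1)/R is an assumption to be checked against the analytic recommendations document for your cycle.

```python
# Illustrative sketch: testing for a mode effect by comparing weighted
# estimates from two samples (e.g., mail vs. RDD). Data and weights are
# simulated; a real analysis would use the final and replicate weight
# columns provided in the HINTS data files.
import random

random.seed(0)

def simulate(n, p, n_reps=50):
    """Simulate respondents: a binary outcome, a final weight, and
    n_reps replicate weights (here just perturbed final weights)."""
    vals = [1 if random.random() < p else 0 for _ in range(n)]
    final = [random.uniform(0.5, 2.0) for _ in range(n)]
    reps = [[w * random.uniform(0.9, 1.1) for w in final]
            for _ in range(n_reps)]
    return vals, final, reps

def wmean(vals, weights):
    """Weighted mean of vals."""
    return sum(v * w for v, w in zip(vals, weights)) / sum(weights)

def jackknife(vals, final, reps):
    """Weighted estimate and its delete-one jackknife standard error,
    using an assumed (R-1)/R variance multiplier."""
    theta = wmean(vals, final)
    R = len(reps)
    var = (R - 1) / R * sum((wmean(vals, r) - theta) ** 2 for r in reps)
    return theta, var ** 0.5

mail_p, mail_se = jackknife(*simulate(3500, 0.42))
rdd_p, rdd_se = jackknife(*simulate(4000, 0.38))

# Treat the two samples as independent and form a z statistic
# for the difference between the mode-specific estimates.
z = (mail_p - rdd_p) / (mail_se ** 2 + rdd_se ** 2) ** 0.5
print(f"mail={mail_p:.3f}  rdd={rdd_p:.3f}  z={z:.2f}")
```

As the text notes, a large z value alone is not decisive with samples this size; the magnitude of the difference matters as much as its significance.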

How do I address mode effects in my analyses?

When relevant, information and guidance on how to address mode effects can be found in the analytic recommendations document included in each HINTS data package. Data packages can be downloaded from the HINTS website.

What should I do if some of the items in my analysis have mode effects and some do not?

The main advantage of using the combined sample is increased statistical precision from the larger sample size. Where possible, report results for items not affected by mode using the combined sample. For items that are affected by mode, either use the "preferred" mode or, if there is no preferred mode, report results for each mode separately. If the increased precision is not essential to the analysis, consider reporting all of the results in the preferred mode, or for each mode separately if there is no preferred mode.

If there are no mode effects for an item, can I combine data collected by both modes?

Yes. Use the combined weights for this analysis.

Questions about trends

If HINTS data are cross-sectional, how can I look at trends over time?

HINTS is a series of repeated cross-sectional surveys drawn from the same population. By comparing measures across survey years, one can examine change over time. For example, using HINTS data it is possible to see whether the proportion of adults in the United States who have looked for information about cancer has changed since HINTS was first fielded in 2003. This is a standard methodology applied to virtually all social and economic surveys that examine change over time. See the Overview and Analytic Recommendations document included with each HINTS data download, as well as the reports on the HINTS website, for more information on conducting trend analyses (including example code).

How can I tell if an item is appropriate to examine in trend analyses?

You should examine the question wording, the response options, and the denominator (i.e., who was asked the item) for each year. If these aspects have not changed substantially (or can be made comparable), then it is appropriate to examine trends for that item.

If the wording and denominator have not changed, can I combine data across HINTS years to increase sample size?

Combining data across years increases statistical power by increasing the sample size. This can be especially useful if you want to focus on particular population groups for which a single survey administration does not provide a sufficiently large sample. Before combining data across years, it is important to first determine that the wording of the items of interest has not changed between years and that the denominator is consistent (i.e., there were no changes in skip patterns that would change who sees the question, such as a change from all respondents being presented with the item to only those who report seeking health information). To generate estimates and standard errors when combining multiple iterations of data, it is necessary to combine the final sample weights and create a new set of replicate weights that permits the statistical program to compute the correct standard errors. The procedure is described in Rizzo et al. (2008), Chapter 4 (combining data files). HINTS also provides a data merging tool to assist with creating code for trend analyses.
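One way the replicate-weight merge described above can work, assuming two files that each carry a final weight and a set of jackknife replicate weights, is to pair each file's replicate columns with the other file's final weights, so the merged file has the two replicate sets stacked side by side. This toy sketch is a simplification for illustration, not the official procedure; consult Rizzo et al. (2008) or use the HINTS data merging tool for real analyses.

```python
# Toy sketch of merging replicate weights from two survey iterations.
# Each sample is represented as a list of final weights plus a list of
# replicate-weight columns (one list of per-respondent weights each).

def merge_replicates(final_a, reps_a, final_b, reps_b):
    """Stack two samples and build a combined set of replicate weights.

    For replicates taken from sample A, sample B's respondents keep
    their final weight, and vice versa, so the merged file carries
    len(reps_a) + len(reps_b) replicate weight columns.
    """
    final = final_a + final_b
    merged_reps = ([ra + final_b for ra in reps_a] +
                   [final_a + rb for rb in reps_b])
    return final, merged_reps

# Toy example: 3 respondents in iteration A, 2 in iteration B,
# with 2 replicate weight columns each.
final_a, reps_a = [1.0, 2.0, 1.5], [[0.0, 3.0, 1.5], [1.5, 0.0, 3.0]]
final_b, reps_b = [2.0, 1.0], [[0.0, 3.0], [3.0, 0.0]]
final, reps = merge_replicates(final_a, reps_a, final_b, reps_b)
print(len(final), len(reps))  # 5 respondents, 4 replicate weight columns
```

Each merged replicate column then feeds the same jackknife variance formula used for a single iteration, which is why the statistical software can produce correct standard errors from the combined file.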

Do I need to consider mode effects if I'm looking at trends across HINTS years?

Yes, you need to consider mode effects for trend analyses. If there is a mode effect, you should use only one mode in your analysis to keep mode consistent across the different HINTS iterations.