Posters - Detailed Abstracts
For a quick overview of accepted poster presentations, see: Posters – At a Glance
- POSTER SESSION I: Tuesday, August 25, 2015 – 8:00AM-12:30PM
- POSTER SESSION II: Tuesday, August 25, 2015 – 1:00PM-5:30PM
- POSTER SESSION III: Wednesday, August 26, 2015 – 8:00AM-1:00PM
POSTER SESSION I: Tuesday, August 25, 2015 – 8:00AM-12:30PM
1. Treatment Patterns Among Breast Cancer Patients Across Hormone and HER2 Receptor Subtypes: An Analysis of SEER Data
Matthew Alcusky, PharmD, MS; David Delgado, PhD
BACKGROUND: Guideline-based surgery and radiation treatment for local-regional breast cancer (BC) is associated with improved survival and reduced likelihood of recurrence. Studies have reported deviation from guideline-based care. Variation in treatment patterns across joint hormone receptor (HR)/human epidermal growth factor receptor 2 (HER2) subtypes and race/ethnicity requires further evaluation.
OBJECTIVE: To estimate prevalence rates of guideline-recommended local-regional BC treatment and assess the relationship between demographic and clinical-pathological characteristics with the likelihood of receiving guideline-based care.
METHODS: Women in the 2014 Surveillance, Epidemiology, and End Results data with primary BC diagnosed in 2010-2011, stages 1-3a, and malignant behavior-type were included. A multivariate logistic regression model examined factors associated with receiving primary surgery and radiation treatment based on the National Comprehensive Cancer Network 2010-2011 Invasive BC Guidelines. The model included age, year of diagnosis, marital status, race/ethnicity, SEER-registry, insurance, grade, stage, histology, and joint receptor subtype.
RESULTS: Among 61,582 BC patients, 53,099 (86.2%) received guideline-based primary treatment. 1,266 (2.1%) patients did not have surgery. A total of 1,504 (64.8%) of 2,321 patients with advanced BC who underwent mastectomy received guideline-recommended post-mastectomy radiation and 21,958 (80.7%) of 27,215 patients younger than 70 who underwent breast-conserving surgery received recommended post-surgical radiation.
Compared with HR+/HER2- patients, those with triple-negative (adjusted OR 0.69; 95% CI 0.64-0.74), HR+/HER2+ (0.77; 0.72-0.84), and HR-/HER2+ (0.70; 0.62-0.78) subtypes had decreased adjusted odds of receiving guideline-based treatment. Black non-Hispanic race/ethnicity (0.87; 0.81-0.94), unmarried status (0.83; 0.79-0.87), being uninsured (0.65; 0.56-0.77), stage 2 (0.61; 0.58-0.65), stage 3 (0.30; 0.28-0.33), and ductal histology (0.81; 0.74-0.89) were associated with a reduced likelihood of receiving guideline-based treatment.
CONCLUSION: The proportion of women receiving recommended post-mastectomy radiation remains suboptimal, and the rate of radiation after breast-conserving surgery was below the current quality measure target of 90%. The decreased adjusted odds of receiving guideline-based treatment for HER2+ and triple-negative BC may contribute to worse outcomes for subtypes other than HR+/HER2-. Observed differences in treatment by clinical presentation and race/ethnicity require further research.
2. Continued Decrease in Prostate Cancer Testing Following the Preventive Services Task Force Recommendations
Jun Li, MD, PhD; Zahava Berkowitz, MSPH, MSc; Ingrid J. Hall, PhD, MPH
Background: In 2012, the US Preventive Services Task Force (USPSTF) expanded its 2008 recommendation against prostate-specific antigen (PSA) testing among men aged ≥75 years to include men of all ages.
Objective: To assess possible changes of PSA testing rates after the release of recent USPSTF recommendations.
Methods: Using the 2005-2013 National Health Interview Survey (NHIS) data, we calculated the prevalence of PSA testing for each survey year among men aged ≥40 years, by age group and by race (age-adjusted). Differences between years were assessed with linear contrasts after combining data from all survey years.
Results: The overall prevalence of PSA testing was highest in 2008 (31.8%) and decreased significantly by 2013 (24.2%). Men aged ≥75 years had the highest testing prevalence over the entire period. Compared with 2008, a significant reduction in the proportion of PSA testing was observed in 2013 for each age group, especially for men aged ≥75 years (−14.0 percentage points; p < 0.001). Compared with 2010, a significant reduction occurred only among men aged ≥50 years in 2013. For white and black men, PSA testing rates were highest in 2008 (33.6% and 31.1%, respectively) and decreased significantly by 2013 (24.7% and 23.4%, respectively). Only white men had a significant decrease in PSA testing between 2010 and 2013.
Conclusions: Significant declines in PSA testing from 2008 to 2013 among men aged ≥75 years may reflect the impact of the 2008 USPSTF recommendations. While the cause of the decreases in PSA testing between 2010 and 2013 among men aged 50-74 years and white men is unknown, the decreases may suggest the early impact of the 2012 recommendations.
3. Racial/Ethnic Disparities in 6-Month Survival Among Patients with Locoregional Pancreatic Cancer: A Trend Analysis Of SEER from 1990 to 2009
Vegesna A, Delgado D
BACKGROUND: Patients diagnosed with pancreatic cancer at the local or regional stage are candidates for pancreatic resection, and resected patients experience better overall survival. Evidence suggests that racial disparities in pancreatic cancer survival may persist despite treatment, especially during the initial year post-diagnosis.
OBJECTIVE: To examine whether racial/ethnic differences in 6-month survival among patients with locoregional pancreatic cancer have changed over 20 years.
METHODS: The 2014 Surveillance, Epidemiology, and End Results data from 1990 to 2010 were used for this analysis. Multivariate Cox proportional hazards models examined disparities in 6-month survival for four separate time periods: 1990-1994, 1995-1999, 2000-2004, and 2005-2009. An overall model assessed which later time periods predicted improved survival compared to 1990-1994. Models were adjusted for age, resection status, sex, stage, SEER registry, and marital status.
RESULTS: A total of 26,077 cases were included in the final analysis. A larger percentage of non-Hispanic blacks and Hispanics were diagnosed younger than 65 years of age compared to non-Hispanic whites (44.4%, 42.1%, and 34.1%, respectively; p < 0.0001). In the overall model, being diagnosed from 2005-2009 was associated with a 17.3% decreased risk of 6-month mortality compared to patients diagnosed between 1990 and 1994 (HR, 0.83; 95% CI, 0.77-0.89). Non-Hispanic blacks had an increased risk of 6-month mortality compared to non-Hispanic whites from 1990-1994 (HR, 1.24; 95% CI, 1.05-1.47) and 1995-1999 (HR, 1.20; 95% CI, 1.03-1.41); this difference became non-significant in the two latest time periods. While Hispanics had an increased risk of 6-month mortality compared to non-Hispanic whites from 1995-1999 (HR, 1.21; 95% CI, 1.00-1.47), disparities in survival were non-significant in all other time periods.
CONCLUSION: The survival difference between non-Hispanic blacks and non-Hispanic whites appears to be closing over time. Continual improvement in pancreatic resection rates for racial-ethnic minorities who have locoregional disease remains critical.
4. Cardiovascular Risk Assessment Using WHO/ISH Risk Prediction Charts in a Rural Area of North India
Anurag Chaudhary, Priya Bansal, Mahesh Satija, Vikram Gupta, Sangeeta Girdhar, Sarit Sharma, Urvashi Kataria
Background: Low and middle income countries are experiencing demographic and epidemiological transition and are becoming increasingly vulnerable to the impact of cardiovascular diseases (CVD), a leading cause of deaths in these countries. The predicted risk of an individual can be a useful guide for making clinical decision on the intensity of preventive interventions.
Objectives: To assess the cardiovascular risk among adults aged ≥40 years, utilizing the WHO/ISH risk charts in a rural population of Ludhiana, Punjab, India.
Methods: A cross-sectional study was carried out at the Rural Health Centre under the Department of Community Medicine, Dayanand Medical College & Hospital, Ludhiana, India. Information was obtained from adults aged ≥40 years attending a health checkup camp, using a pretested questionnaire. Anthropometric measurements and laboratory investigations were completed for 133 participants. WHO/ISH risk prediction charts for the South-East Asian region were used to assess cardiovascular risk among the study participants.
Results: The risk of CVD was assessed to be ≥10% in 44.4% of subjects. When analyzed by age, one-third of subjects aged 50-59 years, 82.2% of subjects aged 60-69 years, and 86.7% of subjects aged more than 70 years had moderate to high risk (≥10%) for CVD. In addition, subjects with moderate to high risk for CVD were found to have other independent risk factors, such as raised pulse rate (19.0%), premature menopause in females (26.5%), positive family history of CVD (31.0%), and obesity (79.3%).
Conclusion: This study gives a snapshot view of the risk of CVD events in rural India. The overall risk for CVD per the WHO/ISH scale, as well as the prevalence of other important risk factors for CVD, was found to be alarmingly high in this study. This necessitates risk stratification of the population and targeted intervention strategies.
5. Positive Trends in Out-of-Hospital Cardiac Arrest Survival: CARES Program 2010-2014
Kimberly Vellano, Allison Crouch, Monica Rajdev, Tiara Sinkfield, Brad Swanson & Bryan McNally; Emory University, CARES Program
Background: In 2004, the Centers for Disease Control and Prevention (CDC) established the Cardiac Arrest Registry to Enhance Survival (CARES) in collaboration with the Department of Emergency Medicine at the Emory University School of Medicine. CARES was developed to help communities determine standard outcome measures for out-of-hospital cardiac arrest (OHCA). Participating EMS systems can compare their performance to de-identified aggregate statistics, allowing for longitudinal benchmarking capability at the local, regional, and national level. The program has expanded to include 12 state-based registries and more than 50 community sites in 23 additional states, representing a catchment area of almost 80 million people, or approximately 25 percent of the US population.
Objective: To identify trends in survival and bystander intervention rates in a cohort of CARES participants from 2010-2014.
Methods: Trend analyses were conducted using the CARES 2010 cohort from January 1, 2010 – December 31, 2014. This analysis tracked all worked, presumed cardiac OHCAs from 68 agencies that were participating in CARES in 2010, representing 34 communities with a combined population of approximately 26 million.
Results: Utstein survival (bystander-witnessed arrests presenting in a shockable rhythm) increased from 31.3 percent in 2010 to 34.6 percent in 2014 (p = 0.043). Overall survival to hospital discharge increased from 10.3 percent to 11.0 percent in the same timeframe (p = 0.05). Bystander CPR provision increased from 36.5 percent in 2010 to 46 percent in 2014 (p < 0.0001), and bystander AED use increased from 3.7 percent to 5.1 percent (p < 0.0001).
Conclusions: Data drawn from a large subset of CARES participants suggest that rates of survival from OHCA and of bystander intervention have improved over time.
6. Latent Transition Analysis: Exploring Symptom Clusters in Heart Failure
Jumin Park, MSN, RN; Meg Johantgen, PhD, RN
Background: A symptom cluster is a group of two or more symptoms that occur together and are related to each other. Latent Transition Analysis (LTA) is a variant of latent class profile analysis used for modeling change over time. LTA is appropriate for answering questions about the types of individuals who change over time.
Objective: The purpose of this study is to examine changes over time in symptom cluster membership in heart failure (HF).
Methods: The study used existing data in a longitudinal design. The data were obtained from 895 HF patients in inpatient and outpatient settings in the United States at baseline (T1) and one year later (T2). Symptoms were identified from the Minnesota Living with Heart Failure Questionnaire (MLHFQ). Five physical symptoms (edema, shortness of breath, fatigue/increased need to rest, fatigue/low energy, and sleep difficulties) and three psychological symptoms (worrying, feeling depressed, and cognitive problems) were examined. Latent class profile analysis was used to identify symptom clusters, and LTA was then performed to explore changes over time in symptom cluster membership.
Results: There were four relatively distinct classes of individuals at T1 and T2: all mild symptoms (Class 1), moderate physical symptoms (Class 2), moderate psychological symptoms (Class 3), and all severe symptoms (Class 4). HF patients had approximately a 20% probability of remaining in the same class or transitioning to other classes. In Class 3, HF patients had a 54.5% probability of transitioning to Class 1 and a 48.2% probability of transitioning to Class 4.
Conclusions: The results provide useful information about for whom, and in what direction, change occurs over time. LTA is well suited to answering questions where understanding in which direction change occurs for subgroups across discrete qualitative states is important. LTA is a valuable procedure since it reduces large continuous observations into meaningful patterns.
7. Examining the Relationship Between Ethnicity and Diabetes Management Among Asians and Pacific Islanders
Adrian Matias Bacong, Christina Holub, PhD, MPH, & Liki Porotesano: San Diego State University Research Foundation, San Diego State University Institute for Behavioral and Community Health
BACKGROUND: Recent disaggregation of health data among Asians and Pacific Islanders has revealed significant disparities in diabetes. However, little is known about diabetes management practices and clinical outcomes between ethnic groups.
OBJECTIVE: To compare diabetes management practices, perceived self-management confidence, emergency room (ER) visits, and inpatient hospital admissions between Asian ethnic groups and Native Hawaiians and Pacific Islanders (NHPI).
METHODS: Data from the California Health Interview Survey (CHIS) 2011-2012 were used to examine management practices among Chinese, Vietnamese, Korean, Filipino, and NHPI diabetics (N = 420). Management practices, perceived self-control of diabetes, and hospital admission prevalence were analyzed using regression modeling while controlling for age, gender, body mass index (BMI), and household income. A significance level of 0.05 was used for all statistical tests.
RESULTS: Hemoglobin-A1C checks within the past year, insulin-taking behavior, diabetic pill-taking behavior, and multiple treatment approaches did not differ by ethnicity. However, monthly sugar/glucose self-examinations (F(4, 415) = 4.46, p = 0.002), diabetes care plan creation (Chi-Square(4, N = 420) = 98.97, p < 0.001), and possessing a written care plan copy (Chi-Square(4, N = 246) = 18.68, p = 0.001) differed significantly by ethnicity. After controlling for predisposing variables, care plan creation (p < 0.001) and possessing a written care plan copy (p = 0.002) remained statistically significant. Diabetes-related ER visits and hospital admission did not differ by ethnicity. Perceived self-confidence to manage/control diabetes (Chi-Square(4, N = 419) = 9.70, p = 0.046) differed by ethnicity. Controlling for predisposing and diabetes management variables attenuated the effects of ethnicity on perceived self-confidence to manage/control diabetes.
CONCLUSIONS: Diagnostic and treatment practices appear similar among Asian ethnic groups and Pacific Islanders. However, development of comprehensive plans may impact the quality of care and perception of diabetes management/control between different Asian ethnicities and Pacific Islanders.
8. The Impact of Repeat Hospitalizations on Hospitalization Rates among Adults with and without Diabetes
Stephanie M Benjamin, Jing Wang, Linda S Geiss, Theodore J Thompson, Edward W Gregg
Background: Hospitalization data are often used to monitor outcomes in disease surveillance. Frequently, hospitalization rates are calculated using the number of events (includes repeat hospitalizations) instead of the number of individuals hospitalized (excludes repeat hospitalizations).
Objective: We examine the impact of repeat hospitalizations on hospitalization rates for various conditions and on comparisons of rates by diabetes status.
Methods: We analyzed 2011 hospitalization data capable of distinguishing repeat hospitalizations among individuals aged 18 years and older from 12 states using the Agency for Healthcare Research and Quality’s State Inpatient Database. The Behavioral Risk Factor Surveillance System was used to estimate the number of adults with and without diagnosed diabetes in each state (denominator). Hospitalization rates by diabetes status were compared using the total number of hospitalizations (including repeat hospitalizations) in the numerator and using only the number of individuals hospitalized (excluding repeat hospitalizations) in the numerator.
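The two numerator choices described in the Methods can be illustrated with a minimal sketch (hypothetical patient IDs, not the study's SID/BRFSS data):

```python
# Minimal sketch of the two rate definitions described above,
# using hypothetical data (not the study's actual SID/BRFSS data).

def hospitalization_rates(admissions, population):
    """admissions: one patient ID per hospitalization event.
    Returns (event-based rate, person-based rate)."""
    event_rate = len(admissions) / population           # includes repeat hospitalizations
    person_rate = len(set(admissions)) / population     # counts each individual once
    return event_rate, person_rate

# Patient "A" was hospitalized twice in the year
event_rate, person_rate = hospitalization_rates(["A", "A", "B", "C"], population=1000)
```

With one patient admitted twice, the event-based numerator is 4 while the person-based numerator is 3, which is exactly the gap the abstract quantifies.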
Results: Regardless of diabetes status, hospitalization rates were considerably higher when repeat hospitalizations within a calendar year were included. The excess rates ranged from 12.1% higher for stroke to 38.4% higher for heart failure for adults with diabetes; for adults without diabetes, these rates ranged from 10.1% higher for stroke to 28.3% higher for heart failure. However, including repeat hospitalizations for common diabetes-related complications had little impact on the relative comparison of rates by diabetes status.
Conclusions: Hospitalization rates including repeat hospitalizations overestimate rates in individuals and are especially pronounced for some causes. However, inclusion of repeat hospitalizations had little impact on the comparison of rates by diabetes status for common diabetes-related complications.
9. Diabetes Status and Dietary Characteristics among the U.S. Noninstitutionalized Adult Population
Ping Cao, MS, Pharm D. Candidate; Janice L. Gilden, MD
Background: Dietary factors may play a significant role in the development of diabetes.
Objective: To explore the association between diabetes status and dietary factors by comparing the sample characteristics and dietary factors between diabetics and non-diabetics.
Methods: Data were from the 2009-2010 National Health and Nutrition Examination Survey (NHANES). This study included 2,284 observations, representing 195.75 million US civilian non-institutionalized adults. Subjects were divided into diabetic and non-diabetic groups according to fasting blood glucose measurement. The chi-square test was used for categorical variables and the t-test for continuous variables in the descriptive analysis. Multivariate logistic regression, controlling for demographics, comorbidities, and nutritional factors, was used to test the association between diabetes and the considered factors.
Results: Descriptive analysis demonstrated that among diabetics, 61.95% were male, 50.92% were aged 60 years or older, 59.66% had BMI ≥30 kg/m2, 62.6% had a comorbidity, and 56.83% had close relatives with diabetes, all of which were significantly different from non-diabetics. Compared with the daily nutrition intake of non-diabetics, diabetics took 47.27 mg more cholesterol (p=0.0231) and 33.83 mg more choline (p=0.0458), but 17.23 g less sugar (p=0.0005). More non-diabetics ate 1-10 meals/week outside the home (27.69% vs. 16.20%, p=0.0481) and were vegetarian (2.87% vs. 1.09%, p=0.024). There were no significant differences in total energy, fat, protein, and carbohydrate intake between diabetics and non-diabetics. Regression analysis showed that BMI ≥30 kg/m2 was significantly associated with diabetes status.
Conclusion: This study showed that obesity, family history, older age, male gender, Mexican American ethnicity, and sedentary lifestyle are significantly associated with diabetes status. However, the association between diabetes and sugar intake needs further exploration.
10. Mexican American Diabetes Rates: Clinical Status versus Self-Report
Megan E. Douglas, M.S. and Charles A. Guarnaccia, Ph.D.
Background: Among Mexican American individuals, the high rate of diabetes negatively affects overall health. Awareness of diabetes status may be impacted by nativity and can further exacerbate risk for disease complications. Effective treatment begins with disease-status knowledge, so factors impacting self-report of diabetes-status must be studied.
Objective: To examine whether country of birth, age, gender, HbA1c levels, and insurance coverage significantly contribute to correct/incorrect self-report of diabetes status among Mexican Americans.
Methods: The present study examined diabetes status among Mexican Americans (born in either the U.S. or Mexico) using the NHANES data sets from 1999 to 2012. Comparisons were made between self-reported diabetes status (yes/no responses) and objective clinical diagnosis based on a 6.5% HbA1c cut-score.
Results: A binary logistic regression was run, and country of birth was found to significantly predict misreport of diabetes status: Chi-squared (1, N = 795) = 9.71, p = .002. Mexican Americans born in Mexico were more likely to report that they had not been told they had diabetes when their HbA1c did meet clinical criteria. A second binary logistic regression was performed to assess the impact of several significant factors. Older age (Wald = 11.57, p = .001), higher HbA1c levels (Wald = 11.00, p = .001), and having health insurance coverage (Wald = 5.89, p = .015) increased the likelihood that an individual would be correctly classified by self-report, Chi-squared (1, N = 786) = 51.19, p < .001. Interestingly, country of birth was no longer a significant predictor.
Conclusions: Nativity was related to awareness of disease status; however, this relationship was affected by underlying differences that vary with country of birth. This finding represents an important area for targeted health interventions, which may increase awareness of diabetes status.
11. Trends in Diabetes-Free Life Expectancy and its Variance: United States, 2000-2010
Ezra I. Fishman
Background: In the last decade, adult mortality in the United States has continued its long-run decline, while diabetes prevalence has increased. It is unknown whether the additional person-years lived in the adult population have mostly been spent in a diseased or a disease-free state. Furthermore, although illness and death are stochastic processes, little is known about the variance in diabetes-free life expectancy (DFLE) when compared across ages.
Objective: To estimate DFLE and its variance in the United States in 2000 and 2010.
Methods: Data on diabetes prevalence for ages 20+ come from the National Health and Nutrition Examination Surveys (NHANES), 1999-2000 (n=4,205) and 2009-2010 (n=5,752). Diabetes prevalence was defined as HbA1c at least 6.5% or taking diabetes medication. Deaths and population counts by age and sex come from the Human Mortality Database, covering the entire U.S. population. DFLE was estimated using Sullivan’s method. Variance in DFLE was estimated using the delta method.
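Sullivan's method, as described in the Methods, weights life-table person-years by age-specific disease-free prevalence; a minimal illustrative sketch follows (the two-age-group numbers are invented, not the study's NHANES/HMD inputs):

```python
# Illustrative sketch of Sullivan's method with invented numbers
# (not the study's NHANES/Human Mortality Database inputs).

def sullivan_dfle(l0, L, prevalence):
    """Disease-free life expectancy at the starting age.
    l0: survivors at the starting exact age (life-table l(x))
    L: person-years lived in each subsequent age interval (life-table L column)
    prevalence: proportion with the disease in each age interval
    """
    # Weight each interval's person-years by the disease-free proportion,
    # then divide by survivors at the starting age.
    disease_free_years = sum(Li * (1.0 - p) for Li, p in zip(L, prevalence))
    return disease_free_years / l0

# Two hypothetical age intervals with 10% and 20% diabetes prevalence
dfle = sullivan_dfle(l0=100000, L=[95000, 90000], prevalence=[0.10, 0.20])
```

The variance estimation via the delta method propagates sampling error in the prevalence estimates through this same weighted sum.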
Results: Although life expectancy at age 20 rose by approximately 3 years for both males and females between 2000 and 2010, DFLE at age 20 did not change during this decade. At age 70, life expectancy rose by 2.5 years for males and 2.7 years for females, but DFLE rose only 0.7 years for males and 0.8 years for females. For both sexes and in both years, variance in DFLE was larger at younger ages (males, 2000, age 20: 0.020) than at older ages (males, 2000, age 70: 0.012).
Conclusions: The vast majority of the person-years of life gained by the U.S. adult population between 2000 and 2010 were spent with diabetes. Variance in DFLE is larger at younger ages, when mortality and diabetes prevalence are lower, than at older ages.
12. Development and Evaluation of a Diabetes Risk Assessment Tool for Early Onset Type 2 Diabetes: Using Strong Heart Study Family Cohort Data
Fengxia Yan, Elisa T. Lee, EunSeok Cha
Background: Self-assessment tools for detecting type 2 diabetes have been developed for the general population. There is a need for an age-specific diabetes risk assessment tool to screen for pre-diabetes or diabetes in young adults.
Objective: The purpose of this study is to develop a tool to assess the risk for early-onset type 2 diabetes in young adults aged 18-29 years.
Methods: Longitudinal Strong Heart Family Study (SHFS) data were used to develop the assessment tool. American Indian young adults aged 18-29 years who had euglycemia and did not receive any diabetes treatment at baseline, and who had follow-up data at the second examination (an average of 4 years after baseline), were included to develop a risk-score algorithm for pre-diabetes or diabetes (n=590, 242 males). A logistic regression model was constructed to select potential factors that might increase diabetes risk in young adults. The area under the receiver operating characteristic curve (AUROC) was used to assess predictive ability.
Results: More than 25% (n=156) of participants with normal glucose at baseline developed pre-diabetes or diabetes by the second exam. The final model shows that parental history of diabetes, obesity level, alcohol consumption, and higher fasting glucose at baseline were significantly associated with the occurrence of pre-diabetes or diabetes at the second exam. Obesity status was categorized into four levels scored 0-3, while the other risk factors were all binary, scored 0 or 1, yielding a maximum total score of 6. A cutoff point of 3 yields a sensitivity of 86%, specificity of 36%, positive predictive value of 30%, and negative predictive value of 89%.
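The additive scoring rule described in the Results can be sketched as follows; the component names and exact coding here are assumptions for illustration, not the authors' algorithm:

```python
# Hedged sketch of an additive risk score like the one described:
# obesity scored 0-3, the other factors 0/1, maximum total of 6,
# with scores >= 3 flagged as higher risk. Exact coding is assumed.

def risk_score(parental_history, obesity_level, alcohol_use, high_fasting_glucose):
    assert 0 <= obesity_level <= 3            # four obesity levels, scored 0-3
    assert parental_history in (0, 1)
    assert alcohol_use in (0, 1)
    assert high_fasting_glucose in (0, 1)
    return parental_history + obesity_level + alcohol_use + high_fasting_glucose

def flagged_high_risk(score, cutoff=3):
    # The cutoff of 3 is the point reported to give 86% sensitivity, 36% specificity
    return score >= cutoff

score = risk_score(parental_history=1, obesity_level=2, alcohol_use=0, high_fasting_glucose=1)
```

Such simple integer scores trade some discrimination (here, low specificity) for a tool that can be applied without laboratory software.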
Conclusions: This new screening tool has the potential to screen for early-onset diabetes risk in young adults. Our findings warrant a future study to apply and test this risk assessment tool in young adults of other races/ethnicities.
13. Teen Childbearing in Rural America
Alison Stewart Ng, Kelleen Kaye
Background: We have often been asked whether teen childbearing is higher in rural areas or metropolitan areas, and have heard abundant speculation regarding which might be true, as well as what factors play a role. Yet we found virtually no empirical evidence to shed light on these questions.
Objective: To assess whether teen birth rates are higher in rural or metropolitan areas, and ascertain which factors account for disparities by metropolitan status.
Methods: We used multivariate analysis (a negative binomial specification) to estimate the relationship between county teen birth rates (based on restricted-use 2010 all-county natality data) and numerous county-level factors (typically lagged to avoid endogeneity), as well as a rural-metropolitan indicator (based on the NCHS Urban-Rural Classification Scheme) and state fixed effects. Using these results and an Oaxaca-Blinder style decomposition, we assessed the contribution of each factor to the rural-metropolitan disparity in teen birth rates. For added context, we also assessed rural-metropolitan differences in sexual activity and contraceptive use, using the 2006 and 2010 National Survey of Family Growth.
Results: Rural teen girls were significantly more likely to have ever had sex and less likely to have used contraception at first sex compared to metropolitan teens. Consistent with these patterns, the 2010 teen birth rate was nearly one-third higher in rural counties. Our multivariate and decomposition analyses suggest that this disparity in teen birthrates is largely explained by lower college enrollment, higher poverty, reduced availability of health services, and lower rates of health insurance, accounting for 20%, 19%, 18% and 13% of the disparity respectively.
Conclusions: These results highlight rural youth as an often overlooked but high-risk group when it comes to teen childbearing. What’s more, we show that their higher birthrates can be directly tied to greater economic disadvantage and reduced access to clinical care, offering important lessons for teen pregnancy prevention.
14. Characterization of Emergency Department Visits Related to Schizophrenia by Patients Aged 18-64 Years: United States, 2009-2011
Michael Albert, National Center for Health Statistics; Linda F. McCaig, National Center for Health Statistics
Background: Emergency department (ED) care is important for the treatment of acute presentations of schizophrenia and serves as a safety net for schizophrenic patients.
Objective: The objective of this study is to evaluate the rates and characteristics of schizophrenia-related ED visits in the United States.
Methods: This is a cross-sectional study of 2009-2011 data from the National Hospital Ambulatory Medical Care Survey (NHAMCS). NHAMCS is an annual, nationally representative survey of visits to nonfederal, general, and short-stay U.S. hospitals. Schizophrenia-related visits were defined as those made by persons aged 18-64 years and having a 1st-, 2nd-, or 3rd-listed ICD-9-CM diagnosis code of 295. Differences among subgroups were evaluated using a two-tailed t test.
Results: During 2009-2011, there was an annual average of 382,000 schizophrenia-related ED visits made by persons aged 18-64 years, corresponding to a visit rate of 20.1 per 10,000 persons. The visit rate was approximately two times higher for men (26.5 per 10,000) compared to women (13.8 per 10,000). Non-Hispanic black persons had a visit rate (63.8 per 10,000) that was more than 3 times higher than Hispanic persons (17.7 per 10,000) and non-Hispanic white persons (14.0 per 10,000). The highest visit rate was among non-Hispanic black men (97.3 per 10,000). Visits by homeless persons represented 7.5% of schizophrenia-related visits compared with 0.9% of non-schizophrenia visits. The most common primary source of payment for schizophrenia-related visits was Medicaid (40.2%), which was higher than non-schizophrenia visits (23.2%). About one-third (32.7%) of schizophrenia-related visits resulted in admission to that hospital compared with 10.3% of non-schizophrenia visits. For an additional 16.7% of schizophrenia-related visits, patients were transferred to a psychiatric hospital.
Conclusions: Disparities in schizophrenia-related ED visits were observed. Findings from the analysis of NHAMCS data can help inform health policy on addressing disparities and access to care for schizophrenic patients.
15. Depression Among Housewives In A Rural Area Of North India: An Alarming Situation
Sangeeta Girdhar, Urvashi, Anurag Chaudhary, Sarit Sharma
Background: Mental health is related to one's emotional, psychological, and social well-being. With increasing stress in modern life, mental illnesses are increasing worldwide. Depression is the most common psychiatric disorder in general practice. Depression may occur at any age during a woman's life, irrespective of educational and economic status. Considering the role of women in the family and the social and cultural characteristics of our country, housewives are the backbone of Indian society.
Objectives: To assess depression and study its epidemiological correlates among housewives aged 18-59 years in a rural area of district Ludhiana.
Methods: This community-based cross-sectional study was carried out in the field practice area of the Rural Health Training Center (RHTC) located at village Pohir, District Ludhiana, Punjab, India. A total of 300 subjects were included in the study, selected by systematic random sampling. Information was collected on a proforma through house-to-house visits. Depression was assessed using a self-reported instrument, the Patient Health Questionnaire-9 (PHQ-9). The data collected were analyzed using percentages and the chi-square test.
Results: A total of 43% of housewives had depression. Depression showed an increasing trend with age (p = 0.00). Housewives with no schooling (64.7%) or 1-5 years of schooling (56.2%) had more depression than those with 5-13 years of schooling (35.1%, p = 0.000). Depression was significantly associated with increasing parity (p = 0.036). Type of family was not significant.
Conclusion: Nearly half of the housewives were found to be suffering from depression. The important correlates observed were age, parity, and education status.
16. The Interaction Effects of Binge Drinking, BMI, and Smoking as Combined Risk Factors for Self-Reported Lifetime Depression Diagnosis
Betelihem B. Tobo1, Christian Geneus3, Lei Yang1, Usha Rawat2, Sumaya Hammami4, Eric Adjei Boakye5. 1Department of Epidemiology, College for Public Health and Social Justice, Saint Louis University, St. Louis, MO, USA; 2Department of Biostatistics, College for Public Health and Social Justice, Saint Louis University, St. Louis, MO, USA; 3Department of Environmental and Occupational Health, College for Public Health and Social Justice, Saint Louis University, St. Louis, MO, USA; 4Department of Behavioral Science and Health Education, College for Public Health and Social Justice, Saint Louis University, St. Louis, MO, USA; 5Center for Outcomes Research (SLUCOR), Saint Louis University, St. Louis, MO, USA
Background: Depression is one of the most common psychiatric conditions in the United States. Binge drinking, BMI, and smoking are known correlates of depressive symptoms. However, the interaction effects among binge drinking, BMI, and smoking have not been previously examined.
Objective: This study aims to examine the interaction effects of binge drinking (binge drinker, non-binge drinker), BMI (normal weight, overweight, obese), and smoking (nonsmoker, former smoker, current smoker) on self-reported lifetime depression using a national sample from the 2012 Behavioral Risk Factor Surveillance System (BRFSS), a national cross-sectional telephone survey.
Methods: Data on 473,604 participants were retrieved from the 2012 BRFSS for analysis. The outcome variable was self-reported lifetime depression and the independent variables were binge drinking, BMI, and smoking status. A binary logistic regression model was constructed to evaluate the interaction effects between binge drinking, BMI, and smoking and depression.
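To make the interaction model concrete, one way such a design matrix is typically built (a sketch under our own assumptions; the abstract does not describe its coding scheme, and the levels and helper below are hypothetical) is to one-hot-encode each factor against a reference level and multiply the dummies pairwise:

```python
from itertools import product

BMI_LEVELS = ["normal", "overweight", "obese"]       # reference: normal
SMOKE_LEVELS = ["nonsmoker", "former", "current"]    # reference: nonsmoker

def design_row(bmi, smoke):
    """One row of a logistic-regression design matrix: an intercept,
    dummy-coded main effects (reference levels dropped), and all
    pairwise BMI-by-smoking interaction terms. Hypothetical helper
    for illustration only."""
    bmi_dummies = [1 if bmi == lvl else 0 for lvl in BMI_LEVELS[1:]]
    smoke_dummies = [1 if smoke == lvl else 0 for lvl in SMOKE_LEVELS[1:]]
    # Interaction terms are products of the main-effect dummies.
    interactions = [b * s for b, s in product(bmi_dummies, smoke_dummies)]
    return [1] + bmi_dummies + smoke_dummies + interactions

row = design_row("obese", "current")  # only the obese, current, and obese*current terms fire
```

The fitted coefficient on an interaction column then captures how the smoking effect on the log-odds of depression differs by BMI category.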
Results: The interaction between binge drinking and smoking, and BMI and smoking were statistically significant (p <0.001). However, the interaction between BMI and binge drinking was not statistically significant. After adjusting for covariates, current smokers who were overweight and obese were 1.72 (95% CI: 1.60 – 1.84) and 2.16 (95% CI: 2.01 – 2.32) times more likely to report lifetime depression compared to non-smokers who were normal weight, respectively. Similarly, current smokers who were overweight (OR=1.34; 95% CI: 1.26 – 1.42) and current smokers who were obese (OR=1.78; 95% CI: 1.68 – 1.88) were more likely to report lifetime depression compared to non-smokers who were normal weight. In addition, current smokers who were binge drinkers were 1.30 (95% CI: 1.18 – 1.42) times more likely to report lifetime depression compared to non-smokers who were not binge drinkers.
Conclusions: The interaction effects between binge drinking and smoking, and smoking and BMI were significantly associated with lifetime depression. These findings have implications for health professionals intervening for depression.
17. Measuring Price Growth in Nursing Homes from 2000-2009
Tina Highfill & David S. Johnson
Background: The proper measurement of medical care inflation is important for policymakers to understand the drivers of price growth in the health care sector. For this reason, the U.S. Bureau of Economic Analysis (BEA) recently released alternative measures of medical care inflation that use disease-based price indexes. However, this account does not yet incorporate spending on nursing home care, providing an incomplete picture of overall health care inflation.
Objective: Calculate disease-based price indexes for nursing home care from 2000-2009.
Methods: Laspeyres price indexes by disease are calculated for Medicare beneficiaries with a nursing home stay using the Medicare Current Beneficiary Survey. The allocation of spending to exclusive disease categories, which is necessary to calculate disease-based price indexes, is conducted using two methods. First, annual patient spending is divided evenly to all diseases indicated on an annual survey for long-term residents or from claims diagnoses for those with short-term stays. Second, spending is allocated using coefficients determined from regressing annual patient costs onto dummies for diagnosed conditions.
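A Laspeyres price index holds the base-period basket fixed and reprices it at current prices. A minimal sketch (the prices and quantities below are hypothetical, not the study's data):

```python
def laspeyres_index(base_prices, curr_prices, base_quantities):
    """Laspeyres price index: cost of the base-period basket at
    current prices, divided by its cost at base prices, times 100."""
    base_cost = sum(p * q for p, q in zip(base_prices, base_quantities))
    curr_cost = sum(p * q for p, q in zip(curr_prices, base_quantities))
    return 100.0 * curr_cost / base_cost

# Two illustrative disease categories: base prices 100 and 200,
# base quantities 3 and 1, current prices 105 and 210.
index = laspeyres_index([100, 200], [105, 210], [3, 1])  # -> 105.0
```

In a disease-based version, each "good" is a disease category and the "price" is per-patient treatment spending allocated to that disease.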
Results: Prices in the overall nursing home sector grew at an average annual rate of only 1.1% during the study period. Price growth was slower for long-term nursing home residents (1.5%) than for short-term stays (2.7%). Diseases of the circulatory system were the most prevalent disease category, followed by mental illness for long-term residents and diseases of the musculoskeletal system and connective tissue for short-term stays. These three disease categories also received the largest allocations of spending; mental illness received the bulk of overall spending, garnering more than half of all expenditures by one estimate.
Conclusions: Nursing home price growth in the 2000s was much slower than other health care sectors. Incorporating these disease-based price indexes into BEA’s current health care satellite account will provide a more comprehensive picture of spending trends and inflation in the sector.
18. A Bayesian Semiparametric Quantile Model for Longitudinal Growth Data
Xin Tong, Bo Cai
Background: Mixed-effects models are a commonly used tool for accounting for variation across subjects in longitudinal analysis. According to previous studies, quantile regression for longitudinal data assumes a parametric error distribution in order to obtain full inference.
Objective: To propose a Bayesian semiparametric approach that relaxes the parametric error assumption while still providing full inference.
Methods: Using scale mixture models for the error distribution, we develop quantile regression models that allow the error distribution to change nonparametrically with the covariates. We also specify Dirichlet process priors for the random effects instead of the usual Gaussian assumption. We present a method for Markov chain Monte Carlo (MCMC) posterior simulation, and illustrate inferences with simulated data and longitudinal growth curve data.
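The abstract does not give its equations; in our notation, the standard formulation such models typically build on estimates the τ-th quantile via the check loss, whose working likelihood is the asymmetric Laplace density, which admits a normal-exponential scale mixture representation convenient for MCMC:

```latex
\rho_\tau(u) = u\,\{\tau - I(u < 0)\}, \qquad
f_\tau(u \mid \sigma) = \frac{\tau(1-\tau)}{\sigma}
  \exp\!\left\{-\rho_\tau\!\left(\frac{u}{\sigma}\right)\right\},
```

```latex
u = \sigma\left(\theta z + \psi\sqrt{z}\,w\right), \quad
z \sim \mathrm{Exp}(1),\; w \sim N(0,1), \quad
\theta = \frac{1-2\tau}{\tau(1-\tau)}, \quad
\psi^2 = \frac{2}{\tau(1-\tau)}.
```

Conditional on the latent scale z, the error is Gaussian, so standard Gibbs updates apply; the abstract's semiparametric extension replaces fixed parametric pieces of this setup with Dirichlet process mixtures.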
Results: Our method provides estimates of the subgroup-specific parameters, and the detection of heterogeneity in the random-effects population can serve as a helpful exploratory cluster analysis.
Conclusions: The nonparametric probabilistic model is flexible enough to capture the shape (e.g., heavy tails and skewness) of the error distribution and can be incorporated into the analysis of longitudinal data.
19. Obesity and Physical Inactivity in South Carolina: Role of Healthcare Supply and the Built Environment
Duaa Aljabri1 and Dr. Jan M. Eberth2. 1PhD Candidate, Health Services Policy and Management, University of South Carolina: 2Assistant Professor, Epidemiology and Biostatistics, University of South Carolina
Background: The obesity and physical inactivity epidemics are growing, yet few studies have examined their spatial variation in order to recommend county-specific prevention strategies.
Objectives: To demonstrate the geographic variation of obesity and physical inactivity in South Carolina (SC) counties, and to analyze them in relation to county-level measures of healthcare supply (healthcare facility and primary care physician density) and built environment (access to exercise opportunities) in an ecological study framework.
Methods: Data from multiple sources were utilized, including the US Census Topologically Integrated Geographic Encoding and Referencing (TIGER)/Line Files, the Area Health Resource File, the Behavioral Risk Factor Surveillance System, and the Robert Wood Johnson Foundation County Health Rankings. Descriptive statistics, including Pearson’s correlation coefficient and choropleth maps, were used to describe county-level obesity and physical inactivity rates. Comparisons were made to Healthy People 2020 objectives to identify counties in need of intervention. Local clustering tests and linear regression modeling were also performed to identify spatial clusters and factors associated with each outcome separately.
Results: As expected, obesity had a strong positive linear correlation with physical inactivity (r = 0.78, p < 0.01). More than 70% of SC counties have not yet met the Healthy People 2020 targets for reducing obesity and physical inactivity. Spatial analysis showed statistically significant clusters of high obesity (Bamberg, Allendale, and Hampton), high physical inactivity (Allendale and Dillon), and low access to exercise opportunities (Bamberg, Allendale, Hampton, and Marlboro) in rural areas, while healthcare supply density was clustered in urban areas. Physical inactivity, healthcare facility density, and primary care physician density were significant correlates of obesity in our model. Physical inactivity was also associated with the county obesity rate and low perceived health status.
Conclusions: Variations in obesity, physical inactivity, healthcare supply and built environment across SC counties may encourage local policymakers to develop interventions according to community needs, and assist in planning and resource allocation decisions.
20. Obesity and the Association with Infectious Complications Following Traumatic Injury
Demetria Bayt, MPH, Samantha Stokes, Joseph Yoder, MS, Teresa Bell, PhD
Background: Obesity is a public health concern in the United States due to the increasing prevalence, particularly in younger age groups. Trauma is the most common cause of death for people younger than 40 years old. Currently, the literature reports mixed results regarding the effect of obesity on trauma outcomes. The aim of this study is to assess whether morbid obesity is associated with increased risk of infectious complications following traumatic injury.
Methods: A retrospective analysis was conducted using data from the National Trauma Data Bank (NTDB) for years 2011 through 2012. Patients between the ages of 18 and 89 with an injury severity score (ISS) greater than 8 were included in the study. Obesity was defined as having a body mass index greater than or equal to 40. Descriptive statistics were calculated and stratified by obese, non-obese, and unknown obesity status. A multilevel model was used to model the probability of experiencing one of the following infectious complications: deep surgical site infection, organ/space surgical site infection, pneumonia, superficial surgical site infection, urinary tract infection (UTI), catheter-associated blood stream infection (BSI), severe sepsis, and systemic sepsis.
Results: Obese patients had an increased risk of infectious complications following traumatic injury compared to non-obese patients (OR = 1.81; 95% CI 1.74-1.90). The odds of an infectious complication were lower for the 18-40-year age group than for the 61-89-year age group (OR = 0.56; 95% CI 0.54-0.58). Patients with at least one comorbidity other than obesity had 1.33 times the odds of an infectious complication compared to patients with no other comorbidities (OR = 1.33; 95% CI 1.29-1.38). Patients with blunt injuries had a lower risk of an infectious complication than patients with penetrating injuries (OR = 0.93; 95% CI 0.89-0.97).
Conclusions: The results indicate that obese patients, patients between the ages of 61-89 years, patients having one or more comorbidities other than obesity, and patients with penetrating injuries have an increased risk of developing an infectious complication.
21. Vegetables, Fruit, and Whole Grains Consumption by Children 2-19 Years of Age at Meals and Snacks: WWEIA, NHANES 2011-12
Shanthy A. Bowman, James E. Friday, John C. Clemens, Alanna J. Moshfegh. U.S. Department of Agriculture, Agricultural Research Service, Beltsville, MD
Background: The Dietary Guidelines for Americans (DGA) encourage Americans to consume more vegetables and fruits, and recommend consuming half the total grains as whole grains.
Objective: The research objective was to estimate mean intakes of these food groups by children 2-19 years (n=3,132) at breakfast, lunch, dinner, and snacks.
Methods: What We Eat in America, National Health and Nutrition Examination Survey 2011-12, day 1 dietary data were used for the study. Contributions of meals and snacks to vegetables, fruit, and grains intakes were estimated for the 3,132 children in the survey.
Results: Children consumed 0.93 cup equivalents (eq.) of total vegetables. About one-half of all vegetables were consumed at dinner, about one-third at lunch, and about one-tenth as snacks. Of the 0.28 cup eq. of potatoes and 0.24 cup eq. of tomatoes consumed, about one-half were consumed at dinner. The mean intake of total fruit was 1.14 cup eq., of which fruit juice was 0.45 cup eq.; citrus, melons, and berries were 0.19 cup eq.; and all other fruit was 0.50 cup eq. Most fruit juice was consumed as snacks (38%) and at breakfast (29%). Thirty-seven percent of citrus, melons, and berries and 51% of other fruit were consumed as snacks. Children consumed 6.8 ounce (oz.) eq. of total grains, of which only 10 percent (0.7 oz. eq.) was consumed as whole grains. Dinner (33%) provided the highest amount of total grains, followed by lunch (28%). Though consumed in meager amounts, about one-third of whole grains were consumed at breakfast.
Conclusions: The study showed that snacks were good sources of fruit, and dinner and lunch were good sources of vegetables and total grains, in children’s diets. The need to increase whole grains consumption was noted.
22. Relationship Between Protein Intake per Meal and Lean Muscle Mass in Older Adults: The 1999-2004 National Health and Nutrition Examination Survey (NHANES)
Carrie Brown, Gail T. Rogers and Paul F. Jacques
Background: Adequate protein intake is important for minimizing the loss of lean muscle mass associated with aging. In addition to meeting a daily protein requirement, recent research suggests that a minimum amount of protein per meal may be necessary to stimulate muscle protein synthesis.
Objective: To test the hypothesis that higher protein intake distributed equally across three meals is associated with a greater lean muscle mass in free-living older American adults.
Methods: This cross-sectional analysis used data from the National Health and Nutrition Examination Survey (NHANES 1999 – 2004) for 6,428 adults over 50 years of age. An appendicular skeletal muscle mass index (ASI) was calculated from dual-energy X-ray absorptiometry measurements. Total protein and protein per meal were estimated from a single 24-hr dietary recall. Regression models were used to examine the associations between the natural logarithm of ASI and both total protein consumption and daily protein distribution across meals.
Results: Total protein intake was positively associated with ASI after adjusting for energy and other potential confounders (p = 0.002). Protein intake was highest at dinner and lowest at breakfast. Among the 61% of the population who consumed three meals per day, 4.4% consumed at least 25 g of protein per meal. After multivariable adjustment, ASI was not significantly higher among those who consumed at least 25 g of protein per meal (p = 0.08).
Conclusion: While our findings reinforce the importance of adequate protein for maintaining lean muscle mass in older adults, our test that higher protein intake distributed equally across three meals is associated with a greater lean muscle mass was inconclusive. This is likely the consequence of infrequent consumption of an even daily distribution of protein across meals among older adults resulting in insufficient statistical power to test this hypothesis in NHANES. Further research is needed to adequately test this hypothesis.
23. Association between Shift Work and Leisure Time Physical Activity (LTPA) among U.S. Working Adults: Results from the 2010 National Health Interview Survey (NHIS)
Desta Fekedulegn, Cecil M. Burchfiel, Ja K. Gu, Tara A. Hartley, Luenda E. Charles, Claudia C. Ma, Michael E. Andrew, Peter R. Giacobbi
Background: Shift workers may be less likely to exercise than day workers; however, evidence on their physical activity is limited.
Objective: To examine associations of work schedule with physical activity (PA).
Methods: Data from the 2010 National Health Interview Survey (NHIS) were used. The NHIS employs multi-stage complex sampling to collect data on the health of the civilian population of the United States. Adult survey respondents in 2010 who were working at a paying job and had data on work schedule and PA were analyzed (n = 15,084). Usual work schedule was categorized as day, evening, night, rotating, or some other shift. Using federal guidelines, participants were classified as inactive (not engaged in regular PA), insufficiently active (less than 150 minutes/week of moderate-intensity or less than 75 minutes/week of vigorous PA), or sufficiently active. SUDAAN software, which accounts for the complex survey design, was used to produce national prevalence estimates of PA by work schedule.
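The three-level classification described in the methods can be sketched as a simple rule; this is a simplified illustration using only the thresholds stated in the abstract (it ignores NHIS frequency/duration coding details and moderate-vigorous equivalence rules):

```python
def activity_level(moderate_min, vigorous_min):
    """Classify weekly leisure-time physical activity using the
    abstract's cut points: 150 min/week moderate-intensity or
    75 min/week vigorous-intensity."""
    if moderate_min == 0 and vigorous_min == 0:
        return "inactive"  # no regular physical activity reported
    if moderate_min >= 150 or vigorous_min >= 75:
        return "sufficiently active"
    return "insufficiently active"

level = activity_level(moderate_min=100, vigorous_min=20)  # below both thresholds
```

Prevalence estimates would then weight these per-respondent labels by the survey design weights.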
Results: The proportions of working adults who were inactive and not sufficiently active were 26.6% and 47.8%, respectively. Prevalence of inactivity by work schedule was: day (26.2%), evening (29.2%), night (33.4%), and rotating (29.0%). The proportion of adults not sufficiently active was: day (47.5%), evening (49.5%), night (53.5%), and rotating (49.6%). After adjusting for age, gender, and race/ethnicity, the proportion of workers who were inactive was 24% higher among those working night shifts (PR = 1.24, 95% CI 1.08-1.43) and 13% higher among those on rotating shifts (PR = 1.13, 95% CI 1.02-1.26) relative to day-shift workers. Prevalence of being not sufficiently active was 12% higher among those working night shifts (PR = 1.12, 95% CI 1.02-1.24) relative to day shifts.
Conclusion: Results indicate that shift work, particularly night and rotating shifts, is associated with elevated prevalence of physical inactivity and insufficient activity. Work place wellness programs and interventions may enhance level of physical activity and support healthy behavior among shift workers.
24. Prevalence and Trends of Leisure-time Physical Activity by Occupation in US Workers: the National Health Interview Survey 2004-2013
Ja K. Gu1, Luenda E. Charles1, Desta Fekedulegn1, Claudia C. Ma1, Tara A. Hartley1, Michael E. Andrew1, John M. Violanti2, Cecil M. Burchfiel1. 1 Biostatistics & Epidemiology Branch, Health Effects Laboratory Division, NIOSH, CDC, Morgantown, WV, 2 State University of New York at Buffalo, Buffalo, New York
Background: Numerous studies have reported the health benefits of physical activity. However, data describing prevalence and trends of physical activity among workers in the United States (US) are scarce.
Objective: To estimate prevalence and trends of physical activity during the 2004-2013 time period among US workers.
Methods: Self-reported occupation and leisure-time aerobic physical activity were collected annually for US workers in the National Health Interview Survey during 2004-2013. Physical activity was categorized by intensity and duration as inactive (moderate or vigorous-intensity aerobic activity: less than 10 minutes/week), insufficiently active (moderate-intensity: 10-150 or vigorous-intensity: 10-75 minutes/week), and sufficiently active (moderate-intensity: greater than or equal to 150 or vigorous-intensity: greater than or equal to 75 minutes/week). Trends of physical activity were obtained using weighted linear regression models.
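The trend slopes reported below come from weighted linear regression of annual prevalence on year. A minimal sketch of the weighted least-squares slope (the data points are hypothetical, not the study's estimates):

```python
def weighted_slope(years, values, weights):
    """Weighted least-squares slope of values regressed on years
    (trend in percentage points per year)."""
    total_w = sum(weights)
    mean_x = sum(w * x for w, x in zip(weights, years)) / total_w
    mean_y = sum(w * y for w, y in zip(weights, values)) / total_w
    num = sum(w * (x - mean_x) * (y - mean_y)
              for w, x, y in zip(weights, years, values))
    den = sum(w * (x - mean_x) ** 2 for w, x in zip(weights, years))
    return num / den

# Illustrative annual prevalence values with equal weights.
slope = weighted_slope([2004, 2008, 2013], [40.0, 44.0, 49.0], [1.0, 1.0, 1.0])
```

In practice the weights would reflect the precision of each year's survey estimate.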
Results: Prevalence of ‘sufficient’ physical activity significantly increased from 2004 to 2013 across gender and race/ethnicity (slope=1.219; p=0.001). Among all males and females, this prevalence was 53.2% and 47.5%, respectively. The lowest prevalence was observed among Hispanic males and Black females, 43.1% and 36.3%, respectively. Among occupational groups, the lowest prevalence of ‘sufficient’ physical activity was observed among workers in Farming/Fishing/Forestry (32.1%) and Building/Grounds Cleaning & Maintenance (38.5%). The highest prevalence was observed among workers in Life/Physical/Social Science (66.5%), and Arts/Design/Entertainment/Sports (64.5%). Prevalence of ‘sufficient’ physical activity significantly increased from 2004 to 2013 in most occupational groups. The largest increases were observed among workers in Food Preparation/Serving (37.8% to 47.5%, Slope=1.582; p=0.001), followed by Protective Service (52.6% to 67.2%, Slope=1.543; p=0.001), and Construction/Extraction (37.2% to 44.4%, Slope=1.440; p=0.001).
Conclusions: Among NHIS participants, trends of ‘sufficient’ leisure-time aerobic physical activity significantly increased between 2004 and 2013. Overall, a larger proportion of white-collar workers compared to blue-collar workers were engaged in ‘sufficient’ leisure-time physical activity.
25. Linking Federal Food Intake Surveys to Examine Long-Run Shifts in Food Intake Sources
Joanne Guthrie, Travis Alan Smith, Biing-Hwan Lin
Background: Understanding long-run food consumption trends can inform nutrition policy. However, methodological changes such as coding differences across surveys can complicate trend analysis.
Objective: Assess long-run shifts in food intake sources by linking Federal dietary surveys conducted between 1977 and 2012.
Methods: We linked the Nationwide Food Consumption Survey (NFCS) 1977-78, Continuing Survey of Food Intakes by Individuals (CSFII) 1989-91 and 1994-98; and the National Health and Nutrition Examination Survey (NHANES) 2003-04, 2005-06, 2007-08, 2009-10, 2011-12. Information on where foods were obtained was recoded so that across all surveys foods were classified into these source categories: (a) home; (b) full-service restaurant; (c) fast-food establishments; (d) school/day care; (e) other. For each time period, share of food energy (calories) obtained from each source was estimated for the total population and by subpopulations.
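Harmonizing "where obtained" codes across surveys amounts to a recode table mapping each survey's original labels into the five shared categories. A sketch with hypothetical original labels (the actual NFCS/CSFII/NHANES code lists are not given in the abstract):

```python
# Hypothetical survey-specific labels -> harmonized source category.
SOURCE_RECODE = {
    "home": "home",
    "grocery": "home",
    "restaurant with waiter": "full-service restaurant",
    "fast food": "fast-food establishment",
    "school cafeteria": "school/day care",
    "day care": "school/day care",
}

def harmonize(source_label):
    """Map a raw food-source label to a harmonized category;
    anything unrecognized falls into 'other'."""
    return SOURCE_RECODE.get(source_label.lower(), "other")

category = harmonize("Fast food")
```

Calorie shares by source are then computed on the harmonized categories, making estimates comparable across the 1977-2012 surveys.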
Results: From 1977-78 to 1989-91, for the total population, calories obtained from home-prepared food fell from 82.2% to 74.4% of the total; the home share continued to decline until 2005-06, when it accounted for 66.2% of calories. Over that period, fast food grew the most in share of calories obtained, increasing from 5.7% of calories in 1977-78 to 15.6% in 2005-06. Between 2007 and 2010, the share of food obtained at home briefly rebounded, reaching 70.9% of calories in 2009-10, while fast-food consumption dropped to 13.2% of calories. By 2011-12, home food dropped again to 66% of calories, and fast food grew to 15.8% of calories. Consumption shifts by subpopulations also occurred.
Conclusions: Examination of data from 1977-78 to 2011-12 demonstrates a long-run shift in consumption to food prepared away from home. A marked reversal of this trend occurred over 2007-10, possibly associated with the economic recession during that period. However, it was short-lived, with home food intake dropping again in 2011-12, suggesting that over time the shift to food prepared away from home is likely to continue.
26. Clinical, Social and Genomic Factors Associated with Obesity at 12 Months of Age
Sahel Hazrati1, Suchitra K. Hourigan1, Kathi Huddleston1, Yvonne Yui2, Nancy Gilchrist2, Wendy S.W. Wong1, John Niederhuber1. (1) Inova Translational Medicine Institute, (2) Inova Children’s Hospital
Background: Early childhood obesity is a multi-factorial condition, and genetic predisposition is one of the poorly understood risk factors.
Objective: To examine genomic, social and clinical predictors of obesity at 12 months of age.
Methods: Over 2,000 families of various races and ethnicities were recruited during the prenatal period at the Inova Translational Medicine Institute; 483 children had clinical and genomic data available at age 12 months. Whole genome sequencing was performed on blood. Weight-for-length at 12 months was calculated using WHO gender-specific growth charts with the following definitions: overweight, greater than or equal to the 85th percentile; obese, greater than or equal to the 95th; severely obese, greater than or equal to the 99th. A supervised genetic analysis was conducted using the sequence kernel association test (SKAT) for the association of rare variants (MAF less than or equal to 0.1) with obesity. Gene-set-level p-values were computed for 363 obesity-related genes from the human obesity gene map. Chi-square tests and one-way ANOVA were used to test for associations of clinical and social factors with obesity.
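The percentile cut points in the methods translate directly into a classification rule; the sketch below treats the categories as mutually exclusive for illustration (the abstract does not state whether its reported groups are nested or exclusive):

```python
def weight_status(percentile):
    """Map a weight-for-length percentile to the categories used in
    the abstract (cut points: 85th, 95th, 99th percentiles)."""
    if percentile >= 99:
        return "severely obese"
    if percentile >= 95:
        return "obese"
    if percentile >= 85:
        return "overweight"
    return "not overweight"

status = weight_status(97.0)  # falls in the 95th-99th band
```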
Results: Of the 483 infants, 31% were overweight, 20% obese, and 13% severely obese. Clinical and social factors including ethnicity, maternal education, parental BMI, and juice consumption were significantly associated with being overweight, obese, or severely obese at 12 months (p < 0.05). After adjusting for multiple testing with Bonferroni correction, only two genes were significant at the 0.1 level for the severely obese group, namely WT1 (p = 0.0033) and CNR1 (p = 0.090).
Conclusion: Many genomic, clinical, and social risk factors are associated with obesity. None of the genes were significant after Bonferroni correction for the overweight or obese groups. Clinical and social factors significantly associated with being overweight or obese in all 3 groups were Hispanic origin, lower maternal education, and any juice consumption at 12 months.
27. Children Screen Time at 12 Months and Pediatrics Developmental Screening Score
Sahel Hazrati1, Kathi Huddleston1, Kathleen Donnelly2, David Ascher2, John Niederhuber1. (1) Inova Translational Medicine Institute, (2) Inova Children’s Hospital
Background: The American Academy of Pediatrics (AAP) recommends no screen media exposure for children under age 2.
Objectives: To investigate predictors of early childhood media exposure, and to explore the association between pediatric health outcomes and media exposure at 12 months of age.
Methods: Over 2,000 families were recruited during the prenatal period into the First 1000 Days of Life study at the Inova Translational Medicine Institute, Falls Church, VA; 750 families were included in this analysis based on the availability of 12-month survey data. Associations of maternal variables (maternal confidence, ethnicity, age, and education) with screen time variables (including TV/video time, computer/tablet time, and feeding the child while the TV is on) were tested using chi-square or Fisher’s exact tests and logistic regression. In addition, the association of the Communication and Symbolic Behavior Scales score (CSBS-DP) with children’s television time and computer/tablet time was tested using linear regression.
Results: 94% of 12-month-old children had some type of screen time, including computer and TV. Younger and more educated mothers provided more computer/tablet time for their children than less educated or older mothers. Non-Hispanic mothers and mothers with lower social support scores fed children in front of the TV more frequently than Hispanic mothers or mothers with higher social support scores (p < .01). Regardless of maternal education or ethnicity, CSBS-DP total score was significantly (p < .001) associated with playing with a computer or tablet/smartphone.
Conclusion: There are significant associations between maternal/demographic factors and children’s screen time. This longitudinal study of genomics and child health provides the foundation for future studies investigating the correlation of screen time with attention deficit problems, anxiety, and obesity, as well as assessing age-relevant developmental milestones.
28. Sandwich Consumption and Contributions to Dietary Intake Among U.S. Children, What We Eat In America, NHANES 2009-2012
M Katherine Hoy, Rhonda S. Sebastian, Cecilia Wilkinson Enns, Joseph D. Goldman, Alanna J. Moshfegh
Background: Consumption of sandwiches is commonplace in the American diet. However, little is known about sandwich intake by children. In light of efforts to promote healthful choices, information about sandwich consumption by children is valuable.
Purpose: The purpose of this study is to describe sandwich consumption and its contributions to intake of USDA Food Patterns (FP) components of children 2-19 years of age using data from What We Eat In America, NHANES 2009-10 and 2011-12.
Methods: One day of dietary intake data for children 2 to 19 years (N=6,412) from What We Eat In America, NHANES 2009-2012 were used to describe the percentages of children consuming sandwiches by eating occasion, sandwich source, place consumed, and sandwich type. The contribution of sandwiches to intake of FP components by those who consumed a sandwich (reporters) was examined using the Food Patterns Equivalents Database (FPED) 2009-2012.
Results: On any given day, 48% of children ate a sandwich. No difference was seen when comparing age groups (2-5, 6-11, and 12-19 year olds). Of all sandwiches reported, 10% were consumed at breakfast, 51% at lunch, 29% at dinner, and 9% as a snack. The primary sources of sandwiches were stores (57%), fast food (21%), and school cafeterias (12%). Over half of sandwiches (57%) were consumed at home (vs. away from home). About 75% of sandwiches reported contained meat, predominantly cold cuts, hamburgers, hot dogs, and poultry. Contributions to total intake of FP components by sandwich reporters were: protein foods, 47%; grains, 34%; oils, 27%; dairy, 18%; and vegetables, 14%. Sandwiches also provided 23% of total energy, 27% of solid fats, and 7% of added sugars for sandwich reporters.
Conclusions: Sandwiches are highly consumed by children and contribute substantially to their dietary intake. These results can inform nutrition education initiatives for children.
29. Sodium Intakes of U.S. Children Ages 1-2 Years
Ashley B. Jarvis, John C. Clemens, Donna G. Rhodes, Alanna J. Moshfegh. USDA, ARS, Beltsville Human Nutrition Research Center, Beltsville, MD 20705
Background: Sodium intake and foods contributing to sodium among young children have been identified as a research gap.
Objective: Identify foods contributing energy and sodium to diets of children age 1-2 years.
Methods: Nationally representative one-day dietary data of U.S. children age 1-2 years (n=469) participating in What We Eat in America, NHANES 2011-2012 were analyzed to assess food, beverage, and nutrient intakes. The 5-step USDA Automated Multiple-Pass Method was used to collect the dietary data through a 24-hour recall from a proxy knowledgeable about the child’s intake. Food and beverage groups were defined using the WWEIA food categories; contribution to energy and sodium intakes were calculated as a percentage of total energy and sodium intakes.
Results: Mean daily energy intake was 1335 ± 26.2 kcal and mean daily sodium intake was 1836 ± 39.5 mg. Grains, mixed dishes, and protein foods contributed less than one-fifth (4.8 ± 0.29, 8.5 ± 0.54, 4.3 ± 0.36 %, respectively) of energy intake; however, these same food groups contributed nearly two-thirds of mean daily sodium intake (15.4 ± 0.76, 27.0 ± 1.46, 20.6 ± 1.26 %, respectively). The foods contributing the most to total sodium in the grains group were breads, rolls, and tortillas (6.2 ± 0.71 %); in mixed dishes were macaroni and rice mixtures (8.4 ± 1.02 %); and in protein foods were chicken and cured meats, such as cold cuts (6.7 ± 0.77, 6.6 ± 0.64 %, respectively).
Conclusion: Foods contributing the most to energy intakes of children age 1-2 years were not the same foods contributing the most to sodium intakes. This study was funded by the USDA Agricultural Research Service.
30. Body Mass Index Variations in Individuals from Adolescence to Adulthood
Wasantha Jayawardene (MD, PhD), David Lohrmann (PhD, MCHES), Mohammad Torabi (PhD, MPH), Jefferson Davis
Background: Despite numerous epidemiological studies revealing increasing obesity prevalence in the United States, longitudinal variation in individuals' body mass index (BMI) from adolescence to adulthood remains inadequately explored at the national level. Objective: To demonstrate BMI variation patterns for ages 17-27 and the prevalence of those patterns in a nationally representative sample.
Methods: Height and weight data from the National Longitudinal Survey of Youth-1997 were obtained for years 1997-2011 (N=8,985). Participants who reported weight for ages 17 (adolescence), 22 (young adulthood), and 27 (adulthood) and height for age 17 along with any year after age 20 were retained for analysis, after deleting 46 unrealistic values for height, weight, and calculated BMI (n=6,269). Missing value analysis indicated the data were missing at random. Five BMI categories were set using age-sex percentiles for age 17 and adulthood cutoffs. Transitions were detected in SAS 9.4, followed by multinomial logistic regression to identify associated factors; ggplot2 and gmisc in R were used for graphical visualization of findings.
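As a rough illustration of the transition-detection step, category assignment and transition labeling could be sketched as follows. This is a hypothetical Python sketch, not the study's SAS code; the cutoffs shown are standard adult BMI categories (with an illustrative severe-obesity cutoff), whereas the study used age-sex percentiles at age 17.

```python
# Hypothetical sketch: classify BMI at ages 17, 22, and 27, then join
# the labels into a transition string such as "OW17-OB22-OB27".

def bmi_category(bmi):
    """Map a BMI value to one of the five weight-category labels."""
    if bmi < 18.5:
        return "UW"   # underweight
    elif bmi < 25:
        return "HW"   # healthy weight
    elif bmi < 30:
        return "OW"   # overweight
    elif bmi < 40:
        return "OB"   # obese
    return "SO"       # severely obese (illustrative cutoff)

def transition(bmi_by_age):
    """Build a transition label from BMI reported at ages 17, 22, 27."""
    return "-".join(bmi_category(bmi_by_age[a]) + str(a) for a in (17, 22, 27))

print(transition({17: 27.1, 22: 31.4, 27: 33.0}))  # OW17-OB22-OB27
```

The same labeling scheme then lets transition prevalences be tabulated with a simple frequency count over all retained participants.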
Results: At age 17, 2.3% were underweight (UW17), 70.6% healthy weight (HW17), 15.0% overweight (OW17), 4.6% obese (OB17), and 7.5% severely obese (SO17). The most prevalent transitions for the UW17 group were UW17-HW22-HW27 (48.3%), UW17-UW22-UW27 (19.6%), and UW17-UW22-HW27 (16.1%); for the HW17 group, HW17-HW22-HW27 (42.6%), HW17-HW22-OW27 (18.6%), and HW17-OW22-OW27 (15.8%); for the OW17 group, OW17-OB22-OB27 (25.9%), OW17-OW22-OW27 (23.3%), and OW17-OW22-OB27 (20.3%); for the OB17 group, OB17-OB22-OB27 (36.0%), OB17-OW22-OB27 (14.2%), and OB17-OW22-OW27 (13.2%); and for the SO17 group, SO17-OB22-OB27 (33.8%), SO17-SO22-SO27 (19.0%), and SO17-OB22-SO27 (16.5%). Key socio-demographic variables (i.e., sex, race/ethnicity, average income, and age-27 educational status) and/or their interactions associated with the most prevalent transitions were also identified.
Conclusions: BMI transition patterns explained the underlying mechanisms of variations in BMI prevalence across ages, including factors associated with those transitions. While ages 17-27 is a critical period in which accelerated weight gain occurs, changes in BMI may vary substantially across population subgroups, requiring further study of weight-related behaviors.
31. Comparison of Body Mass Index Weight Categories with Frequency and Length of Combat Deployment in the U.S. Military, 2001-2011
Diana D. Jeffery, Ph.D.; Department of Defense, Defense Health Agency
Background: Maintaining military height/weight standards is essential for readiness and fitness, most critically for combat operations. Limited research is available on how combat deployments affect weight status. The Millennium Cohort Study found no significant weight gain among military personnel deployed at least once to Iraq or Afghanistan between 2001 and 2006. Another study found that male Navy personnel deployed to Kuwait and Iraq between 2005 and 2008 had significant weight gain for deployments longer than 7.6 months.
Objective: This study seeks to describe the relationship between Body Mass Index weight categories and the number and length of combat deployments over a decade (2001 – 2011), as well as predictors of weight categories among those combat deployed.
Methods: Data from the 2011 Health Related Behaviors Survey of Active Duty Military Personnel (HRBS) were used (N = 39,877). Following bivariate analysis, multinomial regression analysis was used to identify predictors of weight categories.
Results: Significantly more obesity was found among the 29.1% (weighted) of active duty personnel who were combat deployed for a total of 13 or more months. In independent models, the number of combat deployments and the total length of combat deployment were highly predictive of weight categories after controlling for other variables. The strongest predictors of higher BMI categories were sex (male) and having to lose weight prior to joining the military; moderate predictors included age, branch of service, race/ethnicity, and marital status; the weakest predictors were anxiety level and deployment-related stress; depression level was nonsignificant.
Conclusion: Frequency and length of combat deployments predict exceeding military weight/height standards, a finding with implications for force readiness. Amounts and type of food available, and ability to maintain usual fitness regimes during combat deployment warrant investigation, preferably using longitudinal, cohort research designs. The HRBS provides a useful data source for measuring behavioral health in the U.S. military.
32. Has the Relationship Between Obesity and Disability Changed from 1999 to 2013?
Tapan Mehta, Valeriya Semenova; University of Alabama at Birmingham
Background: Recent studies have suggested that medical advances for obesity-associated cardiometabolic disruptions (e.g. hypertension) have reduced the obesity-associated mortality burden. It is plausible that the reduction in obesity-associated mortality may have increased the obesity-associated disability burden over time.
Objective: To evaluate whether the obesity-associated disability, where the disability is due to obesity-related conditions, has increased between 1999 and 2013.
Methods: Adult population data from the National Health Interview Survey (NHIS) 1999-2013 were used. Our outcome of interest was the presence of a functional limitation or activities of daily living limitation due to stroke, hypertension, diabetes, weight, or any of these chronic conditions. We stratified adult data by age categories: 18 to 39 (young); 40 to 59 (middle-aged); 60 and above (older). Odds ratios (ORs) were estimated by body mass index [BMI, kg/m2] category (i.e., normal weight: 18.5 to less than 25 [reference category], overweight: 25 to less than 30, grade 1 obesity: 30 to less than 35, grade 2-3 obesity: greater than or equal to 35), separately for each age category, using logistic regression. The interaction term between BMI category and year, with year treated as a continuous variable, was tested to evaluate whether the obesity-associated disability had changed.
Results: In young adults, for every unit increase in year between 1999 and 2013, the grade 1 and grade 2-3 obesity ORs decreased by 1.23% (p=0.53) and 0.77% (p=0.66), respectively. In middle-aged adults, for every unit increase in year, the grade 1 and grade 2-3 obesity ORs decreased by 1.33% (p=0.15) and 2.31% (p=0.009), respectively. In older adults, the grade 1 obesity OR decreased by 0.53% (p=0.47), while the grade 2-3 obesity OR increased by 0.76% (p=0.36).
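Because year was treated as continuous, the per-year percentage changes compound over the 14-year span. A quick arithmetic sketch (illustrative, not from the study) of the cumulative effect:

```python
def cumulative_or_change(pct_per_year, years):
    """Compound a per-year percent change in an odds ratio over a span
    and return the total percent change for that span."""
    return ((1 + pct_per_year / 100) ** years - 1) * 100

# Middle-aged grade 2-3 obesity: the OR fell 2.31% per year; over the
# 14 years from 1999 to 2013 that compounds to roughly a 28% decline.
print(round(cumulative_or_change(-2.31, 14), 1))
```

This is simple compounding of the reported per-year estimates, not a re-analysis of the NHIS data.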
Conclusion: Our results suggest that the obesity-associated disability burden did not increase between 1999 and 2013 and in fact may have decreased for middle-aged adults with grade 2-3 obesity.
33. Optimizing Flavonoid Dietary Intakes Data Transformation for Stronger Population Subgroup Mean Comparisons: What We Eat in America, National Health and Nutrition Examination Survey (NHANES) 2007-2008
Theophile Murayi; Joseph D. Goldman; Rhonda Sebastian
Background: In What We Eat in America (WWEIA), NHANES 2007-2008, flavonoid intakes follow a positively skewed distribution and include an excess of zeros. Reference mean and median estimates were calculated using the Weibull empirical distribution function moments formula and a distribution shape of one. However, distributions required data transformation to approximate normality prior to population subgroups mean comparisons.
Objective: Determine optimal transformations to normalize flavonoid intake distributions for stronger population subgroup mean comparisons.
Methods: Isoflavones, anthocyanidins, flavan-3-ols, flavanones, flavones, flavonols, and total flavonoids mean intakes estimated from day-one dietary intakes of 2,662 men and 2,758 women age 20+ years participating in WWEIA, NHANES 2007-2008 were compared across population subgroups. In a first approach, we searched for distributions having reference Weibull distribution function equivalent mean and median estimates using a well documented empirical relationship between the Weibull distribution shape and its optimal Box-Cox transformation lambda in three schemes of sample scaling. In a second approach, we log-transformed individual flavonoid group intakes after scaling the sample with 1 to 6% of its data range.
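The two transformation approaches can be sketched roughly as follows. This is a simplified Python illustration with toy data; the shift percentage and Box-Cox lambda shown are arbitrary, and the study's Weibull-based scaling schemes are not reproduced here.

```python
import math
import statistics

def shifted_log(values, shift_pct):
    """Second approach (simplified): log-transform after adding a small
    shift, taken as a percentage of the data range, so that zero
    intakes remain defined."""
    shift = (max(values) - min(values)) * shift_pct / 100
    return [math.log(v + shift) for v in values]

def boxcox(values, lam):
    """First approach (simplified): Box-Cox transform with parameter
    lam; lam = 0 reduces to the plain log transform."""
    if lam == 0:
        return [math.log(v) for v in values]
    return [(v ** lam - 1) / lam for v in values]

# Toy positively skewed sample with excess zeros, like flavonoid intakes
intakes = [0, 0, 0, 1, 2, 2, 5, 9, 20, 60]
transformed = shifted_log(intakes, shift_pct=1.0)

# The mean-to-median ratio moves toward one after transformation
print(round(statistics.mean(intakes) / statistics.median(intakes), 2))
print(round(statistics.mean(transformed) / statistics.median(transformed), 2))
```

In practice a formal normality test (as in the abstract, Kolmogorov-Smirnov) would be applied to the transformed values rather than the mean-to-median ratio alone.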
Results: Normalization of flavonoid distributions was imperfect after Box-Cox or log transformation. Normal probability-probability (P-P) plots appeared near linear, and mean-to-median ratios were closer to one, but the Kolmogorov-Smirnov test of normality remained significant. However, with isoflavones, anthocyanidins, and flavones, normal P-P plots lined up along the plot diagonal and mean-to-median ratios came closer to one only with log transformation. Many subgroup mean comparisons changed significance following data transformation.
Conclusions: WWEIA, NHANES isoflavone, anthocyanidin, and flavone intake distributions approach normality with log-transformation. For the remaining flavonoid classes and total flavonoids, Box-Cox transformation seems preferable. Until a better data treatment method becomes available, analysts should scale NHANES flavonoid intakes with optimal data range percentages before applying Box-Cox or log transformation to minimize subgroup mean comparison errors.
34. A Model of Environmental Correlates of Adolescent Obesity in the United States
Kathryn C. Nesbit, PT, DPT, DSc; Thubi H. Kolobe, PhD; Susan B. Sisson, PhD; Isabella Ghement, PhD
Background: The prevalence of adolescent obesity is an ongoing public health concern. In the United States, 16.4 percent of children ages 10-17 are obese according to the 2007 National Survey of Children’s Health (NSCH). Adolescent obesity is influenced by physical and social attributes of both the proximal and the distal environment.
Objective: To test a conceptual model of proximal (home) and distal (neighborhood) environmental correlates of adolescent obesity.
Methods: This was a descriptive, cross-sectional study, using the 2007 NSCH, of 39,542 children ages 11-17. Structural equation modeling was used to test the fit of the model, identify direct and indirect effects of environmental correlates and determine reliabilities for latent constructs.
Results: The model fitted the data well (Root Mean Square Error of Approximation, 0.038 (90% CI: 0.038 to 0.039); Comparative Fit Index, 0.950; Tucker-Lewis Index, 0.934). Access to Physical Activity, Social Capital, Home Sedentary Behavior, and Physical Activity had direct effects on obesity (-0.053, p < 0.001; 0.017, p < 0.001; 0.110, p < 0.001; -0.119, p < 0.001). Neighborhood Condition had indirect effects on obesity through Access to Physical Activity, Social Capital, and Home Sedentary Behavior (-0.001, p = 0.009; 0.032, p < 0.001; 0.044, p < 0.001). Access to Physical Activity had indirect effects on obesity through Physical Activity, Social Capital, and Home Sedentary Behavior (-0.013, p < 0.001; -0.005, p < 0.001; -0.005, p = 0.003). Home Sedentary Behavior had an indirect effect on obesity through Physical Activity (0.052, p < 0.001).
Conclusions: Results of this model fit to US population-based data suggest that interventions should target not only sedentary behavior and physical activity, but also parent perceptions of safety, access to physical activity and the neighborhood condition.
35. Comparisons of sodium intake and sodium density, WWEIA, NHANES 2001-02 versus 2011-12
Courtney Winston Paolicelli, Tashara Marie Leak, Joseph Goldman, Alanna Moshfegh
Background: Since 1980, federal dietary guidelines have urged Americans to reduce their sodium consumption as a means to decrease hypertension risk; however, overconsumption of sodium continues to be a public health concern.
Objective: This study explored changes in Americans’ sodium intake and dietary sodium density (mg/1000kcal) between 2001-02 and 2011-12. Because of the relationship between sodium intake and hypertension control, the study also sought to detect changes in dietary sodium density among hypertensive adults between these two time periods.
Methods: What We Eat in America, NHANES day 1 dietary data for individuals 2 years+ from 2001-02 (n=9032) and 2011-12 (n=7932) were analyzed for sodium intake and density. NHANES Blood Pressure Questionnaire data were used to identify adults (20 years+) with hypertension and taking prescribed medication for hypertension.
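The sodium density metric is simple arithmetic; as a sketch (the energy value below is hypothetical, chosen only to illustrate the unit conversion):

```python
def sodium_density(sodium_mg, energy_kcal):
    """Dietary sodium density, expressed in mg per 1000 kcal."""
    return sodium_mg / (energy_kcal / 1000)

# Hypothetical day-1 intake: 3499 mg sodium on 2126 kcal of energy
print(round(sodium_density(3499, 2126)))  # 1646 mg/1000 kcal
```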
Results: Mean daily sodium intake and density were 3499 mg and 1646 mg/1000 kcal in 2001-02 and 3478 mg and 1659 mg/1000 kcal in 2011-12, respectively. When evaluating mean daily sodium intake by 12 age/gender groups, there were no significant differences between 2001-02 and 2011-12. However, during this same period, sodium density increased in adolescent females (age 12-19 years) and young adult males (age 20-39 years) (by 138 and 116 mg/1000kcal/day, respectively; p < 0.001), and decreased in females age 60 years and older (by 107 mg/1000kcal/day, p < 0.001). Among adults reporting they had been told they had hypertension and adults reporting taking prescribed hypertension medications, sodium intake and density remained unchanged for both groups.
Conclusions: Over the past decade, Americans’ sodium intake has remained high, and sodium density has increased in certain groups. Among adults who could greatly benefit from blood pressure reduction, sodium density did not change between 2001-02 and 2011-12, thus warranting further investigation into means of assisting hypertensive individuals to improve their dietary choices.
36. Beverages: Contribution to energy and preferences among U.S. children
Elizabeth A. Parker, John C. Clemens, Alanna J. Moshfegh; USDA, ARS, Beltsville Human Nutrition Research Center, Beltsville, MD 20705
Background: Few studies have examined beverage intake among children by race/ethnicity.
Objective: Examine beverage contribution to energy intake and compare beverage choices among US children.
Methods: Nationally representative one-day dietary data from What We Eat in America, NHANES 2011-2012 were analyzed to determine beverage intake among US children age 2-19 years by four race/ethnicity groups (white, n=690; black, n=936; Hispanic, n=963; Asian, n=375). Beverage intake data were obtained from an in-person 24-hour dietary recall collected using the USDA 5-step Automated Multiple-Pass Method. Beverages were categorized into eight groups: 100% juice, coffee/tea, diet drinks, fruit drinks, milk, soft drinks, other sweetened drinks, water.
Results: Beverages contributed nearly 1 out of every 5 calories consumed among children regardless of race/ethnicity. Although water was the top reported beverage among all children, percent reporting and mean daily intake of beverages varied by race/ethnicity. Reporting of water was higher in Asians vs. blacks (85 vs. 71%; P < 0.01). Fruit drinks were more likely and milk was least likely to be reported by blacks as compared to whites, Hispanics, and Asians (48 vs. 25, 35, and 22%; and 36 vs. 63, 53, and 66%, respectively; P < 0.01). The percentage of whites and Hispanics reporting soft drinks was almost twice that of Asians (40 and 41 vs. 22%, respectively; P < 0.01). Mean daily beverage intake was greatest for whites (1585 g) compared to other groups (black, 1283 g; Hispanic, 1369 g; Asian, 1256 g; P < 0.01).
Conclusion: Beverage contribution to total energy intake was similar among children by different racial/ethnic groups; however, beverage choices varied and should be taken into consideration when examining diet quality. This study was funded by the USDA Agricultural Research Service.
37. Perception of Weight Status in U.S. Children and Adolescents Aged 8-15 Years, 2005-2012
Neda Sarafrazi, Ph.D.; Jeffery P. Hughes, MPH; Lori Borrud, Ph.D.; Vicki Burt, ScM, RN; Ryne Paulose, Ph.D.; National Center for Health Statistics, Centers for Disease Control and Prevention, Hyattsville, MD, USA
Weight misperception in children and adolescents may lead to unhealthy or excessive weight control practices among overweight, underweight, or normal weight individuals. Accurate self-perception of weight status has been linked to appropriate weight control behaviors in youth. Differences by gender, race, and Hispanic origin in the prevalence of weight misperception among children and adolescents were examined. We used data from 6156 children and adolescents aged 8-15 years who participated in NHANES 2005-2012. Accuracy of weight status perception was determined by comparing self-reported weight status (overweight, about right, and underweight) to actual weight status based on measured height and weight. The prevalence of weight status misperception was estimated and compared among age, gender, and race and Hispanic origin groups. Nearly 30% (95% CI: 28.55%-31.82%) of children and adolescents misperceived their weight status. Weight status misperception was more common among boys (32.3%) than girls (28.0%; p<0.05) and among children (33.1%) than adolescents (27.4%; p<0.01). The prevalence of weight status misperception was lower among non-Hispanic white children and adolescents (27.7%) than among non-Hispanic black (34.4%; p<0.01) and Mexican-American (34.0%; p<0.01) children and adolescents. Weight status misperception also varied by body mass index: 87.4% of normal weight youth, 76.0% of overweight youth, and 41.5% of obese youth considered themselves to be about the right weight. Current estimates of weight status perception in children and adolescents may be useful in developing obesity prevention programs and in promoting healthy weight control behaviors.
38. Incidence of Cardiovascular and Pulmonary Complications Following Trauma for Obese versus Non-Obese Patients
Samantha Stokes, Joseph Yoder, MS, Demetria Bayt, MPH, Teresa Bell, PhD
BACKGROUND: Obesity rates have persistently increased in the United States over the last 20 years. It is generally accepted that obesity puts patients at increased risk for cardiovascular and pulmonary complications after surgical procedures. In the setting of trauma, however, findings have been mixed regarding whether obesity puts patients at risk for additional complications.
OBJECTIVE: The aim of this study was to identify whether obese patients suffer an increased risk of cardiac and pulmonary complications following intervention for traumatic injury using national data.
METHODS: A retrospective analysis was conducted using data from The National Trauma Data Bank (NTDB) from 2011 through 2012. Hierarchical regression modeling was performed using GLIMMIX to determine the probability of experiencing one of the following complications: acute lung injury/ARDS, cardiac arrest, deep vein thrombosis (DVT), pulmonary embolism (PE), and unplanned intubation. Patients were excluded from analysis based on age, injury type, injury mechanism, and injury severity score (ISS). Hospital was included as a random effect to account for clustering of observations.
RESULTS: A total of 517,008 patients (obese: 22,042; not obese: 487,034; missing/unknown obesity information: 7,932) were included for analysis. Obese patients had an increased risk of cardiovascular and pulmonary complications after trauma as compared to non-obese patients (OR=2.0301; 95% CI: 1.9312-2.1340). Patients with blunt injuries had a decreased risk of cardiovascular and pulmonary complications following trauma compared to those with penetrating injuries (OR=0.746; 95% CI: 0.7131-0.7817). Patients aged 18-40 were also at decreased risk of suffering these complications (OR=0.5510; 95% CI: 0.5324-0.5702).
CONCLUSION: Based on our findings, obesity is associated with an increased risk for cardiovascular and pulmonary complications following trauma. Furthermore, patients with penetrating injuries and patients over age 40 are also at increased risk for these complications following trauma.
39. Number of Monitoring Days Needed for Accurate Population Weekly Activity Estimates
Dana L. Wolff-Hughes1, James J. McClain1, Kevin W. Dodd2, David Berrigan1, and Richard P. Troiano1: 1 Division of Cancer Control and Population Sciences, National Cancer Institute, Bethesda, MD; 2Division of Cancer Prevention, National Cancer Institute, Bethesda, MD
Background: Previous research suggests multiple monitoring days are necessary to accurately estimate activity at the individual level; however, there is limited research at the population level.
Objective: To determine number and distribution of days required to produce stable population level estimates of a true 7-day mean for common accelerometer-derived activity parameters.
Methods: Data from the 2003-2006 NHANES were used in this analysis. The sample included 986 youth (6-19 years) and 2532 adults (20 years and older) with 7 days of 10 or more hours of wear. Accelerometer parameters included sedentary, light PA, moderate-to-vigorous PA (MVPA), and bouted MVPA minutes; and total activity counts/d. Twenty-five deletion schemes were bootstrapped, with 250 samples drawn for each scheme. The deletion schemes included keeping: 1-6 random days; Saturday plus 1-5 random weekdays (WD); Sunday plus 1-5 random WD; 1 random weekend day (WE) plus 1-5 WD; and both WE plus 1-4 random WD. To compare outcomes, the relative difference between the true 7-day mean and each deletion scheme mean was calculated as ((deletion mean - true mean)/true mean) x 100.
Results: Adult MVPA is used as an example; similar trends were observed across age groups and variables, except adult sedentary time, which was stable across deletion schemes. Adults' MVPA for any 1-6 random days ranged from 19.9(0.3) to 20.0(0.2) min/day, with a mean bias ranging from -0.2(1.6)% to 0.1(0.8)%. For deletions with non-random components, MVPA ranged from 18.5(0.1) to 20.6(0.0) min/d, with a mean bias ranging from -7.2(0.5)% to 3.1(0.0)%.
Conclusions: Simulation data suggest that stable estimates of population means can be obtained from a single randomly selected day of monitoring from a sampled week. However, bias is introduced in population estimates based on non-random selection of weekend days. Purposeful sampling of adults which forces inclusion of weekend data in analyses should be discouraged.
40. Obesity Increases Length of Hospital Stay for Surgical Trauma Patients
Joseph Yoder, MS; Demetria Bayts, MPH; Samantha Stokes; Teresa Bell, PhD
Introduction: The United States has the highest rate of obesity in the world – it is currently estimated that one third of U.S. adults are obese. The impact of obesity on outcomes after traumatic injury has been inconsistent. The aim of this study is to evaluate the association between obesity and length of hospital stay after blunt or penetrating trauma. We hypothesized that morbidly obese patients would have longer lengths of hospital stay.
Methods: We performed a retrospective analysis of trauma patients using the National Trauma Data Bank (2011-2012). Patients with a recorded comorbidity of obesity were identified. Patients were excluded from the cohort if they were less than 18 years of age, were dead on arrival, or were transferred from another facility. Patients had to have a valid ICD-9 procedure code. Covariates of interest were age, gender, race, Injury Severity Score (ISS), Glasgow Coma Scale, systolic blood pressure on presentation at the emergency department, comorbidities, and complications. Generalized linear modeling with a negative binomial distribution was performed to determine the effect of obesity on length of stay while controlling for blunt versus penetrating injuries as well as surgical procedures.
Results: Overall, morbidly obese patients had hospital stays that were, on average, 1.093 days longer than those of non-morbidly obese patients. Morbidly obese patients with procedures to more than one area of the body had stays 1.329 days longer than non-morbidly obese patients with procedures to more than one area; similarly, stays were 0.898 days longer when comparing patients with single-area procedures.
Conclusion: Morbid obesity increases a patient’s length of hospital stay, even when controlling for the comorbidities and complications associated with obesity. This trend is suggested for both patients with procedures to more than one area of the body, and those patients with single area operations.
POSTER SESSION II: Tuesday, August 25, 2015 – 1:00PM-5:30PM
41. Variation in Urinary Flow Rates According to Demographic Characteristics and Body Mass Index in NHANES: Potential Confounding of Associations between Health Outcomes and Urinary Biomarker Concentrations
Sean M. Hays,1 Lesa L. Aylward,2,3 and Benjamin C. Blount4: 1 Summit Toxicology, LLP, Lyons, Colorado, USA: 2 Summit Toxicology, LLP, Falls Church, Virginia, USA: 3 National Research Centre for Environmental Toxicology (Entox), University of Queensland, Brisbane, Queensland, Australia: 4 National Center for Environmental Health, Centers for Disease Control and Prevention, Atlanta, Georgia, USA
Background. Urinary analyte concentrations are affected both by exposure level and by urinary flow rate (UFR). Systematic variations in UFR with demographic characteristics or body mass index (BMI) could confound assessment of associations between health outcomes and urinary biomarker concentrations.
Objectives. We assessed patterns of UFR (ml/hr) and bodyweight-adjusted UFR (UFRBW, ml/hr-kg) across age, sex, race/ethnicity, and BMI category in the NHANES 2009-2012 datasets.
Methods. Geometric mean (GM) UFR and UFRBW were compared across age-stratified (6-11, 12-19, 20-39, 40-59, and 60+ years) subgroups (sex, race/ethnicity, and BMI category). Patterns of analyte urinary concentration or mass excretion rates (ng/hr and ng/hr-kg BW) were assessed in example age groups for the case study chemicals bisphenol A and 2,5-dichlorophenol.
Results. UFR increased from ages 6 to 60 and then declined with increasing age. UFRBW varied inversely with age. UFR, but not UFRBW, differed significantly by sex (males greater than females after age 12). Differences in both metrics were observed among categories of race/ethnicity. UFRBW, but not UFR, varied inversely with BMI category and waist circumference in all age groups. Urinary osmolality increased with increasing BMI. Case studies demonstrated different exposure-outcome relationships depending on exposure metric. Conventional hydration status adjustments did not fully address the effect of flow rate variations.
Conclusions. UFR and UFRBW exhibit systematic variations with age, sex, race/ethnicity, and BMI category. These variations can confound assessments of potential exposure-health outcome associations based on urinary concentration. Analyte excretion rates are valuable exposure metrics in such assessments.
42. Exposure to ambient particulate matter (PM2.5) air pollution and biomarkers of cardiovascular disease in adult National Health and Nutrition Examination Survey participants
Arvind Dabass*, Evelyn Talbott, Judy Rager, Ravi Sharma, Gary Marsh, Arvind Venkat, Fernando Holguin; University of Pittsburgh, United States
Background and Objectives: Exposure to particulate matter (PM2.5) is associated with increased cardiovascular morbidity and mortality, a hypothesized biological mechanism being systemic inflammation and oxidation. Our objective was to examine the association of ambient PM2.5 with markers of systemic inflammation and oxidation in adult National Health and Nutrition Examination Survey (NHANES) participants.
Methods: NHANES data (2001-08) were merged with meteorological data from CDC WONDER and air pollution data from the downscaler Community Multiscale Air Quality model for each census tract in the mainland United States. The effects of short-term (lags 0 to 3) and long-term (30- and 60-day moving averages and annual average (anavg)) PM2.5 levels on C-reactive protein (CRP, n=16160), white blood cells (WBC, n=16136), fibrinogen (n=2461), and homocysteine (n=11224) were analyzed using multiple linear regression, adjusting for demographic and cardiovascular risk factors, maximum apparent temperature, and ozone. SAS SURVEYREG was used to account for the complex survey design of NHANES. Stratified analyses were conducted by obesity, diabetes, hypertension, and smoking status.
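The lagged and moving-average exposure metrics can be sketched as follows. This is an illustrative Python sketch with hypothetical PM2.5 values; the study derived its exposures from census-tract modeled data rather than a simple daily series.

```python
def lagged_value(series, day_index, lag):
    """PM2.5 `lag` days before the exam day (lag 0 = exam day itself)."""
    return series[day_index - lag]

def moving_average(series, day_index, window):
    """Mean PM2.5 over the `window` days ending on day_index."""
    vals = series[day_index - window + 1 : day_index + 1]
    return sum(vals) / len(vals)

# Hypothetical daily PM2.5 (microg/m3), last entry = exam day
pm25 = [8.0, 12.0, 9.0, 11.0, 10.0, 14.0, 13.0]
exam = len(pm25) - 1

print([lagged_value(pm25, exam, lag) for lag in range(4)])  # lags 0-3
print(moving_average(pm25, exam, 7))  # 7-day moving average
```

The 30- and 60-day moving averages and the annual average in the abstract are the same computation over longer windows.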
Results: Overall, we found no statistically significant positive association of either short- or long-term PM2.5 exposure with any of the biomarkers after controlling for confounders. However, we found evidence suggesting stronger associations in participants with obesity, diabetes, or hypertension and in smokers. For example, in diabetic participants, for every 10 microg/m3 increase in anavg PM2.5 (adjusted for short-term exposure to PM2.5 and ozone), there was an increase of 36.9% (0.1%, 87.2%) in CRP. For smokers, there was a significant increase of 2.6% (0.1%, 5.1%) in homocysteine for every 10 microg/m3 increase in lag-zero PM2.5.
Conclusions: Subsets of individuals within a nationally representative sample manifest increases in markers of systemic inflammation and oxidation in response to ambient PM2.5 exposure. Additional work in individuals with metabolic syndrome would be important, as would confirmation of this finding in other cohorts.
43. Bootstrapping the tails: analysis of an inhalation toxicity data distribution
Eugene Demchuk, Jedidiah S. Snyder, Andrew J. Prussia: Agency for Toxic Substances and Disease Registry, Atlanta, GA 30333
Statistical estimates from the tails of a peaked distribution often carry large uncertainties, because finite samples cover the tails sparsely. As a result, even bootstrapped tail statistics may be biased by non-normality. In the present work, several techniques that extrapolate tail statistics from the entire sample were compared: parametric fitting, smooth bootstrapping, and bootstrapping from kernel density estimates. These methods were applied to toxic load exponent (TLE) data derived by probit regression of mortality counts caused by exposure to noxious gases: Y=b0+b1*ln(C)+b2*ln(t), where C is concentration, t is exposure duration, and b1/b2 is the TLE. A total of 127 TLEs were collected from 271 Acute Exposure Guideline Levels (AEGL) documents and the literature. The TLEs followed a lognormal distribution, albeit approximately: as the sample size increased, the TLEs failed the log-normality test (in resampling from the kernel density, for Shapiro-Wilk/Francia alpha=0.05: beta=0.5 for a sample size of 103, beta=0.05 for 292). Thus, parametric estimates of tail percentiles were deemed biased. The smoothed bootstrap performed better, although uncertainty remained about optimal kernel selection. Results were obtained by bootstrapping from the Gaussian-kernel density with Silverman's bandwidth and studentized confidence intervals. By this method, 90% of TLEs were confined between 0.73 (95% CI: 0.66-0.83) and 3.72 (95% CI: 3.06-4.48). These estimates are proposed as revised health-protective defaults for concentration-time extrapolation from short-to-long and long-to-short durations, respectively. Currently, the AEGL Committee relies on TLE defaults of 1 and 3, estimated 30 years ago from only 20 chemicals. Based on the present study, they cover approximately 70% of the TLE distribution, lying at the 21st and 90th percentiles, respectively. Thus, the current TLE defaults may fall short of being protective of public health.
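The TLE definition and the smoothed-bootstrap idea can be sketched as follows. This is a simplified Python illustration with toy data; the study's studentized confidence intervals and actual TLE dataset are not reproduced.

```python
import random
import statistics

def toxic_load_exponent(b1, b2):
    """TLE from the probit coefficients in Y = b0 + b1*ln(C) + b2*ln(t)."""
    return b1 / b2

def silverman_bandwidth(data):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel."""
    n = len(data)
    sd = statistics.stdev(data)
    q1, _, q3 = statistics.quantiles(data, n=4)
    return 0.9 * min(sd, (q3 - q1) / 1.34) * n ** (-0.2)

def smoothed_bootstrap_percentile(data, pct, n_boot, rng):
    """Smoothed bootstrap of a percentile: resample with replacement,
    jitter each draw with Gaussian-kernel noise, and take the median
    of the per-resample percentile estimates."""
    h = silverman_bandwidth(data)
    estimates = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(data) + rng.gauss(0, h) for _ in data)
        estimates.append(sample[round(pct / 100 * (len(sample) - 1))])
    return statistics.median(estimates)

print(toxic_load_exponent(4.2, 2.1))  # 2.0
```

Drawing a data point at random and adding Gaussian noise with bandwidth h is equivalent to sampling from the Gaussian-kernel density estimate, which is what makes the bootstrap "smoothed."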
44. Analyses of NHANES WWEIA Food Consumption Outliers: Effects upon EPA’s Acute Dietary Risk Assessments
The US Environmental Protection Agency (EPA) uses food consumption data from national surveys to perform its pesticide dietary risk assessments. The Agency recently updated its Dietary Exposure Evaluation Model (DEEM-FCID) to incorporate recent food consumption data from the Centers for Disease Control and Prevention’s National Health and Nutrition Examination Survey (NHANES), What We Eat in America (WWEIA) survey (http://www.epa.gov/pesticides/science/deem/). Previously, stakeholders have voiced concerns that national food consumption surveys provide insufficient data to reliably estimate high-end dietary exposures for certain age groups. Some commenters also suggested that some respondents over-report consumption, leading to over-estimates of dietary exposure, and that the agency should also perform its assessments without such data (http://www.epa.gov/oppfead1/trac/science/trac2b055.pdf). Since the amount of data collected is fixed, the Agency considered several issues: (i) Are there ‘outliers’ in the food consumption data? (ii) Are any extreme values likely due to measurement error rather than reflecting actual events? (iii) What is the effect of such extreme values on the agency’s dietary exposure assessments? After examining the earlier food consumption data, the agency concluded that over-reporting was likely to affect mean consumption values and to have a lesser effect on the extreme tails of the exposure distribution. We present some preliminary findings of our review of the WWEIA Food Commodity Intake Data (FCID), 2005-2015 data (http://fcid.foodrisk.org/). We used scatterplots and various statistical criteria, including Dixon-type tests (e.g., Q3 + 2 x (Q3-Q1)), to detect high-end outliers. We present some charts to depict results for select commodities. As before, it is not apparent that any high-end values are measurement error.
We present examples to illustrate how ‘outliers’ may or may not affect dietary exposure estimates depending on the registered food uses and corresponding residue inputs. Such information may be used to characterize dietary exposure estimates.
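The fence criterion quoted in the methods, Q3 + 2 x (Q3-Q1), can be sketched as a simple screen for high-end consumption values. The function name and example inputs below are illustrative, not taken from the EPA analysis:

```python
import numpy as np

def upper_fence_outliers(consumption, k=2.0):
    """Flag values above the upper fence Q3 + k*(Q3 - Q1); k=2 matches
    the criterion cited in the abstract. Returns (boolean mask, fence)."""
    x = np.asarray(consumption, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    fence = q3 + k * (q3 - q1)
    return x > fence, fence
```

For example, in the sample [1, 2, 3, 4, 100] the fence is 4 + 2*(4-2) = 8, so only the value 100 is flagged as a high-end outlier.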
45. Fast Food: a Source of Exposure to Phthalates and Bisphenol A in a Nationally Representative Sample
Cassandra A. Phillips, Ami R. Zota, and Susanna D. Mitro
Background: Certain phthalates and bisphenol A (BPA) are industrial chemicals widely used in consumer products that can adversely impact human health. Diet is hypothesized to be a major source of exposure but little is known about the impact of specific food sources.
Objective: This study aims to investigate the association between fast food consumption and human exposure to high-molecular-weight phthalates (di(2-ethylhexyl) phthalate (DEHP) and diisononyl phthalate (DINP)) and BPA in the general population.
Methods: We pooled data from the 2003-2010 National Health and Nutrition Examination Survey (NHANES), including participants who provided a spot urine sample in which phthalate metabolites and BPA were measured and who completed a 24-hour dietary recall survey. We calculated kilocalorie intake of fast food, and modeled fast food consumption in the prior 24 hours dichotomously and categorically as the percent of total daily calories (0%, 1-50%, 51-100%). We evaluated associations between fast food consumption and urinary chemical concentrations using linear regression. We also examined the association between the percent of total daily calories from each fast-food food group (dairy, eggs, grains, meat, other) and phthalate metabolites.
Results: Over 90% of participants had detectable levels of phthalate metabolites and BPA in their urine. Those who had eaten fast food had significantly higher urinary metabolite levels of DEHPsum [percent change (95% CI): 18.63% (10.38%, 27.50%)] and DINP [percent change (95% CI): 32.17% (20.04%, 45.52%)], but not BPA [percent change (95% CI): 2.36% (-2.59%, 7.56%)], compared to those who had not eaten fast food in adjusted models. For DEHPsum and DINP, there was evidence of a positive dose-response effect (p for trend less than 0.0001). Meat and grain consumption were associated with elevated DEHPsum and DINP levels.
Conclusion: Findings suggest that fast food consumption, specifically meat and grains, may be an important exposure source for DEHP and DINP, but not BPA, among the general population.
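"Percent change" estimates of the kind reported above are typically obtained by regressing log-transformed urinary concentrations on exposure and back-transforming the coefficient. A minimal sketch of that back-transformation, assuming a Wald-type CI (`beta` and `se` here are hypothetical regression outputs, not values from the study):

```python
import math

def percent_change(beta, se, z=1.96):
    """Back-transform a coefficient from a regression on a log-transformed
    outcome into a percent change with a 95% Wald CI:
    percent change = (exp(beta) - 1) * 100."""
    to_pct = lambda b: (math.exp(b) - 1) * 100.0
    return to_pct(beta), (to_pct(beta - z * se), to_pct(beta + z * se))
```

Under this convention, a coefficient of about 0.171 back-transforms to roughly the 18.6% change reported for DEHPsum.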
46. Are there inequalities in receiving physician’s advice on diet and exercise?
Nasar U Ahmed1, PhD; Anshul Saxena2, BDS, MPH; Hafiz Khan3, PhD: Corresponding author: 1Department of Epidemiology: email@example.com: 2Department of Health Promotion & Disease Prevention: 3Department of Biostatistics: Florida International University: Miami, FL 33199
Background: Physicians’ advice plays a crucial role in motivating patients toward healthy behavior. Authoritative instruction on diet and exercise is needed to address chronic diseases that are prevalent among minorities.
Objective: To examine disparities in receiving physician counseling on diet and exercise among the physically able US adult population.
Method: National Health Interview Survey 2000, 2005, 2010 and 2011 data were examined using logistic regression models including race/ethnicity, gender, age, education, income, insurance and body mass index (weight status). The outcome variable was receipt of physician advice on diet, exercise or both among adults who visited a physician/healthcare provider in the past 12 months. Taylor Series Linearization was used to adjust for the design effects of complex survey data.
Results: As compared with Whites, Blacks were 27% less likely to receive advice on exercise in 2000; however, these differences were not observed in later years, and Blacks were 42% more likely to receive advice on diet in 2011. Hispanics were 26% less likely to receive advice on diet in 2000, while in 2011 they were 18% more likely to receive advice on diet and 27% more likely to receive advice on exercise. As compared with women, men were 34% less likely to receive advice on exercise and 19% less likely to receive advice on diet; although these gaps narrowed over the years, the disparities remained significant.
The prevalence of physician counseling on diet and/or exercise increased with age and weight status, and these patterns remained similar over the years. The uninsured were consistently 35%-65% less likely to receive counseling on either diet or exercise; the higher the educational level, the higher the likelihood of receiving counseling on diet and exercise, a trend more prominent in recent years.
Conclusion: Although changing over time, the prevalence of physician advice on diet and exercise showed significant disparities across gender and racial/ethnic lines, even after accounting for socioeconomic and insurance status.
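The Taylor Series Linearization mentioned in the methods can be sketched for its simplest case, the variance of a weighted proportion. This illustration treats observations as sampled with replacement and ignores the stratification and clustering that a full NHIS analysis would incorporate:

```python
import numpy as np

def taylor_linearized_prop(y, w):
    """Weighted proportion with a Taylor-series-linearized standard error.
    p = sum(w*y)/sum(w); variance is estimated from linearized residuals
    u_i = w_i*(y_i - p)/W, ignoring strata and clusters (a simplification)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    W = w.sum()
    p = (w * y).sum() / W
    u = w * (y - p) / W          # linearized residuals
    n = len(y)
    se = np.sqrt(n / (n - 1) * (u ** 2).sum())
    return p, se
```

With equal weights this reduces to the familiar sqrt(p(1-p)/(n-1)); the design-effect adjustment comes from letting the survey weights vary and, in a full implementation, summing residuals within sampling clusters.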
47. Being overweight at 12 months of age in Hispanic versus Non-Hispanic children: Comparing clinical and social factors
Sahel Hazrati, Suchitra K. Hourigan, Kathi Huddleston, Wendy S.W. Wong, John Niederhuber. Inova Translational Medicine Institute.
Background: Prevalence of childhood obesity is higher among Hispanics compared to Non-Hispanic Whites.
Objective: Compare the clinical and social factors associated with being overweight in Hispanic versus Non-Hispanic children.
Methods: Over 2000 families have been recruited in the prenatal stage at the Inova Translational Medicine Institute, Falls Church, VA. Clinical and social data were collected during pregnancy and at birth, and parents completed surveys every 6 months. Weight for length at 12 months was calculated using WHO gender-specific growth charts, and overweight was defined as weight for length greater than or equal to the 85th percentile. Factors associated with overweight among Hispanics and Non-Hispanics were analyzed using Chi-square and two-sample t-tests. Trio-based (mother, father and newborn) whole-genome sequences were generated for the individuals in the study. Self-reported ethnicity was validated using estimated ancestral admixture proportions of four super populations using the 1000 Genomes reference markers.
Results: Of the 587 children, 12% were Hispanic and 88% Non-Hispanic; 217 (37%) in total were overweight at 12 months of age. Of the overweight children, 31 (14.3%) were Hispanic. Clinical and social factors significantly associated (p less than 0.01) with being overweight in Hispanic children were (1) early solid food introduction, (2) juice and sugar-sweetened beverage consumption, (3) lower maternal education, and (4) gestational diabetes. Factors significantly associated with being overweight in Non-Hispanics were (1) increased weight gain during pregnancy, (2) lower maternal confidence score, and (3) higher perceived stress score. Admixture proportions showed strong congruence with self-reported ethnicity (M=0.85, SD=0.20).
Conclusions: Different factors were found to be associated with being overweight in Hispanic and Non-Hispanic children. Knowledge of these differential factors may allow targeted anticipatory guidance to different populations at an early age for modifiable factors such as dietary interventions, to reduce the risk of obesity later in life.
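WHO growth standards, such as the weight-for-length charts used above, are published as LMS parameters, so the 85th-percentile overweight cutoff corresponds to an LMS z-score of about 1.0364. A sketch of that classification follows; the L, M, S values in the test are placeholders, not actual WHO table entries:

```python
import math

def lms_zscore(x, L, M, S):
    """LMS z-score: z = ((x/M)**L - 1) / (L*S), or log(x/M)/S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

def is_overweight(weight_for_length, L, M, S, cutoff=1.0364):
    """Flag weight-for-length at or above the 85th percentile (z >= 1.0364),
    using LMS parameters looked up from the WHO gender-specific tables."""
    return lms_zscore(weight_for_length, L, M, S) >= cutoff
```

In practice the L, M, S triple is read from the WHO table row matching the child's sex and length before computing the z-score.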
48. Racial and ethnic disparities in cost-related medication non-adherence among cancer survivors: 2006-2013
Min Jee Lee, University of South Carolina
Background: Prescription drug costs have continued to increase rapidly in the past few decades and by 2012, drug expenditure accounted for 9.7% of total National Health Expenditure (NHE) in the USA. Cancer survivors are delaying or avoiding necessary care due to costs. Medication non-adherence is one of the important aspects of deferred treatment.
Objectives: This study was conducted to estimate the prevalence of cost-related medication non-adherence (CRN) by race and ethnicity and factors associated with CRN among cancer survivors.
Methods: Using the 2006-2013 National Health Interview Survey (NHIS), we examined self-reported CRN in the past 12 months among cancer survivors. In total, 472,941 adults were identified, including 11,185 (2.36%) cancer survivors. Descriptive statistics and multiple logistic regression models were used to identify factors affecting CRN among cancer survivors.
Results: In a nationally representative sample of 11,185 cancer survivors, 1,423 (12.72%) reported CRN. Cancer was reported by 84.1% of younger non-Hispanic white survey participants, 9.2% of younger African-Americans, and 6.7% of younger Hispanics. Cancer was reported by 88.0% of older non-Hispanic whites, 7.4% of older African-Americans, and 4.6% of older Hispanics. Among older cancer survivors, African-Americans were 2.65 times more likely (95% CI, 1.75 to 4.03) and Hispanics were 1.97 times more likely (95% CI, 1.26 to 3.07) than whites to report CRN. Among younger cancer survivors, Hispanics were 1.61 times more likely (95% CI, 1.23 to 2.09) than whites to report CRN.
Conclusion: Significant racial and ethnic disparities in CRN were evident among cancer survivors. Given increasing prescription drug expenditure, it is important to closely monitor CRN in high-risk subgroups. Further studies are warranted to establish effective interventions in this vulnerable population.
49. Health risk behaviors and their relationship with medical care delay for a child among adults who have a child with a disability: Findings from the 2013 National Health Interview Survey
Meenhye Lee and Dr. Chang Park
Background: Little attention has been given to the impact of adults’ health risk behaviors on the health care of their child with a disability. National survey data such as National Health Interview Survey (NHIS) has not been used to explore intra-household effects even though indicators are available.
Objective: To compare the prevalence of selected health risk behaviors – alcohol use, smoking, physical activity, and hours of sleep – of the adults having a child with and without a disability as well as to examine whether the adults’ health risk behaviors predict the medical care delay of the child using nationally representative sample in the United States.
Methods: Children with a disability were identified from the person file of the 2013 NHIS as children experiencing any limitation due to a chronic condition. We merged four linked 2013 NHIS data sets – household, person, family, and adult – to obtain the study sample (n=10,160). A multi-behavior variable was created by summing the values of the four health risk behaviors. Explanatory variables included marital status, poverty, and the four health risk behaviors. Medical care delay for the child in the past 12 months was the outcome predicted in a logistic regression model, adjusted with sampling weights.
Results: Three health risk behaviors were significantly different between adults having a child with and without a disability: physical activities (OR=.80), hours of sleep (OR=.71), and multi-behavior (OR=.63). Only poverty (OR=.74) was associated with medical care delay for adults having a child with a disability.
Conclusion: Adults living with a child having a disability were less likely to engage in physical activities, sufficient sleep, and multiple healthy behaviors. The reasons for poverty affecting medical care delay among the children with a disability should be understood in order to address health disparities for adults having a child with a disability.
50. Examining the Relationship Between Food Security, Nutrition Intake, and Obesity among Native Hawaiians and Pacific Islanders, Latinos, and African Americans
Liki Porotesano, Christina Holub, PhD, MPH, & Adrian Bacong: San Diego State University Research Foundation, San Diego State University Institute for Behavioral and Community Health
Background: Previous studies have found positive associations between food insecurity and obesity. Low-income communities lack full-service grocery stores, are nutrient deprived, and exhibit greater food insecurity. Native Hawaiians and Pacific Islanders (NHPIs), Latinos, and African Americans (AA) have historically experienced greater income and food security disparity.
Objective: To examine the relationship between food security, nutrition intake, and obesity among NHPIs, Latinos, and AAs in California.
Methods: Data were analyzed from the 2011-2012 California Health Interview Survey (CHIS). Analysis included Latinos (n=6453), African Americans (n=2079), and Pacific Islanders (n=73). Chi-square tests, one-way analysis of variance, and logistic regression were used to examine differences in food security, nutrition, and obesity.
Results: Fruit (F(2, 8602)=15.45, p less than 0.001), vegetable (F(2, 8602)=187.91, p less than 0.001) and soda (F(2, 8602)=1074.24, p less than 0.001) consumption significantly differed among Latinos, PIs, and AAs. Post-hoc analysis revealed that Latinos consumed significantly more fruit (p less than 0.001) and soda (p less than 0.001), but fewer vegetables (p less than 0.001), than AAs. Fast food consumption did not differ by race. Obesity prevalence was highest among AAs (38.0%), followed by NHPIs (37.0%) and Latinos (35.3%); differences between groups were not significant. Food security was highest among NHPIs (66.7%), followed by AAs (58.0%) and Latinos (54.9%); differences were significant by racial group (chi-squared (4, N=4937)=35.32, p less than 0.001). After controlling for age, gender, and income, food security differences by race remained significant (p=0.016), and overweight/obesity prevalence differences became significant by race (p=0.001).
Discussion: Obesity prevalence is high among Latinos, PIs, and AAs. These groups may experience similar food security and nutrition challenges that result in higher obesity. Future studies should examine factors that these populations have in common and address them accordingly to target health disparities among these minority groups.
51. Clinical Quality Indicators of Asian American, Native Hawaiian, and other Pacific Islander Patients Seen at HRSA-Supported Community Health Centers
Alek Sripipatana, PhD, MPH: Director, Division of Data and Evaluation: Health Resources and Services Administration: Bureau of Primary Health Care: Office of Quality and Data: U. S. Department of Health and Human Services: Quyen Ngo-Metzger, MD, MPH: Director, U.S. Preventive Services Task Force Program: Agency for Healthcare Research and Quality: U. S. Department of Health and Human Services
Background: Regional studies on Native Hawaiians and Pacific Islanders (NHPIs) indicate that NHPIs have some of the worst health-related outcomes of any racial/ethnic group, such as infant mortality and obesity-related comorbidities. These findings are understated or hidden in national studies because NHPI and Asian American (AA) data are aggregated due to small sample sizes. Therefore, national efforts to flag important NHPI health issues and identify opportunities for intervention are hampered.
Objective: Explore differences in selected health characteristics, access to care, and health outcome indicators between NHPI and AA patients served at HRSA-supported community health centers using nationally representative data.
Methods: Data come from two data sources collected by HRSA: 2009 Health Center Patient Survey and 2013 Uniform Data System. Key study variables included: overweight/obesity; controlled hypertension, controlled diabetes, low birth weight; and delay/non-receipt of needed medical care. ANOVA was performed to explore mean differences among race/ethnicities for each of the clinical indicators. Subsequent Scheffe’s Tests were performed to compare clinical indicator means between NHPIs and AAs.
Results: Nearly 70% of NHPI patients had hypertension, 20% had diabetes, and 81% were overweight/obese; whereas, 30% of AAs had hypertension, 15% had diabetes, and 30% were overweight/obese. Additionally, 50% of NHPI patients had controlled blood pressure and 23% of NHPIs had HbA1c less than 9% at their last clinic visit. In contrast, 62% of AA patients had controlled blood pressure and 45% had HbA1c less than 9%. The proportion of low birth weight babies was 7.2% for NHPI patients and 6.9% for AAs.
Conclusions: NHPI patients seen at community health centers generally experience worse health compared with AAs. Aggregating their data downplays NHPI health issues. Our findings suggest that targeted interventions around healthy eating and active living may improve the health, quality of life, and even birth outcomes of NHPI patients.
52. Physical activity and sedentary behavior among U.S. native and foreign-born adolescents using NEXT Generation Health Study data
Wynette Williams, B.A., Denise Haynie, Ph.D., & Kaigang Li, Ph.D. Eunice Kennedy Shriver National Institute of Child Health and Human Development: Division of Intramural Population Health Research, Health Behavior Branch
Background: Excessive sedentary behavior (SB) and physical inactivity are common among immigrant adolescents and may increase their risk of obesity and chronic diseases.
Objective: To examine associations of immigrant status with physical activity (PA) and SB among U.S.-born and foreign-born adolescents.
Methods: Participants (n=2,475) were included from Wave I of the NEXT Generation Health Study, a nationally representative 10th-grade cohort. Moderate-to-vigorous physical activity (MVPA) was defined as meeting or not meeting the recommended 60 minutes daily on at least 5 days per week. Vigorous physical activity (VPA) was defined as meeting or not meeting the recommended 2 hours of vigorous exercise weekly. SBs were reported for a typical weekday and weekend, covering (1) internet and cell phone use and (2) viewing TV, DVDs, and videos. Immigrant status categories were: (1) adolescent (and parents) not born in the U.S., (2) adolescent born in the U.S. but 1 or 2 parents not born in the U.S., and (3) adolescent (and parents) all born in the U.S. (used as referent). Logistic regression (for MVPA and VPA) and linear regression (for SB) analyses were performed in SAS, controlling for demographics and complex survey design variables.
Results: Adolescents who were not born in the U.S. were less likely to meet recommended VPA compared to the referent group (adjusted odds ratio [AOR]=0.60, 95% confidence interval [CI] 0.37 to 0.99). Adolescents whose parents had a high school (AOR=0.39, 95% CI 0.27 to 0.55) or some college (AOR=0.45, 95% CI 0.35 to 0.56) education were less likely to meet VPA recommendations than those with college-educated parents. Compared to females, males were more likely to meet MVPA (AOR=1.94, 95% CI 1.53 to 2.46) and VPA (AOR=1.85, 95% CI 1.31 to 2.63) recommendations. No significant associations between immigrant status and SBs were found.
Conclusion: Underlying cultural mechanisms may influence adolescent VPA engagement. Cultural assimilation may be considered to tailor PA intervention programs to immigrant adolescents.
53. The Relationship Between Access to Health Care and Race in Loss to Follow-up After Newborn Hearing Screenings
Charles Auerbach, PhD; Lynn Spivak, PhD; Wendy Zeitlin, PhD; Susan Mason, PhD
Background: Poor hearing in children can result in long-term deficits in cognitive and language development, intelligibility, social adjustment and behavior. About 95.9% of infants born in the US are screened annually in Universal Newborn Hearing Screening (UNHS) programs. Despite the success of UNHS, there is a long-standing problem of lack of follow-up among those needing additional evaluation. The latest figures show 36.9% of those not passing initial screenings are lost to follow-up (LTF) or lost to documentation.
Objective: The objective of this research was to identify biopsychosocial factors associated with LTF between screenings and diagnosis of hearing loss.
Methods: This longitudinal study included telephone interviews with 200 parents in a large metropolitan area whose children were referred for additional testing after initial hospital screenings at birth. Measures included an assessment of social supports and depression with regard to parenting a child with a health impairment or disability. Data were obtained six to nine months after initial screening to determine follow-up status. Eleven out of 41 infants ultimately referred for diagnosis were LTF. Logistic regression was conducted to identify a constellation of risk factors related to LTF at diagnosis.
Results: The best fitting model (Pseudo R2=0.337; p=0.00) indicated that the greatest risk factors for being LTF were having fewer health professionals with whom to consult (OR=0.17, p=0.16) and being Black (OR=12.02, p=0.05). Post-estimation testing for collinearity and goodness-of-fit indicated a good fitting model.
Conclusions: LTF appears to be a complex problem related to access to health care, and social work interventions may play an integral role in connecting families, particularly minorities, with needed care. More research is needed to better understand the role race plays in LTF. Replication with a larger sample is necessary to tease out additional factors related to LTF.
54. Burden of Mental Illness on Hospital and Patient Outcomes among Asthma Hospitalizations
Benjamin J. Becerra, MS, MPH, DrPH(c)*: Doctoral Student: School of Public Health, Loma Linda University: Jim E. Banta, PhD, MPH: Associate Professor: School of Public Health, Loma Linda University: Mark Ghamsary, PhD: Associate Professor: School of Public Health, Loma Linda University: Leslie R. Martin, PhD, MA: Professor: Department of Psychology, La Sierra University: Nasia Safdar, MD, PhD: Associate Professor: Department of Medicine; University of Wisconsin, Madison: *Corresponding author: Benjamin J. Becerra: 24951 N Circle Drive: Loma Linda, CA 92350: Phone: 951-552-5927. Email: firstname.lastname@example.org.
Background: Current empirical evidence demonstrates the comorbidity of asthma and mental illness, though limited studies have evaluated the patient and hospital outcomes associated with such conditions.
Objective: Evaluate the burden of this comorbidity on health resource utilization and patient disposition among asthma hospitalizations.
Methods: A secondary analysis of the Nationwide Inpatient Sample (2009-2011) was conducted, with study population of asthma hospitalizations limited to those 18 years of age and older. ICD-9-CM codes were utilized to identify asthma and mental illness discharges. Length of stay (LOS) was defined as number of days stayed in the hospital, total charges were inflation-adjusted, and patient disposition was defined as routine versus not routine. All analyses were survey-weighted and adjusted for patient and hospital characteristics.
Results: Approximately 29% of the asthma hospitalizations reported mental illness. Any mental illness was associated with increased LOS in the hospital (10% increase), charges (9% increase), and lower odds of routine disposition (21% decrease). Substance-related disorder also increased LOS in the hospital (4% increase), charges (7% increase), and lower odds of routine disposition (29% decrease). Age-stratified analyses further demonstrated similar trends among most age groups.
Conclusions: The results of this study complement the extant literature by demonstrating the burden of the asthma-mental health nexus on health resource utilization and patient outcomes. The increased LOS, total charges, and decreased likelihood of routine disposition associated with mental illness highlight the need for integrated care to address mental illness during primary care visits.
55. Ambulatory Care Utilization by Provider Type: A Baseline to the Patient Protection and Affordable Care Act
Heather Brom, MS, RN, CNP & Pamela Salsberry, PhD, RN
Background: Nurse practitioners (NPs) and physician assistants (PAs) are in demand to fill the gap of primary care providers (PCPs) brought about by an aging population, increasing access to care through the Patient Protection and Affordable Care Act (PPACA), and a decreasing physician supply. It is anticipated that major changes in ambulatory practice will be required in the coming years to meet these demands.
Objective: To describe provider utilization patterns (NPs, PAs, physicians) in the ambulatory setting as a baseline prior to the implementation of the PPACA.
Methods: The 2010 public files of the National Ambulatory Health Care Survey and Outpatient Department file of the National Hospital Ambulatory Health Care Survey were used in these analyses. Only visits where the patient was seen solely by the NP, PA, or physician were included. Weighting and variance guidelines were employed. Descriptive statistics and comparisons with chi-square and t-test analyses were conducted.
Results: 92.71% of patient visits were to a physician, 4.71% to an NP, and 2.58% to a PA. NPs and PAs were more likely than physicians to care for patients in non-metropolitan statistical areas (F=402.05, p=0.0001). NPs cared for a higher percentage of patients on Medicaid, while physicians cared for a higher percentage of patients with private insurance (F=347.93, p=0.001). Younger patients were more likely to see an NP (F=51.49, p=0.0001). NPs saw a higher percentage of female patients (X2=50.81, p=0.0002). Visits to NPs were more frequently for preventative care compared to physicians and PAs (X2=120.84, p=0.0035). There was no difference in the average number of chronic conditions patients had based on the provider seen.
Conclusion: In 2010, NPs and PAs were providing primary care services in more rural settings, to younger patients, and to those without private insurance. These results provide a baseline for evaluating practice changes post-PPACA implementation.
56. National Survey of Prison Health Care: An Overview
Karishma Chari1, Carol DeFrances1, Alan E. Simon1, and Laura Maruschak2: 1 – National Center for Health Statistics: 2 – Bureau of Justice Statistics
Background: National data on the capacity of state and federal prison systems to deliver health care to inmates are lacking. To address this gap, the National Center for Health Statistics and the Bureau of Justice Statistics developed and conducted the National Survey of Prison Health Care (NSPHC). The purpose of NSPHC was to identify how health care information is maintained within prison systems, gather data on the structure and provision of health care services, and determine the appropriate methods for future correctional health surveys.
Objective: This poster presents information collected in the NSPHC, highlighting intake testing procedures conducted in prisons.
Methods: Key respondents were identified within all 50 state Departments of Corrections and the Federal Bureau of Prisons (BOP), and semi-structured telephone interviews were conducted that focused on health care topics such as contracting and provision of services, intake testing for infectious diseases and health risks, and admissions/custody numbers for calendar year (CY) 2011.
Results: The majority of states (45) participated in NSPHC, but the BOP did not. For CY 2011, the numbers of responding states that tested prisoners for specific health conditions upon intake were: Hepatitis A (30), B (32) and C (36); mental health conditions (45), suicidality (45) and traumatic brain injury (TBI) (23); and electrocardiogram (29), lipids (30) and tuberculosis (45).
Conclusions: Although all participating states conducted mental health, suicidality, and tuberculosis testing upon intake, some, but not all states tested for Hepatitis A, B or C; TBI; and cardiovascular risk using an electrocardiogram and lipid profile. The results also show that collecting data on medical and mental health services from state prison systems by mail survey is feasible for future data collections.
57. Physician Assistants and Advanced Practice Registered Nurses Work for California
Kristine A Himmerick, PA-C, PhD-candidate; Jill G Joseph, MD, PhD; Janice Bell, PhD; Perri Morgan, PA-C, PhD
Background: Federal and state funded programs seek to increase the availability of primary care resources to populations in need, but they often develop programs without data that adequately reveal the local differences in primary care resources and need. Physician assistants (PAs) and advanced practice registered nurses (APRNs) may be one solution to the complex problem of filling the demand capacity gap in primary care.
Objective: This study describes the staffing distribution of PAs and APRNs and tests the association between measures of patient need for health services and the variability of PA and APRN distribution in California’s licensed community health centers.
Methods: A secondary data analysis of California’s Office of Statewide Health Planning and Development (OSHPD) data was performed on 1000 individual community health clinics licensed in 2012. Descriptive statistics, pairwise correlation, and ordinary least squares regression analyses determined the associations between clinic-level patient characteristics (race, ethnicity, age, sex, income, and insurance status) and the percentage of PA and APRN full-time equivalents (FTEs) employed by clinics.
Results: On average, PAs and APRNs comprised half of the California community health clinic workforce. Clinics with a higher percentage of PA and APRN FTEs were significantly more likely to serve patient populations that were more Black, female, young-adult, uninsured, and low-income.
Conclusions: Results suggest that California primary care clinics employ a higher percentage of PAs and APRNs than national averages (48% vs 29%). In addition PAs and APRNs may be employed in clinics where patient need for health services is greatest. Training and policy interventions rooted in knowledge of existing PA/APRN use may help meet the primary care needs of California and the nation.
58. Procedures Performed in Primary Care
Kristine A Himmerick, PA-C, PhD-candidate: Richard Dehn, MPA, PA-C: Bettie Coplan, MPAS, PA-C: Roderick Hooker, MBA, PhD, PA-C
Background: Primary care accounts for 45% of all outpatient encounters, yet little is known about the role and extent of provider activities. Economists view physician assistant (PA), advanced practice registered nurse (APRN), and physician roles as somewhat interchangeable in primary care, though others argue differences exist in the types of visits and procedures performed.
Objective: This study explores the range and relative frequency of procedures performed by PAs, APRNs, and physicians in primary care to determine if differences exist between the three provider types.
Methods: Billing data from 52 community healthcare clinics were reviewed for a 12-month period in 2013. Descriptive and correlative analyses determined the association between patient characteristics, clinic characteristics, and procedures performed, and the provider of care (physician, PA, or APRN). Procedures were identified using an a priori list of 112 CPT codes, including suturing, casting, IUD insertion, and dermatologic procedures.
Results: Half of all providers in the CHCs studied were physicians and half were physician assistants (PAs) or advanced practice registered nurses (APRNs). Half of all procedures were performed by PAs and APRNs. PAs performed more procedures per provider than physicians or APRNs. The most common procedures were respiratory procedures and fetal monitoring. Ob/Gyn procedures were more common for APRNs than for PAs and physicians, while dermatology procedures were more common for PAs and physicians than for APRNs.
Conclusions: Results demonstrate that physicians, PAs and NPs all perform procedures in primary care ambulatory settings, and some differences exist in the quantity and type of procedures performed by the three types of providers. Results add to the growing body of evidence that PAs, NPs, and physicians may be fulfilling distinct roles in the primary care setting. The results also have implications for efficiently allocating resources for primary care needs.
59. Where do they go? Participants who stopped using adult day services centers and residents moving out of residential care communities: 2012 National Study of Long-Term Care Providers (NSLTCP)
Adrienne L. Jones
Background: Adult Day Services Centers (ADSCs) and Residential Care Communities (RCCs) are two types of community-based organizations that provide long-term care services. ADSCs and RCCs provide care to persons who have limited capacities and are unable to live alone, and they enable social engagement, health care delivery, and caregiver respite.
Objective: The purpose of this analysis is to establish the whereabouts of ADSC and RCC participants once they discontinue use of these community-based care organizations: what type of care they are most likely to transition to, and whether cost was a factor in the decision to withdraw from their current health care delivery systems.
Methods: This is a descriptive analysis of long-term care participants who utilized ADSCs and RCCs. Estimates were derived from the 2012 National Study of Long-Term Care Providers (NSLTCP), conducted by the Centers for Disease Control and Prevention’s National Center for Health Statistics. ADSC and RCC data were collected using mail questionnaires.
Results: Of those participants who stopped using ADSCs, 70% collectively went to another ADSC or an RCC/assisted living facility, a nursing home (48.3%), or a private residence (44.4%). One-quarter of ADSC participants went to some other place, and 14% discontinued service due to cost. Forty-one percent of RCC participants transitioned to a nursing home; more than one-third were admitted to another RCC; 34% moved to a private residence; 12% went to some other place; and 19% discontinued care due to cost.
Conclusions: ADSC and RCC participants had similar transitions after discontinuing use of these services. Most participants transitioned to another ADSC or RCC, a nursing home, or a private residence when leaving their current care organization. Collectively, more than one-third of all care was discontinued because of cost.
60. Effects of sensory difficulties on healthcare expenditure among community-dwelling older adults in the United States
Szu-Hsuan Lin, MPH; Omolola Adepoju, PhD
Background: Sensory difficulties are common among older adults, and the prevalence is shown to increase with age. Understanding how much older adults with sensory difficulties spend on health care services is important to better plan for future care of this demographic.
Objective: To assess changes in sensory status over a two-year period and to examine whether the development of sensory difficulties in Year 2 is associated with greater healthcare expenditures among community-dwelling older adults over age 65.
Methods: Six two-part models examined the association between sensory difficulties and healthcare expenditures for six different types of health services. Multivariate logistic regressions (first part) estimated the probability of having any (non-zero) expenses as a function of sensory difficulty status, followed by multivariate linear regressions (second part) to assess the relationship between medical expenditures and sensory difficulties among respondents who reported non-zero medical expenditures.
Results: Of 5,856 respondents, 12.5% reported having sensory difficulties in Year 2, representing an estimated 16 million older adults in the United States. In the first-part logistic regression models, older adults who reported sensory difficulties in Year 2 were more likely to have non-zero expenditures for total health services and outpatient visits (p=0.020 and p=0.002, respectively). Older adults who reported sensory difficulties in Year 2 also had higher levels of expenditures for total health services, hospital outpatient visits, and other medical equipment/supplies (p=0.017, p=0.006, and p=0.006, respectively).
Conclusion: Older adults who reported sensory difficulties in Year 2 were more likely to have higher health care expenditures for selected services compared with those who did not report sensory difficulties in Year 2. Future studies should consider medical expenditures in subsequent years as well as changes in sensory difficulty status.
61. Federally Qualified Health Centers in Urban and Rural Settings
Brandon Rose, Dr. Alek Sripipatana.
Background: Providing equitable access to healthcare is one of the Health Resources and Services Administration’s (HRSA) overarching objectives. Many factors lead to inequitable access to healthcare services. HRSA’s approach to improving health equity is to ensure access to quality health services for uninsured, geographically isolated, and medically vulnerable populations.
Objective: To determine the impact of healthcare services in urban and rural health centers (HCs). The researchers sought to investigate potential differences in Clinical Quality Measures (CQMs) between HCs in rural settings and their urban counterparts, and to identify potential factors that facilitate or inhibit health equity.
Methods: Data were from the 2013 Uniform Data System (UDS), an annual reporting requirement for all HRSA-supported HCs under Section 330 of the Public Health Service Act. Eight CQMs collected in the UDS (i.e., Access to Prenatal Care, Low Birth Weight (LBW), Tobacco Use Assessment, Tobacco Cessation Counseling, Colorectal Cancer Screening, Cervical Cancer Screening, Blood Pressure Control, and Diabetes Control) were compared between rural and urban HCs using SAS 9.3.
Results: Of 1,202 HCs that reported to the 2013 UDS, 620 (52%) self-identified as urban and 582 (48%) as rural. Preliminary results do not show any statistically significant differences in Blood Pressure Control or LBW between urban and rural HCs at the crude level. Further research is being conducted that will include statistical modeling to account for contextual factors to better identify areas for intervention.
Conclusion: There is a longstanding belief that rural populations are at a disadvantage when it comes to health equity. HRSA has successfully reduced that gap through its commitment to ensuring access to quality health services for geographically isolated populations. The findings from this and future research will assist BPHC’s Office of Quality and Data in understanding the key characteristics of HCs in unique geographical settings and in better identifying areas for intervention.
62. Who Said What: Patient Complaints Leading To an Opioid Prescription
Matt Yuen, MPH; Janice Probst, PhD.
Background: In 2010, the Centers for Disease Control and Prevention reported that enough prescription painkillers were prescribed to medicate every American adult for 1 month. It is estimated that 39% of all opioids are prescribed at emergency departments (ED).
Objective: We sought to identify the patient complaints and factors associated with receipt of an opioid prescription.
Methods: We conducted weighted cross-sectional analysis of the nationally representative 2007-2010 National Hospital Ambulatory Medical Care Survey-ED visits for adult patients. The most commonly abused opioids were identified from the National Institute of Drug Abuse. All patient visit complaints associated with an opioid prescription were determined. Regardless of whether an opioid prescription was given, any patient visit with such complaints was defined as the study population (n=104,110). Final analysis compared visits that did and did not receive an opioid prescription.
Results: Top complaints resulting in an opioid prescription were injury (16.2%), abdominal pain (11.8%), and back pain (9.8%). Visits for mouth-related complaints (versus torso pain) had higher odds of receiving an opioid prescription (OR: 2.744; 95%CI 2.739-2.749), as did individuals aged 36-50 years versus those over 65 years (OR: 2.506; 95%CI 2.503-2.509), visits occurring in the West versus the Northeast (OR: 1.825; 95%CI 1.823-1.826), and those privately insured versus publicly insured. A non-metropolitan visit was less likely to result in an opioid prescription than its urban equivalent (OR: 0.956; 95%CI 0.955-0.957).
Conclusions: Prior research has examined the diagnosis leading to an opioid prescription. However, no research has been done on the complaints of the patient leading to an opioid prescription. More research is needed to determine where opioid use is most likely and appropriate.
63. Prevalence of Coal Mine Dust Lung Disease among Coal Miners from Appalachian and Interior Coal Fields
Cara N. Halldin, Anita L. Wolfe, A. Scott Laney
Background: Inhalation of coal mine dust (CMD) can cause pneumoconiosis, a chronic, occupational lung disease. In recent years, coal workers’ pneumoconiosis prevalence and severity have increased among working central Appalachian (Kentucky, Virginia, and West Virginia) miners. Historically, former miners have not been systematically surveyed for pneumoconiosis, therefore little is known about this population.
Objective: To investigate prevalent lung disease among former coal miners in central Appalachia and other states.
Methods: During 2012 and 2013, the National Institute for Occupational Safety and Health’s mobile surveillance unit traveled to Appalachian and Interior coal mining regions* and offered former miners a chest radiograph and spirometry test. Radiographs and spirometry were classified according to International Labour Office standards for pneumoconiosis and American Thoracic Society lung function interpretative strategies, respectively. We calculated prevalence of pneumoconiosis and abnormal lung function and compared prevalences among former miners in central Appalachia to former miners from other states, using a modified Poisson regression estimating prevalence ratios (PR).
Results: We evaluated chest radiographs from 764 former miners with more than 10 years of tenure; 61 (8.0%) had pneumoconiosis. Of the 347 former miners who performed spirometry, 97 (28.0%) had abnormal lung function. Prevalence of pneumoconiosis and abnormal spirometry was significantly higher among central Appalachian former miners compared with former miners from other states (pneumoconiosis: 9.8% vs. 5.1%, PR=2.3, 95% CI: 1.4-4.0, tenure adjusted; abnormal spirometry: 33.2% vs. 18.2%, PR=1.8, CI: 1.2-2.7, tenure, smoking, and BMI adjusted).
Conclusion: Compared to former miners in non-central Appalachian states, pneumoconiosis and impaired lung function prevalence in former central Appalachian miners was significantly elevated. Pneumoconiosis is a progressive disease that can develop or be identified after a miner has left employment. Fully characterizing the scope of CMD-related respiratory morbidity requires ongoing surveillance of both actively working and former miners.
* Coal Mining Regions Defined: http://www.eia.gov/coal/review/html/fig1.cfm.
64. Acute Joint Pain among Adults Employed in U.S. Green Collar Jobs: Linking the National Health Interview Survey to the Occupational Information Network (O*NET)
Samuel R. Huntley, BS; Charles Chen, BS; Kevin J. Moore, BA; William G. LeBlanc, PhD; David Lee, PhD; Manuel Cifuentes, MD, ScD; Kristopher Arheart, EdD; Cristina Fernandez, MSEd; Laura A. McClure, MSPH; Sharon Christ, PhD; Lora E. Fleming, MD, PhD; Alberto J. Caban-Martinez, DO, PhD, MPH, CPH.
Background: Musculoskeletal disorders (MSDs) have well-documented associations with occupational ergonomic stressors such as repetitive motion, heavy lifting, and vibration. The introduction of new technologies and job requirements into the work environment may present new physical stressors contributing to MSDs. “Green collar” occupations are relatively new to the U.S. and focus on reducing the environmental impacts of economic enterprises. Despite the rapid growth of this new jobs sector, little attention has been given to the safety and health of this rapidly emerging workforce.
Objective: We describe the socio-demographic and work characteristics of U.S. green and non-green collar workers. We also estimate and compare the prevalence of self-reported acute joint pain between green and non-green collar workers.
Methods: Pooled data from the 2004-2012 National Health Interview Survey (NHIS) were linked to the Occupational Information Network (O*NET) to classify NHIS respondents as either “green” or “non-green collar” workers. Estimates of acute joint pain (i.e., symptoms of pain, aching, or stiffness at a joint in the 30 days prior to survey administration) were adjusted for the complex survey design and stratified by socio-demographic (i.e., age, gender, race, ethnicity, and geographic region) and workplace (i.e., number of employees at work) characteristics.
Results: Compared to non-green collar workers, green collar workers reporting acute joint pain were more likely to use special equipment (57.4% vs. 54.6%; p-value=0.02) and be state employees (33.5% vs. 29.1%; p-value=0.00). They were less likely to report visual impairment (43.8% vs. 47.1%; p-value=0.00) and hold more than one job (30.5% vs. 33.2%; p-value=0.03). No significant difference in acute joint pain was noted between green and non-green collar workers (28.0% vs. 27.5%; p-value=0.50).
Conclusions: Linkage of the NHIS and O*NET data provides unique occupational health surveillance opportunities and insights into potential disparities of occupational exposures and related conditions among evolving worker group duties.
65. Prevalence of Work-related Upper Extremity Joint Symptoms in U.S. Workers: National Health Interview Survey, 2006 and 2009
CC Ma1, JK Gu1, CM Burchfiel1, RG Dong2, LE Charles1
1 Biostatistics and Epidemiology Branch, Health Effects Laboratory Division, National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention, Morgantown, WV.
2 Engineering and Control Technology Branch, Health Effects Laboratory Division, National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention, Morgantown, WV
Background: Upper extremity musculoskeletal disorders have a negative impact on the economy and on workers’ wellbeing. Workers with these conditions require a median of 13 days to recuperate before returning to work, compared with 8 days for all work-related illnesses. However, the national prevalence of these disorders is not known.
Objective: This study aimed to estimate the prevalence of upper extremity joint symptoms in U.S. workers by occupation and industry.
Methods: Data were obtained from arthritis supplements to the National Health Interview Survey (NHIS) conducted in 2006 and 2009. Participants were 30,865 workers who were at least 18 years old and employed during the previous week. Upper extremity joint symptoms were defined as symptoms of pain, aching, or stiffness in or around a joint of the shoulder, elbow, wrist, or hand/finger in the previous 30 days that were not due to arthritis, gout, lupus, fibromyalgia, or injury. Prevalence and 95% confidence intervals (CI) were estimated across major occupations and industries using SUDAAN software that accounts for the complex sampling design of the NHIS.
Results: Among 30,865 workers, 2,341 were identified as having upper extremity joint symptoms. The overall prevalence of these symptoms was 7.7% among U.S. workers. The adjusted prevalence was 7.5% (95% CI: 7.0-7.9) in white-collar and 9.5% (8.6-10.6) in blue-collar workers. Among all occupations, the adjusted prevalence was highest in Construction workers (11.4%; 7.4-17.2), followed by Life, Physical, and Social Science workers (11.1%; 7.4-16.4). Among all industries, adjusted prevalence was highest in the Agriculture, Forestry, Fishing, and Hunting industry (10.4%; 6.9-15.5), followed by the Other Services industry (e.g., Repair and Maintenance, except Public Administration) (10.2%; 8.5-12.1).
Conclusions: Our results provide evidence that 7.7% (an estimated 11 million cases) of U.S. workers had work-related upper extremity joint symptoms in 2006 and 2009. Prevalence of these symptoms varied by occupation and industry.
66. Unmet Need for Dental Care among US Children with and without Disabilities in 2013
Walid A. Al-Soneidar, BDS, MHPA Candidate: Department of Health Policy and Administration, Washington State University, Spokane
Background: There is widespread recognition of special health care needs of children with disabilities in the medical literature, including oral public health and pediatric dentistry. However, there are no recent national estimates of unmet need for dental care among children with disabilities, and existing research does not control for important factors like health insurance coverage.
Objective: To evaluate dental care access disparities among children enrolled in special education.
Methods: The study uses data from the Child Supplement of the 2013 National Health Interview Survey (NHIS). A multiple logistic regression model was developed using the SAS v9.4 SURVEYLOGISTIC procedure to assess the effect of special education enrollment on reported unmet need for dental care, simultaneously controlling for factors such as race, ethnicity, age, gender, and health insurance coverage.
Results: Approximately 3.2 million children (or their surrogates) reported that they could not afford needed dental care in 2013. Controlling for race, ethnicity, age, gender, and health insurance coverage, children aged 2-17 with disabilities enrolled in special education were 1.5 times more likely to report unmet need than those children who were not enrolled in special education (AOR=1.5; 95% CI 1.1-2.2).
Conclusion: Children with disabilities appear to face financial barriers to obtaining needed dental services, and may also face other access barriers including special equipment, training, and service needs. Preliminary analyses also suggest that children enrolled in special education are likely to use dental services more frequently, leading to higher costs.
67. Oral health problems in a Uruguayan population of young people and adults: Findings from the 2011 National Uruguayan Oral Health Survey
Susana M. Lorenzo Erro1, Ramón Alvarez2, Anunzziatta Fabruccini3, Fernando Massa4, Patricia Olmos5, Mariana Musto6
1Service of Epidemiology and Statistics. Faculty of Dentistry. Department of Public Health. University of the Republic., Montevideo, Uruguay.
2Institute of Statistics. Faculty of Economics. University of the Republic. Service of Epidemiology. Department of Public Health. Faculty of Dentistry. University of the Republic. Montevideo, Uruguay.
3Service of Epidemiology and Statistics. Department of Public Health – Department of Paediatric Dentistry. Faculty of Odontology. Montevideo, Uruguay
4Service of Epidemiology and Statistics. Faculty of Dentistry. Department of Public Health. Institute of Statistics. Faculty of Economics. University of the Republic, Montevideo, Uruguay.
5Service of Epidemiology and Statistics. Department of Public Health. Faculty of Odontology, University of the Republic, Montevideo, Uruguay.
6Service of Epidemiology and Statistics. Faculty of Odontology. Department of Public Health. University of the Republic, Montevideo, Uruguay.
Background: Uruguay is a South American country located between Argentina and Brazil, with an area of 176,215 km2 and 3,334,052 inhabitants. Its economy is based on agriculture and services. The health epidemiological profile of Uruguayans is similar to that of developed countries: a high proportion of chronic degenerative diseases and a low level of infectious diseases. The survey was conducted in the framework of the National Health System, as there were no oral health data available.
Objective: To describe the oral health condition of a Uruguayan population of young people and adults.
Methods: Descriptive cross-sectional study, following the World Health Organization (WHO) guidelines for oral health surveys (1997). A stratified, double-phase cluster sampling design was adopted. For the first phase, a sample design from the National Statistics Institute was used. The following age groups were considered: 15-24, 35-44, 65-74. Participants: 563 from Montevideo and 922 from outside the capital.
Results: 54.2% (95% CI: 49%-60%) of the 35-44 and 65-74 groups had lost 10 or more teeth. Among adults and the elderly, moderate/severe periodontal disease was present in 21.8% (CI: 17.9%-26.3%) of cases and severe periodontal disease in 12% (CI: 6.8%-12.1%) of cases. Caries: mean Decayed, Missing, Filled Teeth (DMFT) values were 4.1 for young people, 15.2 for adults, and 24.1 for the elderly.
Conclusions: The Uruguayan oral health condition of young adults and adults was described for the first time at a national level. Dental caries was the most prevalent disease in young people; in adults, periodontal disease was also found. Tooth loss was the main oral condition among the elderly. These findings support the need to implement oral health dental care for young people and adults, the inclusion of oral health in national health programs and the implementation of a national oral surveillance system.
68. Perinatal Oral Health Education: Impact on Maternal Dental Services Usage in Pregnancy
Monique J. Williams, DDS, MBA, PhD student
Background: Annually, more than 6 million women become pregnant in the United States resulting in approximately 4 million live births. There is a strong correlation between oral inflammatory conditions and poor perinatal outcomes. These oral disease processes may contribute to the risk of premature labor, low birth-weight, infection or gestational diabetes in the newborn. Accessing timely oral health care is critical to achieving desired health outcomes for women and their unborn children.
Objectives: To determine the association between reported perinatal oral health counseling by a dentist or other health care worker during pregnancy and oral health services utilization during pregnancy; and to explore the sociodemographic related disparities concomitant to this association using data from the Pregnancy Risk Assessment Monitoring System (PRAMS).
Methods: Using survey data from South Carolina collected from black and white mothers who had recently given birth to a live-born infant in the 2009, 2010, and 2011 PRAMS survey (n=2849), bivariate analyses were conducted. Multivariable logistic regression models provided estimates of odds ratios (OR), 95% confidence intervals (CI) and Chi-Squared p-values.
Results: There were 749 participants (white n=373; black n=376) who reported a dental problem during pregnancy, of whom 52% of white and 48% of black women reported dental utilization during pregnancy. In contrast, there were 2,060 women who reported no dental complications during pregnancy (white n=1,226; black n=834), of whom 41% of white and 28% of black mothers received services. There was a statistically significant positive association between reported dental counseling and dental utilization during pregnancy (OR: .045; CI: .030, .069; chi-squared p < .0001).
Conclusion: There was a significant association between oral health counseling by a dentist or other health care worker and dental utilization during pregnancy. Oral health counseling should be supported as an effective and low-cost intervention to increase dental utilization during pregnancy.
69. Comparison of self-reported diagnosed prevalence rates of Chronic Obstructive Pulmonary Disease (COPD) via nationwide, community-based surveys in the United States
Kristy Baumgart, MPH; Karen Skinner, MPH; Carey Strader, MPH
Background: Methodological differences among various population-based surveys can lead to varying population rates for the same condition. Prevalence estimates determined by using self-reported data are often subject to more scrutiny as results can vary across study populations depending upon how survey questions are worded or how study populations are sampled.
Objective: The purpose of this analysis was to determine whether self-reported prevalence rates of diagnosed COPD vary across several nationally representative surveys in the United States.
Methods: Custom analyses of the 2005-2010 National Health and Nutrition Examination Survey (NHANES), 2012 National Health Interview Survey (NHIS), 2011 Behavioral Risk Factor Surveillance System (BRFSS), and 2010 National Health and Wellness Survey (NHWS) were conducted to determine the self-reported diagnosed prevalence rate of COPD among those aged 40 years and older in the United States. In general, study participants were asked via questionnaire whether they had been diagnosed by a physician with chronic bronchitis or emphysema. Those who answered affirmatively to either question were considered to have COPD. Study questions and time frames varied slightly by questionnaire. The resulting prevalence rates from each survey were compared via one-way analysis of variance (ANOVA) using IBM SPSS, Version 19.0 (Armonk, NY).
Results: The self-reported diagnosed prevalence rates of COPD via NHANES, NHIS, BRFSS, and NHWS were determined to be 8.6%, 8.3%, 8.3%, and 7.9%, respectively. There was no statistically significant difference among the survey results.
Conclusions: Prevalence estimates from the various surveys were very consistent, indicating that these surveys may be reliable for determining the prevalence of diagnosed COPD in the United States. Additionally, the results from this analysis indicate that respondents consistently recall whether a doctor has diagnosed them with either chronic bronchitis or emphysema.
70. Prevalence of Chronic Obstructive Pulmonary Disease among U.S. Working Adults: National Health Interview Survey Data 2009-2013
Brent Doney, Girija Syamlal
Background: COPD is a leading cause of morbidity and mortality in the United States. Fifteen million U.S. adults have been told by a health-care provider that they have COPD.
Objective: To estimate COPD prevalence by occupational groups among U.S. working adults.
Methods: The 2009-2013 National Health Interview Survey (NHIS) data were analyzed to estimate the prevalence of COPD (self-reported doctor-diagnosed emphysema or chronic bronchitis) among U.S. working adults. The associations between COPD and occupations were evaluated using logistic regression models.
Results: The estimated prevalence of COPD was 3.2% (95%CI, 3.1-3.4) among working adults 18 years or older. COPD prevalence was significantly higher among older workers (70+ years; 6.0%, 95%CI, 4.6-7.3) than among younger workers (18-39 years; 2.3%, 95%CI, 2.1-2.5), and among females (4.2%, 95%CI, 4.0-4.4) compared with males (2.3%, 95%CI, 2.1-2.5). COPD prevalence among non-Hispanic whites was 3.7% (95%CI, 3.5-3.9), followed by non-Hispanic blacks at 2.9% (95%CI, 2.5-3.2) and Hispanics at 2.0% (95%CI, 1.7-2.3). Among occupations, the estimated prevalence of COPD was high among workers in personal care and services (4.2%, 95%CI, 3.4-5.0), healthcare support (4.0%, 95%CI, 3.3-4.8), office and administrative support (4.0%, 95%CI, 3.5-4.5), and building and grounds cleaning and maintenance (3.8%, 95%CI, 3.1-4.6) occupations. Workers with more than 15 years on the job had higher COPD prevalence (3.7%, 95%CI, 3.3-4.1) than workers with less than 5 years (2.9%, 95%CI, 2.7-3.2).
Conclusions: Findings show that an estimated 4.5 million U.S. working adults currently have COPD and the prevalence varied by occupation. Although COPD is more prevalent among older adults, an estimated 1.4 million adults 18-39 years of age have COPD. Future epidemiologic research is needed to identify risk factors associated with COPD among younger workers.
71. Asthma prevalence among Hispanic adults in Puerto Rico and Hispanic adults of Puerto Rican descent in the United States: Results from two national surveys
Suad El Burai Felix, MPH; Cathy M. Bailey, MS; Hatice S. Zahran, MD, MPH
Background: Asthma prevalence has been shown to be higher among Hispanics of Puerto Rican descent compared with the prevalence among all Hispanics and non-Hispanics of the US population.
Objective: We aimed to assess whether asthma prevalence differs between Hispanic adults living in Puerto Rico and Hispanic adults of Puerto Rican descent living in the United States and whether the data source affects asthma prevalence estimation among these populations.
Methods: We used 2008-2010 Behavioral Risk Factor Surveillance System data, administered in Puerto Rico, for Hispanic adults living in Puerto Rico (Hispanics in Puerto Rico), and 2008-2010 National Health Interview Survey data for Hispanic adults of Puerto Rican descent living in the United States (Puerto Rican Americans). We used 95% confidence intervals to compare asthma prevalence between corresponding subgroups; non-overlapping confidence intervals indicate statistical significance. Chi-square tests and multivariate logistic regression were used to assess the association between current asthma status and socio-demographic factors and health risk behaviors within each Puerto Rican population.
Results: Current asthma prevalence among Hispanics in Puerto Rico (7.0% [6.4%-7.7%]) was significantly lower than the prevalence among Puerto Rican Americans (15.6% [13.0%-18.1%]). The prevalence among almost all socio-demographic and health risk subgroups of Hispanics in Puerto Rico was significantly lower than the prevalence among the corresponding subgroups of Puerto Rican Americans. Adjusting for potential confounders did not alter the results. Asthma prevalence was significantly associated with obesity among Puerto Rican Americans (aPR=1.5[1.1-2.0]), and among Hispanics in Puerto Rico was associated with obesity (aPR=1.6[1.3-1.9]), smoking (aPR=1.4[1.1-1.9]), and being female (aPR=1.9[1.5-2.4]).
Conclusions: Asthma was more prevalent among Puerto Rican Americans than Hispanics in Puerto Rico. Although the observed associations did not explain all variations in asthma prevalence between these two populations, they may lay the foundation for future research.
72. Chronic Obstructive Pulmonary Disease (COPD) Exacerbation and Factors Contributing to Hospital Length of Stay
Sara J Federman MSBS candidate
Background: COPD encompasses respiratory diseases, including emphysema, chronic bronchitis, and other pulmonary diseases, that make breathing difficult. Exacerbation of COPD contributes to a significant proportion of health care expenditure due to hospitalization. Hospital length of stay (LOS) for these patients is affected by many factors and comorbidities, but these factors are not well established.
Objective: To identify factors contributing to LOS for patients admitted with COPD exacerbation, using a large hospital admission database.
Methods: Factors that affect hospitalization and severity of disease were identified from the COPD literature. Hospital discharge data were obtained from the Healthcare Cost and Utilization Project (HCUP) and analyzed using SAS. Patients admitted with a primary discharge diagnosis of COPD were included, yielding information from 1,476,171 national hospital discharges. Univariate and multivariate analyses were performed to determine the effects of the following on LOS: age, sex, race, renal failure, acquired immune deficiency syndrome (AIDS), anemia, alcohol abuse, smoking, congestive heart failure (CHF), depression, drug abuse, hypertension, and peptic ulcer disease.
Results: All variables were statistically significant in univariate and multivariate analyses. The factors with the largest effects on LOS were peptic ulcer disease, smoking, anemia, CHF, and alcoholism. Maximum LOS was 365.0 days, and mean LOS for COPD patients was 5.199 days. Based on parameter estimates, peptic ulcer disease increased LOS for COPD patients by 2.689 days, smoking decreased LOS by 0.784 days, anemia increased LOS by 1.605 days, CHF by 1.396 days, and alcoholism by 1.210 days.
Conclusions: All examined factors significantly affected LOS in patients with COPD except drug abuse and AIDS. Most factors increased LOS, but smoking, depression, and female sex showed a paradoxical decrease in LOS. Further study is needed to explain these effects on LOS, which may be not only clinical but also social in nature.
73. Effect of Recent Smokeless Tobacco Use on the Fractional Exhaled Nitric Oxide Levels in US Adults
Chad Hines, Ngozi Enwerem, Alem Mehari, J. Ngwa, RF Gillum; Howard University, Washington, DC
Background: Asthma is an important cause of mortality and morbidity among tobacco users. Fractional exhaled nitric oxide (FeNO) is a non-invasive biomarker of eosinophilic airway inflammation. Identification of the effect of use of smokeless tobacco on airway inflammation in adults is needed.
Objectives: To estimate the association between use of smokeless tobacco and FeNO among US adults.
Methods: National Health and Nutrition Examination Survey (NHANES) 2007-2012, a stratified multistage probability sample of the civilian non-institutionalized population, was analyzed to assess association of use of smokeless tobacco and FeNO levels (ppb) in US adults. Participants were categorized by smoking status and use of snuff or chewing tobacco in past 5 days. FeNO was measured using a device that relies on an electrochemical sensor. Analyses took into account the complex survey design.
Results: In 2007-2012, NHANES interviewed 30,442 participants; 29,353 underwent examination; of those, 18,619 were aged 18+. Of these, 14,293 had 2 reproducible FeNO measurements. The 11,445 who were nonsmokers and also had complete data on demographics and recent tobacco use formed the analytic sample. Weighted mean lnFeNO was 2.672 in nonsmokers who recently used smokeless tobacco and 2.697 (difference -0.024) in other nonsmokers with no history of asthma. In weighted linear regression analyses controlling for age, gender, and black race among nonsmokers with no history of asthma, use of smokeless tobacco was associated with significantly lower FeNO (coefficient -0.102, 95% confidence interval -0.203 to -0.001, P = 0.048).
Conclusions: Use of smokeless tobacco was associated with lower mean lnFeNO levels in nonsmokers with no asthma history. Interpretation of FeNO should consider all forms of tobacco use.
74. Change in Prevalence of Restrictive Lung Impairment and Associated Risk Factors in the U.S. Population: NHANES 1988-1994 and NHANES 2007-2010
Laura Kurth, Eva Hnizdo.
Background: Epidemiologic studies often use fixed ratio (forced expiratory volume in 1 second [FEV1]/ forced vital capacity [FVC] greater than 0.70 and FVC less than 80% predicted) for classifying restrictive pattern on spirometry rather than the age-defined American Thoracic Society (ATS)/European Respiratory Society (ERS) lower limit of normal (LLN) criteria, which may lead to misclassification.
Objective: To describe, using National Health and Nutrition Examination Survey (NHANES) data, the prevalence of ATS/ERS-defined restrictive pattern on spirometry and individual risk factors for restrictive lung impairment in two samples of the U.S. population.
Methods: The age-standardized prevalence of a restrictive pattern for NHANES 1988-1994 and 2007-2010 was calculated using SAS procedure Proc SURVEYREG. A restrictive pattern was defined per ATS/ERS recommendations as FEV1/FVC greater than LLN and FVC less than LLN, and severity was further evaluated using FEV1 less than 70% predicted. Associations between a restrictive pattern and individual risk factors were evaluated using multivariable logistic regression models.
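The ATS/ERS classification rule described in the methods can be sketched as a simple predicate; the spirometry values and lower limits of normal (LLN) below are invented examples, not NHANES data:

```python
# Minimal sketch of the ATS/ERS restrictive-pattern rule: FEV1/FVC above
# its LLN (i.e., no obstruction) combined with FVC below its LLN.

def restrictive_pattern(fev1_fvc, fev1_fvc_lln, fvc, fvc_lln):
    """ATS/ERS restrictive pattern: normal ratio, reduced FVC."""
    return fev1_fvc > fev1_fvc_lln and fvc < fvc_lln

def moderate_or_worse(restrictive, fev1_pct_predicted):
    """Severity grading used in the abstract: FEV1 < 70% predicted."""
    return restrictive and fev1_pct_predicted < 70

# Example: ratio above its LLN but FVC (liters) below its LLN -> restrictive.
r = restrictive_pattern(0.82, 0.70, 2.9, 3.4)   # True
severe = moderate_or_worse(r, 65)               # True
```

The fixed-ratio approach criticized in the background differs only in the thresholds: it substitutes the constant 0.70 for the ratio's LLN and 80% predicted for FVC's LLN, which is what can misclassify older adults.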
Results: The overall age-standardized prevalence of restrictive pattern decreased significantly from 7.2% in NHANES 1988-1994 to 5.4% in 2007-2010 (p=0.001) and moderate to more severe restrictive pattern decreased from 2.0% to 1.4% (p=0.023). Statistically significant decreases in prevalence were observed among participants aged 50-59, females, White participants, never smokers, and participants with less than a high school education, doctor-diagnosed diabetes, and a non-obese waist circumference. Factors positively associated with a restrictive pattern included older age, female sex, White race, lower education, current and former smoking, and comorbidities including cardiovascular disease, diabetes, and obese waist circumference.
Conclusions: The overall prevalence of ATS/ERS-defined restrictive pattern and moderate to more severe restrictive pattern decreased between the 1988-1994 and 2007-2010 survey periods despite a population increase in comorbidities that are associated with a restrictive pattern, including diabetes and obese waist circumference.
75. Predictors of Smoking Cessation among US Adolescent Users of E-Cigarettes
Duaa Aljabri1 and Dr. Ramzi Salloum2
1PhD Candidate, Health Services Policy and Management. University of South Carolina
2Assistant Professor, Health Services Policy and Management. University of South Carolina
Background: Electronic cigarettes are recent innovations that promise modified risk among smokers and are marketed as a cessation aid. Their rising prevalence may hinder public health efforts to reduce the national prevalence of tobacco use.
Objective: To identify the characteristics of adolescent users of e-cigarettes, and predict the personal and environmental factors associated with their tobacco cessation behavior.
Methods: Data on e-cigarette use were obtained from the 2012 National Youth Tobacco Survey (N=1,351). Estimates of ever and current use were reported separately by demographic, personal, and environmental factors. Logistic regression, framed by social cognitive theory, was used to identify factors associated with the intention to quit all tobacco products.
Results: The national prevalence was 6.8% for ever use and 2.1% for current use. E-cigarette current use was more prevalent among those who were male (65.5%), aged 13-16 years (56.5%), non-Hispanic (75.4%), white (80%), concurrent users of alternative tobacco products (96.0%), concurrent cigarette smokers (88.6%), and those living with a smoker (71.0%). In the logistic regression, current use of smokeless tobacco compared to other products, thinking that e-cigarettes are less harmful than cigarettes, agreeing that all tobacco products are dangerous, facing sales restrictions, having minimal exposure to tobacco advertisements, and seeing a warning label on smokeless tobacco were positively associated with quitting all tobacco products (p < 0.001). Factors such as peer pressure, ease of getting tobacco, seeing actors using tobacco on TV, receiving information from tobacco companies, and receiving parental awareness were negatively associated with quitting all tobacco products (p < 0.001).
Conclusions: There is a growing trend in e-cigarette use. Understanding the personal and environmental factors that predict cessation among adolescent e-cigarette users allows public health interventions to focus on these aspects. Awareness, regulation, and developing scientific evidence on the health effects of e-cigarettes are priorities for public health.
76. Annual Healthcare Spending Attributable to Cigarette Smoking
Xin Xu, PhD; Ellen E Bishop, MS; Sara Kennedy, MPH; Sean A Simpson, MA; Terry F Pechacek, PhD
Background: Fifty years after the first Surgeon General’s report, tobacco use remains the nation’s leading preventable cause of death and disease, despite declines in adult cigarette smoking prevalence. Smoking-attributable healthcare spending is an important part of overall smoking attributable costs in the U.S.
Objective: To update annual smoking-attributable healthcare spending in the U.S. and provide smoking-attributable healthcare spending estimates by payer (e.g., Medicare, Medicaid, private insurance) and type of medical service.
Methods: Analyses used data from the 2006-2010 Medical Expenditure Panel Survey linked to the 2004-2009 National Health Interview Survey (NHIS). Estimates from two-part models were combined to predict the share of annual healthcare spending that could be attributable to cigarette smoking.
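A two-part model combines a model for whether any spending occurs with a model for spending among spenders. A minimal sketch of how such predictions combine into an attributable share follows; the fitted probabilities and conditional means are invented placeholders, not MEPS estimates:

```python
# Hypothetical sketch of combining two-part model predictions into a
# smoking-attributable fraction (SAF) of spending.

def expected_spending(p_any, mean_if_any):
    """Two-part model: E[spend] = P(spend > 0) * E[spend | spend > 0]."""
    return p_any * mean_if_any

# Predicted per-person spending under observed smoking status...
observed = expected_spending(p_any=0.85, mean_if_any=5000.0)
# ...and counterfactually with smoking "turned off" in the model.
counterfactual = expected_spending(p_any=0.80, mean_if_any=4600.0)

# Share of annual spending attributable to smoking.
saf = (observed - counterfactual) / observed
```

In the study itself both parts are regression models fit to the linked MEPS-NHIS data; the sketch only shows how their predictions multiply and difference into an attributable share like the reported 8.7%.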
Results: By 2010, 8.7% of annual healthcare spending in the U.S. could be attributed to cigarette smoking, amounting to as much as $70 billion per year. More than 60% of the attributable spending was paid by public programs, including Medicare, other federally sponsored programs, or Medicaid.
Conclusions: These findings indicate that comprehensive tobacco control programs and policies are still needed to continue progress toward ending the tobacco epidemic in the U.S. 50 years after the release of the first Surgeon General’s report on smoking and health. NOTE to reviewers: We are currently updating this research using 1995, 1997-2009 NHIS linked with 2006-2009 Medicaid Analytic eXtract (MAX) data. If results are available by the time of the conference, we would like to include this research.
77. Use of Propensity Score Matching to Identify a Strong Association between Health Care Provider Advice Not to Smoke and Quit Attempts among Mid-Adolescents
Russell K. McIntire P.H.D., M.P.H.
Background: While some studies show associations between receiving health care provider advice not to use tobacco products and quit attempts among smoking adolescents, others identify no association. These mixed results may arise from selection bias; e.g., adolescent smokers receiving advice may differ from those who do not on factors including demographics, initiation age, attitudes toward smoking, smoking intensity, and smoking friends. Additionally, previous research suggests that the influence of advice on quit attempts may differ between age categories.
Objective: The present study used propensity score matching (PSM) to reduce selection bias and thereby obtain robust estimates of associations between health care provider advice and past-year quit attempts among early, mid, and late adolescent smokers.
Methods: This study used self-reported merged data from the 2011 and 2013 National Youth Tobacco Survey, which are nationally representative cross-sectional surveys of U.S. middle and high school students (adolescents). For each age category, multivariate logistic regression models were used to examine associations between provider advice and quit attempts in the pre-match samples and in smaller post-match samples created by PSM.
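The matching step can be illustrated with a minimal sketch of 1:1 nearest-neighbor matching on the propensity score, a common PSM variant; the abstract does not specify the exact algorithm, and the scores, IDs, and caliper below are invented for illustration:

```python
# Illustrative 1:1 nearest-neighbor propensity score matching without
# replacement. In practice the scores come from a logistic model of
# "received provider advice" on the covariates listed in the abstract.

def nearest_neighbor_match(treated, controls, caliper=0.1):
    """Match each treated unit to the closest unmatched control score."""
    available = dict(controls)          # id -> propensity score
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]         # without replacement
    return pairs

treated = [("t1", 0.62), ("t2", 0.35)]
controls = [("c1", 0.60), ("c2", 0.33), ("c3", 0.90)]
matches = nearest_neighbor_match(treated, controls)
```

After matching, the outcome model is rerun on the matched pairs only, which is why the post-match samples are smaller than the pre-match samples.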
Results: There was a significant association between receiving advice from a provider and quit attempts among mid (AOR=1.65) and late adolescents (AOR=1.66) in the pre-match samples; yet in the post-match samples, the association was only significant among mid adolescents (AOR=2.08). For mid adolescents, the increase in the odds ratio after matching suggests that the pre-match estimate understated the relationship between advice and quit attempts due to sample selection bias.
Conclusions: PSM reduced the bias in the post-match samples and provided more robust estimates of the influence of provider advice on adolescent quit attempts, compared to pre-match estimates. Results suggest that among mid adolescent smokers, health care provider advice not to use tobacco may promote quitting behavior.
78. Educating Children Today is likely to Decrease the Prevalence of Tobacco Smoke and Snuff Tomorrow
Raja M. Riaz1, 2, Syed M. A. Shah2, Gulshan Bano3, Hina Aziz4, Sartaj Alam5, Gul Nowshad6.
1Florida Institute of Technology, 150 W University Blvd Melbourne, Florida, 32901:
2Karakorum International University, Gilgit-Baltistan, Pakistan.
3Aga Khan University, Karachi, Pakistan.
4Dow University of Health Sciences, Karachi. Pakistan.
5Harvard School of Public Health, Boston, MA, USA.
6University of Texas, Houston; 7ALC Research Labs, Plano, Texas, USA
Background: Mortality and morbidity due to non-communicable diseases are amongst the highest in Pakistan. This cross-sectional survey was conducted in Ghizar valley (North-West villages), located in the Hindu Kush and Karakorum mountain region of Pakistan, to assess the effect of education on tobacco and snuff usage.
Objective: To assess the association between levels of education and tobacco and snuff usage in North-West villages of Gilgit-Baltistan region.
Methods: A cross-sectional survey was conducted in Ghizar valley. The total number of participants was 1,134, of which 70% were female and 30% were male. The main outcome was self-reported daily use of tobacco in the form of smoke or snuff.
Results: The mean age of the respondents was 39.88 (20.58) years. Smoking was reported by 9.5% of the participants, which was more common than snuff use (3.5%). Not smoking was most common (94.9%) among those with intermediate education (12 years), and 100% of people with 14 years of education (Bachelor's) did not use tobacco as snuff. Smoking was most common (11.8%) among the non-educated, and 5.7% of those who used snuff were also non-educated. A negative relationship exists between level of education and tobacco use as smoke or snuff.
Conclusion: A negative relationship exists between level of education and tobacco and snuff usage. Education overall, and targeted training programs in this mountainous region of Gilgit-Baltistan between the Hindu Kush and Karakorum, could substantially reduce tobacco usage in the future. Keywords: level of education, tobacco, smoking, snuff, literacy.
79. Couples’ use of tobacco products and time-to-pregnancy in a preconception cohort
Sapra KJ, Barr DB, Maisog JM, Sundaram R, Buck Louis GM
Background: Smokeless tobacco has been touted as a harm reduction tool for smokers. However, no prior study has evaluated the risk of smokeless tobacco use on couple fecundity.
Objective: To evaluate the relationship between couple’s current use of tobacco products and fecundity.
Methods: 501 couples were followed from contraception cessation until positive pregnancy test or 12 months of trying. Partners reported current use of cigarettes, cigars, and chew or snuff (smokeless). Fecundity was measured by prospectively observed time-to-pregnancy (TTP) in cycles. Partners provided blood for quantification of heavy metals and serum cotinine, and data on demographics (race and ethnicity, education, income, age) and lifestyle (alcohol and caffeine use; measured BMI). Fecundability odds ratios (FOR) were estimated for current exclusive use of each tobacco type relative to never users of tobacco, adjusted for demographics and lifestyle. Partners were modeled separately and together. Geometric means of cotinine and metals were evaluated across tobacco type using non-parametric tests.
Results: Eleven percent of females smoked. Male exclusive use was 10 percent for cigarettes, 9 percent for cigars, and 6 percent for smokeless tobacco. Neither cigar (FOR: 0.74, 95% CI: 0.48-1.14) nor smokeless tobacco use (FOR: 1.15, 95% CI: 0.69-1.92) was associated with TTP. Cigarette use reduced fecundity in males (FOR: 0.43, 95% CI: 0.24-0.75) and females (FOR: 0.55, 95% CI: 0.34-0.87) modeled separately; modeled jointly, only female use was significant (FOR: 0.28, 95% CI: 0.10-0.77). Cotinine levels were significantly higher in cigarette and smokeless tobacco users than never users; however, cotinine was not associated with TTP. Cadmium levels were significantly higher in smokers than in smokeless tobacco and never users; adjusting for cadmium attenuated the association between cigarette use and TTP.
Conclusions: Tobacco use is common among couples attempting pregnancy. While we cannot conclude smokeless tobacco does not alter fecundity, we do not observe an effect in our limited sample. We observe longer TTP in smokers, potentially due to high cadmium levels.
80. The tip of the iceberg: The problem of missing data in racial disparities of preterm birth
Sapra KJ, Ahrens KA, Chaurasia AK, Hutcheon JA
Background: Epidemiologic studies estimating the effects of preconception risk factors typically compare risks among pregnancies resulting in live births. Excluding pregnancies ending in induced termination may introduce selection bias.
Objective: We estimated the black-white disparity in preterm birth (PTB) among live births only and after imputing pregnancy outcomes (spontaneous termination, preterm birth, full-term birth) for induced terminations.
Methods: We used New York City registry records of 1.6 million live births and spontaneous and induced terminations to non-Hispanic white and black women, 2000-2012. We multiply imputed outcomes for induced terminations based on maternal characteristics (race, age, marital status, US/foreign born, parity, payer). Black-white odds of PTB (less than 37 weeks) were estimated using logistic regression among live births only (complete case analysis) and after multiple imputation.
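The multiple-imputation idea can be sketched in miniature: impute a missing binary outcome several times from model-based probabilities, analyze each completed dataset, and pool the estimates. All values below are invented, and the simple probabilities stand in for the fitted imputation model:

```python
# Toy sketch of multiple imputation for a missing binary outcome.
import random

def impute_and_estimate(observed, missing_probs, n_imputations=20, seed=0):
    """Return the mean outcome pooled over completed datasets."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_imputations):
        completed = list(observed)
        for p in missing_probs:                # one draw per missing record
            completed.append(1 if rng.random() < p else 0)
        estimates.append(sum(completed) / len(completed))
    return sum(estimates) / len(estimates)     # pooled point estimate

observed = [1, 0, 0, 1, 0]                     # outcomes seen in live births
missing = [0.3, 0.7]                           # model-based P(outcome = 1)
pooled = impute_and_estimate(observed, missing)
```

In the study the imputed quantity is the pregnancy outcome of each induced termination, the analysis step is a logistic regression for the black-white PTB odds ratio, and pooling follows Rubin's rules (which also combine the variances, omitted here for brevity).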
Results: For black and white women, respectively, 56% and 19% of pregnancies ended in induced termination and 13% and 8% of births were preterm. Characteristics associated with PTB were also associated with induced termination (black race, unmarried, US-born). In the complete case analysis, PTB odds were higher in black than white women (OR: 1.73, 95% CI: 1.71-1.75). After imputation, PTB disparity was modestly higher (OR: 1.77, 95% CI: 1.75-1.80). When induced termination rate in white women was weighted to mirror that in black women, PTB disparity was similar to complete case analysis (OR: 1.72, 95% CI: 1.69-1.76).
Conclusions: While the black-white disparity in PTB among live births is not strikingly different from results following imputation of pregnancy outcomes for induced terminations, our findings highlight that PTB risk is influenced by who becomes and remains pregnant.
POSTER SESSION III: Wednesday, August 26, 2015 – 8:00AM-1:00PM
81. Hospital Readmissions within Residential Care Communities: Findings from the 2012 National Study of Long-Term Care Providers
Christine Caffrey, Lauren Harris-Kojetin, Eunice Park-Lee, and Vincent Rome; CDC’s National Center for Health Statistics
Background: Reducing hospital readmissions will potentially decrease health care costs, lessen trauma or complications resulting from medical treatment for residential care residents, and improve quality of care. In 2012, among residential care communities with at least one overnight hospital discharge in the past 90 days, the average 30-day hospital readmission rate was 17.3%. Few studies have looked at factors associated with hospital readmissions among residents living in assisted living and similar residential care communities, and most focus on resident characteristics.
Objective: This study aimed to measure the percentage of residential care communities with above average 30-day hospital readmissions, and examine community characteristics associated with having above average 30-day hospital readmissions, overall and by bed size.
Methods: This study used cross-sectional nationally representative data from NCHS’ 2012 National Study of Long-Term Care Providers to look at size-specific average 30-day hospital readmission rates among residential care communities. Selected operational, staffing, and resident case mix characteristics were included. Bivariate and regression analyses were done using SAS callable SUDAAN.
Results: Among residential care communities with any residents who had an overnight hospital discharge in the past 90 days, almost one-third (32%) had above average 30-day hospital readmissions. The prevalence of having above average 30-day hospital readmissions increased with bed size. The following characteristics were associated with a greater likelihood of having above average 30-day hospital readmissions among all residential care communities: larger bed size, nonprofit status, a lower percentage of residents aged 85 or over, and more licensed practical/vocational nurse hours per resident per day (LPN/LVN HPRD).
Conclusions: Above average hospital readmissions increased with bed size, and the characteristics associated with a greater likelihood of having above average hospital readmissions varied by bed size. Findings could be used to inform targeting efforts for interventions to minimize potentially preventable readmissions, by taking into account bed size when identifying communities that may benefit the most.
82. Nutrient Intake Differences by Age among U.S. Adults: Estimates from What We Eat in America, National Health and Nutrition Examination Survey 2009-2012
Suruchi Mishra, Joseph D. Goldman, Nadine R. Sahyoun, Alanna J. Moshfegh
Background: As adults age, there is a decrease in energy requirement while the needs for nutrients remain the same or even increase.
Objective: This study compares nutrient intake across various age groups of US adults aged 19 years and over.
Methods: Nationally representative dietary intake data of adults aged 19 years and over (N=10,698) participating in What We Eat in America, NHANES 2009-2012 were analyzed. Dietary intake data were obtained from an in-person 24-hr recall. Mean daily energy intake and nutrient density, measured as intake per 1,000 calories, were compared by age/gender groups (19-30 years, 31-50 years, 51-70 years, and 71+ years). The proportion of adults meeting their Estimated Average Requirements (EAR) from foods and beverages was estimated.
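Nutrient density as used here is simply intake normalized per 1,000 kcal of energy; a small sketch with invented intake values shows why an older adult eating less can still have a denser diet:

```python
# Nutrient density: nutrient intake scaled per 1,000 kcal of energy intake.

def nutrient_density(nutrient_intake, energy_kcal):
    """Intake of a nutrient per 1,000 kcal consumed."""
    return nutrient_intake / energy_kcal * 1000

# Hypothetical example: 18 g fiber on a 2,400 kcal day vs 14 g on a
# 1,600 kcal day -- lower absolute intake, higher density.
younger = nutrient_density(18, 2400)   # 7.5 g/1000 kcal
older = nutrient_density(14, 1600)     # 8.75 g/1000 kcal
```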
Results: Mean energy intake decreased for older adults (over 50 years) compared to those 50 years and under (p < 0.001). However, nutrient densities were higher among older adults aged 71+ than among younger adults aged 19-30 and 31-50 years for dietary fiber (by at least 1.4 g/1000 kcal), vitamins A (by at least 92 mcg/1000 kcal) and D (by at least 0.7 mcg/1000 kcal), and potassium (by at least 204 mg/1000 kcal) (p < 0.001). In addition, adults aged 19-30 years, regardless of gender, had the lowest magnesium and potassium density per 1000 kcal compared to other age groups (p < 0.001). Nevertheless, at least 3 out of 10 adults did not meet their EAR for vitamins A, C, D, E, and magnesium, regardless of age and gender.
Conclusion: While nutrient densities are higher among adults aged 71+ years for some nutrients, at least 30% of adults are not meeting selected nutrient recommendations.
83. Impact of diagnostic code revisions on hospitalization trends associated with epilepsy and seizure diagnoses: U.S. Nationwide Inpatient Sample (NIS)/Healthcare Cost and Utilization Project (HCUP), 1993-2012
Yao-Hua Luo1, Matthew M Zack2
1. DB Consulting Group, Inc.
2. Centers for Disease Control and Prevention, Atlanta, GA.
Background: Diagnostic codes of the International Classification of Diseases, 9th Revision (Clinical Modification: ICD9-CM) for epilepsy diagnosis were revised twice: in 1996, one category of convulsions (ICD9-CM: 780.3X) was reclassified as seizure (ICD9-CM: 780.39), and in 2006, some nonspecific seizure diagnoses were reclassified under epilepsy (ICD9-CM: 345.XX).
Objective: To examine the impact of diagnostic code revisions for epilepsy and seizures on hospitalization trends.
Methods: We tracked the annual rate of hospitalizations (per 100,000 population) with epilepsy and seizure diagnoses; determined the percent with first-listed (primary), later-listed (secondary), and any-listed diagnoses of epilepsy and seizure; and calculated the proportion of primary to any-listed diagnoses to track attribution of seizure to epilepsy after reclassification.
Results: The annual hospitalization rate for seizure was zero before the first diagnostic code revision. By 1998, rates reached stable levels of 60 (primary), 290 (secondary), and 350 (any-listed). By 2010, four years after the second diagnostic code revision, these rates dropped to 30 (primary), 120 (secondary), and 150 (any-listed). From 1993 through 2006, hospitalization rates for epilepsy were 20 (primary), 24 (secondary), and 44 (any-listed), but afterwards increased to 65, 260, and 325 respectively. Before 2006, the proportion of primary diagnosis to any-listed diagnoses remained relatively stable, 17% for seizure and 50% for epilepsy. However, after 2006, this proportion increased for seizure but decreased for epilepsy, indicating that seizures were more likely, and epilepsy less likely, listed as the primary diagnosis. Combination of epilepsy with seizures would avoid these changes in hospitalization rates and proportions, but it would be impossible to do before 1997 and would not account for changes in rates for specific kinds of epilepsy.
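The rate and proportion calculations behind these trends are straightforward; a small sketch using the 1998 seizure rates reported above (60 primary, 350 any-listed, per 100,000):

```python
# Hospitalization rate per 100,000 population, and the share of any-listed
# diagnoses that were first-listed (primary).

def rate_per_100k(cases, population):
    return cases / population * 100_000

def primary_share(primary_rate, any_listed_rate):
    """Proportion of any-listed diagnoses that were first-listed."""
    return primary_rate / any_listed_rate

share = primary_share(60, 350)   # about 0.17, matching the ~17% reported
```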
Conclusion: Revisions in diagnostic codes for epilepsy and seizure have affected estimates of hospitalization with the diagnoses since 1993.
85. Use of Complementary and Alternative Medicine for Arthritis by Older Women
Elizabeth M. Tait, PhD; Marianne Hollis, PhD, RN.
Background: Half of adults ages 50+ in the United States use complementary and alternative medicine (CAM). Women with chronic health conditions are more likely than others to use CAM. Women with arthritis may be particularly likely to use CAM, because conventional medicine alone offers limited relief. Yet little is known about CAM use for arthritis by older women.
Objectives: The purpose of this study was to examine reasons for CAM use among older women who have arthritis using data from the 2012 National Health Interview Survey (NHIS).
Methods: Using 2012 NHIS data, a nationally representative, cross-sectional, multistage household survey, and its CAM supplement, we examined specific reasons why women ages 50+ said they used CAM for arthritis. Descriptive and logistic analyses accounted for the survey design, and were weighted for national representativeness. Controls included age, ethnicity, marital status, body mass index, health behaviors, and region.
Results: Participants who said they had arthritis represented about 26 million women ages 50+. In adjusted results, women with arthritis were more likely to say they used CAM specifically for their arthritis if they felt that the CAM therapy combined with conventional medical treatment would help (odds ratio, OR 5.77, CI 3.46-9.64, p < 0.001); because CAM was recommended by friends (OR 2.31, CI 1.44-3.73, p < 0.001); because CAM was recommended by a medical doctor (OR 1.86, CI 1.23-2.81, p < 0.01); because CAM was recommended by family (OR 1.57, CI 1.00-2.47, p < 0.05); or because conventional medicine was too expensive (OR 4.45, CI 1.0-6.0, p < 0.1).
Conclusions: A large number of older women use CAM for arthritis, particularly if they felt that a combination of CAM and conventional medicine would help, or if CAM were recommended by family, friends or a medical doctor. Women with arthritis could particularly benefit from targeted research on risks and potential efficacy of CAM.
86. Is HCV Infection Status Associated with Alcohol Use among US Adults: NHANES 2003-2010?
Amber L. Taylor, MPH1, Maxine Denniston, MSPH1, R. Monina Klevens, DDS, MPH1, Lela R. McKnight-Eily, PhD2, 3, Ruth B. Jiles, MS, MPH, PhD1
1Centers for Disease Control and Prevention, Division of Viral Hepatitis, NCHHSTP:
2Centers for Disease Control and Prevention, Division of Population Health, NCCDPHP (former affiliation for Lela R. McKnight-Eily):
3Centers for Disease Control and Prevention, Division of Birth Defects and Developmental Disabilities Population Health, NCBDDD
Background: Excessive alcohol use exacerbates morbidity and mortality among persons infected with hepatitis C virus (HCV).
Objective: To determine whether HCV infection status is associated with patterns of alcohol use among US adults.
Methods: Data from the National Health and Nutrition Examination Survey (NHANES) for the years 2003-2010 were analyzed for 20,042 participants. NHANES is a nationally representative household survey of the non-institutionalized civilian population. Estimates were derived for self-reported demographic characteristics, HCV-RNA (indicative of current HCV infection) status and alcohol use at 4 levels: lifetime abstainers, former drinkers, non-excessive current drinkers and excessive current drinkers.
Results: Former drinkers (2.2%; 95% CI 1.7, 2.8) and excessive current drinkers (1.5%; 1.1, 2.0) had a higher prevalence of HCV infection than never or current non-excessive drinkers. Specifically, HCV-infected adults were estimated to be former drinkers about 1.7 times as often (31.0%; 95% CI 25.1-37.6) as those never infected with HCV (17.0%; 15.7-18.3, p < 0.001). Among current drinkers, HCV-infected adults were estimated to be excessive current drinkers about 1.3 times as often (54.9%; 43.2-66.0) as those not infected (41.4%; 40.0-42.8, p < 0.05). HCV-infected adults also were estimated to have ever drunk 5 or more drinks per day almost every day at some time during their lifetime about 3.3 times as often as uninfected adults (43.8%; 36.3-51.6 vs. 13.7%; 13.0-14.5, p < 0.001). Controlling for age, sex, race/ethnicity, education, and having a usual source of health care, HCV infection was significantly associated with both excessive current drinking (adjusted prevalence ratio (APR)=1.3; 1.1-1.6) and former drinking (APR=1.3; 1.1-1.6).
Conclusions: Prevalence of chronic HCV infection was highest among former and excessive current drinkers. Public health strategies should implement interventions that emphasize alcohol abuse, which worsens disease progression among HCV-infected persons.
87. Genomic Correlations to Childhood Health Outcomes: A Longitudinal Study
Kathi C. Huddleston, R.N., Ph.D
Background: Childhood outcomes are determined by genetic, epigenetic, social and environmental factors. Fetal development and early childhood may predict and determine patterns of adult onset disease and health status. The Longitudinal Childhood Genome Study is a multigenerational study of 5000 families with a goal of generating 20,000 whole genomes with first sample collection early in pregnancy and continuing through gestation to 18 years of age. Trio-based sequencing on all enrolled individuals makes this research unique, family centered, and scientifically more powerful.
Objective: Identify genomic, clinical, and environmental risk factors that may enhance our understanding of adverse health outcomes such as premature birth, asthma, obesity, and developmental disorders.
Methods: Families are approached prenatally from a geographically and demographically diverse population. Medical data are accessed through a single electronic health record system that includes both inpatient and outpatient records. Additionally, participants complete enrollment questionnaires consisting of self-reported medical history, exposures, and lifestyle information. Web-based surveys, utilizing many NHANES and BRFSS questions, are collected every 6 months after delivery throughout study participation. Biological sample analysis includes trio-based whole-genome sequencing and related RNA/protein/epigenetic analyses. All data storage and analysis is cloud-based for heightened security and patient confidentiality.
Results: Since 2012 our accrual rate is approximately 80 new families per month with an overall two year study retention rate of 94%. The survey compliance rate is 88%. We have genomic data on 2500 families. The race and ethnic background of the study cohort is diverse with parents from over 100 countries of origin, enabling us to construct novel, population-specific algorithms for the filtering of variants based on sub-population allele frequency.
Conclusion: This research represents the largest genomic longitudinal birth cohort to date, and will be a cornerstone of pediatric epidemiology for years to come, providing large-scale models to identify genomic triggers of disease.
88. Maternal-Reported Compliance with American Academy of Pediatrics (AAP) Diet and Activity Recommendations at 12 Months
Kathleen Donnelly2, Teresa Lee2, Kathi Huddleston1, Sahel Hazrati1, John Niederhuber1
1 Inova Translational Medicine Institute
2 Inova Children’s Hospital
Objective: To compare the maternal-reported diet and activity of 12-month old infants with AAP recommendations.
Methods: Over 2000 families of various races and ethnicities have been recruited during the prenatal stage into the longitudinal study of genomics and child health at the Inova Translational Medicine Institute. Participants’ biological specimens were collected and their clinical and social data were documented. Families receive a survey every six months after birth. We included 728 participants in this analysis based on availability of 12-month survey data (Response Rate = 90%). Parametric statistical analysis was performed on variables reflecting AAP dietary and activity guidelines using Chi-square and t-tests.
Results: Data analysis on 728 families revealed that 91% have introduced dairy products, 68% cow’s milk, and 72% eggs, but only 28% peanut butter and other peanut foods. This is despite the AAP removing the restriction on peanut products after 6 months of age. French fries, used as a marker for fast food consumption, were ingested by 29% during the week of the survey. 100% fruit juice was consumed by 41%, and sweetened juice drinks, including soda, were consumed by 7% of the children. Despite the recommendation to avoid television and other entertainment media under the age of 2 years, 94% of the mothers reported screen time exposure. Thirty-two percent of mothers reported 60 minutes or more of outdoor activity (consistent with the AAP recommendation).
Conclusion: The introduction of peanut butter/peanut-containing foods should be encouraged by pediatricians, as a majority of mothers have not heard the message that this is acceptable. The vast majority of mothers still allow television exposure at age one. Further studies on strategies to limit screen time and barriers to outdoor time are needed.
89. Health Insurance Coverage and Health Care Access and Affordability among Lesbian, Gay, and Bisexual Adults: Results from the Health Reform Monitoring Survey and the National Health Interview Survey
Laura L. Skopec, Sharon K. Long, and Genevieve M. Kenney.
Background: There is significant policy interest in the barriers to health care faced by lesbian, gay, and bisexual (LGB) adults, but few surveys collect data on sexual orientation. This study estimates changes in health insurance and health care access and affordability for LGB adults under the Affordable Care Act (ACA) using the Health Reform Monitoring Survey (HRMS) and the National Health Interview Survey (NHIS). Both surveys collect sexual orientation data, but use different questions.
Objective: Assess how coverage, access, and affordability changed for LGB adults under the ACA, and document the scope of any remaining gaps.
Methods: The HRMS is a quarterly, internet-based survey of about 7,500 adults each quarter, approximately 400 of whom self-identify as LGB. We examine changes in coverage, access, and affordability for LGB and non-LGB adults overall and by gender using March/June 2013 and December 2014/March 2015 data. The NHIS is an in-person national survey of approximately 35,000 households per year that began collecting sexual orientation in 2013. Given the effects of survey questions and mode on responses, we compare results from both surveys to estimate a range for the effects of the ACA on LGB adults.
Results: Based on the HRMS, the LGB uninsured rate fell from 21.7% in March/June 2013 to 11.1% in December 2014/March 2015. Preliminary HRMS estimates indicate that, while some gains in access and affordability for LGB adults have occurred since 2013, particularly for women, significant gaps remain between LGB and non-LGB adults. Specifically, LGB adults report more difficulty finding a doctor and more cost-related barriers to care than non-LGB adults. We will finalize our analyses and conduct comparisons to the 2013-2014 NHIS when 2014 NHIS data become available.
Conclusions: The uninsured rate for LGB adults has fallen by nearly half under the ACA, but gaps in access and affordability between LGB and non-LGB adults remain.
90. On Stigmatization Attitude to HIV/AIDS Patients in Botswana: A Generalized Additive Mixed Modelling Approach
R. Arnab1, Oluwayemisi Oyeronke Alaba2, J. O. Olaomi3
1 Department of Statistics, University of South Africa and Department of Statistics, University of Botswana.
2 Correspondence Author, Department of Statistics, University of Ibadan, Nigeria and Department of Statistics, University of South Africa.
3 Department of Statistics, University of South Africa
Background: Antiretroviral therapy has effectively changed Acquired Immunodeficiency Syndrome (AIDS) from a terminal to a manageable chronic illness. However, People Living With HIV/AIDS (PLWHA) still contend with stigmatization, hostility and gossip which take their toll on their health and psychological well-being.
Objective: This paper explores the factors responsible for stigmatization of PLWHA.
Methods: The 2008 Botswana AIDS Impact Survey III (BAIS III) data were used to assess attitudes toward HIV/AIDS patients in Botswana using the generalized additive mixed model. The model was used to simultaneously measure the fixed, nonlinear, and random effects. Fixed effects of categorical covariates were modelled using diffuse priors, a P-spline with a second-order random walk was used for the nonlinear effect of the continuous variable, and exchangeable normal priors were used for the random effects of the districts. The binomial distribution was used to handle the dichotomous nature of the three dependent variables: stigmatization towards a family member sick with HIV/AIDS, a teacher who has HIV/AIDS but is not sick, and a shopkeeper or food seller who has HIV/AIDS.
Results: Urban residents, those with secondary or higher education, those aged 25+, and those who were married, living together with a partner, or divorced appeared to be most responsible for stigmatization of the people considered across the three dependent variables.
Conclusion: We have scientifically modelled the stigmatization of PLWHA in Botswana, and observed that the stigmatizing behaviours of more educated groups and adults warrant further attention.
91. Identifying Hot-Spots of HIV/AIDS Using Geospatial Analysis in Washington D.C.
Suparna Das, PhD
Washington, D.C. is one of the worst HIV/AIDS-affected areas in the United States, with an epidemic comparable to those of developing countries. In D.C. at the end of 2013, 2.5 percent of the population was living with HIV/AIDS, which surpasses the UNAIDS criterion for a “generalized” epidemic (greater than 1 percent of the population). D.C. also has the highest AIDS diagnosis rate in the United States. This study used HIV/AIDS surveillance data from HAHSTA, DC DOH for a geospatial hot spot analysis of Washington D.C. The data included all living HIV/AIDS cases in D.C. (2010 to 2013). The data were geocoded and aggregated by blocks. For geocoding purposes, current addresses of the cases were selected; however, where the current address was missing, the address at diagnosis was used. The prevalence rates of HIV/AIDS were calculated using block populations from the 2010 US Census. The Getis and Ord method of global and local cluster analysis was used. The global Getis and Ord measure generated a single value describing the entire data set; the global results showed that HIV/AIDS in D.C. is clustered at the 99 percent confidence level. Local Getis and Ord values were calculated for each block unit to display the different patterns occurring in different parts of D.C. The block analysis map was overlaid with the neighborhood map of D.C. for ease of interpretation. The local hotspots at the 99 percent significance level were located in downtown, Brightwood, and Logan Circle, while cold spots were located in the Friendship Heights, Chevy Chase, Capitol Hill, and Lincoln Park areas. The study concluded that HIV/AIDS prevalence in D.C. is clustered. Statistically, it is important to understand that the hotspots have a significant probability of affecting prevalence in adjacent blocks. The study is a major step towards identifying target areas for HIV/AIDS intervention, thus assisting effective program implementation.
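The local Getis-Ord Gi* statistic used in the analysis above can be sketched as follows. The toy rates and binary adjacency weights here are hypothetical; the surveillance analysis would use actual block-level prevalence rates and a properly specified spatial weights matrix.

```python
import numpy as np

def getis_ord_gi_star(rates, weights):
    """Local Getis-Ord Gi* statistic for each areal unit.

    rates   : sequence of block-level prevalence rates
    weights : n x n spatial weights matrix; for Gi* the focal unit is
              included (w[i, i] = 1), neighbours = 1, others = 0
    Returns one z-score per block; |z| > 2.58 corresponds roughly to
    the 99 percent confidence level used in the study.
    """
    x = np.asarray(rates, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)   # population SD of rates
    wx = w @ x                                  # sum_j w_ij * x_j
    w_sum = w.sum(axis=1)                       # sum_j w_ij
    w_sq = (w ** 2).sum(axis=1)                 # sum_j w_ij^2
    num = wx - x_bar * w_sum
    den = s * np.sqrt((n * w_sq - w_sum ** 2) / (n - 1))
    return num / den
```

A block surrounded by high-rate neighbours yields a large positive z-score (hot spot); a block surrounded by low-rate neighbours yields a negative one (cold spot).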
92. Utility of International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) Diagnosis Codes for Evaluating Population Impact of Human Papillomavirus (HPV) Vaccination on High-Grade Cervical Intraepithelial Lesions Using Healthcare Claims Data
Flagg EW, Torrone EA: Division of Sexually Transmitted Disease Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, Centers for Disease Control and Prevention
Background: HPV vaccine, which protects against oncogenic HPV strains associated with approximately 50% of high-grade cervical intraepithelial lesions, was introduced in the United States (US) in 2006. Healthcare claims contain ICD-9-CM diagnosis codes which may be useful for evaluating population effectiveness of HPV vaccination on cervical lesions; however, changes in these codes over time may preclude use of claims data for this purpose.
Objectives: To identify changes in ICD-9-CM diagnosis codes for cervical lesions during 2003-2013, and evaluate utility of these codes for detecting trends attributable to HPV vaccination.
Methods: We documented changes in ICD-9-CM codes for cytologically-detected cervical high-grade squamous intraepithelial lesions (HSIL) and high-grade histologically-detected cervical neoplasia 2 and 3 (CIN2 and CIN3, or carcinoma in situ (CIS)) during 2003-2013. Using MarketScan Commercial Claims and Encounters data from 11.3 million privately-insured females, we estimated annual prevalence of these diagnoses in women aged 10-24 and 25-39 years who received cervical cancer screening.
Results: Before 2003, no ICD-9-CM codes specifying HSIL, CIN2 or CIN3 existed. In 2003, a single code was published that included HSIL, CIN2, and low-grade and ungraded cervical lesions. In 2005, HSIL and CIN2 were changed to two separate codes specific to these lesions only, and CIN3 was added to an existing code specific for cervical CIS. Uptake of the 2005 code changes in claims was acceptably stable by 2007. During 2007-2013, diagnoses decreased by over one third in younger women (those most likely to be vaccinated), but increased slightly in older women.
Conclusions: ICD-9-CM diagnosis codes for high-grade cervical lesions changed profoundly prior to HPV vaccine introduction in 2006. By 2007, claims uptake was sufficiently stable to use these data to evaluate population effectiveness of HPV vaccination. Declines in diagnoses among young, but not older, women during 2007-2013 suggest population-level impact of HPV vaccination in the US.
93. Epidemiological Characteristics and Treatment Outcome of Adult Tuberculosis Patients under Directly Observed Treatment Short Course (DOTS) in an Urban Indian Population
Sarit Sharma, Rupali Verma, RK Soni, Anurag Chaudhary, Shruti Sharma, Sangeeta Girdhar, Mahesh Satija, Priya Bansal Dept of Community Medicine, Dayanand Medical College & Hospital, Ludhiana, India.
Introduction: Tuberculosis (TB) is one of the leading causes of mortality and morbidity in India. The treatment behavior of patients is complex and dynamic, and several epidemiological factors may shape that behavior through the final outcome. Ensuring adherence to treatment for a favorable outcome has long been acknowledged as the weakest component of TB programs in India.
Objective: The present study was planned to ascertain the various epidemiological characteristics of adult TB patients under Directly Observed Treatment Short course (DOTS) therapy within the Revised National Tuberculosis Control Programme (RNTCP), and to study any association of these characteristics with treatment outcome in these patients.
Methodology: Prospective cohort study conducted in ten selected DOTS centers of Ludhiana city, Punjab, India. A total of 221 patients put on treatment for TB were enrolled and followed up but final home visit could be conducted for 200 subjects only. Socio-demographic characteristics, treatment seeking behavior, social stigma, financial implications and treatment outcomes were studied in these subjects.
Results: The cure rate was 76.3% among all sputum-positive TB patients and 80.0% among New Sputum Positive (NSP) patients. The treatment success rate (favorable outcome) was 82.8%. The default, death, and failure rates were 11.8%, 2.7%, and 1.4%, respectively. Unfavorable outcomes were seen more often in older patients, migrants, unemployed persons, patients suffering from diabetes, patients living more than 1.5 km from a DOTS center, and patients who experienced side effects of treatment. More than one fourth (28.1%) of subjects felt discriminated against by society, and 68.0% of subjects reported loss of wages before attending a DOTS center, which was reduced to 40.6% after attending.
Conclusions: TB continues to be a major public health problem which has a lot of social stigma attached to it. Programs devised for TB control should consider the treatment behavior of the patients to achieve the global target of Tuberculosis control.
94. Using different national surveys for the numerator and denominator in estimating incidence rates and confidence intervals for a variety of injuries
Tin-chi Lin, Helen Wellman, Santosh Verma
BACKGROUND: A common epidemiological research problem is that desired information often resides in different data sources. Few national surveys record injury cases and exposure concurrently, making estimation of incidence rates difficult. Among the few that document both (e.g., the National Health Interview Survey (NHIS)), the available information is constrained by the questionnaires’ content. For example, the only type of injury for which the NHIS documents both exposure and incidence is work-related injury; the lack of exposure data in the NHIS for other types of injury makes it impossible to estimate their incidence rates.
OBJECTIVE: To overcome these limitations, we developed a procedure that uses different national surveys for the numerator and denominator in estimating incidence rates and confidence intervals (C.I.) of a variety of injuries.
METHODS: The 2010 NHIS was used for the injury outcomes, and the 2010 American Time Use Survey (ATUS) for exposure. We used work-related and sport-related injuries as examples, and the exposure measurement was hours of work and time spent in sport, respectively. The incidence rate was estimated by dividing the number of population-level injuries (from NHIS) by population-level exposure (ATUS); the population totals were calculated with appropriate sampling weights. The standard errors were calculated using Taylor series approximation, a general method that can be applied to different sampling designs.
RESULTS: In 2010, the incidence rate for work-related injuries was 1.44 per 100,000 hours (C.I. = (1.10,1.77)). For sport related injuries, the incidence rate was 13.36 per 100,000 hours (C.I. = (11.38,15.38)).
CONCLUSIONS: This procedure makes it possible to estimate the incidence rates and 95% C.I. even when the numerator and denominator come from different data sources, offering an alternative approach for injury research and surveillance when data availability is constrained.
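The ratio-of-totals estimator with a Taylor-series (delta-method) variance described above can be sketched as below. The totals and standard errors are hypothetical placeholders; the actual NHIS/ATUS estimates require the surveys' design-based standard errors, and independence of the numerator and denominator is assumed because they come from different surveys.

```python
import math

def rate_with_ci(num_total, num_se, den_total, den_se, per=100_000, z=1.96):
    """Incidence rate when the numerator (injury count) and denominator
    (exposure hours) come from two independent surveys.

    First-order Taylor-series (delta-method) variance for a ratio of
    independent totals:
        Var(N/D) ~= (N/D)^2 * [Var(N)/N^2 + Var(D)/D^2]
    Returns (rate, lower CI, upper CI) scaled per `per` exposure hours.
    """
    rate = num_total / den_total
    se = rate * math.sqrt((num_se / num_total) ** 2
                          + (den_se / den_total) ** 2)
    return rate * per, (rate - z * se) * per, (rate + z * se) * per
```

With hypothetical inputs of 144 injuries (SE 10) and 10 million exposure hours (SE 200,000), this yields a point estimate of 1.44 per 100,000 hours with a symmetric 95% CI around it.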
95. Trends in mean gestational age at time of pregnancy awareness
Amy M. Branum, Katherine A. Ahrens
Background: Early pregnancy detection is important for improving pregnancy outcomes as the first trimester is a critical window of development; however, there has been no description of national trends and characteristics of gestational age at time of pregnancy awareness among US women.
Objective: To assess time trends and differences in maternal characteristics in mean gestational age at time of pregnancy awareness.
Methods: We examined self-reported data on gestational age at time of pregnancy awareness from the 1995 through 2011-2013 National Survey of Family Growth among women 15-44 years who reported at least one pregnancy in the 5 years prior to interview. Gestational age at pregnancy awareness (continuous) was assessed by race/Hispanic origin, age at pregnancy, gravidity, pregnancy intendedness, education, income, and prenatal care.
Results: Among all women, age-adjusted mean gestational age at pregnancy awareness did not change linearly over time but decreased between 1995 and 2002 (5.8 to 5.4 weeks, p less than 0.01) and increased between 2002 and 2011-2013 (5.4 to 5.7 weeks, p less than 0.01). Mean gestational age at pregnancy awareness was greater among women 15-19 compared to women 20-24, 25-29, 30-44 (6.5, 5.9, 5.3, 5.0 weeks, respectively, all p less than 0.01), women who were non-Hispanic black and Hispanic compared to non-Hispanic white (6.2 and 5.8, respectively, vs. 5.2, all p less than 0.01), women having their first pregnancy compared to those having their second or higher (5.8 vs. 5.3, p less than 0.01), and for unwanted pregnancies versus those that were intended (6.1 vs. 5.1, p less than 0.01). These patterns were consistent over time.
Conclusions: In recent years, mean gestational age at awareness of pregnancy has increased slightly but remains later among certain groups of women who are more likely to have adverse birth outcomes.
96. Maternal Infections during Pregnancy and Neonatal Outcomes in the Upstate KIDS Study
Nikhita Chahal, Rajeshwari Sundaram, Akhgar Ghassabian, Kara A. Michels, Erin Bell, and Edwina Yeung
Background: Gynecological and urinary tract infections in pregnancy are associated with an increased risk of preterm birth. Few studies have examined multiple types of infections in relation to neonatal outcomes in population-based cohorts. Furthermore, twins remain understudied despite higher rates of preterm birth.
Objective: The aim was to examine the associations between maternal infections during pregnancy and neonatal outcomes in a U.S. birth cohort.
Methods: Designed originally to investigate childhood development and mode of conception, Upstate KIDS (2008-2010) sampled live births conceived with infertility treatment at a 1:3 ratio to infants not conceived with treatment, using New York State birth certificates. Maternal urinary tract infections were self-reported at 4 months postpartum, while gynecological infections, gestational age, and birth size measures were obtained from birth certificates. Logistic regression was used to test associations between maternal infections and adverse outcomes after adjusting for maternal age, race, education, marital status, and smoking and alcohol use during pregnancy. Generalized estimating equations (GEE) were used for twins.
Results: Urinary tract infections during pregnancy were associated with increased risk of preterm birth (less than 37 weeks) (OR=1.40, 95%CI: 1.08-1.83), but not very preterm birth (less than 34 weeks) (OR=0.76, 95%CI: 0.49-1.19) among singletons. Gynecological infections during pregnancy were associated with increased risk of very preterm birth (OR=2.08, 95%CI: 1.07-4.05), but not preterm birth (OR=1.17, 95%CI: 0.77-1.78) among singletons. No associations between either infection type and low birth weight (less than 2500 grams) or small for gestational age (less than 10th percentile) were identified. Neither gynecological nor urinary tract infections during pregnancy were associated with adverse outcomes in twins.
Conclusions: Similar to previous findings, urinary tract infections during pregnancy increase the risk of preterm birth in singletons. However, infections were not associated with earlier delivery among twins. Future studies with detailed information on timing of infection are needed.
97. Are the Rates of Adolescent Obesity Related to Geographical Region? Results from NHIS 2008
Frank D’Amico1, PhD; Matthew Joseph2, PharmD; Nicole Payette2, PharmD
1Duquesne University, UPMC St. Margaret Hospital Family Medicine, Pittsburgh PA; 2Pharmacy Fellow UPMC St. Margaret Hospital, Pittsburgh, PA
Background: In 2010, First Lady Michelle Obama launched the ‘Let’s Move’ campaign (www.letsmove.gov), aimed at eliminating childhood obesity. Recently, as part of the fifth anniversary of Let’s Move, the First Lady is challenging Americans to “GimmeFive” things they are doing to lead a healthier life.
Objective: But just how widespread is this problem? This study was undertaken to determine if the rates of adolescent obesity vary by region and explore differences by gender.
Methods: To answer this question we used data collected from the National Health Interview Survey (NHIS 2008) sample child core segment. Interviews for the sampled children (n=8,815) were conducted by proxy with a knowledgeable adult. Of these children, 2,974 had data on age, sex, height, and weight and were used in the analysis. Definitions for normal, overweight, and obese were determined using CDC growth charts for sex-age BMI distributions. The rates of obesity and their 95% confidence intervals were calculated using SAS-callable SUDAAN.
Results: Male adolescents in the Northeast and West regions are slightly more likely to be overweight (26.9% and 25.6%, respectively) than obese (23.6% and 21.6%). However, male adolescents in the South and Midwest regions are slightly more likely to be obese (26.6% and 27.3%) than overweight (23.0% and 24.0%). Female adolescents in all regions are most likely to be normal weight (74.5% Northeast, 70.2% Midwest, 70.8% South, and 68.8% West). Overall, 49.5% of male adolescents are either overweight or obese compared to 29.2% of female adolescents.
Conclusion: These values show that there are differences in adolescent rates of overweight or obesity when examined by region or by gender. The findings agree with Mrs. Obama’s concern about the high rate of obesity and further highlight the problem among this country’s youth.
98. Ten-Year Trends, Socio-demographic and Maternal Characteristics Associated with Smoking during Pregnancy among Tennessee Women
Kimberly Glenn, PhD, MPH; Allysceaeioun B. Spears, PhD, MPH; Angela Miller, PhD, MPH; Krisden Ingram, MS; John Brown; Lori B. Ferranti, PhD, MBA, MSN
Background: Smoking during pregnancy (pregnancy smoking) increases the risk for pregnancy complications and adverse fetal and infant outcomes. Previous research suggests pregnancy smoking may negatively impact offspring throughout development.
Objective: To describe the trend of pregnancy smoking among Tennessee mothers over a ten-year time period and examine the association between maternal and socio-demographic characteristics and pregnancy smoking.
Methods: We analyzed the smoking behaviors as reported in the birth statistical file of Tennessee women whose most recent live birth occurred during 2004 through 2013. The proportion of women who smoked prior to pregnancy and those who smoked during any trimester of pregnancy were assessed for each year by race and age group. Statistical significance of the trend over the study period was assessed using 95% confidence intervals (CI). Multivariate logistic regression models generated odds ratios (OR) and 95% CI for the association between maternal and socio-demographic characteristics and pregnancy smoking.
Results: During the study period, 508,475 eligible live births occurred among Tennessee resident women. We observed a statistically significant decrease in pregnancy smoking among these mothers over the course of ten years (20.8% to 17.4%). Approximately 1 in 5 women smoked before pregnancy and only 22% of those women reported quitting prior to pregnancy. Women who had previous pregnancies [OR: 1.55, 95% CI: (1.50-1.61)], low educational attainment [OR: 1.52, 95% CI: (1.49-1.56)], and were unmarried [OR: 1.89, 95% CI: (1.85-1.93)] were the most likely to smoke during pregnancy. Non-Hispanic black and Hispanic women were at least 75% less likely than non-Hispanic white women to engage in pregnancy smoking.
Conclusions: While further analyses should investigate the significant racial and socioeconomic associations observed in this study, Tennessee is experiencing a decline in pregnancy smoking which may be a result of supportive prenatal programs and evidence-based cessation interventions.
99. Redesigning the National Survey of Children’s Health: Results from Cognitive Interviews and Mode Effects Experiments
Michael Kogan, PhD: Reem Ghandour, DrPH: Catherine Vladutiu, PhD: Jessica Jones, MPH
Background: There has been a decline in responses to telephone surveys. In response, HRSA’s Maternal and Child Health Bureau redesigned the National Survey of Children’s Health (NSCH) and the National Survey of Children with Special Health Care Needs (NS-CSHCN) into a single survey and shifted from a telephone-based to an address-based sampling frame.
Objectives: To assess respondents’ interpretation of survey items, response times and navigation of questionnaires, and differences in responses across three modes of survey administration.
Methods: A qualitative assessment of respondents’ interpretation and navigation of questionnaires was conducted via cognitive interviews among parents of children aged 0-17 years (n=64). The mode effects experiment included a sample of households randomly assigned to one of three mode treatments: web (n=1,033), paper (n=466), and telephone (n=440). Unadjusted percentages and 95% confidence intervals were calculated to assess differences in key outcome measures by mode of administration. Further analyses were conducted to isolate mode effects by controlling for differential nonresponse bias and sampling variation.
Results: Content issues observed during cognitive interviews included difficulty with: recall and interpretation of selected items, skip patterns, and instructions. In the mode effects experiment, unadjusted estimates of key measures were similar across each of the three modes, with the largest difference between any pair of modes equaling 7.2 percentage points. Web respondents (93.5+/- 1.5) were more likely than mail (89.0 +/- 2.8) or telephone (87.3 +/- 3.1) respondents to report their child was in excellent or good health (p=0.05). After controlling for socio-demographic characteristics, differences by mode did not change considerably.
Conclusions: The content and structure of questionnaires are important to consider when transitioning from a telephone-based to self-administered mode of administration. This transition may also result in changes to key survey outcomes due to differential nonresponse effects and mode effects that are unrelated to real changes in the population.
100. Short and extremely short interpregnancy intervals: Differences by maternal demographic characteristics
Marie E. Thoma and Sharon E. Kirmeyer.
Background: Short interpregnancy intervals (S-IPI), or IPI less than 18 months, are associated with an increased risk of adverse birth outcomes, such as preterm birth. At the highest risk for these consequences are women with extremely short IPI (ES-IPI), or IPI less than 6 months.
Objective: To examine differences in S-IPI and ES-IPI by selected maternal demographic characteristics using 2013 birth certificate data.
Methods: Data are based on 100% of births registered in 41 states and the District of Columbia that adopted the 2003 revised birth certificate in 2013 (90% of 2013 U.S. births). Live birth intervals (months) were generated from the “Date of last live birth” item and the date of birth occurring in 2013. IPI was calculated by subtracting the gestational age (months) of the birth occurring in 2013 from the live birth interval to get time from a previous birth to conception of the current birth. IPI was compared by race and Hispanic origin, nativity, age at previous birth, and education using z-tests.
Results: S-IPI and ES-IPI occurred in 29% and 4.9% of births, respectively, and were significantly higher for women born in the U.S. compared to elsewhere across race and Hispanic origin groups, except for non-Hispanic black women. ES-IPI was highest among U.S.-born non-Hispanic black (7.6%) and U.S.-born Hispanic (6.8%), ages less than 20 (5.3%), 20-24 (5.6%), and 35+ (5.6%), and no high school (HS) diploma (5.3%). S-IPI was highest for U.S.-born Hispanic (32.1%) and U.S.-born non-Hispanic white (31.0%), ages 35+ (43.2%), college degree or higher (30.6%), and no HS diploma (30.3%).
Conclusion: Demographic patterns differed for ES-IPI and S-IPI. While ES-IPI occurred less frequently, these differences suggest they may be a distinct group and should be examined separately. The birth certificate provides a sufficiently large data source to examine these less frequently occurring IPI lengths.
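The interval arithmetic described in the Methods above can be sketched as follows. The dates and gestational age are illustrative only; the study derives the live-birth interval from the birth-certificate "Date of last live birth" item and the 2013 birth date.

```python
from datetime import date

def ipi_months(last_live_birth, current_birth, gest_age_months):
    """Interpregnancy interval: months between the two live births
    minus the gestational age (months) of the current birth, i.e.
    time from the previous birth to conception of the current one."""
    live_birth_interval = ((current_birth.year - last_live_birth.year) * 12
                           + (current_birth.month - last_live_birth.month))
    return live_birth_interval - gest_age_months

def classify_ipi(months):
    """S-IPI: interval under 18 months; ES-IPI: under 6 months."""
    if months < 6:
        return "ES-IPI"
    if months < 18:
        return "S-IPI"
    return "not short"
```

For example, births 15 months apart with a 9-month gestation give an IPI of 6 months, which falls in the short (S-IPI) but not extremely short (ES-IPI) category.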
101. Intrauterine exposure to hyperglycemia and growth and obesity in infancy and childhood: A prospective cohort study
Yeyi Zhu1, Pauline Mendola1, Katherine Bowers2, Wei Bao1, Shanshan Li1, Edwina Yeung1, Aiyi Liu1, Kelly J. Martin3, Sjurdur Olsen4, Cuilin Zhang1: 1 Division of Intramural Population Health Research, National Institute of Child Health and Human Development, NIH, Rockville, MD, USA; 2 Division of Biostatistics and Epidemiology, Department of Pediatrics, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA; 3 ICF International Corporation, Rockville, MD, USA; 4 Center for Fetal Programming, Department of Epidemiology Research, Statens Serum Institute, Copenhagen, Denmark:
Background: Gestational diabetes (GDM) is related to excessive fetal growth and unfavorable cardio-metabolic risk factors in childhood, but data on long-term impact of gestational hyperglycemia on offspring growth and obesity are limited.
Objective: To investigate the association between maternal fasting plasma glucose (FPG) concentrations in mid-pregnancy and offspring growth and obesity in infancy and childhood.
Methods: We identified 1,379 women with a history of first-time GDM in the Danish National Birth Cohort (1996-2002) as part of the ongoing Diabetes & Women’s Health Study. After excluding women with pre-gestational diabetes, missing FPG concentrations, multiple-/still-births, and medication treatment for GDM, we included 633 mother-offspring dyads. Maternal FPG concentrations measured in mid-pregnancy were extracted from medical records. Offspring’s age- and sex-specific body mass index z-scores (BMIZ) at birth, 5 months, 12 months, and 7 years were calculated based on WHO reference data. Relationships between maternal FPG and offspring obesity were assessed by linear and Poisson regression with robust standard errors after adjustment for maternal socio-demographic and perinatal factors, prepregnancy BMI, and child’s birthweight.
Results: Maternal FPG was significantly and positively associated with ponderal index at birth and BMIZ at 7 years (beta=0.496 and 0.205 per 1-mmol/L increase in FPG; both P less than 0.05), but not at 5 or 12 months, after adjustment for covariates. Each 1-mmol/L increase in maternal FPG was associated with a 23% elevated risk of offspring macrosomia (more than 4 kg; 95% CI: 1.08, 1.40; P=0.002) and a 24% elevated risk of overweight/obesity (above the 85th percentile; 95% CI: 1.01, 1.54; P less than 0.05) at 7 years. Moreover, the association appeared stronger among non-obese women than among their obese counterparts.
Conclusions: Maternal FPG in mid-pregnancy was significantly and positively associated with offspring’s birth size and obesity risk at 7 years, independent of maternal prepregnancy BMI and child’s birth weight.
102. Unmet need for therapy service among children with autism spectrum disorder: Findings from the 2005-06 and 2009-10 NS-CSHCN using imputed and non-imputed files
Henry J. Carretta, PhD, MPH
Assistant Professor, Department of Family Medicine & Rural Health, College of Medicine, Florida State University
Teal W. Benevides, PhD, MS, OTR/L
Assistant Professor, Department of Occupational Therapy, School of Health Professions, Thomas Jefferson University
Shelly J. Lane, PhD, OTR/L, FAOTA
Professor, Department of Occupational Therapy, School of Allied Health, Virginia Commonwealth University
Background: Children with autism spectrum disorder (ASD) represent a growing category of children who have special health care needs. CDC estimates suggest that 1 in 68 children in the U.S. have an ASD. Children with ASD require significant medical, mental health, and therapeutic supports that contribute to greater health care utilization and costs than persons without ASD. The National Survey of Children with Special Health Care Needs (NS-CSHCN) provides an opportunity to examine problems with access to care in this population using two survey waves for the standard public use file as well as the multiple imputation (MI) file to estimate missing values.
Objectives: 1) Examine whether children with ASD received all needed occupational, physical, and speech therapy using the Andersen Behavioral Model (ABM) as a conceptual model and 2) Examine differences between the standard public use and MI NS-CSHCN files.
Methods: The NS-CSHCN for 2005-06 and 2009-10 were concatenated. Models including ABM domains for predisposing, enabling, and need characteristics of sample respondents were used to predict whether the child with ASD received all needed therapy services. Logistic regression models using the standard and MI files were compared using Stata V12, accounting for population weights and survey design. Imputed values for race, ethnicity, and income were used in the MI model.
Results: In both models, ASD children and survey year were predictive of not receiving all needed care. Insurance type, need indicators, and well child visits were also predictive of receiving needed therapy. Point estimates and magnitude were similar in the standard and MI models.
Conclusions: Concatenation for use with MI files was challenging. Registering variables in preparation for the MI required careful study of the Stata documentation. Use of the imputed data for this study added only a few hundred additional observations and did not influence the substantive interpretation of results.
103. The 2012 National Sample Survey of Nurse Practitioners: Study Design and Outcomes
Arpita Chattopadhyay, Chief, Workforce Analysis Branch, Health Resources Services Administration, National Center for Health Workforce Analysis
Background: Nurse practitioners (NPs) are an integral part of the health care system providing services in primary and specialty care. Lack of reliable estimates on workforce size, specialty, and employment characteristics impede workforce planning and policy development. HRSA conducted the first National Sample Survey of Nurse Practitioners (NSSNP) in 2012 to develop national estimates of NP supply and practice characteristics.
Objective: To describe the NSSNP and understand some of its data limitations.
Methods: A single national sampling frame was developed from the licensure data files of all 50 states and DC. A sample of 22,000 NPs with probability proportional to state NP population was drawn. The questionnaire was designed in consultation with stakeholder groups and finalized through cognitive tests. Weights were assigned to individual respondents to obtain a nationally representative sample.
Results: About 13,000 NPs completed the survey, achieving a 60% response rate. The survey reports many important features of NP employment and practice characteristics. These include billing practices, income, the range of services provided by NPs, the extent of physician supervision, and satisfaction related to specific elements of NP work. The survey is also unique because it targets licensed NPs who work as NPs. While the estimates from the NSSNP are comparable to other nurse surveys for most common data elements, differences in the target populations, varying response rates, and non-response bias resulted in some differences between the NSSNP and other surveys. Bias in self-reported measures may have resulted in inaccurate incomes for a small percentage of NSSNP respondents.
Conclusion: The NSSNP is, overall, a rich and accurate source of NP workforce data, available to the public through the HRSA data warehouse website and the Research Data Center at CDC.
104. Dietary Supplement Ingredient Database Release 3.0 (DSID-3): Applications to NHANES Dietary Supplement Data Files
Andrews Karen W, Han Fei, Gusev Pavel A, Dang Phuong-Tan, Savarala Sushma, Pehrsson Pamela R, Dwyer Johanna T, Saldanha Leila G, Betz Joseph M, Costello Rebecca, Bailey Regan L, Douglass Larry; Nutrient Data Laboratory (NDL), BHNRC, USDA; Office of Dietary Supplements (ODS), NIH; Consulting Statistician, Boulder, CO:
Background: Dietary supplement (DS) consumption plays an important role in achieving recommended intakes for a large part of the US population. To better assess the contribution of nutrients from DS to total intake, the Nutrient Data Laboratory at the U.S. Department of Agriculture, in collaboration with the Office of Dietary Supplements at the National Institutes of Health and other federal agencies, developed and maintains the Dietary Supplement Ingredient Database (DSID). DSID provides analytically-derived estimates of ingredient content in nationally representative DS.
Objective: To measure DS ingredient content and link analytically-derived estimates to content indicated by labels reported in the National Health and Nutrition Examination Survey (NHANES) data files.
Methods: Representative DS are identified and sampled by NDL and sent for analysis by laboratories experienced in performing chemical tests on DS. To obtain accurate results, quality assurance and quality control plans are established for each study, with standard reference materials, in-house control materials, and product duplicate results assessed before data are accepted as final. Relationships between label levels and percent difference from label are evaluated with regression analyses weighted by DS market share, when available.
Results: Mean analytical content and SE predicted by regression equations for 18 ingredients in adult multivitamin/minerals (MVM) are linked to NHANES 2003-08 files. The estimates for 16 ingredients in children’s MVM are linked to NHANES 2005-10. The estimates for 20 ingredients in non-prescription prenatal MVM are linked to NHANES 2007-10. Regression based analytical estimates for 3 major fatty acids in omega-3 fatty acid DS are linked to NHANES 2005-10.
Conclusions: The predicted analytical ingredient levels linked to label levels are not specific to any brand and therefore are applicable only to DS reported in large population surveys. The DSID-3 estimates can replace label information to more accurately assess ingredient intakes from DS in such studies.
105. The Great Recession and Hospital Cost Inefficiencies: A Stochastic Frontier Study in Washington State
Background: The recent slowdown in healthcare spending growth may be an indication of hospitals successfully increasing cost-efficiency. It remains unclear whether this trend is structural or predominately tied to the state of the national economy. Previous studies have looked at the association between the growth in healthcare spending and the Great Recession. Little is known about the relationship between providers’ cost efficiency and the recent recession.
Objective: Test the empirical question of whether the Great Recession has significantly affected not only hospitals’ costs but also their cost efficiency performance. Compare community hospitals’ cost inefficiency before and after the Great Recession, using stochastic frontier analysis (SFA) with controls for hospital quality of care and patient burden of illness.
Method: SFA is used to estimate cost inefficiency, decomposing deviations from the best-practice cost frontier into a random and a deterministic error. Cost inefficiency is the ratio of observed total costs to best-practice total costs. Case-mix index and summary performance quality scores for acute myocardial infarction, congestive heart failure, and pneumonia are used to control for output heterogeneity. Data come from the Washington State Department of Health and Medicare Hospital Compare.
Results: Maximum likelihood regressions indicate cost inefficiency was lower before the Great Recession (2005-2007). Following the Great Recession, providers experienced higher cost inefficiency. Improvements in summary performance scores increase inefficiency as well as costs.
Discussion: Hospitals have generally been considered recession-proof. However, the public policy environment today is starkly different from that of past years. Thus, hospitals’ ability to cope with shocks in healthcare demand has significantly diminished. The impact of macroeconomic trends on cost efficiency may be playing a more important role, to the detriment of structural changes such as technological innovation and the advent of managed care.
106. Clusters and Sequences of Obesogenic Behavioral Choices in Early Adulthood: a market basket analysis of prospective national data
Wasantha Jayawardene (MD, PhD), David Lohrmann (PhD, MCHES), Mohammad Torabi (MPH, PhD)
Background: New environments and independent decision-making expose young adults to obesogenic behaviors (OB), which often persist throughout the lifespan and tend to cluster. Knowledge of cross-sectional and sequential associations among OBs, which may depend on sociodemographic factors, is essential for the fine-tuning of behavioral interventions.
Objective: To determine the clustering and sequencing of unhealthy fruit intake, vegetable intake, television viewing, and sleep patterns during ages 18 to 31, analyzing for sex and race/ethnicity differences.
Methods: Interviews at ages 18-22 (T1), 23-27 (T2), and 27-31 (T3) conducted in 2002-11 with 6,399 persons were analyzed. Market basket analysis, a data-mining method used to understand purchasing behaviors, was applied using SAS Enterprise Miner. Association rules were determined using three measures: support greater than 5% (probability of multiple OBs occurring), confidence greater than 50% (probability of one OB following another), and lift greater than 1 (one OB increasing the occurrence of another).
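The three association-rule measures used above can be computed directly from co-occurrence counts. A minimal sketch with hypothetical behavior sets (illustrative only, not the study’s interview data):

```python
# Association-rule metrics (support, confidence, lift) over sets of
# reported obesogenic behaviors. Behavior names and records are
# hypothetical examples, not the study's data.

def rule_metrics(records, antecedent, consequent):
    """Return (support, confidence, lift) for the rule antecedent -> consequent."""
    n = len(records)
    both = sum(1 for r in records if antecedent <= r and consequent <= r)
    has_a = sum(1 for r in records if antecedent <= r)
    has_c = sum(1 for r in records if consequent <= r)
    support = both / n              # P(antecedent and consequent together)
    confidence = both / has_a       # P(consequent | antecedent)
    lift = confidence / (has_c / n)  # confidence relative to baseline P(consequent)
    return support, confidence, lift

# Each record is the set of behaviors one person reported.
records = [
    {"low_fruit", "low_veg", "excess_tv"},
    {"low_fruit", "low_veg"},
    {"excess_tv"},
    {"low_fruit", "low_veg", "excess_tv", "poor_sleep"},
]
s, c, l = rule_metrics(records, {"low_fruit", "low_veg"}, {"excess_tv"})
```

A rule would pass the study’s thresholds only if support exceeded 5%, confidence exceeded 50%, and lift exceeded 1.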
Results: In general, OB clustering was higher among males and blacks/Hispanics and decreased with age. Clustering of four OBs in females significantly decreased in T1-T2 (supportT1=9.1%, supportT2=6.6%) and stabilized; no decrease was observed in males. Four OBs clustered more in blacks/Hispanics across all ages (p less than 0.001). Clustering of inadequate fruit/vegetable consumptions and excessive television-viewing was significantly higher for males across all ages and for blacks/Hispanics in T1 (p less than 0.01). Other OB clusters showed somewhat similar patterns. Those who practiced a given OB in T1-T2 were more likely to practice it and at least one more OB in T3 (confidence greater than 50%, lift greater than 1). Discontinuation of a given OB after T1 decreased its and other OBs’ likelihood of occurrence in T3, although initiation of any OB, except excessive television-viewing, in T2 did not increase this likelihood.
Conclusion: Age, gender, and race/ethnicity affect the OB clustering, while continuation of OBs throughout young adulthood increases the behavioral risk of chronic diseases.
107. Quality-Adjusted Life Years (QALY) for Respondents from the National Health and Nutrition Examination Survey (NHANES) Linked Mortality File
Haomiao Jia, PhD: Department of Biostatistics, Mailman School of Public Health and School of Nursing, Columbia University, New York, NY, USA (e-mail: email@example.com)
Background: Quality-adjusted life years (QALY) is a health outcome measure that combines years of life lived with health-related quality of life (HRQOL). In a longitudinal study, a participant’s QALY is incomplete if the person does not die during the follow-up.
Objective: To estimate QALYs over the remainder of expected life for respondents aged 65 years and older from the National Health and Nutrition Examination Survey (NHANES) Linked Mortality File.
Methods: We ascertained respondents’ HRQOL scores and mortality status from the 2004-2010 NHANES with mortality follow-up data through December 31, 2011 (n=3,680). QALY was estimated using a hybrid estimator that calculated QALY in two parts: QALY during the follow-up period and QALY beyond the follow-up period. The first part was estimated with a Kaplan-Meier-based estimator. The second part was estimated by extrapolating survival time beyond December 31, 2011 using parametric survival models with Weibull distributions. We calculated QALY for those aged 65 years and older, overall and for those with selected chronic conditions.
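The two-part structure of the hybrid estimator can be sketched numerically: quality-weighted survival summed over the follow-up window, plus a Weibull extrapolation of the tail. The survival and HRQOL values below are made up for illustration, and the sketch is simplified relative to the study’s actual estimator:

```python
import math

# Simplified two-part ("hybrid") QALY sketch. All inputs are
# illustrative assumptions, not the study's NHANES-based estimates.

def hybrid_qaly(km_surv, hrqol, interval, shape, scale, horizon=80.0, step=0.01):
    """km_surv[i], hrqol[i]: Kaplan-Meier survival and mean HRQOL for
    interval i of width `interval`; follow-up ends at len(km_surv)*interval."""
    t_end = len(km_surv) * interval
    # Part 1: quality-adjusted person-years within the follow-up window.
    within = sum(s * q * interval for s, q in zip(km_surv, hrqol))
    # Part 2: expected years beyond t_end under Weibull survival
    # S(t) = exp(-(t/scale)**shape), conditional on surviving to t_end,
    # weighted by the last observed survival and HRQOL.
    s_t_end = math.exp(-((t_end / scale) ** shape))
    n = int(horizon / step)
    tail = sum(math.exp(-(((t_end + i * step) / scale) ** shape))
               for i in range(n)) * step / s_t_end
    return within + km_surv[-1] * hrqol[-1] * tail

qaly = hybrid_qaly([1.0, 0.95], [0.85, 0.80], 1.0, 1.3, 12.0)
```

As a sanity check, with shape 1 the Weibull reduces to an exponential whose mean residual life equals its scale, so the tail contribution approaches the scale parameter.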
Results: For those aged 65 and older, QALY over the remainder of expected life was 12.3 years (10.2 years for men, 14.4 years for women). For those with depression, QALY was 4.5 years; for those with diabetes, 8.1 years; for those with hypertension, 11.9 years; for those with heart diseases, 9.0 years; for those with stroke, 6.8 years; for those with emphysema, 5.2 years; for those with asthma, 8.1 years; for those with arthritis, 12.1 years; and for those with cancer, 10.2 years.
Conclusions: This study presents a hybrid QALY estimator for respondents in a follow-up study. The analyses show good precision and relatively small bias in estimating QALY even with relatively small sample sizes. In this study, depression contributed the biggest QALY loss, indicating that mental health in the elderly warrants more attention.
108. Producing Synthetic Estimates of Children’s Health and Well-Being for Local Areas
Mark Mather, Beth Jarosz, Linda A. Jacobsen, Eva Hawes, Narangerel Gombojav, Christina Bethell
Background: While child health data are readily available for the nation and states, city- and county-level health departments have numerous constraints in obtaining reliable child health data for community-based needs assessments and population health improvement efforts.
Objective: The objective of this research is to determine the validity of a synthetic estimation model that combines prevalence data from the National Survey of Children’s Health and population data from the American Community Survey to estimate children’s health and well-being in cities and counties.
Methods: This project combines data from the National Survey of Children’s Health (2011-2012) and the Census Bureau’s American Community Survey (2010-2012) to produce synthetic local area estimates of children’s health and well-being. The synthetic estimates are constructed by applying estimated prevalence rates for a “parent geography” (state or combination of states) from the NSCH, broken down by race/ethnicity and family income, to race- and income-specific population estimates at the sub-state (city or county) level. Model performance is evaluated using Mean Absolute Percent Error (a measure of model accuracy) and Mean Algebraic Percent Error (a measure of model bias). The best-fit model is then applied to cities and counties with 2012 populations of 100,000 or more.
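The core synthetic-estimation step and the two error measures can be sketched as follows; the subgroup rates and population counts are illustrative assumptions, not NSCH or ACS figures:

```python
# Synthetic estimation sketch: apply subgroup prevalence rates from a
# "parent geography" to a local area's subgroup population counts, and
# evaluate models with MAPE (accuracy) and MALPE (bias).

def synthetic_count(parent_prevalence, local_population):
    """Expected local cases = sum over subgroups of rate * population."""
    return sum(parent_prevalence[g] * local_population[g]
               for g in local_population)

def mape(estimates, actuals):
    """Mean Absolute Percent Error: average magnitude of error."""
    return sum(abs(e - a) / a for e, a in zip(estimates, actuals)) / len(actuals) * 100

def malpe(estimates, actuals):
    """Mean Algebraic Percent Error: signed errors, so over- and
    under-estimates cancel; nonzero values indicate systematic bias."""
    return sum((e - a) / a for e, a in zip(estimates, actuals)) / len(actuals) * 100

# Illustrative race/ethnicity-by-income subgroups for one county.
rates = {("hispanic", "low_income"): 0.12, ("white_nh", "low_income"): 0.08}
pop = {("hispanic", "low_income"): 10_000, ("white_nh", "low_income"): 5_000}
cases = synthetic_count(rates, pop)
```

In a validation setting, the synthetic counts would be compared against directly measured local values across many areas to compute MAPE and MALPE.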
Results: We found that a model built upon four racial/ethnic categories and four income levels provided the best-fit model for implementation. In addition, sensitivity testing for an appropriate scale of “parent geography” demonstrated that applying prevalence rates from a larger parent geography (e.g. nation, census region, state) to a smaller estimated geography produced reasonably accurate estimates of prevalence.
Conclusions: We demonstrate how models can be used to construct reasonable synthetic estimates of children’s health and well-being for local areas by combining state-level prevalence data with population data from the U.S. Census Bureau.
109. Modeling Obesity Associated Years of Life Lost: A Significance Test to Compare Predictive Accuracies of Non-Nested Models
Tapan Mehta, David B. Allison
Background: Estimating the years of life lost (YLL) associated with obesity (body mass index [BMI, kg/m2] greater than or equal to 30) is an important question for policy makers. BMI categories based on BMI at the time of survey (baseline) are widely used to estimate obesity associated YLL. Recently, it has been suggested that maximum BMI, which may be robust to confounding by reverse causation, is a more appropriate measure when estimating obesity-mortality associations in older adults. Statistically, this reduces to evaluating the predictive accuracy of two non-nested models.
Objective: To develop a simple significance test to compare predictive accuracies of competing non-nested models for censored data.
Methods: We used an approach analogous to leave-one-out cross-validation and estimated errors defined by a quadratic loss function in a test sample. These errors are predictive accuracy measures of a parametric survival regression model and describe the variation explained by the model. A significance test was proposed to test for differences in the predictive accuracies of two competing non-nested models. The validity of the proposed test was evaluated by estimating the type 1 error rates under the null hypothesis at three levels of censoring (0%, 16% and 50%). Finally, we illustrated the method by comparing the predictive accuracies of maximum BMI versus baseline BMI using the National Health and Nutrition Examination Survey data.
Results: Simulation results indicate that our proposed test that evaluates differences in predictive accuracies, based on a quadratic loss function, is valid even in the presence of up to 50% random censoring.
Conclusions: Our proposed significance test provides a simple and valid framework to evaluate competing non-nested models. Analysts can use this approach to compare which of the two (or more) survival models better characterizes the observed data even when neither model is nested within the other.
110. Two Approaches to Cognitively Testing Questions for Federal Surveys
Rikki Welch, Senior Research Analyst, ICF International, and Arlen Rosenthal, Principal, ICF International
Background: Cognitive interviewing is a qualitative method of evaluating survey questions. It examines the respondent’s question-response processes to ensure questions are clear and understood as intended by the questions’ authors. Testing is frequently conducted through in-depth interviews with a purposive sample of respondents similar to those taking the final survey. The process can uncover problems with questions and reduce sources of response error before the full survey is fielded.
Objective: The objective of this study is to explore two approaches for testing questions for federal surveys and analyze the appropriateness of each.
Methods: Cognitive testing was done for two federal agencies (names masked for confidentiality).
For Agency A: 1) Questions were tested with the general public; 2) Eight participants were recruited by a professional recruitment firm; 3) Participants received an incentive of 75 dollars; 4) Interviews were held in a professional facility with client observers; 5) The think-aloud and concurrent probing methods of cognitive testing were used; participants read the questions aloud and worked through thought processes and the researchers’ probes verbally. For Agency B: 1) Questions were tested with medical specialists; 2) Nine participants were recruited from a list provided by the Agency; 3) Participants were not paid; 4) Interviews took place at the participants’ offices; 5) Concurrent probing methods and participant volunteering of information were used; selected probes asked about specific words or phrases and asked how participants arrived at answers.
Results: Both approaches yielded quality results, demonstrating that interviews with hard-to-recruit professionals can be successfully completed at the participants’ worksites. When testing with the general public, however, projects gain credibility by holding interviews in a facility. Professional recruitment firms and facilities are more costly but may be necessary depending on the participant profile.
Conclusions: Differing approaches to cognitive testing can yield quality outcomes; the approach should be driven by participant and agency needs.
111. Operationalizing National Health Objectives at the State Level in Hawaii
Julia Chosy, Ranjani Starr, Tonya Lowery St. John, and Dulce Belen
Background: The U.S. Department of Health and Human Services provides a national framework for improving the health of all Americans in the form of Healthy People 2020 (HP2020). HP2020 is a collection of important health measures, each with a target value to be achieved by the year 2020. Although measured at the national level, states can also use this framework to compare and track the health of their residents, provided a data source is available.
Objective: Create and maintain an open-access tracker for the state of Hawaii based on the HP2020 objectives.
Methods: A comprehensive review of 1,202 HP2020 objectives was conducted to determine the national data source, the definition of the measure, and whether state-level data were available. Next, we engaged our program partners at the Hawaii State Department of Health to ask which objectives had the highest priority within each of their programs. The Hawaii State Healthy People 2020 Tracker was created on an existing website, Hawaii Health Matters.
Results: Currently, Hawaii’s HP2020 tracker contains 316 objectives, including 179 that were identified as high priority by the Department of Health. The tracker pulls from 35 data sources, with the top 3 being the Behavioral Risk Factor Surveillance System (BRFSS), Vital statistics, and the Youth Risk Behavior Survey (YRBS). In 2014, the landing page for the tracker was viewed 4,700 times by almost 1,900 unique visitors.
Conclusions: Although we faced some challenges finding state-level data and measures that exactly matched the HP2020 definitions, we created a state-level tracker that shows Hawaii’s progress toward nationally-established health objectives. The tracker is regularly updated and continues to grow as we add new measures.
112. Surveillance of Non-Fatal Overdoses in Maryland: Drug and Alcohol-related Emergency Department Visit Trends, 2008-2013
Roengrudee Patanavanich, MD, PhD, Andrea Bankoski, MPH, Isabelle Horon, DrPH
Background: Unintentional drug intoxication deaths have been increasing in Maryland since 2010, reflecting the rising trend in the United States. Drug intoxication deaths are potentially preventable with substance abuse interventions and outreach. Surveillance on non-fatal overdoses informs outreach and intervention efforts to target Marylanders in need of substance abuse prevention and treatment services.
Objective: The Maryland Department of Health and Mental Hygiene’s Vital Statistics Administration (VSA) examined non-fatal drug and alcohol-related overdose trends among Maryland residents during the years 2008 to 2013.
Methods: VSA developed a methodology to identify drug and alcohol-related emergency department (ED) visits using outpatient discharge data files obtained from the Health Services Cost Review Commission (HSCRC). HSCRC outpatient files contain discharge medical record abstract and billing data from 46 acute care and five specialty hospitals. International Classification of Diseases (ICD-9) codes were used to identify drug and alcohol-related visits. Discharges related to attempted suicide, self-injury, and assault codes were excluded from analysis. Age-adjusted rates of drug and alcohol-related ED visits for Maryland residents were examined from 2008 to 2013 by age, race/ethnicity, sex, county of residence, and type of substance.
Results: During this 6-year period, the rate of all drug and alcohol-related ED visits showed a 37% increase. The rate of heroin-related ED visits nearly quadrupled from 5.3 per 100,000 population in 2008 to 19.2 per 100,000 population in 2013 – an average increase of 44% per year. Between 2011 and 2013, the average increase was 53%. Rates were highest among non-Hispanic White males ages 25-44 years. Rates in five jurisdictions were significantly higher than the State average.
Conclusions: Trends in drug and alcohol-related ED visits mirror the upward trend in drug and alcohol-related intoxication deaths in Maryland. Data findings can be used to illustrate the progression of an epidemic of drug overdoses and prevent fatal overdoses.
113. Potential Gains in Life Expectancy from Reductions in Leading Causes of Death, Los Angeles County
Alex Ho, Heena Hameed, Alice W. Lee, Margaret Shih
Background: Despite gains in life expectancy at birth across the United States (US), significant disparities persist among racial/ethnic groups.
Objective: This study quantifies theoretical race/ethnicity-specific gains in life expectancy from full or partial elimination of leading causes of death in Los Angeles County (LAC), the most populous county and one of the most diverse populations in the US.
Methods: Complete annual LAC life tables for 2000-2010 were created using death registration records and population estimates for residents, applying the same method used for the National Center for Health Statistics US life tables published in 1999. We examined race/ethnicity-specific life expectancy trends, and also calculated potential gains in life expectancy using scenarios of 10%, 20%, 50%, and 100% elimination of leading causes of death for 2010.
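The cause-elimination calculation can be sketched with a simplified single-decrement life table: reduce each age-specific death rate by the eliminated cause’s share of all-cause mortality and recompute life expectancy. This proportional shortcut and the constant illustrative rates are assumptions; the study applied the full NCHS complete life table methodology:

```python
import math

# Simplified cause-elimination life table sketch. Rates below are
# illustrative, not LAC mortality data.

def life_expectancy(mx):
    """e0 from single-year age-specific death rates mx, assuming deaths
    occur mid-interval; the open-ended tail uses the last observed rate."""
    survivors, e0 = 1.0, 0.0
    for m in mx:
        q = 1.0 - math.exp(-m)        # probability of dying in the interval
        deaths = survivors * q
        e0 += survivors - deaths / 2  # person-years lived in the interval
        survivors -= deaths
    # Open-ended interval: remaining survivors live about 1/m more years.
    return e0 + (survivors / mx[-1] if mx[-1] > 0 else 0.0)

def gain_from_elimination(mx, cause_frac, p):
    """Years of e0 gained if fraction p of a cause, whose share of
    all-cause mortality at each age is cause_frac, were eliminated."""
    reduced = [m * (1.0 - p * f) for m, f in zip(mx, cause_frac)]
    return life_expectancy(reduced) - life_expectancy(mx)

# 100% elimination of a cause responsible for half of deaths at every age.
gain = gain_from_elimination([0.02] * 100, [0.5] * 100, 1.0)
```

With a constant hazard, halving mortality doubles life expectancy, so the gain here is about 50 years; realistic age-varying rates produce the far smaller gains reported in the study.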
Results: LAC residents regardless of race/ethnicity or gender enjoyed steady improvements in life expectancy over the period. In 2010, Asians/Native Hawaiian or other Pacific Islanders had the longest life expectancy, while blacks had the shortest, representing a ten-year difference. Coronary heart disease, the leading cause of death, was found to exert the greatest impact on life expectancy. A complete elimination would result in life expectancy gains ranging from 2.2 years among white females to 3.7 years among black males. Elimination of lung cancer had the second greatest impact with close to one year gained for both males and females. However, marked disparities in life expectancy gains were noted for the elimination of several other causes of death, particularly for homicide, where the gain among black males would exceed 13 times that of their white counterparts.
Conclusions: Quantifying the impact of specific causes of death on life expectancy can be useful in formulating local health policies and prioritizing public health efforts. Reducing mortality from certain causes, such as homicide, could help more effectively narrow racial/ethnic disparities in life expectancy.
114. Mortality Risk Prediction Using Biomarkers Versus Participant-Reported Outcomes: A Comparison of Leukocyte Telomeres with Self-Rated Health
Steven D. Barger, Ph.D. Department of Psychological Sciences, Northern Arizona University.
Background: Leukocyte telomeres are nucleotide sequences that cap the end of chromosomes. Telomeres shorten as cells divide, indicating cellular level aging (Sanders & Newman, 2013). Although telomere length has been associated with cardiovascular disease (CVD) mortality and all-cause mortality, the clinical utility of a biomarker is established not in isolation but by demonstrating health risk prediction beyond that captured by other health indicators (Ioannidis & Panagiotou, 2011). Therefore, this study compared the prognostic value of telomeres with self-rated health, a participant-reported assessment of health status.
Objective: To compare the relative predictive strength of telomere length with self-rated health for all-cause mortality and mortality from CVD in a diverse probability sample of adults 20 years and older.
Methods: Prospective observational study of participants in the 1999-2002 National Health and Nutrition Examination Survey who consented to DNA analysis of stored blood samples (N = 7803). Vital status was ascertained in 2011 providing 9-12 years of follow up. Cox regression models were used to estimate mortality risk. All models included telomere length and self-rated health. These models incorporated the complex survey design, were stratified by age and gender, and simultaneously adjusted for race/ethnicity, education, and diagnosed chronic diseases.
Results: No association between telomere length and CVD mortality or all-cause mortality was observed (hazard ratios [HR] = 0.99 [95% CI 0.50-1.95] and 0.86 [95% CI 0.60-1.21], respectively). In contrast, self-rated health showed a consistent graded association with death from CVD (very good/excellent versus poor health HR = 0.13 [95% CI 0.07-0.23]) and death from all causes (HR = 0.26 [95% CI 0.19-0.34]).
Conclusions: Leukocyte telomere length was unrelated to CVD mortality or all-cause mortality in this large prospective study. In contrast, self-rated health, a simple and inexpensive participant-reported health assessment, strongly predicted mortality risk over 9-12 years.
115. Death Certification Inaccuracies and the Validity of Public Health Statistics
Kirk Bol, MSPH, Colorado Department of Public Health and Environment
Dylan Norton, MD, University of Colorado School of Medicine
Philip Boyer, MD, PhD, Eastern Carolina University, Brody School of Medicine
Robert Low, MD, PhD, University of Colorado School of Medicine
Background: Death certificate data form the basis for mortality statistics. Errors in certificate completion threaten the accuracy of these statistics. Reliance on flawed public health statistics may lead to misdirection of public health resources and policies.
Objective: The goal of this project was to assess the accuracy of death certificate-derived mortality data.
Methods: Clinical and autopsy records for deaths in a single tertiary care institution in the Denver metropolitan area were examined for hospitalized patients. A senior pathologist, blinded to the original cause of death determination, completed a mock death certificate for each case based on clinical history and post-mortem findings. Mock certificates were processed by the National Center for Health Statistics (NCHS) for cause-of-death coding. The revised and original causes of death were compared and analyzed for discrepancies. Certificates were examined to see if the original certifying physician was aware an autopsy had been performed and whether autopsy findings were used when completing the death certificate.
Results: The records reviewed consisted of 227 adult patients who died in a natural manner and of medical causes during 2010 and 2011 and for whom an autopsy was performed. Among these deaths, 70% of underlying causes of death changed from one ICD-10 code to another based on the mock death certificates. In 35% of cases, the change resulted in a shift from one ICD major chapter to another. In only two instances (0.9%) did the certifying physician consult autopsy results and amend the original death certificate based on autopsy findings.
Conclusions: These results suggest that public vital statistics mortality data may be flawed. Measures should be taken to reduce errors in death certification and to improve the quality of vital statistics data.
116. Data-driven community health assessments rely on vital statistics to identify health problems
Deborah Lischwe, M.S. Michelle Bunyer, M.A. University of Illinois College of Medicine-Rockford
Background: Health departments must understand and address the most significant local health problems in order to improve community health. In Illinois, all local health departments are mandated to identify their top health problems every five years. Most community health assessments use both primary and secondary data. A leading source of secondary (already collected) data is Vital Statistics, particularly birth and death data from the National Center for Health Statistics' CDC WONDER database, which typically form the foundation of these assessments. CDC WONDER includes death data for every county in the nation and birth data for counties with populations greater than 100,000. CDC WONDER calculates rates for birth and death events and age-adjusts death rates.
Objective: To identify the most important health problems in McHenry County, Illinois.
Methods: Detailed and historical data on population, socioeconomic status, health status, and health behaviors were collected and analyzed for McHenry County, Illinois, in 2014. Time trends, demographic breakdowns, and state and national comparisons complemented the most current county data. Through CDC WONDER, relatively recent and complete data about births and deaths were gathered and analyzed.
Results: Using these data, the McHenry County Healthy Community partners, a collaborative effort led by the McHenry County Department of Health, identified the following top health problems: (1) ethnic disparities in birth outcomes and infant death; (2) lung cancer mortality; (3) poisoning deaths (accidental deaths due to noxious substances and drug overdose); and (4) premature death among Hispanics.
Conclusions: Analysis of Vital Statistics data from CDC WONDER enabled the McHenry County Healthy Community Collaborative to identify serious health problems for their community’s attention.
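As background on the method mentioned above, the age adjustment CDC WONDER applies to death rates is direct standardization: stratum-specific rates are weighted by a standard population's age distribution. A minimal sketch in Python; all counts and weights below are hypothetical, chosen only to illustrate the calculation:

```python
# Direct age adjustment of a death rate (the method CDC WONDER uses).
# All counts and weights below are hypothetical, for illustration only.
age_strata = [
    # (deaths, county population, standard-population weight)
    (50,  40_000, 0.30),   # e.g. under 35
    (120, 35_000, 0.40),   # e.g. 35-64
    (600, 15_000, 0.30),   # e.g. 65 and over
]

# Crude rate: total deaths / total population, per 100,000
crude = 100_000 * sum(d for d, p, w in age_strata) / sum(p for d, p, w in age_strata)

# Age-adjusted rate: stratum-specific rates weighted by the
# standard population's age distribution (weights sum to 1)
adjusted = 100_000 * sum(w * d / p for d, p, w in age_strata)

print(f"crude: {crude:.1f}, age-adjusted: {adjusted:.1f} per 100,000")
```

Age adjustment lets counties with different age structures be compared fairly: a county that skews older can show a high crude death rate yet an unremarkable age-adjusted rate.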
117. An X-Linkage May Explain the Under-5 Death Rate of Males That Is 25% Higher Than That of Females
David T. Mage and E. Maria Donner
Background: We noticed that the male fraction of post-neonatal sudden infant death syndrome (SIDS) was relatively constant at 0.61 +/- 0.01 across global data sets. We hypothesized that this male fraction was caused by an X-linkage, and we have since been unable to reject this hypothesis. Furthermore, we found that infant deaths from respiratory failure all show approximately the same 50% male excess rate, whereas infant deaths from cardiac failure show no (0%) male excess.
Objective: To find an X-linked dominant allele in Hardy-Weinberg equilibrium with frequency p = 1/3 that predicts a 50% male excess susceptibility to SIDS [recessive allele frequency q = 1 - p = 2/3; q/(q x q) = 1.5] if and when an infant suffers a potentially fatal acute anoxic encephalopathy.
Methods: wonder.cdc.gov provides U.S. mortality data with which to test the X-linkage hypothesis of a 50% male excess rate of infant respiratory deaths and a 0% male excess rate of infant cardiac deaths. Given that all natural deaths are either respiratory (breathing stops first) or cardiac (heart stops first), assuming females have equal risk of respiratory and cardiac failure predicts an overall average 25% male excess in the total death rate for all children under 5 years.
Results: For 1968-2013, wonder.cdc.gov reports m = 1,206,923 male and f = 920,109 female deaths under age 5, and M = 434,495,355 male and F = 415,309,285 female years at risk. The ratio of the male to female death rate = (m/M)/(f/F) = 1.2538 vs. the predicted 1.2500 (chi-square, 1 df = 4.8, p = 0.0285).
Conclusions: The hypothesis of an X-linkage as the cause of the 50% higher male rate of under-5 respiratory mortality, and hence the 25% higher male rate of all under-5 mortality, is consistent with these data. Discovery of such a protective dominant allele could lead to prophylaxis against SIDS and other respiratory failures caused by acute anoxic encephalopathy.
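The rate ratio and test statistic reported above can be reproduced from the published counts. A short sketch; the form of the chi-square (comparing the observed male/female split of deaths with the split expected under an exact 1.25 rate ratio) is an assumption, since the abstract does not specify which test was used:

```python
# Counts reported in the abstract (U.S. deaths under age 5, 1968-2013)
m, f = 1_206_923, 920_109            # male, female deaths
M, F = 434_495_355, 415_309_285      # male, female person-years at risk

ratio = (m / M) / (f / F)            # observed male:female death-rate ratio
print(f"observed ratio: {ratio:.4f}")  # ~1.2538, vs 1.2500 predicted

# Chi-square (1 df): observed deaths vs. those expected if the male rate
# were exactly 1.25x the female rate (assumed form of the authors' test).
lam = (m + f) / (1.25 * M + F)       # female death rate under the null
exp_m, exp_f = 1.25 * lam * M, lam * F
chi2 = (m - exp_m) ** 2 / exp_m + (f - exp_f) ** 2 / exp_f
print(f"chi-square (1 df): {chi2:.1f}")  # ~4.8, as reported
```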
118. The cause of Sudden Infant Death Syndrome (SIDS) may be found in wonder.cdc.gov
David T. Mage and E. Maria Donner
Background: In Sudden Unexpected Infant Death (SUID) cases there is a complete absence of prodromal symptoms, or of any departure from normality, that would have justified the parents in seeking medical advice. When autopsy and/or scene investigation findings are negative (SIDS, Unknown, Suffocation), the causal factors are invisible or unmeasured, but they may be revealed by epidemiology applied to CDC's vital statistics compendia for ICD-10 codes R95, R99, and W75, respectively.
Objective: To develop a hypothesis for SUID and SIDS causation that is consistent with wonder.cdc.gov vital statistics data.
Methods: SIDS has a unique 4-parameter lognormal age distribution with a consistent 50% male infant excess, similar to that for deaths by respiratory failure, implying similar terminal events (acute anoxic encephalopathies). We developed a probability model to fit the epidemiology of an infant death caused by an occult prodromal respiratory infection that fulminates during sleep into bacteremia without complications, leading to death before an appreciable amount of lung parenchyma has become involved.
Results: Given wonder.cdc.gov SUID data by live-birth order (LBO), from one to six-or-more, we assume the infant's number of cohabiting family members (CFM) = 2 parents + (LBO - 1) siblings = LBO + 1. If the probability of a CFM not having a respiratory infection (symptomatic or asymptomatic) is 0.9, the infant will be exposed to at least one carrying family member with probability [1 - 0.9^(LBO+1)]. A plot of the SUID rate as a function of increasing LBO is fit virtually perfectly by the model SUID rate = 0.00345*[1 - 0.9^(LBO+1)]. We relate the constant 0.00345 to an X-linked q = 2/3 recessive allele in Hardy-Weinberg equilibrium, which provides the 50% male excess susceptibility to SUID, and to physiological anemia, which places only the most anemic infants (below -2.5 sigma hemoglobin) at risk of anoxemia.
Conclusion: Given that SUID rates at all live-birth orders are predicted by LBO, a respiratory infection may be a factor in all SUID.
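The fitted model stated in the results above is simple to evaluate directly. A minimal sketch using the abstract's own constants (the fitted scale 0.00345 and the 0.9 per-member infection-free probability):

```python
# SUID rate model from the abstract: an infant with live-birth order LBO
# has LBO + 1 cohabiting family members (2 parents + LBO - 1 siblings).
# If each member is infection-free with probability 0.9, the probability
# of exposure to at least one carrier is 1 - 0.9**(LBO + 1).
def suid_rate(lbo: int) -> float:
    """Predicted SUID rate for live-birth order lbo (abstract's fitted model)."""
    return 0.00345 * (1 - 0.9 ** (lbo + 1))

for lbo in range(1, 7):
    print(f"LBO {lbo}: predicted SUID rate = {suid_rate(lbo):.5f}")
```

The model rises monotonically with live-birth order and saturates toward 0.00345 as the household grows, matching the shape the abstract describes.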
119. Contributory causes of death among adult sickle cell disease (SCD) patients
Seyed Mehdi Nouraie, Naveed Chaudhry, Victor Gordeuk
Background: The average lifespan of those with SCD is shorter than normal, reflecting increased mortality due to complications of the disease process. Current information about the causes of death in adult patients is scarce and has not accounted for the underlying causes of death in the African-American population.
Objective: To compare the major contributory causes of death in adult SCD mortality with those in all other African-American non-SCD mortality.
Methods: Multiple Cause-of-Death Mortality Data from the National Center for Health Statistics were used for this study. Data from 1999 to 2007 were extracted, and all African-American deaths at ages >20 years were selected. Deaths due to external causes were excluded. Standardized mortality ratios (SMRs) were calculated from the number of observed deaths from each contributory cause in SCD and the number of expected deaths in each age category from other causes.
Results: Among 2,246,567 adult African-American deaths, 52% were female. SCD was the primary cause of death in n = 3,767 (0.17%). The most common causes of death in these patients were renal failure, pulmonary circulation disorders, cerebrovascular accident, liver disease, and influenza. Compared with people who did not die primarily of SCD, genitourinary (SMR=1.91), digestive (SMR=1.87), circulatory (SMR=1.84), musculoskeletal (SMR=1.51), respiratory (SMR=1.48), and infectious (SMR=1.45) diseases were more frequent in patients who died of SCD, while neoplastic, endocrine, nervous, and mental diseases were less frequent. Men with SCD were more prone to skeletal disorders (SMR=2.23 vs. 1.16 in women); otherwise there were no significant differences between the two sexes.
Conclusions: SCD affects multiple organ systems. Renal, cardiopulmonary, and liver disease contributed to most deaths in adult patients with SCD. Causes of mortality in SCD should be assessed in comparison with the rest of the African-American population.
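As context for the method described above, a standardized mortality ratio is the ratio of observed to expected deaths, with expected deaths obtained by applying reference-population cause-specific death proportions within each age stratum. A minimal sketch; all counts and proportions below are hypothetical, for illustration only:

```python
# Indirect standardization: SMR = observed deaths / expected deaths.
# Expected deaths apply the reference (non-SCD) proportion of deaths
# from cause X in each age band to the SCD deaths in that band.
# All numbers below are hypothetical, for illustration only.
strata = [
    # (observed cause-X deaths in SCD group, total SCD deaths in band,
    #  reference proportion of deaths from cause X in this age band)
    (12, 400, 0.020),
    (25, 900, 0.022),
    (18, 700, 0.021),
]

observed = sum(o for o, n, p in strata)
expected = sum(n * p for o, n, p in strata)
smr = observed / expected
print(f"SMR = {smr:.2f}")  # > 1 means cause X is over-represented in SCD deaths
```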
120. Do Pregnant Women Prefer CNMs in the District?
Rowena Samala, MPH and Nikhil Roy, MSc.
Background: Of the 67,000 singleton births occurring in the District of Columbia from 2009 to 2013, about 7 percent were attended by Certified Nurse Midwives (CNMs). The proportion of CNM-attended births in the District has more than tripled during this 5-year period.
Objective: To study the characteristics of CNM-attended births in the District of Columbia.
Methods: Birth data were obtained from the District of Columbia Electronic Birth Registration System (EBRS), which yielded 58,730 singleton live births from 2009-2013. Logistic regression was employed to identify maternal risk factors and characteristics of labor and delivery associated with CNM-attended births.
Results: Only 1 percent of singleton babies in the District were delivered outside a hospital setting; however, 83.5 percent of these were delivered by CNMs. White mothers were 63 percent more likely to have been attended by CNMs compared with non-white mothers (OR: 1.6, p < 0.0001), and women with Medicaid or another government payment source were 57 percent more likely to be seen by CNMs. Low-risk pregnancies were more likely to receive care from CNMs at delivery: first-time mothers (OR: 1.40, p < 0.0001), women with no pre-pregnancy diabetes (OR: 5.46, p < 0.0001), and women with no gestational diabetes (OR: 1.42, p < 0.0001). Interestingly, women who had a previous C-section were 1.65 times as likely to be seen by a CNM at the current delivery as those who had not (OR: 1.65, p < 0.0001). Women who did not have epidural or spinal anesthesia during labor were nearly 3 times more likely to have been seen by a CNM (OR: 2.89, p < 0.0001), while those who delivered vaginally were 11 times more likely (OR: 11.0, p < 0.0001). Term babies and normal-weight babies were 2 and 1.4 times, respectively, more likely to have been attended by CNMs.
Conclusion: With the multitude of decisions associated with having a baby, it is unclear whether pregnant women have a real choice in selecting their caregiver at the time of delivery. More research is warranted in understanding the healthcare-seeking patterns among this population.
121. Prevalence of Hyperhomocysteinemia and Its Correlation with Vitamin B12 Deficiency: Data from the National Health and Nutrition Examination Survey (NHANES), 2005-2006
Nowshad G, Bangash A.M, Garza N.J, Ayass M.A
Background: Hyperhomocysteinemia is directly associated with cardiovascular disease risk through irritation of the blood vessel lining, increased smooth muscle cell proliferation, and thrombus formation. The condition is seen in about 12 percent of the general population but is more prevalent in vitamin B12-deficient patients. Genetic abnormalities in enzymes and a deficiency in B vitamins lead to elevated blood concentrations of total homocysteine (tHcy).
Objective: To estimate the prevalence of hyperhomocysteinemia in a nationally representative sample of adults in the United States during 2005-2006.
Methods: Data from the National Health and Nutrition Examination Survey (NHANES), 2005-2006, were used to estimate prevalence (n = 31,336). Demographic, socioeconomic, dietary, and health-related data were collected in participants' homes as part of the household interview. Respondents were asked about diseases, and blood levels of plasma homocysteine, nutrients, and vitamins were measured. Hyperhomocysteinemia was defined as a homocysteine level greater than 11.1 micromol/L, and a person was considered vitamin B12 deficient if the blood level was less than 0.15 mcg. Descriptive estimates were generated, and significance tests were used to test for statistically significant differences.
Results: Three thousand seven hundred subjects (12.8 percent) had hyperhomocysteinemia; they were mostly older, male, and diabetic. Vitamin B12 deficiency was present in 0.2 percent (56/31,336) of the study population, but among those with high homocysteine levels the deficiency rate was 0.45 percent, compared with 0.14 percent among those with normal homocysteine levels. Multivariate analysis indicated that vitamin B12 deficiency (OR 3.21, p = 0.001) was a statistically significant determinant of hyperhomocysteinemia after controlling for demographics.
Conclusions: The association of vitamin B12 deficiency with hyperhomocysteinemia gives hope that nutritional supplementation in vitamin-deficient populations can reduce thrombotic risk. Use of this information in routine care by healthcare providers could play a vital role in the early detection and prevention of heart disease and stroke.