Abstract:
BACKGROUND: In equine laminitis, the deep digital flexor muscle (DDFM) appears to have increased muscle force, but evidence-based confirmation is lacking. OBJECTIVES: The purpose of this study was to test whether the DDFM of laminitic equids has an increased muscle force detectable by needle electromyography interference pattern analysis (IPA). ANIMALS AND METHODS: The control group included six Royal Dutch Sport horses, three Shetland ponies and one Welsh pony [10 healthy, sound adults weighing 411 ± 217 kg (mean ± SD) and aged 10 ± 5 years]. The laminitic group included three Royal Dutch Sport horses, one Friesian, one Haflinger, one Icelandic horse, one Welsh pony, one miniature Appaloosa and six Shetland ponies (14 adults, weight 310 ± 178 kg, aged 13 ± 6 years) with acute/chronic laminitis. The electromyography IPA measurements included firing rate, turns/second (T), amplitude/turn (M) and the M/T ratio. Statistical analysis used a general linear model with outcomes transformed to geometric means. RESULTS: The firing rate of the total laminitic group was higher than that of the total control group. This difference was smaller for the ponies than for the horses: in the horses, the geometric mean difference of the laminitic group was 1.73 [geometric 95% confidence interval (CI) 1.29-2.32], whereas in the ponies this value was 1.09 (geometric 95% CI 0.82-1.45). CONCLUSION AND CLINICAL RELEVANCE: In human medicine, an increased firing rate is characteristic of increased muscle force. Thus, the increased firing rate of the DDFM in the context of laminitis suggests an elevated muscle force. However, this appears to be only a partial effect: in this study, the unchanged turns/second and amplitude/turn failed to demonstrate recruitment of larger motor units with larger-amplitude motor unit potentials in laminitic equids.
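Geometric mean ratios with CIs, like the 1.73 (geometric 95% CI 1.29-2.32) reported above, are typically obtained by analysing log-transformed outcomes and exponentiating the result. A minimal sketch with hypothetical firing-rate values (not the study's data):

```python
import math

def geometric_mean_ratio(group_a, group_b, z=1.96):
    """Ratio of geometric means with a Wald 95% CI computed on the log scale."""
    log_a = [math.log(x) for x in group_a]
    log_b = [math.log(x) for x in group_b]
    mean_a = sum(log_a) / len(log_a)
    mean_b = sum(log_b) / len(log_b)
    var_a = sum((x - mean_a) ** 2 for x in log_a) / (len(log_a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in log_b) / (len(log_b) - 1)
    diff = mean_a - mean_b  # log of the geometric mean ratio
    se = math.sqrt(var_a / len(log_a) + var_b / len(log_b))
    return math.exp(diff), math.exp(diff - z * se), math.exp(diff + z * se)

# Hypothetical firing-rate values, illustrative only:
ratio, lo, hi = geometric_mean_ratio([120, 150, 140, 160], [90, 100, 85, 95])
```

A CI whose lower limit stays above 1 (as in the horses here) indicates a significant increase; one that spans 1 (as in the ponies, 0.82-1.45) does not.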
Abstract:
AIMS Proprotein convertase subtilisin kexin 9 (PCSK9) is an emerging target for the treatment of hypercholesterolaemia, but the clinical utility of PCSK9 levels to guide treatment is unknown. We aimed to prospectively assess the prognostic value of plasma PCSK9 levels in patients with acute coronary syndromes (ACS). METHODS AND RESULTS Plasma PCSK9 levels were measured in 2030 ACS patients undergoing coronary angiography in a Swiss prospective cohort. At 1 year, the association between PCSK9 tertiles and all-cause death was assessed adjusting for the Global Registry of Acute Coronary Events (GRACE) variables, as well as for the achievement of the LDL cholesterol target of <1.8 mmol/L. Patients with higher PCSK9 levels at angiography were more likely to have clinical familial hypercholesterolaemia (rate ratio, RR 1.21, 95% confidence interval, CI 1.09-1.53), to be treated with lipid-lowering therapy (RR 1.46, 95% CI 1.30-1.63), to present with a longer time interval since chest pain onset (RR 1.29, 95% CI 1.09-1.53) and to have higher C-reactive protein levels (RR 1.22, 95% CI 1.16-1.30). PCSK9 increased 12-24 h after ACS (374 ± 149 vs. 323 ± 134 ng/mL, P < 0.001). At 1-year follow-up, the HR for the upper vs. lower PCSK9-level tertile was 1.13 (95% CI 0.69-1.85) for all-cause death and remained similar after adjustment for the GRACE score. Patients with higher PCSK9 levels were less likely to reach the recommended LDL cholesterol targets (RR 0.81, 95% CI 0.66-0.99). CONCLUSION In ACS patients, high initial PCSK9 plasma levels were associated with inflammation in the acute phase and with hypercholesterolaemia, but did not predict mortality at 1 year.
Abstract:
BACKGROUND The presence of prodromal transient ischemic attacks (TIAs) has been associated with a favorable outcome in anterior circulation stroke. We aimed to determine the association between prodromal TIAs or minor stroke and outcomes at 1 month, in the Basilar Artery International Cooperation Study, a registry of patients presenting with an acute symptomatic and radiologically confirmed basilar artery occlusion. METHODS A total of 619 patients were enrolled in the registry. Information on prodromal TIAs was available for 517 patients and on prodromal stroke for 487 patients. We calculated risk ratios and corresponding 95% confidence intervals (CIs) for poor clinical outcome (modified Rankin Scale score ≥4) according to the variables of interest. RESULTS Prodromal minor stroke was associated with poor outcome (crude risk ratio [cRR], 1.26; 95% CI, 1.12-1.42), but TIAs were not (cRR, 0.93; 95% CI, 0.79-1.09). These associations remained essentially the same after adjustment for confounding variables. CONCLUSIONS Prodromal minor stroke was associated with an unfavorable outcome in patients with basilar artery occlusion, whereas prodromal TIA was not.
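Crude risk ratios such as the cRR of 1.26 above come from a 2×2 exposure-by-outcome table, with a Wald CI computed on the log scale. A sketch with made-up counts (not the registry's data):

```python
import math

def risk_ratio(a, b, c, d, z=1.96):
    """Crude risk ratio for exposed (a events, b non-events) vs. unexposed
    (c events, d non-events), with a Wald 95% CI on the log scale."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) for a 2x2 table:
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Illustrative counts only: 40/50 exposed vs. 200/450 unexposed with poor outcome.
rr, lo, hi = risk_ratio(40, 10, 200, 250)
```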
Abstract:
The main objective of this article is to show that certain uses of pragmatic connectives require the construction of a metarepresentation. We first show that human metarepresentational capacities are diverse, and then that this diversity is reflected in the different types of uses of connectives. We propose the outline of a model that accounts for this property of connectives within the framework of relevance theory. Finally, we test the validity of our model against data from language acquisition, more specifically on the production of parce que ('because') by children between two and four years of age.
Abstract:
Introduction: Although it seems plausible that sports performance relies on high-acuity foveal vision, it has been shown empirically that myopic blur (up to +2 diopters) does not harm performance in sport tasks that require foveal information pick-up, such as golf putting (Bulson, Ciuffreda, & Hung, 2008). How myopic blur affects peripheral performance is as yet unknown. With reduced foveal vision, less attention might be needed for processing visual cues foveally, so that peripheral cues are processed better and performance improves; this was tested in the current experiment. Methods: 18 sport science students with self-reported myopia volunteered as participants, all of them regularly wearing contact lenses. Exclusion criteria comprised visual correction other than myopic, correction of astigmatism and use of contact lenses outside the Swiss delivery area. For each of the participants, three pairs of additional contact lenses (besides their regular lenses, used in the "plano" condition) were manufactured with an individual overcorrection to a retinal defocus of +1 to +3 diopters (referred to as the "+1.00 D", "+2.00 D", and "+3.00 D" conditions, respectively). Gaze data were acquired while participants performed a multiple object tracking (MOT) task that required them to track 4 out of 10 moving stimuli. In addition, in 66.7 % of all trials, one of the 4 targets suddenly stopped during the motion phase for a period of 0.5 s. Stimuli moved in front of a picture of a sports hall to allow for foveal processing. Due to the directional hypotheses, the level of significance for one-tailed tests on differences was set at α = .05, and a posteriori effect sizes were computed as partial eta squared (ηp2). Results: Due to problems with the gaze-data collection, 3 participants had to be excluded from further analyses. The expectation of a centroid strategy was confirmed, because gaze was closer to the centroid than to the targets (all p < .01).
In comparison to the plano baseline, participants more often recalled all 4 targets under defocus conditions, F(1,14) = 26.13, p < .01, ηp2 = .65. The three defocus conditions differed significantly, F(2,28) = 2.56, p = .05, ηp2 = .16, with higher accuracy as a function of increasing defocus and significant contrasts between conditions +1.00 D and +2.00 D (p = .03) and +1.00 D and +3.00 D (p = .03). For stop trials, significant differences were found neither between the plano baseline and the defocus conditions, F(1,14) = .19, p = .67, ηp2 = .01, nor between the three defocus conditions, F(2,28) = 1.09, p = .18, ηp2 = .07. Participants reacted faster in "4 correct+button" trials under defocus than under plano-baseline conditions, F(1,14) = 10.77, p < .01, ηp2 = .44. The defocus conditions differed significantly, F(2,28) = 6.16, p < .01, ηp2 = .31, with shorter response times as a function of increasing defocus and significant contrasts between +1.00 D and +2.00 D (p = .01) and +1.00 D and +3.00 D (p < .01). Discussion: The results show that gaze behaviour in MOT is not affected to a relevant degree by a visual overcorrection of up to +3 diopters. Hence, it can be assumed that the present study did indeed probe peripheral event detection. This overcorrection, however, does not harm the capability to peripherally track objects. Moreover, if an event has to be detected peripherally, neither response accuracy nor response time is negatively affected. These findings are potentially relevant to all sport situations in which peripheral vision is required, a claim that now needs to be examined in applied studies. References: Bulson, R. C., Ciuffreda, K. J., & Hung, G. K. (2008). The effect of retinal defocus on golf putting. Ophthalmic and Physiological Optics, 28, 334-344.
Abstract:
BACKGROUND There has been little research on bathroom accidents. It is unknown whether the shower or bathtub is associated with particular dangers in different age groups, or whether there are specific risk factors for adverse outcomes. METHODS This cross-sectional analysis included all direct admissions to the Emergency Department at the Inselspital Bern, Switzerland from 1 January 2000 to 28 February 2014 after accidents associated with the bathtub or shower. Time, age, location, mechanism and diagnosis were assessed and specific risk factors were examined. Patient groups with and without intracranial bleeding were compared with the Mann-Whitney U test. The association of risk factors with intracranial bleeding was investigated using univariate analysis with Fisher's exact test or logistic regression. The effects of different variables on cerebral bleeding were analysed by multivariate logistic regression. RESULTS Two hundred and eighty (280) patients with accidents associated with the bathtub or shower were included in our study. Two hundred and thirty-five (235) patients suffered direct trauma by hitting an object (83.9%), and traumatic brain injury (TBI) was detected in 28 patients (10%). Eight (8) of the 27 patients (29.6%) with mild traumatic brain injury (GCS 13-15) exhibited intracranial haemorrhage. All patients with intracranial haemorrhage were older than 48 years and needed in-hospital treatment. Patients with intracranial haemorrhage were significantly older and had higher haemoglobin levels than the control group with TBI but without intracranial bleeding (p<0.05 for both). In univariate analysis, we found that intracranial haemorrhage in patients with TBI was associated with direct trauma in general and with age (both p<0.05), but not with the mechanism of the fall, its location (shower or bathtub) or the gender of the patient.
Multivariate logistic regression analysis identified only age as a risk factor for cerebral bleeding (p<0.05; OR 1.09, 95% CI 1.01-1.17). CONCLUSION In patients admitted to the ED after accidents associated with the bathtub or shower, direct trauma and age are risk factors for intracranial haemorrhage. Additional effort in prevention should be considered, especially in the elderly.
Abstract:
Treatment of chronic myeloid leukemia (CML) has been profoundly improved by the introduction of tyrosine kinase inhibitors (TKIs). Long-term survival with imatinib is excellent, with an 8-year survival rate of ∼88%. Long-term toxicity of TKI treatment, especially carcinogenicity, has become a concern. We analyzed data of the CML study IV for the development of secondary malignancies. In total, 67 secondary malignancies were found in 64 of 1525 CML patients in chronic phase treated with TKI (n=61) or interferon-α only (n=3). The most common malignancies (n⩾4) were prostate, colorectal and lung cancer, non-Hodgkin's lymphoma (NHL), malignant melanoma, non-melanoma skin tumors and breast cancer. The standardized incidence ratio (SIR) for all malignancies excluding non-melanoma skin tumors was 0.88 (95% confidence interval (CI) 0.63-1.20) for men and 1.06 (95% CI 0.69-1.55) for women. SIRs ranged from 0.49 (95% CI 0.13-1.34) for colorectal cancer in men to 4.29 (95% CI 1.09-11.66) for NHL in women. The SIR for NHL was significantly increased for men and women. An overall increase in the incidence of secondary malignancies could not be ascertained. The increased SIR for NHL has to be considered, and long-term follow-up of CML patients is warranted, as the rate of secondary malignancies may increase over time. Leukemia advance online publication, 26 February 2016; doi:10.1038/leu.2016.20.
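A standardized incidence ratio is the observed count of malignancies divided by the count expected from population rates, with confidence limits usually based on the Poisson distribution of the observed count. The sketch below uses Byar's approximation to the exact Poisson limits; the counts are illustrative, not the study's:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (observed/expected) with Byar's
    approximation to the exact Poisson confidence limits."""
    sir = observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / expected
    u = o + 1  # upper limit uses observed + 1
    upper = u * (1 - 1 / (9 * u) + z / (3 * math.sqrt(u))) ** 3 / expected
    return sir, lower, upper

# Hypothetical: 4 observed cases vs. 0.93 expected from population rates.
sir, lo, hi = sir_with_ci(4, 0.93)
```

With small observed counts the interval is very wide, which is why an SIR of 4.29 can still carry a CI reaching past 11, as in the NHL result above.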
Abstract:
All forms of Kaposi sarcoma (KS) are more common in men than in women. It is unknown whether this is due to a higher prevalence of human herpesvirus 8 (HHV-8), the underlying cause of KS, in men compared with women. We performed a systematic review and meta-analysis to examine the association between HHV-8 seropositivity and gender in the general population. Studies in selected populations, for example blood donors, hospital patients, and men who have sex with men, were excluded. We searched Medline and Embase from January 1994 to February 2015. We included observational studies that recruited participants from the general population and reported HHV-8 seroprevalence for men and women or boys and girls. We used random-effects meta-analysis to pool odds ratios (ORs) of the association between HHV-8 and gender. We used meta-regression to identify effect modifiers, including age, geographical region and type of HHV-8 antibody test. We included 22 studies with 36,175 participants. Men from sub-Saharan Africa (SSA) (OR 1.21, 95% confidence interval [CI] 1.09-1.34), but not men from elsewhere (OR 0.94, 95% CI 0.83-1.06), were more likely to be HHV-8 seropositive than women (p value for interaction = 0.010). There was no difference in HHV-8 seroprevalence between boys and girls from SSA (OR 0.90, 95% CI 0.72-1.13). The type of HHV-8 assay did not affect the overall results. A higher HHV-8 seroprevalence in men than in women in SSA may partially explain why men have a higher KS risk in this region.
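Random-effects pooling of study odds ratios, as used in the meta-analysis above, is commonly done with the DerSimonian-Laird estimator, which inflates each study's variance by an estimated between-study variance before weighting. A compact sketch with hypothetical study estimates (not the review's data):

```python
import math

def pool_random_effects(log_ors, ses, z=1.96):
    """DerSimonian-Laird random-effects pooling of log odds ratios."""
    w = [1 / s ** 2 for s in ses]  # fixed-effect inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)  # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in ses]    # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), math.exp(pooled - z * se), math.exp(pooled + z * se)

# Three hypothetical study ORs with standard errors of the log OR:
pooled_or, lo, hi = pool_random_effects(
    [math.log(1.2), math.log(1.3), math.log(1.1)], [0.10, 0.12, 0.15])
```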
Abstract:
Background. Clostridium difficile is the leading cause of hospital-associated infectious diarrhea and colitis. About 3 million cases of Clostridium difficile diarrhea occur each year, with an annual cost of $1 billion. About 20% of patients acquire C. difficile during hospitalization. Infection with Clostridium difficile can result in serious complications, posing a threat to the patient's life. Purpose. The aim of this research was to demonstrate the uniqueness of the characteristics of C. difficile positive nosocomial diarrhea cases compared with C. difficile negative nosocomial diarrhea controls admitted to a local hospital. Methods. One hundred and ninety patients with a positive test and one hundred and ninety with a negative test for Clostridium difficile nosocomial diarrhea, selected from patients tested between January 1, 2002 and December 31, 2003, comprised the study population. Demographic and clinical data were collected from medical records. Logistic regression analyses were conducted to determine the associated odds between selected variables and the outcome of Clostridium difficile nosocomial diarrhea. Results. Among the antibiotic classes, cephalosporins (OR, 1.87; 95% CI, 1.23 to 2.85), penicillins (OR, 1.57; 95% CI, 1.04 to 2.37), fluoroquinolones (OR, 1.65; 95% CI, 1.09 to 2.48) and antifungals (OR, 2.17; 95% CI, 1.20 to 3.94) were significantly associated with Clostridium difficile nosocomial diarrhea. Ceftazidime (OR, 1.95; 95% CI, 1.25 to 3.03, p=0.003), gatifloxacin (OR, 1.97; 95% CI, 1.31 to 2.97, p=0.001), clindamycin (OR, 3.13; 95% CI, 1.99 to 4.93, p<0.001) and vancomycin (OR, 1.77; 95% CI, 1.18 to 2.66, p=0.006) were also significantly associated with the disease. Vancomycin was not statistically significant when analyzed in a multivariable model. Other significantly associated drugs were antacids, laxatives, narcotics and ranitidine. Prolonged use of antibiotics and an increased number of comorbid conditions were also associated with C. difficile nosocomial diarrhea. Conclusion. The etiology of C. difficile diarrhea is multifactorial. Exposure to antibiotics and other drugs, prolonged antibiotic usage, the presence and severity of comorbid conditions and prolonged hospital stay were shown to contribute to the development of the disease. It is imperative that any attempt to prevent the disease, or contain its spread, be made on several fronts.
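Odds ratios from logistic regression, as reported throughout this abstract, are exponentiated model coefficients with Wald CIs. A sketch; the coefficient and SE below are back-calculated for illustration and roughly reproduce the cephalosporin result (OR 1.87, 95% CI 1.23 to 2.85), not taken from the study's model:

```python
import math

def or_from_logit(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical: beta = ln(1.87) ≈ 0.626 with SE 0.215 chosen so the CI
# roughly matches the cephalosporin result above.
or_, lo, hi = or_from_logit(0.626, 0.215)
```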
Abstract:
Globally, dengue is an emerging disease resulting in an estimated 50 million new cases and 22,000 deaths each year. Anecdotally, depression has been reported as a possible sequela of dengue virus infection. To test the association, we performed a cross-sectional analysis in a selected subset of participants from the Cameron County Hispanic Cohort (CCHC) in South Texas. All study subjects in the analysis had Center for Epidemiological Studies Depression scale (CES-D) scores and were tested for dengue antibodies using stored plasma. We found that 5.0% of participants tested either positive or equivocal for anti-dengue IgG antibodies using the capture antibody test, which detects acute secondary infections. Logistic regression identified that evidence of acute secondary dengue infection was not associated with depression (Odds Ratio [OR] = 0.97, 95% Confidence Interval [CI] 0.47-1.98); however, both being female (OR = 1.53, 95% CI 1.09-2.15) and having an obese body mass index (BMI > 30) (OR = 1.84, 95% CI 1.19-2.84) were associated with depression.
Abstract:
Diethylstilbestrol (DES) exposed women are well known to be at increased risk of gynecologic cancers and infertility. Infertility may result from DES-associated abnormalities in the shape of women's uteri, yet little research has addressed the effect of uterine abnormalities on the risk of infertility and reproductive tract infection. Changes in uterine shape may also influence the risk of autoimmune disease and women's subsequent mental health. A sample of consenting women exposed in utero to the hormone, who had been recruited into the DESAD project, underwent hysterosalpingogram (HSG) from 1978 to 1984. These women also completed a comprehensive health questionnaire in 1994, which included women's self-reports of chronic conditions. HSG data were used to categorize uterine shape abnormalities as arcuate, hypoplastic, wide lower segment, and constricted. Women were recruited from two of the four DESAD study sites, in Houston (Baylor) and Minnesota (Mayo). All women were DES-exposed. Adjusted relative risk estimates were calculated comparing women with the range of abnormal uterine shapes to women with normal-shaped uteri for each of the four outcomes: infertility, reproductive tract infection, autoimmune disease and depressive symptoms. Only the arcuate shape (n=80) was associated with a higher risk of infertility (relative risk [RR] = 1.53, 95% CI = 1.09, 2.15) as well as reproductive tract infection (RR = 1.74, 95% CI = 1.11, 2.73). In conclusion, DES-associated arcuate-shaped uteri appeared to be associated with a higher risk of reproductive tract infection and infertility, while no other abnormal uterine shapes were associated with these two outcomes.
Abstract:
The purpose of this dissertation was to estimate HIV incidence among the individuals who had HIV tests performed at the Houston Department of Health and Human Services (HDHHS) public health laboratory, and to examine the prevalence of concurrent HIV and AIDS diagnoses among HIV cases reported between 2000 and 2007 in Houston/Harris County. The first study in this dissertation estimated the cumulative HIV incidence among the individuals tested at the Houston public health laboratory, using Serologic Testing Algorithms for Recent HIV Seroconversion (STARHS), during the two-year study period (June 1, 2005 to May 31, 2007). The HIV incidence was estimated using two independently developed statistical imputation methods, one developed by the Centers for Disease Control and Prevention (CDC) and the other developed by HDHHS. Among the 54,394 persons tested for HIV during the study period, 942 tested HIV positive (positivity rate = 1.7%). Of these HIV positives, 448 (48%) were newly reported to the Houston HIV/AIDS Reporting System (HARS), and 417 of these 448 blood specimens (93%) were available for STARHS testing. The STARHS results showed that 139 (33%) of the 417 specimens were newly infected with HIV. Using both the CDC and HDHHS methods, the estimated cumulative HIV incidences over the two-year study period were similar: 862 per 100,000 persons (95% CI: 655-1,070) by the CDC method, and 925 per 100,000 persons (95% CI: 908-943) by the HDHHS method. Consistent with national findings, this study found that African Americans and men who have sex with men (MSM) accounted for most of the new HIV infections among the individuals tested at the Houston public health laboratory.
Using the CDC statistical method, this study also found that the highest cumulative HIV incidence (2,176 per 100,000 persons [95% CI: 1,536-2,798]) was among those tested in the HIV counseling and testing sites, compared with the sexually transmitted disease clinics (1,242 per 100,000 persons [95% CI: 871-1,608]) and city health clinics (215 per 100,000 persons [95% CI: 80-353]). This finding suggested that the HIV counseling and testing sites in Houston were successful in reaching high-risk populations and testing them early for HIV. In addition, older age groups had higher cumulative HIV incidence but accounted for smaller proportions of new HIV infections. The incidence in the 30-39 age group (994 per 100,000 persons [95% CI: 625-1,363]) was 1.5 times the incidence in the 13-29 age group (645 per 100,000 persons [95% CI: 447-840]); the incidences in the 40-49 age group (1,371 per 100,000 persons [95% CI: 765-1,977]) and the 50-or-above age group (1,369 per 100,000 persons [95% CI: 318-2,415]) were 2.1 times the incidence in the youngest 13-29 age group. The increased HIV incidence in older age groups suggested that persons aged 40 or above were still at risk of contracting HIV infection. HIV prevention programs should encourage more people aged 40 and above to test for HIV. The second study investigated concurrent diagnoses of HIV and AIDS in Houston. Concurrent HIV/AIDS diagnosis is defined as an AIDS diagnosis within three months of the HIV diagnosis. This study found that about one-third of the HIV cases in Houston/Harris County were diagnosed with HIV and AIDS concurrently (within three months). Using multivariable logistic regression analysis, this study found that being male, Hispanic, older, and diagnosed in the private sector of care was positively associated with concurrent HIV and AIDS diagnoses. By contrast, men who had sex with men and also used injection drugs (MSM/IDU) were 0.64 times (95% CI: 0.44-0.93) as likely to have concurrent HIV and AIDS diagnoses.
A sensitivity analysis comparing different durations of elapsed time for the concurrent HIV and AIDS diagnosis definition (1-month, 3-month, and 12-month cut-offs) affected the effect size of the odds ratios, but not their direction. The results of these two studies, one describing the characteristics of individuals newly infected with HIV, and the other describing persons diagnosed with HIV and AIDS concurrently, can be used as a reference for HIV prevention program planning in Houston/Harris County.
Abstract:
Identifying accurate numbers of soldiers determined to be medically not ready after completing soldier readiness processing may help inform Army leadership about ongoing pressures on a military involved in long conflict with regular deployment. In Army soldiers screened using the SRP checklist for deployment, what is the prevalence of soldiers determined to be medically not ready? Study group. 15,289 soldiers screened at all 25 Army deployment platform sites with the eSRP checklist over a 4-month period (June 20, 2009 to October 20, 2009). The data analyzed included age, rank, component, gender and final deployment medical readiness status from the MEDPROS database. Methods. This information was compiled, and univariate analysis using chi-square was conducted for each of the key variables by medical readiness status. Results. Descriptive epidemiology. Of the total sample, 1,548 (9.7%) were female and 14,319 (90.2%) were male. Enlisted soldiers made up 13,543 (88.6%) of the sample and officers 1,746 (11.4%). In the sample, 1,533 (10.0%) were soldiers over the age of 40 and 13,756 (90.0%) were aged 18-40. Reserve, National Guard and Active Duty made up 1,931 (12.6%), 2,942 (19.2%) and 10,416 (68.1%), respectively. Univariate analysis. Overall, 1,226 (8.0%) of the soldiers screened were determined to be medically not ready for deployment. The biggest predictive factor was female gender, OR 2.8 (2.57-3.28), p<0.001, followed by enlisted rank, OR 2.01 (1.60-2.53), p<0.001; Reserve component, OR 1.33 (1.16-1.53), p<0.001; and Guard, OR 0.37 (0.30-0.46), p<0.001. Age > 40 demonstrated an OR of 1.2 (1.09-1.50), p<0.003. Overall, the results underscore that there may be key demographic groups relating to medical readiness that can be targeted with programs and funding to improve overall military medical readiness.
Abstract:
The development of nosocomial pneumonia was monitored in 96 head-trauma patients requiring mechanical ventilation for up to 10 days. Pneumonia occurred in 28 patients (29.2%), or 53.9 cases per 1,000 admission days. The incidence of nosocomial pneumonia was negatively correlated with the cerebral oxygen metabolic rate (CMRO2) measured during the first five days. The relative risk of nosocomial pneumonia for patients with CMRO2 less than 0.6 μmol/g/min was 2.08 (95% CI 1.09-3.98) times that of patients with CMRO2 greater than 0.6 μmol/g/min. The association between cerebral oxygen metabolic rate and nosocomial pneumonia was not affected by adjustment for potential confounding factors including age, cimetidine and other infections. These findings provide evidence of the CNS-immune system interaction.
Abstract:
The purpose of this study was to evaluate the adequacy of computerized vital records in Texas for conducting etiologic studies on neural tube defects (NTDs), using the revised and expanded National Center for Health Statistics vital record forms introduced in Texas in 1989. Cases of NTDs (anencephaly and spina bifida) among Harris County (Houston) residents were identified from the computerized birth and death records for 1989-1991. The validity of the system was then measured against cases ascertained independently through medical records and death certificates. The computerized system performed poorly in its identification of NTDs, particularly for anencephaly, where the false positive rate was 80%, with little or no improvement over the 3-year period. For both NTDs, the sensitivity and positive predictive value of the tapes were somewhat higher for Hispanic than for non-Hispanic mothers. Case-control studies were conducted utilizing the tape set and the independently verified data set, using controls selected from the live birth tapes. Findings varied widely between the data sets. For example, the anencephaly odds ratio for Hispanic mothers (vs. non-Hispanic) was 1.91 (CI = 1.38-2.65) for the tape file, but 3.18 (CI = 1.81-5.58) for verified records. The odds ratio for diabetes was elevated for the tape set (OR = 3.33, CI = 1.67-6.66) but not for verified cases (OR = 1.09, CI = 0.24-4.96), among whom few mothers were diabetic. It was concluded that computerized tapes should not be relied on alone for NTD studies. Using the verified cases, Hispanic mother was associated with spina bifida, and Hispanic mother, teen mother, and previous pregnancy terminations were associated with anencephaly. Mother's birthplace, education, parity, and diabetes were not significant for either NTD. Stratified analyses revealed several notable examples of statistical interaction.
For anencephaly, strong interaction was observed between Hispanic origin and trimester of first prenatal care. The prevalence was 3.8 per 10,000 live births for anencephaly and 2.0 for spina bifida (5.8 per 10,000 births for the combined categories).