984 results for Population Surveillance


Relevance:

30.00%

Publisher:

Abstract:

Oesophageal adenocarcinoma, a highly fatal cancer, has risen in incidence in Western societies, but it is unclear whether this is due to increasing incidence of its precursor condition, Barrett's oesophagus (BO), or whether the proportion of BO patients undergoing malignant progression has increased in the face of unchanged BO incidence. Data from population-based studies of BO incidence are limited, with equivocal results to date that are difficult to distinguish from changes in endoscopic practice. The aim of this study was to assess population trends in BO diagnoses in relation to endoscopy and biopsy rates over a 13-year period. The Northern Ireland Barrett's oesophagus Register (NIBR) is a population-based register of all 9,329 adults diagnosed with columnar epithelium of the oesophagus in Northern Ireland between 1993 and 2005, of whom 58.3% were male. European age-standardised annual BO incidence rates were calculated per 100,000 of the population, per 100 endoscopies and per 100 endoscopies including an oesophageal biopsy. Average annual BO incidence rates rose by 159% during the study period, increasing from 23.9/100,000 during 1993-1997 to 62.0/100,000 during 2002-2005. This rise far exceeded the corresponding increases in rates of endoscopies and oesophageal biopsies being conducted. BO incidence increased most markedly in individuals aged <60 years, and most notably amongst males aged <40 years. This study points towards a true increase in the incidence of BO, which appears to be most marked in young males. These findings have significant implications for future rates of oesophageal adenocarcinoma and for surveillance programmes. © 2011 Springer Science+Business Media B.V.
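
For readers unfamiliar with the rate metric used here, the short sketch below shows how a directly age-standardised incidence rate is computed from age-specific counts and person-years using broad-band European Standard Population weights; the case counts and person-years are invented for illustration and are not NIBR data.

```python
# Minimal sketch: direct age standardisation of an incidence rate.
# Counts and person-years below are hypothetical; the weights are broad-band
# 1976 European Standard Population weights (per 100,000).

age_bands = ["0-14", "15-44", "45-64", "65+"]
esp_weights = [22000, 42000, 25000, 11000]        # European Standard Population
cases = [1, 40, 120, 90]                          # hypothetical BO diagnoses
person_years = [350000, 750000, 450000, 250000]   # hypothetical population at risk

# Age-specific rates per 100,000 person-years
rates = [c / py * 100000 for c, py in zip(cases, person_years)]
for band, rate in zip(age_bands, rates):
    print(f"{band}: {rate:.1f} per 100,000")

# Directly standardised rate: weighted mean of the age-specific rates
asr = sum(w * r for w, r in zip(esp_weights, rates)) / sum(esp_weights)
print(f"European age-standardised rate: {asr:.1f} per 100,000 person-years")
```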

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND & AIMS: The risk of progression of Barrett's esophagus (BE) to esophageal adenocarcinoma (EAC) is low and difficult to calculate. Accurate tools to determine risk are needed to optimize surveillance and intervention. We assessed the ability of candidate biomarkers to predict which cases of BE will progress to EAC or high-grade dysplasia and identified those that can be measured in formalin-fixed tissues. METHODS: We analyzed data from a nested case-control study performed using the population-based Northern Ireland BE Register (1993-2005). Cases who progressed to EAC (n = 89) or high-grade dysplasia ≥6 months after diagnosis with BE were matched to controls (nonprogressors, n = 291) for age, sex, and year of BE diagnosis. Established biomarkers (abnormal DNA content, p53, and cyclin A expression) and new biomarkers (levels of sialyl Lewis(a), Lewis(x), and Aspergillus oryzae lectin [AOL], and binding of wheat germ agglutinin) were assessed in paraffin-embedded tissue samples from patients with a first diagnosis of BE. Conditional logistic regression analysis was applied to assess odds of progression for patients with dysplastic and nondysplastic BE, based on biomarker status. RESULTS: Low-grade dysplasia and all biomarkers tested, other than Lewis(x), were associated with risk of EAC or high-grade dysplasia. In backward selection, a panel comprising low-grade dysplasia, abnormal DNA ploidy, and AOL most accurately identified progressors and nonprogressors. The adjusted odds ratio for progression in patients with BE with low-grade dysplasia was 3.74 (95% confidence interval, 2.43-5.79) for each additional biomarker, and the odds increased 2.99-fold for each additional factor (95% confidence interval, 1.72-5.20) in patients without dysplasia. CONCLUSIONS: Low-grade dysplasia, abnormal DNA ploidy, and AOL can be used to identify patients with BE most likely to develop EAC or high-grade dysplasia.
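
The "per additional biomarker" odds ratios above act multiplicatively on the odds scale. The sketch below is simple illustrative arithmetic (not a re-analysis of the register data) showing how such a per-marker odds ratio scales across a three-marker panel.

```python
# Illustration: an odds ratio reported "per additional biomarker" multiplies
# the odds once for every positive marker carried.

or_per_marker = 2.99   # reported OR per additional factor, non-dysplastic BE

for n_markers in range(4):                 # 0..3 positive panel markers
    relative_odds = or_per_marker ** n_markers
    print(f"{n_markers} positive markers -> odds x {relative_odds:.2f}")
```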

Relevance:

30.00%

Publisher:

Abstract:

Objective: Several surveillance definitions of influenza-like illness (ILI) have been proposed, based on the presence of symptoms. Symptom data can be obtained from patients, medical records, or both. Past research has found that agreement between health record data and self-report varies depending on the specific symptom. We therefore aimed to explore the implications of using data on influenza symptoms extracted from medical records, similar data collected prospectively from outpatients, and the combined data from both sources as predictors of laboratory-confirmed influenza. Methods: Using data from the Hutterite Influenza Prevention Study, we calculated: 1) the sensitivity, specificity and predictive values of individual symptoms within surveillance definitions; 2) how frequently surveillance definitions correlated with laboratory-confirmed influenza; and 3) the predictive value of surveillance definitions. Results: Of the 176 participants with symptom data from both self-report and medical records, 142 (81%) were tested for influenza and 37 (26%) were PCR positive for influenza. Fever (alone) and fever combined with cough and/or sore throat were highly correlated with being PCR positive for influenza for all data sources. ILI surveillance definitions based on symptom data from medical records only, or from both medical records and self-report, were better predictors of laboratory-confirmed influenza, with higher odds ratios and positive predictive values. Discussion: The choice of data source to determine ILI will depend on the patient population, outcome of interest, availability of the data source, and use for clinical decision making, research, or surveillance. © Canadian Public Health Association, 2012.
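
The sensitivity, specificity and predictive values referred to above come from cross-tabulating each surveillance definition against the PCR result; a minimal sketch with hypothetical counts follows.

```python
# Minimal sketch: sensitivity, specificity and predictive values of an ILI
# definition against PCR-confirmed influenza. Counts below are hypothetical.

tp, fp = 30, 20    # ILI-positive: PCR-positive / PCR-negative
fn, tn = 7, 85     # ILI-negative: PCR-positive / PCR-negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```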

Relevance:

30.00%

Publisher:

Abstract:

Global amphibian declines are a major element of the current biodiversity crisis. Monitoring changes in the distribution and abundance of target species is a basic component of conservation decision making and requires robust and repeatable sampling. For EU member states, surveillance of designated species, including the common frog Rana temporaria, is a formal requirement of the 'EC Habitats & Species Directive'. We deployed established methods for estimating frog population density at local water bodies and extrapolated these to the national and ecoregion scale. Spawn occurred at 49.4% of water bodies and 70.1% of independent 500-m survey squares. Using spawn mat area, we estimated the number of adult breeding females and subsequently the total population, assuming a sex ratio of 1:1. A negative binomial model suggested that mean frog density was 23.5 frogs/ha [95% confidence interval (CI) 14.9-44.0], equating to 196M frogs (95% CI 124M-367M) throughout Ireland. A total of 86% of frogs bred in drainage ditches, which were a notably common feature of the landscape. The recorded distribution of the species did not change significantly between the last Article 17 reporting period (1993-2006) and the current period (2007-2011) throughout the Republic of Ireland. Recording effort was markedly lower in Northern Ireland, which led to an apparent decline in the recorded distribution. We highlight the need to coordinate biological surveys between adjacent political jurisdictions that share a common ecoregion to avoid apparent disparities in the quality of distributional information. Power analysis suggested that a reduced sample of 40-50 survey squares is sufficient to detect a 30% decline (consistent with the International Union for Conservation of Nature category of 'Vulnerable') at 80% power, providing guidance for minimizing future survey effort. Our results provide a baseline for future assessments of R. temporaria and other clump-spawning amphibians. © 2013 The Zoological Society of London.
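
The national total quoted above follows from scaling the modelled per-hectare density and its confidence interval up to the land area surveyed. The sketch below reproduces that extrapolation approximately; the habitat area is an assumed round figure (roughly the land area of the island of Ireland), not a value taken from the paper.

```python
# Minimal sketch: extrapolating a modelled mean density (frogs/ha) and its CI
# to a national total. The land area below is an assumed round figure for
# illustration only.

density_mean = 23.5          # frogs per hectare (negative binomial model)
density_ci = (14.9, 44.0)    # 95% CI from the study
land_area_ha = 8_350_000     # assumed area of the island of Ireland, hectares

total = density_mean * land_area_ha
ci_low, ci_high = (d * land_area_ha for d in density_ci)
print(f"~{total/1e6:.0f}M frogs (95% CI {ci_low/1e6:.0f}M-{ci_high/1e6:.0f}M)")
```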

Relevance:

30.00%

Publisher:

Abstract:

Objective: Endoscopic surveillance of Barrett's oesophagus (BO) provides an opportunity to detect early stage oesophageal adenocarcinoma (OAC). We sought to determine the proportion of OAC patients with a prior diagnosis of BO on a population basis and to evaluate the influence of a prior diagnosis of BO on survival, taking into account lead and length time biases.

Design: A retrospective population-based study of all OAC patients in Northern Ireland between 2003 and 2008. A prior BO diagnosis was determined by linkage to the Northern Ireland BO register. Stage distribution at diagnosis and histological grade were compared between patients with and without a prior BO diagnosis. Overall survival, using Cox models, was compared between patients with and without a prior BO diagnosis. The effect of adjusting the survival differences for histological grade and estimates of lead and length time bias was assessed.

Results: There were 716 OAC cases, 52 (7.3%) of whom had a prior BO diagnosis. Patients with a prior BO diagnosis had significantly lower tumour stage (44.2% vs 11.1% had stage 1 or 2 disease; p<0.001), a higher rate of surgical resection (50.0% vs 25.5%; p<0.001) and a higher proportion of low/intermediate grade tumours (46.2% vs 26.5%; p=0.011). A prior BO diagnosis was associated with significantly better survival (HR for death 0.39; 95% CI 0.27 to 0.58), which was minimally influenced by adjustment for age, sex and tumour grade (adjusted HR 0.44; 95% CI 0.30 to 0.64). Correction for lead time bias attenuated but did not abolish the survival benefit (HR 0.65; 95% CI 0.45 to 0.95), and further adjustment for length time bias had little effect.

Conclusions: The proportion of OAC patients with a prior diagnosis of BO is low; however, prior identification of BO is associated with an improvement in survival in OAC patients.
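
A rough sketch of the kind of lead-time-corrected survival comparison described above is given below, using the lifelines library; the file name, column names and the assumed lead-time value are illustrative assumptions, not the study's actual data or correction method.

```python
# Sketch: Cox comparison of survival by prior-BO status after subtracting an
# assumed lead time from surveillance-detected cases. All names and the
# lead-time value are hypothetical.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("oac_cases.csv")   # hypothetical file: one row per OAC case

LEAD_TIME_YEARS = 1.0               # assumed average lead time, illustrative only
corrected = df.copy()
surveillance_detected = corrected["prior_bo"] == 1
corrected.loc[surveillance_detected, "survival_years"] = (
    corrected.loc[surveillance_detected, "survival_years"] - LEAD_TIME_YEARS
).clip(lower=0.01)                  # keep survival times positive

cph = CoxPHFitter()
cph.fit(corrected[["survival_years", "died", "prior_bo", "age", "sex_male"]],
        duration_col="survival_years", event_col="died")
print(cph.hazard_ratios_["prior_bo"])   # HR for death given a prior BO diagnosis
```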

Relevance:

30.00%

Publisher:

Abstract:

Background: The incidence of nonmelanomatous skin cancer (NMSC) is substantially higher among renal transplant recipients (RTRs) than in the general population. With a growing RTR population, a robust method for monitoring skin cancer rates in this population is required.
Methods: A modeling approach was used to estimate trends in NMSC rates, adjusting for changes in the RTR population (sex and age), calendar time, the duration of posttransplant follow-up, and background population NMSC incidence rates. RTR databases in both Northern Ireland (NI) and the Republic of Ireland (ROI) were linked to their respective cancer registries for diagnoses of NMSC, mainly squamous cell carcinoma (SCC) and basal cell carcinoma (BCC).
Results: RTRs in the ROI had three times the incidence of NMSC compared with those in NI (P<0.001). There was a decline (P<0.001) in the 10-year cumulative incidence rate of NMSC in RTRs over the period 1994–2009, driven by reductions in both SCC and BCC incidence rates. Nevertheless, the incidence of NMSC increased with time since transplantation. Observed graft survival was higher in the ROI than in NI (P<0.05) during 1994–2004. Overall patient survival of RTRs was similar in NI and the ROI.
Conclusion: Appropriate modeling of incidence trends in NMSC among RTRs is a valuable surveillance exercise for assessing the impact of change in clinical practices over time on the incidence rates of skin cancer in RTRs. It can form the basis of further research into unexplained regional variations in NMSC incidence.
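
The modelling approach described above is commonly implemented as a Poisson regression on aggregated case counts with a log person-years offset. The sketch below illustrates one such specification in statsmodels; the file and column names are hypothetical and the covariate coding is an assumption, not the authors' exact model.

```python
# Sketch: NMSC incidence trends in RTRs as a Poisson regression with a log
# person-years offset, adjusting for sex, age band, calendar period, time
# since transplant and region. All names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("rtr_nmsc_counts.csv")   # aggregated counts per stratum

model = smf.glm(
    "nmsc_cases ~ C(sex) + C(age_band) + C(calendar_period) "
    "+ C(years_post_tx) + C(region)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

# Rate ratios are the exponentiated coefficients, e.g. ROI vs NI
print(np.exp(model.params))
```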

Relevance:

30.00%

Publisher:

Abstract:

The role of bacteria and viruses as aetiological agents in the pathogenesis of cancer has been well established for several sites, including a number of haematological malignancies. Less clear is the impact of such exposures on the subsequent development of multiple myeloma (MM). Using the population-based U.S. Surveillance Epidemiology and End Results-Medicare dataset, 15,318 elderly MM cases and 200,000 controls were identified to investigate the impact of 14 common community-acquired infections on risk of MM. Odds ratios (ORs) and associated 95% confidence intervals (CIs) were adjusted for sex, age and calendar year of selection. The 13-month period prior to diagnosis/selection was excluded. Risk of MM was increased by 5-39% following Medicare claims for eight of the investigated infections. Positive associations were observed for several infections including bronchitis (adjusted OR 1.14, 95% CI 1.09-1.18), sinusitis (OR 1.15, 95% CI 1.10-1.20), pneumonia (OR 1.27, 95% CI 1.21-1.33), herpes zoster (OR 1.39, 95% CI 1.29-1.49) and cystitis (OR 1.09, 95% CI 1.05-1.14). Each of these associations remained significantly elevated following the exclusion of more than 6 years of claims data. Exposure to infectious antigens may therefore play a role in the development of MM. Alternatively, the observed associations may be a manifestation of an underlying immune disturbance present several years prior to MM diagnosis and thereby part of the natural history of disease progression.
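
The adjusted odds ratios above come from case-control comparisons of infection claims. As a minimal illustration of the underlying calculation, the sketch below computes a crude odds ratio and Wald 95% confidence interval from a 2x2 table; the counts are hypothetical, not SEER-Medicare values.

```python
# Minimal sketch: crude odds ratio and Wald 95% CI for an infection exposure
# in cases vs controls. Counts are hypothetical.

import math

exposed_cases, unexposed_cases = 4200, 11118       # MM cases with/without claim
exposed_controls, unexposed_controls = 50000, 150000

odds_ratio = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                      1 / exposed_controls + 1 / unexposed_controls)
ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
      math.exp(math.log(odds_ratio) + 1.96 * se_log_or))
print(f"OR={odds_ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```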

Relevance:

30.00%

Publisher:

Abstract:

Background: Randomised controlled trials have demonstrated significant reductions in colorectal cancer (CRC) incidence and mortality associated with polypectomy. However, little is known about whether polypectomy is effective at reducing CRC risk in routine clinical practice. The aim of this investigation was to quantify CRC risk following polypectomy in a large prospective population-based cohort study.

Methods: Patients with incident colorectal polyps between 2000 and 2005 in Northern Ireland (NI) were identified via electronic pathology reports received by the NI Cancer Registry (NICR). Patients were matched to the NICR to detect CRC and deaths up to 31st December 2010. CRC standardised incidence ratios (SIRs) were calculated and Cox proportional hazards modelling was applied to determine CRC risk.

Results: During 44,724 person-years of follow-up, 193 CRC cases were diagnosed amongst 6,972 adenoma patients, representing an annual progression rate of 0.43%. CRC risk was significantly elevated in patients who had an adenoma removed (SIR 2.85; 95% CI: 2.61 to 3.25) compared with the general population. Male sex, older age, rectal site and villous architecture were associated with an increased CRC risk in adenoma patients. Further analysis suggested that not having a full colonoscopy performed at, or following, incident polypectomy contributed to the excess CRC risk.

Conclusions: CRC risk was elevated in individuals following polypectomy for adenoma, outside of screening programmes.

Impact: This finding emphasises the need for full colonoscopy and adenoma clearance, and appropriate surveillance, after endoscopic diagnosis of adenoma.
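
The standardised incidence ratio reported above is the observed case count divided by the count expected from general-population rates. The sketch below shows that calculation with an exact Poisson confidence interval; the expected count is an illustrative assumption rather than the study's actual figure.

```python
# Sketch: standardised incidence ratio (observed / expected) with an exact
# Poisson 95% CI. In the study, the expected count would come from applying
# general-population rates to the cohort's person-years; here it is assumed.

from scipy.stats import chi2

observed = 193          # CRC cases in the adenoma cohort (from the abstract)
expected = 67.7         # assumed expected count, for illustration only

sir = observed / expected
lower = chi2.ppf(0.025, 2 * observed) / 2 / expected
upper = chi2.ppf(0.975, 2 * (observed + 1)) / 2 / expected
print(f"SIR = {sir:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```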

Relevance:

30.00%

Publisher:

Abstract:

Diagnostic test sensitivity and specificity are probabilistic estimates with far-reaching implications for disease control, management and genetic studies. In the absence of 'gold standard' tests, traditional Bayesian latent class models may be used to assess diagnostic test accuracies through the comparison of two or more tests performed on the same groups of individuals. The aim of this study was to extend such models to estimate diagnostic test parameters and true cohort-specific prevalence, using disease surveillance data. The traditional Hui-Walter latent class methodology was extended to allow for features seen in such data, including (i) unrecorded data (i.e. data for a second test available only on a subset of the sampled population) and (ii) cohort-specific sensitivities and specificities. The model was applied with and without the modelling of conditional dependence between tests. The utility of the extended model was demonstrated through application to bovine tuberculosis surveillance data from Northern Ireland and the Republic of Ireland. Simulation, coupled with re-sampling techniques, demonstrated that the extended model has good predictive power to estimate the diagnostic parameters and true herd-level prevalence from surveillance data. Our methodology can aid in the interpretation of disease surveillance data, and the results can potentially refine disease control strategies.
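
For orientation, the classical Hui-Walter setup that this study extends estimates the sensitivities and specificities of two conditionally independent tests, plus population-specific prevalences, from cross-classified results in two populations. A minimal maximum-likelihood sketch with invented counts follows; the Bayesian extensions described above (unrecorded second-test data, cohort-specific parameters, conditional dependence) are not implemented here.

```python
# Minimal Hui-Walter sketch: two conditionally independent tests applied to
# two populations with different unknown prevalences. Maximum likelihood is
# used for brevity; the study itself used a Bayesian extension. Counts are
# hypothetical.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit keeps parameters in (0, 1)

# Cross-classified counts per population: [T1+T2+, T1+T2-, T1-T2+, T1-T2-]
counts = np.array([[120, 35, 28, 817],
                   [ 40, 22, 15, 923]])

def cell_probs(prev, se1, sp1, se2, sp2):
    """Expected cell probabilities assuming conditional independence."""
    return np.array([
        prev * se1 * se2             + (1 - prev) * (1 - sp1) * (1 - sp2),
        prev * se1 * (1 - se2)       + (1 - prev) * (1 - sp1) * sp2,
        prev * (1 - se1) * se2       + (1 - prev) * sp1 * (1 - sp2),
        prev * (1 - se1) * (1 - se2) + (1 - prev) * sp1 * sp2,
    ])

def neg_log_lik(theta):
    prev1, prev2, se1, sp1, se2, sp2 = expit(theta)
    ll = 0.0
    for prev, row in zip((prev1, prev2), counts):
        ll += row @ np.log(cell_probs(prev, se1, sp1, se2, sp2))
    return -ll

# Start from moderate prevalences and tests better than chance, to steer away
# from the mirrored (Se/Sp swapped) solution.
start = np.array([-1.0, -2.0, 1.5, 1.5, 1.5, 1.5])
fit = minimize(neg_log_lik, start, method="Nelder-Mead")
prev1, prev2, se1, sp1, se2, sp2 = expit(fit.x)
print(f"prev1={prev1:.3f} prev2={prev2:.3f} "
      f"Se1={se1:.2f} Sp1={sp1:.2f} Se2={se2:.2f} Sp2={sp2:.2f}")
```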

Relevance:

30.00%

Publisher:

Abstract:

Introduction: HIV testing is a cornerstone of efforts to combat the HIV epidemic, and testing conducted as part of surveillance provides invaluable data on the spread of infection and the effectiveness of campaigns to reduce the transmission of HIV. However, participation in HIV testing can be low, and if respondents systematically select not to be tested because they know or suspect they are HIV positive (and fear disclosure), standard approaches to deal with missing data will fail to remove selection bias. We implemented Heckman-type selection models, which can be used to adjust for missing data that are not missing at random, and established the extent of selection bias in a population-based HIV survey in an HIV hyperendemic community in rural South Africa.

Methods: We used data from a population-based HIV survey carried out in 2009 in rural KwaZulu-Natal, South Africa. In this survey, 5565 women (35%) and 2567 men (27%) provided blood for an HIV test. We accounted for missing data using interviewer identity as a selection variable which predicted consent to HIV testing but was unlikely to be independently associated with HIV status. Our approach involved using this selection variable to examine the HIV status of residents who would ordinarily refuse to test, except that they were allocated a persuasive interviewer. Our copula model allows for flexibility when modelling the dependence structure between HIV survey participation and HIV status.

Results: For women, our selection model generated an HIV prevalence estimate of 33% (95% CI 27–40) for all people eligible to consent to HIV testing in the survey. This estimate is higher than the estimate of 24% generated when only information from respondents who participated in testing is used in the analysis, and the estimate of 27% when imputation analysis is used to predict missing data on HIV status. For men, we found an HIV prevalence of 25% (95% CI 15–35) using the selection model, compared to 16% among those who participated in testing, and 18% estimated with imputation. We provide new confidence intervals that correct for the fact that the relationship between testing and HIV status is unknown and requires estimation.

Conclusions: We confirm the feasibility and value of adopting selection models to account for missing data in population-based HIV surveys and surveillance systems. Elements of survey design, such as interviewer identity, present the opportunity to adopt this approach in routine applications. Where non-participation is high, true confidence intervals are much wider than those generated by standard approaches to dealing with missing data suggest.
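
The gap between the complete-case and selection-model estimates can be read through a law-of-total-probability decomposition: the overall prevalence combines the prevalence among testers with the unobserved prevalence among refusers, which is what the selection model supplies. The sketch below back-calculates the implied prevalence among women who refused testing from the figures quoted above; it is purely illustrative, not a published estimate.

```python
# Illustration: why adjusting for non-random refusal raises the prevalence
# estimate. Figures for women are taken from the abstract; the prevalence
# among non-testers is back-calculated here for illustration only.

p_tested = 0.35                 # 35% of eligible women gave blood for testing
prev_testers = 0.24             # complete-case prevalence estimate
prev_overall_selection = 0.33   # selection-model estimate, all eligible women

# Law of total probability:
#   prev_overall = prev_testers * p_tested + prev_nontesters * (1 - p_tested)
prev_nontesters = (prev_overall_selection - prev_testers * p_tested) / (1 - p_tested)
print(f"Implied prevalence among women who refused testing: {prev_nontesters:.2f}")
```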

Relevance:

30.00%

Publisher:

Abstract:

The risks associated with zoonotic infections transmitted by companion animals are a serious public health concern: controlling the incidence of zoonoses in domestic dogs, both owned and stray, is therefore important to protect human health. Integrated dog population management (DPM) programs, based on the availability of information systems providing reliable data on the structure and composition of the existing dog population in a given area, are fundamental for making realistic plans for any disease surveillance and action system. Traceability systems, based on the compulsory electronic identification of dogs and their registration in a computerised database, are one of the most effective ways to ensure the usefulness of DPM programs. Although this approach provides many advantages, several areas for improvement have emerged in countries where it has been applied. In Italy, every region hosts its own dog register, but these are not compatible with one another. This paper shows the advantages of a web-based application for improving the data management of regional dog registers. The approach used for building this system was inspired by farm animal traceability schemes and relies on a network of services that allows multi-channel access by different devices and data exchange via the web with other existing applications, without changing the pre-existing platforms. Today the system manages a database for over 300,000 dogs registered in three different Italian regions. By integrating multiple Web Services, this approach could be the solution for gathering data at national and international levels at reasonable cost and for creating a large-scale, cross-border traceability system that can be used for disease surveillance and the development of population management plans. © 2012 Elsevier B.V.
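
As a purely illustrative picture of the web-service style of access described above, the sketch below exposes a single lookup endpoint that resolves a microchip number to a registered dog record; the endpoint, fields and in-memory data are invented and do not describe the Italian system's actual interface.

```python
# Illustrative sketch only: a minimal lookup service in front of regional dog
# registers. Endpoint, record fields and data are hypothetical.

from typing import Optional

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Dog register lookup (illustrative sketch)")

class DogRecord(BaseModel):
    microchip: str
    region: str
    status: str                      # e.g. "owned", "stray", "shelter"
    owner_id: Optional[str] = None

# Stand-in for federated queries against the pre-existing regional platforms
REGIONAL_DATA = {
    "380260000000001": DogRecord(microchip="380260000000001", region="Lombardia",
                                 status="owned", owner_id="A123"),
}

@app.get("/dogs/{microchip}", response_model=DogRecord)
def lookup(microchip: str) -> DogRecord:
    record = REGIONAL_DATA.get(microchip)
    if record is None:
        raise HTTPException(status_code=404, detail="microchip not registered")
    return record
```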

Relevance:

30.00%

Publisher:

Abstract:

Background and Aims: To compare endoscopy and pathology sizing in a large population-based series of colorectal adenomas and to evaluate the implications for patient stratification into surveillance colonoscopy. Methods: Endoscopy and pathology sizes available from intact adenomas removed at colonoscopies performed as part of the Northern Ireland Bowel Cancer Screening Programme, from 2010 to 2015, were included in this study. Chi-squared tests were applied to compare size categories in relation to clinicopathological parameters and colonoscopy surveillance strata according to current American Gastroenterological Association and British Society of Gastroenterology guidelines. Results: A total of 2521 adenomas from 1467 individuals were included. There was a trend toward larger endoscopy than pathology sizing in 4 of the 5 study centers, but overall sizing concordance was good. Significantly greater clustering with sizing to the nearest 5 mm was evident in endoscopy versus pathology sizing (30% vs 19%, p<0.001), which may result in lower accuracy. Applying a 10-mm cut-off relevant to guidelines on risk stratification, 7.3% of all adenomas and 28.3% of those 8 to 12 mm in size had discordant endoscopy and pathology size categorization. Depending upon which guidelines are applied, 4.8% to 9.1% of individuals had differing risk stratification for surveillance recommendations, with the use of pathology sizing resulting in marginally fewer recommended surveillance colonoscopies. Conclusions: Choice of pathology or endoscopy approaches to determine adenoma size will potentially influence surveillance colonoscopy follow-up in 4.8% to 9.1% of individuals. Pathology sizing appears more accurate than endoscopy sizing, and preferential use of pathology size would result in a small, but clinically important, decreased burden on surveillance colonoscopy demand. Careful endoscopy sizing is required for adenomas removed piecemeal.
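
A minimal sketch of the 10-mm cut-off comparison described above follows: it flags adenomas whose endoscopy and pathology sizes fall on opposite sides of the threshold. It applies only the single size cut-off mentioned in the abstract, not the full AGA or BSG stratification rules, and the measurements are hypothetical.

```python
# Sketch: flag adenomas whose endoscopy and pathology sizes straddle the
# 10-mm cut-off used in surveillance risk stratification. Measurements are
# hypothetical; the full guideline rules also consider adenoma number and
# histology.

adenomas = [
    {"id": 1, "endoscopy_mm": 12, "pathology_mm": 9},
    {"id": 2, "endoscopy_mm": 10, "pathology_mm": 11},
    {"id": 3, "endoscopy_mm": 5,  "pathology_mm": 4},
]

CUTOFF_MM = 10

for a in adenomas:
    endo_large = a["endoscopy_mm"] >= CUTOFF_MM
    path_large = a["pathology_mm"] >= CUTOFF_MM
    if endo_large != path_large:
        print(f"adenoma {a['id']}: discordant size category "
              f"(endoscopy {a['endoscopy_mm']} mm vs pathology {a['pathology_mm']} mm)")
```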

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-02

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Women with antiphospholipid syndrome (APS) may suffer from recurrent miscarriage, fetal death, fetal growth restriction (FGR), pre-eclampsia, placental abruption, premature delivery and thrombosis. Treatment with aspirin and low molecular weight heparin (LMWH) combined with close maternal-fetal surveillance can change these outcomes. Objective: To assess maternal and perinatal outcomes in a cohort of Portuguese women with primary APS. Patients and Methods: A retrospective analysis of 51 women with primary APS followed in our institution (January 1994 to December 2007). Forty-one (80.4%) had past pregnancy morbidity and 18 (35.3%) had suffered previous thrombotic events. They had a total of 116 previous pregnancies, of which only 13.79% resulted in live births. Forty-four patients had positive anticardiolipin antibodies and 33 had lupus anticoagulant. All women received treatment with low-dose aspirin and LMWH. Results: There were a total of 67 gestations (66 singleton and one multiple). The live birth rate was 85.1% (57/67), with 10 pregnancy failures: seven in the first and second trimesters, one late fetal death and two medical terminations of pregnancy (one APS related). Mean (± SD) birth weight was 2837 ± 812 g and mean gestational age 37 ± 3.3 weeks. There were nine cases of FGR and 13 hypertensive complications (4 HELLP syndromes); 54.4% of the patients delivered by caesarean section. Conclusions: In our cohort, early treatment with aspirin and LMWH combined with close maternal-fetal surveillance was associated with a very high chance of a live newborn.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to quantify the prevalence and types of rare chromosome abnormalities (RCAs) in Europe for 2000-2006 inclusive, and to describe prenatal diagnosis rates and pregnancy outcome. Data held by the European Surveillance of Congenital Anomalies database were analysed on all the cases from 16 population-based registries in 11 European countries diagnosed prenatally or before 1 year of age, and delivered between 2000 and 2006. Cases were all unbalanced chromosome abnormalities and included live births, fetal deaths from 20 weeks gestation and terminations of pregnancy for fetal anomaly. There were 10,323 cases with a chromosome abnormality, giving a total birth prevalence rate of 43.8/10,000 births. Of these, 7335 cases had trisomy 21, 18 or 13, giving individual prevalence rates of 23.0, 5.9 and 2.3/10,000 births, respectively (53, 13 and 5% of all reported chromosome errors, respectively). In all, 473 cases (5%) had a sex chromosome trisomy, and 778 (8%) had 45,X, giving prevalence rates of 2.0 and 3.3/10,000 births, respectively. There were 1,737 RCA cases (17%), giving a prevalence of 7.4/10,000 births. These included triploidy, other trisomies, marker chromosomes, unbalanced translocations, deletions and duplications. There was wide variation between the registers in both the overall prenatal diagnosis rate of RCA, an average of 65% (range 5-92%), and the prevalence of RCA (range 2.4-12.9/10,000 births). In all, 49% were liveborn. The data provide the prevalence of families currently requiring specialised genetic counselling services in the perinatal period for these conditions and, for some, long-term care.