902 results for Medical data


Relevance: 30.00%

Abstract:

Data assimilation algorithms are a crucial part of operational systems in numerical weather prediction, hydrology and climate science, but are also important for dynamical reconstruction in medical applications and quality control for manufacturing processes. Usually, a variety of diverse measurement data are employed to determine the state of the atmosphere or of a wider system including land and oceans. Modern data assimilation systems use more and more remote sensing data, in particular radiances measured by satellites, radar data and integrated water vapor measurements via GPS/GNSS signals. The inversion of some of these measurements is ill-posed in the classical sense, i.e. the inverse of the operator H which maps the state onto the data is unbounded. In this case, the use of such data can lead to significant instabilities of data assimilation algorithms. The goal of this work is to provide a rigorous mathematical analysis of the instability of well-known data assimilation methods. Here, we restrict our attention to particular linear systems in which the instability can be explicitly analyzed. We investigate three-dimensional variational assimilation and four-dimensional variational assimilation. A theory for the instability is developed using the classical theory of ill-posed problems in a Banach space framework. Further, we demonstrate by numerical examples that instabilities can and will occur, including an example from dynamic magnetic tomography.
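The instability analysed in this abstract can be reproduced in a few lines: a discretised Gaussian smoothing kernel stands in for an ill-posed observation operator H, naive inversion amplifies small data noise, and a 3D-Var-style background (Tikhonov) term restores stability. All numerical choices below (grid size, kernel width, noise level, weight alpha) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
grid = np.linspace(0, 1, n)
# H: Gaussian smoothing matrix, a discretised compact operator with
# rapidly decaying singular values (hence an unbounded inverse in the limit)
H = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * 0.05 ** 2))
H /= H.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * grid)                # true state
y = H @ x_true + 1e-3 * rng.standard_normal(n)   # slightly noisy observations

x_naive = np.linalg.solve(H, y)                  # unregularised inversion
# 3D-Var with background x_b = 0 and weight alpha solves
#   min ||H x - y||^2 + alpha ||x||^2  <=>  (H^T H + alpha I) x = H^T y
alpha = 1e-3
x_var = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_var = np.linalg.norm(x_var - x_true)
print(err_naive, err_var)   # naive inversion blows the small noise up
```

The regularised solution stays close to the true state, while the naive one is dominated by amplified noise, which is exactly the instability mechanism studied in the paper.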

Relevance: 30.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
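The parallel pattern-extraction idea the chapter surveys can be sketched in map-reduce style: each worker counts item occurrences in its own data partition, and the partial counts are merged. The transactions below are made-up toy data.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_items(partition):
    """Map step: count item occurrences in one data partition."""
    counts = Counter()
    for transaction in partition:
        counts.update(transaction)
    return counts

# toy transaction data, split into partitions as a large dataset would be
transactions = [["milk", "bread"], ["milk"], ["bread", "eggs"], ["milk", "eggs"]]
partitions = [transactions[:2], transactions[2:]]

with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(count_items, partitions))

total = sum(partials, Counter())   # reduce step: merge the partial counts
print(total.most_common())
```

The same map-reduce shape scales from threads on one machine to distributed workers on a Grid or Cloud cluster, which is the scaling route the chapter describes.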

Relevance: 30.00%

Abstract:

Background: Since their inception, Twitter and related microblogging systems have provided a rich source of information for researchers and have attracted interest in their affordances and use. Since 2009 PubMed has included 123 journal articles on medicine and Twitter, but no overview exists as to how the field uses Twitter in research. // Objective: This paper aims to identify published work relating to Twitter indexed by PubMed, and then to classify it. This classification will provide a framework in which future researchers will be able to position their work, and to provide an understanding of the current reach of research using Twitter in medical disciplines. Limiting the study to papers indexed by PubMed ensures the work provides a reproducible benchmark. // Methods: Papers, indexed by PubMed, on Twitter and related topics were identified and reviewed. The papers were then qualitatively classified based on the paper’s title and abstract to determine their focus. The work that was Twitter-focused was studied in detail to determine what data, if any, it was based on, and from this a categorization of the data set size used in the studies was developed. Using open-coded content analysis, additional important categories were identified, relating to the primary methodology, domain and aspect. // Results: As of 2012, PubMed comprises more than 21 million citations from biomedical literature, and from these a corpus of 134 potentially Twitter-related papers was identified, eleven of which were subsequently found not to be relevant. There were no papers prior to 2009 relating to microblogging, a term first used in 2006. Of the remaining 123 papers which mentioned Twitter, thirty were focused on Twitter (the others referring to it tangentially). The early Twitter-focused papers introduced the topic and highlighted the potential, not carrying out any form of data analysis.
The majority of published papers used analytic techniques to sort through thousands, if not millions, of individual tweets, often depending on automated tools to do so. Our analysis demonstrates that researchers are starting to use knowledge discovery methods and data mining techniques to understand vast quantities of tweets: the study of Twitter is becoming quantitative research. // Conclusions: This work is, to the best of our knowledge, the first overview study of medical-related research based on Twitter and related microblogging. We have used five dimensions to categorise published medical-related research on Twitter. This classification provides a framework within which researchers studying development and use of Twitter within medical-related research, and those undertaking comparative studies of research relating to Twitter in the area of medicine and beyond, can position and ground their work.

Relevance: 30.00%

Abstract:

In order to make the best use of limited medical resources, and to reduce the cost and improve the quality of medical treatment, we propose to build an interoperable regional healthcare system spanning several levels of medical treatment organizations. In this paper, our approaches are as follows: (1) an ontology-based approach is introduced as the methodology and technological solution for information integration; (2) an integration framework for data sharing among different organizations is proposed; (3) a virtual database is established to realize data integration of hospital information systems. Our methods realize the effective management and integration of the medical workflow and the mass of information in the interoperable regional healthcare system. Furthermore, this research gives the interoperable regional healthcare system the characteristics of modularization and expansibility, and the stability of the system is enhanced by its hierarchical structure.
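The virtual-database idea can be sketched as a mediator that maps each hospital's local schema onto shared ontology terms, so one query spans both systems without physically merging their data. All field names and records below are hypothetical.

```python
# two hospital information systems with different local schemas
hospital_a = [{"pid": 1, "dx": "diabetes"}]
hospital_b = [{"patient_no": 2, "diagnosis": "diabetes"}]

# schema mappings: local field -> shared ontology term
mappings = {
    "A": {"pid": "patient_id", "dx": "diagnosis"},
    "B": {"patient_no": "patient_id", "diagnosis": "diagnosis"},
}

def virtual_view(records, mapping):
    """Rewrite local records into the shared ontology vocabulary."""
    return [{mapping[k]: v for k, v in rec.items()} for rec in records]

# the "virtual database": a unified view queried without moving the data
unified = virtual_view(hospital_a, mappings["A"]) + virtual_view(hospital_b, mappings["B"])
diabetics = [r["patient_id"] for r in unified if r["diagnosis"] == "diabetes"]
print(diabetics)
```

A single ontology-level query now reaches patients recorded under either local schema, which is the interoperability property the paper aims for.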

Relevance: 30.00%

Abstract:

Recent studies showed that features extracted from brain MRIs can well discriminate Alzheimer’s disease from Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used for solving a multi-class problem by the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the prediction power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machines-based wrapper improves the accuracy of binary classification.
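A filter-then-wrapper pipeline of the kind compared in this study can be sketched as follows. To keep the sketch dependency-free, a simple correlation filter and a nearest-centroid wrapper stand in for the Random Forest filter and SVM wrapper used by the authors, and the data are synthetic; real wrappers would also score subsets by cross-validation rather than a single held-out split.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.standard_normal((n, p))
# only features 0 and 1 carry signal
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)

def accuracy(feats):
    """Held-out accuracy of a nearest-centroid rule on the selected features."""
    tr, te = slice(0, n // 2), slice(n // 2, n)
    Xtr, Xte = X[tr][:, feats], X[te][:, feats]
    mu0, mu1 = Xtr[y[tr] == 0].mean(0), Xtr[y[tr] == 1].mean(0)
    pred = np.linalg.norm(Xte - mu1, axis=1) < np.linalg.norm(Xte - mu0, axis=1)
    return (pred.astype(int) == y[te]).mean()

# filter step: keep the 5 features most correlated with the label
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)]
candidates = sorted(np.argsort(scores)[-5:].tolist())

# wrapper step: greedy forward selection over the filtered candidates
selected, best, pick = [], 0.0, None
improved = True
while improved:
    improved = False
    for j in [c for c in candidates if c not in selected]:
        acc = accuracy(selected + [j])
        if acc > best:
            best, pick, improved = acc, j, True
    if improved:
        selected.append(pick)
print(selected, best)
```

The filter cheaply discards most irrelevant features; the wrapper then pays the cost of classifier evaluations only on the shortlist, which is the efficiency argument for combining the two.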

Relevance: 30.00%

Abstract:

We present an intuitive geometric approach for analysing the structure and fragility of T1-weighted structural MRI scans of human brains. Apart from computing characteristics like the surface area and volume of regions of the brain that consist of highly active voxels, we also employ Network Theory in order to test how close these regions are to breaking apart. This analysis is used in an attempt to automatically classify subjects into three categories: Alzheimer’s disease, mild cognitive impairment and healthy controls, for the CADDementia Challenge.
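The fragility test can be pictured on a toy voxel graph: a region whose adjacency graph hinges on a cut vertex is close to breaking apart, which removing nodes and counting connected components reveals. The graph below is hypothetical, not derived from MRI data.

```python
from collections import defaultdict

# toy adjacency between highly active voxels; node 2 is a cut vertex
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (4, 5)]

def components(nodes, edges):
    """Count connected components of the subgraph induced by `nodes`."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), 0
    for start in nodes:
        if start not in seen:
            comps += 1
            stack = [start]          # depth-first flood fill
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return comps

nodes = {0, 1, 2, 3, 4, 5}
print(components(nodes, edges))          # intact region: one component
print(components(nodes - {2}, edges))    # cut vertex removed: region shatters
```

A region that fragments after removing a single node is "fragile" in this sense; counting how many removals are needed gives one simple robustness score per region.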

Relevance: 30.00%

Abstract:

This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
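For orientation, a generic Wald-type closed form for this setting computes the per-group sample size from the log odds-ratio and its asymptotic variance. This is a textbook approximation, not necessarily the exact formulae derived and compared in the paper.

```python
from math import ceil, log
from statistics import NormalDist

def wald_sample_size(p_control, or_alt, or_margin, alpha=0.025, power=0.9):
    """Per-group n to show non-inferiority (H0: OR <= or_margin) on the
    log odds-ratio scale, using the asymptotic variance of log(OR)."""
    z = NormalDist().inv_cdf
    odds_t = or_alt * p_control / (1 - p_control)  # treatment odds under H1
    p_treat = odds_t / (1 + odds_t)
    var = 1 / (p_control * (1 - p_control)) + 1 / (p_treat * (1 - p_treat))
    delta = log(or_alt) - log(or_margin)           # distance from the margin
    return ceil((z(1 - alpha) + z(power)) ** 2 * var / delta ** 2)

# a scenario inside the range the paper reports as accurate:
# margin OR = 0.5, alternative OR = 1, control event rate 30%
print(wald_sample_size(p_control=0.3, or_alt=1.0, or_margin=0.5))
```

For this scenario the approximation gives a little over 200 subjects per group; as the abstract notes, such closed forms degrade when the margin odds ratio falls well below 0.5 or the alternative odds ratio moves far from 1.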

Relevance: 30.00%

Abstract:

This article describes the development and national trial of a methodology for collecting disability data directly from parents, enabling schools and local authorities to meet their obligations under the Disability Discrimination Act (DDA; 2005) to promote equality of opportunity for all children. It illustrates the complexities around collecting this information and also highlights the dangers of assuming that special educational needs (SENs) equate to disability. The parental survey revealed children with medical and mental health needs, but no SENs, who were unknown to schools. It also revealed children with a recorded SEN whose parents did not consider that they had a disability in line with the DDA definition. It identified a number of children whose disability leads to absences from school, making them vulnerable to underachievement. These findings highlight the importance of having appropriate tools with which to collect these data and developing procedures to support their effective use. We also draw attention to the contextual nature of children’s difficulties and the importance of retaining and respecting the place of subjective information. This is central to adopting a definition of disability that hinges on experience or impact.

Relevance: 30.00%

Abstract:

This paper describes a methodology for providing multiprobability predictions for proteomic mass spectrometry data. The methodology is based on a newly developed machine learning framework called Venn machines, which allows one to output a valid probability interval. The methodology is designed for mass spectrometry data. For demonstrative purposes, we applied it to MALDI-TOF data sets in order to predict the diagnosis of heart disease and early diagnoses of ovarian cancer and breast cancer. The experiments showed that the probability intervals are narrow, that is, the output of the multiprobability predictor is similar to a single probability distribution. In addition, the probability intervals produced for the heart disease and ovarian cancer data were more accurate than the output of the corresponding probability predictor. When Venn machines were forced to make point predictions, the accuracy of these predictions was, for most data, better than the accuracy of the underlying algorithm that outputs a single probability distribution over labels. Application of this methodology to MALDI-TOF data sets empirically demonstrates its validity. The accuracy of the proposed method on the ovarian cancer data rises from 66.7% eleven months in advance of the moment of diagnosis to up to 90.2% at the moment of diagnosis. The same approach has been applied to heart disease data without time dependency, although the achieved accuracy was not as high (up to 69.9%). The methodology allowed us to confirm mass spectrometry peaks previously identified as carrying statistically significant information for discrimination between controls and cases.
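The mechanics of a Venn machine can be shown on a deliberately tiny example: for each hypothetical label of a new object, a taxonomy (here, nearest class centroid on 1-D toy data) assigns categories, and the empirical label frequencies within the new object's category yield a lower and an upper probability. The data and taxonomy below are illustrative; the paper uses far richer underlying algorithms on mass spectrometry features.

```python
import numpy as np

# toy 1-D training data with two classes
X = np.array([1.0, 1.2, 0.8, 3.0, 3.2, 2.9])
y = np.array([0, 0, 0, 1, 1, 1])

def venn_interval(x_new, label=1):
    """Lower/upper Venn probability that x_new has `label`."""
    probs = []
    for hyp in (0, 1):                        # try each label for x_new
        Xe = np.append(X, x_new)
        ye = np.append(y, hyp)
        centroids = {c: Xe[ye == c].mean() for c in (0, 1)}
        # taxonomy: an example's category is the label of its nearest centroid
        cats = np.array([min(centroids, key=lambda c: abs(v - centroids[c]))
                         for v in Xe])
        in_cat = cats == cats[-1]             # examples sharing x_new's category
        probs.append((ye[in_cat] == label).mean())
    return min(probs), max(probs)

lo, hi = venn_interval(3.1)
print(lo, hi)    # a narrow interval for a point close to class 1
```

The gap between the lower and upper probability is the predictor's honest uncertainty; the paper's empirical finding is that for MALDI-TOF data these intervals come out narrow.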

Relevance: 30.00%

Abstract:

Introduction Human immunodeficiency virus (HIV) is a serious disease which can be associated with various activity limitations and participation restrictions. The aim of this paper was to describe how HIV affects the functioning and health of people within different environmental contexts, particularly with regard to access to medication. Method Four cross-sectional studies, three in South Africa and one in Brazil, had applied the International Classification of Functioning, Disability and Health (ICF) as a classification instrument to participants living with HIV. Each group was at a different stage of the disease. Only two groups had had continuing access to antiretroviral therapy. The existence of these descriptive sets enabled comparison of the disability experienced by people living with HIV at different stages of the disease and with differing access to antiretroviral therapy. Results Common problems experienced in all groups related to weight maintenance, with two-thirds of the sample reporting problems in this area. Mental functions presented the most problems in all groups, with sleep (50%, 92/185), energy and drive (45%, 83/185), and emotional functions (49%, 90/185) being the most affected. In those on long-term therapy, body image affected 93% (39/42) and was a major problem. The other groups reported pain as a problem, and those with limited access to treatment also reported mobility problems. Cardiopulmonary functions were affected in all groups. Conclusion Functional problems occurred in the areas of impairment and activity limitation in people at advanced stages of HIV, and more limitations occurred in the area of participation for those on antiretroviral treatment. The ICF provided a useful framework within which to describe the functioning of those with HIV and the impact of the environment. Given the wide spectrum of problems found, consideration could be given to a number of ICF core sets that are relevant to the different stages of HIV disease. 
(C) 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Conflicting results have been reported as to whether genetic variations (Val66Met and C270T) of the brain-derived neurotrophic factor gene (BDNF) confer susceptibility to Alzheimer's disease (AD). We genotyped these polymorphisms in a Japanese sample of 657 patients with AD and 525 controls, and obtained weak evidence of association for Val66Met (P = 0.063), but not for C270T. After stratification by sex, we found a significant allelic association between Val66Met and AD in women (P = 0.017), but not in men. To confirm these observations, we collected genotyping data for each sex from 16 research centers worldwide (4,711 patients and 4,537 controls in total). The meta-analysis revealed that there was a clear sex difference in the allelic association; the Met66 allele confers susceptibility to AD in women (odds ratio = 1.14, 95% CI 1.05-1.24, P = 0.002), but not in men. Our results provide evidence that the Met66 allele of BDNF has a sexually dimorphic effect on susceptibility to AD. (C) 2009 Wiley-Liss, Inc.
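A pooled odds ratio with a confidence interval, as reported here, is the kind of output produced by standard inverse-variance (fixed-effect) meta-analysis of per-study log odds ratios. The sketch below uses made-up study data, not the 16-centre data of this article.

```python
from math import exp, log, sqrt

# hypothetical per-study (odds ratio, standard error of log OR) pairs
studies = [(1.20, 0.10), (1.05, 0.12), (1.18, 0.15)]

weights = [1 / se ** 2 for _, se in studies]   # inverse-variance weights
pooled_log = sum(w * log(or_) for (or_, _), w in zip(studies, weights)) / sum(weights)
se_pooled = sqrt(1 / sum(weights))             # SE of the pooled log OR
or_pooled = exp(pooled_log)
ci_low = exp(pooled_log - 1.96 * se_pooled)    # 95% confidence interval
ci_high = exp(pooled_log + 1.96 * se_pooled)
print(round(or_pooled, 3), round(ci_low, 3), round(ci_high, 3))
```

Pooling on the log scale keeps the estimate symmetric around the null (OR = 1); exponentiating at the end returns it to the odds-ratio scale used in the abstract.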

Relevance: 30.00%

Abstract:

Parkinson’s disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur in varying frequency and duration. In order to ascertain the full extent of a patient’s condition, repeated assessments are necessary to adjust medical prescription. In clinical studies, symptoms are assessed using the Unified Parkinson’s Disease Rating Scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient’s situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic.  The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first-principles and data-driven models, for extraction of clinically useful information from audio recordings of speech (in texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and the clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch screen mobile device.
A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed stronger ability to classify between speech symptom severities as compared to existing features of PD speech. In the case of finger tapping, the recorded videos of rapid finger tapping examination were processed using a novel computer-vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face detection module for signal calibration. This algorithm was able to discriminate between UPDRS part III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features constructed using a standard human model to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that the symptom severity levels in PD can be discriminated with high accuracies by involving a combination of first-principles (features) and data-driven (classification) approaches. The processing of audio and video recordings on the one hand allows remote monitoring of speech, gait and finger-tapping examinations by the clinical staff. On the other hand, the first-principles approach eases the understanding of symptom estimates for clinicians. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates.
The findings support suitability of these methods to be used as decision support tools in the context of PD assessment.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Background. Through a national policy agreement, over 167 million Euros will be invested in the Swedish National Quality Registries (NQRs) between 2012 and 2016. One of the policy agreement¿s intentions is to increase the use of NQR data for quality improvement (QI). However, the evidence is fragmented as to how the use of medical registries and the like lead to quality improvement, and little is known about non-clinical use. The aim was therefore to investigate the perspectives of Swedish politicians and administrators on quality improvement based on national registry data. Methods. Politicians and administrators from four county councils were interviewed. A qualitative content analysis guided by the Consolidated Framework for Implementation Research (CFIR) was performed. Results. The politicians and administrators perspectives on the use of NQR data for quality improvement were mainly assigned to three of the five CFIR domains. In the domain of intervention characteristics, data reliability and access in reasonable time were not considered entirely satisfactory, making it difficult for the politico-administrative leaderships to initiate, monitor, and support timely QI efforts. Still, politicians and administrators trusted the idea of using the NQRs as a base for quality improvement. In the domain of inner setting, the organizational structures were not sufficiently developed to utilize the advantages of the NQRs, and readiness for implementation appeared to be inadequate for two reasons. Firstly, the resources for data analysis and quality improvement were not considered sufficient at politico-administrative or clinical level. Secondly, deficiencies in leadership engagement at multiple levels were described and there was a lack of consensus on the politicians¿ role and level of involvement. Regarding the domain of outer setting, there was a lack of communication and cooperation between the county councils and the national NQR organizations. Conclusions. 
The Swedish experiences show that a government-supported national system of well-funded, well-managed, and reputable national quality registries needs favorable local politico-administrative conditions to be used for quality improvement; such conditions are not yet in place according to local politicians and administrators.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND: Unsafe abortions are estimated to cause eight per-cent of maternal mortality in India. Lack of providers, especially in rural areas, is one reason unsafe abortions take place despite decades of legal abortion. Education and training in reproductive health services has been shown to influence attitudes and increase chances that medical students will provide abortion care services in their future practice. To further explore previous findings about poor attitudes toward abortion among medical students in Maharastra, India, we conducted in-depth interviews with medical students in their final year of education. METHOD: We used a qualitative design conducting in-depth interviews with twenty-three medical students in Maharastra applying a topic guide. Data was organized using thematic analysis with an inductive approach. RESULTS: The participants described a fear to provide abortion in their future practice. They lacked understanding of the law and confused the legal regulation of abortion with the law governing gender biased sex selection, and concluded that abortion is illegal in Maharastra. The interviewed medical students' attitudes were supported by their experiences and perceptions from the clinical setting as well as traditions and norms in society. Medical abortion using mifepristone and misoprostol was believed to be unsafe and prohibited in Maharastra. The students perceived that nurse-midwives were knowledgeable in Sexual and Reproductive Health and many found that they could be trained to perform abortions in the future. CONCLUSIONS: To increase chances that medical students in Maharastra will perform abortion care services in their future practice, it is important to strengthen their confidence and knowledge through improved medical education including value clarification and clinical training.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Pooled procurement has an important role in reducing acquisition prices of goods. A pool of buyers, which aggregates demand for its members, increases bargaining power and allows suppliers to achieve economies of scale and scope in the production. Such aggregation demand e ect lowers prices paid for buyers. However, when a buyer with a good reputation for paying suppliers in a timely manner is joined in the pool by a buyer with bad reputation may have its price paid increased due to the credit risk e ect on prices. This will happen because prices paid in a pooled procurement should refect the (higher) average buyers' credit risk. Using a data set on Brazilian public purchases of pharmaceuticals and medical supplies, we nd evidence supporting both e ects. We show that the prices paid by public bodies in Brazil are lower when they buy through pooled procurement than individually. On the other hand, federal agencies (i.e. good buyers) pay higher prices for products when they are joined by state agencies (i.e. bad buyers) in a pool. Such evidence suggests that pooled procurement should be carefully designed to avoid that prices paid increase for its members.