962 results for "multiple data sources"


Relevance: 90.00%

Abstract:

Objective. This study examines the structure, processes, and data necessary to assess the outcome variables, length of stay and total cost, for a pediatric practice guideline. The guideline was developed, as a means of standardizing care, by a group of physicians and ancillary staff members representing the services that most commonly treat asthma patients at Texas Children's Hospital. Outcomes must be assessed to determine the practice guideline's effectiveness.

Data sources and study design. Data for the study were collected retrospectively from multiple hospital databases and from inpatient chart reviews. All patients in this quasi-experimental study had a diagnosis of asthma (ICD-9-CM code 493.91) at the time of admission. The study examined data for 100 patients admitted between September 15, 1995 and November 15, 1995, whose physician had elected to apply the asthma practice guideline at the time of admission, and for 66 inpatients admitted over the same period whose physician elected not to apply it. The principal outcome variables were length of stay and cost.

Principal findings. The mean length of stay was 2.3 days for the group in which the practice guideline was applied and 3.1 days for the comparison group, whose care was not directed by the guideline; the difference was statistically significant (p = 0.008). There was no demonstrable difference in risk factors, health status, or quality of care between the groups. Although private insurance did not reach statistical significance in the univariate analysis, it showed a significant effect in the logistic regression models, with elevated odds ratios (2.2 for a hospital stay ≤2 days, rising to 4.7 for a stay ≤3 days), indicating that in each model patients with private insurance were more likely to have a shorter hospital stay than patients with public insurance. Public insurance included Medicaid, Medicare, and charity cases; private insurance included private policies, whether group, individual, or managed care. The cost of an admission was significantly lower for the guideline group, with a mean difference of $1307 per patient between the two groups.

Conclusion. The implementation and utilization of a pediatric practice guideline for asthma inpatients at Texas Children's Hospital significantly reduced both the total cost and the length of the hospital stay for admitted asthma patients.
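The reported odds ratios can be illustrated with a short sketch. The counts below are hypothetical (the abstract reports only the odds ratios, not the underlying 2×2 tables); the sketch shows how an odds ratio and its Wald 95% confidence interval would be computed from such a table.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = short/long stays in one group, c/d = short/long stays in the other."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts (the abstract reports only odds ratios, not tables):
# 60 of 100 guideline patients and 27 of 66 comparison patients stayed <=2 days.
or_, lo, hi = odds_ratio_ci(60, 40, 27, 39)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```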


CONTEXT Subclinical hypothyroidism has been associated with increased risk of coronary heart disease (CHD), particularly with thyrotropin levels of 10.0 mIU/L or greater. The measurement of thyroid antibodies helps predict the progression to overt hypothyroidism, but it is unclear whether thyroid autoimmunity independently affects CHD risk. OBJECTIVE The objective of the study was to compare the CHD risk of subclinical hypothyroidism with and without thyroid peroxidase antibodies (TPOAbs). DATA SOURCES AND STUDY SELECTION A MEDLINE and EMBASE search from 1950 to 2011 was conducted for prospective cohorts, reporting baseline thyroid function, antibodies, and CHD outcomes. DATA EXTRACTION Individual data of 38 274 participants from six cohorts for CHD mortality followed up for 460 333 person-years and 33 394 participants from four cohorts for CHD events. DATA SYNTHESIS Among 38 274 adults (median age 55 y, 63% women), 1691 (4.4%) had subclinical hypothyroidism, of whom 775 (45.8%) had positive TPOAbs. During follow-up, 1436 participants died of CHD and 3285 had CHD events. Compared with euthyroid individuals, age- and gender-adjusted risks of CHD mortality in subclinical hypothyroidism were similar among individuals with and without TPOAbs [hazard ratio (HR) 1.15, 95% confidence interval (CI) 0.87-1.53 vs HR 1.26, CI 1.01-1.58, P for interaction = .62], as were risks of CHD events (HR 1.16, CI 0.87-1.56 vs HR 1.26, CI 1.02-1.56, P for interaction = .65). Risks of CHD mortality and events increased with higher thyrotropin, but within each stratum, risks did not differ by TPOAb status. CONCLUSIONS CHD risk associated with subclinical hypothyroidism did not differ by TPOAb status, suggesting that biomarkers of thyroid autoimmunity do not add independent prognostic information for CHD outcomes.
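The pooling step behind estimates like these is typically fixed-effect inverse-variance weighting on the log hazard ratio scale. A minimal sketch, using illustrative cohort-level estimates rather than the study's data:

```python
import math

def pooled_hazard_ratio(estimates):
    """Fixed-effect inverse-variance pooling on the log-HR scale.
    Each estimate is (HR, lower 95% CI bound, upper 95% CI bound)."""
    num = den = 0.0
    for hr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se ** 2
        num += w * math.log(hr)
        den += w
    return math.exp(num / den)

# Illustrative cohort-level estimates (hypothetical, not the study's data)
pooled = pooled_hazard_ratio([(1.10, 0.80, 1.51), (1.30, 0.95, 1.78)])
print(round(pooled, 2))
```

The pooled estimate always lands between the cohort estimates, weighted toward the more precise (narrower-CI) cohort.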


Conservation and monitoring of forest biodiversity require reliable information about forest structure and composition at multiple spatial scales. However, detailed data about forest habitat characteristics across large areas are often incomplete due to difficulties associated with field sampling methods. To overcome this limitation, we employed a nationally available light detection and ranging (LiDAR) remote sensing dataset to develop variables describing forest landscape structure across a large environmental gradient in Switzerland. Using a model species indicative of structurally rich mountain forests (hazel grouse Bonasa bonasia), we tested the potential of such variables to predict species occurrence and evaluated the additional benefit of LiDAR data when used in combination with traditional, sample plot-based field variables. We calibrated boosted regression tree (BRT) models for the two variable sets separately and in combination, and compared the models' accuracies. While both the field-based and the LiDAR models performed well, combining the two data sources improved the accuracy of the species' habitat model. The variables retained from the two datasets held different types of information: field variables mostly quantified food resources and cover in the field and shrub layers, while LiDAR variables characterized the heterogeneity of vegetation structure and correlated with field variables describing the understory and ground vegetation. When combined with data on forest vegetation composition from field surveys, LiDAR provides valuable complementary information for encompassing species niches more comprehensively. Thus, LiDAR bridges the gap between precise but locally restricted field data and coarse digital land cover information by reliably identifying habitat structure and quality across large areas.
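The model comparison above hinges on an accuracy metric for occurrence predictions. A minimal sketch of one common choice, the area under the ROC curve (computed via the rank-sum formulation), with hypothetical predictions for a field-only and a combined model:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical occurrence predictions for 8 sites (1 = hazel grouse present)
labels = [1, 0, 1, 0, 1, 0, 0, 1]
field_only = [0.62, 0.55, 0.58, 0.40, 0.70, 0.52, 0.45, 0.50]
combined = [0.70, 0.45, 0.66, 0.35, 0.80, 0.50, 0.40, 0.61]
print(auc(field_only, labels), auc(combined, labels))
```

In this invented example the combined model ranks every occupied site above every unoccupied one, so its AUC is higher than the field-only model's.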


BACKGROUND Fetal weight estimation (FWE) is an important factor in clinical management decisions, especially in imminent preterm birth at the limit of viability between 23 0/7 and 26 0/7 weeks of gestation. It is crucial to detect and eliminate factors that have a negative impact on the accuracy of FWE. DATA SOURCES In this systematic literature review, we investigated 14 factors that may influence the accuracy of FWE, in particular in preterm neonates born at the limit of viability. RESULTS We found that gestational age, maternal body mass index, amniotic fluid index and ruptured membranes, presentation of the fetus, location of the placenta, and the presence of multiple fetuses do not seem to have an impact on FWE accuracy. Findings on the influence of the examiner's level of experience and of fetal gender were contradictory. Fetal weight, the time interval between estimation and delivery, and the choice of estimation formula seem to have an evident effect on FWE accuracy. No results were obtained on the impact of active labor. DISCUSSION This review reveals that only a few studies have investigated factors possibly influencing the accuracy of FWE in preterm neonates at the limit of viability. Further research on potential confounding factors in this specific age group is needed.


OBJECTIVE The objective was to determine the risk of stroke associated with subclinical hypothyroidism. DATA SOURCES AND STUDY SELECTION Published prospective cohort studies were identified through a systematic search through November 2013 without restrictions in several databases. Unpublished studies were identified through the Thyroid Studies Collaboration. We collected individual participant data on thyroid function and stroke outcome. Euthyroidism was defined as TSH levels of 0.45-4.49 mIU/L, and subclinical hypothyroidism was defined as TSH levels of 4.5-19.9 mIU/L with normal T4 levels. DATA EXTRACTION AND SYNTHESIS We collected individual participant data on 47 573 adults (3451 subclinical hypothyroidism) from 17 cohorts and followed up from 1972-2014 (489 192 person-years). Age- and sex-adjusted pooled hazard ratios (HRs) for participants with subclinical hypothyroidism compared to euthyroidism were 1.05 (95% confidence interval [CI], 0.91-1.21) for stroke events (combined fatal and nonfatal stroke) and 1.07 (95% CI, 0.80-1.42) for fatal stroke. Stratified by age, the HR for stroke events was 3.32 (95% CI, 1.25-8.80) for individuals aged 18-49 years. There was an increased risk of fatal stroke in the age groups 18-49 and 50-64 years, with a HR of 4.22 (95% CI, 1.08-16.55) and 2.86 (95% CI, 1.31-6.26), respectively (p trend 0.04). We found no increased risk for those 65-79 years old (HR, 1.00; 95% CI, 0.86-1.18) or ≥ 80 years old (HR, 1.31; 95% CI, 0.79-2.18). There was a pattern of increased risk of fatal stroke with higher TSH concentrations. CONCLUSIONS Although no overall effect of subclinical hypothyroidism on stroke could be demonstrated, an increased risk in subjects younger than 65 years and those with higher TSH concentrations was observed.


Purpose – The purpose of this paper is to describe the tools and strategies that C/W MARS employed to successfully develop and implement the Digital Treasures digital repository. Design/methodology/approach – This paper outlines the planning and subsequent technical issues that arise when implementing a digitization project on the scale of a large, multi-type, automated library network. Workflow solutions addressed include synchronous online metadata record submissions from multiple library sources and the delivery of collection-level use statistics to participating library administrators. The importance of standards-based descriptive metadata and the role of project collaboration are also discussed. Findings – From the time of its initial planning, the Digital Treasures repository was fully implemented in six months. The discernible, statistically quantified online discovery and access of actual digital objects helped persuade libraries that were unsure whether the benefits of joining the repository would outweigh their staffing costs. Originality/value – This case study may serve as an example of initial planning, workflow, and final implementation strategies for new repositories in both the general and library consortium environment. Keywords – Digital repositories, Library networks, Data management. Paper type – Case study


The record of eolian deposition on the Ontong Java Plateau (OJP) since the Oligocene (approximately 33 Ma) has been investigated using dust grain size, dust flux, and dust mineralogy, with the goal of interpreting the paleoclimatology and paleometeorology of the western equatorial Pacific. Studies of modern dust dispersal in the Pacific have indicated that the equatorial regions receive contributions from both the Northern Hemisphere westerly winds and the equatorial easterlies; limited meteorological data suggest that low-altitude westerlies could also transport dust to OJP from proximal sources in the western Pacific. Previous studies have established the characteristics of the grain-size, flux, and mineralogy records of dust deposited in the North Pacific by the mid-latitude westerlies and in the eastern equatorial Pacific by the low-latitude easterlies since the Oligocene. By comparing the OJP records with the well-defined records of the mid-latitude westerlies and the low-latitude easterlies, the importance of multiple sources of dust to OJP can be recognized. OJP dust is composed of quartz, illite, kaolinite/chlorite, plagioclase feldspar, smectite, and heulandite. Mineral abundance profiles and principal components analysis (PCA) of the mineral abundance data have been used to identify assemblages of minerals that covary through all or part of the OJP record. Abundances of quartz, illite, and kaolinite/chlorite covary throughout the interval studied, defining a mineralogical assemblage supplied from Asia. Some plagioclase and smectite were also supplied as part of this assemblage during the late Miocene and Pliocene/Pleistocene, but other source areas have supplied significant amounts of plagioclase, smectite, and heulandite to OJP since the Oligocene. OJP dust is generally coarser than dust deposited by the Northern Hemisphere westerlies or the equatorial easterlies, and it accumulates 1-2 orders of magnitude more rapidly.

These relationships indicate the importance of local sources on dust deposition at OJP. The grain-size and flux records of OJP dust do not exhibit most of the events observed in the corresponding records of the Northern Hemisphere westerlies or the equatorial easterlies, because these features are masked by the mixing of dust from several sources at OJP. The abundance record of the Asian dust assemblage at OJP, however, does contain most of the features characteristic of the flux of dust carried by the Northern Hemisphere westerlies, indicating that the paleoclimatic and paleometeorologic signal of a particular source area and wind system can be preserved in areas well beyond the region dominated by that source and those winds. Identifying such a signal requires "unmixing" the various dust assemblages, which can be accomplished by combining grain-size, flux, and mineralogic data.
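The mineral assemblages above are defined by covariation of abundance profiles. As a minimal sketch of the underlying idea, the Pearson correlation between two hypothetical downcore abundance series (the values below are invented, not OJP data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical downcore abundance profiles (weight %), oldest to youngest
quartz = [12.0, 14.5, 13.2, 18.9, 22.4, 20.1]
illite = [8.1, 9.7, 8.8, 12.6, 15.0, 13.4]
print(round(pearson(quartz, illite), 3))  # strongly covarying profiles
```

A PCA of the full abundance matrix generalizes this pairwise check: minerals that load on the same principal component form a covarying assemblage.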


Publishing Linked Data is a process that involves several design decisions and technologies. Although some initial guidelines have already been provided by Linked Data publishers, these are still far from covering all the steps that are necessary (from data source selection to publication) or giving enough detail about all these steps, technologies, intermediate products, etc. Furthermore, given the variety of data sources from which Linked Data can be generated, we believe that it is possible to have a single, unified method for publishing Linked Data, but that we should rely on different techniques, technologies and tools for particular datasets of a given domain. In this paper we present a general method for publishing Linked Data and the application of the method to cover different sources from different domains.
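One concrete step in such a publishing method is translating records from a source dataset into RDF. A minimal sketch, serializing a tabular row as N-Triples; the base URI and the record are hypothetical examples, while `dc/terms/title` and `dc/terms/creator` are standard Dublin Core properties:

```python
def row_to_ntriples(row, base="http://example.org/resource/"):
    """Serialize one tabular record as N-Triples (one triple per line)."""
    subject = f"<{base}{row['id']}>"
    lines = [
        f'{subject} <http://purl.org/dc/terms/title> "{row["title"]}" .',
        f'{subject} <http://purl.org/dc/terms/creator> "{row["creator"]}" .',
    ]
    return "\n".join(lines)

# Hypothetical source record
record = {"id": "doc42", "title": "Sample dataset", "creator": "Example Org"}
print(row_to_ntriples(record))
```

A full pipeline would add the other steps the paper discusses, such as URI design, interlinking with external datasets, and publication of the resulting triples.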


As the number of data sources publishing their data on the Web of Data is growing, we are experiencing an immense growth of the Linked Open Data cloud. The lack of control on the published sources, which could be untrustworthy or unreliable, along with their dynamic nature that often invalidates links and causes conflicts or other discrepancies, could lead to poor quality data. In order to judge data quality, a number of quality indicators have been proposed, coupled with quality metrics that quantify the “quality level” of a dataset. In addition to the above, some approaches address how to improve the quality of the datasets through a repair process that focuses on how to correct invalidities caused by constraint violations by either removing or adding triples. In this paper we argue that provenance is a critical factor that should be taken into account during repairs to ensure that the most reliable data is kept. Based on this idea, we propose quality metrics that take into account provenance and evaluate their applicability as repair guidelines in a particular data fusion setting.
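The repair idea can be sketched as follows. This is not the paper's algorithm, just a toy illustration of provenance-guided conflict resolution: when sources assert conflicting values, keep the triple from the most trusted source (the trust scores and sources below are hypothetical):

```python
def repair_by_provenance(triples, trust):
    """triples: (subject, property, value, source) tuples. For each
    (subject, property) in conflict, keep the value from the most
    trusted source; everything else survives unchanged."""
    best = {}
    for s, p, v, src in triples:
        key = (s, p)
        if key not in best or trust[src] > trust[best[key][3]]:
            best[key] = (s, p, v, src)
    return sorted(best.values())

trust = {"sourceA": 0.9, "sourceB": 0.4}  # hypothetical trust scores
data = [("ex:city", "ex:population", "120000", "sourceB"),
        ("ex:city", "ex:population", "125000", "sourceA")]
print(repair_by_provenance(data, trust))
```

Real repair settings also have to respect constraint violations and may need to add rather than remove triples, as the paper notes; the sketch only shows the provenance-as-tiebreaker idea.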


Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities over streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:

• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.

• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.

Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:

• A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.

• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
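The mapping idea behind the contributions above can be illustrated with a simplified sketch. Real systems express such mappings declaratively in R2RML; here a plain dictionary relates (hypothetical) stream attributes to (hypothetical) ontology properties, and each incoming tuple is translated into triples:

```python
# Hypothetical stream schema and ontology properties; a real engine would
# express this mapping declaratively in R2RML.
MAPPING = {
    "temp_c": ("http://example.org/ssn#hasTemperature", float),
    "sensor": ("http://example.org/ssn#observedBy", str),
}

def tuple_to_triples(obs_id, stream_tuple):
    """Translate one raw stream tuple into (subject, property, value)
    triples according to the mapping."""
    triples = []
    for attr, (prop, cast) in MAPPING.items():
        if attr in stream_tuple:
            triples.append((obs_id, prop, cast(stream_tuple[attr])))
    return triples

print(tuple_to_triples("obs1", {"temp_c": "21.5", "sensor": "s7"}))
```

Query rewriting works in the opposite direction: an ontology-level (SPARQLStream) query is translated, through the same mappings, into operations over the underlying stream engine rather than over materialized triples.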


In the near future, wireless sensor networks (WSNs) will see broad, large-scale deployment (millions of nodes at the national scale), with multiple information sources per node and very specific requirements for signal processing. In parallel, the broad deployment of WSNs facilitates the definition and execution of ambitious studies with large input data sets and high computational complexity. The required computation resources, very often heterogeneous and driven on demand, can only be satisfied by high-performance Data Centers (DCs). The high economic and environmental impact of energy consumption in DCs calls for aggressive energy optimization policies; the need for such policies has been identified, but effective policies have not yet been proposed. In this context, this paper presents the following ongoing research lines and results. In the field of WSNs: energy optimization in the processing nodes at different abstraction levels, including reconfigurable application-specific architectures, efficient customization of the memory hierarchy, energy-aware management of the wireless interface, and design automation for signal processing applications. In the field of DCs: energy-optimal workload assignment policies in heterogeneous DCs, resource management policies with energy consciousness, and efficient cooling mechanisms, which together will help minimize the electricity bill of the DCs that process the data provided by the WSNs.
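The workload assignment line of work can be illustrated with a toy sketch: greedily place each job on the feasible server with the lowest marginal energy cost. The server energy models and capacities below are hypothetical, not taken from the paper:

```python
def assign_jobs(jobs, servers):
    """Greedy energy-aware placement. servers: name -> (joules per work
    unit, capacity in work units). Each job goes to the cheapest server
    that still has capacity; jobs are assumed to fit somewhere."""
    remaining = {name: cap for name, (_, cap) in servers.items()}
    plan, total = [], 0.0
    for work in jobs:
        feasible = [n for n in servers if remaining[n] >= work]
        name = min(feasible, key=lambda n: servers[n][0] * work)
        remaining[name] -= work
        plan.append((work, name))
        total += servers[name][0] * work
    return plan, total

# Hypothetical heterogeneous servers: (joules per work unit, capacity)
servers = {"low_power_node": (2.0, 12), "high_perf_node": (3.5, 30)}
plan, energy = assign_jobs([10, 4, 7], servers)
print(plan, energy)
```

Once the efficient node fills up, later jobs spill over to the costlier one; energy-optimal policies in the paper's sense would also model idle power, cooling, and job deadlines.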
