982 results for Data source


Relevance: 60.00%

Publisher:

Abstract:

Upper-air observations are a fundamental data source for global atmospheric data products, but their uncertainties, particularly in the early years, are not well known. Most of the early observations, which have now been digitized, are prone to a large variety of undocumented uncertainties (errors) that need to be quantified, e.g., for their assimilation in reanalysis projects. We apply a novel approach to estimate errors in upper-air temperature, geopotential height, and wind observations from the Comprehensive Historical Upper-Air Network for the period from 1923 to 1966. We distinguish between random errors, biases, and a term that quantifies the representativity of the observations. The method is based on a comparison of neighboring observations and is hence independent of metadata, making it applicable to a wide range of observational data sets. The estimated mean random errors for all observations within the study period are 1.5 K for air temperature, 1.3 hPa for pressure, 3.0 m s−1 for wind speed, and 21.4° for wind direction. The estimates are compared with results of previous studies and analyzed with respect to their spatial and temporal variability.
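The paired-difference idea behind such metadata-free error estimation can be sketched in a few lines. The following Python snippet is a minimal illustration, not the paper's method: it assumes two neighboring stations see the same true signal with independent random errors of equal magnitude, and it neglects the representativity term, so the standard deviation of the difference series overestimates a single station's error by a factor of √2.

```python
import numpy as np

def paired_difference_errors(x1, x2):
    """Estimate bias and random error from collocated observations at
    two neighboring stations (e.g., simultaneous temperature soundings
    at the same pressure level). Assumes equal, independent random
    errors and a shared true signal; representativity is neglected.
    """
    d = np.asarray(x1) - np.asarray(x2)
    bias = np.nanmean(d)                    # systematic difference
    # Var(x1 - x2) = sigma1^2 + sigma2^2 = 2*sigma^2 under the
    # equal-error assumption, hence the 1/sqrt(2) factor.
    sigma = np.nanstd(d - bias) / np.sqrt(2)
    return bias, sigma

# Synthetic example: two temperature series with 1.5 K random error
rng = np.random.default_rng(0)
truth = 250 + 5 * rng.standard_normal(1000)
t1 = truth + 1.5 * rng.standard_normal(1000)
t2 = truth + 1.5 * rng.standard_normal(1000) + 0.4  # 0.4 K bias
print(paired_difference_errors(t1, t2))  # approx (-0.4, 1.5)
```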

Relevance: 60.00%

Publisher:

Abstract:

Several approaches for the non-invasive, MRI-based measurement of the aortic pressure waveform over the cardiac cycle have been proposed in recent years. These methods are normally based on time-resolved, two-dimensional phase-contrast sequences with unidirectionally encoded velocities (2D PC-MRI). In contrast, three-dimensional acquisitions with tridirectional velocity encoding (4D PC-MRI) have been shown to be a suitable data source for detailed investigations of blood flow and spatial blood pressure maps. To avoid additional MR acquisitions, it would be advantageous if the aortic pressure waveform could also be computed from this particular form of MRI. We therefore propose an approach for computing the aortic pressure waveform that can be performed entirely with 4D PC-MRI data. After application of a segmentation algorithm, the approach computes the aortic pressure waveform automatically, without any manual steps. We show that our method agrees well with catheter measurements in an experimental phantom setup and produces physiologically realistic results in three healthy volunteers.
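As a rough illustration of how a pressure waveform can be derived from velocity data alone, the sketch below applies the unsteady Bernoulli equation along an aortic centerline. This is a strong simplification of the Navier-Stokes-based pressure reconstruction typically used with 4D PC-MRI, not the authors' algorithm; the blood density, the neglect of viscous and convective in-plane terms, and the input layout are all assumptions of the sketch.

```python
import numpy as np

RHO = 1060.0  # assumed blood density, kg/m^3

def pressure_waveform(v, ds, dt):
    """Relative pressure between centerline outlet and inlet over the
    cardiac cycle, from the unsteady Bernoulli equation. A sketch only:
    viscous losses and secondary flow are ignored.

    v  : (n_frames, n_points) centerline velocity magnitude in m/s
    ds : spacing between centerline points in m
    dt : time between frames in s
    """
    dvdt = np.gradient(v, dt, axis=0)               # local acceleration
    inertial = -RHO * np.cumsum(dvdt * ds, axis=1)  # -rho * integral of dv/dt ds
    kinetic = -0.5 * RHO * (v**2 - v[:, [0]]**2)    # Bernoulli term
    return (inertial + kinetic)[:, -1]              # Pa, outlet vs. inlet
```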

Relevance: 60.00%

Publisher:

Abstract:

Ships' protests have been used for centuries as legal documents to record and detail damages and to indemnify captains from fault. In this article we use them, along with data extracted through forensic synoptic analysis (McNally, 1994, 2004), to identify a tropical or subtropical system in the North Atlantic Ocean in 1785, and we show that they are viable sources of meteorological information. By comparing a damaging 1996 New England storm, which included an offshore tropical system, with the one reconstructed for 1785, we demonstrate that the tropical system identified in a ship's protest played a significant role in the 1785 storm. With both forensic reconstruction and anecdotal evidence, we are able to show that the two storms are remarkably similar. The recurrence rate calculated in previous studies of the 1996 storm is 400–500 years. We suggest that reconstruction of additional years in the 1700s would provide the basis for a reanalysis of recurrence rates, with implications for future insurance and reinsurance rates. The methodology applied to this new data source can also be used to extend the hurricane database in the North Atlantic basin, and elsewhere, much further back in history than is currently possible.

Relevance: 60.00%

Publisher:

Abstract:

How does the productivity of a commune compare with that of a conventional firm? This paper addresses the question quantitatively by focusing on the history of a religious commune, the United Society of Believers, better known as the Shakers. We use the information recorded in the enumeration schedules of the US Manufacturing and Agriculture Censuses, available for the period from 1850 to 1880, to estimate the productivity of Shaker shops and farms. From the same data source, we also construct random samples of other shops and farms and estimate their productivity for comparison with the Shakers. Our results support the contention that communes need not always suffer from reduced productivity: Shaker farms and shops generally performed just as productively as their neighbors, and where differences in productivity did exist, there are good reasons to attribute them to factors other than organizational form.
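One standard way to quantify such a comparison is a Cobb-Douglas production function with a group dummy, where the dummy's coefficient measures the productivity gap between communal and conventional producers. The sketch below uses simulated data and invented variable names; it is a generic illustration of the technique, not the paper's estimation.

```python
import numpy as np
import statsmodels.api as sm

# log(output) = a + b1*log(labor) + b2*log(capital) + g*shaker + e,
# where g captures the productivity difference of the communal group.
rng = np.random.default_rng(1)
n = 200
labor = rng.lognormal(2, 0.5, n)
capital = rng.lognormal(3, 0.5, n)
shaker = rng.integers(0, 2, n)
log_output = (1.0 + 0.6 * np.log(labor) + 0.3 * np.log(capital)
              + 0.0 * shaker + 0.2 * rng.standard_normal(n))

X = sm.add_constant(np.column_stack([np.log(labor), np.log(capital), shaker]))
fit = sm.OLS(log_output, X).fit()
print(fit.params[3], fit.pvalues[3])  # group effect: near zero, insignificant
```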

Relevance: 60.00%

Publisher:

Abstract:

Objectives. To investigate procedural gender equity by assessing predisposing, enabling, and need predictors of gender differences in annual medical expenditures and utilization among hypertensive individuals in the U.S., and to estimate and compare lifetime medical expenditures of hypertensive men and women in the U.S.

Data sources. The 2001–2004 Medical Expenditure Panel Survey (MEPS); the 1986–2000 National Health Interview Survey (NHIS); and the NHIS linked to mortality in the National Death Index through 2002 (2002 NHIS-NDI).

Study design. We estimated total medical expenditures with a four-equation regression model, specific medical expenditures with a two-equation regression model, and utilization with a negative binomial regression model. Procedural equity was assessed by applying the Aday et al. theoretical framework. Expenditures were expressed in 2004 dollars. We estimated hypertension-attributable medical expenditures and utilization for men and women.

To estimate lifetime expenditures from age 20 to 85+, we estimated medical expenditures with cross-sectional data and survival with prospective data. The four-equation regression model was used to estimate average annual medical expenditures, defined as the sum of inpatient stay, emergency room visit, outpatient visit, office-based visit, and prescription drug expenditures. Life tables were used to estimate the distribution of lifetime medical expenditures for hypertensive men and women at each age; factors such as disease incidence, medical technology, and health care costs were assumed to be fixed. Both total and hypertension-attributable expenditures were estimated for men and women.

Data collection. We used the 2001–2004 MEPS household component and medical condition files; the NHIS person and condition files from 1986–1996 and the sample adult files from 1997–2000; and the 1986–2000 NHIS linked to mortality in the 2002 NHIS-NDI.

Principal findings. After controlling for predisposing, enabling, and need factors, hypertensive men had significantly less utilization than hypertensive women on most measures. Similarly, hypertensive men had lower prescription drug (−9.3%), office-based (−7.2%), and total medical (−4.5%) expenditures than hypertensive women. However, men had higher hypertension-attributable medical expenditures and utilization than women.

The expected total lifetime expenditure for the average life-table individual at age 20 was $188,300 for hypertensive men and $254,910 for hypertensive women, but the lifetime expenditure attributable to hypertension was $88,033 for men and $40,960 for women.

Conclusion. Hypertensive women had more utilization and higher expenditures on most measures than hypertensive men, possibly indicating procedural inequity. However, the relatively higher hypertension-attributable health care of men indicates greater use of resources to treat hypertension-related disease among men than among women. The lifetime analyses yielded similar results.

Key words: gender, medical expenditures, utilization, hypertension-attributable, lifetime expenditure
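The life-table step described above can be illustrated compactly: expected lifetime expenditure at a starting age is the survival-weighted sum of the age-specific annual costs. The cost profile and death probabilities below are invented placeholders, and discounting and cost growth are ignored, matching the fixed-factors assumption in the study design.

```python
import numpy as np

def expected_lifetime_expenditure(annual_cost, survival):
    """Expected remaining lifetime medical expenditure at the starting
    age, combining cross-sectional cost estimates with a life table.

    annual_cost : expected expenditure at each age (length n)
    survival    : probability of being alive at each age, with
                  survival[0] = 1 at the starting age (length n)
    """
    return float(np.sum(np.asarray(annual_cost) * np.asarray(survival)))

ages = np.arange(20, 86)
cost = 500 + 40 * (ages - 20)        # hypothetical rising cost profile
qx = 0.0005 * 1.09 ** (ages - 20)    # hypothetical death probabilities
survival = np.concatenate([[1.0], np.cumprod(1 - qx[:-1])])
print(expected_lifetime_expenditure(cost, survival))
```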

Relevance: 60.00%

Publisher:

Abstract:

Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model for primary care in urban settings.

Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research, in collaboration with the California Department of Health Services and the Public Health Institute, between July 2005 and April 2006.

Study design. A literature review was conducted to specify the demand model by identifying relevant predictors and indicators; CHIS 2005 data were used for estimation.

Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation, whose dependent variable is a non-negative integer.

Results. The model comprised two equations: the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect. College and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables had positive effects on demand (p < 0.01): existence of a chronic disease was associated with 0.63 more visits, disability status with 1.05 more visits, and people in poor health had 4.24 more visits than those in excellent health.

Conclusions. The average probability of visiting a doctor in the past twelve months was 85%, and the average number of visits was 3.45. The study emphasizes the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment, and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression, was a useful approach to demand estimation for primary care in urban settings.
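A minimal sketch of this two-equation setup, using statsmodels on simulated placeholder data rather than CHIS 2005: a probit for whether any visit occurred, and a negative binomial for the number of visits among users.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
insured = rng.integers(0, 2, n)
chronic = rng.integers(0, 2, n)
age = rng.uniform(18, 85, n)

# Simulate use/non-use, then counts conditional on use
latent = -0.2 + 0.8 * insured + 0.9 * chronic + rng.standard_normal(n)
any_visit = (latent > 0).astype(int)
mu = np.exp(0.5 + 0.3 * insured + 0.6 * chronic)
visits = rng.poisson(mu) * any_visit  # zero when no use

X = sm.add_constant(np.column_stack([insured, chronic, age]))

use_eq = sm.Probit(any_visit, X).fit(disp=0)  # use/non-use equation
users = visits > 0
count_eq = sm.NegativeBinomial(visits[users], X[users]).fit(disp=0)
print(use_eq.params, count_eq.params)
```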

Relevance: 60.00%

Publisher:

Abstract:

The purpose of this study was to understand the role of principal economic, sociodemographic, and health status factors in determining the likelihood and volume of prescription drug use. Econometric demand regression models were developed for this purpose. Ten explanatory variables were examined: family income, coinsurance rate, age, sex, race, household head's education level, family size, health status, number of medical visits, and type of provider seen during medical visits. The economic factors (family income and coinsurance) received special emphasis.

The National Medical Care Utilization and Expenditure Survey (NMCUES) was the data source. The sample represented the civilian, noninstitutionalized residents of the United States in 1980 and was drawn with a stratified, four-stage area probability design, comprising 6,600 households (17,123 individuals). The weighted sample provided the population estimates used in the analysis. Five repeated interviews were conducted with each household. The survey provided detailed information on health status, patterns of health care utilization, charges for services received, and methods of payment for 1980.

The study provided evidence that economic factors influenced the use of prescription drugs, but use was not highly responsive to family income or coinsurance at the levels examined: the elasticities ranged from −0.0002 to −0.013 for family income and from −0.174 to −0.108 for coinsurance. Income had a greater influence on the likelihood of prescription drug use, while the coinsurance rate affected the amount spent on prescription drugs. The coinsurance effect was not examined for the likelihood of drug use because of limitations in the measurement of coinsurance. Health status appeared to overwhelm any effects attributable to family income or coinsurance. The likelihood of prescription drug use depended strongly on visits to medical providers; the volume of use depended strongly on health status, age, and whether the individual saw a general practitioner.
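For a linear demand specification y = a + bx, the elasticity evaluated at the sample means is e = b(x̄/ȳ). The snippet below shows the arithmetic with invented numbers; the study's own elasticities were of course estimated from the NMCUES data.

```python
def elasticity_at_means(beta, x_mean, y_mean):
    """Point elasticity implied by a linear demand model y = a + b*x,
    evaluated at the sample means: e = b * (x_bar / y_bar).
    Inputs below are illustrative placeholders only.
    """
    return beta * x_mean / y_mean

# e.g., a small negative coinsurance coefficient at the sample means
print(elasticity_at_means(beta=-0.02, x_mean=25.0, y_mean=3.5))  # about -0.14
```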

Relevance: 60.00%

Publisher:

Abstract:

Objective. To perform a systematic review of internet-based HIV prevention interventions.

Data sources. Ovid MEDLINE, CINAHL (EBSCOhost), and PsycINFO were searched for relevant articles.

Study selection. Studies were selected if the full article was in English, the study was a randomized controlled trial conducted from 1998 to the present, and the intervention was internet-based and aimed at HIV/AIDS prevention.

Data extraction. Each relevant article was coded using a code sheet covering background information, study characteristics, study population, program details, intervention, and results.

Data synthesis. A total of seven relevant articles were coded, and the extracted information was incorporated into evidence tables for comparison.

Conclusion. Overall, internet-based HIV prevention studies have been very successful in recruiting large numbers of people but have not been able to show effective results because of high attrition. More research is needed in this area.

Relevance: 60.00%

Publisher:

Abstract:

Secchi depth is a measure of water transparency. In the Baltic Sea region, Secchi depth maps are used to assess eutrophication and as input for habitat models. Because of their spatial and temporal coverage, satellite data would be the most suitable data source for such maps, but the Baltic Sea's optical properties are so different from those of the open ocean that globally calibrated standard models suffer from large errors. Regional predictive models that take the Baltic Sea's special optical properties into account are therefore needed. This paper tests how accurately generalized linear models (GLMs) and generalized additive models (GAMs), with MODIS/Aqua and auxiliary data as inputs, can predict Secchi depth at a regional scale. It uses cross-validation to test the prediction accuracy of hundreds of GAMs and GLMs with up to five input variables. A GAM with three input variables (chlorophyll a, remote sensing reflectance at 678 nm, and long-term mean salinity) made the most accurate predictions. Tested against field observations not used for model selection and calibration, the best model's mean absolute error (MAE) for daily predictions was 1.07 m (22%), more than 50% lower than that of other publicly available Baltic Sea Secchi depth maps; the MAE for monthly averages was 0.86 m (15%). The proposed model selection process was thus able to find a regional model with good prediction accuracy. It could also be used to find predictive models for environmental variables other than Secchi depth, with data from other satellite sensors, and for other regions where non-standard remote sensing models are needed for prediction and mapping. Annual and monthly mean Secchi depth maps for 2003–2012 accompany this paper as Supplementary materials.
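The core of such a model-selection loop can be sketched with the pyGAM and scikit-learn packages (both assumptions of this sketch): fit a three-term GAM on each training fold and score the held-out fold with the MAE. The column order (chlorophyll a, Rrs at 678 nm, mean salinity) follows the paper, but the code is a generic illustration assuming the user supplies their own satellite/field matchup data, not the authors' implementation.

```python
import numpy as np
from pygam import LinearGAM, s
from sklearn.model_selection import KFold

def cv_mae(X, y, n_splits=5):
    """Cross-validated mean absolute error for a 3-term GAM.
    X : (n, 3) array of predictors (chl-a, Rrs_678, mean salinity)
    y : (n,) array of observed Secchi depths in m
    """
    maes = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X[train], y[train])
        maes.append(np.mean(np.abs(gam.predict(X[test]) - y[test])))
    return float(np.mean(maes))
```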

Relevance: 60.00%

Publisher:

Abstract:

At Ny-Ålesund (78.9° N), Svalbard, surface radiation measurements of up- and downward shortwave and longwave radiation have been operated since August 1992 within the Baseline Surface Radiation Network (BSRN), complemented by surface and upper-air meteorology since August 1993. These long-term observations form the basis for a climatological presentation of the surface radiation data. The 21-year observation period reflects ongoing changes in the Arctic climate system; in particular, the observations indicate a strong seasonality of surface warming and related changes in different radiation parameters. The annual mean temperature at Ny-Ålesund has risen by +1.3 ± 0.7 K per decade, with a maximum seasonal increase of +3.1 ± 2.6 K per decade during the winter months. Winter is also the season with the largest long-term changes in radiation, featuring an increase of +15.6 ± 11.6 W/m**2 per decade in downward longwave radiation. Furthermore, changes in reflected solar radiation during the months of snow melt indicate an onset of the warm season about one week earlier than at the beginning of the observations. The online dataset of Ny-Ålesund surface radiation measurements provides a valuable data source for the validation of satellite instruments and climate models.
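Per-decade trends of the kind quoted above are typically obtained by fitting a straight line to the annual-mean series and scaling the slope by ten. A minimal sketch with synthetic data tuned to the reported +1.3 K per decade:

```python
import numpy as np

def trend_per_decade(years, values):
    """Linear trend of an annual-mean series, scaled to per-decade
    units (e.g., K per decade for temperature, W/m**2 per decade for
    downward longwave radiation)."""
    slope_per_year = np.polyfit(years, values, deg=1)[0]
    return 10.0 * slope_per_year

years = np.arange(1993, 2014)  # synthetic 21-year annual means
temps = (-5.5 + 0.13 * (years - 1993)
         + np.random.default_rng(3).normal(0, 0.8, years.size))
print(trend_per_decade(years, temps))  # roughly +1.3 K per decade
```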

Relevance: 60.00%

Publisher:

Abstract:

Species selection for forest restoration is often supported by expert knowledge of the local distribution patterns of native tree species. This approach is not applicable to largely deforested regions unless enough data on pre-human tree species distributions are available. In such regions, ecological niche models may provide essential information to support species selection in forest restoration planning. In this study we used ecological niche models to predict habitat suitability for native tree species in the "Tierra de Campos" region, an almost totally deforested area of the Duero Basin (Spain). Previously available models provide habitat suitability predictions for dominant native tree species, but including non-dominant tree species in restoration planning may be desirable to promote biodiversity, especially in largely deforested areas where nearby seed sources cannot be expected. We used the Forest Map of Spain as the species occurrence data source, to maximize the number of modeled tree species, and trained the models with penalized logistic regression on climate and lithological predictors. From the model predictions, a set of tools was developed to support species selection in forest restoration planning: predictions were used to build ordered lists of suitable species for each cell of the study area, the lists were summarized in maps showing the two most suitable species per cell, and potential distribution maps of the suitable species were drawn. For a scenario with two dominant species, the models predicted a mixed forest (Quercus ilex and a coniferous tree species) for almost half of the study area. According to the models, 22 non-dominant native tree species are suitable for the study area, with up to six suitable species per cell; the predictions pointed to Crataegus monogyna, Juniperus communis, J. oxycedrus, and J. phoenicea as the most suitable non-dominant native tree species. Our results encourage further use of ecological niche models for forest restoration planning in largely deforested regions.
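A minimal sketch of penalized logistic regression for habitat suitability, using scikit-learn on simulated presence/absence data; the predictors, their coefficients, and the penalty strength are invented for illustration and do not reproduce the paper's models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([
    rng.uniform(300, 900, n),  # annual precipitation, mm (placeholder)
    rng.uniform(8, 16, n),     # mean annual temperature, C (placeholder)
    rng.integers(0, 2, n),     # calcareous lithology indicator (placeholder)
])
logit = -6 + 0.006 * X[:, 0] + 0.15 * X[:, 1] + 0.8 * X[:, 2]
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

# C sets the inverse penalty strength (L2 by default)
model = LogisticRegression(C=0.5, max_iter=1000).fit(X, presence)
suitability = model.predict_proba(X)[:, 1]    # per-cell suitability
ranked = np.argsort(suitability)[::-1]        # most suitable cells first
```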

Relevance: 60.00%

Publisher:

Abstract:

The Metadata Provenance Task Group aims to define a data model that allows assertions to be made about description sets. A shared model of the data elements required to describe an aggregation of metadata statements makes it possible to collectively import, access, use, and publish facts about the quality, rights, timeliness, data source type, trust situation, etc., of the described statements. In this paper we outline the preliminary model created by the task group, together with first examples that demonstrate how the model is to be used.
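As a purely hypothetical illustration of the kind of data elements such a model aggregates, the sketch below bundles provenance facts about one description set into a single record; the field names are invented for the example and are not the task group's vocabulary.

```python
from dataclasses import dataclass

@dataclass
class DescriptionSetProvenance:
    """Hypothetical record of assertions about a description set."""
    description_set_id: str
    creator: str
    created: str            # ISO 8601 timestamp
    data_source_type: str   # e.g., "library catalog", "web harvest"
    rights: str
    quality_note: str

assertion = DescriptionSetProvenance(
    description_set_id="urn:example:ds:42",
    creator="Example Aggregator",
    created="2011-03-01T12:00:00Z",
    data_source_type="web harvest",
    rights="CC0",
    quality_note="titles verified, dates unreviewed",
)
```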

Relevance: 60.00%

Publisher:

Abstract:

Part of current biomedical research focuses on the analysis of heterogeneous data, which may differ in origin, structure, and semantics. A large amount of data of interest to researchers resides in public databases that gather information from different sources and make it freely available to the community. To homogenize these public data sources with private ones, various tools and techniques exist that automate the integration of heterogeneous data.

The Biomedical Informatics Group (GIB) [1] of the Universidad Politécnica de Madrid collaborates in the European project P-medicine [2], whose goal is to develop an infrastructure that facilitates the evolution of current medical procedures towards personalized medicine. One of the group's tasks within P-medicine is to build tools that help users integrate data contained in heterogeneous information sources, some of which are public biomedical databases hosted on the NCBI [3] (National Center for Biotechnology Information) platform. One of the tools the group is developing for this purpose is Ontology Annotator. In one of its phases, the user retrieves information from a public database and manually selects the relevant results. Automating this search-and-selection process raises two needs: generating queries that lead to results as precise and exact as possible, and extracting relevant information from large numbers of documents, which requires systems that analyze and weight the data characterizing them.

Within the information retrieval branch of artificial intelligence, there are several studies on query expansion from relevance feedback that could address this problem. These studies focus on techniques for reformulating or expanding the initial query using, as feedback, the results that were relevant to the user in a first pass, so that the new result set lies closer to what the user really wants. The goal of this final-year project is the study, implementation, and experimental evaluation of methods that automate the extraction of salient information from documents and use it to expand or reformulate queries, thereby improving the precision and ranking of the associated results. These methods will be integrated into the Ontology Annotator tool and targeted at the PubMed [4] data source.
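The classic technique in this family is Rocchio relevance feedback: move the query vector toward the documents the user marked relevant and away from the rest, then take the highest-weighted terms of the updated query as expansion candidates. The sketch below is a generic TF-IDF implementation, not the Ontology Annotator code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rocchio_expand(query, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15, top_k=5):
    """Rocchio relevance feedback over TF-IDF vectors: returns the
    top-weighted terms of the updated query vector as expansion
    candidates. A generic sketch with standard default weights.
    """
    vec = TfidfVectorizer()
    D = vec.fit_transform(relevant_docs + nonrelevant_docs).toarray()
    q = vec.transform([query]).toarray()[0]
    rel = D[:len(relevant_docs)].mean(axis=0)
    nonrel = (D[len(relevant_docs):].mean(axis=0)
              if nonrelevant_docs else 0.0)
    q_new = alpha * q + beta * rel - gamma * nonrel   # updated query vector
    terms = np.array(vec.get_feature_names_out())
    return list(terms[np.argsort(q_new)[::-1][:top_k]])

print(rocchio_expand(
    "breast cancer",
    ["brca1 mutation carriers breast cancer risk",
     "brca1 brca2 screening breast cancer"],
    ["lung cancer smoking risk"]))
```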

Relevance: 60.00%

Publisher:

Abstract:

Multibeam bathymetric data collected in the Puerto Rico Trench and northeastern Caribbean region were compiled into a seamless bathymetric terrain model for broad-scale geological investigations of the trench system. These data, collected during eight separate surveys between 2002 and 2013 and covering almost 180,000 square kilometers, are published here as a large-format map sheet and as digital spatial data. This report describes the common multibeam data collection and processing methods used to produce the bathymetric terrain model and the corresponding data-source polygon. Details documenting the complete provenance of the data are provided in the metadata in the Data Catalog section.

Relevance: 60.00%

Publisher:

Abstract:

This research proposes a methodology for improving the individual prediction values computed by an existing regression model without changing its parameters or its architecture. In other words, we are interested in achieving more accurate results by adjusting the calculated regression predictions, without modifying or rebuilding the original regression model. Our proposal is to adjust the regression predictions using individual reliability estimates that indicate whether a single prediction is likely to produce an error the user of the regression considers critical. The proposed method was tested in three sets of experiments using three different types of data: synthetically produced data, cross-sectional data from the public UCI Machine Learning Repository, and time series data from ISO-NE (the Independent System Operator of New England). The experiments with synthetic data were performed to verify how the method behaves in controlled situations; there, the method produced its largest improvements on cleaner, artificially generated datasets, with results degrading progressively as random noise was added. The experiments with real data from UCI and ISO-NE investigated the applicability of the methodology in the real world, where the proposed method improved the regression predictions in about 95% of the experiments.
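One plausible reading of this idea, offered only as a sketch rather than the authors' procedure, is to freeze the original regression, learn its individual errors on held-out data with a secondary model, and subtract the predicted error from each new prediction:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (2000, 2))
y = X[:, 0]**2 + X[:, 1] + 0.3 * rng.standard_normal(2000)

X_fit, X_adj, y_fit, y_adj = train_test_split(X, y, random_state=0)
base = LinearRegression().fit(X_fit, y_fit)   # existing model, left untouched
residuals = y_adj - base.predict(X_adj)       # its individual prediction errors
corrector = RandomForestRegressor(random_state=0).fit(X_adj, residuals)

# Adjusted prediction = base prediction + predicted residual
X_new = rng.uniform(-3, 3, (500, 2))
adjusted = base.predict(X_new) + corrector.predict(X_new)
```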