977 results for Distributed Database Integration


Relevance:

30.00%

Publisher:

Abstract:

This article analyses the dual functioning of the Mexican electromechanical sector between 1994 and 2008, as distinct from other globalized activities. An estimation of labour productivity in 52 industrial classes finds that structural heterogeneity increased, particularly in the 1994-2001 subperiod, alongside technical and organizational improvements that were increasingly concentrated in a small number of subsidiaries of transnational automotive-assembly enterprises. The application of a shift-share technique also revealed the absence of any significant structural change. Lastly, an extension of the competitiveness-evaluation methodology developed by the Economic Commission for Latin America and the Caribbean (ECLAC), applied to a second database that reclassifies 1,345 foreign trade products, makes it possible to contrast these changes with the dynamism of the global production networks in which the sector's leading firms in Mexico are engaged.
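
A shift-share decomposition of this kind splits aggregate labour productivity growth into a within-industry effect and reallocation (structural change) effects. A minimal Python sketch of the standard arithmetic follows; the industry classes, employment shares and productivity levels are invented for illustration, not the article's data.

```python
# Shift-share decomposition of aggregate labour productivity growth:
#   within effect  = sum_i s_i0 * (p_i1 - p_i0)
#   static shift   = sum_i p_i0 * (s_i1 - s_i0)
#   dynamic shift  = sum_i (p_i1 - p_i0) * (s_i1 - s_i0)
# where s_i is industry i's employment share and p_i its labour productivity.

shares_0 = {"auto_assembly": 0.30, "electrical_eq": 0.45, "machinery": 0.25}
shares_1 = {"auto_assembly": 0.40, "electrical_eq": 0.38, "machinery": 0.22}
prod_0 = {"auto_assembly": 120.0, "electrical_eq": 80.0, "machinery": 60.0}
prod_1 = {"auto_assembly": 170.0, "electrical_eq": 85.0, "machinery": 62.0}

within = sum(shares_0[i] * (prod_1[i] - prod_0[i]) for i in shares_0)
static = sum(prod_0[i] * (shares_1[i] - shares_0[i]) for i in shares_0)
dynamic = sum((prod_1[i] - prod_0[i]) * (shares_1[i] - shares_0[i]) for i in shares_0)

total = (sum(shares_1[i] * prod_1[i] for i in shares_0)
         - sum(shares_0[i] * prod_0[i] for i in shares_0))
assert abs(total - (within + static + dynamic)) < 1e-9  # decomposition is exact
print(f"within: {within:.2f}, static shift: {static:.2f}, dynamic shift: {dynamic:.2f}")
```

Large shift terms relative to the within effect would signal structural change; the article reports the opposite pattern, with no significant structural change.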

Relevance:

30.00%

Publisher:

Abstract:

A transmission line is characterized by the fact that its parameters are distributed along its length. As a consequence, the voltages and currents along the line behave like waves and are described by differential equations. In general, these differential equations are difficult to solve in the time domain because of the convolution integral, but in the frequency domain they become simpler and their solutions are known. The transmission line can also be represented by a cascade of π circuits. This model has the advantage of being developed directly in the time domain, but it requires numerical integration methods. In this work, the model that treats the line parameters as distributed (Universal Line Model) is compared with the model that treats them as lumped along the line (π circuit model), using the trapezoidal integration method, Simpson's rule, and the Runge-Kutta method, for a 100 km single-phase transmission line subjected to an operating power. © 2003-2012 IEEE.
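
The π circuit model lends itself to a compact state-space implementation. Below is a minimal sketch, assuming illustrative per-kilometre line constants and a matched resistive load (none taken from the paper): a cascade of π sections integrated with the trapezoidal rule.

```python
import numpy as np

# Lumped-parameter (pi-circuit cascade) line energization, solved with the
# trapezoidal rule. Line constants are illustrative, not the paper's test case.
n = 10                              # number of pi sections for a 100 km line
dx = 100.0 / n                      # km per section
R, L = 0.05 * dx, 1e-3 * dx         # series resistance [ohm], inductance [H]
C = 11e-9 * dx                      # shunt capacitance [F] (G neglected)
Rload = np.sqrt(1e-3 / 11e-9)       # roughly matched resistive load [ohm]

# States: x = [i_1..i_n, v_1..v_n]; source voltage u drives node 0.
A = np.zeros((2 * n, 2 * n))
B = np.zeros(2 * n)
for j in range(n):
    A[j, j] = -R / L                # di_j/dt = (v_{j-1} - v_j - R i_j) / L
    A[j, n + j] = -1.0 / L
    if j > 0:
        A[j, n + j - 1] = 1.0 / L
    A[n + j, j] = 1.0 / C           # dv_j/dt = (i_j - i_{j+1}) / C
    if j < n - 1:
        A[n + j, j + 1] = -1.0 / C
A[2 * n - 1, 2 * n - 1] -= 1.0 / (Rload * C)  # load current at the far end
B[0] = 1.0 / L                                # source enters the first section

h, steps = 1e-6, 2000
u = np.ones(steps + 1)                        # 1 p.u. step energization
I2n = np.eye(2 * n)
Minv = np.linalg.inv(I2n - (h / 2) * A)       # trapezoidal: implicit half-step
x = np.zeros(2 * n)
for k in range(steps):
    rhs = (I2n + (h / 2) * A) @ x + (h / 2) * B * (u[k] + u[k + 1])
    x = Minv @ rhs
print(f"receiving-end voltage after {steps * h * 1e3:.1f} ms: {x[-1]:.3f} p.u.")
```

The trapezoidal rule is implicit and A-stable, which is why EMTP-type simulators use it by default.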

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The aim of this study was to assess the epidemiological and operational characteristics of the Leprosy Program before and after its integration into the Primary Healthcare Services of the municipality of Aracaju, Sergipe, Brazil. Methods: Data were drawn from the national database. The study period was divided into preintegration (1996-2000) and postintegration (2001-2007) phases. Annual detection rates were calculated. Frequency data on clinico-epidemiological variables of cases detected and treated in the two periods were compared using the Chi-squared (χ2) test, adopting a 5% level of significance. Results: Detection rates, both overall and in subjects younger than 15 years, were greater in the postintegration period and were higher than the rates recorded for Brazil as a whole during the same periods. A total of 780 and 1,469 cases were registered during the preintegration and postintegration periods, respectively. Observations for the postintegration period were as follows: I) a higher proportion of cases with disability grade assessed at diagnosis, with an increase from 60.9% to 78.8% (p < 0.001), and at the end of treatment, from 41.4% to 44.4% (p < 0.023); II) an increase in the proportion of cases detected by contact examination, from 2.1% to 4.1% (p < 0.001); and III) a lower level of treatment default, with a decrease from 5.64 to 3.35 (p < 0.008). Only 34% of cases registered from 2001 to 2007 were examined. Conclusions: The shift observed in detection rates, both overall and in subjects younger than 15 years, during the postintegration period indicates an increased level of healthcare access. The fall in the number of patients abandoning treatment indicates greater adherence to treatment. However, previous shortcomings in key actions, pivotal to attaining the outcomes and impact envisaged for the program, persisted in the postintegration period.
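
Comparisons like these are two-proportion chi-squared tests on 2x2 tables. A minimal sketch with scipy follows; the cell counts are hypothetical, back-computed from the reported percentages only to illustrate the call, not the study's actual tables.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: cases with disability grade assessed at diagnosis,
# preintegration vs. postintegration (counts invented for illustration).
table = [[475, 305],    # preintegration:  assessed, not assessed (~60.9% of 780)
         [1158, 311]]   # postintegration: assessed, not assessed (~78.8% of 1,469)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}, dof = {dof}")
```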

Relevance:

30.00%

Publisher:

Abstract:

Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single one. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
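
As a rough illustration of the connector idea (not the authors' implementation), a connector can couple access to a data source with declarative transformation rules that map source-specific fields onto terms of a shared reference ontology. The field names, ontology terms and rules below are invented.

```python
from typing import Any, Callable

# A transformation rule maps a source-specific field to a term of the shared
# reference ontology, optionally converting the value on the way through.
Rule = tuple[str, str, Callable[[Any], Any]]

def make_connector(rules: list[Rule]) -> Callable[[dict], dict]:
    """Build a connector that rewrites one source record into ontology terms."""
    def connect(record: dict) -> dict:
        return {target: convert(record[source])
                for source, target, convert in rules
                if source in record}
    return connect

# Hypothetical rules for a microarray result file (names are illustrative only).
rules: list[Rule] = [
    ("probe_id",   "geo:ProbeIdentifier",   str),
    ("log2_ratio", "geo:ExpressionLevel",   float),
    ("pval",       "geo:SignificanceValue", float),
]
connector = make_connector(rules)
print(connector({"probe_id": "AFFX-101", "log2_ratio": "2.31", "pval": "0.004"}))
```

Keeping the rules declarative is what lets the same connector machinery serve both simple renamings and nontrivial value transformations.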

Relevance:

30.00%

Publisher:

Abstract:

Marine N2 fixing microorganisms, termed diazotrophs, are a key functional group in marine pelagic ecosystems. The biological fixation of dinitrogen (N2) to bioavailable nitrogen provides an important new source of nitrogen for pelagic marine ecosystems and influences primary productivity and organic matter export to the deep ocean. As one of a series of efforts to collect biomass and rates specific to different phytoplankton functional groups, we have constructed a database on diazotrophic organisms in the global pelagic upper ocean by compiling about 12,000 direct field measurements of cyanobacterial diazotroph abundances (based on microscopic cell counts or qPCR assays targeting the nifH genes) and N2 fixation rates. Biomass conversion factors are estimated based on cell sizes to convert abundance data to diazotrophic biomass. The database is limited spatially, lacking large regions of the ocean, especially in the Indian Ocean. The data are approximately log-normally distributed, and large variances exist in most sub-databases, with non-zero values differing by 5 to 8 orders of magnitude. A lower mean N2 fixation rate was found in the North Atlantic Ocean than in the Pacific Ocean. Reporting the geometric mean and the range of one geometric standard error below and above it, the pelagic N2 fixation rate in the global ocean is estimated to be 62 (53-73) Tg N yr-1, and the pelagic diazotrophic biomass in the global ocean is estimated to be 4.7 (2.3-9.6) Tg C from cell counts and 89 (40-200) Tg C from nifH-based abundances. Uncertainties related to biomass conversion factors can change the estimate of the geometric mean pelagic diazotrophic biomass in the global ocean by about ±70%. This evolving database can be used to study spatial and temporal distributions and variations of marine N2 fixation, to validate geochemical estimates and to parameterize and validate biogeochemical models. The database is stored in PANGAEA (http://doi.pangaea.de/10.1594/PANGAEA.774851).
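
Because the measurements are approximately log-normal, summary statistics of this kind are computed on log-transformed values. A minimal sketch of the geometric mean and the one-geometric-standard-error range, with invented sample values:

```python
import numpy as np

# Hypothetical N2 fixation rate measurements spanning several orders of
# magnitude, as is typical for log-normally distributed field data.
rates = np.array([0.5, 2.0, 7.5, 31.0, 120.0, 480.0])

logs = np.log(rates)
gmean = np.exp(logs.mean())                           # geometric mean
gse = np.exp(logs.std(ddof=1) / np.sqrt(len(rates)))  # geometric standard error

# One geometric SE below/above the geometric mean: divide/multiply.
print(f"{gmean:.1f} ({gmean / gse:.1f}-{gmean * gse:.1f})")
```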

Relevance:

30.00%

Publisher:

Abstract:

Besides the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user itself becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features that are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow in an efficient and intelligent way as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid is composed of a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load is given by the electrical utilities of a cheese factory. The ESS comprises two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, built and connected to the wind turbine, the photovoltaic plant and the switchboard. Afterwards, different electrochemical storage technologies were studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature Na-NiCl2 (molten salt) battery technology. Data acquisition from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of 30 kW. Moreover, a container was designed and built to house the BESS and PCS, meeting all requirements and safety conditions. Furthermore, a smart control system was implemented to handle the different applications of the ESS, such as peak shaving or load levelling.
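
A minimal sketch of the peak-shaving logic such a control system might implement follows; the threshold, power rating, capacity and load samples are invented placeholders, not the prototype's actual control law.

```python
# Peak shaving: discharge the battery when demand exceeds a threshold,
# recharge from surplus renewable generation when demand is below it.
P_MAX = 30.0        # battery power rating [kW] (hypothetical)
E_CAP = 60.0        # usable capacity [kWh]    (hypothetical)
THRESHOLD = 20.0    # grid import limit to enforce [kW]

def step(load_kw: float, gen_kw: float, soc_kwh: float, dt_h: float = 0.25):
    """One control interval: return (grid_kw, new_soc_kwh)."""
    net = load_kw - gen_kw                       # positive = deficit
    if net > THRESHOLD:                          # shave the peak: discharge
        p_dis = min(net - THRESHOLD, P_MAX, soc_kwh / dt_h)
        return net - p_dis, soc_kwh - p_dis * dt_h
    elif net < 0:                                # surplus RES: charge
        p_chg = min(-net, P_MAX, (E_CAP - soc_kwh) / dt_h)
        return net + p_chg, soc_kwh + p_chg * dt_h
    return net, soc_kwh                          # within limits: battery idle

soc = 30.0
for load, gen in [(35.0, 5.0), (28.0, 12.0), (10.0, 24.0)]:  # 15-min samples
    grid, soc = step(load, gen, soc)
    print(f"grid exchange: {grid:6.1f} kW, SoC: {soc:5.1f} kWh")
```

The same state machine, with a time-varying threshold, also covers the load-levelling application mentioned above.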

Relevance:

30.00%

Publisher:

Abstract:

The spectacular advances of computer science applied to geographic information systems (GIS) in recent years have favored the emergence of several technological solutions. These developments have given rise to enormous opportunities for the digital management of territory. Among these solutions, the best known, Google Maps, offers free, dynamic and comprehensive online mapping. To meet the enormous need for geotagged urban-indicator information, we carried out the project "Integration of an urban observatory on Google Maps". The problem of geolocation in the urban observatory is particularly relevant because there are currently no reliable data (descriptive and geographical) on the urban sector; one must extrapolate from old and obsolete data. This curbs the effectiveness of urban management, makes investment programming difficult and prevents the acquisition of the knowledge needed to make cities engines of growth. A geolocation tool coupled with the data would allow better monitoring of the indicators. Our project's objective is to develop an interactive map server (web mapping) whose map layer is drawn from the Google Maps servers and matched with information from the field, in order to produce maps of the urban equipment and infrastructure of a city on the client's request. To achieve this goal, we conducted a GPS survey of strategic sites in our core sector (health facilities); in addition, using information from the field, we built a PostgreSQL database that links the field information to the Google Maps base map via appropriate KML and PHP scripts. We limit our work to the city of Douala, Cameroon, and to the health facilities sector, with the possibility of extension to other sectors and other cities. Keywords: Geographic Information System (GIS), Thematic Mapping, Web Mapping, data mining, Google API.
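
The abstract describes PHP scripts; purely as an illustration of the data flow, the Python sketch below serializes georeferenced records, as they might come back from the PostgreSQL database, into a KML overlay for Google Maps. The schema and the two facilities are invented.

```python
from xml.sax.saxutils import escape

# Rows as they might come back from the PostgreSQL database; the schema
# (name, lon, lat, type) is invented for illustration.
facilities = [
    {"name": "Hopital Laquintinie", "lon": 9.693, "lat": 4.041, "type": "hospital"},
    {"name": "CSI New-Bell", "lon": 9.705, "lat": 4.039, "type": "health centre"},
]

def to_kml(rows):
    """Serialize georeferenced records into a KML overlay for Google Maps."""
    placemarks = "\n".join(
        "  <Placemark>\n"
        f"    <name>{escape(r['name'])}</name>\n"
        f"    <description>{escape(r['type'])}</description>\n"
        f"    <Point><coordinates>{r['lon']},{r['lat']},0</coordinates></Point>\n"
        "  </Placemark>"
        for r in rows
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            f"{placemarks}\n"
            "</Document></kml>")

print(to_kml(facilities))  # serve this document and overlay it on the map
```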

Relevance:

30.00%

Publisher:

Abstract:

The ability to make scientific findings reproducible is increasingly important in areas where substantive results are the product of complex statistical computations. Reproducibility can allow others to verify the published findings and conduct alternate analyses of the same data. A question that arises naturally is: how can one conduct and distribute reproducible research? This question is relevant from the point of view of both authors who want to make their research reproducible and readers who want to reproduce relevant findings reported in the scientific literature. We present a framework in which reproducible research can be conducted and distributed via cached computations and describe specific tools for both authors and readers. As a prototype implementation we introduce three software packages written in the R language. The cacheSweave and stashR packages together provide tools for caching computational results in a key-value style database which can be published to a public repository for readers to download. The SRPM package provides tools for generating and interacting with "shared reproducibility packages" (SRPs) which can facilitate the distribution of the data and code. As a case study we demonstrate the use of the toolkit on a national study of air pollution exposure and mortality.
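
cacheSweave and stashR are R packages; the Python sketch below is not their API, only an illustration of the underlying idea: a key-value store in which computational results are saved under content-derived keys, so readers can retrieve published results instead of recomputing them.

```python
import hashlib
import pickle
from pathlib import Path

CACHE = Path("cache_db")   # key-value store: one file per cached result
CACHE.mkdir(exist_ok=True)

def cached(func, *args):
    """Run func(*args) once; afterwards, serve the stored result by key."""
    key = hashlib.sha256(pickle.dumps((func.__name__, args))).hexdigest()
    path = CACHE / key
    if path.exists():                        # reader's side: reuse stored result
        return pickle.loads(path.read_bytes())
    result = func(*args)                     # author's side: compute and publish
    path.write_bytes(pickle.dumps(result))
    return result

def expensive_model_fit(n: int) -> float:   # stand-in for a long computation
    return sum(i ** 0.5 for i in range(n))

print(cached(expensive_model_fit, 10_000_000))  # computed once, then cached
```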

Relevance:

30.00%

Publisher:

Abstract:

Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing through a special type of penalty matrix. We apply the model to estimate the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease, using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
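
A minimal sketch of the distributed lag setup follows, with a ridge-type smoothness penalty on second differences standing in for the paper's penalized-spline / hierarchical Bayesian machinery; the exposure series and lag curve are simulated, not Medicare data.

```python
import numpy as np

rng = np.random.default_rng(0)
T, max_lag = 500, 14                       # days of data, lags 0..14

# Synthetic exposure series and a smooth "true" distributed lag (invented).
pm = rng.gamma(4.0, 5.0, size=T)
true_lag = 0.04 * np.exp(-np.arange(max_lag + 1) / 3.0)

# Build the lagged design matrix: column l holds exposure l days earlier.
X = np.column_stack([np.roll(pm, l) for l in range(max_lag + 1)])[max_lag:]
y = X @ true_lag + rng.normal(0.0, 1.0, size=T - max_lag)

# Penalize squared second differences of the lag curve (a smoothness prior),
# a crude stand-in for the penalized-spline / hierarchical setup.
D = np.diff(np.eye(max_lag + 1), n=2, axis=0)
lam = 50.0
beta = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
print(np.round(beta, 3))                   # estimated distributed lag curve
```

Summing the estimated lag coefficients gives the cumulative effect of a sustained unit increase in exposure, which is the second methodological question above.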

Relevance:

30.00%

Publisher:

Abstract:

Semantic Web technologies offer a promising framework for the integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We utilize Semantic Web technologies not only for integrating, repurposing and classifying multi-source clinical data, but also to construct a distributed environment for information sharing and online collaboration. Service Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. The components of the semantic solution and its overall architecture are described.
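
Purely as an illustration of the integration pattern (not the CCTS platform itself), the sketch below merges two clinical "sources" expressed in RDF and answers one query across both with SPARQL, using rdflib and an invented vocabulary.

```python
from rdflib import Graph

# Two "sources" expressed in RDF with a tiny invented vocabulary; merging
# the graphs gives an integrated view that one SPARQL query can span.
labs = """
@prefix ex: <http://example.org/clinical#> .
ex:patient42 ex:hasLabResult ex:hba1c_8_2 .
ex:hba1c_8_2 ex:test "HbA1c" ; ex:value 8.2 .
"""
diagnoses = """
@prefix ex: <http://example.org/clinical#> .
ex:patient42 ex:hasDiagnosis "type 2 diabetes" .
"""

g = Graph()
g.parse(data=labs, format="turtle")
g.parse(data=diagnoses, format="turtle")   # union of both sources

q = """
PREFIX ex: <http://example.org/clinical#>
SELECT ?dx ?val WHERE {
  ?p ex:hasDiagnosis ?dx .
  ?p ex:hasLabResult ?r .
  ?r ex:test "HbA1c" ; ex:value ?val .
}"""
for dx, val in g.query(q):
    print(dx, val)   # integrated answer drawn from both sources
```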

Relevance:

30.00%

Publisher:

Abstract:

The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and Digital Volume Correlation (DVC) techniques is a powerful new tool not only for examining the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also for fully quantifying their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. Combining XRCT sectional image data of analogue experiments with 2D DIC only allows the quantification of 2D displacement and strain components in the section direction. This completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply digital volume correlation (DVC) techniques to XRCT scan data of "solid" analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for the optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable analysis of the large-scale and small-scale strain history of geological structures.
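
Reduced to its core, DVC finds the displacement that best aligns a subvolume between two scans. A minimal sketch, assuming synthetic volumes and a rigid integer-voxel shift (real DVC adds windowing, local subvolumes and subvoxel refinement):

```python
import numpy as np

# Digital volume correlation, reduced to its core: recover the integer-voxel
# shift between two XRCT volumes via FFT-based cross-correlation.
rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64, 64))                  # "reference" scan
true_shift = (3, -2, 5)
deformed = np.roll(ref, true_shift, axis=(0, 1, 2))  # rigidly shifted scan

# Cross-correlation theorem: corr = IFFT(FFT(deformed) * conj(FFT(ref)))
spec = np.fft.fftn(deformed) * np.conj(np.fft.fftn(ref))
corr = np.fft.ifftn(spec).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Map peak indices back to signed shifts (circular wraparound convention).
shift = [(p if p <= s // 2 else p - s) for p, s in zip(peak, corr.shape)]
print("recovered shift:", shift)                     # expect [3, -2, 5]
```

Applying this per subvolume yields a 3D displacement field, from which the strain tensor field is obtained by numerical differentiation.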

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: We evaluated Swiss slaughterhouse data for integration into a national syndromic surveillance system for the early detection of emerging diseases in production animals. We analysed meat inspection data for cattle, pigs and small ruminants slaughtered between 2007 and 2012 (including emergency slaughters of sick/injured animals), investigating patterns in the number of animals slaughtered and condemned, the reasons invoked for whole carcass condemnations, reporting biases and regional effects. RESULTS: Whole carcass condemnation rates were fairly uniform (1-2‰) over time and between the different types of production animals. Condemnation rates were much higher and less uniform following emergency slaughters. The number of condemnations peaked in December for both cattle and pigs, a time when lower-quality animals are sent to slaughter because hay and feed are limited, and when certain diseases are more prevalent. Each type of production animal was associated with a different profile of condemnation reasons. The most commonly reported reason was "severe lesions" for cattle, "abscesses" for pigs and "pronounced weight loss" for small ruminants. These reasons could constitute valuable syndromic indicators, as they are unspecific clinical manifestations of a large range of animal diseases (as well as potential indicators of animal welfare). Differences were detected in the rate of carcass condemnation between cantons and between large and small slaughterhouses. A large percentage (>60% for all three animal categories) of operating slaughterhouses never reported a condemnation between 2007 and 2012, a potential indicator of widespread non-reporting bias in our database. CONCLUSIONS: The current system offers simultaneous coverage of cattle, pigs and small ruminants for the whole of Switzerland, and traceability of each condemnation to its farm of origin. The number of condemnations was significantly linked to the number of slaughters, meaning that the former should always be offset by the latter in analyses (see the sketch below). Because this denominator is only communicated at the end of the month, condemnations may currently only be monitored on a monthly basis. Coupled with the lack of timeliness (a 30-60 day delay between condemnation and notification), this limits the use of the data for early detection.
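
A common way to offset counts by their denominator, as the conclusions above recommend, is a Poisson regression with a log exposure offset. A minimal sketch on simulated monthly data follows; the counts, rates and December effect are invented, and this is not the paper's analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = 72                                             # 2007-2012, monthly
slaughters = rng.integers(8_000, 12_000, size=months)   # invented denominators
december = (np.arange(months) % 12 == 11).astype(float) # seasonal indicator

# Simulate condemnations at ~1.5 per 1000, higher in December (invented).
rate = 0.0015 * np.exp(0.4 * december)
condemned = rng.poisson(rate * slaughters)

# Poisson regression of counts with log(slaughters) as offset, so the
# coefficients model the condemnation *rate* rather than the raw count.
X = sm.add_constant(december)
fit = sm.GLM(condemned, X, family=sm.families.Poisson(),
             offset=np.log(slaughters)).fit()
print(fit.params)   # const ~ log(0.0015), December effect ~ 0.4
```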

Relevance:

30.00%

Publisher:

Abstract:

We obtained partial carcass condemnation (PCC) data for cattle (2009-2010) from a Swiss slaughterhouse. Data on whole carcass condemnations (WCC) carried out at the same slaughterhouse over those years were extracted from the national database for meat inspection. Given the differences observed between the WCC and PCC time series, the two indicators likely respond to different health events in the population, and one cannot be substituted for the other. Because PCC recordings are promising for syndromic surveillance, the meat inspection database should be able to record both WCC and PCC data in the future. However, a standardised list of reasons for PCC needs to be defined and used nationwide in all slaughterhouses.