964 results for Data Migration Processes Modeling
Abstract:
Running hydrodynamic models interactively allows both visual exploration and changes to the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example by responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control of and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved through computer graphics algorithms such as surface simplification that depends on the current view and the distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, while allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
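The view-dependent simplification mentioned in this abstract can be illustrated with a minimal sketch (not code from the cited work): pick a mesh level of detail from the camera distance, so that far-away parts of the computational grid are drawn with fewer cells. All names and thresholds below are hypothetical.

```python
# Illustrative sketch of distance-based level-of-detail (LOD) selection,
# as used in games and virtual globes; names and thresholds are hypothetical.
import math

def select_lod(camera_pos, cell_center, thresholds=(100.0, 500.0, 2000.0)):
    """Return an LOD index: 0 = full resolution, higher = coarser mesh."""
    dx = camera_pos[0] - cell_center[0]
    dy = camera_pos[1] - cell_center[1]
    dz = camera_pos[2] - cell_center[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # coarsest representation for far-away cells

# Example: a grid cell roughly 780 m from the camera is drawn at LOD 2.
print(select_lod((0.0, 0.0, 10.0), (600.0, 500.0, 0.0)))
```

The same test can be evaluated per tile rather than per cell, so that the aggregated (cached) representation of a whole mesh region is reused until the view changes enough to invalidate it.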
Abstract:
This work presents a computational methodology for determining synchronous machine parameters from load rejection test data. Through machine modeling, the quadrature-axis parameters can be obtained from a load rejection performed under an arbitrary reference, reducing the difficulties currently encountered. The proposed method is applied to a real machine.
Abstract:
Includes bibliography
Abstract:
Currently, many museums, botanic gardens and herbaria keep data on biological collections, and using computational tools researchers digitize these data and provide access to them through data portals. The replication of databases into portals can be accomplished through the use of protocols and data schemas. However, implementing this solution demands a large amount of time, both for transferring fragments of data and for processing the data within the portal. With the growth of data digitization in institutions, this scenario tends to become increasingly exacerbated, making it hard to keep the records on the portals up to date. As an original contribution, this research proposes analysing the data replication process in order to evaluate the performance of portals. The Inter-American Biodiversity Information Network (IABIN) biodiversity data portal of pollinators was used as a case study, since it supports both situations: conventional replication of specimen occurrence records and of the interactions between them. With the results of this research, it is possible to simulate a situation before its implementation, thus predicting the performance of replication operations. Additionally, these results may contribute to future improvements to this process, in order to decrease the time required to make the data available in portals. © Rinton Press.
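As a rough illustration of the kind of performance prediction the abstract describes (a back-of-the-envelope sketch, not the model used in that study), the total replication time can be estimated from per-record transfer and portal-side processing rates; all numbers below are made-up illustration values.

```python
# Hypothetical sketch: estimate how long a full replication run would take.
# Rates, batch sizes and record counts are illustrative only.

def replication_time(n_records, transfer_s_per_record, processing_s_per_record,
                     batch_size=1000, batch_overhead_s=2.0):
    """Return estimated wall-clock seconds to replicate n_records."""
    n_batches = -(-n_records // batch_size)  # ceiling division
    per_record = transfer_s_per_record + processing_s_per_record
    return n_records * per_record + n_batches * batch_overhead_s

# Example: 2 million occurrence records at 3 ms transfer + 5 ms processing each.
hours = replication_time(2_000_000, 0.003, 0.005) / 3600
print(f"estimated replication time: {hours:.1f} h")
```

Varying the per-record processing cost in such a model is one way to simulate a replication scenario before implementing it.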
Abstract:
The study aims to evaluate the sustainability process in agricultural systems, using as parameters the energy and economic flows among their compartments, whose dynamics are governed by the agrodiversity of an agrarian environment in transition. Fieldwork was carried out in the municipality of Igarapé-Açu, in the northeast of Pará. Initially, a survey was conducted in 60 production units, followed by the application of a questionnaire in 25 units and by systemic modeling of 11 units, supported by a contextual analysis and by interviews with local productive agents directly or indirectly linked to agricultural interests. The analyses revealed mechanisms that characterize the distinct logics guiding the ecological and economic processes in the context of local/regional agrodiversity. The phenomena occurring in the material/energy domain show no direct correlation with phenomena of an economic nature; there is not even an analogy, since the dimensionless parameters diverge in both value and pattern. From the point of view of the ecological/energy dynamics, the most relevant finding concerned the farmers' degree of dependence on natural resources, parameterized through their depredation coefficient (φd). The model developed makes it possible to describe the structural dynamics of these systems; to estimate their respective levels of dependence on resources, as well as the time and area required to obtain opportunity costs equivalent to those of the agricultural production processes; and, finally, to identify factors limiting the agrarian transition in the municipality. These ecological/economic parameters, defined here for the first time, can be regarded as operational tools for planning sustainable agrarian development.
Abstract:
Traceability is a concept that arose from the need to monitor production processes; it is usually applied in sectors related to food production or to activities involving some kind of direct risk to people. Agribusiness in the cotton industry does not have a comprehensive infrastructure covering all stages of the production process. Mapping and defining the data needed to enable product traceability amounts to assigning responsibilities to everyone involved in production; the collection of aggregate data on cotton production is done at specific, pre-defined stages, from the choice of the variety through to processing. The scope of this article specifically addresses the production of cotton lint. The paper presents a proposal based on service-oriented architecture (SOA) for data integration processes in the cotton industry; this proposal provides support for the implementation of platform-independent solutions.
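A minimal, hypothetical sketch of the kind of platform-independent service contract such an SOA proposal implies might look as follows; the interface, stages and field names are illustrative assumptions, not the interface defined in the paper.

```python
# Hypothetical service contract for recording traceability events along the
# cotton lint production chain; not the interface defined in the paper.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class TraceEvent:
    batch_id: str          # identifies a lot of cotton lint
    stage: str             # e.g. "variety_selection", "harvest", "ginning"
    actor: str             # producer, gin, transporter, ...
    attributes: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


class TraceabilityService:
    """In-memory stand-in for a SOAP/REST traceability service."""

    def __init__(self):
        self._events = []

    def record_event(self, event: TraceEvent) -> str:
        self._events.append(event)
        return json.dumps({"status": "accepted", "batch_id": event.batch_id})

    def history(self, batch_id: str) -> list:
        return [asdict(e) for e in self._events if e.batch_id == batch_id]


service = TraceabilityService()
service.record_event(TraceEvent("BR-2024-0001", "ginning", "Gin A",
                                {"lint_yield_pct": 38.5}))
print(service.history("BR-2024-0001"))
```

In an actual SOA deployment the same contract would be exposed over a transport such as SOAP or REST, which is what makes the responsibilities of each production stage explicit and platform independent.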
Abstract:
Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make it feasible we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that has gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information.

Results: We have implemented an extension of Chado – the Clinical Module – to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as to other applications.

Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments accomplished on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
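The Entity-Attribute-Value pattern on which the Clinical Module is based can be sketched in a few lines of SQL issued from Python. This is a generic EAV illustration, not the actual Chado schema; the table and attribute names are hypothetical, and plain strings stand in for the ontology terms a real instance would reference.

```python
# Minimal EAV illustration with the standard-library sqlite3 module.
# In the real Clinical Module, attributes map to ontology terms; here plain
# strings stand in for those terms.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE clinical_eav (
                   patient_id TEXT,
                   attribute  TEXT,   -- ideally an ontology term identifier
                   value      TEXT)""")

rows = [("P001", "diagnosis", "larynx tumor"),
        ("P001", "smoking_status", "former smoker"),
        ("P001", "age_at_diagnosis", "62")]
con.executemany("INSERT INTO clinical_eav VALUES (?, ?, ?)", rows)

# A simple ETL-style read: pivot one patient's attributes into a record.
record = dict(con.execute(
    "SELECT attribute, value FROM clinical_eav WHERE patient_id = ?",
    ("P001",)).fetchall())
print(record)   # {'diagnosis': 'larynx tumor', ...}
```

The appeal of EAV for clinical data is visible even in this toy example: new attributes can be added per patient without altering the table definition, which is what gives the framework its flexibility.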
Abstract:
When compared to our Solar System, many exoplanet systems exhibit quite unusual planet configurations; some of these are hot Jupiters, which orbit their central stars with periods of a few days, while others are resonant systems composed of two or more planets with commensurable orbital periods. It has been suggested that these configurations can be the result of migration processes originating in tidal interactions of the planets with disks and central stars. The process known as planet migration occurs due to dissipative forces which affect the planetary semi-major axes and cause the planets to move towards, or away from, the central star. In this talk, we present possible signatures of planet migration in the distribution of hot Jupiters and resonant exoplanet pairs. For this task, we develop a semi-analytical model to describe the evolution of the migrating planetary pair, based on the fundamental concepts of conservative and dissipative dynamics of the three-body problem. Our approach is based on an analysis of the energy and orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative force needs to be invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the stationary solutions of the conservative problem (Birkhoff, Dynamical Systems, 1966). The ultimate convergence to, and the evolution of the system along, one of these modes of motion is determined uniquely by the condition that the dissipation rate is sufficiently smaller than the proper frequencies of the system. We show that it is possible to reconstruct the starting configurations and migration history of the systems on the basis of their final states, and consequently to constrain the parameters of the physical processes involved.
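The energy and angular-momentum bookkeeping described in this abstract rests on the standard expressions for a coplanar two-planet system around a star of mass $M_\ast$; these are textbook relations, not the talk's full semi-analytical model.

```latex
E = -\frac{G M_\ast m_1}{2 a_1} - \frac{G M_\ast m_2}{2 a_2},
\qquad
L = m_1 \sqrt{G M_\ast a_1 \left(1 - e_1^2\right)}
  + m_2 \sqrt{G M_\ast a_2 \left(1 - e_2^2\right)} .
```

A dissipative interaction with an external medium slowly changes $E$ and $L$ and thus drifts the semi-major axes and eccentricities; when the dissipation rate is much smaller than the proper frequencies of the system, the pair tracks the stationary (resonant) solutions of the conservative problem, which is the weak, slow migration regime the abstract refers to.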
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package, the problem of the aliasing effect in the generation of low resolution maps, and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code called BolPol, with that of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code has instead been applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
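For the pseudo-Cl side of the comparison, the basic pipeline (smooth, degrade to low resolution to limit aliasing, extract the APS) can be sketched with healpy. This is a generic illustration with made-up resolutions and smoothing scale, assuming healpy is installed; it is not the BolPol or Cromaster code.

```python
# Generic pseudo-Cl sketch with healpy; parameters are illustrative only.
import numpy as np
import healpy as hp

nside_in, nside_out = 256, 16
lmax_in = 2 * nside_in

# Simulate a CMB-like map from a flat fiducial spectrum.
cl_fid = np.ones(lmax_in + 1) * 1e-3
cmb_map = hp.synfast(cl_fid, nside_in, new=True)

# Smooth before degrading: the low-pass filter suppresses the aliasing of
# small-scale power into the low-resolution map mentioned in the abstract.
smoothed = hp.smoothing(cmb_map, fwhm=np.radians(10.0))
low_res = hp.ud_grade(smoothed, nside_out)

# Pseudo-Cl estimate of the angular power spectrum at large scales.
cl_hat = hp.anafast(low_res, lmax=3 * nside_out - 1)
print(cl_hat[:5])
```

A QML estimator such as BolPol instead builds the full pixel-pixel covariance at low resolution, which is why it is optimal at large angular scales but far more expensive than anafast-style pseudo-Cl extraction.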
Abstract:
As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants can be better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as the result of a series of analytical and experimental studies. It is expected that the crowd models would yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might be inaccurate in representing the impact of the occupants on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. During this study, SAP2000 was used to create the finite element models and to run the analysis; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the impact on the structure of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
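The "occupant as an additional degree of freedom" idea can be illustrated with the usual coupled eigenvalue problem. The sketch below is a generic two-degree-of-freedom example with invented mass and stiffness values; it is not the JWG crowd model parameters or the Bucknell test structure.

```python
# Couple an SDOF occupant (mass m_o, stiffness k_o) to an SDOF structure
# (mass m_s, stiffness k_s) and compare natural frequencies. All values
# are purely illustrative.
import numpy as np
from scipy.linalg import eigh

m_s, k_s = 20_000.0, 5.0e7      # structure: about 7.96 Hz when empty
m_o, k_o = 1_000.0, 2.0e6       # crowd modeled as an attached SDOF

M = np.diag([m_s, m_o])
K = np.array([[k_s + k_o, -k_o],
              [-k_o,       k_o]])

eigvals, _ = eigh(K, M)                    # generalized eigenvalue problem
freqs = np.sqrt(eigvals) / (2 * np.pi)     # natural frequencies in Hz

print(f"empty structure: {np.sqrt(k_s / m_s) / (2 * np.pi):.2f} Hz")
print(f"occupied structure modes: {freqs.round(2)} Hz")
```

Replacing the occupant spring-mass branch with a rigid added mass collapses the problem back to a single mode, which is why the additional-mass and additional-degree-of-freedom assumptions predict different dynamic properties for the occupied structure.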
Abstract:
Copper (Cu) and its alloys are used extensively in domestic and industrial applications. Cu is also an essential element in mammalian nutrition. Since both copper deficiency and copper excess produce adverse health effects, the dose-response curve is U-shaped, although its precise form has not yet been well characterized. Many animal and human studies have been conducted on copper, providing a rich database from which data suitable for modeling the dose-response relationship for copper may be extracted. Possible dose-response modeling strategies are considered in this review, including those based on the benchmark dose and categorical regression. The usefulness of biologically based dose-response modeling techniques in understanding copper toxicity is difficult to assess at this time, since the mechanisms underlying copper-induced toxicity have yet to be fully elucidated. A dose-response modeling strategy for copper toxicity, associated with both deficiency and excess, is proposed. This modeling strategy was applied to multiple studies of copper-induced toxicity, standardized with respect to the severity of adverse health outcomes and selected on the basis of criteria reflecting the quality and relevance of individual studies. The use of a comprehensive database on copper-induced toxicity is essential for dose-response modeling, since there is insufficient information in any single study to adequately characterize copper dose-response relationships. The dose-response modeling strategy envisioned here is designed to determine whether the existing toxicity data for copper excess or deficiency may be effectively utilized in defining the limits of the homeostatic range in humans and other species. By considering alternative techniques for determining a point of departure and for low-dose extrapolation (including categorical regression, the benchmark dose, and identification of observed no-effect levels), this strategy will identify which techniques are most suitable for this purpose. This analysis also serves to identify areas in which additional data are needed to better define the characteristics of dose-response relationships for copper-induced toxicity in relation to excess or deficiency.
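For reference, the benchmark-dose approach mentioned in this abstract is conventionally defined, for quantal data and in terms of extra risk, as the dose at which the extra risk over background reaches a chosen benchmark response; this is the generic definition, not a copper-specific model.

```latex
\frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} = \mathrm{BMR},
\qquad \text{e.g. } \mathrm{BMR} = 0.10,
```

where $P(d)$ is the fitted dose-response model; the lower confidence limit on the dose satisfying this equation (the BMDL) is then typically used as the point of departure. For a U-shaped essential element such as copper, a benchmark dose would have to be derived separately on the deficiency and excess arms of the curve.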
Abstract:
The single-electron transistor (SET) is one of the best candidates for future nanoelectronic circuits because of its ultra-low power consumption, small size and unique functionality. SET devices operate on the principle of Coulomb blockade, which is more prominent at dimensions of a few nanometers. Typically, the SET device consists of two capacitively coupled ultra-small tunnel junctions with a nano-island between them. In order to observe Coulomb blockade effects in a SET device, the charging energy of the device has to be greater than the thermal energy. This condition limits the operation of most existing SET devices to cryogenic temperatures. Room-temperature operation of SET devices requires sub-10 nm nano-islands, due to the inverse dependence of the charging energy on the radius of the conducting nano-island. Fabrication of sub-10 nm structures using lithography processes is still a technological challenge. In the present investigation, Focused Ion Beam (FIB) based etch and deposition technology is used to fabricate single-electron transistor devices operating at room temperature. The SET device incorporates an array of tungsten nano-islands with an average diameter of 8 nm. The fabricated devices are characterized at room temperature, and clear Coulomb blockade and Coulomb oscillations are observed. An improvement in the resolution limitation of the FIB etching process is demonstrated by optimizing the thickness of the active layer. SET devices with structural and topological variations are developed to explore their impact on the behavior of the device. The threshold voltage of the device was reduced to ~500 mV by reducing the source-drain gap of the device to 17 nm. Vertical source and drain terminals are fabricated to realize a single-dot-based SET device. A unique process flow is developed to fabricate Si-dot-based SET devices for better gate controllability of the device characteristics. The device parameters of the fabricated devices are extracted using a conductance model. Finally, the characteristics of these devices are validated against simulated data from theoretical modeling.
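The room-temperature condition stated above (charging energy well above the thermal energy, and its inverse dependence on island radius) can be checked with a quick order-of-magnitude calculation. The isolated-sphere self-capacitance used below is a simplifying assumption that ignores junction and gate capacitances, so the numbers are only indicative.

```python
# Order-of-magnitude check of the Coulomb-blockade condition E_C >> k_B * T
# for an ~8 nm island, modeled crudely as an isolated conducting sphere.
import math

e = 1.602176634e-19        # elementary charge, C
k_B = 1.380649e-23         # Boltzmann constant, J/K
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

radius = 4e-9              # 8 nm diameter island -> 4 nm radius
C_island = 4 * math.pi * eps0 * radius          # self-capacitance of a sphere
E_C = e**2 / (2 * C_island)                     # charging energy, J

T = 300.0
print(f"C   = {C_island:.2e} F")
print(f"E_C = {E_C / e * 1000:.0f} meV  vs  k_B*T = {k_B * T / e * 1000:.0f} meV")
print(f"E_C / (k_B*T) = {E_C / (k_B * T):.1f}")  # ratio well above 1 supports 300 K operation
```

Under this simple model the ratio comes out around 7, and halving the island radius roughly doubles it, which is the inverse radius dependence the abstract invokes to motivate sub-10 nm islands.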
Abstract:
In interpreting our research findings, obtained with qualitative methods, we took into account the theoretical and conceptual frameworks of transnational space, transnationalism and ethnic migration. The migration processes we studied take place in a transnational space, and we encountered several examples of the migrant life situations and practices described in the transnational literature – simultaneous, though varying in intensity, attachments and ties to localities located in different nation states. Following Ludger Pries, we regarded transnational migration and the figure of the transnational migrant as an ideal type which individual migrant paths and situations only approximate; on the basis of our empirical results we can say that genuinely pluri-local practices – that is, simultaneous, intensive and lasting attachment to two places and the related practices – characterize only a minority of migrants and certain phases of migratory careers. In the migration processes studied, ethnicity plays a decisive role both as a structural factor and as a frame for interpreting migrant experiences. All three explanatory models of ethnic migration discussed in the literature – return to the mother country, migration for economic reasons, and migration motivated by grievances suffered in a minority situation – are suitable for analysing the causes that trigger and drive migration and for interpreting migrant narratives, but we cannot claim that any of them has exclusive validity. Like other researchers, we find Rogers Brubaker's definition the most fruitful: using a broad interpretation of ethnic migration, he regards as ethnic migration every migratory process in which ethnicity, as cultural and symbolic capital, plays a regulating role. This special issue of Tér és Társadalom presents some results of an international research project carried out by researchers from Switzerland, Hungary and Serbia between 2010 and 2012. The topic of the research was "Integrating (Trans-)national Migrants in Transition States" (TRANSMIG) and it was financed by the Swiss National Science Foundation (SNSF). The research aimed to explore and interpret migration flows from the Vojvodina (Serbia) to Hungary and from ex-Yugoslav republics to the Vojvodina. In the first period of the last twenty years, the wars which contributed to the disintegration of Yugoslavia and the formation of new national states caused migration flows. After the change of the millennium, the educational migration of Vojvodina Hungarian youth can be considered the most important migratory movement from the Vojvodina to Hungary. Labour (economic) migration also occurs, but this cannot be understood as a one-way movement, since in the Hungarian–Serbian border zone migrants from the Vojvodina who have already resettled in Hungary commute back to the Vojvodina. While interpreting the qualitative research data, the theoretical frameworks and approaches of transnational space, transnationalism and ethnic migration were taken into consideration. The migration movement in question occurs in a transnational social space where migrants are in constant motion. By their movements and actions that space is continually recreated. With Ludger Pries we see the transnational migrant as an ideal type to whom individual migratory movements and positions only approximate.
Based on our empirical results we can conclude that real pluri-local, intensive and long-lasting bonding to two places at the same time, and the related practices, characterise only a minority of migrants and certain sections of migratory careers. In the migration processes studied, ethnicity is needed both as a structural factor and as a frame of interpretation for approaching migrant experiences. All three explanatory models of ethnic migration – return migration, economic migration, and migration motivated by grievances suffered in a minority situation – are suitable for analysing the reasons that initiated migration and kept it in motion, and they are helpful in interpreting migrant narratives. However, none of these reasons can claim exclusive validity. Agreeing with other researchers, we find Rogers Brubaker's definition the most useful: ethnic migration should be understood in a broad sense, and every migration in which ethnicity plays a dominant role as cultural and symbolic capital can be considered "ethnically" motivated.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; these data are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements do in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
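One concrete, deliberately simple way to turn a raw time-series data element into a latent candidate feature of the kind described above is to summarize a fixed window by its linear trend. The window length, sampling interval and variable below are illustrative assumptions, not the choices made in the manuscripts.

```python
# Illustrative latent-feature extraction: summarize the last N minutes of a
# vital sign by the slope of a least-squares line (a simple "trend analysis").
import numpy as np

def trend_feature(values, minutes_per_sample=1.0):
    """Return the slope (units per minute) of a linear fit to the window."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, np.asarray(values, dtype=float), 1)
    return slope

# Example: heart rate over a hypothetical 10-minute window (one value/minute).
heart_rate = [128, 127, 126, 124, 121, 118, 114, 110, 105, 99]
print(f"heart-rate trend: {trend_feature(heart_rate):+.1f} bpm/min")
```

The slope is a single number per window, so it can sit alongside conventional one-value-per-variable features in the candidate feature set while still encoding deterioration over time.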
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% with the inclusion of the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
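The preprocessing issues listed above (anchoring observations to a reference time, imputing and reducing the series to a fixed structure) can be sketched with pandas. The timestamps, the 1-minute grid, the 15-minute window and the forward-fill imputation below are assumptions for illustration only, not the manuscripts' actual pipeline.

```python
# Hypothetical preprocessing sketch: align a vital-sign series to a reference
# time, resample it onto a fixed 1-minute grid, and impute gaps.
import pandas as pd

# Irregularly sampled observations (e.g., systolic blood pressure in mmHg).
obs = pd.Series(
    [92.0, 90.0, 85.0, 78.0],
    index=pd.to_datetime(["2010-06-01 09:58", "2010-06-01 10:01",
                          "2010-06-01 10:04", "2010-06-01 10:08"]),
)

reference_time = pd.Timestamp("2010-06-01 10:10")   # e.g., time of the event
window = obs[reference_time - pd.Timedelta(minutes=15):reference_time]

# Reduce to a fixed structure: one value per minute, gaps forward-filled.
grid = window.resample("1min").mean().ffill()
minutes_before_event = (reference_time - grid.index).total_seconds() / 60
print(pd.DataFrame({"minutes_before_event": minutes_before_event,
                    "sbp": grid.values}))
```

Fixing the grid relative to the reference time is what lets every case and control contribute the same number of candidate features, after which normalization can be applied per variable family rather than per individual variable instance.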