861 results for Multi Domain Information Model
Abstract:
Simulation techniques are almost indispensable in the analysis of complex systems. Material flows and the related information-flow processes in logistics often possess such complexity. Further problems arise as the processes change over time and pose a Big Data problem as well. To cope with these issues, adaptive simulations are used more and more frequently. This paper presents a few relevant advanced simulation models and introduces a novel model structure that unifies the modelling of geometrical relations and time processes. In this way, the process structures and their geometric relations can be handled in an understandable and transparent manner. The capabilities and applicability of the model are also demonstrated via an example.
Abstract:
The performance of the reanalysis-driven Canadian Regional Climate Model, version 5 (CRCM5) in reproducing the present climate over the North American COordinated Regional climate Downscaling EXperiment domain for the 1989–2008 period has been assessed against several observation-based datasets. The model satisfactorily reproduces the near-surface temperature and precipitation characteristics over most of North America. Coastal and mountainous zones remain problematic: a cold bias (2–6 °C) prevails over the Rocky Mountains in summer and all year round over Mexico, and winter precipitation in mountainous coastal regions is overestimated. The precipitation patterns related to the North American Monsoon are well reproduced, except at its northern limit. The spatial and temporal structure of the Great Plains Low-Level Jet is also well reproduced by the model; however, the night-time precipitation maximum in the jet area is underestimated. The performance of CRCM5 was further assessed against earlier CRCM versions and other RCMs. CRCM5 is shown to be substantially improved over CRCM3 and CRCM4 in terms of seasonal mean statistics, and to be comparable to other modern RCMs.
Abstract:
27-channel EEG potential map series were recorded from 12 normal subjects with eyes closed and eyes open. Intracerebral dipole model source locations were computed in the frequency domain. Eye opening (visual input) caused a centralization (convergence and elevation) of the source locations of the seven frequency bands, indicative of generalized activity; in particular, there was a clear anteriorization of the α-2 (10.5–12 Hz) and β-2 (18.5–21 Hz) sources (the α-2 source also shifted to the left). The complexity of the map series' trajectories in state space (assessed by Global Dimensional Complexity and Global OMEGA Complexity) increased significantly with eye opening, indicative of more independent, parallel, active processes. In contrast to PET and fMRI findings, these results suggest that brain activity is more distributed and independent during visual input than after eye closing (when it is more localized and more posterior).
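For readers unfamiliar with the state-space complexity measures named above: OMEGA-style complexity is commonly computed as the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix of the map series. The following is a minimal sketch under that assumption; the data layout and normalization details are illustrative, not the authors' exact procedure.

```python
import numpy as np

def omega_complexity(maps):
    """Omega-style complexity of an EEG map series.

    maps: array of shape (n_timepoints, n_channels) holding
    average-referenced potential maps. Returns a value between 1
    (one dominant spatial component) and n_channels (all components
    equally strong, i.e., maximally independent parallel processes).
    """
    X = maps - maps.mean(axis=1, keepdims=True)  # re-reference to average
    cov = X.T @ X / len(X)                       # spatial covariance
    eig = np.linalg.eigvalsh(cov)
    eig = eig[eig > 1e-12]
    lam = eig / eig.sum()                        # normalized eigenvalues
    return float(np.exp(-(lam * np.log(lam)).sum()))

# Hypothetical example: 1000 time points of 27-channel random maps.
rng = np.random.default_rng(0)
print(omega_complexity(rng.standard_normal((1000, 27))))
```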
Abstract:
AIM: As technological interventions for treating acute myocardial infarction (MI) improve, post-ischemic heart failure increasingly threatens patient health. The aim of the current study was to test whether FADD could be a potential target of gene therapy in the treatment of heart failure. METHODS: Cardiomyocyte-specific FADD knockout mice and non-transgenic littermate controls (NLC) were subjected to 30 minutes of myocardial ischemia followed by 7 days of reperfusion, or to 6 weeks of permanent myocardial ischemia via ligation of the left anterior descending coronary artery. Cardiac function was evaluated by echocardiography and left ventricular (LV) catheterization, and cardiomyocyte death was measured by Evans blue-TTC staining, TUNEL staining, and caspase-3, -8, and -9 activities. In vitro, H9C2 cells transfected with either scrambled siRNA or FADD siRNA were stressed with chelerythrine for 30 min, and cleaved caspase-3 was assessed. RESULTS: FADD expression was significantly decreased in FADD knockout mice compared to NLC. Ischemia/reperfusion (I/R) upregulated FADD expression in NLC mice, but not in FADD knockout mice, at the early time points. FADD deletion significantly attenuated I/R-induced cardiac dysfunction, decreased myocardial necrosis, and inhibited cardiomyocyte apoptosis. Furthermore, in the 6-week long-term permanent ischemia model, FADD deletion significantly reduced the infarct size (from 41.20 ± 3.90% in NLC to 26.83 ± 4.17%), attenuated myocardial remodeling, and improved cardiac function and survival. In vitro, FADD knockdown significantly reduced the chelerythrine-induced level of cleaved caspase-3. CONCLUSION: Taken together, our results suggest that FADD plays a critical role in post-ischemic heart failure and that its inhibition retards heart failure progression. Our data support further investigation of FADD as a potential target for genetic manipulation in the treatment of heart failure.
Abstract:
In order to overcome the limitations of the linear-quadratic model and to include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations that are characterized by the number of radiation-induced damages (hits). Cells shift downward along the chain by collecting hits and upward through a repair process. The repair process is governed by a repair probability, which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. For the parameters used, populations with up to 4-5 hits are relevant for the calculation of survival. The model intuitively describes the mathematical behaviour of apoptotic and non-apoptotic cell death. The linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose-rate dependencies are described correctly. The model covers the time-gap dependence of the synergistic cell killing due to the combined application of heat and radiation, but further validation of the proposed approach against experimental data is needed. The model nevertheless offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the state variables.
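To make the chain structure concrete, here is a minimal numerical sketch of such a hit chain, assuming (hypothetically) a hit rate proportional to dose and a constant repair probability per dose step; the rate constants and the five-hit truncation are illustrative choices, not the paper's fitted parameters.

```python
import numpy as np

def hit_chain_survival(dose, hit_rate=1.0, repair_rate=0.6,
                       max_hits=5, steps=1000):
    """Evolve p[k], the fraction of cells carrying k hits.

    Cells move down the chain (k -> k+1) by collecting hits and back
    up (k -> k-1) via repair; cells reaching max_hits count as dead.
    Returns the surviving fraction after the full dose is delivered.
    """
    p = np.zeros(max_hits + 1)
    p[0] = 1.0
    dd = dose / steps                      # dose increment per step
    for _ in range(steps):
        hit = hit_rate * dd * p[:-1]       # k -> k+1 fluxes
        rep = repair_rate * dd * p[1:-1]   # k -> k-1 fluxes (dead cells do not repair)
        p[:-1] -= hit
        p[1:] += hit
        p[1:-1] -= rep
        p[:-2] += rep
    return p[:-1].sum()

for d in (0, 2, 4, 8):
    print(f"dose {d} Gy -> surviving fraction {hit_chain_survival(d):.3f}")
```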
Abstract:
Currently, more than half of Electronic Health Record (EHR) projects fail. Most of these failures are due not to flawed technology, but to the lack of systematic consideration of human issues. Among the barriers to EHR adoption, function mismatching among users, activities, and systems is a major area that has not been systematically addressed from a human-centered perspective. A theoretical framework called the Functional Framework was developed for identifying and reducing functional discrepancies among users, activities, and systems. The Functional Framework is composed of three models: the User Model, the Designer Model, and the Activity Model. The User Model was developed by conducting a survey (N = 32) that identified the functions needed and desired from the user's perspective. The Designer Model was developed by conducting a systematic review of an Electronic Dental Record (EDR) and its functions. The Activity Model was developed using an ethnographic method called shadowing, in which EDR users (5 dentists, 5 dental assistants, 5 administrative personnel) were followed quietly and observed during their activities. These three models were combined to form a unified model. From the unified model, a work domain ontology was developed by asking users to rate the functions in the unified model (190 in total) along the dimensions of frequency and criticality in a survey. The functional discrepancies, as indicated by the regions of the Venn diagram formed by the three models, were consistent with the survey results, especially with user satisfaction. The survey for the Functional Framework also indicated a preference for one system over the other (R = 0.895). The results of this project show that the Functional Framework provides a systematic method for identifying, evaluating, and reducing functional discrepancies among users, systems, and activities. Limitations and the generalizability of the Functional Framework are discussed.
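The Venn-diagram regions referred to above are simply the set intersections of the three function inventories. A toy sketch with hypothetical function names (the actual 190 functions are not listed in this abstract) shows how the discrepancy regions fall out:

```python
# Hypothetical function inventories for the three models.
user_model = {"charting", "e-prescribing", "imaging", "scheduling"}
designer_model = {"charting", "billing", "imaging", "audit-log"}
activity_model = {"charting", "scheduling", "imaging", "phone-triage"}

# Each function falls into one of the seven Venn regions, given by
# which of the three models contain it.
for f in sorted(user_model | designer_model | activity_model):
    print(f, (f in user_model, f in designer_model, f in activity_model))

# Example discrepancy region: functions users need and actually perform
# that the system design does not provide.
print((user_model & activity_model) - designer_model)  # -> {'scheduling'}
```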
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as found on the decametric scale. The experimental flow field was modelled on the basis of a 2D streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual-porosity medium approach, which is linked to the flow model through the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of only a few centimetres. It is concluded that neither the uncertainty regarding the length of individual fractures nor the detailed geometry of the network along the flowpath between injection and extraction boreholes is critical, because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information, and therefore considering limited matrix diffusion into a thin fault-gouge horizon, resulted in a good fit to the experiment. Fresh granite, on the other hand, was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours to days), their volume is very small, and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption Kd values are more than one order of magnitude smaller than in fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced.
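The matrix-diffusion signature in the trailing edge mentioned above has a well-known closed-form illustration for a single fracture with plug flow, a step injection, no longitudinal dispersion, and an unlimited matrix (in the spirit of the classical Neretnieks/Tang solutions). The sketch below uses that erfc form with invented parameter values; neither the grouping constant nor the numbers are TRUE-1 calibration results.

```python
import numpy as np
from scipy.special import erfc

# Illustrative (hypothetical) parameters, SI units.
L = 5.0       # travel distance along the fracture [m]
v = 1e-4      # water velocity in the fracture [m/s]
b = 5e-4      # fracture half-aperture [m]
theta = 0.2   # matrix (fault gouge) porosity [-]
De = 1e-11    # effective matrix diffusion coefficient [m^2/s]
R = 10.0      # matrix retardation factor [-]

tw = L / v                                     # water travel time
kappa = theta * np.sqrt(R * De) * L / (b * v)  # matrix-diffusion group

# Relative step-injection breakthrough C/C0 as a function of time;
# the slow erfc rise and long tail are the matrix-diffusion signature.
t = tw * np.array([1.1, 2, 5, 10, 25, 50])
for ti, ci in zip(t, erfc(kappa / (2.0 * np.sqrt(t - tw)))):
    print(f"t/tw = {ti/tw:5.1f}   C/C0 = {ci:.3f}")
```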
Abstract:
An Ensemble Kalman Filter is applied to assimilate observed tracer fields in various combinations into the Bern3D ocean model. Each tracer combination yields a set of optimal transport parameter values that are used in projections with prescribed CO2 stabilization pathways. The assimilation of temperature and salinity fields yields too vigorous a ventilation of the thermocline and the deep ocean, whereas the inclusion of CFC-11 and radiocarbon improves the representation of physical and biogeochemical tracers and of ventilation time scales. Projected peak uptake rates and the cumulative uptake of CO2 by the ocean are around 20% lower for the parameters determined with CFC-11 and radiocarbon as additional targets than for those determined with salinity and temperature only. Higher surface temperature changes are simulated in the Greenland–Norwegian–Iceland Sea and in the Southern Ocean when CFC-11 is included in the Ensemble Kalman model tuning. These findings highlight the importance of ocean transport calibration for the design of near-term and long-term CO2 emission mitigation strategies and for climate projections.
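For reference, the analysis step at the core of an Ensemble Kalman Filter (here used to tune transport parameters against tracer observations) can be sketched as follows. This is the generic perturbed-observation textbook variant; the shapes, the observation operator, and all numbers are placeholders, not Bern3D specifics.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Perturbed-observation EnKF analysis step.

    X: (n_state, n_ens) forecast ensemble (e.g., transport parameters
       augmented with simulated tracer fields)
    y: (n_obs,) observed tracers;  H: (n_obs, n_state) observation operator
    R: (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, n_ens).T             # perturbed observations
    return X + K @ (Y - H @ X)                    # updated ensemble

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 50))   # 4 state variables, 50 members
H = np.eye(2, 4)                   # observe the first two variables
print(enkf_analysis(X, np.array([0.5, -0.2]), H, 0.1 * np.eye(2), rng).shape)
```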
Abstract:
Decadal-to-century scale trends for a range of marine environmental variables in the upper mesopelagic layer (UML, 100–600 m) are investigated using results from seven Earth System Models forced by a high greenhouse gas emission scenario. The models as a class represent the observation-based distributions of oxygen (O2) and carbon dioxide (CO2), although major mismatches between observation-based and simulated values remain for individual models. By the year 2100, all models project an increase in SST of between 2 °C and 3 °C and a decrease in the pH and in the saturation state of water with respect to calcium carbonate minerals in the UML. A decrease in the total ocean inventory of dissolved oxygen by 2% to 4% is projected across the range of models. Projected O2 changes in the UML show a complex pattern, with both increasing and decreasing trends, reflecting the subtle balance of competing factors such as circulation, production, remineralization, and temperature changes. Projected changes in the total volume of hypoxic and suboxic waters remain relatively small in all models. A widespread increase of CO2 in the UML is projected. The median of the CO2 distribution between 100 and 600 m shifts from 0.1–0.2 mol m−3 in year 1990 to 0.2–0.4 mol m−3 in year 2100, primarily as a result of the invasion of anthropogenic carbon from the atmosphere. The co-occurrence of changes in a range of environmental variables indicates the need to further investigate their synergistic impacts on marine ecosystems and Earth System feedbacks.
Abstract:
The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and the Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades, followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase of 0.20 ± 0.12 °C within the first twenty years; thereafter, and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential (AGWP), given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10−15 yr W m−2 per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10−15 yr W m−2 per kg-CO2. Estimates for the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment Reports and our multi-model best estimate all agree to within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions than for present day, and lower for smaller pulses than for larger pulses. In contrast, the responses in temperature, sea level, and ocean heat content are less sensitive to these choices. Although choices of pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2, and hence the GWP, is the time horizon.
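The AGWP defined above is just a time integral. As an illustration, the sketch below integrates a multi-exponential impulse response function of the usual form IRF(t) = a0 + Σ ai exp(−t/τi) against a constant radiative efficiency; the coefficients and the efficiency value are placeholders with roughly the right shape and magnitude, not the paper's fitted values.

```python
import numpy as np

# Placeholder IRF coefficients (illustrative shape, not the paper's fit):
a0, a = 0.22, [0.22, 0.28, 0.28]   # airborne fractions of the modes
tau = [394.0, 36.5, 4.3]           # decay time scales [yr]
re = 1.7e-15                       # radiative efficiency [W m-2 per kg-CO2]

def irf(t):
    """Fraction of a CO2 pulse remaining airborne after t years."""
    return a0 + sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

# AGWP_100 = time-integrated CO2 response over 100 yr x radiative efficiency.
t = np.linspace(0.0, 100.0, 2001)
f = irf(t)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule [yr]
print(f"AGWP_100 ~ {re * integral:.3e} yr W m-2 per kg-CO2")
```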
Abstract:
The family of membrane proteins called glutamate receptors plays an important role in the central nervous system by mediating signaling between neurons. Glutamate receptors are involved in the elaborate game that nerve cells play with each other in order to control movement, memory, and learning. Neurons achieve this communication by rapidly converting electrical signals into chemical signals and then converting them back into electrical signals. To propagate an electrical impulse, neurons in the brain launch bursts of neurotransmitter molecules such as glutamate at the junction between neurons, called the synapse. Glutamate receptors are lodged in the membrane of the post-synaptic neuron; they receive the burst of neurotransmitters and respond by fielding the neurotransmitters and opening ion channels. Glutamate receptors have been implicated in a number of neuropathologies such as ischemia, stroke, and amyotrophic lateral sclerosis. Specifically, the NMDA subtype of glutamate receptors has been linked to the onset of Alzheimer's disease and the subsequent degeneration of neuronal cells. While crystal structures of the AMPA and kainate subtypes of glutamate receptors have provided valuable information regarding the assembly and mechanism of activation, little is known about the NMDA receptors; even the basic question of receptor assembly remains unanswered. Therefore, to gain a clear understanding of how the receptors are assembled and how agonist binding is translated into channel opening, I have used a technique called Luminescence Resonance Energy Transfer (LRET). LRET offers the unique advantage of tracking large-scale conformational changes associated with receptor activation and desensitization. In this dissertation, LRET, in combination with biochemical and electrophysiological studies, was applied to the NMDA receptor to draw a correlation between structure and function. The NMDA receptor subunits GluN1 and GluN2A were modified so that fluorophores could be introduced at specific sites to determine their pattern of assembly. The results indicated that the GluN1 subunits assemble across from each other, in a diagonal manner, to form a functional receptor. Once the subunit arrangement was established, it was used as a model to further examine the mechanism of activation in this subtype of glutamate receptor. Using LRET, the correlation between cleft closure and activation was tested for both the GluN1 and GluN2A subunits of the NMDA receptor in response to agonists of varying efficacies. These investigations revealed that cleft closure plays a major role in the mechanism of activation of the NMDA receptor, similar to the AMPA and kainate subtypes, suggesting that the mechanism of activation is conserved across the different subtypes of glutamate receptors.
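The distance readout behind LRET rests on two standard relations: the transfer efficiency obtained from donor lifetimes, E = 1 − τDA/τD, and the Förster distance dependence E = 1/(1 + (r/R0)^6). Here is a small sketch with invented lifetimes and Förster radius (the actual probe pairs and distances are in the dissertation, not this abstract):

```python
def lret_distance(tau_da, tau_d, r0):
    """Donor-acceptor distance from LRET donor lifetimes.

    tau_da: donor lifetime in the presence of the acceptor
    tau_d:  donor-only lifetime (same time units as tau_da)
    r0:     Forster radius for the probe pair [Angstrom]
    """
    e = 1.0 - tau_da / tau_d                 # transfer efficiency
    return r0 * ((1.0 - e) / e) ** (1.0 / 6.0)

# Hypothetical numbers: lanthanide donors have ms-scale lifetimes,
# which is what makes the long-range LRET measurement practical.
print(f"{lret_distance(tau_da=0.4, tau_d=1.5, r0=40.0):.1f} Angstrom")
```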
Abstract:
Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. Such applications often have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies in the CloudSim simulator, driven by data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities, and we then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
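While the paper works in CloudSim (a Java toolkit), the essence of an SLA-aware scaling policy can be sketched in a few generic lines: scale out when the response-time target is (about to be) violated, scale in when utilization is low. The thresholds, per-VM capacity, and load trace below are invented for illustration and are unrelated to the dEIS measurements or the CloudSim API.

```python
# Minimal sketch of a threshold-based, SLA-aware VM scaling loop.
SLA_RT = 0.5      # SLA response-time target [s]
CAP = 100.0       # requests/s a single VM can serve
SERVICE = 0.1     # unloaded per-request service time [s]
vms = 1

load_trace = [40, 80, 160, 320, 260, 120, 60]  # requests/s per interval
for load in load_trace:
    util = load / (vms * CAP)
    # Crude M/M/1-style response-time estimate; saturation -> SLA miss.
    rt = float("inf") if util >= 1.0 else SERVICE / (1.0 - util)
    if rt > SLA_RT:
        vms += 1           # scale out on (predicted) SLA violation
    elif util < 0.3 and vms > 1:
        vms -= 1           # scale in when utilization is low
    print(f"load={load:3d} req/s  util={util:.2f}  rt={rt:.2f}s  -> vms={vms}")
```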