910 results for Event-based Model
Abstract:
Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To contribute to filling this gap, the dynamic process-based model TRACE, which can simulate 15N tracer redistribution in forest ecosystems, was used to study N cycling processes in a mountain spruce forest at the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether process interactions are captured correctly. Because the interactions in such a system are complex, it is possible to get whole-system C and N cycling right in a model without knowing whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. By simulating 15N tracer redistribution in ecosystem compartments, TRACE provides a very powerful tool for validating the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; a long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition and initial site conditions. After calibrating key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution and compared them against 15N field observations from a large-scale labeling experiment. The comparison of the 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model reproduces the fact that most of the N deposition is immobilized in the soil. The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer by the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
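As a hedged illustration of how modeled and measured tracer recoveries can be compared, the sketch below applies the standard 15N mass-balance bookkeeping used in labeling studies; it is not code from the TRACE model, and all pool names and numbers are hypothetical placeholders.

```python
# Illustrative 15N mass balance: fraction of applied tracer recovered per compartment.
# Values are hypothetical placeholders, not Alptal data.

applied_tracer_mg_m2 = 150.0  # total 15N label applied per square metre (assumed)

# For each pool: (N mass in pool, mg N m-2; atom% 15N excess above natural abundance)
pools = {
    "foliage":    (12000.0, 0.05),
    "LF_horizon": (45000.0, 0.12),
    "M_horizon":  (90000.0, 0.04),
}

def recovery_percent(n_mass, atom_pct_excess, applied):
    """Share of the applied label found in one pool, in percent."""
    label_in_pool = n_mass * atom_pct_excess / 100.0  # mg 15N m-2 above background
    return 100.0 * label_in_pool / applied

for name, (n_mass, excess) in pools.items():
    print(f"{name:12s} {recovery_percent(n_mass, excess, applied_tracer_mg_m2):5.1f} % of applied 15N")
```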
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behavior, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is therefore not yet a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard specifies how to measure and calculate the parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and frequency dependence of winding resistance and core loss. Steady-state excitation, de-energization, and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and transformer parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
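For orientation, a minimal sketch of the much simpler textbook per-unit estimates one can make from nameplate data alone is given below; this is not the duality-based parameter estimation procedure of the work above, and all nameplate values and the X/R assumption are hypothetical.

```python
import math

# Hypothetical nameplate values for a three-phase autotransformer (illustrative only).
S_rated  = 300e6    # rated power, VA
V_hv     = 345e3    # rated HV line-to-line voltage, V
z_pu     = 0.10     # short-circuit impedance, per unit (from the nameplate)
x_over_r = 20.0     # assumed X/R ratio when the loss measurement is unavailable

Z_base = V_hv**2 / S_rated                     # base impedance on the HV side, ohms
Z_sc   = z_pu * Z_base                         # short-circuit impedance magnitude, ohms
R_sc   = Z_sc / math.sqrt(1.0 + x_over_r**2)   # series resistance estimate, ohms
X_sc   = R_sc * x_over_r                       # leakage reactance estimate, ohms

print(f"Z_base = {Z_base:.2f} ohm, R_sc = {R_sc:.3f} ohm, X_sc = {X_sc:.2f} ohm")
```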
Abstract:
Introduction Prospective memory (PM), the ability to remember to perform intended activities in the future (Kliegel & Jäger, 2007), is crucial for success in everyday life. PM seems to improve gradually over the childhood years (Zimmermann & Meier, 2006), yet little is known about PM competences in young school children in general, and even less is known about the factors influencing its development. Currently, a number of studies suggest that executive functions (EF) are potentially influencing processes (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Additionally, metacognitive processes (MC: monitoring and control) are assumed to be involved in optimizing one's performance (Krebs & Roebers, 2010; 2012; Roebers, Schmid, & Roderer, 2009). Yet, the relations between PM, EF and MC remain relatively unspecified. We intend to examine empirically the structural relations between these constructs. Method A cross-sectional study including 119 2nd graders (mean age = 95.03 months, SD = 4.82) will be presented. Participants (n = 68 girls) completed three EF tasks (Stroop, updating, shifting), a computerised event-based PM task and an MC spelling task. The latent variables PM, EF and MC, each represented by manifest variables derived from the respective tasks, were interrelated by structural equation modelling. Results Analyses revealed clear associations between the three cognitive constructs PM, EF and MC (rPM-EF = .45, rPM-MC = .23, rEF-MC = .20). A three-factor model, as opposed to one- or two-factor models, provided an excellent fit to the data (χ²(17, N = 119) = 18.86, p = .34, RMSEA = .030, CFI = .990, TLI = .978). Discussion The results indicate that already in young elementary school children, PM, EF and MC are empirically well distinguishable, yet substantially interrelated. PM and EF seem to share a substantial amount of variance, while for MC more unique processes may be assumed.
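As a hedged aside, the reported fit indices are mutually consistent; the sketch below reproduces the reported RMSEA from the chi-square statistic using the standard formula (an illustrative check, not the authors' analysis code).

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a chi-square test of model fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported in the abstract: chi-square(17, N = 119) = 18.86
print(f"{rmsea(18.86, 17, 119):.3f}")  # -> 0.030, matching the reported RMSEA
```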
Abstract:
The induction of late long-term potentiation (L-LTP) involves complex interactions among second-messenger cascades. To gain insight into these interactions, a mathematical model was developed for L-LTP induction in the CA1 region of the hippocampus. The differential equation-based model represents the actions of protein kinase A (PKA), MAP kinase (MAPK), and CaM kinase II (CaMKII) in the vicinity of the synapse, and the activation of transcription by CaM kinase IV (CaMKIV) and MAPK. L-LTP is represented by increases in a synaptic weight. Simulations suggest that steep, supralinear stimulus-response relationships between stimuli (e.g., elevations in [Ca(2+)]) and kinase activation are essential for translating brief stimuli into long-lasting gene activation and synaptic weight increases. Convergence of multiple kinase activities to induce L-LTP helps to generate a threshold whereby the amount of L-LTP varies steeply with the number of brief (tetanic) electrical stimuli. The model simulates tetanic, theta-burst, pairing-induced, and chemical L-LTP, as well as L-LTP due to synaptic tagging. The model also simulates inhibition of L-LTP by inhibition of MAPK, CaMKII, PKA, or CaMKIV. The model predicts the results of experiments to delineate mechanisms underlying L-LTP induction and expression. For example, the cAMP antagonist Rp-cAMPS, which inhibits L-LTP induction, is predicted to inhibit ERK activation. The model also appears useful for clarifying similarities and differences between hippocampal L-LTP and long-term synaptic strengthening in other systems.
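A minimal sketch of the kind of steep, supralinear stimulus-response coupling described above is given below: a generic Hill-type kinase activation driven by a brief Ca2+ pulse and, in turn, driving a synaptic weight. This is an illustration under assumed parameters, not the published model's equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic illustration: a brief Ca2+ pulse activates a kinase through a steep Hill
# function; active kinase slowly drives an increase in synaptic weight W.
K_HALF, HILL_N = 1.0, 4        # half-activation Ca level and Hill coefficient (assumed)
K_ACT, K_DEACT = 5.0, 1.0      # kinase activation/deactivation rates (assumed)
K_W = 0.02                     # weight growth rate per unit active kinase (assumed)

def ca_level(t):
    """Brief tetanus-like Ca2+ elevation between t = 1 and t = 2 (arbitrary units)."""
    return 2.0 if 1.0 <= t <= 2.0 else 0.1

def rhs(t, y):
    kinase, weight = y
    drive = ca_level(t) ** HILL_N / (K_HALF ** HILL_N + ca_level(t) ** HILL_N)
    dkinase = K_ACT * drive * (1.0 - kinase) - K_DEACT * kinase
    dweight = K_W * kinase
    return [dkinase, dweight]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 1.0], max_step=0.05)
print(f"final synaptic weight: {sol.y[1, -1]:.3f} (baseline 1.0)")
```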
Abstract:
In order to reconstruct the temperature at the North Greenland Ice Core Project (NGRIP) site, new measurements of δ15N have been performed covering the time period from the beginning of the Holocene to Dansgaard–Oeschger (DO) event 8. Together with previously measured and mostly published δ15N data, we present for the first time a NGRIP temperature reconstruction for the whole last glacial period from 10 to 120 kyr b2k (thousand years before 2000 AD), including every DO event, based on δ15N isotope measurements combined with a firn densification and heat diffusion model. The detected temperature rises at the onset of DO events range from 5 °C (DO 25) up to 16.5 °C (DO 11), with an uncertainty of ±3 °C. To bring measured and modelled data into agreement, we had to reduce the accumulation rate given by the NGRIP ss09sea06bm timescale in some periods by 30 to 35%, especially during the Last Glacial Maximum. A comparison between the reconstructed temperature and the δ18Oice data confirms that the isotopic composition of the stadials was strongly influenced by seasonality. We find an anticorrelation between the variations of the δ18Oice sensitivity to temperature (referred to as α) and obliquity, in agreement with a simple Rayleigh distillation model. Finally, we suggest that α might be influenced by the Northern Hemisphere ice sheet volume.
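For context, δ15N-based reconstructions of this kind rest on the fact that gravitational settling enriches 15N at the base of the diffusive firn column. A hedged sketch of the standard barometric enrichment formula is given below, with illustrative values rather than NGRIP parameters.

```python
import math

R_GAS   = 8.314     # J mol-1 K-1
G       = 9.81      # m s-2
DELTA_M = 1.0e-3    # kg mol-1, mass difference between 15N14N and 14N14N

def delta15n_grav_permil(diffusive_column_m, temp_k):
    """Gravitational delta15N enrichment (permil) at the base of the diffusive firn column."""
    return (math.exp(DELTA_M * G * diffusive_column_m / (R_GAS * temp_k)) - 1.0) * 1000.0

# Hypothetical example: a 60 m diffusive column at -40 degC (233 K)
print(f"{delta15n_grav_permil(60.0, 233.0):.3f} permil")
```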
Abstract:
The results of a search for pair production of light top squarks are presented, using 4.7 fb⁻¹ of √s = 7 TeV proton-proton collisions collected with the ATLAS detector at the Large Hadron Collider. This search targets top squarks with masses similar to, or lighter than, the top quark mass. Final states containing exclusively one or two leptons (e, μ), large missing transverse momentum, light-flavour jets and b-jets are used to reconstruct the top squark pair system. Event-based mass scale variables are used to separate the signal from a large tt̄ background. No excess over the Standard Model expectation is found. The results are interpreted in the framework of the Minimal Supersymmetric Standard Model, assuming the top squark decays exclusively to a chargino and a b-quark, while requiring different mass relationships between the supersymmetric particles in the decay chain. Light top squarks with masses between 123 and 167 GeV are excluded for neutralino masses around 55 GeV.
Abstract:
Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R C²/s that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, which is proportional to Leeson's noise figure F and to the charge noise power 4kT/R C²/s of their capacitance C, a quantity that today's modelling would consider as the current noise density in A²/Hz of their resistance R. Linked with this (A²/Hz ↔ C²/s) equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
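As a point of reference for the Line-Broadening and Pedestal discussion above, the sketch below evaluates one common form of Leeson's single-sideband phase-noise formula; it is a textbook expression, not the Admittance-based model proposed in the abstract, and all component values are hypothetical.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def leeson_dbc_per_hz(f_offset, f0, q_loaded, noise_figure, p_sig_w, temp_k=290.0):
    """Single-sideband phase noise L(f_offset) from one common form of Leeson's formula."""
    thermal = 2.0 * noise_figure * K_BOLTZMANN * temp_k / p_sig_w
    shaping = 1.0 + (f0 / (2.0 * q_loaded * f_offset)) ** 2
    return 10.0 * math.log10(thermal * shaping)

# Hypothetical 10 MHz L-C oscillator: loaded Q = 50, F = 4, 1 mW carrier power
for df in (1e2, 1e3, 1e4, 1e5):
    print(f"{df:8.0f} Hz offset: {leeson_dbc_per_hz(df, 10e6, 50, 4.0, 1e-3):7.1f} dBc/Hz")
```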
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs in order to generate their own annotations. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool, as the toy example below illustrates.
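To make the error-propagation point concrete, here is a toy sketch (not any real tagger) of a two-stage pipeline in which a POS error at the lower level forces a wrong choice at the semantic level; the lexicon, tags and sense labels are invented for illustration only.

```python
# Toy illustration only: a dictionary-based "POS tagger" feeding a "sense tagger".
# A mistake at the lower level (tagging "saw" as a noun) propagates upward.

POS_LEXICON = {"she": "PRON", "saw": "NOUN", "the": "DET", "duck": "NOUN"}  # 'saw' mis-entered
SENSES = {("saw", "NOUN"): "saw#tool", ("saw", "VERB"): "see#perceive",
          ("duck", "NOUN"): "duck#bird", ("duck", "VERB"): "duck#lower_head"}

def pos_tag(tokens):
    return [(tok, POS_LEXICON.get(tok, "X")) for tok in tokens]

def sense_tag(tagged):
    return [(tok, pos, SENSES.get((tok, pos), "unknown")) for tok, pos in tagged]

print(sense_tag(pos_tag("she saw the duck".split())))
# The wrong POS for "saw" yields the wrong sense saw#tool instead of see#perceive.
```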
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Purpose – The purpose of this paper is to present a simulation-based evaluation method for the comparison of different organizational forms and software support levels in the field of supply chain management (SCM). Design/methodology/approach – Apart from widely known logistic performance indicators, the discrete event simulation model explicitly considers the coordination cost stemming from iterative administration procedures. Findings – The method is applied to an exemplary supply chain configuration considering various parameter settings. Interestingly, additional coordination cost does not always result in improved logistic performance. Variations of the influence factors lead to different organizational recommendations. The results confirm the high importance of hitherto disregarded dimensions when evaluating SCM concepts and IT tools. Research limitations/implications – The model is based on simplified product and network structures. Future research should include more complex, real-world configurations. Practical implications – The developed method is designed for the identification of improvement potential when SCM software is employed. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant IT investment can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest, and the method provides a comprehensive tool for strategic IT decision making. Originality/value – The reviewed literature is mostly focused on the benefits of SCM software implementations. However, ERP-system-based supply chain coordination is still widespread industrial practice, but the associated coordination cost has not been addressed by researchers.
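A minimal sketch of the kind of discrete event simulation that can make coordination cost explicit is shown below, using SimPy as one possible tool; the order process, cost figure and iteration logic are hypothetical and are not the paper's model.

```python
import random
import simpy

COORDINATION_COST_PER_ROUND = 50.0   # assumed administrative cost of one alignment round
results = {"orders": 0, "coordination_cost": 0.0}

def order_process(env, results):
    """Each order needs 1-3 iterative coordination rounds before it can be fulfilled."""
    while True:
        yield env.timeout(random.expovariate(1.0))   # order inter-arrival time
        rounds = random.randint(1, 3)                # iterative alignment rounds
        for _ in range(rounds):
            yield env.timeout(0.2)                   # duration of one round
            results["coordination_cost"] += COORDINATION_COST_PER_ROUND
        results["orders"] += 1

random.seed(42)
env = simpy.Environment()
env.process(order_process(env, results))
env.run(until=100)
print(f"{results['orders']} orders, coordination cost {results['coordination_cost']:.0f}")
```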
Abstract:
Existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has recently been proposed. It is designed to achieve a balance between controlling isolator displacement demands and structural accelerations. It provides in a single unit all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation. It is characterized by two unique features: a self-braking (buffer) mechanism and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using an available finite element code, SAP2000. The validity of the obtained SAP2000 model is then checked against experimental, numerical and analytical results. The paper then investigates the merits and demerits of activating the built-in buffer mechanism with respect to both structural pounding mitigation and isolation efficiency. The paper addresses the problem of passive alleviation of possible inner pounding within the RNC isolator, which may arise from the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the obtained finite-element-code-based model can closely match and accurately predict the overall behavior of the RNC isolator with small errors. Moreover, the inherent buffer mechanism of the RNC isolator can mitigate or even eliminate direct structure-to-structure pounding under severe excitation, given limited separation gaps between adjacent structures. In addition, increasing the inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of any inner pounding that develops and, therefore, alleviate or even eliminate the negative effects the buffer mechanism might otherwise have on the overall RNC-isolated structural response.
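As a hedged illustration of the buffer idea discussed above, the sketch below computes the restoring force of a generic isolator whose stiff buffer engages once a displacement gap is exceeded; it is not SAP2000 input or the actual RNC isolator characteristic, and all stiffnesses and the gap size are hypothetical.

```python
def isolator_restoring_force(x, k_iso=1.0e6, k_buffer=2.0e7, gap=0.15):
    """Horizontal restoring force (N) of an isolator whose stiff buffer engages beyond 'gap' (m)."""
    force = k_iso * x                       # flexible isolation stiffness, always active
    if abs(x) > gap:                        # self-braking buffer engages past the gap
        force += k_buffer * (abs(x) - gap) * (1 if x > 0 else -1)
    return force

for disp in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"x = {disp:4.2f} m -> F = {isolator_restoring_force(disp) / 1e3:8.1f} kN")
```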
Abstract:
Plant-specific polyketide synthase genes constitute a gene superfamily, including the universal chalcone synthase [CHS; malonyl-CoA:4-coumaroyl-CoA malonyltransferase (cyclizing) (EC 2.3.1.74)] genes, the sporadically distributed stilbene synthase (SS) genes, and atypical, as-yet-uncharacterized CHS-like genes. We have recently isolated from Gerbera hybrida (Asteraceae) an unusual CHS-like gene, GCHS2, which codes for an enzyme with structural and enzymatic properties, as well as an ontogenetic distribution, distinct from both CHS and SS. Here, we show that the GCHS2-like function is encoded in the Gerbera genome by a family of at least three transcriptionally active genes. Conservation within the GCHS2 family was exploited with selective PCR to study the occurrence of GCHS2-like genes in other Asteraceae. Parsimony analysis of the amplified sequences, together with CHS-like genes isolated from other taxa of the angiosperm subclass Asteridae, suggests that GCHS2 evolved from CHS via a gene duplication event that occurred before the diversification of the Asteraceae. Enzyme activity analysis of proteins produced in vitro indicates that the GCHS2 reaction is a non-SS variant of the CHS reaction, with both a different substrate specificity (to benzoyl-CoA) and a truncated catalytic profile. Together with the recent results of Durbin et al. [Durbin, M. L., Learn, G. H., Jr., Huttley, G. A. & Clegg, M. T. (1995) Proc. Natl. Acad. Sci. USA 92, 3338-3342], our study confirms a gene duplication-based model that explains how various related functions have arisen from CHS during plant evolution.
Abstract:
Work presented at the CPE-POWERENG 2016 Conference, 29 June to 1 July 2016, Bydgoszcz, Poland
Abstract:
The research work presented in this thesis describes a new methodology for the automated, near real-time detection of pipe bursts in Water Distribution Systems (WDSs). The methodology analyses the pressure/flow data gathered by means of SCADA systems in order to extract useful information that goes beyond simple, routine monitoring activities and/or regulatory reporting, enabling the water company to proactively manage WDS sections. The work has an interdisciplinary nature, covering AI techniques and WDS management processes such as data collection, manipulation and analysis for event detection. Indeed, the methodology makes use of (i) an Artificial Neural Network (ANN) for the short-term forecasting of future pressure/flow signal values and (ii) a Rule-based Model for burst detection at the sensor and district level. The results of applying the new methodology to a District Metered Area in the Emilia-Romagna region, Italy, are also reported in the thesis. The results gathered illustrate how the methodology is capable of detecting the aforementioned failure events in a fast and reliable manner. The methodology enables water companies to save water, energy and money, and therefore helps them achieve higher levels of operational efficiency, compliance with current regulations and, last but not least, improved customer service.
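A hedged sketch of the detection logic described above follows, with the ANN forecast stubbed out as a simple placeholder; the threshold, persistence window and synthetic data are hypothetical, not the thesis' calibrated model.

```python
import numpy as np

THRESHOLD = 3.0        # flag residuals larger than 3 sigma of normal operation (assumed)
CONSECUTIVE = 4        # require several consecutive exceedances before raising an alarm

def forecast_flow(history):
    """Placeholder for the short-term ANN forecast: here, a naive 24-step moving average."""
    return float(np.mean(history[-24:]))

def burst_alarm(observed, residual_sigma):
    """Rule-based check: sustained positive flow residuals suggest a burst."""
    exceedances = 0
    for t in range(24, len(observed)):
        residual = observed[t] - forecast_flow(observed[:t])
        exceedances = exceedances + 1 if residual > THRESHOLD * residual_sigma else 0
        if exceedances >= CONSECUTIVE:
            return t  # first time step at which the alarm is raised
    return None

rng = np.random.default_rng(0)
flow = 10.0 + rng.normal(0.0, 0.5, 200)   # synthetic district flow signal (l/s)
flow[120:] += 4.0                          # simulated burst adds a step increase
print("alarm raised at time step:", burst_alarm(flow, residual_sigma=0.5))
```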
Abstract:
IMB (Irvine–Michigan–Brookhaven), a collaboration between the University of Michigan, the University of California at Irvine, and Brookhaven National Laboratory (U.S. Department of Energy), was an experiment designed to determine the ultimate stability of matter. Computer display of PMT [photomultiplier tube] hits from a simulated event based on a neutrino interaction seen in the Gargamelle bubble chamber.