891 results for data-driven simulation


Relevance: 30.00%

Abstract:

The Kineticist's Workbench is a computer program currently under development whose purpose is to help chemists understand, analyze, and simplify complex chemical reaction mechanisms. This paper discusses one module of the program that numerically simulates mechanisms and constructs qualitative descriptions of the simulation results. These descriptions are given in terms that are meaningful to the working chemist (e.g., steady states, stable oscillations, and so on); and the descriptions (as well as the data structures used to construct them) are accessible as input to other programs.
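The abstract gives no implementation detail; purely as an illustration of what "numerically simulating a mechanism and describing the result qualitatively" can look like, here is a minimal Python sketch. The reaction mechanism, rate constants, and classification thresholds are all invented, not taken from the Kineticist's Workbench.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical mechanism: a reversible isomerization A <-> B.
# Rate constants are illustrative, not from the paper.
def mechanism(t, y, k_f=2.0, k_r=1.0):
    a, b = y
    return [-k_f * a + k_r * b, k_f * a - k_r * b]

def describe(trajectory, tol=1e-6):
    """Classify the tail of a concentration time series in terms a
    chemist would use: steady state, stable oscillation, or transient."""
    tail = trajectory[-200:]
    if np.ptp(tail) < tol:                    # flat tail -> steady state
        return "steady state"
    detrended = tail - tail.mean()            # zero crossings suggest oscillation
    crossings = np.sum(np.diff(np.sign(detrended)) != 0)
    return "stable oscillation" if crossings > 4 else "transient"

sol = solve_ivp(mechanism, (0, 20), [1.0, 0.0],
                t_eval=np.linspace(0, 20, 1000))
print(describe(sol.y[0]))   # -> "steady state" for this mechanism
```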

Relevance: 30.00%

Abstract:

Abstract—Personal communication devices are increasingly being equipped with sensors that are able to passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors for answers. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement.
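As a hedged sketch of the amorphous placement idea (local caching, informed shuffling on encounters, and one-hop query forwarding), consider the following toy Python model; the class name, cache policy, and sizes are invented for illustration, not the paper's implementation.

```python
import random

class Node:
    """Toy amorphous-placement node: caches samples locally and answers
    queries from its own cache before asking direct neighbors."""
    def __init__(self, node_id, cache_size=32):
        self.id = node_id
        self.cache = {}            # region -> sensed sample
        self.cache_size = cache_size

    def sense(self, region, sample):
        self.cache[region] = sample
        self._evict()

    def encounter(self, other):
        # Informed shuffle: hand over samples the other node lacks,
        # diffusing field coverage through mobility.
        for region, sample in list(self.cache.items()):
            if region not in other.cache:
                other.cache[region] = sample
                other._evict()

    def query(self, region, neighbors):
        if region in self.cache:            # local hit
            return self.cache[region]
        for n in neighbors:                 # one-hop forwarding only
            if region in n.cache:
                return n.cache[region]
        return None                         # query fails

    def _evict(self):
        while len(self.cache) > self.cache_size:
            self.cache.pop(random.choice(list(self.cache)))
```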

Relevance: 30.00%

Abstract:

A detailed series of simulation chamber experiments has been performed on the atmospheric degradation pathways of the primary air pollutant naphthalene and two of its photooxidation products, phthaldialdehyde and 1-nitronaphthalene. The measured yields of secondary organic aerosol (SOA) arising from the photooxidation of naphthalene varied from 6 to 20%, depending on the concentrations of naphthalene and nitrogen oxides as well as relative humidity. A range of carbonyls, nitro-compounds, phenols and carboxylic acids were identified among the gas- and particle-phase products. On-line analysis of the chemical composition of naphthalene SOA was performed using aerosol time-of-flight mass spectrometry (ATOFMS) for the first time. The results indicate that enhanced formation of carboxylic acids may contribute to the observed increase in SOA yields at higher relative humidity. The photolysis of phthaldialdehyde and 1-nitronaphthalene was investigated using natural light at the European Photoreactor (EUPHORE) in Valencia, Spain. The photolysis rate coefficients were measured directly and used to confirm that photolysis is the major atmospheric loss process for these compounds. For phthaldialdehyde, the main gas-phase products were phthalide and phthalic anhydride. SOA yields in the range 2-11% were observed, with phthalic acid and dihydroxyphthalic acid identified among the particle-phase products. The photolysis of 1-nitronaphthalene yielded nitric oxide and a naphthoxy radical which reacted to form several products. SOA yields in the range 57-71% were observed, with 1,4-naphthoquinone, 1-naphthol and 1,4-naphthalenediol identified in the particle phase. On-line analysis of the SOA generated in an indoor chamber using ATOFMS provided evidence for the formation of high-molecular-weight products. Further investigations revealed that these products are oxygenated polycyclic compounds most likely produced from the dimerization of naphthoxy radicals. The results of this work indicate that naphthalene is a potentially large source of SOA in urban areas and should be included in atmospheric models. The kinetic and mechanistic information could be combined with existing literature data to produce an overall degradation mechanism for naphthalene suitable for inclusion in photochemical models that are used to predict the effect of emissions on air quality.
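For readers unfamiliar with the yield metric: SOA yield is conventionally the mass concentration of aerosol formed divided by the mass concentration of precursor reacted. A one-line illustration with invented numbers:

```python
# SOA yield Y = ΔM_SOA / ΔHC: aerosol mass formed per naphthalene mass reacted.
# The concentrations below are invented for illustration only.
delta_m_soa = 24.0          # µg/m³ of secondary organic aerosol formed
delta_naphthalene = 200.0   # µg/m³ of naphthalene reacted
print(f"SOA yield = {100 * delta_m_soa / delta_naphthalene:.0f}%")  # 12%, within the reported 6-20%
```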

Relevance: 30.00%

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while for power optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power, and area, and which allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? Delay-driven power optimisation and power-driven delay optimisation are proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under both zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically whether to move from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the global optimum of the energy constraint is thereby obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree or graph. We used UCS to search the AIG network for the specific node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, estimation and optimisation of dynamic power and longest-path delay, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay is achieved with minimal overhead, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
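As a rough illustration of the delay-constrained power pass described above, here is a minimal simulated annealing sketch in Python. The state, neighbour move, and the power/delay evaluators are abstract placeholders; the real flow operates on AIGs through ABC, which is not reproduced here.

```python
import math
import random

def anneal(initial_state, neighbor, power, delay, delay_budget,
           t0=1.0, cooling=0.995, steps=10_000):
    """Delay-constrained power minimisation by simulated annealing.
    `neighbor` proposes a candidate state (e.g. one AIG reordering move);
    `power` and `delay` are cost evaluators for a state."""
    state = best = initial_state
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        if delay(cand) <= delay_budget:          # delay is a hard constraint
            dp = power(cand) - power(state)
            # accept improvements always, worsenings with Boltzmann probability
            if dp < 0 or random.random() < math.exp(-dp / t):
                state = cand
                if power(state) < power(best):
                    best = state
        t *= cooling                             # geometric cooling schedule
    return best
```

Swapping the roles of the power and delay evaluators gives the complementary power-constrained delay pass; alternating the two passes approximates the balanced multi-objective flow described in the abstract.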

Relevance: 30.00%

Abstract:

For at least two millennia, and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer, but extends them into three spatial dimensions and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, and in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused, interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated user-centred design (UCD); it explores the possibility of embedding a UCD framework in GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.

Relevance: 30.00%

Abstract:

Given the ever-increasing volume and time-sensitivity of data in the organizational context, the decision maker's choice of an appropriate decision alternative in a given situation becomes difficult. In particular, operational actors face the challenge of making business-critical decisions in a short time and at high frequency. The construct of Situation Awareness (SA) has been established in cognitive psychology as a valid basis for understanding the behavior and decision making of human beings in complex and dynamic systems. SA gives decision makers the ability to make informed, time-critical decisions and thereby improve the performance of the respective business process. This research paper leverages SA as the starting point for a design science project for Operational Business Intelligence and Analytics systems and suggests a first version of design principles.

Relevance: 30.00%

Abstract:

The Leaving Certificate (LC) is the national, standardised state examination in Ireland necessary for entry to third-level education; it presents a massive, raw corpus of data with the potential to yield invaluable insight into the phenomenon of learner interlanguage. With samples of official LC Spanish examination data, this project has compiled a digitised corpus of learner Spanish comprising the written and oral production of 100 candidates. This corpus was then analysed using a specific investigative corpus technique, Computer-aided Error Analysis (CEA; Dagneaux et al., 1998). CEA is a powerful apparatus in that it greatly facilitates the quantification and analysis of a large learner corpus in digital format. The corpus was both compiled and analysed with the use of UAM Corpus Tool (O'Donnell, 2013). The tool allows for the recording of candidate-specific variables such as grade, examination level, task type and gender, therefore allowing for critical analysis of the corpus as one unit, as separate written and oral subcorpora, and also of performance per task, level and gender. This is an interdisciplinary work combining aspects of Applied Linguistics, Learner Corpus Research and Foreign Language (FL) Learning. Beginning with a review of the context of FL learning in Ireland and Europe, I go on to discuss the disciplinary context and theoretical framework for this work and outline the methodology applied. I then perform detailed quantitative and qualitative analyses before combining all research findings and outlining the principal conclusions. This investigation does not make a priori assumptions about the data set, the LC Spanish examination, the context of FLs or any aspect of learner competence. It undertakes to provide the linguistic research community and the domain of Spanish language learning and pedagogy in Ireland with an empirical, descriptive profile of real learner performance, characterising learner difficulty.
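The abstract includes no code; as a hedged illustration of CEA-style quantification over candidate-specific variables, here is a small pandas sketch. All rows are invented and are not LC data.

```python
import pandas as pd

# Toy illustration: one row per annotated error, carrying the candidate-level
# variables the UAM Corpus Tool records (level, task type, gender, ...).
errors = pd.DataFrame({
    "candidate":  [1, 1, 2, 3, 3, 3],
    "level":      ["Higher", "Higher", "Ordinary", "Higher", "Higher", "Higher"],
    "task":       ["written", "oral", "written", "written", "oral", "oral"],
    "gender":     ["F", "F", "M", "F", "F", "F"],
    "error_type": ["gender agreement", "verb tense", "gender agreement",
                   "preposition", "verb tense", "verb tense"],
})

# Error frequencies per examination level and task type
print(errors.groupby(["level", "task"]).size())

# Most common error categories across the whole corpus
print(errors["error_type"].value_counts())
```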

Relevance: 30.00%

Abstract:

Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90. © 1995.
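The score-matching idea can be illustrated with a deliberately simplified sketch: fit an auxiliary density to the observed data, then choose structural parameters so that the auxiliary score, evaluated on simulated paths, is close to zero under a GMM criterion. The Gaussian auxiliary model and the stand-in simulator below are placeholders, not the paper's monetary model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
observed = rng.normal(0.0, 1.0, size=500)     # placeholder for weekly currency data
shocks = rng.standard_normal(5000)            # common random numbers for simulation

def aux_score(data, mu, sigma):
    """Average score of a Gaussian auxiliary density at (mu, sigma)."""
    z = (data - mu) / sigma
    return np.array([(z / sigma).mean(), ((z**2 - 1.0) / sigma).mean()])

# Step 1: quasi-MLE of the auxiliary model on the observed data
mu_hat, sigma_hat = observed.mean(), observed.std()

# Step 2: pick structural parameters so simulated data zero the auxiliary score
def simulate(theta):
    loc, scale = theta                        # stand-in structural simulator
    return loc + scale * shocks

def gmm_objective(theta):
    g = aux_score(simulate(theta), mu_hat, sigma_hat)
    return g @ g                              # identity weighting, for simplicity

theta_hat = minimize(gmm_objective, x0=[0.5, 1.5], method="Nelder-Mead").x
print(theta_hat)                              # recovers roughly (0, 1)
```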

Relevance: 30.00%

Abstract:

We propose a novel data-delivery method for delay-sensitive traffic that significantly reduces the energy consumption in wireless sensor networks without reducing the number of packets that meet end-to-end real-time deadlines. The proposed method, referred to as SensiQoS, leverages the spatial and temporal correlation between the data generated by events in a sensor network and realizes energy savings through application-specific in-network aggregation of the data. SensiQoS maximizes energy savings by adaptively waiting for packets from upstream nodes to perform in-network processing without missing the real-time deadline for the data packets. SensiQoS is a distributed packet scheduling scheme in which nodes make localized decisions on when to schedule a packet for transmission to meet its end-to-end real-time deadline and to which neighbor to forward the packet to save energy. We also present a localized algorithm for nodes to adapt to network traffic to maximize energy savings in the network. Simulation results show that SensiQoS improves energy savings in sensor networks where events are sensed by multiple nodes and spatial and/or temporal correlation exists among the data packets. Energy savings due to SensiQoS increase with the density of the sensor nodes and the size of the sensed events. © 2010 Harshavardhan Sabbineni and Krishnendu Chakrabarty.
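A minimal sketch of the kind of localized wait decision described above: hold a packet for in-network aggregation only as long as the remaining deadline allows. The function names, latency estimates, and margins are invented placeholders, not SensiQoS's actual protocol code.

```python
# Each node buffers a packet to aggregate it with correlated upstream packets,
# but never longer than the end-to-end deadline allows.
def allowable_wait(deadline_remaining_s, hops_to_sink, per_hop_latency_s,
                   safety_margin_s=0.010):
    """How long this node may buffer the packet before forwarding."""
    transit_budget = hops_to_sink * per_hop_latency_s + safety_margin_s
    return max(0.0, deadline_remaining_s - transit_budget)

def aggregate(readings):
    """Application-specific in-network aggregation; here, a simple mean of
    correlated readings of the same event."""
    return sum(readings) / len(readings)

# Example: 120 ms of slack and 4 hops at 20 ms each leaves ~30 ms to wait.
wait_s = allowable_wait(deadline_remaining_s=0.120, hops_to_sink=4,
                        per_hop_latency_s=0.020)
print(f"may buffer for {wait_s * 1000:.0f} ms")   # 30 ms
```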

Relevance: 30.00%

Abstract:

BACKGROUND: The ability to write clearly and effectively is of central importance to the scientific enterprise. Encouraged by the success of simulation environments in other biomedical sciences, we developed WriteSim TCExam, an open-source, Web-based, textual simulation environment for teaching effective writing techniques to novice researchers. We shortlisted and modified an existing open-source application, TCExam, to serve as a textual simulation environment. After testing usability internally in our team, we conducted formal field usability studies with novice researchers. These were followed by formal surveys with researchers fitting the roles of administrators and users (novice researchers). RESULTS: The development process was guided by feedback from usability tests within our research team. Online surveys and formal studies, involving members of the Research on Research group and selected novice researchers, show that the application is user-friendly. Additionally, it has been used to train 25 novice researchers in scientific writing to date and has generated encouraging results. CONCLUSION: WriteSim TCExam is the first Web-based, open-source textual simulation environment designed to complement traditional scientific writing instruction. While initial reviews by students and educators have been positive, a formal study is needed to measure its benefits in comparison to standard instructional methods.

Relevance: 30.00%

Abstract:

Gemstone Team HOPE (Hospital Optimal Productivity Enterprise)

Relevance: 30.00%

Abstract:

The cost of electricity, a major operating cost of municipal wastewater treatment plants, is related to influent flow rate, power price, and power load. With knowledge of inflow and price patterns, plant operators can manage processes to reduce electricity costs. Records of influent flow, power price, and load are evaluated for the Blue Plains Advanced Wastewater Treatment Plant. Diurnal and seasonal trends are analyzed, and power usage is broken down among treatment processes. A simulation model of influent pumping, a large power user, is developed. It predicts pump discharge and power usage based on wet-well level. Individual pump characteristics are tested in the plant. The model accurately simulates plant inflow and power use for two pumping stations (R² = 0.68 and 0.93 for inflow; R² = 0.94 and 0.91 for power). The wet-well stage-storage relationship is estimated from data, and time-varying wet-well level is added to the model. A synthetic example demonstrates application in managing pumps to reduce electricity cost.
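A rough sketch of what such a pumping model can look like: discharge follows a tested pump characteristic as a function of wet-well level, and power is estimated from hydraulic power over a wire-to-water efficiency. The curve coefficients, static lift, and efficiency below are invented placeholders, not the Blue Plains calibration.

```python
RHO_G = 9810.0          # specific weight of water, N/m^3

def pump_discharge(wet_well_level_m, a=0.9, b=0.08):
    """Tested pump characteristic: discharge rises with wet-well level."""
    return a + b * wet_well_level_m                  # m^3/s

def pump_power_kw(discharge_m3s, static_lift_m, wet_well_level_m,
                  efficiency=0.70):
    """Electrical power from hydraulic power over wire-to-water efficiency."""
    head = static_lift_m - wet_well_level_m          # net lift falls as well fills
    return RHO_G * discharge_m3s * head / efficiency / 1000.0

q = pump_discharge(wet_well_level_m=2.0)
p = pump_power_kw(q, static_lift_m=15.0, wet_well_level_m=2.0)
print(f"Q = {q:.2f} m^3/s, P = {p:.0f} kW")
```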

Relevance: 30.00%

Abstract:

BACKGROUND: Ritonavir inhibition of cytochrome P450 3A4 decreases the elimination clearance of fentanyl by 67%. We used a pharmacokinetic model developed from published data to simulate the effect of sample patient-controlled epidural labor analgesic regimens on plasma fentanyl concentrations in the absence and presence of ritonavir-induced cytochrome P450 3A4 inhibition. METHODS: Fentanyl absorption from the epidural space was modeled using tanks-in-series delay elements. Systemic fentanyl disposition was described using a three-compartment pharmacokinetic model. Parameters for epidural drug absorption were estimated by fitting the model to reported plasma fentanyl concentrations measured after epidural administration. The validity of the model was assessed by comparing predicted plasma concentrations after epidural administration to published data. The effect of ritonavir was modeled as a 67% decrease in fentanyl elimination clearance. Plasma fentanyl concentrations were simulated for six sample patient-controlled epidural labor analgesic regimens over 24 h using ritonavir and control models. Simulated data were analyzed to determine if plasma fentanyl concentrations producing a 50% decrease in minute ventilation (6.1 ng/mL) were achieved. RESULTS: Simulated plasma fentanyl concentrations in the ritonavir group were higher than those in the control group for all sample labor analgesic regimens. Maximum plasma fentanyl concentrations were 1.8 ng/mL and 3.4 ng/mL for the normal and ritonavir simulations, respectively, and did not reach concentrations associated with 50% decrease in minute ventilation. CONCLUSION: Our model predicts that even with maximal clinical dosing regimens of epidural fentanyl over 24 h, ritonavir-induced cytochrome P450 3A4 inhibition is unlikely to produce plasma fentanyl concentrations associated with a decrease in minute ventilation.
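As a hedged sketch of the simulation approach (tanks-in-series epidural absorption feeding a three-compartment disposition model, re-run with elimination clearance reduced by 67%), consider the following; every parameter value is an invented placeholder rather than the paper's fitted estimate.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k_abs, CL, V1, k12, k21, k13, k31, n_tanks=3):
    """Tanks-in-series epidural delay feeding a 3-compartment PK model."""
    tanks, (a1, a2, a3) = y[:n_tanks], y[n_tanks:]
    d = np.empty_like(y)
    d[0] = -k_abs * tanks[0]
    for i in range(1, n_tanks):                       # absorption delay chain
        d[i] = k_abs * (tanks[i - 1] - tanks[i])
    d[n_tanks] = (k_abs * tanks[-1] - (CL / V1) * a1  # central compartment
                  - (k12 + k13) * a1 + k21 * a2 + k31 * a3)
    d[n_tanks + 1] = k12 * a1 - k21 * a2              # peripheral compartments
    d[n_tanks + 2] = k13 * a1 - k31 * a3
    return d

def peak_conc(CL):
    y0 = [100.0, 0, 0, 0, 0, 0]                       # µg dose into the delay chain
    sol = solve_ivp(model, (0, 600), y0,
                    args=(0.05, CL, 30.0, 0.1, 0.05, 0.02, 0.01),
                    t_eval=np.linspace(0, 600, 2000))
    return (sol.y[3] / 30.0).max()                    # central conc., µg/L = ng/mL

# Control vs. ritonavir (elimination clearance reduced by 67%)
print(peak_conc(CL=0.8), peak_conc(CL=0.8 * 0.33))
```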

Relevance: 30.00%

Abstract:

© 2015 American Geophysical Union. All Rights Reserved. The role of surface and advective heat fluxes in buoyancy-driven circulation was examined within a tropical coral reef system. Measurements of local meteorological conditions as well as water temperature and velocity were made at six lagoon locations for 2 months during the austral summer. We found that temperature rather than salinity dominated buoyancy in this system. The data were used to calculate diurnally phase-averaged thermal balances. A one-dimensional momentum balance developed for a portion of the lagoon indicates that the diurnal heating pattern and consistent spatial gradients in surface heat fluxes create a baroclinic pressure gradient that is dynamically important in driving the observed circulation. The baroclinic and barotropic pressure gradients make up 90% of the momentum budget in part of the system; thus, when the baroclinic pressure gradient decreases 20% during the day due to changes in the temperature gradient, this substantially changes the circulation, with different flow patterns occurring during night and day. Thermal balances computed across the entire lagoon show that the spatial heating patterns and resulting buoyancy-driven circulation are important in maintaining a persistent advective export of heat from the lagoon and for enhancing ocean-lagoon exchange.
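To make the buoyancy mechanism concrete: for a well-mixed water column of depth h, a horizontal temperature gradient ∂T/∂x produces a depth-averaged baroclinic pressure-gradient force per unit mass of roughly (g h α / 2) ∂T/∂x, with α the thermal expansion coefficient. A back-of-envelope sketch with invented values:

```python
# Order-of-magnitude estimate of the baroclinic forcing in a shallow,
# well-mixed lagoon. All values are illustrative, not the observed ones.
g = 9.81               # gravitational acceleration, m/s^2
alpha = 3.0e-4         # thermal expansion coefficient of seawater, 1/K
h = 2.0                # lagoon depth, m
dT_dx = 1.0 / 1000.0   # 1 K of warming across 1 km of lagoon

baroclinic_accel = 0.5 * g * h * alpha * dT_dx
print(f"{baroclinic_accel:.2e} m/s^2")        # ~3e-6 m/s^2

# A 20% daytime drop in dT/dx (as reported) scales the forcing by the same factor:
print(f"{0.8 * baroclinic_accel:.2e} m/s^2")
```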

Relevance: 30.00%

Abstract:

Observations of waves, setup, and wave-driven mean flows were made on a steep coral forereef and its associated lagoonal system on the north shore of Moorea, French Polynesia. Despite the steep and complex geometry of the forereef, and wave amplitudes that are nearly equal to the mean water depth, linear wave theory showed very good agreement with the data. Measurements across the reef illustrate the importance of including both the wave transport (owing to Stokes drift) and the Eulerian mean transport when computing the fluxes over the reef. Finally, the observed setup closely follows the theoretical relationship derived from classic radiation stress theory, although the two parameters that appear in the model (one reflecting wave breaking, the other the effective depth over the reef crest) must be chosen to match theory to data. © 2013 American Meteorological Society.
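The wave quantities invoked above follow from linear theory: the dispersion relation ω² = gk tanh(kh) gives the wavenumber, and the depth-integrated Stokes (wave) transport is Q = E/(ρc), with E the wave energy density and c the phase speed. A minimal sketch with illustrative inputs, not the Moorea observations:

```python
import numpy as np

g, rho = 9.81, 1025.0    # gravity (m/s^2), seawater density (kg/m^3)

def wavenumber(T, h, iters=20):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k."""
    omega = 2 * np.pi / T
    k = omega / np.sqrt(g * h)              # shallow-water first guess
    for _ in range(iters):                  # Newton iteration
        f = g * k * np.tanh(k * h) - omega**2
        df = g * np.tanh(k * h) + g * k * h / np.cosh(k * h)**2
        k -= f / df
    return k

T, h, H = 14.0, 10.0, 2.0                   # period (s), depth (m), wave height (m)
k = wavenumber(T, h)
c = (2 * np.pi / T) / k                     # phase speed
E = rho * g * H**2 / 8.0                    # energy density (monochromatic wave)
Q_stokes = E / (rho * c)                    # depth-integrated Stokes transport, m^2/s
print(f"k = {k:.3f} 1/m, c = {c:.1f} m/s, Q = {Q_stokes:.2f} m^2/s")
```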