15 results for discrete-event simulation
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
This study presents the findings of applying a Discrete Demand Side Control (DDSC) approach to the space heating of two case-study buildings. High- and low-tolerance scenarios are implemented on the space-heating controller to assess the impact of DDSC upon buildings with different thermal capacitances: one of light-weight and one of heavy-weight construction. Space heating is provided by an electric heat pump powered from a wind turbine, with a back-up electrical network connection for the event of insufficient wind being available when a demand occurs. Findings highlight that thermal comfort is maintained within an acceptable range while the DDSC controller maintains the demand/supply balance. Although energy demand increases slightly, this additional demand is mostly supplied from the wind turbine, so it is of little significance, and a reduction in operating costs and carbon emissions is still attained.
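A minimal sketch of the per-timestep decision such a discrete demand-side heating controller might make (the set point, tolerance band and wind-availability signal below are illustrative assumptions, not values from the study):

```python
def heater_command(T_in, wind_ok, setpoint=20.0, tol=1.0):
    """Return (heat_on, source) for one control step.

    T_in:     indoor temperature [deg C]
    wind_ok:  True when the wind turbine can cover the heat pump load
    tol:      comfort tolerance band [K]
    """
    if T_in <= setpoint - tol:
        # Comfort limit reached: heat regardless of supply, falling back
        # on the back-up network connection if wind is insufficient.
        return True, ("wind" if wind_ok else "grid")
    if T_in < setpoint and wind_ok:
        # Inside the band: heat opportunistically, only when wind is available.
        return True, "wind"
    return False, None
```

A wider tolerance band gives the controller more freedom to defer heating until wind is available, which is the trade-off the high- and low-tolerance scenarios probe.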
Abstract:
We perform a numerical study of the evolution of a Coronal Mass Ejection (CME) and its interaction with the coronal magnetic field, based on the 12 May 1997 CME event, using a global magnetohydrodynamic (MHD) model for the solar corona. The ambient solar wind steady-state solution is driven by photospheric magnetic field data, while the solar eruption is obtained by superimposing an unstable flux rope onto the steady-state solution. During the initial stage of CME expansion, the core flux rope reconnects with the neighboring field, which facilitates lateral expansion of the CME footprint in the low corona. The flux rope field also reconnects with the oppositely orientated overlying magnetic field in the manner of the breakout model. During this stage of the eruption, the simulated CME rotates counter-clockwise to achieve an orientation that is in agreement with the interplanetary flux rope observed at 1 AU. A significant component of the CME that expands into interplanetary space comprises one of the side lobes created mainly as a result of reconnection with the overlying field. Within 3 hours, reconnection effectively modifies the CME connectivity from the initial condition, where both footpoints are rooted in the active region, to a situation where one footpoint is displaced into the quiet Sun, at a significant distance (≈1 R☉) from the original source region. The expansion and rotation due to interaction with the overlying magnetic field stop when the CME reaches the outer edge of the helmet streamer belt, where the field is organized on a global scale. The simulation thus offers a new view of the role reconnection plays in rotating a CME flux rope and transporting its footpoints while preserving its core structure.
Abstract:
The problem of modeling solar energetic particle (SEP) events is important to both space weather research and forecasting, and yet it has seen relatively little progress. Most important SEP events are associated with coronal mass ejections (CMEs) that drive coronal and interplanetary shocks. These shocks can continuously produce accelerated particles from the ambient medium to well beyond 1 AU. This paper describes an effort to model real SEP events using a Center for Integrated Space Weather Modeling (CISM) MHD solar wind simulation, including a cone model of CMEs to initiate the related shocks. In addition to providing observation-inspired shock geometry and characteristics, this MHD simulation describes the time-dependent observer field line connections to the shock source. As a first approximation, we assume a shock jump-parameterized source strength and spectrum, and that scatter-free transport occurs outside of the shock source, thus emphasizing the role the shock evolution plays in determining the modeled SEP event profile. Three halo CME events, on May 12, 1997, November 4, 1997, and December 13, 2006, are used to test the modeling approach. While challenges arise in the identification and characterization of the shocks in the MHD model results, this approach illustrates the importance, for SEP event modeling, of globally simulating the underlying heliospheric event. The results also suggest the potential utility of such a model for forecasting and for the interpretation of separated multipoint measurements, such as those expected from the STEREO mission.
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event-based analysis technique is therefore developed, in which high-speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
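As an illustration of the timing analysis, the following toy sketch scans candidate shifts between an observed and a forecast series and picks the MSE-minimising one (the data are synthetic; the study itself uses 8 years of L1 observations and WSA output):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000)
obs = 400 + 80 * np.sin(2 * np.pi * t / 27) + 20 * rng.standard_normal(t.size)
model = np.roll(obs, -3) + 30 * rng.standard_normal(t.size)  # toy forecast,
                                                             # offset by 3 steps

def mse_at_lag(lag):
    """MSE after shifting the model series by `lag` steps."""
    if lag > 0:
        return np.mean((obs[lag:] - model[:-lag]) ** 2)
    if lag < 0:
        return np.mean((obs[:lag] - model[-lag:]) ** 2)
    return np.mean((obs - model) ** 2)

best = min(range(-10, 11), key=mse_at_lag)
print("MSE-minimising shift:", best)  # recovers the 3-step offset in the toy
```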
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
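The parallel reduction mentioned above is a standard building block; purely to illustrate the pattern (this is not the paper's agent-based fault-tolerance scheme), a minimal Message Passing Interface version in Python via mpi4py might look like:

```python
# Run with: mpiexec -n 4 python reduce_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = sum(range(rank * 100, (rank + 1) * 100))  # this rank's partial result
total = comm.reduce(local, op=MPI.SUM, root=0)    # tree-structured reduction

if rank == 0:
    print("global sum:", total)
```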
Abstract:
This article discusses approaches to the interpretation and analysis of an event that is poised between reality and performance. It focuses upon a real event witnessed by the author while driving out of Los Angeles, USA. A body hanging on a rope from a bridge some 25 to 30 feet above the freeway held up the traffic. The status of the body was unclear. Was it the corpse of a dead human being or a stuffed dummy, a simulation of a death? Was it a tragic accident or suicide, or was it a stunt, a protest or a performance? Whether a real body or not, it was an event: it drew an audience, it took place in a defined public space bound by time, and it disrupted everyday normality and the familiar. The article debates how approaches to performance can engage with a shocking event, such as the Hanging Man, and the frameworks of interpretation that can be brought to bear on it. The analysis takes account of the function of memory in reconstructing the event, and the paradigms of cultural knowledge that offered themselves as parallels, comparators or distinctions against which the experience could be measured, such as the incidents of self-immolation related to demonstrations against the Vietnam War, the protest by the Irish Hunger Strikers and the visual impact of Anthony Gormley’s 2007 work, 'Event Horizon'. Theoretical frameworks deriving from analytical approaches to performance, media representation and ethical dilemmas are evaluated as means to assimilate an indeterminate and challenging event, and the notion of what an ‘event’ may be is itself addressed.
Abstract:
A neural network enhanced self-tuning controller is presented, which combines the attributes of neural network mapping with a generalised minimum variance self-tuning control (STC) strategy. In this way the controller can deal with nonlinear plants which exhibit features such as uncertainties, nonminimum phase behaviour, coupling effects and possibly unmodelled dynamics, and whose nonlinearities are assumed to be globally bounded. The unknown nonlinear plants to be controlled are approximated by an equivalent model composed of a simple linear submodel plus a nonlinear submodel. A generalised recursive least squares algorithm is used to identify the linear submodel, and a layered neural network, whose weights are updated based on the error between the plant output and the output from the linear submodel, is used to capture the unknown nonlinear submodel. The procedure for controller design is based on the equivalent model; the nonlinear submodel is therefore naturally accommodated within the control law. Two simulation studies are provided to demonstrate the effectiveness of the control algorithm.
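The linear-submodel identification step can be sketched with a standard recursive least squares update (the regressor and forgetting factor are assumptions for illustration); the residual e is the signal that, in the scheme described, would train the neural submodel:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step for the linear submodel.

    theta: parameter estimate, P: covariance matrix, phi: regressor vector,
    y: measured plant output, lam: forgetting factor.
    """
    Pphi = P @ phi
    K = Pphi / (lam + phi @ Pphi)        # gain vector
    e = y - phi @ theta                  # plant output minus linear prediction
    theta = theta + K * e                # in the paper's scheme, e would also
    P = (P - np.outer(K, Pphi)) / lam    # drive the neural submodel's weights
    return theta, P, e
```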
Abstract:
An algorithm for solving nonlinear discrete-time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimization and Parameter Estimation (DISOPE), which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimization procedure. A version of the algorithm with a linear-quadratic model-based problem, implemented in the C++ programming language, is developed and applied to illustrative simulation examples. An analysis of the optimality and convergence properties of the algorithm is also presented.
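A toy sketch of the model-reality iteration underlying DISOPE-type algorithms (the scalar dynamics, horizon and cost below are invented for illustration; the paper's version solves a linear-quadratic model-based problem with additional modifier terms):

```python
import numpy as np
from scipy.optimize import minimize

N = 10
def f_real(x, u):     return 0.9 * x + u + 0.05 * x**2   # "real" plant
def f_model(x, u, a): return 0.9 * x + u + a             # model + correction a

def simulate(u, a=None):
    x = np.zeros(N + 1)
    x[0] = 1.0
    for k in range(N):
        x[k + 1] = f_real(x[k], u[k]) if a is None else f_model(x[k], u[k], a[k])
    return x

def cost(x, u):
    return np.sum(x[1:] ** 2 + 0.1 * u ** 2)

a, u = np.zeros(N), np.zeros(N)
for _ in range(20):
    # 1) solve the model-based optimal control problem with current corrections
    u = minimize(lambda v: cost(simulate(v, a), v), u).x
    # 2) re-estimate the corrections so the model matches reality on trajectory
    x = simulate(u)
    a_new = np.array([f_real(x[k], u[k]) - f_model(x[k], u[k], 0.0)
                      for k in range(N)])
    if np.max(np.abs(a_new - a)) < 1e-9:
        break   # model and reality agree along the optimal trajectory
    a = a_new
```

Despite optimising only the simplified model at each pass, the iteration converges to a control that is optimal for the "real" dynamics, which is the central property the paper analyses.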
Abstract:
This paper introduces a method for simulating multivariate samples that have exact means, covariances, skewness and kurtosis. We introduce a new class of rectangular orthogonal matrices, fundamental to the methodology, which we call L matrices. They may be deterministic, parametric or data-specific in nature. The target moments determine the L matrix; infinitely many random samples with the same exact moments may then be generated by multiplying the L matrix by arbitrary random orthogonal matrices. This methodology is thus termed “ROM simulation”. Considering certain elementary types of random orthogonal matrices, we demonstrate that they generate samples with different characteristics. ROM simulation has applications to many problems that are resolved using standard Monte Carlo methods, but no parametric assumptions are required (unless parametric L matrices are used), so there is no sampling error caused by the discrete approximation of a continuous distribution, which is a major source of error in standard Monte Carlo simulations. For illustration, we apply ROM simulation to determine the value-at-risk of a stock portfolio.
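A minimal sketch of the rotation idea, restricted to exact means and covariances (the paper's L-matrix construction extends exactness to skewness and kurtosis; the sizes and targets below are illustrative): an orthogonal matrix that fixes the ones vector preserves both the zero column means and the orthonormality of the base matrix, so every fresh rotation yields a new sample with identical sample moments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3
m = np.array([0.1, -0.2, 0.05])          # target mean
S = np.array([[1.0, 0.3, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.2, 0.8]])          # target covariance

# Base matrix W (n x k): zero column means and W'W = n * I.
Z = rng.standard_normal((n, k))
Z -= Z.mean(axis=0)
C = np.linalg.cholesky(Z.T @ Z / n)
W = Z @ np.linalg.inv(C).T

# Random orthogonal matrix that fixes the ones vector.
M = np.column_stack([np.ones(n), rng.standard_normal((n, n - 1))])
Qfull, _ = np.linalg.qr(M)               # first column proportional to ones
B = Qfull[:, 1:]                         # basis of the complement of ones
Q, _ = np.linalg.qr(rng.standard_normal((n - 1, n - 1)))
R = B @ Q @ B.T + np.ones((n, n)) / n    # R'1 = 1 and R'R = I

A = np.linalg.cholesky(S)
X = m + (R @ W) @ A.T                    # one simulated sample

print(X.mean(axis=0))                    # exactly m (to round-off)
print(np.cov(X.T, bias=True))            # exactly S (to round-off)
```

Redrawing Q gives a new sample with the same exact moments, which is the "infinitely many samples" property the abstract describes.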
Abstract:
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalizations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts.
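A toy example of the coarse-graining idea (unrelated to the crypt model's specifics): the variance of a discrete random-walk cell population is reproduced exactly by its continuum diffusion limit.

```python
import numpy as np

rng = np.random.default_rng(0)
steps, n_cells, dx, dt, p = 200, 10_000, 1.0, 1.0, 0.5

pos = np.zeros(n_cells)
for _ in range(steps):  # each cell jumps one lattice site with probability p
    pos += dx * rng.choice([-1, 1], n_cells) * (rng.random(n_cells) < p)

D = p * dx**2 / (2 * dt)        # diffusion coefficient of the continuum limit
print(pos.var())                # discrete-model variance
print(2 * D * steps * dt)       # diffusion equation prediction: they agree
```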
Abstract:
The orthodox approach for incentivising Demand Side Participation (DSP) programs is that utility losses from capital, installation and planning costs should be recovered under financial incentive mechanisms which aim to ensure that utilities have the right incentives to implement DSP activities. The recent national smart metering roll-out in the UK implies that this approach needs to be reassessed, since utilities will recover the capital costs associated with DSP technology through bills. This paper introduces a reward and penalty mechanism focusing on residential users. DSP planning costs are recovered through payments from those consumers who do not react to peak signals; those consumers who do react are rewarded by paying lower bills. Because real-time incentives to residential consumers tend to fail due to the negligible amounts associated with the net gains (and losses) of individual users, in the proposed mechanism the regulator determines benchmarks which are matched against responses to signals, and caps the level of rewards/penalties to avoid market distortions. The paper presents an overview of existing financial incentive mechanisms for DSP; introduces the reward/penalty mechanism aimed at fostering DSP under the hypothesis of smart metering roll-out; considers the costs faced by utilities for DSP programs; assesses linear rate effects and value changes; introduces compensatory weights for those consumers who have physical or financial impediments; and shows findings based on simulation runs at three discrete levels of elasticity.
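A minimal sketch of how such a capped, benchmark-based settlement could be computed per consumer (the rate, cap and compensatory weight below are illustrative assumptions, not the paper's calibration):

```python
def settle(reduction_kwh, benchmark_kwh, rate=0.10, cap=5.00, weight=1.0):
    """Bill adjustment for one consumer after a peak event.

    Consumers beating the regulator's benchmark earn a credit; those who
    do not respond pay towards DSP planning costs. Both are capped to
    limit market distortion, and `weight` > 1 can compensate consumers
    with physical or financial impediments.
    """
    delta = (reduction_kwh - benchmark_kwh) * rate * weight
    return max(-cap, min(cap, delta))   # > 0: credit, < 0: charge

print(settle(6.0, 4.0))   # responsive consumer: +0.20 credit
print(settle(0.0, 4.0))   # non-responsive consumer: -0.40 charge
```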
Abstract:
A rheological model of sea ice is presented that incorporates the orientational distribution of ice thickness in leads embedded in isotropic floe ice. Sea ice internal stress is determined by coulombic, ridging and tensile failure at the orientations where the corresponding failure criteria are satisfied at minimum stress. Because sea ice traction increases in thinner leads and cohesion is finite, these failure line angles are determined by the orientational distribution of sea ice thickness relative to the imposed stresses. In contrast to the isotropic case, sea ice thickness anisotropy makes these failure lines dependent on the stress magnitude. Although a given type of failure criterion can generally be satisfied in many directions, at most two are considered. The strain rate is determined by shearing along slip lines, accompanied by dilatancy, and by closing or opening across orientations affected by ridging or tensile failure. The rheology is illustrated by a yield curve determined by combining coulombic and ridging failure for the case of two pairs of isotropically formed leads of different thicknesses rotated with respect to each other, which models two events of coulombic failure followed by dilatancy and refreezing. The yield curve consists of linear segments describing coulombic and ridging yield as failure switches from one lead to another while the stress grows. Because sliding along slip lines is accompanied by dilatancy, at typical Arctic sea ice deformation rates a one-day-long deformation event produces enough open water that these freshly formed slip lines become preferential sites of ridging failure.
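The orientation selection can be illustrated with a minimal Mohr-Coulomb scan over lead angles (the thickness dependence of cohesion assumed below is for illustration only, not the paper's parameterisation):

```python
import numpy as np

mu, c0 = 0.7, 1.0e3   # friction coefficient; cohesion per metre of thickness

def coulomb_margin(sigma1, sigma2, theta, h):
    """Failure margin of a lead of thickness h whose normal makes angle
    theta with the sigma1 axis (compression positive; failure if >= 0)."""
    sn = 0.5 * (sigma1 + sigma2) + 0.5 * (sigma1 - sigma2) * np.cos(2 * theta)
    tau = 0.5 * (sigma1 - sigma2) * np.sin(2 * theta)
    return np.abs(tau) - (c0 * h + mu * sn)   # thinner leads are weaker

theta = np.radians(np.arange(181))            # candidate lead orientations
h = np.full(181, 1.0)
h[55:70] = 0.2                                # a band of thin leads
m = coulomb_margin(2.0e4, 0.5e4, theta, h)
print("first-failing orientation [deg]:", np.argmax(m), "fails:", m.max() >= 0)
```

In this toy stress state the thin-lead band fails while the surrounding thick ice stays below its coulombic strength, mirroring the abstract's point that the failing orientations track the thickness distribution.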
Abstract:
A coordinated ground-based observational campaign using the IMAGE magnetometer network, EISCAT radars and optical instruments on Svalbard has made possible detailed studies of a travelling convection vortices (TCV) event on 6 January 1992. Combining the data from these facilities allows us to draw a very detailed picture of the features and dynamics of this TCV event. On the way from the noon to the dawn meridian, the vortices went through a remarkable development. The propagation velocity in the ionosphere increased from 2.5 to 7.4 km s⁻¹, and the orientation of the major axes of the vortices rotated from being almost parallel to the magnetic meridian near noon to essentially perpendicular at dawn. By combining electric fields obtained by EISCAT with ionospheric currents deduced from magnetic field recordings, the conductivities associated with the vortices could be estimated. Contrary to expectations, we found higher conductivities below the downward field-aligned current (FAC) filament than below the upward-directed one. Unexpected results also emerged from the optical observations. For most of the time there was no discrete aurora at 557.7 nm associated with the TCVs. Only once did a discrete form appear, at the foot of the upward FAC. This aurora subsequently expanded eastward and westward, leaving its centre at the same longitude, while the TCV continued to travel westward. Finally, we attempt to identify the source regions of TCVs in the magnetosphere and discuss possible generation mechanisms.
Abstract:
Traditionally, the cusp has been described in terms of a time-stationary feature of the magnetosphere which allows access of magnetosheath-like plasma to low altitudes. Statistical surveys of data from low-altitude spacecraft have shown the average characteristics and position of the cusp. Recently, however, it has been suggested that the ionospheric footprint of flux transfer events (FTEs) may be identified as variations of the “cusp” on timescales of a few minutes. In this model, the cusp can vary in form between a steady-state feature in one limit and a series of discrete ionospheric FTE signatures in the other limit. If this time-dependent cusp scenario is correct, then the signatures of the transient reconnection events must be able, on average, to reproduce the statistical cusp occurrence previously determined from the satellite observations. In this paper, we predict the precipitation signatures which are associated with transient magnetopause reconnection, following recent observations of the dependence of dayside ionospheric convection on the orientation of the IMF. We then employ a simple model of the longitudinal motion of FTE signatures to show how such events can easily reproduce the local time distribution of cusp occurrence probabilities, as observed by low-altitude satellites. This is true even in the limit where the cusp is a series of discrete events. Furthermore, we investigate the existence of double cusp patches predicted by the simple model and show how these events may be identified in the data.
Abstract:
Cortical motor simulation supports the understanding of others' actions and intentions. This mechanism is thought to rely on the mirror neuron system (MNS), a brain network that is active both during action execution and observation. Indirect evidence suggests that alpha/beta suppression, an electroencephalographic (EEG) index of MNS activity, is modulated by reward. In this study we aimed to test the plasticity of the MNS by directly investigating the link between alpha/beta suppression and reward. Forty individuals from a general population sample took part in an evaluative conditioning experiment, in which different neutral faces were associated with high or low reward values. In the test phase, EEG was recorded while participants viewed video clips of happy expressions made by the conditioned faces. Alpha/beta suppression (identified using event-related desynchronisation of specific independent components) in response to rewarding faces was found to be greater than for non-rewarding faces. This result provides a mechanistic insight into the plasticity of the MNS and, more generally, into the role of reward in modulating physiological responses linked to empathy.