910 results for Event-based Model


Relevance: 90.00%

Abstract:

Gap junction coupling is ubiquitous in the brain, particularly between the dendritic trees of inhibitory interneurons. Such direct non-synaptic interaction allows for direct electrical communication between cells. Unlike spike-time driven synaptic neural network models, which are event-based, any model with gap junctions must necessarily involve a single-neuron model that can represent the shape of an action potential. Indeed, neurons communicating via gap junctions not only feel supra-threshold spikes but also experience, and respond to, sub-threshold voltage signals. In this chapter we show that the so-called absolute integrate-and-fire model is ideally suited to such studies. At the single-neuron level, voltage traces for the model may be obtained in closed form and are shown to mimic those of fast-spiking inhibitory neurons. Interestingly, in the presence of a slow spike-adaptation current the model is shown to support periodic bursting oscillations. For both tonic and bursting modes the phase response curve can be calculated in closed form. At the network level we focus on global gap junction coupling and show how to analyze the asynchronous firing state in large networks. Importantly, we are able to determine the emergence of non-trivial network rhythms due to strong coupling instabilities. To illustrate the use of our theoretical techniques (particularly the phase-density formalism used to determine stability), we focus on a spike-adaptation-induced transition from asynchronous tonic activity to synchronous bursting in a gap-junction-coupled network.
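
As an illustration of the model named above, here is a minimal Euler-stepped sketch of the absolute integrate-and-fire dynamics dv/dt = |v| + I with a hard threshold and reset; the parameter values are hypothetical and the chapter's closed-form analysis is not reproduced.

```python
import numpy as np

def simulate_aif(I=0.1, v_th=1.0, v_reset=0.2, dt=1e-3, t_max=50.0):
    """Euler integration of the absolute integrate-and-fire model,
    dv/dt = |v| + I, with a spike at v_th and reset to v_reset.
    Parameter values are illustrative only."""
    n = int(t_max / dt)
    v = np.empty(n)
    v[0] = v_reset
    spike_times = []
    for k in range(1, n):
        v[k] = v[k - 1] + dt * (abs(v[k - 1]) + I)
        if v[k] >= v_th:                 # threshold crossing: emit spike
            spike_times.append(k * dt)
            v[k] = v_reset               # hard reset
    return v, spike_times

v, spikes = simulate_aif()
print(f"{len(spikes)} spikes, mean ISI = {np.mean(np.diff(spikes)):.3f}")
```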

Relevance: 90.00%

Abstract:

This paper reports on continuing research into the modelling of an order-picking process within a Crossdocking distribution centre using simulation optimisation. The aim of this project is to optimise a discrete-event simulation model and to understand the factors that affect finding its optimal performance. Our initial investigation revealed that the precision of the selected simulation output performance measure and the number of replications required to evaluate the optimisation objective function through simulation influence the ability of the optimisation technique. We experimented with Common Random Numbers in order to improve the precision of our simulation output performance measure, and intended to use the number of replications utilised for this purpose as the initial number of replications for the optimisation of our Crossdocking distribution centre simulation model. Our results demonstrate that we can improve the precision of our selected simulation output performance measure using Common Random Numbers at various levels of replication. Furthermore, after optimising our Crossdocking distribution centre simulation model, we are able to achieve optimal performance using fewer simulation runs with the model that uses Common Random Numbers than with the model that does not.
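
The variance-reduction idea is easy to demonstrate in isolation. The toy order-picking model below is entirely hypothetical (it is not the paper's Crossdocking simulation); it only shows how reusing the same random number stream across the two configurations being compared shrinks the variance of the estimated difference:

```python
import random
import statistics

def pick_time(pickers, seed):
    """Toy order-picking makespan for a batch of 100 orders assigned
    to the least-loaded picker. Entirely illustrative."""
    rng = random.Random(seed)                      # one stream per replication
    jobs = [rng.expovariate(1.0) for _ in range(100)]
    loads = [0.0] * pickers
    for j in jobs:
        i = loads.index(min(loads))
        loads[i] += j
    return max(loads)

# Compare 5 vs 6 pickers over 30 replications:
diffs_crn = [pick_time(5, s) - pick_time(6, s) for s in range(30)]        # CRN
diffs_ind = [pick_time(5, s) - pick_time(6, s + 1000) for s in range(30)] # independent
print("variance of difference, CRN:        ", statistics.variance(diffs_crn))
print("variance of difference, independent:", statistics.variance(diffs_ind))
```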

Relevance: 90.00%

Abstract:

In this paper we study the effect of two distinct discrete delays on the dynamics of a Wilson-Cowan neural network. This activity-based model describes the dynamics of synaptically interacting excitatory and inhibitory neuronal populations. We discuss the interpretation of the delays in the language of neurobiology and show how they can contribute to the generation of network rhythms. First, we use linear stability theory to show how to destabilise a fixed point, leading to the onset of oscillatory behaviour. Next, we show, for the choice of a Heaviside nonlinearity for the firing rate, that such emergent oscillations can be either synchronous or anti-synchronous depending on whether inhibition or excitation dominates the network architecture. To probe the behaviour of smooth (sigmoidal) nonlinear firing rates we use a mixture of numerical bifurcation analysis and direct simulation, and uncover parameter windows that support chaotic behaviour. Finally, we comment on the role of delays in the generation of bursting oscillations and discuss natural extensions of the work in this paper.
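
For concreteness, a two-population Wilson-Cowan system with two discrete delays is often written in the following form (the placement of the delays and the signs of the couplings here are our assumption for illustration, not necessarily the paper's exact choice):

$$
\begin{aligned}
\dot{u}(t) &= -u(t) + f\big(w_{ee}\,u(t-\tau_1) - w_{ei}\,v(t-\tau_2) + P\big),\\
\dot{v}(t) &= -v(t) + f\big(w_{ie}\,u(t-\tau_1) - w_{ii}\,v(t-\tau_2) + Q\big),
\end{aligned}
$$

where $u$ and $v$ are the excitatory and inhibitory population activities, $f$ is a Heaviside or sigmoidal firing rate, the $w_{ab}$ are connection strengths, and $P$, $Q$ are external drives.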

Relevance: 90.00%

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks so that the operation of the real system is reproduced as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an "optimal" or "best" product batch schedule for a one-year period. Such a batch schedule may change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation, the "best" batch schedule, is the one that minimises the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model an event-based discrete-time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals/departures of tanker ships, the openings and closures of the loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance under different Head Terminal storage capacity configurations. For these alternative configurations we evaluate the effect of tanker-ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
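
A skeleton of the event-based time advance described above might look like the following (event names and the single handler are illustrative, not the actual model): the clock jumps directly from one state-changing event to the next rather than ticking in fixed increments.

```python
import heapq

class EventSimulator:
    """Minimal event-queue skeleton for an event-based discrete-time
    representation. Hypothetical sketch, not the pipeline model itself."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []          # entries: (time, seq, name, payload)
        self._seq = 0             # tie-breaker for simultaneous events

    def schedule(self, delay, name, payload=None):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, name, payload))
        self._seq += 1

    def run(self, t_end):
        while self._queue and self._queue[0][0] <= t_end:
            self.clock, _, name, payload = heapq.heappop(self._queue)
            print(f"t={self.clock:7.2f}  {name}")
            if name == "tanker_arrival":          # hypothetical handler
                self.schedule(2.0, "open_loading_valve")

sim = EventSimulator()
sim.schedule(5.0, "tanker_arrival")
sim.schedule(8.0, "train_arrival_delivery_terminal")
sim.run(t_end=24.0)
```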

Relevance: 90.00%

Abstract:

The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. To this end, all the elements and components of the energy conversion system are modeled numerically and combined to obtain a behavioral model of the whole system. Previously proposed high-frequency (HF) models of power converters are based on circuit models that account only for the parasitic inner parameters of the power devices and the connections between components. This dissertation aims to obtain appropriate physics-based models for power conversion systems, which not only represent the steady-state behavior of the components but also predict their high-frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables us to accurately develop components such as effective EMI filters, switching algorithms and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable-speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high-power applications with the ability to overcome the blocking-voltage limitations of available power semiconductor devices. Another advantage is the selection of the best-matching topology, with an inherent reduction of switching losses that can be exploited to improve overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints. This includes physical characteristics such as decreasing the size and weight of the package, optimized interactions with neighboring components and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions, and the design of attenuation measures and enclosures.

Relevance: 90.00%

Abstract:

With the eye-catching advances in sensing technologies, smart water networks have been attracting immense research interest in recent years. One of the most important tasks in smart water network management is the reduction of water loss (such as leaks and bursts in a pipe network). In this paper, we propose an efficient scheme to position water loss events based on water network topology. The state-of-the-art approach to this problem utilizes only limited topology information of the water network, namely a single shortest path between two sensor locations. Consequently, the accuracy of positioning water loss events is less than desirable. To resolve this problem, our scheme consists of two key ingredients. First, we design a novel graph topology-based measure, which can recursively quantify the "average distances" for all pairs of sensor locations simultaneously in a water network. This measure substantially improves the accuracy of our positioning strategy by capturing the entire water network topology between every two sensor locations, without any sacrifice of computational efficiency. Then, we devise an efficient search algorithm that combines the "average distances" with the differences in the arrival times of the pressure variations detected at sensor locations. Experimental evaluations on a real-world test bed (WaterWiSe@SG) demonstrate that our proposed positioning scheme can identify water loss events more accurately than the best-known competitor.
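
A much-simplified sketch of the search step: rank candidate nodes by how well the difference in travel distance to two sensors explains the measured difference in pressure-wave arrival times. Plain shortest-path distance is used here as a stand-in for the paper's recursive "average distance" measure, and the toy network is invented.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path lengths from src in a weighted pipe graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def locate_burst(adj, s1, s2, dt_measured, wave_speed):
    """Pick the node whose distance difference to the two sensors best
    matches wave_speed * (arrival-time difference)."""
    d1, d2 = dijkstra(adj, s1), dijkstra(adj, s2)
    return min(adj, key=lambda v: abs((d1[v] - d2[v]) - wave_speed * dt_measured))

# Toy 4-node network: edges as (neighbour, pipe length in m)
adj = {
    "A": [("B", 100.0), ("C", 150.0)],
    "B": [("A", 100.0), ("D", 120.0)],
    "C": [("A", 150.0), ("D", 80.0)],
    "D": [("B", 120.0), ("C", 80.0)],
}
print(locate_burst(adj, "A", "D", dt_measured=-0.02, wave_speed=1000.0))  # -> B
```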

Relevance: 90.00%

Abstract:

This research focuses on the modes of production and reception of theatricality in contemporary performative practices with aesthetic aims. In particular, it investigates practices that, within performatic ecosystems, design action by resorting to strategies and devices that theatricalise the event through immersive, co-participatory models, intervening in the semio-cognitive mechanisms of the spectator's interpretation. The concept of performatic ecosystems makes it possible to pick out the different semiotic formations that emerge from the performative continuum of the semiosphere, capturing the ecological and evolutionary relationships established diachronically among theatrical forms. It is above all the transformations that are grasped, restoring to semiotic analysis a dynamic image of the performatic arts, rooted in culture and society, and of the ways in which the basic mechanisms of theatricality take shape. Adopting an ethnographic, ecological and cognitive approach, the research addresses corporeality and regimes of presence, introducing into the relational analysis the concept of emplacement as a complement to the notion of embodiment. An autopoietic model of enunciation as an act of showing is also developed, built on the metaphor of "conversation". In the ecology of the performatic environment, an "interactive field" is created between actor and spectator, within which the theatrical enunciation takes place. Through case studies, it is shown how immersive co-participatory experiences dismantle and reconfigure the set of norms and usages naturalised in the Western theatrical tradition of drama. Finally, the relationship between frontality and immersivity is conceived not as an opposition between contraries but as a relation of continuity, a constant of performatic discourse subject to multiform gradations. The relationship between actor and spectator is an interaction, a dialogue, played out not on the frontality/immersivity axis but on that of interactivity/non-interactivity, from whose articulation emerge the different and changing theatrical forms that populate, and will populate, performatic ecosystems.

Relevance: 80.00%

Abstract:

A recent estimate of CO₂ outgassing from Amazonian wetlands suggests that an order of magnitude more CO₂ leaves rivers through gas exchange with the atmosphere than is exported to the ocean as organic plus inorganic carbon. However, the contribution of smaller rivers is still poorly understood, mainly because of limitations in mapping their spatial extent. Considering that the largest extension of the Amazon River network is composed of small rivers, the authors' objective was to elucidate their role in air-water CO₂ exchange by developing a geographic information system (GIS)-based model to calculate the surface area covered by rivers with channels less than 100 m wide, combined with estimated CO₂ outgassing rates at the Ji-Parana River basin, in the western Amazon. Estimated CO₂ outgassing was the main carbon export pathway for this river basin, totaling 289 Gg C yr⁻¹, about 2.4 times the amount of carbon exported as dissolved inorganic carbon (121 Gg C yr⁻¹) and 1.6 times the dissolved organic carbon export (185 Gg C yr⁻¹). The relationships established here between drainage area and channel width provide a new model for determining small river surface area, allowing regional extrapolations of air-water gas exchange. Applying this model to the entire Amazon River network of channels less than 100 m wide (third to fifth order), the authors calculate that the surface area of small rivers is 0.3 ± 0.05 million km², potentially evading 170 ± 42 Tg C yr⁻¹ to the atmosphere as CO₂. Therefore, these ecosystems play an important role in the regional carbon balance.
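
The scaling logic reduces to a short calculation: channel width as a power law of drainage area, reach surface area as width times length, and evasion as area times a gas-exchange flux. The coefficients and the flux below are placeholders, not the fitted values of the paper.

```python
# Hypothetical coefficients, chosen only to make the arithmetic concrete:
a, b = 1.0, 0.5      # width (m) = a * drainage_area_km2 ** b
flux = 0.5           # evasion flux, kg C m^-2 yr^-1

def reach_outgassing(drainage_area_km2, reach_length_m):
    """CO2 evasion from one river reach: surface area times flux."""
    width_m = a * drainage_area_km2 ** b
    area_m2 = width_m * reach_length_m
    return area_m2 * flux            # kg C yr^-1

print(f"{reach_outgassing(5000.0, 10_000.0):,.0f} kg C/yr for one 10 km reach")
```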

Relevance: 80.00%

Abstract:

In this paper, we describe a model of the human visual system (HVS) based on the wavelet transform. This model is largely based on a previously proposed model, but has a number of modifications that make it more amenable to potential integration into a wavelet-based image compression scheme. These modifications include the use of a separable wavelet transform instead of the cortex transform, the application of a wavelet contrast sensitivity function (CSF), and a simplified definition of subband contrast that allows us to predict noise visibility directly from wavelet coefficients. Initially, we outline the luminance, frequency, and masking sensitivities of the HVS and discuss how these can be incorporated into the wavelet transform. We then outline a number of limitations of the wavelet transform as a model of the HVS, namely the lack of translational invariance and poor orientation sensitivity. In order to investigate the efficacy of this wavelet-based model, a wavelet visible difference predictor (WVDP) is described. The WVDP is then used to predict visible differences between an original and a compressed (or noisy) image. Results are presented to emphasize the limitations of commonly used measures of image quality and to demonstrate the performance of the WVDP. The paper concludes with suggestions on how the WVDP can be used to determine a visually optimal quantization strategy for wavelet coefficients and to produce a quantitative measure of image quality.
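
A crude sketch in the spirit of the WVDP (not the published predictor): wavelet-decompose both images, weight per-subband coefficient differences by a wavelet-domain CSF, and pool with a Minkowski sum. The CSF weights and pooling exponent are placeholders.

```python
import numpy as np
import pywt

def wvdp_sketch(original, distorted, wavelet="db4", levels=4, beta=4.0):
    """Toy visible-difference score: CSF-weighted differences of detail
    coefficients, pooled over subbands. Weights are hypothetical."""
    csf = {1: 0.4, 2: 0.8, 3: 1.0, 4: 0.7}   # key 1 = finest scale (invented)
    c_ref = pywt.wavedec2(original, wavelet, level=levels)
    c_dst = pywt.wavedec2(distorted, wavelet, level=levels)
    score = 0.0
    for lvl in range(1, levels + 1):          # detail subbands only
        for ref, dst in zip(c_ref[lvl], c_dst[lvl]):
            err = csf[levels + 1 - lvl] * np.abs(ref - dst)
            score += np.sum(err ** beta)
    return score ** (1.0 / beta)

rng = np.random.default_rng(0)
img = rng.random((256, 256))
noisy = img + 0.05 * rng.standard_normal(img.shape)
print("visible difference score:", wvdp_sketch(img, noisy))
```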

Relevance: 80.00%

Abstract:

Recent El Niño events have stimulated interest in the development of modeling techniques to forecast extremes of climate and related health events. Previous studies have documented associations between specific climate variables (particularly temperature and rainfall) and outbreaks of arboviral disease. In some countries, such diseases are sensitive to El Niño. Here we describe a climate-based model for the prediction of Ross River virus epidemics in Australia. From a literature search and data on case notifications, we determined the years in which there were epidemics of Ross River virus in southern Australia between 1928 and 1998. Predictor variables were monthly Southern Oscillation Index values for the year of an epidemic or lagged by one year. We found that in the southeastern states, epidemic years were well predicted by monthly Southern Oscillation Index values in January and September of the previous year. The model forecasts a high probability of epidemic Ross River virus in the southern states of Australia in 1999. We conclude that epidemics of arboviral disease can, at least in principle, be predicted on the basis of climate relationships.
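
The abstract does not spell out the statistical machinery, but a model of this general shape can be sketched as a logistic regression on the two SOI predictors; the data below are fabricated placeholders purely to show the form, not the paper's observations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical table: January and September SOI in the preceding year,
# and whether an epidemic followed (1) or not (0). Values invented.
soi = np.array([[ 5.0,  12.0], [ -8.0, -15.0], [10.0,   8.0],
                [-12.0, -6.0], [  3.0,  14.0], [-9.0, -11.0]])
epidemic = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(soi, epidemic)
print("P(epidemic | Jan SOI=7, Sep SOI=10) =",
      model.predict_proba([[7.0, 10.0]])[0, 1])
```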

Relevance: 80.00%

Abstract:

Mutations in PKD2 are responsible for approximately 15% of autosomal dominant polycystic kidney disease cases. This gene encodes polycystin-2, a calcium-permeable cation channel whose C-terminal intracytosolic tail (PC2t) plays an important role in its interaction with a number of different proteins. In the present study, we have comprehensively evaluated the macromolecular assembly of the PC2t homooligomer using a series of biophysical and biochemical analyses. Our studies, based on a new delimitation of PC2t, have revealed that it is capable of assembling as a homotetramer independently of any other portion of the molecule. Our data support this tetrameric arrangement in both the presence and absence of calcium. Molecular dynamics simulations performed with a modified all-atom structure-based model supported the PC2t tetrameric assembly, as well as showing how different populations are distributed in solution. The simulations demonstrated, indeed, that the best-scored structures are the ones compatible with a fourfold oligomeric state. These findings clarify the structural properties of the PC2t domain and strongly support a homotetrameric assembly of PC2.

Relevance: 80.00%

Abstract:

Fuzzy Bayesian tests were performed to evaluate whether the mothers' seroprevalence and the children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which allows an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, constituting the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared with a classical one, in which the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation normally seen as competitive.
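
A zero-order Takagi-Sugeno-Kang model with constant consequents, as described, can be sketched in a few lines; the membership breakpoints, rule constants and the restriction to two of the four inputs are all hypothetical.

```python
def ramp_up(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def ramp_down(x, lo, hi):
    """Membership falling linearly from 1 at lo to 0 at hi."""
    return 1.0 - ramp_up(x, lo, hi)

def tsk_coverage(seroprevalence, seroconversion):
    """Zero-order TSK sketch: rule firing strengths (min t-norm) weight
    constant consequents; output is recommended vaccine coverage (%)."""
    lo_p, hi_p = ramp_down(seroprevalence, 0.4, 0.8), ramp_up(seroprevalence, 0.4, 0.8)
    lo_c, hi_c = ramp_down(seroconversion, 0.5, 0.9), ramp_up(seroconversion, 0.5, 0.9)
    rules = [(min(lo_p, lo_c), 99.0),   # low prevalence, low conversion
             (min(lo_p, hi_c), 95.0),
             (min(hi_p, lo_c), 90.0),
             (min(hi_p, hi_c), 80.0)]
    total = sum(w for w, _ in rules)
    return sum(w * y for w, y in rules) / total if total else float("nan")

print(f"recommended coverage: {tsk_coverage(0.55, 0.80):.1f}%")
```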

Relevance: 80.00%

Abstract:

Problems associated with the stickiness of food in processing and storage practices, along with their causative factors, are outlined. Fundamental mechanisms that explain why and how food products become sticky are discussed. Methods currently in use for characterizing and overcoming stickiness problems in food processing and storage operations are described. The use of a glass transition temperature-based model, which provides a rational basis for understanding and characterizing the stickiness of many food products, is highlighted.
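
One concrete instance of such a Tg-based model (our choice for illustration; the chapter itself reviews the general approach) is the Gordon-Taylor equation for the glass transition of a solids-water mixture, with stickiness typically expected once a product is held roughly 10-20 °C above Tg:

```python
def gordon_taylor(w_water, tg_solids_c, k, tg_water_c=-135.0):
    """Gordon-Taylor estimate of mixture glass transition (deg C).
    w_water: mass fraction of water; k: fitted constant. The example
    values below are literature-style placeholders, not the chapter's."""
    w_s = 1.0 - w_water
    return (w_s * tg_solids_c + k * w_water * tg_water_c) / (w_s + k * w_water)

# e.g. a sucrose-rich powder at 5% moisture (Tg ~ 62 C dry, k ~ 4.7):
tg = gordon_taylor(w_water=0.05, tg_solids_c=62.0, k=4.7)
print(f"Tg = {tg:.1f} C; sticky-point roughly Tg + 10-20 C")
```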

Relevance: 80.00%

Abstract:

Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation between the rate of seismic energy release and both the total root-mean-square stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed; it indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
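
A toy quasi-static slider chain in the Burridge-Knopoff spirit (not the calibrated models of the paper) shows the basic mechanics: slow uniform loading punctuated by avalanche-like slip events whose sizes can be tracked.

```python
import numpy as np

def bk_quasistatic(n=64, k_c=1.0, k_p=0.1, f_s=1.0, steps=2000, seed=0):
    """Blocks coupled to neighbours (k_c) and to a slowly driven plate
    (k_p); a block slips toward local force equilibrium when the force
    on it exceeds the static friction threshold f_s. Returns slip-event
    sizes. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.1, 0.1, n)        # block positions
    plate = 0.0
    events = []
    for _ in range(steps):
        plate += 0.01                     # slow tectonic loading
        moved, size = True, 0
        while moved:                      # relax avalanche to completion
            left = np.roll(x, 1);  left[0] = x[0]      # free boundaries
            right = np.roll(x, -1); right[-1] = x[-1]
            force = k_c * (left + right - 2 * x) + k_p * (plate - x)
            slip = np.abs(force) > f_s
            moved = bool(slip.any())
            x[slip] += force[slip] / (2 * k_c + k_p)   # move to equilibrium
            size += int(slip.sum())
        if size:
            events.append(size)
    return events

events = bk_quasistatic()
print(f"{len(events)} events, largest involved {max(events)} block slips")
```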

Relevance: 80.00%

Abstract:

The particle-based Lattice Solid Model (LSM) was developed to provide a basis for studying the physics of rocks and the nonlinear dynamics of earthquakes (MORA and PLACE, 1994; PLACE and MORA, 1999). A new modular and flexible LSM approach has been developed that allows different microphysics to be easily included in or removed from the model. The approach provides a virtual laboratory where numerical experiments can easily be set up and all measurable quantities visualised. The proposed approach provides a means to simulate complex phenomena such as fracturing or localisation processes, and enables the effect of different microphysics on macroscopic behaviour to be studied. The initial 2-D model is extended to allow three-dimensional simulations to be performed and particles of different sizes to be specified. Numerical biaxial compression experiments under different confining pressures are used to calibrate the model. By tuning the different microscopic parameters (such as the coefficient of friction, microscopic strength and the distribution of grain sizes), the macroscopic strength of the material can be adjusted to agree with laboratory experiments, and the orientation of fractures is consistent with the theoretical value predicted from the Mohr-Coulomb diagram. Simulations indicate that 3-D numerical models have different macroscopic properties than 2-D ones and, hence, the model must be recalibrated for 3-D simulations. These numerical experiments illustrate that the new approach is capable of simulating typical rock fracture behaviour. The new model provides a basis to investigate nucleation, rupture and slip-pulse propagation in complex fault zones, without the previous model's limitations of a regular low-level surface geometry and a restriction to two dimensions.
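
In minimal form the bonded-particle idea can be sketched as below: linear elastic bonds between particle pairs, with a bond removed (a micro-fracture) once it stretches beyond a breaking separation. This conveys the general flavour only; the LSM adds frictional contacts, differing particle sizes and full time integration.

```python
import numpy as np

def bonded_forces(pos, bonds, k=1.0, r0=1.0, r_break=1.2):
    """One force evaluation for a toy bonded-particle model: linear
    springs between bonded pairs, bond removed past r_break. All
    parameter values are illustrative."""
    forces = np.zeros_like(pos)
    surviving = []
    for i, j in bonds:
        d = pos[j] - pos[i]
        r = np.linalg.norm(d)
        if r > r_break:
            continue                      # bond broken: micro-fracture
        f = k * (r - r0) * d / r          # linear elastic bond force
        forces[i] += f
        forces[j] -= f
        surviving.append((i, j))
    return forces, surviving

# Three particles in a line, the right-hand bond slightly stretched:
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.15, 0.0]])
bonds = [(0, 1), (1, 2)]
f, bonds = bonded_forces(pos, bonds)
print("forces:\n", f, "\nsurviving bonds:", bonds)
```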