921 results for Stochastic processes -- Mathematical models
Abstract:
A central difficulty in modeling epileptogenesis using biologically plausible computational and mathematical models is not the production of activity characteristic of a seizure, but rather producing it in response to a specific and quantifiable physiologic change or pathologic abnormality. This is particularly problematic given that the pathophysiological genesis of most epilepsies is largely unknown. However, several volatile general anesthetic agents, whose principal targets of action are quantitatively well characterized, are also known to be proconvulsant. The authors describe recent approaches to theoretically describing the electroencephalographic effects of volatile general anesthetic agents, which may provide important insights into the physiologic mechanisms that underpin seizure initiation.
Abstract:
In this study, we assess changes of aerosol optical depth (AOD) and direct radiative forcing (DRF) in response to the reduction of anthropogenic emissions in four major pollution regions in the Northern Hemisphere by using results from nine global models in the framework of the Hemispheric Transport of Air Pollution (HTAP). DRF at top of atmosphere (TOA) and surface is estimated based on AOD results from the HTAP models and AOD-normalized DRF (NDRF) from a chemical transport model. The multimodel results show that, on average, a 20% reduction of anthropogenic emissions in North America, Europe, East Asia, and South Asia lowers the global mean AOD (all-sky TOA DRF) by 9.2% (9.0%), 3.5% (3.0%), and 9.4% (10.0%) for sulfate, particulate organic matter (POM), and black carbon (BC), respectively. Global annual average TOA all-sky forcing efficiency relative to particle or gaseous precursor emissions from the four regions (expressed as multimodel mean ± one standard deviation) is −3.5 ± 0.8, −4.0 ± 1.7, and 29.5 ± 18.1 mW m⁻² per Tg for sulfate (relative to SO2), POM, and BC, respectively. The impacts of the regional emission reductions on AOD and DRF extend well beyond the source regions because of intercontinental transport (ICT). On an annual basis, ICT accounts for 11 ± 5% to 31 ± 9% of AOD and DRF in a receptor region at continental or subcontinental scale, with domestic emissions accounting for the remainder, depending on regions and species. For sulfate AOD, the largest ICT contribution of 31 ± 9% occurs in South Asia, which is dominated by the emissions from Europe. For BC AOD, the largest ICT contribution of 28 ± 18% occurs in North America, which is dominated by the emissions from East Asia. The large spreads among models highlight the need to improve aerosol processes in models, and to evaluate and constrain models with observations.
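The quantities in the abstract above are linked as follows (a schematic in our own notation, not the paper's equations): the DRF fields are obtained by scaling the HTAP AOD with the AOD-normalized forcing, and the regional forcing efficiency normalizes the forcing response by the emission perturbation:

```latex
\mathrm{DRF} \;=\; \mathrm{NDRF}\times\mathrm{AOD},
\qquad
E_r \;=\; \frac{\Delta\mathrm{DRF}}{\Delta Q_r},
```

where $\Delta Q_r$ is the change (in Tg) in emissions of the relevant species or precursor from region $r$ under the 20% reduction.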
Abstract:
The inhibitory effects of toxin-producing phytoplankton (TPP) on zooplankton modulate the dynamics of marine plankton. In this article, we employ simple mathematical models to compare theoretically the dynamics of phytoplankton–zooplankton interaction in situations where the TPP are present with those where TPP are absent. We consider two sets of three-component interaction models: one that does not include the effect of TPP and one that does. The negative effects of TPP on zooplankton are described by a non-linear interaction term. Extensive theoretical analyses of the models have been performed to understand the qualitative behaviour of the model systems around all possible equilibria. The results of local-stability analysis and numerical simulations demonstrate that the two model systems differ qualitatively with regard to oscillations and stability. The model system that does not include TPP is asymptotically stable around the coexisting equilibria, whereas the system that includes TPP oscillates for a range of parametric values associated with the toxin-inhibition rate and competition coefficients. Our analysis suggests that the qualitative dynamics of the plankton–zooplankton interactions are very likely to be altered by the presence of TPP species, and therefore the effects of TPP should be considered carefully when modelling plankton dynamics.
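As a concrete illustration, a minimal two-component system of the kind analysed above can be integrated numerically. The functional forms below (Holling type II grazing and a saturating toxin-inhibition loss term for zooplankton) and all parameter values are illustrative assumptions, not the paper's exact equations:

```python
# Minimal sketch of a phytoplankton (P) - zooplankton (Z) interaction with a
# nonlinear toxin-inhibition term, in the spirit of the models above. All
# functional forms and parameter values are illustrative assumptions.

def simulate(theta, p0=0.5, z0=0.3, dt=0.01, steps=20000):
    r, K = 1.0, 2.0          # phytoplankton growth rate and carrying capacity
    alpha, gamma = 1.2, 0.6  # grazing rate and half-saturation constant
    beta, d = 0.8, 0.3       # conversion efficiency and zooplankton mortality
    mu = 0.5                 # half-saturation of the toxin term
    p, z = p0, z0
    traj = []
    for _ in range(steps):
        grazing = alpha * p * z / (gamma + p)
        toxin = theta * p * z / (mu + p)   # toxin-inhibition loss for Z
        dp = r * p * (1 - p / K) - grazing
        dz = beta * p * z / (gamma + p) - d * z - toxin
        p = max(p + dt * dp, 0.0)          # forward-Euler step, clipped at 0
        z = max(z + dt * dz, 0.0)
        traj.append((p, z))
    return traj
```

Varying `theta` (the toxin-inhibition rate) is the kind of experiment that moves such a system between stable coexistence and oscillation.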
Abstract:
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition of dividing recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple-action HMMs within the stochastic grammar that models complex actions. This unified approach enables the higher activity layers to influence the recognition of simple actions more effectively, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour in a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their constituent simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method experiences only a limited decrease in performance when moderate amounts of noise are added.
Abstract:
Let X be a locally compact Polish space. A random measure on X is a probability measure on the space of all (nonnegative) Radon measures on X. Denote by K(X) the cone of all Radon measures η on X which are of the form η =
Abstract:
Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished through actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has instigated the development of sophisticated compensation and control methods in order to achieve the desired performance, be that accurate motion tracking, fast movement, or contact with the environment. This thesis presents a dual drive actuator design that is capable of physically linearising friction and hence eliminating the need for complex compensation algorithms. A number of mathematical models are derived that allow for the simulation of the actuator dynamics. The actuator may be constructed using geared dc motors, in which case the benefits of torque magnification are retained whilst the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by differencing the two drive positions. Owing to this and the linearised nature of friction, the actuator is well suited for low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to its design. When idle, the device uses power whilst many other, single drive actuators do not. Also, the complexity of the models means that parameterisation is difficult. Management of start-up conditions still poses a challenge.
Abstract:
We establish a general framework for a class of multidimensional stochastic processes over [0,1] under which with probability one, the signature (the collection of iterated path integrals in the sense of rough paths) is well-defined and determines the sample paths of the process up to reparametrization. In particular, by using the Malliavin calculus we show that our method applies to a class of Gaussian processes including fractional Brownian motion with Hurst parameter H>1/4, the Ornstein–Uhlenbeck process and the Brownian bridge.
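For a path $X$ over $[0,1]$, the signature referred to above is, schematically (our notation, following the rough-path literature):

```latex
S(X)_{0,1} \;=\; \Bigl(\,1,\;
\int_{0<t<1} \mathrm{d}X_t,\;
\int_{0<t_1<t_2<1} \mathrm{d}X_{t_1}\otimes \mathrm{d}X_{t_2},\;
\int_{0<t_1<t_2<t_3<1} \mathrm{d}X_{t_1}\otimes \mathrm{d}X_{t_2}\otimes \mathrm{d}X_{t_3},\;\ldots\Bigr),
```

and the result asserts that, almost surely, this collection of iterated integrals determines the sample path up to reparametrization.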
Abstract:
The deterpenation of bergamot essential oil can be performed by liquid–liquid extraction using hydrous ethanol as the solvent. A ternary mixture composed of 1-methyl-4-prop-1-en-2-yl-cyclohexene (limonene), 3,7-dimethylocta-1,6-dien-3-yl-acetate (linalyl acetate), and 3,7-dimethylocta-1,6-dien-3-ol (linalool), three major compounds commonly found in bergamot oil, was used to simulate this essential oil. Liquid–liquid equilibrium data were experimentally determined for systems containing essential oil compounds, ethanol, and water at 298.2 K and are reported in this paper. The experimental data were correlated using the NRTL and UNIQUAC models, and the mean deviations between calculated and experimental data were lower than 0.0062 in all systems, indicating the good descriptive quality of the molecular models. To verify the effect of the water mass fraction in the solvent and the linalool mass fraction in the terpene phase on the distribution coefficients of the essential oil compounds, nonlinear regression analyses were performed, yielding mathematical models with correlation coefficient values higher than 0.99. The results show that as the water content in the solvent phase increased, the k value decreased, regardless of the type of compound studied. Conversely, as the linalool content increased, the distribution coefficients of the hydrocarbon terpene and the ester also increased. However, the linalool distribution coefficient values were negatively affected when the terpene alcohol content increased in the terpene phase.
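The distribution coefficients analysed here can be written, in the usual mass-fraction form for liquid–liquid equilibrium work (notation ours):

```latex
k_i \;=\; \frac{w_i^{\mathrm{SP}}}{w_i^{\mathrm{TP}}},
```

where $w_i^{\mathrm{SP}}$ and $w_i^{\mathrm{TP}}$ are the mass fractions of compound $i$ in the solvent (ethanol-rich) phase and in the terpene phase, respectively; deterpenation performance then hinges on the ratio between the $k$ values of the oxygenated compounds and that of the hydrocarbon terpene.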
Abstract:
Organic aerosol (OA) in the atmosphere consists of a multitude of organic species which are either directly emitted or the products of a variety of chemical reactions. This complexity challenges our ability to explicitly characterize the chemical composition of these particles. We find that the bulk composition of OA from a variety of environments (laboratory and field) occupies a narrow range in the space of a Van Krevelen diagram (H:C versus O:C), characterized by a slope of approximately −1. The data show that atmospheric aging, involving processes such as volatilization, oxidation, mixing of air masses or condensation of further products, is consistent with movement along this line, producing a more oxidized aerosol. This finding has implications for our understanding of the evolution of atmospheric OA and the representation of these processes in models. Citation: Heald, C. L., J. H. Kroll, J. L. Jimenez, K. S. Docherty, P. F. DeCarlo, A. C. Aiken, Q. Chen, S. T. Martin, D. K. Farmer, and P. Artaxo (2010), A simplified description of the evolution of organic aerosol composition in the atmosphere, Geophys. Res. Lett., 37, L08803, doi: 10.1029/2010GL042737.
Abstract:
A statistical data analysis methodology was developed to evaluate the field emission properties of many samples of copper oxide nanostructured field emitters. This analysis was largely done in terms of Seppen-Katamuki (SK) charts, field strength and emission current. Some physical and mathematical models were derived to describe the effect of small electric field perturbations in the Fowler-Nordheim (F-N) equation, and then to explain the trend of the data represented in the SK charts. The field enhancement factor and the emission area parameters were shown to be very sensitive to variations in the electric field for most of the samples. We have found that the anode-cathode distance is critical in the field emission characterization of samples having a non-rigid nanostructure.
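For reference, the Fowler-Nordheim relation that such a perturbation analysis starts from can be written in a common simplified form (this exact parametrization is our assumption, with a and b the usual F-N constants):

```latex
I \;=\; a\,S\,\frac{\beta^{2}V^{2}}{\phi\,d^{2}}\,
\exp\!\left(-\,b\,\frac{\phi^{3/2}\,d}{\beta\,V}\right),
```

where $S$ is the emission area, $\beta$ the field enhancement factor, $\phi$ the work function, $V$ the applied voltage and $d$ the anode-cathode distance; SK charts track how the slope and intercept of the $\ln(I/V^{2})$ versus $1/V$ plot vary across measurements, which is why they are sensitive to $\beta$, $S$ and $d$.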
Abstract:
We revisit the problem of an otherwise classical particle immersed in the zero-point radiation field, with the purpose of tracing the origin of the nonlocality characteristic of Schrödinger's equation. The Fokker-Planck-type equation in the particle's phase space leads to an infinite hierarchy of equations in configuration space. In the radiationless limit the first two equations decouple from the rest. The first is the continuity equation; the second, for the particle flux, contains a nonlocal term due to the momentum fluctuations impressed by the field. These equations are shown to lead to Schrödinger's equation. Nonlocality (obtained here for the one-particle system) thus appears as a property of the description, not of Nature.
Abstract:
We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by using a convenient set of basis functions in which to expand our variables. This requires approximating the spike-train four-point functions by combinations of two-point functions, analogous to the relations that would hold for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution of the second-order kernels to stimulus reconstruction, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
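The Gaussian factorization invoked for the four-point functions is Isserlis' (Wick's) theorem: for zero-mean jointly Gaussian variables, E[x1 x2 x3 x4] = C12 C34 + C13 C24 + C14 C23. A quick Monte Carlo sanity check with an arbitrary toy covariance (our own construction, unrelated to the fly data):

```python
# Monte Carlo check of the Gaussian (Isserlis/Wick) factorization used above:
# for zero-mean jointly Gaussian x1..x4,
#   E[x1 x2 x3 x4] = C12*C34 + C13*C24 + C14*C23.
# The covariance structure below is an arbitrary toy choice.
import random

random.seed(42)

# Build x_i as fixed linear combinations of iid standard normals, so that
# C_ij = sum_k L[i][k] * L[j][k] is known exactly.
L = [[1.0, 0.0, 0.0],
     [0.5, 0.8, 0.0],
     [0.2, 0.3, 0.9],
     [0.4, 0.1, 0.5]]

def cov(i, j):
    return sum(L[i][k] * L[j][k] for k in range(3))

n = 100_000
acc = 0.0
for _ in range(n):
    z = [random.gauss(0.0, 1.0) for _ in range(3)]
    x = [sum(L[i][k] * z[k] for k in range(3)) for i in range(4)]
    acc += x[0] * x[1] * x[2] * x[3]
mc_estimate = acc / n

wick = cov(0, 1) * cov(2, 3) + cov(0, 2) * cov(1, 3) + cov(0, 3) * cov(1, 2)
```

For truly Gaussian processes the identity is exact; for spike trains it is the approximation whose quality the paper tests empirically.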
Abstract:
We study random walks systems on Z whose general description follows. At time zero, there is a number N >= 1 of particles at each vertex of ℕ, all being inactive, except for those placed at vertex one. Each active particle performs a simple random walk on Z and, up to the time it dies, it activates all inactive particles that it meets along its way. An active particle dies at the instant it reaches a certain fixed total of jumps (L >= 1) without activating any particle, so that its lifetime depends strongly on the past of the process. We investigate how the probability of survival of the process depends on L and on the jumping probabilities of the active particles.
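A finite truncation of this process can be explored by simulation. The sketch below fixes N = 1 particle per vertex, truncates the inactive particles to vertices {2, ..., m}, and treats reaching the step cap (or activating every particle) as a proxy for survival; all of these choices are simplifications of the infinite model, not the paper's construction:

```python
# Monte Carlo sketch of the random walks system: active particles perform
# simple random walks on Z, wake sleeping particles they step onto, and die
# after L consecutive jumps without activating anyone. Truncation to m
# sleeping particles and the step cap are simplifications of the model.
import random

def survives(L, p_right=0.5, m=30, max_steps=1000, rng=random):
    inactive = set(range(2, m + 1))   # one sleeping particle per vertex 2..m
    active = [[1, 0]]                 # [position, jumps since last activation]
    for _ in range(max_steps):
        if not active:
            return False              # every active particle has died
        if not inactive:
            return True               # all particles woken: survival proxy
        nxt = []
        for pos, idle in active:
            pos += 1 if rng.random() < p_right else -1
            if pos in inactive:       # wake the sleeping particle at pos
                inactive.discard(pos)
                nxt.append([pos, 0])
                idle = 0              # activation resets the jump counter
            else:
                idle += 1
            if idle < L:
                nxt.append([pos, idle])   # particle stays alive
        active = nxt
    return bool(active)               # still alive at the horizon

def survival_probability(L, trials=100):
    rng = random.Random(0)
    return sum(survives(L, rng=rng) for _ in range(trials)) / trials
```

Sweeping `L` and `p_right` in this sketch mirrors the dependence the paper studies analytically.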
Abstract:
Drinking water utilities in urban areas are focused on finding smart solutions to new challenges in their real-time operation arising from limited water resources, intensive energy requirements, a growing population, a costly and ageing infrastructure, increasingly stringent regulations, and increased attention to the environmental impact of water use. Such challenges force water managers to monitor and control not only water supply and distribution, but also consumer demand. This paper presents and discusses novel methodologies and procedures towards an integrated water resource management system based on advanced automation and telecommunication ICT for substantially improving the efficiency of drinking water networks (DWN) in terms of water use, energy consumption, water loss minimization, and water quality guarantees. In particular, the paper addresses the first results of the European project EFFINET (FP7-ICT2011-8-318556) devoted to the monitoring and control of the DWN in Barcelona (Spain). Results are split in two levels according to different management objectives: (i) the monitoring level is concerned with all the aspects involved in observing the current state of a system and detecting/diagnosing abnormal situations. It is achieved through sensors and communications technology, together with mathematical models; (ii) the control level is concerned with computing the most suitable admissible control strategies for network actuators so as to optimize a given set of operational goals related to the performance of the overall system. This level covers network control (optimal management of water and energy) and demand management (smart metering, efficient supply).
Considering the Barcelona DWN as the case study will make it possible to demonstrate the general applicability of the proposed integrated ICT solutions and their effectiveness in managing DWNs, with considerable electricity cost savings and reduced water loss while guaranteeing citizens the high European standards of water quality.
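To make the control level concrete, here is a deliberately small toy: scheduling pumping into a storage tank over one day so that hourly demand is always met, preferring cheap electricity hours. This is a greedy illustration of the kind of objective involved, not the EFFINET optimization; all numbers are invented:

```python
# Toy illustration of the control level: schedule pumping into a storage tank
# over 24 h so that hourly demand is always met, preferring cheap electricity
# hours. A greedy sketch with invented numbers, not the EFFINET algorithm.

def schedule_pumping(demand, price, capacity, pump_rate, initial=0.0):
    """Pump at full rate in below-median-price hours (up to tank capacity),
    and pump on demand otherwise. Returns hourly pumped volumes and levels."""
    median = sorted(price)[len(price) // 2]
    pump, levels = [], []
    level = initial
    for h in range(len(demand)):
        x = pump_rate if price[h] <= median else 0.0
        x = min(x, capacity - level)          # never overflow the tank
        if level + x < demand[h]:             # forced pumping to meet demand
            x = min(pump_rate, demand[h] - level)
        level = level + x - demand[h]
        pump.append(x)
        levels.append(level)
    return pump, levels

# Invented 24-hour profiles: night-time electricity is cheaper.
demand = [2, 2, 2, 2, 3, 4, 5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 4, 5, 5, 4, 3, 2, 2, 2]
price  = [1, 1, 1, 1, 1, 2, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1]
```

A real DWN controller replaces this greedy rule with a network-wide optimization over many tanks, pumps and quality constraints, but the trade-off it encodes (energy cost versus storage headroom versus demand satisfaction) is the same.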
Abstract:
Quantifying precipitation is made difficult by the extreme randomness of the phenomenon in nature. Conventional methods for measuring precipitation work by spatializing, over the whole area of interest, the precipitation measured pointwise at rain gauge stations; a satisfactory result therefore requires a network with a large number of well-distributed gauges across the entire area. However, the scarcity of rain gauge stations and the poor spatial distribution of the few existing ones are notorious, not only in Brazil but over vast areas of the globe. In this context, precipitation estimates based on remote sensing and geoprocessing techniques aim to leverage the existing rain gauge stations through a spatialization based on physical criteria. Moreover, remote sensing is the most capable tool for generating precipitation estimates over the oceans and over the vast continental areas lacking any kind of rainfall information. This work investigated the use of remote sensing and geoprocessing techniques for precipitation estimation in southern Brazil. Three computer algorithms were tested, using images from channels 1, 3 and 4 (visible, water vapor and infrared) of the GOES 8 satellite (Geostationary Operational Environmental Satellite 8) provided by the Centro de Previsão de Tempo e Estudos Climáticos of the Instituto Nacional de Pesquisas Espaciais. The study area comprised the entire state of Rio Grande do Sul, where daily rainfall data from 142 gauge stations for the year 1998 were used. The algorithms seek to identify precipitable clouds in order to build statistical models correlating the daily and ten-day precipitation observed at the ground with certain physical characteristics of the clouds accumulated over the same period and at the same geographic position as each rain gauge considered.
The decision criteria guiding the algorithms were based on cloud-top temperature (from the thermal infrared), reflectance in the visible channel, neighborhood characteristics, and the temperature versus temperature-gradient plane. The results produced by the statistical models are expressed as precipitation maps per time interval, which can be compared with precipitation maps obtained by conventional means.
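The pixel-screening and regression steps described above can be sketched as follows. The 235 K cold-cloud threshold, the reflectance cutoff, and the calibration data are all invented placeholders, not the thesis's fitted values:

```python
# Sketch of the decision logic described above: flag potentially precipitating
# pixels from GOES-like channels (cold tops in the thermal IR, bright in the
# visible), then fit a linear model between a cloud index and gauge rainfall.
# Thresholds and sample data are invented placeholders.

def precipitating(t_top_k, reflectance, t_max=235.0, r_min=0.4):
    """Cold-cloud / bright-cloud screening for one pixel."""
    return t_top_k <= t_max and reflectance >= r_min

def fit_line(x, y):
    """Ordinary least squares y = a + b*x (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Toy calibration: cold-cloud index per gauge vs observed rainfall (mm).
index = [0.0, 2.0, 4.0, 6.0, 8.0]
rain  = [0.5, 5.0, 9.5, 14.0, 18.5]
a, b = fit_line(index, rain)
```

Applying the fitted line to the screened-pixel index over a grid is what turns the satellite fields into the precipitation maps mentioned above.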