907 results for Nonlinear system modeling
Abstract:
Earth System Models (ESMs) have been developed successfully over the past few years and are currently being used to simulate the present-day climate and to make seasonal-to-interannual predictions of climate change. Supercomputer performance plays an important role in climate modeling, since one of the challenging issues for climate modellers is to couple the Earth system components efficiently and accurately on present-day computer architectures. At the Barcelona Supercomputing Center (BSC), we work with the EC-Earth System Model. EC-Earth is an ESM that currently consists of an atmosphere model (IFS) and an ocean model (NEMO), which communicate with each other through the OASIS coupler. Additional modules (e.g. for chemistry and vegetation) are under development. EC-Earth has been ported successfully to different high-performance computing platforms (e.g. IBM P6 AIX, CRAY XT-5, Intel-based Linux clusters, SGI Altix) at different sites in Europe (e.g. KNMI, ICHEC, ECMWF). The objective of the first phase of the project was to identify and document the issues related to the portability and performance of EC-Earth on the MareNostrum supercomputer, a system based on IBM PowerPC 970MP processors running a SUSE Linux distribution. EC-Earth was successfully ported to MareNostrum, and a compilation incompatibility was solved by a two-step compilation approach using the XLF version 10.1 and 12.1 compilers. In addition, the performance of EC-Earth was analyzed with respect to scalability, and traces were analyzed with the Paraver software. This analysis showed that running EC-Earth with a larger number of IFS CPUs (>128) is not feasible at the moment, since issues remain with the IFS-NEMO load balance and the MPI communications.
Abstract:
The paper presents a competence-based instructional design system and a way to personalize navigation through the course content. The navigation aid tool builds on the competence graph and on the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is best prepared to study. We use fuzzy set theory to deal with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used to assign values to linguistic variables. For each competence, the level of difficulty and the level of knowledge of its prerequisites are calculated from the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence indicating its level of recommendation.
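As a rough illustration of the kind of approximate reasoning described above, the following Python sketch fuzzifies two hypothetical assessment marks (prerequisite knowledge and difficulty) with triangular membership functions and applies three illustrative IF-THEN rules to produce a crisp recommendation category. The membership functions, linguistic terms, and rule base are assumptions made for illustration, not the paper's actual model.

```python
# Minimal sketch of fuzzy IF-THEN categorization of a competence.
# Membership functions, linguistic terms, and rules are illustrative
# assumptions, not the rule base used in the paper.

def tri(x, a, b, c):
    """Triangular membership peaking at b; flat shoulder when a == b or b == c."""
    left = 1.0 if b == a else max(0.0, min(1.0, (x - a) / (b - a)))
    right = 1.0 if c == b else max(0.0, min(1.0, (c - x) / (c - b)))
    return min(left, right)

def fuzzify(mark):
    """Map a 0-100 assessment mark to three linguistic terms."""
    return {
        "low":    tri(mark, 0, 0, 50),
        "medium": tri(mark, 25, 50, 75),
        "high":   tri(mark, 50, 100, 100),
    }

def recommend(prereq_mark, difficulty_mark):
    """Assign a crisp recommendation category to a competence."""
    prereq = fuzzify(prereq_mark)          # level of knowing the prerequisites
    difficulty = fuzzify(difficulty_mark)  # level of difficulty
    # IF-THEN rules: rule strength is the min of the antecedent memberships.
    rules = {
        "recommended":     min(prereq["high"], difficulty["low"]),
        "neutral":         min(prereq["medium"], difficulty["medium"]),
        "not_recommended": min(prereq["low"], difficulty["high"]),
    }
    return max(rules, key=rules.get)

print(recommend(prereq_mark=82, difficulty_mark=30))  # -> "recommended"
```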
Abstract:
The past four decades have witnessed explosive growth in the field of network-based facility location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science and offers ample theoretical and applied challenges. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of facilities or services in discrete space or on networks that are related to the public sector, such as emergency services (ambulances, fire stations, and police units), school systems, and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public sector facility location modeling.
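For reference, the P-Median problem mentioned above is commonly stated as the following integer program, where a_i is the demand at node i, d_ij the distance from node i to candidate site j, y_j = 1 if a facility is opened at site j, and x_ij = 1 if demand i is served from j. This is the standard textbook formulation, not one taken from the paper itself.

```latex
\begin{aligned}
\min \quad & \sum_{i \in I} \sum_{j \in J} a_i \, d_{ij} \, x_{ij} \\
\text{s.t.} \quad & \sum_{j \in J} x_{ij} = 1 && \forall i \in I, \\
& x_{ij} \le y_j && \forall i \in I,\ j \in J, \\
& \sum_{j \in J} y_j = p, \\
& x_{ij},\, y_j \in \{0, 1\}.
\end{aligned}
```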
Abstract:
This paper presents a review of methodology for semi-supervised modeling with kernel methods when the manifold assumption is guaranteed to be satisfied. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are highly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning by using digital elevation models in semi-supervised kernel methods. The range of tools and methodological issues discussed in the study includes feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the discussed approach.
Abstract:
Synaptic plasticity involves complex molecular machinery with various protein interactions, but it is not yet clear how its components give rise to the different aspects of synaptic plasticity. Here we ask whether it is possible to model synaptic plasticity mathematically using known substances only. We present a model of a multistable biochemical reaction system and use it to simulate the plasticity of synaptic transmission in long-term potentiation (LTP) or long-term depression (LTD) after repeated excitation of the synapse. According to our model, two phases can be distinguished: first, a "viscosity" phase after the first excitation, whose effects, such as the activation of NMDA receptors and CaMKII, fade out in the absence of further excitations; second, a "plasticity" phase actuated by an identical subsequent excitation that follows after a short time interval and causes the temporarily altered concentrations of AMPA subunits in the postsynaptic membrane to be stabilized. We show that positive feedback is the crucial element in the core chemical reaction, i.e. the activation of the short-tail AMPA subunit by NEM-sensitive factor, which allows multiple stable equilibria to be generated. Three stable equilibria are related to LTP, LTD and a third, unfixed state called ACTIVE. Our mathematical approach shows that modeling synaptic multistability is possible using known substances such as NMDA and AMPA receptors, NEM-sensitive factor, glutamate, CaMKII and brain-derived neurotrophic factor. Furthermore, we show that the heteromeric combination of short- and long-tail AMPA receptor subunits fulfills the function of a memory tag.
Abstract:
Carbon and oxygen isotope studies of the host and gangue carbonates of the Mississippi Valley-type zinc-lead deposits of the San Vicente District, hosted in the Upper Triassic to Lower Jurassic dolostones of the Pucara basin (central Peru), were used to constrain models of the ore formation. A mixing model between an incoming hot, saline, slightly acidic, radiogenic (Pb, Sr) fluid and the native formation water explains the overall isotopic variation (delta(13)C = -11.5 to +2.5 parts per thousand relative to PDB and delta(18)O = +18.0 to +24.3 parts per thousand relative to SMOW) of the carbonate generations. The dolomites formed during the main ore stage show a narrower range (delta(13)C = -0.1 to +1.7 parts per thousand and delta(18)O = +18.7 to +23.4 parts per thousand), which is explained by exchange between the mineralizing fluids and the host carbonates combined with changes in temperature and pressure. This model of fluid-rock interaction explains the pervasive alteration of the host dolomite I and the precipitation of sphalerite I. The open-space-filling hydrothermal white sparry dolomite and the coexisting sphalerite II formed by prolonged fluid-host dolomite interaction and limited CO2 degassing. Late void-filling dolomite III (or calcite) and the associated sphalerite III formed as a consequence of CO2 degassing and the concomitant pH increase of a slightly acidic ore fluid. Widespread brecciation is associated with CO2 outgassing. Consequently, pressure variability played a major role in ore precipitation during the late hydrothermal events at San Vicente. The presence of native sulfur associated with extremely carbon-light calcites replacing evaporitic sulfates (e.g., delta(13)C = -11.5 parts per thousand), altered native organic matter, and heavier hydrothermal bitumen (delta(13)C from -27.0 to -23.0 parts per thousand) points to thermochemical reduction of sulfate and/or thiosulfate. The delta(13)C and delta(18)O values of the altered host dolostone and hydrothermal carbonates, and the carbon isotope composition of the associated organic matter, show a strong regional homogeneity. These results, coupled with the strong mineralogical and petrographic similarities of the different MVT occurrences, perhaps reflect the fact that the mineralizing processes were similar in the whole San Vicente belt, suggesting the existence of a common regional mineralizing hydrothermal system with interconnected plumbing.
Abstract:
The long-term mean properties of the global climate system and those of turbulent fluid systems are reviewed from a thermodynamic viewpoint. Two general expressions are derived for the rate of entropy production due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is shown with these expressions that the maximum entropy production in the Earth's climate system suggested by Paltridge, as well as the maximum transport properties of heat or momentum in a turbulent system suggested by Malkus and Busse, correspond to a state in which the rate of entropy production due to turbulent dissipation is at a maximum. Entropy production due to absorption of solar radiation in the climate system is found to be irrelevant to the maximized properties associated with turbulence. The hypothesis of maximum entropy production also seems to be applicable to the planetary atmospheres of Mars and Titan, and perhaps to mantle convection. Lorenz's conjecture on the maximum generation of available potential energy is shown to be akin to this hypothesis, with a few minor approximations. A possible mechanism by which turbulent fluid systems adjust themselves to states of maximum entropy production is presented as a self-feedback mechanism for the generation of available potential energy. These results tend to support the hypothesis of maximum entropy production that underlies a wide variety of nonlinear fluid systems, including our planet as well as other planets and stars.
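For orientation, the rate of entropy production due to thermal and viscous dissipation in a fluid volume V is usually written in the following standard form, with thermal conductivity k, temperature T, and viscous dissipation function Phi; the paper's own derived expressions may differ in detail.

```latex
\dot{S} \;=\; \int_V \left[ \frac{k\,(\nabla T)^2}{T^2} \;+\; \frac{\Phi}{T} \right] \, dV \;\ge\; 0 .
```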
Abstract:
An epidemic model is formulated by a reaction-diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
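A minimal sketch of what a conservative finite-volume discretization of such a cross-diffusive susceptible-infected system can look like is given below, in one dimension with explicit time stepping. The local dynamics, the (here linear) diffusion terms, and all parameter values are placeholder assumptions, not the nonlinear formulations or adaptive multiresolution scheme proposed in the paper.

```python
# Illustrative 1-D conservative finite-volume scheme for a susceptible-
# infected reaction-diffusion system with cross-diffusion in S.
import numpy as np

N, L, T = 200, 50.0, 60.0
dx = L / N
beta, gamma = 0.8, 0.4            # infection / recovery rates (placeholders)
dS, dI, dSI = 1.0, 0.5, 0.2       # self- and cross-diffusion coefficients
dt = 0.2 * dx**2 / max(dS, dI)    # conservative explicit time step

x = (np.arange(N) + 0.5) * dx
S = np.ones(N)
I = 0.1 * np.exp(-((x - L / 2) ** 2))   # localized initial infection

def face_flux(u, d):
    """Diffusive flux d * du/dx across interior cell faces (zero-flux walls)."""
    flux = np.zeros(N + 1)
    flux[1:-1] = d * (u[1:] - u[:-1]) / dx
    return flux

t = 0.0
while t < T:
    FS = face_flux(S, dS) + face_flux(I, dSI)   # S flux includes cross-diffusion
    FI = face_flux(I, dI)
    infection = beta * S * I
    S += dt * (-infection + (FS[1:] - FS[:-1]) / dx)
    I += dt * (infection - gamma * I + (FI[1:] - FI[:-1]) / dx)
    t += dt

print("total infected mass:", I.sum() * dx)
```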
Abstract:
Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors exhibit several problems, e.g., nonlinearities and slow time response, which can be partially solved by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
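The following sketch illustrates one way to build such a nonlinear inverse dynamic model: a second-order polynomial of lagged sensor readings (a truncated Volterra/Wiener-type expansion) fitted by least squares to recover the concentration. The sensor model, lag order, and data are synthetic assumptions for illustration only, not the identification techniques compared in the paper.

```python
# Sketch of a nonlinear inverse dynamic model for a slow, saturating sensor:
# a second-order polynomial of lagged readings fit by least squares.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" concentration and a slow, nonlinear sensor response.
n = 2000
c = np.abs(rng.normal(size=n)).cumsum() % 5.0          # true concentration
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.9 * y[k - 1] + 0.1 * np.tanh(c[k])        # slow + saturating
y += 0.01 * rng.normal(size=n)

# Regressors: lagged readings plus their pairwise products (2nd-order terms).
lags = 5
rows = []
for k in range(lags, n):
    u = y[k - lags:k + 1][::-1]                         # y[k], ..., y[k-lags]
    quad = np.outer(u, u)[np.triu_indices(lags + 1)]
    rows.append(np.concatenate(([1.0], u, quad)))
X = np.array(rows)
target = c[lags:]

# Least-squares fit of the inverse model and its reconstruction error.
theta, *_ = np.linalg.lstsq(X, target, rcond=None)
rmse = np.sqrt(np.mean((X @ theta - target) ** 2))
print(f"in-sample RMSE of reconstructed concentration: {rmse:.3f}")
```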
Abstract:
Fractal mathematics has been used to characterize water and solute transport in porous media, and also to characterize and simulate porous media properties. The objective of this study was to evaluate the correlation between the soil infiltration parameters sorptivity (S) and time exponent (n) and the fractal parameters dimension (D) and Hurst exponent (H). For this purpose, ten horizontal columns with pure (either clay or loam) and heterogeneous porous media (clay and loam distributed in layers in the column) were simulated following the distribution of a deterministic Cantor bar with a fractal dimension of approximately 0.63. Horizontal water infiltration experiments were then simulated using the Hydrus 2D software. The sorptivity (S) and time exponent (n) parameters of the Philip equation were estimated for each simulation, using the nonlinear regression procedure of the statistical software package SAS®. Sorptivity increased with the loam content of the columns, which was attributed to the relation of S with the capillary radius. The time exponent estimated by nonlinear regression was found to be less than the traditional value of 0.5. The fractal dimension estimated from the Hurst exponent was 17.5% lower than the fractal dimension of the Cantor bar used to generate the columns.
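As an illustration of the estimation step, the sketch below fits the two Philip-equation parameters, sorptivity S and time exponent n, to cumulative infiltration data by nonlinear regression; scipy is substituted here for the SAS procedure used in the study, and the data are synthetic placeholders rather than the simulated Hydrus 2D results.

```python
# Sketch of estimating the Philip-equation parameters by nonlinear regression.
import numpy as np
from scipy.optimize import curve_fit

def philip(t, S, n):
    """Cumulative horizontal infiltration I(t) = S * t**n."""
    return S * t**n

# Synthetic observations: S = 0.8, n = 0.45, plus measurement noise.
t = np.linspace(0.1, 10.0, 40)
I_obs = philip(t, 0.8, 0.45) + np.random.default_rng(1).normal(0, 0.02, t.size)

(S_hat, n_hat), _ = curve_fit(philip, t, I_obs, p0=[1.0, 0.5])
print(f"sorptivity S = {S_hat:.3f}, time exponent n = {n_hat:.3f}")
```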
Abstract:
We consider the effects of external, multiplicative white noise on the relaxation time of a general representation of a bistable system, from the points of view provided by two quite different theoretical approaches: the classical Stratonovich decoupling of correlations and the new method due to Jung and Risken. Experimental results, obtained from a bistable electronic circuit, are compared with the theoretical predictions. We show that the phenomenon of critical slowing down appears as a function of the noise parameters, thereby providing a correct characterization of a noise-induced transition.
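A rough numerical counterpart of such a system is sketched below: an Euler-Maruyama simulation of a generic bistable potential driven by multiplicative white noise, with a crude estimate of the relaxation time of the ensemble mean. The drift, noise coupling, and parameters are illustrative assumptions, not the electronic circuit or the theoretical treatments analyzed in the paper.

```python
# Sketch: Euler-Maruyama simulation of a bistable system with multiplicative
# white noise, dx = (x - x**3) dt + sigma * x dW (plain Ito discretization,
# placeholder parameters).
import numpy as np

rng = np.random.default_rng(2)
sigma, dt, steps, n_paths = 0.3, 1e-3, 20000, 2000

x = np.full(n_paths, 0.1)                 # start near the unstable state
mean_traj = np.empty(steps)
for k in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += (x - x**3) * dt + sigma * x * dW
    mean_traj[k] = x.mean()

# Crude relaxation time: first time the ensemble mean reaches 90% of its
# final value.
tau = dt * np.argmax(mean_traj >= 0.9 * mean_traj[-1])
print(f"approximate relaxation time: {tau:.2f}")
```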
Abstract:
We develop a systematic method to derive all orders of mode couplings in a weakly nonlinear approach to the dynamics of the interface between two immiscible viscous fluids in a Hele-Shaw cell. The method is completely general: it applies to arbitrary geometry and driving. Here we apply it to the channel geometry driven by gravity and pressure. The finite radius of convergence of the mode-coupling expansion is found. Calculation up to third-order couplings is done, which is necessary to account for the time-dependent Saffman-Taylor finger solution and the case of zero viscosity contrast. The explicit results provide relevant analytical information about the role that the viscosity contrast and the surface tension play in the dynamics of the system. We finally check the quantitative validity of different orders of approximation and a resummation scheme against a physically relevant, exact time-dependent solution. The agreement between the low-order approximations and the exact solution is excellent within the radius of convergence, and is even reasonably good beyond this radius.
Abstract:
Modeling the concentration-response function became extremely popular in ecotoxicology during the last decade. Indeed, modeling allows the total response pattern of a given substance to be determined. However, reliable modeling is demanding in terms of data, which is in contradiction with the current trend in ecotoxicology of reducing, for cost and ethical reasons, the number of data produced during an experiment. It is therefore crucial to determine the experimental design in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and is often related to their meaning, i.e. the concentrations are located close to the values of the parameters. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in the case of long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
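The sketch below illustrates the idea of a locally D-optimal design for a two-parameter log-logistic concentration-response curve: candidate two-point designs are scored by the determinant of the Fisher information evaluated at nominal parameter values. The model, nominal values, and candidate grid are illustrative assumptions, not the dinoseb data or the exact procedure of the study.

```python
# Sketch of a locally D-optimal design search for a two-parameter
# log-logistic concentration-response curve with Gaussian errors.
import itertools
import numpy as np

b0, e0 = 2.0, 1.5            # nominal slope and EC50 ("locally" optimal)

def gradient(x):
    """Gradient of f(x; b, e) = 1 / (1 + (x / e)**b) w.r.t. (b, e) at the
    nominal values."""
    r = (x / e0) ** b0
    f = 1.0 / (1.0 + r)
    df_db = -f**2 * r * np.log(x / e0)
    df_de = f**2 * r * b0 / e0
    return np.array([df_db, df_de])

candidates = np.geomspace(0.05, 20.0, 30)    # candidate test concentrations

best_det, best_design = -np.inf, None
for design in itertools.combinations(candidates, 2):   # 2 support points
    M = sum(np.outer(gradient(x), gradient(x)) for x in design)
    d = np.linalg.det(M)
    if d > best_det:
        best_det, best_design = d, design

print("locally D-optimal concentrations:", np.round(best_design, 3))
```

Consistent with the abstract, the number of support points here equals the number of parameters, and the selected concentrations sit near the nominal parameter values.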
Abstract:
A new solvable model of synchronization dynamics is introduced. It consists of a system of long-range interacting tops or magnetic moments with random precession frequencies. The model allows for an explicit study of orientational effects in synchronization phenomena, as well as of nonlinear processes in resonance phenomena in strongly coupled magnetic systems. A stability analysis of the incoherent solution is performed for different types of orientational disorder. A system with orientational disorder always synchronizes in the absence of noise.
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and, consequently, the interconnection of the measure constraints. This is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.