907 results for Nonlinear system modeling
Abstract:
Spatial data analysis, mapping, and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996), etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimation with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANN is a powerful tool, but it also suffers from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear, robust to noise in the measurements, able to deal with small empirical datasets, and backed by a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression.
SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification - Support Vector Machines - have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression - Support Vector Regression (SVR) - are less studied. The first results of the application of SVR for spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of the SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and basic equations for the nonlinear modeling are given in section 2. Section 3 discusses the application of SVR for spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
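The mapping task described in this abstract can be sketched numerically. The following toy example (the data, kernel width, and regularization are invented for illustration) uses closed-form RBF kernel ridge regression as a stand-in for the SVR quadratic program, since both produce a kernel expansion over the measurement locations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy measurements of a spatial field at 2-D locations
X = rng.uniform(0, 10, size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

def rbf_kernel(A, B, sigma=1.5):
    """Gaussian (RBF) kernel between two sets of coordinates."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Closed-form kernel ridge fit (stand-in for the SVR optimisation)
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xq):
    """Predict the field at query locations from the kernel expansion."""
    return rbf_kernel(Xq, X) @ alpha

grid = np.array([[x, 5.0] for x in np.linspace(0, 10, 5)])
print(predict(grid).round(2))
```

Unlike this dense stand-in, a true epsilon-SVR yields a sparse expansion in which only the support vectors carry nonzero weights, which is part of its robustness to noise.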
Abstract:
High-energy charged particles in the Van Allen radiation belts and in solar energetic particle events can damage satellites on orbit, leading to malfunctions and loss of satellite service. Here we describe some recent results from the SPACECAST project on modelling and forecasting the radiation belts and on modelling solar energetic particle events. We describe the SPACECAST forecasting system, which uses physical models that include wave-particle interactions to forecast the electron radiation belts up to 3 h ahead. We show that the forecasts were able to reproduce the >2 MeV electron flux at GOES 13 during the moderate storm of 7-8 October 2012, and the period following a fast solar wind stream on 25-26 October 2012, to within a factor of 5 or so. At lower energies of 10 keV to a few hundred keV, we show that the electron flux at geostationary orbit depends sensitively on the high-energy tail of the source distribution near 10 RE on the nightside of the Earth, and that the source is best represented by a kappa distribution. We present a new model of whistler mode chorus determined from multiple satellite measurements, which shows that the effects of wave-particle interactions beyond geostationary orbit are likely to be very significant. We also present radial diffusion coefficients calculated from satellite data at geostationary orbit, which vary with Kp by over four orders of magnitude. We describe a new automated method, which takes entropy into account, to determine the position at the shock that is magnetically connected to the Earth for modelling solar energetic particle events, and we predict from analytical theory the form of the mean free path in the foreshock and the particle injection efficiency at the shock, which can be tested in simulations.
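For a feel of the Kp dependence mentioned above, the sketch below evaluates the published Brautigam and Albert (2000) parameterization of the magnetic radial diffusion coefficient; these are not the SPACECAST coefficients themselves, which were derived from satellite data, but the exponential Kp scaling is of the same character:

```python
import math

def d_ll_magnetic(kp, L):
    """Magnetic radial diffusion coefficient (1/day), Brautigam & Albert (2000)."""
    return 10 ** (0.506 * kp - 9.325) * L ** 10

# Sweep Kp from quiet to storm conditions at geostationary orbit (L ~ 6.6)
vals = [d_ll_magnetic(kp, 6.6) for kp in (1, 3, 5, 7, 9)]
span = math.log10(vals[-1] / vals[0])   # decades covered across the Kp range
print(f"span = {span:.2f} orders of magnitude")
```

The coefficient grows monotonically with Kp, covering roughly four orders of magnitude across the geomagnetic activity range, consistent with the variation reported in the abstract.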
Abstract:
Control of a chaotic system by homogeneous nonlinear driving, when a conditional Lyapunov exponent is zero, may give rise to special and interesting synchronization-like behaviors in which the response evolves in perfect correlation with the drive. Among them are the amplification of the drive attractor and its shift to a different region of phase space. In this paper, these synchronization-like behaviors are discussed and demonstrated by computer simulation of the Lorenz model [E. N. Lorenz, J. Atmos. Sci. 20, 130 (1963)] and the double scroll [T. Matsumoto, L. O. Chua, and M. Komuro, IEEE Trans. CAS CAS-32, 798 (1985)].
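The drive-response setup on which such behaviors build can be reproduced in a few lines. This sketch implements the classic Pecora-Carroll scheme for the Lorenz model, slaving a (y, z) response subsystem to the x signal of the drive; here the conditional Lyapunov exponents are negative, giving exact synchronization rather than the marginal zero-exponent amplification cases studied in the paper (step size and initial conditions are arbitrary choices):

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # Lorenz (1963) parameters
DT = 0.001                                  # Euler step (arbitrary choice)

def step(state):
    """One Euler step of the full Lorenz drive system."""
    x, y, z = state
    return np.array([x + DT * SIGMA * (y - x),
                     y + DT * (x * (RHO - z) - y),
                     z + DT * (x * y - BETA * z)])

# Response: a (y, z) copy of the same equations, driven by the drive's x.
# Its conditional Lyapunov exponents are negative, so it synchronizes.
drive = np.array([1.0, 1.0, 1.0])
yr, zr = 5.0, -5.0                          # response starts far away
for _ in range(20000):
    x = drive[0]
    yr, zr = (yr + DT * (x * (RHO - zr) - yr),
              zr + DT * (x * yr - BETA * zr))
    drive = step(drive)

err = abs(drive[1] - yr) + abs(drive[2] - zr)
print(f"sync error after 20 time units: {err:.1e}")
```

Because drive and response obey identical update rules fed by the same x sequence, their difference contracts exponentially and the response locks onto the drive's trajectory.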
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
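A minimal illustration of the Boolean formalism described above (the three-gene network and its rules are invented for the example): synchronous updates, exhaustive steady-state search, and limit-cycle detection.

```python
from itertools import product

# Hypothetical 3-gene toy network: A activates B, B activates C, C represses A
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}

def step(state):
    """One synchronous update: every gene reads the previous state."""
    return {g: f(state) for g, f in rules.items()}

# Exhaustive search of the 2^3 state space for steady states (fixed points)
fixed = [s for bits in product([False, True], repeat=3)
         for s in [dict(zip("ABC", bits))] if step(s) == s]

# This negative-feedback loop has no fixed point, so every trajectory
# falls into a limit cycle; follow one until a state repeats
state, seen = dict(zip("ABC", (True, False, False))), []
while state not in seen:
    seen.append(state)
    state = step(state)
cycle_len = len(seen) - seen.index(state)
print(fixed, cycle_len)
```

The same machinery scales (with smarter search) to steady-state analysis and perturbation experiments on networks of thousands of nodes, which is the regime the chapter targets.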
Abstract:
The objective of this work was to develop and validate a prognosis system for volume yield and basal area of intensively managed loblolly pine (Pinus taeda) stands, using stand and diameter class models compatible in basal area estimates. The data used in the study were obtained from plantations located in northern Uruguay. For model validation without data loss, a three-phase validation scheme was applied: first, the equations were fitted without the validation database; then, model validation was carried out; and, finally, the database was regrouped to recalibrate the parameter values. After the validation and final parameterization of the models, a simulation of the first commercial thinning was carried out. The developed prognosis system was precise and accurate in estimating basal area production per hectare and per diameter class. There was compatibility in basal area estimates between the diameter class and whole-stand models, with a mean difference of -0.01 m² ha⁻¹. The validation scheme applied is logical and consistent, since information on the accuracy and precision of the models is obtained without the loss of any information in the estimation of the models' parameters.
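The basal area bookkeeping behind such compatibility checks is simple to illustrate. A sketch with an invented diameter-class table (not the Uruguayan data used in the study): the class-wise sum should match the whole-stand model's estimate closely, which is what the -0.01 m² ha⁻¹ mean difference above expresses.

```python
import math

# Hypothetical diameter-class table: class midpoint (cm) -> stems per hectare
diameter_classes = {10: 300, 15: 450, 20: 350, 25: 120}

def basal_area_m2(d_cm):
    """Basal area of one stem (m^2) from its diameter at breast height (cm)."""
    return math.pi * (d_cm / 100) ** 2 / 4

# Per-class contributions and the stand total (m^2/ha)
G_classes = {d: n * basal_area_m2(d) for d, n in diameter_classes.items()}
G_stand = sum(G_classes.values())
print(f"G = {G_stand:.2f} m2/ha")
```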
Abstract:
We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which it is assumed that a multiplicative decomposition of the deformation tensor into a passive and an active part holds, the latter carrying the information of the electrical potential propagation and the anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, which govern the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials, and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
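The active strain assumption mentioned above is commonly written as a multiplicative split of the deformation gradient; a sketch in one common notation (the specific form of the active part is an illustrative choice, not necessarily the one used in the paper):

```latex
F = F_E \, F_A , \qquad
F_A = I + \gamma_f \, \mathbf{f}_0 \otimes \mathbf{f}_0 ,
```

where F_E is the passive elastic part entering the strain-energy law, f_0 is the local fiber direction, and the activation gamma_f is driven by the electrical potential, so the electrophysiology enters the mechanics through F_A.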
Abstract:
The accumulation of aqueous pollutants is becoming a global problem. The search for suitable methods and/or combinations of water treatment processes is a task that can slow down and stop the process of water pollution. In this work, the method of wet oxidation was considered as an appropriate technique for the elimination of the impurities present in paper mill process waters. It has been shown that, when combined with traditional wastewater treatment processes, wet oxidation offers many advantages. The combination of coagulation and wet oxidation offers a new opportunity for improving the quality of wastewater designated for discharge or recycling. First of all, the utilization of coagulated sludge via wet oxidation provides a conditioning process for the sludge, i.e. dewatering, which is rather difficult to carry out with untreated waste. Secondly, Fe2(SO4)3, which is employed earlier as a coagulant, transforms the conventional wet oxidation process into a catalytic one. Using coagulation as a post-treatment for wet oxidation makes it possible to reduce the brown hue that usually accompanies partial oxidation. As a result, the supernatant is less colored and also contains a rather low amount of Fe ions, so it can be considered for recycling inside mills. The thickened part, which contains the metal ions, is then recycled back to the wet oxidation system. It was also observed that wet oxidation is favorable for the degradation of the pitch substances (LWEs) and lignin that are present in the process waters of paper mills. Rather low operating temperatures are needed for wet oxidation to destroy LWEs. Oxidation in alkaline media not only provides faster elimination of pitch and lignin but also significantly improves the biodegradability characteristics of wastewater that contains lignin and pitch substances.
During the course of the kinetic studies, a model that can predict enhancements in the biodegradability of wastewater was developed. The model includes lumped concentrations such as the chemical oxygen demand and the biochemical oxygen demand, and reflects a generalized reaction network of oxidative transformations. Later developments incorporated a new lump, the immediately available biochemical oxygen demand, which increased the fidelity of the predictions made by the model. Since changes in biodegradability occur simultaneously with the destruction of LWEs, an attempt was made to combine these two facts for modeling purposes.
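A lumped reaction network of the kind described can be sketched as two first-order ODEs: non-biodegradable COD is either converted to biodegradable COD (raising the BOD/COD ratio) or mineralized directly. All rate constants and concentrations below are invented for illustration, and the "immediately available BOD" lump is omitted:

```python
# First-order lumped kinetics of wet oxidation (hypothetical rate constants)
k1, k2, k3 = 0.030, 0.010, 0.015   # 1/min: Cn->Cb, Cn->CO2, Cb->CO2
Cn, Cb = 900.0, 100.0              # non-/biodegradable COD, mg O2/L
dt = 0.1                           # Euler step, min
ratio0 = Cb / (Cn + Cb)

for _ in range(int(60 / dt)):      # simulate 60 min of wet oxidation
    dCn = -(k1 + k2) * Cn
    dCb = k1 * Cn - k3 * Cb
    Cn += dt * dCn
    Cb += dt * dCb

ratio = Cb / (Cn + Cb)             # BOD/COD biodegradability index
print(f"BOD/COD rose from {ratio0:.2f} to {ratio:.2f}")
```

Even this crude network reproduces the qualitative observation above: partial oxidation shifts refractory COD into the biodegradable lump, so the BOD/COD ratio climbs as treatment proceeds.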
Abstract:
The present study was done with two different servo-systems. In the first system, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second servo-system, an electromagnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature, and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo-systems has been the focus of intense research over the past decades. These kinds of systems are nonlinear in nature and generally difficult to control, since changing system parameters while keeping the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction was used.
The design procedure for the controller and the experimental results are discussed. The results suggest that using the fuzzy gain-scheduling controller decreases the position-reference tracking error. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local-minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using the Kalman filter. The proposed controller is implemented and tested in a linear motor test drive, and the responses are presented.
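The DE-based identification step can be sketched as follows. This minimal DE/rand/1/bin loop fits a first-order toy model rather than the actual servo-hydraulic model of the study; the data, bounds, and DE settings are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy step response y = K * (1 - exp(-t / tau)); the "measured" data are
# generated from hypothetical true parameters plus a little noise
t = np.linspace(0, 5, 50)
K_true, tau_true = 2.0, 0.8
data = K_true * (1 - np.exp(-t / tau_true)) + 0.01 * rng.normal(size=t.size)

def cost(p):
    """Sum of squared residuals between model and measured response."""
    K, tau = p
    return np.sum((K * (1 - np.exp(-t / tau)) - data) ** 2)

# Minimal DE/rand/1/bin with box constraints on the parameters
lo, hi = np.array([0.1, 0.1]), np.array([5.0, 5.0])
pop = rng.uniform(lo, hi, size=(20, 2))
F, CR = 0.7, 0.9
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(a + F * (b - c), lo, hi)   # mutation + bounds
        mask = rng.random(2) < CR
        trial = np.where(mask, trial, pop[i])      # binomial crossover
        if cost(trial) <= cost(pop[i]):            # greedy selection
            pop[i] = trial

best = min(pop, key=cost)
print(f"K = {best[0]:.2f}, tau = {best[1]:.2f}")
```

The loop recovers the generating parameters to within the noise level, illustrating the initial-condition insensitivity claimed for DE above.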
Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, the use of static process simulators for the preparation of holistic examinations aiming at improving profit-making capability requires a lot of work, because the production of results requires assessing the applicability of detailed data which may be irrelevant to the objective. The data relevant for the total assessment gets buried by irrelevant data. Furthermore, the models do not include an examination of maintenance or risk management, and economic examination is often an extra property added to them which can be performed with a spreadsheet program. A process model applicable to holistic economic examinations has been developed in this work. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has utilized life cycle assessment and life cycle costing methodologies with a view to developing, above all, a model applicable to the economic examination of complete wholes, one that confines the need for information to aspects essential to the objectives. Life cycle assessment and costing differ from each other in their modeling principles, but features of both methodologies can be used in the development of economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they involve the collection and management of large bodies of information as well as the production of information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created which may be used to produce holistic efficiency examinations of the profit-making capability of the production line with fewer resources than with traditional methods.
The calculations of the model are based, to the maximum extent possible, on the information system of the factory, which means that the accuracy of the results can be improved by developing information systems so that they can provide the best information for this kind of examination.
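At its simplest, the life-cycle-profit idea reduces to discounting yearly profits, net of maintenance and risk costs, over the production line's life. A sketch with invented figures (the real model draws its inputs from the factory's information systems and maintenance records):

```python
# Discounted life-cycle-profit sketch for a production line
rate, years = 0.08, 10               # discount rate and horizon (assumptions)
revenue, operating = 1200.0, 700.0   # k-EUR per year (illustrative)
maintenance, risk = 80.0, 40.0       # upkeep + expected downtime losses
investment = 1500.0                  # initial outlay

lcp = -investment + sum(
    (revenue - operating - maintenance - risk) / (1 + rate) ** y
    for y in range(1, years + 1)
)
print(f"life cycle profit: {lcp:.0f} k-EUR")
```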
Abstract:
Globalization and new information technologies mean that organizations have to face world-wide competition in rapidly transforming, unpredictable environments, and thus the ability to constantly generate novel and improved products, services and processes has become quintessential for organizational success. Performance in turbulent environments is, above all, influenced by the organization's capability for renewal. Renewal capability consists of the ability of the organization to replicate, adapt, develop and change its assets, capabilities and strategies. An organization with a high renewal capability can sustain its current success factors while at the same time building new strengths for the future. This capability does not only mean that the organization is able to respond to today's challenges and to keep up with the changes in its environment, but also that it can act as a forerunner by creating innovations, both at the tactical and strategic levels of operation, and thereby change the rules of the market. However, even though it is widely agreed that the dynamic capability for continuous learning, development and renewal is a major source of competitive advantage, there is no widely shared view on how organizational renewal capability should be defined, and the field is characterized by a plethora of concepts and definitions. Furthermore, there is a lack of methods for systematically assessing organizational renewal capability. The dissertation aims to bridge these gaps in the existing research by constructing an integrative theoretical framework for organizational renewal capability and by presenting a method for modeling and measuring this capability. The viability of the measurement tool is demonstrated in several contexts, and the framework is also applied to assess renewal in inter-organizational networks.
In this dissertation, organizational renewal capability is examined by drawing on three complementary theoretical perspectives: knowledge management, strategic management and intellectual capital. The knowledge management perspective considers knowledge as inherently social and activity-based, and focuses on the organizational processes associated with its application and development. Within this framework, organizational renewal capability is understood as the capacity for flexible knowledge integration and creation. The strategic management perspective, on the other hand, approaches knowledge in organizations from the standpoint of its implications for the creation of competitive advantage. In this approach, organizational renewal is framed as the dynamic capability of firms. The intellectual capital perspective is focused on exploring how intangible assets can be measured, reported and communicated. From this vantage point, renewal capability is comprehended as the dynamic dimension of intellectual capital, which consists of the capability to maintain, modify and create knowledge assets. Each of the perspectives significantly contributes to the understanding of organizational renewal capability, and the integrative approach presented in this dissertation contributes to the individual perspectives as well as to the understanding of organizational renewal capability as a whole.
Abstract:
Vegetation has a profound effect on flow and sediment transport processes in natural rivers, by increasing both skin friction and form drag. The increase in drag introduces a drag discontinuity between the in-canopy flow and the flow above, which leads to the development of an inflection point in the velocity profile, resembling a free shear layer. Therefore, drag acts as the primary driver for the entire canopy system. Most current numerical hydraulic models which incorporate vegetation rely either on simple, static plant forms, or canopy-scaled drag terms. However, it is suggested that these are insufficient as vegetation canopies represent complex, dynamic, porous blockages within the flow, which are subject to spatially and temporally dynamic drag forces. Here we present a dynamic drag methodology within a CFD framework. Preliminary results for a benchmark cylinder case highlight the accuracy of the method, and suggest its applicability to more complex cases.
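The canopy-scaled drag term that such models rely on is typically a quadratic momentum sink. A sketch with illustrative coefficient values (the drag coefficient and frontal-area density below are invented):

```python
# Quadratic canopy drag sink used in vegetated-flow models
RHO = 1000.0   # water density, kg/m^3

def drag_sink(u, c_d=1.0, a=0.5):
    """Momentum sink per unit volume: S = -0.5 * rho * C_d * a * |u| * u.

    u   : local streamwise velocity, m/s
    c_d : stem drag coefficient (dimensionless)
    a   : canopy frontal area per unit volume, 1/m
    """
    return -0.5 * RHO * c_d * a * abs(u) * u

# Doubling the velocity quadruples the drag (quadratic law)
print(drag_sink(0.2), drag_sink(0.4))
```

A static term like this is precisely what the abstract argues is insufficient: in the proposed dynamic methodology, c_d and a would vary in space and time with the reconfiguring canopy rather than being held constant.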
Abstract:
The present dissertation is entitled "Development and Application of Computational Methodologies in Qualitative Modeling". It encompasses the diverse projects that were undertaken during my time as a PhD student.
Instead of a systematic implementation of a framework defined a priori, this thesis should be considered as an exploration of the methods that can help us infer the blueprint of regulatory and signaling processes. This exploration was driven by concrete biological questions rather than theoretical investigation. Even though the projects involved divergent systems (gene regulatory networks of the cell cycle, signaling networks in lung cells) as well as organisms (fission yeast, budding yeast, rat, human), our goals were complementary and coherent. The main project of the thesis is the modeling of the Septation Initiation Network (SIN) in S. pombe. Cytokinesis in fission yeast is controlled by the SIN, a protein kinase signaling network that uses the spindle pole body as a scaffold. In order to describe the qualitative behavior of the system and predict unknown mutant behaviors, we decided to adopt a Boolean modeling approach. In this thesis, we report the construction of an extended Boolean model of the SIN, comprising most SIN components and regulators as individual, experimentally testable nodes. The model uses CDK activity levels as control nodes for the simulation of SIN-related events in different stages of the cell cycle. The model was optimized using single knock-out experiments of known phenotypic effect as a training set, and was able to correctly predict a double knock-out test set. Moreover, the model has made in silico predictions that have been validated in vivo, providing new insights into the regulation and hierarchical organization of the SIN. Another cell cycle related project that is part of this thesis was to create a qualitative, minimal model of cyclin interplay in S. cerevisiae. Clb proteins in budding yeast present a characteristic, sequential activation and decay during the cell cycle, commonly referred to as Clb waves. This event is coordinated with the inverse activation curve of Sic1, which has an inhibitory role in the system.
To generate minimal qualitative models that can explain this phenomenon, we selected well-defined experiments and constructed all possible minimal models that, when simulated, reproduce the expected results. The models were filtered using standardized qualitative ODE simulations; only the ones reproducing the wave-like phenotype were kept. The set of minimal models can be used to suggest regulatory relations among the participating molecules, which can subsequently be tested experimentally. Finally, during my PhD I participated in the SBV Improver Challenge. The goal was to infer species-specific (human and rat) networks, using phosphoprotein, gene expression and cytokine data and a reference network provided as prior knowledge. Our solution to the challenge took third place. The approach used is explained in detail in the final chapter of the thesis.
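An in silico knock-out in a Boolean model simply clamps a node to 0 and compares the resulting attractor with the wild type. A toy sketch (the node names are SIN components, but this wiring is invented and far smaller than the actual model):

```python
# Toy in-silico knock-out for a Boolean model: clamp a node to 0 and
# compare attractors. The regulatory logic below is hypothetical.
rules = {
    "CDK":  lambda s: s["CDK"],                    # control node, held fixed
    "Byr4": lambda s: s["CDK"],                    # hypothetical regulation
    "Spg1": lambda s: not s["Byr4"],
    "Sid2": lambda s: s["Spg1"] and not s["CDK"],  # septation output
}

def simulate(init, clamp=None, steps=20):
    """Synchronous updates; `clamp` forces one node to 0 (knock-out)."""
    state = dict(init)
    for _ in range(steps):
        state = {n: (False if n == clamp else f(state))
                 for n, f in rules.items()}
    return state

init = {"CDK": False, "Byr4": True, "Spg1": False, "Sid2": False}
wild = simulate(init)
ko = simulate(init, clamp="Spg1")
print(wild["Sid2"], ko["Sid2"])  # septation on in wild type, off in knock-out
```

Comparing such clamped simulations against known mutant phenotypes is, in miniature, how the training and test sets described above are used to optimize and evaluate the model.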
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
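The workflow can be sketched end to end with synthetic curves: reduce both sets of responses with principal component analysis (standing in for FPCA on a common grid), learn a linear map from proxy scores to exact scores as a simple error model (the paper's machine-learning regressor would replace it), and predict the exact response of a new realization from its proxy alone. All data below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic learning set: 40 realizations, responses sampled on 30 steps.
# "Exact" curves and a biased, noisy "proxy" of each (all hypothetical).
t = np.linspace(0, 1, 30)
params = rng.normal(size=(40, 2))
exact = np.array([p0 * np.sin(2 * np.pi * t) + p1 * t for p0, p1 in params])
proxy = 0.8 * exact + 0.1 * t + 0.02 * rng.normal(size=exact.shape)

def pca_scores(Y, k=2):
    """Dimensionality reduction via SVD: scores, mean, and components."""
    mean = Y.mean(0)
    U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    return (Y - mean) @ Vt[:k].T, mean, Vt[:k]

Zp, mp, Vp = pca_scores(proxy)   # proxy scores (FPCA analogue)
Ze, me, Ve = pca_scores(exact)   # exact scores

# Error model: least-squares linear map from proxy scores to exact scores
A, *_ = np.linalg.lstsq(np.c_[Zp, np.ones(len(Zp))], Ze, rcond=None)

# New realization: predict the exact curve from the proxy response alone
new_exact = 1.5 * np.sin(2 * np.pi * t) - 0.5 * t
new_proxy = 0.8 * new_exact + 0.1 * t
z = np.r_[(new_proxy - mp) @ Vp.T, 1.0] @ A
pred = me + z @ Ve
print(f"max error: {np.abs(pred - new_exact).max():.3f}")
```

Because the error model is built directly on the low-dimensional scores of the quantity of interest, the exact solver never needs to be run for the new realization, which is the purpose-oriented character the abstract emphasizes.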