80 results for Non-Linear Analysis
Abstract:
Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the variants of the original GTM and GTM-FS worked successfully with data of more than 2000 dimensions, and we compare the results with other linear/non-linear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC) and the Gaussian Process Latent Variable Model (GPLVM).
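The log-space adaptation of the EM steps can be illustrated with the standard log-sum-exp device. The Python sketch below (with invented variable names, not the authors' code) shows how GTM-style responsibilities can be normalised entirely in log space so that likelihood terms over thousands of dimensions do not underflow.

```python
import numpy as np

# Sketch of a log-space E-step normalisation, assuming a GTM-like mixture:
# responsibilities are formed from log-likelihood terms via log-sum-exp so
# that products of many per-dimension Gaussian factors never underflow.
def log_responsibilities(log_px_given_k):
    """log_px_given_k: (K, N) array of log p(x_n | k) for K latent grid points."""
    m = log_px_given_k.max(axis=0, keepdims=True)              # per-point max
    log_norm = m + np.log(np.exp(log_px_given_k - m).sum(axis=0, keepdims=True))
    return log_px_given_k - log_norm                            # log R_kn

# Example with random log-likelihoods of extreme magnitude
rng = np.random.default_rng(0)
R = np.exp(log_responsibilities(rng.normal(size=(5, 3)) * 100))
print(R.sum(axis=0))  # each column sums to 1 despite the extreme magnitudes
```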
Abstract:
The spatial patterns of diffuse, primitive, classic and compact beta-amyloid (Abeta) deposits were studied in the medial temporal lobe in 14 elderly, non-demented patients (ND) and in nine patients with Alzheimer’s disease (AD). In both patient groups, Abeta deposits were clustered and in a number of tissues, a regular periodicity of Abeta deposit clusters was observed parallel to the tissue boundary. The primitive deposit clusters were significantly larger in the AD cases but there were no differences in the sizes of the diffuse and classic deposit clusters between patient groups. In AD, the relationship between Abeta deposit cluster size and density in the tissue was non-linear. This suggested that cluster size increased with increasing Abeta deposit density in some tissues while in others, Abeta deposit density was high but contained within smaller clusters. It was concluded that the formation of large clusters of primitive deposits could be a factor in the development of AD.
Abstract:
Non-linear solutions and studies of their stability are presented for flows in a homogeneously heated fluid layer under the influence of a constant pressure gradient or when the mass flux across any lateral cross-section of the channel is required to vanish. The critical Grashof number is determined by a linear stability analysis of the basic state which depends only on the z-coordinate perpendicular to the boundary. Bifurcating longitudinal rolls as well as secondary solutions depending on the streamwise x-coordinate are investigated and their amplitudes are determined as functions of the supercritical Grashof number for various Prandtl numbers and angles of inclination of the layer. Solutions that emerge from a Hopf bifurcation assume the form of propagating waves and can thus be considered as steady flows relative to an appropriately moving frame of reference. The stability of these solutions with respect to three-dimensional disturbances is also analyzed in order to identify possible bifurcation points for evolving tertiary flows.
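For reference, the conventional form of the Grashof number is given below; the paper's homogeneously heated layer may define the characteristic temperature difference through the volumetric heating rate rather than an imposed temperature difference, so this is only the generic definition.

```latex
\[
  \mathrm{Gr} \;=\; \frac{g\,\beta\,\Delta T\,d^{3}}{\nu^{2}}
\]
```

Here $g$ is the gravitational acceleration, $\beta$ the thermal expansion coefficient, $\Delta T$ a characteristic temperature difference, $d$ the layer thickness and $\nu$ the kinematic viscosity.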
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system. This system was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured. A very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capability and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops pairing rotor speed with raffinate concentration and solvent flowrate with extract concentration showed weak interaction. Multivariable MPC has shown more effective performance than the other conventional techniques since it accounts for loop interactions, time delays, and input-output variable constraints.
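As an illustration of the RGA-based pairing mentioned above, the sketch below computes the relative gain array from a steady-state gain matrix; the 2×2 gain values are hypothetical and not taken from the thesis.

```python
import numpy as np

# Relative Gain Array for a square steady-state gain matrix G:
# Lambda = G ∘ (G^{-1})^T  (element-wise product)
def relative_gain_array(G):
    G = np.asarray(G, dtype=float)
    return G * np.linalg.inv(G).T

# Hypothetical gain matrix relating (rotor speed, solvent flowrate)
# to (raffinate concentration, extract concentration); values illustrative only.
G = np.array([[-0.8, 0.2],
              [0.3, 1.1]])
print(relative_gain_array(G))  # diagonal elements near 1 favour diagonal pairing
```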
Abstract:
This thesis describes the design and implementation of an interactive dynamic simulator called DASPII. The starting point of this research has been an existing dynamic simulation package, DASP. DASPII is written in standard FORTRAN 77 and is implemented on universally available IBM-PC or compatible machines. It provides a means for the analysis and design of chemical processes. Industrial interest in dynamic simulation has increased due to the recent increase in concern over plant operability, resiliency and safety. DASPII is an equation-oriented simulation package which allows solution of dynamic and steady-state equations. The steady state can be used to initialise the dynamic simulation. A robust non-linear algebraic equation solver has been implemented for the steady-state solution; this has increased the general robustness of DASPII compared to DASP. A graphical front end is used to generate the process flowsheet topology from a user-constructed diagram of the process. A conversational interface is used to interrogate the user, with the aid of a database, to complete the topological information. An original modelling strategy implemented in DASPII provides a simple mechanism for parameter switching, which creates a more flexible simulation environment. The problem description is generated by a further conversational procedure using a database. The model format used allows the same model equations to be used for the dynamic and steady-state solutions. All the useful features of DASP are retained in DASPII. The program has been demonstrated and verified using a number of example problems, and significant improvements using the new NLAE solver have been shown. Topics requiring further research are described. The benefits of variable switching in models have been demonstrated with a literature problem.
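The steady-state initialisation via a non-linear algebraic equation (NLAE) solver can be pictured with a damped Newton iteration, as in the minimal Python sketch below (DASPII itself is FORTRAN 77, and the example residual function is invented for illustration).

```python
import numpy as np

# Minimal damped Newton iteration for a non-linear algebraic system f(x) = 0,
# of the kind used to find a steady state that then initialises a dynamic run.
def newton_solve(f, x0, tol=1e-10, max_iter=50, eps=1e-7):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x
        # Finite-difference Jacobian
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (f(x + dx) - r) / eps
        step = np.linalg.solve(J, -r)
        # Simple damping: halve the step until the residual norm decreases
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(f(x + lam * step)) >= np.linalg.norm(r):
            lam *= 0.5
        x = x + lam * step
    return x

# Illustrative 2x2 non-linear system (not the DASPII model equations)
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**3 + 1.0])
print(newton_solve(f, [1.0, 1.0]))
```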
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing data imputation results by 3 to 13%.
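A block-structured correlation matrix of the kind exploited above can be assembled as in the following sketch; the block sizes and within-block correlations are hypothetical and the code is not the paper's implementation.

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative block-structured correlation matrix: variables within a block
# share a common correlation rho, variables in different blocks are uncorrelated.
def block_correlation(block_sizes, rhos):
    blocks = []
    for size, rho in zip(block_sizes, rhos):
        C = np.full((size, size), rho)
        np.fill_diagonal(C, 1.0)
        blocks.append(C)
    return block_diag(*blocks)

# Three hypothetical groups of strongly correlated geochemical variables
C = block_correlation([4, 3, 5], [0.8, 0.6, 0.7])
print(C.shape)  # (12, 12)
```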
Abstract:
The thesis is concerned with the electron optical properties of single-polepiece magnetic electron lenses, especially under conditions of extreme polepiece saturation. The electron optical properties are first analysed under conditions of high polepiece permeability. From this analysis, a general idea can be obtained of the important parameters that affect ultimate lens performance. In addition, useful information is obtained concerning the design of improved lenses operating under conditions of extreme polepiece saturation, for example at flux densities of the order of 10 Tesla. It is shown that in a single-polepiece lens, the position and shape of the lens exciting coil play an important role. In particular, the maximum permissible current density in the windings, rather than the properties of the iron, can set a limit to lens performance. This factor was therefore investigated in some detail. The axial field distribution of a single-polepiece lens, unlike that of a conventional lens, is highly asymmetrical. There are therefore two possible physical arrangements of the lens with respect to the incoming electron beam. In general these two orientations will result in different aberration coefficients. This feature has also been investigated in some detail. Single-polepiece lenses are thus considerably more complicated electron-optically than conventional double-polepiece lenses. In particular, the absence of the usual second polepiece causes most of the axial magnetic flux density distribution to lie outside the body of the lens. This can have many advantages in electron microscopy but it creates problems in calculating the magnetic field distribution. In particular, presently available computer programs are liable to be considerably in error when applied to such structures. It was therefore necessary to find independent ways of checking the field calculations. Furthermore, if the polepiece is allowed to saturate, much more calculation is involved since the field distribution becomes a non-linear function of the lens excitation. In searching for optimum lens designs, care was therefore taken to ensure that the coil was placed in the optimum position. If this condition is satisfied there seems to be no theoretical limit to the maximum flux density that can be attained at the polepiece tip. However, under iron saturation conditions, some broadening of the axial field distribution will take place, thereby changing the lens aberrations. Extensive calculations were therefore made to find the minimum spherical and chromatic aberration coefficients. The focal properties of such lens designs are presented and compared with the best conventional double-polepiece lenses presently available.
Abstract:
The mechanism of "Helical Interference" in milled slots is examined and a coherent theory for the geometry of such surfaces is presented. An examination of the relevant literature shows a fragmented approach to the problem owing to its normally destructive nature, so a complete analysis is developed for slots of constant lead, thus giving a united and exact theory for many different setting parameters and a range of cutter shapes. For the first time, a theory is developed to explain the "Interference Surface" generated in variable-lead slots for cylindrical work, and attention is drawn to other practical surfaces, such as cones, where variable leads are encountered. Although generally outside the scope of this work, an introductory analysis of these cases is considered in order to develop the cylindrical theory. Special emphasis is laid upon practical areas where the interference mechanism can be used constructively, and its application as the rake face of a cutting tool is discussed. A theory of rake angle for such cutting tools is given for commonly used planes, and relative variations in calculated rake angle between planes are examined. Practical tests are conducted to validate both the constant-lead and variable-lead theories, and some design improvements to the conventional dividing head are suggested in order to manufacture variable-lead workpieces by use of a "superposed" rotation. A prototype machine is manufactured and its kinematic principle given for both linearly and non-linearly varying superposed rotations. Practical workpieces of the former type are manufactured and compared with analytical predictions, while theoretical curves are generated for non-linear workpieces and then compared with those of linear geometry. Finally, suggestions are made for the application of these principles to the manufacture of spiral bevel gears, using the "Interference Surface" along a cone as the tooth form.
Abstract:
The research concerns the development and application of an analytical computer program, SAFE-ROC, that models the material behaviour and structural behaviour of a slender reinforced concrete column that is part of an overall structure and is subjected to elevated temperatures as a result of exposure to fire. The analysis approach used in SAFE-ROC is non-linear. Computer calculations are used that take account of restraint and continuity, and of the interaction of the column with the surrounding structure during the fire. Within a given time step an iterative approach is used to find a deformed shape for the column which results in equilibrium between the forces associated with the external loads and the internal stresses and degradation. Non-linear geometric effects are taken into account by updating the geometry of the structure during deformation. The structural response program SAFE-ROC includes a total strain model which takes account of the compatibility of strain due to temperature and loading. The total strain model represents a constitutive law that governs the material behaviour of concrete and steel. The material behaviour models employed for concrete and steel take account of the dimensional changes caused by the temperature differentials and of the changes in material mechanical properties with changes in temperature. Non-linear stress-strain laws are used that take account of loading to a strain greater than that corresponding to the peak stress of the concrete stress-strain relation, and that model the inelastic deformation associated with unloading of the steel stress-strain relation. The cross-section temperatures caused by the fire environment are obtained by a preceding non-linear thermal analysis performed with the computer program FIRES-T.
Abstract:
This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, both from a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and the Lorenz 96 models, is run and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a Variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
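The Lorenz 63 system used as one of the two non-linear test models is standard; a minimal integration sketch with the usual chaotic parameter values is given below (the time step and initial condition are illustrative, not those of the thesis experiments).

```python
import numpy as np

# Classic Lorenz 63 system, a common non-linear test bed for data assimilation.
def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):               # integrate a short trajectory
    state = rk4_step(lorenz63, state, 0.01)
print(state)
```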
Abstract:
Purpose: The paper aims to explore the nature and purpose of higher education (HE) in the twenty-first century, focussing on how it can help fashion a green knowledge-based economy by developing approaches to learning and teaching that are social, networked and ecologically sensitive. Design/methodology/approach: The paper presents a discursive analysis of the skills and knowledge requirements of an emerging green knowledge-based economy, using a range of policy-focussed and academic research literature. Findings: The business opportunities that are emerging as a more sustainable world is developed require knowledge and skills that can capture and move them forward, but in a complex and uncertain world learning needs to be non-linear, creative and emergent. Practical implications: Sustainable learning and the attributes graduates will need to exhibit are prefigured in the activities and learning characterising the work and play facilitated by new media technologies. Social implications: Greater emphasis is required on higher learning understood as the capability to learn, adapt and direct sustainable change; this requires interprofessional co-operation that must utilise the potential of new media technologies to enhance social learning and collective intelligence. Originality/value: The practical relationship between low-carbon economic development, social sustainability and HE learning is based on both normative criteria and actual and emerging projections of economic, technological and skills needs.
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis (DEA), were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each of the phases describes some necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis. The more experienced analyst benefits from a checklist ensuring that important issues are not forgotten. In addition, by the use of a standardized framework, non-parametric assessments will be more reliable, more repeatable, more manageable, faster and less costly. © 2010 Elsevier B.V. All rights reserved.
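Although the COOPER-framework is a methodological guide rather than an algorithm, its operational-models phase typically involves solving DEA programmes such as the input-oriented CCR model. The sketch below is a generic, illustrative formulation with hypothetical data, not part of the framework itself.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA efficiency of unit k, given an input matrix X
# (units x inputs) and an output matrix Y (units x outputs).
def ccr_efficiency(X, Y, k):
    n_units, n_inputs = X.shape
    n_outputs = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n_units)]
    A_ub, b_ub = [], []
    for i in range(n_inputs):        # sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append(np.r_[-X[k, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(n_outputs):       # sum_j lambda_j * y_rj >= y_rk
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[k, r])
    bounds = [(None, None)] + [(0, None)] * n_units
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun  # efficiency score in (0, 1]

# Hypothetical data: 4 units, 2 inputs, 1 identical output
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```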
Abstract:
Background: To determine the pharmacokinetics (PK) of a new i.v. formulation of paracetamol (Perfalgan) in children ≤15 yr of age. Methods: After obtaining written informed consent, children under 16 yr of age were recruited to this study. Blood samples were obtained at 0, 15, 30 min, 1, 2, 4, 6, and 8 h after administration of a weight-dependent dose of i.v. paracetamol. Paracetamol concentration was measured using a validated high-performance liquid chromatographic assay with ultraviolet detection, with a lower limit of quantification (LLOQ) of 900 pg on column and an intra-day coefficient of variation of 14.3% at the LLOQ. Population PK analysis was performed by non-linear mixed-effect modelling using NONMEM. Results: One hundred and fifty-nine blood samples from 33 children aged 1.8–15 yr, weight 13.7–56 kg, were analysed. Data were best described by a two-compartment model. Only body weight as a covariate significantly improved the goodness of fit of the model. The final population models for paracetamol clearance (CL), V1 (central volume of distribution), Q (inter-compartmental clearance), and V2 (peripheral volume of distribution) were: 16.51×(WT/70)^0.75, 28.4×(WT/70), 11.32×(WT/70)^0.75, and 13.26×(WT/70), respectively (CL and Q in litres per hour, WT in kilograms, and V1 and V2 in litres). Conclusions: In children aged 1.8–15 yr, the PK parameters for i.v. paracetamol were not influenced directly by age but by total body weight, which, using allometric size scaling, significantly affected the clearances (CL, Q) and the volumes of distribution (V1, V2).
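The final population models quoted above can be evaluated for a given body weight as in the short sketch below; the 20 kg example weight is hypothetical but lies within the studied 13.7–56 kg range.

```python
# Population PK parameter predictions for i.v. paracetamol from the final
# models quoted in the abstract (WT in kg; CL and Q in L/h; V1 and V2 in L).
def paracetamol_pk(wt_kg):
    scale = wt_kg / 70.0
    return {
        "CL": 16.51 * scale ** 0.75,   # clearance
        "V1": 28.4 * scale,            # central volume of distribution
        "Q": 11.32 * scale ** 0.75,    # inter-compartmental clearance
        "V2": 13.26 * scale,           # peripheral volume of distribution
    }

print(paracetamol_pk(20.0))  # hypothetical 20 kg child
```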
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver that indicate that the additional expressivity does not come at a performance cost when comparing with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
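The interval-inclusion specifications mentioned above can be illustrated with a toy interval-arithmetic type. The Python sketch below ignores directed (outward) rounding, which a sound tool such as PolyPaver must handle, and is not PolyPaver's implementation.

```python
from dataclasses import dataclass

# Toy interval arithmetic: closed intervals with addition, multiplication and
# the inclusion test used in specifications of the form "expr1 included in expr2".
@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, other):
        """True if `other` is included in `self`."""
        return self.lo <= other.lo and other.hi <= self.hi

x = Interval(1.0, 2.0)
y = Interval(-0.5, 0.5)
print(x + y, x * y, Interval(-2.0, 2.0).contains(x * y))
```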
Abstract:
To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, while acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup. The analysed approaches intelligently explore the weight-space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) acquires a greater spread in the objective space.
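The linear scalarization that the paper takes as its starting point can be written in a few lines; the weights and reward values below are hypothetical, and the sketch does not include the proposed adaptive weight algorithm.

```python
import numpy as np

# Linear scalarization of a multi-objective reward vector into a single value.
def linear_scalarize(rewards, weights):
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # keep the weights on the simplex
    return float(weights @ rewards)

# A bi-objective reward (e.g. tracking quality vs. energy use) under two weightings
print(linear_scalarize([0.7, 0.2], [0.5, 0.5]))
print(linear_scalarize([0.7, 0.2], [0.2, 0.8]))
```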