865 results for structural path modelling
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research, development and routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on the mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods were focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that modelling of binding reaction kinetics provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay sample can be very significant; for example, even a small elevation of a heart muscle marker reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data using only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. The methods have been compared with experimental data and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
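As an illustration of the mechanistic assay model described above, the following is a minimal sketch (not the authors' code) of 1:1 binding kinetics, in which the bound complex forms over time from association and dissociation; all rate constants and concentrations are assumed example values.

```python
# Minimal sketch of 1:1 bioaffinity binding kinetics:
# d[AB]/dt = k_on*[A]*[B] - k_off*[AB], with mass conservation.
# All rate constants and concentrations are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k_on = 1e5            # association rate constant (1/(M*s)), assumed
k_off = 1e-3          # dissociation rate constant (1/s), assumed
A0, B0 = 1e-9, 1e-8   # initial analyte and binder concentrations (M), assumed

def bound_rate(t, y):
    ab = y[0]
    a, b = A0 - ab, B0 - ab            # free analyte and binder
    return [k_on * a * b - k_off * ab]

sol = solve_ivp(bound_rate, (0.0, 3600.0), [0.0],
                t_eval=np.linspace(0.0, 3600.0, 200))
print(sol.y[0][-1])  # bound complex concentration as it approaches equilibrium
```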
Abstract:
This work presents recent results concerning a design methodology used to estimate the positioning deviation of a gantry (Cartesian) manipulator, related mainly to structural elastic deformation of components under operational conditions. The case-study manipulator is classified as gantry type and its basic dimensions are 1.53 m x 0.97 m x 1.38 m. The dimensions used for the calculation of the effective workspace due to end-effector path displacement are 1 m x 0.5 m x 0.5 m. The manipulator is composed of four basic modules, defined as module X, module Y, module Z and the terminal arm, to which the end-effector is connected. Each module's controlled axis performs a linear-parabolic positioning movement. The path-planning algorithm takes the maximum velocity and the total distance as input parameters for a given task. The acceleration and deceleration times are equal. The Denavit-Hartenberg parameterization method is used in the manipulator kinematics model. The gantry manipulator can be modeled as four rigid bodies with three degrees of freedom in translational movement, connected as an open kinematic chain. Dynamic analyses were performed considering the specification of inertial parameters such as component mass, inertia and center-of-gravity position of each module. These parameters are essential for correct dynamic modelling of the manipulator, owing to the multiple possibilities of motion and the manipulation of objects with different masses. The dynamic analysis consists of a mathematical modelling of the static and dynamic interactions among the modules. The computation of the structural deformations uses the finite element method (FEM).
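As an illustration of the linear-parabolic (trapezoidal velocity) positioning movement described above, here is a minimal sketch, not taken from the cited work, that derives the ramp and cruise phases from the maximum velocity and total distance; the acceleration value is an assumed parameter.

```python
# Minimal sketch of a linear-parabolic (trapezoidal velocity) profile,
# taking maximum velocity and total distance as inputs, with equal
# acceleration and deceleration times. The acceleration value is assumed.
def trapezoidal_profile(distance, v_max, accel):
    t_ramp = v_max / accel                  # acceleration (= deceleration) time
    d_ramp = 0.5 * accel * t_ramp ** 2      # distance covered in each ramp
    if 2.0 * d_ramp >= distance:            # triangular profile: v_max not reached
        t_ramp = (distance / accel) ** 0.5
        return t_ramp, 0.0, 2.0 * t_ramp
    t_cruise = (distance - 2.0 * d_ramp) / v_max
    return t_ramp, t_cruise, 2.0 * t_ramp + t_cruise

# Example: 1 m stroke at 0.5 m/s with an assumed 1 m/s^2 acceleration
print(trapezoidal_profile(1.0, 0.5, 1.0))   # (ramp time, cruise time, total time)
```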
Abstract:
Harrod under analysis: path-dependence, historical time and endogenous structural change. The article aims to demonstrate how Harrod's approach (1937, 1938, 1948) can offer theoretical elements for a complex, historicist and non-determinist view of the economic system. Relaxing the hypothesis of a constant warranted growth rate makes it possible for the system to undergo endogenous qualitative change. This results in the notions of path-dependence and historical time. Through the endogenization of expectations and the existence of turning-point mechanisms, this approach allows a synthesis between non-convergence and economic regulation.
Abstract:
The primary aim of the present study is to acquire a large amount of gravity data, to prepare gravity maps and to interpret the data in terms of crustal structure below the Bavali shear zone and adjacent regions of northern Kerala. Gravity modeling is basically a tool to obtain knowledge of the subsurface extension of the exposed geological units and their structural relationship with the surroundings. The study is expected to throw light on the nature of the shear zone, the crustal configuration below the high-grade granulite terrain and the tectonics operating during geological times in the region. The Bavali shear is manifested in the gravity profiles by a steep gravity gradient. The gravity models indicate that the Bavali shear coincides with a steep plane that separates two contrasting crustal densities extending beyond a depth of 30 km, possibly down to the Moho, justifying its interpretation as a mantle fault. It is difficult to construct a generalized model of crustal evolution in terms of its varied manifestations using only the gravity data. However, the data constrain several aspects of crustal evolution and provide insights into some of the major events.
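As a rough illustration of how a crustal density contrast translates into a gravity anomaly, the following sketch uses the standard infinite Bouguer slab approximation; it is not the modelling used in the study, and the density contrast and thickness are assumed example values.

```python
# Illustrative sketch (not from the cited study): the infinite Bouguer slab
# approximation, delta_g = 2*pi*G*delta_rho*h, shows how a crustal density
# contrast maps into a gravity anomaly. Values below are assumed examples.
import math

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
delta_rho = 50.0   # density contrast between crustal blocks (kg/m^3), assumed
h = 30_000.0       # vertical extent of the contrast (m), assumed

delta_g = 2.0 * math.pi * G * delta_rho * h   # anomaly in m/s^2
print(delta_g * 1e5, "mGal")                  # 1 mGal = 1e-5 m/s^2
```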
Abstract:
Three-dimensional (3D) composites are strong contenders for structural applications in the aerospace, aircraft and automotive industries, where multidirectional thermal and mechanical stresses exist. The presence of reinforcement along the thickness direction in 3D composites increases the through-thickness stiffness and strength properties. 3D preforms can be manufactured with numerous complex architecture variations to meet the needs of specific applications. For hot-structure applications, carbon-carbon (C-C) composites are generally used, and their property variation with respect to temperature is essential for the design of hot structures. The thermomechanical behaviour of 3D composites is not fully understood and reported. The present study deals with a methodology to find the thermomechanical properties of 3D woven, 3D 4-axis braided and 3D 5-axis braided composites through analytical modelling from Representative Unit Cells (RUCs), based on constitutive equations for 3D composites. High-temperature unidirectional (UD) carbon-carbon material properties have been evaluated using analytical methods, viz. the Composite Cylinder Assemblage model and the Method of Cells, based on experiments carried out on carbon-carbon fabric composites over a temperature range of 300 K to 2800 K. These properties have been used for evaluating the 3D composite properties. From among the existing solution sequences for 3D composites, the "3D Composite Strength Model" has been identified as the most suitable method. For the generation of the material properties of the RUCs of 3D composites, software has been developed using MATLAB. Correlation of the analytically determined properties with test results available in the literature has been established. Parametric studies on the variation of all the thermomechanical constants for different 3D preforms of carbon-carbon material have been carried out, and selection criteria have been formulated for their application to hot structures. A procedure for the structural design of hot structures made of 3D carbon-carbon composites has been established through numerical investigations on a nosecap. Nonlinear transient thermal and nonlinear transient thermo-structural analyses of the nosecap have been carried out using the finite element software NASTRAN. Failure indices have been established for the identified preforms; identification of a suitable 3D composite based on parametric studies on strength properties, and recommendation of this material for the nosecap of an RLV based on structural performance, have been carried out in this study. Based on the 3D failure theory, the best preform for the nosecap has been identified as the 4-axis 15° braided composite.
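As a simplified illustration of how unidirectional composite properties can be estimated from constituent properties, the sketch below uses a basic rule-of-mixtures (Voigt/Reuss) estimate; this is a much simpler stand-in for the Composite Cylinder Assemblage model and Method of Cells used in the study, and all property values are assumed.

```python
# Simplified illustration only: a rule-of-mixtures estimate for UD composite
# stiffness, a stand-in for the Composite Cylinder Assemblage / Method of
# Cells homogenisation used in the study. Property values are assumed.
def ud_moduli(E_f, E_m, V_f):
    """Longitudinal (Voigt) and transverse (Reuss) moduli of a UD ply."""
    V_m = 1.0 - V_f
    E1 = E_f * V_f + E_m * V_m            # parallel rule: fibre-dominated
    E2 = 1.0 / (V_f / E_f + V_m / E_m)    # series rule: matrix-dominated
    return E1, E2

# Assumed example: carbon fibre and carbon matrix moduli (GPa), 55% fibre volume
print(ud_moduli(E_f=230.0, E_m=15.0, V_f=0.55))
```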
Abstract:
This lab follows the 'System Design' lectures: http://www.edshare.soton.ac.uk/6280/, http://www.edshare.soton.ac.uk/9653/ and http://www.edshare.soton.ac.uk/9713/. Students use Visual Paradigm for UML to build class models through project examples: Aircraft Manufacturing Company, Library, Plant Nursery.
Abstract:
The aim of this thesis is to narrow the gap between two different control techniques: continuous control and discrete event system (DES) control. This gap can be reduced by the study of hybrid systems, and by interpreting the majority of large-scale systems as hybrid systems. In particular, when looking deeply into a process, it is often possible to identify interaction between discrete and continuous signals. Hybrid systems are systems that have both continuous and discrete signals. Continuous signals are generally assumed to be continuous and differentiable in time, whereas discrete signals are neither continuous nor differentiable in time owing to their abrupt changes. Continuous signals often represent measurements of natural physical magnitudes such as temperature, pressure, etc. Discrete signals are normally artificial signals, generated by human artefacts, such as current, voltage, light, etc. Typical processes modelled as hybrid systems are production systems, chemical processes, or continuous production in which time and continuous measures interact with the transport and stock inventory system. Complex systems such as manufacturing lines are hybrid in a global sense: they can be decomposed into several subsystems and their links. Another motivation for the study of hybrid systems is the tools developed in other research domains. These tools benefit from the use of temporal logic for the analysis of several properties of hybrid system models, and use it to design systems and controllers that satisfy physical or imposed restrictions. This thesis is focused on particular types of systems with discrete and continuous signals in interaction, which can model hard non-linearities, such as hysteresis, jumps in the state, limit cycles, etc., and their possible non-deterministic future behaviour expressed by an interpretable model description. The hybrid systems treated in this work are systems with several discrete states, always fewer than thirty (beyond that the problem can become NP-hard), and continuous dynamics evolving according to expressions with Ki ∈ Rn constant vectors or matrices acting on the state vector X. In several states the continuous evolution can have Ki = 0. In this formulation, the mathematics can express time-invariant linear systems. By using this expression for a local part, the combination of several local linear models can represent non-linear systems, and through interaction with the discrete events of the system the model can compose non-linear hybrid systems. Multistage processes with fast continuous dynamics, in particular, are well represented by the proposed methodology. State vectors with more than two components, i.e. third-order models or higher, are well approximated by the proposed approach. Flexible belt transmissions, chemical reactions with an initial start-up phase, and mobile robots with significant friction are examples of physical systems which profit from the accuracy of the proposed methodology. The motivation of this thesis is to obtain a solution that can control and drive hybrid systems from the origin or starting point to the goal. How to obtain this solution, and which is the best solution in terms of a cost function subject to the physical restrictions and control actions, is analysed. Hybrid systems that have several possible states, different ways to drive the system to the goal, and different continuous control signals pose the problems that motivate this research.
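As a minimal illustration of a hybrid system with discrete states, constant-rate continuous dynamics and hysteresis-type switching, consider the sketch below; it is a generic toy example, not the thesis's model, and all numerical values are assumed.

```python
# Minimal illustrative sketch (not the thesis's model): a two-state hybrid
# system with hysteresis, where each discrete state has a simple constant-rate
# continuous dynamic (x_dot = K_i). All numerical values are assumed.
def simulate(x0=18.0, t_end=600.0, dt=0.1):
    K = {"heating": +0.05, "cooling": -0.03}   # x_dot in each discrete state
    low, high = 18.0, 22.0                     # hysteresis thresholds (assumed)
    state, x, t = "heating", x0, 0.0
    trace = []
    while t < t_end:
        x += K[state] * dt                     # continuous evolution in the state
        if state == "heating" and x >= high:   # guard: switch at upper threshold
            state = "cooling"
        elif state == "cooling" and x <= low:  # guard: switch at lower threshold
            state = "heating"
        trace.append((t, state, x))
        t += dt
    return trace

print(simulate()[-1])
```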
The requirements of the system on which we work are: a model that can represent the behaviour of non-linear systems and that enables the prediction of the possible future behaviour of the model, in order to apply a supervisor which decides the optimal and safe action to drive the system toward the goal. Specific problems that can be addressed by the use of this kind of hybrid model are: the unity of the order of the models; controlling the system along a reachable path; controlling the system along a safe path; optimising the cost function; and modularity of control. The proposed model solves the specified problems in the switching-model problem, the initial-condition calculus and the unity of the order of the models. Continuous and discrete phenomena are represented in linear hybrid models, defined by an eight-tuple of parameters to model different types of hybrid phenomena. Applying a transformation over the state vector, for an LTI system we obtain from a two-dimensional state space a single parameter, alpha, which still maintains the dynamical information. Combining this parameter with the system output, a complete description of the system is obtained in the form of a graph in polar representation. A Takagi-Sugeno type III fuzzy model, which includes a linear time-invariant (LTI) model for each local model, is used; the fuzzification of the different LTI local models gives as a result a non-linear time-invariant model. In our case the output and the alpha measure govern the membership functions. Hybrid system control is a huge task: the process needs to be guided from the starting point to the desired end point, passing through different specific states and points of the trajectory. The system can be structured in different levels of abstraction, and the control of hybrid systems in three layers, from planning the process to producing the actions: the planning, process and control layers. In this case the algorithms will be applied to robotics, a domain where improvements are well accepted, and where simple repetitive processes are expected for which the extra effort in complexity can be compensated by cost reductions. It may also be interesting to apply some control optimisation to processes such as fuel injection, DC-DC converters, etc. In order to apply the RW theory of discrete event systems to a hybrid system, we must abstract the continuous signals and project the events generated by these signals, to obtain new sets of observable and controllable events. Ramadge and Wonham's theory, along with the TCT software, gives a controllable sublanguage of the legal language generated for a discrete event system (DES). Continuous abstraction transforms predicates over continuous variables into controllable or uncontrollable events, and modifies the sets of uncontrollable, controllable, observable and unobservable events. Continuous signals produce virtual events in the system when they cross the bound limits. If such an event is deterministic, it can be projected. It is necessary to determine the controllability of this event in order to assign it to the corresponding set of controllable, uncontrollable, observable or unobservable events. Finding optimal trajectories that minimise some cost function is the goal of the modelling procedure. A mathematical model of the system allows the user to apply mathematical techniques over this expression: to minimise a specific cost function, to obtain optimal controllers and to approximate a specific trajectory.
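As a generic illustration of Takagi-Sugeno-style blending of local LTI models by membership functions, the following sketch (with assumed local models and a simple triangular membership on a scheduling variable) shows how the local dynamics are combined; it is not the thesis's implementation.

```python
# Illustrative sketch only (assumed values, not the thesis's implementation):
# Takagi-Sugeno-style blending of local LTI models x_dot = A_i x + B_i u,
# where normalised membership weights combine the local dynamics.
import numpy as np

A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),   # local model 1 (assumed)
     np.array([[0.0, 1.0], [-4.0, -0.2]])]   # local model 2 (assumed)
B = [np.array([0.0, 1.0]), np.array([0.0, 2.0])]

def memberships(y):
    """Simple triangular memberships on the scheduling variable (e.g. output)."""
    w1 = max(0.0, min(1.0, 1.0 - abs(y) / 2.0))
    w = np.array([w1, 1.0 - w1])
    return w / w.sum()

def blended_derivative(x, u, y):
    w = memberships(y)
    return sum(w[i] * (A[i] @ x + B[i] * u) for i in range(2))

print(blended_derivative(np.array([0.5, 0.0]), u=1.0, y=0.5))
```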
The combination of dynamic programming with Bellman's principle of optimality gives us the procedure to solve the minimum-time trajectory problem for hybrid systems. The problem becomes harder when there is interaction between adjacent states. In hybrid systems the problem is to determine the partial set points to be applied to the local models. An optimal controller can be implemented in each local model in order to ensure minimisation of the local costs. The solution of this problem must give us the trajectory the system is to follow, a trajectory marked by a set of set points that force the system to pass through them. Several ways are possible to drive the system from the starting point Xi to the end point Xf. Different ways are of interest in terms of dynamics, minimum number of states, approximation to set points, etc. These ways need to be safe, viable and reachable (RchW), and only one of them must be applied, normally the best one, which minimises the proposed cost function. The reachable ways, meaning those that are controllable and safe, are evaluated in order to determine which one minimises the cost function. The contribution of this work is a complete framework for working with the majority of hybrid systems; the procedures to model, control and supervise are defined and explained, and their use is demonstrated. Also explained is the procedure for modelling the systems to be analysed for automatic verification. Great improvements were obtained by using this methodology in comparison with other piecewise-linear approximations, and it is demonstrated that in particular cases this methodology provides the best approximation. The most important contribution of this work is the alpha approximation for non-linear systems with fast dynamics: while this kind of process is not typical, in such cases the alpha approximation is the best linear approximation to use and gives a compact representation.
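As a minimal illustration of Bellman-style dynamic programming over discrete states, the sketch below runs value iteration on a small, assumed transition-cost graph to obtain the minimum-cost way from a start state to the goal; it is not the thesis's algorithm.

```python
# Minimal sketch of Bellman-style dynamic programming on a small, assumed
# discrete-state graph: value iteration over transition costs to find the
# minimum-cost (e.g. minimum-time) way from a start state to the goal.
import math

# Assumed transition costs between abstract discrete states (not from the thesis)
cost = {
    "start": {"s1": 2.0, "s2": 5.0},
    "s1":    {"s2": 1.0, "goal": 6.0},
    "s2":    {"goal": 2.0},
    "goal":  {},
}

value = {s: math.inf for s in cost}
value["goal"] = 0.0
for _ in range(len(cost)):                       # Bellman backups until stable
    for s, successors in cost.items():
        for s_next, c in successors.items():
            value[s] = min(value[s], c + value[s_next])

print(value["start"])   # optimal cost-to-go from the start state (5.0 here)
```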
Abstract:
This paper describes the results and conclusions of the INCA (Integrated Nitrogen Model for European CAtchments) project and sets the findings in the context of the ELOISE (European Land-Ocean Interaction Studies) programme. The INCA project was concerned with the development of a generic model of the major factors and processes controlling nitrogen dynamics in European river systems, thereby providing a tool (a) to aid the scientific understanding of nitrogen transport and retention in catchments and (b) for river-basin management and policy-making. The findings of the study highlight the heterogeneity of the factors and processes controlling nitrogen dynamics in freshwater systems. Nonetheless, the INCA model was able to simulate the in-stream nitrogen concentrations and fluxes observed at annual and seasonal timescales in Arctic, Continental and Maritime-Temperate regimes. This result suggests that the data requirements and structural complexity of the INCA model are appropriate to simulate nitrogen fluxes across a wide range of European freshwater environments. This is a major requirement for the production of coupled river-estuary-coastal shelf models for the management of our aquatic environment. With regard to river-basin management, to achieve an efficient reduction in nutrient fluxes from the land to the estuarine and coastal zone, the model simulations suggest that management options must be adaptable to the prevailing environmental and socio-economic factors in individual catchments: 'blanket approaches' to environmental policy appear too simple. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. We outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk of water resource stress will increase with climate change.
Abstract:
Recent coordinated observations of interplanetary scintillation (IPS) from EISCAT, MERLIN and STELab, together with stereoscopic white-light imaging from the two heliospheric imagers (HIs) onboard the twin STEREO spacecraft, make it possible to continuously track the propagation and evolution of solar eruptions throughout interplanetary space. In order to obtain a better understanding of the observational signatures in these two remote-sensing techniques, the magnetohydrodynamics of the macro-scale interplanetary disturbance and the radio-wave scattering of the micro-scale electron-density fluctuation are coupled and investigated using a newly constructed multi-scale numerical model. This model is then applied to a case of an interplanetary shock propagating within the ecliptic plane. The shock can become nearly invisible to an HI once it enters the Thomson-scattering sphere of that HI. The asymmetry in the optical images between the western and eastern HIs indicates shock propagation off the Sun–Earth line. Meanwhile, an IPS signal, strongly dependent on the local electron density, is insensitive to the density cavity far downstream of the shock front. When this cavity (or the shock nose) is cut through by an IPS ray-path, a single speed component at the flank (or the nose) of the shock can be recorded; when an IPS ray-path penetrates the sheath between the shock nose and this cavity, two speed components, at the sheath and the flank, can be detected. Moreover, once a shock front touches an IPS ray-path, the derived position and speed at the irregularity source of this IPS signal, together with an assumption of radial propagation at constant speed, can be used to estimate the later appearance of the shock front in the elongation of the HI field of view. The results of synthetic measurements from forward modelling are helpful in inferring the in-situ properties of coronal mass ejections from real observational data via an inverse approach.
Abstract:
In this paper we examine the order of integration of EuroSterling interest rates by employing techniques that can allow for a structural break under the null and/or alternative hypothesis of the unit-root tests. In light of these results, we investigate the cointegrating relationship implied by the single, linear expectations hypothesis of the term structure of interest rates employing two techniques, one of which allows for the possibility of a break in the mean of the cointegrating relationship. The aim of the paper is to investigate whether or not the interest rate series can be viewed as I(1) processes and furthermore, to consider whether there has been a structural break in the series. We also determine whether, if we allow for a break in the cointegration analysis, the results are consistent with those obtained when a break is not allowed for. The main results reported in this paper support the conjecture that the ‘short’ Euro-currency rates are characterised as I(1) series that exhibit a structural break on or near Black Wednesday, 16 September 1992, whereas the ‘long’ rates are I(1) series that do not support the presence of a structural break. The evidence from the cointegration analysis suggests that tests of the expectations hypothesis based on data sets that include the ERM crisis period, or a period that includes a structural break, might be problematic if the structural break is not explicitly taken into account in the testing framework.
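As a hedged sketch of the kind of unit-root testing described, the following applies a standard ADF test and a Zivot-Andrews-type test allowing for one structural break to a synthetic series; the data are invented for illustration, and the statsmodels functions shown are assumed to be available with these signatures.

```python
# Hedged sketch: ADF unit-root test and a Zivot-Andrews test allowing one
# structural break, applied to a synthetic interest-rate-like series.
# Data are assumed; statsmodels is assumed to expose these functions as shown.
import numpy as np
from statsmodels.tsa.stattools import adfuller, zivot_andrews

rng = np.random.default_rng(0)
n = 400
rate = np.cumsum(rng.normal(0.0, 0.05, n)) + 8.0   # I(1)-like random walk
rate[250:] -= 1.5                                  # level shift mimicking a break

adf_stat, adf_p, *_ = adfuller(rate, regression="c")
za_stat, za_p, _, _, break_idx = zivot_andrews(rate, regression="c")
print(f"ADF p-value: {adf_p:.3f}")
print(f"Zivot-Andrews p-value: {za_p:.3f}, estimated break at index {break_idx}")
```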
Abstract:
The organization of non-crystalline polymeric materials at a local level, namely on a spatial scale between a few and 100 Å, is still unclear in many respects. The determination of the local structure in terms of the configuration and conformation of the polymer chain, and of the packing characteristics of the chain in the bulk material, represents a challenging problem. Data from wide-angle diffraction experiments are very difficult to interpret due to the very large amount of information that they carry, that is, the large number of correlations present in the diffraction patterns. We describe new approaches that permit a detailed analysis of the complex neutron diffraction patterns characterizing polymer melts and glasses. The coupling of different computer modelling strategies with neutron scattering data over a wide Q range allows the extraction of detailed quantitative information on the structural arrangements of the materials of interest. Proceeding from modelling routes as diverse as force field calculations, single-chain modelling and reverse Monte Carlo, we show the successes and pitfalls of each approach in describing model systems, which illustrate the need to attack the data analysis problem simultaneously from several fronts.
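As a generic illustration of the reverse Monte Carlo idea mentioned above, the sketch below shows a single Metropolis-style move that accepts coordinate changes according to their effect on the chi-squared agreement with measured data; the structure-factor and data functions are stand-in assumptions, and this is not the authors' code.

```python
# Generic illustration (not the authors' code) of a reverse Monte Carlo step:
# random particle moves are accepted when they improve, or do not unduly
# worsen, the chi-squared agreement between a model structure factor and the
# measured data. compute_sq and data_sq are stand-in assumptions.
import numpy as np

rng = np.random.default_rng(1)

def chi_squared(model_sq, data_sq, sigma=0.01):
    return float(np.sum((model_sq - data_sq) ** 2) / sigma ** 2)

def rmc_step(coords, data_sq, compute_sq, max_move=0.1):
    """One RMC move: displace a random particle, accept with a Metropolis rule."""
    trial = coords.copy()
    i = rng.integers(len(coords))
    trial[i] += rng.uniform(-max_move, max_move, size=3)
    old = chi_squared(compute_sq(coords), data_sq)
    new = chi_squared(compute_sq(trial), data_sq)
    if new <= old or rng.random() < np.exp(-(new - old) / 2.0):
        return trial, new
    return coords, old
```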
Abstract:
Determination of the local structure of a polymer glass by scattering methods is complex due to the number of spatial and orientational correlations, both from within the polymer chain (intrachain) and between neighbouring chains (interchain), from which the scattering arises. Recently considerable advances have been made in the structural analysis of relatively simple polymers such as poly(ethylene) through the use of broad Q neutron scattering data tightly coupled to atomistic modelling procedures. This paper presents the results of an investigation into the use of these procedures for the analysis of the local structure of a-PMMA which is chemically more complex with a much greater number of intrachain structural parameters. We have utilised high quality neutron scattering data obtained using SANDALS at ISIS coupled with computer models representing both the single chain and bulk polymer system. Several different modelling approaches have been explored which encompass such techniques as Reverse Monte Carlo refinement and energy minimisation and their relative merits and successes are discussed. These different approaches highlight structural parameters which any realistic model of glassy atactic PMMA must replicate.
Abstract:
Stereoscopic white-light imaging of a large portion of the inner heliosphere has been used to track interplanetary coronal mass ejections. At large elongations from the Sun, the white-light brightness depends on both the local electron density and the efficiency of the Thomson-scattering process. To quantify the effects of the Thomson-scattering geometry, we study an interplanetary shock using forward magnetohydrodynamic simulation and synthetic white-light imaging. Identifiable as an inclined streak of enhanced brightness in a time–elongation map, the travelling shock can be readily imaged by an observer located within a wide range of longitudes in the ecliptic. Different parts of the shock front contribute to the imaged brightness pattern viewed by observers at different longitudes. Moreover, even for an observer located at a fixed longitude, a different part of the shock front will contribute to the imaged brightness at any given time. The observed brightness within each imaging pixel results from a weighted integral along its corresponding ray-path. It is possible to infer the longitudinal location of the shock from the brightness pattern in an optical sky map, based on the east–west asymmetry in its brightness and degree of polarisation. Therefore, measurement of the interplanetary polarised brightness could significantly reduce the ambiguity in performing three-dimensional reconstruction of local electron density from white-light imaging.
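As a schematic statement of the weighted ray-path integral described above (notation assumed, not taken from the paper), the brightness recorded in an imaging pixel can be written as

\[
B \;\approx\; \int_{\mathrm{ray\text{-}path}} n_e(\mathbf{r})\, \sigma_T\, G(\mathbf{r}, \chi)\, \mathrm{d}l ,
\]

where \(n_e\) is the local electron density, \(\sigma_T\) the Thomson-scattering cross-section, and \(G(\mathbf{r}, \chi)\) a geometric weighting factor depending on position and scattering angle \(\chi\); it is this weighting that encodes the Thomson-scattering geometry.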