967 results for Dynamic behaviour
Abstract:
In this paper we describe the design and fabrication of a mechanical autonomous impact oscillator with a MEMS resonator as the frequency control element. The design has been developed with scalability to large 2-D arrays of coupled oscillators in mind. The dynamic behaviour of the impact oscillator was studied numerically, and it was found that the geometric nonlinearity affects the static pull-in voltage and equilibrium position. The external driving power can alter the frequency of the impact oscillator. The autonomous nature of the oscillator reduces the complexity of the drive circuitry and is essential for large 2-D arrays.
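The kind of dynamics described above can be sketched numerically. The following is a minimal, illustrative integration of a damped, driven oscillator with a rigid stop; all parameter values are arbitrary, not those of the MEMS device:

```python
import math

def impact_oscillator(omega=1.0, zeta=0.05, force=0.5, drive=1.2,
                      gap=0.3, restitution=0.8, dt=1e-3, t_end=200.0):
    """Semi-implicit Euler integration of a driven oscillator with a rigid
    stop at x = gap; on each impact the velocity is reversed and scaled by
    the coefficient of restitution. Returns the number of impacts."""
    x, v, t, impacts = 0.0, 0.0, 0.0, 0
    while t < t_end:
        a = force * math.cos(drive * t) - 2 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        if x >= gap and v > 0:      # collision with the stop
            x = gap
            v = -restitution * v
            impacts += 1
        t += dt
    return impacts

print(impact_oscillator())  # number of impacts over the run
```

With no drive the mass never reaches the stop, so the impact count is zero; near resonance the steady-state amplitude exceeds the gap and repeated impacts occur.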
Abstract:
A study has been undertaken of the vapor-phase adsorptive separation of n-alkanes from Kuwait kerosene (Kuwait National Petroleum Company, heavy kerosene) using zeolite molecular sieves. Owing to the shortage of information on the adsorption of multicomponent systems in the open literature, the present investigation was initiated to study the effect of feed flowrate, temperature, and zeolite particle size on the height of the mass transfer zone (MTZ) and the dynamic capacity of the adsorbent for multicomponent n-alkane adsorption on a fixed bed of zeolite type-5A. The optimum operating conditions for separation of the n-alkanes have been identified so that the effluent would also be of marketable quality. The effect of multicycle adsorption-desorption stages on the dynamic behaviour of the zeolite using steam as a desorbing agent has been studied and compared with n-pentane and n-hexane as desorbing agents. The separation process comprised one cycle of adsorption using a fixed bed of zeolite type-5A. The bed was fed with vaporized kerosene until saturation had been achieved, whereby the n-alkanes were adsorbed and the denormalized material eluted. The adsorption-desorption process was carried out isobarically at one atmosphere. A mathematical model has been developed to predict the breakthrough time using the method of characteristics. The results were in reasonable agreement with the experimental values. This model has also been utilized to develop the equilibrium isotherm. Optimum operating conditions were achieved at a feed flowrate of 33.33 × 10⁻⁹ m³/s, a temperature of 643 K, and a particle size of (1.0–2.0) × 10⁻³ m. This yielded an HMTZ of 0.206 m and a dynamic capacity of 9.653 × 10⁻² kg n-alkanes/kg zeolite. These data will serve as a basis for the design of a commercial plant. The purity of the liquid-paraffin product desorbed using steam was 83.24 wt%.
The dynamic capacity was observed to decrease sharply with cycle number when the zeolite was not reactivated between cycles, whereas it remained unchanged with intermediate reactivation. Normal hexane was found to be the best desorbing agent, with an efficiency of 88.2%.
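For illustration, quantities like the MTZ height and dynamic capacity quoted above can be estimated from breakthrough data with simple constant-pattern relations; the function names and all numbers below are illustrative, not the thesis data:

```python
def mtz_height(bed_length_m, t_breakthrough_s, t_saturation_s):
    """Approximate height of the mass-transfer zone from breakthrough and
    saturation times (simple constant-pattern approximation)."""
    return bed_length_m * (t_saturation_s - t_breakthrough_s) / t_saturation_s

def dynamic_capacity(feed_flow_m3s, feed_conc_kg_m3, t_breakthrough_s, adsorbent_kg):
    """kg of adsorbate retained per kg of adsorbent up to breakthrough,
    assuming complete uptake of the feed before breakthrough."""
    return feed_flow_m3s * feed_conc_kg_m3 * t_breakthrough_s / adsorbent_kg

# Illustrative numbers only:
print(round(mtz_height(0.5, 3000.0, 4000.0), 3))                 # 0.125 m
print(round(dynamic_capacity(33.33e-9, 50.0, 3000.0, 0.05), 5))  # ~0.1 kg/kg
```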
Abstract:
The increasing demand for high-capacity data storage requires decreasing the head-to-tape gap and reducing the track width. A problem very often encountered is the development of adhesive debris on the heads at low humidity and high temperatures, which can lead to an increase in the spacing between the head and the media and thus a decrease in the playback signal. The influence of stains on the playback signal of read heads is studied using RAW (Read After Write) tests, and their influence on the wear of the heads is studied using an indentation technique. The playback signal has been found to vary, and the errors to increase, as stains form a patchy pattern and grow in size to form a continuous layer. The indentation technique shows that stains reduce the wear rate of the heads. In addition, the wear tends to be more pronounced at the leading edge of the head than at the trailing one. Chemical analysis of the stains, using ferrite samples in conjunction with MP (metal particulate) tapes, shows that stains contain iron particles and polymeric binder transferred from the MP tape. The chemical anchors in the binder, used to grip the iron particles, react with the ferrite surface to create strong chemical bonds. At high humidity, a thin layer of iron oxyhydroxide forms on the surface of the ferrite. This soft material increases the wear rate and so reduces the amount of stain present on the heads. The stability of the binder under high humidity and high temperature, as well as the chemical reactions that might occur on the ferrite poles of the heads, influence the dynamic behaviour of stains. A model of stain formation is proposed that takes into account the channels of binder degradation and evolution under different environmental conditions.
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (fault removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness of the system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems.
Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies `deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples were taken to show how the tool works in the early design phase for fault prevention before the program is ever run.
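The reachability analysis such a tool performs can be sketched as a breadth-first search over markings; this is a generic Petri-net sketch, not the thesis implementation. The example models two processes acquiring two shared resources in opposite order, a classic deadlock:

```python
from collections import deque

def reachability(marking, transitions):
    """Breadth-first construction of the reachability set of a Petri net.
    `marking` is a tuple of per-place token counts; each transition is a
    (consume, produce) pair of per-place vectors. Markings with no enabled
    transition are reported as deadlocks."""
    seen, frontier, deadlocks = {marking}, deque([marking]), []
    while frontier:
        m = frontier.popleft()
        enabled = False
        for consume, produce in transitions:
            if all(m[i] >= consume[i] for i in range(len(m))):
                enabled = True
                nxt = tuple(m[i] - consume[i] + produce[i] for i in range(len(m)))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        if not enabled:
            deadlocks.append(m)
    return seen, deadlocks

# Places: (P1_idle, P2_idle, resA, resB, P1_holdsA, P2_holdsB)
trans = [
    ((1, 0, 1, 0, 0, 0), (0, 0, 0, 0, 1, 0)),  # P1 takes A
    ((0, 0, 0, 1, 1, 0), (1, 0, 1, 1, 0, 0)),  # P1 takes B, finishes, releases both
    ((0, 1, 0, 1, 0, 0), (0, 0, 0, 0, 0, 1)),  # P2 takes B
    ((0, 0, 1, 0, 0, 1), (0, 1, 1, 1, 0, 0)),  # P2 takes A, finishes, releases both
]
states, deadlocks = reachability((1, 1, 1, 1, 0, 0), trans)
print(len(states), deadlocks)  # the state where each holds one resource deadlocks
```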
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass-transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
Very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capability and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction.
Multivariable MPC showed more effective performance than the other conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
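The GEAR solver mentioned above belongs to the family of backward-differentiation (BDF) methods suited to stiff systems. A minimal sketch of the first-order member, backward Euler, on an illustrative stiff linear system (not the column model itself):

```python
def backward_euler(A, y0, dt, steps):
    """First-order BDF (backward Euler) for y' = A y with a 2x2 matrix A.
    The implicit step (I - dt*A) y_new = y_old is solved directly with
    Cramer's rule, so stiffness does not restrict the step size."""
    a, b = 1 - dt * A[0][0], -dt * A[0][1]
    c, d = -dt * A[1][0], 1 - dt * A[1][1]
    det = a * d - b * c
    y = list(y0)
    for _ in range(steps):
        y = [(d * y[0] - b * y[1]) / det, (a * y[1] - c * y[0]) / det]
    return y

# Stiff system: eigenvalues -1 and -1000; stable even with dt = 0.1,
# where an explicit method would blow up.
A = [[-1.0, 0.0], [0.0, -1000.0]]
y = backward_euler(A, [1.0, 1.0], 0.1, 100)
print(y)  # both components decay towards zero
```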
Abstract:
The requirement for systems to continue to operate satisfactorily in the presence of faults has led to the development of techniques for the construction of fault-tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems which consist of a set of communicating sequential processes. A method is presented for the `a priori' design of conversations for this class of distributed system. Petri nets are used to represent the state and to solve state-reachability problems for concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state-reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table. By relating the state-change table to process attributes, the method ensures that all necessary processes are included in the conversation. The method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed. The structure of the conversation is defined using the special features of occam. The proposed implementation gives a structure which is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message-passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri-net model, a boundary for a time-out mechanism is proposed which will increase the integrity of a system involving inter-process communications.
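The time-out mechanism argued for above can be illustrated with a bounded-wait receive; this Python thread-and-queue sketch is only an analogy for occam channel communication, not the proposed implementation:

```python
import queue
import threading

def sender(ch, value):
    ch.put(value)

def receive_with_timeout(ch, timeout_s):
    """A receive that fails safely instead of blocking forever: the bounded
    wait prevents the indefinite blocking a lost message would otherwise
    cause in a blocking rendezvous, handing control to a recovery layer."""
    try:
        return ch.get(timeout=timeout_s)
    except queue.Empty:
        return None  # signal a communication failure

ch = queue.Queue()
threading.Thread(target=sender, args=(ch, 42)).start()
print(receive_with_timeout(ch, 1.0))   # 42: the message arrived in time
print(receive_with_timeout(ch, 0.05))  # None: no second message, bounded wait
```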
Abstract:
The analysis and prediction of the dynamic behaviour of structural components plays an important role in modern engineering design. In this work, the so-called "mixed" finite element models based on Reissner's variational principle are applied to the solution of free and forced vibration problems for beam and plate structures. The mixed beam models are obtained by using elements with various shape functions, ranging from simple linear to more complex quadratic and cubic functions. The elements were in general capable of predicting the natural frequencies and dynamic responses with good accuracy. An isoparametric quadrilateral element with 8 nodes was developed for application to thin-plate problems. The element has 32 degrees of freedom (one deflection, two bending moments and one twisting moment per node), which makes it suitable for the discretization of plates with arbitrary geometry. A linear isoparametric element and two non-conforming displacement elements (4-node and 8-node quadrilateral) were extended to the solution of dynamic problems. An auto-mesh generation program was used to facilitate the preparation of the input data required by the 8-node quadrilateral elements of mixed and displacement type. Numerical examples were solved using both the mixed beam and plate elements to predict a structure's natural frequencies and its dynamic response to a variety of forcing functions. The solutions were compared with the available analytical and displacement-model solutions. The mixed elements developed have been found to have significant advantages over conventional displacement elements in the solution of plate-type problems: a dramatic saving in computational time is possible without any loss of solution accuracy. With beam-type problems, there appears to be no significant advantage in using mixed models.
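The natural-frequency predictions above rest on the generalized eigenproblem det(K - ω²M) = 0, which can be illustrated on a tiny discretized model. The matrices below correspond to a fixed-free axial rod with two linear elements and lumped mass (illustrative unit values, not the elements developed in the thesis):

```python
import math

def natural_frequencies_2dof(K, M):
    """Natural frequencies (rad/s) of a 2-DOF system from
    det(K - w^2 M) = 0, expanded to a quadratic in lambda = w^2."""
    a = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    b = -(K[0][0] * M[1][1] + K[1][1] * M[0][0]
          - K[0][1] * M[1][0] - K[1][0] * M[0][1])
    c = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    disc = math.sqrt(b * b - 4 * a * c)
    lams = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(lam) for lam in lams]

# Fixed-free axial rod, two linear elements, lumped mass (E*A/h = 1, rho*A*h = 1):
K = [[2.0, -1.0], [-1.0, 1.0]]
M = [[1.0, 0.0], [0.0, 0.5]]
freqs = natural_frequencies_2dof(K, M)
print(freqs)  # lowest frequency approximates the exact pi/4 for this rod
```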
Abstract:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software.
It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work are focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators, whilst removing any limitation on its potential range of applications. Reference is given to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
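The extensibility argument can be illustrated with a toy event-driven core in which new equipment types are added by subclassing rather than by modifying the engine; the names and structure are hypothetical, not the simulator described in the thesis:

```python
import heapq

class Simulator:
    """Minimal event-driven engine: a time-ordered event queue that user
    classes schedule actions on. Extensions never touch this core."""
    def __init__(self):
        self.clock, self._queue, self._seq = 0.0, [], 0
    def schedule(self, delay, action):
        self._seq += 1  # sequence number breaks ties between equal times
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
    def run(self):
        while self._queue:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

class Machine:
    """A machine that completes each accepted part after a fixed delay;
    subclass and override behaviour to model new equipment types."""
    def __init__(self, sim, process_time):
        self.sim, self.process_time, self.done = sim, process_time, 0
    def accept(self, part):
        self.sim.schedule(self.process_time, self.finish)
    def finish(self):
        self.done += 1

sim = Simulator()
m = Machine(sim, process_time=2.0)
for part in range(3):                       # parts arrive at t = 0, 1, 2
    sim.schedule(part * 1.0, lambda p=part: m.accept(p))
sim.run()
print(m.done, sim.clock)  # 3 parts completed; clock advanced to 4.0
```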
Abstract:
In this paper we study the self-organising behaviour of smart camera networks which use market-based handover of object tracking responsibilities to achieve an efficient allocation of objects to cameras. Specifically, we compare previously known homogeneous configurations, when all cameras use the same marketing strategy, with heterogeneous configurations, when each camera makes use of its own, possibly different marketing strategy. Our first contribution is to establish that such heterogeneity of marketing strategies can lead to system-wide outcomes which are Pareto superior when compared to those possible in homogeneous configurations. However, since the particular configuration required to lead to Pareto efficiency in a given scenario will not be known in advance, our second contribution is to show how online learning of marketing strategies at the individual camera level can lead to high-performing heterogeneous configurations from the system point of view, extending the Pareto front when compared to the homogeneous case. Our third contribution is to show that in many cases, the dynamic behaviour resulting from online learning leads to global outcomes which extend the Pareto front even when compared to static heterogeneous configurations. Our evaluation considers results obtained from an open source simulation package as well as data from a network of real cameras. © 2013 IEEE.
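Pareto efficiency of configurations can be computed with a simple dominance filter; the objective values below are invented purely for illustration:

```python
def pareto_front(points):
    """Return the Pareto-efficient points, assuming higher is better in
    every objective: a point survives if no other point dominates it."""
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (tracking utility, negated communication cost) per configuration:
configs = [(0.9, -5.0), (0.8, -2.0), (0.7, -1.0), (0.6, -4.0)]
print(pareto_front(configs))  # (0.6, -4.0) is dominated by (0.8, -2.0)
```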
Abstract:
Ground-source heat pump (GSHP) systems represent one of the most promising techniques for heating and cooling in buildings. These systems use the ground as a heat source/sink, allowing better efficiency thanks to the low variation of the ground temperature across the seasons. The ground-source heat exchanger (GSHE) thus becomes a key component for optimizing the overall performance of the system. Moreover, the short-term response related to the dynamic behaviour of the GSHE is a crucial aspect, especially from a control standpoint in on/off-controlled GSHP systems. In this context, a novel numerical GSHE model has been developed at the Instituto de Ingeniería Energética, Universitat Politècnica de València. Based on the decoupling of the short-term and long-term responses of the GSHE, the novel model allows the use of faster and more precise models on both sides. In particular, the short-term model considered is the B2G model, developed and validated in previous research conducted at the Instituto de Ingeniería Energética. For the long term, the g-function model was selected, since it is a previously validated and widely used model and presents some interesting features that are useful for its combination with the B2G model. The aim of the present paper is to describe the procedure of combining these two models in order to obtain a single complete GSHE model for both short- and long-term simulation. The resulting model is then validated against experimental data from a real GSHP installation.
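The long-term g-function side of such a model typically superposes heat-load steps weighted by g-function values. A minimal sketch of that temporal superposition (function name and all numbers are illustrative, not from the validated model):

```python
import math

def borehole_wall_temperature(T_ground, loads_W_per_m, g_values, k_ground):
    """Borehole-wall temperature by temporal superposition of heat-load
    steps weighted by g-function values. loads_W_per_m[i] is the load per
    metre of borehole during interval i; g_values[i] is the g-function
    evaluated at the time elapsed since step i began."""
    steps = [loads_W_per_m[0]] + [loads_W_per_m[i] - loads_W_per_m[i - 1]
                                  for i in range(1, len(loads_W_per_m))]
    return T_ground + sum(dq * g for dq, g in zip(steps, g_values)) \
        / (2 * math.pi * k_ground)

# Two load intervals (20 then 35 W/m), illustrative g-values, k = 2 W/(m*K):
print(round(borehole_wall_temperature(14.0, [20.0, 35.0], [6.0, 5.0], 2.0), 2))
```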
Abstract:
Warm intervals within the Pliocene epoch (5.33-2.58 million years ago) were characterized by global temperatures comparable to those predicted for the end of this century (Haywood and Valdes, doi:10.1016/S0012-821X(03)00685-X) and atmospheric CO2 concentrations similar to today (Seki et al., 2010, doi:10.1016/j.epsl.2010.01.037; Bartoli et al., 2011, doi:10.1029/2010PA002055; Pagani et al., 2010, doi:10.1038/ngeo724). Estimates for global sea level highstands during these times (Miller et al., 2012, doi:10.1130/G32869.1) imply possible retreat of the East Antarctic ice sheet, but ice-proximal evidence from the Antarctic margin is scarce. Here we present new data from Pliocene marine sediments recovered offshore of Adélie Land, East Antarctica, that reveal dynamic behaviour of the East Antarctic ice sheet in the vicinity of the low-lying Wilkes Subglacial Basin during times of past climatic warmth. Sedimentary sequences deposited between 5.3 and 3.3 million years ago indicate increases in Southern Ocean surface water productivity, associated with elevated circum-Antarctic temperatures. The geochemical provenance of detrital material deposited during these warm intervals suggests active erosion of continental bedrock from within the Wilkes Subglacial Basin, an area today buried beneath the East Antarctic ice sheet. We interpret this erosion to be associated with retreat of the ice sheet margin several hundreds of kilometres inland and conclude that the East Antarctic ice sheet was sensitive to climatic warmth during the Pliocene.
Abstract:
Efficient energy storage holds the key to reducing wasted energy and enabling the use of advanced handheld electronic devices, hybrid electric vehicles and residential energy storage. Recently, Li-ion batteries have been identified and employed as energy storage devices due to their high gravimetric and volumetric energy densities in comparison to previous technologies. However, more research is required to enhance the efficiency of Li-ion batteries by discovering electrodes with larger electrochemical discharge capacities, while maintaining electrochemical stability. The aims of this study are to develop new microwave-assisted synthesis routes to nanostructured insertion cathodes, which harbor a greater affinity for lithium extraction and insertion than bulk materials. Subsequently, state-of-the-art synchrotron-based techniques have been employed to understand the structural and dynamic behaviour of nanostructured cathode materials during battery cell operation. In this study, microwave-assisted routes to α-LiFePO4, VO2(B), V3O7, H2V3O8 and V4O6(OH)4 have all been developed. Muon spin relaxation has shown that the presence of β-LiFePO4 has a detrimental effect on the lithium diffusion properties of α-LiFePO4, in agreement with first-principles calculations. For the first time, α-LiFePO4 nanostructures have been obtained by employing a deep eutectic solvent reaction medium, showing near-theoretical capacity (162 mAh g⁻¹). Studies on VO2(B) have shown that the discharge capacity obtained is linked to the synthesis method. Electrochemical studies of H2V3O8 nanowires have shown outstanding discharge capacities (323 mAh g⁻¹ at 100 mA g⁻¹) and rate capability (180 mAh g⁻¹ at 1 A g⁻¹). The electrochemical properties of V4O6(OH)4 have been investigated for the first time and show a promising discharge capacity of 180 mAh g⁻¹.
Lastly, in situ X-ray absorption spectroscopy has been utilised to track the evolution of the oxidation states in α-LiFePO4, VO2(B) and H2V3O8, showing that these can all be observed dynamically.
Abstract:
The severe accidents deriving from the impact of natural events on industrial installations have become a matter of growing concern in recent decades. In the literature, these events are typically referred to as Natech accidents. Several peculiarities distinguish them from conventional industrial accidents caused by internal factors, such as the possible occurrence of multiple simultaneous failures and the enhanced probability of cascading events. The research project provides a comprehensive overview of Natech accidents that occurred in the chemical and process industry, allowing for the identification of relevant aspects of Natech events. Quantified event trees and probabilities of ignition are derived from the collected dataset, providing a step forward in the quantitative risk assessment of Natech accidents. The investigation of past Natech accidents also demonstrated that wildfires may cause technological accidents. Climate change and global warming are promoting the conditions for wildfire development and rapid spread. Hence, ensuring the safety of industrial facilities exposed to wildfires is paramount. This was achieved by defining safety distances between wildland vegetation and industrial equipment items. In addition, an innovative methodology for the vulnerability assessment of Natech and domino scenarios triggered by wildfires was developed. The approach accounts for the dynamic behaviour of wildfire events and the related technological scenarios. Furthermore, the performance of the emergency response and the related intervention time in the case of cascading events caused by natural events were evaluated. Overall, the tools presented in this thesis represent a step forward in the quantitative risk assessment of Natech accidents. The methodologies developed also provide a solid basis for the definition of effective strategies for risk mitigation and reduction.
These aspects are crucial to improve the resilience of industrial plants to natural hazards, especially considering the effects that climate change may have on the severity of such events.
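Event-tree quantification of the kind derived from the dataset amounts to multiplying the initiating-event frequency by the conditional branch probabilities along each path; the tree structure and all numbers below are invented for illustration only:

```python
def event_tree_outcomes(initiating_freq, branches):
    """Quantify an event tree: for each outcome, multiply the
    initiating-event frequency (per year) by the conditional probability
    of every branch along its path."""
    outcomes = {}
    for name, probs in branches.items():
        f = initiating_freq
        for p in probs:
            f *= p
        outcomes[name] = f
    return outcomes

# Hypothetical flood-triggered flammable release (branch probabilities
# along each path; the four paths partition the probability space):
tree = {
    "pool fire":        [0.3],            # immediate ignition
    "flash fire":       [0.7, 0.2, 0.6],  # delayed ignition, no explosion
    "vapour explosion": [0.7, 0.2, 0.4],  # delayed ignition, explosion
    "safe dispersion":  [0.7, 0.8],       # no ignition
}
print(event_tree_outcomes(1e-3, tree))  # outcome frequencies per year
```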
Abstract:
A three-dimensional Direct Finite Element procedure is presented which takes into account most of the factors affecting the interaction problem of the dam-water-foundation system, whilst keeping the computational cost at a reasonable level by introducing some simplifying hypotheses. A truncated domain is defined, and the dynamic behaviour of the system is treated as a wave-scattering problem in which the presence of the dam perturbs an original free-field system. The truncated boundaries of the rock foundation are enclosed by a set of free-field one-dimensional and two-dimensional systems which transmit the effective forces to the main model and apply absorbing viscous boundaries to ensure radiation damping. The water domain is treated as an added mass moving with the dam. A strategy is proposed to keep the viscous dampers at the boundaries unloaded during the initial phases of the analysis, when the static loads are initialised, and thus avoid spurious displacements. A focus is given to the nonlinear behaviour of the rock foundation, with concentrated plasticity along the natural discontinuities of the rock mass, immersed in an otherwise linear elastic medium with Rayleigh damping. The entire procedure is implemented in the commercial software Abaqus®, whose base code is enriched with specific user subroutines where needed. All the extra coding is attached to the thesis and tested against analytical results and simple examples. Possible rock-wedge instabilities induced by intense ground motion, which are not easily investigated within a comprehensive model of the dam-water-foundation system, are treated separately with a simplified decoupled dynamic approach derived from the classical Newmark method, integrated with FE calculation of the dam thrust on the wedges during the earthquake.
Both the described approaches are applied to the case study of the Ridracoli arch-gravity dam (Italy) in order to investigate its seismic response to the Maximum Credible Earthquake (MCE) in a full reservoir condition.
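The classical Newmark sliding-block idea referred to above can be sketched in a few lines: relative velocity accumulates only while the ground acceleration exceeds the wedge's yield acceleration, and its integral gives the permanent displacement (the record below is an arbitrary pulse train, not the MCE input):

```python
def newmark_displacement(accel_g, dt, yield_accel_g, g=9.81):
    """Classical Newmark sliding-block estimate. The block accelerates
    relative to the ground whenever ground acceleration exceeds the yield
    acceleration; the relative velocity is integrated (and clipped at
    zero, since the block cannot slide backwards) to give the permanent
    displacement in metres."""
    v, d = 0.0, 0.0
    for a in accel_g:
        rel_a = (a - yield_accel_g) * g   # relative acceleration, m/s^2
        if v > 0.0 or rel_a > 0.0:        # sliding, or sliding initiates
            v = max(0.0, v + rel_a * dt)
            d += v * dt
    return d

# Illustrative pulse train (in g) exceeding a 0.2 g yield acceleration:
record = [0.0, 0.3, 0.4, 0.3, 0.0, -0.2, 0.0]
print(round(newmark_displacement(record, 0.1, 0.2), 4))  # permanent slip, m
```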
Abstract:
This study evaluated the influence of gastrointestinal environmental factors (pH, digestive enzymes, food components, medicaments) on the survival of Lactobacillus casei Shirota and Lactobacillus casei LC01, using a semi-dynamic in vitro model that simulates the transit of microorganisms through the human GIT. The strains were first exposed to different simulated gastric juices for different periods of time (0, 30, 60 and 120 min), and then to simulated intestinal fluids for 0, 120, 180 and 240 min, in a step-wise format. The number of viable cells was determined after each step. The influence of food residues (skim milk) in the fluids and the resistance to medicaments commonly used for varied therapeutic purposes (analgesics, antiarrhythmics, antibiotics, antihistamines, proton pump inhibitors, etc.) were also evaluated. Results indicated that the survival of both cultures was pH- and time-dependent, and that digestive enzymes had little influence. Milk components presented a protective effect, and medicaments, especially anti-inflammatory drugs, markedly influenced the viability of the probiotic cultures, indicating that the beneficial health effects of the two probiotic cultures depend on environmental factors encountered in the human gastrointestinal tract.