Abstract:
A Jeffcott rotor consists of a disc at the centre of an axle supported at its ends by bearings. A bolted Jeffcott rotor is formed by two discs, each with a shaft on one side. The discs are held together by spring-loaded bolts near the outer edge. When the rotor turns there is a tendency for the discs to separate on one side. This effect is more marked if the rotor is unbalanced, especially at resonance speeds. The equations of motion of the system have been developed with four degrees of freedom to include the rotor and bearing movements in the respective axes. These equations, which include non-linear terms caused by the rotor opening, are subject to external forcing such as that arising from rotor imbalance. A simulation model based on these equations was created using SIMULINK. An experimental test rig was used to characterise the dynamic features. The rotor discs open at a lateral displacement of the rotor of 0.8 mm; this is the threshold value at which the stiffness changes from high to low. The experimental results, which measure the vibration amplitude of the rotor, show the dynamic behaviour of the bolted rotor due to imbalance. Close agreement between the experimental and theoretical results, seen in time histories, waterfall plots, pseudo-phase plots and rotor orbit plots, indicates the validity of the model and the existence of the non-linear jump phenomenon.
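The stiffness change at the 0.8 mm opening threshold can be illustrated with a reduced two-degree-of-freedom sketch (rotor motion only, no bearing degrees of freedom), using a bilinear lateral stiffness and an imbalance force, integrated in Python as a stand-in for the SIMULINK model. All parameter values other than the 0.8 mm threshold are assumptions.

```python
# Minimal sketch (not the thesis model): a Jeffcott-style rotor with a
# bilinear lateral stiffness that softens once the lateral displacement
# exceeds the 0.8 mm opening threshold. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

m = 10.0          # rotor mass, kg (assumed)
c = 50.0          # damping, N s/m (assumed)
k_high = 1.0e6    # stiffness before the discs open, N/m (assumed)
k_low = 2.0e5     # stiffness after the discs open, N/m (assumed)
delta = 0.8e-3    # opening threshold, m (from the abstract)
e = 1.0e-4        # imbalance eccentricity, m (assumed)
omega = 300.0     # spin speed near the first resonance, rad/s (assumed)

def stiffness(r):
    """Piecewise stiffness: high below the opening threshold, low above it."""
    return k_high if r < delta else k_low

def rhs(t, y):
    x, vx, z, vz = y
    r = np.hypot(x, z)
    k = stiffness(r)
    fx = m * e * omega**2 * np.cos(omega * t)   # imbalance force, x
    fz = m * e * omega**2 * np.sin(omega * t)   # imbalance force, z
    ax = (fx - c * vx - k * x) / m
    az = (fz - c * vz - k * z) / m
    return [vx, ax, vz, az]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0, 0.0, 0.0], max_step=1e-4)
peak = 1e3 * np.max(np.hypot(sol.y[0], sol.y[2]))
print("peak lateral displacement: %.3f mm" % peak)
```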
Abstract:
The main theme of research of this project concerns the study of neural networks to control uncertain and non-linear control systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A large part of this project is devoted to bridging several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks, which enabled the successful design of stable adaptive control in the state feedback case, are also identified. A survey of existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and to the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
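The kind of model such a controller builds on can be sketched with a small network identifying an unknown non-linear SISO plant online from a NARX-style regressor of past inputs and outputs. The toy plant, network size and learning rate below are assumptions; the generalised minimum variance control law itself is not implemented here.

```python
# Illustrative sketch only: online identification of an unknown SISO
# non-linear plant with a small feed-forward network on a NARX-style
# regressor, the kind of model a self-tuning neural controller builds on.
# The plant, network size and learning rate are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 8
W1 = 0.1 * rng.standard_normal((n_hidden, 4))   # input-layer weights
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)        # output-layer weights
lr = 0.02                                       # learning rate (assumed)

def plant(y1, y2, u1, u2):
    """Toy unknown non-linear plant used only to generate data."""
    return 0.5 * y1 - 0.1 * y2 + np.tanh(u1) + 0.2 * u2

y = [0.0, 0.0]
u = [0.0, 0.0]
for k in range(2000):
    u_k = np.sin(0.05 * k) + 0.1 * rng.standard_normal()   # excitation input
    phi = np.array([y[-1], y[-2], u[-1], u[-2]])           # NARX regressor
    h = np.tanh(W1 @ phi + b1)
    y_hat = W2 @ h                                         # model prediction
    y_k = plant(y[-1], y[-2], u[-1], u[-2])                # "measured" output
    err = y_k - y_hat
    # One stochastic-gradient step on the squared prediction error.
    W2 += lr * err * h
    grad_h = err * W2 * (1.0 - h**2)
    W1 += lr * np.outer(grad_h, phi)
    b1 += lr * grad_h
    y.append(y_k)
    u.append(u_k)

print("final one-step prediction error: %.4f" % abs(err))
```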
Abstract:
A potentially low-cost, novel sensing scheme for monitoring absolute strain is demonstrated. The scheme utilizes a synthetic heterodyne interrogation technique working in conjunction with a linearly chirped, sinusoidally tapered, apodized Bragg grating sensor. The interrogation technique is relatively simple to implement in terms of the required optics and the peripheral electronics. This scheme generates an output signal that has a quasi-linear response to absolute strain, with a static strain resolution of ~±20 με and an operating range of ~1000 με.
Abstract:
A novel form of low coherence interferometric sensor is described. The channelled spectrum produced by illuminating a sensing interferometer with a broadband source is analysed directly using a CCD array. The system currently provides unambiguous measurement over a range of 1.5 mm with an accuracy of better than 6 µm.
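One common way to read a channelled spectrum of this kind is to Fourier-transform it against wavenumber, since the fringe period encodes the optical path difference (OPD) of the sensing interferometer. The sketch below illustrates that idea on simulated data; the source bandwidth, CCD pixel count and test path difference are assumptions, not the sensor's actual parameters.

```python
# Sketch of one common way to read a channelled spectrum: the fringe period
# in wavenumber encodes the optical path difference of the sensing
# interferometer, so a Fourier transform of the spectrum versus wavenumber
# peaks at the OPD. Source bandwidth, CCD pixel count and the test OPD are
# assumptions, not values from the paper.
import numpy as np

n_pix = 2048
lam = np.linspace(800e-9, 860e-9, n_pix)        # CCD wavelength axis (assumed)
opd_true = 0.45e-3                              # test OPD, 0.45 mm (assumed)

# Broadband source envelope times the two-beam interference fringes.
envelope = np.exp(-((lam - 830e-9) / 20e-9) ** 2)
spectrum = envelope * (1.0 + np.cos(2 * np.pi * opd_true / lam))

# Resample onto a uniform wavenumber grid so the FFT bins map to OPD.
k = 1.0 / lam                                   # wavenumber (1/m), descending
k_uniform = np.linspace(k.min(), k.max(), n_pix)
spec_k = np.interp(k_uniform, k[::-1], spectrum[::-1])

# FFT: the conjugate variable of wavenumber (1/m) is path length (m).
amp = np.abs(np.fft.rfft(spec_k - spec_k.mean()))
opd_axis = np.fft.rfftfreq(n_pix, d=k_uniform[1] - k_uniform[0])
opd_est = opd_axis[np.argmax(amp)]
print("estimated OPD: %.3f mm" % (1e3 * opd_est))
```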
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analyse High Throughput Screening datasets, which may include thousands of high-dimensional data points. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can gain insight into the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down); the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
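The core of training one LTM, and hence each node of the hierarchy in turn, is an EM loop. The sketch below is a compressed GTM-style EM for continuous data only, included to illustrate the alternating E- and M-steps mentioned above; the grid sizes, basis width, regularisation constant and toy data are assumptions, and the discrete-data (latent trait) likelihood used in the thesis is not reproduced here.

```python
# Compressed sketch of the EM loop behind a GTM-style latent trait model
# for continuous data (the thesis' LTM also handles discrete data, which
# this sketch omits). Grid sizes, basis width and the data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))               # toy data, N x D
N, D = X.shape

# 2-D latent grid and RBF basis functions mapping latent -> data space.
g = np.linspace(-1, 1, 10)
Z = np.array([(a, b) for a in g for b in g])     # K x 2 latent grid points
mu = np.array([(a, b) for a in np.linspace(-1, 1, 4) for b in np.linspace(-1, 1, 4)])
sigma = 0.5
Phi = np.exp(-((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))  # K x M

W = 0.01 * rng.standard_normal((Phi.shape[1], D))  # mapping weights, M x D
beta = 1.0                                         # inverse noise variance

for it in range(30):
    Y = Phi @ W                                              # K x D mixture centres
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)      # N x K squared distances
    # E-step: responsibilities of each latent grid point for each data point.
    logR = -0.5 * beta * d2
    logR -= logR.max(axis=1, keepdims=True)
    R = np.exp(logR)
    R /= R.sum(axis=1, keepdims=True)
    # M-step: regularised least squares for W, closed-form update for beta.
    G = np.diag(R.sum(axis=0))
    W = np.linalg.solve(Phi.T @ G @ Phi + 1e-3 * np.eye(Phi.shape[1]),
                        Phi.T @ (R.T @ X))
    beta = N * D / (R * d2).sum()

# Posterior-mean projection of each data point onto the 2-D latent plane.
latent_means = R @ Z
print("latent means shape:", latent_means.shape)
```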
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
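As an illustration of the dynamical-systems flavour of these methods, the sketch below performs a time-delay embedding of a single scalar time series, the standard first step for reconstructing a state space from a single-channel recording. The surrogate signal, delay and embedding dimension are assumptions for demonstration only.

```python
# Illustrative only: time-delay embedding of a single-channel recording,
# the standard dynamical-systems route from a scalar time series to a
# reconstructed state space. The signal, delay and embedding dimension
# here are assumptions, not choices made in the thesis.
import numpy as np

def delay_embed(x, dim, tau):
    """Stack delayed copies of x into an (n_points, dim) trajectory matrix."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy "measurement": a noisy nonlinear oscillation standing in for one MEG channel.
t = np.arange(0, 60, 0.01)
x = (np.sin(t) + 0.5 * np.sin(2.3 * t) ** 3
     + 0.05 * np.random.default_rng(0).standard_normal(t.size))

traj = delay_embed(x, dim=5, tau=15)
print(traj.shape)   # reconstructed trajectory, one row per delay vector
```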
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and although they can capture non-linear structure locally, they struggle to capture it globally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
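The benchmarking idea mentioned above can be sketched very simply: hide a random subset of observed entries, impute them, and score the imputations against the hidden truth. The sketch below uses a toy data set and a column-mean imputer as a stand-in for the GTM-based imputation studied in the thesis; all values are assumptions.

```python
# A minimal hold-out benchmark for missing-data imputation of the kind the
# thesis applies to GTM: hide a random subset of observed entries, impute
# them, and score the imputations against the hidden truth. The data and
# the mean-imputer stand-in are assumptions; the thesis imputes with GTM.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))                 # toy complete data set

# Hide 10% of the entries to create an artificial missing-data problem.
mask = rng.random(X.shape) < 0.10
X_missing = X.copy()
X_missing[mask] = np.nan

def mean_impute(Xm):
    """Baseline imputer: replace each missing entry with its column mean."""
    col_means = np.nanmean(Xm, axis=0)
    out = Xm.copy()
    idx = np.where(np.isnan(out))
    out[idx] = np.take(col_means, idx[1])
    return out

X_imputed = mean_impute(X_missing)
rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print("imputation RMSE on held-out entries: %.3f" % rmse)
```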
Abstract:
Desalination of groundwater is essential in arid regions that are remote from both seawater and freshwater resources. Desirable features of a groundwater desalination system include a high recovery ratio, operation from a sustainable energy source such as solar, and high water output per unit of energy and land. Here we propose a new system that uses a solar-Rankine cycle to drive reverse osmosis (RO). The working fluid, such as steam, is expanded against a power piston that actuates a pump piston, which in turn pressurises the saline water and forces it through the RO membranes. A reciprocating crank mechanism is used to equalise the forces between the two pistons. The choice of batch mode in preference to continuous flow permits maximum energy recovery and minimal concentration polarisation in the vicinity of the RO membrane. This study analyses the sizing and efficiency of the crank mechanism, quantifies energy losses in the RO separation and predicts the overall performance. For example, a system using a field of linear Fresnel collectors occupying 1000 m2 of land and raising steam at 200 °C and 15.5 bar could desalinate 350 m3/day from saline water containing 5000 ppm of sodium chloride with a recovery ratio of 0.7.
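For orientation, the sketch below estimates the van 't Hoff osmotic pressure of the 5000 ppm NaCl feed and the ideal minimum specific energy of a batch RO process at the stated recovery ratio of 0.7. This is only a thermodynamic lower bound at an assumed feed temperature; it neglects the pump, crank and membrane losses that the study itself quantifies.

```python
# Back-of-envelope sketch (not from the paper): the van 't Hoff osmotic
# pressure of the 5000 ppm NaCl feed and the ideal minimum specific energy
# of a batch RO process at 0.7 recovery, the regime the abstract argues
# maximises energy recovery. Real pump, friction and polarisation losses
# are ignored here.
import math

R = 8.314            # J/(mol K)
T = 298.15           # K, assumed feed temperature
M_NaCl = 58.44e-3    # kg/mol
tds = 5.0            # kg/m3 (5000 ppm feed, from the abstract)
recovery = 0.7       # from the abstract

c = tds / M_NaCl                      # mol/m3 of NaCl
pi0 = 2 * c * R * T                   # Pa, van 't Hoff with i = 2 ions
# Ideal batch RO: the applied pressure tracks the rising feed osmotic
# pressure, so the minimum work per unit permeate is pi0 * ln(1/(1-r)) / r.
e_min = pi0 * math.log(1.0 / (1.0 - recovery)) / recovery   # J/m3 permeate

print("feed osmotic pressure: %.1f bar" % (pi0 / 1e5))
print("ideal batch specific energy: %.2f kWh/m3" % (e_min / 3.6e6))
```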
Abstract:
The development of a system that integrates reverse osmosis (RO) with a horticultural greenhouse has been advanced through laboratory experiments. In this concept, intended for the inland desalination of brackish groundwater in dry areas, the RO concentrate will be reduced in volume by passing it through the evaporative cooling pads of the greenhouse. The system will be powered by solar photovoltaics (PV). Using a solar array simulator, we have verified that the RO can operate with varying power input and recovery rates to meet the water demands for irrigation and cooling of a greenhouse in north-west India. Cooling requires ventilation by a fan, which has also been built, tested and optimised with a PV module outdoors. Results from the experiments with these two subsystems (RO and fan) are compared to theoretical predictions to reach conclusions about energy usage, sizing and cost. For example, the optimal sizing for the RO system is 0.12–1.3 m2 of PV module per m2 of membrane, depending on feed salinity. For the fan, the PV module area equals that of the fan aperture. The fan consumes <30 J of electrical energy per m3 of air moved, about one third of that consumed by standard fans. The specific energy consumption of the RO, at 1–2.3 kWh m-3, is comparable to that reported by others. Now that the subsystems have been verified, the next step will be to integrate and test the whole system in the field.
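As a rough illustration of the sizing logic, the sketch below estimates the daily permeate obtainable per kilowatt of PV at the reported specific energy of 1–2.3 kWh m-3, and the fan power implied by the reported <30 J per m3 of air. The PV yield and ventilation rate are assumptions, not measured values from this work.

```python
# Rough illustrative sizing check, not from the paper: permeate a PV-driven
# RO unit at the reported 1-2.3 kWh/m3 specific energy could make per day,
# and the fan power at the reported <30 J/m3 of air. PV daily yield and
# ventilation rate are assumptions.
pv_kw = 1.0                      # kW of PV (assumed)
sun_hours = 5.5                  # equivalent full-sun hours/day (assumed)
sec_low, sec_high = 1.0, 2.3     # kWh/m3 of permeate (from the abstract)

energy_per_day = pv_kw * sun_hours                 # kWh/day
permeate_high = energy_per_day / sec_low           # m3/day, best case
permeate_low = energy_per_day / sec_high           # m3/day, worst case
print("daily permeate per kW of PV: %.1f-%.1f m3" % (permeate_low, permeate_high))

air_flow = 5.0                   # m3/s of ventilation air (assumed)
fan_joule_per_m3 = 30.0          # J/m3 (upper bound from the abstract)
fan_power = air_flow * fan_joule_per_m3            # W
print("fan power at that flow: %.0f W" % fan_power)
```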
Abstract:
The optimization of a wavelength-tunable RZ transmitter, consisting of an electro-absorption modulator (EAM) and an SG-DBR tunable laser, is carried out using a linear spectrogram based characterization and leads to 1500 km transmission at 42.7 Gb/s independent of the operating wavelength. We demonstrate that, to ensure optimum and consistent transmission performance over a portion of the C-band, the RF drive and bias conditions of the EAM must be varied at each wavelength. The sign and magnitude of the pulse chirp (characterized using the linear spectrographic technique) are thereby tailored to suit the dispersion map of the transmission link. The results show that by optimizing the drive and DC bias applied to the EAM, consistent transmission performance can be achieved over a wide wavelength range. Failure to optimize the EAM drive conditions at each wavelength can lead to serious degradation in system performance.
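The dependence of pulse evolution on the sign and magnitude of the chirp can be illustrated with the textbook broadening factor of a chirped Gaussian pulse in dispersive fibre, sketched below. The pulse width, chirp values, fibre dispersion and span length are assumptions and do not correspond to the experimental link.

```python
# Textbook illustration (not the paper's measurement): broadening of a
# chirped Gaussian pulse over a short uncompensated fibre segment, which is
# why the sign and magnitude of the EAM chirp have to be matched to the
# dispersion map. Pulse width, chirp values and length are assumptions.
import math

beta2 = -21.7e-27     # s^2/m, standard SMF dispersion near 1550 nm (assumed)
length = 2e3          # m of fibre before compensation (assumed)
T0 = 8e-12            # s, Gaussian half-width of the RZ pulse (assumed)

def broadening(C):
    """Broadening factor T1/T0 of a chirped Gaussian pulse with chirp parameter C."""
    x = beta2 * length / T0**2
    return math.sqrt((1 + C * x) ** 2 + x ** 2)

for C in (-2.0, 0.0, 2.0):
    print("chirp C = %+.1f  ->  pulse width changes by factor %.2f" % (C, broadening(C)))
```

With anomalous dispersion (negative beta2), a positive chirp initially compresses the pulse while a negative chirp broadens it, which is the qualitative reason chirp sign matters for a given dispersion map.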
Abstract:
Many passengers experience discomfort during flight because of the effect of low humidity on the skin, eyes, throat, and nose. In this physiological study, we have investigated whether flight and low humidity also affect the tympanic membrane. From previous studies, a decrease in admittance of the tympanic membrane through drying might be expected to affect the buffering capacity of the middle ear and to disrupt automatic pressure regulation. This investigation involved an observational study onboard an aircraft combined with experiments in an environmental chamber, where the humidity could be controlled but could not be made as low as during flight. For the flight study, there was a linear relationship between the peak compensated static admittance of the tympanic membrane and relative humidity, with a constant of proportionality of 0.00315 mmho/% relative humidity. The low humidity at cruise altitude (minimum 22.7 %) was associated with a mean decrease in admittance of about 20 % compared with measures in the airport. From the chamber study, we further found that a mean decrease in relative humidity of 23.4 % led to a significant decrease in mean admittance of 0.11 mmho [F(1,8) = 18.95, P = 0.002], a decrease of 9.4 %. The order of magnitude of the effect of humidity was similar for the flight and environmental chamber studies. We conclude that the admittance changes during flight were likely to have been caused by the low humidity in the aircraft cabin and that these changes may affect the automatic pressure regulation of the middle ear during descent. © 2013 Association for Research in Otolaryngology.
Abstract:
Direct-drive linear reciprocating compressors offer numerous advantages over conventional counterparts, which are usually driven by a rotary induction motor via a crankshaft. However, to ensure efficient and reliable operation under all conditions, it is essential that the motor current of a linear compressor follows a sinusoidal current command with a frequency that matches the system resonant frequency. The design of a high-performance current controller for a linear compressor drive presents a challenge since the system is highly nonlinear, and an effective solution must be low cost. In this paper, a learning feed-forward current controller for linear compressors is proposed. It comprises a conventional feedback proportional-integral controller and a feed-forward B-spline neural network (BSNN). The feed-forward BSNN is trained online and in real time in order to minimize the current tracking error. Extensive simulation and experimental results with a prototype linear compressor show that the proposed current controller exhibits high steady-state and transient performance. © 2009 IEEE.
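The control structure described above can be sketched as a PI feedback loop plus a feed-forward B-spline network defined over the reference phase, with the network weights adapted online from the current tracking error. This is a conceptual sketch with first-degree B-spline basis functions; the coil model, gains, learning rate and disturbance below are assumptions, not the prototype's parameters.

```python
# Conceptual sketch of a PI feedback current controller plus a feed-forward
# B-spline neural network (first-degree B-spline basis functions over the
# reference phase) whose weights are adapted online from the tracking error.
# Coil model, gains, learning rate and disturbance are assumptions.
import math

R, L = 2.0, 0.01            # coil resistance (ohm) and inductance (H), assumed
f = 50.0                    # reference / resonant frequency, Hz (assumed)
dt = 1e-5                   # simulation step, s
kp, ki = 20.0, 8000.0       # PI gains (assumed)
n_knots = 32                # B-spline knots over one period
w = [0.0] * n_knots         # feed-forward network weights
lr = 0.01                   # weight learning rate, kept small for stability (assumed)

i_meas, integ = 0.0, 0.0
for step in range(int(1.0 / dt)):                 # simulate 1 s (50 cycles)
    t = step * dt
    phase = (f * t) % 1.0                         # position in the cycle, [0, 1)
    i_ref = 2.0 * math.sin(2 * math.pi * f * t)   # sinusoidal current command
    err = i_ref - i_meas

    # Feed-forward B-spline network: two active first-degree basis functions.
    seg = phase * n_knots
    j = int(seg) % n_knots
    frac = seg - int(seg)
    ff = w[j] * (1.0 - frac) + w[(j + 1) % n_knots] * frac

    # PI feedback + learned feed-forward gives the coil voltage command.
    integ += err * dt
    v = kp * err + ki * integ + ff

    # Online LMS update of the active weights to shrink the tracking error.
    w[j] += lr * err * (1.0 - frac)
    w[(j + 1) % n_knots] += lr * err * frac

    # Simple coil model with a periodic back-EMF style disturbance.
    emf = 5.0 * math.sin(2 * math.pi * f * t + 0.3)
    i_meas += dt * (v - R * i_meas - emf) / L

print("final tracking error: %.4f A" % err)
```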
Abstract:
We present what is to our knowledge the first comprehensive investigation of the use of blazed fiber Bragg gratings (BFBGs) to interrogate wavelength division multiplexed (WDM) in-fiber optical sensor arrays. We show that the light outcoupled from the core of these BFBGs is radiated with sufficient optical power that it may be detected with a low-cost charge-coupled device (CCD) array. We present a thorough system performance analysis that shows sufficient spectral-spatial resolution to decode sensors with a WDM separation of 75 pm, a signal-to-noise ratio greater than 45 dB, a bandwidth of 70 nm, and drift of only 0.1 pm. We show the system to be polarization-state insensitive, making the BFBG-CCD spectral analysis technique a practical, extremely low-cost alternative to traditional tunable filter approaches.
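Resolving 75 pm channel spacing and 0.1 pm drift with a CCD relies on sub-pixel peak localisation. The sketch below shows one standard approach, a centroid fit over the sampled reflection peak; the pixel pitch, peak width and test wavelength are assumptions rather than the system's actual parameters.

```python
# Sketch of the kind of sub-pixel processing that lets a coarse CCD pixel
# grid resolve picometre-scale Bragg wavelength shifts: a centroid (centre
# of gravity) fit to the sampled reflection peak. Pixel pitch, peak width
# and the test wavelength are assumptions, not the paper's values.
import numpy as np

pixel_pitch = 50e-12                            # wavelength per CCD pixel, 50 pm (assumed)
wl = 1550e-9 + pixel_pitch * np.arange(512)     # wavelength sampled by each pixel
true_peak = wl[256] + 17e-12                    # "true" sensor wavelength, 17 pm off a pixel centre
fwhm = 200e-12                                  # sensor reflection peak width (assumed)

rng = np.random.default_rng(0)
sigma = fwhm / 2.355
signal = np.exp(-0.5 * ((wl - true_peak) / sigma) ** 2) + 0.01 * rng.standard_normal(wl.size)

# Centroid over a window around the brightest pixel gives sub-pixel resolution.
i0 = int(np.argmax(signal))
win = slice(i0 - 5, i0 + 6)
weights = np.clip(signal[win], 0.0, None)
centroid = np.sum(wl[win] * weights) / np.sum(weights)

print("estimated peak: %.4f nm (error %.1f pm)" %
      (centroid * 1e9, abs(centroid - true_peak) * 1e12))
```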
Abstract:
Background: Esophageal intubation is a widely utilized technique for a diverse array of physiological studies, activating a complex physiological response mediated, in part, by the autonomic nervous system (ANS). In order to determine the optimal time period after intubation when physiological observations should be recorded, it is important to know the duration of, and factors that influence, this ANS response, in both health and disease. Methods: Fifty healthy subjects (27 males, median age 31.9 years, range 20-53 years) and 20 patients with Rome III defined functional chest pain (nine males, median age 38.7 years, range 28-59 years) had personality traits and anxiety measured. Subjects had heart rate (HR), blood pressure (BP), sympathetic (cardiac sympathetic index, CSI), and parasympathetic nervous system (cardiac vagal tone, CVT) parameters measured at baseline and in response to per nasum intubation with an esophageal catheter. CSI/CVT recovery was measured following esophageal intubation. Key Results: In all subjects, esophageal intubation caused an elevation in HR, BP, CSI, and skin conductance response (SCR; all p < 0.0001) but concomitant CVT and cardiac sensitivity to the baroreflex (CSB) withdrawal (all p < 0.04). Multiple linear regression analysis demonstrated that longer CVT recovery times were independently associated with higher neuroticism (p < 0.001). Patients had prolonged CSI and CVT recovery times in comparison to healthy subjects (112.5 s vs 46.5 s, p = 0.0001 and 549 s vs 223.5 s, p = 0.0001, respectively). Conclusions & Inferences: Esophageal intubation activates a fight/flight ANS response. Future studies should allow for at least 10 min of recovery time. Consideration should be given to psychological traits and disease status as these can influence recovery. The psychological trait of neuroticism retards autonomic recovery following esophageal intubation in health and functional chest pain. © 2013 John Wiley & Sons Ltd.