958 results for Transfer function characteristics
Abstract:
The water time constant and the mechanical time constant greatly influence the power and speed oscillations of a hydro-turbine-generator unit. This paper discusses turbine power transients in response to gate-position changes of different natures and magnitudes. The work presented here analyses the characteristics of the hydraulic system with an emphasis on changes in the above time constants. The simulation study is based on mathematical first-, second-, third- and fourth-order transfer function models. The study is further extended to identify discrete time-domain models and their characteristic representation, both without noise and with noise at 10 and 20 dB signal-to-noise ratio (SNR). The use of a self-tuning control approach to minimise speed deviation under plant parameter changes and disturbances is also discussed.
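As a hedged illustration of why the water time constant dominates the power transients, the sketch below simulates the step response of the classic linearized first-order hydro-turbine transfer function. This is a common textbook form, not necessarily the paper's own model, and the value of Tw is assumed:

```python
from scipy import signal

Tw = 1.0  # water time constant, s (assumed illustrative value)

# Classic linearized first-order hydro-turbine model:
#   dP(s)/dG(s) = (1 - Tw*s) / (1 + 0.5*Tw*s)
turbine = signal.TransferFunction([-Tw, 1.0], [0.5 * Tw, 1.0])

t, p = signal.step(turbine)  # power response to a unit gate step
print(f"initial swing {p[0]:+.2f} pu, settles at {p[-1]:+.2f} pu")
# The non-minimum-phase zero makes power dip to about -2 pu before
# rising to +1 pu; a larger Tw stretches this inverse response in time.
```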
Abstract:
This project focuses on the implementation of an active noise control system based on genetic algorithms. To this end, the type of noise to be cancelled and the design of the controller, a fundamental part of the control system, have been taken into account. Active noise control is effective only at low frequencies, up to about 250 Hz, precisely where passive elements lose effectiveness, and in small rooms, enclosures and ducts. The controller must be able to track all the variations of the acoustic field that may occur (variations in phase, frequency, amplitude, electro-acoustic transfer functions, etc.). Its operation is based on adaptive FIR and IIR filters; the choice between one type of filter and the other depends on characteristics such as linearity, causality and the number of coefficients. For the transfer function of the controller to follow the variations arising in the acoustic environment, the filter coefficients must be updated continuously by an adaptive algorithm. In this project a genetic algorithm, inspired by biological selection, i.e. simulating the evolutionary behaviour of biological systems, is used as the adaptive algorithm. The simulations have been carried out with two types of signals: random (broadband) noise and periodic (narrowband) noise. The final part of the project presents the results obtained and the corresponding conclusions.
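A minimal sketch of the idea of evolving FIR coefficients with a genetic algorithm is given below. All signals, the primary-path model, and the GA settings are invented for illustration; the project's actual controller, fitness function and acoustic paths are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_power(w, x, d):
    """Mean-square error left after subtracting the FIR output from d."""
    y = np.convolve(x, w)[: len(d)]
    return np.mean((d - y) ** 2)

# Synthetic set-up (all values assumed): a reference signal x reaches the
# error sensor through an unknown primary path, giving the disturbance d.
n = 2000
x = rng.standard_normal(n)
d = np.convolve(x, [0.6, -0.3, 0.1])[:n]

taps, pop_size, gens = 8, 40, 150
pop = 0.1 * rng.standard_normal((pop_size, taps))

for _ in range(gens):
    fit = np.array([residual_power(w, x, d) for w in pop])
    elite = pop[np.argsort(fit)[: pop_size // 2]]          # selection
    pa = elite[rng.integers(len(elite), size=pop_size)]
    pb = elite[rng.integers(len(elite), size=pop_size)]
    mask = rng.random((pop_size, taps)) < 0.5              # uniform crossover
    pop = np.where(mask, pa, pb)
    pop += 0.01 * rng.standard_normal(pop.shape)           # mutation

best = min(pop, key=lambda w: residual_power(w, x, d))
print(f"residual noise power: {residual_power(best, x, d):.4f}")
```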
Abstract:
In this paper a model for the measuring process of sonic anemometers (ultrasound-pulse based) is presented. The differential equations that describe the travel of ultrasound pulses are solved in the general case of a non-steady, non-uniform atmospheric flow field. The concepts of instantaneous line average and travelling pulse-referenced average are established and employed to explain and calculate the differences between the measured turbulent speed (travelling pulse-referenced average) and the line-averaged one. The limit k₁l = 1, established by Kaimal in 1968 as the maximum value for which the influence of the sonic measuring process on the measurement of turbulent components can be neglected, is reviewed here. Three particular measurement cases are analysed: a non-steady, uniform flow speed field; a steady, non-uniform flow speed field; and finally an atmospheric flow speed field. In the first case, for a harmonic time-dependent flow field, the Mach number M (flow speed to sound speed ratio) and the time delay between pulses have revealed themselves to be important parameters in the behaviour of sonic anemometers within the range of operation. The second case demonstrates how the spatial non-uniformity of the flow speed field leads to an influence of the finite transit time of the pulses (M≠0) even in the absence of non-steady behaviour of the wind speed. In the last case, a model of the influence of the sonic anemometer processes on the measurement of wind speed spectral characteristics is presented. The new solution is compared to the line-averaging models existing in the literature. Mach number and time delay significantly distort the measurement in the normal operational range. Classical line-averaging solutions are recovered when the Mach number and the time delay between pulses go to zero in the new proposed model. The results obtained from the mathematical model have been applied to the calculation of errors in different configurations of practical interest, such as an anemometer located on a meteorological mast and the transfer function of a sensor in an atmospheric wind. The expressions obtained can also be applied to determine the quality requirements of the flow in a wind tunnel used for ultrasonic anemometer calibrations.
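For context, the elementary time-of-flight relations that underlie any pulse-based sonic anemometer are sketched below. This is a textbook result, not the paper's more general non-steady treatment; the path length and flow values are assumed:

```python
# Standard time-of-flight relations for a single sonic path:
#   t_forward = L / (c + v),  t_backward = L / (c - v)
L = 0.15           # path length, m (assumed)
c, v = 340.0, 5.0  # sound speed and along-path wind speed, m/s (assumed)

t_f = L / (c + v)  # downwind pulse transit time
t_b = L / (c - v)  # upwind pulse transit time

v_est = 0.5 * L * (1.0 / t_f - 1.0 / t_b)  # recovers v exactly
c_est = 0.5 * L * (1.0 / t_f + 1.0 / t_b)  # recovers c exactly
M = v_est / c_est                          # Mach number, a key parameter above
print(v_est, c_est, M)
```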
Abstract:
A new radiolarian-based transfer function for sea surface temperature (SST) estimation has been developed from 23 taxa and taxa groups in 53 surface sediment samples recovered between 35° and 72°S in the Atlantic sector of the Southern Ocean. For the selection of taxa and taxa groups, ecological information from water column studies was considered. The transfer function allows the estimation of austral summer SST (December-March) ranging between -1 and 18°C with a standard error of estimate of 1.2°C. SST estimates from selected late Pleistocene sequences were successfully compared with independent paleotemperature estimates derived from a diatom transfer function. This shows that radiolarians provide an excellent tool for paleotemperature reconstructions in Pleistocene sediments of the Southern Ocean.
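The general shape of such a calibration can be sketched as a regression from taxon abundances to temperature. The toy example below uses invented data and plain least squares; the actual method behind this abstract (and factor-analytic variants such as Imbrie-Kipp) is more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy calibration set (values invented for illustration): relative
# abundances of taxa in surface sediments with known summer SSTs.
n_samples, n_taxa = 53, 23
abund = rng.dirichlet(np.ones(n_taxa), size=n_samples)  # rows sum to 1
sst = rng.uniform(-1.0, 18.0, size=n_samples)           # °C, calibration range

# Least-squares "transfer function": SST ~ abundances @ coef + intercept
X = np.column_stack([abund, np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(X, sst, rcond=None)

# Apply to a downcore assemblage to estimate paleo-SST
downcore = rng.dirichlet(np.ones(n_taxa))
sst_est = np.append(downcore, 1.0) @ coef
print(f"estimated summer SST: {sst_est:.1f} °C")
```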
Abstract:
This paper investigates the input-output characteristics of structural health monitoring systems for composite plates based on permanently attached piezoelectric transmitter and sensor elements. Using dynamic piezoelectricity theory and a multiple integral transform method to describe the propagating and scattered flexural waves, an electro-mechanical model for simulating the voltage input-output transfer function for circular piezoelectric transmitters and sensors adhesively attached to an orthotropic composite plate is developed. The method enables the characterization of all three physical processes, i.e. wave generation, wave propagation and wave reception. The influence of transducer, plate and attached electrical circuit characteristics on the voltage output behaviour of the system is examined through numerical calculations, both in the frequency and the time domain. The results show that the input-output behaviour of the system is not properly predicted by the transducers' properties alone. Coupling effects between the transducers and the tested structure have to be taken into account, and adding backing materials to the piezoelectric elements can significantly improve the sensitivity of the system. It is shown that in order to achieve maximum sensitivity, piezoelectric transmitters and sensors need to be designed according to the structure to be monitored and the specific frequency regime of interest.
Abstract:
The thesis will show how to equalise the effect of quantal noise across spatial frequencies by keeping the retinal flux (If⁻²) constant. In addition, quantal noise is used to study the effect of grating area and spatial frequency on contrast sensitivity, resulting in the extension of a new contrast detection model describing the human contrast detection system as a simple image processor. According to the model, the human contrast detection system comprises low-pass filtering due to ocular optics, addition of light-dependent noise at the event of quantal absorption, high-pass filtering due to the neural visual pathways, and addition of internal neural noise, after which detection takes place by a local matched filter whose sampling efficiency decreases as grating area is increased. Furthermore, this work will demonstrate how to extract both the optical and neural modulation transfer functions of the human eye. The neural transfer function is found to be proportional to spatial frequency up to the local cut-off frequency at eccentricities of 0-37 deg across the visual field. The optical transfer function of the human eye is proposed to be more affected by the Stiles-Crawford effect than generally assumed in the literature. Similarly, this work questions the prevailing ideas about the factors limiting peripheral vision by showing that peripheral optics act as a low-pass filter in normal viewing conditions, and therefore the effect of peripheral optics is worse than generally assumed.
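A small worked example of the equalisation rule mentioned above: if retinal flux is taken as I·f⁻², holding it constant across spatial frequencies f amounts to scaling the retinal illuminance I with f². The reference values below are assumed for illustration:

```python
# Holding retinal flux I * f**-2 constant: scale illuminance I with f**2.
base_I, base_f = 100.0, 1.0  # assumed reference: 100 td at 1 c/deg

for f in [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]:
    I = base_I * (f / base_f) ** 2
    print(f"{f:5.1f} c/deg -> I = {I:8.1f} td, I*f^-2 = {I * f**-2:.1f}")
    # The last column is constant, so quantal noise is equalised
    # across spatial frequencies.
```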
Abstract:
Separate physiological mechanisms which respond to spatial and temporal stimulation have been identified in the visual system. Some pathological conditions may selectively affect these mechanisms, offering a unique opportunity to investigate how psychophysical and electrophysiological tests reflect these visual processes, and thus enhance the use of the tests in clinical diagnosis. Amblyopia and optical blur were studied, representing spatial visual defects of neural and optical origin, respectively. Selective defects of the visual pathways were also studied: optic neuritis, which affects the optic nerve, and dementia of the Alzheimer type, in which the higher association areas are believed to be affected but the primary projections spared. Seventy control subjects from 10 to 79 years of age were investigated. This provided material for an additional study of the effect of age on the psychophysical and electrophysiological responses. Spatial processing was measured by visual acuity, the contrast sensitivity function, or spatial modulation transfer function (MTF), and the pattern-reversal and pattern onset-offset visual evoked potential (VEP). Temporal, or luminance, processing was measured by the de Lange curve, or temporal MTF, and the flash VEP. The pattern VEP was shown to reflect the integrity of the optic nerve, geniculo-striate pathway and primary projections, and was related to high temporal frequency processing. The individual components of the flash VEP differed in their characteristics. The results suggested that the P2 component reflects the function of the higher association areas and is related to low temporal frequency processing, while the P1 component reflects the primary projection areas. The combination of a delayed flash P2 component and a normal-latency pattern VEP appears to be specific to dementia of the Alzheimer type and represents an important diagnostic test for this condition.
Abstract:
The unmitigated transmission of undesirable vibration can cause problems such as human discomfort, machinery and equipment failure, and degraded quality of a manufacturing process. When identifiable transmission paths are discernible, vibrations from the source can be isolated from the rest of the system, preventing or minimising these problems. The approach proposed here for vibration isolation is active force cancellation at points close to the vibration source. It uses force feedback for multiple-input multiple-output (MIMO) control at the mounting locations. This is particularly attractive for rigid mounting of a machine on a relatively flexible base where machine alignment and motions are to be restricted. The force transfer function matrix is used as a disturbance rejection performance specification for the design of MIMO controllers. For a machine soft-mounted via flexible isolators, a model for this matrix has been derived. Under certain conditions, a simple multiplicative uncertainty model is obtained that shows the amount of perturbation a flexible base has on the machine-isolator-rigid-base transmissibility matrix. Such a model is very suitable for use with the robust control design paradigm. A different model is derived for the machine on hard mounts without the flexible isolators. With this model, the level of force transmitted from a machine to a final mounting structure can be determined from measurements of the machine running on another mounting structure, even when the two mounting structures have dissimilar dynamic characteristics. Experiments have verified the usefulness of the expression, and the model compares well with other methods in the literature; the disadvantage lies in the large amount of data that has to be collected. Active force cancellation is demonstrated on an experimental rig using an AC industrial motor hard-mounted onto a relatively flexible structure. The force transfer function matrix, determined from measurements, is used to design H∞ and static output feedback controllers. Both types of controllers are stable and robust to modelling errors within the identified frequency range. They reduce the RMS of the transmitted force by 30-80% at all mounting locations for the machine running at 1340 rpm. At the rated speed of 1440 rpm, only the static gain controller is able to provide a 30-55% reduction at all locations; the H∞ controllers could only give a small reduction at one mount location. This is due in part to deficiencies of the model used in the design: higher-frequency dynamics were ignored, which can be resolved by using a higher-order model, at the cost of a high-order controller. A low-order static gain controller, with some tuning, performs better, but it lacks the analytical framework for analysis and design.
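Since the design hinges on a force transfer function matrix determined from measurements, here is a hedged single-entry sketch of how such an FRF can be estimated with the standard H1 estimator. The signals, the stand-in structure and all values below are assumed, not the thesis's rig:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs, n = 2000, 2**16

# Hypothetical single-path measurement: a broadband actuator force u
# excites the structure; the transmitted force y is measured at one mount.
u = rng.standard_normal(n)
wn, zeta = 2 * np.pi * 60.0, 0.05  # stand-in 60 Hz resonance (assumed)
b, a = signal.bilinear([1.0], [1.0 / wn**2, 2 * zeta / wn, 1.0], fs)
y = signal.lfilter(b, a, u) + 0.01 * rng.standard_normal(n)

# H1 estimator of one entry of the force transfer function matrix:
f, Puy = signal.csd(u, y, fs=fs, nperseg=4096)  # cross spectrum
_, Puu = signal.welch(u, fs=fs, nperseg=4096)   # input auto spectrum
H = Puy / Puu
print(f"|H| peaks near {f[np.argmax(np.abs(H))]:.0f} Hz")
```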
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT phantoms. Finally, OCT phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimize this data processing and rendering time. These techniques include standard processing methods comprising a set of algorithms to process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system; this system currently processes and renders data in real time, with throughput limited by the camera capture rate. OCT phantoms have been heavily used for the qualitative characterization and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterize OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT phantoms and accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
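The raw-data-to-A-scan step mentioned above follows a fairly standard spectral-domain recipe, sketched below on a synthetic fringe. This is a generic CPU version in NumPy; the thesis's GPU implementation, and the k-linearisation step real systems need, are not reproduced:

```python
import numpy as np

def fringes_to_ascan(fringe, window=None):
    """Minimal spectral-domain OCT pipeline (generic sketch): background
    subtraction, windowing, inverse FFT, log-magnitude. Assumes the
    spectrum is already sampled linearly in wavenumber k."""
    fringe = fringe - fringe.mean()       # remove the DC background
    if window is None:
        window = np.hanning(len(fringe))  # suppress FFT side lobes
    depth = np.fft.ifft(fringe * window)
    return 20 * np.log10(np.abs(depth[: len(fringe) // 2]) + 1e-12)

# Synthetic fringe: a single reflector gives a cosine in k-space.
k = np.arange(2048)
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * 200 * k / 2048)
ascan = fringes_to_ascan(fringe)
print("peak depth bin:", int(np.argmax(ascan)))  # ~200, as constructed
```

On a GPU the same pipeline maps almost line-for-line onto array libraries with NumPy-compatible interfaces (e.g. CuPy), which is one common way such acceleration is achieved; whether the thesis used that route is not stated in this abstract.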
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in the perception of icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the point spread function and the modulation transfer function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated based on the resized aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, with aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method by showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
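The modeling chain named above (pupil function with a Zernike aberration, then PSF, then OTF/MTF) and the inverse-filtering idea behind precompensation can be sketched as below. The aberration, its amplitude, and the Wiener-style regularised inverse are illustrative stand-ins, not the dissertation's pipeline:

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0

# Assumed aberration: pure defocus, Zernike Z(2,0) ~ 2r^2 - 1, with an
# illustrative amplitude of 0.5 waves.
W = 0.5 * (2 * (X**2 + Y**2) - 1) * pupil

# Standard Fourier-optics chain: pupil function -> PSF -> OTF/MTF.
P = pupil * np.exp(2j * np.pi * W)
psf = np.abs(np.fft.fftshift(np.fft.fft2(P))) ** 2
psf /= psf.sum()
otf = np.fft.fft2(np.fft.ifftshift(psf))
mtf = np.abs(otf)  # modulation transfer function

# Precompensation as regularised (Wiener-style) inverse filtering; the
# dissertation's method (real-time pupil tracking, resized aberrations)
# is richer, but the core frequency-domain step looks like this:
img = np.random.default_rng(3).random((n, n))   # stand-in screen target
IMG = np.fft.fft2(img)
precomp = np.real(np.fft.ifft2(IMG * np.conj(otf) / (mtf**2 + 1e-2)))
# Viewing `precomp` through the aberrated eye approximates `img`.
```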
Abstract:
A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light-harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called a closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, similar to a semiconductor transistor like the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power into biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics of RET devices is stochastic in nature, making them suitable for stochastic computing, in which true random distribution generation is critical.
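As background to that stochastic behaviour, the sketch below evaluates the textbook Förster transfer rate and efficiency for a donor-acceptor pair. The lifetime and Förster radius are generic assumed values; nothing here is specific to the C-DEV:

```python
# Förster (RET) transfer between two chromophores at distance r:
#   rate k = (1/tau) * (R0/r)**6,  efficiency E = R0**6 / (R0**6 + r**6)
tau_ns, R0_nm = 4.0, 5.0  # donor lifetime and Förster radius (assumed)

for r in [2.0, 5.0, 8.0]:  # separations in nm
    k_ret = (1.0 / tau_ns) * (R0_nm / r) ** 6    # transfer rate, 1/ns
    E = R0_nm**6 / (R0_nm**6 + r**6)             # transfer probability
    print(f"r = {r} nm: rate = {k_ret:.3f}/ns, efficiency = {E:.2f}")
# At r = R0 the efficiency is exactly 0.5; the steep r**-6 dependence is
# what makes chromophore placement such a sensitive design parameter.
```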
In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications.
We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.
Abstract:
The Indian monsoon system is an important climate feature of the northern Indian Ocean. Small variations of the wind and precipitation patterns have a fundamental influence on the societal, agricultural, and economic development of India and its neighboring countries. To understand current trends, sensitivity to forcing, or natural variation, records beyond the instrumental period are needed. However, high-resolution archives of past winter monsoon variability are scarce. One potential archive of such records is marine sediment deposited on the continental slope in the NE Arabian Sea, an area where present-day conditions are dominated by the winter monsoon. In this region, winter monsoon conditions lead to distinctive changes in surface water properties, affecting marine plankton communities that are deposited in the sediment. Using planktic foraminifera as a sensitive and well-preserved plankton group, we first characterize the response of their species distribution to environmental gradients from a dataset of surface sediment samples in the tropical and sub-tropical Indian Ocean. Transfer functions for quantitative paleoenvironmental reconstructions were applied to a decadal-scale record of assemblage counts from the Pakistan Margin spanning the last 2000 years. The reconstructed temperature record reveals an intensification of the winter monsoon near the year 100 CE. Prior to this transition, winter temperatures were >1.5°C warmer than today. Conditions similar to the present seem to have been established after 450 CE, interrupted by a singular event near 950 CE with warmer temperatures and a correspondingly weak winter monsoon. Frequency analysis revealed significant 75-, 40-, and 37-year cycles, which are known from decadal- to centennial-scale records of Indian summer monsoon variability and interpreted as solar irradiance forcing. Our first independent record of Indian winter monsoon activity confirms that winter and summer monsoons were modulated on the same frequency bands and thus indicates that both monsoon systems are likely controlled by the same driving force.
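The cycle-detection step can be illustrated with a plain periodogram on a synthetic decadal-resolution series. The sampling interval, embedded cycle and noise level below are assumed; the paper's actual spectral method is not specified in this abstract:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

dt = 10.0                                  # years per sample (assumed)
t = np.arange(0, 2000, dt)                 # a 2000-year record
series = (np.sin(2 * np.pi * t / 75.0)     # embedded 75-year cycle
          + 0.5 * rng.standard_normal(t.size))

f, pxx = signal.periodogram(series, fs=1.0 / dt)  # f in cycles/year
peak_period = 1.0 / f[1:][np.argmax(pxx[1:])]     # skip zero frequency
print(f"dominant period: {peak_period:.0f} years")
```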
Abstract:
Mammography equipment must be evaluated to ensure that images will be of acceptable diagnostic quality with the lowest radiation dose. Quality Assurance (QA) aims to provide systematic and constant improvement through a feedback mechanism addressing the technical, clinical and training aspects. Quality Control (QC), in relation to mammography equipment, comprises a series of tests to determine equipment performance characteristics. The introduction of digital technologies prompted changes in QC tests and protocols, and some tests are specific to each manufacturer. Within each country, specific QC tests should be compliant with regulatory requirements and guidance. Ideally, one mammography practitioner should take overarching responsibility for QC within a service, with all practitioners having responsibility for actual QC testing. All QC results must be documented to facilitate troubleshooting, internal audit and external assessment. Generally speaking, the practitioner's role includes performing, interpreting and recording the QC tests as well as reporting any results outside action limits to their service lead. Practitioners must undertake additional continuous professional development to maintain their QC competencies. They are usually supported by technicians and medical physicists; in some countries the latter are mandatory. Technicians and/or medical physicists often perform many of the tests indicated within this chapter. It is important to recognise that this chapter is an attempt to encompass the main tests performed within European countries. The specific tests related to the service within which you work must be familiarised with and adhered to.
Abstract:
Dynamic vehicle behavior is used to identify safe traffic speed limits. The proposed methodology is based on the vehicle's vertical wheel contact force response excited by measured pavement irregularities in the frequency domain. A quarter-car model is used to identify the vehicle dynamic behavior. The vertical elevation of an unpaved road surface was measured, and its roughness spectral density was quantified as ISO Level C. The vehicle inertance function was derived by using the vertical contact force transfer function weighted by the pavement roughness spectral density function in the frequency domain. The statistical contact load variation is obtained by integrating the vehicle inertance density function. The vehicle safety concept is based on its handling properties: the ability to generate tangential forces at the wheel/road contact interface is the key to vehicle handling, and this ability is related to the tire/pavement contact forces. A contribution to establishing a traffic safety speed limit is obtained from the likelihood of a loss of driveability. The results show that tire contact loss is possible at speeds above 25 km/h when traveling on the measured road type. DOI: 10.1061/(ASCE)TE.1943-5436.0000216. (C) 2011 American Society of Civil Engineers.
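A hedged sketch of the weighting-and-integration step described above, using generic quarter-car parameters and the ISO 8608 class C roughness form; none of these values are taken from the paper:

```python
import numpy as np
from scipy.integrate import trapezoid

# Generic quarter-car parameters (assumed):
ms, mu = 250.0, 40.0           # sprung / unsprung mass, kg
ks, kt, cs = 28e3, 125e3, 2e3  # suspension stiffness, tire stiffness, damping

v = 25.0 / 3.6                 # vehicle speed, m/s
n0, Gd0 = 0.1, 256e-6          # ISO 8608 class C: Gd(n0) in m^3

ns = np.linspace(0.011, 2.83, 4000)  # spatial frequency, cycles/m
Gd = Gd0 * (ns / n0) ** -2           # road displacement PSD
f = ns * v                           # temporal frequency, Hz
w = 2 * np.pi * f

# Transfer function from road displacement zr to dynamic tire force,
# from the quarter-car equations of motion; force = kt * (zu - zr).
H = np.empty(w.size, dtype=complex)
for i, wi in enumerate(w):
    A = np.array([[-ms * wi**2 + 1j * wi * cs + ks, -(1j * wi * cs + ks)],
                  [-(1j * wi * cs + ks), -mu * wi**2 + 1j * wi * cs + ks + kt]])
    zs, zu = np.linalg.solve(A, np.array([0.0, kt]))  # response to zr = 1
    H[i] = kt * (zu - 1.0)

# Weight |H|^2 by the road PSD (converted to temporal frequency) and
# integrate to get the statistical contact load variation.
S_force = np.abs(H) ** 2 * Gd / v    # force PSD, N^2/Hz
sigma_F = np.sqrt(trapezoid(S_force, f))
print(f"RMS dynamic tire-road force at 25 km/h: {sigma_F:.0f} N")
```

Comparing this RMS dynamic force against the static wheel load at increasing speeds is one way to flag the contact-loss likelihood the paper uses to set a speed limit.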