942 results for Neonates, EEG Analysis, Seizures, Signal Processing
Abstract:
The development cost of any civil infrastructure is very high, and over its life span a civil structure undergoes many physical loads and environmental effects that damage it. Failing to identify this damage at an early stage may result in severe property loss and may become a potential threat to people and the environment. There is therefore a need for effective damage detection techniques to ensure the safety and integrity of the structure. One Structural Health Monitoring approach to evaluating a structure is statistical analysis. In this study, a civil structure measuring 8 feet in length and 3 feet in diameter, embedded with thermocouple sensors at 4 different levels, was analyzed under controlled and variable conditions. Statistical analysis of the sensor data was used to assess possible damage, and the analysis could detect structural defects at various levels of the structure.
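The abstract does not state which statistical procedure was used; the sketch below illustrates one common baseline approach, a control-chart (z-score) comparison of per-level thermocouple means against data from the controlled condition. The sensor values, sample sizes, and 3-sigma threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical baseline readings (deg C) for the 4 sensor levels under
# controlled conditions: rows = samples, columns = levels.
baseline = 25 + rng.normal(0, 0.5, size=(200, 4))

# New readings under variable conditions; level 2 drifts, mimicking damage.
current = 25 + rng.normal(0, 0.5, size=(50, 4))
current[:, 2] += 2.0

mu = baseline.mean(axis=0)
se = baseline.std(axis=0, ddof=1) / np.sqrt(len(current))

# Control-chart criterion: flag a level whose mean under the new
# conditions shifts by more than 3 standard errors from the baseline.
z = (current.mean(axis=0) - mu) / se
for level, score in enumerate(z):
    print(f"level {level}: z = {score:+.1f} ->",
          "possible damage" if abs(score) > 3 else "ok")
```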
Abstract:
Non-Destructive Testing (NDT) of deep foundations has become an integral part of the industry's standard manufacturing processes. It is not unusual for the evaluation of the integrity of the concrete to include the measurement of ultrasonic wave speeds. Numerous methods have been proposed that use the propagation speed of ultrasonic waves to check the integrity of concrete in drilled shaft foundations. All such methods evaluate the integrity of the concrete inside the cage and between the access tubes. The integrity of the concrete outside the cage remains to be considered in order to determine the location of the border between the concrete and the soil, and thereby the diameter of the drilled shaft. It is also economical to devise a methodology that obtains the diameter of the drilled shaft using the Cross-Hole Sonic Logging (CSL) system: since CSL tests are already performed to check the integrity of the inside concrete, the diameter can be determined without setting up another NDT device. The proposed new method is based on installing galvanized tubes outside the shaft, across from each inside tube, and performing the CSL test between the inside and outside tubes. From the experimental work performed, a model is developed to evaluate the relationship between the thickness of the concrete and the ultrasonic wave properties using signal processing. The experimental results show a direct correlation between the concrete thickness outside the cage and the maximum amplitude of the received signal obtained from frequency-domain data. This study demonstrates how this new method of measuring the diameter of drilled shafts during construction using an NDT method overcomes the limitations of currently used methods. In another part of the study, a new method is proposed to visualize and quantify the extent and location of defects. It is based on a color change in the frequency amplitude of the signal recorded by the receiver probe at the location of defects and is called Frequency Tomography Analysis (FTA). Time-domain data is transformed to frequency-domain data for the signals propagated between tubes using the Fast Fourier Transform (FFT), and the distribution of the FTA is then evaluated. This method is employed after CSL has indicated a high probability of an anomaly in a given area, and it is applied to improve location accuracy and to further characterize the feature. The technique has very good resolution and identifies the exact depth of any void or defect along the length of the drilled shaft for voids inside the cage. The last part of the study evaluates the effect of voids inside and outside the reinforcement cage, and of corrosion in the longitudinal bars, on the strength and axial load capacity of drilled shafts. The objective is to quantify the extent of loss in axial strength and stiffness of drilled shafts due to the presence of different types of symmetric voids and corrosion throughout their lengths.
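A minimal sketch of the signal-processing step described (FFT of the time-domain CSL record, then the maximum frequency-domain amplitude as the feature correlated with concrete thickness). The waveform, sampling rate, and pulse parameters below are synthetic assumptions.

```python
import numpy as np

fs = 500_000                      # sampling rate (Hz), assumed for ultrasonic CSL
t = np.arange(0, 2e-3, 1 / fs)    # 2 ms record

# Synthetic received pulse: a decaying 50 kHz tone burst plus noise,
# standing in for the probe signal between an inside and an outside tube.
rng = np.random.default_rng(1)
signal = (np.exp(-t / 3e-4) * np.sin(2 * np.pi * 50e3 * t)
          + 0.05 * rng.standard_normal(t.size))

# Transform the time-domain data to the frequency domain with the FFT.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amplitude = np.abs(spectrum)

# The feature used in the abstract: maximum amplitude of the received
# signal in the frequency domain, to be regressed against thickness.
peak = amplitude.max()
print(f"peak amplitude {peak:.1f} at {freqs[amplitude.argmax()] / 1e3:.1f} kHz")
```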
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated from the aberration rescaled to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, with aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation, and its visual benefit was further confirmed by the subjective assessments collected from the participants.
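The dissertation's precompensation formula is not reproduced in the abstract; a common way to sketch the idea is regularized inverse (Wiener-style) filtering with the eye's point spread function, which is in the spirit of the approach described but not necessarily the author's exact method. The Gaussian PSF below stands in for one derived from measured Zernike coefficients and the monitored pupil diameter.

```python
import numpy as np

def precompensate(image, psf, k=0.01):
    """Wiener-style inverse filter: pre-distort `image` so that blurring
    by `psf` (the eye's PSF, same shape as the image, centred)
    approximately cancels; `k` regularises frequencies where the PSF
    has little energy."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + k)          # regularised inverse
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(out, 0.0, 1.0)                  # displayable range

# Toy example: a Gaussian PSF standing in for one built from measured
# ocular aberration data.
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
psf /= psf.sum()

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0        # a graphical target (bright square)
display = precompensate(target, psf)
print("display range:", float(display.min()), "-", float(display.max()))
```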
Abstract:
Every space launch increases the overall amount of space debris, yet satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeling approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require the target to be solar-illuminated, since the received signal is temperature dependent. The characterization of debris objects through passive imaging techniques allows further study of the origin, specifications, and future trajectory of debris objects. Conclusions are drawn regarding the thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. The thermal model permits the characterization of debris objects based upon their received long-wave infrared signals, from which information regarding the material type, size, and tumble rate of the observed objects is extracted. This investigation proposes the use of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge of the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis; this knowledge may help constrain the admissible region for the initial orbit determination process. The resulting orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness through an intimate understanding of the debris environment surrounding the spacecraft.
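The radiometric and thermal models themselves are not given in the abstract; the temperature dependence of the received long-wave infrared signal can be illustrated with Planck's law integrated over a typical 8-14 micrometre band. The temperatures, emissivity, and band limits below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.integrate import quad

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(wavelength, temp):
    """Blackbody spectral radiance (W / m^2 / sr / m)."""
    return (2 * H * C**2 / wavelength**5
            / (np.exp(H * C / (wavelength * KB * temp)) - 1))

def lwir_band_radiance(temp, emissivity=0.9, band=(8e-6, 14e-6)):
    """In-band radiance seen by a long-wave infrared sensor; the
    emissivity is a hypothetical debris material property."""
    val, _ = quad(planck_radiance, band[0], band[1], args=(temp,))
    return emissivity * val

# The received LWIR signal is temperature dependent: compare illustrative
# sunlit vs. eclipsed debris temperatures.
for T in (300.0, 200.0):
    print(f"T = {T:.0f} K -> band radiance {lwir_band_radiance(T):.1f} W/m^2/sr")
```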
Abstract:
The electrocardiogram (ECG) signal has been widely used to study the physiological substrates of emotion. However, searching for better filtering techniques that yield a cleaner signal while retaining the maximum amount of relevant information remains an important issue for researchers in this field. Signal processing is widely used for ECG analysis and interpretation, but it can be susceptible to error in the delineation phase and can lead to the loss of important information that is usually considered noise and, consequently, discarded from the analysis. The goal of this study was to evaluate whether the ECG noise allows for the classification of emotions, using its entropy as an input to a decision tree classifier. We collected the ECG signal from 25 healthy participants while they were presented with videos eliciting negative (fear and disgust) and neutral emotions. The results indicated that the neutral condition was identified perfectly (100%), whereas the classification of negative emotions showed good identification performance (60% sensitivity and 80% specificity). These results suggest that the entropy of the noise contains relevant information that can be useful for improving the analysis of the physiological correlates of emotion.
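A minimal sketch of the pipeline described: the ECG "noise" is taken here as the residual that a delineation band-pass filter would discard, its Shannon entropy is the single feature, and a decision tree performs the classification. The synthetic data, filter band, and histogram settings below are assumptions, not the study's actual protocol.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.tree import DecisionTreeClassifier

def shannon_entropy(x, bins=32):
    """Shannon entropy of the signal's amplitude histogram
    (fixed range so that amplitude scale affects the result)."""
    p, _ = np.histogram(x, bins=bins, range=(-4.0, 4.0))
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def noise_entropy(ecg, fs=500.0):
    """Entropy of the 'noise': the residual a 0.5-40 Hz band-pass
    delineation filter would normally discard."""
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    return shannon_entropy(ecg - filtfilt(b, a, ecg))

# Hypothetical dataset: one entropy feature per trial, labelled by the
# elicited condition (0 = neutral, 1 = negative), mimicking the setup.
rng = np.random.default_rng(2)
trials = [rng.standard_normal(5000) * (1.0 + 0.3 * lab) for lab in (0, 1) * 20]
X = np.array([[noise_entropy(tr)] for tr in trials])
y = np.array([0, 1] * 20)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```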
Abstract:
In this paper, a real-time optimal control technique for non-linear plants is proposed. The control system makes use of cell-mapping (CM) techniques, widely used for the global analysis of highly non-linear systems. The CM framework is employed to design approximate optimal controllers via a control-variable discretization. Furthermore, CM-based designs can be improved by the use of supervised feedforward artificial neural networks (ANNs), which have proved to be universal and efficient tools for function approximation while also providing very fast responses. The quantitative nature of the approximate CM solutions fits very well with the characteristics of ANNs. Here, we propose several control architectures that combine, in different ways, supervised neural networks and CM control algorithms. On the one hand, different CM control laws computed for various target objectives can be used to train a neural network, explicitly including the target information in the input vectors. In this way, tracking problems, in addition to regulation ones, can be addressed in a fast and unified manner, obtaining smooth, averaged, and global feedback control laws. On the other hand, adjoining CM and ANNs are also combined into a hybrid architecture to address problems where accuracy and real-time response are critical. Finally, some optimal control problems are solved with the proposed CM, neural, and hybrid techniques, illustrating their good performance.
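The cell-mapping computation itself is involved; the sketch below covers only the second ingredient described, training a feedforward network on a precomputed (state, target) -> control table so that a single network serves both tracking and regulation. A known linear law stands in for the CM solution, and all dimensions and gains are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Stand-in for a cell-mapping result: a table from discretised states and
# target objectives to near-optimal controls. A known linear law
# u = -2 (x - x_ref) - 0.8 v plays the role of the CM solution here.
states = rng.uniform(-1, 1, size=(2000, 2))        # (position, velocity)
targets = rng.uniform(-0.5, 0.5, size=(2000, 1))   # target position x_ref
u = (-2.0 * (states[:, :1] - targets) - 0.8 * states[:, 1:]).ravel()

# The input vector explicitly includes the target, as in the abstract,
# so one network covers tracking as well as regulation.
X = np.hstack([states, targets])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, u)

print("control at x=0.3, v=0.1, x_ref=0:",
      float(net.predict([[0.3, 0.1, 0.0]])[0]))
```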
Abstract:
The purpose of this study is to measure the effects of the video game League of Legends on the cognitive processes of visual working memory (VWM) and problem solving (PS). To measure these effects, a pretest-posttest design was implemented with one experimental group and one control group, each composed of seven participants, in which the aforementioned processes were assessed using the Corsi block-tapping test for VWM and the WAIS III matrices for PS. After the respective training sessions, significant differences were found between testing sessions: in the experimental group for the dependent variable PS, and in the control group for VWM, but with no group-by-session interaction and no differences between groups, which suggests a test-familiarization effect.
Abstract:
The widespread use of computers for the automation of the most diverse tasks has led to the development of applications that can perform activities which until now were not only time-consuming but also subject to errors inherent to human activity. The research carried out within this thesis aims to develop a software application and algorithms that enable the assessment and classification of cheeses produced in the region of Évora through digital image processing. In the course of this research, algorithms and methodologies were developed to identify the cheese eyes and the dimensions of the cheese, the presence of texture on the outside of the cheese, and characteristics of its color, so that, based on these parameters, a classification and evaluation of the cheese can be performed. The resulting software application is simple to use and requires no special computer knowledge: the photographs need only be acquired following a simple set of rules, on the basis of which the processing and classification of the cheese is carried out.
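The thesis' algorithms are not reproduced in the abstract; as an illustration of the eye-identification step it mentions, the sketch below thresholds a synthetic grayscale slice image and labels the connected dark regions. The image, threshold, and eye geometry are assumptions.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale cheese slice: bright paste with dark circular eyes.
img = np.full((120, 120), 0.8)
yy, xx = np.mgrid[:120, :120]
for cy, cx, r in [(30, 40, 8), (70, 80, 12), (90, 30, 6)]:  # hypothetical eyes
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 0.2

# Threshold: eyes are darker than the paste (cut-off chosen by eye here).
mask = img < 0.5

# Label connected components and measure each eye's area and centroid.
labels, n = ndimage.label(mask)
areas = ndimage.sum(mask, labels, index=range(1, n + 1))
centers = ndimage.center_of_mass(mask, labels, index=range(1, n + 1))
for i, (a, c) in enumerate(zip(areas, centers), start=1):
    print(f"eye {i}: area = {a:.0f} px, centre = ({c[0]:.0f}, {c[1]:.0f})")
```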
Abstract:
In the next-generation Internet-of-Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the proliferation of connected devices. Grant-free access protocols are therefore gaining increasing interest for supporting massive multiple access. In addition to scalability requirements, new demands have emerged for massive multiple access, including latency and reliability. The challenges envisaged for future wireless communication networks, particularly in the context of massive access, include: i) a very large population of low-power devices transmitting short packets; ii) an ever-increasing scalability requirement; iii) a mild fixed maximum latency requirement; iv) a non-trivial requirement on reliability. To this end, we suggest the joint use of grant-free access protocols, massive MIMO at the base station, framed schemes that let the contention start and end within a frame, and successive interference cancellation at the base station. In essence, this approach is encapsulated in the concept of coded random access with massive MIMO processing. These schemes can be explored from various angles, spanning the protocol stack from the physical (PHY) to the medium access control (MAC) layer. In this thesis, we delve into both of these layers, examining topics ranging from symbol-level signal processing to scheduling strategies based on successive interference cancellation. In parallel with proposing new schemes, our work includes a theoretical analysis aimed at providing valuable system design guidelines. As a main theoretical outcome, we propose a novel joint PHY and MAC layer design based on density evolution on sparse graphs.
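As an illustration of the density-evolution analysis mentioned, the sketch below runs the standard single-antenna iteration for a coded random access scheme in which every user transmits a fixed number of replicas per frame (a regular special case of irregular repetition slotted ALOHA) under successive interference cancellation. It is not the thesis' massive-MIMO variant, and the repetition degree and loads are illustrative.

```python
import numpy as np

def plr(G, reps=3, iters=200):
    """Density evolution for regular coded random access: each user
    sends `reps` replicas in a frame; G is the load in users per slot.
    Returns the asymptotic packet loss rate under SIC decoding."""
    m = G * reps          # average slot degree (Poisson-distributed)
    p = 1.0               # prob. a replica (edge) is still unresolved
    q = 1.0
    for _ in range(iters):
        q = 1.0 - np.exp(-m * p)   # slot fails to reveal the replica
        p = q ** (reps - 1)        # user unresolved via all other replicas
    return q ** reps               # all replicas of a user unresolved

for G in (0.5, 0.8, 0.9, 1.0):
    print(f"load G = {G:.1f} -> packet loss rate {plr(G):.2e}")
```

Loads below the scheme's decoding threshold drive the loss rate to (near) zero, which is the kind of system design guideline such an analysis provides.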
Abstract:
A new, simple method to design linear-phase finite impulse response (FIR) digital filters, based on the steepest-descent optimization method, is presented in this paper. Starting from the specifications of the desired frequency response and a maximum approximation error, a nearly optimum digital filter is obtained. Tests have shown that this method is an alternative to traditional ones such as Frequency Sampling and Parks-McClellan, mainly when something other than a brick-wall frequency response is required as the desired response.
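The paper's exact formulation is not given in the abstract; the sketch below is one plausible reading: steepest descent on the mean squared error between a desired amplitude response and that of a symmetric (type I linear-phase) FIR filter, with a non-brick-wall (raised-cosine) target. The tap count, frequency grid, and step size are assumptions.

```python
import numpy as np

def target(w):
    """Non-brick-wall desired response: unity up to 0.3*pi rad/sample,
    raised-cosine rolloff to 0.5*pi, zero beyond (illustrative)."""
    out = np.where(w <= 0.3 * np.pi, 1.0, 0.0)
    band = (w > 0.3 * np.pi) & (w < 0.5 * np.pi)
    out[band] = 0.5 * (1 + np.cos(np.pi * (w[band] - 0.3 * np.pi) / (0.2 * np.pi)))
    return out

def design_fir(desired, n_taps=41, lr=0.1, iters=5000):
    """Steepest descent on the MSE between `desired(w)` and the
    amplitude response of a symmetric (type I) FIR filter."""
    M = (n_taps - 1) // 2
    w = np.linspace(0, np.pi, 512)
    D = desired(w)
    # A(w) = c @ cos(k w), with c[0] = h[M] and c[k] = 2 h[M-k], k >= 1.
    basis = np.cos(np.outer(np.arange(M + 1), w))
    c = np.zeros(M + 1)
    for _ in range(iters):
        err = c @ basis - D
        c -= lr * 2 * (basis @ err) / w.size      # gradient step on the MSE
    return np.concatenate([c[:0:-1] / 2, [c[0]], c[1:] / 2])

h = design_fir(target)
w = np.linspace(0, np.pi, 512)
M = (len(h) - 1) // 2
A = sum(h[n] * np.cos((n - M) * w) for n in range(len(h)))
print("max approximation error:", float(np.abs(A - target(w)).max()))
```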
Abstract:
This paper presents a novel adaptive control scheme, with an improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order-versus-engine-speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary path model and then experimentally validated with the aid of a vehicle mockup excited with engine noise. The engine excitation is provided by a real-time sound-quality-equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (> 15 s), thanks to the improved convergence rate. This margin, however, gets narrower for shorter run-ups (<= 10 s).
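The paper's convergence-speed modifications and equalization features are not reproduced here; the sketch below is only the baseline it builds on, a single-tone filtered-X LMS with a known secondary-path model. All signals and parameters are synthetic assumptions.

```python
import numpy as np

fs = 2000.0
N = 20000
n = np.arange(N)
d = np.sin(2 * np.pi * 100.0 * n / fs)   # harmonic disturbance (one order)
x = d.copy()                             # reference, e.g. tacho-derived

s = np.array([0.0, 0.5, 0.3])            # known secondary-path FIR model
w = np.zeros(8)                          # adaptive control filter
mu = 0.01                                # LMS step size
xbuf = np.zeros(len(w))                  # reference regressor
ybuf = np.zeros(len(s))                  # control-output history
xsec = np.zeros(len(s))                  # reference history for filtering
fxbuf = np.zeros(len(w))                 # filtered-reference regressor
e = np.zeros(N)

for i in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                         # anti-noise output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[i] = d[i] + s @ ybuf               # error mic: disturbance + S(z)*y
    xsec = np.roll(xsec, 1); xsec[0] = x[i]
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s @ xsec   # filtered reference
    w -= mu * e[i] * fxbuf               # FxLMS coefficient update

print("steady-state reduction: %.1f dB"
      % (20 * np.log10(np.std(e[-2000:]) / np.std(d[-2000:]))))
```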
Abstract:
Active control solutions appear to be a feasible approach to cope with the steadily increasing requirements for noise reduction in the transportation industry. Active controllers tend to be designed with sound pressure level reduction as the target. However, the control efficiency perceived by the occupants can be assessed more accurately if psychoacoustic metrics are taken into account. Therefore, this paper aims to evaluate, numerically and experimentally, the effect of a feedback controller on the sound quality of a vehicle mockup excited with engine noise. The proposed simulation scheme is described and experimentally validated. The engine excitation is provided by a sound-quality-equivalent engine simulator, running on a real-time platform that delivers harmonic excitation as a function of the driving condition. The controller performance is evaluated in terms of specific loudness and roughness. It is shown that a quite simple control strategy, such as velocity feedback, can result in a satisfactory loudness reduction with slightly spread roughness, improving the overall perception of the engine sound.
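The loudness and roughness computations are beyond a short sketch; the snippet below only illustrates the named control strategy, velocity feedback adding damping to a single structural mode under harmonic excitation. The modal parameters and gain are illustrative, not those of the mockup.

```python
import numpy as np

m, c, k = 1.0, 0.5, 4000.0        # modal mass (kg), damping, stiffness
fs = 10_000.0
dt = 1.0 / fs
t = np.arange(0.0, 2.0, dt)
f = np.sin(2 * np.pi * 10.0 * t)  # harmonic excitation near resonance (~10 Hz)

def simulate(gain):
    """Semi-implicit Euler integration of m*a + c*v + k*x = f + u,
    with velocity feedback u = -gain * v (i.e. added damping)."""
    x = v = 0.0
    resp = np.empty_like(t)
    for i, fi in enumerate(f):
        a = (fi - gain * v - c * v - k * x) / m
        v += a * dt
        x += v * dt
        resp[i] = x
    return resp

open_loop = simulate(0.0)
closed_loop = simulate(5.0)       # illustrative feedback gain
half = len(t) // 2                # discard the start-up transient
print("response reduction: %.1f dB" %
      (20 * np.log10(np.std(closed_loop[half:]) / np.std(open_loop[half:]))))
```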
Abstract:
In this paper, processing methods of Fourier optics implemented in a digital holographic microscopy system are presented. The proposed methodology is based on the ability of digital holography to carry out the complete reconstruction of the recorded wavefront and, consequently, to determine the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In this way, in digital holographic microscopy the field produced by the objective lens can be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitudes of the Fraunhofer diffraction, or equivalently the Fourier transform, of the light distribution across the object can be known. Manipulation of the Fourier-transform plane makes possible the design of digital methods for optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool for image analysis and data processing. The theoretical aspects of the method are presented, and its validity has been demonstrated using computer-generated holograms and simulated images of microscopic objects.
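Reconstruction of arbitrary planes along the propagation path is commonly implemented with the angular spectrum method; the sketch below assumes that implementation (the paper's own reconstruction may differ) and uses a toy aperture field with illustrative parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z (metres) with the
    angular spectrum method, the kind of step used to reconstruct an
    arbitrary plane (e.g. the objective's back focal plane)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy 'hologram': a plane wave through a small aperture, propagated 5 mm.
N, dx, lam = 256, 2e-6, 632.8e-9
aperture = np.zeros((N, N), dtype=complex)
aperture[118:138, 118:138] = 1.0
recon = angular_spectrum(aperture, lam, dx, 5e-3)
print("peak intensity after propagation:", float(np.abs(recon).max() ** 2))
```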
Abstract:
We present a novel array RLS algorithm with a forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority compared to alternative algorithms in the context of adaptive beamforming.
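The array algorithm itself is not specified in the abstract; for contrast, the sketch below is the standard exponentially-weighted RLS, whose initial regularization delta*I decays like lambda^n over time, which is the "fading regularization" the proposed algorithm is designed to avoid. The system-identification data are synthetic.

```python
import numpy as np

def ewrls(X, d, lam=0.99, delta=1.0):
    """Standard exponentially-weighted RLS. P starts as the inverse of
    the regularization delta*I; that regularization fades as lam^n."""
    n, m = X.shape
    P = np.eye(m) / delta
    w = np.zeros(m)
    e = np.empty(n)
    for i in range(n):
        x = X[i]
        k = P @ x / (lam + x @ P @ x)       # gain vector
        e[i] = d[i] - w @ x                 # a priori error
        w = w + k * e[i]
        P = (P - np.outer(k, x @ P)) / lam
    return w, e

# System-identification toy: recover a 4-tap filter from noisy data.
rng = np.random.default_rng(4)
X = rng.standard_normal((2000, 4))
w_true = np.array([1.0, -0.5, 0.25, 0.1])
d = X @ w_true + 0.01 * rng.standard_normal(2000)
w, e = ewrls(X, d)
print("estimate:", np.round(w, 3))
```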
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
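A minimal sketch of the convex combination idea for the supervised pair (LMS as the gradient-based component, RLS as the Hessian-based one), with the mixing parameter adapted through a sigmoid as in the standard combination scheme; the paper also covers the blind pair (CMA and SWA), which is not sketched here. The random-walk plant and all step sizes are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(5)
n, m = 5000, 4
X = rng.standard_normal((n, m))
w_true = rng.standard_normal(m)
d = np.empty(n)
for i in range(n):                        # random-walk (time-variant) plant
    w_true += 0.001 * rng.standard_normal(m)
    d[i] = X[i] @ w_true + 0.01 * rng.standard_normal()

# Component filters: gradient-based (LMS) and Hessian-based (RLS).
w1 = np.zeros(m); mu = 0.05                          # LMS
w2 = np.zeros(m); P = np.eye(m) * 100; lam = 0.995   # RLS
a, mu_a = 0.0, 1.0                                   # mixing parameter

for i in range(n):
    x = X[i]
    y1, y2 = w1 @ x, w2 @ x
    eta = sigmoid(a)
    e = d[i] - (eta * y1 + (1 - eta) * y2)           # combined error
    # Stochastic-gradient update of the mixing weight on e^2.
    a += mu_a * e * (y1 - y2) * eta * (1 - eta)
    a = np.clip(a, -4.0, 4.0)                        # keep adaptation alive
    # Each component adapts independently with its own error.
    w1 += mu * (d[i] - y1) * x                       # LMS update
    k = P @ x / (lam + x @ P @ x)
    w2 = w2 + k * (d[i] - y2)                        # RLS update
    P = (P - np.outer(k, x @ P)) / lam

print("final mixing weight eta:", round(float(sigmoid(a)), 3))
```

The final mixing weight indicates which family the combination favours for this particular degree of nonstationarity, which is exactly the trade-off the paper's steady-state model quantifies.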