948 results for PROCESSING TECHNIQUE


Relevance: 30.00%

Publisher:

Abstract:

This paper presents the classification of 110 copper ore samples from the Sossego Mine, based on X-ray diffraction and cluster analysis. Comparison of the positions and intensities of the diffracted peaks allowed seven ore types to be distinguished, which differ in the proportions of the major minerals: quartz, feldspar, actinolite, iron oxides, mica and chlorite. There was a strong correlation between the grouping and the location of the samples in the Sequeirinho and Sossego orebodies. This relationship is due to the different types and intensities of hydrothermal alteration prevailing in each orebody, which are reflected in the mineralogical composition and thus in the X-ray diffractograms of the samples.
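To illustrate the clustering step, here is a minimal sketch assuming each diffractogram is reduced to an intensity vector on a common 2-theta grid; the variable names, the Ward linkage and the placeholder data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# diffractograms: one row per sample, one column per 2-theta step,
# intensities normalized so that peak-height differences drive the grouping
diffractograms = np.random.rand(110, 2000)           # placeholder data
diffractograms /= diffractograms.max(axis=1, keepdims=True)

# agglomerative clustering on the full intensity profiles
Z = linkage(diffractograms, method="ward", metric="euclidean")

# cut the dendrogram into seven groups, mirroring the seven ore types
ore_type = fcluster(Z, t=7, criterion="maxclust")
print(np.bincount(ore_type)[1:])                     # samples per ore type
```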

Relevance: 30.00%

Publisher:

Abstract:

A specific manufacturing process to obtain continuous glass fiber-reinforced PTFE laminates was studied and some of their mechanical properties were evaluated. Young's modulus and maximum strength were measured by a three-point bending test and a tensile test using the Digital Image Correlation (DIC) technique. Adhesion tests, thermal analysis and microscopy were used to evaluate the fiber-matrix adhesion, which depends strongly on the sintering time. The composite material obtained had a Young's modulus of 14.2 GPa and an ultimate strength of 165 MPa, which correspond to approximately 24 times the modulus and six times the ultimate strength of pure PTFE. These results show that the PTFE composite, manufactured under specific conditions, has great potential to provide parts with a performance suitable for structural applications. (C) 2012 Elsevier Ltd. All rights reserved.
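For context, the flexural modulus in a three-point bending test is commonly obtained from the slope of the load-deflection curve as E = L^3 m / (4 b h^3). The sketch below uses placeholder specimen dimensions and synthetic data, not the values from the paper.

```python
import numpy as np

# placeholder specimen geometry (m) and synthetic load-deflection data (N, m)
L, b, h = 0.060, 0.015, 0.002            # support span, width, thickness
deflection = np.linspace(0.0, 1e-3, 50)
load = 80e3 * deflection                 # synthetic linear response

# flexural modulus from the initial slope m of the load-deflection curve:
# E = L^3 * m / (4 * b * h^3)
m = np.polyfit(deflection, load, 1)[0]
E = L**3 * m / (4 * b * h**3)
print(f"Flexural modulus: {E / 1e9:.1f} GPa")
```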

Relevance: 30.00%

Publisher:

Abstract:

Proton nuclear magnetic resonance (1H NMR) spectroscopy is a successful technique for detecting biochemical changes in biological samples. However, the achieved NMR resolution is not sufficiently high when the analysis is performed on intact cells. To improve spectral resolution, high resolution magic angle spinning (HR-MAS) is used and the broad signals are separated by a T2 filter based on the CPMG pulse sequence. Additionally, HR-MAS experiments with a T2 filter are preceded by a water suppression procedure. The goal of this work is to demonstrate that the experimental procedures of water suppression and T2 or diffusion filters are unnecessary steps when the filter diagonalization method (FDM) is used to process the time-domain HR-MAS signals. Manipulation of the FDM results, represented as a tabular list of peak positions, widths, amplitudes and phases, allows the removal of water signals without disturbing overlapping or nearby signals. Additionally, the FDM can also be used for phase correction and noise suppression, and to discriminate between sharp and broad lines. The results demonstrate the applicability of FDM post-acquisition processing to obtain high quality HR-MAS spectra of heterogeneous biological materials.
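A minimal sketch of this post-acquisition idea follows: the FDM output is represented here as a plain list of fitted Lorentzian lines, and water-region peaks and overly broad components are simply dropped before the spectrum is rebuilt. The field names, the water-region bounds, the linewidth threshold and the example values are assumptions for illustration only.

```python
import numpy as np

# each FDM entry: (frequency_ppm, width_hz, amplitude, phase_rad)
fdm_lines = [(4.70, 35.0, 900.0, 0.10),   # residual water (broad)
             (3.21, 2.0, 12.0, 0.00),     # choline region
             (1.31, 3.5, 20.0, 0.05)]     # lipid/lactate region

def clean_lines(lines, water_ppm=(4.5, 5.0), max_width_hz=20.0):
    """Drop water-region peaks and overly broad components, zero the phases."""
    kept = []
    for freq, width, amp, phase in lines:
        if water_ppm[0] <= freq <= water_ppm[1]:
            continue                          # remove water without touching neighbours
        if width > max_width_hz:
            continue                          # discard broad (short T2) background
        kept.append((freq, width, amp, 0.0))  # phase-corrected line list
    return kept

def spectrum(lines, ppm_axis, sf_hz_per_ppm=400.0):
    """Rebuild an absorption-mode spectrum from the cleaned line list."""
    spec = np.zeros_like(ppm_axis)
    for freq, width, amp, _ in lines:
        hwhm = 0.5 * width / sf_hz_per_ppm
        spec += amp * hwhm**2 / ((ppm_axis - freq) ** 2 + hwhm**2)
    return spec

ppm = np.linspace(0.0, 6.0, 4096)
clean_spec = spectrum(clean_lines(fdm_lines), ppm)
```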

Relevance: 30.00%

Publisher:

Abstract:

Although hydrophobicity is usually a difficult parameter to determine in the field, it has been pointed out as a good option for monitoring the aging of polymeric outdoor insulators. For this purpose, digital image processing of photos taken of wet insulators is currently the main technique. However, important challenges remain to be overcome: images taken under non-controlled illumination conditions can interfere with the analysis, and there are no standard surfaces with different levels of hydrophobicity. In this paper, the photographic image samples were digitally filtered to reduce the influence of illumination, and hydrophobic surface samples were prepared by wetting silicone surfaces with water-alcohol solutions. Furthermore, no previous studies were found that attempt to quantify and relate these properties through a mathematical function that could be used in the field by electric power companies. Based on these considerations, high quality images of numerous hydrophobic surfaces were obtained, and three different image processing methodologies, the fractal dimension and two Haralick texture descriptors (entropy and homogeneity), combined with several digital filters, were compared. The Haralick entropy descriptor combined with the white top-hat filter gave the best results for classifying hydrophobicity.
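As an illustration of one such pipeline, the sketch below applies a white top-hat filter and then computes a Haralick (GLCM) entropy descriptor with scikit-image; the structuring-element radius, the gray-level quantization and the function name are assumptions, not the exact settings used by the authors.

```python
import numpy as np
from skimage import io, img_as_ubyte, morphology
from skimage.feature import graycomatrix

def tophat_entropy(image_path, radius=5, levels=64):
    """White top-hat filtering followed by a Haralick (GLCM) entropy descriptor."""
    gray = img_as_ubyte(io.imread(image_path, as_gray=True))
    # the white top-hat keeps small bright structures (droplets) and removes
    # slowly varying illumination
    filtered = morphology.white_tophat(gray, morphology.disk(radius))
    # quantize to fewer gray levels before building the co-occurrence matrix
    quant = (filtered.astype(np.float64) * (levels - 1) / 255).astype(np.uint8)
    glcm = graycomatrix(quant, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))    # Haralick entropy
```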

Relevance: 30.00%

Publisher:

Abstract:

We discuss the processing of data recorded with multimonochromatic x-ray imagers (MMI) in inertial confinement fusion experiments. The MMI records hundreds of gated, spectrally resolved images that can be used to unravel the spatial structure of the implosion core. In particular, we present a new method to determine the centers of all the images in the array, an improved technique for reconstructing narrowband implosion-core images, two algorithms to determine the shape and size of the implosion core volume based on reconstructed broadband images recorded along three quasi-orthogonal lines of sight, and the removal of artifacts from the space-integrated spectra.

Relevance: 30.00%

Publisher:

Abstract:

Biological processes are very complex mechanisms, most of them being accompanied by, or manifested as, signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the driving factors behind the advances in medicine and the biosciences recorded in the recent past. It is a fact that the instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how it is possible to attenuate, and ideally to remove, these effects, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared to those in the literature tackling the same problems; conclusions are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is not dependent only on designer choices, but is driven by input signal characteristics too. Performance comparisons following the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is possible. Concerning extracellular recordings, the results of the proposed denoising technique compared to other signal processing algorithms showed an improvement over the state of the art of almost 4 dB.
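For readers unfamiliar with the deconvolution problem mentioned above, the sketch below shows a textbook, non-adaptive baseline (Richardson-Lucy from scikit-image with an assumed Gaussian point spread function); it is meant only to illustrate the task, not the original adaptive algorithms developed in the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
true_scene = rng.random((128, 128))              # stand-in for tissue reflectivity

# assumed Gaussian point spread function (a real system PSF would be estimated)
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = fftconvolve(true_scene, psf, mode="same")

# iterative Richardson-Lucy deconvolution; more iterations sharpen the image
# but also amplify noise, which is where adaptive approaches become relevant
restored = richardson_lucy(blurred, psf, 30)
```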

Relevance: 30.00%

Publisher:

Abstract:

A main objective of human movement analysis is the quantitative description of joint kinematics and kinetics. This information has great potential to address clinical problems both in orthopaedics and in motor rehabilitation. Previous studies have shown that the assessment of kinematics and kinetics from stereophotogrammetric data requires a setup phase, special equipment and expertise to operate. Besides, this procedure may cause a feeling of uneasiness in the subjects and may interfere with their walking. The general aim of this thesis is the implementation and evaluation of new 2D markerless techniques, in order to contribute to the development of an alternative to traditional stereophotogrammetric techniques. At first, the focus of the study was the estimation of ankle-foot complex kinematics during the stance phase of gait. Two particular cases were considered: subjects barefoot and subjects wearing ankle socks. The use of socks was investigated in view of the development of the hybrid method proposed in this work. Different algorithms were analyzed, evaluated and implemented in order to obtain a 2D markerless solution to estimate the kinematics in both cases. The proposed technique was validated against a traditional stereophotogrammetric system. Its implementation leads towards an easy-to-configure (and more comfortable for the subject) alternative to the traditional stereophotogrammetric system. The technique was then improved so that knee flexion/extension could also be measured with a 2D markerless approach; the main changes in the implementation concerned occlusion handling and background segmentation. With these additional constraints, the proposed technique was applied to the estimation of knee flexion/extension and compared with a traditional stereophotogrammetric system. Results showed that the knee flexion/extension estimates from the traditional stereophotogrammetric system and from the proposed markerless system were highly comparable, making the latter a potential alternative for clinical use. A contribution has also been given to the estimation of lower-limb kinematics in children with cerebral palsy (CP). For this purpose, a hybrid technique, which uses high-cut underwear and ankle socks as "segmental markers" in combination with a markerless methodology, was proposed. The proposed hybrid technique differs from the abovementioned markerless technique in the choice of algorithm. Results showed that the proposed hybrid technique can become a simple and low-cost alternative to traditional stereophotogrammetric systems.
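As a minimal illustration of the background segmentation step common to 2D markerless pipelines, the sketch below extracts binary silhouettes with OpenCV's MOG2 background subtractor; the subtractor, its parameters and the post-processing are generic choices assumed here, not the specific algorithms selected in the thesis.

```python
import cv2

def extract_silhouettes(video_path):
    """Yield a binary foreground mask per frame using background subtraction."""
    capture = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300,
                                                    varThreshold=25,
                                                    detectShadows=True)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.medianBlur(mask, 5)                               # suppress speckle
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        yield mask
    capture.release()
```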

Relevance: 30.00%

Publisher:

Abstract:

The heart is a wonderful but complex organ: it uses electrochemical mechanisms to produce the mechanical energy that pumps blood throughout the body and sustains the life of humans and animals. This organ can be subject to several diseases, and sudden cardiac death (SCD) is the most catastrophic manifestation of these diseases, responsible for the death of a large number of people throughout the world. It is estimated that 325,000 Americans die of SCD every year. SCD most commonly occurs as a result of reentrant tachyarrhythmias (ventricular tachycardia (VT) and ventricular fibrillation (VF)), and the identification of the patients at higher risk of developing SCD has been a difficult clinical challenge. Nowadays, a particular electrocardiogram (ECG) abnormality, "T-wave alternans" (TWA), is considered a precursor of lethal cardiac arrhythmias and sudden death, and a sensitive indicator of risk for SCD. TWA is defined as a beat-to-beat alternation in the shape, amplitude, or timing of the T-wave on the ECG, indicative of the underlying repolarization of cardiac cells [5]. In other words, TWA is the macroscopic effect of subcellular and cellular mechanisms involving ionic kinetics and the consequent depolarization and repolarization of the myocytes. Experimental activities have shown that TWA on the ECG is a manifestation of an underlying alternation of long and short action potential durations (APDs), the so-called APD alternans, of cardiac myocytes in the myocardium. Understanding the mechanism of APD alternans is the first step toward preventing its occurrence. In order to investigate these mechanisms, it is very important to understand that biological systems are complex systems whose macroscopic properties arise from the nonlinear interactions among their parts. The whole is greater than the sum of the parts, and it cannot be understood only by studying the single parts. In this sense the heart is a complex nonlinear system whose behavior follows nonlinear dynamics; alternans, too, are a manifestation of a phenomenon typical of nonlinear dynamical systems, the period-doubling bifurcation. Over the past decade, it has been demonstrated that electrical alternans in cardiac tissue is an important marker for the development of ventricular fibrillation and a significant predictor of mortality. It has been observed that acute exposure to a low concentration of calcium does not decrease the magnitude of alternans, and sustained ventricular fibrillation (VF) is still easily induced under these conditions. However, with prolonged exposure to a low concentration of calcium, alternans disappears, but VF is still inducible. This work is based on this observation and tries to clarify it. The aim of this thesis is to investigate the effect of hypocalcemia on spatial alternans and VF by performing experiments on canine hearts perfused with a solution with physiological ionic concentrations and with a solution with a low calcium concentration (hypocalcemia); in order to investigate the so-called memory effect, the experimental protocol was modified along the way. The experiments were performed with the optical mapping technique, using a voltage-sensitive dye, and a custom-made Java code was used in post-processing. Since the Nolasco and Dahlen criterion [8] was found inadequate for the prediction of alternans, and taking the experimental results into account, another criterion, which considers the memory effect, has been implemented.
The implementation of this criterion could be the first step in the creation of an AP-based method for discriminating who is at risk of developing VF. This work is divided into four chapters: the first is a brief presentation of the physiology of the heart; the second is a review of the major theories and discoveries in the study of cardiac dynamics; the third chapter presents an overview of the experimental activity and the optical mapping technique; the fourth chapter contains the presentation of the results and the conclusions.
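The link between restitution and alternans can be made concrete with a simple iterated map: pacing at a fixed basic cycle length (BCL) gives APD_{n+1} = f(BCL - APD_n), and alternans appears as a period-doubling of this map when the slope of the restitution curve f exceeds one at the fixed point (the classical Nolasco and Dahlen picture, without memory). The sketch below uses an assumed exponential restitution curve and placeholder constants; it illustrates that classical picture only, not the memory-based criterion developed in the thesis.

```python
import numpy as np

def restitution(di, apd_max=300.0, tau=60.0):
    """Assumed exponential APD restitution curve f(DI), all times in ms."""
    return apd_max * (1.0 - np.exp(-di / tau))

def pace(bcl, beats=80, apd0=200.0):
    """Iterate APD_{n+1} = f(BCL - APD_n) at a fixed basic cycle length."""
    apds = [apd0]
    for _ in range(beats):
        di = max(bcl - apds[-1], 1.0)     # diastolic interval cannot go negative
        apds.append(restitution(di))
    return np.array(apds)

for bcl in (400.0, 330.0):                # slow pacing vs. pacing past the bifurcation
    print(f"BCL = {bcl:.0f} ms, last APDs:", np.round(pace(bcl)[-4:], 1))
# at BCL = 400 ms the APD settles to a constant value; at BCL = 330 ms the
# restitution slope exceeds one and the APDs alternate long-short-long-short
```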

Relevance: 30.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware platforms for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes in order to prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered by real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
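To make the basic CS idea concrete (random projections taken at the node, sparse reconstruction at the sink), here is a minimal sketch using a simple orthogonal matching pursuit solver; the dimensions, sparsity level and synthetic signal are illustrative assumptions, not the data or algorithms evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                      # signal length, measurements, sparsity

# k-sparse signal and random Gaussian sensing matrix (what a node would apply)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                               # compressed samples sent over the radio

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery at the sink."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```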

Relevance: 30.00%

Publisher:

Abstract:

Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters is still difficult due, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. First, analyses were performed to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the maximum slope method and on the dual-input one-compartment model. Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; however, the slope method is always applicable in a clinical context. Then the variability related to TAC processing in the application of the slope method was analyzed. Comparison of the results with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of the Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion map was analyzed. The ROI approach and the map approach provide consistent BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
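For context, the maximum slope method estimates arterial blood flow as the peak gradient of the tissue enhancement curve divided by the peak of the arterial input curve. The sketch below uses synthetic gamma-variate-like TACs and a placeholder sampling interval; it is a schematic of the formula, not the automatic algorithms developed in the thesis.

```python
import numpy as np

dt = 1.0                                   # s between CT acquisitions (placeholder)
t = np.arange(0.0, 60.0, dt)

# synthetic gamma-variate style enhancement curves (HU above baseline)
aorta_tac = 300.0 * (t / 8.0) ** 3 * np.exp(3.0 * (1.0 - t / 8.0))
tissue_tac = 10.0 * (t / 14.0) ** 3 * np.exp(3.0 * (1.0 - t / 14.0))

# maximum slope method: BFa = max d(tissue TAC)/dt / peak(arterial TAC)
max_slope = np.max(np.gradient(tissue_tac, dt))        # HU / s
bfa = max_slope / np.max(aorta_tac)                    # 1 / s
print(f"BFa = {bfa * 6000:.0f} ml/min per 100 ml")     # 60 s/min * 100 ml / 100 ml
```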

Relevance: 30.00%

Publisher:

Abstract:

In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Therefore, many research activities focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis deals with the definition of two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operating conditions. The systematic error in the acquired data is thus compensated, increasing accuracy. Moreover, the definition of a 3D thermogram is examined. The geometrical information of the object and its thermal properties, coming from a thermographic inspection, are combined so that a temperature value is available for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize temperature values and make the thermal data independent of the thermal camera's point of view.
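A minimal sketch of the data-fusion step described above: each 3D point from the laser scanner is projected into the thermal image through a pinhole camera model so that a temperature can be attached to every visible point. The intrinsic matrix, the scanner-to-camera pose and the function name are placeholders assumed for illustration.

```python
import numpy as np

def fuse_temperature(points_xyz, thermal_image, K, R, t):
    """Attach a temperature to each 3D point via pinhole projection.

    points_xyz    : (N, 3) scanner coordinates
    thermal_image : (H, W) temperature map from the thermal camera
    K, R, t       : assumed intrinsic matrix and scanner-to-camera pose
    """
    cam = R @ points_xyz.T + t.reshape(3, 1)          # points in the camera frame
    uvw = K @ cam
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    h, w = thermal_image.shape
    visible = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temps = np.full(points_xyz.shape[0], np.nan)      # NaN for points not seen
    temps[visible] = thermal_image[v[visible], u[visible]]
    return np.column_stack([points_xyz, temps])       # x, y, z, temperature
```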

Relevance: 30.00%

Publisher:

Abstract:

In the present thesis, a new diagnosis methodology based on an advanced use of time-frequency analysis is presented. More precisely, a new fault index is defined that allows individual fault components to be tracked in a single frequency band. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed frequency band. Then, the discrete wavelet transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time under any transient operating condition;
- no need for speed/slip measurement or estimation;
- higher accuracy in filtering frequency components around the fundamental in the case of rotor faults;
- reduced likelihood of false indications, by avoiding confusion with other fault harmonics (the contribution of the most relevant fault frequency components under speed-varying conditions is confined to a single frequency band);
- low memory requirement due to the low sampling frequency;
- reduced processing latency (no repeated sampling operations are required).
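A minimal sketch of the two steps follows: frequency sliding via the analytic signal, then a wavelet-energy index on the band of interest. The shift frequency, wavelet family, decomposition level and synthetic current are placeholder assumptions, not the settings of the proposed method.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 2000.0                                 # sampling frequency (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
# synthetic stator current: 50 Hz fundamental plus a small fault-related
# component assumed at 35 Hz for illustration
healthy = np.cos(2 * np.pi * 50 * t)
faulty = healthy + 0.05 * np.cos(2 * np.pi * 35 * t)

def fault_index(signal, f_shift, fs, wavelet="db8", level=8):
    """Slide the monitored component toward 0 Hz, then measure its wavelet energy."""
    n = np.arange(len(signal)) / fs
    slid = np.real(hilbert(signal) * np.exp(-2j * np.pi * f_shift * n))
    approx = pywt.wavedec(slid, wavelet, level=level)[0]   # band 0 .. fs / 2**(level+1)
    return np.sum(approx ** 2) / len(approx)               # energy-based fault index

print("healthy index:", fault_index(healthy, 35.0, fs))
print("faulty index :", fault_index(faulty, 35.0, fs))
# the index grows when the monitored fault component is present in the band
```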

Relevance: 30.00%

Publisher:

Abstract:

The wide diffusion of cheap, small, and portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health-care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.

Relevance: 30.00%

Publisher:

Abstract:

Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability and low optical absorption, and they require only common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach uses the beam propagation method to generate varied modal field distributions, which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB, and it is shown that in an input/output pair of vias the average loss per via may be lower than the targeted 3 dB.
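A minimal sketch of the angular spectrum propagation step is given below: the input field is Fourier transformed, each plane-wave component is multiplied by its propagation phase over the via gap, and the result is transformed back. The wavelength, grid spacing, mode size and gap length are placeholder assumptions, not the parameters of the simulated structure.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, distance):
    """Propagate a complex 2-D field over `distance` with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    kz_sq = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2      # (kz / 2 pi)^2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * distance) * (kz_sq > 0)      # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# example: carry a Gaussian-like mode across an assumed 250 um vertical gap
wavelength, dx = 0.85e-6, 1.0e-6
x = (np.arange(256) - 128) * dx
X, Y = np.meshgrid(x, x)
mode = np.exp(-(X ** 2 + Y ** 2) / (10e-6) ** 2)             # 10 um mode field radius
out = angular_spectrum(mode, wavelength, dx, 250e-6)

# fraction of power that would couple back into an identical mode after the gap
overlap = np.abs(np.sum(np.conj(mode) * out)) ** 2 / (
    np.sum(np.abs(mode) ** 2) * np.sum(np.abs(out) ** 2))
print(f"mode-overlap coupling loss: {-10 * np.log10(overlap):.2f} dB")
```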

Relevance: 30.00%

Publisher:

Abstract:

We report on a comprehensive signal processing procedure for very low signal levels for the measurement of neutral deuterium in the local interstellar medium from a spacecraft in Earth orbit. The deuterium measurements were performed with the IBEX-Lo camera on NASA's Interstellar Boundary Explorer (IBEX) satellite. Our analysis technique for these data consists of creating a mass relation in three-dimensional time-of-flight space to accurately determine the position of the predicted D events, precisely modeling the tail of the H events in the region where the H tail events lie near the expected D events, and then separating the H tail from the observations to extract the very faint D signal. This interstellar D signal, which is expected to amount to a few counts per year, is extracted from a strong terrestrial background signal consisting of sputter products from the sensor's conversion surface. As a reference, we accurately measure the terrestrial D/H ratio in these sputtered products and then discriminate against this terrestrial background source. During the three years of the mission when the deuterium signal was visible to IBEX, the observation geometry and orbit allowed for a total observation time of 115.3 days. Because of the spinning of the spacecraft and the stepping through eight energy channels, the actual observing time of the interstellar wind was only 1.44 days. With the optimised data analysis we found three counts that could be attributed to interstellar deuterium. These results update our earlier work.
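Purely as a schematic of the tail-subtraction idea (not the IBEX-Lo calibration or the actual analysis), the sketch below fits an assumed exponential H-tail model outside the window where D events are expected and subtracts it inside that window; all bin values, the window and the model are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative counts per time-of-flight bin (arbitrary units, synthetic data)
tof = np.arange(40.0, 80.0, 1.0)
rng = np.random.default_rng(2)
counts = rng.poisson(400.0 * np.exp(-(tof - 40.0) / 4.0)).astype(float)
counts[20:24] += np.array([1.0, 2.0, 1.0, 0.0])     # a faint injected "D" excess

def h_tail(t, amplitude, tau):
    """Assumed exponential model of the H event tail."""
    return amplitude * np.exp(-(t - 40.0) / tau)

# fit the H tail using only bins outside the expected D window
d_window = (tof >= 60.0) & (tof < 64.0)
popt, _ = curve_fit(h_tail, tof[~d_window], counts[~d_window], p0=(400.0, 4.0))

# subtract the modelled tail; the residual excess in the D window fluctuates at
# the level of a few counts, which is why long accumulation times are needed
excess = counts - h_tail(tof, *popt)
print("excess counts in D window:", round(float(excess[d_window].sum()), 1))
```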