971 results for high dynamic range
Abstract:
Tactile sensing is an important aspect of robotic systems and enables safe, dexterous robot-environment interaction. The design and implementation of tactile sensors on robots has been a topic of research for the past 30 years, and current challenges include mechanically flexible "sensing skins", high dynamic range (DR) sensing (i.e., high force range combined with fine force resolution), multi-axis sensing, and integration between the sensors and the robot. This dissertation addresses some of these challenges through a novel manufacturing process that incorporates conductive and dielectric elastomers in a reusable, multi-length-scale mold, and through new sensor designs for multi-axis sensing that improve force range without sacrificing resolution. A single taxel was integrated into a 1-degree-of-freedom robotic gripper for closed-loop slip detection. Manufacturing involved casting a composite silicone rubber, polydimethylsiloxane (PDMS) filled with conductive particles such as carbon nanotubes, into a mold to produce flexible microscale features on the order of tens of microns. Molds were initially produced via microfabrication of silicon wafers, but were limited in sensing area and were costly. An improved technique was developed that produced acrylic molds using a computer numerical controlled (CNC) milling machine; this maintained the ability to produce microscale features while increasing the sensing area and reducing costs. The new sensing skins had features as small as 20 microns over an area as large as a human hand. Sensor architectures capable of high-dynamic-range sensing of both shear and normal forces were produced. Using this architecture, two sensing modalities were developed: a capacitive approach and a contact-resistive approach. The capacitive approach demonstrated better dynamic range, while the contact-resistive approach used simpler circuitry. With the contact-resistive approach, normal force range and resolution were 8,000 mN and 1,000 mN, respectively, and shear force range and resolution were 450 mN and 100 mN, respectively. With the capacitive approach, normal force range and resolution were 10,000 mN and 100 mN, respectively, and shear force range and resolution were 1,500 mN and 50 mN, respectively.
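As a point of reference for these figures, dynamic range in this force-sensing sense is simply the ratio of force range to force resolution; a minimal sketch of the arithmetic (the dB convention is one common choice, not something specified in the dissertation):

```python
# Dynamic range of a force sensor as range/resolution, also expressed in dB.
import math

def dynamic_range_db(force_range_mn: float, resolution_mn: float) -> float:
    """Return 20*log10(range/resolution), a common dB convention for DR."""
    return 20 * math.log10(force_range_mn / resolution_mn)

# Contact-resistive normal force: 8,000 mN range at 1,000 mN resolution (8:1).
print(dynamic_range_db(8000, 1000))    # ~18.1 dB
# Capacitive normal force: 10,000 mN range at 100 mN resolution (100:1).
print(dynamic_range_db(10000, 100))    # 40.0 dB
```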
Abstract:
The isotope composition of Pb is difficult to determine accurately because of the lack of a stable normalisation ratio. Double- and triple-spike addition techniques provide one solution and presently yield the most accurate measurements. A number of recent studies have claimed that improved accuracy and precision could also be achieved by multi-collector ICP-MS (MC-ICP-MS) Pb-isotope analysis using the addition of Tl of known isotope composition to Pb samples. In this paper, we test whether the known isotope composition of Tl can be used to correct for the mass discrimination of Pb, with an extensive dataset for the NIST standard SRM 981, a comparison of MC-ICP-MS with TIMS data, and a comparison with three isochrons from different geological environments. When all our NIST SRM 981 data are normalised with one constant Tl-205/Tl-203 of 2.38869, the following averages and reproducibilities are obtained: Pb-207/Pb-206 = 0.91461 +/- 18; Pb-208/Pb-206 = 2.1674 +/- 7; and Pb-206/Pb-204 = 16.941 +/- 6. These two-sigma standard deviations of the mean correspond to 149, 330, and 374 ppm, respectively. Accuracies relative to triple-spike values are 149, 157, and 52 ppm, respectively, and thus well within uncertainties. The largest component of the uncertainties stems from the Pb data alone and is not caused by differential mass discrimination behaviour of Pb and Tl. In routine operation, variation of sample introduction memory and production of isobaric molecular interferences in the spectrometer's collision cell currently appear to be the ultimate limitation to better reproducibility. A comparative study of five different datasets from actual samples (bullets, international rock standards, carbonates, metamorphic minerals, and sulphide minerals) demonstrates that in most cases geological scatter of the sample exceeds the achieved analytical reproducibility. We observe good agreement between TIMS and MC-ICP-MS data for international rock standards, but find that such comparison does not constitute the ultimate test for the validity of the MC-ICP-MS technique. Two attempted isochrons resulted in geological scatter (in one case small) in excess of analytical reproducibility. However, in one case (leached Great Dyke sulphides) we obtained a true isochron (MSWD = 0.63) age of 2578.3 +/- 0.9 Ma, which is identical to and more precise than a recently published U-Pb zircon age (2579 +/- 3 Ma) for a Great Dyke websterite [Earth Planet. Sci. Lett. 180 (2000) 1-12]. Reproducibility of this age by means of an isochron we regard as a robust test of accuracy over a wide dynamic range. We show that reliable and accurate Pb-isotope data can be obtained by careful operation of second-generation MC-ICP magnetic sector mass spectrometers. (C) 2002 Elsevier Science B.V. All rights reserved.
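For background, the Tl normalisation discussed here is conventionally applied through the exponential mass-fractionation law; the sketch below illustrates that correction under standard assumptions (textbook atomic masses, illustrative measured ratios) and is not code from the paper:

```python
# Exponential-law mass bias correction of a measured Pb ratio using Tl.
# The exponential law and atomic masses are standard; inputs are illustrative.
import math

M_TL205, M_TL203 = 204.97443, 202.97234   # atomic masses (u)
M_PB206, M_PB204 = 205.97446, 203.97304

TL_TRUE = 2.38869   # Tl-205/Tl-203 normalisation value used in the paper

def correct_pb206_204(measured_pb: float, measured_tl: float) -> float:
    """Correct a measured Pb-206/Pb-204 ratio for instrumental mass bias."""
    # Fractionation factor beta derived from the Tl doublet.
    beta = math.log(TL_TRUE / measured_tl) / math.log(M_TL205 / M_TL203)
    # Apply the same beta to the Pb ratio of interest.
    return measured_pb * (M_PB206 / M_PB204) ** beta

print(correct_pb206_204(measured_pb=16.80, measured_tl=2.3700))  # example inputs
```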
Abstract:
An all-in-one version of a capacitively coupled contactless conductivity detector is introduced. The absence of moving parts (potentiometers and connectors) makes it compact (6.5 cm³) and robust. A local oscillator, working at 1.1 MHz, was optimized for capillaries with inner diameters from 20 to 100 µm. Low-noise circuitry and a high-resolution analog-to-digital converter (ADC) (21 effective bits) grant good sensitivities for the capillaries and background electrolytes currently used in capillary electrophoresis. The fixed frequency and amplitude of the signal generator is a drawback that is compensated by steady calibration curves for conductivity. Another advantage is the possibility of determining the inner diameter of a capillary by reading the ADC while air and subsequently water flow through the capillary; the difference in ADC readings may be converted into the inner diameter via a calibration curve. This feature is granted by the 21-bit ADC, which eliminates the need for baseline compensation in hardware. In a typical application, the limits of detection based on the 3-sigma criterion (without baseline filtering) were 0.6, 0.4, 0.3, 0.5, 0.6, and 0.8 µmol/L for K⁺, Ba²⁺, Ca²⁺, Na⁺, Mg²⁺, and Li⁺, respectively, which is comparable to other high-quality implementations of a capacitively coupled contactless conductivity detector.
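The 3-sigma criterion used for these detection limits is easy to state explicitly: LOD = 3·s_blank/slope, where s_blank is the baseline noise and slope is the calibration sensitivity. A minimal sketch with made-up numbers (not data from the paper):

```python
# Limit of detection by the 3-sigma criterion: LOD = 3 * s_blank / slope,
# where s_blank is the baseline noise (signal units) and slope is the
# calibration sensitivity (signal per unit concentration).
import statistics

def lod_3sigma(baseline: list[float], slope: float) -> float:
    """Return the LOD in concentration units for a linear calibration."""
    s_blank = statistics.stdev(baseline)
    return 3 * s_blank / slope

# Hypothetical baseline readings (ADC counts) and sensitivity.
baseline = [1002.1, 1001.8, 1002.4, 1001.9, 1002.2, 1002.0]
print(lod_3sigma(baseline, slope=10.5))  # in µmol/L if slope is counts per µmol/L
```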
Abstract:
High-performance liquid chromatography (HPLC) conditions are described for the separation of 2,4-dinitrophenylhydrazone (2,4-DNPH) derivatives of carbonyl compounds on a 10-cm-long C-18 reversed-phase monolithic column. Using a linear gradient from 40 to 77% acetonitrile (acetonitrile-water system), the separation was achieved in about 10 min, a time significantly shorter than that obtained with a packed-particle column. The method was applied to the determination of formaldehyde and acetaldehyde in Brazilian sugar cane spirits. The linear dynamic range was between 30 and 600 µg L⁻¹, and the detection limits were 8 and 4 µg L⁻¹ for formaldehyde and acetaldehyde, respectively.
Abstract:
This paper reports on the development and optimization of a modified Quick, Easy, Cheap, Effective, Rugged and Safe (QuEChERS)-based extraction technique coupled with a dispersive solid-phase extraction (dSPE) clean-up as a new, reliable and powerful strategy to enhance the extraction efficiency of free low-molecular-weight polyphenols in selected species of dietary vegetables. The process involves two simple steps. First, the homogenized samples are extracted and partitioned using an organic solvent and salt solution. Then, the supernatant is further extracted and cleaned using a dSPE technique. The final clear vegetable extracts were concentrated under vacuum to near dryness and taken up into the initial mobile phase (0.1% formic acid and 20% methanol). The separation and quantification of free low-molecular-weight polyphenols from the vegetable extracts was achieved by ultra-high-pressure liquid chromatography (UHPLC) equipped with a photodiode array (PDA) detection system and a Trifunctional High Strength Silica capillary analytical column (HSS T3), specially designed for polar compounds. The performance of the method was assessed by studying the selectivity, linear dynamic range, limit of detection (LOD) and limit of quantification (LOQ), precision, trueness, and matrix effects. The validation parameters of the method showed satisfactory figures of merit. Good linearity (R² > 0.954; (+)-catechin in carrot samples) was achieved over the studied concentration range. Reproducibility was better than 3%. Consistent recoveries of polyphenols ranging from 78.4 to 99.9% were observed when all target vegetable samples were spiked at two concentration levels, with relative standard deviations (RSDs, n = 5) lower than 2.9%. The LODs and LOQs ranged from 0.005 μg mL−1 (trans-resveratrol, carrot) to 0.62 μg mL−1 (syringic acid, garlic) and from 0.016 μg mL−1 (trans-resveratrol, carrot) to 0.87 μg mL−1 ((+)-catechin, carrot), depending on the compound. The method was applied to study the occurrence of free low-molecular-weight polyphenols in eight selected dietary vegetables (broccoli, tomato, carrot, garlic, onion, red pepper, green pepper and beetroot), providing a valuable and promising tool for food quality evaluation.
Abstract:
Several activities were conducted during my PhD work. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (photomultiplier tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behaviour of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment lent by the Roma University group and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behaviour of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end but inheriting most of the digital logic present in the DAQ board discussed in this thesis. Regarding the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations intended for charge collection and for shielding the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip in order to integrate it into the general DAQ system. I then worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinning were tested during the test beam. Those thinned to 100 and 300 µm showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, giving good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
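For reference, the pitch/sqrt(12) figure mentioned above is the intrinsic resolution of binary (hit/no-hit) position readout, since a uniform hit distribution over a pixel of width d has standard deviation d/sqrt(12); a one-line check with an illustrative pitch (not necessarily the APSEL-4D value):

```python
# Intrinsic resolution of binary pixel readout: sigma = pitch / sqrt(12).
# A uniform hit distribution across one pixel of width d has variance d^2/12.
import math

pitch_um = 50.0                      # illustrative pixel pitch
sigma_um = pitch_um / math.sqrt(12)
print(f"{sigma_um:.1f} um")          # ~14.4 um
```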
Abstract:
A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major unsolved challenges in the multimedia environment. Video quality has a very high impact on the end user's (consumer's) perception of services built on the delivery of multimedia content and is therefore a key factor in assessing the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system. The most important are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting intrinsic image features and comparing them. Despite the advances in this field in recent years, research on video quality metrics, whether operating with the full reference available (so-called full-reference models), with part of it (reduced-reference models), or with none at all (no-reference models), still has a long way to go and goals left to reach. Among these, the measurement of high-definition signals, especially the very-high-quality signals used in the early stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models currently exist. This doctoral thesis presents a full-reference quality measurement model that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features. These are: the Fidelity Ratio, computed through the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics set. PARMENIA is novel in its use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. Moreover, the formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical models based on the perceptibility of the error introduced by compression-related signal degradation. PARMENIA shows a very high correlation with the MOS scores obtained from the subjective user tests carried out for its validation. The working corpus was selected from internationally validated sequence sets, so that the reported results are of the highest possible quality and rigour.
The methodology followed consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining subjective ratings for them through subjective quality tests (based on International Telecommunication Union Recommendation BT.500), and validating by computing the correlation of PARMENIA with these subjective values, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimized, and their high correlation with perception confirmed, a second evaluation was carried out on sequences from the HDTV test dataset 1 of the Video Quality Experts Group (VQEG), with the results showing the metric's clear advantages. Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions and higher required quality (e.g. high definition and better image quality) calls for redefined quality measurement models. Given the growing interest in multimedia services delivery, perceptual quality measurement has become a very active area of research. First, in this work, a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality is introduced to sum up the state of the art. Then, this doctoral thesis describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information that provides a normalized metric, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), highly correlated with MOS scores. The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and Haralick's contrast and homogeneity texture features. The metric's performance is excellent, and it improves the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at the very high bit rates whose quality is currently transparent to existing metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that the PARMENIA approach is the only metric built upon full-reference ratios and using mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it produces results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials.
All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
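The Pearson validation step is straightforward to reproduce in outline; a minimal sketch with placeholder scores (the thesis computes this against BT.500 subjective data, which is not reproduced here):

```python
# Pearson correlation between objective metric scores and subjective MOS.
import math
import statistics

def pearson(x: list[float], y: list[float]) -> float:
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

metric = [0.91, 0.84, 0.77, 0.69, 0.58]   # hypothetical PARMENIA scores
mos = [4.6, 4.1, 3.6, 3.0, 2.2]           # hypothetical MOS values
print(pearson(metric, mos))               # close to 1 for a well-behaved metric
```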
Abstract:
The need to develop techniques for predicting the vibro-acoustic response of space structures has gained importance in recent years. Currently available numerical techniques can reliably predict the vibro-acoustic behaviour of systems with high or low modal densities. However, the two ranges do not always overlap, which makes it necessary to develop specific methods for the range in between, known as the mid modal density range. It is on this range, also known as the mid-frequency range, that this doctoral thesis focuses, owing to the lack of specific methods for computing the vibro-acoustic response there. For the structures studied in this work, the low and high modal density ranges correspond, in general, to the low and high frequency ranges, respectively. The numerical methods that yield the vibro-acoustic response in those frequency ranges are well established: deterministic techniques, such as the Finite Element Method, are used in the low frequency range, while statistical techniques, such as Statistical Energy Analysis, are more common in the high frequency range. In the mid-frequency range neither of these numerical methods can be used with sufficient accuracy and, as a consequence, in the absence of more specific proposals, hybrid methods have been developed that combine low and high frequency methods so that each compensates for the deficiencies of the other in this middle range. This work proposes two different solutions to the mid-frequency problem. The first, called SHFL (Subsystem based High Frequency Limit procedure), proposes a multi-hybrid procedure in which each substructure of the full system is modelled with a different numerical technique, depending on the frequency range of study. To this end, the concept of the high frequency limit of a substructure is introduced, marking the limit above which the substructure has a modal density high enough to be modelled using Statistical Energy Analysis; if the analysis frequency is below the high frequency limit of the substructure, it is modelled using Finite Elements. With this method the mid-frequency range can be defined precisely, as the range between the lowest and the highest of the high frequency limits of the substructures making up the full system. The results obtained with this method show an improvement in the continuity of the vibro-acoustic response, with a smooth transition between the low and high frequency ranges. The second proposed method is called HS-CMS (Hybrid Substructuring method based on Component Mode Synthesis). It is based on classifying the modal basis of the substructures into sets of global modes (affecting the whole system or several of its parts) and local modes (affecting a single substructure), using a Component Mode Synthesis method. In this way the modes of the full system can be located spatially and its behaviour studied from the point of view of the substructures. The concept of the high frequency limit of a substructure is again used to perform the global/local classification of its modes.
From this classification the global equations of motion, governed by the global modes, are derived; the influence of the set of local modes is introduced through modifications of these equations (of their dynamic stiffness matrix and force vector). The local equations are solved using Statistical Energy Analysis, although as a hybrid model into which the additional power contributed by the presence of the global modes is introduced. The method has been tested for computing the response of structures subjected to both structural and acoustic loads. Both methods were initially tested on simple structures to establish their basis and application hypotheses. They were subsequently applied to space structures, such as satellites and antenna reflectors, showing good results, as concluded from the comparison between simulations and experimental data measured in both structural and acoustic tests. This work opens a wide field of research from which precise and efficient methodologies can be obtained to reproduce the vibro-acoustic behaviour of systems in the mid-frequency range. ABSTRACT Over the last years an increasing need for novel prediction techniques for the vibro-acoustic analysis of space structures has arisen. Current numerical techniques are able to predict with enough accuracy the vibro-acoustic behaviour of systems with low and high modal densities. However, space structures are, in general, very complex, and they present a range of frequencies in which a mixed behaviour exists. In such cases, the full system is composed of some substructures with low modal density, while others present high modal density. This frequency range is known as the mid-frequency range, and developing methods to accurately describe the vibro-acoustic response in this range is the scope of this dissertation. For the structures under study, the aforementioned low and high modal densities correspond to the low and high frequency ranges, respectively. For the low frequency range, deterministic techniques such as the Finite Element Method (FEM) are used while, for the high frequency range, statistical techniques such as Statistical Energy Analysis (SEA) are considered more appropriate. In the mid-frequency range, where a mixed vibro-acoustic behaviour is expected, neither of these numerical methods can be used with an adequate confidence level. As a consequence, an undetermined gap between low and high frequencies usually appears in the vibro-acoustic response function. This dissertation proposes two different solutions to the mid-frequency range problem. The first, named the Subsystem based High Frequency Limit (SHFL) procedure, proposes a multi-hybrid procedure in which each substructure of the full system is modelled with the appropriate technique, depending on the frequency of study. To this end, the concept of the high frequency limit of a substructure is introduced, marking the limit above which a substructure has enough modal density to be modelled by SEA. For a given analysis frequency, if it is lower than the high frequency limit of the substructure, the substructure is modelled through FEM; if the analysis frequency is higher than the high frequency limit, the substructure is modelled by SEA.
The procedure leads to a number of hybrid models required to cover the mid-frequency range, which is defined as the frequency range between the lowest substructure high frequency limit and the highest one. Using this procedure, the mid-frequency range can be defined precisely and, as a consequence, an improvement in the continuity of the vibro-acoustic response function is achieved, closing the undetermined gap between the low and high frequency ranges. The second proposed mid-frequency solution is the Hybrid Substructuring method based on Component Mode Synthesis (HS-CMS). The method adopts a partition scheme based on classifying the system modal basis into global and local sets of modes. This classification is performed using Component Mode Synthesis, in particular a Craig-Bampton transformation, in order to express the system modal basis in terms of the modal bases associated with each substructure. Each substructure modal basis is then classified into a global set, associated with long-wavelength motion, and a local set, associated with short-wavelength motion. The high frequency limit of each substructure is used as the frequency frontier between the two sets of modes. From this classification, the equations of motion associated with the global modes are derived, which include the interaction of local modes by means of corrections to the dynamic stiffness matrix and the force vector of the global problem. The local equations of motion are solved through SEA, where interactions with global modes are again included through an additional input power into the SEA model. The method has been tested for the calculation of the response function of structures subjected to structural and acoustic loads. Both methods were first tested on simple structures to establish their basis and main characteristics. The methods were also verified on space structures, such as satellites and antenna reflectors, providing good results, as concluded from the comparison with experimental results obtained in both acoustic and structural load tests. This dissertation opens a wide field of research through which further studies could be performed to obtain efficient and accurate methodologies that appropriately reproduce the vibro-acoustic behaviour of complex systems in the mid-frequency range.
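The SHFL selection rule reduces to a simple per-substructure dispatch on the analysis frequency; a schematic sketch (names and numbers are hypothetical, and the FEM/SEA solvers themselves are out of scope):

```python
# Schematic of the SHFL rule: model each substructure with FEM below its
# high frequency limit and with SEA above it. Solvers are stubs here.
from dataclasses import dataclass

@dataclass
class Substructure:
    name: str
    high_freq_limit_hz: float  # above this, modal density suffices for SEA

def choose_model(sub: Substructure, f_hz: float) -> str:
    """SHFL rule: FEM below the substructure's high frequency limit, SEA above."""
    return "SEA" if f_hz >= sub.high_freq_limit_hz else "FEM"

# Hypothetical two-substructure system.
subs = [Substructure("panel", 400.0), Substructure("strut", 1200.0)]
for f in (200.0, 800.0, 2000.0):
    print(f, [(s.name, choose_model(s, f)) for s in subs])

# The mid-frequency range is bounded by the lowest and highest limits:
f_lo = min(s.high_freq_limit_hz for s in subs)   # 400 Hz
f_hi = max(s.high_freq_limit_hz for s in subs)   # 1200 Hz
print("mid-frequency range:", (f_lo, f_hi))
```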
Abstract:
Optical fibre strain sensors using Fibre Bragg Gratings (FBGs) are poised to play a major role in structural health monitoring in a variety of applications, from aerospace to civil engineering. At the heart of the technology is the optoelectronic instrumentation required to convert optical signals into measurands. Users are demanding compact, lightweight, rugged and low-cost solutions. This paper describes the development of a new device, based on a blazed FBG and a CCD array, that can potentially meet these demands. We have shown that this very low-cost technique may be used to interrogate a WDM array of sensor gratings with highly accurate and highly repeatable results, unaffected by the polarisation state of the radiation. We present results showing that sensors may be interrogated with an RMS error of 1.7 pm, drift below 0.12 pm and a dynamic range of up to 65 nm.
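For context on what a 1.7 pm interrogation error means in measurement terms, the standard FBG strain relation Δλ/λ_B ≈ (1 − p_e)·ε with p_e ≈ 0.22 for silica fibre can be applied; the constants below are textbook values, not figures from this paper:

```python
# Convert an FBG Bragg wavelength shift to strain: dL/L_B = (1 - p_e) * strain.
# p_e ~ 0.22 (textbook photo-elastic coefficient for silica), lambda_B ~ 1550 nm;
# both constants are illustrative assumptions, not taken from the paper.
P_E = 0.22
LAMBDA_B_NM = 1550.0

def shift_to_strain(dlambda_pm: float) -> float:
    """Return strain (dimensionless) for a Bragg wavelength shift in pm."""
    return (dlambda_pm * 1e-3) / (LAMBDA_B_NM * (1 - P_E))  # pm -> nm first

# The reported 1.7 pm RMS interrogation error then maps to ~1.4 microstrain:
print(shift_to_strain(1.7) * 1e6, "microstrain")
```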
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS, implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12 mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
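The conversion figure of merit quoted here is the Walden FoM, FoM = P/(2^ENOB·f_s) with ENOB = (SNDR − 1.76)/6.02; a quick check against the first design's reported numbers (using the peak SNDR, so the result comes out slightly better than the quoted Nyquist-frequency 50.8 fJ):

```python
# Walden FoM for an ADC: FoM = P / (2**ENOB * fs), ENOB = (SNDR - 1.76) / 6.02.
# Inputs are the first design's reported figures; peak SNDR is used here,
# so the result (~45 fJ/step) differs slightly from the quoted Nyquist FoM.
power_w = 3.0e-3    # 3.0 mW
fs = 22.5e6         # 22.5 MS/s
sndr_db = 71.1      # peak SNDR

enob = (sndr_db - 1.76) / 6.02
fom = power_w / (2 ** enob * fs)
print(f"ENOB = {enob:.2f} bits, FoM = {fom * 1e15:.1f} fJ/conversion-step")
```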
Abstract:
A sensitive, selective, and reproducible method combining in-tube solid-phase microextraction with a polypyrrole (PPY)-coated capillary and liquid chromatography for the analysis of fluoxetine and norfluoxetine enantiomers in plasma samples has been developed, validated, and applied to the analysis of plasma samples from elderly patients undergoing therapy with antidepressants. Important factors in the optimization of in-tube SPME efficiency are discussed, including the sample draw/eject volume, number of draw/eject cycles, draw/eject flow rate, sample pH, and influence of plasma proteins. Separation of the analytes was achieved with a Chiralcel OD-R column and a mobile phase consisting of 7.5 mM potassium hexafluorophosphate and 0.25 M sodium phosphate solution, pH 3.0, and acetonitrile (75:25, v/v) in isocratic mode, at a flow rate of 1.0 mL/min. Detection was carried out by fluorescence at Ex/Em 230/290 nm. The multifunctional porous surface structure of the PPY coating provided high precision and accuracy for the enantiomers. Compared with other commercial capillaries, the PPY-coated capillary showed better extraction efficiency for all the analytes. The quantification limits of the proposed method were 10 ng/mL for R- and S-fluoxetine, and 15 ng/mL for R- and S-norfluoxetine, with coefficients of variation lower than 13%. The response of the method for the enantiomers is linear over a dynamic range from the limit of quantification to 700 ng/mL, with correlation coefficients higher than 0.9940. The in-tube SPME/LC method can therefore be successfully used to analyze plasma samples from elderly patients undergoing therapy with fluoxetine. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This paper proposes an efficient scalable Residue Number System (RNS) architecture supporting moduli sets with an arbitrary number of channels, allowing larger dynamic ranges and a higher level of parallelism to be achieved. The proposed architecture supports both forward and reverse RNS conversion by reusing the arithmetic channel units. The arithmetic operations supported at the channel level include addition, subtraction, and multiplication with accumulation capability. For the reverse conversion two algorithms are considered, one based on the Chinese Remainder Theorem and the other on Mixed-Radix Conversion, leading to implementations optimized for delay and for required circuit area. With the proposed architecture a complete and compact RNS platform is achieved. Experimental results suggest gains of 17% in the delay of the arithmetic operations, with an area reduction of 23% with respect to the RNS state of the art. When compared with a binary system, the proposed architecture performs the same computation 20 times faster while using only 10% of the circuit area resources.
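For illustration, the Chinese Remainder Theorem reverse conversion mentioned above recombines the per-channel residues into a binary value; a minimal sketch for a small three-channel moduli set (the moduli are illustrative, not the ones used in the paper):

```python
# CRT reverse conversion for an RNS: recover x from its channel residues.
# Moduli set {3, 5, 7} (dynamic range 105) is illustrative only.
from math import prod

def crt(residues: list[int], moduli: list[int]) -> int:
    """Recombine RNS residues into the unique x in [0, prod(moduli))."""
    m = prod(moduli)
    x = 0
    for r_i, m_i in zip(residues, moduli):
        m_hat = m // m_i              # product of the other moduli
        inv = pow(m_hat, -1, m_i)     # modular inverse of m_hat mod m_i
        x += r_i * m_hat * inv
    return x % m

moduli = [3, 5, 7]
x = 52
residues = [x % m for m in moduli]    # forward conversion: [1, 2, 3]
print(crt(residues, moduli))          # 52
```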
Abstract:
Introduction: Molecular biology procedures to detect, genotype and quantify hepatitis C virus (HCV) RNA in clinical samples have been extensively described. Routine commercial methods for each specific purpose (detection, quantification and genotyping) are also available, all of which are typically based on polymerase chain reaction (PCR) targeting the HCV 5′ untranslated region (5′UTR). This study was performed to develop and validate a complete serial laboratory assay that combines real-time nested reverse transcription-polymerase chain reaction (RT-PCR) and restriction fragment length polymorphism (RFLP) techniques for the complete molecular analysis of HCV (detection, genotyping and viral load) in clinical samples. Methods: Published HCV sequences were compared to select specific primers, a probe and restriction enzyme sites. An original real-time nested RT-PCR-RFLP assay was then developed and validated to detect, genotype and quantify HCV in plasma samples. Results: The real-time nested RT-PCR data were linear and reproducible for HCV analysis in clinical samples. High correlations (> 0.97) were observed between samples with different viral loads and the corresponding read cycle (Ct, cycle threshold), and this part of the assay had a wide dynamic range of analysis. Additionally, HCV genotypes 1, 2 and 3 were successfully distinguished using the RFLP method. Conclusions: A complete serial molecular assay was developed and validated for HCV detection, quantification and genotyping.
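As context for the viral load/Ct relationship, real-time PCR quantification conventionally interpolates Ct on a log-linear standard curve; a minimal sketch with hypothetical calibration values (not the study's data):

```python
# Quantify viral load from a Ct value using a log-linear standard curve:
# Ct = slope * log10(copies) + intercept. Calibration values are hypothetical.
SLOPE = -3.32        # ~ -3.32 corresponds to ~100% PCR efficiency
INTERCEPT = 40.0     # Ct of a single copy under this hypothetical curve

def viral_load(ct: float) -> float:
    """Return copies/mL implied by a Ct value on the standard curve."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

print(f"{viral_load(25.0):.2e} copies/mL")   # higher load -> lower Ct
```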