Abstract:
Introduction. X-ray images determine clinical decisions in orthopaedics and are analysed by the surgeon prior to any surgical procedure. Preoperative planning based on hip radiographs makes it possible to predict the size of the prosthetic components to be used in a hip replacement. With the advent of digital radiography, there is a false perception that these images have the magnification factor already corrected. Correcting this factor requires an image calibration protocol, not yet implemented at the Fundación Santa Fe de Bogotá (FSFB). As a consequence, hip radiographs are currently magnified. Materials and methods. Seventy-three patients who underwent total hip replacement at the FSFB were selected. For each patient, the dimension of the prosthetic head was measured on the hip radiograph obtained with the digital radiology system (PACS-IMPAX), and its size was compared with that of the implanted femoral head. Results. Inter-observer agreement in measuring the radiological dimension of the prosthetic components was excellent, and the mean magnification coefficient was 1.2 (20%). This coefficient will be introduced into the PACS-IMPAX to adjust the final size of the radiograph. Conclusion. Adjusting the PACS-IMPAX yields radiographs that reflect the size of anatomical segments and prosthetic components more accurately.
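As a rough illustration of what the PACS adjustment amounts to, a fixed mean magnification coefficient can be divided out of any radiographic measurement. This is a minimal sketch, not the FSFB protocol; the 1.2 value is the mean coefficient reported in the abstract, and a real calibration would apply a per-study factor.

```python
# Correcting a radiographic measurement for a known magnification factor.
MAGNIFICATION = 1.2  # mean coefficient reported for the PACS-IMPAX radiographs


def true_size(measured_mm: float, magnification: float = MAGNIFICATION) -> float:
    """Estimated real size of a structure from its size on the radiograph."""
    return measured_mm / magnification


# Example: a prosthetic head measuring 43.2 mm on the radiograph
# corresponds to a 36 mm implant at 20% magnification.
head_mm = true_size(43.2)
```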
Abstract:
This paper examines the characteristics of the probe-tube microphone and its use in measuring sound pressure in the ear canal. Specifically, the paper studies the free-field and coupler calibrations of several probe-tube microphones with tubes of different sizes and determines which characteristics of a probe tube are necessary for accurate measurements both in a free field and in a closed coupler.
Abstract:
We present a catalogue of galaxy photometric redshifts and k-corrections for the Sloan Digital Sky Survey Data Release 7 (SDSS-DR7), available on the World Wide Web. The photometric redshifts were estimated with an artificial neural network using five ugriz bands, concentration indices and Petrosian radii in the g and r bands. We have explored our redshift estimates with different training sets, concluding that the best choice for improving redshift accuracy comprises the main galaxy sample (MGS), the luminous red galaxies and the galaxies of active galactic nuclei covering the redshift range 0 < z < 0.3. For the MGS, the photometric redshift estimates agree with the spectroscopic values within rms = 0.0227. The distribution of photometric redshifts derived in the range 0 < z(phot) < 0.6 agrees well with the model predictions. k-corrections were derived by calibration of the k-correct_v4.2 code results for the MGS with the reference-frame (z = 0.1) (g - r) colours. We adopt a linear dependence of k-corrections on redshift and (g - r) colour that provides suitable distributions of luminosity and colour for galaxies up to redshift z(phot) = 0.6, comparable to the results in the literature. Thus, our k-correction estimation procedure is a powerful, low-computational-time algorithm capable of reproducing suitable results that can be used for testing galaxy properties at intermediate redshifts using the large SDSS database.
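The adopted linear k-correction model can be written as a one-line function. The coefficients a and b are band-dependent parameters that would come from the calibration against k-correct_v4.2 output; the values in the example below are placeholders for illustration, not the paper's fitted values.

```python
def k_correction(z_phot: float, g_r: float, a: float, b: float) -> float:
    """Linear model k(z, g-r) = (a + b * (g - r)) * z.

    a, b are band-dependent coefficients fitted against k-correct_v4.2
    results; the values used here are illustrative placeholders.
    """
    return (a + b * g_r) * z_phot


# Placeholder coefficients, for illustration only:
k_example = k_correction(0.3, 0.8, 1.0, 2.5)
```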
Abstract:
The spatial-average temporal-average (SATA) acoustic intensity of therapeutic ultrasound (US) equipment should be monitored periodically. In order to evaluate the condition of US equipment in use in the city of Piracicaba, Sao Paulo, Brazil, 31 machines, representing all Brazilian manufacturers, were analysed under continuous and pulsed conditions at a frequency of 1 MHz. Data on temporal and spatial acoustic intensity were collected and the use of the equipment was surveyed. Intensities of 0.1, 0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5 and 3.0 W cm⁻², as indicated on the equipment panel, were analysed using a previously calibrated digital radiation pressure scale, model UPM-DT-1 (Ohmic Instruments Co). The acoustic intensity (I) results were expressed as superior and inferior quartile ranges for transducers with metal surfaces of 9 cm² and an effective radiation area (ERA) of 4 cm². The results under continuous conditions were: I(0.1) = -20.0% and -96%; I(0.2) = -3.1% and -83.7%; I(0.5) = -35.0% and -86.5%; I(0.8) = -37.5% and -71.0%; I(2.5) = -49.0% and -69.5%; I(3.0) = -58.1% and -77.6%. For pulsed conditions, the deviations were: I(0.1) = -40.0% and -86.2%; I(1.0) = -50.0% and -86.5%; I(1.5) = -62.5% and -82.5%; I(2.0) = -62.5% and -81.6%; I(2.5) = -64.7% and -88.8%; I(3.0) = -87.1% and -94.8%. In reply to the questionnaire drawn up to check the conditions of use of the equipment, all users reported using hydrosoluble gel as a coupling medium, and none had carried out previous calibrations. Most users used intensities in the range of 0.4 to 1.0 W cm⁻² and used the machines for 300 to 400 minutes per week. The majority of machines had been bought during the previous seven years, and weekly use ranged from less than 100 minutes to 700 minutes (11 hours 40 minutes). The findings confirm previous observations of discrepancy between the intensity indicated on the equipment panel and that emitted by the transducer, and highlight the necessity for periodic evaluation of US equipment.
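The panel-versus-transducer comparison underlying the quoted percentages reduces to a relative deviation. A minimal sketch, with illustrative numbers rather than the surveyed machines' data:

```python
def intensity_deviation(measured: float, nominal: float) -> float:
    """Percentage deviation of the measured acoustic output from the
    intensity indicated on the equipment panel."""
    return (measured - nominal) / nominal * 100.0


# Illustrative example: a machine set to 0.5 W/cm2 that actually emits
# 0.325 W/cm2 deviates by -35%.
dev = intensity_deviation(0.325, 0.5)
```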
Abstract:
A virtual studio system can use technologies such as augmented reality and digital matting to reduce production costs while providing the same resources as a conventional studio. This makes it possible for current studios, at low cost and using conventional devices, to create productions with higher image quality and richer effects. Some difficulties are recurrent in virtual studio applications that use augmented reality and digital matting. Virtual object registration in augmented reality suffers from problems caused by optical distortion in the camera, errors in the marker tracking system, lack of calibration of the equipment or of the environment (lighting, for example), or delays in displaying the virtual objects. Digital matting's main problem, on the other hand, is real-time execution for scene preview, which must have optimized processing speed while maintaining the best possible image quality. In this context, this work aims to extend a virtual studio system called ARStudio by enhancing digital matting and virtual object registration, introducing segmentation based on a depth map, and adding better control over previously implemented functionality.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded to actually achieved air-handling parameters. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, with the parameters of this second-order system extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
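The dynamic constraint idea can be sketched as a damped second-order system tracking the commanded value: a step in the commanded parameter is only achieved gradually. The natural frequency and damping ratio below are illustrative placeholders, not the parameters extracted from the engine data in the paper.

```python
def achieved_trajectory(commanded, dt=0.01, wn=8.0, zeta=0.9):
    """Second-order response translating commanded to achieved values:
    x'' = wn^2 * (u - x) - 2 * zeta * wn * x'.

    wn (natural frequency, rad/s) and zeta (damping ratio) are
    illustrative; the paper identifies them from measured transient data.
    """
    x, v = commanded[0], 0.0
    out = []
    for u in commanded:
        accel = wn * wn * (u - x) - 2.0 * zeta * wn * v
        v += accel * dt
        x += v * dt
        out.append(x)
    return out


# A commanded step is achieved only after a transient lag:
cmd = [0.0] * 50 + [1.0] * 450
ach = achieved_trajectory(cmd)
```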
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of parameters (which can be calibrated) in a methodical and optimum manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition have been made. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
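The transport-delay and sensor-lag processing can be sketched as a sample shift followed by inversion of a first-order lag. The delay and time constant are assumed known (e.g. from step tests); this is a simplified stand-in for the methods developed in the paper, not their exact procedure.

```python
def compensate(signal, delay_samples, tau, dt):
    """Shift a measured signal back by a transport delay, then invert a
    first-order sensor lag using x(t) ~= y(t) + tau * dy/dt.
    delay_samples and tau are assumed known from prior characterization.
    """
    # Remove the transport delay by shifting left (pad the tail).
    shifted = signal[delay_samples:] + [signal[-1]] * delay_samples
    # Invert the first-order lag with a forward-difference derivative.
    out = []
    for i in range(len(shifted) - 1):
        dy = (shifted[i + 1] - shifted[i]) / dt
        out.append(shifted[i] + tau * dy)
    out.append(shifted[-1])
    return out


# Demonstration: a step, delayed by 5 samples and smeared by a
# first-order lag, is recovered after compensation.
dt, tau, delay = 0.1, 0.5, 5
x = [0.0] * 10 + [1.0] * 40                      # true signal (step at i=10)
y = [0.0]                                        # lagged, delayed measurement
for i in range(len(x) - 1):
    u = x[i - delay] if i >= delay else 0.0
    y.append(y[-1] + dt / tau * (u - y[-1]))
comp = compensate(y, delay, tau, dt)
```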
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
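Statistical leverage is the extrapolation signal discussed above; for a single-regressor linear model it has a closed form, sketched here as a simplified one-dimensional stand-in for the multivariate hat-matrix computation.

```python
def leverages(x):
    """Leverage of each point in a simple (one-regressor) linear model:
    h_i = 1/n + (x_i - mean)^2 / sum((x_j - mean)^2).

    Points far from the training cloud get high leverage, which is the
    extrapolation signal exploited in the constraint described above.
    """
    n = len(x)
    mean = sum(x) / n
    sxx = sum((v - mean) ** 2 for v in x)
    return [1.0 / n + (v - mean) ** 2 / sxx for v in x]


# Evenly spaced training points: leverage is highest at the extremes,
# and the leverages sum to the number of model parameters (here 2).
h = leverages([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```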
Abstract:
The High-Altitude Water Cherenkov (HAWC) Experiment is a gamma-ray observatory that utilizes water silos as Cherenkov detectors to measure the electromagnetic air showers created by gamma rays. The experiment consists of an array of closely packed water Cherenkov detectors (WCDs), each with four photomultiplier tubes (PMTs). The direction of the gamma ray will be reconstructed using the times at which the electromagnetic shower front triggers the PMTs in each WCD. To achieve an angular resolution as low as 0.1 degrees, a laser calibration system will be used to measure relative PMT response times. The system will direct 300 ps laser pulses into two fiber-optic networks. Each network will use optical fan-outs and switches to direct light to specific WCDs. The first network is used to measure the light transit time out to each pair of detectors, and the second network sends light to each detector, calibrating the response times of the four PMTs within each detector. As the relative PMT response times depend on the number of photons in the light pulse, neutral density filters will be used to control the light intensity across five orders of magnitude. This system will run both continuously in a low-rate mode and in a high-rate mode with many intensity levels. In this thesis, the design of the calibration system and systematic studies verifying its performance are presented.
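The two-network scheme amounts to subtracting the measured fiber transit time from each PMT's pulse arrival time and reporting offsets against a reference PMT. A minimal sketch with illustrative numbers, not HAWC data:

```python
def relative_offsets(arrival_ns, transit_ns):
    """Relative PMT response times after removing fiber transit time.

    arrival_ns: laser-pulse detection time per PMT;
    transit_ns:  light transit time to that detector, as measured with
                 the first fiber network.
    Offsets are reported relative to the first PMT in the list.
    """
    corrected = [a - t for a, t in zip(arrival_ns, transit_ns)]
    ref = corrected[0]
    return [c - ref for c in corrected]


# Illustrative values (ns): three PMTs fed through equal-length fibers.
offs = relative_offsets([105.0, 107.5, 104.2], [100.0, 100.0, 100.0])
```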
Abstract:
The combustion strategy in a diesel engine has an impact on the emissions, the fuel consumption and the exhaust temperatures. The PM mass retained in the CPF is a function of the NO2 and PM concentrations in addition to the exhaust temperatures and flow rates. Thus the engine combustion strategy affects the exhaust characteristics, which in turn affect CPF operation and the PM mass retained and oxidized. In this report, a process has been developed to simulate the relationship between engine calibration, performance, and HC and PM oxidation in the DOC and CPF, respectively. Fuel Rail Pressure (FRP) and Start of Injection (SOI) sweeps were carried out at five steady-state engine operating conditions. These data, along with data from a previously carried out surrogate HD-FTP cycle [1], were used to create a transfer function model which estimates the engine-out emissions, flow rates and temperatures for varied FRP and SOI over a transient cycle. Four different calibrations (test cases) were considered in this study and simulated through the transfer function model and the DOC model [1, 2]. The DOC outputs were then input into a model which simulates the NO2-assisted and thermal PM oxidation inside a CPF. Finally, the results were analyzed as to how engine calibration impacts the engine fuel consumption, HC oxidation in the DOC, and PM oxidation in the CPF. Active regeneration for the various test cases was also simulated and a comparative analysis of the fuel penalties involved was carried out.
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less, comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to discover the infinity focal length at two lens settings (24 mm and 35 mm) with a precision of 0.1 mm. A known focal length is required for the creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at known distances on a test range allowed estimates of focal lengths of 25.1 mm and 35.4 mm for the 24 mm and 35 mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24 mm setting and virtually no distortion found at the 35 mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints are created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location.
Post-mission processing (rectification) of the imagery, comprising removal of lens distortion effects, correction for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates, was performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of the measurements made. These accuracies were well within the performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights, at which the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that the imagery be of comparable resolution to current high-resolution satellite imagery. Given the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
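The test-range focal-length estimate and the ground-resolution figures both follow from the pinhole similar-triangles relation. A minimal sketch; the pixel pitch used in the example is an assumed value for illustration, not a specification of the camera system described.

```python
def focal_length_mm(image_size_mm, object_size_m, distance_m):
    """Pinhole test-range estimate: image_size / f = object_size / distance,
    so f = image_size * distance / object_size."""
    return image_size_mm * distance_m / object_size_m


def ground_resolution_m(pixel_pitch_mm, focal_mm, flying_height_m):
    """Ground sample distance: one pixel's footprint = pitch * H / f."""
    return pixel_pitch_mm * flying_height_m / focal_mm


# Illustrative: a 10 m target imaged at 2.51 mm from 100 m gives f = 25.1 mm;
# with an assumed 0.008 mm pixel pitch, flying at 500 m yields ~0.16 m/pixel.
f = focal_length_mm(2.51, 10.0, 100.0)
gsd = ground_resolution_m(0.008, f, 500.0)
```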
Abstract:
With full-waveform (FWF) lidar systems becoming increasingly available from different commercial manufacturers, the possibility of extracting physical parameters of the scanned surfaces in an area-wide sense, as an addendum to their geometric representation, has arisen as well. FWF systems digitize the temporal profiles of the transmitted laser pulse and of its backscattered echoes, allowing a reliable determination of the distance from target to instrument and, by means of radiometric calibration, of physical target quantities, one such quantity being the diffuse Lambertian reflectance. The delineation of glaciers is a time-consuming task, commonly performed manually by experts and involving field trips as well as image interpretation of orthophotos, digital terrain models and shaded reliefs. In this study, the diffuse Lambertian reflectance was compared to the glacier outlines mapped by experts. We start the presentation with the workflow for the analysis of FWF data, their direct georeferencing and the calculation of the diffuse Lambertian reflectance by radiometric calibration; this workflow is illustrated for a large FWF lidar campaign in the Ötztal Alps (Tyrol, Austria), operated with an Optech ALTM 3100 system. The geometric performance of the presented procedure was evaluated by means of relative and absolute accuracy assessments using strip differences and orthophotos, respectively. The diffuse Lambertian reflectance was evaluated at two rock glaciers within the mentioned lidar campaign. This feature showed good performance for the delineation of the rock glacier boundaries, especially at their lower parts.
Abstract:
This project studies and analyzes digital signal processing techniques applied to accelerometers, using a DSP-based prototyping board for the tests. The project focuses on digital filtering of signals from a specific accelerometer, the 1201F, whose main field of application is automotive. After studying processing theory and the characteristics of real filters, an application was designed with the target environment in mind. The design phases are described: computer design (Matlab), filter implementation on the DSP (C), tests on the DSP without the accelerometer, accelerometer calibration, and final tests with the accelerometer. The hardware and software tools used were the Analog Devices 21-161N evaluation kit (equipped with the VisualDSP++ 4.5 development environment), the 1201F accelerometer, the Spektra CS-18-LF accelerometer calibration system, and the MATLAB 7.5 and CoolEdit Pro 2.0 software packages. Only second-order IIR filters were implemented, of all types (Butterworth, Chebyshev I and II, and elliptic): narrow-band band-pass and band-stop filters of several kinds, within the full scale permitted by the accelerometer. Once all the tests, both simulated and physical, were completed, the best-performing filters were selected and analyzed to draw conclusions. Since a suitable environment was available, the filters were combined with one another in several ways to obtain higher-order filters (parallel structure); in this way, starting from band-pass filters, other configurations can be obtained that provide greater flexibility. The goal of this project is not only to obtain good filtering results, but also to exploit the facilities of the environment and the available tools to produce the most efficient design possible.
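A second-order IIR band-pass stage of the kind described can be sketched directly as a biquad. The coefficient formulas below follow the widely used RBJ audio-EQ cookbook form, which is an assumption here, not necessarily the designs produced with Matlab in the project.

```python
import math


def bandpass_biquad(fs, f0, q):
    """Second-order IIR band-pass coefficients (RBJ cookbook form,
    constant 0 dB peak gain at the center frequency f0)."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a


def filt(b, a, x):
    """Direct-form I difference equation for a biquad (a[0] == 1)."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y


# Narrow band-pass at 100 Hz (fs = 1 kHz): a 100 Hz tone passes,
# a constant (DC) input is rejected.
b, a = bandpass_biquad(1000.0, 100.0, 5.0)
y_tone = filt(b, a, [math.sin(2 * math.pi * 100.0 * n / 1000.0) for n in range(1000)])
y_dc = filt(b, a, [1.0] * 1000)
```

Higher-order responses, as in the project's parallel structure, can then be built by summing the outputs of several such biquads tuned to different bands.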
Abstract:
Nighttime satellite imagery from the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS) has a unique capability to observe nocturnal light emissions from sources including cities, wild fires, and gas flares. Data from the DMSP OLS are used in a wide range of studies, including mapping urban areas, estimating informal economies, and estimating urban populations. Given the extensive and increasing list of applications, a repeatable method for assessing geolocation accuracy, performing inter-calibration, and defining the minimum detectable brightness would be beneficial. An array of portable lights was designed and taken to multiple field sites known to have no other light sources. The lights were operated during nighttime overpasses by the DMSP OLS and observed in the imagery. A first estimate of the minimum detectable brightness is presented based on the field experiments conducted. An assessment of the geolocation accuracy was performed by measuring the distance between the GPS-measured location of the lights and the observed location in the imagery. A systematic shift was observed, with a mean distance of 2.9 km. A method for in situ radiance calibration of the DMSP OLS using a ground-based light source as an active target is presented. The wattage of light used by the active target strongly correlates with the signal measured by the DMSP OLS. This approach can be used to enhance our ability to make inter-temporal and inter-satellite comparisons of DMSP OLS imagery. Exploring the possibility of establishing a permanent active target for the calibration of nocturnal imaging systems is recommended. The methods used to assess the minimum detectable brightness, assess the geolocation accuracy, and build inter-calibration models lay the groundwork for assessing the energy expended on light emitted into the sky at night. An estimate of the total energy consumed to light the night sky globally is presented.
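The in situ inter-calibration model is, at heart, a regression of the sensor signal on the active target's wattage. A minimal ordinary-least-squares sketch; the wattage and digital-number values below are illustrative, not measurements from the field campaign.

```python
def fit_line(watts, dn):
    """Ordinary least squares fit dn = m * watts + c, sketching the
    active-target calibration model (slope m relates panel wattage to
    the sensor's digital-number response)."""
    n = len(watts)
    mx = sum(watts) / n
    my = sum(dn) / n
    sxx = sum((x - mx) ** 2 for x in watts)
    sxy = sum((x - mx) * (y - my) for x, y in zip(watts, dn))
    m = sxy / sxx
    return m, my - m * mx


# Illustrative data: sensor response rising linearly with target wattage.
slope, intercept = fit_line([1000.0, 2000.0, 3000.0, 4000.0],
                            [12.0, 22.0, 32.0, 42.0])
```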