997 results for Algorithm Calibration


Relevance: 30.00%

Abstract:

This master's thesis focuses on computer vision applied to technological art projects. The topic addressed is the calibration of camera and projector systems for tracking and 3D reconstruction applications in the visual and performing arts. The thesis is built around two collaborations with the Quebec artists Daniel Danis and Nicolas Reeves. Projective geometry and classical calibration methods, such as planar calibration and calibration based on epipolar geometry, are presented to introduce the techniques used in these two projects. The collaboration with Nicolas Reeves consists of calibrating a camera-projector system mounted on a robotic head to project videos in real time onto mobile cubic screens. In addition to applying classical calibration methods, we propose a new technique for calibrating the pose of a camera mounted on a robotic head. This technique uses elliptical planes, generated by observing a single point in the world, to determine the pose of the camera relative to the center of rotation of the robotic head. The project with stage director Daniel Danis addresses calibration techniques for multi-camera systems. For his theatre project, we developed a calibration algorithm for a network of wiimote cameras. This technique, based on epipolar geometry, enables 3D reconstruction of a trajectory in a large volume at minimal cost. The results of the calibration techniques developed are presented, along with their use in real performance contexts in front of live audiences.
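
As background for the planar methods the thesis surveys, a minimal calibration sketch using OpenCV is shown below; the 9x6 checkerboard pattern and image filenames are hypothetical assumptions, and the thesis' robotic-head and wiimote techniques are not reproduced here.

```python
# Minimal planar-calibration sketch (illustrative only).
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of an assumed 9x6 checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("checkerboard_*.png"):  # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics K, distortion, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```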

Relevance: 30.00%

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.

Relevance: 30.00%

Abstract:

A multi-spectral rainfall estimation algorithm has been developed for the Sahel region of West Africa with the purpose of producing accumulated rainfall estimates for drought monitoring and food security. Radar data were used to calibrate multi-channel SEVIRI data from MSG, and a probability of rainfall at several different rain rates was established for each combination of SEVIRI radiances. Radar calibrations from both Europe (the SatPrecip algorithm) and Niger (the TAMORA algorithm) were used. Ten-day estimates were accumulated from SatPrecip and TAMORA and compared with kriged gauge data and TAMSAT satellite rainfall estimates over West Africa. SatPrecip was found to produce large overestimates for the region, probably because of its non-local calibration. TAMORA was negatively biased for areas of West Africa with relatively high rainfall, but its skill was comparable to TAMSAT for the low-rainfall region climatologically similar to its calibration area around Niamey. These results confirm the importance of local calibration for satellite-derived rainfall estimates. As TAMORA shows no improvement in skill over TAMSAT for dekadal estimates, the extra cloud-microphysical information provided by multi-spectral data may not be useful for determining rainfall accumulations at a ten-day timescale. Work is ongoing to determine whether it shows improved accuracy at shorter timescales.
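
The calibration step described above, establishing rainfall probabilities for each combination of SEVIRI radiances, can be pictured as a histogram-based lookup table. The sketch below uses assumed channel bin edges and rain-rate classes; the paper's actual calibration is more elaborate.

```python
# Sketch: calibrate rain-rate class probabilities against radar,
# keyed on discretized multi-channel radiances. Bins and rain-rate
# thresholds (mm/h) are illustrative assumptions.
import numpy as np

def calibrate(radiances, radar_rain, bins, rates=(0.5, 2.0, 8.0)):
    """radiances: (n, channels); radar_rain: (n,) rain rate in mm/h;
    bins: one array of bin edges per channel."""
    keys = tuple(np.digitize(radiances[:, c], bins[c])
                 for c in range(radiances.shape[1]))
    table = {}
    for i in range(len(radar_rain)):
        k = tuple(kc[i] for kc in keys)
        counts = table.setdefault(k, np.zeros(len(rates) + 1))
        counts[np.searchsorted(rates, radar_rain[i])] += 1
    # Normalize counts into probabilities of each rain-rate class.
    return {k: c / c.sum() for k, c in table.items()}
```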

Relevance: 30.00%

Abstract:

This paper analyzes a pervasive computing system for tracking people in a mining environment based on RFID (radio frequency identification) technology. We first explain the fundamentals of RFID and the LANDMARC (location identification based on dynamic active RFID calibration) algorithm; we then present the proposed algorithm, which combines LANDMARC with a trilateration technique to obtain the coordinates of people inside the mine; next, we generalize a pervasive computing system that can be implemented in mining; and finally we present the results and conclusions.
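
For reference, the core of LANDMARC reduces to a weighted k-nearest-neighbour estimate in signal-strength space. A minimal sketch follows; the paper's combination with trilateration is not reproduced.

```python
# LANDMARC sketch: estimate a tracking tag's position from the k
# reference tags whose RSSI vectors (one reading per RFID reader)
# are closest in signal space. Illustrative implementation.
import numpy as np

def landmarc(tag_rssi, ref_rssi, ref_pos, k=4):
    """tag_rssi: (readers,); ref_rssi: (tags, readers); ref_pos: (tags, 2)."""
    E = np.linalg.norm(ref_rssi - tag_rssi, axis=1)   # signal-space distances
    nearest = np.argsort(E)[:k]                       # k nearest reference tags
    w = 1.0 / np.maximum(E[nearest], 1e-9) ** 2       # inverse-square weights
    w /= w.sum()
    return w @ ref_pos[nearest]                       # weighted centroid
```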

Relevance: 30.00%

Abstract:

The main objective of this paper is to discuss maximum likelihood inference for the comparative structural calibration model (Barnett, Biometrics 25:129-142, 1969), which is frequently used to assess the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of n experimental units. We consider asymptotic tests to address these questions. The methodology is applied to a real data set, and a small simulation study is presented.
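
In the notation commonly used for this model, a sketch of the setup (not necessarily the paper's exact parameterization) is:

```latex
% Comparative structural calibration: instrument i measures the latent
% true value \xi_j of unit j with its own bias, scale and precision.
x_{ij} \;=\; \alpha_i + \beta_i\,\xi_j + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N(0,\sigma_i^{2}),
\qquad i = 1,\dots,p,\quad j = 1,\dots,n
```

Relative calibration then concerns hypotheses on the biases and scales (the alpha_i and beta_i), while relative accuracy compares the error variances sigma_i^2.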

Relevance: 30.00%

Abstract:

In this article, we present an EM algorithm for maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in both interpolation and extrapolation settings. As an application to a real data set, we fit the model to a dimensional measurement problem in which testicular volume is calculated with a caliper and calibrated against ultrasonography as the standard method. With this methodology, there is no need to transform the variables to obtain symmetric errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion, and the Hannan-Quinn criterion.
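
For reference, the three model-selection criteria mentioned take their standard forms, with l(theta-hat) the maximized log-likelihood, k the number of parameters, and n the sample size:

```latex
\mathrm{AIC} = -2\,\ell(\hat\theta) + 2k, \qquad
\mathrm{BIC} = -2\,\ell(\hat\theta) + k\,\ln n, \qquad
\mathrm{HQ}  = -2\,\ell(\hat\theta) + 2k\,\ln(\ln n)
```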

Relevance: 30.00%

Abstract:

The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in object space is also evaluated. For this purpose, an algorithm for the semi-automatic determination of points of interest with subpixel accuracy, based on the Förstner operator, was implemented. Experiments were carried out with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was analyzed. Based on the results obtained, i.e., the quantified reduction in the standard deviations of the Interior Orientation Parameters (IOP) and in the relative error of the 3D reconstruction, measurements with subpixel accuracy were shown to be relevant for photogrammetric tasks in which metric quality is of great importance, such as camera calibration.
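
A minimal sketch of the Förstner interest measure is shown below; the window size and derivative filters are illustrative choices, and the paper's full semi-automatic subpixel pipeline is not reproduced.

```python
# Förstner interest-measure sketch: per-pixel corner strength w and
# roundness q from the windowed structure tensor.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def foerstner(img, sigma=1.5):
    """img: 2-D float array. Returns (w, q) maps."""
    gx = sobel(img, axis=1)          # horizontal gradient
    gy = sobel(img, axis=0)          # vertical gradient
    # Windowed structure-tensor entries (Gaussian window).
    Ixx = gaussian_filter(gx * gx, sigma)
    Iyy = gaussian_filter(gy * gy, sigma)
    Ixy = gaussian_filter(gx * gy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    tr = Ixx + Iyy + 1e-12
    w = det / tr                     # corner strength
    q = 4.0 * det / tr ** 2          # isotropy ("roundness") in [0, 1]
    return w, q
```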

Relevance: 30.00%

Abstract:

Today, electronic portal imaging devices (EPIDs) are used primarily to verify patient positioning. They also have potential as 2D dosimeters and could be used as such for transit dosimetry or dose reconstruction. It has been shown that such devices, especially liquid-filled ionization chambers, have a stable dose-response relationship that can be described in terms of the physical properties of the EPID and the pulsed linac radiation. For absolute dosimetry, however, an accurate method of calibration to an absolute dose is needed. In this work, we concentrate on calibration against dose in a homogeneous water phantom. Using a Monte Carlo model of the detector, we calculated dose spread kernels in units of absolute dose per incident energy fluence and compared them with calculated dose spread kernels in water at different depths. The energy of the incident pencil beams varied between 0.5 and 18 MeV. At the depth of dose maximum in water for a 6 MV beam (1.5 cm) and for an 18 MV beam (3.0 cm), we observed large absolute differences between water and detector dose above an incident energy of 4 MeV, but only small relative differences in the most frequent energy range of the beam energy spectra. It is shown that for a 6 MV beam the absolute reference dose measured at 1.5 cm water depth differs from the absolute detector dose by 3.8%. At a depth of 1.2 cm in water, however, the relative dose differences are almost constant between 2 and 6 MeV. The effects of changes in the energy spectrum of the beam on the dose responses in water and in the detector are also investigated. We show that differences larger than 2% can occur for different beam qualities of the incident photon beam behind water slabs of different thicknesses. It is therefore concluded that for high-precision dosimetry such effects have to be taken into account. Nevertheless, the precise information about the dose response of the detector provided in this Monte Carlo study forms the basis for extracting the basic radiometric quantities, photon fluence and photon energy fluence, directly from the detector's signal using a deconvolution algorithm. The results are therefore promising for future application in absolute transit dosimetry and absolute dose reconstruction.
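
Schematically, the deconvolution idea mentioned at the end can be written as follows, treating the detector dose as the convolution of the incident energy fluence with the Monte Carlo dose spread kernel (noise regularization omitted; this is a sketch, not the paper's exact formulation):

```latex
D_{\mathrm{det}} \;=\; \Psi * K
\quad\Longrightarrow\quad
\Psi \;=\; \mathcal{F}^{-1}\!\left\{
  \frac{\mathcal{F}\{D_{\mathrm{det}}\}}{\mathcal{F}\{K\}} \right\}
```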

Relevance: 30.00%

Abstract:

The GLAaS algorithm for pretreatment intensity-modulated radiation therapy absolute dose verification based on amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, its performance, and the possibility of using it in different clinics. GLAaS was tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with IAS2 electronic readout and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: (A) adding buildup on top of the cassette such that SDD-SSD = d(max) and comparing measurements with corresponding doses computed at d(max); (B) without adding any buildup on top of the cassette, considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm); and (C) without adding any buildup on top of the cassette but comparing measurements against doses computed at d(max). This procedure is similar to that usually applied when in vivo dosimetry is performed with solid-state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The gamma index was computed for a distance to agreement (DTA) of 3 mm, with dose-difference criteria ΔD of 2%, 3%, and 4%. As a measure of the quality of the results, the fraction of field area with γ larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal, and brain cases) and from long-term routine usage demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA remain below 3% for ΔD equal to or larger than 3%, while they are slightly larger for ΔD = 2%, with %FA in the range of 3% to 8%. Since its introduction into routine practice, 1453 fields have been verified with GLAaS at the authors' institute (6 MV beam). Using a DTA of 3 mm and a ΔD of 4%, the authors obtained %FA = 0.9 ± 1.1 for the entire data set while, stratifying according to the dose calculation algorithm, they observed %FA = 0.7 ± 0.9 for fields computed with the analytical anisotropic algorithm and %FA = 2.4 ± 1.3 for pencil-beam-based fields, with a statistically significant difference between the two groups. Stratifying according to field splitting, they observed %FA = 0.8 ± 1.0 for split fields and 1.0 ± 1.2 for nonsplit fields, without any significant difference.
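
For reference, the γ index of Low et al. combines the DTA criterion Δd (here 3 mm) and the dose-difference criterion ΔD in the standard form

```latex
\gamma(\mathbf{r}_r) \;=\; \min_{\mathbf{r}_e}
\sqrt{\frac{\lVert \mathbf{r}_e - \mathbf{r}_r \rVert^{2}}{\Delta d^{2}}
 \;+\; \frac{\bigl(D_e(\mathbf{r}_e) - D_r(\mathbf{r}_r)\bigr)^{2}}{\Delta D^{2}}}
```

where a point passes when γ ≤ 1; %FA is then the fraction of the field area with γ > 1.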

Relevance: 30.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods to match different PET/CT systems by eliminating this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when certain boundary conditions are adhered to, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed. To iteratively determine and represent the point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET; its apodization properties suppressed spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
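
A minimal one-dimensional sketch of the frequency-domain operation behind this idea follows; the PSFs are assumed to have been measured beforehand, and the tolerance threshold is an illustrative assumption rather than the paper's phantom-derived procedure.

```python
# Transconvolution sketch: map an image from a real scanner (psf_real)
# to a virtual scanner (psf_virt) whose Hann-windowed MTF suppresses
# the high frequencies where division by the real MTF would blow up.
import numpy as np

def transconvolve(img, psf_real, psf_virt):
    """All inputs are 1-D arrays of equal length; PSFs are centered."""
    H_real = np.fft.fft(np.fft.ifftshift(psf_real))
    H_virt = np.fft.fft(np.fft.ifftshift(psf_virt))
    # Transconvolution kernel in frequency space: H_virt / H_real,
    # zeroed where the real MTF carries no usable signal.
    T = np.zeros_like(H_real)
    mask = np.abs(H_real) > 1e-6
    T[mask] = H_virt[mask] / H_real[mask]
    return np.real(np.fft.ifft(np.fft.fft(img) * T))
```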

Relevance: 30.00%

Abstract:

A detailed microdosimetric characterization of the M. D. Anderson 42 MeV (p,Be) fast neutron beam was performed using microdosimetric techniques and a 1/2-inch-diameter Rossi proportional counter. Measurements were made at 5, 15, and 30 cm depths on the central axis, 3 cm inside, and 3 cm outside the field edge for 10 × 10 and 20 × 20 cm field sizes. Spectra were also measured at 5 and 15 cm depths on the central axis for a 6 × 6 cm field size. Continuous slowing down approximation calculations were performed to model the nuclear processes that occur in the fast neutron beam. Irradiation of the CR-39 was performed using a tandem electrostatic accelerator with protons of 10, 6, and 3 MeV and alpha particles of 15, 10, and 7 MeV incident energy on target, at angles of incidence from 0 to 85 degrees. The critical angle, as well as the track etch rate and normal-incidence diameter versus linear energy transfer (LET), were obtained from these measurements. The bulk etch rate was also calculated. The dose response of the material was studied, and the angular distribution of charged particles created by the fast neutron beam was measured with CR-39. The efficiency of CR-39 was calculated relative to that of the Rossi chamber, and an algorithm was devised for deriving LET spectra from the major- and minor-axis dimensions of the observed tracks. The CR-39 was irradiated in the same positions as the Rossi chamber, and the derived spectra were compared directly.

Relevance: 30.00%

Abstract:

The relationship between phytoplankton assemblages and the associated optical properties of the water body is important for the further development of algorithms for large-scale remote sensing of phytoplankton biomass and for the identification of phytoplankton functional types (PFTs), which are often representative of different biogeochemical export scenarios. Optical in-situ measurements aid in the identification of phytoplankton groups with differing pigment compositions and are widely used to validate remote sensing data. In this study, we present results from an interdisciplinary cruise aboard the RV Polarstern along a north-to-south transect in the eastern Atlantic Ocean in November 2008. Phytoplankton community composition was identified using a broad set of in-situ measurements. Water samples from the surface and from the depth of maximum chlorophyll concentration were analyzed by high-performance liquid chromatography (HPLC), flow cytometry, spectrophotometry, and microscopy. Simultaneously, the above- and underwater light field was measured by a set of high-spectral-resolution (hyperspectral) radiometers. An unsupervised clustering algorithm applied to the measured parameters allowed us to define bio-optical provinces, which we compared with ecological provinces proposed elsewhere in the literature. As could be expected, picophytoplankton was responsible for most of the variability of PFTs in the eastern Atlantic Ocean. Our bio-optical clusters agreed well with established provinces and thus can be used to classify areas of similar biogeography. This method has the potential to become an automated approach in which satellite data could be used to identify shifting boundaries of established ecological provinces, or to track exceptions from the rule, to improve our understanding of biogeochemical cycles in the ocean.
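
The clustering step can be sketched as follows; k-means on standardized station features is a stand-in assumption, since the abstract does not name the specific unsupervised algorithm used.

```python
# Sketch of defining bio-optical provinces by unsupervised clustering.
# The cruise data are not reproduced; features would be, e.g., HPLC
# pigment ratios and hyperspectral radiances per station.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def bio_optical_provinces(features, n_provinces=8):
    """features: (stations, variables). Returns a province label per station."""
    X = StandardScaler().fit_transform(features)
    return KMeans(n_clusters=n_provinces, n_init=10,
                  random_state=0).fit_predict(X)
```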

Relevance: 30.00%

Abstract:

A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase; an evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy was chosen as the search algorithm, and its typical genetic operators were adapted to allow a hardware-friendly implementation. HW/SW partitioning issues are also considered: a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images were selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details of this pluggable-component approach are also given in the paper.
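
A software sketch of a (mu + lambda) Evolution Strategy loop of the kind described follows, with mutation as the sole operator; the fitness function and coefficient layout are illustrative assumptions, and the paper's hardware-friendly operators are not reproduced.

```python
# (mu + lambda) Evolution Strategy sketch for tuning wavelet filter
# coefficients toward better compression of a target image class.
import numpy as np

def evolve(fitness, dim, mu=5, lam=20, sigma=0.1, generations=100):
    """fitness: callable mapping a coefficient vector to a cost (lower is better)."""
    rng = np.random.default_rng(0)
    parents = rng.normal(size=(mu, dim))
    for _ in range(generations):
        # Each offspring is a Gaussian mutation of a random parent.
        idx = rng.integers(mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, dim))
        pool = np.vstack([parents, offspring])
        scores = np.array([fitness(c) for c in pool])
        parents = pool[np.argsort(scores)[:mu]]  # keep the mu best
    return parents[0]
```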

Relevance: 30.00%

Abstract:

This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The implemented MC method, which has been integrated into a dedicated simulation and planning tool, is able to simulate the fate of thousands of particles per second, and the aim of this work was to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package widely used in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison with the simulator output identified the conditions under which accurate dose estimations could be obtained in less than 3 min, the threshold imposed to allow interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
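
A toy sketch of the kind of particle-transport loop such a tool runs at scale is shown below; every constant here is an illustrative placeholder, not the paper's customized MC physics.

```python
# Toy 1-D electron depth-dose Monte Carlo: particles step through a
# water-like grid, losing energy continuously (CSDA-style) with random
# jitter, while scattering shortens their depth advance.
import numpy as np

def toy_depth_dose(n=20_000, e0=6.0, step=0.05, sloss=2.0, bins=60):
    """e0: initial energy (MeV); step: step length (cm); sloss: MeV/cm."""
    rng = np.random.default_rng(0)
    dose = np.zeros(bins)
    for _ in range(n):
        z, e, ct = 0.0, e0, 1.0          # depth (cm), energy, cos(theta)
        while e > 0.1 and 0.0 <= z < bins * step:
            de = sloss * step * (1 + 0.1 * rng.standard_normal())
            de = min(max(de, 0.0), e)
            dose[int(z / step)] += de    # deposit energy locally
            e -= de
            ct = max(0.2, ct - abs(0.05 * rng.standard_normal()))
            z += step * ct
    return dose / n                      # mean deposited energy per particle
```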

Relevance: 30.00%

Abstract:

The validation of modern oceanographic theories using models produced through stereo computer vision has recently emerged. Space-time (4-D) models of the ocean surface may be generated by stacking a series of 3-D reconstructions independently generated for each time instant or, in a more robust manner, by simultaneously processing several snapshots coherently in a true "4-D reconstruction". However, the accuracy of these computer-vision-generated models depends on the estimates of the camera parameters, which may be corrupted under the influence of natural factors such as wind and vibrations. Removing the unpredictable errors in the camera parameters is therefore necessary for an accurate reconstruction. In this paper, we propose a novel algorithm that can jointly perform a 4-D reconstruction and correct the camera parameter errors introduced by external factors. The technique is founded upon variational optimization methods to benefit from their numerous advantages: continuity of the estimated surface in space and time, robustness, and accuracy. The performance of the proposed algorithm is tested using synthetic data produced through computer graphics techniques, based on which the camera parameter errors arising from natural factors can be simulated.
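
Schematically, such a joint variational formulation couples a photo-consistency data term with spatial and temporal smoothness, minimized over both the surface Z and the camera-parameter corrections theta (a generic form, not the paper's exact functional):

```latex
E(Z,\theta) \;=\; \sum_{c,\,t} \int_{\Omega}
  \bigl( I_{c,t}(\pi_{\theta_c}(\mathbf{x}, Z)) - \bar I_t(\mathbf{x}) \bigr)^{2} d\mathbf{x}
\;+\; \lambda_s \int_{\Omega} \lVert \nabla_{\mathbf{x}} Z \rVert^{2} d\mathbf{x}
\;+\; \lambda_t \int_{\Omega} (\partial_t Z)^{2} \, d\mathbf{x}
```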