977 results for image noise modeling
Abstract:
Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate based on photogrammetry concepts, namely UAV-Photogrammetry Systems (UAV-PS). Such systems are used in applications where both geospatial and visual information of the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police-related services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution and high-accuracy information of the environment (e.g. 3D modeling with less than one centimeter accuracy and resolution), while in other applications lower levels of accuracy might be sufficient (e.g. wildlife management needing a few decimeters of resolution). However, even in those applications, the specific characteristics of UAV-PSs should be carefully considered during both system development and application in order to yield satisfactory results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, where the objective was to determine the challenges that remote-sensing applications of UAV systems currently face. This review also made it possible to recognize the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS. The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling in terms of scale changes, terrain relief variations, and structure and texture diversity. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies the effort to push the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction.
The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method is its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques. Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations into the Gauss-Helmert model. The principal advantage of this strategy is that it controls the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing a large number of tie points. Finally, the concept of intrinsic curves was revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space and by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
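The abstract does not describe the thesis's robust matching algorithm itself; for orientation, a minimal sketch of the standard outlier-robust epipolar-geometry pipeline that such methods are typically benchmarked against (SIFT matching plus RANSAC fundamental-matrix estimation with OpenCV) is shown below. Function and variable names are illustrative, and this is explicitly not the author's method.

```python
# Illustrative sketch only: the thesis proposes its own robust matcher, which is not
# described in the abstract. This shows the standard RANSAC-based epipolar-geometry
# estimation commonly used as a baseline, via OpenCV.
import cv2
import numpy as np

def estimate_epipolar_geometry(img1, img2, ransac_thresh=1.0):
    """Match SIFT features between two images and robustly estimate the fundamental
    matrix, discarding outlier correspondences with RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test to pre-filter ambiguous matches.
    matcher = cv2.BFMatcher()
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in raw if len(m) == 2 and m[0].distance < 0.8 * m[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC rejects the remaining outliers while fitting the fundamental matrix.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.999)
    keep = inlier_mask.ravel() == 1
    return F, pts1[keep], pts2[keep]
```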
Abstract:
Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70 and 81 kVp) across a range of mAs values. The acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle, at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each image using ImageJ software, and 14 observers performed image scoring. Images were scored according to the observer's evaluation of the objects visualized within the phantom. Results: The R² values of the non-linear relationship between object visibility score and CNR (60 kVp R² = 0.902; 70 kVp R² = 0.913; 81 kVp R² = 0.757) demonstrate a better fit for all three kVp settings than the linear R² values. As CNR increases, object visibility also increases for all kVp settings. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp, compared with 70 or 81 kVp; the CNR response to exposure is similar across settings. Pearson's r was calculated to assess the correlation between score, object visibility (OV), SNR and CNR. None of the correlations reached statistical significance (p > 0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR are similar at all three tube potentials, resulting in a strong positive relationship between CNR and object visibility score. At low doses, radiographic noise does not have a strong influence on object visibility scores because objects could still be identified in noisy images.
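The abstract does not spell out the ROI-based definitions used with ImageJ; a minimal sketch of the conventional SNR/CNR calculations, assuming hypothetical arrays `signal_roi` and `background_roi` of pixel values extracted from an image, is:

```python
# Minimal sketch of conventional ROI-based SNR and CNR measures. `signal_roi` and
# `background_roi` are assumed NumPy arrays of pixel values; the exact definitions
# used in the study are not stated in the abstract.
import numpy as np

def snr(signal_roi):
    # SNR: mean pixel value of the ROI divided by its standard deviation.
    return signal_roi.mean() / signal_roi.std()

def cnr(signal_roi, background_roi):
    # CNR: object-to-background contrast normalised by the background noise.
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()
```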
Abstract:
Background - For dose reduction actions, moving from the principle of "image quality as good as possible" to "image quality as good as needed" requires knowing whether physical measures and visual image quality are related. Visual evaluation and objective physical measures of image quality can appear to differ. If a lower dose has no noticeable effect on the visual image quality, even though it affects the objective physical measures, then the overall dose may be reduced without compromising diagnostic image quality. Low-dose imaging can be used for certain types of examinations, e.g. thoracic scoliosis, control after metal implantation for osteosynthesis, and review of pneumonia and tuberculosis. Aim of the study - To determine whether physical measures of noise predict visual (clinical) image quality at low dose levels.
Abstract:
This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is a joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite sample properties are studied under four data generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator on numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
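The models described above take realized measures built from high-frequency returns as input; a minimal sketch of the most basic such measure (daily realized variance), assuming a hypothetical array `prices` of intraday transaction prices for one day, is given below. The specific estimators used in the thesis (e.g. realized kernels) are not reproduced here.

```python
# Minimal sketch of a standard realized measure built from high-frequency data, of the
# kind the FloGARCH / Realized (L)GARCH models take as input. `prices` is assumed to be
# a 1-D array of intraday transaction prices for one trading day.
import numpy as np

def realized_variance(prices):
    """Daily realized variance: sum of squared intraday log-returns."""
    log_returns = np.diff(np.log(prices))
    return np.sum(log_returns ** 2)

def realized_volatility(prices):
    """Annualised realized volatility, assuming 252 trading days per year."""
    return np.sqrt(252 * realized_variance(prices))
```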
Abstract:
We report on the construction of anatomically realistic three-dimensional in-silico breast phantoms with adjustable sizes, shapes and morphologic features. The concept of multiscale spatial resolution is implemented for generating breast tissue images from multiple modalities. The breast epidermal boundary and subcutaneous fat layer are generated by fitting an ellipsoid and second-degree polynomials to reconstructive surgical data and ultrasound imaging data. Intraglandular fat is simulated by randomly distributing and orienting adipose ellipsoids within a fibrous region immediately inside the dermal layer. Cooper's ligaments are simulated as fibrous ellipsoidal shells distributed within the subcutaneous fat layer. Individual ductal lobes are simulated following a random binary tree model generated from probabilistic branching conditions described by ramification matrices, as originally proposed by Bakic et al. [3, 4]. The complete ductal structure of the breast is simulated from multiple lobes that extend from the base of the nipple and branch towards the chest wall. As lobe branching progresses, branches are reduced in height and radius, and terminal branches are capped with spherical lobular clusters. Biophysical parameters are mapped onto the complete anatomical model, and synthetic multimodal images (mammography, ultrasound, CT) are generated for phantoms of different adipose percentages (40%, 50%, 60%, and 70%) and analytically compared with clinical examples. Results demonstrate that the in-silico breast phantom has applications in imaging performance evaluation and, specifically, great utility for solving image registration issues in multimodality imaging.
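As a rough illustration of the random binary tree generation of ductal lobes described above, a minimal sketch is given below. The branching probability `p_branch` and shrink factor are illustrative placeholders; the ramification-matrix probabilities of Bakic et al. are not reproduced.

```python
# Minimal sketch of a random binary branching tree for a ductal lobe: each branch
# either terminates (capped with a lobular cluster) or splits in two, with radius and
# length shrinking at every generation. Probabilities are illustrative placeholders.
import random

def grow_lobe(radius, length, depth, max_depth=8, p_branch=0.8, shrink=0.7):
    """Return a nested dict describing one ductal lobe as a random binary tree."""
    node = {"radius": radius, "length": length, "children": []}
    if depth < max_depth and random.random() < p_branch:
        for _ in range(2):  # binary split into two daughter branches
            node["children"].append(
                grow_lobe(radius * shrink, length * shrink, depth + 1,
                          max_depth, p_branch, shrink))
    else:
        node["lobular_cluster"] = True  # terminal branch capped with a lobular cluster
    return node

lobe = grow_lobe(radius=1.0, length=10.0, depth=0)
```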
Abstract:
The spike-diffuse-spike (SDS) model describes a passive dendritic tree with active dendritic spines. Spine-head dynamics is modelled with a simple integrate-and-fire process, whilst communication between spines is mediated by the cable equation. Here we develop a computational framework that allows the study of multiple spiking events in a network of such spines embedded in a simple one-dimensional cable. This system is shown to support saltatory waves as a result of the discrete distribution of spines. Moreover, we demonstrate one way to incorporate noise into the spine-head dynamics whilst retaining the computational tractability of the model. The SDS model sustains a variety of propagating patterns.
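One simple, tractable way to add noise to an integrate-and-fire spine-head (a hedged sketch, not the paper's specific formulation) is a Gaussian white-noise current integrated with an Euler-Maruyama step:

```python
# Hedged sketch: a leaky integrate-and-fire spine-head driven by an input current plus
# Gaussian white noise, integrated with the Euler-Maruyama scheme. All parameter values
# are illustrative; the noise formulation of the paper is not reproduced here.
import numpy as np

def noisy_spine_head(drive, dt=0.01, tau=0.5, theta=1.0, sigma=0.2, seed=0):
    """Integrate the spine-head potential under `drive` plus white noise; return the
    membrane trace and the firing times."""
    rng = np.random.default_rng(seed)
    u, trace, spikes = 0.0, [], []
    for k, current in enumerate(drive):
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        u += dt * (-u + current) / tau + noise   # Euler-Maruyama step
        if u >= theta:                           # threshold crossing: spike and reset
            spikes.append(k * dt)
            u = 0.0
        trace.append(u)
    return np.array(trace), spikes
```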
Abstract:
We present the first image of the Madeira upper crustal structure obtained using ambient seismic noise tomography. Sixteen months of ambient noise, recorded by a dense network of 26 seismometers deployed across Madeira, allowed the reconstruction of Rayleigh-wave Green's functions between receivers. Dispersion analysis was performed in the short-period band from 1.0 to 4.0 s. Group velocity measurements were regionalized to obtain 2D tomographic images, with a lateral resolution of 2.0 km in central Madeira. Afterwards, the dispersion curves extracted from each cell of the 2D group velocity maps were inverted as a function of depth to obtain a 3D shear-wave velocity model of the upper crust, from the surface to a depth of 2.0 km. The obtained 3D velocity model reveals features throughout the island that correlate well with surface geology and island evolution.
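Inter-station Green's functions are conventionally estimated by cross-correlating and stacking long ambient-noise records between station pairs; a minimal sketch of that step, with hypothetical trace arrays and omitting the temporal normalisation and spectral whitening a real processing chain (including the one behind this study) would typically apply, is:

```python
# Minimal sketch of ambient-noise cross-correlation between two stations. Real
# processing chains typically add one-bit/temporal normalisation and spectral
# whitening, which are omitted here.
import numpy as np

def noise_cross_correlation(trace_a, trace_b, window_len, fs):
    """Stack windowed cross-correlations of two equally sampled noise records.
    The stacked correlation approximates the inter-station Green's function."""
    n_windows = min(len(trace_a), len(trace_b)) // window_len
    stack = np.zeros(2 * window_len - 1)
    for k in range(n_windows):
        a = trace_a[k * window_len:(k + 1) * window_len]
        b = trace_b[k * window_len:(k + 1) * window_len]
        a = (a - a.mean()) / (a.std() + 1e-12)   # crude amplitude normalisation
        b = (b - b.mean()) / (b.std() + 1e-12)
        stack += np.correlate(a, b, mode="full")
    lags = np.arange(-(window_len - 1), window_len) / fs
    return lags, stack / max(n_windows, 1)
```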
Abstract:
The spike-diffuse-spike (SDS) model describes a passive dendritic tree with active dendritic spines. Spine-head dynamics is modeled with a simple integrate-and-fire process, whilst communication between spines is mediated by the cable equation. In this paper we develop a computational framework that allows the study of multiple spiking events in a network of such spines embedded in a simple one-dimensional cable. In the first instance this system is shown to support saltatory waves with the same qualitative features as those observed in a model with Hodgkin-Huxley kinetics in the spine-head. Moreover, there is excellent agreement with the analytically calculated speed for a solitary saltatory pulse. Upon driving the system with time-varying external input we find that the distribution of spines can play a crucial role in determining spatio-temporal filtering properties. In particular, the SDS model in response to a periodic pulse train shows a positive correlation between spine density and low-pass temporal filtering that is consistent with the experimental results of Rose and Fortune [1999, Mechanisms for generating temporal filters in the electrosensory system. The Journal of Experimental Biology 202, 1281-1289]. Further, we demonstrate the robustness of the observed wave properties to natural sources of noise that arise in both the cable and the spine-head, and highlight the possibility of purely noise-induced waves and coherent oscillations.
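To make the mechanics of such a simulation concrete, below is a toy numerical sketch in the spirit of the SDS model: integrate-and-fire spine-heads placed at discrete positions along a passive cable, with each firing spine injecting a brief current back into the cable. All parameter values and the exact spine-cable coupling are illustrative placeholders, not those of the paper.

```python
# Toy SDS-style simulation: passive cable + integrate-and-fire spine-heads.
import numpy as np

# Passive cable (dimensionless units; space and time constants equal 1).
n_x, dx, dt, t_end = 400, 0.1, 0.002, 80.0
D, tau_cable = 1.0, 1.0
V = np.zeros(n_x)                          # cable membrane potential

# Integrate-and-fire spine-heads, one every 10 grid points.
spine_idx = np.arange(10, n_x - 10, 10)
U = np.zeros(len(spine_idx))               # spine-head potentials
tau_spine, theta, refractory = 0.5, 0.25, 3.0
last_fire = np.full(len(spine_idx), -np.inf)
pulse_amp, pulse_len = 50.0, 1.0           # current a firing spine injects into the cable
last_fire[0] = 0.0                         # force the first spine to fire and start a wave

for step in range(int(t_end / dt)):
    t = step * dt
    # Cable update: leak + diffusion (explicit finite differences, sealed ends).
    lap = np.zeros(n_x)
    lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    lap[0], lap[-1] = (V[1] - V[0]) / dx**2, (V[-2] - V[-1]) / dx**2
    I_spine = np.zeros(n_x)
    active = (t - last_fire) < pulse_len   # spines still injecting their current pulse
    I_spine[spine_idx[active]] = pulse_amp
    V += dt * (-V / tau_cable + D * lap + I_spine)

    # Spine-head update: driven by the local cable voltage, fire-and-reset, refractory.
    U += dt * (-U + V[spine_idx]) / tau_spine
    fired = (U > theta) & ((t - last_fire) > refractory)
    last_fire[fired] = t
    U[fired] = 0.0

# `last_fire` holds each spine's latest firing time; the spacing of successive firing
# times along the cable indicates whether a saltatory wave propagated and how fast.
```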
Abstract:
We present a new technical simulator for the eLISA mission, based on state-space modeling techniques and developed in MATLAB. This simulator computes the coordinates and velocity over time of each body involved in the constellation, i.e. each spacecraft and its test masses (TMs), taking into account the different disturbances and actuations. This allows the contribution of instrumental noise and system imperfections to the residual acceleration applied to the TMs to be studied, the latter reflecting the performance of the achieved free fall along the sensitive axis. A preliminary version of the results is presented.
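For readers unfamiliar with the approach, a minimal sketch of discrete-time state-space propagation of the kind such a simulator is built on is given below (in Python, with a tiny 1-D position/velocity state and illustrative matrices; the real eLISA simulator uses far larger state vectors and its own noise and actuation models).

```python
# Minimal state-space propagation sketch: x[k+1] = A x[k] + B u[k] + w[k], for a single
# 1-D body with state [position, velocity] driven by a commanded acceleration plus a
# random disturbance acceleration. Values are illustrative placeholders.
import numpy as np

dt = 0.1                                   # time step [s], illustrative
A = np.array([[1.0, dt], [0.0, 1.0]])      # free drift of [position, velocity]
B = np.array([0.5 * dt**2, dt])            # how an acceleration input enters both states

def propagate(x0, accel_cmd, accel_noise_std, n_steps, seed=0):
    """Propagate the state under a commanded acceleration plus random disturbance."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(n_steps):
        disturbance = rng.normal(0.0, accel_noise_std)
        x = A @ x + B * (accel_cmd + disturbance)
        trajectory.append(x.copy())
    return np.array(trajectory)

traj = propagate(x0=[0.0, 0.0], accel_cmd=1e-9, accel_noise_std=1e-12, n_steps=1000)
```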
Abstract:
Secondary microseism sources are pressure fluctuations close to the ocean surface. They generate acoustic P-waves that propagate in water down to the ocean bottom, where they are partly reflected and partly transmitted into the crust to continue their propagation through the Earth. We present the theory for computing the displacement power spectral density of secondary microseism P-waves recorded by receivers in the far field. In the frequency domain, the P-wave displacement can be modeled as the product of (1) the pressure source, (2) the source site effect, which accounts for the constructive interference of multiply reflected P-waves in the ocean, (3) the propagation from the ocean bottom to the stations, and (4) the receiver site effect. Secondary microseism P-waves have weak amplitudes, but they can be investigated by beamforming analysis. We validate our approach by analyzing the seismic signals generated by Typhoon Ioke (2006) and recorded by the Southern California Seismic Network. Back-projecting the beam onto the ocean surface makes it possible to follow the source motion. The observed beam centroid is in the vicinity of the pressure source derived from the ocean wave model WAVEWATCH III. The pressure source is then used for modeling the beam, and good agreement is obtained between the measured and modeled beam amplitude variation over time. This modeling approach can be used to invert P-wave noise data and retrieve the source intensity and lateral extent.
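The four-factor frequency-domain model described above can be expressed compactly; a hedged sketch, with the individual terms passed in as placeholder frequency-domain arrays whose actual functional forms are not reproduced here, is:

```python
# Sketch of the four-factor model: far-field P-wave displacement PSD as the product of
# the pressure-source PSD, the source site effect, the propagation term and the
# receiver site effect, all evaluated on a common frequency grid.
import numpy as np

def p_wave_displacement_psd(source_psd, source_site, propagation, receiver_site):
    """Combine the pressure-source PSD with the squared moduli of the transfer terms."""
    return (source_psd
            * np.abs(source_site) ** 2
            * np.abs(propagation) ** 2
            * np.abs(receiver_site) ** 2)
```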
Abstract:
The image quality of 18F-FDG PET/CT scans in overweight patients is commonly degraded. This study retrospectively evaluates the relation between SNR, weight and injected dose in 65 patients, with weights ranging from 35 to 120 kg, scanned on a Biograph mCT using a standardized protocol in the Nuclear Medicine Department at Radboud University Medical Centre in Nijmegen, The Netherlands. Five ROIs were drawn in the liver, assumed to be an organ of homogeneous metabolism, at the same location in five consecutive slices of the PET/CT scans, to obtain the mean uptake (signal) values and their standard deviation (noise). The ratio of the two gave the signal-to-noise ratio in the liver. Using a spreadsheet, weight, height, SNR and body mass index (BMI) were tabulated and plotted in order to examine the relations between these factors. The plots showed that SNR decreases as body weight and/or BMI increases, and that SNR decreases even though the injected dose increases. This is because heavier patients receive higher doses and, as reported, heavier patients have lower SNR. These findings suggest that the image quality, measured by SNR, achieved in heavier patients is worse than in thinner patients, even though higher FDG doses are given. Taking all this into consideration, a new formula was needed to calculate the dose given to each patient so as to obtain a good and constant SNR for every patient. Through mathematical derivation, two new equations (power and exponential) were obtained, which yield the SNR of a scan made at a specific reference weight (86 kg was chosen), independent of body mass. The study implies that with these new formulas, patients heavier than the reference weight will receive higher doses and lighter patients will receive lower doses. With the median being 86 kg, the new dose and new SNR were calculated, and it was concluded that image quality remains almost constant as weight increases while the quantity of FDG needed remains almost the same, without increasing the overall cost of FDG for this group of patients.
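The abstract names power and exponential dose-weight relations but not their fitted coefficients; a hedged sketch of what such weight-adapted prescriptions look like, with placeholder exponent `a` and rate `b` that are explicitly not the study's values, is:

```python
# Hedged sketch of weight-adapted FDG dose prescriptions of the kind described above.
# The exponent `a` and rate `b` are illustrative placeholders, not the fitted values
# from the study (which are not given in the abstract).
import math

def dose_power(weight_kg, ref_dose_mbq, ref_weight_kg=86.0, a=1.5):
    """Power-law scaling: dose grows as (weight / reference weight)**a."""
    return ref_dose_mbq * (weight_kg / ref_weight_kg) ** a

def dose_exponential(weight_kg, ref_dose_mbq, ref_weight_kg=86.0, b=0.02):
    """Exponential scaling: dose grows as exp(b * (weight - reference weight))."""
    return ref_dose_mbq * math.exp(b * (weight_kg - ref_weight_kg))
```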
Abstract:
Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer's preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms have significant roles in the internal processing of these devices. An undesired effect of severe noise reduction is the loss of texture (i.e. low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is therefore important. The 'Dead Leaves' target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r⁻³ and uniformly distributed gray levels, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the 'Dead Leaves' chart. These include variations in illumination, distance, exposure time and ISO sensitivity, among others. We discuss the main differences between this method and existing resolution measurement techniques and identify its advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High-frequency residual noise in the processed image contains the same frequency content as fine texture detail and is sometimes reported as such, thereby leading to inaccurate results. A wavelet-thresholding-based denoising technique is utilized for modeling the noise present in the final captured image. This updated noise model is then used for calculating an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
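A minimal sketch of how a dead-leaves style target can be synthesised from the description above (discs with radii drawn from a density proportional to r⁻³ and uniform gray levels) is given below. The geometry and disc count are illustrative; this is not the calibrated chart used in standardised texture-MTF measurements.

```python
# Minimal dead-leaves style target: overlapping discs, radii sampled by inverse-CDF
# from a density proportional to r**-3 on [r_min, r_max], uniform gray levels.
import numpy as np

def dead_leaves(size=256, n_discs=2000, r_min=2.0, r_max=100.0, seed=0):
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    # Inverse-CDF sampling for f(r) proportional to r**-3.
    u = rng.random(n_discs)
    radii = (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5
    for r in radii:
        cx, cy = rng.random(2) * size          # uniform disc centre
        gray = rng.random()                    # uniform gray level
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = gray  # newer discs occlude older ones
    return img
```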
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberrations, the point spread function (PSF) and the modulation transfer function (MTF). The ocular aberration of the computer user was first measured with a wavefront aberrometer and used as a reference for the precompensation model. The dynamic precompensation was generated based on the aberration rescaled to the pupil diameter, which was monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
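A minimal sketch of the retinal-image modeling chain mentioned above (wavefront from Zernike terms, generalized pupil function, PSF and MTF via Fourier optics) is given below. Only a single defocus term with an illustrative coefficient is included; the dissertation used measured aberrations and further steps not reproduced here.

```python
# Fourier-optics sketch: Zernike defocus wavefront -> pupil function -> PSF -> MTF.
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
rho = np.sqrt(X**2 + Y**2)
pupil = (rho <= 1.0).astype(float)                  # circular pupil aperture

# Wavefront aberration W (in micrometres) from a single Zernike defocus term Z(2,0).
c_defocus = 0.25                                    # illustrative coefficient [um]
W = c_defocus * np.sqrt(3) * (2 * rho**2 - 1) * pupil

wavelength_um = 0.55                                # green light
P = pupil * np.exp(1j * 2 * np.pi * W / wavelength_um)   # generalised pupil function

psf = np.abs(np.fft.fftshift(np.fft.fft2(P))) ** 2
psf /= psf.sum()                                    # normalise the PSF to unit energy
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()                                    # MTF normalised to 1 at zero frequency
```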
Abstract:
This paper presents the development of a combined experimental and numerical approach to study the anaerobic digestion of both the wastes produced in a biorefinery using yeast for biodiesel production and the wastes generated in the preceding microbial biomass production. The experimental results show that it is possible to valorise all the tested residues through anaerobic digestion. In implementing the numerical model for anaerobic digestion, a procedure for identifying its parameters needed to be developed. A hybrid search strategy was used: a genetic algorithm followed by a direct search method. To test the parameter estimation procedure, noise-free data were first considered and a critical analysis of the results obtained so far was undertaken. As a demonstration of its application, the procedure was then applied to experimental data.
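A hedged sketch of such a two-stage parameter estimation is shown below, with SciPy's differential evolution standing in for the genetic algorithm and Nelder-Mead for the direct search; the anaerobic digestion model itself is replaced by a placeholder `simulate` function, so none of this reflects the paper's actual model or settings.

```python
# Two-stage parameter estimation sketch: global evolutionary search, then local direct
# search, fitting a placeholder first-order methane production model to data.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def simulate(params, t):
    """Placeholder process model: first-order cumulative methane production curve."""
    y_max, k = params
    return y_max * (1 - np.exp(-k * t))

def objective(params, t, observed):
    """Sum of squared deviations between simulated and observed data."""
    return np.sum((simulate(params, t) - observed) ** 2)

def fit_parameters(t, observed, bounds=((0.0, 1000.0), (0.0, 2.0))):
    # Stage 1: global evolutionary search over the admissible parameter ranges.
    global_fit = differential_evolution(objective, bounds, args=(t, observed), seed=1)
    # Stage 2: local direct search (Nelder-Mead) starting from the global optimum.
    local_fit = minimize(objective, global_fit.x, args=(t, observed), method="Nelder-Mead")
    return local_fit.x
```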
Abstract:
Conceptual interpretation of language has attracted considerable interest in artificial intelligence. The challenge of modeling the many complexities of language is the main motivation behind our work. Our main focus in this work is to develop a conceptual graphical representation for image captions. We use discourse representation structures to extract semantic information, which is then modeled as a graphical structure. The effectiveness of the model is evaluated with a caption-based image retrieval system. Image retrieval is performed by computing subgraph-based similarity measures. The best retrievals were given an average rating of . ± . out of 4 by a group of 25 human judges. The experiments were performed on a subset of the SBU Captioned Photo Dataset. The purpose of this work is to establish the cognitive sensibility of the approach to caption representation.
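The abstract does not specify the subgraph similarity measure; a hedged sketch of a simple graph-overlap score between two caption graphs, using networkx and purely illustrative node labels, is:

```python
# Hedged sketch of a subgraph-overlap similarity between two caption graphs. The actual
# graph construction from discourse representation structures and the exact similarity
# measure used in the paper are not reproduced here.
import networkx as nx

def subgraph_similarity(g1: nx.Graph, g2: nx.Graph) -> float:
    """Jaccard-style overlap of the node and edge sets of two labelled caption graphs."""
    nodes1, nodes2 = set(g1.nodes), set(g2.nodes)
    edges1, edges2 = set(g1.edges), set(g2.edges)
    node_score = len(nodes1 & nodes2) / max(len(nodes1 | nodes2), 1)
    edge_score = len(edges1 & edges2) / max(len(edges1 | edges2), 1)
    return 0.5 * (node_score + edge_score)

# Usage: rank candidate caption graphs against a query caption graph.
query = nx.Graph([("dog", "run"), ("run", "beach")])
candidate = nx.Graph([("dog", "run"), ("run", "park")])
print(subgraph_similarity(query, candidate))
```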