926 results for Image processing -- Digital techniques -- Mathematical models
Abstract:
Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric, introduced by Li et al., performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimation of the entropy rate of the images.
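As a rough illustration of the idea (not the paper's own code), the Kolmogorov version of Li et al.'s metric is typically approximated by a normalized compression distance computed with an off-the-shelf compressor; here zlib stands in for the "standard real-world compressors" mentioned above, and `render` is a hypothetical warping function:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, the computable approximation of
    Li et al.'s similarity metric: two inputs are similar when each
    compresses well given the other."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Registration sketch: pick the transform that lets the reference image
# maximally compress the warped image, i.e. minimizes the distance:
# best_t = min(transforms, key=lambda t: ncd(ref_bytes, render(img, t)))
```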
Abstract:
In this paper, an information-theoretic framework for image segmentation is presented. This approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods which maximize the mutual information of the channel. First, a greedy top-down algorithm which partitions an image into homogeneous regions is introduced. Second, a histogram quantization algorithm which clusters color bins in a greedy bottom-up way is defined. Finally, the regions resulting from the partitioning algorithm can optionally be merged using the quantized histogram.
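A minimal sketch of the channel's objective (assuming a joint histogram with intensity bins as rows and regions as columns; the greedy algorithms above would evaluate the gain in this quantity for each candidate split or merge):

```python
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """I(X;Y) of the intensity-to-region channel, from joint counts:
    sum over p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # intensity marginal
    py = p.sum(axis=0, keepdims=True)   # region marginal
    nz = p > 0                          # skip empty cells
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())
```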
Abstract:
Creation of a working environment for visualizing three-dimensional models in real time, with two objectives: to provide a graphical interface for visualizing a scene interactively, modifying its elements; and to achieve a design that makes the project highly revisable and reusable in the future, so that it serves as a platform for testing other projects.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject in which publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal generated by them passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration of the strategies' parameters to adapt them to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented from scratch in MATLAB as a part of this thesis. No other mathematical or statistical software was used.
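As one concrete example of the market-neutral side (a sketch under standard assumptions, not the thesis's MATLAB code), the Ornstein-Uhlenbeck spread model dX = θ(μ − X)dt + σ dW can be calibrated from its exact AR(1) discretization:

```python
import numpy as np

def fit_ou(spread: np.ndarray, dt: float = 1.0):
    """Calibrate dX = theta*(mu - X)*dt + sigma*dW from a sampled spread,
    using X_{t+dt} = b*X_t + a + eps with b = exp(-theta*dt),
    a = mu*(1 - b), Var(eps) = sigma^2*(1 - b^2)/(2*theta)."""
    x, y = spread[:-1], spread[1:]
    b, a = np.polyfit(x, y, 1)                 # requires 0 < b < 1
    theta = -np.log(b) / dt
    mu = a / (1.0 - b)
    sigma = (y - (b * x + a)).std() * np.sqrt(2 * theta / (1 - b ** 2))
    return theta, mu, sigma

# Signal sketch: short the spread above mu + k*sigma_eq, long below
# mu - k*sigma_eq, where sigma_eq = sigma / sqrt(2*theta) is the
# stationary standard deviation and k is a calibrated parameter.
```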
Abstract:
This paper addresses a fully automatic landmark detection method for breast reconstruction aesthetic assessment. The landmarks detected are the suprasternal notch (SSN), armpits, nipples, and inframammary fold (IMF). These landmarks are commonly used to perform anthropometric measurements for aesthetic assessment. The methodological approach is based on both illumination and morphological analysis. The proposed method has been tested on 21 images. A good overall performance is observed, although several improvements must still be made to refine the detection of the nipples and the SSN.
Abstract:
Evaluation of segmentation methods is a crucial aspect of image processing, especially in the medical imaging field, where small differences between segmented regions of the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of some reference regions, called gold standards. Although other measures have also been used, in this work we propose a set of new similarity measures based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, yielding a simplified graphical method to compare different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of simulated MRI data of the brain with several noise and RF inhomogeneity levels, and also to real data, showing that the new measures proposed here, and the results obtained from the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
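For concreteness, a sketch of how such a multidimensional evaluation could be assembled (classic overlap scores stand in here for the paper's richer feature-based measures):

```python
import numpy as np

def overlap_scores(seg: np.ndarray, gold: np.ndarray) -> list:
    """Two classic voxel-overlap measures against the gold standard; the
    proposed location, intensity, connectivity and boundary measures
    would each contribute one more coordinate per method."""
    tp = np.logical_and(seg, gold).sum()
    dice = 2.0 * tp / (seg.sum() + gold.sum())
    jaccard = tp / np.logical_or(seg, gold).sum()
    return [dice, jaccard]

def pca_plane(scores: np.ndarray) -> np.ndarray:
    """Project the methods-by-measures score matrix onto its first two
    principal components, giving the simplified graphical comparison."""
    centered = scores - scores.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```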
Abstract:
Mosaics have been commonly used as visual maps for undersea exploration and navigation. The position and orientation of an underwater vehicle can be calculated by integrating the apparent motion of the images which form the mosaic. A feature-based mosaicking method is proposed in this paper. The creation of the mosaic is accomplished in four stages: feature selection and matching, detection of the points describing the dominant motion, homography computation, and mosaic construction. In this work we demonstrate that the use of color and texture as discriminative properties of the image can improve, to a large extent, the accuracy of the constructed mosaic. The system is able to provide 3D metric information concerning the vehicle motion, using knowledge of the intrinsic parameters of the camera while integrating the measurements of an ultrasonic sensor. The method has been tested experimentally on real images acquired by the GARBI underwater vehicle.
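A minimal sketch of one pairwise step of such a pipeline using OpenCV (ORB features and RANSAC are illustrative choices; the paper's own features exploit color and texture):

```python
import cv2
import numpy as np

def pairwise_homography(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Feature matching followed by robust estimation of the dominant
    motion as a homography, one building block of mosaic construction."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC keeps
    return H                                              # the dominant motion
```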
Abstract:
Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous-time or discrete-time aspects. We propose an approach based on a state transformation, more particularly a partition of the phase portrait into different regions, where each subregion is modeled as a two-dimensional linear time-invariant system. Then the Takagi-Sugeno model, which is a combination of the local models, is calculated. The simulation results show that the Alpha partition is well suited for dealing with such systems.
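The blending step can be sketched as follows (a generic Takagi-Sugeno evaluation under the assumption of normalized region memberships; all names are illustrative):

```python
import numpy as np

def takagi_sugeno_dynamics(x: np.ndarray, local_models) -> np.ndarray:
    """x_dot as the membership-weighted combination of local LTI models
    x_dot = A_i @ x, one per region of the partitioned phase portrait.
    `local_models` is a list of (membership_fn, A_i) pairs."""
    w = np.array([mu(x) for mu, _ in local_models])
    w = w / w.sum()                       # normalize the memberships
    return sum(wi * (A @ x) for wi, (_, A) in zip(w, local_models))
```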
Abstract:
The introduction of an infective-infectious period into the geographic spread of epidemics is considered in two different models. The classical evolution equations arising in the literature are generalized and the existence of epidemic wave fronts is revisited. The asymptotic speed is obtained and improves previous results for the Black Death plague.
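For orientation only (a textbook baseline, not the paper's generalized result): in the classical Fisher-KPP setting the asymptotic front speed is

```latex
% Classical reaction-diffusion baseline; the infective-infectious period
% generalizes this setting and modifies the selected speed:
%   u_t = D u_{xx} + r u (1 - u),
% which admits travelling fronts u(x - ct) for every c >= c*, with
\[
  c^{*} = 2\sqrt{rD}.
\]
```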
Abstract:
Coronary magnetic resonance angiography (MRA) is a powerful noninvasive technique with high soft-tissue contrast for the visualization of the coronary anatomy without X-ray exposure. Due to the small dimensions and tortuous nature of the coronary arteries, a high spatial resolution and sufficient volumetric coverage have to be obtained. However, this necessitates scanning times that are typically much longer than one cardiac cycle. By collecting image data during multiple RR intervals, one can successfully acquire coronary MR angiograms. However, constant cardiac contraction and relaxation, as well as respiratory motion, adversely affect image quality. Therefore, sophisticated motion-compensation strategies are needed. Furthermore, a high contrast between the coronary arteries and the surrounding tissue is mandatory. In the present article, challenges and solutions of coronary imaging are discussed, and results obtained in both healthy and diseased states are reviewed. This includes preliminary data obtained with state-of-the-art techniques such as steady-state free precession (SSFP), whole-heart imaging, intravascular contrast agents, coronary vessel wall imaging, and high-field imaging. Simultaneously, the utility of electron beam computed tomography (EBCT) and multidetector computed tomography (MDCT) for the visualization of the coronary arteries is discussed.
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, a correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what he believes has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, which makes it particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches are studied to transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
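As a toy version of the distribution-alignment idea in the third part (1D histogram matching; the thesis's methods are multivariate and can bridge different sensors):

```python
import numpy as np

def match_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Map the gray levels of `src` so its cumulative distribution matches
    that of `ref`, removing radiometric shifts unrelated to land-cover
    change before the pixel-wise comparison."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(src.shape)

# A naive change map after alignment:
# changed = np.abs(match_histogram(img_t1, img_t2) - img_t2) > tau
```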
Abstract:
Three-dimensional information is much easier to understand than a set of two-dimensional images. Therefore a layman is thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM) while, when seeing a transmission electron micrograph, his imagination is challenged. First approaches to gain insight into the third dimension were to make serial microtome sections of a region of interest (ROI) and then build a model of the object. Serial microtome sectioning is a tedious and skill-demanding task and is therefore seldom done. In the last two decades, with the increase of computer power, sophisticated display options, and the development of new instruments, such as an SEM with a built-in microtome as well as a focused ion beam scanning electron microscope (FIB-SEM), serial sectioning and 3D analysis have become far easier and faster. Due to the relief-like topology of the microtome-trimmed block face of resin-embedded tissue, the ROI can be searched for in the secondary electron mode, and at the selected spot the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume is imaged. As FIB-SEM allows 3D imaging of biological fine structure at high resolution of only small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows the selection of an ROI for site-specific FIB-SEM tomography just by its pronounced topographic features.
Abstract:
For free-breathing, high-resolution, three-dimensional coronary magnetic resonance angiography (MRA), the use of intravascular contrast agents may be helpful for contrast enhancement between coronary blood and myocardium. In six patients, 0.1 mmol/kg of the intravascular contrast agent MS-325/AngioMARK was given intravenously, followed by double-oblique, free-breathing, three-dimensional inversion-recovery coronary MRA with real-time navigator gating and motion correction. Contrast-enhanced, three-dimensional coronary MRA images were compared with images obtained with a T2 prepulse (T2Prep) without exogenous contrast. The contrast-enhanced images demonstrated a 69% improvement in the contrast-to-noise ratio (6.6 ± 1.1 vs. 11.1 ± 2.5; P < 0.01) compared with the T2Prep approach. By using the intravascular agent, extensive portions (> 80 mm) of the native left and right coronary system could be displayed consistently with sub-millimeter in-plane resolution. The intravascular contrast agent MS-325/AngioMARK leads to a considerable enhancement of the blood/muscle contrast for coronary MRA compared with T2Prep techniques. The clinical value of the agent remains to be defined in a larger patient series. J. Magn. Reson. Imaging 1999;10:790-799.
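For the arithmetic behind the quoted improvement: with the rounded means shown, the relative gain in contrast-to-noise ratio is about 68%; the reported 69% presumably reflects the unrounded values.

```latex
\frac{\mathrm{CNR}_{\text{contrast}}-\mathrm{CNR}_{\text{T2Prep}}}
     {\mathrm{CNR}_{\text{T2Prep}}}
  = \frac{11.1 - 6.6}{6.6} \approx 0.68
```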
Abstract:
Computed tomography (CT) is used increasingly to measure liver volume in patients undergoing evaluation for transplantation or resection. This study is designed to determine a formula predicting total liver volume (TLV) based on body surface area (BSA) or body weight in Western adults. TLV was measured in 292 patients from four Western centers. Liver volumes were calculated from helical computed tomographic scans obtained for conditions unrelated to the hepatobiliary system. BSA was calculated based on height and weight. Each center used a different established method of three-dimensional volume reconstruction. Using regression analysis, measurements were compared, and formulas correlating BSA or body weight to TLV were established. A linear regression formula to estimate TLV based on BSA was obtained: TLV = -794.41 + 1,267.28 × BSA (BSA in m²; r² = 0.46; P < .0001). A formula based on patient weight was also derived: TLV = 191.80 + 18.51 × weight (weight in kg; r² = 0.49; P < .0001). The newly derived TLV formula based on BSA was compared with previously reported formulas. The application of a formula obtained from healthy Japanese individuals underestimated TLV. Two formulas derived from autopsy data for Western populations were similar to the newly derived BSA formula, with a slight overestimation of TLV. In conclusion, hepatic three-dimensional volume reconstruction based on helical CT predicts TLV based on BSA or body weight. The new formulas derived from this correlation should contribute to the estimation of TLV before liver transplantation or major hepatic resection.
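A small sketch applying the reported regressions (the Mosteller BSA formula is an assumption here, since the paper only states that BSA was computed from height and weight; volumes are taken to be in mL):

```python
def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 (Mosteller); an assumed choice of formula."""
    return ((height_cm * weight_kg) / 3600.0) ** 0.5

def tlv_from_bsa(bsa_m2: float) -> float:
    """Total liver volume from the paper's BSA regression (mL assumed)."""
    return -794.41 + 1267.28 * bsa_m2

def tlv_from_weight(weight_kg: float) -> float:
    """Total liver volume from the paper's weight regression (mL assumed)."""
    return 191.80 + 18.51 * weight_kg

# Example: a 170 cm, 70 kg patient has BSA ~ 1.82 m^2, giving
# tlv_from_bsa(1.82) ~ 1512 mL and tlv_from_weight(70) ~ 1488 mL.
```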
Abstract:
Purpose: To evaluate whether parametric imaging with contrast material-enhanced ultrasonography (US) is superior to visual assessment for the differential diagnosis of focal liver lesions (FLLs). Materials and Methods: This study had institutional review board approval, and verbal patient informed consent was obtained. Between August 2005 and October 2008, 146 FLLs in 145 patients (63 women, 82 men; mean age, 62.5 years; age range, 22-89 years) were imaged with real-time low-mechanical-index contrast-enhanced US after a bolus injection of 2.4 mL of a second-generation contrast agent. Clips showing contrast agent uptake kinetics (including arterial, portal, and late phases) were recorded and subsequently analyzed off-line with dedicated image processing software. Analysis of the dynamic vascular patterns (DVPs) of lesions with respect to adjacent parenchyma allowed mapping DVP signatures on a single parametric image. Cine loops of contrast-enhanced US and results from parametric imaging of DVP were assessed separately by three independent off-site readers who classified each lesion as benign, malignant, or indeterminate. Sensitivity, specificity, accuracy, and positive and negative predictive values were calculated for both techniques. Interobserver agreement (κ statistics) was determined. Results: Sensitivities for visual interpretation of cine loops for the three readers were 85.0%, 77.9%, and 87.6%, which improved significantly to 96.5%, 97.3%, and 96.5% for parametric imaging, respectively (P < .05, McNemar test), while retaining high specificity (90.9% for all three readers). Accuracy scores of parametric imaging were higher than those of conventional contrast-enhanced US for all three readers (P < .001, McNemar test). Interobserver agreement increased with DVP parametric imaging compared with conventional contrast-enhanced US (change of κ from 0.54 to 0.99). Conclusion: Parametric imaging of DVP improves diagnostic performance of contrast-enhanced US in the differentiation between malignant and benign FLLs; it also provides excellent interobserver agreement.
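The reported figures can be reproduced from a reader's confusion counts; a minimal sketch (binary benign/malignant labels assumed, with the pairwise kappa as an illustration of the agreement statistic):

```python
import numpy as np

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, accuracy, PPV and NPV from the counts of
    a binary malignant-versus-benign classification."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

def cohen_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Interobserver agreement (kappa) between two readers' label vectors:
    observed agreement corrected for chance agreement."""
    labels = np.unique(np.concatenate([a, b]))
    po = float(np.mean(a == b))
    pe = sum(float(np.mean(a == lab)) * float(np.mean(b == lab))
             for lab in labels)
    return (po - pe) / (1 - pe)
```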