Abstract:
DDHQ/TCC esters 3a–f and 7a–g were prepared either by oxidation of spiroketones 1 with DDQ/chloranil or by condensation of the acid chloride with DDHQ/TCC. NaBH4 reduction of the unsaturated DDHQ esters 3a–b and TCC esters 7a–c gave the corresponding allylic alcohols in good yield without any observable 1,4-addition products. Reduction of the saturated esters 3e and 7d gave the corresponding alcohols. Alkyl esters 5 and 6, methyl benzoate and phenyl benzoate remained unaffected under these reduction conditions. In the reduction of compound 7e, which contains both alkyl and TCC esters, the TCC ester is selectively reduced. Reduction of TCC monoesters 7f–g gave the lactones. The observed facile reduction has been rationalised.
Abstract:
Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed than the traditional underdetermined case. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction procedure and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient owing to being regularization-free, involving only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
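The gradient-free idea can be sketched with a hand-rolled Nelder-Mead simplex search that fits region-wise parameters by repeated forward calculations alone, with no Jacobian and no regularization term. The two-parameter "forward model" below is a toy stand-in, not the diffuse-optics model of the paper:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-12, max_iter=2000):
    """Minimal Nelder-Mead simplex minimizer (standard coefficients)."""
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                      # initial simplex: perturb each axis
        p = list(x0); p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst, second = simplex[0], simplex[-1], simplex[-2]
        if f(worst) - f(best) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # reflection of the worst vertex through the centroid
        xr = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(best) <= f(xr) < f(second):
            simplex[-1] = xr
        elif f(xr) < f(best):               # expansion
            xe = [centroid[i] + gamma * (xr[i] - centroid[i]) for i in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                               # inside contraction, else shrink
            xc = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                simplex = [best] + [[best[i] + sigma * (p[i] - best[i])
                                     for i in range(n)] for p in simplex[1:]]
    return min(simplex, key=f)

def forward(mu):
    # hypothetical two-region forward model producing two detector readings
    return [mu[0] + mu[1], 2.0 * mu[0] - mu[1]]

measured = forward([0.01, 0.02])            # synthetic "data"
misfit = lambda mu: sum((m - d) ** 2 for m, d in zip(forward(mu), measured))
recovered = nelder_mead(misfit, [0.0, 0.0])
```

Only `forward` is ever evaluated inside the loop, which is exactly the "repeated forward calculations" property the abstract highlights.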
Abstract:
To reduce motion artifacts in DSA, non-rigid image registration is commonly applied before subtracting the mask from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this, and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than registering the control points independently. We achieve improved computational performance using pivotal control points without compromising artifact reduction. We have tested our approach on several clinical data sets, and present the results of quantitative analysis, clinical assessment and performance improvement on a GPU. (C) 2013 Elsevier Ltd. All rights reserved.
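One way to realize quad-tree generation of a non-uniform control-point grid is to subdivide only where the image has detail, so busy regions get dense control points and flat regions get sparse ones. In this sketch the subdivision criterion (local intensity variance) and the placement of one control point per leaf corner are illustrative assumptions, not necessarily the paper's choices:

```python
def quadtree_points(img, x0, y0, w, h, thresh, min_size, pts):
    """Recursively subdivide blocks with high intensity variance; each
    leaf block contributes its top-left corner as a control point."""
    vals = [img[y][x] for y in range(y0, y0 + h) for x in range(x0, x0 + w)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if var > thresh and w > min_size and h > min_size:
        hw, hh = w // 2, h // 2
        quadtree_points(img, x0,      y0,      hw,     hh,     thresh, min_size, pts)
        quadtree_points(img, x0 + hw, y0,      w - hw, hh,     thresh, min_size, pts)
        quadtree_points(img, x0,      y0 + hh, hw,     h - hh, thresh, min_size, pts)
        quadtree_points(img, x0 + hw, y0 + hh, w - hw, h - hh, thresh, min_size, pts)
    else:
        pts.add((x0, y0))

# detail only in the top-left quadrant -> dense points there, sparse elsewhere
img = [[10 * ((x + y) % 2) if x < 4 and y < 4 else 0 for x in range(8)]
       for y in range(8)]
pts = set()
quadtree_points(img, 0, 0, 8, 8, thresh=1.0, min_size=1, pts=pts)
```

On this 8x8 test image the textured quadrant is refined down to single pixels while each flat quadrant yields a single control point.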
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Abstract:
An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporating the B-spline potential function into the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique against the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several fluorescence microscopy imaging modalities such as selective plane illumination microscopy, localization microscopy and STED. (C) 2015 Author(s).
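As background for the potential function, the cubic B-spline kernel itself is a short piecewise cubic with compact support on [-2, 2], and its integer shifts sum to one everywhere (partition of unity). The sketch below evaluates only this building block; it is not the authors' MAP potential:

```python
def cubic_bspline(t):
    """Cubic B-spline kernel: piecewise cubic, smooth, support [-2, 2]."""
    t = abs(t)
    if t < 1.0:
        return (4.0 - 6.0 * t * t + 3.0 * t ** 3) / 6.0
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

# integer shifts of the kernel form a partition of unity
x = 0.3
total = sum(cubic_bspline(x - k) for k in range(-3, 4))
```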
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
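The split-and-merge structure can be sketched as follows. As a deliberately crude simplification, plain k-means centroids stand in for the locally learned dictionaries (the actual algorithm learns over-complete sparse dictionaries and merges them by solving a second dictionary-learning problem on the pooled atoms; here both stages are k-means):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; the centroids act as a crude '1-sparse' dictionary."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[j].append(p)
        for i, g in enumerate(groups):
            if g:  # keep the old centre if a cluster goes empty
                centers[i] = [sum(c) / len(g) for c in zip(*g)]
    return centers

def split_and_merge(data, n_chunks, k_local, k_global):
    # split: partition the training set into smaller chunks
    chunks = [data[i::n_chunks] for i in range(n_chunks)]
    # learn a local "dictionary" per chunk
    atoms = []
    for c in chunks:
        atoms.extend(tuple(a) for a in kmeans(c, k_local))
    # merge: learn a global "dictionary" on the pooled local atoms
    return kmeans(atoms, k_global)

# two well-separated clusters; the global dictionary should recover both
data = ([(0.1 * i, 0.1 * j) for i in range(3) for j in range(3)] +
        [(10 + 0.1 * i, 10 + 0.1 * j) for i in range(3) for j in range(3)])
global_dict = sorted(split_and_merge(data, n_chunks=2, k_local=2, k_global=2))
```

The memory benefit mirrors the paper's argument: each call to the local learner only ever sees one chunk, and the merge step sees only the small pool of atoms.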
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
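The stage taxonomy above (point-wise, stencil, reduction) can be illustrated with a plain-Python pipeline on a toy image. This is neither PolyMage nor Halide syntax, just the three operation classes chained in the order a pipeline graph would connect them:

```python
def pointwise(img, f):
    """Point-wise stage: apply f independently to every pixel."""
    return [[f(v) for v in row] for row in img]

def stencil3x3(img):
    """Stencil stage: 3x3 box blur; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9
    return out

def reduction(img):
    """Reduction stage: collapse the whole image to a single value."""
    return sum(sum(row) for row in img)

# pipeline graph: point-wise gain -> stencil blur -> global reduction
img = [[1] * 5 for _ in range(5)]
result = reduction(stencil3x3(pointwise(img, lambda v: 2 * v)))
```

Each stage here allocates a full intermediate image, which is precisely the memory-bandwidth cost that fusion, tiling and storage optimization in a compiler like PolyMage are designed to remove.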
Abstract:
In this thesis we study Galois representations corresponding to abelian varieties with certain reduction conditions. We show that these conditions force the image of the representations to be "big," so that the Mumford-Tate conjecture (MT) holds. We also prove that the set of abelian varieties satisfying these conditions is dense in a corresponding moduli space.
The main results of the thesis are the following two theorems.
Theorem A: Let A be an absolutely simple abelian variety with End⁰(A) = k an imaginary quadratic field, and let g = dim(A). Assume either that dim(A) ≤ 4, or that A has bad reduction at some prime ϕ with the dimension of the toric part of the reduction equal to 2r, where gcd(r,g) = 1 and (r,g) ≠ (15,56) or (m−1, m(m+1)/2). Then MT holds.
Theorem B: Let M be the moduli space of abelian varieties with fixed polarization, level structure and a k-action. It is defined over a number field F. The subset of M(Q) corresponding to absolutely simple abelian varieties with a prescribed stable reduction at a large enough prime ϕ of F is dense in M(C) in the complex topology. In particular, the set of simple abelian varieties having bad reductions with fixed dimension of the toric parts is dense.
Besides this, we also established the following results:
(1) MT holds for some other classes of abelian varieties with similar reduction conditions. For example, if A is an abelian variety with End⁰(A) = Q and the dimension of the toric part of its reduction is prime to dim(A), then MT holds.
(2) MT holds for Ribet-type abelian varieties.
(3) The Hodge and the Tate conjectures are equivalent for abelian 4-folds.
(4) MT holds for abelian 4-folds of type II, III, IV (Theorem 5.0(2)) and some 4-folds of type I.
(5) For some abelian varieties either MT or the Hodge conjecture holds.
Abstract:
The speeds of sound u, densities ρ, and refractive indices nD of some homologous series, such as n-alkyl ethanoates, n-alkyl propionates, methyl alkanoates, ethyl alkanoates, dialkyl malonates, and alkyl haloalkanoates, were measured over the temperature range from 298.15 to 333.15 K. The molar volume V, isentropic and isothermal compressibilities κS and κT, molar refraction Rm, Eykman's constant Cm, molecular radius r, Rao's molar function R, thermal expansion coefficient α, thermal pressure coefficient γ, and Flory's characteristic parameters P*, V*, and T* have been calculated from the measured experimental data. The applicability of the Rao theory and the Flory–Patterson–Pandey (FPP) theory has been examined and discussed for these alkanoates.
Abstract:
Blind steganalysis of JPEG images is addressed by modeling the correlations among the DCT coefficients using K-variate (K = 2) probability density function (p.d.f.) estimates constructed by means of Markov random field (MRF) cliques. The rationale for using high-variate p.d.f.s together with MRF cliques for image steganalysis is explained via a classical detection problem. Although our approach has many improvements over the current state of the art, it suffers from the high dimensionality and sparseness of the high-variate p.d.f.s. The dimensionality and sparseness problems are solved heuristically by means of dimensionality reduction and feature selection algorithms. The detection accuracy of the proposed methods is evaluated on Memon's (30,000 images) and Goljan's (1,912 images) image sets. It is shown that practically applicable steganalysis systems are possible with a suitable dimensionality reduction technique, and that these systems can provide, in general, improved detection accuracy over the current state of the art. Experimental results also justify this assertion.
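The abstract does not name the specific dimensionality reduction or feature selection algorithms used. As one generic illustration of the feature-selection side, a simple variance-based selector that keeps the k most informative feature columns might look like:

```python
def top_variance_features(samples, k):
    """Keep indices of the k highest-variance feature columns."""
    n, d = len(samples), len(samples[0])
    scored = []
    for j in range(d):
        col = [s[j] for s in samples]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        scored.append((var, j))
    scored.sort(reverse=True)           # highest variance first
    return sorted(j for _, j in scored[:k])

# column 0 varies strongly, column 1 is constant, column 2 varies mildly
samples = [[0, 5, 1], [10, 5, 2], [20, 5, 3]]
kept = top_variance_features(samples, k=2)
```

Constant (zero-variance) features carry no discriminative information, so a selector of this kind discards them first; more sophisticated selectors score features against the class labels instead.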
Abstract:
PURPOSE: To investigate the effects of using volumetric modulated arc therapy (VMAT) and/or voluntary moderate deep inspiration breath-hold (vmDIBH) in the radiation therapy (RT) of left-sided breast cancer including the regional lymph nodes.
MATERIALS AND METHODS: For 13 patients, four treatment combinations were compared: 3D-conformal RT (i.e., forward IMRT) in free breathing (3D-CRT(FB)), 3D-CRT(vmDIBH), two-partial-arc VMAT(FB), and VMAT(vmDIBH). The prescribed dose was 42.56 Gy in 16 fractions. For 10 additional patients, 3D-CRT and VMAT were also compared in vmDIBH only.
RESULTS: Dose conformity, PTV coverage, and ipsilateral and total lung doses were significantly better for VMAT plans than for 3D-CRT. The mean heart dose (D(mean,heart)) reduction in 3D-CRT(vmDIBH) was between 0.9 and 8.6 Gy, depending on the initial D(mean,heart) (in the 3D-CRT(FB) plans). VMAT(vmDIBH) reduced D(mean,heart) further when it was still >3.2 Gy in 3D-CRT(vmDIBH). The mean contralateral breast dose was higher for VMAT plans (2.7 Gy) than for 3D-CRT plans (0.7 Gy).
CONCLUSIONS: VMAT and 3D-CRT(vmDIBH) significantly reduced the heart dose for patients treated with locoregional RT of left-sided breast cancer. When D(mean,heart) exceeded 3.2 Gy in 3D-CRT(vmDIBH) plans, VMAT(vmDIBH) resulted in a cumulative heart dose reduction. VMAT also provided better target coverage and reduced the ipsilateral lung dose, at the expense of a small increase in the dose to the contralateral breast.
Abstract:
Thesis (Master's)--University of Washington, 2015
Abstract:
This master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method involves three steps. First, to reduce the computational cost of our algorithm, a color image of the spectral content is estimated. To this end, a non-linear dimensionality reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is performed to produce a color rendering of each hyperspectral image. Next, to discriminate urban from non-urban regions, the second step consists of extracting a few discriminative (and complementary) features from this color hyperspectral image. To this end, we extracted a series of discriminative parameters describing the characteristics of an urban area, which is mainly composed of man-made objects with simple, geometric, regular shapes. We used textural features based on gray levels, gradient magnitude, or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and on local detection of line segments. To further reduce the computational complexity of our approach and to avoid the "curse of dimensionality" that arises when clustering high-dimensional data, we decided, in the final step, to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse, inexpensively obtained segmentations with an efficient segmentation-map fusion model.

The experiments reported here show that this strategy is visually effective and compares favorably with other methods for detecting and segmenting urban areas from hyperspectral images.
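The final step combines coarse per-feature K-means segmentations into a single result. The fusion model itself is not specified in the abstract; as a hedged stand-in, the sketch below fuses label maps with a per-pixel majority vote:

```python
from collections import Counter

def fuse_segmentations(maps):
    """Per-pixel majority vote across several coarse segmentation maps."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[Counter(m[y][x] for m in maps).most_common(1)[0][0]
             for x in range(w)]
            for y in range(h)]

# three coarse 2x2 label maps, e.g. one per textural/structural feature
maps = [
    [[1, 1], [0, 1]],
    [[1, 0], [0, 1]],
    [[1, 1], [0, 0]],
]
fused = fuse_segmentations(maps)
```

Each coarse map is cheap to compute (one 1-D K-means per feature), and the vote lets agreement between features override the noise of any single one.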
Abstract:
Adolescent idiopathic scoliosis (AIS) is a deformity of the spine manifested by asymmetry and deformities of the external surface of the trunk. Classification of scoliosis deformities according to curve type is used to plan the management of scoliosis patients. Currently, the scoliosis curve type is determined from an X-ray exam. However, cumulative exposure to X-ray radiation significantly increases the risk of certain cancers. In this paper, we propose a robust system that can classify the scoliosis curve type from a non-invasive acquisition of the 3D trunk surface of the patient. The 3D image of the trunk is divided into patches, and local geometric descriptors characterizing the surface of the back are computed from each patch to form the features. We reduce the dimensionality using Principal Component Analysis, retaining 53 components. In this work a multi-class classifier is built with a least-squares support vector machine (LS-SVM), a kernel classifier. For this study, a new kernel was designed in order to achieve a classifier more robust than those with polynomial and Gaussian kernels. The proposed system was validated using data from 103 patients with different scoliosis curve types diagnosed and classified by an orthopedic surgeon from X-ray images. The average rate of successful classification was 93.3%, with a better prediction rate for the major thoracic and lumbar/thoracolumbar types.
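The PCA step projects the patch descriptors onto their leading principal directions before classification. As a minimal illustration of that reduction (not the paper's 53-component pipeline), the sketch below extracts just the first principal direction of a dataset via power iteration on the sample covariance:

```python
def first_principal_component(data, iters=100):
    """Dominant eigenvector of the sample covariance via power iteration."""
    n, d = len(data), len(data[0])
    # centre the data
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(d)]
           for i in range(d)]
    # power iteration: repeatedly apply cov and renormalize
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# points on the line y = x: the first component is (1, 1) / sqrt(2)
pc = first_principal_component([[1, 1], [2, 2], [3, 3], [4, 4]])
```

Retaining k components then amounts to projecting each descriptor onto the top k such directions, which is what keeps the downstream LS-SVM tractable.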
Abstract:
Individuals with schizophrenia, particularly those with passivity symptoms, may not feel in control of their actions, believing them to be controlled by external agents. Cognitive operations that contribute to these symptoms may include abnormal processing in agency as well as body representations that deal with body schema and body image. However, these operations in schizophrenia are not fully understood, and the questions of general versus specific deficits in individuals with different symptom profiles remain unanswered. Using the projected-hand illusion (a digital video version of the rubber-hand illusion) with synchronous and asynchronous stroking (500 ms delay), and a hand laterality judgment task, we assessed sense of agency, body image, and body schema in 53 people with clinically stable schizophrenia (with a current, past, and no history of passivity symptoms) and 48 healthy controls. The results revealed a stable trait in schizophrenia with no difference between clinical subgroups (sense of agency) and some quantitative (specific) differences depending on the passivity symptom profile (body image and body schema). Specifically, a reduced sense of self-agency was a common feature of all clinical subgroups. However, subgroup comparisons showed that individuals with passivity symptoms (both current and past) had significantly greater deficits on tasks assessing body image and body schema, relative to the other groups. In addition, patients with current passivity symptoms failed to demonstrate the normal reduction in body illusion typically seen with a 500 ms delay in visual feedback (asynchronous condition), suggesting internal timing problems. Altogether, the results underscore self-abnormalities in schizophrenia, provide evidence for both trait abnormalities and state changes specific to passivity symptoms, and point to a role for internal timing deficits as a mechanistic explanation for external cues becoming a possible source of self-body input.