966 results for Piecewise smooth vector field
Restoration of images and 3D data to higher resolution by deconvolution with sparsity regularization
Abstract:
Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, images or 3D data are sometimes only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. The algorithm promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples (a simple ring, the cameraman image and a 3D microscope dataset) and show that good results can be obtained. © 2010 IEEE.
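The reweighted-L2 idea can be illustrated with a short iteratively reweighted least squares (IRLS) loop. The sketch below is a minimal, generic version assuming a dense operator A and illustrative choices of the regularization weight lam and smoothing eps; the paper's tree model over the weights is omitted.

    import numpy as np

    def l0_reweighted_l2(A, y, lam=0.1, eps=1e-3, iters=50):
        """Minimal IRLS sketch: solve a sequence of weighted ridge problems
        x <- argmin ||A x - y||^2 + lam * sum_i w_i * x_i^2
        with w_i = 1 / (x_i^2 + eps), so that w_i * x_i^2 is roughly 1 for
        nonzero coefficients, approximating the L0 count."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            w = 1.0 / (x**2 + eps)          # large weight pushes small coefficients to zero
            H = A.T @ A + lam * np.diag(w)  # normal equations of the weighted problem
            x = np.linalg.solve(H, A.T @ y)
        return x

    # Toy usage: recover a sparse vector from an underdetermined system.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
    x_hat = l0_reweighted_l2(A, A @ x_true)
    print(np.flatnonzero(np.abs(x_hat) > 0.5))  # should concentrate on 5, 37, 80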
Abstract:
This paper presents an analysis of the slow-peaking phenomenon, a pitfall of low-gain designs that imposes basic limitations on large regions of attraction in nonlinear control systems. The phenomenon is best understood on a chain of integrators perturbed by a vector field u·p(x, u) that satisfies p(x, 0) = 0. Because small controls (or low-gain designs) are sufficient to stabilize the unperturbed chain of integrators, it may seem that smaller controls, which attenuate the perturbation u·p(x, u) in a large compact set, can be employed to achieve larger regions of attraction. This intuition is false, however, and peaking may cause a loss of global controllability unless severe growth restrictions are imposed on p(x, u). These growth restrictions are expressed as a higher-order condition with respect to a particular weighted dilation related to the peaking exponents of the nominal system. When this higher-order condition is satisfied, an explicit control law is derived that achieves global asymptotic stability of x = 0. This stabilization result is extended to more general cascade nonlinear systems in which the perturbation p(x, v)·v, with v = (ξ, u)^T, contains the state ξ and the control u of a stabilizable subsystem ξ̇ = a(ξ, u). As an illustration, a control law is derived that achieves global stabilization of the frictionless ball-and-beam model.
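Schematically, the nominal structure described above can be written as the perturbed chain of integrators below; this is a reconstruction from the abstract's wording, not the paper's exact equations.

    \dot{x} = \underbrace{(x_2,\, \dots,\, x_n,\, u)^{T}}_{\text{chain of integrators}} + u\, p(x, u), \qquad p(x, 0) = 0,

so that u = 0 recovers the unperturbed chain, while any control action that moves the state also excites the perturbation.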
Abstract:
In this paper we propose a new algorithm for reconstructing phase-encoded velocity images of catalytic reactors from undersampled NMR acquisitions. Previous work on this application has employed total variation and nonlinear conjugate gradients, which, although promising, yield unsatisfactory, unphysical visual results. Our approach leverages prior knowledge about the piecewise smoothness of the phase map and the physical constraints imposed by the system under study. We show how iteratively regularizing the real and imaginary parts of the acquired complex image separately in a shift-invariant wavelet domain generally produces a piecewise-smooth velocity map. Using appropriately defined metrics, we demonstrate higher fidelity to the ground truth and to the physical system constraints than previous methods for this specific application. © 2013 IEEE.
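One regularization pass of the kind described can be sketched as follows, assuming the PyWavelets (pywt) package; the wavelet, threshold, and level choices are illustrative, and the data-consistency step against the undersampled acquisition, with which such a pass would be alternated, is omitted.

    import numpy as np
    import pywt

    def swt_soft_threshold(img, wavelet="db4", level=2, thresh=0.05):
        """One denoising pass in a shift-invariant (stationary) wavelet domain."""
        coeffs = pywt.swt2(img, wavelet, level=level)
        shrunk = [(cA, tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
                  for cA, details in coeffs]
        return pywt.iswt2(shrunk, wavelet)

    def regularize_complex(z, **kw):
        """Regularize the real and imaginary parts separately, as in the abstract,
        so that the phase (velocity) map inherits piecewise smoothness."""
        return swt_soft_threshold(z.real, **kw) + 1j * swt_soft_threshold(z.imag, **kw)

    # Toy usage on a 128x128 complex image (sides divisible by 2**level).
    z = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
    phase = np.angle(regularize_complex(z))  # piecewise-smooth phase estimate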
Abstract:
We investigate the decomposition of the noncommutative gauge potential Â_i, and find that it has an inner structure: Â_i can be decomposed into two parts, b̂_i and â_i, where b̂_i obeys gauge transformations while â_i obeys adjoint transformations, consistent with the Seiberg-Witten map of the noncommutative U(1) gauge potential. By means of the Seiberg-Witten map, we construct a mapping of the unit vector field between noncommutative space and ordinary space, and find that the noncommutative U(1) gauge potential and its gauge field tensor can be expressed in terms of the unit vector field. When the unit vector field has no singular point, the noncommutative gauge potential and gauge field tensor reduce to the ordinary gauge potential and gauge field tensor.
Abstract:
Mammographic mass detection is an important task for the early diagnosis of breast cancer. However, it is difficult to distinguish masses from normal regions because of their varied morphological characteristics and ambiguous margins. To improve mass detection performance, it is essential to preprocess the mammogram effectively so as to preserve both the intensity distribution and the morphological characteristics of regions. In this paper, morphological component analysis is first introduced to decompose a mammogram into a piecewise-smooth component and a texture component. The former is used in our detection scheme because it effectively suppresses both structural noise and the effects of blood vessels. We then propose two novel concentric-layer criteria to detect different types of suspicious regions in a mammogram. The combined scheme is evaluated on the Digital Database for Screening Mammography, using 100 malignant cases and 50 benign cases. The sensitivity of the proposed scheme is 99% for malignant cases, 88% for benign cases, and 95.3% overall. The results show that the proposed detection scheme achieves satisfactory detection performance and a favorable compromise between sensitivity and false positive rate.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of this vector field model, we offer a scheme for energy-efficient routing. The scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy, and lowering it where the nodes do not have much energy left. Our simulations show that this method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes.

Our initial focus is on the case of a single destination in the network; we later extend the approach to the case of multiple destinations. With multiple destinations, the network must be partitioned into areas known as the regions of attraction of the destinations, with each destination responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case lies in how to define the regions of attraction and how much communication load to assign to each destination so as to optimize the performance of the network. We use the vector field model to solve this problem: we define a vector field that is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field), and we show that in the optimal assignment of the communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of the vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations, and based on this fact we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments.
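The electrostatics analogy can be made concrete with a small finite-difference sketch, given below under illustrative assumptions (a rectangular grid, zero potential on the boundary, a Jacobi iteration, and toy source/sink and permittivity layouts); the thesis's actual PDE formulation and boundary conditions may differ.

    import numpy as np

    def routing_field(rho, eps, iters=5000):
        """Solve div(eps * grad(phi)) = -rho with a 5-point scheme (unit spacing),
        then return the information-flow field D = -eps * grad(phi).
        rho: net source density (sensors > 0, destinations < 0).
        eps: permittivity, higher where nodes have more residual energy."""
        phi = np.zeros_like(rho)
        # Face permittivities: arithmetic mean between neighboring cells.
        eN = 0.5 * (eps[1:-1, 1:-1] + eps[:-2, 1:-1])
        eS = 0.5 * (eps[1:-1, 1:-1] + eps[2:, 1:-1])
        eW = 0.5 * (eps[1:-1, 1:-1] + eps[1:-1, :-2])
        eE = 0.5 * (eps[1:-1, 1:-1] + eps[1:-1, 2:])
        for _ in range(iters):  # Jacobi iteration
            phi[1:-1, 1:-1] = (
                eN * phi[:-2, 1:-1] + eS * phi[2:, 1:-1]
                + eW * phi[1:-1, :-2] + eE * phi[1:-1, 2:]
                + rho[1:-1, 1:-1]
            ) / (eN + eS + eE + eW)
        gy, gx = np.gradient(phi)
        return -eps * gy, -eps * gx  # route data along this field

    n = 64
    rho = np.zeros((n, n)); rho[10, 10] = 1.0; rho[50, 50] = -1.0  # one sensor, one sink
    eps = np.ones((n, n)); eps[:, n // 2:] = 3.0  # more residual energy on the right half
    Dy, Dx = routing_field(rho, eps)  # flow bends toward the high-permittivity region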
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, and call the resulting scheme the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates.

In the next step we modify CAPM to obtain methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find these signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade due to cross-interference among simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
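The role of orthogonal signatures can be illustrated with a toy example, assuming Walsh-Hadamard codes and an idealized linear aggregate response; this is far simpler than the actual tests in the work.

    import numpy as np
    from scipy.linalg import hadamard

    T = 64                      # number of time slots in the test
    H = hadamard(T)             # rows are mutually orthogonal +/-1 signatures
    sig_a, sig_b = H[1], H[2]   # signatures assigned to two routers

    # Idealized responsive aggregate: its rate decrease follows the sum of the
    # perturbations (signature-timed packet drops) applied by both routers.
    rng = np.random.default_rng(1)
    resp_a, resp_b = 0.8, 0.3   # true responsiveness seen by each router
    rate_drop = resp_a * sig_a + resp_b * sig_b + 0.1 * rng.standard_normal(T)

    # Each router correlates the observed response with its own signature;
    # orthogonality cancels the other router's simultaneous test.
    print(np.dot(rate_drop, sig_a) / T)  # close to 0.8
    print(np.dot(rate_drop, sig_b) / T)  # close to 0.3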
Abstract:
The study of real hypersurfaces in pseudo-Riemannian complex space forms and para-complex space forms, the pseudo-Riemannian generalizations of the complex space forms, is addressed. It is proved that there are no umbilic hypersurfaces, nor real hypersurfaces with parallel shape operator, in such spaces. Denoting by J the complex or para-complex structure of a pseudo-Riemannian complex or para-complex space form, respectively, a non-degenerate hypersurface of such a space with unit normal vector field N is said to be Hopf if the tangent vector field JN is a principal direction. It is proved that if a hypersurface is Hopf, then the corresponding principal curvature (the Hopf curvature) is constant. It is also observed that in some cases a Hopf hypersurface must be, locally, a tube over a complex (or para-complex) submanifold, thus generalizing previous results of Cecil, Ryan and Montiel.
Abstract:
The thesis presents a geometric description of a germ of a generic family unfolding a real analytic vector field with a weak focus at the origin, together with its complexification: the associated singular holomorphic foliation. We show that two germs of such families are orbitally analytically equivalent if and only if the germs of the families of diffeomorphisms unfolding the complexifications of their Poincaré return maps are conjugate under a real analytic conjugacy. The "real character" of the family corresponds to its Z2-equivariance in R^4, and it is expressed as the invariance of the real plane under the flow of the system, which in turn implies that the asymptotic expansion of the Poincaré map is real when the parameter is real. The pullback of the real plane after blow-up by the standard monoidal projection intersects the foliation in a real Möbius band. The blow-up technique also answers the question of the "realization" of a germ of a family unfolding a germ of a diffeomorphism with a fixed point of multiplier −1 and codimension one as the semi-monodromy map of a generic family unfolding a weak focus of order one. To study the orbit space of the Poincaré map, we adopt Glutsyuk's point of view, since the dynamics is linearizable near the singular points: for real values of the parameter, our approach, a classical one, uses a geometric method, namely a change of coordinate (an "unrolling" coordinate) in which the dynamics becomes much simpler. The price to pay is that the local geometry of the ambient complex plane becomes a Riemann surface, on which two notions of translation are defined. After taking the quotient by the lift of the dynamics we obtain the orbit space, which turns out to be the union of three complex tori together with the singular points (the resulting space is non-Hausdorff). The translations, the real character of the Poincaré map, and the fact that this map is a square relate the different components of the "Glutsyuk modulus". This property therefore implies that only one component of the Glutsyuk invariant is independent.
Abstract:
The projection method and Sasaki's variational approach are two techniques for obtaining a divergence-free vector field from an arbitrary initial field. For a high-altitude wind velocity, a velocity field on a staggered grid is generated above a topography given by an analytic function. The Cartesian approach known as the Embedded Boundary Method is used to solve a Poisson equation, arising from the projection, on an irregular domain with mixed boundary conditions. The resulting solution corrects the initial field so as to obtain a field that satisfies the law of mass conservation and also accounts for the effects of the terrain geometry. The velocity field generated in this way will be used to propagate a forest fire over the topography by the level-set method. The algorithm is described for the two- and three-dimensional cases, and convergence tests are carried out.
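A minimal sketch of the projection step is given below for a doubly periodic grid with an FFT-based Poisson solve; the setting described in the abstract (staggered grid, embedded boundaries, mixed boundary conditions) is considerably more involved.

    import numpy as np

    def project_divergence_free(u, v):
        """Helmholtz projection on a periodic grid with unit spacing:
        solve lap(phi) = div(u, v) spectrally, then subtract grad(phi)."""
        n, m = u.shape
        kx = 2j * np.pi * np.fft.fftfreq(m)
        ky = 2j * np.pi * np.fft.fftfreq(n)
        KX, KY = np.meshgrid(kx, ky)
        div_hat = KX * np.fft.fft2(u) + KY * np.fft.fft2(v)
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                   # avoid dividing by zero at the mean mode
        phi_hat = div_hat / k2
        phi_hat[0, 0] = 0.0
        u_corr = u - np.real(np.fft.ifft2(KX * phi_hat))
        v_corr = v - np.real(np.fft.ifft2(KY * phi_hat))
        return u_corr, v_corr

    # The corrected field is divergence-free up to round-off error.
    rng = np.random.default_rng(0)
    u, v = rng.standard_normal((2, 64, 64))
    u0, v0 = project_divergence_free(u, v)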
Abstract:
Subdivision surfaces provide a promising alternative method in geometric modeling, and have advantages over the classical trimmed-NURBS representation, in particular for modeling piecewise smooth surfaces. In this thesis, we consider the problem of geometric operations on subdivision surfaces, under the strict requirement of topologically correct form. Since this problem can be ill-conditioned, we propose an approach for managing the uncertainty that exists in geometric computation. We require exact topological information when considering the robustness of geometric operations on solid models, and it becomes clear that the problem can be ill-conditioned in the presence of the uncertainty that is ubiquitous in the data. We therefore propose an interactive approach for managing the uncertainty of geometric operations, within a computational framework based on the IEEE arithmetic standard and subdivision-surface modeling. An algorithm for the planar-cut problem is then presented whose goal is to satisfy the topological requirement mentioned above.
Abstract:
Exercises and solutions in LaTeX
Abstract:
Exercises and solutions in PDF
Abstract:
Exercises and solutions in LaTeX
Abstract:
Exercises and solutions in PDF
Abstract:
Diffusion Tensor Imaging (DTI) is a new magnetic resonance imaging modality capable of producing quantitative maps of the microscopic natural displacements of water molecules that occur in brain tissue as part of the physical diffusion process. The technique has become a powerful tool in the investigation of brain structure and function because it allows for in vivo measurement of white matter fiber orientation. The application of DTI in clinical practice requires specialized processing and visualization techniques to extract and represent the acquired information in a comprehensible manner. Tracking techniques infer patterns of continuity in the brain by following, step by step, the paths of a set of particles dropped into a vector field. In this way, white matter fiber maps can be obtained.
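The step-wise tracking idea can be sketched in a few lines for a 2D field, using fixed-step Euler integration and bilinear interpolation; real DTI tractography follows the principal eigenvector of the 3D diffusion tensor and uses stopping criteria (such as anisotropy thresholds) omitted here.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def track_fiber(vx, vy, seed, step=0.5, n_steps=200):
        """Follow a particle through the vector field (vx, vy) from `seed`
        (a (y, x) position) with fixed-step Euler integration."""
        path = [np.asarray(seed, dtype=float)]
        for _ in range(n_steps):
            y, x = path[-1]
            # Bilinearly interpolate the field at the current position.
            dy = map_coordinates(vy, [[y], [x]], order=1)[0]
            dx = map_coordinates(vx, [[y], [x]], order=1)[0]
            norm = np.hypot(dx, dy)
            if norm < 1e-8:             # stop where the field gives no direction
                break
            path.append(path[-1] + step * np.array([dy, dx]) / norm)
        return np.array(path)

    # Toy field: circular flow around the grid center.
    n = 64
    yy, xx = np.mgrid[0:n, 0:n] - n / 2.0
    vx, vy = -yy, xx
    fiber = track_fiber(vx, vy, seed=(32.0, 48.0))  # traces a circular path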