945 results for CONVEX
Abstract:
Geometric packing problems can be formulated mathematically as constrained optimization problems, but finding a good solution is a challenging task. The more complicated the geometry of the container or of the objects to be packed, the more complex the non-penetration constraints become. In this work we propose the use of a physics engine that simulates a system of colliding rigid bodies as a tool to resolve interpenetration conflicts and to optimize configurations locally. We develop an efficient and easy-to-implement physics engine specialized for collision detection and contact handling. During the development of this engine, a number of novel algorithms for distance calculation and intersection volume were designed and implemented, which are presented in this work. They are highly specialized to provide fast responses for cuboids and triangles as input geometry, while the concepts they are based on can easily be extended to other convex shapes. Especially noteworthy in this context is our ε-distance algorithm, a novel algorithm that is not only very robust and fast but also compact in its implementation. Several state-of-the-art third-party implementations are presented, and we show that our implementations beat them in runtime and robustness. The packing algorithm that sits on top of the physics engine is a Monte Carlo based approach for packing cuboids into a container described by a triangle soup. We give an implementation for the SAE J1100 variant of the trunk packing problem, compare it with several established approaches, and show that it gives better results in less time than these existing implementations.
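The ε-distance algorithm itself is not reproduced in the abstract. As a minimal illustrative sketch of the simplest such distance query, the Euclidean distance between two axis-aligned boxes can be computed per axis (an assumption for illustration; the engine described above handles arbitrarily oriented cuboids and triangles):

```python
import math

def aabb_distance(min1, max1, min2, max2):
    """Euclidean distance between two axis-aligned boxes (0.0 if they overlap).
    Boxes are given by their min and max corner coordinates."""
    d2 = 0.0
    for lo1, hi1, lo2, hi2 in zip(min1, max1, min2, max2):
        # per-axis gap between the two intervals, clamped at zero on overlap
        gap = max(lo1 - hi2, lo2 - hi1, 0.0)
        d2 += gap * gap
    return math.sqrt(d2)
```

For example, two unit cubes whose facing sides are 3 apart along x have distance 3; overlapping boxes return 0.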
Abstract:
The purpose of this study is to analyse the regularity of a differential operator, the Kohn Laplacian, in two settings: the Heisenberg group and strongly pseudoconvex CR manifolds. The Heisenberg group is defined as a space of dimension 2n+1 equipped with a group product. It can be seen in two different ways: as a Lie group and as the boundary of the Siegel upper half-space. On the Heisenberg group there exists the tangential CR complex; from this we define its adjoint and the Kohn Laplacian. We then obtain estimates for the Kohn Laplacian and establish its solvability and hypoellipticity. To state L^p and Hölder estimates, we discuss homogeneous distributions. In the second part we work with a manifold M of real dimension 2n+1. We say that M is a CR manifold if certain properties are satisfied. Further, we say that a CR manifold M is strongly pseudoconvex if the Levi form defined on M is positive definite. Since we show that the Heisenberg group is a model for strongly pseudoconvex CR manifolds, we look for an osculating Heisenberg structure in a neighborhood of a point in M, and we want this structure to vary smoothly from point to point. To that end, we define normal coordinates and study their properties. We also examine different normal coordinates in the case of a real hypersurface with an induced CR structure. Finally, we define again the CR complex, its adjoint and the Laplacian operator on M, and we study these new operators, proving subelliptic estimates. For this we do not need M to be strongly pseudoconvex; we require only the weaker Z(q) and Y(q) conditions. This yields local regularity theorems for the Laplacian and shows its hypoellipticity on M.
Abstract:
In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystals, and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but rather convex polyhedra with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which makes it possible to compute the interfacial tension of liquid-vapor as well as solid-liquid interfaces with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions.
As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified, which are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
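The extrapolation to infinite volume described above can be sketched as a least-squares fit. The functional form below, a constant plus ln(L)/L² and 1/L² corrections for a box of linear size L, is an assumption for illustration only, not the thesis's exact scaling ansatz:

```python
import numpy as np

def extrapolate_tension(L, gamma_L):
    """Least-squares fit of gamma(L) = g_inf + a*ln(L)/L**2 + b/L**2
    and return the infinite-volume estimate g_inf."""
    L = np.asarray(L, dtype=float)
    # design matrix: constant term, logarithmic correction, analytic correction
    A = np.column_stack([np.ones_like(L), np.log(L) / L**2, 1.0 / L**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(gamma_L, dtype=float), rcond=None)
    return coeffs[0]
```

Given measured tensions at several box sizes, the fit separates the leading logarithmic correction from the constant and returns the extrapolated tension.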
Abstract:
OBJECTIVE: To retrospectively evaluate the craniofacial morphology of children with a complete unilateral cleft lip and palate treated with a 1-stage simultaneous cleft repair performed in the first year of life. METHODS: Cephalograms and extraoral profile photographs of 61 consecutively treated patients (42 boys, 19 girls) who had been operated on at 9.2 (SD, 2.0) months by a single experienced surgeon were analyzed at 11.4 (SD, 1.5) years. The noncleft control group comprised 81 children (43 boys and 38 girls) of the same ethnicity at the age of 10.4 (SD, 0.5) years. RESULTS: In children with cleft, the maxilla and mandible were retrusive, the palatal and mandibular planes were more open, and the sagittal maxillomandibular relationship was less favorable in comparison with noncleft control subjects. The soft tissues in patients with cleft reflected the retrusive morphology of the hard tissues: the subnasal and supramental regions were less convex, the profile was flatter, and the nasolabial angle was more acute relative to those of the control subjects. CONCLUSIONS: Craniofacial morphology after 1-stage repair deviated from that of noncleft control subjects. However, the degree of deviation was comparable to that found after treatment with alternative surgical protocols.
Abstract:
In 1983, M. van den Berg made his Fundamental Gap Conjecture about the difference between the first two Dirichlet eigenvalues (the fundamental gap) of any convex domain in the Euclidean plane. Recently, progress has been made in the case where the domains are polygons and, in particular, triangles. We examine the conjecture for triangles in hyperbolic geometry, though we seek an upper bound for the fundamental gap rather than a lower bound.
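For reference, the quantity in question and van den Berg's conjectured lower bound (established for convex Euclidean domains by Andrews and Clutterbuck in 2011) can be stated as:

```latex
\Gamma(\Omega) \;=\; \lambda_2(\Omega) - \lambda_1(\Omega) \;\ge\; \frac{3\pi^2}{D^2},
\qquad \Omega \subset \mathbb{R}^n \text{ convex with diameter } D,
```

where λ₁ and λ₂ are the first two Dirichlet eigenvalues of the Laplacian on Ω; the bound is sharp in the limit of thin, degenerate domains.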
Abstract:
Introduction: Spinal fusion is a widely and successfully performed strategy for the treatment of spinal deformities and degenerative diseases. The general approach has been to stabilize the spine with implants so that a solid bony fusion between the vertebrae can develop. However, new implant designs have emerged that aim at preserving or restoring the motion of the spinal segment. In addition to static load-sharing principles, these designs also require a profound knowledge of kinematic and dynamic properties to properly characterise the in vivo performance of the implants. Methods: To address this, an apparatus was developed that enables the intraoperative determination of the load–displacement behavior of spinal motion segments. The apparatus consists of a sensor-equipped distractor to measure the applied force between the transverse processes, and an optoelectronic camera to track the motion of the vertebrae and the distractor. In this intraoperative trial, measurements were made at four motion segments each in two patients with adolescent idiopathic scoliosis with right thoracic curves. Results: At a lateral bending moment of 5 N m, the mean flexibility of all eight motion segments was 0.18 ± 0.08°/N m on the convex side and 0.24 ± 0.11°/N m on the concave side. Discussion: The results agree with published data obtained from cadaver studies with and without axial preload. Intraoperatively acquired data from this method may serve as input for mathematical models and contribute to the development of new implants and treatment strategies.
Abstract:
BACKGROUND: Chronic neck pain after whiplash injury is caused by the cervical zygapophysial joints in 50% of patients. Diagnostic blocks of the nerves supplying the joints are performed using fluoroscopy. The authors' hypothesis was that the third occipital nerve can be visualized and blocked with an ultrasound-guided technique. METHODS: In 14 volunteers, the authors placed a needle under ultrasound guidance at the third occipital nerve on both sides of the neck. The puncture was made caudal and perpendicular to the 14-MHz transducer. In 11 volunteers, 0.9 ml of either local anesthetic or normal saline was applied in a randomized, double-blind, crossover manner. Anesthesia was assessed in the corresponding skin area by pinprick and cold testing. The position of the needle was verified by fluoroscopy. RESULTS: The third occipital nerve could be visualized in all subjects and showed a median diameter of 2.0 mm. Anesthesia failed after local anesthetic in only one case. There was neither anesthesia nor hyposensitivity after any of the saline injections. The C2-C3 joint, visualized in a transversal plane as a convex density, was identified correctly by ultrasound in 27 of 28 cases, and 23 needles were placed correctly in the target zone. CONCLUSIONS: The third occipital nerve can be visualized and blocked with an ultrasound-guided technique. The needles were positioned accurately in 82% of cases as confirmed by fluoroscopy; the nerve was blocked in 90% of cases. Because ultrasound is currently the only available technique to visualize this nerve, it seems to be a promising new method for block guidance instead of fluoroscopy.
Abstract:
We consider nonparametric missing data models for which the censoring mechanism satisfies coarsening at random and which allow complete observations on the variable X of interest. We show that, beyond some empirical process conditions, the only essential condition for efficiency of an NPMLE of the distribution of X is that the regions associated with incomplete observations on X contain enough complete observations. This is explained heuristically by describing the EM-algorithm. We prove identifiability of the self-consistency equation and efficiency of the NPMLE in order to make this statement rigorous. The usual kind of differentiability conditions in the proof are avoided by using an identity which holds for the NPMLE of linear parameters in convex models. We provide a bivariate censoring application in which the condition, and hence the NPMLE, fails, but where other estimators, not based on the NPMLE principle, are highly inefficient. It is shown how to slightly reduce the data so that the conditions hold for the reduced data. The conditions are verified for the univariate censoring, double censoring, and Ibragimov-Has'minski models.
Abstract:
A method is given for proving efficiency of an NPMLE, directly linked to empirical process theory. The conditions in general are appropriate consistency of the NPMLE, differentiability of the model, differentiability of the parameter of interest, local convexity of the parameter space, and a Donsker class condition for the class of efficient influence functions obtained by varying the parameters. For the case that the model is linear in the parameter and the parameter space is convex, as with most nonparametric missing data models, we show that the method leads to an identity for the NPMLE which almost says that the NPMLE is efficient and provides us straightforwardly with a consistency and efficiency proof. This identity is extended to an almost linear class of models which contains biased sampling models. To illustrate, the method is applied to the univariate censoring model, random truncation models, the interval censoring case I model, the class of parametric models, and a class of semiparametric models.
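For the univariate censoring model mentioned above, the NPMLE is the classical Kaplan-Meier product-limit estimator. A minimal sketch (illustrative only, not the paper's efficiency machinery):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function S(t).
    times: observed times; events: 1 = failure observed, 0 = right-censored.
    Returns a list of (t, S(t)) at each distinct failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        # gather all observations tied at time t
        while i < len(data) and data[i][0] == t:
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            out.append((t, surv))
        n_at_risk -= at_t
    return out
```

For example, with observations 1, 2, 2+, 3 (the "+" denoting a censored time), the estimate steps down to 0.75, then 0.5, then 0.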
Abstract:
PURPOSE: To test the hypothesis that the extension of areas with increased fundus autofluorescence (FAF) outside atrophic patches correlates with the rate of spread of geographic atrophy (GA) over time in eyes with age-related macular degeneration (AMD). METHODS: The database of the multicenter longitudinal natural history Fundus Autofluorescence in AMD (FAM) Study was reviewed for patients with GA recruited through the end of August 2003, with follow-up examinations within at least 1 year. Only eyes with sufficient image quality and with diffuse patterns of increased FAF surrounding atrophy were chosen. In standardized digital FAF images (excitation, 488 nm; emission, >500 nm), total size and spread of GA were measured. The convex hull (CH) of increased FAF, defined as the minimum polygon encompassing the entire area of increased FAF surrounding the central atrophic patches, was quantified at baseline. Statistical analysis was performed with the Spearman rank correlation coefficient (rho). RESULTS: Thirty-nine eyes of 32 patients were included (median age, 75.0 years; interquartile range [IQR], 67.8-78.9; median follow-up, 1.87 years; IQR, 1.43-3.37). At baseline, the median total size of atrophy was 7.04 mm2 (IQR, 4.20-9.88). The median size of the CH was 21.47 mm2 (IQR, 15.19-28.26). The median rate of GA progression was 1.72 mm2 per year (IQR, 1.10-2.83). The area of increased FAF around the atrophy (the difference between the CH and the total GA size at baseline) showed a positive correlation with GA enlargement over time (rho=0.60; P=0.0002). CONCLUSIONS: FAF characteristics that are not identified by fundus photography or fluorescein angiography may serve as a prognostic determinant in advanced atrophic AMD. As the FAF signal originates from lipofuscin (LF) in postmitotic RPE cells and since increased FAF indicates excessive LF accumulation, these findings would underscore the pathophysiological role of RPE-LF in AMD pathogenesis.
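The convex-hull quantification described above can be illustrated with a standard monotone-chain hull plus the shoelace area formula. This is a generic sketch of the geometric computation, not the study's image-analysis software:

```python
def convex_hull_area(points):
    """Area of the convex hull of a set of 2-D points
    (Andrew's monotone-chain hull followed by the shoelace formula)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    # lower hull plus upper hull, dropping the duplicated endpoints
    hull = half_hull(pts)[:-1] + half_hull(reversed(pts))[:-1]
    return 0.5 * abs(sum(hull[i][0] * hull[i - 1][1] - hull[i - 1][0] * hull[i][1]
                         for i in range(len(hull))))
```

Subtracting the measured atrophy area from the hull area gives the "area of increased FAF around the atrophy" used in the correlation.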
Abstract:
Several multiasset derivatives, such as basket options or options on the weighted maximum of assets, exhibit the property that their prices uniquely determine the underlying asset distribution. Related to this, we discuss how to retrieve these distributions from the corresponding derivative quotes. In contrast, the prices of exchange options do not uniquely determine the underlying distributions of asset prices, and the extent of this non-uniqueness can be characterised. The discussion is related to a geometric interpretation of multiasset derivatives as support functions of convex sets. Following this, various symmetry properties for basket, maximum and exchange options are discussed, along with their geometric interpretations and some decomposition results for more general payoff functions.
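The support-function view mentioned above can be made concrete: for a convex polytope K, the support function h_K(w) = max over x in K of ⟨w, x⟩ is attained at a vertex. A minimal sketch (illustrative only, not the paper's construction):

```python
import numpy as np

def support_function(vertices, w):
    """Support function h_K(w) of a convex polytope K given by its vertices.
    For a polytope, the maximum of <w, x> over K is attained at a vertex."""
    V = np.asarray(vertices, dtype=float)
    return float(np.max(V @ np.asarray(w, dtype=float)))
```

For the unit square, h at direction (1, 1) is 2 (attained at the corner (1, 1)), mirroring how a payoff linear on a convex set is determined by the set's extreme points.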
Abstract:
Nonlinear computational analysis of materials showing elasto-plasticity or damage relies on knowledge of their yield behavior and strengths under complex stress states. In this work, a generalized anisotropic quadric yield criterion is proposed that is homogeneous of degree one and takes a convex quadric shape with a smooth transition from ellipsoidal to cylindrical or conical surfaces. If in the case of material identification, the shape of the yield function is not known a priori, a minimization using the quadric criterion will result in the optimal shape among the convex quadrics. The convexity limits of the criterion and the transition points between the different shapes are identified. Several special cases of the criterion for distinct material symmetries such as isotropy, cubic symmetry, fabric-based orthotropy and general orthotropy are presented and discussed. The generality of the formulation is demonstrated by showing its degeneration to several classical yield surfaces like the von Mises, Drucker–Prager, Tsai–Wu, Liu, generalized Hill and classical Hill criteria under appropriate conditions. Applicability of the formulation for micromechanical analyses was shown by transformation of a criterion for porous cohesive-frictional materials by Maghous et al. In order to demonstrate the advantages of the generalized formulation, bone is chosen as an example material, since it features yield envelopes with different shapes depending on the considered length scale. A fabric- and density-based quadric criterion for the description of homogenized material behavior of trabecular bone is identified from uniaxial, multiaxial and torsional experimental data. Also, a fabric- and density-based Tsai–Wu yield criterion for homogenized trabecular bone from in silico data is converted to an equivalent quadric criterion by introduction of a transformation of the interaction parameters. 
Finally, a quadric yield criterion for lamellar bone at the microscale is identified from a nanoindentation study reported in the literature, thus demonstrating the applicability of the generalized formulation to the description of the yield envelope of bone at multiple length scales.
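As an illustration of how such a criterion degenerates to a classical surface, the isotropic von Mises special case can be evaluated directly. This is a sketch under that assumption; the generalized quadric criterion above carries anisotropy and additional parameters not shown here:

```python
import math

def von_mises_equivalent(s):
    """Von Mises equivalent stress from a 3x3 stress tensor s (nested lists)."""
    tr = (s[0][0] + s[1][1] + s[2][2]) / 3.0
    # deviatoric part: subtract the hydrostatic stress from the diagonal
    dev = [[s[i][j] - (tr if i == j else 0.0) for j in range(3)] for i in range(3)]
    j2 = 0.5 * sum(dev[i][j] * dev[i][j] for i in range(3) for j in range(3))
    return math.sqrt(3.0 * j2)

def yields(s, sigma_y):
    """Degree-one homogeneous yield function f(s) = sigma_eq / sigma_y;
    the material yields when f >= 1."""
    return von_mises_equivalent(s) / sigma_y >= 1.0
```

For uniaxial stress the equivalent stress equals the applied stress, so a 200 MPa load against a 250 MPa yield strength stays elastic.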
Abstract:
Greedy routing can be used in mobile ad-hoc networks as a geographic routing protocol. This paper proposes to use greedy routing also in overlay networks by positioning overlay nodes in a multi-dimensional Euclidean space. Greedy routing can only be applied when a routing decision makes progress towards the final destination. Our proposed overlay network is built such that there is always progress at each forwarding node. This is achieved by constructing at each node a so-called nearest neighbor convex set (NNCS). NNCSs can be used for various applications such as multicast routing, service discovery and Quality-of-Service routing. NNCS has been compared with Pastry, another topology-aware overlay network; NNCS achieves superior relative path stretch, indicating the near-optimality of its paths.
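Plain greedy forwarding, which the NNCS construction augments with a guarantee of progress, can be sketched as follows (the node names and graph structure are hypothetical):

```python
import math

def greedy_route(positions, neighbors, src, dst):
    """Greedy geographic routing: at each hop, forward to the neighbor closest
    to the destination, but only if it makes strict progress. Returns the path,
    or None when stuck in a local minimum (no neighbor is closer than we are)."""
    def dist(a, b):
        return math.dist(positions[a], positions[b])

    path, cur = [src], src
    while cur != dst:
        best = min(neighbors[cur], key=lambda n: dist(n, dst), default=None)
        if best is None or dist(best, dst) >= dist(cur, dst):
            return None  # a local minimum: exactly the failure NNCS is built to avoid
        cur = best
        path.append(cur)
    return path
```

In an NNCS overlay the `None` branch is never taken, because the neighbor sets are constructed so that some neighbor always makes progress.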
Abstract:
We propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. Our algorithm works by estimating the displacements from image patches to the (unknown) landmark positions and then integrating them via voting. The fundamental contribution is that we jointly estimate the displacements from all patches to multiple landmarks together, considering not only the training data but also geometric constraints on the test image. The various constraints constitute a convex objective function that can be minimized efficiently. Validated on three challenging datasets, our method achieves high accuracy in landmark detection and, combined with a statistical shape model, gives better performance in shape segmentation than state-of-the-art methods.
Abstract:
We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
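The closed-form solution above operates on the singular values of the noisy data matrix. As a sketch of the general SVD-thresholding structure, here is the classical soft-thresholding (shrinkage) operator; note that this is not the paper's polynomial thresholding operator, which by design shrinks less:

```python
import numpy as np

def svt(D, tau):
    """Singular-value soft-thresholding: shrink every singular value of D by tau,
    clamping at zero. Classical shrinkage operator, shown for illustration."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

Small singular values (noise directions) are zeroed out while large ones (the self-expressive, low-rank part) survive with reduced magnitude; a polynomial thresholding operator keeps the same structure but applies a different scalar map to each singular value.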