984 results for Distance convex simple graphs


Relevance:

30.00%

Publisher:

Abstract:

Given two independent Poisson point processes Φ^(1), Φ^(2) in ℝ^d, the AB Poisson Boolean model is the graph with the points of Φ^(1) as vertices and with an edge between any pair of points for which the intersection of the balls of radius 2r centered at these points contains at least one point of Φ^(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d ≥ 2 and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and τn in the unit cube. The AB random geometric graph is defined as above, but with balls of radius r. We derive a weak-law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
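The AB random geometric graph described above is easy to simulate. The sketch below is illustrative only, not code from the paper: the function name `ab_edges`, the unit-square window and the parameter values are assumptions, and the rule implemented is the radius-r version used for the connectivity results (two vertices are joined when some point of Φ^(2) lies within distance r of both).

```python
import numpy as np

def ab_edges(phi1, phi2, r):
    """AB random geometric graph: two points of phi1 are joined iff some
    point of phi2 lies within distance r of both of them."""
    phi1 = np.asarray(phi1, float)
    phi2 = np.asarray(phi2, float)
    # d[i, k] = distance from the i-th point of phi1 to the k-th point of phi2
    d = np.linalg.norm(phi1[:, None, :] - phi2[None, :, :], axis=2)
    close = d <= r                            # which phi2 points each vertex can reach
    edges = set()
    for i in range(len(phi1)):
        for j in range(i + 1, len(phi1)):
            if np.any(close[i] & close[j]):   # a common phi2 witness exists
                edges.add((i, j))
    return edges

# Two independent Poisson processes of intensities n and tau * n in the unit square
rng = np.random.default_rng(0)
n, tau = 50, 2.0
pts1 = rng.random((rng.poisson(n), 2))
pts2 = rng.random((rng.poisson(tau * n), 2))
E = ab_edges(pts1, pts2, r=0.1)
```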

Relevance:

30.00%

Publisher:

Abstract:

We report a novel and simple solution-based technique for depositing 2-D zinc oxide platelets at low temperature. Nanoplatelets that were mostly a-oriented, with a Lotgering orientation factor of 0.65, were obtained by locating a glass substrate at a distance of about 5 cm above the aqueous vapour of the boiling precursor. Experiments were carried out to optimize the coating parameters by varying the substrate position, the deposition duration and the pH of the precursor. X-ray diffraction studies confirmed the structure of the crystallites to be wurtzite. The morphology of the zinc oxide films and their blue light emission were observed using scanning electron microscopy and fluorescence spectroscopy, respectively.

Relevance:

30.00%

Publisher:

Abstract:

We consider a dense, ad hoc wireless network confined to a small region. The wireless network is operated as a single cell, i.e., only one successful transmission is supported at a time. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organize into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first motivate that for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc wireless network (described above) as a single cell, we study the hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path-loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterization of the optimal operating point.
Simulation results are provided comparing the performance of the optimal strategy derived here with some simple strategies for operating the network.

Relevance:

30.00%

Publisher:

Abstract:

We propose a distribution-free approach to the study of random geometric graphs. The distribution of vertices follows a Poisson point process with intensity function n f(·), where n ∈ ℕ and f is a probability density function on ℝ^d. A vertex located at x connects via directed edges to other vertices that are within a cut-off distance r_n(x). We prove strong-law results for (i) the critical cut-off function so that, almost surely, the graph does not contain any node with out-degree zero for sufficiently large n, and (ii) the maximum and minimum vertex degrees. We also provide a characterization of the cut-off function for which the number of nodes with out-degree zero converges in distribution to a Poisson random variable. We illustrate this result for a class of densities with compact support that have at most polynomial rates of decay to zero. Finally, we state a sufficient condition for an enhanced version of the above graph to be almost surely connected eventually.
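The out-degree-zero count studied here can be explored numerically with a short sketch. None of this is from the paper: the function name `out_degrees`, the beta-distributed point cloud and the particular cut-off function are all hypothetical choices for illustration.

```python
import numpy as np

def out_degrees(points, cutoff):
    """Directed graph: the vertex at x sends an edge to every other vertex
    within distance cutoff(x); returns the out-degree of each vertex."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                    # no self-loops
    radii = np.array([cutoff(x) for x in pts])
    return (d <= radii[:, None]).sum(axis=1)

# Poisson number of points from a non-uniform density f on [0, 1]^2,
# with a location-dependent cut-off r_n(x)
rng = np.random.default_rng(1)
pts = rng.beta(2.0, 2.0, size=(rng.poisson(200), 2))
deg = out_degrees(pts, cutoff=lambda x: 0.05 + 0.05 * np.linalg.norm(x - 0.5))
n_isolated = int((deg == 0).sum())                 # nodes with out-degree zero
```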

Relevance:

30.00%

Publisher:

Abstract:

We prove that every isometry from the unit disk Δ in ℂ, endowed with the Poincaré distance, to a strongly convex bounded domain Ω of class C^3 in ℂ^n, endowed with the Kobayashi distance, is the composition of a complex geodesic of Ω with either a conformal or an anti-conformal automorphism of Δ. As a corollary we obtain that every isometry for the Kobayashi distance from a strongly convex bounded domain of class C^3 in ℂ^n to a strongly convex bounded domain of class C^3 in ℂ^m is either holomorphic or anti-holomorphic.

Relevance:

30.00%

Publisher:

Abstract:

Our work is motivated by impromptu (or ``as-you-go'') deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where there is the data sink, e.g., the control center) and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor), which will send packets to the data sink, has to be placed. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate of the source is very low and that simple link-by-link scheduling is used, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
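As a toy illustration of why a placement boundary emerges, consider a drastically simplified one-dimensional caricature of the problem. This is not the paper's lattice model: here the state is just the gap since the last relay, the path ends with a fixed probability at each step, and the function name and parameter values are invented. Value iteration on this caricature yields a placement rule of threshold type:

```python
import numpy as np

def solve_relay_mdp(p_end=0.2, c_relay=5.0, hop_cost=lambda d: d ** 2,
                    d_max=40, n_iter=2000):
    """Total-cost value iteration.  State: gap d since the last placed relay.
    Each step the path ends with probability p_end (incurring the convex cost
    of the final hop); placing a relay costs c_relay and resets d to zero."""
    V = np.zeros(d_max + 1)
    for _ in range(n_iter):
        # expected cost-to-go of walking one more step with current gap d
        cont = np.array([p_end * hop_cost(d + 1)
                         + (1.0 - p_end) * V[min(d + 1, d_max)]
                         for d in range(d_max + 1)])
        V = np.minimum(c_relay + cont[0], cont)    # min over {place, skip}
    place = (c_relay + cont[0]) < cont             # True where placing is optimal
    return V, place

V, place = solve_relay_mdp()
threshold = int(np.argmax(place))   # smallest gap at which placing a relay is optimal
```

Because the hop cost is convex and the value function is non-decreasing in the gap, `place` is monotone: it becomes optimal to place exactly once the gap crosses `threshold`, mirroring the boundary structure of the optimal placement set described above.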

Relevance:

30.00%

Publisher:

Abstract:

In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large amount of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint-matching problem. Solving the constraint-matching problem is difficult for structured prediction, and we propose an efficient and effective label-switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and in avoiding poor local minima. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.

Relevance:

30.00%

Publisher:

Abstract:

The performance of Reynolds-averaged Navier-Stokes models is explored in the stagnation and wake regions of turbulent flows with relatively large Lagrangian length scales (generally larger than the scale of geometrical features) approaching small cylinders (both square and circular). The Reynolds number based on the effective cylinder (or wire) diameter is ReW ≤ 2.5 × 10³. The following turbulence models are considered: a mixing-length model; the standard Spalart and Allmaras (SA) model and the streamline curvature (and rotation) corrected SA (SARC); Secundov's νt-92; Secundov et al.'s two-equation νt-L; Wolfshtein's k-l model; the explicit algebraic stress model (EASM) of Abid et al.; the cubic model of Craft et al.; various linear k-ε models, including those with wall-distance-based damping functions; Menter SST; k-ω; and Spalding's LVEL model. The use of differential-equation distance functions (Poisson and Hamilton-Jacobi equation based) for palliative turbulence modeling purposes is explored. The performance of SA with these distance functions is also considered in the sharp convex geometry region of an airfoil trailing edge. For the cylinder, with ReW ≈ 2.5 × 10³, the mixing-length and k-l models give strong turbulence production in the wake region. However, in agreement with eddy-viscosity estimates, the LVEL and Secundov νt-92 models show relatively little cylinder influence on turbulence. On the other hand, the two-equation models (as does the one-equation SA) suggest the cylinder gives a strong turbulence deficit in the wake region. Also, for SA, an order-of-magnitude cylinder diameter decrease from ReW = 2500 to 250 surprisingly strengthens the cylinder's disruptive influence. Importantly, results for ReW ≪ 250 are virtually identical to those for ReW = 250, i.e., no matter how small the cylinder/wire, its influence does not vanish as it should. 
Similar tests for the Launder-Sharma k-ε, Menter SST and k-ω models show, in accordance with physical reality, the cylinder's influence diminishing, albeit slowly, with size. Results suggest distance functions palliate the SA model's erroneous trait and improve its predictive performance in wire wake regions. Also, results suggest that, along the stagnation line, such functions improve the SA, mixing-length, k-l and LVEL results. For the airfoil, with SA, the larger Poisson distance function increases the wake-region turbulence levels by just under 5%. © 2007 Elsevier Inc. All rights reserved.
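The Poisson-equation distance function mentioned above can be illustrated in one dimension, where one common construction is exact. The sketch below is an illustrative toy (a unit "channel" with walls at x = 0 and x = 1, not any of the paper's flow cases): solve the Poisson problem phi'' = -1 with phi = 0 on the walls, then recover the distance estimate d = -|phi'| + sqrt(phi'^2 + 2*phi), which here reproduces the true wall distance min(x, 1 - x).

```python
import numpy as np

# Poisson wall distance on a 1-D channel [0, 1]
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

# second-order finite-difference Laplacian on the interior nodes (Dirichlet walls)
A = (np.diag(-2.0 * np.ones(N - 2))
     + np.diag(np.ones(N - 3), 1)
     + np.diag(np.ones(N - 3), -1)) / h ** 2
phi = np.zeros(N)
phi[1:-1] = np.linalg.solve(A, -np.ones(N - 2))   # solve phi'' = -1

g = np.gradient(phi, h)                           # phi'
d = -np.abs(g) + np.sqrt(g ** 2 + 2.0 * phi)      # wall-distance estimate
err = np.max(np.abs(d - np.minimum(x, 1.0 - x)))  # deviation from true distance
```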

Relevance:

30.00%

Publisher:

Abstract:

Random field theory has been used to model spatially averaged soil properties, while geostatistics, which shares a common basis (the covariance function), has been used successfully to model and estimate natural resources since the 1960s. Geostatistics should therefore, in principle, be an efficient way to model soil spatial variability. On this basis, the paper presents an alternative approach for estimating the scale of fluctuation, or correlation distance, of a soil stratum by geostatistics. The procedure comprises four steps: calculating the experimental variogram from measured data; selecting a suitable theoretical variogram model; fitting the theoretical variogram to the experimental one; and substituting the parameters of the theoretical model obtained from the optimization into a simple, finite relationship between the correlation distance δ and the range a. The paper also gives eight typical expressions relating a and δ. Finally, a practical example is presented to illustrate the methodology.
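The first step, computing the experimental variogram, can be sketched directly from its definition. The following is a minimal 1-D version using the classical Matheron estimator; the function name and binning tolerance are illustrative choices, and fitting a theoretical variogram model to its output would follow as the second and third steps.

```python
import numpy as np

def experimental_variogram(x, z, lags, tol=0.5):
    """Matheron estimator: gamma(h) = sum over pairs of (z_i - z_j)^2 / (2 N(h)),
    taken over the pairs whose separation is within tol of the lag h."""
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    dist = np.abs(x[:, None] - x[None, :])
    sq = (z[:, None] - z[None, :]) ** 2
    gamma = []
    for hlag in lags:
        mask = np.triu(np.abs(dist - hlag) <= tol, k=1)   # count each pair once
        npairs = mask.sum()
        gamma.append(sq[mask].sum() / (2.0 * npairs) if npairs else np.nan)
    return np.array(gamma)
```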

Relevance:

30.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
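For item i), the lasso formulation referred to above can be solved by a simple proximal-gradient (ISTA) iteration. The sketch below is a generic textbook solver, not the dissertation's analysis; the step-size rule and iteration count are illustrative.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Proximal-gradient (ISTA) iteration for the lasso:
        minimize  0.5 * ||A x - b||_2^2 + lam * ||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size = 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x - t * (A.T @ (A @ x - b))        # gradient step on the smooth part
        x = np.sign(u) * np.maximum(np.abs(u) - t * lam, 0.0)   # soft threshold
    return x
```

With A equal to the identity the solution is explicit soft thresholding of b, which makes the shrinkage underlying the lasso error bounds easy to see.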

Relevance:

30.00%

Publisher:

Abstract:

Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.

This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.

A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.

This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.

Relevance:

30.00%

Publisher:

Abstract:

An account and review is given of simple methods for determining the operational parameters of fishing gear underwater, such as the tilt of the otter boards (outward or inward, forward or aft), the vertical height of the net, its horizontal spread, the angle of divergence at the bosom, the spread between wing tips, the angle of inclination of the dan lenos and butterfly, and the slope of the legs and sweep-line. The relationship between otter-board spread and the vertical height of the net was found to be generally linear. The possibility of regulating the vertical height of the net (dependent variate) and the spread of the otter boards (independent variate) to increase fishing efficiency is discussed. The angle of attack of the oval otter boards used during the operations remains undetermined; however, it is explained how the best angle of attack for increasing gear efficiency can be obtained by regulating the ratio of depth to warp for a given net. The inadequacy of mere catch-per-hour-of-trawling indices for comparing the relative efficiency of trawls in gear research studies is indicated. The importance of estimating the operational parameters, and their application both to commercial fisheries, depending on the distribution pattern of fish, and to gear research, is discussed. The efficiency of the jelly bottle method is compared statistically with observations made on the trawl gear underwater with instruments.

Relevance:

30.00%

Publisher:

Abstract:

Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as lasso. © 2011 IEEE.
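A key computational ingredient of such group-sparsity penalties is the proximal operator of a sum of group norms, which shrinks each coefficient group toward zero as a unit, so an entire group survives or dies together. The sketch below is the standard group soft-thresholding step; the grouping of coefficients into pairs (e.g. wavelet parent-child pairs) is an assumed example, not the paper's exact construction.

```python
import numpy as np

def group_soft_threshold(x, groups, thresh):
    """Proximal operator of  thresh * sum_g ||x_g||_2 :
    each group of coefficients is scaled down by its norm, and groups
    whose norm falls below thresh are set to zero entirely."""
    x = np.asarray(x, float)
    out = np.zeros_like(x)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > thresh:
            out[g] = x[g] * (1.0 - thresh / ng)   # shrink the whole group
    return out
```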

Relevance:

30.00%

Publisher:

Abstract:

A simple and general design procedure is presented for the polarisation diversity of arbitrary conformal arrays; this procedure is based on the mathematical framework of geometric algebra and can be solved optimally using convex optimisation. Aside from being simpler and more direct than other derivations in the literature, this derivation is also entirely general in that it expresses the transformations in terms of rotors in geometric algebra which can easily be formulated for any arbitrary conformal array geometry. Convex optimisation has a number of advantages; solvers are widespread and freely available, the process generally requires a small number of iterations and a wide variety of constraints can be readily incorporated. The study outlines a two-step approach for addressing polarisation diversity in arbitrary conformal arrays: first, the authors obtain the array polarisation patterns using geometric algebra and secondly use a convex optimisation approach to find the optimal weights for the polarisation diversity problem. The versatility of this approach is illustrated via simulations of a 7×10 cylindrical conformal array. © 2012 The Institution of Engineering and Technology.
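As a minimal illustration of the second (convex-optimisation) step, consider the simplest such weight design: minimum-norm weights under a unit-gain look-direction constraint for a uniform linear array. This toy convex problem has a closed-form solution and ignores the study's geometric-algebra polarisation patterns and conformal geometry entirely; the array size and look direction below are arbitrary.

```python
import numpy as np

def min_norm_weights(steering):
    """Solve  min ||w||_2  subject to  w^H a = 1  (a convex problem with
    closed-form solution a / ||a||^2, i.e. matched filtering)."""
    a = np.asarray(steering, complex)
    return a / np.vdot(a, a).real

# Steering vector for a uniform linear array: half-wavelength spacing,
# look direction theta off broadside
n, theta = 10, np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))
w = min_norm_weights(a)
```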

Relevance:

30.00%

Publisher:

Abstract:

Preliminary lifetime values have been measured for a number of near-yrast states in the odd-A transitional nuclei 107Cd and 103Pd. The reaction used to populate the nuclei of interest was 98Mo(12C, 3nxα)107Cd, 103Pd, with the beam delivered by the tandem accelerator of the Wright Nuclear Structure Laboratory at an incident beam energy of 60 MeV. Our experiment was aimed at the investigation of collective excitations built on the unnatural-parity νh11/2 orbital, specifically by measuring the B(E2) values of decays from the excited levels built on this intrinsic structure, using the Doppler recoil distance method. We report lifetimes and associated transition probabilities for decays from the 15/2- and 19/2- states in 107Cd and the first measurement of the 15/2- state in 103Pd. These results suggest that neither a simple rotational nor a vibrational interpretation is sufficient to explain the observed structures. © 2006 American Institute of Physics.