73 results for Bivariate geometric distributions
Abstract:
Ionic polymer-metal composites (IPMC), piezoelectric polymer composites and nematic elastomer composites are materials that exhibit characteristics of both sensors and actuators. Large deformation and curvature are observed in these systems when an electric potential is applied. The effects of geometric non-linearity due to the charge-induced motion in these materials are poorly understood. In this paper, a coupled model for understanding the behavior of an ionic polymer beam undergoing large deformation and large curvature is presented. Maxwell's equations and charge transport equations are considered, which couple the distribution of the ion concentration and the pressure gradient along the length of a cantilever beam with interdigital electrodes. A nonlinear constitutive model is derived that accounts for the visco-elasto-plastic behavior of these polymers, based on the hypothesis that the presence of electrical charge stretches or contracts bonds, which gives rise to electric-field-dependent softening or hardening. Polymer chain orientation, in a statistical sense, plays a role in such softening or hardening. Elementary beam kinematics with large curvature is considered. A model for understanding the deformation due to electrostatic repulsion between asymmetrical charge distributions across the cross-sections is presented. Experimental evidence that silver (Ag) nanoparticle-coated IPMCs can be used for energy harvesting is reported. An IPMC strip is vibrated in different environments and the electric power delivered to a resistive load is measured. The electrical power generated was observed to vary with the environment, with maximum power being generated when the strip is in a wet state. IPMC-based energy harvesting systems have potential applications in tidal wave energy harvesting and in harvesting residual environmental energy to power MEMS and NEMS devices.
Abstract:
This paper presents a novel method of representing rotation, and its application to representing the ranges of motion of coupled joints in the human body, using planar maps. The present work focuses on the viability of this representation for situations that have relied on maps on a unit sphere. Maps on a unit sphere have been used in diverse applications such as the Gauss map, visibility maps, and the axis-angle and Euler-angle representations of rotation. Computations on a spherical surface are difficult and computationally expensive, and all of the above applications suffer from problems associated with singularities at the poles. There are methods to represent the ranges of motion of such joints using two-dimensional spherical polygons. The present work proposes to use a “cube” of multiple planar domains instead of a single spherical domain to achieve the above objective. The parameterization on the planar domains is easy to obtain and to convert to spherical coordinates. Further, there is no localized and extreme distortion of the parameter space, which makes the computations robust. The representation has been compared with the spherical representation in terms of computational ease and issues related to singularities. Methods have been proposed to represent joint range of motion and coupled degrees of freedom for various joints in digital human models (such as the shoulder, wrist and fingers). A novel method has been proposed to represent twist in addition to the existing swing-swivel representation.
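The appeal of a multiple-planar-domain “cube” over spherical coordinates can be illustrated with the standard cube-map projection, a minimal sketch in Python (the face indexing and scaling conventions here are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

def cube_map(v):
    """Project a unit vector onto a face of an axis-aligned cube.

    Returns (face, u, t): the dominant-axis face index (0..5, one per
    signed axis) and planar coordinates u, t in [-1, 1] on that face.
    This is the standard cube-map convention, used here only to
    illustrate a singularity-free planar parameterization.
    """
    v = np.asarray(v, dtype=float)
    axis = int(np.argmax(np.abs(v)))          # dominant axis: 0=x, 1=y, 2=z
    sign = 1.0 if v[axis] >= 0 else -1.0
    face = 2 * axis + (0 if sign > 0 else 1)
    # the two remaining coordinates, scaled so the face spans [-1, 1]^2
    others = [i for i in range(3) if i != axis]
    u, t = v[others[0]] / abs(v[axis]), v[others[1]] / abs(v[axis])
    return face, u, t

# A direction near the +z pole lands well inside a single planar face,
# with no singularity, unlike spherical (theta, phi) coordinates.
print(cube_map([0.1, 0.2, 0.97]))
```

The parameterization is piecewise linear on each face, so distortion stays bounded everywhere, which is the robustness property the abstract refers to.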
Abstract:
We give an efficient randomized algorithm to construct a box representation of any graph G on n vertices in $1.5 (\Delta + 2) \ln n$ dimensions, where $\Delta$ is the maximum degree of G. We also show that $\mathrm{box}(G) \le (\Delta + 2) \ln n$ for any graph G. Our bound is tight up to a factor of $\ln n$. We also show that our randomized algorithm can be derandomized to get a polynomial-time deterministic algorithm. Though our general upper bound is in terms of the maximum degree $\Delta$, we show that for almost all graphs on n vertices, the boxicity is upper-bounded by $c (d_{av} + 1) \ln n$, where $d_{av}$ is the average degree and c is a small constant. Also, we show that for any graph G, $\mathrm{box}(G) \le \sqrt{8 n d_{av} \ln n}$, which is tight up to a factor of $b \sqrt{\ln n}$ for a constant b.
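As context, a graph has a box representation in k dimensions when it is the intersection graph of axis-parallel boxes in $\mathbb{R}^k$, and the boxicity $\mathrm{box}(G)$ is the least such k. A minimal sketch of verifying a candidate representation (the example boxes and helper names are our own, not the paper's randomized construction):

```python
import itertools

def boxes_intersect(b1, b2):
    """Axis-parallel boxes given as lists of (lo, hi) intervals per dim;
    they intersect iff their intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def intersection_graph(boxes):
    """Edge set of the intersection graph of a family of boxes."""
    return {(i, j) for i, j in itertools.combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[i], boxes[j])}

# The 4-cycle C4 has boxicity 2: it is the intersection graph of these
# rectangles in the plane, but of no family of intervals on the line.
rects = [
    [(0, 1), (0, 3)],   # vertex 0
    [(0, 3), (0, 1)],   # vertex 1
    [(2, 3), (0, 3)],   # vertex 2
    [(0, 3), (2, 3)],   # vertex 3
]
print(intersection_graph(rects))   # the four cycle edges only
```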
Abstract:
We consider evolving exponential random geometric graphs (RGGs) in one dimension and characterize the time-dependent behavior of some of their topological properties. We consider two evolution models, studying one of them in detail while providing a summary of the results for the other. In the first model, the inter-nodal gaps evolve according to an exponential AR(1) process, which makes the stationary distribution of the node locations exponential. For this model we obtain the one-step conditional connectivity probabilities and extend them to the k-step case. Both finite and asymptotic analyses are given. We then obtain the k-step connectivity probability conditioned on the network being disconnected. We also derive the pmf of the first passage time for a connected network to become disconnected. We then describe a random birth-death model in which, at each instant, the node locations evolve according to an AR(1) process; in addition, a random node is allowed to die while giving birth to a node at another location. We derive properties similar to those above.
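One standard construction of an exponential AR(1) is the Gaver-Lewis EAR(1) process, whose stationary marginal is exactly exponential; a simulation sketch under that assumption (the paper's gap process may differ in its details):

```python
import numpy as np

def ear1(n_steps, rho=0.5, lam=1.0, rng=None):
    """Gaver-Lewis EAR(1): X_t = rho*X_{t-1} + I_t*E_t, where I_t is
    Bernoulli(1 - rho) and E_t ~ Exp(rate lam).  One can check via
    Laplace transforms that the stationary marginal is Exp(rate lam),
    i.e. the gaps stay exponential with mean 1/lam at every step.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.exponential(1.0 / lam)          # start in stationarity
    out = np.empty(n_steps)
    for t in range(n_steps):
        innov = rng.exponential(1.0 / lam) if rng.random() < 1 - rho else 0.0
        x = rho * x + innov
        out[t] = x
    return out

gaps = ear1(200_000, rho=0.5, lam=1.0)
print(gaps.mean())   # close to 1.0, the stationary exponential mean
```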
Abstract:
A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing the constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process brings down the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort. (C) 2011 Elsevier Ltd. All rights reserved.
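The geometry of the X-grid, pivots at the intersections of constant-composition radial lines and constant-mass lines of slope -1, can be sketched as follows (a layout illustration only, under our own naming; the discretization scheme itself involves more than pivot placement):

```python
import numpy as np

def x_grid_pivots(compositions, masses):
    """Pivot locations of an X-grid in the 2-d space of particle
    constituents (x1, x2): intersections of constant-composition radial
    lines x1/(x1 + x2) = c with constant-mass lines x1 + x2 = m (which
    have slope -1 in the (x1, x2) plane)."""
    return np.array([[c * m, (1.0 - c) * m]
                     for m in masses for c in compositions])

# Three radial lines (25%, 50%, 75% of component 1) crossed with
# total-mass lines m = 1, 2, 4; adding or removing mass lines at the
# boundaries expands or contracts this domain as the population evolves.
pivots = x_grid_pivots([0.25, 0.5, 0.75], [1.0, 2.0, 4.0])
print(pivots)
```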
Abstract:
We study the distribution of the first passage time for Lévy-type anomalous diffusion. A fractional Fokker-Planck equation framework is introduced. For the zero-drift case, using fractional calculus, an explicit analytic solution for the first passage time density function is given in terms of Fox H-functions. The asymptotic behaviour of the density function is discussed. For the nonzero-drift case, we obtain an expression for the Laplace transform of the first passage time density function, from which the mean first passage time and variance are derived.
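For the classical diffusive limit of this family (Brownian motion, rather than a Lévy flight), the mean first passage time of a drifted path to a barrier b is b/v, which a Monte Carlo sketch can check numerically (parameter values are illustrative; the Lévy case treated in the paper would replace the Gaussian increments with stable ones):

```python
import numpy as np

def first_passage_times(n_paths=4000, barrier=2.0, drift=1.0,
                        dt=0.01, t_max=20.0, seed=1):
    """Monte Carlo first passage times of drifted Brownian motion
    dX = v dt + dW from 0 to the level `barrier`, with discrete
    monitoring on a grid of step dt."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    increments = drift * dt + rng.normal(0.0, np.sqrt(dt),
                                         size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    crossed = paths >= barrier
    hit = crossed.any(axis=1)
    first_idx = crossed.argmax(axis=1)      # index of first crossing
    return (first_idx[hit] + 1) * dt

fpt = first_passage_times()
# For drifted BM the mean first passage time is barrier/drift = 2.0;
# the estimate lands close to that, up to discretization bias.
print(fpt.mean())
```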
Abstract:
In this paper, we present a new algorithm for learning oblique decision trees. Most current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning a decision tree in a top-down fashion. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm uses a strategy for assessing the hyperplanes in such a way that the geometric structure in the data is taken into account. At each node of the decision tree, we find the clustering hyperplanes for both classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present some analysis to show that the angle bisectors of the clustering hyperplanes that we use as split rules at each node are solutions of an interesting optimization problem, and hence argue that this is a principled method of learning a decision tree.
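The split rule can be made concrete: after normalizing the two clustering hyperplanes $w_1 \cdot x + b_1 = 0$ and $w_2 \cdot x + b_2 = 0$ by their norms, the angle bisectors are $(\hat w_1 \pm \hat w_2) \cdot x + (\hat b_1 \pm \hat b_2) = 0$, the loci of points equidistant from both planes. A minimal sketch (function and variable names are our own):

```python
import numpy as np

def angle_bisectors(w1, b1, w2, b2):
    """Angle bisectors of the hyperplanes w1.x + b1 = 0 and w2.x + b2 = 0.

    Normalizing each plane by ||w||, the two bisectors are where the
    signed distances to the planes are equal or opposite.  Returns
    ((wa, ba), (wb, bb)) for the two bisecting hyperplanes.
    """
    w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    u1, c1 = w1 / n1, b1 / n1
    u2, c2 = w2 / n2, b2 / n2
    return (u1 + u2, c1 + c2), (u1 - u2, c1 - c2)

# The bisectors of the coordinate axes x = 0 and y = 0 are the lines
# y = -x and y = x (up to scaling of the normal vector).
(wa, ba), (wb, bb) = angle_bisectors([1, 0], 0.0, [0, 1], 0.0)
print(wa, wb)   # [1. 1.] and [ 1. -1.]
```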
Abstract:
Given two independent Poisson point processes $\Phi^{(1)}, \Phi^{(2)}$ in $\mathbb{R}^d$, the AB Poisson Boolean model is the graph with the points of $\Phi^{(1)}$ as vertices and with edges between any pair of points for which the intersection of balls of radius 2r centered at these points contains at least one point of $\Phi^{(2)}$. This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all $d \ge 2$ and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities $n$ and $\tau n$ in the unit cube. The AB random geometric graph is defined as above, but with balls of radius r. We derive a weak law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
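The AB random geometric graph construction can be sketched directly: two points of the first process are joined when some point of the second process lies within distance r of both. A simulation sketch on the unit square (parameter values and function names are illustrative):

```python
import numpy as np
from itertools import combinations

def ab_rgg(n, tau, r, seed=0):
    """AB random geometric graph on the unit square.

    Vertices are the points of a Poisson process Phi1 of intensity n;
    i, j are joined iff some point of an independent process Phi2
    (intensity tau*n) lies within distance r of both, i.e. the balls
    B(x_i, r) and B(x_j, r) share a witness point of Phi2.
    """
    rng = np.random.default_rng(seed)
    phi1 = rng.random((rng.poisson(n), 2))
    phi2 = rng.random((rng.poisson(tau * n), 2))
    # distance from every Phi1 point to every Phi2 point
    d = np.linalg.norm(phi1[:, None, :] - phi2[None, :, :], axis=2)
    near = d <= r                              # shape (|Phi1|, |Phi2|)
    edges = {(i, j) for i, j in combinations(range(len(phi1)), 2)
             if np.any(near[i] & near[j])}
    return phi1, phi2, edges

phi1, phi2, edges = ab_rgg(n=100, tau=1.0, r=0.15)
print(len(phi1), len(phi2), len(edges))
```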
Abstract:
The solution of a bivariate population balance equation (PBE) for aggregation of particles necessitates covering a large 2-d domain. A correspondingly large number of discretized equations for particle populations on pivots (representative sizes for bins) are solved, although in the end only a relatively small number of pivots are found to participate in the evolution process. In the present work, we initiate the solution of the governing PBE on a small set of pivots that can represent the initial size distribution. New pivots are added to expand the computational domain in the directions in which the evolving size distribution advances. A self-sufficient set of rules is developed to automate the addition of pivots, taken from an underlying X-grid formed by the intersection of lines of constant composition and constant particle mass. In order to test the robustness of the rule-set, simulations carried out with pivotwise expansion of the X-grid are compared with those obtained using sufficiently large fixed X-grids, for a number of composition-independent and composition-dependent aggregation kernels and initial conditions. The two techniques lead to identical predictions, with the former requiring only a fraction of the computational effort. The rule-set automatically reduces aggregation of particles of the same composition to a 1-d problem. A midway change in the direction of expansion of the domain, effected by the addition of particles of a different mean composition, is captured correctly by the rule-set. The evolving shape of a computational domain carries with it the signature of the aggregation process, which can be insightful in complex and time-dependent aggregation conditions. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
This work aims at the dimensional reduction of non-linear isotropic hyperelastic plates in an asymptotically accurate manner. The problem is both geometrically and materially non-linear. The geometric non-linearity is handled by allowing for finite deformations and generalized warping, while the material non-linearity is incorporated through a hyperelastic material model. The development, based on the Variational Asymptotic Method (VAM), with moderate strains and a very small ratio of thickness to the shortest wavelength of the deformation along the plate reference surface as the small parameters, begins with three-dimensional (3-D) non-linear elasticity and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a two-dimensional (2-D) plate analysis. The major contributions of this paper are the derivation of closed-form analytical expressions for the warping functions and stiffness coefficients, and a set of recovery relations to express approximately the 3-D displacement, strain and stress fields. Consistent with the 2-D non-linear constitutive laws, a 2-D plate theory and the corresponding finite element program have been developed. Validation of the present theory is carried out with a standard test case and the results match well. Distributions of 3-D results are provided for another test case. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
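The hop-distance approximation can be explored with a small simulation: build a random geometric graph, compute BFS hop counts from an anchor, and compare them with the true Euclidean distances (a sketch, not the paper's localization algorithm; by the triangle inequality along any path, a node h hops away lies within h·r of the anchor):

```python
import numpy as np
from collections import deque

def hop_counts(points, radius, source=0):
    """BFS hop distance from `source` in the random geometric graph
    with connection radius `radius` (edge iff Euclidean distance <= radius).
    Returns (hops, euclid): hop counts (-1 if unreachable) and the
    Euclidean distances of all nodes from the source."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    adj = (d <= radius) & ~np.eye(len(points), dtype=bool)
    hops = np.full(len(points), -1)
    hops[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(adj[u]):
            if hops[v] < 0:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops, d[source]

rng = np.random.default_rng(0)
pts = rng.random((500, 2))            # dense uniform deployment
hops, dist = hop_counts(pts, radius=0.1)
reached = hops > 0
# In a dense deployment, Euclidean distance grows roughly linearly
# with hop count, which is the approximation the paper analyzes.
print(dist[reached].max(), hops[reached].max())
```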
Abstract:
We propose a distribution-free approach to the study of random geometric graphs. The distribution of vertices follows a Poisson point process with intensity function $n f(\cdot)$, where $n \in \mathbb{N}$ and f is a probability density function on $\mathbb{R}^d$. A vertex located at x connects via directed edges to other vertices that are within a cut-off distance $r_n(x)$. We prove strong law results for (i) the critical cut-off function so that, almost surely, the graph does not contain any node with out-degree zero for sufficiently large n, and (ii) the maximum and minimum vertex degrees. We also provide a characterization of the cut-off function for which the number of nodes with out-degree zero converges in distribution to a Poisson random variable. We illustrate this result for a class of densities with compact support that have at most polynomial rates of decay to zero. Finally, we state a sufficient condition for an enhanced version of the above graph to be almost surely connected eventually.
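The critical cut-off regime can be illustrated in the simplest setting of a uniform density and a constant cut-off: with r near sqrt(log n / (pi n)), the expected number of out-degree-zero (isolated) nodes stays of constant order, which is the regime in which a Poisson limit appears. A simulation sketch (the constant cut-off is a simplifying assumption; the paper allows a location-dependent $r_n(x)$):

```python
import numpy as np

def isolated_node_count(n, r, seed=0):
    """Number of out-degree-zero nodes in a geometric graph on n uniform
    points in the unit square with a constant cut-off r: a node is
    isolated when its nearest neighbor is farther than r away."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return int(np.sum(d.min(axis=1) > r))

# At the critical scale the isolated-node count stays small even as n
# grows (boundary effects inflate it somewhat on the unit square).
n = 2000
r = np.sqrt(np.log(n) / (np.pi * n))
print(isolated_node_count(n, r))
```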
Abstract:
The problem of identifying user intent has received considerable attention in recent years, particularly in the context of improving the search experience via query contextualization. Intent can be characterized by multiple dimensions, which are often not observed from query words alone. Accurate identification of intent from query words remains a challenging problem, primarily because it is extremely difficult to discover these dimensions. The problem is often significantly compounded by the lack of representative training samples. We present a generic, extensible framework for learning a multi-dimensional representation of user intent from query words. The approach models the latent relationships between facets using a tree-structured distribution, which leads to an efficient and convergent algorithm, FastQ, for identifying the multi-faceted intent of users based on just the query words. We also incorporate WordNet to extend the system's capabilities to queries containing words that do not appear in the training data. Empirical results show that FastQ yields accurate identification of intent when compared to a gold standard.