30 results for Almost-sectional paths

at Indian Institute of Science - Bangalore - India


Relevance:

80.00%

Publisher:

Abstract:

We study, by means of experiments and Monte Carlo simulations, the scattering of light in random media to determine the distance up to which photons travel along almost undeviated paths within a scattering medium, and are therefore capable of casting a shadow of an opaque inclusion embedded within the medium. Such photons are isolated by polarisation discrimination, wherein the plane of linear polarisation of the input light is continuously rotated and the polarisation-preserving component of the emerging light is extracted by means of a Fourier transform. This technique is a software implementation of lock-in detection. We find that images may be recovered to a depth far in excess of that predicted by the diffusion theory of photon propagation. To understand our experimental results, we perform Monte Carlo simulations to model the random walk behaviour of the multiply scattered photons. We present a new definition of a diffusing photon in terms of the memory of its initial direction of propagation, which we then quantify in terms of an angular correlation function. This redefinition yields the penetration depth of the polarisation-preserving photons. Based on these results, we have formulated a model of shadow formation in a turbid medium, the predictions of which are in good agreement with our experimental results.
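The polarisation-discrimination step lends itself to a brief numerical illustration. The sketch below (synthetic data with hypothetical amplitudes, not the authors' code) shows the software lock-in idea: as the input polarisation is rotated, the polarisation-preserving photons modulate the detected intensity at the second harmonic of the rotation angle via Malus's law, while the diffuse light contributes only a constant background, so a Fourier transform of the recorded intensity isolates the preserved component.

import numpy as np

# Minimal sketch of software lock-in detection (hypothetical values throughout).
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)  # rotation of input polarisation
i_preserved = 0.05   # polarisation-preserving intensity (assumed)
i_diffuse = 1.00     # depolarised diffuse background (assumed)

# Detected intensity: Malus's law for the preserved part, constant diffuse part, noise.
signal = i_diffuse + i_preserved * 0.5 * (1.0 + np.cos(2.0 * theta))
signal += 0.005 * rng.standard_normal(theta.size)

# Fourier transform of intensity vs. rotation angle; the bin at the second
# harmonic of the rotation carries only the polarisation-preserving light.
spectrum = np.fft.rfft(signal) / theta.size
modulation = 2.0 * np.abs(spectrum[2])   # amplitude of the cos(2*theta) term = i_preserved / 2
estimate = 2.0 * modulation              # recovered polarisation-preserving intensity
print(f"true {i_preserved:.3f}  recovered {estimate:.3f}")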

Relevance:

30.00%

Publisher:

Abstract:

Given $n \in \mathbb{Z}^{+}$ and $\epsilon > 0$, we prove that there exists $\delta = \delta(\epsilon, n) > 0$ such that the following holds: if $(M^{n}, g)$ is a compact Kähler $n$-manifold whose sectional curvatures $K$ satisfy $-1 - \delta \le K \le -1/4$, and $c_{I}(M)$, $c_{J}(M)$ are any two Chern numbers of $M$, then $|c_{I}(M)/c_{J}(M) - c_{I}^{0}/c_{J}^{0}| < \epsilon$, where $c_{I}^{0}$, $c_{J}^{0}$ are the corresponding characteristic numbers of a complex hyperbolic space form. It follows that the Mostow-Siu surfaces and the threefolds of Deraux do not admit Kähler metrics with pinching close to $1/4$.

Relevance:

30.00%

Publisher:

Abstract:

Cache analysis plays a very important role in obtaining precise Worst Case Execution Time (WCET) estimates of programs for real-time systems. While Abstract Interpretation (AI) based approaches are almost universally used for cache analysis, they fail to exploit a unique feature of the problem: it is not necessary to find the guaranteed cache behaviour that holds across all executions of a program. We only need the cache behaviour along one particular program path, the path with the maximum execution time. In this work, we introduce the concept of cache miss paths, which allows us to use worst-case path information to improve the precision of AI-based cache analysis. We use Abstract Interpretation to determine the cache miss paths, and then integrate them into the IPET (Implicit Path Enumeration Technique) formulation. An added advantage is that this further allows us to use infeasible-path information for cache analysis. Experimentally, our approach gives more precise WCETs than AI-based cache analysis alone, and we also provide techniques to trade off analysis time against precision for scalability.
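The IPET step can be illustrated with a generic integer linear program. The sketch below uses a made-up control-flow graph, made-up per-block costs, and the PuLP package; it is only an illustration of the standard IPET idea (maximise total cost over block execution counts subject to flow and loop-bound constraints), not the paper's formulation, in which cache-miss-path information would refine the per-block costs.

import pulp

# Toy IPET formulation: hypothetical CFG entry -> loop_head -> {loop_body, exit}.
blocks = ["entry", "loop_head", "loop_body", "exit"]
cost = {"entry": 10, "loop_head": 5, "loop_body": 20, "exit": 10}   # cycles (assumed)
LOOP_BOUND = 100

prob = pulp.LpProblem("toy_wcet_ipet", pulp.LpMaximize)
x = {b: pulp.LpVariable(f"x_{b}", lowBound=0, cat="Integer") for b in blocks}

prob += pulp.lpSum(cost[b] * x[b] for b in blocks)        # objective: WCET bound
prob += x["entry"] == 1                                    # program executes once
prob += x["exit"] == 1
prob += x["loop_head"] == x["entry"] + x["loop_body"]      # flow into the loop head
prob += x["loop_body"] <= LOOP_BOUND * x["entry"]          # loop bound
prob.solve()
print("WCET estimate:", pulp.value(prob.objective))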

Relevance:

20.00%

Publisher:

Abstract:

In this work, the mechanics of tubular hydroforming under various types of loading conditions is investigated. The main objective is to contrast the effects of prescribing fluid pressure or volume flow rate, in conjunction with axial displacement, on the stress and strain histories experienced by the tube and on the process of bulging. To this end, axisymmetric finite element simulations of free hydroforming (without external die contact) of aluminium alloy tubes are carried out. Hill's normally anisotropic yield theory, along with material properties determined in a previous experimental study [A. Kulkarni, P. Biswas, R. Narasimhan, A. Luo, T. Stoughton, R. Mishra, A.K. Sachdev, An experimental and numerical study of necking initiation in aluminium alloy tubes during hydroforming, Int. J. Mech. Sci. 46 (2004) 1727-1746], is employed in the computations. It is found that while prescribed fluid pressure leads to highly non-proportional strain paths, a specified fluid volume flow rate may result in almost proportional ones over the predominant portion of loading. The peak pressure increases with axial compression for the former, while the reverse trend applies under the latter. The implications of these results for failure by localized necking of the tube wall are addressed in a subsequent investigation.
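For reference, a standard plane-stress form of Hill's normally anisotropic (quadratic) yield criterion, written in terms of the in-plane principal stresses $\sigma_{1}$, $\sigma_{2}$ and the normal anisotropy parameter $R$, is $\bar{\sigma}^{2} = \sigma_{1}^{2} + \sigma_{2}^{2} - \frac{2R}{1+R}\,\sigma_{1}\sigma_{2}$, where $\bar{\sigma}$ is the effective stress. This is the textbook form of the criterion, not necessarily the exact expression or parameter values used in the cited computations.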

Relevance:

20.00%

Publisher:

Abstract:

The Ball-Larus path-profiling algorithm is an efficient technique for collecting acyclic path frequencies of a program. However, longer paths, those extending across loop iterations, describe the runtime behaviour of programs better. We generalize the Ball-Larus profiling algorithm to profile k-iteration paths, that is, paths that can span up to k iterations of a loop. We show that it is possible to number such k-iteration paths perfectly, thus allowing an efficient profiling algorithm for these longer paths. We also describe a scheme for mixed-mode profiling: profiling different parts of a procedure with different path lengths. Experimental results show that k-iteration profiling is realistic.
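The core of the classical Ball-Larus numbering, which the k-iteration scheme generalises, fits in a few lines. The sketch below uses a hypothetical acyclic control-flow graph and illustrates only the original acyclic-path numbering, not the paper's k-iteration extension: counts of paths to the exit determine edge increments such that every entry-to-exit path sums to a distinct identifier.

# Classical Ball-Larus numbering on a toy acyclic CFG (illustration only).
edges = {                 # hypothetical CFG: entry -> {a, b} -> exit
    "entry": ["a", "b"],
    "a": ["exit"],
    "b": ["exit"],
    "exit": [],
}

def ball_larus(edges):
    order = list(edges)                  # assumes keys are listed in topological order
    num_paths = {}                       # num_paths[v]: acyclic paths from v to the exit
    increment = {}                       # increment[(v, w)]: value added when edge v->w is taken
    for v in reversed(order):
        if not edges[v]:
            num_paths[v] = 1
            continue
        total = 0
        for w in edges[v]:
            increment[(v, w)] = total    # paths already numbered via earlier successors
            total += num_paths[w]
        num_paths[v] = total
    return num_paths, increment

num_paths, increment = ball_larus(edges)
print("total acyclic paths:", num_paths["entry"])   # 2
print("edge increments:", increment)                # entry->a gets 0, entry->b gets 1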

Relevance:

20.00%

Publisher:

Abstract:

We consider a multicommodity flow problem on a complete graph whose edges have random, independent, and identically distributed capacities. We show that, as the number of nodes tends to infinity, the maximum utility, given by the average of a concave function of each commodity flow, has an almost-sure limit. Furthermore, the asymptotically optimal flow uses only direct and two-hop paths, and it can be obtained in a distributed manner.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an algorithm for solid model reconstruction from 2D sectional views based on a volume-based approach. None of the existing work on automatic reconstruction from 2D orthographic views has addressed sectional views in detail. It is believed that the volume-based approach is better suited to handle different types of sectional views. The volume-based approach constructs the 3D solid by a Boolean combination of elementary solids. The elementary solids are formed by sweep operations on loops identified in the input views. The only adjustment to be made for the presence of sectional views is in the identification of the loops that form the elemental solids. In the algorithm, the conventions of engineering drawing for sectional views are used to identify the loops correctly. The algorithm is simple and intuitive in nature. Results have been obtained for full sections, offset sections, and half sections. Future work will address other types of sectional views, such as removed, revolved, and broken-out sections.

Relevance:

20.00%

Publisher:

Abstract:

This paper is concerned with a study of some of the properties of locally product and almost locally product structures on a differentiable manifold $X_{n}$ of class $C^{k}$. Every locally product space has certain almost locally product structures which transform the local tangent space to $X_{n}$ at an arbitrary point $P$ in a set fashion: this is studied in Theorem (2.2). Theorem (2.3) considers the nature of the transformations that exist between two co-ordinate systems at a point whenever an almost locally product structure has the same local representation in each of these co-ordinate systems. A necessary and sufficient condition for $X_{n}$ to be a locally product manifold is obtained in terms of the pseudo-group of co-ordinate transformations on $X_{n}$ and the sub-pseudo-groups [cf. Theorem (2.1)]. Section 3 is entirely devoted to the study of integrable almost locally product structures.
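For context, a standard definition (not drawn from the paper itself): an almost product structure on $X_{n}$ is a tensor field $F$ of type $(1,1)$ satisfying $F^{2} = I$ with $F \ne \pm I$; it is a locally product structure when it is integrable, i.e., when its Nijenhuis tensor vanishes, so that around every point there exist co-ordinates in which $F$ takes the constant block form $\mathrm{diag}(I_{p}, -I_{q})$ with $p + q = n$.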

Relevance:

20.00%

Publisher:

Abstract:

We consider a variant of the popular matching problem here. The input instance is a bipartite graph $G=(\mathcal{A}\cup\mathcal{P},E)$, where vertices in $\mathcal{A}$ are called applicants and vertices in $\mathcal{P}$ are called posts. Each applicant ranks a subset of posts in an order of preference, possibly involving ties. A matching $M$ is popular if there is no other matching $M'$ such that the number of applicants who prefer their partners in $M'$ to $M$ exceeds the number of applicants who prefer their partners in $M$ to $M'$. However, the “more popular than” relation is not transitive; hence this relation is not a partial order, and thus there need not be a maximal element here. Indeed, there are simple instances that do not admit popular matchings. The questions of whether an input instance $G$ admits a popular matching and how to compute one if it exists were studied earlier by Abraham et al. Here we study reachability questions among matchings in $G$, assuming that $G=(\mathcal{A}\cup\mathcal{P},E)$ admits a popular matching. A matching $M_k$ is reachable from $M_0$ if there is a sequence of matchings $\langle M_0,M_1,\dots,M_k\rangle$ such that each matching is more popular than its predecessor. Such a sequence is called a length-$k$ voting path from $M_0$ to $M_k$. We show an interesting property of reachability among matchings in $G$: there is always a voting path of length at most 2 from any matching to some popular matching. Given a bipartite graph $G=(\mathcal{A}\cup\mathcal{P},E)$ with $n$ vertices and $m$ edges and any matching $M_0$ in $G$, we give an $O(m\sqrt{n})$ algorithm to compute a shortest-length voting path from $M_0$ to a popular matching; when preference lists are strictly ordered, we have an $O(m+n)$ algorithm. This problem has applications in dynamic matching markets, where applicants and posts can enter and leave the market, and applicants can also change their preferences arbitrarily. After any change, the current matching may no longer be popular, in which case we are required to update it. However, our model demands that we switch from one matching to another only if there is consensus among the applicants to agree to the switch. Hence we need to update via a voting path that ends in a popular matching. Thus our algorithm has applications here.
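The popularity comparison itself is easy to state in code. The toy instance below (a standard illustrative example, not taken from the paper) has three applicants who all rank the posts $p_1 \succ p_2 \succ p_3$; brute-force enumeration shows that every perfect matching is beaten by another one, and since any matching leaving an applicant unmatched is beaten by a completion of it, this instance admits no popular matching.

from itertools import permutations

# Toy illustration of the "more popular than" relation (not the paper's algorithm).
applicants = ["a1", "a2", "a3"]
posts = ["p1", "p2", "p3"]
rank = {p: i for i, p in enumerate(posts)}          # lower rank = more preferred

def votes(M1, M2):
    """Applicants who strictly prefer their post in M1 to their post in M2."""
    return sum(1 for a in applicants if rank[M1[a]] < rank[M2[a]])

matchings = [dict(zip(applicants, perm)) for perm in permutations(posts)]
for M in matchings:
    beaten_by = [N for N in matchings if votes(N, M) > votes(M, N)]
    assignment = ", ".join(f"{a}->{M[a]}" for a in applicants)
    print(f"{assignment}: beaten by {len(beaten_by)} matching(s)")   # always >= 1 here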

Relevance:

20.00%

Publisher:

Abstract:

The max-coloring problem is to compute a legal coloring of the vertices of a graph $G = (V, E)$ with a non-negative weight function $w$ on $V$ such that $\sum_{i=1}^{k} \max_{v \in C_i} w(v)$ is minimized, where $C_1, \ldots, C_k$ are the color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, a special case in which all vertices have unit weight. In fact, in some cases it can be even harder: for example, no polynomial-time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show that it can be solved in time $O(|V|)$ plus the time needed to sort the vertex weights. When vertex weights belong to $\mathbb{R}$, we show a matching lower bound of $\Omega(|V| \log |V|)$ in the algebraic computation tree model.
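For intuition, the objective can be checked by brute force on a tiny path. The instance below is made up, and the enumeration is only a sanity check of the objective, not the paper's near-linear-time algorithm.

from itertools import product

# Brute-force max-coloring of a small path: a colouring is legal if adjacent
# vertices get different colours; its cost is the sum, over colour classes,
# of the heaviest vertex in each class.
weights = [8, 1, 7, 2, 6]                       # hypothetical vertex weights along a path
n = len(weights)
edges = [(i, i + 1) for i in range(n - 1)]

def cost(colouring):
    classes = set(colouring)
    return sum(max(weights[v] for v in range(n) if colouring[v] == c) for c in classes)

best = None
for k in range(1, n + 1):                       # try every number of colours
    for colouring in product(range(k), repeat=n):
        if all(colouring[u] != colouring[v] for u, v in edges):
            c = cost(colouring)
            if best is None or c < best[0]:
                best = (c, colouring)

print("optimal cost:", best[0], "colouring:", best[1])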

Relevance:

20.00%

Publisher:

Abstract:

Let $G = (V, E)$ be a weighted undirected graph with nonnegative edge weights. An estimate $\hat{\delta}(u, v)$ of the actual distance $d(u, v)$ between $u, v \in V$ is said to be of stretch $t$ if and only if $d(u, v) \le \hat{\delta}(u, v) \le t \cdot d(u, v)$. Computing all-pairs small-stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs $t$-stretch distances for a whole range of stretch $t$, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the behaviour of compacted expansive soils under swell-shrink cycles. Laboratory cyclic swell-shrink tests were conducted on compacted specimens of two expansive soils at surcharge pressures of 6.25, 50.00, and 100.00 kPa. The void ratio and water content of the specimens were determined at several intermediate stages during swelling, up to the end of swelling, and during shrinkage, up to the end of shrinkage, to trace the water content versus void ratio paths over an increasing number of swell-shrink cycles. The test results showed that the swell-shrink path became reversible once the soil reached an equilibrium stage at which the vertical deformations during swelling and shrinkage were the same; this usually occurred after about four swell-shrink cycles. The swelling and shrinkage path of each specimen subjected to full swelling and full shrinkage cycles showed an S-shaped curve (two curvilinear portions and a linear portion), whereas the path traced only part of the S-shaped curve when the specimen was subjected to full swelling and partial shrinkage cycles. More than 80% of the total volumetric change and more than 50% of the total vertical deformation occurred in the central linear portion of the S-shaped curve. The volumetric change was essentially parallel to the saturation line within a degree-of-saturation range of 50-80% for the equilibrium cycle. The primary value of the swell-shrink paths is that they provide information on the void ratio change that would occur for a given change in water content under any possible swell-shrink pattern. It is suggested that these swell-shrink paths can be established with a limited number of laboratory tests.

Relevance:

20.00%

Publisher:

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only this type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions; the goal is to minimize both. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper reduce to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed. Note to Practitioners: Our results will be useful in resource allocation problems that involve gathering information privately held by strategic users, where the utilities are concave functions of the allocations and the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise frequently in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, and so on. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
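The constraint-sampling step can be illustrated generically. The sketch below replaces a continuum of half-plane constraints, parameterized here by an angle $\theta$, with a finite random sample and solves the resulting LP with scipy; the constraint family, objective, and sample size are hypothetical and chosen only to show the technique, not the paper's rebate-design LP.

import numpy as np
from scipy.optimize import linprog

# Generic constraint-sampling sketch: minimise c^T x subject to
# a(theta)^T x <= b(theta) for all theta in [0, 2*pi); here a(theta) = (cos t, sin t),
# b(theta) = 1, so the full constraint set is the unit disc.  Sampling a finite
# set of thetas yields an ordinary LP whose solution is "nearly feasible" for
# the full set with high probability.
rng = np.random.default_rng(1)
c = np.array([-1.0, -1.0])                           # maximise x1 + x2 (linprog minimises)

N = 500                                              # number of sampled constraints (assumed)
theta = rng.uniform(0.0, 2.0 * np.pi, N)
A = np.column_stack([np.cos(theta), np.sin(theta)])
b = np.ones(N)

res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None), (None, None)], method="highs")
print("sampled-constraint solution:", res.x)         # close to (1/sqrt(2), 1/sqrt(2))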