973 results for Homography constraint


Relevance:

10.00%

Publisher:

Abstract:

We study the scattering of hard external particles in a heat bath in a real-time formalism for finite temperature QED. We investigate the distribution of the 4-momentum difference of initial and final hard particles in a fully covariant manner when the scale of the process, Q, is much larger than the temperature, T. Our computations are valid for all T subject to this constraint. We exponentiate the leading infra-red term at one-loop order through a resummation of soft (thermal) photon emissions and absorptions. For T > 0, we find that tensor structures arise which are not present at T = 0. These carry thermal signatures. As a result, external particles can serve as thermometers introduced into the heat bath. We investigate the phase space origin of log (Q/M) and log (Q/T) terms.

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays. On the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
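
The routing subproblem mentioned at the end, minimizing congestion over a fixed logical topology, is a small linear program. A minimal sketch with scipy on a made-up three-node topology and traffic pattern; the arc-flow formulation and all numbers are illustrative, not the paper's six-node MILP.

```python
# Routing as an LP: given a fixed logical topology, route traffic to
# minimize the maximum link load (congestion). Illustrative toy instance.
import numpy as np
from scipy.optimize import linprog

arcs = [(0, 1), (1, 2), (0, 2)]           # logical links (lightpaths)
demands = {(0, 2): 1.0, (0, 1): 0.5}      # (source, dest): traffic
n_nodes, n_arcs, n_dem = 3, len(arcs), len(demands)

# Variables: f[k, a] = flow of demand k on arc a, then Lambda (congestion).
n_var = n_dem * n_arcs + 1
c = np.zeros(n_var); c[-1] = 1.0          # minimize Lambda

# Flow conservation: one equality per demand k and node v.
A_eq, b_eq = [], []
for k, ((s, t), d) in enumerate(demands.items()):
    for v in range(n_nodes):
        row = np.zeros(n_var)
        for a, (u, w) in enumerate(arcs):
            if u == v: row[k * n_arcs + a] += 1.0
            if w == v: row[k * n_arcs + a] -= 1.0
        A_eq.append(row)
        b_eq.append(d if v == s else (-d if v == t else 0.0))

# Congestion: total flow on each arc <= Lambda.
A_ub, b_ub = [], []
for a in range(n_arcs):
    row = np.zeros(n_var)
    for k in range(n_dem): row[k * n_arcs + a] = 1.0
    row[-1] = -1.0
    A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n_var)
print("congestion:", res.fun)             # 0.75 for this toy instance
```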

Relevance:

10.00%

Publisher:

Abstract:

The distribution of stars and gas in many galaxies is asymmetric. This so-called lopsidedness is expected to significantly affect the dynamics and evolution of the disc, including the star formation activity. Here, we measure the degree of lopsidedness for the gas distribution in a selected sample of 70 galaxies from the Westerbork HI Survey of Spiral and Irregular Galaxies. This complements our earlier work (Paper I) where the kinematic lopsidedness was derived for the same galaxies. The morphological lopsidedness is measured by performing a harmonic decomposition of the surface density maps. The amplitude of lopsidedness A(1), the fractional value of the first Fourier component, is typically quite high (about 0.1) within the optical disc and has a constant phase. Thus, lopsidedness is a common feature in galaxies and indicates a global mode. We measure A(1) out to typically one to four optical radii, sometimes even further. This is, on average, four times larger than the distance to which lopsidedness was measured in the past using near-IR as a tracer of the old stellar component, and therefore provides a new, more stringent constraint on the mechanism for the origin of lopsidedness. Interestingly, the value of A(1) saturates beyond the optical radius. Furthermore, the plot of A(1) versus radius shows fluctuations that we argue are due to local spiral features. We also try to explain the physical origin of this observed disc lopsidedness. No clear trend is found when the degree of lopsidedness is compared to a measure of the isolation or interaction probability of the sample galaxies. However, this does not rule out a tidal origin if the lopsidedness is long-lived. In addition, we find that the early-type galaxies tend to be more morphologically lopsided than the late-type galaxies. Both results together indicate that lopsidedness has a tidal origin.
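
The lopsidedness amplitude A(1) is the fractional first Fourier component of the surface density in azimuth. A minimal numpy sketch of that harmonic decomposition at a single radius, assuming the map has already been deprojected and sampled on an annulus (the tilted-ring machinery of the actual analysis is omitted):

```python
# Harmonic decomposition of an azimuthal surface-density profile:
# A1 = |c1| / c0, where c_m are the Fourier coefficients in azimuth.
import numpy as np

def lopsidedness_A1(sigma_phi):
    """sigma_phi: surface density sampled uniformly in azimuth at one radius."""
    c = np.fft.rfft(sigma_phi) / len(sigma_phi)
    c0 = c[0].real                # mean surface density on the annulus
    a1 = 2.0 * np.abs(c[1])       # amplitude of the m=1 mode
    return a1 / c0

# Example: an annulus with a 10% m=1 distortion and a constant phase.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
sigma = 1.0 + 0.1 * np.cos(phi - 0.3)
print(lopsidedness_A1(sigma))     # ~0.1
```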

Relevance:

10.00%

Publisher:

Abstract:

We report Doppler-only radar observations of Icarus at Goldstone at a transmitter frequency of 8510 MHz (3.5 cm wavelength) during 8-10 June 1996, the first radar detection of the object since 1968. Optimally filtered and folded spectra achieve a maximum opposite-circular (OC) polarization signal-to-noise ratio of about 10 and help to constrain Icarus' physical properties. We obtain an OC radar cross section of 0.05 km(2) (with a 35% uncertainty), which is less than values estimated by Goldstein (1969) and by Pettengill et al. (1969), and a circular polarization (SC/OC) ratio of 0.5+/-0.2. We analyze the echo power spectrum with a model incorporating the echo bandwidth B and a spectral shape parameter n, yielding a coupled constraint between B and n. We adopt 25 Hz as the lower bound on B, which gives a lower bound on the maximum pole-on breadth of about 0.6 km and upper bounds on the radar and optical albedos that are consistent with Icarus' tentative QS classification. The observed circular polarization ratio indicates a very rough near-surface at spatial scales of the order of the radar wavelength. (C) 1999 Elsevier Science Ltd. All rights reserved.
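
Echo spectra of this kind are commonly fitted with a model of the form S(f) ∝ [1 − (2f/B)²]^(n/2) for |f| ≤ B/2; whether this is the exact convention used in the paper is an assumption. A hedged least-squares sketch with scipy on synthetic data at a comparable signal-to-noise ratio:

```python
# Fit bandwidth B and shape parameter n to a radar echo power spectrum.
# Assumed model convention: S(f) = a * (1 - (2 f / B)^2)^(n/2) inside |f| < B/2.
import numpy as np
from scipy.optimize import curve_fit

def echo_model(f, a, B, n):
    x = np.clip(1.0 - (2.0 * f / B) ** 2, 0.0, None)
    return a * x ** (n / 2.0)

rng = np.random.default_rng(0)
f = np.linspace(-30.0, 30.0, 121)                  # Doppler frequency, Hz
truth = echo_model(f, 1.0, 25.0, 2.0)              # synthetic "echo"
noisy = truth + 0.1 * rng.standard_normal(f.size)  # SNR ~ 10, as in the text

popt, _ = curve_fit(echo_model, f, noisy, p0=[1.0, 20.0, 1.5])
print("a, B, n =", popt)   # B and n come out strongly coupled at low SNR
```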

Relevance:

10.00%

Publisher:

Abstract:

In this paper, power management algorithms for energy harvesting sensors (EHS) that operate purely on energy harvested from the environment are proposed. To maintain energy neutrality, EHS nodes schedule their utilization of the harvested power so as to save energy into, or draw energy from, an inefficient battery during peak and low energy harvesting periods, respectively. Under this constraint, one of the key system design goals is to transmit as much data as possible given the energy harvesting profile. For simplicity of implementation, it is assumed that the EHS transmits at a constant data rate with power control, when the channel is sufficiently good. By converting the data rate maximization problem into a convex optimization problem, the optimal load scheduling (power management) algorithm that maximizes the average data rate subject to energy neutrality is derived. Also, the energy storage requirements on the battery for implementing the proposed algorithm are calculated. Further, robust schemes that account for insufficient battery storage capacity or errors in the prediction of the harvested power are proposed. The superior performance of the proposed algorithms over conventional scheduling schemes is demonstrated through computations using numerical data from solar energy harvesting databases.
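
Stripped of the constant-rate restriction, the scheduling problem described here is a concave maximization under battery dynamics, which a convex solver handles directly. A minimal sketch with cvxpy; the harvesting profile, the log power-to-rate map, and the charging-efficiency battery model are all illustrative assumptions, not the paper's system model:

```python
# Energy-neutral power scheduling: maximize total data given a harvest
# profile, with an inefficient battery as the only energy buffer.
import numpy as np
import cvxpy as cp

T = 24
h = 0.5 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, T, endpoint=False))  # toy harvest
eta, b_max = 0.8, 2.0                  # battery charging efficiency, capacity

u = cp.Variable(T, nonneg=True)        # harvested power used directly
d = cp.Variable(T, nonneg=True)        # power drawn from the battery
b = cp.Variable(T + 1, nonneg=True)    # battery state of charge

cons = [u <= h, b[0] == 0.0, b <= b_max]
for t in range(T):
    # Leftover harvest (h - u) is stored with efficiency eta.
    cons.append(b[t + 1] == b[t] + eta * (h[t] - u[t]) - d[t])

rate = cp.sum(cp.log(1.0 + u + d))     # concave power-to-rate map
prob = cp.Problem(cp.Maximize(rate), cons)
prob.solve()
print("average rate:", rate.value / T)
```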

Relevance:

10.00%

Publisher:

Abstract:

Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. In this paper, we propose a new design approach for a one-shot exchange that collects bids from buyers and sellers and clears the market at the end of the bidding period. The main principle of the approach is to decouple the allocation from the pricing. It is well known that it is impossible for an exchange with voluntary participation to be both efficient and budget-balanced. Budget balance is a mandatory requirement for an exchange to operate at a profit. Our approach is to allocate the trade to maximize the reported values of the agents. The pricing is posed as a payoff determination problem that distributes the total payoff fairly to all agents, with budget balance imposed as a constraint. We devise an arbitration scheme by an axiomatic approach to solve the payoff determination problem using the added-value concept of game theory.
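
Because allocation is decoupled from pricing, the clearing step on its own is just a value-maximizing match of bids to asks. A toy sketch of that allocation step for a single commodity with unit quantities; the paper's axiomatic payoff-determination step is not reproduced:

```python
# Allocation step of a one-shot double auction for one commodity:
# match highest buy bids with lowest sell asks while bid >= ask,
# which maximizes the total reported trade value.
def clear_market(bids, asks):
    bids = sorted(bids, reverse=True)   # buyers' reported values
    asks = sorted(asks)                 # sellers' reported costs
    trades = []
    for b, a in zip(bids, asks):
        if b >= a:
            trades.append((b, a))       # this buyer-seller pair trades
        else:
            break                       # no further profitable matches
    surplus = sum(b - a for b, a in trades)
    return trades, surplus

trades, surplus = clear_market(bids=[10, 8, 5, 2], asks=[3, 4, 6, 9])
print(trades, surplus)                  # [(10, 3), (8, 4)] 11
```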

Relevance:

10.00%

Publisher:

Abstract:

Some basic results that help in determining the Diversity-Multiplexing Tradeoff (DMT) of cooperative multihop networks are first identified. As examples, the maximum achievable diversity gain is shown to equal the min-cut between source and sink, whereas the maximal multiplexing gain is shown to equal the minimum rank of the matrix characterizing the MIMO channel appearing across a cut in the network. Two multi-hop generalizations of the two-hop network are then considered, namely layered networks as well as a class of networks introduced here and termed K-parallel-path (KPP) networks. The DMT of KPP networks is characterized for K > 3. It is shown that a linear DMT between the maximum diversity dmax and the maximum multiplexing gain of 1 is achievable for fully-connected layered networks. Explicit coding schemes that achieve the DMT, built on cyclic-division-algebra-based distributed space-time codes, underlie the above results. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks and that simple amplify-and-forward protocols are often sufficient to attain the optimal DMT.
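
The min-cut characterization of maximum diversity gives a quick graph computation. A sketch with networkx on a made-up two-path relay network, using unit link capacities so the min-cut counts edge-disjoint source-sink paths:

```python
# Maximum achievable diversity = min-cut between source and sink
# (with unit-capacity links, this is the number of edge-disjoint paths).
import networkx as nx

G = nx.DiGraph()
# A toy 2-parallel-path (KPP-like) relay network: s -> {r1, r2} -> t.
for u, v in [("s", "r1"), ("r1", "t"), ("s", "r2"), ("r2", "t")]:
    G.add_edge(u, v, capacity=1)

d_max = nx.maximum_flow_value(G, "s", "t", capacity="capacity")
print("max diversity (min-cut):", d_max)   # 2
```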

Relevance:

10.00%

Publisher:

Abstract:

We study the problem of uncertainty in the entries of the kernel matrix arising in the SVM formulation. Using chance constraint programming and a novel large deviation inequality, we derive a formulation which is robust to such noise. The resulting formulation applies when the noise is Gaussian or has finite support. The formulation in general is non-convex, but in several cases of interest it reduces to a convex program. The problem of uncertainty in the kernel matrix is motivated by the real-world problem of classifying proteins when the structures are provided with some uncertainty. The formulation derived here naturally incorporates such uncertainty in a principled manner, leading to significant improvements over the state of the art.
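
The paper's kernel-matrix formulation is involved, but the basic chance-constraint-to-convex-program reduction it builds on can be shown in the simpler, well-known feature-noise setting: a Gaussian chance constraint on the margin becomes a deterministic second-order cone constraint. A cvxpy sketch of that reduction (an illustration of the technique, not the paper's formulation):

```python
# Chance-constrained linear classifier: for Gaussian inputs x ~ N(mu, S),
#   P(y (w.x + b) >= 1) >= 1 - eps
# is equivalent to the second-order cone constraint
#   y (w.mu + b) >= 1 + kappa ||S^{1/2} w||,  kappa = Phi^{-1}(1 - eps).
import numpy as np
import cvxpy as cp
from scipy.stats import norm

rng = np.random.default_rng(1)
mu = np.vstack([rng.normal(+1.0, 0.2, (10, 2)),      # class +1 means
                rng.normal(-1.0, 0.2, (10, 2))])     # class -1 means
y = np.array([1.0] * 10 + [-1.0] * 10)
S_half = 0.3 * np.eye(2)                             # common noise sqrt-covariance
kappa = norm.ppf(0.95)                               # eps = 0.05

w, b = cp.Variable(2), cp.Variable()
margins = cp.multiply(y, mu @ w + b)
cons = [margins >= 1.0 + kappa * cp.norm(S_half @ w, 2)]
prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), cons)
prob.solve()
print("w =", w.value, "b =", b.value)
```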

Relevance:

10.00%

Publisher:

Abstract:

The role of matrix microstructure on the fracture of Al-alloy composites with 60 vol% alumina particulates was studied. The matrix composition and microstructure were systematically varied by changing the infiltration temperature and heat treatment. Characterization was carried out by a combination of metallography, hardness measurements, and fracture studies conducted on compact tension specimens to study the fracture toughness and crack growth in the composites. The composites showed a rise in crack resistance with crack extension (R curves) due to bridges of intact matrix ligaments formed in the crack wake. The steady-state or plateau toughness reached upon stable crack growth was observed to be more sensitive to the process temperature than to the heat treatment. Fracture in the composites was predominantly by particle fracture, extensive deformation, and void nucleation in the matrix. Void nucleation occurred in the matrix in the as-solutionized and peak-aged conditions and preferentially near the interface in the underaged and overaged conditions. Micromechanical models based on crack bridging by intact ductile ligaments were modified by a plastic constraint factor from estimates of the plastic zone formed under indentations, and are shown to be adequate in predicting the steady-state toughness of the composite.

Relevance:

10.00%

Publisher:

Abstract:

Fracture toughness and fracture mechanisms in Al2O3/Al composites are described. The unique flexibility offered by pressureless infiltration of molten Al alloys into porous alumina preforms was utilized to investigate the effect of microstructural scale and matrix properties on the fracture toughness and the shape of the crack resistance curves (R-curves). The results indicate that the observed increment in toughness is due to crack bridging by intact matrix ligaments behind the crack tip. The deformation behavior of the matrix, which is shown to be dependent on the microstructural constraints, is the key parameter that influences both the steady-state toughness and the shape of the R-curves. Previously proposed models based on crack bridging by intact ductile particles in a ceramic matrix have been modified by the inclusion of an experimentally determined plastic constraint factor (P) that determines the deformation of the ductile phase and are shown to be adequate in predicting the toughness increment in the composites. Micromechanical models to predict the crack tip profile and the bridge lengths (L) correlate well with the observed behavior and indicate that the composites can be classified as (i) short-range toughened and (ii) long-range toughened on the basis of their microstructural characteristics.
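
The ductile-ligament bridging estimate referred to in this and the previous abstract is usually written as a work-of-rupture expression; the sketch below shows its common form with the plastic constraint factor inserted the way the text describes. The symbols and the exact placement of P are my reading, not quoted from the paper.

```latex
% Steady-state toughness increment from intact ductile ligaments
% bridging the crack wake (standard work-of-rupture form, hedged):
\[
  \Delta G_{ss} \;\approx\; \chi \, V_f \, P \, \sigma_0 \, a_0 ,
  \qquad
  G_c \;=\; G_0 + \Delta G_{ss} ,
\]
% where V_f     = area fraction of bridging (matrix) ligaments,
%       \sigma_0 = uniaxial yield strength of the ductile phase,
%       P       = plastic constraint factor (from indentation estimates),
%       a_0     = characteristic ligament dimension,
%       \chi    = dimensionless work of rupture of a constrained ligament.
```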

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy proof, nearly budget balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extension of the proposed mechanisms to situations where the valuation functions are not known to the central planner is also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We however demonstrate via simulation that, if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
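
The constraint-sampling step is the easiest part of this pipeline to isolate: a semi-infinite LP with one half-plane constraint per type vector is relaxed by drawing N type vectors and keeping only their constraints. A generic scipy sketch; the objective and the constraint family are stand-ins, not the paper's rebate-design problem:

```python
# Constraint sampling for a semi-infinite LP:
#   minimize c.x  subject to  a(theta).x <= b(theta) for all theta in Theta.
# Relaxation: sample theta_1..theta_N and impose only those N constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
dim, n_samples = 3, 500
c = -np.ones(dim)                                   # stand-in objective

thetas = rng.uniform(0.0, 1.0, (n_samples, dim))    # sampled type vectors

def a(theta):                                       # stand-in half-plane family
    return theta

def b(theta):
    return 1.0 + theta.sum() ** 2

A_ub = np.array([a(th) for th in thetas])
b_ub = np.array([b(th) for th in thetas])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 10.0)] * dim)
print("sampled-LP value:", res.fun)
# Standard constraint-sampling bounds: with N on the order of
# (dim/eps) * log(1/delta) samples, the solution violates at most an
# eps-fraction of the remaining constraints with probability 1 - delta.
```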

Relevance:

10.00%

Publisher:

Abstract:

We present a method to statically balance a general tree-structured, planar revolute-joint linkage loaded with linear springs or constant forces without using auxiliary links. The balancing methods currently documented in the literature use extra links; some do not apply when there are spring loads and some are restricted to only two-link serial chains. In our method, we suitably combine any non-zero-free-length load spring with another spring to result in an effective zero-free-length spring load. If a link has a single joint (with the parent link), we give a procedure to attach extra zero-free-length springs to it so that forces and moments are balanced for the link. Another consequence of this attachment is that the constraint force of the joint on the parent link becomes equivalent to a zero-free-length spring load. Hence, conceptually, for the parent link, the joint with its child is removed and replaced with the zero-free-length spring. This feature allows recursive application of this procedure from the end-branches of the tree down to the root, satisfying force and moment balance of all the links in the process. Furthermore, this method can easily be extended to closed-loop revolute-joint linkages, which is also illustrated in the paper.
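
The zero-free-length-spring idea can be sanity-checked on the simplest case: a single link pivoting under gravity, balanced by a zero-free-length spring anchored above the pivot, where the classic balancing condition is k·a·b = m·g·r. A small numeric check of that one-link case (the paper's multi-link recursion is not reproduced):

```python
# One-link gravity balancing with a zero-free-length spring:
# spring energy 0.5*k*|AP|^2 plus gravity m*g*r*cos(theta) is constant
# in theta exactly when k*a*b = m*g*r.
import numpy as np

m, g, r = 2.0, 9.81, 0.5      # mass, gravity, mass radius on the link
a, b = 0.3, 0.4               # anchor height above pivot, attachment radius
k = m * g * r / (a * b)       # balancing stiffness

theta = np.linspace(0.0, 2.0 * np.pi, 361)   # link angle from vertical
# Anchor A = (0, a); attachment P on the link gives
# |AP|^2 = a^2 + b^2 - 2*a*b*cos(theta).
spring = 0.5 * k * (a**2 + b**2 - 2.0 * a * b * np.cos(theta))
gravity = m * g * r * np.cos(theta)

total = spring + gravity
print("potential energy spread:", total.max() - total.min())  # ~0 (balanced)
```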

Relevance:

10.00%

Publisher:

Abstract:

We determine the optimal allocation of power between the analog and digital sections of an RF receiver while meeting the BER constraint. Unlike conventional RF receiver designs, we treat the SNR at the output of the analog front end (SNR_AD) as a design parameter rather than a specification to arrive at this optimal allocation. We first determine the relationship of SNR_AD to the resolution and operating frequency of the digital section. We then use power models for the analog and digital sections to solve the power minimization problem. As an example, we consider an 802.15.4-compliant low-IF receiver operating at 2.4 GHz in 0.13 μm technology with a 1.2 V power supply. We find that the overall receiver power is minimized by having the analog front end provide an SNR of 1.3 dB and the ADC and the digital section operate at 1-bit resolution with an 18 MHz sampling frequency, achieving a power dissipation of 7 mW.
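
The trade-off being optimized, cheap analog SNR versus ADC/digital resolution, can be mimicked with toy power models and a one-dimensional sweep. All constants below are made up for illustration; the paper's 802.15.4 power models are not reproduced:

```python
# Sweep the analog/digital SNR split: the less SNR the analog front end
# provides, the more resolution (and power) the digital section needs
# to meet the same BER. Toy models; constants are illustrative only.
import numpy as np

snr_total_db = 20.0                    # SNR needed to meet the BER target
snr_analog_db = np.linspace(1.0, 19.0, 181)

# Analog power grows roughly exponentially with its SNR in dB (toy model).
p_analog = 0.5e-3 * 10.0 ** (snr_analog_db / 10.0)

# The digital section must make up the deficit; model its cost through
# the required effective bits: bits ~ (deficit_dB - 1.76) / 6.02.
deficit_db = snr_total_db - snr_analog_db
bits = np.maximum(1.0, (deficit_db - 1.76) / 6.02)
p_digital = 0.2e-3 * 2.0 ** bits       # ~2x power per extra bit (toy model)

p_total = p_analog + p_digital
i = np.argmin(p_total)
print(f"optimal analog SNR ~ {snr_analog_db[i]:.1f} dB, "
      f"total power ~ {1e3 * p_total[i]:.2f} mW")
```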

Relevance:

10.00%

Publisher:

Abstract:

We consider single-source single-sink (ss-ss) multi-hop relay networks, with slow-fading links and single-antenna half-duplex relay nodes. While two-hop cooperative relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and layered networks. KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks are networks comprising layers of relays with edges existing only between adjacent layers, with more than one relay in each layer. We prove that a linear DMT between the maximum diversity dmax and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks. This is shown to be equal to the optimal DMT if the number of relaying layers is less than 4. For multiple-antenna KPP and layered networks, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks. For arbitrary multi-terminal wireless networks with multiple source-sink pairs, the maximum achievable diversity is shown to be equal to the min-cut between the corresponding source and the sink, irrespective of whether the network has half-duplex or full-duplex relays. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable. Along the way, we derive the optimal DMT of a generalized parallel channel and derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computation of various protocols. We also give alternative and often simpler proofs of several existing results and show that codes achieving full diversity on a MIMO Rayleigh fading channel achieve full diversity on arbitrary fading channels. All protocols in this paper are explicit and use only amplify-and-forward (AF) relaying. We also construct codes with short block-lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks and that simple AF protocols are often sufficient to attain the optimal DMT.
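
The linear tradeoff claimed for fully-connected layered networks has a simple closed form, worth writing out explicitly (r is the multiplexing gain, dmax the min-cut diversity from the same abstract):

```latex
% Linear diversity-multiplexing tradeoff between maximum diversity
% d_max and maximum multiplexing gain 1:
\[
  d(r) \;=\; d_{\max}\,(1 - r), \qquad 0 \le r \le 1,
\]
% where r is the multiplexing gain and d(r) the diversity gain; per the
% abstract, this DMT is achievable with amplify-and-forward relaying and
% is optimal when the number of relaying layers is less than 4.
```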

Relevance:

10.00%

Publisher:

Abstract:

In this paper, analytical expressions for the optimal Vdd and Vth that minimize energy under a given speed constraint are derived. These expressions are based on the EKV model for transistors and are valid in both the strong inversion and subthreshold regions. The effect of gate leakage on the optimal Vdd and Vth is analyzed. A new gradient-based algorithm for controlling Vdd and Vth based on delay and power monitoring results is proposed. A Vdd-Vth controller which uses the algorithm to dynamically control the supply and threshold voltages of a representative logic block (the sum-of-absolute-difference computation of an MPEG decoder) is designed. Simulation results using 65 nm predictive technology models are given.
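
A toy version of the gradient-based controller described here: descend a penalized energy objective over (Vdd, Vth) using finite-difference gradients, standing in for the delay and power monitors. The alpha-power delay law and the dynamic-plus-leakage energy model below are illustrative substitutes for the paper's EKV-based expressions:

```python
# Gradient-based Vdd/Vth control loop: minimize energy subject to a
# delay target via a penalized objective. Normalized toy models;
# all constants are illustrative.
import numpy as np

alpha, s = 1.3, 0.06          # alpha-power exponent, subthreshold slope (V)
t_target = 2.0                # delay target (normalized units)

def delay(vdd, vth):          # alpha-power delay law
    return vdd / (vdd - vth) ** alpha

def energy(vdd, vth):         # switching + leakage energy per cycle
    e_dyn = vdd ** 2
    e_leak = 5.0 * vdd * np.exp(-vth / s) * delay(vdd, vth)
    return e_dyn + e_leak

def objective(x, mu=10.0):    # quadratic penalty for missing the target
    vdd, vth = x
    return energy(vdd, vth) + mu * max(0.0, delay(vdd, vth) - t_target) ** 2

x = np.array([1.0, 0.30])     # initial operating point (Vdd, Vth)
for _ in range(5000):
    grad = np.zeros(2)
    for i in range(2):        # finite-difference gradient (monitor readings)
        d = np.zeros(2); d[i] = 1e-5
        grad[i] = (objective(x + d) - objective(x - d)) / 2e-5
    x = np.clip(x - 0.005 * grad, [0.5, 0.10], [1.2, 0.45])

print(f"Vdd={x[0]:.3f}, Vth={x[1]:.3f}, "
      f"delay={delay(*x):.2f}, energy={energy(*x):.3f}")
```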