972 results for Contractive constraint
Abstract:
This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Due to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
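The gap between the two bounding schemes is easy to see numerically. The sketch below (plain Python; the variance, almost-sure bound, and sample count are hypothetical values chosen for illustration, not taken from the paper) compares the Chebyshev and Bernstein tail bounds for a sum of bounded zero-mean variables:

```python
import math

# Tail bounds for S = X_1 + ... + X_n with E[X_i] = 0, Var(X_i) = sigma2, |X_i| <= b.
# Chebyshev uses only the variance; Bernstein also exploits the bound b.
n, sigma2, b = 100, 0.25, 1.0   # hypothetical values

def chebyshev(t):
    return min(1.0, n * sigma2 / t**2)

def bernstein(t):
    return math.exp(-t**2 / (2.0 * (n * sigma2 + b * t / 3.0)))

for t in (10, 20, 30):
    print(t, chebyshev(t), bernstein(t))
```

For deviations well into the tail the Bernstein bound is orders of magnitude smaller, which is the sense in which the richer partial information yields a less conservative relaxation.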
Abstract:
In this paper, we consider robust joint linear precoder/receive filter design for multiuser multi-input multi-output (MIMO) downlink that minimizes the sum mean square error (SMSE) in the presence of imperfect channel state information (CSI). The base station is equipped with multiple transmit antennas, and each user terminal is equipped with multiple receive antennas. The CSI is assumed to be perturbed by estimation error. The proposed transceiver design is based on jointly minimizing a modified function of the MSE, taking into account the statistics of the estimation error under a total transmit power constraint. An alternating optimization algorithm, wherein the optimization is performed with respect to the transmit precoder and the receive filter in an alternating fashion, is proposed. The robustness of the proposed algorithm to imperfections in CSI is illustrated through simulations.
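The alternating pattern can be illustrated on a much simpler problem. The sketch below is NOT the paper's precoder/receive-filter update (the data are synthetic and the objective is a generic rank-one fit); it only shows the fix-one-variable, solve-the-other-in-closed-form structure that the proposed algorithm shares:

```python
import numpy as np

# Generic alternating optimization: alternately solve the least-squares problem
# in u (for fixed v) and in v (for fixed u) for the best rank-one fit u v^T to M.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(6), rng.standard_normal(4))
M += 0.01 * rng.standard_normal((6, 4))   # near rank-one synthetic target

u = rng.standard_normal(6)
v = rng.standard_normal(4)
for _ in range(50):
    u = M @ v / (v @ v)    # closed-form optimum of u for fixed v
    v = M.T @ u / (u @ u)  # closed-form optimum of v for fixed u
err = np.linalg.norm(M - np.outer(u, v))  # residual after alternation
```

Each half-step can only decrease the objective, which is why such schemes converge to a stationary point; the paper applies the same idea to a modified MSE objective under a power constraint.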
Abstract:
We consider a wireless sensor network whose main function is to detect certain infrequent alarm events, and to forward alarm packets to a base station using geographical forwarding. The nodes know their locations, and they sleep-wake cycle, waking up periodically but not synchronously. In this situation, when a node has a packet to forward to the sink, there is a trade-off between how long this node waits for a suitable neighbor to wake up and the progress the packet makes towards the sink once it is forwarded to this neighbor. Hence, in choosing a relay node, we consider the problem of minimizing average delay subject to a constraint on the average progress. By constraint relaxation, we formulate this next-hop relay selection problem as a Markov decision process (MDP). The exact optimal solution (BF (Best Forward)) can be found, but is computationally intensive. Next, we consider a mathematically simplified model for which the optimal policy (SF (Simplified Forward)) turns out to be a simple one-step-look-ahead rule. Simulations show that SF is very close in performance to BF, even for reasonably small node density. We then study the end-to-end performance of SF in comparison with two extremal policies, Max Forward (MF) and First Forward (FF), and an end-to-end delay minimising policy proposed by Kim et al. [1]. We find that, with an appropriate choice of the one-hop average progress constraint, SF can be tuned to provide a favorable trade-off between end-to-end packet delay and the number of hops in the forwarding path.
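The delay-versus-progress trade-off behind a threshold rule of this kind can be seen in a toy simulation. The sketch below is not the paper's exact SF policy; the wake-up and progress distributions and all parameter values are synthetic, chosen only to exhibit the trade-off:

```python
import random

random.seed(1)

def simulate(threshold, n_nodes=20, period=1.0, trials=2000):
    """One-step-look-ahead style relay rule (a sketch): forward to the first
    awake neighbor whose progress toward the sink exceeds `threshold`; if none
    qualifies within one wake-up cycle, fall back to the best neighbor seen."""
    total_delay = total_progress = 0.0
    for _ in range(trials):
        # Each neighbor wakes once per cycle at a uniform time with a uniform
        # progress value toward the sink.
        wakes = sorted((random.uniform(0.0, period), random.random())
                       for _ in range(n_nodes))
        best_p = 0.0
        for t, p in wakes:
            best_p = max(best_p, p)
            if p >= threshold:
                delay, progress = t, p
                break
        else:
            delay, progress = period, best_p
        total_delay += delay
        total_progress += progress
    return total_delay / trials, total_progress / trials

d_lo, p_lo = simulate(0.2)   # lax progress constraint: forward quickly
d_hi, p_hi = simulate(0.9)   # strict constraint: wait for a good neighbor
```

Raising the threshold buys more per-hop progress (fewer hops end to end) at the cost of a longer wait at each relay, which is exactly the tuning knob the abstract describes.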
Abstract:
The eigenvalue and eigenstructure assignment procedure has found application in a wide variety of control problems. In this paper a method for assigning eigenstructure to a linear time-invariant multi-input system is proposed. The algorithm determines a matrix that has eigenvalues and eigenvectors at the desired locations. It is obtained from the knowledge of the open-loop system and the desired eigenstructure. Solution of the matrix equation, involving unknown controller gains, open-loop system matrices, and desired eigenvalues and eigenvectors, results in the state feedback controller. The proposed algorithm requires the closed-loop eigenvalues to be different from those of the open-loop case. This apparent constraint can easily be overcome by a negligible shift in the values. Application of the procedure is illustrated through the offset control of a satellite supported, from an orbiting platform, by a flexible tether.
Abstract:
The eigenvalue assignment/pole placement procedure has found application in a wide variety of control problems. The associated literature is rather extensive, with a number of techniques discussed to that end. In this paper a method for assigning eigenvalues to a Linear Time Invariant (LTI) single-input system is proposed. The algorithm determines a matrix that has eigenvalues at the desired locations. It is obtained from the knowledge of the open-loop system and the desired eigenvalues. Solution of the matrix equation, involving unknown controller gains, open-loop system matrices, and desired eigenvalues, results in the state feedback controller. The proposed algorithm requires the closed-loop eigenvalues to be different from those of the open-loop case. This apparent constraint is easily overcome by a negligible shift in the values. Two examples are considered to verify the proposed algorithm. The first one pertains to the in-plane libration of a Tethered Satellite System (TSS), while the second is concerned with control of the short-period dynamics of a flexible airplane. Finally, the method is extended to determine the Controllability Grammian, corresponding to the specified closed-loop eigenvalues, without computing the controller gains.
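For single-input pole placement, the classical Ackermann construction computes the same kind of state-feedback gain from the open-loop matrices and the desired eigenvalues. The sketch below (NumPy; the double-integrator example is ours, not one of the paper's two case studies) is a minimal illustration, not the paper's algorithm:

```python
import numpy as np

def ackermann(A, b, poles):
    """Single-input pole placement (Ackermann's formula): returns the gain k
    such that the eigenvalues of A - b @ k sit at `poles`."""
    n = A.shape[0]
    # Controllability matrix [b, Ab, ..., A^(n-1) b]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Desired characteristic polynomial coefficients (leading coefficient first)
    coeffs = np.array([1.0])
    for p in poles:
        coeffs = np.convolve(coeffs, [1.0, -p])
    # phi(A): the desired characteristic polynomial evaluated at A
    phiA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0   # selects the last row
    return e_n @ np.linalg.inv(C) @ phiA

# Double integrator with closed-loop poles at -1 and -2 (illustrative example).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
k = ackermann(A, b, [-1.0, -2.0])   # → [[2., 3.]]
```

The formula also shows why the closed-loop poles must differ from the open-loop ones in methods that invert such matrix relations: the construction hinges on the controllability matrix (and hence the system) being nondegenerate.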
Abstract:
We study the scattering of hard external particles in a heat bath in a real-time formalism for finite temperature QED. We investigate the distribution of the 4-momentum difference of initial and final hard particles in a fully covariant manner when the scale of the process, Q, is much larger than the temperature, T. Our computations are valid for all T subject to this constraint. We exponentiate the leading infra-red term at one-loop order through a resummation of soft (thermal) photon emissions and absorptions. For T > 0, we find that tensor structures arise which are not present at T = 0. These carry thermal signatures. As a result, external particles can serve as thermometers introduced into the heat bath. We investigate the phase space origin of log (Q/M) and log (Q/T) terms.
Abstract:
This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays; on the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
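For intuition about the congestion objective, evaluating a fixed logical topology under shortest-path routing is straightforward. The toy below (plain Python; the ring topology and traffic values are hypothetical and far simpler than the paper's MILP) computes the maximum link load:

```python
from collections import deque

def congestion(adj, traffic):
    """Maximum lightpath load when each demand follows a BFS shortest path
    on the directed logical topology `adj` (a sketch, not the paper's MILP)."""
    load = {}
    for (s, d), t in traffic.items():
        # BFS shortest path from s to d
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        # Accumulate this demand's traffic on every lightpath along the route
        v = d
        while parent[v] is not None:
            u = parent[v]
            load[(u, v)] = load.get((u, v), 0) + t
            v = u
    return max(load.values())

ring = {0: [1], 1: [2], 2: [3], 3: [0]}   # 4-node unidirectional logical ring
traffic = {(0, 2): 2, (1, 3): 1, (2, 0): 1}
```

With congestion computable this cheaply for a candidate topology, the heuristic split the paper proposes (search over topologies, route by LP) becomes natural.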
Abstract:
The distribution of stars and gas in many galaxies is asymmetric. This so-called lopsidedness is expected to significantly affect the dynamics and evolution of the disc, including the star formation activity. Here, we measure the degree of lopsidedness for the gas distribution in a selected sample of 70 galaxies from the Westerbork HI Survey of Spiral and Irregular Galaxies. This complements our earlier work (Paper I), where the kinematic lopsidedness was derived for the same galaxies. The morphological lopsidedness is measured by performing a harmonic decomposition of the surface density maps. The amplitude of lopsidedness A(1), the fractional value of the first Fourier component, is typically quite high (about 0.1) within the optical disc and has a constant phase. Thus, lopsidedness is a common feature in galaxies and indicates a global mode. We measure A(1) out to typically one to four optical radii, sometimes even further. This is, on average, four times larger than the distance to which lopsidedness was measured in the past using near-IR as a tracer of the old stellar component, and therefore provides a new, more stringent constraint on the mechanism for the origin of lopsidedness. Interestingly, the value of A(1) saturates beyond the optical radius. Furthermore, the plot of A(1) versus radius shows fluctuations that we argue are due to local spiral features. We also try to explain the physical origin of this observed disc lopsidedness. No clear trend is found when the degree of lopsidedness is compared to a measure of the isolation or interaction probability of the sample galaxies. However, this does not rule out a tidal origin if the lopsidedness is long-lived. In addition, we find that the early-type galaxies tend to be more morphologically lopsided than the late-type galaxies. Both results together indicate that lopsidedness has a tidal origin.
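The harmonic decomposition behind A(1) is a plain Fourier fit of the azimuthal surface-density profile at each radius. The sketch below uses a synthetic lopsided ring (the data and the normalization of the m=1 term by the m=0 term are our illustrative choices, not the paper's pipeline):

```python
import numpy as np

# Azimuthal profile at one radius: Sigma(phi) = mean * (1 + A1 * cos(phi - phase))
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
sigma = 1.0 + 0.1 * np.cos(phi - 0.3)      # toy lopsided ring: A1 = 0.1, phase = 0.3

a1 = 2.0 * np.mean(sigma * np.cos(phi))    # first Fourier cosine coefficient
b1 = 2.0 * np.mean(sigma * np.sin(phi))    # first Fourier sine coefficient
A1 = np.hypot(a1, b1) / np.mean(sigma)     # fractional m=1 amplitude
phase = np.arctan2(b1, a1)                 # phase of the lopsidedness
```

Repeating the fit at successive radii gives the A(1)-versus-radius curves discussed above; a roughly constant phase across radii is what signals a global m=1 mode rather than local features.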
Abstract:
We report Doppler-only radar observations of Icarus at Goldstone at a transmitter frequency of 8510 MHz (3.5 cm wavelength) during 8-10 June 1996, the first radar detection of the object since 1968. Optimally filtered and folded spectra achieve a maximum opposite-circular (OC) polarization signal-to-noise ratio of about 10 and help to constrain Icarus' physical properties. We obtain an OC radar cross section of 0.05 km^2 (with a 35% uncertainty), which is less than values estimated by Goldstein (1969) and by Pettengill et al. (1969), and a circular polarization (SC/OC) ratio of 0.5 +/- 0.2. We analyze the echo power spectrum with a model incorporating the echo bandwidth B and a spectral shape parameter n, yielding a coupled constraint between B and n. We adopt 25 Hz as the lower bound on B, which gives a lower bound on the maximum pole-on breadth of about 0.6 km and upper bounds on the radar and optical albedos that are consistent with Icarus' tentative QS classification. The observed circular polarization ratio indicates a very rough near-surface at spatial scales of the order of the radar wavelength.
Abstract:
In this paper, power management algorithms for energy harvesting sensors (EHS) that operate purely on energy harvested from the environment are proposed. To maintain energy neutrality, EHS nodes schedule their utilization of the harvested power so as to save/draw energy into/from an inefficient battery during peak/low energy harvesting periods, respectively. Under this constraint, one of the key system design goals is to transmit as much data as possible given the energy harvesting profile. For implementation simplicity, it is assumed that the EHS transmits at a constant data rate with power control, when the channel is sufficiently good. By converting the data rate maximization problem into a convex optimization problem, the optimal load scheduling (power management) algorithm that maximizes the average data rate subject to energy neutrality is derived. Also, the energy storage requirements on the battery for implementing the proposed algorithm are calculated. Further, robust schemes that account for insufficient battery storage capacity or errors in the prediction of the harvested power are proposed. The superior performance of the proposed algorithms over conventional scheduling schemes is demonstrated through computations using numerical data from solar energy harvesting databases.
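The energy-neutrality constraint with an inefficient battery can be made concrete with a toy scheduler. The sketch below (plain Python; the harvest profile, storage efficiency, and initial battery level are hypothetical, and bisection on a constant load stands in for the paper's convex program) finds the largest constant load the battery can sustain:

```python
def max_neutral_load(harvest, eta=0.8, b0=5.0):
    """Largest constant load sustainable over the horizon such that the battery
    never goes negative and ends no lower than it began (energy neutrality).
    Surpluses are stored with efficiency eta; deficits are drawn at full cost."""
    def feasible(load):
        b = b0
        for h in harvest:
            if h >= load:
                b += eta * (h - load)   # store the surplus, inefficiently
            else:
                b -= (load - h)         # draw the deficit from the battery
            if b < 0:
                return False
        return b >= b0                  # end-of-horizon neutrality

    lo, hi = 0.0, max(harvest)          # feasibility is monotone in the load
    for _ in range(60):                 # bisection on the load level
        mid = (lo + hi) / 2.0
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

harvest = [4, 6, 8, 6, 3, 1, 0, 2]      # hypothetical per-slot solar profile
load = max_neutral_load(harvest)
```

Note that the sustainable load is strictly below the mean harvest (3.75 here): every joule routed through the inefficient battery loses a fraction 1 - eta, which is exactly why the scheduling of when to save versus draw matters.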
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. In this paper, we propose a new design approach for a one-shot exchange that collects bids from buyers and sellers and clears the market at the end of the bidding period. The main principle of the approach is to decouple the allocation from pricing. It is well known that it is impossible for an exchange with voluntary participation to be both efficient and budget-balanced. Budget balance is a mandatory requirement for an exchange to operate at a profit. Our approach is to allocate the trade so as to maximize the reported values of the agents. The pricing is posed as a payoff determination problem that distributes the total payoff fairly to all agents, with budget balance imposed as a constraint. We devise an arbitration scheme by an axiomatic approach to solve the payoff determination problem using the added-value concept of game theory.
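Once allocation is decoupled from pricing, the allocation step alone reduces to maximizing reported surplus. For single-unit bids this is a simple sort-and-match; the sketch below (with made-up bid values, and deliberately omitting the paper's arbitration-based pricing step) shows that step:

```python
def allocate(bids, asks):
    """Surplus-maximizing allocation for single-unit bids: match the highest
    remaining bid with the lowest remaining ask while the bid covers the ask."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    trades, surplus = [], 0
    for b, a in zip(bids, asks):
        if b < a:
            break                # no further pairing creates positive value
        trades.append((b, a))
        surplus += b - a
    return trades, surplus

# Hypothetical one-shot market: 4 buyer bids, 4 seller asks.
trades, surplus = allocate([10, 8, 6, 3], [2, 5, 7, 9])   # → 2 trades, surplus 11
```

The total surplus computed here is the "total payoff" that the pricing stage must then divide among the agents subject to budget balance, which is where the axiomatic arbitration scheme comes in.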
Abstract:
Some basic results that help in determining the Diversity-Multiplexing Tradeoff (DMT) of cooperative multihop networks are first identified. As examples, the maximum achievable diversity gain is shown to equal the min-cut between source and sink, whereas the maximal multiplexing gain is shown to equal the minimum rank of the matrix characterizing the MIMO channel appearing across a cut in the network. Two multi-hop generalizations of the two-hop network are then considered, namely layered networks as well as a class of networks introduced here and termed K-parallel-path (KPP) networks. The DMT of KPP networks is characterized for K > 3. It is shown that a linear DMT between the maximum diversity d_max and the maximum multiplexing gain of 1 is achievable for fully-connected, layered networks. Explicit coding schemes achieving the DMT that make use of cyclic-division-algebra-based distributed space-time codes underlie the above results. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks, and that simple amplify-and-forward protocols are often sufficient to attain the optimal DMT.
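The min-cut characterization of the maximum diversity gain can be checked on a toy network. The sketch below (plain Python BFS max-flow with unit-capacity links; the 2-parallel-path topology is our own example, not one from the paper) computes the source-sink min-cut:

```python
from collections import deque, defaultdict

def min_cut(edges, s, t):
    """s-t min-cut of a directed graph with unit capacity per edge, computed
    as a max-flow via BFS augmenting paths (Edmonds-Karp)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)          # allow residual (reverse) arcs in the search
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow        # max-flow = min-cut by the max-flow/min-cut theorem
        v = t
        while parent[v] is not None:   # push one unit along the augmenting path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Source 0, sink 5, two node-disjoint relay paths (a KPP-style network, K = 2).
edges = [(0, 1), (1, 2), (2, 5), (0, 3), (3, 4), (4, 5)]
d_max = min_cut(edges, 0, 5)   # → 2
```

With each link contributing one unit, the min-cut of 2 matches the intuition that two node-disjoint relay paths can each fail independently, bounding the achievable diversity.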
Abstract:
We study the problem of uncertainty in the entries of the kernel matrix, arising in the SVM formulation. Using chance constraint programming and a novel large deviation inequality, we derive a formulation which is robust to such noise. The resulting formulation applies when the noise is Gaussian or has finite support. The formulation in general is non-convex, but in several cases of interest it reduces to a convex program. The problem of uncertainty in the kernel matrix is motivated by the real-world problem of classifying proteins when the structures are provided with some uncertainty. The formulation derived here naturally incorporates such uncertainty in a principled manner, leading to significant improvements over the state of the art.
Abstract:
The role of matrix microstructure on the fracture of Al-alloy composites with 60 vol% alumina particulates was studied. The matrix composition and microstructure were systematically varied by changing the infiltration temperature and heat treatment. Characterization was carried out by a combination of metallography, hardness measurements, and fracture studies conducted on compact tension specimens to study the fracture toughness and crack growth in the composites. The composites showed a rise in crack resistance with crack extension (R curves) due to bridges of intact matrix ligaments formed in the crack wake. The steady-state or plateau toughness reached upon stable crack growth was observed to be more sensitive to the process temperature rather than to the heat treatment. Fracture in the composites was predominantly by particle fracture, extensive deformation, and void nucleation in the matrix. Void nucleation occurred in the matrix in the as-solutionized and peak-aged conditions and preferentially near the interface in the underaged and overaged conditions. Micromechanical models based on crack bridging by intact ductile ligaments were modified by a plastic constraint factor from estimates of the plastic zone formed under indentations, and are shown to be adequate in predicting the steady-state toughness of the composite.
Abstract:
Fracture toughness and fracture mechanisms in Al2O3/Al composites are described. The unique flexibility offered by pressureless infiltration of molten Al alloys into porous alumina preforms was utilized to investigate the effect of microstructural scale and matrix properties on the fracture toughness and the shape of the crack resistance curves (R-curves). The results indicate that the observed increment in toughness is due to crack bridging by intact matrix ligaments behind the crack tip. The deformation behavior of the matrix, which is shown to be dependent on the microstructural constraints, is the key parameter that influences both the steady-state toughness and the shape of the R-curves. Previously proposed models based on crack bridging by intact ductile particles in a ceramic matrix have been modified by the inclusion of an experimentally determined plastic constraint factor (P) that determines the deformation of the ductile phase and are shown to be adequate in predicting the toughness increment in the composites. Micromechanical models to predict the crack tip profile and the bridge lengths (L) correlate well with the observed behavior and indicate that the composites can be classified as (i) short-range toughened and (ii) long-range toughened on the basis of their microstructural characteristics.