54 results for Lp-PLA2
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only the type. The goal is to find allocatively efficient, strategyproof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize both. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems whose constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and show, under some conditions on the valuation functions, that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
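To make the constraint-sampling step concrete, here is a minimal Python sketch: sample profiles of reported types, stack the half-plane constraints each profile induces on the linear rebate coefficients, and solve the resulting LP. The function constraint_row and the objective vector are hypothetical placeholders; in the paper they are derived from the VCG payments and the rebate feasibility conditions, which are not reproduced here.

```python
# Constraint-sampling relaxation: a sketch, not the paper's exact model.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_agents, n_samples = 5, 2000

def constraint_row(theta):
    # Hypothetical half-plane constraint a(theta) . w <= b(theta)
    # induced by one sampled profile of reported types.
    a = np.concatenate([theta, [-theta.sum()]])   # placeholder coefficients
    b = 0.1 * theta.max()                          # placeholder bound
    return a, b

# Sample type profiles and stack the induced half-plane constraints.
A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.0, 1.0, size=n_agents)   # reported types
    a, b = constraint_row(theta)
    A_ub.append(a)
    b_ub.append(b)

# Linear objective (e.g. expected budget surplus) over rebate coefficients.
c = rng.uniform(size=n_agents + 1)                 # placeholder objective

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-1.0, 1.0)] * (n_agents + 1))
print(res.status, res.x)
```

The point of the relaxation is visible in the shape of the problem: the continuum of type profiles becomes a finite stack of rows, and any off-the-shelf LP solver handles the rest.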
Abstract:
Linear stability and nonmodal transient energy growth in compressible plane Couette flow are investigated for two prototype mean flows: (a) uniform shear flow with constant viscosity, and (b) nonuniform shear flow with stratified viscosity. Both mean flows are linearly unstable for a range of supersonic Mach numbers (M). For a given M, the critical Reynolds number (Re) is significantly smaller for the uniform shear flow than for its nonuniform shear counterpart; for a given Re, the dominant instability (over all streamwise wave numbers, α) of each mean flow belongs to different modes for a range of supersonic M. An analysis of perturbation energy reveals that the instability is primarily caused by an excess transfer of energy from the mean flow to the perturbations. It is shown that this energy transfer occurs close to the moving top wall for the "mode I" instability, whereas it occurs in the bulk of the flow domain for "mode II." For the nonmodal transient growth analysis, it is shown that the maximum temporal amplification of perturbation energy, Gmax, and the corresponding time scale are significantly larger for the uniform shear case than for its nonuniform counterpart. For α = 0, the linear stability operator can be partitioned as L ∼ L̄ + Re² Lp, and the Re-dependent operator Lp is shown to have a negligibly small contribution to the perturbation energy, which is responsible for the validity of the well-known quadratic scaling law in uniform shear flow: G(t/Re) ∼ Re². In contrast, the dominance of Lp is responsible for the invalidity of this scaling law in nonuniform shear flow. An inviscid reduced model, based on an Ellingsen-Palm-type solution, is shown to capture all salient features of the transient energy growth of the full viscous problem. For both modal and nonmodal instability, it is shown that viscosity stratification of the underlying mean flow would lead to a delayed transition in compressible Couette flow.
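For readers unfamiliar with nonmodal growth, the quantity Gmax above is the peak of the amplification envelope G(t) = ||exp(tL)||². The sketch below computes this envelope for a toy 2x2 non-normal operator with stable eigenvalues; the matrix is purely illustrative and is not the compressible Couette stability operator studied in the paper.

```python
# Transient energy growth of a toy non-normal system dq/dt = L q.
import numpy as np
from scipy.linalg import expm

L = np.array([[-0.01, 1.0],     # both eigenvalues are stable, but the
              [ 0.0, -0.02]])   # operator is non-normal: energy can grow

times = np.linspace(0.0, 200.0, 400)
G = [np.linalg.norm(expm(t * L), 2) ** 2 for t in times]   # ||exp(tL)||^2

t_max = times[int(np.argmax(G))]
print(f"G_max = {max(G):.1f} at t = {t_max:.0f}")
```

Although every eigenmode decays, G(t) rises by orders of magnitude before the eventual decay, which is the mechanism the abstract's transient growth analysis quantifies.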
Abstract:
In this paper, we address the reconstruction problem from laterally truncated helical cone-beam projections. Although the lateral truncation problem is similar to the interior Radon problem, it differs slightly from it, as well as from local (lambda) tomography and pseudo-local tomography, in that we aim to reconstruct the entire object being scanned from region-of-interest (ROI) scan data. The method proposed in this paper is a projection-data completion approach followed by the use of any standard accurate FBP-type reconstruction algorithm. In particular, we explore a windowed linear prediction (WLP) approach for data completion and compare the quality of reconstruction with the linear prediction (LP) technique proposed earlier.
Abstract:
With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography.

In many practical situations, the available projection data is incomplete. One such case is when the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when the data is reconstructed using the convolution backprojection (CBP) method under the assumption that the projection data is complete. A number of techniques exist that complete the missing data and then apply CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions on the nature of the data and producing smooth extensions of the missing projection data.

In this paper, we propose to extend the LP approach to extrapolating truncated helical cone-beam data. In the truncated-data situation, the projections on the multi-row flat-panel detector have missing columns toward either end in the lateral direction. The available data from each detector row is modeled using a linear predictor and extrapolated, and the completed projection data is backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
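The row-wise LP extrapolation idea can be sketched in a few lines: fit a linear predictor to the available samples of one detector row and run it forward past the truncated edge. The predictor order and the synthetic row below are illustrative; the paper applies this per detector row before Katsevich backprojection.

```python
# LP extrapolation of a truncated 1D profile (one detector row) -- a sketch.
import numpy as np

def lp_coefficients(x, order):
    """Least-squares fit of x[n] ~ sum_k a[k] * x[n-k-1]."""
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    a, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    return a

def extrapolate(x, order, n_extra):
    a = lp_coefficients(x, order)
    out = list(x)
    for _ in range(n_extra):
        # Predict the next sample from the most recent `order` samples.
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)

# Truncated row: a smooth profile cut off at the right edge.
row = np.cos(np.linspace(0, 2.0, 200)) ** 2
completed = extrapolate(row, order=10, n_extra=40)
print(completed.shape)   # (240,): the row extended across the truncation
```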
Abstract:
Recently, a framework was given to construct low-ML-decoding-complexity space-time block codes (STBCs) via codes over the finite field F4. In this paper, we construct new full-diversity STBCs with the cubic shaping property and low ML decoding complexity via codes over F4 for N = 2^m transmit antennas, m > 1, and rates R > 1 complex symbols per channel use. The new codes have the least ML decoding complexity among all known codes for a large set of (N, R) pairs. The new full-rate codes of this paper (R = N) are not only information-lossless and fully diverse but also have the least known ML decoding complexity in the literature. For N ≥ 4, the new full-rate codes are the first instances of full-diversity, information-lossless STBCs with low ML decoding complexity. We also give a sufficient condition for STBCs obtained from codes over F4 to have the cubic shaping property, and a sufficient condition for any design to give rise to a full-diversity STBC when the symbols are encoded using rotated square QAM constellations.
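As background for the full-diversity claim, the standard test is the rank criterion: an STBC achieves full diversity if and only if the difference of every pair of distinct codeword matrices has full rank. The sketch below checks this for the classical 2x2 Alamouti code over 4-QAM; it illustrates only the criterion, not the F4-based constructions of the paper.

```python
# Rank-criterion check of full diversity for the Alamouti code (a sketch).
import itertools
import numpy as np

def alamouti(s1, s2):
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

qam4 = [x + 1j * y for x in (-1, 1) for y in (-1, 1)]
codewords = [alamouti(s1, s2) for s1 in qam4 for s2 in qam4]

# Every pairwise difference matrix must have rank 2 (full rank).
full_diversity = all(
    np.linalg.matrix_rank(X - Y) == 2
    for X, Y in itertools.combinations(codewords, 2)
)
print("full diversity:", full_diversity)   # True for the Alamouti code
```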
Abstract:
We propose a new time-domain method for efficient representation of the ECG and delineation of its component waves. The method is based on multipulse linear prediction (LP) coding, which is widely used in speech processing. The excitation to the LP synthesis filter consists of a few pulses defined by their locations and amplitudes. Based on the amplitudes and their distribution, the pulses are suitably combined to delineate the component waves. Beat-to-beat correlation in the ECG signal is used in QRS periodicity prediction. The method yields a data compression of 1 in 6 and reconstructs the signal with an NMSE of less than 5%.
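The multipulse LP idea can be sketched as follows: fit an LP model, keep only the largest residual samples as excitation pulses, and resynthesize through the all-pole LP filter. The signal below is a synthetic stand-in, and the order and pulse count are illustrative; the paper applies this to ECG with beat-to-beat QRS prediction, which is not reproduced here.

```python
# Multipulse LP coding sketch: analyse, sparsify the residual, resynthesize.
import numpy as np
from scipy.signal import lfilter

def lp_coefficients(x, order):
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    a, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    return a

x = np.sin(np.linspace(0, 8 * np.pi, 512)) * np.hanning(512)  # stand-in signal
order, n_pulses = 8, 64

a = lp_coefficients(x, order)
den = np.concatenate([[1.0], -a])          # prediction-error filter A(z)
residual = lfilter(den, [1.0], x)          # e[n] = A(z) x[n]

# Multipulse excitation: keep only the largest-magnitude residual samples.
pulses = np.zeros_like(residual)
idx = np.argsort(np.abs(residual))[-n_pulses:]
pulses[idx] = residual[idx]

x_hat = lfilter([1.0], den, pulses)        # resynthesize through 1 / A(z)
nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
print(f"NMSE = {nmse:.3f}")
```

Storing only the pulse locations and amplitudes, plus the LP coefficients, is what produces the compression; the pulse pattern itself is what the paper uses to delineate the component waves.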
Abstract:
Quinoxaline antibiotics (Fig. 1a, b) form a useful group of compounds for the study of drug–nucleic acid interactions1,2. They consist of a cross-bridged cyclic octadepsipeptide, variously modified, bearing two quinoxaline chromophores. These antibiotics intercalate bifunctionally into DNA2,3, probably via the narrow groove, forming a complex in which, most probably, two base pairs are sandwiched between the chromophores4,5. Depending on the nature of their sulphur-containing cross-bridge and modifications to their amino acid side chains, they display characteristic patterns of nucleotide sequence selectivity when binding to DNAs of different base composition and to synthetic polydeoxynucleotides4,6,7. This specificity has been tentatively ascribed to specific hydrogen-bonding interactions between functional groups in the DNA and complementary moieties on the peptide ring2,4,5. Variations in selectivity have been attributed both to changes in the conformation of the peptide backbone6 and to modifications of the cross-bridge7. These suggestions were made, however, in the absence of firm knowledge about the three-dimensional structure and conformation of the antibiotic molecules. We now report the X-ray structure analysis of TANDEM (des-N-tetramethyl triostin A) (Fig. 1c), a synthetic analogue of the antibiotic triostin A, which binds preferentially to alternating adenine-thymine sequences7. The X-ray structure provides a starting point for exploring the origin of this specificity and suggests possible models for the binding of other members of the quinoxaline series.
Abstract:
The spectral characteristics of a diode laser are significantly affected by interference between the laser diode output and the optical feedback from an external cavity. This optical feedback effect is of practical use for linewidth reduction, tuning, and sensing applications. A sensor based on this effect is attractive because of its simplicity, low cost, and compactness. Such optical sensors have so far been used in different configurations, for example to sense displacement induced by various parameters. In this paper, we report a compact optical sensor consisting of a semiconductor laser coupled to an external cavity. A theoretical analysis of self-mixing interference for optical sensing applications is given for the moderate-optical-feedback case and compared with our experimental observations. The experimental results are in good agreement with the power modulation simulated from self-mixing interference theory. Displacements as small as 10^-4 nm have been measured using this sensor. The developed sensor showed a fringe sensitivity of one fringe per 400 nm of displacement for a reflector distance of around 10 cm. The sensor has also been tested for measurements of magnetic-field- and temperature-induced displacements.
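For orientation, the standard self-mixing model gives a power modulation P ∝ cos(φ_F), where the feedback phase φ_F solves φ_0 = φ_F + C·sin(φ_F + arctan α). The sketch below solves this excess-phase equation numerically while sweeping the target displacement; the parameter values are illustrative, not those of the sensor described above.

```python
# Self-mixing interference fringes from the standard excess-phase equation.
import numpy as np
from scipy.optimize import fsolve

C, alpha = 2.0, 4.0          # moderate feedback level, linewidth factor
wavelength = 785e-9          # metres (illustrative)

displacement = np.linspace(0.0, 2e-6, 2000)      # target motion, 2 um
phi0 = 4.0 * np.pi * displacement / wavelength   # external round-trip phase

phiF = np.empty_like(phi0)
guess = phi0[0]
for i, p0 in enumerate(phi0):
    # Track the solution branch continuously: moderate feedback makes the
    # equation multivalued, and following the previous root reproduces the
    # sawtooth fringes with hysteresis.
    root = fsolve(lambda x: x + C * np.sin(x + np.arctan(alpha)) - p0, guess)
    phiF[i] = root[0]
    guess = phiF[i]

power = np.cos(phiF)                              # normalised power modulation
print("fringes:", int(np.sum(np.diff(np.sign(power)) != 0)) // 2)
```

One fringe per half wavelength of target displacement is the behaviour underlying the fringe sensitivity reported in the abstract.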
Abstract:
A numerically stable sequential primal-dual LP algorithm for reactive power optimisation (RPO) is presented in this article. The algorithm minimises the voltage stability index C2 [1] of all the load buses to improve the static voltage stability of the system. Real-time requirements, such as numerical stability and identification of the most effective subset of controllers (to curtail the number of controllers and their movement), can be handled effectively by the proposed algorithm. The algorithm has a natural characteristic of selecting the most effective subset of controllers, and hence of curtailing insignificant controllers, for improving the objective. Comparison with the transmission loss minimisation objective indicates that the most effective subset of controllers, and the solution identified by the static voltage stability improvement objective, are not the same as those of the transmission loss minimisation objective. The proposed algorithm is suitable for real-time application to the improvement of system static voltage stability.
Abstract:
Artificial neural networks (ANNs) have recently been proposed as an alternative method for solving certain traditional problems in power systems where conventional techniques have not achieved the desired speed, accuracy, or efficiency. This paper presents an application of ANNs whose aim is fast voltage stability margin assessment of a power network in an energy control centre (ECC), with a reduced number of appropriate inputs. The L-index is used for assessing the voltage stability margin. Investigations are carried out on the influence of the information encompassed in the input vector and the target output vector on the learning time and test performance of a multilayer perceptron (MLP) based ANN model. An LP-based algorithm for voltage stability improvement is used to generate meaningful training patterns in the normal operating range of the system. From the generated set, appropriate training patterns are selected based on a statistical correlation process, a sensitivity matrix approach, a contingency ranking approach, and the concentric relaxation method. Simulation results on a 24-bus EHV system, a 30-bus modified IEEE system, and an 82-bus Indian power network are presented for illustration purposes.
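The regression set-up described above (operating-point features in, voltage stability margin out) can be sketched with a small MLP. The data below is synthetic and the feature/target definitions are placeholders; in the paper the training patterns come from an LP-based voltage stability improvement algorithm and the target is derived from the L-index.

```python
# MLP regression sketch for a stability-margin mapping (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.9, 1.1, size=(1000, 12))     # stand-in bus measurements
y = 1.0 - X.min(axis=1) + 0.01 * rng.standard_normal(1000)  # stand-in margin

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"test R^2 = {mlp.score(X_te, y_te):.2f}")
```

The paper's input-selection question (correlation, sensitivity matrix, contingency ranking, concentric relaxation) amounts to choosing which columns of X to keep so that training time drops without hurting test performance.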
Abstract:
This paper proposes a new approach to solving the state estimation problem. The approach aims to produce a robust estimator that rejects bad data, even when the bad data are associated with leverage-point measurements. This is achieved by solving a sequence of linear programming (LP) problems. Optimization is carried out via a new algorithm that combines an "upper bound optimization technique" with "an improved algorithm for discrete linear approximation". In this formulation of the LP problem, constraints corresponding to bounds on the state variables are included in addition to the constraints corresponding to the measurement set, which makes the LP problem more effective at rejecting bad data, even when it is associated with leverage-point measurements. Results of the proposed estimator on the IEEE 39-bus system and a 24-bus EHV equivalent system of the southern Indian grid are presented for illustrative purposes.
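The LP formulation underlying robust estimators of this kind is the least-absolute-value (LAV) problem: minimize the sum of absolute residuals by splitting each residual into nonnegative parts. The sketch below uses a toy linear measurement model, not a power system Jacobian, and includes simple bounds on the state variables in the spirit of the paper's added constraints; the paper's specific sequential algorithm is not reproduced.

```python
# LAV state estimation as a single LP (a sketch of one step).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 20, 4
H = rng.standard_normal((m, n))                 # toy measurement matrix
x_true = np.array([1.0, -0.5, 0.25, 2.0])
z = H @ x_true + 0.01 * rng.standard_normal(m)  # measurements
z[3] += 5.0                                     # one gross bad-data point

# Variables: [x (bounded), u >= 0, v >= 0] with H x + u - v = z, so that
# u - v is the residual and sum(u + v) equals sum |z - H x| at the optimum.
c = np.concatenate([np.zeros(n), np.ones(2 * m)])
A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
bounds = [(-10, 10)] * n + [(0, None)] * (2 * m)   # state-variable bounds

res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds)
print("estimate:", np.round(res.x[:n], 3))   # the gross error is typically rejected
```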
Abstract:
Linear-phase (LP) finite impulse response (FIR) filters are widely used in many signal processing systems that are sensitive to phase distortion. In this article, we obtain a canonic lattice structure for an LP-FIR filter with a complex impulse response. This lattice structure is based on some novel lattice stages obtained from properties of symmetric polynomials, and it exploits the redundancy in the zeros of an LP-FIR filter.
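For context, the standard way to turn a direct-form FIR filter into a lattice is the step-down recursion, sketched below. For a linear-phase filter the coefficient symmetry forces the last reflection coefficient to magnitude one, so the standard recursion divides by zero; this degenerate case is exactly what makes special lattice stages necessary (the paper's novel stages are not reproduced here).

```python
# Step-down recursion: direct-form FIR -> lattice reflection coefficients.
import numpy as np

def step_down(a):
    """Reflection coefficients of A(z) = 1 + a[1] z^-1 + ... (a[0] = 1)."""
    a = np.array(a, dtype=float)
    ks = []
    while len(a) > 1:
        k = a[-1]                      # highest-order coefficient
        ks.append(k)
        if abs(abs(k) - 1.0) < 1e-12:
            raise ZeroDivisionError("|k| = 1: linear-phase degenerate case")
        a = (a[:-1] - k * a[::-1][:-1]) / (1.0 - k * k)
    return ks[::-1]

# Minimum-phase example: zeros inside the unit circle, so |k| < 1 throughout.
min_phase = np.poly([0.5, -0.4, 0.3 + 0.2j, 0.3 - 0.2j]).real
print(np.round(step_down(min_phase), 4))

# Linear-phase (symmetric) example: the recursion hits |k| = 1 and fails.
lin_phase = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
try:
    step_down(lin_phase)
except ZeroDivisionError as e:
    print("linear-phase filter:", e)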
Abstract:
Ranking problems have become increasingly important in machine learning and data mining in recent years, with applications ranging from information retrieval and recommender systems to computational biology and drug discovery. In this paper, we describe a new ranking algorithm that directly maximizes the number of relevant objects retrieved at the absolute top of the list. The algorithm is a support vector style algorithm, but due to the different objective, it no longer leads to a quadratic programming problem. Instead, the dual optimization problem involves l1,∞ constraints; we solve this dual problem using the recent l1,∞ projection method of Quattoni et al. (2009). Our algorithm can be viewed as the l∞-norm extreme of the lp-norm based algorithm of Rudin (2009) (albeit in a support vector setting rather than a boosting setting); thus we refer to it as the ‘Infinite Push’. Experiments on real-world data sets confirm the algorithm’s focus on accuracy at the absolute top of the list.
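The flavour of the l∞ "push" objective can be conveyed with a hinge surrogate of the fraction of positives ranked below the highest-scoring negative, minimised here by plain subgradient descent with a linear scorer. This is an illustrative gradient version on synthetic data, not the support-vector dual with l1,∞ projections solved in the paper.

```python
# Subgradient sketch of an l-infinity push loss for bipartite ranking.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(50, 5))    # relevant items
neg = rng.normal(0.0, 1.0, size=(200, 5))   # irrelevant items

w = np.zeros(5)
for _ in range(500):
    s_pos, s_neg = pos @ w, neg @ w
    margins = 1.0 - (s_pos[:, None] - s_neg[None, :])   # pairwise hinge args
    per_neg = np.maximum(margins, 0.0).mean(axis=0)      # loss per negative
    j = int(np.argmax(per_neg))                          # worst ("top") negative
    active = margins[:, j] > 0                           # violated pairs
    grad = (neg[j] - pos[active]).sum(axis=0) / len(pos) # subgradient in w
    w -= 0.01 * grad

top = np.concatenate([pos @ w, neg @ w]).argsort()[::-1][:10]
print("positives in top 10:", int(np.sum(top < len(pos))))
```

Taking the maximum over negatives, rather than averaging, is what concentrates the penalty at the absolute top of the list.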
Abstract:
The envelope protein (E1-E2) of hepatitis C virus (HCV) is a major component of the viral structure. The glycosylated envelope protein is considered to be important for initiation of infection by binding to cellular receptor(s) and is also known as one of the major antigenic targets of the host immune response. The present study aimed to identify mouse monoclonal antibodies that inhibit the binding of HCV virus-like particles to target cells. The first step in this direction was to generate recombinant HCV-like particles (HCV-LPs) specific for genotype 3a of HCV (prevalent in India), using the genes encoding the core, E1, and E2 envelope proteins in a baculovirus expression system. The purified HCV-LPs were characterized by ELISA and electron microscopy and were used to generate monoclonal antibodies (mAbs) in mice. Two monoclonal antibodies (E8G9 and H1H10), specific for the E2 region of the envelope protein of HCV genotype 3a, were found to reduce virus binding to Huh7 cells. However, the mAbs generated against HCV genotype 1b (D2H3, G2C7, E1B11) were not as effective. More importantly, mAb E8G9 showed significant inhibition of virus entry in the HCV JFH1 cell culture system. Finally, the epitopic regions on the E2 protein that bind the mAbs have also been identified. The results suggest a new therapeutic strategy and provide proof of concept that a mAb against the HCV-LP could be effective in preventing virus entry into liver cells and blocking HCV replication.
Abstract:
Analysis of high-resolution satellite images has been an important research topic for urban analysis. One of the important tasks in such analysis is automatic road network extraction. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions, and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. 1 m resolution IKONOS data has been used for the experiments.
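The preprocessing and mean-shift stages described above can be sketched with OpenCV as follows. The file name "image.tif" is a placeholder for a 3-channel IKONOS tile, the filter parameters are illustrative, and the level-set evolution and quality evaluation stages of the paper are not reproduced here.

```python
# Median filtering + mean-shift segmentation sketch for road extraction.
import cv2
import numpy as np

img = cv2.imread("image.tif")                 # placeholder input path
img = cv2.medianBlur(img, 5)                  # suppress small nonlinear noise

# Mean-shift filtering flattens texture while preserving road edges
# (spatial radius 15, colour radius 30 -- illustrative values).
shifted = cv2.pyrMeanShiftFiltering(img, 15, 30)

# Crude elongated-region mask: Otsu threshold plus a morphological opening
# with a line-like kernel, which keeps long thin structures such as roads.
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
roads = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("roads.png", roads)
```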