914 results for Projections onto convex sets
Abstract:
This paper presents several algorithms for joint estimation of the target number and state in a time-varying scenario. Building on the results presented in [1], which considers estimation of the target number only, we assume that not only the target number, but also their state evolution must be estimated. In this context, we extend to this new scenario the Rao-Blackwellization procedure of [1] to compute Bayes recursions, thus defining reduced-complexity solutions for the multi-target set estimator. A performance assessment is finally given both in terms of Circular Position Error Probability - aimed at evaluating the accuracy of the estimated track - and in terms of Cardinality Error Probability, aimed at evaluating the reliability of the target number estimates.
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with its contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data.
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
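As a minimal sketch of the singular-value decomposition underlying any biplot (not the paper's standard-biplot scaling, whose point weights and contributions are not reproduced here), the following Python fragment computes row and column coordinates whose scalar products recover the centred data; the example matrix and the `alpha` scaling parameter are illustrative assumptions:

```python
import numpy as np

def biplot_coords(X, alpha=1.0):
    """Row/column biplot coordinates for a data matrix X.

    alpha=1 puts all singular-value weight on the rows (row-principal
    scaling); alpha=0 puts it on the columns.
    """
    Xc = X - X.mean(axis=0)                     # centre each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U * s**alpha                            # row points
    G = Vt.T * s**(1 - alpha)                   # column points
    return F, G

X = np.array([[2., 0., 1.],
              [0., 1., 3.],
              [1., 2., 0.],
              [3., 1., 2.]])
F, G = biplot_coords(X)
# With all components retained, projections of row points onto the
# column vectors reproduce the centred data exactly.
print(np.allclose(F @ G.T, X - X.mean(axis=0)))  # True
```

Keeping only the first two columns of `F` and `G` gives the usual two-dimensional biplot, in which the same scalar products approximate the centred data in a least-squares sense.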
Abstract:
The adsorption of As(V) onto α-Al2O3 was investigated at 25, 50 and 70 °C using batch adsorption experiments. Results indicate that As is strongly adsorbed at low pH and gets progressively released to the fluid with increasing pH above 7. At any pH, increasing temperature favors aqueous species of As over surface species. Surface complexation constants were determined at the experimental temperatures by fitting the adsorption data. Adsorption reactions were then converted to semi-isocoulombic reactions, i.e., reactions with balanced like-charged aqueous species. Intrinsic adsorption constants of semi-isocoulombic reactions change linearly when plotted against inverse temperature, suggesting that the heat capacity of these reactions remains constant over the temperature range considered. This permitted thermodynamic parameters of intrinsic surface complexation constants to be determined. Changes in surface complexation constants result in a change in the surface speciation with increasing temperature. This change is similar to the one observed for aqueous species, i.e., increasing temperature favors less negatively charged species below a pH of 9 and more negatively charged species above a pH of 10. Comparison with the stability of As surface complexes with Fe suggests that surface complexes with Al are more stable.
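The linear dependence of log K on inverse temperature described above is the standard van't Hoff extrapolation: fitting ln K against 1/T yields the reaction enthalpy from the slope and the entropy from the intercept. A hedged sketch follows; the numeric constants are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical intrinsic constants at the three experimental temperatures
# (25, 50, 70 degrees C converted to kelvin); values are made up.
T = np.array([298.15, 323.15, 343.15])
ln_K = np.array([4.20, 3.60, 3.15])

# Linear fit ln K = -(dH/R) * (1/T) + dS/R  (constant heat capacity).
slope, intercept = np.polyfit(1.0 / T, ln_K, 1)
dH = -R * slope        # enthalpy of reaction from the slope
dS = R * intercept     # entropy of reaction from the intercept
print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With these invented values ln K increases with 1/T, so the fitted enthalpy comes out negative (exothermic); the point of the isocoulombic conversion in the paper is precisely to make such a constant-heat-capacity linear fit defensible.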
Abstract:
The visual cortex in each hemisphere is linked to the opposite hemisphere by axonal projections that pass through the splenium of the corpus callosum. Visual-callosal connections in humans and macaques are found along the V1/V2 border where the vertical meridian is represented. Here we identify the topography of V1 vertical midline projections through the splenium within six human subjects with normal vision using diffusion-weighted MR imaging and probabilistic diffusion tractography. Tractography seed points within the splenium were classified according to their estimated connectivity profiles to topographic subregions of V1, as defined by functional retinotopic mapping. First, we report a ventral-dorsal mapping within the splenium with fibers from ventral V1 (representing the upper visual field) projecting to the inferior-anterior corner of the splenium and fibers from dorsal V1 (representing the lower visual field) projecting to the superior-posterior end. Second, we also report an eccentricity gradient of projections from foveal-to-peripheral V1 subregions running in the anterior-superior to posterior-inferior direction, orthogonal to the dorsal-ventral mapping. These results confirm and add to a previous diffusion MRI study (Dougherty et al., 2005) which identified a dorsal/ventral mapping of human splenial fibers. These findings yield a more detailed view of the structural organization of the splenium than previously reported and offer new opportunities to study structural plasticity in the visual system.
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ on the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex on the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
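For orientation, the classical Smith's rule that the paper's dynamic indices generalize can be sketched in a few lines: in the linear-cost case, jobs are sequenced in decreasing order of holding-cost rate per unit of expected processing time. The job data below are illustrative assumptions, not from the paper:

```python
# Classical Smith's rule (1956) for linear holding costs: serve jobs in
# decreasing order of the index c_i / p_i (cost rate over expected
# processing time). The paper's extended-class indices reduce to this
# when cost rates are linear in the number of jobs.
def smith_order(jobs):
    """jobs: list of (name, cost_rate, expected_processing_time)."""
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

jobs = [("A", 4.0, 2.0),   # index 2.0
        ("B", 9.0, 3.0),   # index 3.0
        ("C", 1.0, 1.0)]   # index 1.0
print([name for name, _, _ in smith_order(jobs)])  # ['B', 'A', 'C']
```

In the convex-cost model of the paper the index is instead attached to an extended class (a class together with its current queue length), so the priority ordering can change dynamically as jobs complete.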
Abstract:
In the homogeneous case of one-dimensional objects, we show that any preference relation that is positive and homothetic can be represented by a quantitative utility function and a unique bias. This bias may favor or disfavor the preference for an object. In the first case, preferences are complete but not transitive and an object may be preferred even when its utility is lower. In the second case, preferences are asymmetric and transitive but not negatively transitive and it may not be sufficient for an object to have a greater utility to be preferred. In this manner, the bias reflects the extent to which preferences depart from the maximization of a utility function.
Abstract:
The well-known Minkowski's $?(x)$ function is presented as the asymptotic distribution function of an enumeration of the rationals in (0,1] based on their continued fraction representation. Besides, the singularity of $?(x)$ is clearly proved in two ways: by exhibiting a set of measure one in which $?'(x) = 0$; and again by actually finding a set of measure one which is mapped onto a set of measure zero and vice versa. These sets are described by means of metrical properties of different systems for real number representation.
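The connection between $?(x)$ and continued fractions admits a compact closed form: for $x = [0; a_1, a_2, \ldots]$, $?(x) = 2\sum_{k \ge 1} (-1)^{k+1} 2^{-(a_1 + \cdots + a_k)}$. A small sketch evaluating this formula for rationals in (0,1] with exact arithmetic (an illustration of the standard definition, not of the paper's enumeration argument):

```python
from fractions import Fraction

def continued_fraction(x):
    """Continued fraction digits [a1, a2, ...] of a rational x in (0, 1]."""
    a = []
    x = Fraction(x)
    while x:
        x = 1 / x
        q = x.numerator // x.denominator  # integer part
        a.append(q)
        x -= q                            # fractional remainder
    return a

def question_mark(x):
    """Minkowski's ?(x) via  ?(x) = 2 * sum_k (-1)^(k+1) 2^-(a1+...+ak)."""
    total, digit_sum, sign = Fraction(0), 0, 1
    for a in continued_fraction(x):
        digit_sum += a
        total += sign * Fraction(2, 2**digit_sum)
        sign = -sign
    return total

print(question_mark(Fraction(2, 5)))  # 3/8
```

As a quick sanity check, $1/3 = [0; 3]$ gives $?(1/3) = 2 \cdot 2^{-3} = 1/4$, and $?$ maps rationals to dyadic rationals, consistent with its singular, non-absolutely-continuous character.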
Abstract:
Constant interest rate (CIR) projections are often criticized on the grounds that they are inconsistent with the existence of a unique equilibrium in a variety of forward-looking models. This note shows how to construct CIR projections that are not subject to that criticism, using a standard New Keynesian model as a reference framework.
Abstract:
The network choice revenue management problem models customers as choosing from an offer-set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and approximated by a deterministic linear program called the CDLP which has an exponential number of columns. However, under the choice-set paradigm when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints, that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation and we believe is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
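To make the offer-set decision concrete, the following sketch brute-forces the single-period subproblem the abstract describes: choose the subset of products maximizing expected revenue under a choice model. The multinomial-logit weights, prices, and no-purchase weight are assumptions for illustration; the paper's CDLP/SDCP machinery addresses the dynamic network version, where exactly this kind of enumeration becomes intractable:

```python
from itertools import combinations

# Hypothetical instance: three products with prices and MNL preference
# weights; w0 is the no-purchase weight. All values are made up.
revenue = {"A": 100.0, "B": 80.0, "C": 60.0}
weight = {"A": 1.0, "B": 2.0, "C": 3.0}
w0 = 1.0

def expected_revenue(offer_set):
    """Expected revenue of offering a subset under the MNL choice model."""
    denom = w0 + sum(weight[p] for p in offer_set)
    return sum(revenue[p] * weight[p] / denom for p in offer_set)

products = list(revenue)
best = max((s for r in range(1, len(products) + 1)
            for s in combinations(products, r)),
           key=expected_revenue)
print(sorted(best), expected_revenue(best))
```

Note that offering everything is not optimal here: adding the cheap, popular product C dilutes demand away from the higher-priced products, which is the kind of trade-off the offer-set optimization captures.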