46 results for LINEAR-ANALYSIS
Abstract:
The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
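As a sketch of the certainty-equivalence LP mentioned above, in standard (assumed, not stated in the abstract) CDLP notation: with t(S) the time assortment S of the product set N is offered, R(S) the expected revenue rate and Q_i(S) the expected consumption rate of resource i under the choice probabilities, the CDLP reads

```latex
\begin{aligned}
\max_{t(S)\,\ge\, 0}\; & \sum_{S \subseteq N} R(S)\, t(S) \\
\text{s.t.}\; & \sum_{S \subseteq N} Q_i(S)\, t(S) \le c_i \quad \forall i, \\
& \sum_{S \subseteq N} t(S) \le T,
\end{aligned}
```

where c_i is the capacity of resource i and T the length of the booking horizon; the piecewise-linear approximation discussed in the abstract refines the bound this static LP provides.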
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem as an SMT problem over linear arithmetic only. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in several applications, as we illustrate with several examples. Nevertheless, we also present new techniques, based on the analysis of unsatisfiable cores, that allow one to efficiently prove unsatisfiability as well for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both the academic and the industrial world.
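A toy sketch of the linearization idea (illustrative only, not the paper's tool): case-splitting one variable over a finite integer domain turns a nonlinear constraint into a family of linear ones, each of which is easy to check. The constraint `x*y + y >= 7` and the bounds are made up for the example.

```python
# Illustrative sketch: prove satisfiability of the nonlinear constraint
# x*y + y >= 7 over [lo, hi]^2 by case-splitting x over its finite domain,
# which leaves a linear-arithmetic problem in y for each case.
import math

def satisfiable(lo, hi):
    """Return a model (x, y) of x*y + y >= 7 on [lo, hi]^2, or None."""
    for x in range(lo, hi + 1):          # case split: x := lo, ..., hi
        # With x fixed, (x + 1) * y >= 7 is linear in y.
        coeff = x + 1
        if coeff > 0:
            y = math.ceil(7 / coeff)     # smallest feasible integer y
            if lo <= y <= hi:
                return (x, y)
    return None

print(satisfiable(0, 3))   # -> (2, 3), since 2*3 + 3 = 9 >= 7
```

A real encoding would hand each linear case to an SMT solver instead of checking it by hand; the enumeration above only illustrates the case-splitting step.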
Abstract:
The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer's disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to the improvement of the diagnosis of AD and its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two human factors have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low-cost and without any side effects. The experimental results obtained were very satisfactory and promising for the early diagnosis and classification of AD patients.
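As a minimal sketch of one non-linear feature of the kind mentioned above, here is the Katz fractal dimension of a 1-D signal (one common fractal-dimension estimator; the abstract does not specify which variant was used, and the helper name is hypothetical):

```python
# Katz fractal dimension of a 1-D sequence: near 1 for a smooth ramp,
# larger for irregular signals.
import math

def katz_fd(x):
    n = len(x) - 1                                   # number of steps
    L = sum(abs(x[i + 1] - x[i]) for i in range(n))  # total path length
    d = max(abs(xi - x[0]) for xi in x[1:])          # max distance from start
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

line = [i * 0.5 for i in range(100)]   # straight ramp: d == L, dimension 1
print(round(katz_fd(line), 3))         # -> 1.0
```

In a speech pipeline such a feature would be computed per frame and fed, together with linear features, to the classifier.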
Abstract:
The relationship between source separation and blind deconvolution is well known: if a filtered version of an unknown i.i.d. signal is observed, temporal independence between samples can be used to retrieve the original signal, in the same manner as spatial independence is used for source separation. In this paper we propose the use of a Genetic Algorithm (GA) to blindly invert linear channels. The use of a GA is justified in the case of a small number of samples, where gradient-like methods fail because of poor estimation of the statistics.
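An illustrative toy (not the paper's algorithm): a tiny elitist GA that searches for an FIR inverse of a linear channel by maximizing the absolute excess kurtosis of its output, exploiting the i.i.d. non-Gaussian source assumption. The channel, population sizes and mutation scale are all made-up example values.

```python
# GA sketch for blind deconvolution: binary i.i.d. source, FIR channel,
# fitness = |excess kurtosis| of the deconvolved signal (maximal when the
# combined channel+inverse response is a single spike).
import random

random.seed(0)

def convolve(h, x):
    return [sum(h[j] * x[i - j] for j in range(len(h)) if i - j >= 0)
            for i in range(len(x))]

def abs_kurtosis(y):
    n = len(y)
    m = sum(y) / n
    var = sum((v - m) ** 2 for v in y) / n
    m4 = sum((v - m) ** 4 for v in y) / n
    return abs(m4 / var ** 2 - 3.0)

source = [random.choice((-1.0, 1.0)) for _ in range(1000)]   # i.i.d., |kurt| = 2
channel = [1.0, 0.5]                                          # unknown channel
observed = convolve(channel, source)

def fitness(w):
    return abs_kurtosis(convolve(w, observed))

pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for _ in range(25):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:6]                                           # keep the best
    pop = elite + [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
                   for _ in range(14)]                        # mutated offspring

best = max(pop, key=fitness)
print(fitness(best))   # should exceed the |kurtosis| of the raw observation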
Abstract:
Artifacts are present in most electroencephalography (EEG) recordings, making it difficult to interpret or analyze the data. In this paper a cleaning procedure based on a multivariate extension of empirical mode decomposition is used to improve the quality of the data. This is achieved by applying the cleaning method to raw EEG data. Then, a synchrony measure is applied to the raw and the cleaned data in order to compare the improvement of the classification rate. Two classifiers are used: linear discriminant analysis and neural networks. For both, the classification rate is improved by about 20%.
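A toy sketch of the effect being measured (synthetic 1-D features, not EEG, and a deliberately simplified discriminant): a linear discriminant's classification rate improves when artifact-like noise is reduced before classification. All numbers are made up for illustration.

```python
# Two classes around -1 and +1; "raw" data carries strong additive noise,
# "clean" data much less. A 1-D linear discriminant (threshold halfway
# between class means) is scored on each.
import random

random.seed(1)

def make_data(noise):
    xs, ys = [], []
    for label in (0, 1):
        mu = -1.0 if label == 0 else 1.0
        for _ in range(200):
            xs.append(mu + random.gauss(0, 0.5) + random.gauss(0, noise))
            ys.append(label)
    return xs, ys

def lda_accuracy(xs, ys):
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    thr = (m0 + m1) / 2
    return sum((x > thr) == bool(y) for x, y in zip(xs, ys)) / len(ys)

raw = lda_accuracy(*make_data(noise=2.0))     # artifact-laden
clean = lda_accuracy(*make_data(noise=0.2))   # after cleaning
print(raw, clean)
```

The real study uses multichannel synchrony features and two classifiers, but the raw-vs-clean comparison has the same structure.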
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions are rigorously studied in great detail on a straightforward basis using a considerable variety of Quantum Dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections of a triatomic benchmark reaction, the gas-phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that both strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations offer advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches offer much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies or the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
Abstract:
Linear spaces consisting of σ-finite probability measures and infinite measures (improper priors and likelihood functions) are defined. The commutative group operation, called perturbation, is the updating given by Bayes' theorem; the inverse operation is the Radon-Nikodym derivative. Bayes spaces of measures are sets of classes of proportional measures. In this framework, basic notions of mathematical statistics get a simple algebraic interpretation. For example, exponential families appear as affine subspaces with their sufficient statistics as a basis. Bayesian statistics, in particular some well-known properties of conjugate priors and likelihood functions, are revisited and slightly extended.
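A sketch of the group operation in the usual Bayes-space notation (symbols assumed, not given in the abstract): since measures are identified up to proportionality, for densities f and g the perturbation and its inverse are

```latex
(f \oplus g)(x) \;\propto\; f(x)\, g(x), \qquad
(f \ominus g)(x) \;\propto\; \frac{f(x)}{g(x)},
```

so that Bayes' theorem reads posterior = prior ⊕ likelihood, and the inverse operation ⊖ is the Radon-Nikodym derivative of one measure with respect to the other.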
Abstract:
Correspondence regarding the article by R. Giannetti, published ibid., vol. 49, pp. 87-88.
Abstract:
In this paper we develop a new linear approach to identify the parameters of a moving average (MA) model from the statistics of the output. First, we show that, under some constraints, the impulse response of the system can be expressed as a linear combination of cumulant slices. Then, this result is used to obtain a new well-conditioned linear method to estimate the MA parameters of a non-Gaussian process. The proposed method presents several important differences with existing linear approaches. The linear combination of slices used to compute the MA parameters can be constructed from different sets of cumulants of different orders, providing a general framework where all the statistics can be combined. Furthermore, it is not necessary to use second-order statistics (the autocorrelation slice), and therefore the proposed algorithm still provides consistent estimates in the presence of colored Gaussian noise. Another advantage of the method is that, while most linear methods developed so far give totally erroneous estimates if the order is overestimated, the proposed approach does not require a previous estimation of the filter order. The simulation results confirm the good numerical conditioning of the algorithm and the improvement in performance with respect to existing methods.
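A simple instance of the slice idea (the classical single-slice c(q,k) formula, a special case of the linear combinations of slices the abstract generalizes): for an MA(q) process driven by i.i.d. non-Gaussian noise with b_0 = 1, the coefficients satisfy b_k = c3(q,k) / c3(q,0), where c3 is the third-order cumulant of the output. The coefficients and noise law below are example choices.

```python
# Recover MA(2) coefficients from one third-order cumulant slice.
import random

random.seed(2)

N, b = 100000, [1.0, 0.9, 0.4]                           # true MA(2) model
e = [random.expovariate(1.0) - 1.0 for _ in range(N)]    # skewed, zero-mean
x = [sum(b[i] * e[t - i] for i in range(len(b)))
     for t in range(len(b), N)]                          # MA(2) output

def c3(t1, t2):
    """Sample third-order cumulant E[x_t x_{t+t1} x_{t+t2}] (zero-mean x)."""
    m = len(x) - max(t1, t2)
    return sum(x[t] * x[t + t1] * x[t + t2] for t in range(m)) / m

q = 2
est = [c3(q, k) / c3(q, 0) for k in range(q + 1)]
print([round(v, 2) for v in est])   # close to [1.0, 0.9, 0.4]
```

Note that no autocorrelation is used, which is why such estimates survive additive colored Gaussian noise (whose third-order cumulants vanish).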
Abstract:
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
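A minimal numeric sketch of the recasting idea (hypothetical rate constants, and a Michaelis-Menten rate as the simplest saturable example): promoting the denominator to an auxiliary variable z = Km + S turns the saturable rate into an exact GMA power-law product.

```python
# Recasting a saturable rate into GMA (power-law) form via an auxiliary
# variable: v = Vmax*S/(Km+S) becomes v = Vmax * S^1 * z^(-1) with z = Km + S.
V, Km = 2.0, 0.5          # assumed example constants

def v_kinetic(S):
    return V * S / (Km + S)

def v_gma(S):
    z = Km + S                    # auxiliary variable (dz/dt = dS/dt)
    return V * S ** 1 * z ** -1   # pure power-law product

print(all(abs(v_kinetic(S) - v_gma(S)) < 1e-12
          for S in (0.1, 1.0, 10.0)))   # -> True
```

The recast model has more variables but a canonical structure, which is what the global-optimization algorithms exploit.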
Abstract:
This paper describes the fluctuations of temporal criteria dynamics in the context of professional sport. Specifically, we try to verify the underlying deterministic patterns in the outcomes of professional basketball players. We use a longitudinal approach based on the analysis of the outcomes of 94 basketball players over ten years, covering practically the players' entire career development. Time series were analyzed with techniques derived from nonlinear dynamical systems theory. These techniques analyze the underlying patterns in outcomes without previous shape assumptions (linear or nonlinear), and they are capable of detecting an intermediate situation between randomness and determinism, called chaos, so they are very useful for the study of dynamic criteria in organizations. We have found that most players (88.30%) have a deterministic pattern in their outcomes, and most cases are chaotic (81.92%). Players with chaotic patterns have higher outcomes than players with linear patterns. Moreover, players in power forward and center positions achieve better results than other players. The high number of chaotic patterns found suggests caution when appraising individual outcomes, when coaches try to find the appropriate combination of players to design a competitive team, and in other personnel decisions. Management efforts must be made to assume this uncertainty.
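A toy sketch of one standard chaos diagnostic of the kind such analyses rely on (the abstract does not name its specific techniques): the largest Lyapunov exponent of the logistic map, positive in the chaotic regime and negative in a periodic one.

```python
# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
# estimated as the time average of log|f'(x)| along the orbit.
import math

def lyapunov(r, x0=0.3, n=50000, burn=1000):
    x = x0
    for _ in range(burn):                         # discard transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))     # log |f'(x)|
        x = r * x * (1 - x)
    return acc / n

print(lyapunov(4.0))   # chaotic regime: positive, near ln 2 ~ 0.693
print(lyapunov(3.2))   # stable period-2 regime: negative
```

A positive exponent signals the "intermediate situation between randomness and determinism" the abstract calls chaos: deterministic dynamics with sensitive dependence on initial conditions.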
Abstract:
Dual-trap optical tweezers are often used in high-resolution measurements in single-molecule biophysics. Such measurements can be hindered by the presence of extraneous noise sources, the most prominent of which is the coupling of fluctuations along different spatial directions, which may affect any optical-tweezers setup. In this article we analyze, from both the theoretical and the experimental points of view, the most common source of these couplings in dual-trap optical-tweezers setups: the misalignment of traps and tether. We give criteria to distinguish different kinds of misalignment, to estimate their quantitative relevance and to include them in the data analysis. The experimental data are obtained in a, to our knowledge, novel dual-trap optical-tweezers setup that directly measures forces. For the case in which misalignment is negligible, we provide a method to measure the stiffness of traps and tether based on variance analysis; this method can be seen as a calibration technique valid beyond the linear trap region. Our analysis is then employed to measure the persistence length of dsDNA tethers of three different lengths spanning two orders of magnitude. The effective persistence length of such tethers is shown to decrease with the contour length, in accordance with previous studies.
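A minimal sketch of the equipartition idea underlying variance-based calibration (synthetic Gaussian fluctuations and an assumed stiffness value, not the paper's data): in a harmonic trap, k_B T = k ⟨x²⟩, so the stiffness follows from the position variance alone.

```python
# Recover trap stiffness from simulated bead-position fluctuations
# via equipartition: k = k_B * T / var(x).
import random

random.seed(3)
kBT = 4.11e-21                       # J, thermal energy near room temperature
k_true = 2.0e-5                      # N/m, assumed trap stiffness
sigma = (kBT / k_true) ** 0.5        # equilibrium std of bead position (m)

x = [random.gauss(0.0, sigma) for _ in range(200000)]
mean = sum(x) / len(x)
var = sum(v * v for v in x) / len(x) - mean ** 2
k_est = kBT / var
print(k_est)                         # close to k_true
```

The paper's variance analysis generalizes this to the coupled trap-tether-trap system; the one-bead harmonic case above is only the simplest instance.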
Abstract:
It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and independent variables are components, we propose a transformation of the components emphasizing their role as dependent and independent variables. A simple linear regression can then be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time-budgets illustrates the method and the graphical features.
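A toy sketch of the transform-then-regress idea (synthetic two-part compositions and a log-ratio transform as one plausible choice; the abstract does not specify its exact transformation): after mapping compositions out of the simplex with a log-ratio, ordinary least squares applies directly.

```python
# Regress one two-part composition on another in log-ratio coordinates.
import math, random

random.seed(4)

def alr(p):                      # additive log-ratio of a 2-part composition
    return math.log(p[0] / p[1])

def inv_alr(y):                  # back to the simplex
    e = math.exp(y)
    return (e / (1 + e), 1 / (1 + e))

# synthetic pairs: linear relation v = 0.7*u + 0.3 in log-ratio space
data = []
for _ in range(200):
    u = random.uniform(-2, 2)
    v = 0.7 * u + 0.3 + random.gauss(0, 0.1)
    data.append((inv_alr(u), inv_alr(v)))

xs = [alr(x) for x, _ in data]
ys = [alr(y) for _, y in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
intercept = my - slope * mx
print(round(slope, 2), round(intercept, 2))   # near 0.7 and 0.3
```

Mapping the fitted line back through `inv_alr` yields a curve inside the simplex, which is what a ternary-diagram depiction of the regression would show.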