788 results for Recursive logit
Abstract:
The large number of vehicles on the road network can lead to congestion and safety problems. The road users we are interested in are truck drivers transporting goods, who may drive non-compliant vehicles or take prohibited roads to save time. The transport of dangerous goods is regulated, and access to certain places, especially bridges and tunnels, is forbidden to them. To help enforce the laws in force, there is a system of roadside inspections consisting of fixed structures and mobile patrols. The strategic deployment of these enforcement resources relies on knowledge of truck drivers' behaviour, which we study through the analysis of their route choices. A route choice problem can be modelled using discrete choice theory, itself founded on random utility theory. Handling this type of problem with this theory is complex. The models we use are such that we have to deal with correlation problems, since several routes are likely to share arcs. Moreover, since we work on the Quebec road network, the route choice is made from a set of routes whose number is potentially infinite if routes containing loops are considered. Finally, studying the choices made by a human is not trivial. With the chosen route choice model, we can derive an expression for the probability that a given route is taken by a truck driver. We approached this behavioural study by first describing the collected data. The questionnaire used by the inspectors collects data on the truck drivers, their vehicles and the inspection location. Describing the observed data is an essential step, because it presents clearly to a potential analyst what is available for studying truck drivers' behaviour. The data observed during an inspection constitute what we call an observation. With the network attributes, it is possible to model the Quebec road network. Selecting certain attributes makes it possible to specify the utility function and, consequently, the function used to compute the probabilities of route choices by a truck driver. It then becomes possible to study behaviour on the basis of observations. Those coming from the field do not currently give us enough information, and even with a well-specified model, parameter estimation is not possible. Estimation is based on the maximum likelihood method. We have the tool, but we lack the raw material, namely the observations, to continue the study. The idea is to proceed with synthetic observations. We first run estimations with complete observations and then, to get closer to real conditions, we continue with partial observations; this constitutes a major challenge. For partial observations, we propose to build on the results of (Bierlaire and Frejinger, 2008), combining them with those of (Fosgerau, Frejinger and Karlström, 2013).
Although synthetic in nature, the observations we use lead to results that allow us to make a concrete proposal that could help optimize the decisions of those responsible for roadside inspections. Indeed, we were able to estimate, on the real Quebec network and at a significance level of 0.05, the parameter values of a discrete route choice model, even when the observations are partial. These results lead to recommendations on changes to be made to the questionnaire used to collect data.
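For illustration, the Python sketch below shows how route choice probabilities and a maximum-likelihood objective can be computed in a simple path-based multinomial logit model; the linear-in-parameters utility and the two attributes (path length and number of restricted links) are hypothetical placeholders, not the specification estimated in the thesis.

    import numpy as np

    def choice_probabilities(paths, beta):
        # Multinomial logit over an enumerated choice set of paths:
        # P(i) = exp(V_i) / sum_j exp(V_j), with V_i = beta . x_i.
        V = np.array([beta @ x for x in paths])
        V -= V.max()                      # numerical stabilisation
        expV = np.exp(V)
        return expV / expV.sum()

    def log_likelihood(observations, beta):
        # observations: list of (choice_set, index_of_chosen_path) pairs
        return sum(np.log(choice_probabilities(cs, beta)[k])
                   for cs, k in observations)

    # Hypothetical observations; attributes = (length in km, restricted links used)
    obs = [([np.array([12.0, 1.0]), np.array([15.0, 0.0])], 1),
           ([np.array([8.0, 0.0]), np.array([7.5, 2.0])], 0)]
    print(log_likelihood(obs, np.array([-0.1, -2.0])))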
Abstract:
This paper considers the recursive optimal linear estimation problem for discrete-time linear systems in its most general formulation. The system is allowed to be in descriptor form, rectangular, time-variant, and to have correlated dynamical and measurement noises. We propose a new expression for the recursive filter equations that exhibits a simple and symmetric structure. Convergence of the associated Riccati recursion and stability properties of the steady-state filter are also established.
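As a point of reference for the filter recursion and the Riccati recursion mentioned above, here is a minimal Python sketch of one step of the standard Kalman filter for an ordinary state-space model with uncorrelated noises; the descriptor, rectangular and correlated-noise generalisations treated in the paper are not reproduced.

    import numpy as np

    def kalman_step(x, P, y, A, C, Q, R):
        # Prediction: propagate state estimate and covariance.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q              # Riccati-type covariance recursion
        # Measurement update with observation y.
        S = C @ P_pred @ C.T + R              # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = P_pred - K @ C @ P_pred
        return x_new, P_new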
Abstract:
We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority compared with alternative algorithms in the context of adaptive beamforming.
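For context, the following Python sketch implements one update of the standard exponentially-weighted RLS with forgetting factor, i.e. the baseline whose regularization fades geometrically; the array algorithm with time-varying regularization proposed in the abstract is not reproduced here.

    import numpy as np

    def rls_update(w, P, x, d, lam=0.99):
        # Standard exponentially-weighted RLS (forgetting factor lam).
        # P is the inverse of the weighted input correlation matrix,
        # typically initialised as (1/delta) * I with a small delta.
        x = x.reshape(-1, 1)
        k = P @ x / (lam + float(x.T @ P @ x))   # gain vector
        e = d - float(w.T @ x)                   # a priori estimation error
        w = w + k * e
        P = (P - k @ x.T @ P) / lam
        return w, P, e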
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipelined implementations in signal processing. The existence of a high-order stable recursive filter is first proved theoretically, and an upper bound on the highest order of stable filters is given. The minimum-order stable linear predictor is then obtained by solving an optimization problem. A genetic algorithm is adopted, since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
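A rough Python sketch of this kind of approach is given below: a plain genetic algorithm searches for the coefficients of a stable all-pole linear predictor of fixed order, with stability checked through the pole radii and unstable candidates penalised. The operators (truncation selection, blend crossover, Gaussian mutation) are generic choices, not the scheme used in the paper.

    import numpy as np

    def is_stable(a):
        # 1 + a1*z^-1 + ... + ap*z^-p is stable iff all roots of the
        # polynomial [1, a1, ..., ap] lie strictly inside the unit circle.
        return np.all(np.abs(np.roots(np.r_[1.0, a])) < 1.0)

    def prediction_error(a, x):
        # Mean squared one-step prediction error of the AR model
        # x[n] = -(a1*x[n-1] + ... + ap*x[n-p]) + e[n].
        p = len(a)
        e = [x[n] + a @ x[n - p:n][::-1] for n in range(p, len(x))]
        return float(np.mean(np.square(e)))

    def ga_predictor(x, order, pop=60, gens=200, sigma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.normal(0.0, 0.5, size=(pop, order))
        fitness = lambda a: prediction_error(a, x) if is_stable(a) else np.inf
        for _ in range(gens):
            fit = np.array([fitness(a) for a in P])
            elite = P[np.argsort(fit)[:pop // 2]]               # truncation selection
            parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
            P = parents.mean(axis=1) + rng.normal(0.0, sigma, (pop, order))
            P[0] = elite[0]                                     # elitism
        return min(P, key=fitness)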
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we impose the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and representing fewer explicit technological options, likely affect the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely leads to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented.
Abstract:
We develop a new iterative filter diagonalization (FD) scheme based on Lanczos subspaces and demonstrate its application to the calculation of bound-state and resonance eigenvalues. The new scheme combines the Lanczos three-term vector recursion for the generation of a tridiagonal representation of the Hamiltonian with a three-term scalar recursion to generate filtered states within the Lanczos representation. Eigenstates in the energy windows of interest can then be obtained by solving a small generalized eigenvalue problem in the subspace spanned by the filtered states. The scalar filtering recursion is based on the homogeneous eigenvalue equation of the tridiagonal representation of the Hamiltonian, and is simpler and more efficient than our previous quasi-minimum-residual filter diagonalization (QMRFD) scheme (H. G. Yu and S. C. Smith, Chem. Phys. Lett., 1998, 283, 69), which was based on solving for the action of the Green operator via an inhomogeneous equation. A low-storage method for the construction of Hamiltonian and overlap matrix elements in the filtered-basis representation is devised, in which contributions to the matrix elements are computed simultaneously as the recursion proceeds, allowing coefficients of the filtered states to be discarded once their contribution has been evaluated. Application to the HO2 system shows that the new scheme is highly efficient and can generate eigenvalues with the same numerical accuracy as the basic Lanczos algorithm.
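For reference, a minimal Python sketch of the three-term Lanczos vector recursion is given below; it only generates the tridiagonal representation T of a symmetric Hamiltonian H (whose eigenvalues approximate those of H), and does not include the scalar filtering recursion or the filtered-basis construction described in the abstract.

    import numpy as np

    def lanczos(H, v0, m):
        # Three-term recursion  beta_j v_{j+1} = H v_j - alpha_j v_j - beta_{j-1} v_{j-1}
        # (no re-orthogonalisation; adequate only as a short illustration).
        n = len(v0)
        V = np.zeros((m, n))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        V[0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = H @ V[j]
            alpha[j] = V[j] @ w
            w = w - alpha[j] * V[j]
            if j > 0:
                w = w - beta[j - 1] * V[j - 1]
            if j + 1 < m:
                beta[j] = np.linalg.norm(w)
                V[j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return T, V                # np.linalg.eigvalsh(T) gives the Ritz values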
Abstract:
Despite its widespread use, the Coale-Demeny model life table system does not capture the extensive variation in age-specific mortality patterns observed in contemporary populations, particularly those of the countries of Eastern Europe and populations affected by HIV/AIDS. Although relational mortality models, such as the Brass logit system, can identify these variations, these models show systematic bias in their predictive ability as mortality levels depart from the standard. We propose a modification of the two-parameter Brass relational model. The modified model incorporates two additional age-specific correction factors (γ(x) and θ(x)) based on mortality levels among children and adults, relative to the standard. Tests of predictive validity show deviations in age-specific mortality rates predicted by the proposed system to be 30-50 per cent lower than those predicted by the Coale-Demeny system and 15-40 per cent lower than those predicted using the original Brass system. The modified logit system is a two-parameter system, parameterized using values of l(5) and l(60).
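For orientation, here is a small Python sketch of the standard two-parameter Brass relational logit system on which the proposal builds; the additional age-specific correction factors γ(x) and θ(x) of the modified system are deliberately not reproduced here.

    import numpy as np

    def brass_logit_survivorship(l_std, alpha, beta):
        # Brass relational model:  Y(x) = alpha + beta * Y_s(x),
        # with  Y(x) = 0.5 * ln((1 - l(x)) / l(x))  the logit of survivorship.
        l_std = np.asarray(l_std, dtype=float)      # standard l_s(x), values in (0, 1)
        Y_s = 0.5 * np.log((1.0 - l_std) / l_std)
        Y = alpha + beta * Y_s
        return 1.0 / (1.0 + np.exp(2.0 * Y))        # back-transform to l(x)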
Abstract:
The central objective of this work is to assess the effect of using approximate measurements of exposures on the incidence of diseases resulting from environmental impacts. Since almost only disease incidence statistics are available, we develop models that allow us to address our problem starting from such statistics. In particular, we show that, when logit models are used, approximate measures of environmental impacts lead to a downward distortion of the slope coefficients of the fitted lines. We also study bounds for this distortion, using the Edgeworth approximation. Our results also allowed us to outline several scenarios for the design of field studies, which are indispensable for pursuing our objective further.
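A small Python simulation of the attenuation effect described above is sketched below: a logit model is fitted first with the true exposure and then with an error-contaminated exposure, and the slope estimated from the noisy exposure shrinks toward zero. The Newton-Raphson fit and the chosen parameter values are illustrative only; the Edgeworth-based bounds of the work are not reproduced.

    import numpy as np

    def fit_logit(x, y, iters=25):
        # Newton-Raphson fit of  logit P(y = 1) = b0 + b1 * x.
        X = np.column_stack([np.ones_like(x), x])
        b = np.zeros(2)
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ b))
            W = p * (1.0 - p)
            b = b + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
        return b

    rng = np.random.default_rng(1)
    n, b0, b1 = 20000, -1.0, 0.8
    x = rng.normal(size=n)                        # true exposure
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))).astype(float)
    x_err = x + rng.normal(scale=1.0, size=n)     # exposure measured with error
    print(fit_logit(x, y)[1], fit_logit(x_err, y)[1])   # second slope is attenuated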
Abstract:
Wythoff Queens is a classical combinatorial game related to very interesting mathematical results. One striking fact is that its P-positions are given by (⌊φn⌋, ⌊φ²n⌋) and (⌊φ²n⌋, ⌊φn⌋), where φ = (1+√5)/2. In this paper, we analyze a different version in which one player (Left) plays with a chess bishop and the other (Right) plays with a chess knight. The new game (called Chessfights) lacks the Beatty-sequence structure of P-positions found in Wythoff Queens. However, it is possible to formulate and prove general results about a recursive law that is a particular case of a Partizan Subtraction game.
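As a quick illustration of the Beatty-sequence result quoted above for Wythoff Queens (not for Chessfights), the Python sketch below computes P-positions by brute force from the game rules and checks them against the closed form (⌊φn⌋, ⌊φ²n⌋).

    from functools import lru_cache
    from math import floor

    PHI = (1 + 5 ** 0.5) / 2

    @lru_cache(maxsize=None)
    def is_p(a, b):
        # (a, b) is a P-position iff no legal queen move (shrink one
        # coordinate, or both by the same amount) reaches a P-position.
        a, b = sorted((a, b))
        moves = [(a - k, b) for k in range(1, a + 1)]
        moves += [(a, b - k) for k in range(1, b + 1)]
        moves += [(a - k, b - k) for k in range(1, a + 1)]
        return not any(is_p(*m) for m in moves)

    closed_form = {(floor(PHI * n), floor(PHI * PHI * n)) for n in range(7)}
    brute_force = {(a, b) for a in range(16) for b in range(a, 16) if is_p(a, b)}
    print(closed_form == brute_force)   # True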
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Traditionally, it is assumed that the population size of cities in a country follows a Pareto distribution. This assumption is typically supported by finding evidence of Zipf's Law. Recent studies question this finding, highlighting that, while the Pareto distribution may fit reasonably well when the data is truncated at the upper tail, i.e. for the largest cities of a country, the log-normal distribution may apply when all cities are considered. Moreover, conclusions may be sensitive to the choice of a particular truncation threshold, an issue so far overlooked in the literature. In this paper, we therefore reassess the city size distribution in relation to its sensitivity to the choice of truncation point. In particular, we look at US Census data and apply a recursive-truncation approach to estimate Zipf's Law, together with a non-parametric alternative test, considering each possible truncation point of the distribution of all cities. Results confirm the sensitivity of the estimates to the truncation point. Moreover, repeating the analysis over simulated data confirms the difficulty of distinguishing a Pareto tail from the tail of a log-normal and, in turn, of identifying the city size distribution as a false or a weak Pareto law.
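The Python sketch below illustrates a generic recursive-truncation estimation of the Zipf exponent: a rank-size regression is re-run while the sample is truncated at successively lower thresholds. The Gabaix-Ibragimov 0.5 rank shift is a common small-sample correction and stands in for, rather than reproduces, the exact tests used in the paper.

    import numpy as np

    def zipf_exponents(sizes):
        # Rank-size regression  log(rank - 0.5) = c - b * log(size),
        # re-estimated keeping only the k largest cities, for k = 3 ... N.
        s = np.sort(np.asarray(sizes, dtype=float))[::-1]
        ranks = np.arange(1, len(s) + 1)
        out = []
        for k in range(3, len(s) + 1):
            y = np.log(ranks[:k] - 0.5)
            x = np.log(s[:k])
            b, c = np.polyfit(x, y, 1)
            out.append((k, -b))          # Zipf's Law corresponds to an exponent near 1
        return out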
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
Solving multi-stage oligopoly models by backward induction can easily become a complex task when firms are multi-product and demands are derived from a nested logit framework. This paper shows that, under the assumption that within-segment firm shares are equal across segments, the analytical expression for equilibrium profits can be substantially simplified. The size of the error arising when this condition does not hold perfectly is also computed. Numerical examples show that the error is generally rather small. Using this assumption therefore makes it possible to gain analytical tractability in a class of models that has been used to address relevant policy questions, such as firm entry into an industry or the relation between competition and location. The simplifying approach proposed in this paper is aimed at helping to improve this type of model so as to reach more accurate recommendations.
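To make the objects referred to above concrete, the Python sketch below computes segment shares and within-segment shares in a standard nested logit demand system; the utilities and nesting parameters in the example are hypothetical, and the paper's equal-within-segment-shares simplification is not imposed.

    import numpy as np

    def nested_logit_shares(V_by_segment, lambdas):
        # Within-segment shares:  P(j | g) = exp(V_j / lam_g) / sum_k exp(V_k / lam_g)
        # Segment shares:         P(g) = exp(lam_g * I_g) / sum_h exp(lam_h * I_h)
        # Inclusive values:       I_g = log sum_k exp(V_k / lam_g)
        lambdas = np.asarray(lambdas, dtype=float)
        I = np.array([np.log(np.sum(np.exp(np.asarray(V) / lam)))
                      for V, lam in zip(V_by_segment, lambdas)])
        P_seg = np.exp(lambdas * I) / np.sum(np.exp(lambdas * I))
        within = [np.exp(np.asarray(V) / lam) / np.sum(np.exp(np.asarray(V) / lam))
                  for V, lam in zip(V_by_segment, lambdas)]
        return P_seg, within     # the product P_seg[g] * within[g][j] gives P(j)

    # Hypothetical example: two segments with three products each.
    print(nested_logit_shares([[1.0, 0.5, 0.2], [0.8, 0.8, 0.1]], [0.7, 0.7]))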