994 results for static models
Abstract:
This paper presents two strategies for upgrading set-up generation systems for tandem cold mills. Although these mills have been modernized mainly in response to quality requirements, upgrades may also be undertaken to replace pre-calculated reference tables. For this case, the Bryant and Osborn mill model without an adaptive technique is proposed. For a more demanding modernization, the Bland and Ford model with adaptation is recommended, although it requires more complex computational hardware. The advantages and disadvantages of the two systems are compared and discussed, and experimental results obtained from an industrial cold mill are presented.
Abstract:
Environmental change research often relies on simplistic, static models of human behaviour in social-ecological systems. This limits understanding of how social-ecological change occurs. Integrative, process-based behavioural models, which include feedbacks between action, and social and ecological system structures and dynamics, can inform dynamic policy assessment in which decision making is internalised in the model. These models focus on dynamics rather than states. They stimulate new questions and foster interdisciplinarity between and within the natural and social sciences.
Abstract:
Most statistical analysis, in theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they frequently fail to reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application to a small-sample, repeated-measures, normally distributed growth-curve data set is presented.
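To make the contrast concrete, the sketch below shows the simplest instance of the sequential Bayesian updating that dynamic models of this family build on: a univariate local-level dynamic linear model filtered by the standard Kalman recursions, in which the posterior for the current state is revised as each observation arrives. The variances, simulated data, and function names are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def local_level_filter(y, obs_var=1.0, state_var=0.5, m0=0.0, c0=10.0):
    """Kalman-filter recursions for a local-level dynamic linear model.

    State:       theta_t = theta_{t-1} + w_t,  w_t ~ N(0, state_var)
    Observation: y_t     = theta_t + v_t,      v_t ~ N(0, obs_var)
    Returns the filtered posterior means and variances of theta_t.
    """
    m, c = m0, c0                      # current posterior mean and variance
    means, variances = [], []
    for yt in y:
        r = c + state_var              # prior variance of theta_t given y_1..y_{t-1}
        k = r / (r + obs_var)          # Kalman gain
        m = m + k * (yt - m)           # updated mean after observing y_t
        c = (1 - k) * r                # updated variance
        means.append(m)
        variances.append(c)
    return np.array(means), np.array(variances)

# Illustrative data: a drifting level that a static (fixed-mean) model would miss.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.7, size=50))
y = truth + rng.normal(0.0, 1.0, size=50)
filtered_mean, filtered_var = local_level_filter(y)
print(filtered_mean[-5:])              # tracks the drift instead of averaging it away
```

A static model would fit a single fixed mean to the same series; the filtered mean instead tracks the drifting level, which is the individual-versus-aggregate point the abstract makes.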
Abstract:
In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs; non-static output quantities, input prices, and costs of adjustment; technological change; quasi-fixed inputs; and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and they provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
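As background for the dynamic extensions described above, here is a minimal sketch of the standard static, input-oriented, constant-returns-to-scale DEA efficiency measure, posed as a linear program and solved with scipy. The three-unit data set is invented for illustration and bears no relation to the Chilean department store data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Static input-oriented CRS (CCR) DEA efficiency score of unit j0.

    Envelopment form:  min theta
                       s.t. X @ lam <= theta * X[:, j0]   (inputs)
                            Y @ lam >= Y[:, j0]           (outputs)
                            lam >= 0
    X has shape (inputs, units); Y has shape (outputs, units).
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))        # decision vector [theta, lam]
    A_inputs = np.hstack((-X[:, [j0]], X))          # X @ lam - theta * x0 <= 0
    A_outputs = np.hstack((np.zeros((s, 1)), -Y))   # -Y @ lam <= -y0
    A_ub = np.vstack((A_inputs, A_outputs))
    b_ub = np.concatenate((np.zeros(m), -Y[:, j0]))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]                                 # 1.0 = efficient, < 1.0 = not

# Invented example: 2 inputs, 1 output, 3 stores. Stores 0 and 1 define the
# frontier; store 2 is dominated by a convex combination of them.
X = np.array([[2.0, 4.0, 3.0],
              [3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0]])
print([round(dea_input_efficiency(X, Y, j), 3) for j in range(3)])
```

The dynamic models in the paper augment this static program with adjustment costs and an optimal adjustment path; the sketch only shows the static baseline they are compared against.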
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
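The computational core the abstract refers to, solving a dynamic program to obtain choice probabilities, can be illustrated with a recursive-logit-style value function computed by backward induction on a tiny acyclic network. The network, utilities, and names below are invented for illustration; real route choice networks contain cycles, so the value function is then obtained with a fixed-point solve rather than a single backward pass.

```python
import math

# Invented toy network (acyclic). succ maps each node to its outgoing arcs;
# v[(k, a)] is the deterministic utility (negative cost) of moving k -> a.
succ = {'A': ['B', 'C'], 'B': ['C', 'D'], 'C': ['D'], 'D': []}
v = {('A', 'B'): -1.0, ('A', 'C'): -1.5,
     ('B', 'C'): -0.5, ('B', 'D'): -2.0, ('C', 'D'): -1.0}

def value_functions(succ, v, dest):
    """Backward induction for the expected maximum utility at each node:
    V(k) = log(sum_a exp(v(k, a) + V(a))), with V(dest) = 0.
    A single backward pass suffices because this toy network is acyclic."""
    V = {dest: 0.0}
    for k in ['C', 'B', 'A']:          # reverse topological order
        V[k] = math.log(sum(math.exp(v[(k, a)] + V[a]) for a in succ[k]))
    return V

V = value_functions(succ, v, 'D')

# Logit probability of choosing arc k -> a, given the value functions.
probs = {(k, a): math.exp(v[(k, a)] + V[a] - V[k])
         for k in ('A', 'B') for a in succ[k]}
print(V)       # value of continuing toward the destination from each node
print(probs)   # outgoing probabilities at each node sum to one
```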
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one’s situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as is an “economic” model in which job satisfaction is a measure of posterior human wealth. The “posterior choice” model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
This paper discusses the economic rationale for developing a system of social targets as a way for the federal government to increase the efficiency of the social resources transferred to municipalities. The paper develops several extensions of the principal-agent model, including static approaches with and without imperfect information. The results of the static models indicate that the usual targeting criteria, under which poorer localities receive more resources, can create adverse incentives for poverty eradication. We show that unconditional transfers from the federal government crowd out local social spending. The paper argues for contracts under which the greater the improvement in the chosen social indicator, the more resources the municipality receives. Introducing imperfect information into this model essentially penalizes the poor segments of areas whose governments prove less averse to poverty. The paper also addresses the problem of political favoritism, in which particular social groups receive more, or less, attention from local governments. The result is that social policies end up privileging certain sectors at the expense of others. With the establishment of social targets it is possible, if not to eliminate the problem, at least to create the right incentives for social spending to be distributed more equitably.
Abstract:
This paper discusses the economic rationale for developing a system of social targets as a way for the federal government to increase the efficiency of the social resources transferred to municipalities. The paper develops several extensions of the principal-agent model, including static approaches with and without imperfect information and dynamic approaches with perfect and imperfect contracts. The results of the static models indicate that the usual targeting criteria, under which poorer localities receive more resources, can create adverse incentives for poverty eradication. We also show that unconditional transfers from the federal government crowd out local social spending. The paper argues for contracts under which the greater the improvement in the chosen social indicator, the more resources the municipality receives. Introducing imperfect information into this model essentially penalizes the poor segments of areas whose governments prove less averse to poverty. The paper also addresses the problem of political favoritism, in which particular social groups receive more, or less, attention from local governments. The result is that social policies end up privileging certain sectors at the expense of others. With the establishment of social targets it is possible, if not to eliminate the problem, at least to create the right incentives for social spending to be distributed more equitably. We also develop dynamic models with different possibilities of renegotiation over time. We show that the best way to increase the allocative efficiency of the funds would be to create institutional mechanisms that rule out bilateral renegotiation. This optimal contract reproduces the multi-period sequence of targets and transfers found in the solution of the static model. However, this result disappears when we incorporate incomplete contracts. In that case, the ex ante inefficiencies created by the possibility of renegotiation must be weighed against the ex post inefficiencies created by not using the new information revealed over the course of the process. Finally, we introduce the possibility that the observed social outcome depends not only on the investment made but also on the presence of shocks. In this case, both the federal government and the municipality raise their social investment targets. Under linear contracts, negative shocks cause municipalities to receive fewer resources precisely in adverse situations. To get around this problem, we show the importance of using contracts based on performance comparison.
Abstract:
The present work develops a methodology for building 3D digital static models of petroleum reservoir analogues using LIDAR and GPR (georadar) technologies. The methodology is introduced as a new paradigm in outcrop studies, proposing a consistent way to integrate plani-altimetric data, geophysical data, and remote sensing products. It allows 2D interpretations to be validated against 3D data and complex depositional geometries to be visualized, including in immersive virtual reality environments. The work therefore reviews the relevant theory behind the two technologies and develops a case study using a TerraSIRch SIR System-3000 (Geophysical Survey Systems) and an HDS3000 (Leica Geosystems), integrating the two data sets in the GOCAD software. The studied outcrop is well exposed and is located in the southeast of the Bacia do Parnaíba, in the Parque Nacional da Serra das Confusões. The methodology covers every step of the process of building a 3D digital static model of a petroleum reservoir analogue, providing depositional geometry data at several scales for petroleum reservoir simulation.
Abstract:
This project was developed as a partnership between the Laboratory of Stratigraphical Analyses of the Geology Department of UFRN and the company Millennium Inorganic Chemicals Mineração Ltda. The company is located at the northern end of the Paraíba coast, in the municipality of Mataraca. Millennium's main prospected products are the heavy minerals ilmenite, rutile, and zircon present in the dune sands. These dunes are predominantly inactive and overlie the upper portion of the Barreiras Formation rocks. Mining is carried out with a dredge immersed in an artificial lake on the dunes. The dredge removes dune sand from the lake bottom (after dismantling the lake borders with water jets) and sends it through pipelines to the concentration plant, where the minerals are separated. The present work consisted of acquiring the external geometries of the dunes so that a 3D static model of these sedimentary deposits could be built, with emphasis on the behavior of the structural top of the Barreiras Formation rocks (the lower limit of the deposit). Knowledge of this surface is important for the company's mine planning, because a calculation error could bring the dredge too close to this limit, with the risk that rock fragments obstruct the dredge, causing financial losses both in equipment repair and in days of halted production. During the field campaigns (carried out in 2006 and 2007), topographic surveying with a total station and geodetic GPS was combined with shallow geophysical acquisition with GPR (Ground Penetrating Radar). Almost 10.4 km of topography and 10 km of GPR profiles were acquired. The geodetic GPS was used for georeferencing the data and for surveying a 630 m traverse in the 2007 campaign. GPR proved to be a reliable, ecologically clean, and fast acquisition method with low cost compared with traditional methods such as drilling. Its main advantage is that it provides continuous information on the upper surface of the Barreiras Formation rocks. The 3D static models were built from the acquired data using two specific 3D visualization software packages: GoCAD 2.0.8 and Datamine. 3D visualization allows a better understanding of the behavior of the Barreiras surface and makes several types of measurements and calculations possible, allowing the procedures used for mineral extraction to be carried out with greater safety.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The friction phenomenon is present in mechanical systems whenever two surfaces are in contact, and it can cause serious damage to structures. Understanding it in many dynamic problems has become a research target due to its nonlinear behavior. It is necessary to study thoroughly each friction model found in the literature, as well as the available nonlinear methods, in order to define which is most appropriate to the problem at hand. One of the most famous friction models is Coulomb friction, which is considered in the problems studied at the French research center Laboratoire de Mécanique des Structures et des Systèmes Couplés (LMSSC), where this research began. Regarding resolution methods, the Harmonic Balance Method is generally used. To expand the knowledge of friction models and nonlinear methods, a study was carried out to identify and examine potential methodologies that can be applied to the existing research lines at LMSSC and thereby obtain better final results. The identified friction models are divided into static and dynamic. The static models are the classical models, the Karnopp model, and the Armstrong model. The dynamic models are the Dahl model, the Bliman and Sorine model, and the LuGre model. Concerning nonlinear methods, temporal methods and approximate methods are studied. The friction models, analyzed with the help of the Matlab software, are verified against studies in the literature, demonstrating the effectiveness of the developed programs.
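As a concrete illustration of one of the dynamic models listed above, the sketch below steps the LuGre friction model with an explicit Euler scheme. The state and force equations are the standard LuGre formulation; the parameter values and the driving velocity are illustrative assumptions rather than values from the cited works, and Python is used here in place of the Matlab mentioned in the abstract.

```python
import math

# Illustrative LuGre parameters (assumed, not taken from the cited works).
SIGMA0 = 1e5      # bristle stiffness [N/m]
SIGMA1 = 300.0    # bristle damping [N s/m]
SIGMA2 = 0.4      # viscous friction coefficient [N s/m]
FC, FS = 1.0, 1.5 # Coulomb and stiction force levels [N]
VS = 0.01         # Stribeck velocity [m/s]

def lugre_step(v, z, dt):
    """One explicit-Euler step of the LuGre friction model.

    g(v)  = Fc + (Fs - Fc) * exp(-(v / vs)^2)        (Stribeck curve)
    dz/dt = v - sigma0 * |v| * z / g(v)              (bristle state)
    F     = sigma0 * z + sigma1 * dz/dt + sigma2 * v (friction force)
    Returns (friction force, updated bristle deflection z).
    """
    g = FC + (FS - FC) * math.exp(-(v / VS) ** 2)
    zdot = v - SIGMA0 * abs(v) * z / g
    z = z + zdot * dt
    return SIGMA0 * z + SIGMA1 * zdot + SIGMA2 * v, z

# Drive the model with a slow sinusoidal velocity to expose the hysteresis
# and pre-sliding behaviour that a static Coulomb model cannot reproduce.
dt, z = 1e-5, 0.0
for i in range(200_000):                 # 2 s of simulated time
    velocity = 0.02 * math.sin(2.0 * math.pi * i * dt)
    force, z = lugre_step(velocity, z, dt)
print(f"friction force at t = 2 s: {force:.3f} N")
```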
Abstract:
OBJECTIVE: To review systematically and critically the evidence used to derive estimates of the costs and cost effectiveness of chlamydia screening. METHODS: Systematic review. A search of 11 electronic bibliographic databases from the earliest date available to August 2004 using keywords including chlamydia, pelvic inflammatory disease, economic evaluation, and cost. We included studies of chlamydia screening in males and/or females over 14 years, including studies of diagnostic tests, contact tracing, and treatment as part of a screening programme. Outcomes included cases of chlamydia identified and major outcomes averted. We assessed methodological quality and the modelling approach used. RESULTS: Of 713 identified papers we included 57 formal economic evaluations and two cost studies. Most studies found chlamydia screening to be cost effective, partner notification to be an effective adjunct, and testing with nucleic acid amplification tests and treatment with azithromycin to be cost effective. Methodological problems limited the validity of these findings: most studies used static models that are inappropriate for infectious diseases; restricted outcomes were used as a basis for policy recommendations; and high estimates of the probability of chlamydia-associated complications might have overestimated cost effectiveness. Two high quality dynamic modelling studies found opportunistic screening to be cost effective, but poor reporting and uncertainty about complication rates make interpretation difficult. CONCLUSION: The inappropriate use of static models to study interventions to prevent a communicable disease means that uncertainty remains about whether chlamydia screening programmes are cost effective or not. The results of this review can be used by health service managers in the allocation of resources, and by health economists and other researchers who are considering further research in this area.
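To make the review's methodological point concrete: in a dynamic transmission model, screening lowers prevalence and therefore future incidence, a feedback that static models omit. The sketch below folds screening into the recovery rate of a deliberately minimal SIS-type model; the structure and all rates are invented for illustration and are far simpler than the dynamic models the review evaluates.

```python
def sis_prevalence(beta, gamma, screen_rate, years, dt=0.01, i0=0.05):
    """Minimal SIS dynamic transmission model with screening.

    dI/dt = beta * (1 - I) * I - (gamma + screen_rate) * I
    Screening adds to the effective recovery rate, which lowers prevalence
    and, through the beta * (1 - I) * I term, also lowers future incidence.
    That feedback is exactly what a static model leaves out. All rates are
    per year and purely illustrative.
    """
    i = i0
    for _ in range(int(years / dt)):
        di = beta * (1.0 - i) * i - (gamma + screen_rate) * i
        i += di * dt
    return i

# Invented rates: the same population with and without screening effort.
beta, gamma = 2.0, 1.0
print(f"prevalence, no screening:   {sis_prevalence(beta, gamma, 0.0, 20):.3f}")
print(f"prevalence, with screening: {sis_prevalence(beta, gamma, 0.5, 20):.3f}")
```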