977 results for Binary choice models
Abstract:
Hazard models, also known as time-to-failure or duration models, are used to determine which independent variables have the greatest explanatory power in predicting corporate bankruptcy. They are an alternative to the binary logit and probit models and to discriminant analysis. Duration models should be more efficient than discrete-choice models, since they take survival time into account when estimating the instantaneous probability of failure for a set of observations on an independent variable. Discrete-choice models typically ignore time-to-failure information and provide only an estimate of failure within a given time interval. The question discussed in this paper is how to use hazard models to project default rates and to build migration matrices conditioned on the state of the economy. Conceptually, the model is closely analogous to the historical default and mortality rates used in the credit literature. The semiparametric Cox proportional hazards model is tested on Brazilian non-financial firms, and the probability of default is found to fall sharply after the third year following loan issuance. The mean and standard deviation of default probabilities are also found to be affected by economic cycles. We discuss how the Cox proportional hazards model can be incorporated into the four best-known credit risk management models in current use (CreditRisk+, KMV, CreditPortfolio View and CreditMetrics), and the improvements resulting from this incorporation.
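The discrete-time counterpart of the hazard idea above is easy to sketch: the empirical hazard in year t is the number of defaults in year t divided by the number of loans still at risk at the start of year t. A minimal sketch in Python with made-up data (not the paper's sample):

```python
# Empirical discrete-time hazard rates by year since loan issuance.
# Illustrative only: the observations below are invented.

def hazard_rates(observations, horizon):
    """observations: list of (exit_year, defaulted) pairs, where
    exit_year is the 1-based year the firm defaulted or was censored."""
    rates = []
    for t in range(1, horizon + 1):
        at_risk = sum(1 for y, _ in observations if y >= t)
        defaults = sum(1 for y, d in observations if y == t and d)
        rates.append(defaults / at_risk if at_risk else 0.0)
    return rates

# Hypothetical loan portfolio: (exit year, defaulted?)
obs = [(1, True), (2, True), (2, False), (3, True),
       (4, False), (5, False), (5, False), (5, True)]
print(hazard_rates(obs, 5))
```

A Cox model additionally relates this hazard to firm-level covariates through a proportional multiplier; the sketch above only recovers the baseline time pattern.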
Abstract:
A Bayesian binary regression model is developed to predict in-hospital death in patients suffering acute myocardial infarction. Markov chain Monte Carlo (MCMC) methods are used for inference and validation. A model-building strategy based on the Bayes factor is proposed, and validation aspects are discussed extensively in this article, including the posterior distribution of the concordance index and residual analysis. Determining risk factors from variables available on the patient's arrival at hospital is very important for deciding the course of treatment. The selected model proves highly reliable and accurate, with a correct classification rate of 88% and a concordance index of 83%.
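The concordance index reported in the abstract can be computed directly from predicted death probabilities and observed outcomes; a minimal sketch (the data here are invented, not the study's):

```python
def concordance_index(probs, outcomes):
    """Fraction of (event, non-event) pairs in which the event case
    received the higher predicted probability; ties count 0.5."""
    concordant = ties = pairs = 0
    for i, (p_i, y_i) in enumerate(zip(probs, outcomes)):
        for p_j, y_j in zip(probs[i + 1:], outcomes[i + 1:]):
            if y_i == y_j:
                continue  # only pairs with opposite outcomes are informative
            pairs += 1
            hi = p_i if y_i == 1 else p_j  # probability given to the event case
            lo = p_j if y_i == 1 else p_i
            if hi > lo:
                concordant += 1
            elif hi == lo:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

print(concordance_index([0.9, 0.7, 0.3, 0.2], [1, 0, 1, 0]))  # → 0.75
```

In the paper this index is itself given a posterior distribution by evaluating it over MCMC draws; the function above is the per-draw computation.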
Abstract:
We propose alternative approaches to analyze residuals in binary regression models based on random effect components. Our preferred model does not depend upon any tuning parameter, being completely automatic. Although the focus is mainly on accommodation of outliers, the proposed methodology is also able to detect them. Our approach consists of evaluating the posterior distribution of random effects included in the linear predictor. The evaluation of the posterior distributions of interest involves cumbersome integration, which is easily dealt with through stochastic simulation methods. We also discuss different specifications of prior distributions for the random effects. The potential of these strategies is compared in a real data set. The main finding is that the inclusion of extra variability accommodates the outliers, improving the adjustment of the model substantially, besides correctly indicating the possible outliers.
Abstract:
External circumstances and internal bodily states often change and require organisms to flexibly adapt valuation processes to select the optimal action in a given context. Here, we investigate the neurobiology of context-dependent valuation in 22 human subjects using functional magnetic resonance imaging. Subjects made binary choices between visual stimuli with three attributes (shape, color, and pattern) that were associated with monetary values. Context changes required subjects to deviate from the default shape valuation and to integrate a second attribute in order to comply with the goal to maximize rewards. Critically, this binary choice task did not involve any conflict between opposing monetary, temporal, or social preferences. We tested the hypothesis that interactions between regions of dorsolateral and ventromedial prefrontal cortex (dlPFC; vmPFC) implicated in self-control choices would also underlie the more general function of context-dependent valuation. Consistent with this idea, we found that the degree to which stimulus attributes were reflected in vmPFC activity varied as a function of context. In addition, activity in dlPFC increased when context changes required a reweighting of stimulus attribute values. Moreover, the strength of the functional connectivity between dlPFC and vmPFC was associated with the degree of context-specific attribute valuation in vmPFC at the time of choice. Our findings suggest that functional interactions between dlPFC and vmPFC are a key aspect of context-dependent valuation and that the role of this network during choices that require self-control to adjudicate between competing outcome preferences is a specific application of this more general neural mechanism.
Abstract:
In recent years cities around the world have invested heavily in measures to reduce congestion and car trips. These investments are potential responses to the well-known urban sprawl phenomenon, also called the "development trap", which leads to further congestion and a growing share of our time spent in slow-moving cars. In this search for solutions, the complex relationship between the urban environment and travel behaviour has been studied in a number of cases. The main question under discussion is how to encourage multi-stop tours. The objective of this paper is therefore to verify whether unobserved factors influence tour complexity. For this purpose, we use data from a survey conducted in 2006-2007 in Madrid, a suitable case study for analyzing urban sprawl given its new urban developments and the substantial changes in mobility patterns in recent years. A total of 943 individuals were interviewed in 3 selected neighbourhoods (CBD, urban and suburban). We study the effect of unobserved factors on trip frequency. This paper presents the estimation of a hybrid model in which the latent variable, called propensity to travel, enters a discrete choice model over five tour-type alternatives. The results show that the characteristics of Madrid's neighbourhoods are important in explaining trip frequency. The influence of land-use variables on trip generation is clear, in particular the presence of retail premises. Through the estimation of elasticities and forecasting, we determine to what extent land-use policy measures modify travel demand. Comparing aggregate elasticities with percentage variations shows that percentage variations can lead to inconsistent results. The results show that hybrid models explain travel behaviour better than traditional discrete choice models.
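In a standard multinomial logit, the direct point elasticity of a choice probability with respect to one of its own attributes is beta_k * x_k * (1 - P_i); the paper's hybrid model is richer, but the mechanics can be sketched as follows (all coefficients and attribute values are hypothetical):

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities (max-shifted for stability)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def direct_elasticity(beta_k, x_k, p_i):
    """Point elasticity of alternative i's probability with respect to
    its own attribute x_k in a multinomial logit model."""
    return beta_k * x_k * (1 - p_i)

# Hypothetical systematic utilities for three tour types:
p = mnl_probs([0.5, 0.1, -0.3])
# Hypothetical cost coefficient and attribute level:
print(round(direct_elasticity(beta_k=-0.8, x_k=2.0, p_i=p[0]), 3))  # → -0.845
```

The paper compares such aggregate elasticities with simple percentage variations; the elasticity is a local (point) measure, which is one reason the two can disagree.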
Abstract:
This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of the cumulative abnormal returns (CARs). Whilst the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To minimise this problem of estimation efficiency we re-estimated the ARs using the GJR-GARCH estimation method. We find that the results vary with both the choice of model and the estimation method. Beyond these variations, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement cumulative ARs (CARs) for target firm shareholders under both the OLS and GJR-GARCH methods across all three methodologies. However, post-event CARs for acquiring firm shareholders were insignificant for both estimation methods under the three methodologies. The GJR-GARCH method seems to generate larger CARs than the OLS method.
Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms, implying that prices of small and large target firms adjust to new information, resulting in a more efficient market. For acquirer firms, our measure of liquidity captures strong return continuations for small firms under the OLS estimates for both the CAPM and the Fama-French three-factor model, whilst under the GJR-GARCH estimates only for the Carhart model. Post-announcement bootstrapped simulated CARs confirmed our earlier results.
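The CAR construction can be illustrated with a plain market-model sketch. In the paper, alpha and beta come from a pre-event estimation window (fitted by OLS or GJR-GARCH); here they are simply assumed, and all return figures are hypothetical:

```python
def abnormal_returns(stock, market, alpha, beta):
    """AR_t = R_t - (alpha + beta * Rm_t) under a fitted market model.
    alpha and beta are assumed inputs here; in an event study they are
    estimated over a window that ends before the announcement."""
    return [r - (alpha + beta * rm) for r, rm in zip(stock, market)]

def cumulative_abnormal_return(ars):
    """CAR over the event window is just the sum of the daily ARs."""
    return sum(ars)

# Hypothetical four-day event window:
stock = [0.021, -0.005, 0.034, 0.012]
market = [0.010, -0.002, 0.015, 0.005]
ars = abnormal_returns(stock, market, alpha=0.001, beta=1.1)
print(round(cumulative_abnormal_return(ars), 4))  # → 0.0272
```

Swapping the OLS-estimated (alpha, beta) for GJR-GARCH estimates changes only the inputs to this calculation, which is why the two methods can produce different CARs from the same raw returns.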
Abstract:
We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
Abstract:
The paper investigates teachers' decisions to leave the profession, addressing two questions. What role do earnings and alternative earning opportunities play in these decisions? How did the 2002 Hungarian public-sector pay increase affect teacher attrition? Using a large merged administrative data set (OEP-ONYF-FH), the author estimated two kinds of duration model: (1) binary-choice Cox proportional hazard models (leaving the teaching profession or not), and (2) competing risk models that distinguish exits to another occupation from exits to a non-working state. The results show that earnings matter: higher pay and higher relative earnings reduce the probability that a teacher leaves the profession for another occupation or moves to non-employment.
The public-sector pay increase temporarily reduced the probability of young teachers leaving the profession, but the effect disappeared within one or two years. For teachers over 51 years old, the pay increase tended to keep them in the profession, also reducing the probability that they moved to another occupation or to a non-working state.
Abstract:
My dissertation has three chapters which develop and apply microeconometric techniques to empirically relevant problems. All three chapters examine robustness issues (e.g., measurement error and model misspecification) in econometric analysis. The first chapter studies the identifying power of an instrumental variable in the nonparametric heterogeneous treatment effect framework when a binary treatment variable is mismeasured and endogenous. I characterize the sharp identified set for the local average treatment effect under the following two assumptions: (1) the exclusion restriction of an instrument and (2) deterministic monotonicity of the true treatment variable in the instrument. The identification strategy allows for general measurement error. Notably, (i) the measurement error is nonclassical, (ii) it can be endogenous, and (iii) no assumptions are imposed on the marginal distribution of the measurement error, so that I do not need to assume the accuracy of the measurement. Based on the partial identification result, I provide a consistent confidence interval for the local average treatment effect with uniformly valid size control. I also show that the identification strategy can incorporate repeated measurements to narrow the identified set, even if the repeated measurements themselves are endogenous. Using the National Longitudinal Study of the High School Class of 1972, I demonstrate that my new methodology can produce nontrivial bounds for the return to college attendance when attendance is mismeasured and endogenous.
The second chapter, part of a coauthored project with Federico Bugni, considers the problem of inference in dynamic discrete choice problems when the structural model is locally misspecified. We consider two popular classes of estimators for dynamic discrete choice models: K-step maximum likelihood estimators (K-ML) and K-step minimum distance estimators (K-MD), where K denotes the number of policy iterations employed in the estimation problem. These classes include popular estimators such as Rust's (1987) nested fixed point estimator, Hotz and Miller's (1993) conditional choice probability estimator, Aguirregabiria and Mira's (2002) nested algorithm estimator, and Pesendorfer and Schmidt-Dengler's (2008) least squares estimator. We derive and compare the asymptotic distributions of K-ML and K-MD estimators when the model is arbitrarily locally misspecified, and we obtain three main results. In the absence of misspecification, Aguirregabiria and Mira (2002) show that all K-ML estimators are asymptotically equivalent regardless of the choice of K. Our first result shows that this finding extends to a locally misspecified model, regardless of the degree of local misspecification. As a second result, we show that an analogous result holds for all K-MD estimators, i.e., all K-MD estimators are asymptotically equivalent regardless of the choice of K. Our third and final result compares K-MD and K-ML estimators in terms of asymptotic mean squared error. Under local misspecification, the optimally weighted K-MD estimator depends on the unknown asymptotic bias and is no longer feasible. In turn, feasible K-MD estimators could have an asymptotic mean squared error that is higher or lower than that of the K-ML estimators. To demonstrate the relevance of our asymptotic analysis, we illustrate our findings in a simulation exercise based on a misspecified version of Rust's (1987) bus engine problem.
The last chapter investigates the causal effect of the Omnibus Budget Reconciliation Act of 1993, which caused the biggest change to the EITC in its history, on unemployment and labor force participation among single mothers. Unemployment and labor force participation are difficult to define for a few reasons, for example because of marginally attached workers. Instead of searching for a unique definition of each of these two concepts, this chapter bounds unemployment and labor force participation by observable variables and, as a result, considers various competing definitions of these two concepts simultaneously. This bounding strategy leads to partial identification of the treatment effect. The inference results depend on the construction of the bounds, but they imply a positive effect on labor force participation and a negligible effect on unemployment. The results imply that the difference-in-difference result based on the BLS definition of unemployment can be misleading due to misclassification of unemployment.
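The bounding idea in the last chapter can be illustrated in its simplest form: when some respondents (for instance, marginally attached workers) cannot be cleanly classified, a population proportion is only partially identified, and the width of the bounds equals the ambiguous share. A sketch with made-up counts:

```python
def proportion_bounds(n_yes, n_no, n_ambiguous):
    """Worst-case bounds on a population proportion when some cases
    cannot be classified: the lower bound treats every ambiguous case
    as a 'no', the upper bound treats every one as a 'yes'."""
    n = n_yes + n_no + n_ambiguous
    return n_yes / n, (n_yes + n_ambiguous) / n

# Hypothetical sample: 40 clearly unemployed, 50 clearly not,
# 10 marginally attached and hence ambiguous.
lo, hi = proportion_bounds(n_yes=40, n_no=50, n_ambiguous=10)
print(lo, hi)  # → 0.4 0.5
```

Bounding the outcome in each period this way, before and after the reform, carries the partial identification through to the treatment effect itself, which is the chapter's strategy.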
Abstract:
Electoral researchers are so much accustomed to analyzing the choice of the single most preferred party as the left-hand side variable of their models of electoral behavior that they often ignore revealed preference data. Drawing on random utility theory, their models predict electoral behavior at the extensive margin of choice. Since the seminal work of Luce and others on individual choice behavior, however, many social science disciplines (consumer research, labor market research, travel demand, etc.) have extended their inventory of observed preference data with, for instance, multiple paired comparisons, complete or incomplete rankings, and multiple ratings. Eliciting (voter) preferences using these procedures and applying appropriate choice models is known to considerably increase the efficiency of estimates of causal factors in models of (electoral) behavior. In this paper, we demonstrate the efficiency gain when adding additional preference information to first preferences, up to full ranking data. We do so for multi-party systems of different sizes. We use simulation studies as well as empirical data from the 1972 German election study. Comparing the practical considerations for using ranking and single preference data results in suggestions for choice of measurement instruments in different multi-candidate and multi-party settings.
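Ranking data of the kind discussed above are commonly modelled with a rank-ordered ("exploded") logit, in which the probability of a complete ranking is a product of sequential logit choices over the alternatives not yet ranked; each observed ranking therefore contributes more information than a single first choice. A minimal sketch with hypothetical party utilities:

```python
import math

def ranking_probability(utilities, ranking):
    """Rank-ordered ('exploded') logit: the probability of a full
    ranking is the product of sequential logit choices among the
    alternatives that have not yet been ranked."""
    remaining = list(range(len(utilities)))
    prob = 1.0
    for choice in ranking:
        denom = sum(math.exp(utilities[j]) for j in remaining)
        prob *= math.exp(utilities[choice]) / denom
        remaining.remove(choice)
    return prob

# Three parties with hypothetical utilities; probability of the
# ranking party 0 > party 1 > party 2:
print(ranking_probability([1.0, 0.5, 0.0], [0, 1, 2]))
```

Because the likelihood multiplies over ranking stages, full rankings sharpen the estimates of the utility parameters, which is the efficiency gain the paper quantifies.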
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
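The effect of the inequality-restricted prior can be illustrated with a one-coefficient probit: under a flat prior the posterior is proportional to the likelihood, and the sign restriction simply truncates its support. A grid-based sketch with invented data (the paper's mortgage-choice model has several covariates and does not use a toy grid):

```python
import math

def probit_loglik(beta, xs, ys):
    """Log-likelihood of a one-coefficient probit; Phi via math.erf."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 0.5 * (1 + math.erf(beta * x / math.sqrt(2)))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        ll += math.log(p) if y else math.log(1 - p)
    return ll

def posterior_mean(xs, ys, grid, positive_only=False):
    """Posterior mean under a flat prior on the grid; the inequality-
    restricted prior simply truncates the grid to beta > 0."""
    betas = [b for b in grid if (b > 0 or not positive_only)]
    ws = [math.exp(probit_loglik(b, xs, ys)) for b in betas]
    total = sum(ws)
    return sum(b * w for b, w in zip(betas, ws)) / total

# Invented data: mostly positive relationship with some noise.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 1, 0, 1, 1]
grid = [i / 10 for i in range(-30, 31)]
print(round(posterior_mean(xs, ys, grid), 3))
print(round(posterior_mean(xs, ys, grid, positive_only=True), 3))
```

When the data already point firmly in the restricted direction, the two posterior means are close; the restriction matters most in small samples or for weakly identified coefficients, which is the sensitivity the paper examines.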
Abstract:
Shoaling with familiar individuals may have many benefits, including enhanced escape responses and increased foraging efficiency. This study describes the results of two complementary experiments. The first used a simple binary choice experiment to determine whether rainbowfish (Melanotaenia spp.) preferred to shoal with familiar individuals or with strangers. The second experiment used a free-range situation in which familiar and unfamiliar individuals were free to intermingle and were then exposed to a predator threat. Like many other small species of fish, rainbowfish were capable of identifying and distinguishing between individuals and chose to preferentially associate with familiar individuals as opposed to strangers. Contrary to expectations, however, rainbowfish did not significantly increase their preference for familiar individuals in the presence of a stationary predator model. Griffiths [J Fish Biol (1997) 51:489-495] conducted similar studies under semi-natural conditions examining the shoaling preferences of European minnows and found similar results. Both the current study and that of Griffiths were conducted using predator-wary populations of fish. It is suggested that, in predator-sympatric populations, the benefits of shoaling with familiar individuals are such that it always pays to stay close to familiar individuals even when the probability of predator attack is remote.
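A binary choice experiment like the first one described above is often analyzed with an exact binomial test of side preference against chance; a sketch with invented counts (not the study's data):

```python
from math import comb

def binomial_p_value(successes, trials, p=0.5):
    """Two-sided exact binomial test: total probability of outcomes at
    least as extreme (no more likely) than the observed count, under
    the null of chance preference."""
    probs = [comb(trials, k) * p**k * (1 - p)**(trials - k)
             for k in range(trials + 1)]
    observed = probs[successes]
    return sum(q for q in probs if q <= observed + 1e-12)

# Hypothetical result: 16 of 20 fish chose the familiar shoal.
print(round(binomial_p_value(16, 20), 4))  # → 0.0118
```

A result like this would reject chance at the 5% level, matching the kind of preference-for-familiars finding the abstract reports.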
Abstract:
ABSTRACT - Objectives: Every year around 1.3 million people die worldwide in road traffic accidents, and more than 20 million suffer minor or serious injuries resulting in temporary or permanent disability. Road accidents are therefore a serious public health problem, with high costs to societies, affecting population health and national economies. This study set out to describe and characterize drivers of light vehicles resident in mainland Portugal, covering sociodemographic characteristics, driving experience, and questions on attitudes, opinions and behaviours. It also sought to analyze the association between self-reported opinions, attitudes and behaviours and the occurrence of a road accident in the previous three years, in order to build a final predictive model of the risk of being involved in a road accident. Method: A cross-sectional analytical observational study was carried out, based on a questionnaire translated into Portuguese and originating in the European project SARTRE 4. The target population was all light-vehicle drivers holding a driving licence and resident in mainland Portugal, with a sample of the same size as that defined in the European SARTRE 4 study (600 light-vehicle drivers). From the 52 questions available, principal component analysis (PCA) was used to select potentially independent and complementary variables for the opinion, attitude and behaviour components. In addition to the usual descriptive measures, binary logistic regression was used to analyze associations and to obtain a model estimating the probability of being involved in a road accident as a function of the selected self-reported opinion, attitude and behaviour variables.
Results: Of the 612 drivers surveyed, 62.7% (383) reported not having had a road accident in the previous three years, while 37.3% (228) reported having been involved in at least one accident with material damage or injuries in the same period. In general, the typical driver who reported an accident in the previous three years is male, over 65 years old, with primary education, widowed and without children, not employed, and resident in an urban area. Drivers living in a suburban area had a 5.368 times higher risk of a road accident than drivers living in a rural area (95% CI: 2.344-12.297; p<0.001). Drivers who had been breath-tested for alcohol only once while driving in the previous three years had a 3.009 times higher risk than drivers never checked by the police (95% CI: 1.949-4.647; p<0.001). Drivers who reported very frequently stopping to sleep when feeling tired at the wheel had an 81% lower probability of a road accident than drivers who never do so (95% CI: 0.058-0.620; p=0.006). Drivers who rarely drink a coffee or energy drink when tired had a 4.829 times higher risk than drivers who reported always doing so (95% CI: 1.807-12.903; p=0.002). Conclusions: The behavioural findings are in line with most of the risk factors for road accidents reported in the literature. Even so, new associations were identified between accident risk and self-reported opinions and attitudes, which larger population studies could explore further.
This work reinforces the urgent need for new intervention strategies, particularly behavioural ones, targeted at risk groups, while maintaining the existing ones.
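Risk figures like those above come from binary logistic regression; turning a fitted coefficient and its standard error into an odds ratio with a Wald confidence interval is mechanical. A sketch with hypothetical numbers chosen only for illustration:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a fitted
    logistic-regression coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for, e.g., suburban vs rural residence:
or_, lo, hi = odds_ratio_ci(beta=1.68, se=0.423)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

Because the interval is computed on the log-odds scale and then exponentiated, it is asymmetric around the odds ratio, as in the intervals the abstract reports.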
Abstract:
In most European countries Social Security (SS) systems are characterized as pay-as-you-go systems. Their sustainability is being challenged by demographic changes, namely population ageing. Portugal's population is ageing rapidly, making it one of the countries where this problem is most critical. With the growing debate on this topic, several public choice models have been developed to explain SS size. This work project attempts to understand whether these models help explain Social Security expenditure on pensions (SSEP) and to establish the need to find ways to reduce the present commitment to pension expenditure in Portugal.
Abstract:
Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, CDLP is difficult to solve, since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments with cuts imposing consistency (SDCP+) is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that make the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numeric study showing the performance of these cycle-based cuts.
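The tree condition above concerns how the segments' consideration sets overlap. One simple way to probe such structure is to build the overlap graph (an edge whenever two sets share a product) and test it for cycles; this is an illustrative check of the kind of structure involved, not necessarily the paper's exact condition:

```python
def overlap_graph_is_forest(sets):
    """Build a graph with one node per consideration set and an edge
    between two sets whenever they share a product, then test whether
    this overlap graph is acyclic (a forest) via union-find."""
    fam = [set(s) for s in sets]
    parent = list(range(len(fam)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(fam)):
        for j in range(i + 1, len(fam)):
            if fam[i] & fam[j]:  # the two segments share a product
                ri, rj = find(i), find(j)
                if ri == rj:
                    return False  # a cycle among overlapping sets
                parent[ri] = rj
    return True

print(overlap_graph_is_forest([{1, 2}, {2, 3}, {3, 4}]))  # → True (a path)
print(overlap_graph_is_forest([{1, 2}, {2, 3}, {3, 1}]))  # → False (a cycle)
```

The second example is the kind of cyclic overlap pattern that, per the abstract, can open a gap between CDLP and the SDCP+ relaxation and motivates the cycle-based cuts.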