961 results for Random coefficient logit models
Abstract:
Different outcomes of the effect of catechin-caffeine mixtures and caffeine-only supplementation on energy expenditure and fat oxidation have been reported in short-term studies. Therefore, a meta-analysis was conducted to elucidate whether catechin-caffeine mixtures and caffeine-only supplementation indeed increase thermogenesis and fat oxidation. First, English-language studies measuring daily energy expenditure and fat oxidation by means of respiration chambers after catechin-caffeine mixture and caffeine-only supplementation were identified through PubMed. Six articles encompassing a total of 18 different conditions fitted the inclusion criteria. Second, results were aggregated using random/mixed-effects models and expressed as the mean difference in 24 h energy expenditure and fat oxidation between the treatment and placebo conditions. Finally, the influence of moderators such as BMI and dosage on the results was examined as well. Catechin-caffeine mixtures and caffeine-only supplementation both increased energy expenditure significantly over 24 h (428.0 kJ (4.7%), P < 0.001 and 429.1 kJ (4.8%), P < 0.001, respectively). However, 24 h fat oxidation was increased significantly only by catechin-caffeine mixtures (12.2 g (16.0%), P < 0.02 and 9.5 g (12.4%), P = 0.11, respectively). A dose-response effect on 24 h energy expenditure and fat oxidation occurred, with a mean increase of 0.53 kJ mg⁻¹ (P < 0.01) and 0.02 g mg⁻¹ (P < 0.05) for catechin-caffeine mixtures and 0.44 kJ mg⁻¹ (P < 0.001) and 0.01 g mg⁻¹ (P < 0.05) for caffeine only. In conclusion, catechin-caffeine mixtures and caffeine-only supplementation stimulate daily energy expenditure dose-dependently by 0.4-0.5 kJ per mg administered. Compared with placebo, daily fat oxidation was significantly increased only after ingestion of catechin-caffeine mixtures.
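As a rough sketch of the pooling step, the code below implements a standard DerSimonian-Laird random-effects aggregation of per-condition mean differences; the effect sizes and variances are hypothetical, and the study's actual mixed-effects implementation may differ.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study mean differences with a DerSimonian-Laird
    random-effects model (hypothetical illustration only)."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se

# Hypothetical 24 h energy-expenditure differences (kJ) and variances:
md, se = dersimonian_laird([400, 450, 430, 410], [900, 1200, 800, 1000])
print(f"pooled mean difference: {md:.1f} kJ (SE {se:.1f})")
```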
Abstract:
Until recently, farm management made little use of accounting, and agriculture has been largely excluded from the scope of accounting standards. This article examines the current use of accounting in agriculture and points to the need to establish accounting standards for agriculture. Empirical evidence shows that accounting can make a significant contribution to agricultural management and farm viability, and could also be important for other agents involved in agricultural decision making. The existing literature on failure prediction models and farm viability prediction studies provides the starting point for our research, in which two dichotomous logit models were applied to subsamples of viable and unviable farms in Catalonia, Spain. The first model considered only non-financial variables, while the other also considered financial ones. When accounting variables were added to the model, a significant reduction in deviance was observed.
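A rough illustration of this model comparison: the sketch below fits two logits on synthetic data, one with non-financial covariates only and one adding financial ones, and reports the deviance reduction. All variable names and coefficients are hypothetical, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
size = rng.normal(50, 15, n)          # non-financial: farm size (ha)
age = rng.normal(45, 10, n)           # non-financial: farmer age
debt_ratio = rng.uniform(0, 1, n)     # financial: debt / assets
margin = rng.normal(0.1, 0.05, n)     # financial: net margin
logit_p = -1 + 0.02 * size - 3 * debt_ratio + 8 * margin
viable = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X1 = sm.add_constant(np.column_stack([size, age]))
X2 = sm.add_constant(np.column_stack([size, age, debt_ratio, margin]))
m1 = sm.Logit(viable, X1).fit(disp=False)   # non-financial only
m2 = sm.Logit(viable, X2).fit(disp=False)   # plus financial variables

# Deviance reduction when accounting variables are added:
print("deviance drop:", 2 * (m2.llf - m1.llf))
```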
Abstract:
In many research areas (such as public health and environmental contamination), one needs to use data to infer whether some proportion (%) of a population of interest is (or should be) below and/or above some threshold, through the computation of a tolerance interval. The idea is that, once a threshold is given, one computes the tolerance interval or limit (which may be one- or two-sided) and then checks whether it satisfies the given threshold. Since this work deals with the computation of one-sided tolerance intervals, for the two-sided case we recommend, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] computed upper tolerance limits in balanced and unbalanced one-way random effects models, whereas Fonseca et al. [3] did so based on similar ideas but in a two-way nested mixed or random effects model. In the random effects case, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we recommend, for instance, Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed effects model with balanced data. For unbalanced data, as mentioned above, Fonseca et al. [3] have already computed the upper tolerance interval. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case: we first perform the construction for the upper case and then for the lower case.
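To fix ideas, the sketch below computes the classical one-sided upper tolerance limit for a single i.i.d. normal sample via the noncentral t distribution; the paper's limits for two-way nested mixed/random effects models generalize this construction and are not reproduced here.

```python
import numpy as np
from scipy import stats

def upper_tolerance_limit(x, p=0.95, conf=0.95):
    """One-sided upper tolerance limit for i.i.d. normal data:
    covers at least a proportion p of the population with confidence
    conf.  Single-sample case shown only to illustrate the idea."""
    x = np.asarray(x, dtype=float)
    n = x.size
    delta = stats.norm.ppf(p) * np.sqrt(n)        # noncentrality parameter
    k = stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

rng = np.random.default_rng(1)
sample = rng.normal(10.0, 2.0, size=30)           # hypothetical data
print(f"upper tolerance limit: {upper_tolerance_limit(sample):.2f}")
```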
Abstract:
Due to the overwhelming international evidence that stock prices drop by less than the dividend paid on ex-dividend days, the ex-dividend day anomaly is considered a stylized fact. Two main approaches have emerged to explain this empirical regularity: the tax-clientele hypothesis and the microstructure of financial markets. Although the most widely accepted explanation relies on taxes, the ex-dividend day anomaly has been reported even in countries where neither dividends nor capital gains are taxed. The 2006 tax reform in Spain established the same tax rate for dividends and capital gains. This paper investigates stock returns on ex-dividend days in the Spanish stock market after the 2006 tax reform using a random coefficient model. Contrary to previous research, we do not observe an ex-dividend day anomaly. Unlike previous investigations, which are mostly concerned with suggesting explanations as to why this anomaly has occurred, we are in the somewhat unusual position of discussing why it has not occurred. Our findings are robust across companies and dividend yields, thus supporting a tax-based explanation for the ex-dividend day anomaly.
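A minimal sketch of a random coefficient setup in this spirit, assuming a hypothetical panel in which the ex-day price-drop ratio is regressed on dividend yield with a firm-specific random slope (statsmodels' MixedLM, not necessarily the paper's estimator):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
firms, obs = 40, 25
# Hypothetical panel: drop ratio vs. dividend yield, slope varies by firm.
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), obs),
    "dyield": rng.uniform(0.01, 0.06, firms * obs),
})
beta_i = 1.0 + rng.normal(0, 0.2, firms)           # firm-specific slopes
df["drop_ratio"] = (beta_i[df["firm"]] * df["dyield"]
                    + rng.normal(0, 0.01, len(df)))

model = sm.MixedLM.from_formula(
    "drop_ratio ~ dyield", groups="firm",
    re_formula="~dyield", data=df)                 # random intercept + slope
fit = model.fit()
print(fit.summary())
```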
Abstract:
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors, and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard, and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of U.S. firms from the Lynn M. LoPucki Bankruptcy Research Database was used to evaluate the relative performance of the three models. The choice of cut-off point and sampling procedure was found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower misclassification costs in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage may make the Bayesian model more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
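The empirical cut-off idea can be illustrated as follows: scan candidate probability thresholds on the training sample and keep the one that minimizes total misclassification cost. The data and the 10:1 cost ratio below are hypothetical.

```python
import numpy as np

def empirical_cutoff(p_train, y_train, cost_I=10.0, cost_II=1.0):
    """Choose the probability cut-off on the training sample that
    minimizes total misclassification cost.  Type I: a bankrupt firm
    (y=1) classified as healthy; Type II: the reverse."""
    grid = np.linspace(0.01, 0.99, 99)
    costs = [cost_I * np.sum((p_train < c) & (y_train == 1))
             + cost_II * np.sum((p_train >= c) & (y_train == 0))
             for c in grid]
    return grid[int(np.argmin(costs))]

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.1, 2000)                 # ~10% bankruptcies
p = np.clip(0.1 + 0.6 * y + rng.normal(0, 0.2, y.size), 0, 1)
print("empirical cut-off:", empirical_cutoff(p, y))
```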
Abstract:
The main objective of this work is to study in depth certain advanced biostatistical techniques in evaluative research in adult cardiac surgery. The studies were designed to integrate the concepts of survival analysis, regression analysis with propensity scores, and cost analysis. The first manuscript evaluates survival after surgical repair of acute ascending aortic dissection. The statistical analyses used include: survival analyses with parametric hazard-phase regression and other parametric (exponential, Weibull), semi-parametric (Cox), or non-parametric (Kaplan-Meier) methods; survival compared with a cohort matched for age, sex, and race using government life tables; and regression models with bootstrapping and multinomial logit models. The study showed that survival improved over 25 years in connection with changes in surgical techniques and diagnostic imaging. The second manuscript focuses on the outcomes of isolated coronary artery bypass grafting in patients with a history of percutaneous coronary intervention. The statistical analyses used include: propensity-score regression models; a complex (1:3) matching algorithm; and statistical analyses appropriate for matched groups (standardized differences, generalized estimating equations, stratified Cox models). The study showed that percutaneous coronary intervention performed 14 days or more before coronary bypass surgery is not associated with adverse short- or long-term outcomes. The third manuscript evaluates the financial consequences and demographic changes occurring at a university hospital center following the establishment of a satellite cardiac surgery program. The statistical analyses used include: two-way ANOVA multivariate regression models (logistic, linear, or ordinal); propensity scores; and cost analyses with parametric log-normal models. "Survival" analysis models were also explored, using "cost" instead of "time" as the dependent variable, and led to similar conclusions. The study showed that, after the establishment of the satellite program, fewer low-complexity patients were referred from the satellite program's region to the university hospital center, with an increase in nursing workload and costs.
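For readers unfamiliar with this survival toolkit, a minimal sketch using the lifelines library on synthetic data shows non-parametric, parametric, and semi-parametric fits side by side (the thesis's hazard-phase parametric regression is a separate method not shown here):

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullFitter, CoxPHFitter

rng = np.random.default_rng(4)
n = 300
# Hypothetical post-repair survival data (years) with censoring.
df = pd.DataFrame({
    "time": rng.weibull(1.2, n) * 10,
    "event": rng.binomial(1, 0.7, n),          # 1 = death observed
    "age": rng.normal(62, 10, n),
    "era": rng.integers(0, 2, n),              # 0 = early, 1 = recent era
})

KaplanMeierFitter().fit(df["time"], df["event"])   # non-parametric
WeibullFitter().fit(df["time"], df["event"])       # parametric
cox = CoxPHFitter().fit(df, duration_col="time",
                        event_col="event")          # semi-parametric
cox.print_summary()
```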
Abstract:
Among the methods for estimating the parameters of probability distributions in statistics, maximum likelihood is one of the most popular techniques since, under mild conditions, the resulting estimators are consistent and asymptotically efficient. Maximum likelihood problems can be treated as nonlinear, possibly non-convex, programming problems, for which two major classes of solution methods are trust-region techniques and line-search methods. Moreover, it is possible to exploit the structure of these problems to try to accelerate the convergence of these methods, under certain assumptions. In this work, we revisit some classical or recently developed approaches in nonlinear optimization in the particular context of maximum likelihood estimation. We also develop new algorithms for solving this problem, reconsidering various Hessian approximation techniques, and propose new step-size computation methods, particularly within line-search algorithms. These include algorithms that allow us to switch Hessian approximations and to adapt the step length along a fixed search direction. Finally, we assess the numerical efficiency of the proposed methods in the context of discrete choice model estimation, in particular mixed logit models.
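As a concrete, generic instance of a line-search quasi-Newton method applied to maximum likelihood, the sketch below fits a binary logit by minimizing the negative log-likelihood with scipy's BFGS, which combines a Hessian approximation with a Wolfe line search. This is the off-the-shelf baseline, not the new algorithms developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def neg_loglik(beta):
    eta = X @ beta
    # Negative binary-logit log-likelihood, in a numerically stable form.
    return np.sum(np.logaddexp(0.0, eta)) - y @ eta

def grad(beta):
    p = 1 / (1 + np.exp(-(X @ beta)))
    return X.T @ (p - y)

# BFGS: quasi-Newton Hessian approximation + Wolfe line search per step.
res = minimize(neg_loglik, np.zeros(2), jac=grad, method="BFGS")
print("estimates:", res.x)
```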
Abstract:
Like most cities in sub-Saharan Africa, Ouagadougou, the capital of Burkina Faso, has experienced rapid population growth over recent decades. This situation raises numerous health and environmental problems. Yet the links between health and the immediate environment remain poorly studied because of the quality of the data, which, where they exist, prove ill-suited. This thesis aims to analyze the links between the immediate environment and certain disease symptoms, specifically fever and diarrhea, two major environment-related health problems among children under 5 in sub-Saharan African cities. The study draws on data from the Ouagadougou Population Observatory (Observatoire de population de Ouagadougou, OPO) collected between 2009 and 2010 to study urban health inequalities (notably the 2010 health survey covering 950 children under 5). The thesis first describes urban environmental health, moving beyond the classical opposition between formal and informal settlement areas (quartiers lotis/quartiers non lotis). It then evaluates more finely the links between the immediate environment and fever, taking the relevant demographic and socio-economic factors into account in the estimation. Finally, the thesis deepens the analysis of the co-occurrence of diarrhea and fever, highlighting the joint effects of environmental and demographic factors. Using spatial analyses based on Euclidean distance, together with factor and classification analyses, the study describes the health context of formal and informal neighborhoods and assesses the relevance of the formal/informal dichotomy in the zones followed by the OPO. The study also carries out multivariate analyses, using simple and ordered logit models respectively, to estimate the net effects of the immediate environment on fever and on the co-occurrence of diarrhea and fever among children. The results show that environmental risks vary from one neighborhood to another and that formal neighborhoods, although best provided with basic urban services, are the most exposed to environmental hazards. Nevertheless, this finding is not by itself enough to conclude that children are more vulnerable in formal than in informal neighborhoods, since access to water, sanitation, the type of ground surface, and the mother's level of education are key factors in the occurrence of symptoms linked to the immediate environment. There is also heterogeneity in environmental health, notably within informal zones. Considering the effects of the immediate environment on fever in children, the results show that these effects decrease once demographic and socio-economic variables and the neighborhood of residence are taken into account. Environmental factors such as household waste management and wastewater disposal significantly discriminate fever occurrence. Children in Nioko 2 (an informal neighborhood), for example, were twice as likely to have had fever as those in Kilwin (a formal neighborhood).
The joint effects of environmental and demographic factors are also evident in the co-occurrence of diarrhea and fever, even though these effects decline steadily with the number of symptoms. Living in an unsanitary household or in a dwelling with a bare-earth outdoor surface increases the propensity for co-occurrence of diarrhea and fever. Conversely, this co-occurrence decreases significantly with the child's age. The effects of the environment on the occurrence or co-occurrence of symptoms persist, even though they diminish once demographic and socio-economic factors and the neighborhood of residence are taken into account. The results of the thesis argue for a methodological effort to refine the definition of environmental variables in urban settings.
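A minimal sketch of the ordered logit step, assuming hypothetical child-level covariates, using statsmodels' OrderedModel to model the number of co-occurring symptoms (0, 1, or 2 of fever and diarrhea):

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 950
# Hypothetical child-level data; variable names are illustrative only.
df = pd.DataFrame({
    "age_months": rng.integers(0, 60, n),
    "waste_managed": rng.binomial(1, 0.5, n),   # household waste disposal
    "earth_floor": rng.binomial(1, 0.4, n),
})
eta = (-0.01 * df["age_months"] - 0.5 * df["waste_managed"]
       + 0.6 * df["earth_floor"])
u = eta + rng.logistic(size=n)
df["n_symptoms"] = np.digitize(u, [-0.5, 1.0])  # 0, 1 or 2 symptoms

fit = OrderedModel(df["n_symptoms"],
                   df[["age_months", "waste_managed", "earth_floor"]],
                   distr="logit").fit(method="bfgs", disp=False)
print(fit.summary())
```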
Abstract:
The Colombian health-system reform (Law 100 of 1993) established, as a strategy to facilitate access, universal health insurance acquired through contributions in the contributory regime or through free affiliation to the subsidized regime, with the goal of covering the entire population with a single benefits plan comprising services at all levels of care. This paper analyzes the main stylized facts of the reform regarding insurance coverage and access and, using logit models, estimates the determinants of affiliation and of access with data from the 1997 and 2003 quality-of-life surveys. Coverage rose from 20% of the population in 1993 to 60% in 2004, although universality seems unattainable. The structure and evolution of coverage show that the two regimes are complementary: while the contributory regime has a larger presence in cities and among the formally employed, the subsidized regime carries more weight among the rural and low-income population. Moreover, the insurance benefits the subsidized population, who show a higher probability of service utilization, even though their plan is inferior to the contributory one and barriers to access remain.
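A sketch of how such affiliation determinants could be estimated, with hypothetical survey covariates, using a logit and statsmodels' average marginal effects for interpretation:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
# Hypothetical household survey rows; names are illustrative only.
urban = rng.binomial(1, 0.6, n)
formal_job = rng.binomial(1, 0.35, n)
log_income = rng.normal(13, 1, n)
eta = -8 + 0.5 * urban + 1.2 * formal_job + 0.55 * log_income
insured = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([urban, formal_job, log_income]))
fit = sm.Logit(insured, X).fit(disp=False)
# Average marginal effects: how much each covariate shifts the
# probability of being insured.
print(fit.get_margeff().summary())
```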
Abstract:
In many practical situations where spatial rainfall estimates are needed, rainfall occurs as a spatially intermittent phenomenon. An efficient geostatistical method for rainfall estimation in the case of intermittency has previously been published and comprises the estimation of two independent components: a binary random function modeling the intermittency and a continuous random function modeling the rainfall inside the rainy areas. The final rainfall estimates are obtained as the product of the estimates of these two random functions. However, the published approach does not contain a method for estimating uncertainties. The contribution of this paper is the presentation of the indicator maximum likelihood estimator, from which the local conditional distribution of the rainfall value at any location may be derived using an ensemble approach. From the conditional distribution, representations of uncertainty such as the estimation variance and confidence intervals can be obtained. An approximation to the variance can be calculated more simply by assuming that rainfall intensity is independent of location within the rainy area. The methodology has been validated using simulated and real rainfall data sets. The results of these case studies show good agreement between predicted uncertainties and measured errors obtained from the validation data.
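A minimal ensemble sketch of the product construction and its uncertainty at a single location, assuming a Bernoulli indicator times a lognormal intensity (both hypothetical choices; the paper derives the conditional distribution from the indicator maximum likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(8)
n_ens = 2000
# Ensemble for one target location: simulate the binary rain/no-rain
# indicator and, inside rainy areas, a lognormal intensity.
p_rain = 0.35                                  # estimated P(rain) at the site
wet = rng.binomial(1, p_rain, n_ens)
intensity = rng.lognormal(mean=1.0, sigma=0.6, size=n_ens)
rain = wet * intensity                         # product of the two components

est = rain.mean()                              # point estimate
var = rain.var(ddof=1)                         # estimation variance
lo, hi = np.percentile(rain, [5, 95])          # 90% interval from the ensemble
print(f"estimate {est:.2f} mm, variance {var:.2f}, "
      f"90% CI [{lo:.2f}, {hi:.2f}]")
```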
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
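One textbook way to operationalize such a test, shown below on synthetic data, is the encompassing regression of y − p1 on p2 − p1 with a HAC-robust t test of the combination weight; the paper's statistics and limiting distributions are more general than this sketch.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
T = 200
# Hypothetical recession indicator and two probability forecasts.
p1 = np.clip(rng.beta(2, 5, T), 0.01, 0.99)
p2 = np.clip(p1 + rng.normal(0, 0.1, T), 0.01, 0.99)
y = rng.binomial(1, 0.5 * p1 + 0.5 * p2)

# Under the quadratic score, forecast 1 encompasses forecast 2 if
# lambda = 0 in  y - p1 = lambda * (p2 - p1) + error.
reg = sm.OLS(y - p1, p2 - p1).fit(cov_type="HAC",
                                  cov_kwds={"maxlags": 4})
print(f"lambda = {reg.params[0]:.3f}, t = {reg.tvalues[0]:.2f}")
```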
Abstract:
This dissertation analyzed whether there is a significant relationship between governance structures (board structure and composition) and financial distress. The topic was chosen because academic studies on corporate governance and its relation to financial distress are still scarce. Moreover, the subject is relevant to the corporate world, since understanding which board structures and compositions are most effective at avoiding financial distress is of interest to several stakeholders, notably shareholders and creditors. To test for this relationship, data on Brazilian listed companies were used and logit models of financial distress were developed. With financial distress as the response variable, we started from a base model with financial control variables and, in stages, added new determinants and combinations of these variables to build intermediate models. The final model contained all the most relevant explanatory variables. The study variables can be classified as governance-structure variables (DUA, GOV, and COF), board quality (QUA), and ownership structure (PRO1 and PRO2). The base models used were that of Daily and Dalton (1994a) and our own, developed to better model financial distress and its relation to the governance-structure variables. Across the various models tested, significant relationships were found for the percentage of dependent board members (GOV), the percentage of board members from the educational elite (QUA), the percentage of dispersed shares (PRO1), and the percentage of shares held by a relevant state shareholder (PRO2). Therefore, we cannot reject the hypotheses that more dependent board members, fewer board members from the educational elite, and a less concentrated ownership structure contribute to future financial distress. However, the dummy variables for duality (DUA) and for the fiscal council (COF) were not statistically significant.
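The staged model building can be mimicked as below: fit a base logit with financial controls, add a block of governance variables, and test the improvement with a likelihood-ratio statistic. Names such as GOV and QUA only echo the dissertation's labels; the data and effects are simulated.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(10)
n = 400
leverage = rng.uniform(0, 1, n)              # financial control
roa = rng.normal(0.05, 0.1, n)               # financial control
GOV = rng.uniform(0, 1, n)                   # % dependent board members
QUA = rng.uniform(0, 1, n)                   # % "educational elite" members
eta = -2 + 3 * leverage - 5 * roa + 1.5 * GOV - 1.0 * QUA
distress = rng.binomial(1, 1 / (1 + np.exp(-eta)))

base = sm.Logit(distress, sm.add_constant(np.column_stack([leverage, roa])))
full = sm.Logit(distress, sm.add_constant(
    np.column_stack([leverage, roa, GOV, QUA])))
f0, f1 = base.fit(disp=False), full.fit(disp=False)
lr = 2 * (f1.llf - f0.llf)                   # likelihood-ratio statistic
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=2):.4f}")
```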
Abstract:
In this paper, a methodology is proposed for the geometric refinement of laser scanning building roof contours using high-resolution aerial images and Markov Random Field (MRF) models. The proposed methodology assumes that the 3D description of each building roof reconstructed from the laser scanning data (i.e., a polyhedron) is topologically correct and that it is only necessary to improve its accuracy. Since roof ridges are accurately extracted from laser scanning data, our main objective is to use high-resolution aerial images to improve the accuracy of roof outlines. To meet this goal, the available roof contours are first projected onto the image space. After that, the projected polygons and the straight lines extracted from the image are used to establish an MRF description, which is based on relations (relative length, proximity, and orientation) between the two sets of straight lines. The energy function associated with the MRF is minimized using a modified version of the brute-force algorithm, resulting in a grouping of straight lines for each roof object. Finally, each grouping of straight lines is topologically reconstructed based on the topology of the corresponding laser scanning polygon projected onto the image space. Preliminary results showed that the proposed methodology is promising, since most sides of the refined polygons are geometrically better than the corresponding projected laser scanning straight lines.
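A toy version of the line-grouping step, assuming simplified energy terms (proximity and orientation only; the paper also uses relative length and inter-line relations) and an exhaustive search over assignments:

```python
import numpy as np
from itertools import product

# Toy MRF labeling: assign to each projected roof-polygon side one
# extracted image line (or "none"), minimizing an energy that rewards
# proximity and similar orientation.  Geometry is hypothetical.
sides = [((0, 0), (10, 0)), ((10, 0), (10, 8))]          # projected sides
lines = [((0.2, 0.3), (9.8, 0.1)), ((10.2, 0.1), (10.1, 7.9)),
         ((3, 3), (6, 7))]                               # extracted lines

def angle(seg):
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1) % np.pi

def energy(side, line):
    d = np.linalg.norm(np.mean(line, axis=0) - np.mean(side, axis=0))
    dtheta = abs(angle(side) - angle(line))
    return d + 5.0 * min(dtheta, np.pi - dtheta)         # proximity + orientation

# Brute force over all assignments (label len(lines) means "no match").
best = min(product(range(len(lines) + 1), repeat=len(sides)),
           key=lambda lab: sum(energy(sides[i], lines[l]) if l < len(lines)
                               else 3.0                   # penalty for no match
                               for i, l in enumerate(lab)))
print("best assignment:", best)
```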