890 results for monotone estimating
Abstract:
The chloride mass balance method was used to estimate the average diffuse groundwater recharge in northeastern Gran Canaria (Canary Islands), where the largest recharge to the volcanic island aquifer occurs. Rainwater was sampled monthly in ten rainwater collectors to determine the bulk deposition rate of chloride for the 2008–2014 period. Average chloride deposition decreases inland from more than 10 g·m⁻²·year⁻¹ to about 4 g·m⁻²·year⁻¹. Applying the chloride mass balance method, after subtracting chloride loss with surface runoff, yielded an estimated average recharge of about 28 hm³/year, or 92 mm/year (24% of precipitation), in the study area.
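The arithmetic behind a chloride-mass-balance recharge estimate can be sketched as follows. All the numbers in the example are hypothetical illustrations, not values from the study: chloride deposition and runoff export in g·m⁻²·year⁻¹, and the chloride concentration of recharge water in mg/L (which equals g/m³).

```python
# Chloride mass balance (CMB) sketch: recharge (mm/yr) equals the chloride
# deposition remaining after runoff export (g/m2/yr) divided by the chloride
# concentration of the recharge water (mg/L == g/m3), times 1000 mm/m.

def cmb_recharge_mm(deposition_g_m2_yr, runoff_export_g_m2_yr, cl_recharge_mg_l):
    """Diffuse recharge in mm/yr from a chloride mass balance."""
    net_cl = deposition_g_m2_yr - runoff_export_g_m2_yr  # g/m2/yr reaching the aquifer
    recharge_m_yr = net_cl / cl_recharge_mg_l            # (g/m2/yr) / (g/m3) = m/yr
    return recharge_m_yr * 1000.0                        # m -> mm

# Hypothetical inputs: 4 g/m2/yr deposition, 0.3 g/m2/yr lost to runoff,
# 40 mg/L chloride in recharge water.
print(round(cmb_recharge_mm(4.0, 0.3, 40.0), 1))  # 92.5
```

The dimensional cancellation (g·m⁻²·yr⁻¹ divided by g·m⁻³ gives m/yr) is what makes the method convenient: only deposition and a water concentration are needed.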
Abstract:
My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy-makers observing rising wait times for health care, it is important to understand the structure of physicians' labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires, in order to keep the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring how physician productivity varies over the course of a career), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices.
These data cover a period during which the Quebec government changed the relative prices of health services. I used a model-based approach to develop and estimate a structural model of labour supply in which physicians can multitask. In my model, physicians choose the number of hours worked and the allocation of those hours across the different services offered, while service prices are set by the government. The model generates an earnings equation that depends on hours worked and on a price index representing the marginal return to hours worked when they are allocated optimally across services. The price index depends on the prices of the services offered and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I applied the model to panel data on physician remuneration in Quebec merged with time-use data for the same physicians. I use the model to examine two dimensions of the supply of health services. First, I analyse the use of monetary incentives to induce physicians to modify their production of the various services. Although previous studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services. Current debates in Canadian health-policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services.
My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyse how experience affects physician productivity. This has important implications for physician recruitment to meet the growing demand of an ageing population, particularly as the most experienced (and most productive) physicians retire. In the first essay, I estimated the earnings function conditional on hours worked, using instrumental variables to control for potential endogeneity of hours worked. As instruments I used indicator variables for physician age, the marginal tax rate, the stock-market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds of the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more hours of work to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analysing the variation in physicians' hours worked, the volume of services provided, and physician income. To do so, I used the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting a tendency for physicians to increase the volume of the service whose price has risen the most.
The cross-price substitution elasticities are also large but negative. In addition, there is an income effect associated with fee increases. I used the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (average elasticity of -0.02) as well as their clinical hours worked (average elasticity of -0.07). They would also reduce the volume of services provided (average elasticity of -0.05). Third, I exploited the natural link between the income of a fee-for-service physician and his productivity to establish the physician productivity profile. To do so, I modified the model specification to account for the relationship between a physician's productivity and his experience. I estimate the earnings equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to dropping the parametric assumption. In addition, one more year of experience increases a physician's service production by 1,003 Canadian dollars. I used the estimated model parameters to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
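The replacement-ratio logic can be illustrated with a hypothetical increasing, concave experience profile. The coefficients and experience levels below are invented for illustration only; the thesis's own estimated profile is what yields the reported ratio of 1.2.

```python
# Sketch of a replacement-ratio calculation with a made-up concave
# productivity profile f(x) = 100 + 8x - 0.12x^2 (illustrative only).
def productivity(experience_years):
    return 100.0 + 8.0 * experience_years - 0.12 * experience_years ** 2

# How many physicians with 5 years of experience match one with 30 years?
ratio = productivity(30) / productivity(5)
print(round(ratio, 2))
```

The concavity means the gap between experienced and inexperienced physicians narrows as the junior cohort accumulates experience, so the ratio computed at hiring time overstates the long-run staffing need.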
Abstract:
The purpose of this thesis is to conduct a comparative study estimating the impact of the financial crisis on the GNI of Greece and Iceland. By applying synthetic control matching (a relatively new methodology), the study compares the two countries and draws conclusions about the measures each adopted. The results indicate that in both cases the adopted measures were not optimal, since the synthetic counterfactuals appear to perform better than the actual Greece and Iceland. Moreover, it is shown that Iceland reacted better to the shock it was exposed to. However, differences between the two countries' characteristics impede applying the Icelandic measures to the Greek case.
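The core of synthetic control matching is a constrained least-squares problem: find non-negative donor-country weights summing to one so that the weighted combination tracks the treated country's pre-crisis outcome path. A minimal sketch with simulated data (the donor pool, outcome values and weights below are invented, not the thesis dataset):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical pre-crisis outcome paths: rows = years, columns = donors.
X0 = rng.normal(100.0, 10.0, size=(8, 5))   # 8 pre-treatment years, 5 donors
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
x1 = X0 @ true_w                             # treated unit built from donors

def loss(w):
    # Squared distance between treated and synthetic pre-treatment paths.
    return np.sum((x1 - X0 @ w) ** 2)

# Weights are constrained to the simplex: w_j >= 0 and sum(w) = 1.
n = X0.shape[1]
res = minimize(loss, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
w_hat = res.x
synthetic_path = X0 @ w_hat  # counterfactual outcome for the treated unit
print(np.round(w_hat, 2))
```

Post-treatment, the gap between the actual outcome and `X0_post @ w_hat` is read as the treatment effect; full applications also match on covariates, which this sketch omits.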
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
This paper presents a scientific development to address the current absence of a convenient technique for identifying the ductile-to-brittle transition of bentonite clay mats. Instrumented indentation and 3-point bending tests were performed on different liquid-polymer-hydrated bentonite clay mats at varying moisture contents. The properties measured include a modified Brinell Hardness Number (BHN) and elastic structural stiffness (EI). The dependence of flexural stiffness on moisture content is shown to be well described by a power function. The ductile-to-brittle transition of the clay mat is governed primarily by the change in moisture content; for the clay mat to remain flexible, a critical moisture content of 1.7 times its plastic limit is required. The results also indicate a strong correlation between indentation hardness and structural stiffness. The paper also describes the resulting development of a portable quality-control device for monitoring moisture content to ensure the flexibility of the clay mats.
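A power-function dependence of stiffness on moisture content is usually fitted by linear regression in log-log space. A minimal sketch with fabricated data (the coefficients, exponent and moisture values below are hypothetical, not the paper's measurements):

```python
import numpy as np

# Hypothetical (moisture content %, flexural stiffness EI) pairs generated
# from EI = a * w**b purely to illustrate the fitting procedure.
w = np.array([50.0, 100.0, 150.0, 200.0, 300.0])
EI = 5.0e4 * w ** -1.8

# A power law is linear in log-log space: log(EI) = log(a) + b * log(w).
b, log_a = np.polyfit(np.log(w), np.log(EI), 1)
a = np.exp(log_a)
print(round(a, 1), round(b, 3))  # recovers a = 50000.0, b = -1.8
```

With real, noisy data the same two-line fit gives least-squares estimates of the power-law parameters, and the residuals in log space indicate how well the power form holds.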
Abstract:
One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since that initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches the firm's capital structure from an information-asymmetry perspective and assumes a hierarchy of financing, with firms using internal funds first, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on a panel of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of capital structure determinants are identified: firm size, profitability, firm growth opportunities, risk, asset tangibility, taxes, speed of adjustment and financial deficit. Regression analysis is used to examine the effects of these firm characteristics on capital structure. The regression models were formed based on the relevant theories. The general capital structure model is estimated with a fixed effects estimator. Dynamic models also play an important role in several areas of corporate finance, but the combination of fixed effects and lagged dependent variables makes model estimation more complicated. A dynamic partial adjustment model is therefore estimated using the Arellano and Bond (1991) first-differencing generalized method of moments, as well as ordinary least squares and fixed effects estimators. The results for Finnish listed firms show support for the predicted effects of profitability, firm size and non-debt tax shields.
However, no conclusive support for the pecking order theory is found. Even so, the effect of pecking order behaviour cannot be fully ignored, and it is concluded that, rather than being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% a year, using the book debt ratio.
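The partial adjustment model behind the 29% figure says the debt ratio closes a fixed fraction λ of the gap to its target each year: D_t = (1 − λ)·D_{t−1} + λ·D*. A sketch showing how λ falls out of a regression of the ratio on its lag (the target, starting value and noise level are hypothetical; λ = 0.29 is chosen to mirror the reported speed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate D_t = (1 - lam) * D_{t-1} + lam * D_star + e_t with a
# hypothetical target debt ratio D_star and adjustment speed lam = 0.29.
lam, d_star, T = 0.29, 0.40, 5000
d = np.empty(T)
d[0] = 0.10
for t in range(1, T):
    d[t] = (1 - lam) * d[t - 1] + lam * d_star + rng.normal(0.0, 0.01)

# OLS of D_t on D_{t-1}: the slope estimates 1 - lam.
slope, intercept = np.polyfit(d[:-1], d[1:], 1)
print(round(1 - slope, 2))  # estimated adjustment speed, approx. 0.29
```

In panel applications the same slope is biased by fixed effects combined with the lagged dependent variable, which is exactly why the thesis turns to the Arellano-Bond first-differenced GMM estimator.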
Abstract:
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate of the actual number of animals being killed, but they offer little information on the relation between collision rates and, for example, weather parameters, because the time of death is not known precisely. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantifying the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
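A stripped-down version of the idea of predicting collisions from a density index can be sketched with a Poisson model: collisions at a turbine have mean k × activity, k is identified at turbines with carcass searches, and then predicts rates where only acoustic data exist. This is a toy illustration, not the authors' mixture model, and every number is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy calibration data: 30 searched turbines with a bat-activity index and
# Poisson carcass counts whose mean is k_true * activity (k_true unknown
# to the analyst; all values fabricated for illustration).
k_true = 0.05                                # collisions per unit of activity
activity = rng.uniform(20.0, 200.0, size=30) # acoustic bat-activity index
carcasses = rng.poisson(k_true * activity)   # carcass-search counts

# For a Poisson mean k * a_i, the maximum-likelihood estimate of k has a
# closed form: total carcasses divided by total activity.
k_hat = carcasses.sum() / activity.sum()

# Prediction for an unsearched turbine from its activity index alone.
new_activity = 120.0
print(round(k_hat * new_activity, 1))  # predicted collisions at that turbine
```

The paper's model is richer (it mixes the two data sources and adds wind speed as a covariate), but the identification logic is the same: searched turbines pin down the rate per unit of activity.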
Abstract:
In Queensland, the subtropical strawberry (Fragaria ×ananassa) breeding program aims to combine traits into new genotypes that increase production efficiency. The contribution of individual plant traits to cost and income under subtropical Queensland conditions was investigated. The study combined knowledge of the traits with the production and marketing system to assess the economic impact (gross margin) of new cultivars on the system, with the overall goal of improving the profitability of the industry through the release of new strawberry cultivars. Genotypes varied widely in their effect on gross margin, from 48% above to 10% below the base value. The advantage of a new genotype was also affected by the proportion of total area allocated to it. The largest difference in gross margin between the optimum allocation (an 8% increase in gross margin) and an all-of-industry allocation (a 20% decrease in gross margin) was 28%. In other cases the all-of-industry allocation was also the optimum allocation, with one genotype giving a 48% gross margin benefit.
Abstract:
AIMS: Renal dysfunction is a powerful predictor of adverse outcomes in patients hospitalized for acute coronary syndrome. Three new glomerular filtration rate (GFR) estimating equations recently emerged, based on serum creatinine (CKD-EPIcreat), serum cystatin C (CKD-EPIcyst) or a combination of both (CKD-EPIcreat/cyst), and they are currently recommended to confirm the presence of renal dysfunction. Our aim was to analyse the predictive value of these new estimated GFR (eGFR) equations for mid-term mortality in patients with acute coronary syndrome, and to compare them with the traditional Modification of Diet in Renal Disease (MDRD-4) formula. METHODS AND RESULTS: 801 patients admitted for acute coronary syndrome (age 67.3±13.3 years, 68.5% male) and followed for 23.6±9.8 months were included. For each equation, patients were risk-stratified by eGFR: a high-risk group (eGFR <60 ml/min per 1.73 m²) and a low-risk group (eGFR ≥60 ml/min per 1.73 m²). The predictive performance of the equations was compared using the areas under the receiver operating characteristic curves (AUCs). Overall improvement in risk stratification was assessed with the net reclassification improvement index. The incidence of the primary endpoint was 18.1%. The CKD-EPIcyst equation had the highest overall discriminative performance for mid-term mortality (AUC 0.782±0.20) and outperformed all the other equations (p<0.001 in all comparisons). When compared with the MDRD-4 formula, the CKD-EPIcyst equation accurately reclassified a significant percentage of patients into more appropriate risk categories (net reclassification improvement index of 11.9%; p=0.003). The CKD-EPIcyst equation also added prognostic power to the Global Registry of Acute Coronary Events (GRACE) score in the prediction of mid-term mortality.
CONCLUSION: The CKD-EPIcyst equation provides a novel and improved method for assessing the mid-term mortality risk in patients admitted for acute coronary syndrome, outperforming the most widely used formula (MDRD-4), and improving the predictive value of the GRACE score. These results reinforce the added value of cystatin C as a risk marker in these patients.
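For reference, the CKD-EPIcyst equation evaluated above is the 2012 CKD-EPI cystatin C equation (Inker et al., 2012). A direct transcription, with a hypothetical example patient chosen to land near the study's 60 ml/min risk threshold:

```python
def ckd_epi_cystatin(scys_mg_l, age_years, female):
    """2012 CKD-EPI cystatin C equation; returns eGFR in ml/min per 1.73 m2.

    scys_mg_l: serum cystatin C in mg/L.
    """
    egfr = (133.0
            * min(scys_mg_l / 0.8, 1.0) ** -0.499
            * max(scys_mg_l / 0.8, 1.0) ** -1.328
            * 0.996 ** age_years)
    if female:
        egfr *= 0.932
    return egfr

# Hypothetical patient: 67-year-old male with cystatin C of 1.2 mg/L.
egfr = ckd_epi_cystatin(1.2, 67, female=False)
print(round(egfr, 1))
print("high risk" if egfr < 60 else "low risk")  # study's eGFR < 60 cut-off
```

The min/max split means the equation uses a shallower slope below a cystatin C of 0.8 mg/L and a steeper one above it, which is where its discrimination advantage over creatinine-only formulas tends to appear.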
Abstract:
Plantings of mixed native species (termed 'environmental plantings') are increasingly being established for carbon sequestration whilst providing additional environmental benefits such as biodiversity and water quality. In Australia, they are currently one of the most common forms of reforestation. Investment in establishing and maintaining such plantings relies on having a cost-effective modelling approach to providing unbiased estimates of biomass production and carbon sequestration rates. In Australia, the Full Carbon Accounting Model (FullCAM) is used for both national greenhouse gas accounting and project-scale sequestration activities. Prior to undertaking the work presented here, the FullCAM tree growth curve was not calibrated specifically for environmental plantings and generally under-estimated their biomass. Here we collected and analysed above-ground biomass data from 605 mixed-species environmental plantings, and tested the effects of several planting characteristics on growth rates. Plantings were then categorised based on significant differences in growth rates. Growth of plantings differed between temperate and tropical regions. Tropical plantings were relatively uniform in terms of planting methods and their growth was largely related to stand age, consistent with the un-calibrated growth curve. However, in temperate regions where plantings were more variable, key factors influencing growth were planting width, stand density and species-mix (proportion of individuals that were trees). These categories provided the basis for FullCAM calibration. Although the overall model efficiency was only 39-46%, there was nonetheless no significant bias when the model was applied to the various planting categories. Thus, modelled estimates of biomass accumulation will be reliable on average, but estimates at any particular location will be uncertain, with either under- or over-prediction possible. 
When compared with the un-calibrated yield curves, predictions using the new calibrations show that early growth is likely to be more rapid and total above-ground biomass may be higher for many plantings at maturity. This study has considerably improved understanding of the patterns of growth in different types of environmental plantings, and of modelling biomass accumulation in young (<25 years old) plantings. However, significant challenges remain in understanding longer-term stand dynamics, particularly temporal changes in stand density and species composition.
Abstract:
Estimating glomerular filtration rate in kidney transplantation: Still searching for the best marker
Abstract:
Kidney transplantation is the treatment of choice for end-stage renal disease. Evaluation of graft function is mandatory in the management of renal transplant recipients. Glomerular filtration rate (GFR) is generally considered the best index of graft function and is also a predictor of graft and patient survival. However, GFR measurement using inulin clearance (the gold standard), exogenous markers such as radiolabelled isotopes (⁵¹Cr-EDTA, ⁹⁹ᵐTc-DTPA or ¹²⁵I-iothalamate) or non-radioactive contrast agents (iothalamate or iohexol) is laborious as well as expensive, and is therefore rarely used in clinical practice. Endogenous markers such as serum creatinine or cystatin C are instead used to estimate kidney function, and equations using these markers, adjusted for other (mainly demographic) variables, attempt to improve the accuracy of GFR estimation (eGFR). Nevertheless, there is some concern about the inability of the available eGFR equations to accurately identify changes in GFR in kidney transplant recipients. This article reviews and discusses the performance and limitations of these endogenous markers and their equations as estimators of GFR in kidney transplant recipients, and their ability to predict significant clinical outcomes.
Abstract:
The American woodcock (Scolopax minor) population index in North America has declined by 0.9% a year since 1968, prompting managers to identify priority information and management needs for the species (Sauer et al. 2008). Managers identified a need for a population model that better informs on the status of American woodcock populations (Case et al. 2010). Population reconstruction techniques use long-term age-at-harvest data and harvest effort to estimate abundance with associated error estimates. Four new models were successfully developed using survey data (1999 to 2013). For the most recent year, 2013, the optimal model estimates a sex-specific harvest probability of 0.148 (SE = 0.017) for adult females and 0.082 (SE = 0.008) for all other age-sex cohorts. The model estimated a yearly survival rate of 0.528 (SE = 0.008). Total abundance ranged from 5,206,000 woodcock in 2007 to 6,075,800 woodcock in 1999. This study represents the first population estimates for woodcock.
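The identity at the core of harvest-based population reconstruction is simple: if h animals from a cohort are harvested, each with harvest probability p, the cohort abundance estimate is N = h / p. A sketch using the adult-female harvest probability quoted above with a purely hypothetical harvest count:

```python
# Abundance from age-at-harvest data: N = harvest / harvest probability.
def abundance(harvest_count, harvest_prob):
    return harvest_count / harvest_prob

# Hypothetical harvest of 41,000 adult females at the reported p = 0.148.
print(round(abundance(41_000, 0.148)))  # about 277,000 birds in that cohort
```

Full population reconstruction repeats this across cohorts and years, linking them with the survival rate, which is how the model propagates uncertainty into the SE values reported above.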
Abstract:
ABSTRACT Researchers frequently have to analyze scales in which some participants have failed to respond to some items. In this paper we focus on the exploratory factor analysis of multidimensional scales (i.e., scales that consist of a number of subscales), where each subscale is made up of a number of Likert-type items and the aim of the analysis is to estimate participants' scores on the corresponding latent traits. We propose a new approach for dealing with missing responses in this situation, based on (1) multiple imputation of non-responses and (2) simultaneous rotation of the imputed datasets. We applied the approach to a real dataset in which missing responses were artificially introduced following a real pattern of non-responses, and to a simulation study based on artificial datasets. The results show that our approach (specifically, hot-deck multiple imputation followed by Consensus Promin rotation) successfully computed factor score estimates even for participants with missing data.
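The first step of the proposed approach, hot-deck multiple imputation, can be sketched as follows. This minimal version draws donors at random from observed responses to the same item; real hot-deck schemes match donors on covariates or response patterns, and the data matrix below is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Likert-type responses: rows = participants, columns = items,
# np.nan = non-response (all values hypothetical).
X = np.array([[1.0, 4.0, np.nan],
              [2.0, np.nan, 5.0],
              [1.0, 3.0, 4.0],
              [2.0, 4.0, 5.0]])

def hot_deck_impute(X, rng):
    """Fill each missing cell with a randomly drawn observed value
    from the same item (a simple hot-deck donor scheme)."""
    Xi = X.copy()
    for j in range(X.shape[1]):
        observed = X[~np.isnan(X[:, j]), j]
        missing = np.isnan(Xi[:, j])
        Xi[missing, j] = rng.choice(observed, size=missing.sum())
    return Xi

# Multiple imputation: several completed datasets; in the paper each one
# is factor-analysed and the solutions are aligned by Consensus Promin.
imputed = [hot_deck_impute(X, rng) for _ in range(5)]
print(imputed[0])
```

Because each completed dataset differs only in the imputed cells, the spread of factor solutions across the five datasets reflects the uncertainty due to non-response.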