973 results for Estimating models


Relevance:

100.00%

Publisher:

Abstract:

This study analyses the applicability of current models used to estimate the mechanical properties of conventional concrete to self-compacting concrete. The mechanical properties evaluated are the modulus of elasticity, tensile strength, and modulus of rupture. An extensive database, comprising the mix proportions and mechanical properties of 627 mixtures from 138 different references, was used. The models considered are ACI, EC-2, NZS 3101:2006 (the New Zealand code) and CSA A23.3-04 (the Canadian code). The precision in estimating the modulus of elasticity and tensile strength is acceptable for all models; however, all models are less precise in estimating the modulus of rupture.
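
For reference, the ACI and EC-2 estimates of the modulus of elasticity are simple functions of compressive strength. Below is a minimal sketch of those two published formulas (the study's database, its tensile-strength and rupture models, and its precision measures are not reproduced):

```python
# Standard published code formulas for concrete modulus of elasticity.
# Only two of the four codes the study compares are sketched here.
import math

def modulus_aci(fc_mpa: float) -> float:
    """ACI 318 modulus of elasticity for normal-weight concrete, in MPa."""
    return 4700.0 * math.sqrt(fc_mpa)

def modulus_ec2(fcm_mpa: float) -> float:
    """Eurocode 2 (EC-2) mean secant modulus, in MPa: 22 GPa * (fcm/10)^0.3."""
    return 22000.0 * (fcm_mpa / 10.0) ** 0.3

if __name__ == "__main__":
    for fc in (30.0, 50.0, 70.0):
        print(f"f'c = {fc:5.1f} MPa: ACI {modulus_aci(fc):8.0f} MPa, "
              f"EC-2 {modulus_ec2(fc):8.0f} MPa")
```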

Relevance:

100.00%

Publisher:

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s, when the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). Because current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other, JSD-COCOMO, is a sizing technique which sizes a project in terms of lines of code from the process structure diagrams, and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
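
The last step of the JSD-COCOMO route — converting a size estimate in delivered source instructions into an effort estimate — uses the standard COCOMO form. A minimal sketch of basic COCOMO with Boehm's published coefficients follows; the JSD-specific counting of process structure levels is not reproduced here:

```python
# Basic COCOMO effort model: effort (person-months) = a * KDSI**b, where KDSI
# is thousands of delivered source instructions. Coefficients are Boehm's
# published basic-COCOMO values; JSD-COCOMO derives KDSI from process
# structure diagrams before this step.
COCOMO_MODES = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def cocomo_effort(kdsi: float, mode: str = "organic") -> float:
    a, b = COCOMO_MODES[mode]
    return a * kdsi ** b

if __name__ == "__main__":
    for mode in COCOMO_MODES:
        print(f"{mode:12s}: {cocomo_effort(32.0, mode):6.1f} "
              "person-months for 32 KDSI")
```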

Relevance:

100.00%

Publisher:

Abstract:

My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers watching wait times for health care grow, it is important to understand the structure of physicians' labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through learning-by-doing, and use the estimated parameters to find how many inexperienced physicians must be recruited to replace an experienced physician who retires while keeping the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring how physician productivity varies over the course of a career), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices, and cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural labour supply model in which physicians are multitasking: they choose the number of hours worked and the allocation of those hours across the different services provided, while service prices are set by the government. The model yields an earnings equation that depends on hours worked and on a price index representing the marginal return to hours when those hours are allocated optimally across services. The price index depends on service prices and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on Quebec physicians' earnings merged with data on the same physicians' time use, and use it to examine two dimensions of the supply of health services. First, I analyse the use of monetary incentives to lead physicians to change their production of the various services. Although earlier studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
Current policy debates in Canada have focused on the importance of income effects in determining how physicians respond to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects arising from changes in the relative prices of health services. Second, I analyse how experience affects physician productivity. This has important implications for physician recruitment to meet the growing demand of an ageing population, particularly as the most experienced (and most productive) physicians retire. In the first essay, I estimate the earnings function conditional on hours worked, using instrumental variables to control for possible endogeneity of hours. As instruments, I use indicator variables for physicians' ages, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, allowing a test of whether physicians respond to monetary incentives. The results show that the lower bounds of the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives: a change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analysing variation in hours worked, the volume of services provided and physicians' earnings, using the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting a tendency for physicians to increase the volume of the service whose price rose the most. The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce total hours worked (mean elasticity of -0.02) as well as clinical hours worked (mean elasticity of -0.07), and would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between a fee-for-service physician's earnings and their productivity to establish the productivity profile of physicians. To do so, I modify the model specification to account for the relationship between a physician's productivity and their experience, and estimate the earnings equation using unbalanced panel data, correcting for the non-random pattern of missing observations with a selection model. The results suggest that the productivity profile is an increasing, concave function of experience. Moreover, the profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
In addition, one more year of physician experience increases service production by 1,003 Canadian dollars. I use the estimated model parameters to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
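
As a rough illustration of the first essay's strategy (instrumenting hours in the earnings equation), the sketch below runs a textbook two-stage least squares on simulated data. All variables, instruments and coefficients are placeholders, not the thesis's Quebec panel:

```python
# A minimal 2SLS sketch of an earnings equation with endogenous hours.
# Simulated stand-ins: z plays the role of the instruments (age dummies,
# tax rates, market returns); u is the unobserved factor that makes hours
# endogenous in the earnings equation.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

z = rng.normal(size=(n, 3))                   # instruments
u = rng.normal(size=n)                        # unobserved taste shifter
hours = z @ np.array([0.8, -0.5, 0.3]) + 0.9 * u + rng.normal(size=n)
earnings = 2.0 * hours + 1.5 * u + rng.normal(size=n)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X1 = np.column_stack([np.ones(n), z])          # first stage: hours on instruments
hours_hat = X1 @ ols(hours, X1)
X2 = np.column_stack([np.ones(n), hours_hat])  # second stage: fitted hours
beta_2sls = ols(earnings, X2)

beta_ols = ols(earnings, np.column_stack([np.ones(n), hours]))
print(f"OLS slope (biased): {beta_ols[1]:.3f}")
print(f"2SLS slope:         {beta_2sls[1]:.3f}  (true value 2.0)")
```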

Relevance:

60.00%

Publisher:

Abstract:

This paper examines the hysteresis hypothesis for Brazilian industrialized exports using time series analysis. The hypothesis finds empirical representation in nonlinear adjustment of the exported quantity to relative price changes. Accordingly, the threshold cointegration analysis proposed by Balke and Fomby [Balke, N.S. and Fomby, T.B. Threshold Cointegration. International Economic Review 1997; 38: 627-645.] was used to estimate models with asymmetric adjustment of the error correction term. Among the sixteen industrial sectors selected, there was evidence of nonlinearities in the residuals of the long-run export supply or demand relationships for nine of them. These nonlinearities represent asymmetric and/or discontinuous responses of exports to different representative measures of real exchange rates, in addition to other components of the long-run demand or supply equations.
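
A minimal sketch of the Balke-Fomby two-step idea on simulated data: estimate the long-run relation by OLS, then fit a threshold autoregression to the residuals so that the speed of adjustment differs across regimes. The series names and the zero threshold are illustrative assumptions:

```python
# Threshold cointegration, Engle-Granger style: OLS long-run relation, then
# a threshold AR(1) on the equilibrium error with regime-specific adjustment.
import numpy as np

rng = np.random.default_rng(1)
T = 600
price = np.cumsum(rng.normal(size=T))        # I(1) relative-price series

# Equilibrium error adjusts asymmetrically: fast below 0, slow above.
u = np.zeros(T)
for t in range(1, T):
    rho = 0.5 if u[t - 1] < 0.0 else 0.95
    u[t] = rho * u[t - 1] + rng.normal()
exports = 1.0 + 0.8 * price + u              # long-run relation + TAR error

# Step 1: long-run relation by OLS; keep the residuals.
X = np.column_stack([np.ones(T), price])
beta = np.linalg.lstsq(X, exports, rcond=None)[0]
resid = exports - X @ beta

# Step 2: threshold AR(1) on the residuals, regimes split at zero.
lag, lead = resid[:-1], resid[1:]
for name, mask in [("below threshold", lag < 0.0),
                   ("above threshold", lag >= 0.0)]:
    rho_hat = np.sum(lag[mask] * lead[mask]) / np.sum(lag[mask] ** 2)
    print(f"adjustment {name}: rho = {rho_hat:.2f}")
```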

Relevance:

60.00%

Publisher:

Abstract:

Data augmentation is a powerful technique for estimating models with latent or missing data, but applications in agricultural economics have thus far been few. This paper showcases the technique in an application to data on milk market participation in the Ethiopian highlands, where a key impediment to economic development is an apparently low rate of market participation. Consequently, economic interest centers on the "locations" of nonparticipants in relation to the market and their "reservation values" across covariates. These quantities are of policy interest because they measure the additional inputs necessary for nonparticipants to enter the market. One quantity of primary interest is the minimum amount of surplus milk (the "minimum efficient scale of operations") that the household must acquire before market participation becomes feasible. We estimate this quantity through routine application of data augmentation and Gibbs sampling applied to a random-censored Tobit regression. Incorporating random censoring markedly affects the household's marketable-surplus requirements, but only slightly affects the covariate requirement estimates, and generally leads to more plausible policy estimates than those obtained from the zero-censored formulation.
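
The sketch below illustrates the data-augmentation step on simulated data: censored observations have their latent values drawn from a truncated normal, after which the regression parameters are updated as if the data were complete. For brevity it uses a fixed zero censoring point rather than the paper's random-censoring formulation:

```python
# Data-augmentation Gibbs sampler for a zero-censored Tobit regression.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, beta_true, sigma_true = 1000, np.array([-0.5, 1.0]), 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_star = X @ beta_true + sigma_true * rng.normal(size=n)
y = np.maximum(y_star, 0.0)                  # observed outcome, censored at 0
cens = y == 0.0

beta, sig2 = np.zeros(2), 1.0
draws = []
for it in range(2000):
    # Augmentation: draw latent y* for censored units from the normal
    # truncated above at the censoring point.
    mu_c = X[cens] @ beta
    b = (0.0 - mu_c) / np.sqrt(sig2)
    z = truncnorm.rvs(-np.inf, b, loc=mu_c, scale=np.sqrt(sig2),
                      random_state=rng)
    y_aug = y.copy()
    y_aug[cens] = z
    # Conditional posterior of beta (flat prior): normal around the OLS fit.
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y_aug
    beta = rng.multivariate_normal(beta_hat, sig2 * XtX_inv)
    # Conditional posterior of sigma^2 given beta: scaled inverse chi-square.
    r = y_aug - X @ beta
    sig2 = (r @ r) / rng.chisquare(n)
    if it >= 500:                            # discard burn-in
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0))
```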

Relevance:

60.00%

Publisher:

Abstract:

Pós-graduação em Agronomia (Energia na Agricultura) - FCA

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

Most empirical disciplines promote the reuse and sharing of datasets, as it leads to greater possibility of replication. While this is increasingly the case in Empirical Software Engineering, some of the most popular bug-fix datasets are now known to be biased. This raises two significant concerns: first, that sample bias may lead to underperforming prediction models, and second, that the external validity of studies based on biased datasets may be suspect. This issue has caused considerable consternation in the ESE literature in recent years. However, there is a confounding factor in these datasets that has not been examined carefully: size. Biased datasets sample only some of the data that could be sampled, and do so in a biased fashion; but biased samples could be smaller or larger. Smaller datasets in general provide a less reliable basis for estimating models, and thus could lead to inferior model performance. In this setting, we ask: what affects performance more, bias or size? We conduct a detailed, large-scale meta-analysis, using simulated datasets sampled with bias from a high-quality dataset which is relatively free of bias. Our results suggest that size always matters just as much as bias direction, and in fact much more than bias direction when considering information-retrieval measures such as AUC and F-score. This indicates that, at least for prediction models, even when dealing with sampling bias, simply finding larger samples can sometimes be sufficient. Our analysis also exposes the complexity of the bias issue and raises further questions to be explored in the future.
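
The sketch below mimics the paper's question on synthetic data: it trains the same classifier on a small unbiased sample and on a larger, deliberately biased one, and compares held-out AUC. The data generator and the bias mechanism are illustrative assumptions, not the paper's bug-fix datasets:

```python
# Size vs. bias: compare held-out AUC for a small unbiased training sample
# against a larger sample drawn with a deliberate sampling bias.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, weights=[0.8],
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)
rng = np.random.default_rng(0)

def auc_on_sample(idx):
    model = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# (a) small unbiased sample
small = rng.choice(len(y_pool), size=500, replace=False)
# (b) larger biased sample: over-samples one region of feature space
w = np.exp(2.0 * X_pool[:, 0])
biased = rng.choice(len(y_pool), size=5000, replace=False, p=w / w.sum())

print(f"small unbiased (n=500): AUC = {auc_on_sample(small):.3f}")
print(f"large biased (n=5000):  AUC = {auc_on_sample(biased):.3f}")
```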

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this research is to determine the size distribution and number of nanometric particles emitted by diesel engines, using on-board equipment in extra-urban traffic. Cutting-edge measuring equipment was used under average extra-urban driving conditions over more than 800 km along the Madrid-Badajoz-Madrid route, in a vehicle representative of Spain's automotive fleet, and innovative, groundbreaking methods for recording this type of emissions were implemented. All this paves the way for lines of research and development that should help to understand, quantify and characterize the behaviour of such particles, as well as their impact on the population's quality of life. The study makes two important contributions to the field. First, it makes it possible to record emissions under the transient conditions characteristic of real traffic. Second, it keeps the measuring conditions under control and prevents the random formation of particles of volatile origin, thanks to the built-in conditioning system for the exhaust gas sample. As a result, an abundant and reliable sample was gathered, which enabled the construction of mathematical models to estimate the emission of nanometric, ultrafine, fine and total particles on volumetric, spatial and temporal bases as a function of the slope of the road's orographic profile, provided it lies within the ±5.0% range. These emission estimating models lower both the experimentation costs and the complexity of the required equipment, and they provided the basis for the first version of a software application that estimates the particles emitted by a diesel engine under extra-urban traffic conditions (Partículas Emitidas por Motores Diésel, PEMDI).

Relevance:

40.00%

Publisher:

Abstract:

The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights under field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weights. The tree data were divided into two random samples: one was used to develop the regression equations, and the other to validate them. The models were developed using simple linear regression analysis, where the dependent variable was dry mass and the independent variables were height (h), diameter (d) and d²h. The validation was carried out using the Pearson correlation coefficient, the paired Student's t-test and the standard error of estimation. The best equations for estimating aboveground phytomass were: lnDW = -3.068 + 2.522 ln d (r² = 0.91; s y/x = 0.67) and lnDW = -3.676 + 0.951 ln d²h (r² = 0.94; s y/x = 0.56).
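
Applying the two reported equations is straightforward; the sketch below exponentiates the fitted log-linear forms to predict dry weight (DW). The input units (cm for d, m for h) are assumptions, as the abstract does not state them:

```python
# The paper's two allometric equations for aboveground dry weight (DW).
import math

def dry_weight_d(d: float) -> float:
    """lnDW = -3.068 + 2.522 ln d  (r^2 = 0.91)"""
    return math.exp(-3.068 + 2.522 * math.log(d))

def dry_weight_d2h(d: float, h: float) -> float:
    """lnDW = -3.676 + 0.951 ln(d^2 h)  (r^2 = 0.94)"""
    return math.exp(-3.676 + 0.951 * math.log(d * d * h))

if __name__ == "__main__":
    d, h = 10.0, 8.0   # illustrative tree: 10 cm diameter, 8 m height
    print(f"DW from d alone: {dry_weight_d(d):7.2f}")
    print(f"DW from d^2*h:   {dry_weight_d2h(d, h):7.2f}")
```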

Relevance:

40.00%

Publisher:

Abstract:

Genetic research on the risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap between risk factors for initiation of use and risk factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases in estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates have little credibility unless external data favor the selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk of dependence. Under these conditions, and under certain distributional assumptions, recovery of the simulated genetic and environmental correlations becomes possible. Illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.

Relevance:

40.00%

Publisher:

Abstract:

This paper describes a methodology to estimate the coefficients, test specification hypotheses and conduct policy exercises in multi-country VAR models with cross-unit interdependencies, unit-specific dynamics and time variation in the coefficients. The framework of analysis is Bayesian: a prior flexibly reduces the dimensionality of the model and puts structure on the time variations; MCMC methods are used to obtain posterior distributions; and marginal likelihoods are used to check the fit of various specifications. Impulse responses and conditional forecasts are obtained from the output of the MCMC routine. The transmission of certain shocks across countries is analyzed.
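
As a drastically reduced illustration of the Bayesian machinery, the sketch below estimates a single-country VAR(1) with a ridge (Minnesota-style) prior and a known error variance, drawing from the conjugate posterior; the paper's cross-unit interdependencies, time variation and MCMC for the full hierarchy are not reproduced:

```python
# Bayesian VAR(1) with a ridge prior, estimated equation by equation.
import numpy as np

rng = np.random.default_rng(3)
T, k = 300, 2
A_true = np.array([[0.5, 0.1], [0.0, 0.8]])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.normal(scale=0.5, size=k)

X, Ynext = Y[:-1], Y[1:]
lam = 1.0          # prior tightness: shrinks coefficients toward zero
sigma2 = 0.25      # error variance, assumed known for this sketch

# Conjugate posterior for each equation i:
#   A_i | data ~ N((X'X + lam*I)^{-1} X' y_i, sigma2 * (X'X + lam*I)^{-1})
P = np.linalg.inv(X.T @ X + lam * np.eye(k))
post_mean = (P @ X.T @ Ynext).T   # row i = posterior mean for equation i
draws = np.array([
    [rng.multivariate_normal(post_mean[i], sigma2 * P) for i in range(k)]
    for _ in range(1000)
])
print("posterior mean of A:\n", draws.mean(axis=0))
```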

Relevance:

40.00%

Publisher:

Abstract:

We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distributions. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the length typical of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced; that is, rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
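
A minimal Monte Carlo in the spirit of the paper's simulations: when each simulated subject has their own choice sensitivity, a pooled fit that assumes a single common parameter recovers an attenuated value. The learning model itself is replaced here by a one-parameter logistic choice rule for brevity:

```python
# Bias from ignoring unobserved heterogeneity: heterogeneous sensitivities
# theta_i, but a pooled logit maximum likelihood assumes one common theta.
import numpy as np

rng = np.random.default_rng(4)
n_subj, n_trials = 200, 100
theta_i = rng.lognormal(mean=0.5, sigma=0.6, size=n_subj)  # heterogeneous
x = rng.normal(size=(n_subj, n_trials))
p = 1.0 / (1.0 + np.exp(-theta_i[:, None] * x))
y = rng.random((n_subj, n_trials)) < p                     # binary choices

def pooled_loglik(theta):
    q = 1.0 / (1.0 + np.exp(-theta * x))
    return np.sum(np.where(y, np.log(q), np.log(1.0 - q)))

grid = np.linspace(0.1, 5.0, 200)
theta_pooled = grid[np.argmax([pooled_loglik(t) for t in grid])]
print(f"mean of true theta_i:      {theta_i.mean():.2f}")
print(f"pooled estimate (no het.): {theta_pooled:.2f}")
```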

Relevance:

40.00%

Publisher:

Abstract:

International industry data permit testing whether the industry-specific impact of cross-country differences in institutions or policies is consistent with economic theory. Empirical implementation requires specifying the industry characteristics that determine the strength of the impact. Most of the literature has been using US proxies of the relevant industry characteristics. We show that using industry characteristics in a benchmark country as a proxy of the relevant industry characteristics can result in an attenuation bias or an amplification bias. We also describe circumstances allowing for an alternative approach that yields consistent estimates. As an application, we reexamine the influential conjecture that financial development facilitates the reallocation of capital from declining to expanding industries.
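
Classical measurement error in a proxy produces the familiar attenuation; the sketch below reproduces that case on simulated data (the paper's point is that benchmark-country proxies are not classical noise and can also amplify, which this toy example does not capture):

```python
# Attenuation bias: regressing on a noisy proxy shrinks the OLS slope by the
# reliability ratio var(x) / var(proxy).
import numpy as np

rng = np.random.default_rng(5)
n = 10000
x_true = rng.normal(size=n)             # relevant industry characteristic
y = 1.5 * x_true + rng.normal(size=n)   # outcome depends on the true value

for noise in (0.0, 0.5, 1.0):
    proxy = x_true + noise * rng.normal(size=n)   # noisy proxy of x_true
    slope = np.polyfit(proxy, y, 1)[0]
    reliability = 1.0 / (1.0 + noise**2)          # var(x) / var(proxy)
    print(f"noise sd {noise:.1f}: OLS slope {slope:.2f} "
          f"(true 1.5, predicted attenuation {1.5 * reliability:.2f})")
```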