948 results for Single Equation Models
Abstract:
Computational fluid dynamics (CFD) modeling is an important tool in designing new combustion systems. With CFD modeling, entire combustion systems can be modeled and their emissions and performance predicted. CFD modeling can also be used to develop new and better combustion systems from both an economic and an environmental point of view. In CFD modeling of solid fuel combustion, the combustible fuel is generally treated as single fuel particles. One limitation of CFD modeling concerns the sub-models describing the combustion of single fuel particles: available models in the scientific literature are in many cases not suitable as sub-models for CFD modeling, since they depend on a large number of input parameters and are computationally heavy. In this thesis, CFD-applicable models are developed for the combustion of single fuel particles. The single particle models can be used to improve combustion performance in various combustion devices or to develop completely new technologies. The investigated fields are the oxidation of carbon (C) and nitrogen (N) in char residues from solid fuels. Modeled char-C oxidation rates are compared to experimental oxidation rates for a large number of pulverized solid fuel chars under relevant combustion conditions. The experiments were performed in an isothermal plug flow reactor operating at 1123-1673 K and 3-15 vol.% O2. In the single particle model, char oxidation is based on apparent kinetics and depends on three fuel-specific parameters: the apparent pre-exponential factor, the apparent activation energy, and the apparent reaction order. The single particle model can be incorporated as a sub-model into a CFD code. The results show that the modeled char oxidation rates are in good agreement with the experimental rates up to around 70% of burnout.
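An apparent-kinetics rate of the kind described above is commonly written as an nth-order Arrhenius expression in the oxygen partial pressure. The sketch below is a generic illustration of that functional form; the parameter values in it are made-up placeholders, not the fuel-specific values fitted in the thesis.

```python
import math

def char_oxidation_rate(A, Ea, n, T, p_O2):
    """Apparent nth-order char oxidation rate.

    A    : apparent pre-exponential factor
    Ea   : apparent activation energy (J/mol)
    n    : apparent reaction order
    T    : particle temperature (K)
    p_O2 : oxygen partial pressure (atm)
    """
    R = 8.314  # universal gas constant, J/(mol K)
    return A * math.exp(-Ea / (R * T)) * p_O2 ** n

# Illustrative values only: a first-order char at 1273 K, 10 vol.% O2 at 1 atm total
rate = char_oxidation_rate(A=500.0, Ea=1.2e5, n=1.0, T=1273.0, p_O2=0.10)
```

Note that for n = 0 (the anthracite case reported below) the rate is independent of the oxygen level, whereas for n = 1 (the bituminous case) it scales linearly with it.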
Moreover, the results show that the activation energy and the reaction order can be assumed constant for a large number of bituminous coal chars under conditions limited by the combined effects of chemical kinetics and pore diffusion. Based on this, a new model relying on only one fuel-specific parameter is developed (Paper III). The results also show that the reaction orders of bituminous coal chars and anthracite chars differ under similar conditions (Paper I and Paper II): reaction orders of bituminous coal chars were found to be one, while reaction orders of anthracite chars were determined to be zero. This difference in reaction orders has not previously been observed in the literature and should be considered in future char oxidation models. One of the most frequently used comprehensive char oxidation models could not explain the difference in reaction orders; in the thesis (Paper II), a modification to that model is suggested in order to explain the difference between anthracite chars and bituminous coal chars. Two single particle models are also developed for NO formation and reduction during the oxidation of single biomass char particles. In these models, the char-N is assumed to be oxidized to NO, and the NO is partly reduced inside the particle. The first model (Paper IV) is based on the concentration gradients of NO inside and outside the particle, while the second is simplified to such an extent that it is based on apparent kinetics and can be incorporated as a sub-model into a CFD code (Paper V). Modeled NO release rates from both models were in good agreement with experimental measurements from a single particle reactor of quartz glass operating at 1173-1323 K and 3-19 vol.% O2. In the future, the models can be used to reduce NO emissions in new combustion systems.
Abstract:
Passive solar building design is the process of designing a building while considering sunlight exposure, so that the building receives heat in winter and rejects heat in summer. The main goal of passive solar building design is to remove or reduce the need for mechanical and electrical heating and cooling systems, thereby saving energy costs and reducing environmental impact. This research uses evolutionary computation to design passive solar buildings. Evolutionary design has been used in many research projects to build 3D models of structures automatically. In this research, we use a mixture of split grammars and string rewriting to generate new 3D structures. To evaluate energy costs, the EnergyPlus system is used. This comprehensive building energy simulation system is used alongside the genetic programming system. In addition, genetic programming also considers other design and geometry characteristics of the building as search objectives, for example window placement, building shape, size, and complexity. In passive solar design, reducing the energy needed for cooling and for heating are two objectives of interest. Experiments show that smaller buildings with no windows and no skylights are the most energy-efficient models. Window heat gain is another objective, used to encourage models to have windows. In addition, window- and volume-based objectives are tried. To examine the impact of the environment on designs, experiments are run for five different geographic locations. Both single-floor and multi-floor models are examined. Across the experiments, solutions were consistent with respect to materials, sizes, and appearance, and satisfied the problem constraints in all instances.
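The string-rewriting idea mentioned above can be illustrated with a minimal L-system-style rewriter: a start symbol is repeatedly expanded by production rules into a longer string that encodes a structure. The rule set below is a toy assumption for illustration, not the split grammar used in the research.

```python
def rewrite(axiom, rules, steps):
    """Apply single-character string-rewriting rules repeatedly
    (L-system style). Characters without a rule are copied as-is."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy rule: a box 'B' splits into two boxes separated by a window 'W'.
rules = {"B": "BWB"}
layout = rewrite("B", rules, 2)  # -> "BWBWBWB"
```

In an evolutionary design system, strings like `layout` would be decoded into 3D geometry and scored by the energy simulator; here only the rewriting step is shown.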
Abstract:
This paper addresses the question of whether R&D should be carried out by an independent research unit or be produced in-house by the firm marketing the innovation. We define two organizational structures. In an integrated structure, the firm that markets the innovation also carries out and finances research leading to the innovation. In an independent structure, the firm that markets the innovation buys it from an independent research unit which is financed externally. We compare the two structures under the assumption that the research unit has some private information about the real cost of developing the new product. When development costs are negatively correlated with revenues from the innovation, the integrated structure dominates. The independent structure dominates in the opposite case.
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish the direct links between the usual parametric estimation methods, namely the QMLE, the GMM and M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM one. A Monte Carlo analysis confirms the relevance of our approach, in particular the importance of skewness.
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions that includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
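For the Gaussian case mentioned above, the Gibbons-Ross-Shanken statistic for a single market factor can be sketched as follows. This is a generic textbook-style implementation of the GRS F-statistic under assumed inputs, not the authors' exact test code.

```python
import numpy as np

def grs_statistic(excess_returns, market_excess):
    """Gibbons-Ross-Shanken F-statistic for mean-variance efficiency
    of a single market portfolio.

    excess_returns : (T, N) array of asset excess returns
    market_excess  : (T,) array of market portfolio excess returns
    """
    T, N = excess_returns.shape
    # Time-series regressions of each asset on a constant and the market
    X = np.column_stack([np.ones(T), market_excess])
    B, _, _, _ = np.linalg.lstsq(X, excess_returns, rcond=None)
    alpha = B[0]                         # pricing errors (intercepts)
    resid = excess_returns - X @ B
    Sigma = resid.T @ resid / (T - 2)    # residual covariance estimate
    mu = market_excess.mean()
    s2 = market_excess.var(ddof=1)
    # Under normality and the CAPM null, stat ~ F(N, T - N - 1)
    quad = alpha @ np.linalg.solve(Sigma, alpha)
    return ((T - N - 1) / N) * quad / (1.0 + mu**2 / s2)
```

A small simulated panel with returns generated from a market factor plus noise gives a finite, non-negative statistic, as the quadratic form in the alphas requires.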
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on Gali and Gertler's (1999) specification, on both U.S. and Canadian data. Two variants of the model are studied: one based on a rational-expectations assumption, and a modification of the latter which consists in using survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find that there is some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need for employing identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.
Abstract:
According to asthma treatment guidelines for pregnancy, inhaled short-acting beta2-agonists (SABA) are the medications of choice for all types of asthma (intermittent, persistent, mild, moderate, and severe) as quick-relief medications and in the management of acute exacerbations. Inhaled long-acting beta2-agonists (LABA), on the other hand, are used for patients with moderate to severe persistent asthma that is not fully controlled by inhaled corticosteroids alone. Although several studies have examined the association between LABA, SABA, and congenital malformations in newborns, the actual risks remain controversial because of contradictory results and the inherent difficulties of conducting epidemiological studies in pregnant women. The objective of this study was to evaluate the association between maternal exposure to SABA and LABA during the first trimester of pregnancy and the risk of congenital malformations in the newborns of asthmatic women. A cohort of pregnancies of asthmatic women who delivered between January 1, 1990 and December 31, 2002 was formed by linking three administrative databases of the province of Quebec (Canada). The primary outcomes of this study were major congenital malformations of all types. As secondary outcomes, we considered specific congenital malformations. The primary exposure was the use of SABA and/or LABA during the first trimester of pregnancy. The secondary exposure studied was the mean number of SABA doses per week during the first trimester.
The association between congenital malformations and the use of SABA and LABA was evaluated using generalized estimating equation (GEE) models, adjusting for several confounding variables related to the pregnancy, the mother's asthma, and the health of the mother and fetus. In the cohort of 13,117 pregnancies of asthmatic women, we identified 1,242 children with a congenital malformation (9.5%), of whom 762 had a major malformation (5.8%). Fifty-five percent of the women used SABA and 1.3% used LABA during the first trimester. The adjusted odds ratios (95% CI) for a congenital malformation associated with the use of SABA and LABA were 1.0 (0.9-1.2) and 1.3 (0.9-2.1), respectively. The corresponding results were 0.9 (0.8-1.1) and 1.3 (0.8-2.4) for major malformations. Regarding the mean number of SABA doses per week, the adjusted odds ratios (95% CI) for a congenital malformation were 1.1 (1.0-1.3), 1.1 (0.9-1.3), and 0.9 (0.7-1.1) for >0-3, >3-10, and >10 doses, respectively. The corresponding results were 1.0 (0.8-1.2), 0.8 (0.7-1.1), and 0.7 (0.5-1.0) for major malformations. On the other hand, statistically significant odds ratios (95% CI) were observed for cardiac malformations (2.4 (1.1-5.1)), malformations of the genital organs (6.8 (2.6-18.1)), and other congenital malformations (3.4 (1.4-8.5)) in association with LABA taken during the first trimester. Our study provides reassuring data on the use of SABA during pregnancy, in agreement with asthma treatment guidelines. However, further studies are needed before conclusions can be drawn about the safety of LABA during pregnancy.
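For readers unfamiliar with the odds-ratio scale reported above, a crude (unadjusted) odds ratio with a Wald 95% CI can be computed from a 2x2 exposure-outcome table as below. The study itself used GEE models adjusted for confounders; the counts here are hypothetical, chosen only to land near the reported SABA estimate of 1.0 (0.9-1.2).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed cases       b: exposed non-cases
    c: unexposed cases     d: unexposed non-cases
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: malformations among SABA-exposed vs unexposed pregnancies
or_, lo, hi = odds_ratio_ci(700, 6500, 542, 5375)
```

A CI that straddles 1, as in this illustration and in the study's SABA results, indicates no statistically significant association.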
Abstract:
This longitudinal study aimed to verify whether personality traits (according to the five-factor model, "Big Five") in early adolescence (ages 12-13) predict internalizing symptoms two years later (ages 14-15), controlling for the initial level of internalizing symptoms as well as the influence of several known risk factors. The data come from a prospective longitudinal study. The sample comprises 1,036 adolescents from eight Quebec high schools. The adolescents completed a self-report questionnaire. Structural equation models first demonstrated the relevance of conceptualizing internalizing symptoms as a latent variable. Further models showed that certain personality traits do predict later internalizing symptoms. However, contrary to studies conducted with adults, the role of Emotional Stability and Extraversion is not significant once the influence of known risk factors and sex has been controlled. Instead, it is Control and Agreeableness that are significantly related to later internalizing symptoms in the present study. The results also highlight the important role of risk factors related to peer relationships. Finally, multi-group structural equation models revealed significant sex differences in the predictive relationships. This study confirms that adolescents' personality traits can play a role in the development of internalizing symptoms, which gives them theoretical and clinical relevance.
Abstract:
This longitudinal study aimed to evaluate whether adolescents' personality traits predict their later antisocial behaviors, after controlling for the effect of the initial level of antisocial behavior as well as that of several known risk factors for these behaviors. The sample comprises 1,036 adolescents from eight Quebec high schools. The adolescents were assessed twice, in Secondary 1 (ages 12-13) and Secondary 3 (ages 14-15). They completed a self-report questionnaire. Structural equation models first confirmed that the covariation among adolescents' various antisocial behaviors can be explained by a latent variable. The results confirmed that adolescents' personality traits at ages 12-13 predict their antisocial behaviors at ages 14-15. Consistent with previous studies, Extraversion, Control, and Emotional Stability predict future antisocial behaviors. However, the effect of Agreeableness disappears once the initial level is controlled. Finally, multi-group structural equation models showed that some predictive relationships differ by sex. The results of this study underscore the importance of personality traits for theories of antisocial behavior as well as for clinical practice.
Abstract:
Chronic non-cancer pain (CNCP) is a complex phenomenon, and multimodal interventions addressing both its biological and psychosocial dimensions are considered the optimal approach for treating this type of disorder. Opioid prescribing for CNCP has increased dramatically over the past two decades, but evidence supporting the long-term effectiveness of this class of medication in reducing pain severity and improving the quality of life of CNCP patients is lacking. The objective of this study was to investigate, in a real-life context, the long-term effectiveness of opioids in reducing pain intensity and pain impact and improving the health-related quality of life of CNCP patients over a one-year period. Methods: The participants were 1,490 patients (mean age = 52.37 (SD = 13.9); women = 60.9%) enrolled in the Quebec Pain Registry between October 2008 and April 2011 who completed a series of questionnaires before initiating treatment in a tertiary multidisciplinary pain management centre and again 6 and 12 months later. According to their opioid use profile (OUP), patients were classified as (1) non-users, (2) non-persistent users, and (3) persistent users. Data were analyzed using generalized estimating equation models. Results: Among opioid users, 52% stopped taking opioids at some point during the follow-up period. After adjusting for age and sex, the OUP significantly predicted average pain intensity over 7-day periods (p < 0.001) as well as physical quality of life (pQOL) over time (p < 0.001).
Compared to non-users, persistent users had significantly higher levels of pain intensity and poorer pQOL. A significant interaction was found between the OUP and time in predicting worst pain intensity (p = 0.001), with persistent users reporting the highest scores over time. A significant interaction was also observed between the OUP and pain type in predicting the impact of pain on various spheres of daily life (p = 0.048) and mental quality of life (mQOL) (p = 0.042). Regardless of pain type, persistent users reported higher pain interference scores and poorer mQOL than non-users. However, the magnitude of these effects was small (Cohen's d < 0.5), an observation that calls into question the power and clinical significance of the differences observed between these groups. Conclusion: Our results add to the doubts about the long-term effectiveness of opioid therapy and thus question the role this class of medication can play in the therapeutic arsenal for CNCP management.
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of budget use on performance. Sizeable differences emerge between OLS and disattenuated regression.
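The classical correction for attenuation that underlies disattenuated regression can be sketched in a few lines: observed associations are scaled up by the measurement reliabilities of the variables involved. The reliabilities and coefficients below are illustrative assumptions, not values from the article.

```python
import math

def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error,
    given the reliabilities (e.g., Cronbach's alpha) of x and y."""
    return r_xy / math.sqrt(rel_x * rel_y)

def disattenuated_slope(beta_obs, rel_x):
    """Correct a simple regression slope for error in the predictor."""
    return beta_obs / rel_x

# Illustrative numbers: observed r = 0.40 with reliabilities 0.80 and 0.70
r_true = disattenuated_correlation(0.40, 0.80, 0.70)
```

Because reliabilities lie below 1, the corrected coefficient is always at least as large in magnitude as the observed one, which is why OLS on error-laden scales understates interaction effects.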
Abstract:
Common Loon (Gavia immer) is considered an emblematic and ecologically important example of aquatic-dependent wildlife in North America. The northern breeding range of Common Loon has contracted over the last century as a result of habitat degradation from human disturbance and lakeshore development. We focused on the state of New Hampshire, USA, where a long-term monitoring program conducted by the Loon Preservation Committee has been collecting biological data on Common Loon since 1976. The Common Loon population in New Hampshire is distributed throughout the state across a wide range of lake-specific habitats, water quality conditions, and levels of human disturbance. We used a multiscale approach to evaluate the association of Common Loon and breeding habitat within three natural physiographic ecoregions of New Hampshire. These multiple scales reflect Common Loon-specific extents such as territories, home ranges, and lake-landscape influences. We developed ecoregional multiscale models and compared them to single-scale models to evaluate model performance in distinguishing Common Loon breeding habitat. Based on information-theoretic criteria, there is empirical support for both multiscale and single-scale models across all three ecoregions, warranting a model-averaging approach. Our results suggest that the Common Loon responds to both ecological and anthropogenic factors at multiple scales when selecting breeding sites. These multiscale models can be used to identify and prioritize the conservation of preferred nesting habitat for Common Loon populations.
Abstract:
The paper develops a measure of the consumer welfare losses associated with withholding information about a possible link between BSE and vCJD. The Cost of Ignorance (COI) is measured by comparing the utility of the informed choice with the utility of the uninformed choice, under conditions of improved information. Unlike previous work, which is largely based on a single-equation demand model, the measure is obtained by retrieving a cost function from a dynamic Almost Ideal Demand System. The estimated perceived loss for Italian consumers due to delayed information ranges from 12 percent to 54 percent of total meat expenditure, depending on the month assumed to embody correct beliefs about the safety level of beef.
Abstract:
Feed samples received by commercial analytical laboratories are often undefined or mixed varieties of forages, originate from various agronomic or geographical areas of the world, are mixtures (e.g., total mixed rations), and are often described incompletely or not at all. Six unified single-equation approaches to predicting the metabolizable energy (ME) value of feeds, determined in sheep fed at maintenance ME intake, were evaluated using 78 individual feeds representing 17 different forages, grains, protein meals and by-product feedstuffs. The predictive approaches evaluated were two each from the National Research Council [National Research Council (NRC), Nutrient Requirements of Dairy Cattle, seventh revised ed., National Academy Press, Washington, DC, USA, 2001], the University of California at Davis (UC Davis) and ADAS (Stratford, UK). Slopes and intercepts for the two ADAS approaches, which utilized in vitro digestibility of organic matter and either measured gross energy (GE) or a prediction of GE from component assays, and for one UC Davis approach, based upon in vitro gas production and some component assays, differed from unity and zero, respectively, while this was not the case for the two NRC approaches and the other UC Davis approach. However, within these latter three approaches, the goodness of fit (r2) increased from the NRC approach utilizing lignin (0.61) to the NRC approach utilizing 48 h in vitro digestion of neutral detergent fibre (NDF; 0.72) and to the UC Davis approach utilizing a 30 h in vitro digestion of NDF (0.84). The reason for the difference in precision between the NRC procedures was the failure of assayed lignin values to accurately predict 48 h in vitro digestion of NDF.
However, differences among the six predictive approaches in the number of supporting assays and their costs, as well as the fact that the NRC approach is actually three related equations requiring a categorical description of feeds (making them unsuitable for mixed feeds) while the ADAS and UC Davis approaches are single equations, suggest that the procedure of choice will vary depending upon local conditions, specific objectives and the feedstuffs to be evaluated. In contrast to the evaluation of the procedures among feedstuffs, no procedure was able to consistently discriminate the ME values of individual feeds within feedstuffs determined in vivo, suggesting that an accurate and precise ME predictive approach, among and within feeds, may remain to be identified.
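The slope, intercept, and r² criteria used above to compare predictive approaches can be reproduced with a short helper that regresses observed ME on predicted ME. This is a generic sketch under assumed inputs, not the authors' analysis; the function name and test data are hypothetical.

```python
import numpy as np

def evaluate_predictor(me_observed, me_predicted):
    """Regress observed ME on predicted ME and report the slope,
    intercept and r^2. An unbiased predictor should give a slope
    near unity and an intercept near zero, with high r^2."""
    slope, intercept = np.polyfit(me_predicted, me_observed, 1)
    r = np.corrcoef(me_predicted, me_observed)[0, 1]
    return slope, intercept, r**2
```

For a perfect predictor the three values are exactly (1, 0, 1); approaches whose slope and intercept differ significantly from unity and zero, like the two ADAS equations above, are biased even if r² is high.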