896 results for Team Evaluation Models
Abstract:
Nasal congestion is one of the most troublesome symptoms of many upper airway diseases. We characterized the effect of selective α2c-adrenergic agonists in animal models of nasal congestion. In porcine mucosal tissue, compound A and compound B contracted nasal veins with only modest effects on arteries. In in vivo experiments, we examined the nasal decongestant dose-response characteristics, pharmacokinetic/pharmacodynamic relationship, duration of action, potential development of tolerance, and topical efficacy of α2c-adrenergic agonists. Acoustic rhinometry was used to determine nasal cavity dimensions following intranasal compound 48/80 (1%, 75 µl). In feline experiments, compound 48/80 decreased nasal cavity volume and minimum cross-sectional area by 77% and 40%, respectively. Oral administration of compound A (0.1-3.0 mg/kg), compound B (0.3-5.0 mg/kg), and d-pseudoephedrine (0.3 and 1.0 mg/kg) produced dose-dependent decongestion. Unlike d-pseudoephedrine, compounds A and B did not alter systolic blood pressure. The plasma exposure of compound A required to produce robust decongestion (EC80) was 500 nM, which related well to its duration of action of approximately 4.0 hours. No tolerance to the decongestant effect of compound A (1.0 mg/kg p.o.) was observed. To study the topical efficacies of compounds A and B, the drugs were given topically 30 minutes after compound 48/80 (a therapeutic paradigm); both agents reversed nasal congestion. Finally, nasal-decongestive activity was confirmed in the dog. We demonstrate that α2c-adrenergic agonists behave as nasal decongestants without cardiovascular actions in animal models of upper airway congestion.
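For readers working through the dose-response results, an EC80 like the one quoted above follows from the standard Hill (Emax) relationship. The sketch below is illustrative only; the EC50 value and Hill slope in the example are hypothetical and are not taken from the study.

```python
def ec_f(ec50, f, hill=1.0):
    """Concentration producing F% of the maximal effect under a Hill (Emax)
    model: EC_F = EC50 * (F / (100 - F)) ** (1 / n)."""
    return ec50 * (f / (100.0 - f)) ** (1.0 / hill)

# With a Hill slope of 1, the EC80 is 4x the EC50
# (hypothetical EC50 of 125 nM shown for illustration):
print(ec_f(125.0, 80))  # -> 500.0
```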
Abstract:
This paper contributes to the understanding of lime-mortar masonry strength and deformation (which determine durability and allowable stresses/stiffness in design codes) by measuring the mechanical properties of brick bound with lime and lime-cement mortars. Based on the regression analysis of experimental results, models to estimate lime-mortar masonry compressive strength are proposed (less accurate for hydrated lime (CL90s) masonry due to the disparity between mortar and brick strengths). Also, three relationships between masonry elastic modulus and its compressive strength are proposed for cement-lime; hydraulic lime (NHL3.5 and 5); and hydrated/feebly hydraulic lime masonries respectively.
Disagreement between the experimental results and former mathematical prediction models (proposed primarily for cement masonry) is caused by a lack of provision for the significant deformation of lime masonry and the relative changes in strength and stiffness between mortar and brick over time (at 6 months and 1 year, the NHL 3.5 and 5 mortars are often stronger than the brick). Eurocode 6 provided the best predictions for the compressive strength of lime and cement-lime masonry based on the strength of their components. All models vastly overestimated the strength of CL90s masonry at 28 days; however, Eurocode 6 became an accurate predictor after 6 months, once the mortar had acquired most of its final strength and stiffness.
The experimental results agreed with previously published stress-strain curves. The evidence shows that mortar strongly influences masonry deformation, and that the masonry stress/strain relationship becomes increasingly non-linear as mortar strength decreases. It was also noted that the influence of masonry stiffness on its compressive strength diminishes as mortar hydraulicity increases.
Abstract:
Estimates of the zenith wet delay obtained from the analysis of data from space techniques, such as GPS and VLBI, have strong potential in climate modeling and weather forecasting applications. To be useful to meteorology, these estimates have to be converted to precipitable water vapor, a process that requires knowledge of the weighted mean temperature of the atmosphere, which varies both in space and time. In recent years, several models have been proposed to predict this quantity. Using a database of mean temperature values obtained by ray-tracing radiosonde profiles from more than 100 stations covering the globe, and about 2.5 years' worth of data, we analyzed several of these models. Based on data from the European region, we conclude that the models provide identical levels of precision but different levels of accuracy. Our results indicate that regionally optimized models do not provide superior performance compared to global models.
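The ZWD-to-PWV conversion described above can be sketched with the widely used Bevis et al. (1992) formulation. The refractivity constants and the linear surface-temperature model for Tm below are commonly quoted values, assumed here for illustration rather than taken from this study.

```python
# Conversion of zenith wet delay (ZWD, metres) to precipitable water vapor
# (PWV, metres).  Constants are commonly used values (assumptions, not from
# this study); the k constants are in "per hPa" units.

RV = 461.5        # specific gas constant of water vapor, J kg^-1 K^-1
RHO_W = 1000.0    # density of liquid water, kg m^-3
K2_PRIME = 22.1   # K hPa^-1
K3 = 3.739e5      # K^2 hPa^-1

def pwv_from_zwd(zwd_m, tm_kelvin):
    """Scale ZWD to PWV using the weighted mean temperature Tm (kelvin)."""
    # Dimensionless conversion factor Pi, typically ~0.15-0.16
    pi = 1.0e8 / (RHO_W * RV * (K3 / tm_kelvin + K2_PRIME))
    return pi * zwd_m

def tm_bevis(ts_kelvin):
    """Simple global Tm model from surface temperature (Bevis et al. 1992)."""
    return 70.2 + 0.72 * ts_kelvin
```

A regionally optimized model would replace `tm_bevis` with coefficients fitted to local radiosonde data; the abstract's finding is that doing so buys little over the global fit.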
Abstract:
This thesis examines the performance of Canadian fixed-income mutual funds in the context of an unobservable market factor that affects mutual fund returns. We use various selection and timing models augmented with univariate and multivariate regime-switching structures. These models assume a joint distribution of an unobservable latent variable and fund returns. The fund sample comprises six Canadian value-weighted portfolios with different investing objectives from 1980 to 2011. These are the Canadian fixed-income funds, the Canadian inflation-protected fixed-income funds, the Canadian long-term fixed-income funds, the Canadian money market funds, the Canadian short-term fixed-income funds, and the high-yield fixed-income funds. We find strong evidence that more than one state variable is necessary to explain the dynamics of the returns on Canadian fixed-income funds. For instance, Canadian fixed-income funds clearly show that there are two regimes that can be identified with a turning point during the mid-eighties. This structural break corresponds to an increase in the Canadian bond index from its low values in the early 1980s to its current high values. Results for the other fixed-income funds show latent state variables that mimic the behaviour of general economic activity. Generally, we report that Canadian bond fund alphas are negative. In other words, fund managers do not add value through their selection abilities. We find evidence that Canadian fixed-income fund portfolio managers are successful market timers who shift portfolio weights between risky and riskless financial assets according to expected market conditions. Conversely, Canadian inflation-protected funds, Canadian long-term fixed-income funds, and Canadian money market funds have no market timing ability. We conclude that these managers generally do not have positive performance by actively managing their portfolios.
We also report that the Canadian fixed-income fund portfolios perform asymmetrically under different economic regimes. In particular, these portfolio managers demonstrate poorer selection skills during recessions. Finally, we demonstrate that the multivariate regime-switching model is superior to univariate models given the dynamic market conditions and the correlation between fund portfolios.
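The filtering step behind such regime-switching models can be illustrated with a minimal two-state Hamilton filter for Gaussian returns. This is an illustrative toy (state means, volatilities, and transition probabilities are assumptions), not the thesis's multivariate specification.

```python
import math

def hamilton_filter(returns, mu, sigma, p_stay):
    """Filtered probabilities of state 0 in a 2-state Markov-switching
    Gaussian model.  mu, sigma: per-state means and std devs;
    p_stay: (p00, p11), the probabilities of staying in each state."""
    p00, p11 = p_stay
    # transition matrix rows: P(s_t = j | s_{t-1} = i)
    P = [[p00, 1.0 - p00], [1.0 - p11, p11]]
    # initialise at the stationary distribution of the chain
    pi0 = (1.0 - p11) / (2.0 - p00 - p11)
    prob = [pi0, 1.0 - pi0]
    out = []
    for r in returns:
        # predict: propagate state probabilities one step
        pred = [P[0][j] * prob[0] + P[1][j] * prob[1] for j in range(2)]
        # update: weight by Gaussian likelihood of the observed return
        lik = [math.exp(-0.5 * ((r - mu[j]) / sigma[j]) ** 2) / sigma[j]
               for j in range(2)]
        joint = [pred[j] * lik[j] for j in range(2)]
        total = sum(joint)
        prob = [x / total for x in joint]
        out.append(prob[0])
    return out
```

Feeding in returns that sit near one state's mean drives the filtered probability of that state toward 1, which is how the two bond-market regimes and the mid-eighties turning point would show up in the data.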
Abstract:
The aim of this thesis is to extend bootstrap theory to panel-data models. Panel data are obtained by observing several statistical units over several time periods. Their twofold individual and temporal dimension makes it possible to control for unobservable heterogeneity across individuals and across time periods, and hence to conduct richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more accurate inference than classical asymptotic theory, or inference that would otherwise be impossible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the original sample as closely as possible; the statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification, or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data, and its three chapters analyze its validity and application. The first chapter posits a simple model with a single parameter and tackles the theoretical properties of the mean estimator. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity; resampling only in the temporal dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error-components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients of regressors that vary only across individuals. Resampling in the temporal dimension is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling is valid for inference on the full parameter vector. The third chapter revisits the difference-in-differences exercise of Bertrand, Duflo, and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo, and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence causes severe test-size distortions when evaluating the impact of public policies with panel data. One of the recommended solutions is the bootstrap. The double-resampling method developed in this thesis corrects the test-size problem and thus allows the impact of public policies to be evaluated correctly.
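The double-resampling scheme can be sketched for the simple mean estimator of the first chapter: draw individuals and time periods with replacement, independently, then recompute the statistic on the resampled panel. This is an illustrative Python toy assuming a rectangular N x T panel, not the thesis's code.

```python
import random

def double_bootstrap_mean(panel, n_boot=999, seed=0):
    """Two-way (i, t) bootstrap of the grand mean of an N x T panel.
    Resamples individuals AND time periods with replacement, the scheme
    the thesis argues is valid under both individual and temporal
    heterogeneity (illustrative sketch only)."""
    rng = random.Random(seed)
    n, t = len(panel), len(panel[0])
    draws = []
    for _ in range(n_boot):
        rows = [rng.randrange(n) for _ in range(n)]   # resample individuals
        cols = [rng.randrange(t) for _ in range(t)]   # resample periods
        vals = [panel[i][j] for i in rows for j in cols]
        draws.append(sum(vals) / len(vals))
    return draws

# A 95% percentile interval for the mean would then be, e.g.:
# d = sorted(double_bootstrap_mean(data)); ci = (d[24], d[974])
```

Resampling only `rows` or only `cols` recovers the one-way schemes that the first chapter shows to fail under temporal or individual heterogeneity, respectively.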
Abstract:
Performance of any continuous speech recognition system depends on the accuracy of its acoustic model. Hence, preparing a robust and accurate acoustic model leads to satisfactory recognition performance for a speech recognizer. In acoustic modeling of phonetic units, context information is of prime importance, as phonemes vary according to their position within a word. In this paper we compare and evaluate the effect of context-dependent tied (CD tied) models, context-dependent (CD) models, and context-independent (CI) models on continuous speech recognition of the Malayalam language. The database for the speech recognition system contains utterances from 21 speakers: 11 female and 10 male. Our evaluation results show that CD tied models outperform CI models by over 21%.
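The context dependence discussed above is usually encoded as triphone labels, one per phoneme, carrying its left and right neighbours. A minimal sketch follows, using the common HTK-style `left-center+right` notation and assuming silence (`sil`) as the context at utterance boundaries; the paper's actual label inventory may differ.

```python
def to_triphones(phones):
    """Expand a phoneme sequence into context-dependent triphone labels
    of the usual HTK-style 'left-center+right' form."""
    labels = []
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else "sil"
        right = phones[i + 1] if i < len(phones) - 1 else "sil"
        labels.append(f"{left}-{p}+{right}")
    return labels

print(to_triphones(["k", "a", "t"]))
# -> ['sil-k+a', 'k-a+t', 'a-t+sil']
```

Because this expansion multiplies the number of models far beyond what the training data can support, CD systems tie states of acoustically similar triphones together, which is what the "CD tied" models in the evaluation refer to.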
Abstract:
An irreverent sideways look at new business models and their effect on the world around us. Designed with Years 9 and 10 in mind.
Abstract:
RothC and Century are two of the most widely used soil organic matter (SOM) models. However, there are few examples of specific parameterisation of these models for environmental conditions in East Africa. The aim of this study was, therefore, to evaluate the ability of RothC and Century to estimate changes in soil organic carbon (SOC) resulting from varying land use/management practices under the climate and soil conditions found in Kenya. The study used climate, soil and crop data from a long-term experiment (1976-2001) carried out at the Kabete site of the Kenya National Agricultural Research Laboratories (NARL, located in a semi-humid region) and data from a 13-year experiment carried out at Machang'a (Embu District, located in a semi-arid region). The NARL experiment included various fertiliser (0, 60 and 120 kg of N and P2O5 ha(-1)), farmyard manure (FYM; 5 and 10 t ha(-1)) and plant residue treatments, in a variety of combinations. The Machang'a experiment involved a fertiliser (51 kg N ha(-1)) and a FYM (0, 5 and 10 t ha(-1)) treatment with both monocropping and intercropping. At Kabete both models showed a fair to good fit to measured data, although Century simulations for treatments with high levels of FYM were better than those without. At the Machang'a site with monocrops, both models showed a fair to good fit to measured data for all treatments. However, the fit of both models (especially RothC) to measured data for intercropping treatments at Machang'a was much poorer. Further model development for intercrop systems is recommended. Both models can be useful tools in soil C prediction, provided time series of measured soil C and crop production data are available for validating model performance against local or regional agricultural crops. (C) 2007 Elsevier B.V. All rights reserved.
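Both RothC and Century decompose SOC through first-order pools whose rate constants are scaled by climate and cover modifiers. A single-pool sketch of that shared building block is shown below; the parameter values in the example are made up for illustration, not the calibrated Kenyan values.

```python
import math

def pool_carbon(c0, k, annual_input, rate_modifier=1.0, years=25):
    """First-order decay of a single SOC pool with constant annual input,
    the building block shared by RothC and Century:
        dC/dt = I - m * k * C
    c0: initial C (t ha^-1), k: decomposition rate (yr^-1),
    rate_modifier m: combined temperature/moisture/cover factor."""
    keff = rate_modifier * k
    c = c0
    for _ in range(years):
        # analytic one-year step: C(t+1) = I/keff + (C - I/keff) * exp(-keff)
        c = annual_input / keff + (c - annual_input / keff) * math.exp(-keff)
    return c
```

The pool relaxes toward the equilibrium stock `I / (m * k)`, which is why treatments with higher inputs (FYM, residues) settle at higher SOC, and why a hotter or drier `rate_modifier` lowers the equilibrium for the same input.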
Abstract:
Across Europe, elevated phosphorus (P) concentrations in lowland rivers have made them particularly susceptible to eutrophication. This is compounded in southern and central UK by increasing pressures on water resources, which may be further enhanced by the potential effects of climate change. The EU Water Framework Directive requires an integrated approach to water resources management at the catchment scale and highlights the need for modelling tools that can distinguish relative contributions from multiple nutrient sources and are consistent with the information content of the available data. Two such models are introduced and evaluated within a stochastic framework using daily flow and total phosphorus concentrations recorded in a clay catchment typical of many areas of the lowland UK. Both models disaggregate empirical annual load estimates, derived from land use data, as a function of surface/near surface runoff, generated using a simple conceptual rainfall-runoff model. Estimates of the daily load from agricultural land, together with those from baseflow and point sources, feed into an in-stream routing algorithm. The first model assumes constant concentrations in runoff via surface/near surface pathways and incorporates an additional P store in the river-bed sediments, depleted above a critical discharge, to explicitly simulate resuspension. The second model, which is simpler, simulates P concentrations as a function of surface/near surface runoff, thus emphasising the influence of non-point source loads during flow peaks and mixing of baseflow and point sources during low flows. The temporal consistency of parameter estimates and thus the suitability of each approach is assessed dynamically following a new approach based on Monte-Carlo analysis. (c) 2004 Elsevier B.V. All rights reserved.
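The mixing logic of the second, simpler model can be sketched as a two-component load balance: baseflow plus point sources dominate concentrations at low flow, while surface/near-surface runoff dominates during flow peaks. The function below is a hedged illustration with assumed units, not the paper's implementation.

```python
def stream_p_concentration(q, q_base, c_runoff, point_load, c_base):
    """In-stream total-P concentration from a two-component mixing model.
    q, q_base: total and baseflow discharge (m^3 s^-1);
    c_runoff, c_base: P concentrations of runoff and baseflow (mg l^-1);
    point_load: point-source P load (g s^-1).  Illustrative units."""
    q_runoff = max(q - q_base, 0.0)
    # load balance: flow (m^3/s) * conc (mg/l == g/m^3) gives g/s
    load = q_base * c_base + point_load + q_runoff * c_runoff
    return load / q  # g/m^3 == mg/l
```

At `q == q_base` the result reduces to diluted baseflow and point-source load; as `q` rises, the concentration tends toward `c_runoff`, reproducing the non-point-source dominance of flow peaks described above.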
Abstract:
Thirty‐three snowpack models of varying complexity and purpose were evaluated across a wide range of hydrometeorological and forest canopy conditions at five Northern Hemisphere locations, for up to two winter snow seasons. Modeled estimates of snow water equivalent (SWE) or depth were compared to observations at forest and open sites at each location. Precipitation phase and duration of above‐freezing air temperatures are shown to be major influences on divergence and convergence of modeled estimates of the subcanopy snowpack. When models are considered collectively at all locations, comparisons with observations show that it is harder to model SWE at forested sites than open sites. There is no universal “best” model for all sites or locations, but comparison of the consistency of individual model performances relative to one another at different sites shows that there is less consistency at forest sites than open sites, and even less consistency between forest and open sites in the same year. A good performance by a model at a forest site is therefore unlikely to mean a good model performance by the same model at an open site (and vice versa). Calibration of models at forest sites provides lower errors than uncalibrated models at three out of four locations. However, benefits of calibration do not translate to subsequent years, and benefits gained by models calibrated for forest snow processes are not translated to open conditions.
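The cross-site consistency analysis described above amounts to scoring each model against observations and then asking whether the models keep the same relative ordering at paired sites. A minimal sketch (assuming one error value per model per site, with no ties) is:

```python
def rmse(model, obs):
    """Root-mean-square error of modeled vs observed SWE series."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

def rank_consistency(errors_a, errors_b):
    """Do models keep the same relative ordering at two sites?
    Returns the fraction of model pairs ranked the same way at both."""
    n = len(errors_a)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    same = sum(
        (errors_a[i] < errors_a[j]) == (errors_b[i] < errors_b[j])
        for i, j in pairs
    )
    return same / len(pairs)
```

A value near 1.0 would mean a good model at one site is also good at the other; the study's finding is that this consistency is low between forest and open sites, so forest-site skill does not transfer.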