997 results for Biomass estimation


Relevância: 20.00%

Resumo:

Following an internship with the company Hatch, we have datasets consisting of time series of wind speeds measured at various sites around the world, over several years. Hatch's wind engineers use these datasets together with the Environment Canada databases to evaluate wind potential, in order to determine whether it is worthwhile to install wind turbines at those locations. For some years now, companies have been offering mesoscale simulations of wind speeds, based on various environmental indices of the site to be evaluated. The wind engineers want to know whether these simulated data are worth paying for, that is, whether they can be useful when estimating wind energy production and whether they could be used for long-term wind speed prediction. Moreover, since measured wind-speed data are available, we take the opportunity to test, with various statistical methods, different steps of the energy production estimation. We examine methods for extrapolating wind speed to the height of a wind turbine and evaluate these methods using the mean squared error. We also study the modelling of wind speed by the Weibull distribution and the variation of the speed distribution over time. Finally, using cross-validation and the bootstrap, we examine whether the use of mesoscale data is preferable to that of reference-station data, and we also test a model in which both types of data are used to predict wind speed. We test, from a statistical point of view, the overall methodology currently used by the wind engineers to estimate energy production, and then attempt to propose changes to this methodology that could improve the estimation of annual energy production.
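The Weibull fit and hub-height extrapolation steps described in this abstract can be sketched as follows. This is a minimal illustration, not the thesis's procedure: the fit is a simple method-of-moments fit, the shear profile is the standard power law, and the exponent `alpha` and the heights are assumed example values.

```python
import math
import numpy as np

def fit_weibull_moments(v):
    """Method-of-moments fit of a Weibull(shape k, scale c) to wind speeds v."""
    m, s = float(np.mean(v)), float(np.std(v))
    cv2 = (s / m) ** 2
    # Solve Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 = cv2 for the shape k by bisection
    # (the left-hand side is decreasing in k).
    f = lambda k: math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k) ** 2 - 1 - cv2
    lo, hi = 0.2, 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    c = m / math.gamma(1 + 1 / k)  # scale recovered from the sample mean
    return k, c

def extrapolate_power_law(v_ref, h_ref, h_hub, alpha=0.14):
    """Power-law wind-shear extrapolation to hub height (alpha is an assumption)."""
    return v_ref * (h_hub / h_ref) ** alpha

def rmse(pred, obs):
    """Root mean squared error, used here to score extrapolation methods."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))
```

With measured speeds at a reference height, `fit_weibull_moments` summarises the speed distribution, and `rmse` compares extrapolated against measured hub-height speeds.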

Relevância: 20.00%

Resumo:

Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of using the area under the curve (AUC) as a biomarker in the therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the method of calculating the AUC, its use in a clinical setting is not practical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for a satisfactory estimation of the AUC. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations collected over a short sampling period. In addition, particular attention should be paid to ensuring their adequate development and validation. It is also worth mentioning that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS. To date, however, this impact has not been studied. This doctoral thesis addresses these issues in order to allow a precise and practical estimation of the AUC. The studies were conducted in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation.
The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach considering diverse and realistic scenarios that represent potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 sampling points or fewer obtained within 4 hours post-dose. Once the Pop-PK analysis was carried out, a two-compartment structural model with a lag time was retained. However, the final model, notably with covariates, did not improve the performance of the B-LSS compared with the structural models (without covariates). Furthermore, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations that exclude residual errors, which we named the "underlying AUC", than for the observed AUC computed directly from the measured concentrations. Finally, our results showed that irregularity in blood sample collection times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors made at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even if different R-LSS can perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely.
In fact, adequate consideration of the impact of these errors can lead to a more reliable selection and use of R-LSS. Through a thorough investigation of different aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new avenues to ensure their reliable and informed use, while promoting their suitability for clinical practice.

Relevância: 20.00%

Resumo:

We consider two new approaches to nonparametric estimation of the leverage effect. The first approach uses stock prices alone. The second uses the data on stock prices as well as a certain volatility instrument, such as the CBOE volatility index (VIX) or the Black-Scholes implied volatility. The theoretical justification for the instrument-based estimator relies on a certain invariance property, which can be exploited when high-frequency data are available. The price-only estimator is more robust since it is valid under weaker assumptions. However, in the presence of a valid volatility instrument, the price-only estimator is inefficient, as the instrument-based estimator has a faster rate of convergence. We consider two empirical applications, in which we study the relationship between the leverage effect and the debt-to-equity ratio, credit risk, and illiquidity.
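As a rough illustration of the price-only idea (not the estimators developed in this paper), one can proxy spot variance by local averages of squared high-frequency returns and measure the leverage effect as the correlation between block returns and the subsequent change in that proxy. The block size and everything about the construction below are assumptions made for the sketch.

```python
import numpy as np

def leverage_proxy(returns, window=50):
    """Naive price-only leverage proxy: correlation between block returns and
    the change in a local realized-variance estimate over the next block."""
    r = np.asarray(returns, dtype=float)
    n = (len(r) // window) * window
    blocks = r[:n].reshape(-1, window)
    rv = (blocks ** 2).mean(axis=1)   # local variance proxy per block
    block_ret = blocks.sum(axis=1)    # return over each block
    dv = rv[1:] - rv[:-1]             # change in the variance proxy
    return float(np.corrcoef(block_ret[:-1], dv)[0, 1])
```

On data with a genuine leverage effect (volatility rising after negative returns) this statistic comes out negative; its magnitude depends heavily on the window and the volatility persistence.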

Relevância: 20.00%

Resumo:

In this article it is proved that the stationary Markov sequences generated by minification models are ergodic and uniformly mixing. These results are used to establish the optimal properties of estimators for the parameters in the model. The problem of estimating the parameters in the exponential minification model is discussed in detail.

Relevância: 20.00%

Resumo:

This paper proposes different estimators for the parameters of semi-Pareto and Pareto autoregressive minification processes. The asymptotic properties of the estimators are established by showing that the semi-Pareto process is α-mixing. Asymptotic variances of different moment and maximum likelihood estimators are compared.

Relevância: 20.00%

Resumo:

The average availability of a repairable system is the expected proportion of time that the system is operating in the interval [0, t]. The present article discusses the nonparametric estimation of the average availability when (i) the data on 'n' complete cycles of system operation are available, (ii) the data are subject to right censorship, and (iii) the process is observed up to a specified time 'T'. In each case, a nonparametric confidence interval for the average availability is also constructed. Simulations are conducted to assess the performance of the estimators.
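For case (i), where n complete up/down cycles are observed, a natural plug-in estimate and a bootstrap percentile interval can be sketched as follows; this is a generic illustration, not necessarily the estimator or the interval construction proposed in the article.

```python
import numpy as np

def average_availability(up, down):
    """Plug-in estimate of long-run average availability from n complete
    cycles: total uptime divided by total cycle time."""
    up, down = np.asarray(up, float), np.asarray(down, float)
    return float(up.sum() / (up.sum() + down.sum()))

def availability_ci(up, down, level=0.95, n_boot=2000, seed=0):
    """Nonparametric bootstrap percentile interval: resample whole cycles,
    recompute the plug-in estimate, and take empirical quantiles."""
    up, down = np.asarray(up, float), np.asarray(down, float)
    rng = np.random.default_rng(seed)
    n = len(up)
    idx = rng.integers(0, n, size=(n_boot, n))     # resample cycle indices
    u, d = up[idx].sum(axis=1), down[idx].sum(axis=1)
    boot = u / (u + d)
    a = (1 - level) / 2
    return float(np.quantile(boot, a)), float(np.quantile(boot, 1 - a))
```

Resampling entire (up, down) pairs preserves any dependence between a cycle's operating and repair times.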

Relevância: 20.00%

Resumo:

The present study is intended to provide a new scientific approach to the solution of the cost engineering problems encountered in the chemical industries in our nation. The problem is that of the cost estimation of equipment, especially of pressure vessels, when setting up chemical industries. The present study attempts to develop a model for such cost estimation. This, in turn, is hoped to go a long way toward solving this and related problems in forecasting the cost of setting up chemical plants.
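A common starting point for equipment cost models of this kind (the abstract does not say this is the model developed in the study) is the cost-capacity power law C = a·S^b, of which the classic "six-tenths rule" (b ≈ 0.6) is a special case; the coefficients below are purely illustrative.

```python
import numpy as np

def fit_cost_capacity(sizes, costs):
    """Fit C = a * S**b by least squares on log C = log a + b * log S."""
    b, log_a = np.polyfit(np.log(sizes), np.log(costs), 1)
    return float(np.exp(log_a)), float(b)

def predict_cost(a, b, size):
    """Predicted cost of a vessel of the given size under the fitted law."""
    return a * size ** b
```

Fitting in log space makes the multiplicative error structure typical of cost data additive, which is why the log-log regression is standard practice.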

Relevância: 20.00%

Resumo:

So far, in the bivariate set-up, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples one would be interested to compare the hazards for the same cause of death as well as to check whether death due to one cause is more important for the partner's risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).

Relevância: 20.00%

Resumo:

Microalgae gained importance as food and feed, as well as a source of fine chemicals, from the 1960s onward. Spirulina became the trend setter due to its easily culturable properties as well as its nutritional composition. A rapid expansion of the microalgal industry occurred in the Asia-Pacific region as microalgae came to stay as a health food supplement. Microalgae have been an integral component of oxidation ponds usually incorporated with wastewater treatment. Over the last few decades, efforts have been made to apply intensive microalgal cultures to perform biological tertiary treatment of secondary effluents. Given the limited number of species still available for commercial exploitation, it is imperative to isolate and cultivate those photosynthetic organisms with high growth rate and biomass accumulation, which could play the dual role of cleaning the wastewater and also providing useful biomass. This has been the objective of this study, namely: (i) to develop pure cultures of local isolates of Cyanobacteria for the extraction of biochemicals of commercial value, and (ii) to couple biomass production with effluent treatment.

Relevância: 20.00%

Resumo:

This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zero-crossing rate and the short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods, the first is simple and requires only very few computations compared to the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is considered as 'side information'. This enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which applies switching between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments respectively is presented. Simulation results are provided to show the improved performance of the coder.
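The first voiced/unvoiced method combines the short-time zero-crossing rate and short-time energy into a distance measure; a toy version with hand-picked class prototypes (the reference values are assumptions for the sketch, not the thesis's trained values) looks like this:

```python
import numpy as np

def short_time_features(frame):
    """Short-time zero-crossing rate and energy of one speech frame."""
    s = np.asarray(frame, float)
    zcr = float(np.mean(np.abs(np.diff(np.sign(s)))) / 2.0)
    energy = float(np.mean(s ** 2))
    return zcr, energy

def classify_voiced(frame, zcr_ref=(0.05, 0.45), en_ref=(0.25, 0.005)):
    """Distance-based decision: compare (zcr, energy) against voiced and
    unvoiced prototypes; the nearer prototype wins."""
    zcr, en = short_time_features(frame)
    d_voiced = (zcr - zcr_ref[0]) ** 2 + (en - en_ref[0]) ** 2
    d_unvoiced = (zcr - zcr_ref[1]) ** 2 + (en - en_ref[1]) ** 2
    return "voiced" if d_voiced < d_unvoiced else "unvoiced"
```

Voiced segments are quasi-periodic with most energy at low frequencies (low zero-crossing rate, high energy), while unvoiced segments are noise-like (high zero-crossing rate, low energy), which is why these two features separate them cheaply.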

Relevância: 20.00%

Resumo:

Lipids constitute a significant portion of the biomass of the earth, and lipolytic enzymes play a very important role in lipid turnover. Apart from their biological significance, lipolytic enzymes are also very important in the fields of nutrition, food technology, medicine, and preparative and analytical lipid biochemistry. Recent developments in the study of proteins and enzymes have greatly benefited the study of lipolytic enzymes, so that some of these enzymes have been isolated in pure form. Even today there is a continuous search for new and potent sources of these lipolytic enzymes. The zest of biochemists for elucidating the structure and mechanism of action of the enzymes obtained in pure form still remains unabated. The literature shows no record of such an effort for the study of lipases from marine sources. The fact that many fishes like oil sardine, mackerel, catfish, seer, etc. contain large amounts of lipid shows the possibility of the existence of lipases in significant amounts, necessitating their exhaustive study. Such a study will not only provide alternate sources of lipase but will also provide methods to curb lipolysis and the resultant rancidity and off-flavor development in fish and fishery products.

Relevância: 20.00%

Resumo:

Estuaries are highly productive ecosystems and are characteristically more productive than the adjacent river or sea. Estuarine producers, which include planktonic algae, periphyton, herpobenthos as well as macrophytes, are capable of nearly year-round photosynthesis. Productivity of an environment is mainly the contribution of various groups of autotrophic flora, and any quantitative estimation excluding any one of these would be an underestimation. Periphyton plays a very important role in the productivity of estuarine and coastal waters. It has been reported that periphytic algae attain high biomass (Moss, 1968; Hansson, 1988a) and may contribute up to 80% of the primary production (Persson et al., 1977). A considerable amount of work has been done on productivity in the Cochin backwaters by different investigators (Qasim, 1973, 1979; Nair et al., 1975; Gopinathan et al., 1984). All of them have estimated the primary production based only on the phytoplankton of the estuary. Considering the contribution of other autotrophic components of the estuary, such as periphyton (haptobenthos), sediment flora (herpobenthos) and macrophytes, the productivity estimated by earlier authors was essentially an underestimation. The present work is an attempt, inter alia, to assess the contribution of the periphytic flora towards the total organic production in the estuary.

Relevância: 20.00%

Resumo:

An accurate mass formula at finite temperature has been used to obtain a more precise estimation of temperature effects on fission barriers calculated within the liquid drop model.

Relevância: 20.00%

Resumo:

Beta-glucosidases (BGL) are critical enzymes in the biomass hydrolysis process and are important in creating highly efficient enzyme cocktails for the bio-ethanol industry. Of the two strategies proposed for overcoming the glucose inhibition of commercial cellulases, one is to use a heavy dose of BGL in the enzyme blends, and the second is to perform simultaneous saccharification and fermentation, where glucose is converted to alcohol as soon as it is generated. While the former needs extremely high quantities of enzyme, the latter is inefficient since the conditions for hydrolysis and fermentation are different. This makes the process technically challenging, and in this case the alcohol yield is also lower, making its recovery difficult. A third option is to use glucose-tolerant β-glucosidases which can work at elevated glucose concentrations. However, there are very few reports on such enzymes from microbial sources, especially filamentous fungi, which can be cultivated on cheap biomass as raw material. Very few studies have been directed at this, though there is every possibility that filamentous fungi that are efficient degraders of biomass may harbor such enzymes. The study therefore aimed at isolating a fungus capable of secreting a glucose-tolerant β-glucosidase enzyme. Production and characterization of β-glucosidases, and the application of BGL for bioethanol production, were attempted.