Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes in which chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods are needed for selecting the best process alternative as well as the optimal operating conditions. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, which assumes negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytical solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows prediction of the feasible range of operating parameters that lead to the desired product purities. It can be applied to calculate first estimates of optimal operating conditions, to analyse process robustness, and to evaluate different process alternatives at an early stage. The design method is used to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Owing to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
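As an illustration of the adsorption model underlying the design equations above, the sketch below evaluates the competitive Langmuir isotherm for a binary mixture. The saturation capacity and equilibrium constants are hypothetical values chosen only for demonstration, not parameters taken from the thesis.

```python
import numpy as np

def competitive_langmuir(c1, c2, q_s, b1, b2):
    """Competitive Langmuir isotherm for a binary mixture.

    q_i = q_s * b_i * c_i / (1 + b1*c1 + b2*c2)
    c1, c2 : fluid-phase concentrations
    q_s    : saturation capacity (assumed equal for both components)
    b1, b2 : equilibrium (affinity) constants
    """
    denom = 1.0 + b1 * c1 + b2 * c2
    q1 = q_s * b1 * c1 / denom
    q2 = q_s * b2 * c2 / denom
    return q1, q2

# Hypothetical parameters: component 2 adsorbs more strongly than component 1.
c = np.linspace(0.0, 10.0, 5)   # example fluid-phase concentrations, g/L
q1, q2 = competitive_langmuir(c, c, q_s=50.0, b1=0.05, b2=0.10)
print(np.round(q1, 2), np.round(q2, 2))
```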
Abstract:
This study aimed to evaluate the cytotoxic and genotoxic potential of food flavorings (Strawberry, Condensed Milk, and Chocolate) on Allium cepa meristematic root cells, with exposure times of 24 and 48 hours. Cytotoxic and mutagenic potential were evaluated separately at doses of 0.2, 0.4, and 0.6 ml and in combination, where each dose was combined with the same dose of one other flavoring. The results were analyzed by the Chi-square test (p < 0.05). The Strawberry flavoring at all three doses and both exposure times, the Condensed Milk flavoring at 0.6 ml after 48 hours of exposure, the Chocolate flavoring at 0.4 ml after 48 hours of exposure and at 0.6 ml at both exposure times, and all treatments with combined doses significantly reduced the cell division rate, proving to be cytotoxic. No treatment resulted in a significant number of cellular aberrations in A. cepa cells; therefore, under the conditions studied, the flavorings were non-mutagenic.
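For readers unfamiliar with the test used above, the following is a minimal sketch of a Chi-square comparison of dividing versus non-dividing cell counts between a control and a treated group. The counts are invented for illustration and are not data from the study.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of meristematic cells
# (rows: control, treated; columns: dividing, not dividing).
table = [[120, 880],
         [ 70, 930]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Significant reduction in the mitotic index (p < 0.05).")
```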
Abstract:
Changes in the configuration of a tree stem result in significant differences in its total volume and in the proportion of that volume that is merchantable timber. Tree allometry, as represented by stem-form, is the result of the vertical force of gravity and the horizontal force of wind. The effect of wind force is demonstrated in the relationship between stem-form, stand-closure, and site-conditions. An increase in wind force on the individual tree due to a decrease in stand density should produce a more tapered tree. The density of the stand is determined by the conditions under which the trees are growing, and the ability of the tree to respond to increased wind force may also be a function of these conditions. This stem-form/stand-closure/site-conditions relationship was examined using a pre-existing database from west-central Alberta. This database consisted of environmental, vegetation, soils, and timber data covering a wide range of sites. There were 653 sample trees with 82 variables that formed the basis of the analysis. There were eight tree species, consisting of Pinus contorta, Picea mariana, Picea engelmannii x glauca, Abies lasiocarpa, Larix laricina, Populus tremuloides, Betula papyrifera and Populus balsamifera, plus a comprehensive all-species data set. As the actual conformation of the stem is very individual, stem-form was represented by the ratio of diameter at breast height to total height. The four stand-closure variables (crown closure, total basal area, total volume, and total number of stems) were reduced to total basal area and total number of stems using a bivariate correlation matrix by species. Site-conditions were subdivided into macro, meso, and micro variables and reduced in number using cross-tabulations, bivariate correlation, and principal components analysis as screening tools. The stem-form/stand-closure relationship was examined using bivariate correlation coefficients for stem-form with total number of stems and for stem-form with total basal area. The stem-form/site-conditions and stand-closure/site-conditions relationships were examined using multiple correlation coefficients. The stem-form/stand-closure/site-conditions relationship was examined using multiple correlation coefficients in separate analyses for both total number of stems and total basal area. An increase in stand-closure produced a decrease in stem-form for both total number of stems and total basal area for most species. There was a significant relationship between stem-form and site-conditions and between stand-closure and site-conditions, for both total number of stems and total basal area, for most species. There was a significant relationship between stem-form and site-conditions, including stand-closure, for most species; total number of stems contributed to the prediction of stem-form independently of the site-conditions, whereas total basal area did not. Larix laricina and Betula papyrifera were the exceptions to the trends observed for most species. The influence of both stand-closure (total number of stems in particular) and site-conditions (elevation in particular) suggests that forest management practices should include these ecological parameters in determining appropriate restocking levels.
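A hedged sketch of the kind of bivariate screening described above: the code computes the stem-form ratio (diameter at breast height over total height) for a set of trees and its Pearson correlation with stand basal area. The arrays are synthetic placeholders, not values from the Alberta database.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic example data (not from the west-central Alberta database).
dbh_cm     = np.array([22.0, 18.5, 30.1, 25.4, 15.2, 27.8])  # diameter at breast height, cm
height_m   = np.array([18.0, 16.2, 22.5, 20.1, 13.0, 21.4])  # total height, m
basal_area = np.array([28.0, 35.0, 20.0, 25.0, 40.0, 22.0])  # stand total basal area, m^2/ha

stem_form = dbh_cm / height_m   # stem-form represented as the DBH/height ratio

r, p = pearsonr(stem_form, basal_area)
print(f"r = {r:.3f}, p = {p:.3f}")  # a negative r would indicate less taper in denser stands
```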
Abstract:
With advances in information technology, economic and financial time series data are increasingly available. However, when standard time series techniques are used, this wealth of information comes with a problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has become increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique, such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters, for example, are there only a small number of sources of time instability in the coefficients of empirical macroeconomic models? Using structural factor analysis and VARMA modelling, this thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using the factor-augmented vector autoregressive (FAVAR) model. Previous studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for correctly identifying the transmission of monetary policy and helps to correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the last economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model.
We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have an important effect on measures of real activity, price indices, leading indicators, and financial indicators. In contrast to other studies, our procedure for identifying the structural shock does not require timing restrictions between financial and macroeconomic factors. Moreover, it provides an interpretation of the factors without constraining their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps to better forecast the important macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields consistent and precise results on the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model used in that earlier study, in which 510 VAR coefficients had to be estimated, we obtain similar results with only 84 parameters in the dynamic factor process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using the structural FAVARMA model. Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market.
Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behaviour of agents and of the economic environment may vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the parameters of the shock volatility vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability of almost 700 coefficients.
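A minimal sketch of the first stage of a FAVAR-type analysis, under the usual assumption that static factors can be estimated by principal components from a standardized panel of indicators. The panel here is random noise and the number of factors is arbitrary, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 120))   # placeholder panel: 200 months x 120 indicators

# Standardize each series, then extract principal-component factors.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 6                                 # number of factors (arbitrary here)
factors = U[:, :k] * S[:k]            # estimated static factors
loadings = Vt[:k, :].T                # indicator loadings on the factors

# In a FAVAR, these factors would be stacked with the policy rate and a VAR
# (or, in the FAVARMA extension, a VARMA) fitted to the joint vector.
print(factors.shape, loadings.shape)
```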
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
In this thesis, we address the minimum-cardinality connected dominating set problem. We focus, in particular, on developing solution methods based on constraint programming and integer programming. We present a heuristic and several exact methods that can be used as heuristics if their running time is limited. In particular, we describe an algorithm based on the Benders decomposition approach, another combining it with an iterative investigation strategy, a variant of the latter using constraint programming, and finally a method using constraint programming only. Experimental results show that these methods are effective, as they improve on the methods known in the literature. In particular, the Benders decomposition method with an iterative investigation strategy yields the best results.
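For orientation, a standard integer-programming formulation of the domination part of the problem is sketched below, where x_v = 1 if vertex v is selected and N[v] denotes the closed neighbourhood of v; connectivity of the selected vertices is the difficult part and is typically enforced through additional constraints or cuts, for example within Benders-style decompositions such as those described above. This is the textbook formulation, not necessarily the exact model used in the thesis.

```latex
\begin{aligned}
\min \; & \sum_{v \in V} x_v \\
\text{s.t.} \; & \sum_{u \in N[v]} x_u \ge 1 && \forall v \in V \quad \text{(every vertex is dominated)} \\
& \{\, v : x_v = 1 \,\} \text{ induces a connected subgraph of } G \\
& x_v \in \{0,1\} && \forall v \in V
\end{aligned}
```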
Abstract:
Plasticized poly(vinyl chloride) (pPVC), although a major player in the medical field, currently faces considerable criticism because of limitations such as leaching of the toxic plasticizer di(2-ethylhexyl) phthalate (DEHP) into the contacting medium and the emission of the environmental pollutant dioxin when PVC products are incinerated after use. For these reasons, efforts are underway to reduce the use of pPVC in the medical field considerably and to find viable alternative materials. The present study was undertaken in this context to find a suitable material to replace pPVC in the manufacture of medical aids. The main focus has been to identify a non-DEHP plasticizer for pPVC and another material suitable for the complete replacement of pPVC in blood and blood component storage applications. Two approaches were taken for this purpose: (1) the controversial plasticizer DEHP was partially replaced by polymeric plasticizers, and (2) an alternative material, metallocene polyolefin (mPO), was used and suitably modified to match the properties of the flexible PVC used for blood and blood component storage applications.
Abstract:
Laser engineering is an area in which developments in existing design concepts and technology appear at a rapid rate. Nowadays, emphasis has shifted from innovation to cost reduction and system improvement. To a major extent, these studies are aimed at attaining larger power densities, higher system efficiency, and the identification of new lasing media and new lasing wavelengths. To date, researchers have put all the different forms of matter to use as lasing materials. Laser action was first observed in a gaseous system, the He-Ne system. This was followed by a variety of solid-state and gas laser systems. Various organic dyes dissolved in suitable solvents were found to lase when pumped optically. The broad-band emission characteristics of these dye molecules made wavelength tuning possible using optical devices. Laser action was also observed in certain p-n junctions of semiconductor materials, and some of these systems are also tunable. The most recent addition to this list was the observation of laser action from certain laser-produced plasmas. The purpose of this investigation was to examine the design and fabrication techniques of pulsed nitrogen lasers and high-power Nd:glass lasers. Attempts were also made to apply the systems developed in certain related experiments.
Abstract:
The main focus of the present study was to develop ideal low band gap D-A copolymers for photoconducting and non-linear optical applications. This chapter summarizes the overall research work done. The designed copolymers were synthesized via direct arylation or Suzuki coupling reactions and were characterized by theoretical and experimental methods. The suitability of these copolymers for photoconducting and optical limiting devices was investigated. The results suggest that the copolymers investigated in the present study have a good non-linear optical response, comparable to or even better than the D-A copolymers reported in the literature, and hence could be chosen as ideal candidates with potential applications in non-linear optics. The results also show that the structures of the polymers have a great impact on NLO properties. The copolymers studied here exhibit good optical limiting properties at 532 nm due to a two-photon absorption (TPA) process. The results revealed that two of the copolymers, P(EDOT-BTSe) and P(PH-TZ), exhibited strong two-photon absorption and superior optical power limiting properties, much better than those of the others.
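As background on the optical-limiting mechanism invoked above, the intensity of a beam propagating through a medium with linear absorption coefficient $\alpha$ and two-photon absorption coefficient $\beta$ is commonly described by the standard relation below; this is the generic textbook expression, not a result fitted in the study.

```latex
\frac{dI}{dz} = -\alpha I - \beta I^{2}
```

At high irradiance the quadratic term dominates, so transmission falls with increasing input intensity, which is the origin of TPA-based optical limiting at 532 nm.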
Extraction of tidal channel networks from aerial photographs alone and combined with laser altimetry
Abstract:
Tidal channel networks play an important role in the intertidal zone, exerting substantial control over the hydrodynamics and sediment transport of the region and hence over the evolution of salt marshes and tidal flats. The study of the morphodynamics of tidal channels is currently an active area of research, and a number of theories have been proposed whose validation requires measurement of channels over extensive areas. Remotely sensed data provide a suitable means for such channel mapping. The paper describes a technique that may be adapted to extract tidal channels from either aerial photographs or LiDAR data separately, or from both types of data used together in a fusion approach. Application of the technique to channel extraction from LiDAR data has been described previously. However, aerial photographs of intertidal zones are much more commonly available than LiDAR data, and most LiDAR flights now involve acquisition of multispectral images to complement the LiDAR data. In view of this, the paper investigates the use of multispectral data for semi-automatic identification of tidal channels, firstly from aerial photographs or linescanner data alone, and secondly from fused linescanner and LiDAR data sets. A multi-level, knowledge-based approach is employed. The algorithm based on aerial photography can achieve a useful channel extraction, though it may fail to detect some of the smaller channels, partly because the spectral response of parts of the non-channel areas may be similar to that of the channels. The algorithm for channel extraction from fused LiDAR and spectral data gives increased accuracy, though only slightly higher than that obtained using LiDAR data alone. The results illustrate the difficulty of developing a fully automated method and justify the semi-automatic approach adopted.
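A highly simplified sketch of the kind of low-level step such an algorithm might start from: thresholding a gridded elevation (or intensity) surface against a smoothed background and labelling connected low-lying regions as candidate channels. The depth threshold and the use of scipy.ndimage here are illustrative assumptions and do not reproduce the multi-level, knowledge-based method of the paper.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
dem = rng.normal(loc=0.5, scale=0.2, size=(100, 100))  # placeholder LiDAR DEM, m above datum

# Candidate channel pixels: locally low elevation relative to a smoothed surface.
smooth = ndimage.uniform_filter(dem, size=15)
candidates = dem < (smooth - 0.1)                      # 0.1 m depth threshold (assumed)

# Clean up speckle and label connected channel segments.
candidates = ndimage.binary_opening(candidates, structure=np.ones((3, 3)))
labels, n_segments = ndimage.label(candidates)
print(f"{n_segments} candidate channel segments")
```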
Abstract:
We developed a stochastic simulation model incorporating most processes likely to be important in the spread of Phytophthora ramorum and similar diseases across the British landscape (covering Rhododendron ponticum in woodland and nurseries, and Vaccinium myrtillus in heathland). The simulation allows for movements of diseased plants within a realistically modelled trade network and for long-distance natural dispersal. A series of simulation experiments was run with the model, varying the epidemic pressure and the linkage between natural vegetation and the horticultural trade, with or without disease spread in commercial trade, and with or without inspections with eradication, to give a 2 x 2 x 2 x 2 factorial design started at 10 arbitrary locations spread across England. Fifty replicate simulations were made at each set of parameter values. Individual epidemics varied dramatically in size due to stochastic effects throughout the model. Across a range of epidemic pressures, the size of the epidemic was 5-13 times larger when commercial movement of plants was included. A key unknown factor in the system is the area of susceptible habitat outside the nursery system. Inspections, made at 90-day intervals with an 80% probability of detection and efficiency of infected-plant removal, reduced the size of epidemics by about 60% across the three sectors with a density of 1% susceptible plants in broadleaf woodland and heathland. Reducing this density to 0.1% largely isolated the trade network, so that inspections reduced the final epidemic size by over 90%, and most epidemics ended without escape into nature. Even in this case, however, major wild epidemics developed in a few percent of cases. Provided the number of new introductions remains low, the current inspection policy will control most epidemics. However, as the rate of introduction increases, it can overwhelm any reasonable inspection regime, largely due to spread prior to detection.
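A toy sketch of the structure of such a stochastic experiment, with random spread events and periodic inspections that remove infected sites with 80% efficiency every 90 days. The infection rate, number of sites, and seeding are invented, and the sketch ignores the trade network and spatial dispersal kernels, so it only illustrates how replicate stochastic runs of this kind are organized.

```python
import random

def run_epidemic(days=1095, spread_rate=0.02, inspect_every=90,
                 detect_prob=0.8, n_sites=500, n_seeds=10, seed=0):
    """Toy stochastic spread model: each infected site infects a randomly
    chosen site with probability spread_rate per day; inspections remove
    each infected site with probability detect_prob every inspect_every days."""
    random.seed(seed)
    infected = set(random.sample(range(n_sites), n_seeds))
    for day in range(1, days + 1):
        new = set()
        for _ in infected:
            if random.random() < spread_rate:
                new.add(random.randrange(n_sites))
        infected |= new
        if day % inspect_every == 0:
            infected = {s for s in infected if random.random() > detect_prob}
    return len(infected)

# Fifty replicate runs, as in the factorial experiment described above.
sizes = [run_epidemic(seed=i) for i in range(50)]
print(min(sizes), max(sizes))   # stochastic variation between replicates
```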