1000 results for thermoeconomic optimization


Relevância:

20.00%

Publicador:

Resumo:

Among the methods for estimating the parameters of probability distributions in statistics, maximum likelihood is one of the most popular techniques, since, under mild conditions, the resulting estimators are consistent and asymptotically efficient. Maximum likelihood problems can be treated as nonlinear, possibly nonconvex, programming problems, for which two major classes of solution methods are trust-region techniques and line-search methods. Moreover, the structure of these problems can be exploited to try to accelerate the convergence of these methods, under certain assumptions. In this work, we revisit some classical or recently developed approaches in nonlinear optimization in the specific context of maximum likelihood estimation. We also develop new algorithms for this problem, reconsidering various Hessian approximation techniques, and propose new step-size computation methods, in particular within line-search algorithms. These notably include algorithms that allow us to switch between Hessian approximations and to adapt the step length along a fixed search direction. Finally, we assess the numerical efficiency of the proposed methods on the estimation of discrete choice models, in particular mixed logit models.
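The interplay of a Newton-type direction with a backtracking line search, as discussed above, can be sketched on the simplest of maximum likelihood problems. The example below fits the rate of an exponential distribution, where the closed-form MLE (the reciprocal of the sample mean) gives an independent check; it illustrates the generic machinery only, not the thesis's algorithms.

```python
import numpy as np

def nll(lam, x):
    """Negative log-likelihood of an exponential sample with rate lam."""
    return -len(x) * np.log(lam) + lam * x.sum()

def fit_exponential(x, lam=1.0, max_iter=100):
    """Newton direction plus Armijo backtracking line search on the NLL."""
    n, s = len(x), x.sum()
    for _ in range(max_iter):
        g = s - n / lam              # gradient of the NLL
        h = n / lam ** 2             # Hessian (positive: the NLL is convex)
        d = -g / h                   # Newton search direction
        t = 1.0
        # backtracking: halve the step until it is feasible and decreases enough
        while lam + t * d <= 0 or nll(lam + t * d, x) > nll(lam, x) + 1e-4 * t * g * d:
            t *= 0.5
            if t < 1e-12:
                break
        lam += t * d
        if abs(d) < 1e-12 * lam:     # step negligible: converged
            break
    return lam

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=10_000)   # true rate 0.5
lam_hat = fit_exponential(sample)
# closed-form MLE for comparison: 1 / sample mean
```

The line search guards both feasibility (the rate must stay positive) and sufficient decrease, the two roles it plays in the general-purpose methods the abstract discusses.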

Relevância:

20.00%

Publicador:

Resumo:

Call centers are key components of almost any large organization. The workforce management problem has received considerable attention in the literature. A typical formulation is based on performance measures over an infinite horizon, and the agent staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider an agent staffing problem for call centers subject to chance constraints. We introduce a formulation that requires the quality-of-service (QoS) constraints to be satisfied with high probability, and define a sample-average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the special case in which all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample-average problem. Given an initial staffing level, we increase the number of agents in periods where the constraints are violated, and decrease the number of agents in periods where the constraints remain satisfied after the reduction. Numerical experiments are conducted on several low-occupancy call-center models, in which the algorithms yield good solutions, i.e., most of the chance constraints are satisfied and staffing cannot be reduced in any period without introducing a constraint violation. One advantage of these algorithms over other methods is their ease of implementation.
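The increase/decrease staffing heuristic described above can be sketched as follows. A toy monotone function stands in for the call-centre simulator (`prob_sl_met` and its exponential form are purely illustrative assumptions), but the two-phase add/trim logic mirrors the description: raise staffing where the chance constraint is violated, then trim wherever the constraint would still hold after removing an agent.

```python
import math

def prob_sl_met(agents, load):
    """Hypothetical stand-in for a simulator: chance that the QoS target is
    met in one period, monotone in the staffing surplus over the offered load."""
    return 1.0 - math.exp(-(agents - load) / 2.0) if agents > load else 0.0

def adjust_staffing(loads, target=0.9, max_agents=100):
    """Two-phase heuristic: add agents where the chance constraint fails,
    then trim agents wherever the constraint still holds after removal."""
    staff = [int(load) + 1 for load in loads]         # initial staffing guess
    for p, load in enumerate(loads):                  # phase 1: repair violations
        while prob_sl_met(staff[p], load) < target and staff[p] < max_agents:
            staff[p] += 1
    for p, load in enumerate(loads):                  # phase 2: remove slack
        while staff[p] > 1 and prob_sl_met(staff[p] - 1, load) >= target:
            staff[p] -= 1
    return staff

loads = [4.0, 10.0, 7.5]
staff = adjust_staffing(loads)
feasible = all(prob_sl_met(s, l) >= 0.9 for s, l in zip(staff, loads))
```

In the thesis the probabilities come from simulation replications rather than a formula, but the stopping conditions (all constraints satisfied, no period trimmable) are the same.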

Relevância:

20.00%

Publicador:

Resumo:

Large-scale supervised learning of hierarchical networks is currently enjoying tremendous success. Despite this momentum, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), the probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm that automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to learning any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update.
The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that it can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when few labeled data are available. We conclude the thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
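The adaptive-temperature idea in the abstract (adjusting chain temperatures during simulation to keep mixing fast) can be illustrated with a small parallel-tempering sketch. Everything here is a stand-in: a 1-D double well plays the role of the Boltzmann machine energy, and a simple multiplicative rule adapts each inverse-temperature gap toward a target swap acceptance rate. It shows the general mechanism only, not the thesis's algorithm.

```python
import math
import random

random.seed(0)

def energy(x):
    return (x * x - 1.0) ** 2      # 1-D double well standing in for a BM energy

def adaptive_tempering(n_steps=5000, target=0.4, lr=0.01):
    """Metropolis chains at several temperatures; each inverse-temperature gap
    is nudged so that neighbouring swaps are accepted at roughly the target rate."""
    betas = [1.0, 0.7, 0.5, 0.3]   # betas[0] is the reference (model) chain
    xs = [0.0] * len(betas)
    for _ in range(n_steps):
        # within-chain Metropolis move for every temperature
        for i, beta in enumerate(betas):
            prop = xs[i] + random.gauss(0.0, 0.5)
            d_e = energy(prop) - energy(xs[i])
            if d_e <= 0 or random.random() < math.exp(-beta * d_e):
                xs[i] = prop
        # propose a swap between a random pair of neighbouring chains
        i = random.randrange(len(betas) - 1)
        arg = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        acc = 1.0 if arg >= 0 else math.exp(arg)
        if random.random() < acc:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        # widen the gap when swaps are too easy, narrow it when too hard
        gap = betas[i] - betas[i + 1]
        gap = min(max(gap * math.exp(lr * (acc - target)), 1e-3), 0.3)
        betas[i + 1] = betas[i] - gap
        # keep the ladder strictly ordered below the reference chain
        for j in range(1, len(betas)):
            betas[j] = min(betas[j], betas[j - 1] - 1e-3)
    return betas

betas = adaptive_tempering()
```

The adapted ladder stays strictly ordered with the reference chain fixed at beta = 1, which is the property the learning procedure relies on.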

Relevância:

20.00%

Publicador:

Resumo:

Significant results of our experimental investigations on the dependence on pH of the real-time transmission characteristics of recording media fabricated by doping PVC with complexed methylene blue are presented. The optimum pH value for faster bleaching was found to be 4.5. In typical applications, illumination from one side, normal to the surface of this material, initiates a chemical sequence that records the incident light pattern in the polymer. Thus direct imaging can be successfully done on this sample. The recorded letters were very legible, with good contrast and no scattering centres. Diffraction efficiency measurements were also carried out on this material.


Relevância:

20.00%

Publicador:

Resumo:

Aim: To develop a new medium for enhanced production of biomass of an aquaculture probiotic, Pseudomonas MCCB 103, and of its antagonistic phenazine compound, pyocyanin. Methods and Results: Carbon and nitrogen sources and growth factors, such as amino acids and vitamins, were screened initially in a mineral medium for the biomass and antagonistic compound of Pseudomonas MCCB 103. The selected ingredients were further optimized using a full-factorial central composite design of the response surface methodology. The medium optimized as per the model for biomass contained mannitol (20 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5 g l⁻¹), urea (3.3 g l⁻¹) and mineral salts solution (20 ml l⁻¹), and the one optimized for the antagonistic compound contained mannitol (2 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5.1 g l⁻¹), urea (3.6 g l⁻¹) and mineral salts solution (20 ml l⁻¹). Subsequently, the model was validated experimentally, with a 19% increase in biomass and a fivefold increase in the antagonistic compound. Conclusion: A significant increase in biomass and antagonistic compound production could be obtained in the new media. Significance and Impact of the Study: Media formulation and optimization are the primary steps in bioprocess technology, an attempt not made so far in the production of aquaculture probiotics.
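A full-factorial central composite design feeds a quadratic response-surface model, whose fitted stationary point gives the predicted optimum. The sketch below shows that pipeline on invented two-factor data (the coded design points and coefficients are illustrative, not the study's measurements).

```python
import numpy as np

def true_response(x1, x2):
    # invented "ground truth" response in coded factor units
    return 10 + 2 * x1 + x2 - 1.5 * x1 ** 2 - 1.0 * x2 ** 2 + 0.5 * x1 * x2

# two-factor central composite design: factorial, axial and centre points
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([true_response(a, b) for a, b in X])

def quad_design(X):
    """Design matrix of the full second-order (quadratic) model."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# stationary point of the fitted surface: solve grad f = 0
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
optimum = np.linalg.solve(A, np.array([-b1, -b2]))
```

With noiseless data the least-squares fit recovers the generating coefficients exactly; with real replicated measurements the same stationary-point calculation yields the predicted optimum medium composition.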

Relevância:

20.00%

Publicador:

Resumo:

The thesis deals with the preparation and dielectric characterization of polyaniline and its analogues in the ISM band frequency range of 2-4 GHz, which forms part of the microwave region (300 MHz to 300 GHz) of the electromagnetic spectrum, together with an initial dielectric study at high frequency (0.05 MHz-13 MHz). Polyaniline has been synthesized by an in situ doping reaction at different temperatures in the presence of inorganic dopants such as HCl, H2SO4, HNO3 and HClO4, and organic dopants such as camphorsulphonic acid [CSA], toluenesulphonic acid [TSA] and naphthalenesulphonic acid [NSA]. The variation in dielectric properties with change in reaction temperature, dopant and frequency has been studied. The effect of codopants and microemulsions on the dielectric properties has also been studied in the ISM band. The ISM band of frequencies (2-4 GHz) is of great utility in Industrial, Scientific and Medical (ISM) applications. Microwave heating is a very efficient method of heating dielectric materials and is extensively used in industrial as well as household heating applications.

Relevância:

20.00%

Publicador:

Resumo:

The main source of protein for human and animal consumption is the agricultural sector, where production is vulnerable to diseases, fluctuations in climatic conditions and deteriorating hydrological conditions due to water pollution. Single Cell Protein (SCP) production has therefore evolved as an excellent alternative. Among all sources of microbial protein, yeast has attained global acceptability and has been preferred for SCP production. The screening and evaluation of nutritional and other culture variables of microorganisms are very important in the development of a bioprocess for SCP production. The application of statistical experimental design in bioprocess development can result in improved product yields, reduced process variability, closer conformity of the output response to target requirements, and reduced development time and overall cost. The present work was undertaken to develop a bioprocess technology for the mass production of a marine yeast, Candida sp. S27. Yeasts isolated from the offshore waters of the southwest coast of India and maintained in the Microbiology Laboratory were subjected to various tests for the selection of a potent strain for biomass production. The selected marine yeast was identified based on ITS sequencing, and biochemical/nutritional characterization of Candida sp. S27 was carried out. Using Response Surface Methodology (RSM), the process parameters (pH, temperature and salinity) were optimized. For mass production of yeast biomass, a chemically defined medium (Barnett and Ingram, 1955) and a crude medium (molasses-yeast extract) were optimized using RSM. Scale-up of biomass production was done in a bench-top fermenter using these two optimized media. The comparative efficacy of the defined and crude media was estimated, besides nutritional evaluation of the biomass developed using these two optimized media.

Relevância:

20.00%

Publicador:

Resumo:

A two-stage process consisting of precursor preparation by thermal evaporation followed by chalcogenisation in the required atmosphere is found to be a feasible technique for PV materials such as n-Beta-In2S3, p-CuInSe2, p-CuInS2 and p-CuIn(Se1-xSx)2. Growth parameters such as chalcogenisation temperature and duration have been optimised in the present study. Single-phase Beta-In2S3 thin films can be obtained by sulfurising indium films above 300°C for 45 minutes. Low sulfurisation temperatures required prolonged annealing after sulfurisation to obtain single-phase Beta-In2S3, which resulted in high material loss. The maximum band gap of 2.58 eV was obtained for the nearly stoichiometric Beta-In2S3 film sulfurised at 350°C. This wider-band-gap, n-type Beta-In2S3 can be used as an alternative to toxic CdS as a window layer in photovoltaics. A systematic study of the structural, optical and electrical properties of CuInSe2 films, varying process parameters such as the duration of selenization and the selenization temperature, led to the conclusion that for the growth of single-phase CuInSe2 the optimum selenization temperature is 350°C and the optimum duration is 3 hours. The presence of some binary phases in films for shorter selenization periods and lower selenization temperatures may be due to incomplete reaction and indium loss. An optical band gap energy of 1.05 eV was obtained for films grown under the optimum conditions. In order to obtain a closer match to the solar spectrum, it is desirable to increase the band gap of CuInSe2 by a few hundred meV. Further work was carried out to produce graded-band-gap CuIn(Se,S)2 absorber films by incorporation of sulfur into CuInSe2. It was observed that when CuInSe2 prepared by the two-stage process was post-annealed in a sulfur atmosphere, the sulfur may occupy interstitial positions or form a CuInS2 phase along with the CuInSe2 phase.
Sulfur treatment during the selenization of Cu11In9 precursors resulted in CuIn(Se,S)2 thin films. A band gap of 1.38 eV was obtained for the CuIn(Se,S)2. The optimised thin films n-Beta-In2S3, p-CuInSe2 and p-CuIn(Se1-xSx)2 can be used for the fabrication of polycrystalline solar cells.

Relevância:

20.00%

Publicador:

Resumo:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions. However, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes have been identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yields the optimum machining parameters from the experimentation. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions in the relevant search space in order to reach the true optimum solution.
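The S/N (signal-to-noise) analysis mentioned above reduces replicated measurements at each parameter setting to a single robustness score. For a smaller-the-better response such as surface roughness, Taguchi's ratio is -10·log10 of the mean squared response; the replicate values below are hypothetical.

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio (dB) for a smaller-the-better response."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# hypothetical surface-roughness replicates (micron) at two parameter settings
setting_a = [1.8, 2.0, 1.9]
setting_b = [1.2, 1.4, 1.3]

# the setting with the higher S/N ratio is the more robust, lower-roughness one
better = max([setting_a, setting_b], key=sn_smaller_is_better)
```

Computing this ratio per factor level across the orthogonal array, then picking the level with the highest mean ratio for each factor, is what "finding the optimum level of each parameter" amounts to.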
A mathematical model for surface roughness was developed using response surface analysis, and the model was validated using published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), a Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420 material. All of the above algorithms were tested for efficiency, robustness and accuracy, and it was observed that they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB. For each evolutionary algorithm, optimum cutting conditions are provided to achieve better surface finish. The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations; from the results it was observed that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism. Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion.
To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature that optimize its own systems.
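As an illustration of one of the metaheuristics compared above, the following sketch applies simulated annealing to a hypothetical convex surface-roughness model in coded parameter units (the model and its constants are assumptions, not the thesis's fitted response surface).

```python
import math
import random

random.seed(1)

def roughness(f, v, d):
    """Hypothetical convex roughness model in coded feed/speed/depth units."""
    return 2.0 + 1.5 * (f - 0.3) ** 2 + 0.8 * (v - 0.6) ** 2 + 0.5 * (d - 0.4) ** 2

def anneal(n_iter=20_000, t0=1.0, cooling=0.9995, step=0.05):
    x = [0.5, 0.5, 0.5]                       # start in the middle of the range
    best, best_val = x[:], roughness(*x)
    t = t0
    for _ in range(n_iter):
        # perturb each coded parameter, clamped to [0, 1]
        cand = [min(1.0, max(0.0, xi + random.gauss(0.0, step))) for xi in x]
        delta = roughness(*cand) - roughness(*x)
        # accept improvements always; accept uphill moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            val = roughness(*x)
            if val < best_val:
                best, best_val = x[:], val
        t *= cooling                          # geometric cooling schedule
    return best, best_val

best, best_val = anneal()
```

The temperature-controlled acceptance of uphill moves is what lets SA escape local minima on a genuinely multimodal roughness model, which is where it earns its keep over gradient methods.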

Relevância:

20.00%

Publicador:

Resumo:

Controlling inorganic nitrogen by manipulating the carbon/nitrogen ratio is a method gaining importance in aquaculture systems. Nitrogen control is induced by feeding bacteria with carbohydrates, through the subsequent uptake of nitrogen from the water for the synthesis of microbial proteins. The relationship between the addition of carbohydrates, the reduction of ammonium and the production of microbial protein depends on the microbial conversion coefficient, and the carbon/nitrogen ratio in the microbial biomass is related to the carbon content of the added material. The addition of carbonaceous substrate was found to reduce inorganic nitrogen in shrimp culture ponds, and the resulting microbial proteins are taken up by the shrimp. Thus, part of the feed protein is replaced and feeding costs are reduced in culture systems. The use of various locally available substrates for periphyton-based aquaculture practices increases production and profitability. However, these techniques have not so far been evaluated for extensive shrimp farming. Moreover, an evaluation of artificial substrates together with a carbohydrate-source-based farming system in reducing inorganic nitrogen in culture systems has not yet been carried out, and variations in water and soil quality, periphyton production and shrimp production of the whole system have also not been determined. This thesis starts with a general introduction and a brief review of the most relevant literature, presents the results of various experiments, and concludes with a summary (Chapter 9). The chapters are organised according to the objectives of the present study. The major objectives of this thesis are to improve the sustainability of shrimp farming through carbohydrate addition and periphyton-substrate-based shrimp production, and to improve nutrient utilisation in aquaculture systems.
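The carbohydrate-to-ammonium relationship described above is, at its core, simple stoichiometric arithmetic. The sketch below uses illustrative coefficients (the carbon fraction, microbial conversion efficiency and microbial C/N ratio are assumed values, not the thesis's) to show how a carbohydrate dose would be derived.

```python
# All three coefficients are illustrative assumptions, not the thesis's values.
CARBON_FRACTION = 0.5          # g carbon per g of carbohydrate added
CONVERSION_EFFICIENCY = 0.4    # fraction of that carbon fixed into microbial cells
MICROBIAL_CN = 4.0             # C/N ratio of the microbial biomass produced

def carbohydrate_needed(ammonium_n_g):
    """Grams of carbohydrate required to immobilize a given mass of ammonium-N."""
    n_uptake_per_g_carb = CARBON_FRACTION * CONVERSION_EFFICIENCY / MICROBIAL_CN
    return ammonium_n_g / n_uptake_per_g_carb

# with these coefficients, immobilizing 1 g of ammonium-N takes 20 g of carbohydrate
dose = carbohydrate_needed(1.0)
```

The microbial conversion coefficient is the lever here: a lower efficiency or a higher microbial C/N ratio raises the dose proportionally.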

Relevância:

20.00%

Publicador:

Resumo:

The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio-frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with substantial sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed system for data computation, communication, automated information gathering and distributed sensing; its main attractive feature is that such a network can be deployed in remote areas. Since sensor nodes are battery powered, all the nodes should collaborate to form a fault-tolerant network that makes efficient use of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.

Relevância:

20.00%

Publicador:

Resumo:

Faculty of Marine Sciences, Cochin University of Science and Technology

Relevância:

20.00%

Publicador:

Resumo:

Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually on all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
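The redundant bank-switch elimination can be illustrated with a minimal state-tracking pass over a linear instruction list. The mnemonics are modelled loosely on PIC-style banking, and the reset-on-control-transfer rule is a simplifying assumption (the dissertation's relation matrix and state transition diagram handle this more precisely).

```python
def eliminate_redundant_bank_switches(instructions):
    """Drop a bank-select instruction when that bank is already active."""
    optimized, active_bank = [], None
    for instr in instructions:
        if instr.startswith("BANKSEL"):
            bank = instr.split()[1]
            if bank == active_bank:
                continue                  # redundant: this bank is already selected
            active_bank = bank
        elif instr.startswith(("CALL", "GOTO")):
            active_bank = None            # control transfer: bank state unknown
        optimized.append(instr)
    return optimized

program = ["BANKSEL 1", "MOVWF x", "BANKSEL 1", "ADDWF y",
           "CALL sub", "BANKSEL 1", "MOVWF z"]
slim = eliminate_redundant_bank_switches(program)
# the second BANKSEL 1 is dropped; the one after CALL is kept conservatively
```

Invalidating the tracked bank at control transfers keeps the pass sound on straight-line listings; a full implementation propagates the active-bank state along the control-flow graph instead.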