866 results for OPTIMIZATION MODEL


Relevance: 30.00%

Publisher:

Abstract:

As is well known, competitive electricity markets require new computing tools for power companies that operate in retail markets in order to enhance the management of their energy resources. In recent years, renewable penetration in micro-generation has increased and begun to co-exist with existing power generation, giving rise to a new type of consumer. This paper develops a methodology to be applied to the management of aggregators. The aggregator establishes bilateral contracts with its clients in which the energy purchase and selling conditions are negotiated, not only in terms of prices but also of other conditions that allow more flexibility in the way generation and consumption are addressed. The aggregator agent needs a tool to support decision making in order to compose and select its customer portfolio in an optimal way, for a given level of profitability and risk.
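As a concrete illustration of the portfolio-composition step, the minimal sketch below brute-forces the subset of clients that maximizes expected contract margin subject to a cap on portfolio risk. The client names, margins, covariance values and the risk cap are hypothetical placeholders, not data or methods from the paper.

```python
import itertools
import numpy as np

# Hypothetical per-client data: expected annual margin (k EUR) and the
# covariance of margins, assumed known from historical contracts.
clients = ["C1", "C2", "C3", "C4", "C5"]
margin = np.array([12.0, 8.0, 15.0, 6.0, 10.0])   # expected profit per client
cov = np.diag([4.0, 1.0, 9.0, 0.5, 2.5])          # margin covariance
cov[0, 2] = cov[2, 0] = 3.0                        # two correlated clients

risk_cap = 12.0   # maximum acceptable portfolio variance (chosen risk level)

best_value, best_set = -np.inf, ()
for k in range(1, len(clients) + 1):
    for subset in itertools.combinations(range(len(clients)), k):
        idx = list(subset)
        risk = cov[np.ix_(idx, idx)].sum()   # variance of the summed margins
        value = margin[idx].sum()
        if risk <= risk_cap and value > best_value:
            best_value, best_set = value, subset

print("selected clients:", [clients[i] for i in best_set],
      "expected margin:", best_value)
```

For more than a handful of candidate clients, the same selection would be posed as a mixed-integer program rather than enumerated exhaustively.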

Relevance: 30.00%

Publisher:

Abstract:

Submitted in partial fulfillment of the requirements for the degree of PhD in Mathematics, in the speciality of Statistics, at the Faculdade de Ciências e Tecnologia.

Relevance: 30.00%

Publisher:

Abstract:

Polysaccharides are gaining increasing attention as potentially environmentally friendly and sustainable building blocks in many fields of the (bio)chemical industry. The microbial production of polysaccharides is envisioned as a promising path, since higher biomass growth rates are possible and therefore higher productivities may be achieved compared with vegetable or animal polysaccharide sources. This Ph.D. thesis focuses on the modeling and optimization of the production of a particular microbial polysaccharide: extracellular polysaccharides (EPS) produced by the bacterial strain Enterobacter A47. Enterobacter A47 was found to be a metabolically versatile organism in terms of its adaptability to complex media, notably capable of achieving high growth rates in media containing glycerol byproduct from the biodiesel industry. However, the industrial implementation of this production process is still hampered by a largely unoptimized process. Kinetic rates in the bioreactor are heavily dependent on operational parameters such as temperature, pH, stirring and aeration rate. The increase of culture broth viscosity is a common feature of this culture and has a major impact on the overall performance. This complicates the mathematical modeling of the process, limiting the ability to understand, control and optimize productivity. In order to tackle this difficulty, data-driven mathematical methodologies such as Artificial Neural Networks can be employed to incorporate additional process data and complement the known mathematical description of the fermentation kinetics. In this Ph.D. thesis, we adopted such a hybrid modeling framework, which enabled temperature, pH and viscosity effects to be incorporated into the fermentation kinetics in order to improve the dynamic modeling and optimization of the process. A model-based optimization method was implemented that enabled the design of optimal bioreactor control strategies in the sense of EPS productivity maximization. It is also critical to understand EPS synthesis at the level of the bacterial metabolism, since the production of EPS is a tightly regulated process. Methods of pathway analysis provide a means to unravel the fundamental pathways and their controls in bioprocesses. In the present Ph.D. thesis, a novel methodology called Principal Elementary Mode Analysis (PEMA) was developed and implemented that enabled the identification of the cellular fluxes activated under different conditions of temperature and pH. It is shown that differences in these two parameters affect the chemical composition of EPS; hence they are critical for the regulation of product synthesis. In future studies, the knowledge provided by PEMA could foster the development of metabolically meaningful control strategies that target the EPS sugar content and other product quality parameters.
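The following is a minimal sketch of the kind of hybrid model described above: a mass-balance ODE system whose kinetic rates are supplied by a data-driven map of temperature, pH and viscosity. The `ann_kinetics` function is a smooth toy stand-in for the trained artificial neural network, and all yields and parameter values are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ann_kinetics(T, pH, viscosity):
    """Toy stand-in for the trained ANN: maps operating conditions to the
    specific growth rate mu (1/h) and specific EPS formation rate qp (g/g/h)."""
    mu = 0.25 * np.exp(-((T - 30.0) / 5.0) ** 2) * np.exp(-((pH - 7.0) / 1.5) ** 2)
    qp = 0.08 * mu / (0.05 + mu) / (1.0 + 0.02 * viscosity)
    return mu, qp

def hybrid_model(t, y, T, pH):
    X, S, P = y                       # biomass, glycerol substrate, EPS (g/L)
    viscosity = 1.0 + 0.5 * P         # crude proxy: broth viscosity rises with EPS
    mu, qp = ann_kinetics(T, pH, viscosity)
    if S <= 0.0:                      # stop kinetics once substrate is exhausted
        mu, qp = 0.0, 0.0
    Yxs, Yps = 0.45, 0.30             # assumed yield coefficients (g/g)
    dX = mu * X
    dP = qp * X
    dS = -(dX / Yxs + dP / Yps)
    return [dX, dS, dP]

# Simulate a 72 h batch at T = 30 C and pH 7, starting from 0.1 g/L biomass.
sol = solve_ivp(hybrid_model, (0.0, 72.0), [0.1, 40.0, 0.0], args=(30.0, 7.0))
print("final EPS concentration (g/L): %.2f" % sol.y[2, -1])
```

A model-based optimization of, e.g., the temperature profile would then wrap an optimizer around this simulation with EPS productivity as the objective.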

Relevance: 30.00%

Publisher:

Abstract:

Despite the extensive literature on finding new models to replace the Markowitz model, or on trying to increase the accuracy of its input estimates, there are fewer studies about the impact of using different optimization algorithms on the results. This paper aims to add to this field by comparing the performance of two optimization algorithms in drawing the Markowitz efficient frontier and in real-world investment strategies. Second-order cone programming is a faster algorithm and appears to be more efficient, but it is impossible to assert which algorithm is better: quadratic programming often shows superior performance in real investment strategies.
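As an illustration of the frontier computation being compared, the sketch below sweeps target returns and solves the long-only minimum-variance problem as a quadratic program with cvxpy (one possible modeling tool, not necessarily the one used in the paper); the return vector and covariance matrix are toy values.

```python
import numpy as np
import cvxpy as cp

# Toy inputs: expected returns and covariance for four assets (illustrative).
mu = np.array([0.08, 0.10, 0.12, 0.07])
Sigma = np.array([[0.10, 0.02, 0.04, 0.00],
                  [0.02, 0.08, 0.01, 0.00],
                  [0.04, 0.01, 0.12, 0.01],
                  [0.00, 0.00, 0.01, 0.05]])

w = cp.Variable(len(mu))                 # portfolio weights
target = cp.Parameter(nonneg=True)       # required expected return
risk = cp.quad_form(w, Sigma)
constraints = [cp.sum(w) == 1, w >= 0, mu @ w >= target]
problem = cp.Problem(cp.Minimize(risk), constraints)

frontier = []
for r in np.linspace(mu.min(), mu.max(), 20):   # sweep target returns
    target.value = r
    problem.solve()                              # QP solved by the default solver
    if problem.status == "optimal":
        frontier.append((r, np.sqrt(risk.value)))

for r, s in frontier:
    print(f"target return {r:.3f} -> portfolio volatility {s:.3f}")
```

The same problem can be handed to a second-order cone or an interior-point QP solver simply by changing the `solver=` argument, which is essentially the comparison the paper carries out.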

Relevance: 30.00%

Publisher:

Abstract:

Nowadays, a significant number of banks in Portugal are facing a bank-branch restructuring problem, and Millennium BCP is no exception. The closure of branches is a major component of profit maximization through the reduction of operational and personnel costs, but it is also an opportunity to approach the idea of the "banking of the future" and to start thinking about the benefits of the digital era. This dissertation centers on a current high-impact organizational problem addressed by the company and consists of a proposed optimization of the model that Millennium BCP uses. Even though measures of performance are usually considered the most important elements in evaluating the viability of branches, there is evidence suggesting that other general factors can be important for assessing branch potential, such as the influx at branches, the business dimensions of a branch, and its location, all of which are addressed in this project.

Relevance: 30.00%

Publisher:

Abstract:

Among the methods for estimating the parameters of probability distributions in statistics, maximum likelihood is one of the most popular techniques, since, under mild conditions, the resulting estimators are consistent and asymptotically efficient. Maximum likelihood problems can be treated as nonlinear, possibly non-convex, programming problems, for which two major classes of solution methods are trust-region techniques and line-search methods. Moreover, it is possible to exploit the structure of these problems in order to accelerate the convergence of these methods, under certain assumptions. In this work, we revisit some classical or recently developed approaches in nonlinear optimization in the particular context of maximum likelihood estimation. We also develop new algorithms to solve this problem, reconsidering different Hessian approximation techniques, and we propose new step-size computation methods, in particular within line-search algorithms. These include algorithms that allow us to switch between Hessian approximations and to adapt the step length along a fixed search direction. Finally, we assess the numerical efficiency of the proposed methods on the estimation of discrete choice models, in particular mixed logit models.
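To make the link between maximum likelihood estimation and nonlinear programming concrete, here is a minimal sketch of a Newton method with Armijo backtracking line search applied to a simulated binary logit log-likelihood. It uses the exact Hessian rather than the approximations studied in the thesis, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([0.5, 1.0, -2.0, 0.3])
y = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))   # simulated binary choices

def neg_loglik_grad_hess(beta):
    """Negative log-likelihood of the binary logit model, with gradient and Hessian."""
    mu = np.clip(1 / (1 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    nll = -np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))
    grad = X.T @ (mu - y)
    hess = X.T @ (X * (mu * (1 - mu))[:, None])
    return nll, grad, hess

beta = np.zeros(X.shape[1])
for it in range(50):
    nll, grad, hess = neg_loglik_grad_hess(beta)
    if np.linalg.norm(grad) < 1e-6:
        break
    step = np.linalg.solve(hess, -grad)          # Newton search direction
    t = 1.0
    while True:                                  # Armijo backtracking line search
        new_nll, _, _ = neg_loglik_grad_hess(beta + t * step)
        if new_nll <= nll + 1e-4 * t * (grad @ step) or t < 1e-8:
            break
        t *= 0.5
    beta = beta + t * step

print("estimated coefficients:", np.round(beta, 3))
```

Replacing the exact Hessian by BFGS or other secant updates, and changing how `t` is chosen, corresponds to the algorithmic variations investigated in the work.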

Relevance: 30.00%

Publisher:

Abstract:

Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to the learning of any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches, which treat a given model as a black box, we propose to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units; until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids the explicit computation of the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML, although its implementation unfortunately remains computationally inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that they can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the level of the representation and a better classification rate when few labelled data are available.
We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
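For context, the sketch below implements plain stochastic maximum likelihood (persistent contrastive divergence) training of a small binary RBM on toy data; the thesis's contributions (adaptive tempering of the chains, tracking of the log-partition function, MFNG, ssRBM variants) would sit on top of this basic loop and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy binary data: noisy copies of two prototype patterns.
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = protos[rng.integers(0, 2, size=200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))     # flip 5% of the bits

n_vis, n_hid = data.shape[1], 4
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

# Persistent Markov chains used by stochastic maximum likelihood (SML/PCD).
chains = rng.integers(0, 2, size=(20, n_vis)).astype(float)
lr = 0.05

for epoch in range(200):
    # Positive phase: hidden activations driven by the data.
    h_data = sigmoid(data @ W + b_h)
    # Negative phase: advance the persistent chains by one Gibbs step.
    h_chain = (rng.random((len(chains), n_hid)) < sigmoid(chains @ W + b_h)).astype(float)
    chains = (rng.random((len(chains), n_vis)) < sigmoid(h_chain @ W.T + b_v)).astype(float)
    h_model = sigmoid(chains @ W + b_h)
    # SML estimate of the log-likelihood gradient.
    W += lr * (data.T @ h_data / len(data) - chains.T @ h_model / len(chains))
    b_v += lr * (data.mean(0) - chains.mean(0))
    b_h += lr * (h_data.mean(0) - h_model.mean(0))

print("learned visible biases:", np.round(b_v, 2))
```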

Relevance: 30.00%

Publisher:

Abstract:

Aim: To develop a new medium for enhanced production of biomass of the aquaculture probiotic Pseudomonas MCCB 103 and its antagonistic phenazine compound, pyocyanin. Methods and Results: Carbon and nitrogen sources and growth factors, such as amino acids and vitamins, were screened initially in a mineral medium for the biomass and antagonistic compound of Pseudomonas MCCB 103. The selected ingredients were further optimized using a full-factorial central composite design of the response surface methodology. The medium optimized as per the model for biomass contained mannitol (20 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5 g l⁻¹), urea (3.3 g l⁻¹) and mineral salts solution (20 ml l⁻¹), and the one optimized for the antagonistic compound contained mannitol (2 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5.1 g l⁻¹), urea (3.6 g l⁻¹) and mineral salts solution (20 ml l⁻¹). Subsequently, the model was validated experimentally, with a 19% increase in biomass and a fivefold increase in the antagonistic compound. Conclusion: A significant increase in biomass and antagonistic compound production could be obtained in the new media. Significance and Impact of the Study: Media formulation and optimization are the primary steps in bioprocess technology, an attempt not made so far in the production of aquaculture probiotics.
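As a sketch of the response-surface step described above, the code below fits a second-order model to a toy central-composite design in coded units and locates the predicted optimum inside the design region; the design points and responses are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy central-composite-style design in coded units for two factors
# (e.g. mannitol and urea levels); the responses are illustrative only.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([2.1, 3.0, 2.6, 3.9, 1.9, 3.5, 2.2, 3.1, 4.0, 4.1, 3.9])  # biomass (g/L)

def features(x):
    """Second-order model terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # fit the response surface

def predicted(x):
    return features(np.asarray(x, dtype=float)) @ beta

res = minimize(lambda x: -predicted(x), x0=[0.0, 0.0],
               bounds=[(-1.41, 1.41)] * 2)                # stay inside the design region
print("optimum (coded units):", np.round(res.x, 2),
      "predicted response:", round(float(-res.fun), 2))
```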

Relevance: 30.00%

Publisher:

Abstract:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values each time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes.

In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirlosker turn master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the optimum machining parameters were obtained from the S/N analysis.

Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively examine new design solutions in the relevant search spaces in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis and validated against published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness and accuracy, and it was observed how often they outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed using MATLAB. For each evolutionary algorithm, optimum cutting conditions are provided to achieve a better surface finish.

The computational results using SA clearly demonstrate that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations; the results show that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA; the improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial population scheme to provide a faster search.

Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius for minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature.
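As one example of the heuristic optimizers compared in the thesis, here is a minimal simulated annealing loop that minimizes a hypothetical surface-roughness response-surface model over cutting speed, feed and depth of cut; the model coefficients and bounds are assumptions for illustration (the thesis used MATLAB implementations and its own fitted model).

```python
import numpy as np

rng = np.random.default_rng(1)

def surface_roughness(v, f, d):
    """Hypothetical response-surface model for Ra (um) as a function of cutting
    speed v (m/min), feed f (mm/rev) and depth of cut d (mm); coefficients are
    illustrative, not the thesis's fitted model."""
    return 4.0 - 0.015 * v + 12.0 * f + 0.6 * d + 0.00002 * v**2 + 25.0 * f**2

bounds = np.array([[60.0, 180.0],   # cutting speed
                   [0.05, 0.30],    # feed
                   [0.5, 2.0]])     # depth of cut

x = bounds.mean(axis=1)                        # start in the middle of the ranges
best_x, best_f = x.copy(), surface_roughness(*x)
T = 1.0
for it in range(5000):
    step = rng.normal(scale=0.05 * (bounds[:, 1] - bounds[:, 0]))
    cand = np.clip(x + step, bounds[:, 0], bounds[:, 1])
    f_cand, f_cur = surface_roughness(*cand), surface_roughness(*x)
    # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
    if f_cand < f_cur or rng.random() < np.exp(-(f_cand - f_cur) / T):
        x = cand
        if f_cand < best_f:
            best_x, best_f = cand.copy(), f_cand
    T *= 0.999                                  # geometric cooling schedule

print("best parameters [v, f, d]:", np.round(best_x, 3), "Ra:", round(best_f, 3))
```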

Relevance: 30.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thereby improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code.

An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward the correct use of difficult microcontroller features when developing embedded systems.
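The redundant bank-switching detection can be illustrated with a minimal sketch that tracks the active-bank state along a simplified, straight-line instruction sequence and flags bank-select instructions that do not change the state; the instruction format is invented for illustration, and real PIC machine code and control flow are considerably richer.

```python
# Simplified (hypothetical) instruction list: (mnemonic, operand) tuples.
program = [
    ("BANKSEL", 1),
    ("MOVWF", "PORTB"),
    ("BANKSEL", 1),      # redundant: bank 1 is already active
    ("MOVWF", "TRISB"),
    ("BANKSEL", 0),
    ("MOVF", "STATUS"),
]

def find_redundant_bank_switches(instructions):
    """Flag bank-select instructions that select the bank already active,
    by tracking the active-bank state along a straight-line sequence."""
    active_bank = None               # bank is unknown at entry to the block
    redundant = []
    for idx, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == active_bank:   # selecting the bank that is already active
                redundant.append(idx)
            active_bank = arg
    return redundant

print("redundant bank-select instructions at indices:",
      find_redundant_bank_switches(program))
```

Extending this idea across branching code is where the relation matrix and state transition diagram mentioned above come in, since the active bank must then be tracked per control-flow path.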


Relevance: 30.00%

Publisher:

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs (siRNAs). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, cholesterol-related disorders and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. When designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods have already been developed that consider both the inhibition efficiency and the off-target possibility of an siRNA against a gene; of these, only a few have achieved good inhibition efficiency, specificity and sensitivity.

The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA, in terms of inhibition capacity and off-target possibility, against target mRNAs with improved efficacy, which may be useful in gene silencing and drug design for tumor development. This study investigates the currently available siRNA prediction approaches and devises a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in building an improved solution. The approaches proposed in this study extend some of the best-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches, together with thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes, with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences, and the validations are conducted using the Pearson correlation coefficient, the Matthews correlation coefficient, receiver operating characteristic analysis, prediction accuracy, sensitivity and specificity. It is found that OpsiD is capable of predicting the inhibition capacity of an siRNA against a target mRNA with improved results over the state-of-the-art techniques, and the influence of whole stacking energy on siRNA efficiency is also clarified.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since efforts have been made to optimize siRNA efficacy in terms of both inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing in various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing, and the approach may be useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
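A minimal sketch of the SVM classification idea, assuming scikit-learn is available: siRNA sequences are reduced to a few hand-crafted features (GC content and two positional indicators, as stand-ins for the richer feature set, including whole stacking energy, used in the thesis) and an SVC separates efficient from inefficient sequences. The sequences, labels and features are illustrative placeholders, not a real training set.

```python
import numpy as np
from sklearn.svm import SVC

# Toy labelled 19-nt siRNA guide sequences (1 = efficient, 0 = inefficient).
seqs = ["AUGGCUACGUAGCUAGCUA", "GCGCGGCGCCGCGGCGCGC",
        "AUUUAGUACGAUAGCUUAA", "GGCGGCCGGCGGCCGGCGG",
        "AUGCAUGCAUGCAUGCAUG", "CCGGCCGGCCGGCCGGCCG"]
labels = np.array([1, 0, 1, 0, 1, 0])

def featurize(seq):
    gc = sum(c in "GC" for c in seq) / len(seq)      # GC content
    a_start = 1.0 if seq[0] in "AU" else 0.0         # 5' A/U indicator
    u_pos10 = 1.0 if seq[9] == "U" else 0.0          # position-10 U indicator
    return [gc, a_start, u_pos10]

X = np.array([featurize(s) for s in seqs])
model = SVC(kernel="rbf", C=1.0).fit(X, labels)      # efficient vs inefficient

test = "AUCGAUAGCUAUUAGCAUA"
print("predicted class for", test, "->", model.predict([featurize(test)])[0])
```

In practice the off-target screen would be a separate step, e.g. comparing each candidate against the transcriptome for near-perfect matches before accepting it.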

Relevance: 30.00%

Publisher:

Abstract:

This work covers the electromechanical design and design optimization of widely tunable, multi-membrane-based optical devices with vertically oriented cavities, based on the finite element method (FEM). A multi-membrane InP/air Fabry-Pérot optical filter is presented and comprehensively analyzed, and a systematic structural design procedure is described. Accurate analytical electromechanical models for the devices have been derived; these can be invaluable tools for quickly providing clear insight at the beginning of the design phase. Using the FEM program, the stiffening effect caused by non-linear strain was investigated and its effect on extending the mechanical tuning range of the devices was demonstrated. It was also interesting to observe that the normalized relation between deflection and voltage has an invariant profile. The deformation of the membrane surfaces of the device geometries presented in this work proved to be an undesirable, yet sometimes unavoidable, effect; it turns out, however, that the choice of the structural dimensions influences the degree of membrane deformation under actuation. This work presents a quasi-3D electromechanical model implemented in FEMLAB that can be applied generally to the modeling of thin structures, by treating them as 2D objects and taking the third dimension as a constant quantity (e.g. the layer thickness) or as a quantity given by a mathematical function. This assumption drastically reduces the computation time as well as the required memory, and the model was further used to investigate the effect of scaling the tunable devices. A novel scaling technique was derived and applied; the results show that the resulting scaled device exhibits almost exactly the same mechanical tuning behavior as the unscaled one. Including the influence of axial stress and stress gradients in the calculations required modifying the standard implementation of the 3D mechanics mode supplied with the FEM software used. The results of this study show a strong influence of stress on the tuning characteristics of the investigated devices. Furthermore, the results of the theoretical model calculations agree very well with the experimental results.
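A lumped parallel-plate model illustrates the kind of quick analytical electromechanical estimate mentioned above; the area, gap and spring constant are illustrative values, not the fabricated InP/air filter's parameters, and the model reproduces the classic result that stable electrostatic travel is limited to roughly one third of the gap.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative lumped parameters for an electrostatically actuated membrane.
eps0 = 8.854e-12        # vacuum permittivity (F/m)
A = (100e-6) ** 2       # electrode area: 100 um x 100 um membrane
g0 = 3e-6               # initial air gap (m)
k = 30.0                # effective spring constant of the membrane (N/m)

def equilibrium_deflection(V):
    """Solve k*x = eps0*A*V^2 / (2*(g0 - x)^2) for the stable deflection x."""
    f = lambda x: k * x - eps0 * A * V**2 / (2 * (g0 - x) ** 2)
    # The stable equilibrium branch lies below one third of the gap (pull-in limit).
    return brentq(f, 0.0, g0 / 3 - 1e-12)

V_pull_in = np.sqrt(8 * k * g0**3 / (27 * eps0 * A))
for V in np.linspace(0.0, 0.95 * V_pull_in, 5):
    print(f"V = {V:5.1f} V -> deflection = {equilibrium_deflection(V) * 1e9:6.1f} nm")
print(f"pull-in voltage ~ {V_pull_in:.1f} V (tuning limited to ~g0/3)")
```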

Relevance: 30.00%

Publisher:

Abstract:

This thesis investigates a method for human-robot interaction (HRI) that upholds the productivity of industrial robots, e.g. by minimizing operation time, while ensuring human safety, e.g. through collision avoidance. To solve such problems, an online motion planning approach for robotic manipulators with HRI is proposed. The approach is based on model predictive control (MPC) with embedded mixed-integer programming. The planning strategies for the robotic manipulators considered in the thesis are performed directly in the workspace for easy obstacle representation. The non-convex optimization problem is approximated by a mixed-integer program (MIP), which is further reformulated so that the number of binary variables and the number of feasible integer solutions are drastically decreased. Safety-relevant regions, which are potentially occupied by the human operators, can be generated online by a proposed method based on hidden Markov models. In contrast to previous approaches, which derive predictions based on probability density functions in the form of single points, such as the most likely or expected human positions, the proposed method computes safety-relevant subsets of the workspace as regions that may be occupied by the human at future instances of time. The method is further enhanced by combining it with reachability analysis to increase the prediction accuracy. These safety-relevant regions can subsequently serve as safety constraints when the motion is planned by optimization. This way one arrives at motion plans that are safe, i.e. plans that avoid collision with a probability not less than a predefined threshold. The developed methods have been successfully applied to a demonstrator in which an industrial robot works in the same space as a human operator. The task of the industrial robot is to drive its end-effector through a nominal sequence of gripping, motion and releasing operations while no collision with a human arm occurs.
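To illustrate how mixed-integer constraints encode avoidance of a safety-relevant region, the sketch below plans waypoints for a 2D point robot (a deliberate simplification of the manipulator setting) around a forbidden box using big-M disjunctions; solving it requires a mixed-integer-capable solver to be installed for cvxpy, and all geometry and bounds are assumed values.

```python
import numpy as np
import cvxpy as cp

N = 10                                              # planning horizon (waypoints)
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
box_lo, box_hi = np.array([0.35, 0.35]), np.array([0.65, 0.65])   # forbidden box
M = 10.0                                            # big-M constant
v_max = 0.2                                         # maximum step length (inf-norm)

p = cp.Variable((N + 1, 2))                         # waypoint positions
z = cp.Variable((N + 1, 4), boolean=True)           # one binary per box face

constraints = [p[0] == start, p[N] == goal]
for k in range(N + 1):
    # Each waypoint must lie outside the box on at least one of its four sides:
    # a constraint with z = 0 is enforced, a constraint with z = 1 is relaxed by M.
    constraints += [
        p[k, 0] <= box_lo[0] + M * z[k, 0],
        p[k, 0] >= box_hi[0] - M * z[k, 1],
        p[k, 1] <= box_lo[1] + M * z[k, 2],
        p[k, 1] >= box_hi[1] - M * z[k, 3],
        cp.sum(z[k, :]) <= 3,                        # at least one side stays enforced
    ]
for k in range(N):
    constraints += [cp.norm(p[k + 1] - p[k], "inf") <= v_max]   # limited step size

objective = cp.Minimize(cp.sum_squares(p[1:] - p[:-1]))          # short, smooth path
problem = cp.Problem(objective, constraints)
problem.solve()                                     # needs a MIP-capable solver
if p.value is not None:
    print("planned waypoints:\n", np.round(p.value, 3))
```

In a receding-horizon (MPC) setting, only the first step of such a plan would be executed, the safety-relevant box would be updated from the human-motion prediction, and the problem re-solved.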

Relevance: 30.00%

Publisher:

Abstract:

Summary - Cooking banana is one of the most important crops in Uganda; it is a staple food and a source of household income in rural areas. The most common cooking banana is locally called matooke, a Musa sp. triploid of the acuminate genome group (AAA-EAHB). It is perishable and traded in fresh form, leading to very high postharvest losses (22-45%). This is attributed to a non-uniform level of harvest maturity, poor handling, bulk transportation and a lack of value-addition/processing technologies, which are currently the main challenges for the trade, export and diversified utilization of matooke. Drying is one of the oldest technologies employed in the processing of agricultural produce. A lot of research has been carried out on the drying of fruits and vegetables, but little information is available on matooke. Drying matooke and milling it into flour extends its shelf-life and is an important means of overcoming the above challenges. Raw matooke flour is a generic flour developed to improve the shelf stability of the fruit and to find alternative uses. It is rich in starch (80-85% db) and consequently has high potential as a calorie resource base, and it possesses good properties for both food and non-food industrial use. Some effort has been made to commercialize the processing of matooke, but there is still limited information on its processing into flour. It was therefore imperative to carry out an in-depth study to bridge the following gaps: the lack of accurate information on the maturity window within which matooke for processing into flour can be harvested, which leads to non-uniform quality of matooke flour; the absence of moisture sorption isotherms for matooke, from which the minimum equilibrium moisture content in relation to temperature and relative humidity is obtainable, below which dry matooke would be microbiologically shelf-stable; and the lack of information on the drying behavior of matooke and standardized processing parameters in relation to the physicochemical properties of the flour. The main objective of the study was to establish the optimum harvest maturity window and optimize the processing parameters for obtaining standardized, microbiologically shelf-stable matooke flour with good starch quality attributes. This research was designed to: i) establish the optimum harvest maturity window within which matooke can be harvested to produce a consistent quality of matooke flour, ii) establish the sorption isotherms for matooke, iii) establish the effect of process parameters on the drying characteristics of matooke, iv) optimize the drying process parameters for matooke, v) validate the models of maturity and optimum process parameters, and vi) standardize process parameters for commercial processing of matooke. Samples were obtained from a banana plantation of the Presidential Initiative on Banana Industrial Development (PIBID), Technology Business Incubation Center (TBI), at Nyaruzunga, Bushenyi, in Western Uganda. A completely randomized design (CRD) was employed in selecting the banana stools from which samples for the experiments were picked. The cultivar Mbwazirume, which is soft-cooking and commonly grown in Bushenyi, was selected for the study. The static gravimetric method recommended by the COST 90 Project (Wolf et al., 1985) was used for the determination of the moisture sorption isotherms, and a research dryer was developed for this study. All experiments were carried out in laboratories at TBI. The physiological maturity of matooke cv. Mbwazirume at Bushenyi is 21 weeks.
The optimum harvest maturity window for commercial processing of matooke flour (Raw Tooke Flour, RTF) at Bushenyi is between 15 and 21 weeks. The finger-weight model is recommended for farmers to estimate harvest maturity for matooke, and the combined model of finger weight and pulp-to-peel ratio is recommended for commercial processors. The matooke isotherms exhibited type II curve behavior, which is characteristic of foodstuffs, and the GAB model best described all the adsorption and desorption moisture isotherms. For commercial processing of matooke, in order to obtain a microbiologically shelf-stable dry product, it is recommended to dry it to a moisture content at or below 10% (wb). The hysteresis phenomenon was exhibited by the moisture sorption isotherms for matooke. The isosteric heat of sorption for both the adsorption and desorption isotherms increased with decreasing moisture content. The total isosteric heat of sorption for matooke ranged from 4,586 to 2,386 kJ/kg for the adsorption isotherm and from 18,194 to 2,391 kJ/kg for the desorption isotherm, over equilibrium moisture contents from 0.3 to 0.01 (db) respectively. The minimum energy required for drying matooke from 80 to 10% (wb) is 8,124 kJ/kg of water removed, implying that the minimum energy required for drying 1 kg of fresh matooke from 80 to 10% (wb) is 5,793 kJ. The drying of matooke takes place in three phases: a warm-up period and two falling-rate periods. The drying rate constant was determined for all processing parameters, and the effective diffusivity ranged from 1.5E-10 to 8.27E-10 m²/s. The activation energy (Ea) for matooke was 16.3 kJ/mol (1,605 kJ/kg). Comparing the activation energy with the net isosteric heat of sorption for the desorption isotherm (qst = 1,297.62 kJ/kg at 0.1 kg water/kg dry matter) indicated that Ea was higher than qst, suggesting that moisture molecules travel in liquid form within the matooke slices. The total color difference (ΔE*) between the fresh and dry samples was lowest for a slice thickness of 7 mm, followed by an air velocity of 6 m/s, and then a drying air temperature of 70°C. A drying system controlled by a set product surface temperature reduced the drying time by 50% compared with a drying system controlled by a set drying air temperature. The processing parameters did not have a significant effect on the physicochemical and quality attributes, suggesting that any drying air temperature can be used in the initial stages of drying as long as the product temperature does not exceed the gelatinization temperature of matooke (72°C). The optimum processing parameters for single-layer drying of matooke are: thickness 3 mm, air temperature 70°C, dew point temperature 18°C and air velocity 6 m/s in overflow mode. From a practical point of view, it is recommended that commercial processing of matooke employ multi-layer drying with a loading capacity of 7 kg/m² or less, thickness 3 mm, air temperature 70°C, dew point temperature 18°C and air velocity 6 m/s in overflow mode.
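The GAB isotherm fitting mentioned above can be sketched as follows; the water-activity and moisture data, the initial parameter guesses and the conversion to the 10% (wb) recommendation are illustrative, not the thesis's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Xm, C, K):
    """GAB sorption isotherm: equilibrium moisture content (db) vs water activity."""
    return Xm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Illustrative water-activity / equilibrium-moisture pairs (dry basis).
aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.65, 0.75, 0.85])
Xeq = np.array([0.020, 0.032, 0.041, 0.052, 0.063, 0.082, 0.108, 0.155])

params, _ = curve_fit(gab, aw, Xeq, p0=[0.05, 10.0, 0.8], maxfev=10000)
Xm, C, K = params
print(f"Xm = {Xm:.4f} (db), C = {C:.2f}, K = {K:.3f}")

# Read off the water activity corresponding to a target storage moisture content,
# here the ~10% (wb) level recommended above, converted to dry basis.
target_db = 0.10 / (1 - 0.10)
grid = np.linspace(0.05, 0.95, 500)
aw_at_target = grid[np.argmin(np.abs(gab(grid, *params) - target_db))]
print(f"water activity at ~10% (wb): {aw_at_target:.2f}")
```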