948 results for: diffusive viscoelastic model, global weak solution, error estimate


Relevance:

40.00%

Publisher:

Abstract:

The structures of the global economy are changing constantly, and companies operate in international markets more than before. To increase production, many companies have outsourced the support and maintenance functions of their products to low-cost countries, allowing them to concentrate on their core competence. The freed resources can be used for internal product development and invested in developing the next generation of products and technologies. This Master's thesis presents a globally distributed delivery model for an Internet service provider in which product support and maintenance have been outsourced to India. The theoretical part introduces different delivery models, focusing in particular on the distributed delivery model; in addition, it lists selection criteria for assessing a project's suitability for outsourcing and presents the opportunities and threats inherent in a global outsourcing process. The practical part presents a global service delivery process developed for the needs of the Internet service provider.

Relevance:

40.00%

Publisher:

Abstract:

VALOSADE (Value Added Logistics in Supply and Demand Chains) is a research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is part of the ELO (Ebusiness logistics) technology programme of Tekes (the Finnish Technology Agency). SMILE (SME-sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. SMILE focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood processing customers. The basic theme of the SMILE study is communication and ebusiness in the supply and demand network. This first phase of the research creates the background for the SMILE study and for the ebusiness solutions of the maintenance case network. The focus is on general trends of ebusiness in supply chains and networks of different industries; the total ebusiness system architecture of company networks; the ebusiness strategy of a company network; the information value chain; the different factors that influence the ebusiness solution of a company network; and the correlation between ebusiness and competitive advantage. Literature, interviews and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organizational structures that can add value for the end customer. Information is one of the key factors in these decentralized structures. Because business is decentralized, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows between the different systems in company networks. Furthermore, new solutions such as network information systems are utilised to optimise network performance and to standardize the network's common processes. Some cases have, however, indicated that utilising ebusiness in a decentralized business model is not always a necessity; the added value of ICT must be defined case-specifically. In the theory part of the report, different ebusiness and architecture models are introduced. These models are compared with the empirical case data in the research results. The biggest difference between theory and the empirical data is that the models have mainly been developed for large companies, not for SMEs, because implemented network ebusiness solutions have mainly been large-company centred. Genuine SME-network-centred ebusiness models are quite rare, and studies in that area have been few in number. Business relationships between customers and their SME suppliers nowadays concentrate on collaborative tactical and strategic initiatives in addition to transaction-based operational initiatives. However, ebusiness systems are still mainly based on the exchange of operational transactional data. Collaborative ebusiness solutions are in the planning or pilot phase in most case companies. Furthermore, many ebusiness solutions today connect only two participants; network-wide and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of exchanged information, information criticality, the type and duration of the business relationship, the partners' internal information systems, and processes and operating models (e.g. different ordering models) differ among the network companies, and the companies are at different stages of networking and ebusiness readiness. Because of these factors, different customer-supplier combinations in the network must utilise quite different ebusiness architectures, technologies, systems and standards.

Relevance:

40.00%

Publisher:

Abstract:

The present study considers two different servo systems. In the first, a servo-hydraulic system is identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electromagnetic linear motor, suppression of mechanical vibration and position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These systems are nonlinear in nature and their dynamic equations contain several unknown parameters, and system identification is a prerequisite for the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, in order to find the best parameters of a servo-hydraulic system with a flexible load. DE achieves fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. Such systems are nonlinear in nature and generally difficult to control: with fixed gains, changing system parameters can cause overshoot or even loss of stability. The highly nonlinear behaviour of these devices makes them ideal subjects for sophisticated controllers. The study considers a second-order reference model for position control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. To compensate for the lack of damping in the hydraulic system, acceleration feedback was used. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position-tracking error. The second part of the research concerns a Permanent Magnet Linear Synchronous Motor (PMLSM). A recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. A differential evolution strategy and a Kalman filter are used to avoid the local-minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load-control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to drive the flexible load to the desired position reference as fast as possible and without excessive oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using a Kalman filter. The proposed controller is implemented and tested on a linear motor test drive, and the responses are presented.
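
As an illustration of the identification step, the sketch below fits the unknown parameters of a hypothetical second-order servo model to noisy step-response data by minimising the squared output error with SciPy's differential_evolution. The model structure, parameter bounds and synthetic data are assumptions for illustration, not the thesis's actual servo-hydraulic test rig.

# Minimal sketch: Differential Evolution for servo parameter identification.
# The second-order model, bounds and synthetic "measurements" are illustrative
# assumptions; the thesis identifies a servo-hydraulic system with flexible load.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

def servo_model(x, t, wn, zeta, gain):
    # x = [position, velocity]; unit step input
    pos, vel = x
    acc = gain * wn**2 * 1.0 - 2 * zeta * wn * vel - wn**2 * pos
    return [vel, acc]

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 200)
true_params = (8.0, 0.2, 1.5)                          # hypothetical "unknown" plant
measured = odeint(servo_model, [0, 0], t, args=true_params)[:, 0]
measured += rng.normal(0, 0.01, measured.shape)        # measurement noise

def cost(params):
    # Squared error between simulated and measured step responses.
    sim = odeint(servo_model, [0, 0], t, args=tuple(params))[:, 0]
    return np.sum((sim - measured) ** 2)

bounds = [(1.0, 20.0), (0.05, 1.0), (0.5, 3.0)]        # box limits on (wn, zeta, gain)
result = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print("estimated (wn, zeta, gain):", result.x)

Because DE only needs cost-function values, the same set-up carries over unchanged when the simulated plant is replaced by a more realistic nonlinear hydraulic model.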

Relevance:

40.00%

Publisher:

Abstract:

Background: The design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models of the processes to be optimized. Such models can be analyzed and, with the development and application of appropriate optimization techniques, used to identify the modifications that must be made to the organism in order to achieve the desired biotechnological goal. As models appropriate for such an analysis are necessarily nonlinear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution because their mathematical structure enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, in previous work we developed a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic nonlinear models into an equivalent GMA model, so that global optimization can be performed on the recast GMA model. With this technique, optimization is greatly facilitated and the results are transposable to the original nonlinear problem. The procedure is straightforward for a particular class of nonlinear models known as Saturable and Cooperative (SC) models, which extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting nonlinear kinetic models into GMA models is indeed an appropriate strategy that helps to overcome some of the numerical difficulties that arise during the global optimization task.
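
As a concrete instance of the recasting step, a saturable Michaelis-Menten rate can be turned into an exact GMA (power-law) form by introducing an auxiliary variable; this is the standard textbook recasting construction, written in our notation rather than the paper's:

\[
\dot{X} = \frac{V X}{K + X}
\;\;\xrightarrow{\;Z \,=\, K + X\;}\;\;
\dot{X} = V X Z^{-1}, \qquad \dot{Z} = \dot{X} = V X Z^{-1}, \qquad Z(0) = K + X(0).
\]

Both right-hand sides are now single products of power laws, i.e. admissible GMA terms, and the recast system reproduces the original trajectories exactly, which is what makes optima found on the recast model transposable to the original nonlinear problem.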

Relevance:

40.00%

Publisher:

Abstract:

This thesis studies the effect of software architecture design properties on the design and implementation time of a mobile service application based on a client-server architecture. The study builds on a real-life project whose qualitative analysis revealed that the couplings between architectural components significantly affect the project effort. The main goal of the work was to examine the validity of this observation quantitatively. To this end, a set of software architecture design metrics was defined to describe the architecture of the system's subsystems, and two models using these metrics, one linear and one nonlinear, were created to estimate effort (the sum of a component's design, implementation and testing times). The coefficients of the models were fitted by optimising their values with a nonlinear global optimization method, the differential evolution algorithm, so that the model outputs best matched the measured effort, both with all the properties (attributes) and with subsets of them (leaving one attribute out in turn). When the couplings between architectural components were left out of the models, the difference between measured and estimated effort (expressed as an error) grew in one case by 367% relative to the original, meaning that the model formed this way matched the implementation times poorly on the given data. This was the largest error observed across all the omitted attributes. Based on this result, it was concluded that the implementation times of the system depend strongly on the number of couplings, and that the number of couplings was therefore most likely the single most important factor affecting effort in the architectural design of the studied system.
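
A minimal sketch of the leave-one-attribute-out comparison described above follows. The metric names and data are invented; and while the thesis fits its models with the differential evolution algorithm, for a purely linear model ordinary least squares reaches the same optimum, so it is used here for brevity.

# Sketch: rank architecture metrics by leave-one-attribute-out error growth.
# Data and metric names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
metrics = ["couplings", "interfaces", "classes", "depth"]
X = rng.uniform(1, 10, size=(30, 4))                # 30 components, 4 metrics
effort = X @ np.array([5.0, 1.0, 0.5, 0.2]) + rng.normal(0, 1, 30)

def fit_error(cols):
    # Fit effort ~ linear combination of the chosen metric columns + intercept.
    A = np.column_stack([X[:, list(cols)], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, effort, rcond=None)
    return np.sum((A @ coef - effort) ** 2)

base = fit_error(range(4))
for i, name in enumerate(metrics):
    reduced = fit_error([j for j in range(4) if j != i])
    print(f"without {name:10s}: error grows {100 * (reduced - base) / base:7.1f} %")

The attribute whose removal inflates the error the most is, by this criterion, the most important driver of effort; in the thesis that attribute was the number of couplings.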

Relevance:

40.00%

Publisher:

Abstract:

User authentication in information systems has been one of the cornerstones of information security for decades. The idea of a username and password is the most cost-effective and most widely used way of maintaining trust between an information system and its users. In the early days of information systems, when companies had only a few systems used by a small group of users, this model proved workable. Over the years the number of systems grew, and with it the number and diversity of passwords. No one could predict how many password-related problems users would encounter, how heavily these would burden corporate help desks, and what kinds of security risks passwords would pose in large companies. This Master's thesis examines the problems caused by passwords in a large, global company. The problems are examined from four perspectives: people, technology, security and business. They are demonstrated by presenting the results of a survey of the company's employees, carried out as part of this thesis. A solution to these problems is presented in the form of a centralized password management system. The various features of the system are evaluated, and a pilot-type implementation is built to demonstrate the functionality of such a system.

Relevance:

40.00%

Publisher:

Abstract:

The goal of this thesis was to define the general structure of the product model used in a sales configurator. First, the creation of the product model and the design of the concept were examined through the literature and expert interviews; the interviews were conducted informally with the help of a question list. In addition, the thesis discusses the role of e-business and the future prospects of the sales configurator. The product model is treated on a general level, and a second perspective examines it through the methods used in software applications. Building the product model started from the part visible to the customer, the user interface of the sales configurator. The next problem was to standardize the documents describing the product and the quotation globally; this solution was arrived at on the basis of the interviews and expert meetings. The final part of the thesis discusses the position of the sales configurator in the target company's e-business and presents one view of integrating the sales configurator with the customer relationship management and product data management systems. The thesis achieved its goals: the sales configurator unifies the target company's pricing globally, speeds up the quotation process, makes the launch of a new product easier and standardizes the product model globally. Integrating the sales configurator with other information systems makes sales operations more efficient. The remaining challenges are encouraging end users to use the system effectively and organizing its maintenance. Without users and their enthusiasm, the project may lose the trust of management.

Relevance:

40.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water) while limiting the computational cost, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, and inference is based on these exact responses. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, which then corrects the remaining approximate responses and predicts the expected responses of the exact model. The methodology uses all the available information without perceptible additional computational cost and makes the uncertainty propagation more accurate and more robust.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow models. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model strongly reduces the computational cost while correctly estimating the uncertainty. Moreover, the individual correction of each proxy response by the error model provides an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is therefore relevant not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice for generating geostatistical realizations in accordance with the observations. However, these methods suffer from very low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, two-stage MCMC, was introduced to avoid unnecessary simulations of the exact model through a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that, as each new flow simulation is performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a problem of saline intrusion in a coastal aquifer.
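
To make the two-stage idea concrete, the sketch below implements the two-stage Metropolis acceptance rule, with a cheap proxy log-likelihood (standing in for the proxy flow model corrected by the error model) screening each proposal before the expensive exact log-likelihood is evaluated. The Gaussian targets, proposal scale and dimension are illustrative assumptions, not the thesis's flow models.

# Sketch of two-stage MCMC: a cheap proxy likelihood screens each proposal;
# the expensive exact likelihood runs only for proposals that pass stage 1.
# The Gaussian targets below are stand-ins for the thesis's flow simulations.
import numpy as np

rng = np.random.default_rng(42)

def log_like_exact(x):          # stand-in for the exact flow model
    return -0.5 * np.sum((x - 1.0) ** 2)

def log_like_proxy(x):          # stand-in for proxy + error-model correction
    return -0.5 * np.sum((x - 1.1) ** 2) / 1.2

x = np.zeros(2)
lp_proxy, lp_exact = log_like_proxy(x), log_like_exact(x)
accepted = exact_calls = 0
for _ in range(5000):
    prop = x + rng.normal(0, 0.5, size=2)           # symmetric random walk
    lp_prop_proxy = log_like_proxy(prop)
    # Stage 1: accept/reject with the cheap proxy only.
    if np.log(rng.uniform()) >= lp_prop_proxy - lp_proxy:
        continue                                     # rejected cheaply, no exact run
    # Stage 2: promotion ratio corrects with the exact likelihood.
    exact_calls += 1
    lp_prop_exact = log_like_exact(prop)
    ratio = (lp_prop_exact - lp_exact) + (lp_proxy - lp_prop_proxy)
    if np.log(rng.uniform()) < ratio:
        x, lp_proxy, lp_exact = prop, lp_prop_proxy, lp_prop_exact
        accepted += 1
print(f"exact-model calls: {exact_calls}, accepted: {accepted}")

Only proposals that survive the cheap first stage trigger an exact-model run, which is where the reported gain in acceptance rate per exact simulation comes from.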

Relevance:

40.00%

Publisher:

Abstract:

This thesis focuses on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations expressing the behaviour and dynamics of Ebola. Two data sets (onset and death data) were used together to estimate the parameters, which had not been done in the earlier work of (Chowell, 2004). To be able to use both data sets, a new version of the model was built. The model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The estimates were useful for determining how well the model fits the data and how informative they are about the possible relationships between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since Bayesian inference cannot be performed analytically here, the Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over the parameters. The samples were used to check the accuracy of the model and other characteristics of the target posteriors.
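
Schematically, and assuming conditional independence of the two data sets given the parameters (an assumption of ours; the thesis's modified model may structure this differently), the posterior combining both data sources reads

\[
p(\theta \mid D_{\mathrm{onset}}, D_{\mathrm{death}}) \;\propto\; p(D_{\mathrm{onset}} \mid \theta)\, p(D_{\mathrm{death}} \mid \theta)\, p(\theta),
\]

and for a standard SEIR-type Ebola formulation such as Chowell's, the basic reproduction number computed from the estimates takes the form \(R_0 = \beta/\gamma\), the transmission rate over the removal rate.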

Relevance:

40.00%

Publisher:

Abstract:

Some of the world's leading companies now compete by providing integrated solutions that identify and solve each customer's business problem, offering services to design, integrate, operate and finance a product or system over its life cycle. At the same time, driven by the requirements of the new global economy, companies are implementing new integrated ERP systems. The objective of this thesis was to define how a solution offering can be implemented in an integrated ERP system so that it is possible to sell, deliver and maintain the solution offering with the new enterprise applications. The research was conducted as a qualitative case study consisting of a literature review, theme interviews and an analysis phase. The study first introduces a new perspective that combines theories of solution business, offering modelling and modern ERP systems. The results illustrate the limitations of an integrated ERP system in supporting solution business and show the need to develop a commercial product model in order to better combine the solution offering with IT systems.

Relevance:

40.00%

Publisher:

Abstract:

Workflow management systems aim at the controlled execution of complex application processes in distributed and heterogeneous environments. These systems will shape the structure of information systems in business and non-business environments. E-business and system integration are fertile soil for workflow (WF) and groupware tools. This thesis studies WF and groupware tools in order to gather in-house knowledge of WF, so that WF solutions can be better utilized in the future, and focuses on SAP Business Workflow in order to find a global solution for Application Link Enabling (ALE) support in system integration. Piloting this solution at Nokia gathers experience of the SAP R/3 WF tool for future development projects. The literature part of the study guides the reader into the world of business process automation, providing a general description of the history, use and potential of WF and groupware software. The empirical part begins with the background of the case study, describing the IT environment that initiated the case: the introduction of SAP R/3 at Nokia, the communication technique in use and the WF tool. The case study focuses on one solution built with SAP Business Workflow. The study provides a concept for monitoring communication between ERP systems and for increasing the quality of system integration, and describes a way to create a support model for ALE/EDI interfaces. The support model includes a monitoring organization and the workflow processes for resolving the most common IDoc-related errors.

Relevance:

40.00%

Publisher:

Abstract:

Metaheuristic methods have become increasingly popular approaches for solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest, especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization with evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima places additional demands on algorithm design, since in the context of multimodal optimization the algorithm must be effective in both respects. In this thesis, several multimodal optimization algorithms are studied with regard to how the implementation of their global and local search phases affects their performance on different problems. The study concentrates especially on variants of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is generally not thoroughly understood, the research relies heavily on experimental studies to determine the properties of the different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and sufficiently varied problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As a part of this thesis, such a framework for generating tunable test functions for evaluating different multimodal optimization methods experimentally is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may invalidate the advantages gained from the global phase.
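
As one concrete niching mechanism of the kind discussed, the sketch below implements crowding-based Differential Evolution: each DE/rand/1/bin trial vector competes against the most similar population member instead of its own parent, so subpopulations can persist on several peaks. The sine-based test function and all parameter values are illustrative assumptions, not the thesis's algorithms or test framework.

# Sketch of crowding Differential Evolution, a classic niching variant:
# each trial vector replaces the *nearest* population member (if better)
# instead of its own parent, so several optima can coexist.
import numpy as np

rng = np.random.default_rng(3)

def f(x):                       # multimodal: five peaks at x = 0.1, 0.3, 0.5, 0.7, 0.9
    return np.sin(5 * np.pi * x) ** 6

NP, F, CR, dim = 30, 0.5, 0.9, 1
pop = rng.uniform(0, 1, (NP, dim))
fit = np.array([f(ind)[0] for ind in pop])

for _ in range(200):
    for i in range(NP):
        idxs = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idxs]
        mutant = np.clip(a + F * (b - c), 0, 1)
        cross = rng.uniform(size=dim) < CR
        cross[rng.integers(dim)] = True              # at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        # Crowding: compete with the nearest individual, not with parent i.
        nearest = np.argmin(np.linalg.norm(pop - trial, axis=1))
        ft = f(trial)[0]
        if ft > fit[nearest]:
            pop[nearest], fit[nearest] = trial, ft

print(np.sort(pop[fit > 0.99].ravel()))              # survivors cluster around the peaks

The crowding replacement rule is the "global phase" design choice here; in the hybrid algorithms the thesis proposes, a separate local search would refine each such cluster.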

Relevance:

40.00%

Publisher:

Abstract:

A model to estimate the damage caused by gray leaf spot of corn (Cercospora zeae-maydis) was developed from experimental field data gathered during the 2000/01 summer seasons and during the second crop season (January sowing) of 2001 in the southwest of Goiás state. Three corn hybrids were grown over two seasons and on two sites, resulting in 12 experimental plots. A disease intensity gradient (lesions per leaf) was generated by applying five different doses of the fungicide propiconazole three times over the season. From tasseling onward, disease intensity on the ear leaf (EL) and on leaves EL - 1, EL - 2, EL + 1 and EL + 2 was evaluated weekly. A manual harvest at the physiological ripening stage was followed by grain drying and cleaning, and grain yield in kg.ha-1 was estimated. Regression analysis between grain yield and all combinations of the number of lesions on each leaf type generated thirty linear equations representing the damage function. Before these models are used to estimate the losses caused by different disease intensities at different corn growth stages, they should first be validated. The damage coefficients may then be used to determine the economic damage threshold.
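
Each fitted damage function has the form of a simple linear regression; in our notation (not the paper's),

\[
Y = \beta_0 + \beta_1 x,
\]

where \(Y\) is grain yield (kg/ha), \(x\) the number of lesions on a given leaf at a given growth stage, \(\beta_0\) the attainable yield without disease, and \(\beta_1 < 0\) the damage coefficient. Under the usual break-even reasoning (an assumption of ours, not stated in the abstract), control pays off beyond the intensity at which the prevented loss equals the control cost, i.e. \(x^{*} = C / (P\,\lvert\beta_1\rvert)\) lesions for control cost \(C\) and grain price \(P\).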

Relevance:

40.00%

Publisher:

Abstract:

A model to estimate the yield loss caused by Asian soybean rust (ASR) (Phakopsora pachyrhizi) was developed by collecting data from field experiments during the 2009/10 and 2010/11 growing seasons in Passo Fundo, RS. The disease intensity gradient, evaluated at the phenological stages R5.3, R5.4 and R5.5 on the basis of leaflet incidence (LI) and the number of uredinia and lesions/cm2, was generated by applying azoxystrobin 60 g a.i./ha + cyproconazole 24 g a.i./ha + 0.5% of the adjuvant Nimbus. The first application occurred when LI = 25% and the remaining ones at 10-, 15-, 20- and 25-day intervals. Harvest occurred at physiological maturity and was followed by grain drying and cleaning. Regression analysis between grain yield and the disease intensity assessment criteria generated 56 linear equations of the yield loss function. The greatest loss was observed at the earliest growth stage, and the yield loss coefficients ranged from 3.41 to 9.02 kg/ha for each 1% of leaflet incidence, from 13.34 to 127.4 kg/ha per lesion/cm2, and from 5.53 to 110.0 kg/ha per uredinium/cm2.
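
As a worked reading of these coefficients (our arithmetic, not the paper's): at the most damaging stage, a coefficient of 9.02 kg/ha per 1% LI means that a leaflet incidence of 30% corresponds to an estimated loss of \(9.02 \times 30 \approx 271\) kg/ha, while the mildest coefficient, 3.41 kg/ha per 1% LI, implies about 102 kg/ha at the same incidence.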

Relevance:

40.00%

Publisher:

Abstract:

This study examined solution business models and how they could be applied to the energy efficiency business. The target was to find out what a functional solution business model applied to energy efficiency improvement projects looks like. The term "functionality" refers not only to economic viability but also to environmental and legal aspects, to the implementation of Critical Success Factors (CSFs), and to the ability to overcome the most important market barriers and risks. The thesis is based on a comprehensive literature study on solution business, business models and the energy efficiency business. This literature review was used as the foundation of an energy efficiency solution business model scheme. The created scheme was tested in a case study that examined two different energy efficiency improvement projects, illustrated the functionality of the created business model and evaluated their potential as customer targets. The solution approach was found to be suitable for the energy efficiency business. The most important characteristics of a good solution business model were identified as the relationship between the supplier and the customer, a proper network, knowledge of the customer's process, and superior technological expertise. The energy efficiency solution business was thus recognized as particularly suitable for, for example, energy suppliers or suppliers of technological equipment. Because the case study was not executed from a particular company's point of view, the most important factors, such as relationships and the availability of funding, could not be evaluated. Although the energy efficiency business is recognized as economically viable, the most important factors influencing the profitability and success of an energy efficiency solution business model were identified as proper risk management, the ability to overcome market barriers and the realization of the CSFs.