916 results for modeling and prediction


Relevance: 90.00%

Abstract:

A statewide study was conducted to develop regression equations for estimating flood-frequency discharges for ungaged stream sites in Iowa. Thirty-eight selected basin characteristics were quantified and flood-frequency analyses were computed for 291 streamflow-gaging stations in Iowa and adjacent States. A generalized-skew-coefficient analysis was conducted to determine whether generalized skew coefficients could be improved for Iowa. Station skew coefficients were computed for 239 gaging stations in Iowa and adjacent States, and an isoline map of generalized-skew-coefficient values was developed for Iowa using variogram modeling and kriging methods. The skew map provided the lowest mean square error for the generalized-skew-coefficient analysis and was used to revise generalized skew coefficients for flood-frequency analyses for gaging stations in Iowa. Regional regression analysis, using generalized least-squares regression and data from 241 gaging stations, was used to develop equations for three hydrologic regions defined for the State. The regression equations can be used to estimate flood discharges that have recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for ungaged stream sites in Iowa. One-variable equations were developed for each of the three regions, and multi-variable equations were developed for two of the regions. Two sets of equations are presented for two of the regions because one-variable equations are considered easier for users to apply, whereas multi-variable equations have greater predictive accuracy. The standard error of prediction ranges from about 34 to 45 percent for the one-variable equations and from about 31 to 42 percent for the multi-variable equations. A region-of-influence regression method was also investigated for estimating flood-frequency discharges for ungaged stream sites in Iowa. A comparison of the regional and region-of-influence regression methods, based on ease of application and root mean square errors, determined the regional regression method to be the better estimation method for Iowa. Techniques for estimating flood-frequency discharges for streams in Iowa are presented for determining (1) regional regression estimates for ungaged sites on ungaged streams; (2) weighted estimates for gaged sites; and (3) weighted estimates for ungaged sites on gaged streams. The technique for determining regional regression estimates for ungaged sites on ungaged streams requires determining which of four possible examples applies to the location of the stream site and its basin. Illustrations for determining which example applies to an ungaged stream site and for applying both the one-variable and multi-variable regression equations are provided for the estimation techniques.
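For illustration, the sketch below shows the general shape of the techniques described above: a one-variable regional regression equation of the form Q_T = a·A^b, and a log-space weighting of a gaged-site estimate with a regression estimate. The coefficients, the weighting scheme, and all numbers are placeholders for illustration, not values from the report.

```python
import math

def regression_estimate(drainage_area_sq_mi, a=120.0, b=0.65):
    """One-variable regional regression of the general form Q_T = a * A^b.

    The coefficients a and b here are placeholders; the report tabulates
    the actual values per region and recurrence interval.
    """
    return a * drainage_area_sq_mi ** b

def weighted_gaged_estimate(q_station, q_regression, n_years, eq_years):
    """Weight a station flood-frequency estimate with a regression estimate
    in log space, by years of record (N) and equivalent years of record (E).

    This is the conventional form of such weighting, shown as an assumption
    rather than the report's exact scheme.
    """
    log_qw = (n_years * math.log10(q_station) +
              eq_years * math.log10(q_regression)) / (n_years + eq_years)
    return 10 ** log_qw

# Hypothetical example: regression estimate for a 250 sq mi ungaged basin,
# and a weighted estimate for a gaged site with 35 years of record.
print(round(regression_estimate(250.0)))
print(round(weighted_gaged_estimate(9500.0, 8200.0, 35, 12)))
```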

Relevance: 90.00%

Abstract:

Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected over time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data in the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
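As a rough illustration of the sampling machinery (not the authors' implementation), the following minimal Metropolis sampler draws posterior realizations of facies log-K values given a stand-in coupled forward model; the forward model, observed data, and error level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the coupled forward model: maps facies
# log10(K) values to predicted data. The real model chains hydrological
# transport and resistivity physics.
def forward(logk):
    return np.array([logk.sum(), logk[0] - logk[1]])

d_obs = np.array([-7.5, 1.2])   # assumed observations
sigma = 0.1                     # assumed data error

def log_likelihood(logk):
    r = forward(logk) - d_obs
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis sampler over a two-facies parameterization (illustrative only)
logk = np.array([-4.0, -3.0])
ll = log_likelihood(logk)
samples = []
for _ in range(5000):
    prop = logk + rng.normal(scale=0.05, size=2)   # random-walk proposal
    ll_prop = log_likelihood(prop)
    if np.log(rng.random()) < ll_prop - ll:        # accept/reject step
        logk, ll = prop, ll_prop
    samples.append(logk.copy())

posterior = np.array(samples[1000:])               # discard burn-in
print(posterior.mean(axis=0), posterior.std(axis=0))
```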

Relevance: 90.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; soil type and hydrogeological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
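Since the GRNN is central here, a minimal sketch may help: it is essentially Nadaraya-Watson kernel regression, where each training sample contributes a Gaussian weight and the prediction is the weight-normalized average of the targets. The toy data and smoothing parameter below are illustrative only.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson) prediction.

    Each training point contributes a Gaussian kernel weight; the output
    is the weight-normalized average of training targets. sigma is the
    single smoothing parameter, typically tuned by cross-validation.
    """
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel weights
    return np.sum(w * y_train) / np.sum(w)

# Toy spatial data: coordinates (x, y) and a measured value at each point
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 2.0, 3.0])
print(grnn_predict(coords, values, np.array([0.5, 0.5]), sigma=0.5))
```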

Relevance: 90.00%

Abstract:

Rationale Mephedrone (4-methylmethcathinone) is a still poorly known drug of abuse, used as an alternative to ecstasy or cocaine. Objective The major aims were to investigate the pharmacokinetics and locomotor activity of mephedrone in rats and to provide a pharmacokinetic/pharmacodynamic model. Methods Mephedrone was administered to male Sprague-Dawley rats intravenously (10 mg/kg) and orally (30 and 60 mg/kg). Plasma concentrations and metabolites were characterized using LC/MS and LC-MS/MS fragmentation patterns. Locomotor activity was monitored for 180-240 min. Results Mephedrone plasma concentrations after i.v. administration fit a two-compartment model (α = 10.23 h−1, β = 1.86 h−1). After oral administration, peak mephedrone concentrations were achieved between 0.5 and 1 h and declined to undetectable levels at 9 h. The absolute bioavailability of mephedrone was about 10% and mephedrone protein binding was 21.59 ± 3.67%. We identified five phase I metabolites in rat blood after oral administration. The ratio between brain levels and free plasma concentration was 1.85 ± 0.08. Mephedrone induced a dose-dependent increase in locomotor activity, which lasted up to 2 h. The pharmacokinetic-pharmacodynamic model successfully describes the relationship between mephedrone plasma concentrations and its psychostimulant effect. Conclusions We suggest a very important first-pass effect for mephedrone after oral administration and easy access to the central nervous system. The model described might be useful for estimating and predicting the onset, magnitude, and time course of mephedrone pharmacodynamics, as well as for designing new animal models of mephedrone addiction and toxicity.
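To make the reported disposition kinetics concrete, the sketch below evaluates the biexponential two-compartment curve C(t) = A·e^(−αt) + B·e^(−βt) with the reported hybrid rate constants; the intercepts A and B are not given in the abstract, so the values used are placeholders chosen only to show the shape.

```python
import numpy as np

# Reported hybrid rate constants from the i.v. fit
alpha, beta = 10.23, 1.86          # h^-1

# Intercepts A and B are not reported here; placeholder values used
# only to illustrate the biexponential decline.
A, B = 800.0, 200.0                # e.g. ng/mL

def conc(t_h):
    """Two-compartment (biexponential) plasma concentration at time t (h)."""
    return A * np.exp(-alpha * t_h) + B * np.exp(-beta * t_h)

for t in (0.1, 0.5, 1.0, 2.0):
    print(f"t = {t:>4} h  C = {conc(t):8.1f}")
```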

Relevance: 90.00%

Abstract:

This paper asks a simple question: if humans and their actions co-evolve with hydrological systems (Sivapalan et al., 2012), what is the role of hydrological scientists, who are also humans, within this system? To put it more directly, as traditionally there is a supposed separation of scientists and society, can we maintain this separation as socio-hydrologists studying a socio-hydrological world? This paper argues that we cannot, in four linked sections. The first section draws directly upon the concerns of science and technology studies to make a case to the (socio-hydrological) community that we need to be sensitive to constructivist accounts of science in general and socio-hydrology in particular. I review three positions taken by such accounts and apply them to hydrological science, supported with specific examples: (a) the ways in which scientific activities frame socio-hydrological research, such that at least some of the knowledge that we obtain is constructed by precisely what we do; (b) the need to attend to how socio-hydrological knowledge is used in decision-making, as evidence suggests that hydrological knowledge does not flow simply from science into policy; and (c) the observation that those who do not normally label themselves as socio-hydrologists may actually have a profound knowledge of socio-hydrology. The second section provides an empirical basis for considering these three issues by detailing the history of the practice of roughness parameterisation, using parameters like Manning's n, in hydrological and hydraulic models for flood inundation mapping. This history sustains the third section, a more general consideration of one type of socio-hydrological practice: predictive modelling. I show that as part of a socio-hydrological analysis, hydrological prediction needs to be thought through much more carefully: not only because hydrological prediction exists to help inform decisions that are made about water management, but also because those predictions contain assumptions, the predictions are only correct in so far as those assumptions hold, and for those assumptions to hold, the socio-hydrological system (i.e. the world) has to be shaped so as to include them. Here, I add to the "normal" view that ideally our models should represent the world around us, to argue that for our models (and hence our predictions) to be valid, we have to make the world look like our models. Decisions over how the world is modelled may transform the world as much as they represent the world. Thus, socio-hydrological modelling has to become a socially accountable process such that the world is transformed, through the implications of modelling, in a fair and just manner. This leads into the final section of the paper, where I consider how socio-hydrological research may be made more socially accountable, in a way that is both sensitive to the constructivist critique (Sect. 1) and retains the contribution that hydrologists might make to socio-hydrological studies. This includes (1) working with conflict and controversy in hydrological science, rather than trying to eliminate them; (2) using hydrological events to avoid becoming locked into our own frames of explanation and prediction; (3) being empirical and experimental but in a socio-hydrological sense; and (4) co-producing socio-hydrological predictions.
I will show how this might be done through a project that specifically developed predictive models for making interventions in river catchments to increase high river flow attenuation. Therein, I found myself becoming detached from my normal disciplinary networks and attached to the co-production of a predictive hydrological model with communities normally excluded from the practice of hydrological science.
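As a concrete example of the point about roughness parameterisation: in Manning's equation, v = (1/n) R^(2/3) S^(1/2), the modeller's choice of the roughness coefficient n alone can change the predicted velocity by roughly a factor of two across plausible values, which is exactly the kind of assumption the argument above says the world must be shaped to satisfy. The channel geometry below is illustrative only.

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean velocity (m/s) from Manning's equation, v = (1/n) R^(2/3) S^(1/2)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative channel: R = 2 m, S = 0.001. Sweeping plausible n values
# shows how strongly the roughness choice drives the prediction.
for n in (0.025, 0.035, 0.050):
    print(f"n = {n:.3f}  v = {manning_velocity(n, 2.0, 0.001):.2f} m/s")
```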

Relevance: 90.00%

Abstract:

Drug metabolism can produce metabolites with physicochemical and pharmacological properties that differ substantially from those of the parent drug, and consequently has important implications for both drug safety and efficacy. To reduce the risk of costly clinical-stage attrition due to the metabolic characteristics of drug candidates, there is a need for efficient and reliable ways to predict drug metabolism in vitro, in silico and in vivo. In this Perspective, we provide an overview of the state of the art of experimental and computational approaches for investigating drug metabolism. We highlight the scope and limitations of these methods, and indicate strategies to harvest the synergies that result from combining measurement and prediction of drug metabolism.

Relevance: 90.00%

Abstract:

In recent years, Business Model Canvas design has evolved from a paper-based activity to one that involves dedicated computer-aided business model design (CAD) tools. We propose a set of guidelines to help design more coherent business models. When combined with the functionalities offered by CAD tools, these guidelines show great potential to improve business model design as an ongoing activity. However, before building more complex solutions, it is necessary to compare how basic business model design tasks are performed with a CAD system versus its paper-based counterpart. To this end, we carried out an experiment to measure user perceptions of both approaches. Performance was evaluated by applying our guidelines to both approaches and then comparing the resulting business model designs. Although CAD did not outperform paper-based design, the results are very encouraging for the future of computer-aided business model design.

Relevance: 90.00%

Abstract:

Given climatic changes around the world and growing participation in outdoor sports, existing guidelines and recommendations for exercising in naturally challenging environments such as heat, cold or altitude exhibit potential shortcomings. Continuous efforts from the sport sciences and exercise physiology communities aim at minimizing the risks of environment-related illnesses during outdoor sports practice. Despite this, the use of simple weather indices does not permit an accurate estimation of the likelihood of facing thermal illnesses. This provides a critical motivation to modify available human comfort models and to integrate bio-meteorological data in order to improve the current guidelines. Although it requires further refinement, there is no doubt that standardizing the recently developed Universal Thermal Climate Index approach, and applying it in the fields of sport sciences and exercise physiology, may help to improve the appropriateness of the current guidelines for outdoor recreational and competitive sports participation. This review first summarizes the main environment-related risk factors that are likely to increase with recent climate change when exercising outside, and offers recommendations to combat them appropriately. Secondly, we briefly address the recent development of thermal stress models to assess thermal comfort and physiological responses when practicing outdoor activities in challenging environments.

Relevance: 90.00%

Abstract:

Despite the high degree of automation in the turning industry, a few key problems prevent the complete automation of turning. One of these problems is tool wear. This work focuses on implementing an automatic system for measuring wear, in particular flank wear, by means of machine vision. The wear measurement system removes the need for manual measurement and minimizes the time spent on measuring tool wear. In addition to measurement, the modeling and prediction of wear are studied. The automatic measurement system was placed inside a lathe and was successfully integrated with external systems. Experiments showed that the measurement system is able to measure tool wear in the system's real operating environment. The measurement system can also withstand disturbances that are common to machine vision systems. Tool wear modeling was studied with several different methods, including neural networks and support vector regression. The experiments showed that the studied models were able to predict the degree of tool wear from the machining time used. The best results were given by neural networks with Bayesian regularization.
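As an illustration of the kind of model studied (not the thesis code), the sketch below fits a support vector regression to synthetic (machining time, flank wear) pairs and predicts wear at unseen times; all data and hyperparameters are assumed.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic (machining time [min], flank wear [mm]) pairs for illustration;
# the thesis fits models of this kind to measured wear data.
t = np.array([2, 5, 10, 15, 20, 30, 40, 50], dtype=float).reshape(-1, 1)
wear = np.array([0.02, 0.05, 0.09, 0.12, 0.14, 0.18, 0.23, 0.30])

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(t, wear)

# Predict flank wear after 25 and 45 minutes of cutting
print(model.predict(np.array([[25.0], [45.0]])))
```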

Relevance: 90.00%

Abstract:

Cognitive impairment in schizophrenia and psychosis is ubiquitous and acknowledged as a core feature of clinical expression, pathophysiology, and prediction of functioning. However, assessment of cognitive functioning is excessively time-consuming in routine practice, and brief cognitive instruments specific to psychosis would be of value. Two screening tools have recently been created to address this issue, i.e., the Brief Cognitive Assessment Tool for Schizophrenia (B-CATS) and the Screen for Cognitive Impairment in Psychiatry (SCIP). The aim of this research was to examine the comparative validity of these two brief instruments in relation to a global cognitive score. 161 patients with psychosis (96 patients diagnosed with schizophrenia and 65 patients diagnosed with bipolar disorder) and 76 healthy control subjects were tested with both instruments to examine their concurrent validity relative to a more comprehensive neuropsychological assessment battery. Scores from the B-CATS and the SCIP were highly correlated in the three diagnostic groups, and both scales showed good to excellent concurrent validity relative to a Global Cognitive Composite Score (GCCS) derived from the more comprehensive examination. The SCIP-S showed better predictive value of global cognitive impairment than the B-CATS. Partial and semi-partial correlations showed slightly higher percentages of both shared and unique variance between the SCIP-S and the GCCS than between the B-CATS and the GCCS. Brief instruments for assessing cognition in schizophrenia and bipolar disorders, such as the SCIP-S and B-CATS, seem to be reliable and promising tools for use in routine clinical practice.

Relevance: 90.00%

Abstract:

Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, on non-silicon substrates with high accuracy, superior precision, and high throughput. Microchannels are typical features used in medical devices for dosing medication into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling have been evaluated through experiments, regression modeling, and response surface methodology. In the machining experiments, arrays of microchannels were fabricated on aluminium and titanium plates by micromilling, and the feature size and accuracy (width and depth) and surface roughness were measured. Multicriteria decision making for material and process parameter selection for the desired accuracy is investigated using the particle swarm optimization (PSO) method, an evolutionary computation technique related to genetic algorithms (GA). The fitted regression models are utilized within the PSO to optimally select micromilling parameters for microchannel feature accuracy and surface roughness. An analysis of optimal micromachining parameters in the decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations, and can be expanded to other applications.
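A minimal PSO sketch may clarify the method: each particle tracks a personal best, the swarm tracks a global best, and velocities are updated with inertia plus cognitive and social pulls. The objective function below is a toy stand-in for the fitted regression models, and all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Stand-in for a fitted regression model of, e.g., channel-width error
    # as a function of two normalized process parameters.
    return (x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2

n, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights

x = rng.random((n, dim))                     # particle positions in [0, 1]^2
v = np.zeros((n, dim))                       # velocities
pbest, pbest_f = x.copy(), objective(x)      # personal bests
g = pbest[np.argmin(pbest_f)].copy()         # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 1.0)
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

print("best parameters:", g, "objective:", pbest_f.min())
```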

Relevance: 90.00%

Abstract:

Broadcasting systems are networks where the transmission is received by several terminals. Broadcast receivers are generally passive devices in the network, meaning that they do not interact with the transmitter. Providing a certain Quality of Service (QoS) for receivers in a heterogeneous reception environment with no feedback is not an easy task. Forward error control coding can be used for protection against transmission errors to enhance the QoS for broadcast services. For good performance in terrestrial wireless networks, diversity should be utilized; it is obtained by applying interleaving together with forward error correction codes. This dissertation studies the design and analysis of forward error control and control signaling for providing QoS in wireless broadcasting systems. Control signaling is used in broadcasting networks to give the receiver the information necessary to connect to the network itself and to receive the services being transmitted. Control signaling is usually transmitted through a dedicated path in the system, so the relationship between the signaling and service data paths should be considered early in the design phase. Modeling and simulations are used in the case studies of this dissertation to study this relationship. The dissertation begins with a survey of the broadcasting environment and the mechanisms for providing QoS therein. Case studies then present the analysis and design of such mechanisms in real systems. The first case study analyzes mechanisms for providing QoS at the DVB-H link layer, considering the signaling and service data paths and their relationship; in particular, the performance of different service data decoding mechanisms and optimal signaling transmission parameter selection are presented. The second case study investigates the design of the signaling and service data paths for the more modern DVB-T2 physical layer. By comparing the performance of the signaling and service data paths in simulations, configuration guidelines for DVB-T2 physical layer signaling are given; these guidelines can prove useful when configuring DVB-T2 transmission networks. Finally, recommendations for the design of the data and signaling paths are given based on the findings of the case studies. The requirements for signaling design should be derived from the requirements for the main services, and should generally be more demanding, as signaling is the enabler for service reception.
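To illustrate the interleaving-for-diversity idea mentioned above (a generic sketch, not a DVB-specific scheme), the block interleaver below spreads a burst of channel erasures across positions so that, after deinterleaving, each FEC codeword sees at most one erasure and can correct it.

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write row-wise into a rows x cols matrix,
    read column-wise. Assumes len(symbols) == rows * cols."""
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))            # 12 symbols, e.g. from an FEC encoder
tx = interleave(data, rows=3, cols=4)

# A burst of 3 consecutive channel erasures...
tx[4:7] = ["?"] * 3

# ...lands in three different codewords after deinterleaving (here the
# codewords are the rows 0-3, 4-7, 8-11), so each sees one erasure.
print(deinterleave(tx, rows=3, cols=4))
```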

Relevance: 90.00%

Abstract:

The main outcome of this master's thesis is an innovative solution that can support the choice of a business process modeling methodology. Potential users of this tool are people with a background in business process modeling and the means to collect the required information about an organization's business processes. The thesis establishes the importance of business process modeling for implementing the strategic goals of an organization by situating the concept within Business Process Management (BPM) and its particular case, Business Process Reengineering (BPR). To support the theoretical outcomes of the thesis, a case study of the Northern Dimension Research Centre (NORDI) at Lappeenranta University of Technology was conducted. The case study demonstrates how to apply business process modeling methodologies in practice, how business process models can be useful for BPM and BPR initiatives, and how to apply the proposed innovative solution to the choice of a business process modeling methodology.

Relevance: 90.00%

Abstract:

Developing energy markets and rising energy system costs have sparked the need to find new forms of energy production and to increase the self-sufficiency of energy production. One alternative is gasification, whose principles have been known for decades but which has only recently become a true alternative. To meet the requirements of modern energy production methods, however, the phenomenon must be studied thoroughly. In order to understand the gasification process better and to optimize it from the viewpoint of ecology and energy efficiency, effective and reliable modeling tools for gasifiers are needed. The main aims of this work have been to understand gasification as a process and to extend an existing three-dimensional circulating fluidized bed modeling tool to the modeling of gasification. The model is applied to two gasification processes of 12 and 50 MWth, and the modeling results are compared with measurements and reviewed. The work was done in co-operation with Lappeenranta University of Technology and Foster Wheeler Energia Oy.