Abstract:
The aging of alcoholic beverages is generally conducted in wooden barrels made from oak (Quercus sp.) species. Because of the high cost and the lack of commercial viability of producing these trees in Brazil, there is demand for alternatives, such as the use of other native species and the incorporation of new technologies that make sugar cane spirit aged in Brazilian woods more competitive. The drying of the wood, the thermal treatment applied to it, and the manufacturing techniques are important tools in defining the sensory quality of alcoholic beverages placed in contact with the barrels. During thermal treatment, several compounds in the wood are changed by the application of heat, and various studies show that, compared to untreated woods, different aromas develop, the color changes, and the beverages achieve an even more pleasant taste. This study evaluated whether hydro-alcoholic sugar cane spirit solutions prepared from different thermally treated and untreated wood species differ significantly in aroma. An acceptance test was applied to determine which solutions tasters preferred under specific test conditions.
Abstract:
The aim of this research was to study the effect of air temperature and diet composition on the mass transfer kinetics during the drying of pellets used for feeding Japanese abalone (Haliotis discus hannai). In the experimental design, three temperatures were used for convective drying, as well as three diet compositions (Diets A, B and C), in which the amounts of fishmeal, spirulina, algae, fish oil and cornstarch varied. The water diffusion coefficient of the pellets was determined using the equation of Fick's second law, which yielded values between 0.84 and 1.94 × 10⁻¹⁰ m²/s. The drying kinetics were modeled using the Page, Modified Page, Root of time, Exponential, Logarithmic, Two-Term, Modified Henderson-Pabis and Weibull models. In addition, two new models, referred to as 'Proposed' models 1 and 2, were used to simulate the process. According to the statistical tests applied, the models that best fitted the experimental data were the Modified Henderson-Pabis, Weibull and Proposed model 2, in that order. A two-factor analysis of variance (ANOVA) showed that Diet A (fishmeal 44%, spirulina 9%, fish oil 1% and cornstarch 36%) presented the highest diffusion coefficient values, which were favored by the temperature increase in the drying process.
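As a minimal illustration of this kind of thin-layer modeling, the sketch below fits the Page model MR(t) = exp(-k·t^n) to hypothetical moisture-ratio data with SciPy; the data points, initial guesses and variable names are assumptions, not values from the study.

```python
# A minimal sketch (not from the paper): fitting the Page thin-layer drying
# model MR(t) = exp(-k * t**n) to hypothetical moisture-ratio data.
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page model: dimensionless moisture ratio as a function of drying time."""
    return np.exp(-k * t**n)

# Hypothetical drying-curve data (time in min, dimensionless moisture ratio).
t = np.array([15, 30, 60, 120, 240, 480], dtype=float)
mr = np.array([0.86, 0.74, 0.55, 0.30, 0.09, 0.01])

(k, n), _ = curve_fit(page, t, mr, p0=(0.01, 1.0))
print(f"k = {k:.4g}, n = {n:.3f}")
```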
Abstract:
The purpose of this study was to investigate and model the water absorption process of corn kernels with different levels of mechanical damage. Corn kernels of the AG 1510 variety with a moisture content of 14.2% (d.b.) were used. The different mechanical damage levels were evaluated indirectly by electrical conductivity measurements. The absorption process was based on the industrial corn wet-milling process, in which the product was soaked in a solution of 0.2% sulfur dioxide (SO2) and 0.55% lactic acid (C3H6O3) in distilled water, at controlled temperatures of 40, 50, 60, and 70 °C and at the different mechanical damage levels. The Peleg model was used for the analysis and modeling of the water absorption process. It was concluded that the structural changes caused by the mechanical damage influenced the initial rates of water absorption, which were higher for the most damaged kernels, and also changed the equilibrium moisture contents of the kernels. The Peleg model fitted the experimental data well, presenting satisfactory values for the analyzed statistical parameters at all temperatures, regardless of the damage level of the corn kernels.
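A minimal sketch of the same idea, assuming hypothetical soaking data: the Peleg model M(t) = M0 + t/(k1 + k2·t) is fitted with SciPy, where 1/k1 reflects the initial absorption rate and M0 + 1/k2 the equilibrium moisture content.

```python
# A minimal sketch (not from the paper): fitting the Peleg water-absorption
# model M(t) = M0 + t / (k1 + k2*t) to hypothetical soaking data.
import numpy as np
from scipy.optimize import curve_fit

M0 = 14.2  # initial moisture content (% d.b.), taken from the abstract

def peleg(t, k1, k2):
    """Peleg model; 1/k1 ~ initial rate, M0 + 1/k2 ~ equilibrium moisture."""
    return M0 + t / (k1 + k2 * t)

# Hypothetical soaking data: time (h) and moisture content (% d.b.).
t = np.array([0.5, 1, 2, 4, 8, 16, 24], dtype=float)
m = np.array([18.0, 21.5, 26.8, 32.9, 38.7, 42.6, 44.0])

(k1, k2), _ = curve_fit(peleg, t, m, p0=(0.1, 0.03))
print(f"initial rate 1/k1 = {1/k1:.2f} %/h, equilibrium M = {M0 + 1/k2:.1f} %")
```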
Abstract:
In this study, the effects of hot-air drying conditions on the color, water holding capacity, and total phenolic content of dried apple were investigated using an artificial neural network as an intelligent modeling system. A genetic algorithm was then used to optimize the drying conditions. Apples were dried at different temperatures (40, 60, and 80 °C) and at three air flow rates (0.5, 1, and 1.5 m/s). Using the leave-one-out cross-validation methodology, simulated and experimental data were in good agreement, with an error below 2.4%. Optimal quality-index values were found at 62.9 °C and 1.0 m/s using the genetic algorithm.
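The sketch below illustrates the surrogate-plus-optimizer pattern on hypothetical data: a small scikit-learn MLP stands in for the paper's neural network, and SciPy's differential evolution (a related evolutionary method, used here in place of the genetic algorithm) searches the tested temperature and air-speed ranges for the predicted quality optimum.

```python
# A minimal sketch (not the authors' code): an MLP surrogate of dried-apple
# quality vs. temperature and air speed, optimized with differential evolution.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

# Hypothetical training data: (temperature °C, air speed m/s) -> quality index.
X = np.array([[40, 0.5], [40, 1.0], [40, 1.5],
              [60, 0.5], [60, 1.0], [60, 1.5],
              [80, 0.5], [80, 1.0], [80, 1.5]], dtype=float)
y = np.array([0.55, 0.60, 0.58, 0.78, 0.85, 0.80, 0.62, 0.66, 0.60])

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X, y)

# Maximize predicted quality by minimizing its negative over the tested ranges.
result = differential_evolution(
    lambda p: -net.predict(p.reshape(1, -1))[0],
    bounds=[(40, 80), (0.5, 1.5)], seed=0)
print("optimum (T, v):", result.x)
```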
Abstract:
Building a computational model for a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more details about the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach, known as quantitative model refinement, which preserves the model fit without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef that performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs the data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool reduces the time and resources needed, as well as the errors generated, compared to the traditional reiteration of the whole model to perform the fitting procedure.
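ModelRef itself is a Java/Anduril tool, but a single data-refinement step can be illustrated with python-libsbml. The sketch below splits a species into refined subspecies while preserving the total initial amount; the file name, species id and split fractions are hypothetical, and a full refinement would also rewrite the reactions that reference the original species.

```python
# A conceptual sketch (not ModelRef itself): one data-refinement step on an
# SBML model, splitting species "A" into refined subspecies so that the
# total initial concentration is preserved.
import libsbml

doc = libsbml.readSBMLFromFile("model.xml")  # hypothetical input file
model = doc.getModel()

original = model.getSpecies("A")             # hypothetical species id
total = original.getInitialConcentration()

for suffix, fraction in [("_u", 0.5), ("_p", 0.5)]:
    refined = original.clone()
    refined.setId(original.getId() + suffix)
    refined.setInitialConcentration(total * fraction)
    model.addSpecies(refined)

model.removeSpecies(original.getId())        # replace the abstract species
# (a complete refinement would also update reactions that reference "A")
libsbml.writeSBMLToFile(doc, "model_refined.xml")
```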
Abstract:
In today's complex and volatile business environment, companies that are able to turn the operational data they produce into data warehouses can achieve a significant competitive advantage. Using predictive analytics to anticipate future trends enables companies to identify the key factors that allow them to stand out from their competitors. Using predictive analytics as part of the decision-making process enables more agile, real-time decision making. The purpose of this Master's thesis is to compile a theoretical framework for analytics modeling from the perspective of a business end user and to apply this modeling process to the case company of the thesis. The theoretical model was used to model customer relationships and to identify predictive factors for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with operations in Finland, Russia and the Baltic countries. This study is a quantitative case study in which the most important data collection method was the case company's transaction data. The data was obtained from the company's enterprise resource planning system.
Abstract:
Financial time series have a tendency to abruptly change their behavior and maintain the new behavior for several consecutive periods, and commodity futures returns are no exception. This property suggests that nonlinear models, as opposed to linear models, can describe returns and volatility more accurately. Markov regime switching models are able to capture this behavior and have become a popular way to model financial time series. This study uses a Markov regime switching model to describe the behavior of energy futures returns at the level of individual commodities, because studies show that commodity futures are a heterogeneous asset class. The purpose of this thesis is twofold: first, to determine how many regimes characterize individual energy commodities' returns at different return frequencies; second, to study the characteristics of these regimes. We extend the previous studies on the subject in two ways: we allow for the possibility that the number of regimes may exceed two, and we conduct the research on individual commodities rather than on commodity indices or subgroups of these indices. We use daily, weekly and monthly time series of Brent crude oil, WTI crude oil, natural gas, heating oil and gasoil futures returns over 1994–2014, where available, to carry out the study. We apply the likelihood ratio test to determine the sufficient number of regimes for each commodity and data frequency. The time series are then modeled with a Markov regime switching model to obtain the return distribution characteristics of each regime, as well as the transition probabilities of moving between regimes. The results suggest that daily energy futures return series consist of three to six regimes, whereas weekly and monthly returns for all energy commodities display only two regimes. When the number of regimes exceeds two, the time series of energy commodities tend to form groups of regimes. These groups are usually quite persistent as a whole, because the probability of a regime switch within the group is high; individual regimes in these groups, however, are not persistent, and the process oscillates between them frequently. Regimes that are not part of any group are generally persistent but show low ergodic probability, i.e. they rarely prevail in the market. This study also suggests that energy futures return series characterized by two regimes do not necessarily display persistent bull and bear regimes. In fact, for the majority of the time series, the bearish regime is considerably less persistent.
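A minimal sketch of the modeling step, assuming hypothetical return data: statsmodels' MarkovRegression fits a two-regime switching model with regime-dependent mean and variance, and the fitted results expose the regime parameters and persistence.

```python
# A minimal sketch (not the thesis code): fitting a two-regime Markov
# switching model to hypothetical futures returns with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical daily returns: a calm regime followed by a volatile bear regime.
returns = np.concatenate([rng.normal(0.0005, 0.01, 400),
                          rng.normal(-0.002, 0.03, 100)])

model = sm.tsa.MarkovRegression(returns, k_regimes=2,
                                trend='c', switching_variance=True)
fit = model.fit()
print(fit.summary())           # regime means, variances, transition params
print(fit.expected_durations)  # average persistence of each regime (in days)
```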
Abstract:
It has long been known that amino acids are the building blocks of proteins and govern their folding into specific three-dimensional structures. However, the details of this process are still unknown and represent one of the main problems in structural bioinformatics, a highly active research area focused on the prediction of three-dimensional structure and its relationship to protein function. The protein structure prediction procedure encompasses several steps, from searches and analyses of sequences and structures, through sequence alignment, to the creation of the structural model. Careful evaluation and analysis ultimately result in a hypothetical structure, which can be used to study biological phenomena in, for example, research at the molecular level, biotechnology and especially drug discovery and development. In this thesis, the structures of five proteins were modeled with template-based methods, which use proteins with known structures (templates) to model related or structurally similar proteins. The resulting models were an important asset for the interpretation and explanation of biological phenomena, such as the amino acids and interaction networks that are essential for the function and/or ligand specificity of the studied proteins. The five proteins represent different case studies, each with its own challenges, such as varying template availability, which resulted in different structure prediction processes. This thesis presents the techniques and considerations that should be taken into account in the modeling procedure to overcome limitations and produce a reliable hypothetical three-dimensional structure. As each project shows, the reliability depends strongly on the extensive incorporation of experimental data and the literature, and although experimental verification of in silico results is always desirable to increase reliability, the presented projects show that experimental studies can also benefit greatly from structural models. With the help of in silico studies, experiments can be targeted and precisely designed, saving both money and time. As the programs used in structural bioinformatics are constantly improved and the range of templates increases through structural genomics efforts, the mutual benefits between in silico and experimental studies become even more prominent. Hence, reliable models of protein three-dimensional structures, achieved through careful planning and thoughtful execution, are, and will continue to be, valuable and indispensable sources of structural information to be combined with functional data.
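As a small illustration of the alignment step that precedes model building, the sketch below aligns a hypothetical target and template sequence with Biopython's PairwiseAligner using BLOSUM62 and typical affine gap penalties; the sequences and penalty values are assumptions.

```python
# A minimal sketch (not from the thesis): the target-template sequence
# alignment step of template-based modeling, using Biopython.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10    # typical affine gap penalties
aligner.extend_gap_score = -0.5

target   = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical sequences
template = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEV"

alignment = aligner.align(target, template)[0]
print(alignment)                # the alignment that guides coordinate transfer
print("score:", alignment.score)
```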
Abstract:
The aim of this thesis is to define the effects of a lignin separation process on the pulp mill chemical balance, especially the sodium/sulphur balance. The objective is to develop a simulation model with the WinGEMS process simulator and use that model to simulate the chemical balances and process changes. The literature part explains what lignin is and how kraft pulp is produced. It also introduces the methods that can be used to extract lignin from the black liquor stream and how those methods affect the pulping process. In the experimental part, seven different cases are simulated with the created simulation model. The simulations are based on a selected reference mill that produces 500,000 tons of bleached air-dried (90%) pulp per year. The simulations include the chemical balance calculation and the estimated production increase. Based on the simulations, the heat load of the recovery boiler can be reduced and the pulp production increased when lignin is extracted. The simulations showed that decreasing the waste acid stream intake from the chlorine dioxide plant is an effective method to control the sulphidity level when about 10% of the lignin is extracted. At higher lignin removal rates, in-mill sulphuric acid production was found to be a better alternative for sulphidity control.
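The sulphidity control discussed above rests on a simple liquor balance. A minimal sketch, assuming hypothetical white liquor charges on an Na2O-equivalent basis: sulphidity = Na2S / (NaOH + Na2S), recomputed after a reduced sulphur intake.

```python
# A minimal sketch (not the WinGEMS model): kraft white liquor sulphidity,
# computed on an Na2O-equivalent basis for hypothetical compositions before
# and after reducing the waste-acid (sulphur) intake.
def sulphidity(naoh_na2o: float, na2s_na2o: float) -> float:
    """Sulphidity (%) from NaOH and Na2S charges, both as Na2O equivalents."""
    return 100.0 * na2s_na2o / (naoh_na2o + na2s_na2o)

base = sulphidity(naoh_na2o=105.0, na2s_na2o=45.0)       # hypothetical values
reduced_s = sulphidity(naoh_na2o=105.0, na2s_na2o=38.0)  # less sulphur intake
print(f"sulphidity: {base:.1f}% -> {reduced_s:.1f}%")
```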
Abstract:
This Master's thesis evaluates the competitors in the market for welding quality management software. The competitive field is new, and there is no precise information about what kinds of competitors are on the market. Welding quality management software helps companies guarantee high quality. The software guarantees high quality by ensuring that the welder is qualified and follows the welding procedure specifications and the given parameters. In addition, the software collects all data from the welding process and generates the required documents from it. The theoretical part of the thesis consists of a literature review of solution business, competitor analysis and competitive forces theory, and welding quality management. The empirical part is a qualitative study in which competing welding quality management software products are examined and their users interviewed. As a result, the thesis produces a new competitor analysis model for welding quality management software. With the model, software products can be rated based on the primary and secondary features they offer. Second, the thesis analyzes the current competitive situation by applying the newly developed competitor analysis model.
Abstract:
The computer game industry has grown steadily for years, and in revenues it can be compared to the music and film industries. The industry has been moving to digital distribution. Computer gaming and the concept of the business model are discussed among industry practitioners and the scientific community. The significance of the business model concept has increased in the scientific literature recently, although the concept is still much debated. This thesis studies the role of the business model in the computer game industry. Computer game developers, designers, project managers and organization leaders in 11 computer game companies were interviewed. The data was analyzed to identify the important elements of the computer game business model, how the business model concept is perceived, and how the growth of the organization affects the business model. Human capital was identified as crucial to the business. As games are partly a product of creative thinking, innovation and the creative process are also highly valued, as are the technical skills needed to perform various activities. Marketing and customer relationships are likewise considered key elements of the computer game business model. Financing and partners are especially important for startups, when the organization is dependent on external funding and third-party assets. The results of this study provide organizations with an improved understanding of how the organization is built and which business model elements carry the most weight.
Abstract:
The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has changed from competition between companies to competition between networks. Companies have focused on their core functions and outsourced support services such as maintenance, above all to decrease costs. This phenomenon has led to the increasing formation of business networks and, as a result, to a growing need for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common designation for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network consisting of customer companies that buy maintenance services and various supplier companies. Each network member can input its own cost and profit data related to the maintenance services of one item. The model then calculates the net present values of the maintenance costs and profits and presents them from the points of view of all the network members. The thesis shows that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and constructed for the needs of a single company without the network perspective. The developed LCM is a proper tool for decision making on maintenance services in a network environment; it enables analyzing the past and making scenarios for the future, and offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step construction process for designing a life-cycle costing model in a network environment. This five-step design process defines the model components and structure through iteration and the exploitation of user feedback, and the same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspectives of different maintenance network members and presents established value element lists for the customer and the service provider. These lists make value visible in the maintenance operations of a networked business. Combined with value thinking, the LCM promotes a shift in the notion of maintenance from a “cost maker” towards a “value creator”.
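The core calculation the LCM performs for each network member is a net present value of yearly maintenance costs and profits. A minimal sketch with hypothetical five-year figures and an assumed 8% discount rate:

```python
# A minimal sketch (not the thesis model): net present value of an item's
# yearly maintenance costs and profits for one network member.
def npv(cash_flows: list[float], rate: float) -> float:
    """Discount yearly net cash flows (years 1..n) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical 5-year data for a service-buying customer (EUR/year):
costs   = [12000, 12500, 13000, 16000, 13500]   # maintenance costs
profits = [20000, 20000, 20000, 14000, 20000]   # production value secured
net = [p - c for p, c in zip(profits, costs)]

print(f"NPV of maintenance at 8%: {npv(net, 0.08):.0f} EUR")
```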
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in the past years, however mainly from a technical stance, and there is a void in business-related cases. This thesis fills the gap in the research by addressing big data challenges and failure cases. The Technology-Organization-Environment (TOE) framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information system failures. A review of the extant literature was carried out using a collection of leading information systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both failure and success aspects in order to build a model of big data failure. I then delineate the contribution of the information systems failure literature, as it provides the principal dynamics behind the TOE framework. The gathered literature was categorized, and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table are designed to act as a comprehensive starting point and general guidance for academics, CIOs and other system stakeholders to facilitate decision making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction and discontinued use.
Abstract:
Finnish legislation requires a safe and secure learning environment. However, comprehensive, risk-based safety and security management (SSM) and management commitment to the implementation and development of SSM are not mentioned in the legislation. Multiple institutions, operators and researchers have studied and developed safety and security in educational institutions over the past decade, but the approach has typically been fragmented, without bringing up the importance of comprehensive SSM. The development needs of safety and security operations in universities have been studied; however, in universities of applied sciences (UASs) and in elementary schools (ESs), the performance level, strengths and weaknesses of comprehensive SSM have not. The objective of this study was to develop the comprehensive, risk-based SSM of educational institutions by developing the new Asteri consultative auditing process and studying its effects on auditees. Furthermore, the performance level of comprehensive SSM in UASs and ESs was studied using Asteri and the TUTOR model developed by the Keski-Uusimaa Department for Rescue Services, and strengths, development needs and differences were identified. In total, 76 educational institutions were audited between 2011 and 2014. The study is based on logical empiricism, and an observational applied research design was used. Auditing, observation and an electronic survey were used for data collection, and statistical analysis was used to analyze the collected information. In addition, thematic analysis was used to analyze the development areas of the organizations mentioned by the respondents in the survey. As one of its main contributions, this research presents the new Asteri consultative auditing process. Organizations with low performance levels on the audited subject benefit the most from the process, and Asteri may be usable in many different types of audits, not only SSM audits. The study also provides new knowledge on attitudes related to auditing: auditing may generate negative attitudes, which the auditor should take into account when planning and preparing for audits. Negative attitudes can be compensated for by bringing added value, objectivity and positivity to the audit, thus improving the positive effects of auditing on knowledge and skills. Moreover, as the results of this study show, auditing safety and security issues does not increase feelings of insecurity, but rather increases feelings of safety and security when the new Asteri consultative auditing process is used with the TUTOR model. The results showed that SSM in the audited UASs was statistically significantly more advanced than in the audited ESs. However, there is still room for improvement in both the ESs and the UASs, as the approach to SSM was fragmented, and it can be assumed that the majority of Finnish UASs and ESs do not meet the basic level of comprehensive, risk-based SSM.