943 results for PID and Fuzzy and practical models
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry because of its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied.
Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
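The abstract defines supersaturation as the difference between the actual liquid-phase concentration and the solubility. A minimal sketch of that definition, with purely illustrative numbers (not taken from the thesis):

```python
def supersaturation(c: float, c_sat: float) -> float:
    """Absolute supersaturation c - c*: the driving force for
    nucleation, crystal growth and agglomeration."""
    return c - c_sat

def supersaturation_ratio(c: float, c_sat: float) -> float:
    """Dimensionless ratio S = c / c*; S > 1 means the solution
    is supersaturated."""
    return c / c_sat

# Hypothetical concentrations, e.g. in kg solute per kg solvent
delta_c = supersaturation(0.35, 0.30)
S = supersaturation_ratio(0.35, 0.30)
```

The hydrodynamic point of the abstract is that in an inhomogeneously mixed crystallizer `c`, and hence the supersaturation, varies locally, which is why measurement-point selection matters.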
Abstract:
Adsorbents functionalized with chelating agents are effective in the removal of heavy metals from aqueous solutions. Important properties of such adsorbents are high binding affinity as well as regenerability. In this study, the aminopolycarboxylic acids EDTA and DTPA were immobilized on the surface of silica gel, chitosan, and their hybrid materials to obtain chelating adsorbents for heavy metals such as Co(II), Ni(II), Cd(II), and Pb(II). New knowledge about the adsorption properties of EDTA- and DTPA-functionalized adsorbents was obtained. Experimental work showed the effectiveness, regenerability, and stability of the studied adsorbents. Both advantages and disadvantages of the adsorbents were evaluated. For example, the EDTA-functionalized chitosan-silica hybrid materials combined the benefits of silica gel and chitosan while at the same time diminishing their observed drawbacks. Modeling of adsorption kinetics and isotherms is an important step in the design process. Therefore, several kinetic and isotherm models were introduced and applied in this work. Important aspects such as the effect of the error function, data range, initial guess values, and linearization were discussed and investigated. The most suitable model was selected by comparing the experimental and simulated data as well as by evaluating the correspondence between the theory behind the model and the properties of the adsorbent. In addition, modeling of two-component data was conducted using various extended isotherms. Modeling results for the one- and two-component systems supported each other. Finally, application testing of the EDTA- and DTPA-functionalized adsorbents was conducted. The most important result was the applicability of DTPA-functionalized silica gel and chitosan in capturing Co(II) from its aqueous EDTA chelate. Moreover, these adsorbents were efficient in various solution matrices.
In addition, separation of Ni(II) from Co(II) and Ni(II), and of Pb(II) from Co(II) and Cd(II), was observed in two- and multimetal systems. Lastly, EDTA- and DTPA-functionalized silica gels were successfully used to preconcentrate metal ions from both pure and salty waters prior to their analysis.
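The abstract mentions fitting several isotherm models to equilibrium data without naming them. As one standard example of such a model (not necessarily one used in the thesis), the Langmuir isotherm can be sketched as follows, with hypothetical parameter values:

```python
def langmuir(c_e: float, q_max: float, k_l: float) -> float:
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e).
    q_e is the equilibrium uptake, C_e the equilibrium concentration,
    q_max the monolayer capacity and K_L the affinity constant."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Hypothetical parameters: q_max = 1.2 mmol/g, K_L = 0.5 L/mmol
q = langmuir(c_e=10.0, q_max=1.2, k_l=0.5)
```

Fitting such a model to measured (C_e, q_e) pairs, and comparing fits under different error functions and data ranges, is the kind of model selection step the abstract describes.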
Abstract:
The importance of a company’s intellectual capital (IC) has increased during the last decades due to the development of the knowledge-based economy. Despite a clear understanding of the importance of IC, researchers agree that many difficulties in the management of intangibles still exist, from both the theoretical and practical points of view. The goal of the study is to compare the IC management approaches used in international and Russian software companies. To carry out a proper comparison and identify similarities and differences, software firms are first explored from the point of view of IC and then compared across the international and Russian sectors. At the end of the study, current IC management findings in international and Russian software companies are presented and their IC management is compared. The comparison showed that international and Russian software companies have similarities and a few principal differences in several IC management areas. The comparison of IC management approaches between international and Russian software companies provides helpful information to both researchers and practitioners.
Abstract:
This thesis presents a three-dimensional, semi-empirical, steady-state model for simulating the combustion, gasification, and formation of emissions in circulating fluidized bed (CFB) processes. In a large-scale CFB furnace, the local feeding of fuel, air, and other input materials, as well as the limited mixing rate of different reactants, produce inhomogeneous process conditions. To simulate the real conditions, the furnace should be modelled three-dimensionally or the three-dimensional effects should be taken into account. The only available methods for simulating large CFB furnaces three-dimensionally are semi-empirical models, which apply a relatively coarse calculation mesh and a combination of fundamental conservation equations, theoretical models and empirical correlations. The number of such models is extremely small. The main objective of this work was to develop a model which can be applied to the calculation of industrial-scale CFB boilers and which can simulate all the essential sub-phenomena: fluid dynamics, reactions, the attrition of particles, and heat transfer. The core of the work was to develop the model frame and the required sub-models for determining the combustion and sorbent reactions. The objective was reached, and the developed model was successfully used for studying various industrial-scale CFB boilers combusting different types of fuel. The model for sorbent reactions, which includes the main reactions for calcitic limestones, was applied to study the new phenomena possibly occurring in oxygen-fired combustion. The presented combustion and sorbent models and principles can be utilized in other model approaches as well, including other empirical and semi-empirical model approaches and CFD-based simulations.
The main achievement is the overall model frame, which can be utilized for the further development and testing of new sub-models and theories, and for concentrating the knowledge gathered from experimental work carried out with bench-, pilot- and industrial-scale apparatus and from computational work performed with other modelling methods.
Abstract:
In the development of human medicines, it is important to predict, early and accurately enough, the disease and patient population to be treated, as well as the effective and safe dose range of the studied medicine. This is pursued by using preclinical research models, clinical pharmacology and early clinical studies with small sample sizes. When successful, this enables effective development of medicines and reduces unnecessary exposure of healthy subjects and patients to ineffective or harmful doses of experimental compounds. Toremifene is a selective estrogen receptor modulator (SERM) used for the treatment of breast cancer. Its development was initiated in the 1980s, when the selection of treatment indications and doses was based on research in cell and animal models and on noncomparative clinical studies including small numbers of patients. Since the early development phase, the treatment indication, the patient population and the dose range have been confirmed in large comparative clinical studies in patients. Based on the currently available large and long-term clinical study data, the aim of this study was to investigate how well the early-phase studies were able to predict the treatment indication, patient population and dose range of the SERM. In conclusion, based on the estrogen receptor mediated mechanism of action, the early studies were able to predict the treatment indication, target patient population and a dose range to be studied in confirmatory clinical studies. However, comparative clinical studies are needed to optimize dose selection of the SERM in the treatment of breast cancer.
Abstract:
This study investigates futures market efficiency and optimal hedge ratio estimation. First, cointegration between spot and futures prices is studied using the Johansen method with two different model specifications. If the prices are found to be cointegrated, restrictions on the cointegrating vector and the adjustment coefficients are imposed to account for unbiasedness, weak exogeneity and the prediction hypothesis. Second, optimal hedge ratios are estimated using static OLS and the time-varying DVEC and CCC models. In-sample and out-of-sample results for one-, two- and five-period-ahead horizons are reported. The futures used in this thesis are the RTS index, the EUR/RUB exchange rate and Brent oil, traded on Futures and Options on RTS (FORTS). For the in-sample period, data points were acquired from the start of trading of each futures contract (the RTS index from August 2005, the EUR/RUB exchange rate from March 2009 and Brent oil from October 2008), lasting until the end of May 2011. The out-of-sample period covers the start of June 2011 until the end of December 2011. Our results indicate that all three asset pairs, spot and futures, are cointegrated. We found the RTS index futures to be an unbiased predictor of the spot price, mixed evidence for the exchange rate, and for Brent oil futures unbiasedness was not supported. Weak exogeneity results for all pairs indicated that the spot price leads the price discovery process. The prediction hypothesis, i.e. unbiasedness and weak exogeneity of futures, was rejected for all asset pairs. Variance reduction results varied between assets, in-sample in the range of 40-85 percent and out-of-sample in the range of 40-96 percent. Differences between the models were found to be small, except for Brent oil, for which OLS clearly dominated. Out-of-sample results indicated exceptionally high variance reduction for the RTS index, approximately 95 percent.
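The static OLS hedge ratio mentioned above is the slope of a regression of spot returns on futures returns, i.e. h* = Cov(ΔS, ΔF) / Var(ΔF). A minimal sketch with toy data (the real estimation in the thesis uses the FORTS price series and also time-varying DVEC/CCC models):

```python
def ols_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance hedge ratio h* = Cov(dS, dF) / Var(dF),
    equal to the OLS slope of spot returns on futures returns."""
    n = len(spot_returns)
    ms = sum(spot_returns) / n
    mf = sum(futures_returns) / n
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_returns, futures_returns)) / n
    var = sum((f - mf) ** 2 for f in futures_returns) / n
    return cov / var

# Toy data, not from the thesis: the futures move is exactly
# twice the spot move, so the optimal hedge ratio is 0.5
h = ols_hedge_ratio([0.01, -0.02, 0.015, -0.005],
                    [0.02, -0.04, 0.03, -0.01])
```

Holding h* futures contracts per unit of the spot position minimizes the variance of the hedged portfolio, which is what the reported 40-96 percent variance reductions measure.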
Abstract:
The maintenance of the electric distribution network is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation, the maintenance practices of distribution system operators are analyzed, and a theory for scheduling the maintenance and reinvestment of distribution components is created. The scheduling is based on the deterioration of components and the increasing failure rates due to aging. A dynamic programming algorithm is used to solve the maintenance problem caused by the increasing failure rates of the network. Other drivers of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis. Furthermore, tree trimming of the line corridors and major network disturbances are not included in the optimized problem. Four dynamic programming models are presented and tested; they are implemented in VBA. Two different kinds of test networks are used for testing. Because electric distribution system operators want to operate with larger component groups, optimal timing for component groups is also analyzed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is presented.
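The dynamic programming idea described above can be illustrated with a toy finite-horizon replacement problem (this sketch, in Python rather than the thesis's VBA, uses hypothetical costs and a single component, not the thesis's actual models): each year the operator either keeps an aging component, paying an expected failure cost that grows with age, or replaces it and resets the age.

```python
from functools import lru_cache

def optimal_cost(horizon, replace_cost, fail_cost):
    """Backward dynamic programming over states (year, component age).
    Each year: 'keep' pays fail_cost(age) and ages the component,
    'replace' pays replace_cost plus the failure cost of a new unit.
    Returns the minimum total cost starting from a new component."""
    @lru_cache(maxsize=None)
    def v(year, age):
        if year == horizon:
            return 0.0                      # no costs beyond the horizon
        keep = fail_cost(age) + v(year + 1, age + 1)
        replace = replace_cost + fail_cost(0) + v(year + 1, 1)
        return min(keep, replace)
    return v(0, 0)

# Hypothetical numbers: failure cost grows quadratically with age,
# replacement costs 10 cost units
cost = optimal_cost(horizon=10, replace_cost=10.0,
                    fail_cost=lambda age: 0.5 * age * age)
```

The never-replace policy over 10 years would cost 0.5·(0²+1²+…+9²) = 142.5 units, so the DP value can never exceed that; the recursion finds the replacement timing that balances aging failure costs against reinvestment, which is the core trade-off the dissertation schedules.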
Abstract:
Millions of enterprises move their applications to the cloud every year. According to Forrester Research, “the global cloud computing market will grow from a $40.7 billion in 2011 to $241 billion in 2020”. Due to increased interest and demand, a broad range of providers and solutions has appeared on the market. It is vital to be able to predict possible problems correctly and to classify and mitigate the risks associated with the migration process. The study shows the main criteria that should be taken into consideration when deciding to move enterprise applications to the cloud and when choosing an appropriate vendor. The main goal of the research is to identify the main problems during migration to the cloud and to propose a solution for their prevention and for the mitigation of consequences in case of occurrence. The research provides an overview of existing cloud solutions and deployment models for enterprise applications. It identifies the decision drivers of an application’s migration to the cloud and the potential risks and benefits associated with it. Finally, best practices for successful enterprise-to-cloud migration are formulated based on the analysis of case studies.
Abstract:
This research focused on the operation of a manpower pool within a service business unit of Company X and aimed to identify how the operation should be improved in order to get the most out of it with regard to the future prospects of the service business unit. This was done by analyzing the current state of the manpower pool related operations in terms of project business, project management and business models. The objective was to deepen understanding and to highlight possible areas of improvement. The research was conducted as a qualitative single-case study, also utilizing an action research method; the research approach was a combination of conceptual, action-oriented and constructive approaches. The primary data was collected through a comprehensive literature review and semi-structured theme interviews. The main results described how the manpower pool operates as part of the service business unit in project business by participating in different types of delivery projects; process flows for the project types were mapped. Project management was analyzed especially from the resource management point of view, and an Excel-based skills analysis model was constructed for this purpose. The utilization of operational business models was also studied to define a strategic direction for development activities. The results were benchmarked against two competitors in order to specify lessons to be learnt from their use of operational business models.
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods applied for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Among the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the framework of the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players, where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. Investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems.
The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. In order to illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create good and relevant grounds for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
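A common parameterized achievement scalarizing function is the Wierzbicki form: the weighted worst deviation of the objective vector from a reference point, plus a small augmentation term. The sketch below shows this standard form with hypothetical objective values; the thesis's exact parameterization may differ.

```python
def achievement(f, ref, weights, rho=1e-6):
    """Achievement scalarizing function (Wierzbicki form) for minimized
    objectives f against reference point ref:
        max_i w_i * (f_i - z_i) + rho * sum_i w_i * (f_i - z_i).
    Smaller values are better; varying `weights` steers the interactive
    procedure toward different Pareto-optimal solutions."""
    diffs = [w * (fi - zi) for fi, zi, w in zip(f, ref, weights)]
    return max(diffs) + rho * sum(diffs)

# Hypothetical three-objective point evaluated against a reference point,
# mirroring the three-objective median location illustration
s = achievement(f=[3.0, 5.0, 2.0], ref=[2.0, 4.0, 2.0],
                weights=[1.0, 1.0, 1.0])
```

Minimizing this function over the feasible set yields one Pareto-optimal solution per weight vector, which is how changing the weighting coefficients directs the interactive search.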
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique, based on a combination of ensemble and variational Kalman filtering approaches, is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm for retrieving atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also considered in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the treatment of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating the filter-specific parameters related, e.g., to model error covariance matrix parameters.
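The Lorenz 95 (often written Lorenz '96) benchmark mentioned above is a cyclic system of ordinary differential equations, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F. A minimal sketch of its tendency and a single explicit Euler step (RK4 is the usual choice in practice; the parameterized variant studied in the dissertation additionally replaces part of the forcing with a tunable closure term):

```python
def lorenz95_tendency(x, forcing=8.0):
    """Tendency of the Lorenz 95 model with cyclic indexing:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    n = len(x)
    return [
        (x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + forcing
        for i in range(n)
    ]

def euler_step(x, dt=0.01, forcing=8.0):
    """One explicit Euler step, for illustration only."""
    dx = lorenz95_tendency(x, forcing)
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

# The uniform state x_i = F is a fixed point: its tendency is zero
state = [8.0] * 40
```

Chaotic trajectories of this 40-variable system, started near the unstable fixed point, are the standard testbed for comparing filters and parameter estimation schemes in data assimilation.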
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using various Internet protocols. Web services are increasingly used in industry to automate different tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, which facilitates scalability. Automated systems, e.g. hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer to create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving.
We have used the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, which can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose.
The timed automata of the UML-based service design models are generated with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be manually inserted by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
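The kind of skeleton the abstract describes, a stateful resource whose methods carry pre- and post-conditions, can be sketched as follows. This is a hypothetical illustration in Python (the thesis generates such skeletons from UML models; the state names and methods here are invented for the booking example):

```python
class BookingResource:
    """Sketch of a stateful REST resource with method pre-/post-conditions.
    Allowed state sequence: created -> confirmed -> closed; out-of-order
    requests violate a precondition, analogous to HTTP 412."""

    def __init__(self):
        self.state = "created"

    def put_confirmation(self):
        # Precondition: confirmation is only valid from the 'created' state
        assert self.state == "created", "412 Precondition Failed"
        self.state = "confirmed"
        # Postcondition: the resource must now be confirmed
        assert self.state == "confirmed"

    def delete(self):
        # Precondition: only a confirmed booking can be closed
        assert self.state == "confirmed", "412 Precondition Failed"
        self.state = "closed"

booking = BookingResource()
booking.put_confirmation()
```

The preconditions constrain the client to issue requests in the order the behavioral model allows, while the postconditions constrain the developer filling in the skeleton to implement the modeled state change, which is exactly the division of obligations the abstract describes.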
Abstract:
The development of entrepreneurial orientation (EO) within a company is considered significant for firm performance in a contemporary market society with a constantly changing environment. An entrepreneurial firm is able to innovate, make risky investments and act proactively. The purpose of the thesis is to investigate the factors which influence EO, the impact of EO on firm performance, and the mediating role of EO in developed and emerging market contexts. The empirical research was conducted quantitatively in the form of a survey in Russia and Finland. The results of the thesis show that the relationship between antecedents, EO and firm performance outcomes differs between developed and emerging contexts and can be explained by cultural differences and institutional development. The empirical research has both theoretical and practical novelty. It contributes to the existing literature on EO through the use of a comparative cross-country approach and a broader three-way interaction model between the variables. A general practical implication of the research is that managers may benefit from developing an entrepreneurial strategic posture in particular contexts.
Abstract:
Third party logistics, and the third party logistics providers and the services they offer, have grown substantially in the last twenty years. Even though there has been extensive research on third party logistics providers, and regular industry reviews within the logistics industry, closer research into partner selection and network models in the third party logistics industry is missing. The perspective taken in this study expands network research to logistics service providers as the focal firm in the network. The purpose of the study is to analyze partnerships and networks in the third party logistics industry in order to define how networks are utilized in third party logistics markets, what the reasons for the partnerships have been, and whether there are benefits for the third party logistics provider that can be achieved through building networks and partnerships. The theoretical framework of this study was formed from common theories for studying networks and partnerships, in accordance with models of horizontal and vertical partnerships. The theories applied to the framework and context of this study included the strategic network view and the resource-based view. Applying these two network theories to the position and networks of third party logistics providers in an industrial supply chain, a theoretical model was structured for analyzing the horizontal and vertical partnerships in which the TPL provider is the focus. The empirical analysis of TPL partnerships consisted of a qualitative document analysis of 33 partnership examples involving companies present in the Finnish TPL markets. For the research, existing documents providing secondary data on the types of partnerships, the reasons for the partnerships, and the outcomes of the partnerships were searched from available online sources.
The findings of the study revealed that third party logistics providers are involved in horizontal and vertical interactions varying in geographical coverage and in the depth and nature of the relationship. Partnership decisions were found to be made for resource-based reasons as well as from strategic aspects. The results of the partnerships discovered in this study included cost reduction and effectiveness in partnerships aimed at improving existing services. In addition, in partnerships created for innovative service extension, differentiation and the creation of additional value emerged as results of the cooperation. It can be concluded that benefits and competitive advantage can be created by building partnerships in order to expand the service offering and to seek synergies.
Abstract:
In this thesis, two negatively valenced emotions reflecting children’s self-consciousness are examined, namely guilt and shame. Despite the notable role of emotions in psychological research, empirical findings on the links between guilt, shame, and children’s social behavior, and particularly aggression, have been modest, inconsistent, and sometimes contradictory. This thesis contains four studies on the associations of guilt, shame, emotion regulation, and social cognitions with children’s social behavior. The longitudinal material of the thesis was collected as a survey among a relatively large number of Finnish preadolescents. In Study I, the distinctiveness of guilt and shame in children’s social behavior was investigated. The more specific links between emotions and aggressive behavior were explored in Study II, in which emotion regulation and negative emotionality were treated as moderators between guilt, shame, and children’s aggressive behavior. The role of emotion management was further evaluated in Study III, in which effortful control and anger were treated as moderators between domain-specific aggressive cognitions and children’s aggressive behavior. In the light of the results from Studies II and III, it seems that for children with poor emotion management the effects of emotions and social cognitions on aggressive behavior are straightforward, whereas effective emotion management allows for reframing the situation. Finally, in Study IV, context effects on children’s anticipated emotions were evaluated: children were presented with a series of hypothetical vignettes in which the child was acting as the aggressor. Furthermore, the identity of the witnesses and the victim’s reactions were systematically manipulated.
Children anticipated the most shame in situations in which the whole class was witnessing the aggressive act, whereas both guilt and shame were anticipated the most in situations in which the victim reacted with sadness. Girls and low-aggressive children were more sensitive to contextual cues than boys and high-aggressive children. Overall, the results of this thesis suggest that the influences of guilt, shame, and social cognition on preadolescents’ aggressive behavior depend significantly on the nature of individual emotion regulation as well as on situational contexts. Both the theoretical and practical implications of this study highlight a need to acknowledge that effective emotion management can also enable the justification of one’s own immoral behavior.