10 results for process model consolidation
at Cochin University of Science and Technology
Abstract:
Identification and control of nonlinear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even in social systems, where nonlinearity is an integral part of the system behavior. Most real-world systems are nonlinear in nature, and nonlinear system identification/modeling has wide applications. The basic approach in analyzing nonlinear systems is to build a model from known behavior manifest in the form of system output. The problem of modeling boils down to computing a suitably parameterized model representing the process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available. The blind recognition problem is the direct consequence of such a situation. The thesis concentrates on such problems. The capability of artificial neural networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model. A comprehensive study of the computation of the model parameters using the different algorithms, and a comparison among them to choose the best technique, is still a demanding requirement from practical system designers, which is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches. Comprehensiveness is achieved by utilizing the benefits of well-known evaluation techniques from statistics. The study concludes by providing the results of implementation of the currently available, modified and newly introduced techniques for nonlinear blind system modeling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison can be of great relevance in many fields including chemical, electrical, biological, financial and weather data analysis. Further, the results reported would be of immense help to practical system designers and analysts in selecting the most appropriate method based on the goodness of the model for the particular context.
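To make the modeling loop described above concrete (a parameterized model, an error between observed and model output, and parameter adjustment), here is a minimal sketch in Python. The surrogate "unknown" system, lag order, network size and learning rate are all illustrative assumptions, not the thesis's actual algorithms:

```python
# Minimal sketch: one-step-ahead identification of an unknown nonlinear
# system from its output time series alone, using a small feedforward
# neural network trained by gradient descent on the prediction error.
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "unknown" nonlinear system: only its output series is observed.
N = 2000
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.5 * np.sin(y[t-1]) + 0.3 * y[t-2] + 0.05 * rng.standard_normal()

lags = 4                                  # model order (assumed)
X = np.column_stack([y[i:N-lags+i] for i in range(lags)])  # lagged inputs
d = y[lags:]                              # target: next sample

H = 16                                    # hidden units (assumed)
W1 = rng.standard_normal((lags, H)) * 0.3
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.3
b2 = 0.0
lr = 0.01

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)              # hidden layer
    e = (h @ W2 + b2) - d                 # identification error
    # Gradients of the mean squared error w.r.t. all parameters
    gW2 = h.T @ e / len(d)
    gb2 = e.mean()
    gh = np.outer(e, W2) * (1 - h**2)
    gW1 = X.T @ gh / len(d)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
print("final one-step-ahead MSE:", np.mean((h @ W2 + b2 - d) ** 2))
```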
Studies on Pseudoscalar Meson Bound States and Semileptonic Decays in a Relativistic Potential Model
Abstract:
In this thesis quark-antiquark bound states are considered using a relativistic two-body equation for Dirac particles. The mass spectrum of mesons includes bound states involving two heavy quarks or one heavy and one light quark. In order to analyse these states within a unified formalism, it is desirable to have a two-fermion equation that reduces to a one-body Dirac equation with a static interaction for the light quark when the other particle's mass tends to infinity. A suitable two-body equation has been developed by Mandelzweig and Wallace. This equation is solved in momentum space and is used to describe the complete spectrum of mesons. The potential used in this work contains a short-range one-gluon exchange interaction and long-range linear confining and constant potential terms. This model is used to investigate the decay processes of heavy mesons. Semileptonic decays are more tractable since there are no final-state interactions between the leptons and hadrons that would otherwise complicate the situation. Studies on B and D meson decays are helpful for understanding the nonperturbative strong interactions of heavy mesons, which in turn is useful for extracting the details of the weak interaction process. Calculations of the form factors for these semileptonic decays of pseudoscalar mesons are also presented.
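A potential of the kind described, with a short-range one-gluon-exchange piece plus linear confining and constant terms, is conventionally written in the Cornell-type form (stated here for reference; the exact form and parameter values used in the thesis are fit to the meson spectrum and may differ in detail):

$$ V(r) = -\frac{4}{3}\,\frac{\alpha_s}{r} + b\,r + c $$

where $\alpha_s$ is the strong coupling, $b$ the string tension and $c$ a constant.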
Abstract:
To ensure quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions. However, attaining optimum values each time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after conducting an exhaustive literature review, several optimization techniques used in various manufacturing processes have been identified. The selection of optimal cutting parameters, like depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and percentage contribution of each parameter. Using S/N analysis, the optimum machining parameters were obtained from the experimentation. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively check new design solutions in the relevant search spaces in order to achieve the true optimum solution. A mathematical model was developed using response surface analysis for surface roughness, and the model was validated using published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) are applied to optimize machining parameters for dry turning of SS420 material. All the above algorithms were tested for their efficiency, robustness and accuracy, and it was observed how they often outperform conventional optimization methods when applied to difficult real-world problems. The SA, PSO, CGA and IGA codes were developed using MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve a better surface finish. The computational results using SA clearly demonstrated that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. From the results, it has been observed that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism.
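For reference, the smaller-the-better signal-to-noise ratio conventionally used in Taguchi analysis when minimizing a response such as surface roughness is (standard definition, where $y_i$ are the $n$ measured values for a trial):

$$ S/N = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right) $$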
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills in conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures nature uses to optimize its own systems.
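The following is a minimal PSO sketch in Python, illustrative only (the thesis's SA/PSO/CGA/IGA codes were written in MATLAB): it minimizes a surrogate surface-roughness model over bounded cutting parameters. The power-law model, its coefficients, the bounds and the PSO constants are all assumptions.

```python
# Minimal particle swarm optimization of a surrogate roughness model
# Ra(v, f, d) over (cutting speed, feed, depth of cut).
import numpy as np

rng = np.random.default_rng(1)

def roughness(p):
    v, f, d = p.T                          # speed, feed, depth of cut
    return 2.0 * f**0.8 * d**0.2 / v**0.3  # assumed RSM-style power-law model

lo = np.array([50.0, 0.05, 0.5])           # lower bounds (assumed units)
hi = np.array([200.0, 0.30, 2.0])          # upper bounds

n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, (n, 3))            # particle positions
vel = np.zeros_like(x)                     # particle velocities
pbest, pval = x.copy(), roughness(x)       # personal bests
g = pbest[pval.argmin()]                   # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n, 3))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, lo, hi)           # keep particles inside the bounds
    val = roughness(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[pval.argmin()]

print("optimum (v, f, d):", g, " predicted Ra:", roughness(g[None])[0])
```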
Abstract:
This thesis presents a methodology for linking Total Productive Maintenance (TPM) and Quality Function Deployment (QFD). The synergic power of TPM and QFD led to the formation of a new maintenance model named Maintenance Quality Function Deployment (MQFD). This model was found to be so powerful that it could overcome the drawbacks of TPM by taking care of customer voices. Those voices of customers are used to develop the house of quality. The outputs of the house of quality, which are in the form of technical languages, are submitted to the top management for making strategic decisions. The technical languages, which are concerned with enhancing maintenance quality, are strategically directed by the top management towards the adoption of the eight TPM pillars. The TPM characteristics developed through the development of the eight pillars are fed into the production system, where their implementation is focused on increasing the values of the maintenance quality parameters, namely overall equipment efficiency (OEE), mean time between failures (MTBF), mean time to repair (MTTR), performance quality, availability and mean down time (MDT). The outputs from the production system are required to be reflected in the form of business values, namely improved maintenance quality, increased profit, upgraded core competence and enhanced goodwill. A unique feature of the MQFD model is that it is not necessary to change or dismantle the existing process of developing the house of quality and TPM projects which may already be in practice in the company concerned. Thus, the MQFD model enables a tactical marriage between QFD and TPM. First, the literature was reviewed. The results of this review indicated that no activities had so far been reported on integrating QFD in TPM and vice versa. During the second phase, a survey was conducted in six companies in which TPM had been implemented. The objective of this survey was to locate any traces of QFD implementation in the TPM programmes being implemented in these companies. The survey results indicated that no effort to integrate QFD in TPM had been made in these companies. After completing these two phases of activities, the MQFD model was designed. The details of this work are presented in this research work. Following this, explorative studies on implementing the MQFD model in real-time environments were conducted. In addition, an empirical study was carried out to examine the receptivity of the MQFD model among practitioners and across multifarious organizational cultures. Finally, a sensitivity analysis was conducted to find the hierarchy of the various factors influencing MQFD in a company. Throughout the research work, the theory and practice of MQFD were juxtaposed by presenting and publishing papers among scholarly communities and conducting case studies in real-time scenarios.
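Two of these maintenance quality parameters are linked by standard definitions (stated here for reference, not taken verbatim from the thesis):

$$ \text{Availability} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}, \qquad \text{OEE} = \text{Availability} \times \text{Performance rate} \times \text{Quality rate} $$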
Abstract:
In this paper we try to fit a threshold autoregressive (TAR) model to time series data of monthly coconut oil prices at the Cochin market. The procedure proposed by Tsay [7] for fitting the TAR model is briefly presented. The fitted model is compared with a simple autoregressive (AR) model. The results are in favour of the TAR process. Thus the monthly coconut oil prices exhibit a type of non-linearity which can be accounted for by a threshold model.
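For concreteness, a two-regime self-exciting TAR model of the kind fitted with Tsay's procedure takes the standard form below; the delay $d$, threshold $r$ and autoregressive orders are estimated from the data (the values used for the coconut oil series are as reported in the paper):

$$ y_t = \begin{cases} \phi_0^{(1)} + \sum_{i=1}^{p} \phi_i^{(1)} y_{t-i} + \varepsilon_t^{(1)}, & y_{t-d} \le r, \\ \phi_0^{(2)} + \sum_{i=1}^{p} \phi_i^{(2)} y_{t-i} + \varepsilon_t^{(2)}, & y_{t-d} > r. \end{cases} $$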
Abstract:
In this introductory part, importance has been given to the elastomeric properties of polyurethanes. Emphasis has been laid on this property based on microphase separation, and on how it can be modified by varying the segment lengths as well as the structure of the segments. Inferences were also drawn about the mechanical and thermal properties of these copolymers based on the various analytical methods usually used for the characterization of polymers. A brief overview of the challenges faced by polyurethane chemistry was also given, pointing to the fact that though the polyurethane industry is more than 75 years old, a lot of questions remain unanswered, mostly in the synthesis of polyurethanes. A major challenge in this industry is the utilization of more environmentally friendly "Green Chemistry Routes" for the synthesis of polyurethanes which are devoid of any isocyanates or harsh solvents. The research work in this thesis was focused on developing a non-isocyanate green chemical process for polyurethanes and on self-organizing the resultant novel polymers into nano-materials. The thesis was focused on the following three major aspects: (i) design and development of a novel melt transurethane process for polyurethanes under non-isocyanate and solvent-free melt conditions; (ii) solvent-induced self-organization of the novel cycloaliphatic polyurethanes prepared by the melt transurethane process into microporous templates and nano-sized polymeric hexagons and spheres; (iii) novel polyurethane-oligophenylenevinylene random block copolymer nano-materials and their photoluminescence properties. The second chapter of the thesis gives an elaborate discussion of the novel melt transurethane process for the synthesis of polyurethanes under non-isocyanate and solvent-free melt conditions. The polycondensation reaction was carried out between equimolar amounts of a di-urethane monomer and a diol in the presence of a catalyst under melt conditions to produce polyurethanes, followed by the removal of the low-boiling alcohol from the equilibrium. The polymers synthesized through this green chemical route were found to be soluble (devoid of any cross-links), thermally stable and free from any isocyanate entities. The polymerization reaction was confirmed by various analytical techniques, with specific reference to the extent of reaction, which is the main watchful point for any successful polymerization reaction. The mechanistic aspects of the reaction were another point of consideration for the novel polymerization route, and these were successfully dealt with by performing various model reactions. Since this route was successful in synthesizing polyurethanes with novel structures, they were employed for solvent-induced self-organization, which is an important area of research in the polymer world in the present scenario. Chapter three presents a multitude of morphologies depending upon the chemical backbone structure of the polyurethane as well as on the nature and amount of the various solvents employed for the self-organization. The rationale behind these morphologies, hydrogen bonding, has been systematically probed by various techniques. These polyurethanes were then tagged with luminescent oligo(phenylene vinylene) units, and the effects of these OPV blocks on the morphology of the polyurethanes are analyzed in chapter four.
These blocks have resulted in the formation of novel "Blue Luminescent Balls" which could find various applications in optoelectronic devices as well as in delivery vehicles.
Abstract:
In this modern complex world, stress at work is found to be an increasingly common feature of day-to-day life. For the same reason, job stress has been one of the active areas in occupational health and safety research for over four decades and continues to attract researchers in academia and industry. Job stress in process industries is of concern due to its influence on process safety and on workers' safety and health. Safety in the process (chemical and nuclear material) industry is of paramount importance, especially in a thickly populated country like India. Stress at work is the main vector in inducing work-related musculoskeletal disorders, which in turn can affect worker health and safety in process industries. In view of the above, the process industries should try to minimize job stress in workers to ensure a safe and healthy working climate for the industry and the worker. This research is mainly aimed at assessing the influence of job stress in inducing work-related musculoskeletal disorders in chemical process industries in India.
Abstract:
This study focuses on the identification of return-generating factors and the extent of their influence on share prices. The outcome will be a tool for investment analysis in the hands of investors, portfolio managers and mutual funds, who are mostly concerned with changing share prices. Since the study takes into account the influence of macroeconomic variables on variations in share returns, the government can use the outcome to frame suitable long-term policies, which will help in nurturing a healthy economy and a resultant healthy stock market. As every company management tries to maximize the wealth of the shareholders, a clear idea about the return-generating variables and their influence will help the management frame various policies to maximize the wealth of the shareholders.
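A generic linear return-generating model of the kind the study refers to relates the return $R_{it}$ on share $i$ to a set of macroeconomic factors $F_{jt}$; this is the standard multifactor form, not necessarily the exact specification used:

$$ R_{it} = \alpha_i + \sum_{j=1}^{k} \beta_{ij} F_{jt} + \varepsilon_{it} $$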
Abstract:
Agent-based simulation is a rapidly developing area in artificial intelligence. Simulation studies are extensively used in different areas of disaster management. This work deals with the study of an agent-based evacuation simulation, which is being done to handle various evacuation behaviors. Various emergent behaviors of agents are addressed here. Dynamic grouping behaviors of agents are studied. Collision detection and obstacle avoidance are also incorporated in this approach. Evacuation is studied with single and multiple exits, and efficiency is measured in terms of evacuation rate, collision rate, etc. NetLogo is the tool used, which helps in the efficient modeling of evacuation scenarios.
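To make the mechanics concrete, here is a minimal evacuation sketch in Python (the thesis work itself used NetLogo). Agents step toward the nearest exit on a grid, and a cell-occupancy check stands in for collision avoidance; the grid size, agent count and exit placement are assumptions, and simultaneous-move conflicts are handled naively.

```python
# Minimal agent-based evacuation sketch: multiple exits, greedy Manhattan
# movement, blocked moves counted as collisions.
import numpy as np

rng = np.random.default_rng(2)
W = 20
exits = [(0, 10), (19, 10)]                # two exit cells (multiple-exit case)

agents = set()
while len(agents) < 50:                    # 50 agents at distinct interior cells
    agents.add((int(rng.integers(1, W-1)), int(rng.integers(1, W-1))))
agents = list(agents)

evacuated, collisions, t = 0, 0, 0
while agents and t < 500:
    t += 1
    occupied = set(agents)
    nxt = []
    for (x, y) in agents:
        # head for the nearest exit (Manhattan distance), x-axis first
        ex, ey = min(exits, key=lambda e: abs(e[0]-x) + abs(e[1]-y))
        step = (x + int(np.sign(ex - x)), y) if ex != x else (x, y + int(np.sign(ey - y)))
        if step in exits:
            evacuated += 1                 # agent leaves the floor
        elif step in occupied or step in nxt:
            collisions += 1                # blocked: stay put this tick
            nxt.append((x, y))
        else:
            nxt.append(step)
    agents = nxt

print(f"evacuated {evacuated} agents in {t} ticks, {collisions} blocked moves")
```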
Abstract:
Refiners today operate their equipment for prolonged periods without shutdown. This is primarily due to the increased pressures of the market, resulting in extended shutdown-to-shutdown intervals. This places extreme demands on the reliability of the plant equipment. The traditional methods of reliability assurance, like Preventive Maintenance, Predictive Maintenance and Condition Based Maintenance, become inadequate in the face of such demands. The alternative approaches to reliability improvement being adopted the world over are the implementation of RCFA programs and Reliability Centered Maintenance (RCM). However, refiners and process plants find it difficult to adopt the standardized RCM methodology, mainly due to the complexity and the large amount of analysis that needs to be done, resulting in a long drawn-out implementation requiring the services of a number of skilled people. This results in either an implementation restricted to only a few pieces of equipment or, alternately, one that is non-standard. The paper presents the current models in use, the core requirements of a standard RCM model, the limitations of classical RCM and the available alternatives to it, and then goes on to present an 'Accelerated' approach to RCM implementation that, while ensuring close conformance to the standard, does not place a large burden on the implementers.