937 results for Process control -- Statistical methods
Abstract:
Teaching the measurement of blood pressure to nursing and public health nursing students. The purpose of this two-phase study was to develop the teaching of blood pressure measurement within the nursing degree programmes of the Universities of Applied Sciences. The first, survey phase described what was taught about blood pressure measurement within nursing degree programmes and how it was taught. The second, intervention phase (2004-2005) evaluated first academic year nursing and public health nursing students’ knowledge and skills in blood pressure measurement, and additionally the effect of the Taitoviikko teaching method on the experimental group students’ level of blood pressure measurement knowledge and skills. A further objective was to construct models for an instrument (RRmittTest) to evaluate nursing students’ measurement of blood pressure (2003-2009). The research data for the survey phase were collected from teachers (total sampling, N=107, response rate 77%) using a specially developed RRmittopetus questionnaire. Quasi-experimental data for the RRmittTest instrument were collected from students (purposive sampling; experimental group, n=29; control group, n=44). The RRmittTest consisted of a knowledge test (Tietotesti) and a simulation-based skills test (TaitoSimkäsi and Taitovideo). Measurements were made immediately after the teaching and again during clinical practice. Statistical methods were used to analyse the results, and responses to open-ended questions were organised and classified. Owing to the small amount of material and the results of distribution tests of the variables, mainly non-parametric analytic methods were used. The knowledge and skills teaching, which was similar for the experimental and control groups, was based on the results of the national survey phase (RRmittopetus) questionnaire. The experimental group’s teaching additionally included the supervised Taitoviikko teaching method. During Taitoviikko, students studied blood pressure measurement at a municipal hospital in a real nursing environment, guided by a teacher and a clinical nursing professional. In order to evaluate both learning and teaching, the processes and components of blood pressure measurement were defined as follows: the reliability of measurement instruments, activities preceding blood pressure measurement, technical execution of the measurement, recording, lifestyle guidance, and measurement at home (self-monitoring). According to the survey phase, blood pressure measurement is most often taught at Universities of Applied Sciences separately as knowledge (teaching of theory, 2 hours) and skills (classroom practice, 4 hours). The teaching was implemented largely in a classroom and was based mainly on a textbook. In the intervention phase the students had good knowledge of blood pressure measurement; however, their blood pressure measurement skills were deficient, and the control group students’ skills in particular were highly deficient. Following clinical practice, both the experimental and control group students’ knowledge of recording blood pressure measurements improved, while the experimental group’s knowledge of lifestyle guidance declined. Skills did not improve within any of the components analysed. The control group’s skills as a whole declined statistically significantly, whereas the experimental group showed a significant decline in only one of the components measured. The results describe the learning outcomes of first academic year students, and no parallel conclusions should be drawn about the learning outcomes of graduating students.
The results support the use and further development of the Taitoviikko teaching method. The RRmittTest developed for the study should be assessed critically and its results viewed with caution; the evaluation tool needs to be developed further and retested.
Abstract:
The objective of this study was to find out how Exel positions itself against its most important competitors and how it could strengthen its position in the future. Both end customers and intermediate customers were surveyed about their preferences and about how their decision process could be influenced. The research was carried out through telephone interviews, and the data were analysed using statistical methods. The results showed that the decision to buy a floorball stick is influenced mainly by the image and popularity of the brand rather than by the actual technical properties of the stick.
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry because of its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena, such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, such as nucleation and chemical reactions, typically depend strongly on micro- and mesomixing, but macromixing, which equalizes the concentrations of all species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by managing the mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization under inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
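For context on the MSMPR (ideally mixed) description referred to above, the standard textbook relations can be sketched as follows; the notation is generic and not taken from the thesis itself:

\[
\Delta c = c - c^{*}(T), \qquad n(L) = n_0 \exp\!\left(-\frac{L}{G\,\tau}\right)
\]

Here \(\Delta c\) is the supersaturation, i.e. the actual liquid-phase concentration \(c\) minus the solubility \(c^{*}(T)\), and the second expression is the steady-state MSMPR crystal size distribution, with population density \(n(L)\) at crystal size \(L\), nuclei population density \(n_0\), growth rate \(G\) and mean residence time \(\tau\).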
Abstract:
The aim of this study was to create a Balanced Scorecard for the DigiCup solution. The first goal was to create process descriptions for a few critical processes. The second goal was to define appropriate measurements, based on a customer survey and following the Balanced Scorecard process description, to manage the critical success factors. The overall goal of this study was to create a performance measurement system for the solution that guides the operation towards continuous improvement. This study was conducted using both qualitative and quantitative methods, and the analysis was done using a case study method. The material was gathered from the current customers, the management and the employees using structured, semi-structured and open group and individual interviews. The current customers were divided into retailers and direct customers of the DigiCup solution. The questions asked of the customers concerned information about the interviewee, the company, business strategy, the market, a satisfaction survey and future requirements. The management defined the strategy and took part in specifying the perspectives, objectives and measurements for the Balanced Scorecard of the DigiCup solution. The employees participated in choosing the metrics. The material consisted of sixteen interviews in total. At the beginning of the study the product development, order-delivery and printing processes were chosen as the critical processes of the DigiCup solution. The literature review already concentrated on these processes, seeking their characteristics as well as the critical success factors and the appropriate measurements that could be utilized when creating the Balanced Scorecard for the DigiCup solution on the basis of the customer survey. Appropriate perspectives, objectives and measurements were found for the DigiCup solution. The chosen measures serve as a basis for the development of an IT reporting tool. As a conclusion it can be stated that in a new business, where the objectives change according to the development phase the company is in, the measurements should be updated often enough.
Abstract:
Delays in the justice system have been undermining the functioning and performance of the court system all over the world for decades. Despite the widespread concern about delays, the solutions have not kept up with the growth of the problem. The delay problem existing in justice court processes is a good example of the growing need and pressure in professional public organizations to start improving their business process performance. This study analyses the possibilities and challenges of process improvement in professional public organizations. The study is based on experiences gained in two longitudinal action research improvement projects conducted in two separate Finnish law instances: the Helsinki Court of Appeal and the Insurance Court. The thesis has two objectives. The first objective is to study what kinds of factors in court system operations cause delays and unmanageable backlogs and how to reduce and prevent delays. The second objective is, based on the lessons learned from the case projects, to give new insights into the critical factors of process improvement conducted in professional public organizations. Four main areas and factors behind the delay problem are identified: 1) goal setting and performance measurement practices, 2) the process control system, 3) production and capacity planning procedures, and 4) process roles and responsibilities. The appropriate improvement solutions include tools to enhance project planning and scheduling and to monitor the agreed time-frames for the different phases of the handling process and the pending inventory. The study introduces the critical factors identified in the different phases of process improvement work carried out in professional public organizations and the ways the critical factors can be incorporated into the different stages of the projects, and it discusses the role of an external facilitator in assisting process improvement work and in enhancing ownership of the solutions and improvement. The study highlights the need to concentrate on the critical factors that aim to get the employees to challenge their existing ways of working, analyze their own processes, and create procedures for diffusing a process improvement culture, instead of merely concentrating on finding tools, techniques, and solutions suitable for application from the manufacturing sector.
Abstract:
Cold storage chamber processes contribute largely to the quality and longevity of stored products. In recent years, the study of control strategies has intensified, with the aim of decreasing temperature variation inside the storage chamber and reducing electric power consumption. This study developed a data acquisition and process control system, in the LabVIEW language, to be applied to the cooling system of a 30 m³ refrigerating chamber. The instrumentation and the developed application supported scientific experiments aimed at studying the dynamic behaviour of the refrigeration system and comparing the performance of the control strategies and of the heat engine, both with respect to the controlled temperature and to electricity consumption. The on-off, PID and fuzzy control strategies were tested in this system. Regarding power consumption, the fuzzy controller showed the best result, saving 10% when compared with the other tested strategies.
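The thesis's controllers were implemented in LabVIEW and are not reproduced here; purely as an illustration of one of the compared strategies, a minimal, generic discrete PID loop in Python might look as follows (gains, sample time and temperatures are hypothetical):

class PID:
    """Minimal discrete PID controller (generic sketch, not the LabVIEW code)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate integral term
        derivative = (error - self.prev_error) / self.dt  # finite-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: drive the chamber temperature toward a 2 °C setpoint.
controller = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
control_signal = controller.update(setpoint=2.0, measurement=5.3)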
Abstract:
A sequential batch reactor with suspended biomass and a working volume of 5 L was used for the removal of nutrients and organic matter at bench scale under optimal conditions obtained by a central composite rotational design (CCRD), with a cycle time (CT) of 16 h (10.15 h aerobic phase and 4.35 h anoxic phase) and a carbon:nitrogen ratio (COD/NO2--N+NO3--N) equal to 6. Twenty complete cycles, nitrification followed by denitrification, were evaluated to investigate the degradation kinetics of the organic (COD) and nitrogenous (NH4+-N, NO2--N and NO3--N) matter present in the effluent from a poultry slaughterhouse and industrial processing facility, as well as to evaluate the stability of the reactor using Shewhart control charts of individual measurements. The results indicate a mean total inorganic nitrogen (NH4+-N+NO2--N+NO3--N) removal of 84.32±1.59% and a mean organic matter (COD) removal of 53.65±8.48% for the complete process (nitrification-denitrification), with the process under statistical control. The nitrifying activity during the aerobic phase, estimated from the determination of the kinetic parameters, gave mean K1 and K2 values of 0.00381±0.00043 min-1 and 0.00381±0.00043 min-1, respectively. The evaluation of the kinetic behaviour of the nitrogen conversion indicated that the CT of the anoxic phase could be reduced, since NO2--N and NO3--N removals higher than 90% were obtained with only 1 h of denitrification.
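Reactor stability was assessed with Shewhart control charts of individual measurements; the sketch below shows only the textbook way such limits are computed in Python, using hypothetical cycle-by-cycle removal data rather than values from the study:

import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (X-MR) chart limits: centre line +/- 3 sigma,
    with sigma estimated from the mean moving range (d2 = 1.128 for n = 2)."""
    x = np.asarray(x, dtype=float)
    moving_range = np.abs(np.diff(x))
    sigma_hat = moving_range.mean() / 1.128
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# Hypothetical per-cycle total inorganic nitrogen removals (%).
lcl, cl, ucl = individuals_chart_limits([84.1, 85.0, 83.2, 86.1, 84.7, 82.9, 85.4])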
Abstract:
Robotic grasping has been studied increasingly for a few decades. While progress has been made in this field, robotic hands are still nowhere near the capability of human hands. However, in the past few years, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit the feedback from the hand itself, the sense of touch. The focus of this thesis lies in the use of this sense. The work described in this thesis approaches robotic grasping from two different viewpoints: robotic systems and data-driven grasping. The robotic systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central properties that the architecture was designed for are hardware independence and the use of sensors during grasping. These properties enable the use of multiple different robotic platforms within the architecture. Secondly, new data-driven methods are proposed that can be incorporated into the grasping process. The first of these methods is a novel way of learning grasp stability from the tactile and haptic feedback of the hand instead of analytically solving the stability from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, which enables the method to be used with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties, such as pose, can be inferred together with the related uncertainty. This uncertainty is utilized by a grasp planning process which plans for stable grasps under the inferred uncertainty.
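The thesis learns grasp stability directly from tactile and haptic data; the exact features and learning method are not given in the abstract, so the following Python sketch only illustrates the general idea with a generic classifier and synthetic data (the feature layout and labels are hypothetical):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training set: each row is a tactile/haptic feature vector recorded
# at grasp time, each label marks whether the grasp held (1) or slipped (0).
rng = np.random.default_rng(0)
X = rng.random((200, 12))                          # e.g. 12 pooled tactile-cell pressures
y = (X[:, :4].mean(axis=1) > 0.5).astype(int)      # synthetic stability labels

model = LogisticRegression(max_iter=1000).fit(X, y)
p_stable = model.predict_proba(X[:1])[0, 1]        # stability probability for a new grasp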
Abstract:
Performance measurement has many positive effects on the operations of an entire organization, and measurement makes it possible to steer operations in the desired direction. The aim of the study was to investigate what kind of performance measurement system the top management of a vehicle inspection company needs in order to manage the technical quality of inspections. The purpose of the study was to build a performance measurement system for the technical quality of inspections for top management. The technical quality of inspections is a key question for the existence of vehicle inspection companies. Technical quality is the foundation of the entire inspection business, on which the business can be built; without this foundation there is no continuity for the business. However, the measurement of technical quality is currently not systematic, and no measurement system suitable for the task has been available. The study used the inspection statistics of A-Katsastus Oy from the years 2008–2011. By applying statistical process control (SPC), control limits for rejection percentages and numbers of defects were determined for each inspection station and each inspector. Using these control limits, a performance measurement system for the technical quality of inspections was built for the industry, company, station and inspector levels. With this measurement system, targets can be set for technical quality, their achievement can be monitored, and corrective actions can be initiated when necessary.
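The SPC control limits for rejection percentages described above are not reproduced in the abstract; a generic p-chart calculation of the same kind, with hypothetical station-level counts, could look like this in Python:

import numpy as np

def p_chart_limits(rejected, inspected):
    """3-sigma control limits for a rejection-rate (p) chart."""
    rejected = np.asarray(rejected, dtype=float)
    inspected = np.asarray(inspected, dtype=float)
    p_bar = rejected.sum() / inspected.sum()        # overall rejection rate
    sigma = np.sqrt(p_bar * (1 - p_bar) / inspected.mean())
    return max(p_bar - 3 * sigma, 0.0), p_bar, min(p_bar + 3 * sigma, 1.0)

# Hypothetical monthly counts for one inspection station.
lcl, center, ucl = p_chart_limits(rejected=[31, 27, 35, 29], inspected=[120, 115, 130, 118])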
Abstract:
This dissertation examines knowledge and industrial knowledge creation processes. It looks at the way knowledge is created in industrial processes based on data, which is transformed into information and finally into knowledge. In the context of this dissertation, the main tools for industrial knowledge creation are different statistical methods. This dissertation strives to define industrial statistics. This is done using an expert opinion survey, which was sent to a number of industrial statisticians. The survey was conducted to create a definition for this field of applied statistics and to demonstrate the wide applicability of statistical methods to industrial problems. In this part of the dissertation, traditional methods of industrial statistics are introduced. As industrial statistics is the main tool for knowledge creation, the basics of statistical decision making and statistical modeling are also included. The widely known Data-Information-Knowledge-Wisdom (DIKW) hierarchy serves as the theoretical background for this dissertation: the way that data is transformed into information, information into knowledge, and knowledge finally into wisdom is used as the theoretical frame of reference. Some scholars have, however, criticized the DIKW model. Based on these different perceptions of the knowledge creation process, a new knowledge creation process based on statistical methods is proposed. In the context of this dissertation, data is the source of knowledge in industrial processes. Because of this, the mathematical categorization of data into continuous and discrete types is explained, and different methods for gathering data from processes are clarified as well. There are two methods for data gathering in this dissertation: survey methods and measurements. The enclosed publications provide an example of the wide applicability of statistical methods in industry. In these publications data is gathered using surveys and measurements. The enclosed publications have been chosen so that in each publication different statistical methods are employed in the analysis of the data. There are some similarities between the analysis methods used in the publications, but mainly different methods are used. Based on this dissertation, the use of statistical methods for industrial knowledge creation is strongly recommended. With statistical methods it is possible to handle large datasets, and different types of statistical analysis results can easily be transformed into knowledge.
Abstract:
In many industrial applications, such as the printing and coatings industry, the wetting of porous materials by liquids includes not only imbibition and permeation into the bulk but also surface spreading and evaporation. By understanding these phenomena, valuable information can be obtained for improved process control, runnability and printability, in which liquid penetration and subsequent drying play important quality and economic roles. Knowledge of the position of the wetting front and of the distribution and degree of pore filling within the structure is crucial in describing the transport phenomena involved. Although paper is used as the example porous medium in this work, the generalisation to dynamic liquid transfer onto a surface, including permeation and imbibition into porous media, is of importance to many industrial and naturally occurring environmental processes. This thesis explains the phenomena in the field of heatset web offset printing, but the content and the analyses are applicable to many other printing methods and also to other technologies where water/moisture monitoring is crucial in order to have a stable process and achieve high-quality end products. The use of near-infrared technology to study the water and moisture response of porous pigmented structures is presented. The use of sensitive surface chemical and structural analysis, as well as investigation of the internal structure of a porous medium, to inspect liquid wetting and distribution complements the information obtained by spectroscopic techniques. Strong emphasis has been put on the scale of measurement, to filter out irrelevant information and to understand the relationships between the interactions involved. The near-infrared spectroscopic technique presented here directly samples the changes in signal absorbance and its variation in the process at multiple locations in a print production line. The in-line non-contact measurements are facilitated by using several diffuse reflectance probes, giving the absolute water/moisture content at a defined position in the dynamic process in real time. The near-infrared measurement data illustrate the changes in moisture content as the paper passes through the printing nips and the dryer, respectively, and the analysis of the mechanisms involved highlights the roles of the contacting surfaces and the relative liquid carrier properties of both non-image and printed image areas. The thesis includes laboratory studies on the wetting of porous media, in the form of coated paper and compressed pigment tablets, by mono-, dual-, and multi-component liquids, and paper water/moisture content analysis in both offline and online conditions, thus also enabling direct sampling of temporal water/moisture profiles from multiple locations. One main focus of this thesis was to establish a measurement system able to monitor rapid changes in the moisture content of paper. The study suggests that near-infrared diffuse reflectance spectroscopy can be used as a moisture-sensitive system to provide accurate online qualitative indicators, but also, when accurately calibrated, to provide quantification of water/moisture levels, their distribution and dynamic liquid transfer. Owing to its high sensitivity, samples can be measured with excellent reproducibility and a good signal-to-noise ratio. Another focus of this thesis was on the evolution of the moisture content, i.e. changes in moisture content referred to as (re)wetting, and on liquid distribution during printing of coated paper.
The study confirmed the different wetting phases, together with the factors affecting each phase, both for a single droplet and for a liquid film applied on a porous substrate. For a single droplet, initial capillary-driven imbibition is followed by equilibrium pore filling and liquid retreat by evaporation. In the case of a liquid film applied on paper, the controlling factors defining the transport were concluded to be the applied liquid volume in relation to the surface roughness, capillarity and permeability of the coating, which together give the liquid uptake capacity. The printing trials confirmed moisture gradients in the printed sheet depending on process parameters such as speed, fountain solution dosage and drying conditions, as well as on the printed layout itself. Uneven moisture distribution in the printed sheet was identified as one of the sources of waving appearance, and the magnitude of waving was influenced by the drying conditions.
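The quantification step mentioned above relies on calibrating the near-infrared absorbance signal against known moisture contents; a minimal illustration of such a calibration, with entirely hypothetical absorbance and moisture values, is sketched below:

import numpy as np

# Hypothetical calibration data: water-band absorbance vs. paper moisture (%).
absorbance = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
moisture = np.array([3.1, 4.4, 5.9, 7.2, 9.0])

slope, intercept = np.polyfit(absorbance, moisture, deg=1)   # simple linear calibration

def moisture_from_absorbance(a):
    """Predict moisture content (%) from a measured absorbance value."""
    return slope * a + intercept

print(moisture_from_absorbance(0.22))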
Abstract:
The thesis is related to the topic of image-based characterization of fibers in pulp suspension during the papermaking process. The papermaking industry is focusing on process control optimization and automation, which make it possible to manufacture high-quality products in a resource-efficient way. As a part of process control, pulp suspension analysis makes it possible to predict and modify the properties of the end product. This work is part of a tree species identification task and focuses on the analysis of fiber parameters in the pulp suspension at the wet stage of paper production. The existing machine vision methods for pulp characterization were investigated, and a method exploiting direction-sensitive filtering, non-maximum suppression, hysteresis thresholding, tensor voting, and curve extraction from tensor maps was developed. Applying the method to microscopic grayscale pulp images made it possible to detect curves corresponding to fibers in the pulp image and to compute their morphological characteristics. The performance of the method was evaluated against manually produced ground truth data. The accuracy of fiber characteristic estimation, including length, width, and curvature, for the acacia pulp images was found to be 84, 85, and 60%, respectively.
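The full pipeline (including tensor voting and curve extraction from tensor maps) is specific to the thesis; the Python sketch below only illustrates the filtering, non-maximum suppression and hysteresis thresholding stages with scikit-image on a synthetic image, followed by a crude length measurement:

import numpy as np
from skimage.draw import line
from skimage.feature import canny
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

# Synthetic grayscale "pulp" image containing one bright, fiber-like streak.
img = np.zeros((200, 200))
for offset in range(4):                      # make the streak ~4 px wide
    rr, cc = line(20, 30 + offset, 170, 160 + offset)
    img[rr, cc] = 1.0

# Canny combines gradient filtering, non-maximum suppression and hysteresis
# thresholding -- three of the stages named above.
edges = canny(img, sigma=1.5)
print("edge pixels detected:", int(edges.sum()))

# Reduce the streak to a one-pixel curve and measure its length in pixels.
curves = skeletonize(img > 0.5)
for region in regionprops(label(curves)):
    print("approximate fiber length (px):", region.area)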
Abstract:
The purpose of this thesis was to study the design of demand forecasting processes. A literature review in the field of forecasting was conducted, covering general forecasting process design, forecasting methods and techniques, the role of human judgment in forecasting, and forecasting performance measurement. The purpose of the literature review was to identify the important design choices that an organization aiming to design or re-design its demand forecasting process would have to make. In the empirical part of the study, these choices and the existing knowledge behind them were assessed in a case study in which a demand forecasting process was re-designed for a company in the fast-moving consumer goods business. The new target process is described, as well as the reasoning behind the design choices made during the re-design. As a result, the most important design choices are highlighted, as well as their immediate effects on other processes directly tied to the demand forecasting process. Additionally, some new insights into the organizational aspects of demand forecasting processes are explored. The preliminary results indicate that in this case the new process did improve forecasting accuracy, although the organizational issues related to the process proved to be more challenging than anticipated.
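The abstract does not state which accuracy metric was used to judge the improvement; as an illustration only, a common choice such as the mean absolute percentage error could be computed as follows (the demand figures are hypothetical):

import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast-accuracy metric."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(mape(actual=[120, 135, 150, 160], forecast=[110, 140, 145, 170]))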
Abstract:
Finland’s rural landscape has gone through remarkable changes since the 1950s due to agricultural developments. Changed farming practices have influenced traditional landscape management in particular, and modifications in arable land structure and grassland transitions are notable. A review of previous studies reveals the importance of rural landscape composition and structure for species and landscape diversity, including the relevance of the presence of open ditches, the size of field and meadow patches, and the topology of the natural and agricultural landscape. This land-change study applies remotely sensed data from two time series and empirical geospatial analysis in Geographic Information Systems (GIS). The aim of this retrospective research is to detect agricultural land use and land cover change (LULCC) dynamics and to discuss the consequences of agricultural intensification for landscape structure from the perspective of landscape ecology. Measurements of LULC are derived directly from pre-processed aerial images by a variety of analytical procedures, including statistical methods and image interpretation. The methodological challenges are confronted in the process of landscape classification and in combining change detection approaches with landscape indices. Particular attention is paid to detecting agricultural landscape features at a small scale, which demands comprehensive understanding of such agroecosystems. Topological properties of the classified arable land and valley are determined in order to provide insight and to emphasize the role of field edges in the agricultural landscape as important habitat. Change detection dynamics are presented with a change matrix, and additional calculations of gain, loss, swap, net change, change rate and tendencies are made. Transition probabilities are computed following Markov’s probability model and are likewise presented as a matrix. The spatial aspect of the thesis is presented with illustrative maps showing the locations of the classified landscape categories and of the changes that occurred. It was confirmed that remarkable landscape changes have occurred in the Rekijoki valley. Landscape diversity has been strongly influenced by modern agricultural landscape change, as the number of patches (NP) of open ditches has decreased and the mean patch size (MPS) of arable plots has decreased. The overall change in the diversity of the landscape is indicated by the decrease in the Shannon diversity index (SHDI). The valley landscape, considered a traditional land use area, has experienced major transitional changes, as the meadow class has lost almost one third of its area to afforestation. Remarkable transitions have also occurred from forest to meadow and from arable land to built-up area. The measurement of boundaries between the modern and traditional landscape indicated a noticeable proportional increase in the arable land-forest edge type and a decrease in the arable land-meadow edge type. Probability calculations predict greater future changes for the traditional landscape, but also for arable land turning into built-up area.
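The change matrix and Markov transition probabilities mentioned above can be related in a few lines; the sketch below uses hypothetical counts and an illustrative class order (arable, meadow, forest, built-up), not results from the study:

import numpy as np

# Cross-tabulated land-cover change matrix: rows = class at time 1,
# columns = class at time 2 (hypothetical pixel counts).
change_matrix = np.array([
    [820,  30,  25, 15],   # arable   -> arable, meadow, forest, built-up
    [ 40, 210,  90,  5],   # meadow   -> ...
    [ 10,  35, 640,  5],   # forest   -> ...
    [  0,   0,   0, 60],   # built-up -> ...
], dtype=float)

# First-order Markov transition probabilities: normalise each row to sum to 1.
transition_probs = change_matrix / change_matrix.sum(axis=1, keepdims=True)
print(np.round(transition_probs, 3))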
Abstract:
In today’s world, because of the rapid advancement of technology and business, requirements are not clear and they change continuously during the development process. Because of these changing requirements, software development becomes very difficult. Using traditional software development methods such as the waterfall method is not a good option, as traditional methods are not flexible to changing requirements and the software can end up late and over budget. To develop high-quality software that satisfies the customer, organizations can use software development methods, such as agile methods, that are flexible to requirement changes at any stage of the development process. Agile methods are iterative and incremental methods that can accelerate the delivery of initial business value through continuous planning and feedback, with close communication between the customer and the developers. The main purpose of this thesis is to identify the problems in traditional software development and to show how agile methods reduce those problems. The study also examines the different success factors of agile methods, the success rate of agile projects, and a comparison between traditional and agile software development.