965 results for Process analysis


Relevance:

30.00%

Publisher:

Abstract:

This chapter argues that the electoral competition between the New Left and the Radical Right is best understood as a cultural divide anchored in different class constituencies. Based on individual-level data from the European Social Survey, we analyze the links between voters' class position, their economic and cultural preferences and their party choice for four small and affluent European countries. We find a striking similarity in the class pattern across countries. Everywhere, the New Left attracts disproportionate support from socio-cultural professionals and presents a clear-cut middle-class profile, whereas the Radical Right is most successful among production and service workers and receives least support from professionals. In general, the Radical Right depends on the votes of lower-educated men and older citizens and has turned into a new type of working-class party. However, its success within the working class is due not to economic, but to cultural issues. The voters of the Radical Right collide with those of the New Left over a cultural conflict of identity and community - and not over questions of redistribution. A full-grown cleavage has thus emerged in the four countries under study, separating a libertarian-universalistic pole from an authoritarian-communitarian pole and going along with a process of class realignment.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this Master's thesis was to develop the order-delivery processes of purchasing at Fazer Suklaa so that raw materials and packaging materials can be handled as efficiently as possible. First, the main phases of the order-delivery process and the factors affecting them were examined with the help of the literature. The empirical part began with a review of the current state of purchasing at Fazer Suklaa, carried out by means of a time study directed at the buyers, interviews, and a description of the current order-delivery processes. Building the target state started with a classification of the purchased materials and suppliers. On this basis, the materials could be divided among three different order-delivery processes. In the automatic order-delivery process, the various phases are automated together with key suppliers. The semi-automatic process is based on a system in which the supplier views Fazer's production plan over the Internet and replenishes materials accordingly. In the simple process, materials of low purchase value are handled as close to the point of use as possible, and the process phases are carried out with as little work as possible. The greatest benefits of implementing the target processes were found to be a reduction in the number of process phases and the automation of manual work. In this way, the workload of the different process phases was reduced and inventory levels were lowered, so that the total costs of the order-delivery process could be decreased.

Relevance:

30.00%

Publisher:

Abstract:

In drilling processes, and especially in deep-hole drilling, monitoring and control of mechanical parameters (e.g. force, torque, vibration and acoustic emission) are essential. The main focus of this thesis work is to study the characteristics of the deep-hole drilling process and to optimize the monitoring system for controlling it. Vibration is considered a major defect area of the deep-hole drilling process and often leads to breakage of the drill; therefore this area is studied by vibration analysis and optimization of the workpiece fixture using the finite element method, and suggestions are presented. Based on a study of the present monitoring system and a survey of new sensor products, modifications and recommendations are suggested for optimizing the present monitoring system for excellent performance in deep-hole drilling research and measurements.

Relevance:

30.00%

Publisher:

Abstract:

In this article, the author provides a framework to guide research in emotional intelligence. Studies conducted up to the present bear on a conception of emotional intelligence as pertaining to the domain of consciousness and investigate the construct with a correlational approach. As an alternative, the author explores processes underlying emotional intelligence, introducing the distinction between conscious and automatic processing as a potential source of variability in emotionally intelligent behavior. Empirical literature is reviewed to support the central hypothesis that individual differences in emotional intelligence may be best understood by considering the way individuals automatically process emotional stimuli. Providing directions for research, the author encourages the integration of experimental investigation of processes underlying emotional intelligence with correlational analysis of individual differences and fosters the exploration of the automaticity component of emotional intelligence.

Relevance:

30.00%

Publisher:

Abstract:

Life cycle analysis (LCA) is a comprehensive method for assessing the environmental impact of a product or an activity over its entire life cycle. The purpose of conducting LCA studies varies from one application to another. In general, the main aim of using LCA is to reduce the environmental impact of products by guiding the decision-making process towards more sustainable solutions. The most critical phase in an LCA study is the Life Cycle Impact Assessment (LCIA), where the life cycle inventory (LCI) results for the substances related to the studied system are transformed into understandable impact categories that represent the impact on the environment. In this research work, a general structure clarifying the steps that should be followed in order to conduct an LCA study effectively is presented. These steps are based on the ISO 14040 standard framework. In addition, a survey is done on the most widely used LCIA methodologies. Recommendations about possible developments and suggestions for further research work regarding the use of LCA and LCIA methodologies are discussed as well.
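The LCIA characterization step described above can be sketched in a few lines: inventory amounts are multiplied by characterization factors and summed into impact category scores. This is a generic, minimal sketch; the substance names and factor values below are illustrative assumptions, not taken from any specific LCIA methodology.

```python
# Minimal sketch of LCIA characterization: LCI amounts are weighted
# by characterization factors (CFs) and summed per impact category.
# Substances and CF values are invented for illustration only.

def characterize(inventory, factors):
    """Map an LCI result to impact category scores.

    inventory: {substance: amount}
    factors:   {category: {substance: characterization factor}}
    """
    scores = {}
    for category, cfs in factors.items():
        scores[category] = sum(
            cfs.get(substance, 0.0) * amount
            for substance, amount in inventory.items()
        )
    return scores

# Example: CO2 and CH4 emissions mapped to a single climate change
# score (kg CO2-eq) with GWP100-like factors.
lci = {"CO2": 10.0, "CH4": 0.5}
cf = {"climate change": {"CO2": 1.0, "CH4": 28.0}}
print(characterize(lci, cf))  # {'climate change': 24.0}
```

Real LCIA methods differ mainly in which categories and factors they define; the arithmetic of the characterization step itself stays this simple.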

Relevance:

30.00%

Publisher:

Abstract:

This thesis gives an overview of the validation process for thermal hydraulic system codes and it presents in more detail the assessment and validation of the French code CATHARE for VVER calculations. Three assessment cases are presented: loop seal clearing, core reflooding and flow in a horizontal steam generator. The experience gained during these assessment and validation calculations has been used to analyze the behavior of the horizontal steam generator and the natural circulation in the geometry of the Loviisa nuclear power plant. The cases presented are not exhaustive, but they give a good overview of the work performed by the personnel of Lappeenranta University of Technology (LUT). A large part of the work has been performed in co-operation with the CATHARE team in Grenoble, France. The design of a Russian type pressurized water reactor, VVER, differs from that of a Western-type PWR. Most thermal-hydraulic system codes are validated only for the Western-type PWRs. Thus, the codes should be assessed and validated also for the VVER design in order to establish any weaknesses in the models. This information is needed before the codes can be used for safety analysis. The results of the assessment and validation calculations presented here show that the CATHARE code can be used also for thermal-hydraulic safety studies for VVER type plants. However, some areas have been indicated which need to be reassessed after further experimental data become available. These areas are mostly connected to the horizontal steam generators, like condensation and phase separation in primary side tubes. The work presented in this thesis covers a large number of the phenomena included in the CSNI code validation matrices for small and intermediate leaks and for transients. Also some of the phenomena included in the matrix for large break LOCAs are covered.
The matrices for code validation for VVER applications should be used when future experimental programs are planned for code validation.

Relevance:

30.00%

Publisher:

Abstract:

Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex, but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers towards finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for the purposes of motivating and benchmarking. The earlier research in the field of R&D performance analysis has generally focused on either the activities and considerable factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of R&D process - prior to the selection of R&D performance measures, or on proposed principles or actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement that can be found in the literature are to some extent adaptable also to the development of measurement systems and selecting the measures in R&D activities.
However, it is necessary to emphasize the special aspects related to the measurement of R&D performance in a way that makes the development of new approaches especially for R&D performance measure selection necessary: First, the special characteristics of R&D - such as the long time lag between the inputs and outcomes, as well as the overall complexity and difficult coordination of activities - influence the R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches for R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures and by providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support the management in their decision making of R&D with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on the promotion of the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features through providing guidelines by novel types of approaches.
The gathering of data and conducting case studies in metal and electronic industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations, which is essential, as recognition of the most important problem areas is a very crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that there were no systematic approaches utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be more directly utilized in the decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance to the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to make a contribution to the present body of knowledge of R&D performance analysis by facilitating dealing with the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel types of approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations.
In the whole research, facilitation of dealing with the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing the R&D performance measure selection are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from the scientific as well as from the practical point of view.

Relevance:

30.00%

Publisher:

Abstract:

Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, the determination of the corrosion chemistry and the lifetime estimation is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated: · A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system utilizes implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used for the minimization of corrosion risks in the design of fluidized bed boilers.
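The fuzzy-logic side of such a prediction model can be illustrated with a tiny sketch: a crisp input variable is mapped to partial memberships in fuzzy risk classes via triangular membership functions. The input variable (fuel chlorine content) and the breakpoints below are invented assumptions for illustration, not taken from the actual model described above.

```python
# Minimal fuzzy-membership sketch: a crisp chlorine content (an
# assumed input variable) gets partial membership in "low" and
# "high" corrosion-risk sets. Breakpoints are illustrative only.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def corrosion_risk(cl_percent):
    """Fuzzify a chlorine content (wt-%) into risk memberships."""
    return {
        "low": triangular(cl_percent, -0.1, 0.0, 0.3),
        "high": triangular(cl_percent, 0.1, 0.5, 0.9),
    }

# A mid-range input belongs partially to both sets at once,
# which is exactly what makes fuzzy rules smoother than crisp ones.
print(corrosion_risk(0.2))
```

In a full model, rule evaluation and defuzzification (or a neural network, as in the abstract) would follow this fuzzification step.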

Relevance:

30.00%

Publisher:

Abstract:

It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
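The regression step described above can be sketched in a few lines of code: fit a line relating a weld attribute to welding time by ordinary least squares. The (length, time) pairs below are invented for illustration; the thesis fits such models to measured data from 71 real assemblies, and with more predictors than length alone.

```python
# Illustrative ordinary-least-squares fit of a welding time model:
# time = a + b * weld_length. Data points are made up for the demo.

def fit_line(xs, ys):
    """Return intercept a and slope b of the least-squares line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Weld length in metres vs. welding time in minutes (invented).
lengths = [0.5, 1.0, 2.0, 4.0]
times = [6.0, 11.0, 21.0, 41.0]
a, b = fit_line(lengths, times)
print(round(a, 2), round(b, 2))  # 1.0 10.0: 1 min setup, 10 min/m
```

A designer-facing version would embed fitted coefficients like these per joint type, so that cost follows directly from the geometry of a candidate design.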

Relevance:

30.00%

Publisher:

Abstract:

Belt-drive systems have been and still are the most commonly used form of power transmission in various applications of different scale and use. The peculiar features of belt-drive dynamics include highly nonlinear deformation, large rigid body motion, dynamic contact through a dry friction interface between the belt and the pulleys with sticking and slipping zones, cyclic tension of the belt during operation and creeping of the belt against the pulleys. The life of the belt-drive depends critically on these features, and therefore a model which can be used to study the correlations between the initial values and the responses of belt-drives is a valuable source of information for the belt-drive development process. Traditionally, finite element models of belt-drives consist of a large number of elements, which may lead to computational inefficiency. In this research, the beneficial features of the absolute nodal coordinate formulation are utilized in the modeling of belt-drives in order to fulfill the following requirements for the successful and efficient analysis of belt-drive systems: the exact modeling of the rigid body inertia during an arbitrary rigid body motion, the consideration of the effect of the shear deformation, the exact description of the highly nonlinear deformations and a simple and realistic description of the contact. Distributed contact forces and high-order beam and plate elements based on the absolute nodal coordinate formulation are applied to the modeling of belt-drives in two- and three-dimensional cases. According to the numerical results, a realistic behavior of the belt-drives can be obtained with a significantly smaller number of elements and degrees of freedom in comparison to the previously published finite element models of belt-drives.
The results of the examples demonstrate the functionality and suitability of the absolute nodal coordinate formulation for the computationally efficient and realistic modeling of belt-drives. This study also introduces an approach to avoid the problems related to the use of the continuum mechanics approach in the definition of elastic forces in the absolute nodal coordinate formulation. This approach is applied to a new computationally efficient two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation. The proposed beam element uses a linear displacement field neglecting higher-order terms and a reduced number of nodal coordinates, which leads to fewer degrees of freedom in a finite element.

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates the strategy implementation process of enterprises; a process which has lacked academic attention compared with the rich strategy formation research tradition. Strategy implementation is viewed as a process ensuring that the strategies of an organisation are realised fully and quickly, yet with constant consideration of changing circumstances. The aim of this study is to provide a framework for identifying, analysing and removing the strategy implementation bottleneck of an organisation and thus for intensifying its strategy process. The study opens by specifying the concept, tasks and key actors of the strategy implementation process; in particular, arguments for the critical implementation role of the top management are provided. In order to facilitate the analysis and synthesis of the core findings of the scattered doctrine, six characteristic approaches to the strategy implementation phenomenon are identified and compared. The Bottleneck Framework is introduced as an instrument for arranging potential strategy realisation problems, prioritising an organisation's implementation obstacles and focusing the improvement measures accordingly. The SUCCESS Framework is introduced as a mnemonic for the seven critical factors to be taken into account when promoting strategy implementation. Both frameworks are empirically tested by applying them to a real strategy implementation intensification process in an international, industrial, group-structured case enterprise.

Relevance:

30.00%

Publisher:

Abstract:

A beautiful smile is directly associated with white teeth. Nowadays oral care has grown and developed processes for beautiful smiles. Dental bleaching is frequently used in dentistry, not only for health care but also for aesthetic treatment. With the possibility of teeth bleaching, the important question becomes: how white is the tooth? Color is related to individual perception, and in order to assess correct tooth color identification, many color guides, models, color spaces and analytical methods have been developed. Despite all of these useful tools, color interpretation depends on environmental factors, the position of the sample during data acquisition and, most importantly, the sensitivity of the instrument. The common methods have proved to be useful: they are easy to handle and some are portable, but they do not have high sensitivity. The present work is based on the integration of a new analytical technique for color acquisition. Hyperspectral imaging (HSI) is able to perform image analysis with high quality and efficiency. HSI is used in many fields, and we used it for color image analysis within the bleaching process. The main comparison was done between the HSI system and a colorimeter through two different bleaching protocols. The results showed that HSI has higher sensitivity than the colorimeter. While analyzing the dental surface with HSI, we were also able to notice surface changes. These changes were analyzed by roughness studies.
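One common way a whiteness change is quantified in bleaching studies is the CIE76 color difference (Delta E) between two CIELAB measurements, which both colorimeters and HSI pipelines can output. The sketch below is generic, and the L*a*b* values are invented for illustration; the abstract does not state which color-difference formula was used.

```python
# Sketch of the CIE76 color difference: Euclidean distance in
# CIELAB space. L*a*b* values below are hypothetical examples.
import math

def delta_e76(lab1, lab2):
    """CIE76 Delta E between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical tooth color before and after a bleaching protocol:
# lighter (higher L*) and less yellow (lower b*) afterwards.
before = (72.0, 1.5, 18.0)
after = (78.0, 0.5, 14.0)
print(round(delta_e76(before, after), 2))  # 7.28
```

A Delta E of this size would be well above the usually cited perceptibility thresholds, i.e. a clearly visible whitening.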

Relevance:

30.00%

Publisher:

Abstract:

Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor. A failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment since it constitutes ∼75% of the total exposure range, which corresponds to an exposure estimate of 20-22 orders of magnitude. Our results indicate that there is a trade-off between accuracy and precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
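The local, one-at-a-time sensitivity analysis described above can be sketched as follows: vary each input factor alone over its plausible span while holding the others at base values, and compare the resulting output ranges. The toy multiplicative "exposure model" and its factor spans below are invented stand-ins, not ECETOC TRA, Stoffenmanager or ART.

```python
# One-at-a-time (OAT) local sensitivity analysis on a toy
# multiplicative exposure model. Factor names and spans are
# illustrative assumptions only.
import math

def exposure(factors):
    """Toy exposure model: product of all modifying factors."""
    result = 1.0
    for v in factors.values():
        result *= v
    return result

def oat_sensitivity(base, spans):
    """Log10 output range when each factor alone is varied over
    its span, all other factors held at their base values."""
    ranges = {}
    for name, (lo, hi) in spans.items():
        low = exposure({**base, name: lo})
        high = exposure({**base, name: hi})
        ranges[name] = abs(math.log10(high / low))
    return ranges

base = {"emission": 1.0, "ventilation": 1.0, "duration": 1.0}
spans = {"emission": (0.01, 10.0), "ventilation": (0.3, 3.0),
         "duration": (0.5, 1.0)}
print(oat_sensitivity(base, spans))
# 'emission' dominates: ~3 orders of magnitude vs ~1 and ~0.3
```

This is exactly the kind of comparison that identifies a dominant compartment: the factor with the largest log-range is the one where input mistakes hurt the estimate most.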

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: (1) To assess the outcomes of minimally invasive simple prostatectomy (MISP) for the treatment of symptomatic benign prostatic hyperplasia in men with large prostates and (2) to compare them with open simple prostatectomy (OSP). METHODS: A systematic review of outcomes of MISP for benign prostatic hyperplasia with meta-analysis was conducted. The article selection process was conducted according to the PRISMA guidelines. RESULTS: Twenty-seven observational studies with 764 patients were analyzed. The mean prostate volume was 113.5 ml (95 % CI 106-121). The mean increase in Qmax was 14.3 ml/s (95 % CI 13.1-15.6), and the mean improvement in IPSS was 17.2 (95 % CI 15.2-19.2). Mean duration of operation was 141 min (95 % CI 124-159), and the mean intraoperative blood loss was 284 ml (95 % CI 243-325). One hundred and four patients (13.6 %) developed a surgical complication. In comparative studies, length of hospital stay (WMD -1.6 days, p = 0.02), length of catheter use (WMD -1.3 days, p = 0.04) and estimated blood loss (WMD -187 ml, p = 0.015) were significantly lower in the MISP group, while the duration of operation was longer than in OSP (WMD 37.8 min, p < 0.0001). There were no differences in improvements in Qmax, IPSS and perioperative complications between both procedures. The small study sizes, publication bias, lack of systematic complication reporting and short follow-up are limitations. CONCLUSIONS: MISP seems an effective and safe treatment option. It provides similar improvements in Qmax and IPSS as OSP. Despite taking longer, it results in less blood loss and shorter hospital stay. Prospective randomized studies comparing OSP, MISP and laser enucleation are needed to define the standard surgical treatment for large prostates.
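The pooled weighted mean differences (WMD) reported above come from combining per-study effects; one common way to do this is fixed-effect inverse-variance pooling, sketched below. The per-study mean differences and standard errors are invented numbers, not the review's actual data, and the review's exact pooling method is not stated in the abstract.

```python
# Sketch of fixed-effect inverse-variance pooling of per-study
# mean differences into one weighted mean difference (WMD).
# Study values below are hypothetical.

def pooled_wmd(studies):
    """Fixed-effect pooled mean difference.

    studies: list of (mean_difference, standard_error) tuples;
    each study is weighted by 1 / SE^2.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    weighted_sum = sum(w * md for (md, _), w in zip(studies, weights))
    return weighted_sum / sum(weights)

# Three hypothetical studies of blood loss (ml), MISP minus OSP:
# negative values favor MISP, precise studies count more.
studies = [(-150.0, 50.0), (-200.0, 40.0), (-220.0, 80.0)]
print(round(pooled_wmd(studies), 1))  # -185.7
```

The same machinery yields the confidence interval around the pooled estimate (from the reciprocal of the summed weights), which is what the WMD figures with 95% CIs in the abstract represent.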

Relevance:

30.00%

Publisher:

Abstract:

A variation of task analysis was used to build an empirical model of how therapists may facilitate the client assimilation process, as described in the Assimilation of Problematic Experiences Scale. A rational model was specified and considered in light of an analysis of therapist in-session performances (N = 117) drawn from six inpatient therapies for depression. The therapist interventions were measured by the Comprehensive Psychotherapeutic Interventions Rating Scale. Consistent with the rational model, confronting interventions were particularly useful in helping clients elaborate insight. However, rather than there being a small number of progress-related interventions at lower levels of assimilation, therapists' use of interventions was broader than hypothesized and drew from a wide range of therapeutic approaches. Concerning the higher levels of assimilation, there was insufficient data to allow an analysis of the therapist's progress-related interventions.