887 results for Analysis and statistical methods
Abstract:
Life cycle analysis (LCA) is a comprehensive method for assessing the environmental impact of a product or an activity over its entire life cycle. The purpose of conducting LCA studies varies from one application to another, but in general the main aim of using LCA is to reduce the environmental impact of products by guiding the decision-making process towards more sustainable solutions. The most critical phase in an LCA study is the Life Cycle Impact Assessment (LCIA), where the life cycle inventory (LCI) results for the substances relevant to the system under study are transformed into understandable impact categories that represent the impact on the environment. In this research work, a general structure clarifying the steps that shall be followed in order to conduct an LCA study effectively is presented. These steps are based on the ISO 14040 standard framework. In addition, a survey is made of the most widely used LCIA methodologies. Recommendations about possible developments and suggestions for further research work regarding the use of LCA and LCIA methodologies are discussed as well.
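As an illustration of the LCIA step described above, where inventory results are multiplied by characterization factors and summed per impact category, a minimal sketch follows (the inventory flows and factor values below are illustrative examples, not taken from any particular LCIA method):

```python
# Minimal LCIA characterization sketch: impact score = sum over substances of
# (inventory amount x characterization factor). Values are illustrative only.

# Life cycle inventory: emitted mass per functional unit (kg)
inventory = {"CO2": 120.0, "CH4": 0.8, "N2O": 0.05}

# Illustrative climate-change characterization factors (kg CO2-eq per kg)
gwp100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def characterize(inventory, factors):
    """Aggregate an inventory into a single impact-category indicator."""
    return sum(amount * factors.get(substance, 0.0)
               for substance, amount in inventory.items())

score = characterize(inventory, gwp100)
print(f"Climate change score: {score:.2f} kg CO2-eq")
```

The same pattern repeats for every impact category an LCIA method defines; only the factor table changes.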
Abstract:
Small centrifugal compressors are more and more widely used in many industrial systems because of their higher efficiency and better off-design performance compared to piston and scroll compressors, as well as their higher work coefficient per stage than axial compressors. Higher efficiency is always the aim of the compressor designer. In the present work, the influence of four parts of a small centrifugal compressor that compresses a heavy-molecular-weight real gas has been investigated in order to achieve higher efficiency. Two parts concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used, a quasi-steady approach is utilized, and Chien's k-epsilon turbulence model is used to model the turbulence. A new practical real gas model is presented in this study; it is easily generated, its accuracy is controllable, and it is fairly fast. The numerical results and measurements show good agreement. The influence of tip clearance on the performance of a small compressor is obvious. The pressure ratio and efficiency decrease as the tip clearance is increased, while the total enthalpy rise remains almost constant. The decrease in pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower mass flow rates. The flow angles at the inlet and outlet of the impeller increase as the tip clearance is increased. The detailed flow field shows that leakage flow is the main reason for the performance drop. The secondary flow region becomes larger as the tip clearance is increased, the area of the main flow is compressed, and the flow uniformity is consequently decreased. A detailed study shows that the leakage flow rate is higher near the exit of the impeller than near its inlet.
Based on this phenomenon, a new partially shrouded impeller is used; the impeller is shrouded near its exit. The results show that the flow field near the exit of the impeller is greatly changed by the partially shrouded impeller, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow fields in the impeller are changed by moving the splitter of the impeller in the circumferential direction. Moving the splitter slightly towards the suction side of the long blade can improve the performance of the compressor. The total enthalpy rise is reduced if only the leading edge of the splitter is moved to the suction side of the long blade, and the performance of the compressor is decreased if the blade is bent away from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five different pinch shape configurations, the efficiency of a straight-line pinch is the highest at the design and lower mass flow rates, while at higher mass flow rates the efficiency of a concave pinch is the highest. The sharp corner of the pinch is the main reason for the decrease in efficiency and should be avoided. The spanwise variation of the flow angles entering the diffuser is decreased if a pinch is applied. A three-dimensional low-solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser. The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low-solidity vaned diffuser, which also leads to higher efficiency of the twisted diffuser. Investigation of the detailed flow fields shows that at lower mass flow rates separation occurs later in the twisted diffuser than in the conventional low-solidity vaned diffuser, which suggests a possibly wider flow range for the twisted diffuser.
Abstract:
The dynamical properties of shaken granular materials are important in many industrial applications where shaking is used to mix, segregate and transport them. In this work a systematic, large-scale simulation study has been performed to investigate the rheology of dense granular media, in the presence of gas, in a three-dimensional vertical cylinder filled with glass balls. The base wall of the cylinder is subjected to sinusoidal oscillation in the vertical direction. The viscoelastic behavior of glass balls during a collision has been studied experimentally using a modified Newton's cradle device. By analyzing the results of the measurements with a numerical model based on the finite element method, the viscous damping coefficient was determined for the glass balls. To obtain detailed information about the interparticle interactions in a shaker, a simplified model for collisions between particles of a granular material was proposed. In order to simulate the flow of the surrounding gas, a formulation of the equations for fluid flow in a porous medium including particle forces was proposed. These equations are solved with the Large Eddy Simulation (LES) technique using a subgrid model originally proposed for compressible turbulent flows. For a pentagonal prism-shaped container under vertical vibrations, the results show that oscillon-type structures were formed. Oscillons are highly localized particle-like excitations of the granular layer. This self-sustaining state was named by analogy with its closest large-scale analog, the soliton, first documented by J.S. Russell in 1834. The results reported by Bordbar and Zamankhan (2005b) also show that a slightly revised fluctuation-dissipation theorem might apply to shaken sand, which appears to be a system far from equilibrium and can exhibit strong spatial and temporal variations in quantities such as density and local particle velocity.
In this light, hydrodynamic-type continuum equations were presented for describing the deformation and flow of dense gas-particle mixtures. The constitutive equation used for the stress tensor provides an effective viscosity with a liquid-like character at low shear rates and a gas-like behavior at high shear rates. Numerical solutions of these hydrodynamic equations were obtained for predicting the flow dynamics of dense mixtures of gas and particles in vertical cylindrical containers. For a heptagonal prism-shaped container under vertical vibrations, the model was found to predict bubbling behavior analogous to that observed experimentally. This bubbling behavior may be explained by the unusual gas pressure distribution found in the bed. In addition, oscillon-type structures were found to form in a vertically vibrated, pentagonal prism-shaped container, in agreement with computer simulation results. These observations suggest that the pressure distribution plays a key role in the deformation and flow of dense mixtures of gas and particles under vertical vibrations. The present models provide greater insight toward the explanation of poorly understood hydrodynamic phenomena in the field of granular flows and dense gas-particle mixtures. The models can be generalized to investigate granular material-container wall interactions, an issue of high interest in industrial applications. By following this approach, ideal processing conditions and powder transport can be achieved in industrial systems.
Abstract:
During the last few years the need for new motor types has grown, since both high efficiency and accurate dynamic performance are demanded in industrial applications. For this reason, new effective control systems such as direct torque control (DTC) have been developed. Permanent magnet synchronous motors (PMSM) are well suited to new adjustable-speed AC inverter drives, because their efficiency and power factor do not depend on the pole pair number and speed to the same extent as in induction motors. Therefore, an induction motor (IM) with a mechanical gearbox can often be replaced with a direct PM motor drive. Space as well as costs are saved, because the efficiency increases and the cost of maintenance decreases. This thesis deals with the design criteria, analytical calculation and analysis of the permanent magnet synchronous motor for both sinusoidal and rectangular air-gap flux density. It is examined how the air-gap flux, flux densities, inductances and torque can be estimated analytically for salient-pole and non-salient-pole motors. The optimal construction has been sought by means of analytical calculations for machines rotating at relatively low speeds of 300 rpm to 600 rpm, which are suitable speeds e.g. in the pulp and paper industry. The calculations are verified by finite element calculations and by measurements on a prototype motor. The prototype motor is a 45 kW, 600 rpm PMSM with buried V-magnets, which is a very appropriate construction for high-torque motors with high performance. With the purpose-built prototype machine it is possible not only to verify the analytical calculations but also to show whether the 600 rpm PMSM can replace the 1500 rpm IM with a gear. It can also be tested whether the outer dimensions of the PMSM may be the same as for the IM and whether the PMSM can in this case produce a 2.5-fold torque, in consequence of which it may be possible to achieve the same power.
The thesis also considers how to design a permanent magnet synchronous motor for relatively low-speed applications that require high motor torque and efficiency as well as bearable costs of permanent magnet materials. It is shown how the selection of different parameters affects the motor properties. Key words: permanent magnet synchronous motor, PMSM, surface magnets, buried magnets
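The analytical torque estimation mentioned above can be illustrated with the standard dq-frame torque expression for a salient-pole PMSM; the machine constants below are hypothetical placeholders, not the 45 kW prototype's data:

```python
# Standard dq-frame torque equation for a salient-pole PMSM (constants below
# are hypothetical, not taken from the thesis's prototype machine):
#   T_e = (3/2) * p * (psi_pm * i_q + (L_d - L_q) * i_d * i_q)

def pmsm_torque(p, psi_pm, L_d, L_q, i_d, i_q):
    """Electromagnetic torque [Nm]; p is the pole-pair count."""
    return 1.5 * p * (psi_pm * i_q + (L_d - L_q) * i_d * i_q)

p, psi_pm = 5, 0.9           # pole pairs, PM flux linkage [Vs]
L_d, L_q = 8e-3, 16e-3       # buried magnets give saliency: L_q > L_d [H]

# With L_q > L_d, driving a negative i_d adds reluctance torque on top of
# the magnet torque:
t_magnet_only = pmsm_torque(p, psi_pm, L_d, L_q, 0.0, 60.0)
t_with_reluct = pmsm_torque(p, psi_pm, L_d, L_q, -30.0, 60.0)
```

This saliency term is one reason buried-magnet rotors suit high-torque designs: part of the torque comes from reluctance rather than magnet material.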
Abstract:
This thesis investigates the strategy implementation process of enterprises, a process which has lacked academic attention compared with the rich research tradition on strategy formation. Strategy implementation is viewed as a process ensuring that the strategies of an organisation are realised fully and quickly, yet with constant consideration of changing circumstances. The aim of this study is to provide a framework for identifying, analysing and removing the strategy implementation bottlenecks of an organisation, and thus for intensifying its strategy process. The study opens by specifying the concept, tasks and key actors of the strategy implementation process; in particular, arguments for the critical implementation role of top management are provided. In order to facilitate the analysis and synthesis of the core findings of the scattered doctrine, six characteristic approaches to the strategy implementation phenomenon are identified and compared. The Bottleneck Framework is introduced as an instrument for arranging potential strategy realisation problems, prioritising an organisation's implementation obstacles and focusing the improvement measures accordingly. The SUCCESS Framework is introduced as a mnemonic for the seven critical factors to be taken into account when promoting strategy implementation. Both frameworks are empirically tested by applying them to a real strategy implementation intensification process in an international, industrial, group-structured case enterprise.
Abstract:
In 2008, a Swiss Academies of Arts and Sciences working group chaired by Professor Emilio Bossi issued a "Memorandum on scientific integrity and the handling of misconduct in the scientific context", together with a paper setting out principles and procedures concerning integrity in scientific research. In the Memorandum, unjustified claims of authorship in scientific publications are referred to as a form of scientific misconduct - a view widely shared in other countries. In the Principles and Procedures, the main criteria for legitimate authorship are specified, as well as the associated responsibilities. It is in fact not uncommon for disputes about authorship to arise with regard to publications in fields where research is generally conducted by teams rather than individuals. Such disputes may concern not only the question of who is or is not to be listed as an author but also, frequently, the precise sequence of names, if the list is to reflect the various authors' roles and contributions. Subjective assessments of the contributions made by the individual members of a research group may differ substantially. As scientific collaboration - often across national boundaries - is now increasingly common, ensuring appropriate recognition of all parties is a complex matter and, where disagreements arise, it may not be easy to reach a consensus. In addition, customs have changed over the past few decades; for example, the practice of granting "honorary" authorship to an eminent researcher - formerly not unusual - is no longer considered acceptable. It should be borne in mind that the publications list has become by far the most important indicator of a researcher's scientific performance; for this reason, appropriate authorship credit has become a decisive factor in the careers of young researchers, and it needs to be managed and protected accordingly.
At the international and national level, certain practices have therefore developed concerning the listing of authors and the obligations of authorship. The Scientific Integrity Committee of the Swiss Academies of Arts and Sciences has collated the relevant principles and regulations and formulated recommendations for authorship in scientific publications. These should help to prevent authorship disputes and offer guidance in the event of conflicts.
Identification-commitment inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes, such as personnel turnover intentions, organizational citizenship behavior, etc. (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model which underlies the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. An elliptical least-squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model, which underlies the relation between Commitment and Identification, although each one is operatively different.
Abstract:
Within Data Envelopment Analysis, several alternative models allow for an environmental adjustment, and the majority of them deliver divergent results. Decision makers therefore face the difficult task of selecting the most suitable model. This study is performed to overcome this difficulty and thereby fills a research gap. First, a two-step web-based survey is conducted. It aims (1) to identify the selection criteria, (2) to prioritize and weight the selection criteria with respect to the goal of selecting the most suitable model and (3) to collect preferences about which model is preferable for fulfilling each selection criterion. Second, the Analytic Hierarchy Process is used to quantify the preferences expressed in the survey. Results show that the understandability, the applicability and the acceptability of the alternative models are valid selection criteria. The selection of the most suitable model depends on the preferences of the decision makers with regard to these criteria.
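The Analytic Hierarchy Process step described above can be sketched numerically: priority weights are obtained from the principal eigenvector of a pairwise-comparison matrix. The 3x3 matrix below is purely illustrative, not the survey's actual data, comparing the three criteria the study identifies:

```python
import numpy as np

# AHP priority weights from a pairwise-comparison matrix via the principal
# eigenvector (Saaty's method). Illustrative comparison of the three
# criteria named in the study: understandability, applicability,
# acceptability. Entry A[i, j] says how strongly criterion i is preferred
# over criterion j on the 1..9 scale; A[j, i] is its reciprocal.
A = np.array([
    [1.0, 3.0, 5.0],   # understandability vs. the other two
    [1/3, 1.0, 3.0],   # applicability
    [1/5, 1/3, 1.0],   # acceptability
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # normalized priority weights

# Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI, with RI(3) = 0.58
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                       # CR < 0.1 means acceptably consistent
```

With these judgments, understandability receives the largest weight; the same computation extends to weighting the candidate DEA models under each criterion.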
Abstract:
Railway freight wagons are ageing very rapidly; this applies to Russia, Finland, Sweden and Europe more widely. In Russia and Europe many wagons are in use that have already exceeded their recommended service life. They are nevertheless still used for transport, as not enough new replacement wagons are available. The newest wagons are usually acquired by wagon-leasing companies or new railway operators - this applies especially to Russia, where wagon leasing has become a very popular option. Forecasts indicate that the wagon shortage will grow at least until 2010; if the popularity of rail as a freight transport mode increases, the strengthening demand for wagons will continue considerably longer. The situation of the European and Russian wagon fleet is also reflected in problems for the engineering-works industry serving it - in general, European companies in the sector are weakly profitable and their turnover is hardly growing; Russian and Ukrainian companies have been in the same situation, although in the very last few years the situation has turned better for some of them. When the turnover, profit and shareholder value of companies in these regions are compared with their US competitors, the latter's performance is found to be considerably better, and these companies are also able to pay dividends to their owners. The purpose of this study was to develop a new type of freight wagon for traffic between Finland and Russia, and possibly also China. The wagon type should be able to serve multiple uses, carrying both raw materials and containers, thus balancing the transport-weight imbalance between the traffic directions. As the basis for the development work we used a database of over 1000 Russian wagon types, from which we selected, using the Data Envelopment Analysis method, the wagons best suited for container transport (we examined about 40 wagon types more closely), leaving as little empty space in the train as possible while still being able to carry the chosen container load.
Since Russian wagons seldom have load-capacity problems, the comparison can be made on the basis of freight-train length and total weight. After simulating a wagon type suitable for combined transport in a transport network found in practice (e.g. raw timber to Finland or China and containers back towards Russia), we found that a shorter wagon length offers a cost advantage, especially in raw-material transport, but also because the number of border-crossing stops may be reduced. A shorter wagon type is also more flexible with respect to different container lengths (the use of 40-foot containers has become more common in recent years). Finally, we propose a network-based approach for producing the new wagon type, in which part of the wagon would be made in Finland and part in Russia and/or Ukraine. The wagon type should be registered in Russia, since it can then be used in traffic between Finland and Russia, and, where applicable, between Russia and China.
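The Data Envelopment Analysis screening described above can be sketched with a small input-oriented CCR model solved as a linear program. The wagon figures below are made-up stand-ins (wagon length and tare weight as inputs, container capacity as output), not the study's 1000-wagon database:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA sketch: for each wagon type (DMU), minimize the
# radial input contraction theta subject to a reference mix of all DMUs.
# Data are illustrative placeholders, not the study's wagon database.
X = np.array([[14.7, 13.9, 19.6, 12.0],    # input 1: wagon length [m]
              [24.0, 22.0, 27.0, 21.5]])   # input 2: tare weight [t]
Y = np.array([[2.0,  2.0,  3.0,  1.0]])    # output: TEU capacity

def ccr_efficiency(o, X, Y):
    """Efficiency of DMU o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lam]
    A_ub = np.vstack([
        np.c_[-X[:, [o]], X],                   # X@lam - theta*x_o <= 0
        np.c_[np.zeros((s, 1)), -Y],            # -Y@lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [ccr_efficiency(o, X, Y) for o in range(X.shape[1])]
```

Wagons with efficiency 1 lie on the frontier (here the third DMU, which carries the most containers per metre and per tonne); the rest waste train length or weight allowance, which is exactly the screening criterion described above.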
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares types of learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in algorithms.
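The idea behind fast cross-validation for regularized least-squares can be illustrated with the classical ridge-regression leave-one-out identity, under which all held-out residuals follow from a single fit. This is a generic sketch of that identity, not necessarily the thesis's exact algorithm:

```python
import numpy as np

# Fast leave-one-out cross-validation for regularized least-squares (ridge):
# with hat matrix H = X (X^T X + lam*I)^-1 X^T, the held-out residual at
# point i is  e_i = (y_i - yhat_i) / (1 - H_ii),
# so all n leave-one-out predictions come from ONE fit instead of n refits.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)
lam = 1.0

G = X.T @ X + lam * np.eye(X.shape[1])
H = X @ np.linalg.solve(G, X.T)          # regularized hat matrix
resid = y - H @ y                        # in-sample residuals
loo_resid_fast = resid / (1.0 - np.diag(H))

def loo_resid_naive(i):
    """Reference: refit without point i, then predict it."""
    mask = np.arange(len(y)) != i
    Xi, yi = X[mask], y[mask]
    w = np.linalg.solve(Xi.T @ Xi + lam * np.eye(X.shape[1]), Xi.T @ yi)
    return y[i] - X[i] @ w
```

The shortcut turns an O(n) sequence of refits into a single factorization plus a diagonal read-off, which is what makes cross-validated model selection cheap for this family of learners.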
Abstract:
In this paper, a manufacturability analysis and a collection of design aspects are made for a microwave test fixture. Aspects of applying systematic design to microwave test-fixture design and manufacturing are also analysed. Special questionnaires for the component and its machining are prepared in order to gather the information necessary to ensure the DFM(A) aspects of the component. The aspects of easy manufacturing for machining the microwave test fixture are collected. Material selection is discussed and the manufacturing stages of the prototype are presented.
Abstract:
There is an increasing interest in the use of breath analysis for monitoring human physiology and exposure to toxic substances or environmental pollutants. This review focuses on the current status of the sampling procedures, collection devices and sample-enrichment methodologies used for exhaled breath-vapor analysis. We discuss the different parameters affecting each of the above steps, taking into account the requirements for breath analysis in exposure assessments and the need to analyze target compounds at sub-ppbv levels. Finally, we summarize the practical applications of exposure analysis in the past two decades.
Abstract:
Stratospheric ozone can be measured accurately using a limb scatter remote sensing technique in the UV-visible spectral region of solar light. The advantages of this technique include a good vertical resolution and a good daytime coverage of the measurements. In addition to ozone, UV-visible limb scatter measurements contain information about NO2, NO3, OClO, BrO and aerosols. There are currently several satellite instruments continuously scanning the atmosphere and measuring the UV-visible region of the spectrum, e.g., the Optical Spectrograph and Infrared Imager System (OSIRIS) launched on the Odin satellite in February 2001, and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) launched on Envisat in March 2002. Envisat also carries the Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument, which also measures limb-scattered sunlight under bright-limb occultation conditions; these conditions occur during daytime occultation measurements. The global coverage of the satellite measurements is far better than that of any other ozone measurement technique, but the measurements are still sparse in the spatial domain. Measurements over a given area are also repeated relatively rarely, while the composition of the Earth's atmosphere changes dynamically. Assimilation methods are therefore needed in order to combine the information of the measurements with an atmospheric model. In recent years, the focus of assimilation algorithm research has turned towards filtering methods. The traditional Extended Kalman filter (EKF) method takes into account not only the uncertainty of the measurements but also the uncertainty of the evolution model of the system. However, the computational cost of a full-blown EKF increases rapidly as the number of model parameters increases, so the EKF method cannot be applied directly to the stratospheric ozone assimilation problem.
The work in this thesis is devoted to the development of inversion methods for satellite instruments and the development of assimilation methods used with atmospheric models.
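The cost issue mentioned above can be made concrete with one predict/update cycle of a (linear) Kalman filter, the building block that the EKF linearizes around the current estimate. The toy numbers below are hypothetical; what matters is the n x n covariance matrix P, whose storage and propagation become prohibitive when n is the number of state variables in an atmospheric model:

```python
import numpy as np

# One predict/update cycle of a linear Kalman filter on a toy 2-state system.
# The n x n covariance P must be stored and multiplied at every step, which
# is why a full(-blown) EKF is infeasible for high-dimensional model states.
n = 2
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # state evolution model
Q = 0.01 * np.eye(n)                      # model-error covariance
Hm = np.array([[1.0, 0.0]])               # observation operator
R = np.array([[0.1]])                     # measurement-error covariance

x = np.zeros(n)                           # state estimate
P = np.eye(n)                             # state covariance (n x n)
z = np.array([1.2])                       # incoming measurement

# Predict: propagate the estimate and inflate P with the model uncertainty
x = F @ x
P = F @ P @ F.T + Q

# Update: blend the prediction with the measurement via the Kalman gain
S = Hm @ P @ Hm.T + R                     # innovation covariance
K = P @ Hm.T @ np.linalg.inv(S)           # Kalman gain
x = x + K @ (z - Hm @ x)
P = (np.eye(n) - K @ Hm) @ P
```

Each cycle costs O(n^3) in the covariance products, so for grid-scale ozone fields one must resort to reduced-rank or otherwise approximate filters rather than the full EKF.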
Abstract:
Different methods to determine total fat (TF) and fatty acids (FA), including trans fatty acids (TFA), in diverse foodstuffs were evaluated, incorporating gravimetric methods and gas chromatography with flame ionization detection (GC/FID), in accordance with a modified AOAC 996.06 method. Concentrations of TF and FA obtained through these different procedures diverged (p < 0.05) and TFA concentrations varied beyond 20 % of the reference values. The modified AOAC 996.06 method met both accuracy and precision requirements, was fast, and employed small amounts of low-toxicity solvents. Therefore, the results showed that this methodology is viable for adoption in Brazil for nutritional labeling purposes.