30 results for sampling methodology
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Summary: The possibility of using short-term urine collection to determine pseudouridine excretion in the urine of dairy cows
Abstract:
Firms operating in a changing environment need structures and practices that provide flexibility and enable rapid response to changes. Given the challenges they face in attempts to keep up with market needs, they have to continuously improve their processes and products, and develop new products to match market requirements. Success in changing markets depends on the firm's ability to convert knowledge into innovations, and consequently their internal structures and capabilities have an important role in innovation activities. According to the dynamic capability view of the firm, firms thus need dynamic capabilities in the form of assets, processes and structures that enable strategic flexibility and support entrepreneurial opportunity sensing and exploitation. Dynamic capabilities are also needed in conditions of rapid change in the operating environment, and in activities such as new product development and expansion to new markets. Despite the growing interest in these issues and the theoretical developments in the field of strategy research, there are still only very few empirical studies, and large-scale empirical studies in particular, that provide evidence that firms' dynamic capabilities are reflected in performance differences. This thesis represents an attempt to advance the research by providing empirical evidence of the linkages between the firm's dynamic capabilities and performance in internationalization and innovation activities. The aim is thus to increase knowledge and enhance understanding of the organizational factors that explain interfirm performance differences. The study is in two parts. The first part is the introduction and the second part comprises five research publications covering the theoretical foundations of the dynamic capability view and subsequent empirical analyses. Quantitative research methodology is used throughout. The thesis contributes to the literature in several ways.
While much of the prior research on dynamic capabilities is conceptual in nature, or conducted through case studies, this thesis introduces empirical measures for assessing their different aspects, and uses large-scale sampling to investigate the relationships between them and performance indicators. The dynamic capability view is further developed by integrating theoretical frameworks and research traditions from several disciplines. The results of the study provide support for the basic tenets of the dynamic capability view. The empirical findings demonstrate that the firm's ability to renew its knowledge base and other intangible assets, its proactive, entrepreneurial behavior, and the structures and practices that support operational flexibility are positively related to performance indicators.
Abstract:
There is a broad consensus among economists that technological change has been a major contributor to productivity growth and, hence, to the growth of material welfare in western industrialized countries, at least over the last century. Paradoxically, this issue has not been the focal point of theoretical economics. At the same time, we have witnessed the rise of the importance of technological issues at the strategic management level of business firms. Interestingly, the research has not adequately responded to this challenge either. The tension between the overwhelming empirical evidence of the importance of technology and its relative omission in the research offers a challenging target for a methodological endeavor. This study deals with the question of how different theories cope with technology and explain technological change. The focus is at the firm level, and the analysis concentrates on metatheoretical issues, except for the last two chapters, which examine the problems of strategic management of technology. Here the aim is to build a new evolutionary-based theoretical framework to analyze innovation processes at the firm level. The study consists of ten chapters. Chapter 1 poses the research problem and contrasts the two basic approaches, neoclassical and evolutionary, to be analyzed. Chapter 2 introduces the methodological framework, which is based on the methodology of isolation. Methodological and ontological commitments of the rival approaches are revealed and basic questions concerning their ways of theorizing are elaborated. Chapters 3-6 deal with the so-called substantive isolative criteria.
The aim is to examine how the different approaches cope with such critical issues as the inherent uncertainty and complexity of innovative activities (cognitive isolations, chapter 3), the boundedness of rationality of innovating agents (behavioral isolations, chapter 4), the multidimensional nature of technology (chapter 5), and governance costs related to technology (chapter 6). Chapters 7 and 8 put all these things together and look at the explanatory structures used by the neoclassical and evolutionary approaches in the light of substantive isolations. The last two chapters of the study utilize the methodological framework and tools to appraise different economics-based candidates in the context of strategic management of technology. The aim is to analyze how the different approaches answer the fundamental question: how can firms gain competitive advantage through innovations, and how can the rents appropriated from successful innovations be sustained? The last chapter introduces a new evolutionary-based technology management framework. The largely omitted issues of entrepreneurship are also examined.
Abstract:
Overall Equipment Effectiveness (OEE) is a key metric of operational excellence. OEE monitors the actual performance of equipment relative to its performance capabilities under optimal manufacturing conditions. It looks at the entire manufacturing environment, measuring, in addition to equipment availability, the production efficiency while the equipment is available to run products, as well as the efficiency loss that results from scrap, rework, and yield losses. Analysis of the OEE reveals improvement opportunities for the operation. One of the tools used for OEE improvement is the Six Sigma DMAIC methodology, a set of practices originally developed to improve processes by eliminating defects. It holds that continuous efforts to reduce variation in process outputs are key to business success, and that manufacturing and business processes can be measured, analyzed, improved and controlled. In the case of the Bottomer line AD2378 in the Papsac Maghreb Casablanca plant, the OEE figure reached 48.65%, which was below the accepted OEE performance for the group. This called for immediate action to improve the OEE. This Master's thesis focuses on the application of the Six Sigma DMAIC methodology to OEE improvement on the Bottomer line AD2378 in the Papsac Maghreb Casablanca plant. First, the use of Six Sigma DMAIC and OEE in operations measurement will be discussed. Afterwards, the different DMAIC phases will allow the identification of the improvement focus, the identification of the causes of low OEE performance, and the design of improvement solutions. These will be implemented to allow further tracking of the improvement's impact on plant operations.
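The OEE figure cited in the abstract is conventionally computed as the product of availability, performance (production efficiency while running), and quality (share of good output). The sketch below illustrates that conventional calculation; the factor values are chosen for illustration only and are not taken from the thesis.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors.

    availability: share of planned production time the equipment actually ran
    performance:  actual output rate relative to the ideal rate while running
    quality:      share of good units (excluding scrap, rework, yield losses)
    """
    return availability * performance * quality

# Illustrative figures: 80% availability, 75% performance, 81% quality
# give an OEE of about 48.6%, in the range reported for the Bottomer line.
print(f"OEE = {oee(0.80, 0.75, 0.81):.2%}")
```

Any one of the three factors caps the overall figure, which is why DMAIC-style analysis starts by locating which factor carries the largest loss.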
Abstract:
This thesis concentrates on developing a practical local approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration, which greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion.
Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
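For reference, the Gurson-Tvergaard model discussed in this abstract is built around a pressure-dependent yield function of the following standard form, quoted here from the general literature rather than from the thesis itself (q is the von Mises equivalent stress, \sigma_m the hydrostatic stress, \sigma_y the matrix yield stress, f the void volume fraction, and q_1, q_2, q_3 Tvergaard's fitting parameters):

```latex
\Phi = \left(\frac{q}{\sigma_y}\right)^{2}
     + 2\, q_1 f \cosh\!\left(\frac{3\, q_2\, \sigma_m}{2\, \sigma_y}\right)
     - \left(1 + q_3 f^{2}\right) = 0
```

The cosh term couples yielding to hydrostatic stress, which is what makes the model dilational and able to represent void-growth softening.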
Abstract:
The uncertainty of any analytical determination depends on both the analysis and the sampling. Uncertainty arising from sampling is usually not controlled, and methods for its evaluation are still little known. Pierre Gy's sampling theory is currently the most complete theory of sampling, and it also takes the design of the sampling equipment into account. Guides dealing with the practical issues of sampling also exist, published by international organizations such as EURACHEM, IUPAC (International Union of Pure and Applied Chemistry) and ISO (International Organization for Standardization). In this work, Gy's sampling theory was applied to several cases, including the analysis of chromite concentration estimated from SEM (Scanning Electron Microscope) images and the estimation of the total uncertainty of a drug dissolution procedure. The results clearly show that Gy's sampling theory can be utilized in both of the above-mentioned cases and that the uncertainties obtained are reliable. Variographic experiments, introduced in Gy's sampling theory, are beneficially applied in analyzing the uncertainty of auto-correlated data sets such as industrial process data and environmental discharges. The periodic behaviour of these kinds of processes can be observed by variographic analysis, as well as with the fast Fourier transformation and auto-correlation functions. With variographic analysis, the uncertainties are estimated as a function of the sampling interval. This is advantageous when environmental or process data are analyzed, as it can easily be estimated how the sampling interval affects the overall uncertainty. If the sampling frequency is too high, unnecessary resources will be used; on the other hand, if the frequency is too low, the uncertainty of the determination may be unacceptably high. Variographic methods can also be utilized to estimate the uncertainty of spectral data produced by modern instruments.
Since spectral data are multivariate, methods such as Principal Component Analysis (PCA) are needed when the data are analyzed. Optimization of a sampling plan increases the reliability of the analytical process, which may ultimately have beneficial effects on the economics of chemical analysis.
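The variographic experiment described above estimates uncertainty as a function of the sampling lag. A minimal sketch of the standard empirical variogram calculation, V(j) = Σ(x[i+j] − x[i])² / (2(N − j)), is shown below; the synthetic periodic series is an illustration of how process periodicity appears in the variogram, not data from the thesis.

```python
import math

def variogram(x, max_lag):
    """Empirical variogram V(j) of a series x for lags j = 1..max_lag.

    V(j) = sum((x[i+j] - x[i])**2) / (2 * (N - j)), the standard
    heterogeneity measure used in variographic analysis.
    """
    n = len(x)
    v = {}
    for j in range(1, max_lag + 1):
        diffs = [(x[i + j] - x[i]) ** 2 for i in range(n - j)]
        v[j] = sum(diffs) / (2 * (n - j))
    return v

# A process with a 12-step period: the variogram oscillates, dipping
# near lags that match the period (j = 12, 24) and peaking near the
# half-period (j = 6, 18).
series = [math.sin(2 * math.pi * i / 12) for i in range(120)]
v = variogram(series, 24)
```

Reading the variogram this way is how the sampling interval can be tuned: intervals whose lag sits near a variogram minimum add little uncertainty, while intervals near a maximum do.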
Abstract:
The purpose of this thesis was to define how product carbon footprint analysis and its results can be used in a company's internal development as well as in customer and interest group guidance, and how these factors are related to corporate social responsibility. A cradle-to-gate carbon footprint was calculated for three products: Torino Whole grain barley, Torino Pearl barley, and Elovena Barley grit & oat bran, all of them made of Finnish barley. The carbon footprint of the Elovena product was used to determine carbon footprints for porridge portions cooked in an industrial kitchen. The basic calculation data was collected from several sources. Most of the data originated from Raisio Group's contractual farmers and Raisio Group's cultivation, processing and packaging specialists. Data from national and European literature and database sources was also used. The electricity consumption for the porridge portions' carbon footprint calculations was determined with practical measurements. The carbon footprint calculations were conducted according to the ISO 14044 standard, and the PAS 2050 guide was also applied. A consequential functional unit was applied in the porridge portions' carbon footprint calculations. Most of the emissions from the barley products' life cycle originate from primary production. The nitrous oxide emissions from cultivated soil and the use and production of nitrogenous fertilisers contribute over 50% of the products' carbon footprint. Torino Pearl barley has the highest carbon footprint due to the lowest processing output. Reductions in the products' carbon footprint can be achieved through developments in cultivation and grain processing. The carbon footprint of a porridge portion can be reduced by using domestically produced plant-based ingredients and by making the best possible use of the kettle. Carbon footprint calculation can be used to identify possible improvement points related to corporate environmental responsibility.
Several improvement actions are also related to economic and social responsibility through better raw-material utilization and expense reductions.