14 results for Dynamic data analysis

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed so that the right decisions can be made already during the test run. The barrier to determining the boiler balance during test runs is the long process of chemical analyses of the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to get the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy, located in Messukylä in Tampere. The calculation was created in the data management computer of the pilot plant's automation system. The calculation is made in the Microsoft Excel environment, which provides a good base and functions for handling large databases and calculations without any delicate programming. The automation system in the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new system, MetsoDNA, has good data management properties, which are necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found: either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance of the boiler is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system. Sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
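The core of the heat-balance-based method can be sketched in a few lines. This is a hypothetical simplification, not the thesis's Excel implementation: the function name, the loss terms and the default loss fractions below are illustrative assumptions.

```python
def fuel_flow_from_heat_balance(heat_output_kw, lhv_kj_per_kg,
                                unburned_carbon_loss=0.02,
                                other_losses=0.05):
    """Estimate the fuel mass flow (kg/s) from the boiler's heat balance.

    Simplified energy balance: the heat released by combustion must
    cover the measured heat output plus the losses, i.e.
        m_fuel * LHV * (1 - losses) = Q_out
    The loss fractions used here are illustrative assumptions.
    """
    efficiency = 1.0 - unburned_carbon_loss - other_losses
    return heat_output_kw / (lhv_kj_per_kg * efficiency)

# 4 MW measured heat output and a fuel with LHV 20 MJ/kg
m_fuel = fuel_flow_from_heat_balance(4000.0, 20000.0)
```

The sensitivity result is visible directly in this form: any error in the measured heat output or in the lower heating value of the fuel propagates proportionally into the estimated fuel flow, and from there into the whole mass balance.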

Relevance:

100.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

Identification of low-dimensional structures and main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
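A minimal sketch of the central numerical ingredient, assuming a Gaussian kernel density and taking a mode (a 0-dimensional ridge) as the target. The thesis's method projects onto higher-dimensional ridges with a trust region safeguard; both are omitted here, and a plain regularized Newton step is used instead.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Gradient and Hessian (up to a constant factor) of a Gaussian
    kernel density estimate with bandwidth h, evaluated at point x."""
    diffs = data - x                                      # (n, d)
    w = np.exp(-0.5 * np.sum(diffs**2, axis=1) / h**2)    # kernel weights
    grad = (w[:, None] * diffs).sum(axis=0) / h**2
    hess = (np.einsum('n,ni,nj->ij', w, diffs, diffs) / h**4
            - w.sum() * np.eye(x.size) / h**2)
    return grad, hess

def newton_to_mode(x, data, h, steps=50):
    """Regularized Newton iteration toward a mode of the density
    (no trust region safeguard, unlike the thesis's method)."""
    for _ in range(steps):
        g, H = kde_grad_hess(x, data, h)
        x = x - np.linalg.solve(H - 1e-8 * np.eye(x.size), g)
    return x

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(500, 2))       # synthetic 2-D sample
mode = newton_to_mode(np.array([0.5, -0.5]), data, h=0.8)
```

At a converged point the density gradient vanishes; a general ridge point satisfies the weaker condition that the gradient is orthogonal to the trailing eigenvectors of the Hessian.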

Relevance:

100.00%

Publisher:

Abstract:

This master's thesis evaluates the competitors in the market for welding quality management software. The competitive field is new, and there is no exact information about what kinds of competitors are in the market. Welding quality management software helps companies to guarantee high quality. The software guarantees high quality by ensuring that the welder is qualified and follows the welding procedure specifications and the given parameters. In addition, the software collects all the data from the welding process and creates the required documents from it. The theoretical part of the thesis consists of a literature review of solution business, competitor analysis and competitive forces theory, and welding quality management. The empirical part is a qualitative study in which competing welding quality management software products are examined and their users interviewed. As a result of the thesis, a new competitor analysis model for welding quality management software is obtained. With the model, the software products can be rated on the basis of the primary and secondary features they offer. Secondly, the thesis analyses the current competitive situation by applying the newly developed competitor analysis model.
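The rating idea behind the competitor analysis model can be illustrated with a toy weighted score. The thesis does not publish its weighting scheme, so the weights, feature-coverage fractions and product names below are purely hypothetical.

```python
def score_software(primary_coverage, secondary_coverage,
                   w_primary=0.7, w_secondary=0.3):
    """Weighted score from the fractions (0..1) of primary and secondary
    features a product offers; the weights are hypothetical."""
    return w_primary * primary_coverage + w_secondary * secondary_coverage

# Hypothetical competitors: (primary coverage, secondary coverage)
products = {"A": (0.9, 0.4), "B": (0.6, 0.9), "C": (0.8, 0.7)}
ranking = sorted(products, key=lambda p: score_software(*products[p]),
                 reverse=True)
```

Weighting primary features more heavily reflects the model's premise that core quality-assurance capabilities matter more than convenience features when ranking products.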

Relevance:

100.00%

Publisher:

Abstract:

The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second examines cell lineage specification in mouse embryonic stem cells.
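As a flavour of the kind of gene expression analysis step such tools automate, here is a minimal two-group comparison on simulated data. This is a generic Welch t-statistic sketch on a log-scale expression matrix, not any specific tool developed in the thesis.

```python
import numpy as np

def differential_expression(expr, group_a, group_b):
    """Per-gene Welch t-statistic between two sample groups.

    expr: (genes, samples) matrix of log-scale expression values.
    Returns one t-statistic per gene (positive = higher in group A).
    """
    a, b = expr[:, group_a], expr[:, group_b]
    se = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1]
                 + b.var(axis=1, ddof=1) / b.shape[1])
    return (a.mean(axis=1) - b.mean(axis=1)) / se

rng = np.random.default_rng(1)
expr = rng.normal(8.0, 1.0, size=(100, 20))   # 100 genes, 20 samples
expr[0, 10:] += 4.0                           # gene 0 up-regulated in group B
t = differential_expression(expr, list(range(10)), list(range(10, 20)))
```

A full pipeline would add normalization before this step and multiple-testing correction of the resulting p-values after it, which is exactly the kind of workflow the thesis packages behind an intuitive interface.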

Relevance:

100.00%

Publisher:

Abstract:

This research concerns the Urban Living Idea Contest conducted by Creator Space™ of BASF SE during its 150th anniversary in 2015. The main objectives of the thesis are to provide a comprehensive analysis of the Urban Living Idea Contest (ULIC) and to propose a number of improvement suggestions for future years. More than 4,000 data points were collected and analyzed to investigate the functionality of different elements of the contest. Furthermore, a set of improvement suggestions was proposed to BASF SE. The novelty of this thesis lies in the data collection and the original analysis of the contest, which identified its critical elements as well as the areas that could be improved. The author of this research was a member of the organizing team and involved in the decision-making process from the beginning until the end of the ULIC.

Relevance:

100.00%

Publisher:

Abstract:

This study presents an understanding of how a U.S.-based international MBA school has been able to achieve competitive advantage within a relatively short period of time. A framework is built to comprehend how the dynamic capability and value co-creation theories are connected, and to understand how dynamic capabilities have enabled value co-creation between the school and its students, leading to such competitive advantage for the school. The data collection method followed a qualitative single-case study with a process perspective. Seven semi-structured interviews were conducted in September and October of 2015; one current employee of the MBA school was interviewed, with the other six being graduates and/or former employees of the MBA school. In addition, the researcher has worked as a recruiter at the MBA school, which helped to build bridges and form a coherent whole from the empirical findings. Data analysis was conducted by first identifying themes from the interviews, after which a narrative was written and a causal network model was built. Thus, a combination of thematic analysis, narrative and grounded theory was used as the data analysis method. This study finds that value co-creation is enabled by the dynamic capabilities of the MBA school; conversely, the capabilities would not be dynamic if value co-creation did not take place. Thus, this study shows that even though the two theories represent different levels of analysis, they are intertwined and together they can help to explain competitive advantage. The MBA case school's dynamic capabilities are identified to be its sales & marketing capabilities and international market creation capabilities; thus, the study finds that the MBA school does not only co-create value with existing students (customers) in the school setting; instead, most of the value co-creation happens between the school and the student cohorts (network) already in the recruiting phase. Therefore, as a theoretical implication, the network should be considered part of the context. The main value created seems to lie in the MBA case school's international setting and networks. MBA schools around the world can learn from this study; schools should try to find their own niche and specialize, based on their own values and capabilities. With a differentiating focus and unique, practical content, the schools can and should be well marketed and proactively sold in order to receive more student applications and enhance competitive advantage. Even though an MBA school can effectively be treated as a business, as the study shows, the main emphasis should still be on providing quality education. Good content with efficient marketing can be the winning combination for an MBA school.

Relevance:

90.00%

Publisher:

Abstract:

Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis. This has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, thus making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should be different than when the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. Differences in data analysis methods are compared in this thesis with data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
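To make the PCA/PLS distinction concrete, here is a minimal sketch of both on simulated data: PCA scores via SVD of the centered matrix, and a bare-bones PLS1 (NIPALS) regression. This is generic textbook code, not the thesis's implementation, and the simulated data are illustrative.

```python
import numpy as np

def pca_scores(X, n_comp=2):
    """Principal component scores via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def pls1(X, y, n_comp=2):
    """Minimal PLS1 (NIPALS): returns coefficients b so that
    y ≈ X_centered @ b + mean(y)."""
    Xr = X - X.mean(axis=0)
    yr = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                       # weight: covariance with y
        w /= np.linalg.norm(w)
        t = Xr @ w                          # score vector
        p = Xr.T @ t / (t @ t)              # X loading
        qk = yr @ t / (t @ t)               # y loading
        Xr = Xr - np.outer(t, p)            # deflate X
        yr = yr - qk * t                    # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + 0.01 * rng.normal(size=60)
b = pls1(X, y, n_comp=5)
pred = (X - X.mean(axis=0)) @ b + y.mean()
```

The difference in purpose is visible here: PCA summarizes X alone, while PLS extracts components chosen for their covariance with the response, which is why the two give different answers on the same industrial data.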

Relevance:

90.00%

Publisher:

Abstract:

Raw measurement data do not always immediately convey useful information, but applying mathematical and statistical analysis tools to measurement data can improve the situation. Data analysis can offer benefits such as acquiring meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, mathematical tools such as Qlucore Omics Explorer (QOE) and sparse Bayesian regression (SB) are used. Later on, linear regression is used to build a model based on a subset of variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both SB and linear regression models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model can fit the whole available dataset well. It is therefore proposed for future work to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset, collected in a more systematic fashion than the present data, for further analysis.
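The benefit of averaging noisy daily data before regression can be reproduced with a toy example. The setup below assumes the process variable varies slowly (monthly scale) while the daily quality measurement carries heavy noise; this assumption is mine, chosen to mimic the reported behaviour, and the numbers are not plant data.

```python
import numpy as np

rng = np.random.default_rng(3)
# A slowly varying process variable: constant within each 30-day month
x = np.repeat(rng.normal(size=12), 30)
# Daily quality measurements carry heavy day-to-day measurement noise
y = 2.0 * x + rng.normal(scale=3.0, size=360)

def fit_r2(x, y):
    """Least-squares line fit with intercept; returns the R^2 of the fit."""
    A = np.column_stack([np.ones(len(y)), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

r2_daily = fit_r2(x, y)
r2_monthly = fit_r2(x.reshape(12, 30).mean(axis=1),
                    y.reshape(12, 30).mean(axis=1))
```

Averaging over a month shrinks the independent daily noise while leaving the slow signal intact, so the monthly model explains far more of the variability, matching the thesis's 1-day versus 1-month comparison.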

Relevance:

90.00%

Publisher:

Abstract:

Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

90.00%

Publisher:

Abstract:

Studying the testis is complex, because the tissue has a very heterogeneous cell composition and its structure changes dynamically during development. In the reproductive field, cell composition is traditionally studied by morphometric methods such as immunohistochemistry and immunofluorescence. These techniques provide accurate quantitative information about cell composition, cell-cell associations and the localization of the cells of interest. However, the sample preparation, processing, staining and data analysis are laborious and may take several working days. Flow cytometry protocols coupled with DNA stains have played an important role in providing quantitative information on testicular cell populations in ex vivo and in vitro studies. Nevertheless, the addition of specific cell markers such as intracellular antibodies would allow more specific identification of cells of crucial interest during spermatogenesis. For this study, adult Sprague-Dawley rats were used for optimization of the flow cytometry protocol. Specific steps within the protocol were optimized to obtain a single-cell suspension representative of the cell composition of the starting material. The fixation and permeabilization procedures were optimized to be compatible with DNA stains and fluorescent intracellular antibodies. Optimization was achieved by quantitative analysis of specific parameters such as the recovery of meiotic cells, the amount of debris, and comparison of the proportions of the various cell populations with already published data. As a result, a new and fast flow cytometry method coupled with DNA staining and intracellular antigen detection was developed. This new technique is suitable for analysis of population behavior and of specific cells during postnatal testis development and spermatogenesis in rodents. The rapid protocol recapitulated the known vimentin and γH2AX protein expression patterns during rodent testis ontogenesis. Moreover, the assay was applicable to phenotype characterization of the SCRbKO and E2F1KO mouse models.

Relevance:

90.00%

Publisher:

Abstract:

In this thesis the process of building software for transport accessibility analysis is described. The goal was to create software which is easy to distribute and simple to use for users without a particular background in geographical data analysis. It was shown that existing tools do not suit this particular task due to their complex interfaces or significant rendering times. The goal was accomplished by applying modern approaches to building web applications, such as maps based on vector tiles, the FLUX architecture design pattern and module bundling. It was discovered that vector tiles have considerable advantages over image-based tiles, such as faster rendering and real-time styling.
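The FLUX pattern mentioned above keeps the data flow unidirectional: views fire actions, a single dispatcher routes them to stores, and stores notify the views. A minimal language-agnostic sketch of that loop (written in Python here; the thesis's application is a JavaScript web app, and the class and action names below are invented for illustration):

```python
class Dispatcher:
    """Routes every action to all registered stores (FLUX pattern)."""
    def __init__(self):
        self._stores = []

    def register(self, store):
        self._stores.append(store)

    def dispatch(self, action):
        for store in self._stores:
            store.handle(action)

class MapStore:
    """Holds map state; views re-render only when notified of a change."""
    def __init__(self):
        self.zoom = 10
        self.listeners = []

    def handle(self, action):
        if action["type"] == "ZOOM":
            self.zoom = action["level"]
            for callback in self.listeners:
                callback(self.zoom)

dispatcher = Dispatcher()
store = MapStore()
dispatcher.register(store)

seen = []                                  # stands in for a re-rendering view
store.listeners.append(seen.append)
dispatcher.dispatch({"type": "ZOOM", "level": 12})
```

Because state changes flow only through the dispatcher, the map view never mutates state directly, which keeps a rendering-heavy application like a vector-tile map predictable to debug.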

Relevance:

90.00%

Publisher:

Abstract:

This study analyzes a young Finnish micro-sized firm that is attempting to reach internationalization readiness in the pre-internationalization stage. The purpose of this research is to analyze and better understand how a young firm reaches internationalization readiness in the pre-internationalization stage. Small firm internationalization is a vastly researched topic. Little emphasis has been placed on the specific antecedents that help the firm reach internationalization readiness in the pre-internationalization stage. The contribution of this research is thus two-fold. First, the research contributes to known theories of firm internationalization. Second, the research further extends knowledge related to how firms reach internationalization readiness specifically in the pre-internationalization stage. The theoretical background of the research involves the traditional stage theory (Uppsala model), pre-internationalization stage theory, international entrepreneurship theory and dynamic capabilities theory. With the help of these four relevant theories, empirical data was collected. The research method utilized in this study was a qualitative single case study combined with critical realist philosophy. The data analysis of this research was conducted using abduction in order to allow freedom in the analysis of the research findings. The empirical data was collected through semi-structured, face-to-face interviews. The key respondents in this study were the two managers of the case company. The findings of this study revealed four important themes from the case company’s perspective towards reaching internationalization readiness in the pre-internationalization stage. Extensive knowledge of the home market and target market were the two most important themes in this research. The next most relevant theme for reaching internationalization readiness in the pre-internationalization stage was the managers’ previous international business experiences. 
The final theme affecting the firm’s ability to reach internationalization readiness was the firm’s specific resources. Even though the research findings of this study are case sensitive, the research insights and explanations have the potential to be transferred to similar firms and contexts. Future research should therefore aim towards more longitudinal studies in which the context is emphasized. This should include a variety of firms in similar stages of internationalization and contexts. Future studies of this kind would be of great benefit to academia.

Relevance:

90.00%

Publisher:

Abstract:

This study investigates the development of relationships in the same global virtual team (GVT) working on different projects. The purpose is to explore how interpersonal relationships develop in terms of the characteristics of virtuality, and whether the project lifespan has any influence on the development of these relationships. Since relationships are dynamic in nature and are influenced by variables at multiple levels, including the individual, group and organizational levels, the characteristics of virtuality have been considered from all these aspects in order to study their influence on the development of relationships. In this study, relationships have been studied at two different levels. At first, dyadic relationships between two members of a GVT were analyzed; thereafter, the focus was on the development of relationships within the team, based on these dyads. Characteristics having an influence on the development of relationships include trust, physical distance, time zone difference, cultural and language differences, the level of formalization in the organization and the means of communication used by team members. The level of formalization and the means of communication are two characteristics which emerged from the empirical study and were found to have a direct influence on the development of relationships; the remaining characteristics were identified through the literature review. In order to conduct the study, a qualitative methodology was applied. Empirical data were collected in a single case study, using semi-structured interviews as the data gathering technique. Data analysis was performed by applying thematic analysis, supported by company documents such as work sheets, minutes of meetings and recordings of conferences. The findings of the study indicate that the development of relationships, both at the dyadic level and at the team level, is influenced by different events taking place among different members of the GVT. These events have either a positive or a negative influence on the characteristics of virtuality, which leads to the development of the relationships. It was found that trust, among all factors, plays the greatest role in the development of these relations. Contrary to the belief that most conflicts arise among members of different cultures, they are equally likely to happen among members from the same culture in a GVT environment. The study suggests that relationship development is not a smooth process but fluctuates based on different events in teams. For further research, teams within large firms should be studied along these lines. This study is an early attempt to bring together different characteristics of virtuality which have previously been studied individually. It is therefore plausible to conduct similar studies so as to generalize the findings of this study, which has provided a starting point.