967 results for Processing methods
Abstract:
Industrial applications increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing. Achieving it requires testing both the hardware and the software. The main goal of this work is hardware testing and hardware testability, because a reliable hardware platform is the foundation for future real-time systems. The thesis presents the design of a processor board suitable for digital signal processing. The processor board is intended for predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied in the design of the processor board together with older methods. Experiences and observations on the applicability of the methods are reported at the end of the work. The aim of the work is to develop a subcomponent for a web-based monitoring system that has been developed at the Department of Electrical Engineering at Lappeenranta University of Technology.
Abstract:
Recent advances in machine learning increasingly enable the automatic construction of computer-assisted methods that have been difficult or laborious to program by human experts. Tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions; another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words, and show that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions, and we design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in learning algorithms.
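To make the regularized least-squares machinery referred to above concrete, the following minimal Python sketch trains a kernel RLS model in closed form and computes leave-one-out predictions with the standard hat-matrix shortcut. The Gaussian kernel, the regularization value and the toy data are illustrative assumptions only; this is not the thesis's actual algorithm or kernels.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=0.5):
    """RBF kernel matrix between row-vector sets X and Z (illustrative kernel choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rls(K, y, lam=1.0):
    """Closed-form kernel RLS: dual coefficients a = (K + lam*I)^-1 y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def loo_predictions(K, y, lam=1.0):
    """Fast leave-one-out predictions without retraining:
    f_loo_i = (f_i - H_ii * y_i) / (1 - H_ii), with H = K (K + lam*I)^-1."""
    n = K.shape[0]
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    f = H @ y
    h = np.diag(H)
    return (f - h * y) / (1.0 - h)

# toy usage with random data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)
K = gaussian_kernel(X, X)
alpha = train_rls(K, y)
print(loo_predictions(K, y)[:3])
```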
Abstract:
Forensic intelligence has recently gathered increasing attention as a potential expansion of forensic science that may contribute to a wider policing and security context. Whilst the new avenue is certainly promising, relatively few attempts to incorporate models, methods and techniques into practical projects are reported. This work reports a practical application of a generalised and transversal framework for developing forensic intelligence processes, referred to here as the Transversal model and adapted from previous work. Visual features present in the images of four datasets of false identity documents were systematically profiled and compared using image processing for the detection of a series of modus operandi (M.O.) actions. The nature of these series and their relation to the notion of common source was evaluated with respect to alternative known information, and inferences were drawn regarding the respective crime systems. A total of 439 documents seized by police and border guard authorities across 10 jurisdictions in Switzerland, with known and unknown source-level links, formed the datasets for this study. Training sets were developed based on both known source-level data and visually supported relationships. Performance was evaluated through the use of intra-variability and inter-variability scores drawn from over 48,000 comparisons. The optimised method exhibited significant sensitivity combined with strong specificity and demonstrated its ability to support forensic intelligence efforts.
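The intra-variability and inter-variability scoring mentioned above can be pictured with a small sketch: pairwise similarity scores are split into same-source and different-source populations and summarised. The cosine similarity, the random feature vectors and the source labels are hypothetical placeholders, not the image-profiling features actually used in the study.

```python
import numpy as np
from itertools import combinations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def variability_scores(features, labels):
    """Split all pairwise similarity scores into intra-source (same label)
    and inter-source (different label) populations."""
    intra, inter = [], []
    for i, j in combinations(range(len(features)), 2):
        s = cosine(features[i], features[j])
        (intra if labels[i] == labels[j] else inter).append(s)
    return np.array(intra), np.array(inter)

# toy usage: 6 documents from 2 hypothetical sources
rng = np.random.default_rng(1)
labels = [0, 0, 0, 1, 1, 1]
feats = [rng.normal(loc=lab, size=16) for lab in labels]
intra, inter = variability_scores(feats, labels)
print(intra.mean(), inter.mean())
```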
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, which are used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
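As a toy counterpart to the pairwise view of preference learning described above, the sketch below fits a linear least-squares ranker on preference pairs and uses it to order items. The squared margin loss, the synthetic features and the pair construction are illustrative assumptions, not the regularized least-squares ranking algorithms proposed in the thesis.

```python
import numpy as np

def fit_linear_ranker(X, pairs, lam=1.0):
    """Least-squares ranker on preference pairs (i preferred over j):
    minimize sum ((w.(x_i - x_j) - 1)^2) + lam * ||w||^2."""
    D = np.array([X[i] - X[j] for i, j in pairs])  # pairwise difference vectors
    t = np.ones(len(pairs))                        # target margin of 1 per pair
    A = D.T @ D + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, D.T @ t)

def rank(X, w):
    """Order items by descending score w.x."""
    return np.argsort(-(X @ w))

# toy usage: the first feature drives the hypothetical preference
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 4))
pairs = [(i, j) for i in range(20) for j in range(20) if X[i, 0] > X[j, 0] + 0.5]
w = fit_linear_ranker(X, pairs)
print(rank(X, w)[:5])
```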
Abstract:
Low-quality mine drainage from tailings facilities persists as one of the most significant global environmental concerns related to sulphide mining. Due to the large variation in geological and environmental conditions at mine sites, universal approaches to the management of mine drainage are not always applicable. Instead, site-specific knowledge of the geochemical behaviour of waste materials is required for the design and closure of the facilities. In this thesis, tailings-derived water contamination and the factors causing the pollution were investigated at two coeval active sulphide mine sites in Finland: the Hitura Ni mine and the Luikonlahti Cu-Zn-Co-Ni mine and talc processing plant. A hydrogeochemical study was performed to characterise the tailings-derived water pollution at Hitura. Geochemical changes in the Hitura tailings were evaluated with a detailed mineralogical and geochemical investigation (solid-phase speciation, acid mine drainage potential, pore water chemistry) and using a spatial assessment to identify the mechanisms of water contamination. A similar spatial investigation, applying selective extractions, was carried out in the Luikonlahti tailings area for comparative purposes (Hitura low-sulphide tailings vs. Luikonlahti sulphide-rich tailings). At both sites, the hydrogeochemistry of tailings seepage waters was further characterised to examine the net results of the processes observed within the impoundments and to identify constraints for water treatment. At Luikonlahti, annual and seasonal variation in effluent quality was evaluated based on a four-year monitoring period. Observations pertinent to future assessment and mine drainage prevention at existing and future tailings facilities were presented based on the results. A combination of hydrogeochemical approaches provided a means to delineate the tailings-derived neutral mine drainage at Hitura. Tailings effluents with elevated Ni, SO₄²⁻ and Fe content had dispersed into the surrounding aquifer through a levelled-out esker and underneath the seepage collection ditches. In future mines, this could be avoided with additional basal liners in tailings impoundments where the permeability of the underlying Quaternary deposits is inadequate, and with sufficiently deep ditches. Based on the studies, extensive sulphide oxidation with subsequent metal release may already begin during active tailings disposal. The intensity and onset of oxidation depended on, for example, the Fe sulphide content of the tailings, the water saturation level, and the time of exposure of fresh sulphide grains. Continuous disposal decreased sulphide weathering at the surface of low-sulphide tailings, but oxidation started if they were left uncovered after disposal ceased. In the sulphide-rich tailings, delayed burial of the unsaturated tailings had resulted in thick oxidized layers, despite the continuous operation. Sulphide weathering and contaminant release also occurred in the border zones. Based on the results, the prevention of sulphide oxidation should already be considered in the planning of tailings disposal, taking into account the border zones. Moreover, even low-sulphide tailings should be covered without delay after active disposal ceases. The quality of tailings effluents showed wide variation within a single impoundment and between the two different types of tailings facilities assessed.
The factors affecting effluent quality included the source materials, the intensity of weathering of tailings and embankment materials along the seepage flow path, inputs from the process waters, the water retention time in the tailings, and climatic seasonality. In addition, modifications to the tailings impoundment may markedly change the effluent quality. The wide variation in tailings effluent quality poses challenges for treatment design. The final decision on water management requires quantification of the spatial and seasonal fluctuation at the site, taking into account changes resulting from the eventual closure of the impoundment. Overall, comprehensive hydrogeochemical mapping was deemed essential in the identification of critical contaminants and their sources at mine sites. Mineralogical analysis, selective extractions, and pore water analysis were a good combination of methods for studying the weathering of tailings and evaluating metal mobility from the facilities. Selective extractions combined with visual observations and pH measurements of tailings solids were, nevertheless, adequate for describing the spatial distribution of sulphide oxidation in tailings impoundments. Seepage water chemistry provided additional data on geochemical processes in the tailings and was necessary for defining constraints for water treatment.
Abstract:
The objectives of this research work, “Identification of the Emerging Issues in Recycled Fiber processing”, are to discover emerging research issues and to present new approaches for identifying promising research themes in recovered paper application and production. The proposed approach consists of identifying technological problems often encountered in wastepaper preparation processes, as well as improving the quality of recovered paper and increasing its proportion in the composition of paper and board. The source of information for the problem retrieval is scientific publications in which wastepaper application and production are discussed. The study exploited several research methods to understand the changes related to the utilization of recovered paper. All the assembled data were carefully studied and categorized using the RefViz and CiteSpace software. Suggestions were made on the various classes of these problems that need further investigation in order to identify emerging research trends in recovered paper.
Abstract:
Cutin and suberin are structural and protective polymers of plant surfaces. The epidermal cells of the aerial parts of plants are covered with an extracellular cuticular layer, which consists of the polyester cutin, highly resistant cutan, cuticular waxes and polysaccharides that link the layer to the epidermal cells. A similar protective layer is formed by the polyaromatic-polyaliphatic biopolymer suberin, which is present particularly in the cell walls of the phellem layer of the periderm of the underground parts of plants (e.g. roots and tubers) and the bark of trees. In addition, suberization is a major factor in wound healing and wound periderm formation regardless of the plant tissue. Knowledge of the composition and functions of cuticular and suberin polymers is important for understanding the physiological properties of the plants and for the nutritional quality when these plants are consumed as foods. The aims of the practical work were to assess the chemical composition of the cuticular polymers of several northern berries and seeds and of the suberin of two varieties of potatoes. Cutin and suberin were studied as isolated polymers and further, after depolymerization, as soluble monomers and solid residues. Chemical and enzymatic depolymerization techniques were compared and a new chemical depolymerization method was developed. Gas chromatographic analysis with mass spectrometric detection (GC-MS) was used to assess the monomer compositions. Polymer investigations were conducted with solid-state carbon-13 cross polarization magic angle spinning nuclear magnetic resonance spectroscopy (13C CP-MAS NMR), Fourier transform infrared spectroscopy (FTIR) and microscopic analysis. Furthermore, the development of suberin over one year of post-harvest storage was investigated and the cuticular layers from berries grown in the North and South of Finland were compared. The results show that the amounts of isolated cuticular layers and cutin monomers, as well as the monomeric compositions, vary greatly between the berries. The monomer composition of the seeds was found to differ from that of the corresponding berry peels. The berry cutin monomers were composed mostly of long-chain aliphatic ω-hydroxy acids with various mid-chain functionalities (double bonds, epoxy, hydroxy and keto groups). Substituted α,ω-diacids predominated over ω-hydroxy acids in the potato suberin monomers, and slight differences were found between the varieties. The newly developed closed-tube chemical method was found to be suitable for cutin and suberin analysis and preferable to the solvent-consuming and laborious reflux method. Enzymatic hydrolysis with cutinase was less effective than chemical methanolysis and showed specificity towards α,ω-diacid bonds. According to 13C CP-MAS NMR and FTIR, the depolymerization residues contained significant amounts of aromatic structures, polysaccharides and possible cutan-type aliphatic moieties. Cultivation location seems to have an effect on cuticular composition. The materials studied contained significant amounts of different types of biopolymers that could be utilized for several purposes with or without further processing. The importance of the so-called waste material from industrial processing of berries and potatoes as a source of either dietary fiber or specialty chemicals should be investigated further in detail. The evident impact of cuticular and suberin polymers, among other fiber components, on human health should be investigated in clinical trials.
These by-product materials may be used as value-added fiber fractions in the food industry and as raw materials for specialty chemicals such as lubricants and emulsifiers, or as building blocks for novel polymers.
Abstract:
Contemporary organisations have to embrace the notion of doing ‘more with less’. This challenges knowledge production within companies and public organisations, forcing them to reorganise their structures and rethink what knowledge production actually means in the context of innovation and how knowledge is actually produced among various professional groups within the organisation in their everyday actions. Innovations are vital for organisational survival, and ‘ordinary’ employees and customers are central but too often ignored producers of knowledge for contemporary organisations. Broader levels of participation and reflexive practices are needed. This dissertation discusses the missing links between innovation research conducted in the context of industrial management, arts, and culture; applied drama and theatre practices (specifically post-Boalian approaches); and learning – especially organising reflection – in organisational settings. This dissertation (1) explores and extends the role of research-based theatre (RBT) in organising reflection and reflexive practices in the context of practice-based innovation, (2) develops a reflexive model of RBT for investigating and developing practice-based organisational process innovations in order to contribute to the development of a tool for innovation management and analysis, and (3) operationalises this model within private- and public-sector organisations. The proposed novel reflexive model of research-based theatre for investigating and developing practice-based organisational process innovations extends existing methods and offers a different way of organising reflection and reflexive practices in the context of general innovation management. The model was developed through five participatory action research processes conducted in four different organisations. The results provide learning steps – a reflection path – for understanding complex organisational life, people, and relations amid renewal and change actions. The proposed model provides a new approach to organising and cultivating reflexivity in practice-based innovation activities via research-based theatre. The results can be utilised as a guideline when processing practice-based innovation within private or public organisations. The model helps innovation managers to construct, together with their employees, temporary communities where they can learn together by reflecting on their own and each other's experiences and break down assumptions related to their own perspectives. The results include recommendations for practical development steps applicable in various organisations with regard to (i) the application of research-based theatre and (ii) related general innovation management. The dissertation thus contributes to the development of novel learning approaches in knowledge production. Keywords: practice-based innovation, research-based theatre, learning, reflection, mode 2b knowledge production
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to congestion and system control, for instance. Additionally, faults can cause problems in multiprocessor systems; these faults can be transient, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
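The idea of distributed, cluster-local monitoring without centralized control can be sketched conceptually as follows (plain Python rather than the SystemC simulation environment used in the thesis; the router model, the congestion threshold and the detour suggestion are hypothetical placeholders):

```python
from dataclasses import dataclass, field

@dataclass
class Router:
    rid: int
    buffer_fill: float = 0.0  # fraction of the router's buffer currently occupied

@dataclass
class ClusterMonitor:
    """Local monitor of one dynamically formed cluster; no central controller involved."""
    routers: list = field(default_factory=list)

    def congested(self, threshold=0.8):
        return [r.rid for r in self.routers if r.buffer_fill > threshold]

    def suggest_detour(self, threshold=0.8):
        # advise traffic away from congested routers using only cluster-local knowledge
        hot = set(self.congested(threshold))
        cool = [r.rid for r in self.routers if r.rid not in hot]
        return {h: cool for h in hot}

# toy usage: one 4-router cluster
cluster = ClusterMonitor([Router(0, 0.9), Router(1, 0.2), Router(2, 0.85), Router(3, 0.4)])
print(cluster.congested())      # [0, 2]
print(cluster.suggest_detour())
```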
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Based on much improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study is focused on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and the formation of the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index, which correlate well with the ground truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing an integrated factory-level vision-based process control.
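The fiber length and curl index measures mentioned above can be illustrated with a minimal sketch that operates on an already-extracted fiber centreline. The curl definition used here (contour length over end-to-end distance, minus one) is a common convention assumed for illustration, and the image-processing step that traces the centreline is outside the sketch.

```python
import numpy as np

def fiber_metrics(points):
    """Contour length and curl index of a fiber given its ordered centreline points.
    Curl index is taken here as l / d - 1, where l is the traced contour length
    and d the end-to-end distance."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)
    contour = float(np.linalg.norm(seg, axis=1).sum())
    end_to_end = float(np.linalg.norm(pts[-1] - pts[0]))
    curl = contour / end_to_end - 1.0
    return contour, curl

# toy usage: a gently curved fiber traced as pixel coordinates
trace = [(0, 0), (10, 2), (20, 5), (30, 5), (40, 3)]
print(fiber_metrics(trace))
```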
Abstract:
The usage of digital content, such as video clips and images, has increased dramatically during the last decade. Local image features have been applied increasingly in various image and video retrieval applications. This thesis evaluates local features and applies them to image and video processing tasks. The results of the study show that 1) the performance of different local feature detector and descriptor methods varies significantly in object class matching, 2) local features can be applied to image alignment with results superior to the state-of-the-art, 3) the local feature based shot boundary detection method produces promising results, and 4) the local feature based hierarchical video summarization method shows a promising new research direction. In conclusion, this thesis presents local features as a powerful tool in many applications, and future work should concentrate on improving the quality of the local features.
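As an example of the kind of local-feature pipeline evaluated above, the following sketch estimates a homography between two frames from ORB keypoint matches with OpenCV. The detector, matcher and file names are assumptions made for illustration; the thesis compares several detector and descriptor methods rather than prescribing this particular one.

```python
import cv2
import numpy as np

def align_with_local_features(img_ref, img_mov, max_feat=1000):
    """Estimate a homography mapping img_mov onto img_ref from ORB keypoint matches
    (one possible local-feature alignment pipeline, not the thesis's specific method)."""
    orb = cv2.ORB_create(max_feat)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_mov, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# hypothetical usage with two grayscale frames on disk
ref = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
if ref is not None and mov is not None:
    print(align_with_local_features(ref, mov))
```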
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially that obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is a method commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques. Such networks are typically very large and highly connected, so there is a need for fast algorithms for producing visually pleasant layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within the regular force-directed graph layout algorithm.
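A minimal sketch of k-NN style missing value imputation on a toy expression matrix, using scikit-learn's generic KNNImputer; the thesis's own methods, including the biology-guided imputation and BPCA, are not reproduced here.

```python
import numpy as np
from sklearn.impute import KNNImputer

# toy expression matrix: 6 genes x 4 arrays with missing entries (NaN)
X = np.array([
    [2.1, 2.0, np.nan, 1.9],
    [0.5, np.nan, 0.4, 0.6],
    [3.2, 3.1, 3.3, np.nan],
    [1.0, 1.1, 0.9, 1.0],
    [np.nan, 2.9, 3.0, 3.1],
    [0.7, 0.6, 0.8, np.nan],
])

# replace each missing entry with the average of the k most similar rows (genes)
imputer = KNNImputer(n_neighbors=2)
X_complete = imputer.fit_transform(X)
print(X_complete.round(2))
```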
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, i.e. speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves calculated from the tracking data. Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results show that the tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
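The post-processing step from tracked positions to velocity and acceleration curves can be sketched as follows. Savitzky-Golay differentiation is used here as a simple smoothing stand-in for the Local Regression filtering and Unscented Kalman Smoother evaluated in the thesis, and the 500 fps trajectory is synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter

def speed_and_acceleration(xy, fps):
    """Velocity and acceleration magnitudes from a tracked 2-D trajectory,
    using smoothed Savitzky-Golay derivatives of each coordinate."""
    dt = 1.0 / fps
    vx = savgol_filter(xy[:, 0], 11, 3, deriv=1, delta=dt)
    vy = savgol_filter(xy[:, 1], 11, 3, deriv=1, delta=dt)
    ax = savgol_filter(xy[:, 0], 11, 3, deriv=2, delta=dt)
    ay = savgol_filter(xy[:, 1], 11, 3, deriv=2, delta=dt)
    return np.hypot(vx, vy), np.hypot(ax, ay)

# toy usage: synthetic 500 fps trajectory of a fingertip
t = np.linspace(0, 0.2, 100)
traj = np.column_stack([100 * np.sin(10 * t), 50 * t])
speed, accel = speed_and_acceleration(traj, fps=500)
print(speed[:3], accel[:3])
```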
Abstract:
The aim of this master’s thesis is to research and analyze how purchase invoice processing can be automated and streamlined in a system renewal project. The impacts of workflow automation on invoice handling are studied in terms of time, cost and quality. Purchase invoice processing has a lot of potential for automation because of its labor-intensive and repetitive nature. As a case study combining both qualitative and quantitative methods, the topic is approached from a business process management point of view. The current process was first explored through interviews and workshop meetings to create a holistic understanding of the process at hand. Requirements for process streamlining were then researched focusing on specified vendors and their purchase invoices, which helped to identify the critical factors for successful invoice automation. To optimize the flow from invoice receipt to approval for payment, the invoice receiving process was outsourced and the automation functionalities of the new system were utilized in invoice handling. The quality of invoice data and the need for simple, structured purchase order (PO) invoices were emphasized in the system testing phase. Hence, consolidated invoices containing references to multiple PO or blanket release numbers should be simplified in order to use automated PO matching. With non-PO invoices, it is important to receive the buyer reference details in an applicable invoice data field so that automation rules can be created to route invoices to a review and approval flow. At the beginning of the project, invoice processing was seen as ineffective both time- and cost-wise, and it required a lot of manual labor to carry out all the tasks. According to the testing results, it was estimated that over half of the invoices could be automated within a year of system implementation. Processing times could be reduced remarkably, which would then result in savings of up to 40 % in annual processing costs. Due to several advancements in the purchase invoice process, business process quality could also be perceived as improved.
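The routing logic described above (PO invoices to automated matching, non-PO invoices to an approval flow keyed on the buyer reference) might look roughly like the following simplified sketch; the field names and rules are hypothetical, not the case company's actual system configuration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Invoice:
    vendor: str
    amount: float
    po_number: Optional[str] = None        # purchase order reference, if any
    buyer_reference: Optional[str] = None  # person/cost-centre code on non-PO invoices

def route(invoice):
    """Very simplified automation rules: PO invoices go to automatic matching,
    non-PO invoices with a usable buyer reference go to a review/approval flow,
    everything else falls back to manual handling."""
    if invoice.po_number:
        return "automatic PO matching"
    if invoice.buyer_reference:
        return f"approval flow of {invoice.buyer_reference}"
    return "manual handling queue"

print(route(Invoice("Acme Oy", 1200.0, po_number="PO-4711")))
print(route(Invoice("Acme Oy", 80.0, buyer_reference="CC-210")))
print(route(Invoice("Acme Oy", 80.0)))
```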
Abstract:
Feature extraction is the part of pattern recognition where the sensor data is transformed into a more suitable form for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system and to preserve the information that is essential for discriminating the data into different classes. For instance, in the case of image analysis the actual image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination changes. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this work by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model where the LBPs are seen as combinations of n-tuples is also presented.
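A minimal software-level reference implementation of the basic 3x3 LBP operator discussed above is sketched below; the thesis's contribution concerns its embedded, focal-plane realisation, which this plain NumPy version does not attempt to reflect.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour Local Binary Pattern on a grayscale image: each pixel is
    encoded by thresholding its 3x3 neighbourhood at the centre pixel value."""
    img = np.asarray(image, dtype=np.int32)
    c = img[1:-1, 1:-1]
    # neighbour offsets in a fixed clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    return codes

# toy usage on a small synthetic patch
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(8, 8))
print(lbp_3x3(patch))
```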