993 results for Graphical processing units


Relevance: 20.00%

Abstract:

Methyl chloride is an important chemical intermediate with a variety of applications. It is produced today in large units and shipped to the end users. Most of the derived products are harmless, such as silicones, butyl rubber and methyl cellulose. However, methyl chloride itself is highly toxic and flammable. On-site production in the required quantities is therefore desirable to reduce the risks involved in transportation and storage. Ethyl chloride is a smaller-scale chemical intermediate that is mainly used in the production of cellulose derivatives. Thus, the combination of on-site production of methyl and ethyl chloride is attractive for the cellulose processing industry, e.g. current and future biorefineries. Both alkyl chlorides can be produced by hydrochlorination of the corresponding alcohol, ethanol or methanol. Microreactors are attractive for on-site production because the reactions are very fast and involve toxic chemicals. In microreactors, diffusion limitations can be suppressed and process safety can be improved. The modular setup of microreactors also makes it flexible to adjust the production capacity as needed. Although methyl and ethyl chloride are important chemical intermediates, the literature available on potential catalysts and reaction kinetics is limited. The thesis therefore includes an extensive catalyst screening and characterization, along with kinetic studies and the engineering of the hydrochlorination process in microreactors. A range of zeolite- and alumina-based catalysts, neat and impregnated with ZnCl2, were screened for methanol hydrochlorination. The influence of zinc loading, support, zinc precursor and pH was investigated. The catalysts were characterized with FTIR, TEM, XPS, nitrogen physisorption, XRD and EDX to identify the relationship between the catalyst characteristics and the activity and selectivity in methyl chloride synthesis. The acidic properties of the catalysts were strongly influenced by the ZnCl2 modification. On both alumina and zeolite supports, zinc reacted to a certain extent with specific surface sites, which resulted in a decrease of strong and medium Brønsted and Lewis acid sites and the formation of zinc-based weak Lewis acid sites. The latter are highly active and selective in methanol hydrochlorination. Along with the molecular zinc sites, bulk zinc species are present on the support material. Zinc-modified zeolite catalysts exhibited the highest activity even at low temperatures (ca. 200 °C), but deactivated with time on stream. Zn/H-ZSM-5 zeolite catalysts had a higher stability than ZnCl2-modified H-Beta, and they could be regenerated by burning off the coke in air at 400 °C. Neat alumina and zinc-modified alumina catalysts were active and selective at 300 °C and higher temperatures, whereas zeolite catalysts can be suitable for methyl chloride synthesis at lower temperatures, i.e. 200 °C. Neat γ-alumina was found to be the most stable catalyst when coated in a microreactor channel, and it was therefore used as the catalyst for the systematic kinetic studies in the microreactor. A binder-free and reproducible catalyst coating technique was developed. The uniformity, thickness and stability of the coatings were extensively characterized by SEM, confocal microscopy and EDX analysis. A stable coating could be obtained by thermally pretreating the microreactor platelets and ball milling the alumina to obtain a small particle size. Slurry aging and slow drying improved the coating uniformity.
Methyl chloride synthesis from methanol and hydrochloric acid was performed in an alumina-coated microreactor. Conversions from 4% to 83% were achieved in the investigated temperature range of 280-340 °C, demonstrating that the reaction is fast enough to be performed successfully in a microreactor system. The performance of the microreactor was compared with a tubular fixed-bed reactor. The results obtained with both reactors were comparable, but the microreactor allows rapid catalytic screening with low consumption of chemicals. As complete conversion of methanol could not be reached in a single microreactor, a second microreactor was coupled in series. A maximum conversion of 97.6% and a selectivity of 98.8% were reached at 340 °C, which is close to the calculated thermodynamic equilibrium values. A kinetic model based on kinetic experiments and thermodynamic calculations was developed. The model combined a Langmuir-Hinshelwood-type mechanism with a plug flow model for the microreactor. The influence of reactant adsorption on the catalyst surface was investigated by performing transient experiments and comparing different kinetic models. The obtained activation energy for methyl chloride formation was ca. twofold higher than previously published values, indicating diffusion limitations in the earlier studies. Detailed modeling of diffusion in the porous catalyst layer revealed that severe diffusion limitations occur starting from catalyst coating thicknesses of 50 μm; at a coating thickness of ca. 15 μm, as in the microreactor, the conditions of intrinsic kinetics prevail. Ethanol hydrochlorination was performed successfully in the same microreactor system at reaction temperatures of 240-340 °C. An almost complete conversion of ethanol was achieved at 340 °C. The product distribution was broader than for methanol hydrochlorination: ethylene, diethyl ether and acetaldehyde were detected as by-products, with ethylene being the dominant one. A kinetic model including a thorough thermodynamic analysis was developed, and the influence of adsorbed HCl on the rate of the ethanol dehydration reactions was demonstrated. The separation of methyl chloride using condensers was also investigated. The proposed microreactor-condenser concept enables the production of methyl chloride with a high purity of 99%.
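To make the reactor model concrete, the sketch below integrates a Langmuir-Hinshelwood-type rate law for methanol hydrochlorination along an isothermal plug-flow channel. The form of the rate expression matches the mechanism named above, but all numerical values (rate and adsorption constants, flow rate, channel volume, inlet concentrations) are invented for illustration and are not the fitted parameters of the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Langmuir-Hinshelwood rate for CH3OH + HCl -> CH3Cl + H2O
# (hypothetical parameter values; the thesis fits its own constants).
k = 10.0        # rate constant (illustrative)
K_M = 0.8       # methanol adsorption equilibrium constant, m^3/mol (illustrative)
K_H = 1.2       # HCl adsorption equilibrium constant, m^3/mol (illustrative)

def lh_rate(c_m, c_h):
    """Bimolecular Langmuir-Hinshelwood rate with competitive adsorption."""
    return k * K_M * K_H * c_m * c_h / (1.0 + K_M * c_m + K_H * c_h) ** 2

def plug_flow(v, c, q):
    """dC_i/dV along an isothermal plug-flow channel with volumetric flow q."""
    c_m, c_h = max(c[0], 0.0), max(c[1], 0.0)
    r = lh_rate(c_m, c_h)
    return [-r / q, -r / q]

q = 1.0e-8                 # volumetric flow rate, m^3/s (illustrative)
c0 = [40.0, 40.0]          # inlet concentrations, mol/m^3 (illustrative)
v_channel = 2.0e-7         # coated channel volume, m^3 (illustrative)

sol = solve_ivp(plug_flow, (0.0, v_channel), c0, args=(q,), dense_output=True)
conversion = 1.0 - sol.y[0, -1] / c0[0]
print(f"Methanol conversion over the channel: {conversion:.1%}")
```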

Relevance: 20.00%

Abstract:

Besides steel, the steel industry also produces solid mineral by-products, or slags, while emitting large quantities of carbon dioxide (CO2). Slags consist of various silicates and oxides which are formed in chemical reactions between the iron ore and the fluxing agents during high-temperature processing at the steel plant. Currently, these materials are recycled in the ironmaking processes, used as aggregates in construction, or landfilled as waste. The utilization rate of steel slags can be increased by selectively extracting components from the mineral matrix. As an example, aqueous solutions of ammonium salts such as ammonium acetate, chloride and nitrate extract calcium quite selectively even at ambient temperature and pressure. After the residual solids have been separated from the solution, calcium carbonate can be precipitated by feeding a CO2 flow through the solution. Precipitated calcium carbonate (PCC) is used as a filler material in different applications. Its largest consumer is the papermaking industry, which utilizes PCC because it enhances the optical properties of paper at a relatively low cost. Traditionally, PCC is manufactured from limestone, which is first calcined to calcium oxide, then slaked with water to calcium hydroxide and finally carbonated to PCC. This process emits large amounts of CO2, mainly because of the energy-intensive calcination step. This thesis presents research on the scale-up of the above-mentioned ammonium salt based calcium extraction and carbonation method, named Slag2PCC. Extending the scope of the earlier studies, it is now shown that the parameters which mainly affect the calcium utilization efficiency are the solid-to-liquid ratio of steel slag and ammonium salt solvent solution during extraction, the mean diameter of the slag particles, and the slag composition, especially the fractions of total calcium, silicon, vanadium and iron as well as the fraction of free calcium oxide. Regarding extraction kinetics, slag particle size, solid-to-liquid ratio and the molar concentration of the solvent solution have the largest effect on the reaction rate. Solvent solution concentrations above 1 mol/L NH4Cl cause leaching of other elements besides calcium. Some of these, such as iron and manganese, result in coloring of the solution, which can be disadvantageous for the quality of the PCC product. Based on chemical composition analysis of the produced PCC samples, however, the product quality is largely comparable to that of commercial products. Extending the novelty of the work, other parameters important for assessing PCC quality, such as particle size distribution and crystal morphology, are studied as well. As in the traditional PCC precipitation process, the ratio of calcium and carbonate ions controls the particle shape: a higher [Ca2+]/[CO32-] ratio favors precipitation of the calcite polymorph, while vaterite forms when carbon species are present in excess. The third main polymorph, aragonite, is only formed at elevated temperatures, above 40-50 °C. In general, longer precipitation times cause transformation of vaterite to calcite or aragonite, but also result in particle agglomeration. The chemical equilibrium between ammonium and calcium ions and dissolved ammonia, which controls the solution pH, also affects the particle sizes. An initial pH of 12-13 during carbonation favors non-agglomerated particles with diameters of 1 μm and smaller, while pH values of 9-10 generate more agglomerates of 10-20 μm.
As part of the research work, these findings are implemented in demonstration-scale experimental process setups. For the first time, the Slag2PCC technology is tested at a scale of ~70 liters instead of laboratory scale only. Additionally, the design of a setup of several hundred liters is discussed. For these purposes, various process units such as inclined settlers and filters for solids separation, pumps and stirrers for material transfer and mixing, as well as gas feeding equipment, are dimensioned and developed. With overall emission reduction of the current industrial processes and good product quality as the main targets, the partial life cycle assessment (LCA) performed here indicates that it is most beneficial to use low-concentration ammonium salt solutions in the Slag2PCC process. In this manner, the post-treatment of the products does not require extensive washing and drying equipment, which would otherwise increase the CO2 emissions of the process. The low-concentration Slag2PCC process has negative net CO2 emissions; thus, it can be seen as a carbon capture and utilization (CCU) method which actually reduces anthropogenic CO2 emissions compared to the alternative of not using the technology. Even if the amount of steel slag is too small for any substantial mitigation of global warming, the process can have both financial and environmental significance for individual steel manufacturers as a means to reduce the amounts of emitted CO2 and landfilled steel slag. Alternatively, it is possible to introduce the carbon dioxide directly into the mixture of steel slag and ammonium salt solution. This process would generate a 60-75% pure calcium carbonate mixture, with the remaining 25-40% consisting of residual steel slag. Such calcium-rich material could be re-used in ironmaking as a fluxing agent instead of natural limestone. Even though this process option would require less process equipment than the Slag2PCC process, it still needs further study regarding the practical usefulness of the products. Nevertheless, compared to several other CO2 emission reduction methods studied around the world, the processes developed and studied in this thesis have the advantage of existing markets for the produced materials, which also gives a financial incentive for applying the technology in practice.
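The carbonation rules summarized above (calcium-to-carbonate ratio selecting calcite versus vaterite, aragonite only above roughly 40-50 °C, and carbonation pH steering agglomeration) can be condensed into a simple decision helper. The thresholds below are only the qualitative values quoted in the abstract, not a validated process model.

```python
def expected_pcc_morphology(ca_to_carbonate_ratio, temperature_c, initial_ph):
    """Qualitative guess of PCC polymorph and agglomeration behaviour, using
    illustrative thresholds taken from the abstract (not a process model)."""
    if temperature_c > 45:                # aragonite forms at elevated temperature (~40-50 C)
        polymorph = "aragonite"
    elif ca_to_carbonate_ratio >= 1.0:    # excess calcium favors calcite
        polymorph = "calcite"
    else:                                 # excess carbonate favors vaterite
        polymorph = "vaterite"

    if initial_ph >= 12:                  # pH 12-13: non-agglomerated, about 1 um or smaller
        particles = "non-agglomerated particles of about 1 um or smaller"
    elif initial_ph >= 9:                 # pH 9-10: agglomerates of 10-20 um
        particles = "agglomerates of roughly 10-20 um"
    else:
        particles = "outside the pH range discussed in the work"
    return polymorph, particles

print(expected_pcc_morphology(ca_to_carbonate_ratio=2.0, temperature_c=30, initial_ph=12.5))
```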

Relevance: 20.00%

Abstract:

Lignin is, after cellulose, the second most abundant biopolymer on Earth, accounting for 30% of the organic carbon in the biosphere. It is considered an important evolutionary adaptation of plants during their transition from the aquatic environment to land, since it provided the early tracheophytes with physical support to stand upright and enabled long-distance transport of water and solutes by waterproofing the vascular tissue. Although essential for plant growth and development, lignin is the major plant cell wall component responsible for biomass recalcitrance to industrial processing. Because lignin is a non-linear aromatic polymer built from chemically diverse and poorly reactive linkages and a variety of monomer units, no single enzyme can properly recognize and degrade it. Consequently, the use of lignocellulosic feedstock as a renewable and sustainable resource for the production of biofuels and bio-based materials will depend on the identification and characterization of the factors that determine plant biomass recalcitrance, especially the highly complex phenolic polymer lignin. Here, we summarize the current knowledge regarding lignin metabolism in plants, its effect on biomass recalcitrance and the emergent strategies to modify biomass recalcitrance through metabolic engineering of the lignin pathway. In addition, the potential use of sugarcane as a second-generation biofuel crop and the advances in lignin-related studies in sugarcane are discussed.

Relevance: 20.00%

Abstract:

The nucleus tractus solitarii (NTS) receives afferent projections from the arterial baroreceptors, carotid chemoreceptors and cardiopulmonary receptors and, on the basis of this information, produces autonomic adjustments in order to maintain arterial blood pressure within a narrow range of variation. The activation of each of these cardiovascular afferents produces a specific autonomic response through the excitation of neuronal projections from the NTS to the ventrolateral areas of the medulla (nucleus ambiguus, caudal and rostral ventrolateral medulla). The neurotransmitters at the NTS level, as well as the excitatory amino acid (EAA) receptors involved in the processing of the autonomic responses in the NTS, although extensively studied, remain to be completely elucidated. In the present review we discuss the role of the EAA L-glutamate and its different receptor subtypes in the processing of cardiovascular reflexes in the NTS. The data presented in this review concerning neurotransmission in the NTS are based on experimental evidence obtained in our laboratory in unanesthetized rats. The two major conclusions of the present review are that a) the excitation of the cardiovagal component by cardiovascular reflex activation (chemo- and Bezold-Jarisch reflexes) or by L-glutamate microinjection into the NTS is mediated by N-methyl-D-aspartate (NMDA) receptors, and b) the sympatho-excitatory component of the chemoreflex and the pressor response to L-glutamate microinjected into the NTS are not affected by an NMDA receptor antagonist, suggesting that the sympatho-excitatory component of these responses is mediated by non-NMDA receptors.

Relevance: 20.00%

Abstract:

The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is growing rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but more advanced methods, such as Bayesian Principal Component Analysis (BPCA), were also needed. Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as gene microarray studies. Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
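As a minimal sketch of the k-NN imputation mentioned above, each missing entry of a gene expression matrix can be filled with the average over the k rows that are most similar on the jointly observed columns. This is a generic illustration only; the thesis additionally guides imputation with curated external biological information and compares against methods such as BPCA.

```python
import numpy as np

def knn_impute(data, k=5):
    """Fill NaN entries of a (genes x samples) matrix using the mean of the
    k nearest rows, with distances computed over jointly observed columns."""
    x = np.array(data, dtype=float)
    filled = x.copy()
    for i, row in enumerate(x):
        miss = np.isnan(row)
        if not miss.any():
            continue
        # distance to every other row over the columns both rows have observed
        dists = []
        for j, other in enumerate(x):
            if j == i:
                continue
            shared = ~np.isnan(row) & ~np.isnan(other)
            if shared.sum() == 0:
                continue
            d = np.sqrt(np.mean((row[shared] - other[shared]) ** 2))
            dists.append((d, j))
        dists.sort()
        neighbours = [j for _, j in dists[:k]]
        for col in np.where(miss)[0]:
            vals = [x[j, col] for j in neighbours if not np.isnan(x[j, col])]
            if vals:
                filled[i, col] = np.mean(vals)
    return filled

expr = np.array([[1.0, 2.0, np.nan],
                 [0.9, 2.1, 3.0],
                 [1.1, 1.9, 2.8],
                 [5.0, 5.2, 5.1]])
print(knn_impute(expr, k=2))
```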

Relevance: 20.00%

Abstract:

The theme of this thesis is context-specific independence in graphical models. Considering a system of stochastic variables, it is often the case that the variables are dependent on each other. This can, for instance, be seen by measuring the covariance between a pair of variables. Using graphical models, it is possible to visualize the dependence structure found in a set of stochastic variables. With ordinary graphical models, such as Markov networks, Bayesian networks, and Gaussian graphical models, the types of dependencies that can be modeled are limited to marginal and conditional (in)dependencies. The models introduced in this thesis enable the graphical representation of context-specific independencies, i.e. conditional independencies that hold only in a subset of the outcome space of the conditioning variables. In the articles included in this thesis, we introduce several types of graphical models that can represent context-specific independencies. Models for both discrete and continuous variables are considered. A wide range of properties is examined for the introduced models, including identifiability, robustness, scoring, and optimization. In one article, a predictive classifier which utilizes context-specific independence models is introduced. This classifier clearly demonstrates the potential benefits of the introduced models. The purpose of the material included in the thesis prior to the articles is to provide the basic theory needed to understand the articles.
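A concrete toy example of a context-specific independence of the kind defined above: in the conditional probability table below, Y is independent of Z only in the context X = 0 (the two rows for X = 0 coincide), while the dependence remains for X = 1. The probabilities are purely illustrative and not taken from the thesis.

```python
# P(Y = 1 | X, Z) for binary X and Z.  In the context X = 0 the distribution
# of Y does not depend on Z (context-specific independence); for X = 1 it does.
p_y1 = {
    (0, 0): 0.3, (0, 1): 0.3,   # X = 0: same probability for both values of Z
    (1, 0): 0.2, (1, 1): 0.9,   # X = 1: probability depends on Z
}

for x in (0, 1):
    independent = p_y1[(x, 0)] == p_y1[(x, 1)]
    print(f"Context X={x}: Y independent of Z given X? {independent}")
```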

Relevance: 20.00%

Abstract:

The present review deals with the stages of synthesis and processing of asparagine-linked oligosaccharides occurring in the lumen of the endoplasmic reticulum and their relationship to the acquisition by glycoproteins of their proper tertiary structures. Special emphasis is placed on reactions taking place in trypanosomatid protozoa, since their study has allowed the detection of the transient glucosylation of glycoproteins catalyzed by UDP-Glc:glycoprotein glucosyltransferase and glucosidase II. The former enzyme has the unique property of covalently tagging improperly folded conformations by catalyzing the formation of protein-linked Glc1Man7GlcNAc2, Glc1Man8GlcNAc2 and Glc1Man9GlcNAc2 from the unglucosylated proteins. The glucosyltransferase is a soluble protein of the endoplasmic reticulum that recognizes protein domains exposed in denatured but not in native conformations (probably hydrophobic amino acids), as well as the innermost N-acetylglucosamine unit, which is hidden from macromolecular probes in most native glycoproteins. In vivo, the glucose units are removed by glucosidase II. The influence of oligosaccharides on glycoprotein folding is reviewed, as well as the participation of the endoplasmic reticulum chaperones (calnexin and calreticulin) that recognize monoglucosylated species in the same process. A model for the quality control of glycoprotein folding in the endoplasmic reticulum, i.e., the mechanism by which cells recognize the tertiary structure of glycoproteins and only allow transit to the Golgi apparatus of properly folded species, is discussed. The main elements of this control are calnexin and calreticulin as retaining components, the UDP-Glc:glycoprotein glucosyltransferase as a sensor of tertiary structures, and glucosidase II as the releasing agent.

Relevance: 20.00%

Abstract:

Environmental issues, including global warming, are serious challenges worldwide, and they have become particularly important for iron and steel manufacturers during the last decades. Many sites have been shut down in developed countries due to environmental regulation and pollution prevention, while a large number of production plants have been established in developing countries, which has changed the economics of this business. Sustainable development is a concept that today affects economic growth, environmental protection and social progress in setting up the basis for future ecosystems. A sustainable approach may attempt to preserve natural resources, recycle and reuse materials, prevent pollution, enhance yield and increase profitability. To achieve these objectives, numerous alternatives should be examined in sustainable process design. Conventional engineering work cannot address all of these alternatives effectively and efficiently to find an optimal processing route. A systematic framework is needed as a tool to guide designers in making decisions based on an overall view of the system, identifying the key bottlenecks and opportunities that lead to an optimal design and operation of the system. Since the 1980s, researchers have made great efforts to develop tools for what today is referred to as Process Integration. Advanced mathematics has been used in simulation models to evaluate the available alternatives considering physical, economic and environmental constraints. Improvements in feed materials and operation, a competitive energy market, environmental restrictions and the role of Nordic steelworks as energy suppliers (electricity and district heat) provide strong motivation for integration among industries toward more sustainable operation, which could increase the overall energy efficiency and decrease environmental impacts. In this study, a model is developed in several steps for primary steelmaking, with the Finnish steel sector as a reference, to evaluate future operating concepts of a steelmaking site with regard to sustainability. The research started with a potential study on increasing energy efficiency and reducing carbon dioxide emissions through integration of steelworks with chemical plants, so that the off-gases available in the system can be utilized for chemical products. These off-gases from the blast furnace, basic oxygen furnace and coke oven consist mainly of carbon monoxide, carbon dioxide, hydrogen, nitrogen and partly methane (in coke oven gas); they have a relatively low heating value and are currently used as fuel within these industries. A nonlinear optimization technique is used to assess integration with a methanol plant under novel blast furnace technologies and (partial) substitution of coal with other reducing agents and fuels, such as heavy oil, natural gas and biomass. The technical aspects of the integration and its effect on blast furnace operation, regardless of the capital expenditure of the new operational units, are studied to evaluate the feasibility of the idea behind the research. Later on, the concept of a polygeneration system was added and a superstructure was generated with alternative routes for off-gas pretreatment and further utilization in a polygeneration system producing electricity, district heat and methanol.
(Vacuum) pressure swing adsorption, membrane technology and chemical absorption for gas separation; partial oxidation, carbon dioxide reforming and steam reforming for methane conversion; and gas- and liquid-phase methanol synthesis are the main alternative process units considered in the superstructure. Due to the high degree of integration in process synthesis and the optimization techniques involved, equation-oriented modeling is chosen as an effective alternative to the previous sequential modeling strategy for analyzing the suggested superstructure. A mixed-integer nonlinear programming (MINLP) model is developed to study the behavior of the integrated system under different economic and environmental scenarios. Net present value and specific carbon dioxide emissions are used to compare the economic and environmental aspects, respectively, of the integrated system for different fuel systems, alternative blast furnace reductants, implementation of new blast furnace technologies, and carbon dioxide emission penalties. Sensitivity analysis, carbon distribution and the effect of external seasonal energy demand are investigated with different optimization techniques. The resulting tool can provide useful information on the techno-environmental and economic aspects for decision-making and can estimate the optimal operating conditions of current and future primary steelmaking under alternative scenarios. The results of the work demonstrate that it is possible to develop steelmaking towards more sustainable operation in the future.
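To make the optimization formulation concrete, the following sketch poses a deliberately small mixed-integer allocation problem in the same spirit: blast furnace gas is split between power production and a methanol unit whose construction is a binary decision, maximizing a net-present-value proxy under a CO2 penalty. The numbers, and the reduction to a linear model solvable with SciPy, are illustrative assumptions; the thesis uses a full mixed-integer nonlinear programming model of the integrated site.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Decision variables: [gas_to_power, gas_to_methanol, build_methanol_unit]
gas_available = 100.0                    # off-gas available, arbitrary energy units
value_power, value_meoh = 2.0, 3.5       # value created per unit of gas routed (illustrative)
co2_power, co2_meoh = 1.0, 0.4           # CO2 emitted per unit of gas routed (illustrative)
co2_penalty = 1.5                        # cost per unit of CO2 (illustrative)
capex_meoh = 120.0                       # annualized cost of building the methanol unit

# Objective coefficients (milp minimizes, so the NPV proxy is negated)
c = np.array([
    -(value_power - co2_penalty * co2_power),
    -(value_meoh - co2_penalty * co2_meoh),
    capex_meoh,
])

constraints = [
    # total routed gas cannot exceed what is available
    LinearConstraint(np.array([[1.0, 1.0, 0.0]]), ub=[gas_available]),
    # gas can go to methanol only if the unit is built: x_meoh - G * y <= 0
    LinearConstraint(np.array([[0.0, 1.0, -gas_available]]), ub=[0.0]),
]
integrality = np.array([0, 0, 1])        # last variable is binary
bounds = Bounds(lb=[0, 0, 0], ub=[np.inf, np.inf, 1])

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
x_power, x_meoh, build = res.x
print(f"Gas to power: {x_power:.1f}, gas to methanol: {x_meoh:.1f}, "
      f"build methanol unit: {bool(round(build))}, NPV proxy: {-res.fun:.1f}")
```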

Relevance: 20.00%

Abstract:

In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. The tracked finger trajectories were post-processed and analysed using various filtering and smoothing methods, and position derivatives of the trajectories, speed and acceleration, were extracted for hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves from the tracking data. Local regression filtering and the unscented Kalman smoother gave the best results in the tests. Overall, the results show that the studied tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
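As a sketch of the trajectory post-processing step, the snippet below differentiates a tracked fingertip path recorded at a high frame rate into speed and acceleration after Savitzky-Golay smoothing. The synthetic trajectory, the smoothing choice and its parameters are illustrative assumptions; the thesis compares dedicated methods such as local regression filtering and the unscented Kalman smoother.

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 500.0                                    # high-speed camera frame rate (illustrative)
t = np.arange(0, 1.0, 1.0 / fps)

# Synthetic fingertip trajectory standing in for tracker output (x, y in pixels)
x = 200 + 80 * np.sin(2 * np.pi * 2 * t) + np.random.normal(0, 0.5, t.size)
y = 300 + 40 * np.sin(2 * np.pi * 4 * t) + np.random.normal(0, 0.5, t.size)

# Smooth the raw positions, then differentiate numerically
window, order = 31, 3
xs = savgol_filter(x, window, order)
ys = savgol_filter(y, window, order)

vx, vy = np.gradient(xs, t), np.gradient(ys, t)   # velocity components, px/s
speed = np.hypot(vx, vy)                          # speed magnitude, px/s
acceleration = np.gradient(speed, t)              # px/s^2

print(f"peak speed: {speed.max():.0f} px/s, "
      f"peak |acceleration|: {np.abs(acceleration).max():.0f} px/s^2")
```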

Relevance: 20.00%

Abstract:

This study was designed to evaluate the effect of different conditions of collection, transport and storage on the quality of blood samples from normal individuals, in terms of the activities of the enzymes β-glucuronidase, total hexosaminidase, hexosaminidase A, arylsulfatase A and β-galactosidase. The enzyme activities were not affected by the different materials used for collection (plastic syringes or vacuum glass tubes). In the evaluation of different heparin concentrations in the syringes (10% heparin, 5% heparin, and heparinized syringe), it was observed that higher doses resulted in at least a 1-fold increase in the activities of β-galactosidase, total hexosaminidase and hexosaminidase A in leukocytes, and of β-glucuronidase in plasma. When the effects of time and means of transportation were studied, samples kept at room temperature showed greater deterioration with time (72 and 96 h) before processing, and in this case it was impossible to isolate leukocytes from most samples. Comparison of heparin and acid citrate-dextrose (ACD) as anticoagulants revealed that β-glucuronidase and hexosaminidase activities in plasma reached levels near the lower normal limits when ACD was used. In conclusion, we observed that heparin should be used as the preferred anticoagulant when measuring these lysosomal enzyme activities, and we recommend that, when the transport time exceeds 24 h, samples should be shipped by air in a styrofoam box containing wet ice.

Relevance: 20.00%

Abstract:

The aim of this master's thesis is to research and analyze how purchase invoice processing can be automated and streamlined in a system renewal project. The impacts of workflow automation on invoice handling are studied in terms of time, cost and quality. Purchase invoice processing has a lot of potential for automation because of its labor-intensive and repetitive nature. As a case study combining both qualitative and quantitative methods, the topic is approached from a business process management point of view. The current process was first explored through interviews and workshop meetings to create a holistic understanding of the process at hand. Requirements for process streamlining were then researched, focusing on specified vendors and their purchase invoices, which helped to identify the critical factors for successful invoice automation. To optimize the flow from invoice receipt to approval for payment, the invoice receiving process was outsourced and the automation functionalities of the new system were utilized in invoice handling. The quality of invoice data and the need for simple, structured purchase order (PO) invoices were emphasized in the system testing phase. Hence, consolidated invoices containing references to multiple PO or blanket release numbers should be simplified in order to use automated PO matching. With non-PO invoices, it is important to receive the buyer reference details in an applicable invoice data field so that automation rules can be created to route the invoices to a review and approval flow. At the beginning of the project, invoice processing was seen as ineffective both time- and cost-wise, and it required a lot of manual labor to carry out all the tasks. Based on the testing results, it was estimated that over half of the invoices could be automated within a year after system implementation. Processing times could be reduced remarkably, which would result in savings of up to 40% in annual processing costs. Due to the several improvements in the purchase invoice process, business process quality can also be considered improved.
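The routing logic described above can be condensed into a simple rule: an invoice carrying a single valid purchase order reference is matched automatically, a consolidated invoice referencing several POs needs simplification, and a non-PO invoice needs a buyer reference to be routed to review and approval. The sketch below is a hypothetical illustration of such a rule, not the configuration of the actual system.

```python
def route_invoice(invoice, open_purchase_orders):
    """Decide how a purchase invoice is handled (illustrative rule only)."""
    po_refs = invoice.get("po_numbers", [])
    if len(po_refs) == 1 and po_refs[0] in open_purchase_orders:
        return "automatic PO matching"
    if len(po_refs) > 1:
        # consolidated invoices referencing several POs need simplification first
        return "manual handling: split or simplify consolidated invoice"
    if invoice.get("buyer_reference"):
        return "route to review and approval flow"
    return "return to vendor: missing PO and buyer reference"

open_pos = {"PO-1001", "PO-1002"}
print(route_invoice({"po_numbers": ["PO-1001"]}, open_pos))
print(route_invoice({"po_numbers": [], "buyer_reference": "buyer-42"}, open_pos))
```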

Relevance: 20.00%

Abstract:

When contrast sensitivity functions to Cartesian and angular gratings were compared in previous studies, the peak sensitivity to angular stimuli was reported to be 0.21 log units higher. In experiments carried out to repeat this result, we used the same two-alternative forced-choice paradigm, but improved experimental control and precision by increasing contrast resolution from 8 to 12 bits, increasing the screen refresh rate from 30 Hz interlaced to 85 Hz non-interlaced, linearizing the voltage-luminance relation, modulating luminance at frequencies that minimize pixel aliasing, and improving control of the subject's exposure to the stimuli. The contrast sensitivity functions to Cartesian and angular gratings were similar in form, with peaks at 2.4 cycles per visual degree (c/deg) and 32 c/360°, respectively, close to those reported in a previous study (3 c/deg and 32 c/360°), but peak sensitivity to angular stimuli was 0.13 log units lower than that to Cartesian stimuli. When the experiment was repeated, this time simulating the level of experimental control used in the previous study, no difference between the peak sensitivities to Cartesian and angular stimuli was found. This result agrees with most current models, which assume Cartesian filtering at the first visual processing stage. The discrepancy in the results is explained in part by differences in the degree of experimental control.
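To make the two stimulus types concrete, the sketch below generates a Cartesian grating (spatial frequency in cycles per degree) and an angular grating (frequency in cycles per 360°) as luminance modulations. The display geometry, image size and frequencies near the reported peaks are simplified assumptions, not the calibrated setup of the study.

```python
import numpy as np

size_px = 512
deg_per_image = 8.0                      # assumed field of view, degrees
half = size_px // 2
yy, xx = np.mgrid[-half:half, -half:half]

# Cartesian grating: luminance varies sinusoidally along x, e.g. 2.4 cycles/degree
f_cartesian = 2.4                        # c/deg, near the reported peak
x_deg = xx * deg_per_image / size_px
cartesian = 0.5 + 0.5 * np.sin(2 * np.pi * f_cartesian * x_deg)

# Angular grating: luminance varies sinusoidally with polar angle, e.g. 32 cycles/360 deg
f_angular = 32                           # c/360 deg, near the reported peak
theta = np.arctan2(yy, xx)
angular = 0.5 + 0.5 * np.sin(f_angular * theta)

print(cartesian.shape, angular.shape, cartesian.min(), angular.max())
```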

Relevance: 20.00%

Abstract:

Feature extraction is the part of pattern recognition where the sensor data are transformed into a form that is more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed particularly for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which the LBPs are seen as combinations of n-tuples is also presented.
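For reference, the basic 3 x 3 local binary pattern underlying the features discussed above thresholds each pixel's eight neighbours against the centre value and packs the results into an 8-bit code. The sketch below is a plain software version for illustration; the thesis concerns embedded, focal-plane implementations of this operation.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour local binary pattern codes for the image interior."""
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    # neighbour offsets in a fixed clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

patch = np.array([[10, 20, 30],
                  [40, 25, 10],
                  [ 5, 60, 90]])
print(lbp_3x3(patch))   # single LBP code for the centre pixel
```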

Relevance: 20.00%

Abstract:

The main objective of this Master's thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses incurred in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct the allocation model and then modify it so that it suits other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis starts with a literature review of existing activity-based costing (ABC) theory and an inspection of the cost information, followed by interviews with company officials to get a general view of the requirements for the model to be constructed. Familiarization with the company started from the existing cost accounting methods. The main proposals for a new allocation model were revealed through the interviews and were utilized in setting targets for developing the new allocation method. As a result of this thesis, an Excel-based model was created based on the theoretical and empirical data. The new system is able to handle overhead costs in more detail, improving cost awareness and transparency in cost allocations and enhancing the products' cost structure. The improved cost awareness is achieved by selecting the best possible cost drivers for this situation. Capacity changes are also taken into consideration; for example, the use of practical or normal capacity instead of theoretical capacity is suggested. Some recommendations for further development are also made concerning capacity handling and cost collection.
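A minimal sketch of the driver-based allocation idea behind such a model: each fixed-overhead cost pool is assigned to products in proportion to its cost driver, and per-unit costs follow from the allocation base. The pool names, drivers and figures below are invented for illustration and do not come from the case company.

```python
# Overhead cost pools with their annual cost and the chosen cost driver
cost_pools = {
    "machine maintenance": {"cost": 120_000.0, "driver": "machine_hours"},
    "quality control":     {"cost":  45_000.0, "driver": "inspections"},
}

# Driver consumption per product (illustrative figures)
products = {
    "product A": {"machine_hours": 700, "inspections": 40, "units": 50_000},
    "product B": {"machine_hours": 300, "inspections": 60, "units": 20_000},
}

allocated = {name: 0.0 for name in products}
for pool in cost_pools.values():
    driver = pool["driver"]
    total_driver = sum(p[driver] for p in products.values())
    rate = pool["cost"] / total_driver          # cost per driver unit
    for name, p in products.items():
        allocated[name] += rate * p[driver]

for name, p in products.items():
    print(f"{name}: overhead {allocated[name]:,.0f} EUR, "
          f"{allocated[name] / p['units']:.2f} EUR per unit")
```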