47 results for processing enzymes


Relevance:

20.00%

Publisher:

Abstract:

Surface chemistry is of great importance in plant biomass engineering and applications. The surface chemical composition of biomass, which includes lignin, carbohydrates and extractives, influences its interactions with chemical agents, such as pulping/papermaking chemicals or enzymes used for different purposes. In this thesis, the changes in the surface chemical composition of lignocellulosic biomass were investigated after physical modification aimed at improving the resulting paper properties and after chemical treatment aimed at enhancing enzymatic hydrolysis.

Low consistency (LC) refining was used as a physical treatment of bleached softwood and hardwood pulp samples, and the surface chemistry of the refined samples was investigated. The refined pulp was analysed as whole pulp, while fines-free fibre samples were characterised separately. As shown by X-ray photoelectron spectroscopy (XPS), the fines produced in LC refining contributed to an enlarged specific surface area as well as to changes in the surface coverage by lignin and extractives. The surface coverage by lignin of the whole pulp decreased after refining, while the surface coverage by extractives increased for both pine and eucalyptus. In the case of pine, the removal of fines reduced the surface coverage by extractives, while the surface coverage by lignin increased in the fibre sample (without fines). In the case of eucalyptus, the surface coverage by lignin of the fibre samples decreased after the removal of fines. In addition, the surface distribution of carbohydrates, lignin and extractives in the pine and eucalyptus samples was determined by Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS). LC refining increased the amounts of pentose, hexose and extractives on the surface of the pine samples. ToF-SIMS also gave clear evidence of xylan deposition and of a reduced surface lignin distribution on the eucalyptus fibres. However, the changes in surface chemical composition during the physical treatment led to an increase in the adsorption of fluorescent whitening agents (FWAs) on the fibres, due to a combination of electrostatic forces, the specific surface area of the fibres and hydrophobic interactions.

Various physicochemical pretreatments were applied to wood and non-wood biomass to enhance the enzymatic hydrolysis of polysaccharides, and the surface chemistry of the pretreated and enzymatically hydrolysed samples was investigated by field emission scanning electron microscopy (FE-SEM), XPS and ToF-SIMS. A hydrotrope was used as a relatively novel pretreatment technology for both wood and non-wood biomass. For comparison, ionic liquid and hydrothermal pretreatments were also applied to softwood and hardwood. XPS analysis showed that surface lignin was removed more efficiently by hydrotropic pretreatment than by ionic liquid or hydrothermal pretreatments. SEM analysis also showed that ionic liquid pretreatments, even at room temperature, were more effective in swelling the fibres than hydrotropic pretreatment at elevated temperatures. The enzymatic hydrolysis yield of hardwood was enhanced by the decrease in the surface coverage by lignin induced by the hydrotropic treatment. However, hydrotropic pretreatment was not appropriate for softwood because of the predominance of the guaiacyl lignin structure in this material.
In addition, the reduction of surface lignin and xylan during pretreatment, and the subsequent increase in enzymatic cellulose hydrolysis, could be observed in the ToF-SIMS results. The characterisation of non-wood biomass (e.g. sugarcane bagasse and common reed) treated by hydrotropic, alkaline and alkaline hydrogen peroxide pretreatments was carried out by XPS and ToF-SIMS. According to the results, hydrotropic pretreatment removed surface lignin from non-wood biomass more effectively than the alkaline and alkaline hydrogen peroxide pretreatments, although a higher total amount of lignin could be removed by the alkaline and alkaline hydrogen peroxide pretreatments. Furthermore, xylan was removed considerably more efficiently by the hydrotropic method. Consequently, the glucan yield achieved from the hydrotropically treated sample was higher than that from samples treated with alkali or alkaline hydrogen peroxide. Through the use of ToF-SIMS, the distribution and localisation of lignin and carbohydrates on the surface of lignocelluloses during pretreatment and enzymatic hydrolysis could be detected, and xylan degradation during enzymatic hydrolysis could also be assessed. Thus, based on the results from XPS and ToF-SIMS, the mechanism by which hydrotropic pretreatment improves the accessibility of enzymes to the fibre, and thereby enhances enzymatic saccharification, could be better elucidated.
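
As background for these measurements, the following minimal Python sketch (not from the thesis) illustrates how surface lignin coverage is commonly estimated from XPS O/C atomic ratios using a two-component cellulose/lignin surface model; the reference O/C values are typical literature figures and are assumptions here.

```python
# Illustrative sketch: estimating surface lignin coverage from an XPS
# O/C atomic ratio with a common two-component surface model.
# The reference values below are assumed, typical literature figures.
OC_CELLULOSE = 0.83  # theoretical O/C ratio of pure cellulose
OC_LIGNIN = 0.33     # approximate O/C ratio of lignin

def surface_lignin_coverage(oc_sample: float) -> float:
    """Fraction (0..1) of the surface covered by lignin, assuming the
    surface is a two-component mixture of cellulose and lignin."""
    phi = (OC_CELLULOSE - oc_sample) / (OC_CELLULOSE - OC_LIGNIN)
    return min(max(phi, 0.0), 1.0)  # clamp measurement noise into [0, 1]

# Example: a measured O/C of 0.70 implies ~26 % surface lignin coverage.
print(f"{surface_lignin_coverage(0.70):.2%}")
```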

Relevance:

20.00%

Publisher:

Abstract:

The usage of digital content, such as video clips and images, has increased dramatically during the last decade. Local image features have been applied increasingly in various image and video retrieval applications. This thesis evaluates local features and applies them to image and video processing tasks. The results of the study show that 1) the performance of different local feature detector and descriptor methods varies significantly in object class matching, 2) local features can be applied to image alignment with results superior to the state of the art, 3) the local feature based shot boundary detection method produces promising results, and 4) the local feature based hierarchical video summarization method points to a promising new research direction. In conclusion, this thesis presents local features as a powerful tool for many applications, and future work should concentrate on improving the quality of the local features.
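
To make the evaluated pipeline concrete, here is a minimal sketch of local-feature detection and matching between two frames using OpenCV's ORB, just one of the many detector/descriptor pairs such a comparison could cover; the file names are placeholders.

```python
# Minimal local-feature matching sketch with OpenCV ORB (one example
# detector/descriptor pair; the thesis evaluates several methods).
import cv2

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)           # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-checking for one-to-one matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance: {matches[0].distance}")
```

A shot boundary detector along the lines described could, for instance, flag a boundary whenever the number of such matches between consecutive frames falls below a threshold.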

Relevance:

20.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we studied ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make originally incomplete data complete, and thus easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data, and the research revealed differences between the clustering results obtained with different imputation methods. On most data sets the simple and fast k-NN imputation was good enough, but some data sets called for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as gene microarray experiments. Such networks are typically very large and highly connected, so fast algorithms are needed to produce visually pleasant layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed; the algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
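
To make the baseline method concrete, here is a simplified k-NN imputation sketch for a genes-by-samples expression matrix with NaN marking missing entries; it deliberately omits the thesis's novel guidance by curated biological information and is only a minimal illustration.

```python
# Simplified k-NN missing-value imputation for a genes-x-samples matrix.
# Neighbour genes are ranked by Euclidean distance over the columns where
# the target gene is observed; missing entries get the neighbour average.
import numpy as np

def knn_impute(X: np.ndarray, k: int = 5) -> np.ndarray:
    X = X.copy()
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        obs = ~np.isnan(X[i])
        # Candidate neighbours must be fully observed (simplification).
        cand = [j for j in range(len(X)) if j != i and not np.isnan(X[j]).any()]
        d = np.array([np.linalg.norm(X[j, obs] - X[i, obs]) for j in cand])
        nearest = np.array(cand)[np.argsort(d)[:k]]
        X[i, ~obs] = X[nearest][:, ~obs].mean(axis=0)  # neighbour average
    return X

X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.0],
              [0.9, 1.9, 2.8],
              [5.0, 5.0, 5.0]])
print(knn_impute(X, k=2))  # the NaN is replaced by mean(3.0, 2.8) = 2.9
```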

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, i.e. speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy on the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more reliable velocity and acceleration curves from the tracking data. Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Overall, the results show that these tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
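
As an illustration of the trajectory post-processing step, the sketch below smooths a noisy tracked position and derives speed and acceleration; a Savitzky-Golay filter stands in for the Local Regression and Unscented Kalman smoothers used in the thesis, and the frame rate and signal are invented placeholders.

```python
# Sketch: deriving speed and acceleration from a tracked 2-D trajectory.
# Savitzky-Golay filtering is used here as a simple stand-in smoother.
import numpy as np
from scipy.signal import savgol_filter

fps = 500.0                 # placeholder high-speed frame rate
dt = 1.0 / fps
t = np.arange(0.0, 1.0, dt)
x = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(t.size)  # noisy x-track
y = np.cos(2 * np.pi * t) + 0.01 * np.random.randn(t.size)  # noisy y-track

win, poly = 31, 3           # smoothing window (frames) and polynomial order
vx = savgol_filter(x, win, poly, deriv=1, delta=dt)  # first derivatives
vy = savgol_filter(y, win, poly, deriv=1, delta=dt)
accel_x = savgol_filter(x, win, poly, deriv=2, delta=dt)  # second derivative

speed = np.hypot(vx, vy)
print(f"peak speed: {speed.max():.2f} units/s")
```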

Relevance:

20.00%

Publisher:

Abstract:

The aim of this master's thesis is to research and analyze how purchase invoice processing can be automated and streamlined in a system renewal project. The impact of workflow automation on invoice handling is studied in terms of time, cost and quality. Purchase invoice processing has a lot of potential for automation because of its labor-intensive and repetitive nature. As a case study combining both qualitative and quantitative methods, the topic is approached from a business process management point of view. The current process was first explored through interviews and workshop meetings to create a holistic understanding of the process at hand. Requirements for process streamlining were then researched, focusing on specified vendors and their purchase invoices, which helped to identify the critical factors for successful invoice automation. To optimize the flow from invoice receipt to approval for payment, the invoice receiving process was outsourced and the automation functionalities of the new system were utilized in invoice handling. The quality of invoice data and the need for simple, structured purchase order (PO) invoices were emphasized in the system testing phase. Hence, consolidated invoices containing references to multiple PO or blanket release numbers should be simplified in order to use automated PO matching. With non-PO invoices, it is important to receive the buyer reference details in an applicable invoice data field so that automation rules can be created to route invoices to a review and approval flow. At the beginning of the project, invoice processing was seen as ineffective both time- and cost-wise, and it required a lot of manual labor to carry out all tasks. Based on the testing results, it was estimated that over half of the invoices could be automated within a year of system implementation. Processing times could be reduced remarkably, which would result in savings of up to 40 % in annual processing costs. Due to several advancements in the purchase invoice process, business process quality can also be regarded as improved.
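
To make the routing idea concrete, here is a hypothetical sketch of an automation rule of the kind described: a PO invoice is auto-approved only if it references exactly one open purchase order whose total matches within a small tolerance, while consolidated or non-matching invoices go to manual review. All field names, amounts and tolerances are invented for illustration.

```python
# Hypothetical PO-matching rule sketch (field names invented).
from dataclasses import dataclass

@dataclass
class Invoice:
    po_numbers: list[str]   # PO references extracted from the invoice data
    total: float

open_pos = {"PO-1001": 250.00, "PO-1002": 980.50}  # open POs and amounts

def route(inv: Invoice) -> str:
    if len(inv.po_numbers) != 1:       # consolidated invoices cannot be
        return "manual review"         # matched automatically here
    expected = open_pos.get(inv.po_numbers[0])
    if expected is not None and abs(inv.total - expected) <= 0.01:
        return "auto-approve for payment"
    return "manual review"

print(route(Invoice(["PO-1001"], 250.00)))              # auto-approve
print(route(Invoice(["PO-1001", "PO-1002"], 1230.50)))  # manual review
```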

Relevance:

20.00%

Publisher:

Abstract:

Feature extraction is the part of pattern recognition in which the sensor data are transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors; feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed particularly for the embedded domain, is presented. Inspired by some of the principles observed in the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
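
For reference, the basic 3x3 LBP operator underlying these features can be sketched in a few lines of NumPy; this is the textbook form of the operator, not the thesis's focal-plane (MIPA4k) hardware implementation.

```python
# Basic 3x3 Local Binary Pattern: each pixel is encoded by thresholding
# its 8 neighbours against the centre value, giving an 8-bit code that is
# invariant to monotonic illumination changes.
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # centre pixels
    # 8 neighbour offsets, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit  # set bit if neighbour >= centre
    return code

img = (np.random.rand(6, 6) * 255).astype(np.uint8)
print(lbp_3x3(img))  # (H-2) x (W-2) map of 8-bit LBP codes
```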

Relevance:

20.00%

Publisher:

Abstract:

Polyketides are a diverse group of natural products produced by many bacteria, fungi and plants. These metabolites have diverse biological activities, and several members of the group are in clinical use as antibiotics, anticancer agents, antifungals and immunosuppressants. The different polyketides are produced by polyketide synthases, which catalyze the condensation of extender units into various polyketide scaffolds. After the biosynthesis of the polyketide backbone, further versatility is introduced into the molecule by tailoring enzymes catalyzing, for instance, hydroxylations, methylations and glycosylations. Flavoprotein monooxygenases (FPMOs) and short-chain alcohol dehydrogenases/reductases (SDRs) are two enzyme families that catalyze unusual tailoring reactions in the biosynthesis of natural products. In the experimental section, the functions of homologous FPMO and SDR tailoring enzymes from five different angucycline pathways were studied in vitro. The results revealed how different angucyclinones are produced from a common intermediate and that the FPMO JadH and the SDR LanV are responsible for the divergence of jadomycins and landomycins, respectively, from other angucyclines. Structural studies of these tailoring enzymes revealed differences between the homologous enzymes and enabled structure-based protein engineering. Mutagenesis experiments gave important information about the enzymes behind the evolution of distinct angucycline metabolites. These experiments revealed a correlation between substrate inhibition and bi-functionality in the JadH homologue PgaE. In the case of LanV, analysis of the mutagenesis results revealed that the difference between the stereospecificities of LanV and its homologues CabV and UrdMred is, unexpectedly, related to the conformation of the substrate rather than to the structure of the enzyme. Altogether, the results presented here improve our knowledge of the different steps of angucycline biosynthesis and of the reaction mechanisms used by the tailoring enzymes behind these steps. This information can hopefully be used to modify these enzymes to produce novel metabolites with new biological targets or novel modes of action. The understanding of these unusual enzyme mechanisms is also of interest to enzymologists outside the field of natural product research.

Relevance:

20.00%

Publisher:

Abstract:

This study examines a waste power plant's optimal processing chain; to ensure that the right decision is made, it is important to consider from several points of view why one option is better than another. Incineration of waste has developed into a viable option for waste disposal. There are several legislative issues and technical options to consider when starting up a waste power plant; of the techniques, pretreatment, the burner and flue gas cleaning are the most important. The treatment of incineration residues is important, since they can be very harmful to the environment. Energy production from waste is not highly efficient, and several harmful compounds are emitted. Recycling of waste before incineration is not very typical, and there are few recycling options for materials that cannot easily be recycled into the same product. Life cycle assessment is a good option for studying the environmental effects of a system; it has four phases that are part of an iterative study process. In this study, the case environment is a waste power plant. The plant is modelled with the GaBi 6 software, and the scope is from gate to grave. There are three different scenarios, of which the first and second are compared with each other to reach conclusions; the zero scenario is included to demonstrate the situation without the power plant. In scenario one the power plant recycles some materials; in scenario two it recycles even more materials and utilizes the bottom ash in more ways than one. The model includes substitutive processes for the materials when they are not recycled in the plant. The global warming potential results show that scenario one is the best option, and the variable costs considered point to the same result. The conclusion is that the waste power plant should not recycle more materials or utilize the bottom ash in multiple ways, as the area is not yet ready for that kind of utilization and for production from recycled materials.

Relevance:

20.00%

Publisher:

Abstract:

This doctoral study conducts an empirical analysis of the impact of Word-of-Mouth (WOM) on marketing-relevant outcomes, such as attitudes and consumer choice, during a high-involvement and complex service decision. Due to its importance to decision-making, WOM has attracted interest from academia and practitioners for decades. Consumers are known to discuss products and services with one another. These discussions help consumers to form an evaluative opinion, as WOM reduces perceived risk, simplifies complexity, and increases the confidence of consumers in decision-making. These discussions are also highly impactful, as WOM is a trustworthy source of information, since it is independent of the company or brand. In responding to the calls for more research on what happens after WOM information is received, and how it affects marketing-relevant outcomes, this dissertation extends prior WOM literature by investigating how consumers process information in a high-involvement service domain, in particular higher education. Further, the dissertation studies how the form of WOM influences consumer choice. The research contributes to the WOM and services marketing literature by developing and empirically testing a framework for information processing and by studying the long-term effects of WOM. The results of the dissertation are presented in five research publications based on longitudinal data. The research leads to a proposed theoretical framework for the processing of WOM, based on theories from social psychology. The framework is specifically focused on service decisions, as it takes into account evaluation difficulty through the complex nature of the choice criteria associated with service purchase decisions. Further, other gaps in the current WOM literature are addressed by, for example, examining how the source of WOM and service values affect the processing mechanism. The research also provides implications for managers aiming to trigger favourable WOM through marketing efforts, such as advertising and testimonials. The results provide suggestions on how to design these marketing efforts by taking into account the mechanism through which information is processed, or the form of social influence.

Relevance:

20.00%

Publisher:

Abstract:

The general aim of the thesis was to study university students' learning from the perspective of regulation of learning and text processing. The data were collected from two academic disciplines, medical and teacher education, which share the features of highly scheduled study, a multidisciplinary character, a complex relationship between theory and practice and a professional nature. The contemporary information society poses new challenges for learning, as it is not possible to learn all the information needed in a profession during a study programme. Therefore, it is increasingly important to learn how to think and learn independently, how to recognise gaps in and update one's knowledge and how to deal with the huge amount of constantly changing information. In other words, it is critical to regulate one's learning and to process text effectively. The thesis comprises five sub-studies that employed cross-sectional, longitudinal and experimental designs and multiple methods, from surveys to eye tracking.

Study I examined the connections between students' study orientations and the ways they regulate their learning. In total, 410 second-, fourth- and sixth-year medical students from two Finnish medical schools participated in the study by completing a questionnaire measuring both general study orientations and regulation strategies. The students were generally deeply oriented towards their studies; however, they regulated their studying externally. Several interesting and theoretically reasonable connections between the variables were found. For instance, self-regulation was positively correlated with deep orientation and achievement orientation and negatively correlated with non-commitment. External regulation was likewise positively correlated with deep orientation and achievement orientation, but also with surface orientation and systematic orientation. It is argued that external regulation might function as an effective coping strategy in the cognitively loaded medical curriculum.

Study II focused on medical students' regulation of learning and their conceptions of the learning environment in an innovative medical course where traditional lectures were combined with problem-based learning (PBL) group work. First-year medical and dental students (N = 153) completed a questionnaire assessing their regulation strategies and their views of the PBL group work. The results indicated that external regulation and self-regulation of the learning content were the most typical regulation strategies among the participants. In line with previous studies, self-regulation was connected with study success. Strictly organised PBL sessions were not considered as useful as lectures, although the students' views of the teacher/tutor and the group were mainly positive. Therefore, developers of teaching methods are challenged to think of new solutions that facilitate reflection on one's learning and improve the development of self-regulation.

In Study III, a person-centred approach to studying regulation strategies was employed, in contrast to the traditional variable-centred approach used in Studies I and II. The aim was to identify different regulation strategy profiles among medical students (N = 162) across time and to examine to what extent these profiles predict study success in preclinical studies. Four regulation strategy profiles were identified, and connections with study success were found. Students with the lowest self-regulation and an increasing lack of regulation performed worse than the other groups. As the person-centred approach makes it possible to identify students with diverse regulation patterns, it could be used to support student learning and to facilitate the early diagnosis of learning difficulties.

In Study IV, 91 student teachers participated in a pre-test/post-test design in which they answered open-ended questions about a complex science concept both before and after reading either a traditional expository science text or a refutational text that prompted the reader to revise his or her beliefs toward the scientific view of the phenomenon. The student teachers also completed a questionnaire concerning their regulation and processing strategies. The results showed that the students' understanding improved after the text reading intervention and that the refutational text promoted understanding better than the traditional text. Additionally, regulation and processing strategies were found to be connected with understanding of the science phenomenon. A weak trend suggested that weaker learners would benefit more from the refutational text. It seems that learners with effective learning strategies are able to pick out the relevant content regardless of the text type, whereas weaker learners might benefit from refutational passages that contrast the most typical misconceptions with scientific views.

The purpose of Study V was to use eye tracking to determine how third-year medical students (n = 39) and internal medicine residents (n = 13) read and solve patient case texts. The results revealed differences between medical students and residents in processing patient case texts: compared to the students, the residents were more accurate in their diagnoses and processed the texts significantly faster and with fewer fixations. Different reading patterns were also found. The observed differences could be used in medical education to model expert reasoning and to teach how a good medical text should be constructed.

The main findings of the thesis indicate that even among very selected student populations, such as high-achieving medical students or student teachers, there is considerable variation in regulation strategies of learning and text processing. As these learning strategies are related to successful studying, students enter educational programmes with rather different chances of managing and achieving success. Further, the ways of engaging in learning seldom centre on a single strategy or approach; rather, students seem to combine several strategies to some degree. Sometimes it can be a matter of perspective which way of learning is considered best; therefore, the reality of studying in higher education is often more complicated than the simplistic view of self-regulation as a good quality and external regulation as a harmful one. The beginning of university studies may be stressful for many, as the gap between high school and university studies is huge, and strategies that were adequate during high school might not work as well in higher education. Therefore, it is important to map students' learning strategies and to encourage them to use high-quality learning strategies from the beginning. Instead of separate courses on learning skills, the integration of these skills into course contents should be considered. Furthermore, the learning of complex scientific phenomena could be facilitated by paying attention to high-quality learning materials and texts and to other support from the learning environment at university as well. Eye tracking seems to have great potential for evaluating performance and growing diagnostic expertise in text processing, although more research using texts as stimuli is needed. Both medical and teacher education programmes, and the professions themselves, are challenging in terms of their multidisciplinary nature and the increasing amounts of information involved, and therefore require good lifelong learning skills both during the study period and later in working life.