20 results for Data pre-processing

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

This thesis examines the feasibility of a forest inventory method based on two-phase sampling in estimating forest attributes at the stand or substand levels for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data consisting of remote sensing imagery or other geographic information and field measurements. Auxiliary data are utilized as first-phase data for covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features to the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages compared with a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone. 
The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverse values of their mean square errors. Despite the fact that the plot-level estimation accuracy in two-phase sampling inventory can be improved in many ways, the accuracy of forest estimates based mainly on single-view satellite and aerial imagery is a relatively poor basis for making stand-level management decisions.
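The inverse mean-square-error weighting described above is simple to state in code. Below is a minimal sketch (Python with NumPy); the three estimates and their MSEs are hypothetical stand-ins for different data sources, not values from the thesis:

```python
import numpy as np

def combine_estimates(estimates, mses):
    """Combine independent estimates of the same forest attribute by
    weighting each with the inverse of its mean square error."""
    estimates = np.asarray(estimates, dtype=float)
    mses = np.asarray(mses, dtype=float)
    weights = 1.0 / mses
    weights /= weights.sum()                 # normalise weights to sum to 1
    combined = np.sum(weights * estimates)
    # MSE of the combined estimate, assuming independent errors
    combined_mse = 1.0 / np.sum(1.0 / mses)
    return combined, combined_mse

# e.g. stand volume estimates (m3/ha) from satellite, aerial image, lidar
est, mse = combine_estimates([180.0, 200.0, 195.0], [900.0, 400.0, 100.0])
```

The most accurate source (smallest MSE) dominates the combination, and the combined MSE is never worse than the best individual source.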

Relevance:

80.00%

Publisher:

Abstract:

Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and the anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. In contrast, the turbulent fluxes of sensible and latent heat, as well as net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate to the residual term, and special attention must be paid to the accurate determination of the components. One cause of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high frequency corrections to water vapor fluxes and to assess the temporal variation of the energy fluxes. Turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study comprise widely accepted eddy covariance post-processing methods in addition to Fourier and wavelet analysis. The high frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high frequency variation deficit. 
Nevertheless, the resulting wavelet correction remains minimal in contrast to the traditional transfer function correction. The energy fluxes exhibit behavior characteristic of urban environments: the energy input is channeled into sensible heat, as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m-2 in summer to -35 W m-2 in winter, indicating heat storage in the ground during summer. Furthermore, the anthropogenic heat flux is estimated to be 50 W m-2 during winter, when residential heating is important.
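The eddy covariance technique mentioned above derives a turbulent flux from the covariance between vertical wind fluctuations and scalar fluctuations over an averaging period. A minimal sketch for the sensible heat flux, assuming a constant air density and using synthetic 10 Hz data (none of the numbers are from the study):

```python
import numpy as np

RHO_AIR = 1.2    # air density, kg m-3 (assumed constant here)
CP_AIR = 1005.0  # specific heat capacity of air, J kg-1 K-1

def sensible_heat_flux(w, T):
    """Eddy covariance sensible heat flux H = rho * cp * <w'T'>,
    where primes denote deviations from the averaging-period mean."""
    w = np.asarray(w, dtype=float)
    T = np.asarray(T, dtype=float)
    w_prime = w - w.mean()
    T_prime = T - T.mean()
    return RHO_AIR * CP_AIR * np.mean(w_prime * T_prime)

# synthetic 10 Hz data for a 30 min period: temperature fluctuations
# correlated with updrafts give an upward (positive) flux
rng = np.random.default_rng(0)
w = 0.3 * rng.standard_normal(18000)                     # vertical wind, m s-1
T = 293.0 + 0.5 * w + 0.2 * rng.standard_normal(18000)   # air temperature, K
H = sensible_heat_flux(w, T)                             # flux in W m-2
```

In practice the raw covariance would additionally be despiked, rotated into the mean wind coordinate system, and spectrally corrected, which is exactly where the transfer function and wavelet corrections of the study enter.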

Relevance:

30.00%

Publisher:

Abstract:

The number of drug substances in formulation development in the pharmaceutical industry is increasing. Some of these are amorphous drugs with a glass transition below ambient temperature, and thus they are usually difficult to formulate and handle. One reason for this is the reduced viscosity, related to the stickiness of the drug, which makes them complicated to handle in unit operations. Thus, the aim of this thesis was to develop a new processing method for a sticky amorphous model material. Furthermore, the model materials were characterised before and after formulation, using several characterisation methods, to understand more precisely the prerequisites for the physical stability of the amorphous state against crystallisation. The model materials used were monoclinic paracetamol and citric acid anhydrate. Amorphous materials were prepared by melt quenching or by ethanol evaporation. The melt blends were found to have slightly higher viscosity than the ethanol evaporated materials. However, melt-produced materials crystallised more easily upon consecutive shearing than ethanol evaporated materials. The only material that did not crystallise during shearing was a 50/50 (w/w, %) blend, regardless of the preparation method, and it was physically stable for at least two years in dry conditions. Shearing at varying temperatures was established as a means to measure the physical stability of amorphous materials under processing and storage conditions. The actual physical stability of the blends was better than that of the pure amorphous materials at ambient temperature. Molecular mobility was not related to the physical stability of the amorphous blends, as observed by crystallisation. The molecular mobility of the 50/50 blend, derived from spectral linewidth as a function of temperature using solid-state NMR, correlated better with the molecular mobility derived from a rheometer than that derived from differential scanning calorimetry data. 
Based on the results obtained, the effects of molecular interactions, thermodynamic driving force and miscibility of the blends are discussed as the key factors in stabilising the blends. Stickiness was found to be affected by glass transition and viscosity. Ultrasound extrusion and cutting were successfully tested to increase the processability of the sticky material. Furthermore, it was found to be possible to process the physically stable 50/50 blend in a supercooled liquid state instead of a glassy state. The method was not found to accelerate crystallisation. This may open up new possibilities to process amorphous materials that are otherwise impossible to manufacture into solid dosage forms.

Relevance:

30.00%

Publisher:

Abstract:

In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of the process data enables better control of the processes to obtain a high quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical. Thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. 
With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
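The partial least squares calibration step can be illustrated with a small NIPALS-style PLS1 regression. The "spectra" below are synthetic stand-ins for in-line NIR measurements and the thickness values are hypothetical, not the thesis's data:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 regression (NIPALS) for a single response.
    Returns regression coefficients B and intercept b0 so that
    predictions are X @ B + b0."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # scores
        p = Xk.T @ t / (t @ t)            # X loadings
        q = yk @ t / (t @ t)              # y loading
        Xk = Xk - np.outer(t, p)          # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # coefficients in original X space
    return B, y_mean - x_mean @ B

# synthetic "NIR spectra": 60 tablets x 100 wavelengths, absorbance
# grows with a hypothetical coating thickness (in micrometres)
rng = np.random.default_rng(1)
thickness = rng.uniform(10.0, 80.0, size=60)
base = np.linspace(0.1, 1.0, 100)                       # baseline spectrum
X = 0.01 * np.outer(thickness, base) + 0.02 * rng.standard_normal((60, 100))
B, b0 = pls1(X, thickness, n_components=2)
pred = X @ B + b0
rmse = float(np.sqrt(np.mean((pred - thickness) ** 2)))
```

A real calibration would of course be validated on held-out tablets; the point here is only the shape of the model: a few latent components compress hundreds of correlated wavelengths into a linear thickness predictor.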

Relevance:

30.00%

Publisher:

Abstract:

The goal of this research was to establish the necessary conditions under which individuals are prepared to commit themselves to quality assurance work in the organisation of a Polytechnic. The conditions were studied using four main concepts: awareness of quality, commitment to the organisation, leadership and work welfare. First, individuals were asked to describe these four concepts. Then, relationships between the concepts were analysed in order to establish the conditions for the commitment of an individual towards quality assurance work (QA). The study group comprised the entire personnel of Helsinki Polytechnic, of which 341 individuals (44.5%) participated. Mixed methods were used as the methodological base. A questionnaire and interviews were used as the research methods. The data from the interviews were used for the validation of the results, as well as for completing the analysis. The results of these interviews and analyses were integrated using the concurrent nested design method. In addition, the questionnaire was used to separately analyse the impressions and meanings of the awareness of quality and leadership, because, according to the pre-understanding, impressions of phenomena expressed in terms of reality have an influence on the commitment to QA. In addition to statistical figures, principal component analysis was used as a descriptive method. For comparisons between groups, one-way analysis of variance and effect size analysis were used. As explanatory methods, forward regression analysis and structural modelling were applied. As a result of the research, it was found that 51% of the conditions necessary for a commitment to QA were explained by an individual's experience/belief that QA was a method of development, that it was possible to participate in QA and that the meaning of quality included both product and process qualities. 
If analysed separately, the other main concepts (commitment to the organisation, leadership and work welfare) played only a small part in explaining an individual's commitment. In the context of this research, a structural path model of the main concepts was built. In the model, the concepts were interconnected by paths created as a result of a literature search covering the main concepts, as well as a result of an analysis of the empirical material of this thesis work. The path model explained 46% of the necessary conditions under which individuals are prepared to commit themselves to QA. The most important path for achieving a commitment stemmed from product and system quality emanating from the new goals of the Polytechnic, moved through the individual's experience that QA is a method of the total development of quality and ended in a commitment to QA. The second most important path stemmed from the individual's experience of belonging to a supportive work community, moved through the supportive value of the job and through affective commitment to the organisation and ended in a commitment to QA. The third path stemmed from an individual's experiences of participating in QA, moved through collective system quality and, through this, to the supportive value of the job and to affective commitment to the organisation, and ended in a commitment to QA. The final path in the path model stemmed from leadership by empowerment, moved through collective system quality, the supportive value of the job and affective commitment to the organisation, and, again, ended in a commitment to QA. As a result of the research, it was found that the individual's functional department was an important factor in explaining the differences between groups. Therefore, understanding the dynamics of subcultures in the organisation is important when developing QA. Likewise, learning-teaching paradigms proved to be a differentiating factor. 
Individuals thinking according to the humanistic-constructivistic paradigm showed more commitment to QA than technological-rational thinkers. Also, it was proved that the QA training program did not increase commitment, as the path model demonstrated that those who participated in training showed 34% commitment, whereas those who did not showed 55% commitment. As a summary of the results it can be said that the necessary conditions under which individuals are prepared to commit themselves to QA cannot be treated in a reductionistic way. Instead, the conditions must be treated as one totality, with all the main concepts interacting simultaneously. Also, the theoretical framework of quality must include its dynamic aspect, which means the development of the work of the individual and learning through auditing. In addition, this dynamism includes the reflection of the paradigm of the functions of the individual as well as that of all parts of the organisation. It is important to understand and manage the various ways of thinking and the cultural differences produced by the fragmentation of the organisation. Finally, it seems possible that the path model can be generalised for use in any organisation development project where the personnel should be committed.

Relevance:

30.00%

Publisher:

Abstract:

Through this study I aim to portray connections between home and school through the patterns of thought and action shared in everyday life in a certain community. My observations are primarily based upon interviews, writings and artwork by people from the home (N=32) and school (N=13) contexts. Through the stories told, I depict the characters and characteristic features of home-school interaction by generation. According to the material, in the school days of the grandparents the focus was on discipline and order. For the parents, the focus had shifted towards knowledge, while for the pupils today, the focus lies on evaluation, through which the upbringing of the child is steered towards favourable outcomes. Teachers and the people at home hold partially different understandings of home-school interaction, both of its manifested forms and its potentials. The forms of contact in use today are largely seen as one-sided. A yearning for openness and regularity is shared by both sides, yet understood differently. Common causes of failure are said to lie in plain human difficulties in communication and social interaction, but deeply rooted traditions regarding forms of contact also cast a shadow on the route to successful co-operation. This study started around the idea that home-school interaction should be steered towards the exchange of constructive ideas between the home and school environments. Combining the different views gives something to build upon. To test this idea, I drafted a practice period, which was implemented in a small pre-school environment in the fall of 1997. My focus of interest in this project was on the handling of ordinary life information in the schools. So I combined individual views, patterns of knowledge and understandings of the world into the process of teaching. Works of art and writings by the informants served as tools for information processing and as practical forms of building home-school interaction. 
Experiences from the pre-school environment were later echoed in constructing home-school interaction in five other schools. In both these projects, the teaching in the school was based on stories, thoughts and performances put together by the parents, grandparents and children at home. During these processes, the material used in this study, consisting of artwork, writings and interviews (N=501), was collected. The data show that information originating from the home environments was both a motivating and interesting addition to the teaching. There was even a sense of pride when assessing the seeds of knowledge from one's own roots. In most cases and subjects, the homegrown information content was seamlessly connected to the functions of the school and the curriculum. This project initiated thought processes between pupils and teachers, adults, children and parents, teachers and parents, and also between generations. It appeared that many of the subjects covered had not been raised before between the various participant groups. I have a special interest here in visual expression and its various contextual meanings. The art material portrays how content matter and characteristic features of the adult and parent contexts are reflected in the works of the children. Another clearly noticeable factor in the art material is the impact of time-related traditions and functions on the means of visual expression. Comparing the visual material to the written material reveals variances of meaning and possibilities between these forms of expression. The visual material appears to be related especially to portraying objects, action and usage. Processing through the making of images was noted to bring back memories of concrete structures, details and also emotions. This process offered the child an intensive social connection with the adults. In some cases, with children and adults alike, this project brought forth an ongoing relation to visual expression. 
During this study I end up questioning the concept of 'home-school collaboration'. This widely used concept guides and outlines the interaction between schools and homes. In order to broaden the field of possibilities, I choose to use the concept 'school-home interconnection'. This concept forms better grounds for forming varying impressions and practices when building interactive contexts. This concept places the responsibility of bridging the connection gap on the schools. Through the experiences and innovations of thought gained from these projects, I form a model of pedagogy that embraces the idea of school-home interconnection and builds on the various impressions and expressions contained in it. In this model, the school makes use of experiences, thoughts and conceptions from the home environment. Various forms of expression are used to portray and process this information. This joint evaluation and observation evolves thought patterns both in school and at home. Keywords: perceiving, visuality, visual culture, art and text, visual expression, art education, growth in interaction, home-school collaboration, school-home interconnection, school-home interaction model.

Relevance:

30.00%

Publisher:

Abstract:

A child learns new things, creates social relationships and participates in play with the help of language. How can a child overcome these challenges if the surrounding language is not his mother tongue? The objective of learning a new language in Pre-school education is active bilingualism in all fields of the language. The theoretical context of the research arises from bilingualism, language learning, language skills and their evaluation. The object of the research was to understand the language skills of a child from a different linguistic and cultural background at the final stage of Pre-school education and to clarify how learning Finnish was supported during the Pre-school year. Answers to the research issues were sought with the following questions: 1) What kind of language skills does a child from a different linguistic and cultural background have at the final stage of Pre-school education?, 1.1) What kind of listening comprehension skills?, 1.2) What kind of speech and vocabulary skills?, 1.3) What kind of structural skills?, 2) What kind of individual differences are there in the language skills of children from different linguistic and cultural backgrounds?, and 3) How has a child from a different linguistic and cultural background been supported in learning Finnish during Pre-school education? The view of language skills in this research is holistic, even though the skills are analysed in separate fields. The aim of this research is to form an overall impression of the Finnish skills of the children participating in the research. Eight Pre-school-aged children with different linguistic and cultural backgrounds and their kindergarten teachers participated in this research. The children had taken part in Finnish activities for about three years. The research material consists of a test series (KITA), which evaluates children's language skills, and of a questionnaire to the kindergarten teachers. 
The purpose of the questionnaire was to provide additional information on the children's language skills in Pre-school teaching situations and on supporting Finnish in Pre-school education. This research is qualitative, and the processing of the material is based on content analysis. According to the kindergarten teachers, the children's social language skills were sufficient to cope in everyday life, but the children needed assistance with longer instructions. The same phenomenon could also be seen in the KITA tests, in which long and abstract instructions turned out to be difficult. Individual differences between the children were perceived in productive skills, which were realised in fluent or non-fluent speech. The children were supported in learning Finnish individually, in small groups and in whole-group activities. 'Finnish as a second language' small groups were the most common form of language-learning support. Support for understanding was emphasised in whole-group as well as individual situations when assisting the child's language skills. Generally, the children's language skills were at the level of developing basic language skills. The data of this research help in understanding children's language skills after three years of acquiring Finnish. The results can be utilised in the planning and evaluation of second-language teaching.

Relevance:

30.00%

Publisher:

Abstract:

During the last decades there has been a global shift in forest management from a focus solely on timber management to ecosystem management that endorses all aspects of forest functions: ecological, economic and social. This has resulted in a shift in paradigm from sustained yield to sustained diversity of values, goods and benefits obtained at the same time, introducing new temporal and spatial scales into forest resource management. The purpose of the present dissertation was to develop methods that would enable spatial and temporal scales to be introduced into the storage, processing, access and utilization of forest resource data. The methods developed are based on a conceptual view of a forest as a hierarchically nested collection of objects that can have a dynamically changing set of attributes. The temporal aspect of the methods consists of lifetime management for the objects and their attributes and of a temporal succession linking the objects together. Development of the forest resource data processing method concentrated on the extensibility and configurability of the data content and model calculations, allowing for a diverse set of processing operations to be executed using the same framework. The contribution of this dissertation to the utilisation of multi-scale forest resource data lies in the development of a reference data generation method to support forest inventory methods in approaching single-tree resolution.
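The conceptual view described, hierarchically nested objects with dynamic attribute sets, lifetime management and succession links, might be sketched as a small data structure. All class and attribute names below are illustrative assumptions, not taken from the dissertation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ForestObject:
    """A hierarchically nested inventory object (e.g. forest, stand, tree)
    with a dynamically changing attribute set and a lifetime."""
    obj_type: str
    valid_from: int                                   # first year the object exists
    valid_to: Optional[int] = None                    # None = still alive
    attributes: dict = field(default_factory=dict)    # name -> [(year, value)]
    children: list = field(default_factory=list)      # nested objects
    successor: Optional["ForestObject"] = None        # temporal succession link

    def set_attribute(self, name, year, value):
        """Record a new value for an attribute at a given year."""
        self.attributes.setdefault(name, []).append((year, value))

    def attribute_at(self, name, year):
        """Latest recorded value at or before the given year, if any."""
        history = [v for (y, v) in sorted(self.attributes.get(name, []))
                   if y <= year]
        return history[-1] if history else None

stand = ForestObject("stand", valid_from=1995)
stand.set_attribute("volume_m3_ha", 1995, 120.0)
stand.set_attribute("volume_m3_ha", 2005, 180.0)
stand.children.append(ForestObject("tree", valid_from=1995))
```

Queries at different years then return the attribute state valid at that time, which is the kind of temporal access the dissertation's storage and processing methods are built around.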

Relevance:

30.00%

Publisher:

Abstract:

The average daily intake of folate, one of the B vitamins, falls below recommendations among the Finnish population. Bread and cereals are the main sources of folate, rye being the most significant single source. Processing is a prerequisite for the consumption of whole grain rye; however, little is known about the effect of processing on folates. Moreover, data on the bioavailability of endogenous cereal folates are scarce. The aim of this study was to examine the variation in, as well as the effect of fermentation, germination, and thermal processes on, folate contents in rye. The bioavailability of endogenous rye folates was investigated in a four-week human intervention study. One of the objectives throughout the work was to optimise and evaluate analytical methods for determining folate contents in cereals. Affinity chromatographic purification followed by high-performance liquid chromatography (HPLC) was a suitable method for analysing cereal products for folate vitamers, and a microbiological assay with Lactobacillus rhamnosus reliably quantified the total folate. However, HPLC gave approximately 30% lower results than the microbiological assay. The folate content of rye was high and could be further increased by targeted processing. The vitamer distribution of whole grain rye was characterised by a large proportion of formylated vitamers, followed by 5-methyltetrahydrofolate. In sourdough fermentation of rye, the yeasts studied synthesised folate, whereas lactic acid bacteria mainly depleted it. Two endogenous bacteria isolated from rye flour were found to produce folate during fermentation. Inclusion of baker's yeast in sourdough fermentation raised the folate level so that the bread could contain more folate than the flour it was made of. Germination markedly increased the folate content of rye, with particularly high folate concentrations in the hypocotylar roots. 
Thermal treatments caused significant folate losses but the preceding germination compensated well for the losses. In the bioavailability study, moderate amounts of endogenous folates in the form of different rye products and orange juice incorporated in the diet improved the folate status among healthy adults. Endogenous folates from rye and orange juice showed similar bioavailability to folic acid from fortified white bread. In brief, it was shown that the folate content of rye can be enhanced manifold by optimising and combining food processing techniques. This offers some practical means to increase the daily intake of folate in a bioavailable form.

Relevance:

30.00%

Publisher:

Abstract:

The Taita Hills in southeastern Kenya form the northernmost part of Africa’s Eastern Arc Mountains, which have been identified by Conservation International as one of the top ten biodiversity hotspots on Earth. As with many areas of the developing world, over recent decades the Taita Hills have experienced significant population growth leading to associated major changes in land use and land cover (LULC), as well as escalating land degradation, particularly soil erosion. Multi-temporal medium resolution multispectral optical satellite data, such as imagery from the SPOT HRV, HRVIR, and HRG sensors, provides a valuable source of information for environmental monitoring and modelling at a landscape level at local and regional scales. However, utilization of multi-temporal SPOT data in quantitative remote sensing studies requires the removal of atmospheric effects and the derivation of surface reflectance factor. Furthermore, for areas of rugged terrain, such as the Taita Hills, topographic correction is necessary to derive comparable reflectance throughout a SPOT scene. Reliable monitoring of LULC change over time and modelling of land degradation and human population distribution and abundance are of crucial importance to sustainable development, natural resource management, biodiversity conservation, and understanding and mitigating climate change and its impacts. The main purpose of this thesis was to develop and validate enhanced processing of SPOT satellite imagery for use in environmental monitoring and modelling at a landscape level, in regions of the developing world with limited ancillary data availability. 
The Taita Hills formed the application study site, whilst the Helsinki metropolitan region was used as a control site for validation and assessment of the applied atmospheric correction techniques, where multiangular reflectance field measurements were taken and where horizontal visibility meteorological data concurrent with image acquisition were available. The proposed historical empirical line method (HELM) for absolute atmospheric correction was found to be the only applied technique that could derive surface reflectance factor within an RMSE of < 0.02 ρs in the SPOT visible and near-infrared bands; an accuracy level identified as a benchmark for successful atmospheric correction. A multi-scale segmentation/object relationship modelling (MSS/ORM) approach was applied to map LULC in the Taita Hills from the multi-temporal SPOT imagery. This object-based procedure was shown to derive significant improvements over a uni-scale maximum-likelihood technique. The derived LULC data was used in combination with low cost GIS geospatial layers describing elevation, rainfall and soil type, to model degradation in the Taita Hills in the form of potential soil loss, utilizing the simple universal soil loss equation (USLE). Furthermore, human population distribution and abundance were modelled with satisfactory results using only SPOT and GIS derived data and non-Gaussian predictive modelling techniques. The SPOT derived LULC data was found to be unnecessary as a predictor because the first and second order image texture measurements had greater power to explain variation in dwelling unit occurrence and abundance. The ability of the procedures to be implemented locally in the developing world using low-cost or freely available data and software was considered. The techniques discussed in this thesis are considered equally applicable to other medium- and high-resolution optical satellite imagery, as well as the utilized SPOT data.
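An empirical line correction of the kind HELM builds on fits a linear relation between image digital numbers and known target reflectances, then applies that relation to every pixel of the band. A minimal sketch with hypothetical calibration targets (all values invented for illustration):

```python
import numpy as np

def empirical_line(dn_targets, ref_targets):
    """Fit the empirical line  rho = gain * DN + offset  from
    pseudo-invariant calibration targets of known reflectance."""
    gain, offset = np.polyfit(dn_targets, ref_targets, 1)
    return gain, offset

# hypothetical targets for one SPOT band: (digital number, field reflectance)
dn = np.array([20.0, 60.0, 110.0, 180.0])     # image DNs over the targets
rho = np.array([0.03, 0.12, 0.24, 0.40])      # measured surface reflectance
gain, offset = empirical_line(dn, rho)

# apply the fitted line to a (tiny) image band to get reflectance factor
image_dn = np.array([[25.0, 90.0],
                     [150.0, 200.0]])
reflectance = gain * image_dn + offset
```

The historical variant in the thesis extends this idea to multi-temporal imagery where field measurements concurrent with every acquisition are not available; the per-band linear form of the correction is the same.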

Relevance:

30.00%

Publisher:

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. 
By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
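Models of this kind usually start from small image patches that are centered and whitened with PCA before ICA or a two-layer model is estimated. The sketch below shows that standard preprocessing step; the synthetic noise image, the 8x8 patch size, and the choice of 16 components are assumptions for illustration only (the thesis works on natural images, not noise):

```python
import numpy as np

# PCA/whitening of image patches -- the conventional first step before
# estimating ICA-style models of natural images. The "image" here is
# random noise standing in for a natural-image dataset.

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))

# Collect non-overlapping 8x8 patches as flattened vectors
patches = np.array([
    image[i:i + 8, j:j + 8].ravel()
    for i in range(0, 56, 8) for j in range(0, 56, 8)
])
patches -= patches.mean(axis=0)          # center the data

# PCA: eigendecomposition of the patch covariance matrix
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# Whitening: project onto the leading components, scale to unit variance
k = 16                                   # keep 16 leading components
top = eigvecs[:, -k:]
white = (patches @ top) / np.sqrt(eigvals[-k:] + 1e-9)
print(white.shape)  # (49, 16)
```

After whitening, each retained dimension has (approximately) unit variance and the dimensions are uncorrelated, which is the form ICA algorithms expect.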

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, where we attempt to learn the suitable computations from the natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories that suggested luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence.
Further, we provide the first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
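Projection pursuit with a sparseness-related objective, as described above, can be illustrated with a textbook one-unit FastICA fixed-point iteration (not the thesis's actual estimation code, and the toy data here is an assumption): given whitened data containing one sparse (Laplacian) source and one Gaussian source, the update converges to the sparse direction.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two sources: one sparse (Laplacian), one Gaussian, mixed linearly.
s = np.vstack([rng.laplace(size=2000), rng.standard_normal(2000)])
A = np.array([[2.0, 1.0], [1.0, 2.0]])
x = A @ s

# Whiten the mixed signals
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / x.shape[1])
z = (E / np.sqrt(d)) @ E.T @ x

# One-unit FastICA fixed-point iteration with the tanh nonlinearity,
# which corresponds to a sparseness-related objective E[log cosh(w.z)]
w = np.array([1.0, 0.0])
for _ in range(100):
    u = w @ z
    w = (z * np.tanh(u)).mean(axis=1) - (1.0 - np.tanh(u) ** 2).mean() * w
    w /= np.linalg.norm(w)

# The learned projection should recover the sparse source (up to sign)
corr = np.corrcoef(w @ z, s[0])[0, 1]
print(abs(corr) > 0.9)
```

The Gaussian direction is an unstable fixed point of this update, so the iteration is driven toward the non-Gaussian (sparse) source, which is the sense in which "sparseness" acts as the learning objective.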

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The usual task in music information retrieval (MIR) is to find occurrences of a monophonic query pattern within a music database, which can contain both monophonic and polyphonic content. The so-called query-by-humming systems are a famous instance of content-based MIR. In such a system, the user's hummed query is converted into symbolic form to perform search operations in a similarly encoded database. The symbolic representation (e.g., textual, MIDI or vector data) is typically a quantized and simplified version of the sampled audio data, yielding faster search algorithms and space requirements that can be met in real-life situations. In this thesis, we investigate geometric approaches to MIR. We first study some musicological properties often needed in MIR algorithms, and then give a literature review on traditional (e.g., string-matching-based) MIR algorithms and novel techniques based on geometry. We also introduce some concepts from digital image processing, namely mathematical morphology, which we use to develop and implement four algorithms for geometric music retrieval. The symbolic representation in the case of our algorithms is a binary 2-D image. We use various morphological pre- and post-processing operations on the query and the database images to perform template matching / pattern recognition on the images. The algorithms are basically extensions to the classic image correlation and hit-or-miss transformation techniques used widely in template matching applications. They aim to be a future extension to the retrieval engine of C-BRAHMS, a research project of the Department of Computer Science at the University of Helsinki.
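The core idea of morphological matching on a binary time-pitch image can be sketched in a few lines. This is a toy illustration, not one of the four algorithms from the thesis: a pattern occurs wherever binary erosion of the database image by the pattern (used as a structuring element) leaves a foreground pixel, i.e. wherever every note of the pattern is present in the score. The score and pattern below are made up for the example.

```python
import numpy as np

def erosion_match(image, pattern):
    """Return (row, col) offsets where every 1-pixel of `pattern`
    also appears as a 1 in `image` -- i.e. where the pattern occurs.
    Equivalent to the foreground of a binary erosion of `image` by
    `pattern` (the "hit" half of a hit-or-miss transform)."""
    H, W = image.shape
    h, w = pattern.shape
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = image[r:r + h, c:c + w]
            if np.all(window[pattern == 1] == 1):
                hits.append((r, c))
    return hits

# Binary time-pitch image: rows = pitch, columns = onset time
score = np.zeros((8, 12), dtype=int)
for t, p in [(0, 2), (2, 4), (4, 5), (6, 4), (8, 2)]:  # a small melody
    score[p, t] = 1

pattern = np.zeros((2, 3), dtype=int)
pattern[0, 0] = 1   # a note at pitch p, time t ...
pattern[1, 2] = 1   # ... followed by pitch p+1 two time steps later

print(erosion_match(score, pattern))  # [(4, 2)]
```

A full hit-or-miss transform would additionally require specified background pixels to be 0, which tightens the match; the erosion-only variant above tolerates extra notes in the score, which is usually what polyphonic retrieval needs.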

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Replication and transcription of the RNA genome of alphaviruses relies on a set of virus-encoded nonstructural proteins. They are synthesized as a long polyprotein precursor, P1234, which is cleaved at three processing sites to yield nonstructural proteins nsP1, nsP2, nsP3 and nsP4. All four proteins function as constitutive components of the membrane-associated viral replicase. Proteolytic processing of the P1234 polyprotein is precisely orchestrated and coordinates the replicase assembly and maturation. The specificity of the replicase is also controlled by proteolytic cleavages. The early replicase is composed of the P123 polyprotein intermediate and nsP4. It copies the positive-sense RNA genome to a complementary minus-strand. Production of new plus-strands requires complete processing of the replicase. The papain-like protease residing in nsP2 is responsible for all three cleavages in P1234. This study addressed the mechanisms of proteolytic processing of the replicase polyprotein in two alphaviruses, Semliki Forest virus (SFV) and Sindbis virus (SIN), representing different branches of the genus. The survey highlighted the functional relation of the alphavirus nsP2 protease to the papain-like enzymes. A new structural motif, the Cys-His catalytic dyad accompanied by an aromatic residue following the catalytic His, was described for nsP2 and a subset of other thiol proteases. Such an architecture of the catalytic center was named the glycine specificity motif, since it was implicated in recognition of a specific Gly residue in the substrate. In particular, the presence of the motif in nsP2 makes the appearance of this amino acid at the second position upstream of the scissile bond a necessary condition for the cleavage. In addition, four distinct mechanisms were identified that provide affinity for the protease and specifically direct the enzyme to different sites in the P1234 polyprotein.
Three factors were demonstrated to be external modulators of the nsP2 protease: RNA, the central domain of nsP3, and the N-terminus of nsP2. Here I suggest that the basal nsP2 protease specificity is inherited from the ancestral papain-like enzyme and employs the recognition of the upstream amino acid signature in the immediate vicinity of the scissile bond. This mechanism is responsible for the efficient processing of the SFV nsP3/nsP4 junction. I propose that the same mechanism is involved in the cleavage of the nsP1/nsP2 junction of both viruses as well. However, in this case it rather serves to position the substrate, whereas the efficiency of the processing is ensured by the capability of nsP2 to cut its own N-terminus in cis. Both types of cleavages are demonstrated here to be inhibited by RNA, which is interpreted as impairing the basal papain-like recognition of the substrate. In contrast, processing of the SIN nsP3/nsP4 junction was found to be activated by RNA and additionally potentiated by the presence of the central region of nsP3 in the protease. The processing of the nsP2/nsP3 junction in both viruses occurred via another mechanism, requiring the exactly processed N-terminus of nsP2 in the protease and insensitive to RNA addition. Therefore, the three processing events in the replicase polyprotein maturation are performed via three distinct mechanisms in each of the two studied alphaviruses. Distinct sets of conditions required for each cleavage ensure sequential maturation of the P1234 polyprotein: nsP4 is released first, then the nsP1/nsP2 site is cut in cis, and liberation of the nsP2 N-terminus finally activates the cleavage of the nsP2/nsP3 junction. The first processing event occurs differently in SFV and SIN, whereas the subsequent cleavages are found to be similar in the two viruses, and their mechanisms are therefore suggested to be conserved in the genus.
The RNA modulation of the alphavirus nonstructural protease activity, discovered here, implies bidirectional functional interplay between the alphavirus RNA metabolism and protease regulation. The nsP2 protease emerges as a signal transmitting moiety, which senses the replication stage and responds with proteolytic cleavages. A detailed hypothetical model of the alphavirus replicase core was inferred from the data obtained in the study. Similar principles in replicase organization and protease functioning are expected to be employed by other RNA viruses.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Tactile sensation plays an important role in everyday life. While the somatosensory system has been studied extensively, the majority of information has come from studies using animal models. Recent development of high-resolution anatomical and functional imaging techniques has enabled the non-invasive study of human somatosensory cortex and thalamus. This thesis provides new insights into the functional organization of the human brain areas involved in tactile processing using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). The thesis also demonstrates certain optimizations of MEG and fMRI methods. Tactile digit stimulation elicited stimulus-specific responses in a number of brain areas. Contralateral activation was observed in somatosensory thalamus (Study II), primary somatosensory cortex (SI; I, III, IV), and posterior auditory belt area (III). Bilateral activation was observed in secondary somatosensory cortex (SII; II, III, IV). Ipsilateral activation was found in the post-central gyrus (area 2 of SI cortex; IV). In addition, phasic deactivation was observed within ipsilateral SI cortex and bilateral primary motor cortex (IV). Detailed investigation of the tactile responses demonstrated that the arrangement of distal-proximal finger representations in area 3b of SI in humans is similar to that found in monkeys (I). An optimized MEG approach was sufficient to resolve such fine detail in functional organization. The SII region appeared to contain double representations for fingers and toes (II). The detection of activations in the SII region and thalamus improved at the individual and group levels when cardiac-gated fMRI was used (II). Better detection of body part representations at the individual level is an important improvement, because identification of individual representations is crucial for studying brain plasticity in somatosensory areas.
The posterior auditory belt area demonstrated responses to both auditory and tactile stimuli (III), implicating this area as a physiological substrate for the auditory-tactile interaction observed in earlier psychophysical studies. Comparison of different smoothing parameters (III) demonstrated that proper evaluation of co-activation should be based on individual subject analysis with minimal or no smoothing. Tactile input consistently influenced area 3b of the human ipsilateral SI cortex (IV). The observed phasic negative fMRI response is proposed to result from interhemispheric inhibition via trans-callosal connections. This thesis contributes to a growing body of human data suggesting that processing of tactile stimuli involves multiple brain areas, with different spatial patterns of cortical activation for different stimuli.