Abstract:
This dissertation examined skill development in music reading by focusing on the visual processing of music notation in different music-reading tasks. Each of the three experiments of this dissertation addressed one of the three types of music reading: (i) sight-reading, i.e. reading and performing completely unknown music, (ii) rehearsed reading, during which the performer is already familiar with the music being played, and (iii) silent reading with no performance requirements. Eye-tracking methodology allowed the readers’ eye movements during music reading to be recorded with great precision. Due to the lack of coherence among the small number of prior studies on eye movements in music reading, the dissertation also had a strong methodological emphasis. The present dissertation thus had two major aims: (1) to investigate the eye-movement indicators of skill and skill development in sight-reading, rehearsed reading and silent reading, and (2) to develop and test suitable methods that can be used by future studies on the topic. Experiment I focused on the eye-movement behaviour of adults during their first steps of learning to read music notation. The longitudinal experiment spanned a nine-month music-training period, during which 49 participants (university students taking part in a compulsory music course) sight-read and performed a series of simple melodies in three measurement sessions. Participants with no musical background were classified as “novices”, whereas “amateurs” had had musical training prior to the experiment. The main interest was in the changes in the novices’ eye movements and performances across the measurements, while the amateurs offered a point of reference for assessing the novices’ development. The experiment showed that the novices tended to sight-read in a more stepwise fashion than the amateurs, the latter group manifesting more back-and-forth eye movements. 
The novices’ skill development was reflected in the faster identification of note symbols involved in larger melodic intervals. Across the measurements, the novices also began to show sensitivity to the melodies’ metrical structure, which the amateurs demonstrated from the very beginning. The stimulus melodies consisted of quarter notes, making the effects of meter and larger melodic intervals distinguishable from effects caused by, say, different rhythmic patterns. Experiment II explored the eye movements of 40 experienced musicians (music education students and music performance students) during temporally controlled rehearsed reading. This cross-sectional experiment focused on the eye-movement effects of one-bar-long melodic alterations placed within a familiar melody. Synchronizing the performance and eye-movement recordings enabled the investigation of the eye-hand span, i.e., the temporal gap between a performed note and the point of gaze. The eye-hand span was typically found to remain around one second. Music performance students demonstrated greater processing efficiency through their shorter average fixation durations as well as in the two examined eye-hand span measures: these participants used larger eye-hand spans more frequently and inspected more of the musical score during the performance of one metrical beat than students of music education. Although all participants produced performances almost indistinguishable in terms of their auditory characteristics, the altered bars indeed affected the reading of the score: the general effects of expertise in terms of the two eye-hand span measures, demonstrated by the music performance students, disappeared in the face of the melodic alterations. Experiment III was a longitudinal experiment designed to examine the differences between adult novice and amateur musicians’ silent reading of music notation, as well as the changes the 49 participants manifested during a nine-month music course. 
From a methodological perspective, the inclusion of a verbal protocol in the research design opened a new avenue for research on eye movements in music reading: after viewing the musical image, the readers were asked to describe what they had seen. A two-way categorization of the verbal descriptions was developed in order to assess the quality of the extracted musical information. A more extensive musical background was related to shorter average fixation durations, more linear scanning of the musical image, and more sophisticated verbal descriptions of the music in question. No clear effects of skill development were observed for the novice music readers alone, but all participants improved their verbal descriptions towards the last measurement. Apart from the background-related differences between groups of participants, combining verbal and eye-movement data in a cluster analysis identified three styles of silent reading. This finding demonstrated individual differences in how the freely defined silent-reading task was approached. This dissertation is among the first to present a series of experiments systematically addressing the visual processing of music notation in various types of music-reading tasks and focusing especially on the eye-movement indicators of developing music-reading skill. Overall, the experiments demonstrate that music-reading processes are affected not only by “top-down” factors, such as musical background, but also by the “bottom-up” effects of specific features of music notation, such as pitch heights, metrical division, rhythmic patterns and unexpected melodic events. From a methodological perspective, the experiments emphasize the importance of systematic stimulus design, temporal control during performance tasks, and the development of complementary methods to ease the interpretation of eye-movement data. 
To conclude, this dissertation suggests that advances in comprehending the cognitive aspects of music reading, the nature of expertise in this musical task, and the development of educational tools can be attained through the systematic application of eye-tracking methodology in this specific domain as well.
Abstract:
Brainstem auditory-evoked potential (BAEP) testing has been widely used for different purposes in veterinary practice and is commonly used to identify inherited deafness and presbycusis. In this study, 43 Boxer dogs were evaluated using the BAEP. Deafness was diagnosed in 3 dogs (2 bilateral and 1 unilateral), allowing the remaining 40 Boxers to be included in the normative data analysis, including an evaluation of the influence of age on the BAEP. The animals were divided into 2 groups of 20 Boxers each based on age. The mean age was 4.54 years (range, 1-8) in group I and 9.83 years (range, 8.5-12) in group II. The mean latencies for waves I, III, and V were 1.14 (±0.07), 2.64 (±0.11), and 3.48 (±0.10) ms in group I, and 1.20 (±0.12), 2.73 (±0.15), and 3.58 (±0.22) ms in group II, respectively. The mean inter-peak latencies for the I-III, III-V and I-V intervals were 1.50 (±0.15), 0.84 (±0.15), and 2.34 (±0.11) ms in group I, and 1.53 (±0.16), 0.85 (±0.15), and 2.38 (±0.19) ms in group II, respectively. The latencies of waves I and III differed significantly between groups I and II. For the I-III, III-V and I-V intervals, no significant differences were observed between the 2 groups. To our knowledge, this is the first normative study of BAEPs obtained from Boxer dogs.
Abstract:
Shadow Moiré fringe patterns are level lines of equal depth generated by interference between a master grid and its shadow projected on the surface. In a simplistic approach, the minimum error is of the order of the master grid pitch, that is, always larger than 0.1 mm, resulting in an experimental technique of low precision. The use of phase shifting increases the accuracy of the Shadow Moiré technique. The current work uses the phase-shifting method to determine a surface's three-dimensional shape using isothamic fringe patterns and digital image processing. The study presents the method and applies it to images obtained by simulation for error evaluation, as well as to a buckled plate, obtaining excellent results. The method proves particularly useful for decreasing errors in the interpretation of Moiré fringes that can adversely affect the calculation of displacements in pieces containing many concave and convex regions in relatively small areas.
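The phase-shifting idea described above can be illustrated with the classic four-step algorithm, in which four fringe intensity samples shifted by 90° recover the wrapped phase at a pixel. This is a generic single-pixel sketch of the standard technique, not the implementation used in the work; the function name and sample values are illustrative.

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase at one pixel from four fringe
    intensity samples taken at phase shifts of 0, 90, 180 and
    270 degrees (the classic four-step algorithm)."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic check: a pixel with true phase 0.7 rad, background a
# and modulation b, reproduces that phase exactly.
a, b, phi = 5.0, 2.0, 0.7
samples = [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(round(four_step_phase(*samples), 6))  # → 0.7
```

The arctangent cancels the unknown background and modulation terms, which is why the recovered depth map no longer depends on local contrast; the wrapped phase would then be unwrapped and scaled to depth.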
Abstract:
Presentation at the Nordic Perspectives on Open Access and Open Science seminar, Helsinki, October 15, 2013
Abstract:
The purpose of this study is to examine and increase knowledge of customer knowledge processing in a B2B context from a sales perspective. Further objectives include identifying possible inhibiting and enabling factors in each phase of the process. The theoretical framework is based on the customer knowledge management literature. This is a qualitative study employing the case-study method. The empirical part was carried out in a case company by conducting in-depth interviews with the company’s value-selling champions located internationally. The context was the maintenance business. Altogether 17 interviews were conducted. The empirical findings indicate that customer knowledge processing has not been clearly defined within the maintenance business line. The main factors inhibiting the acquisition of customer knowledge are lack of time and the vast amount of customer knowledge received. The enabling factors recognized are good customer relationships and sales representatives’ communication skills. Internal dissemination of knowledge is mainly inhibited by lack of time and by restrictions in customer relationship management systems; it is enabled by the composition of the sales team and up-to-date customer knowledge. Utilization is inhibited by a lack of goals for using the customer knowledge and by low knowledge quality. Moreover, customer knowledge is not systematically updated or analysed. Management of customer knowledge is based on the CRM system. As an implication of the study, it is suggested that the case company define customer knowledge processing in order to support the maintenance business process.
Abstract:
The usage of digital content, such as video clips and images, has increased dramatically during the last decade. Local image features have been applied increasingly in various image and video retrieval applications. This thesis evaluates local features and applies them to image and video processing tasks. The results of the study show that 1) the performance of different local feature detector and descriptor methods varies significantly in object class matching, 2) local features can be applied to image alignment with results superior to the state of the art, 3) the local feature based shot boundary detection method produces promising results, and 4) the local feature based hierarchical video summarization method points to a promising new research direction. In conclusion, this thesis presents local features as a powerful tool in many applications, and future work should concentrate on improving the quality of the local features.
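As a toy illustration of how local feature descriptors are compared between images, a brute-force nearest-neighbour matcher with a ratio test can be sketched as follows. This is a generic sketch of a standard matching strategy, not the matchers evaluated in the thesis; the tiny 2-D "descriptors" are illustrative stand-ins for real high-dimensional ones.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test:
    keep a match only when the best distance is clearly smaller
    than the second best, rejecting ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        # rank all candidate descriptors in the second image by distance
        dists = sorted((math.dist(d, e), j) for j, e in enumerate(desc_b))
        (best, j), (second, _) = dists[0], dists[1]
        if best < ratio * second:
            matches.append((i, j))
    return matches

a = [[0.0, 1.0], [5.0, 5.0]]
b = [[0.1, 1.0], [4.9, 5.2], [9.0, 9.0]]
print(match_descriptors(a, b))  # → [(0, 0), (1, 1)]
```

The surviving matches would then feed tasks such as image alignment (e.g. robust homography estimation) or object class matching.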
Abstract:
ABSTRACT Five experiments were conducted to evaluate the hypothesis that Solanum americanum density and time of coexistence affect the quality of processing tomato fruit. The tomato crop was established using either the direct drilling or the transplanting technique. The factors evaluated were weed density (from 0 up to 6 plants m-2) and time of weed interference (early bloom stage, full flowering stage, fruit filling, and harvest time). The effects of competition on tomato fruit quality were analysed using a multiple model. The tomato variables evaluated included industrial fruit types (which depended on ripeness and disease infection) and soluble solids level (°Brix). Tomato fruit quality was dependent on the factors tested. At low densities (< 6 plants m-2) of S. americanum there was a small impact on the quality of the tomato fruits. The percentage of grade A tomato fruits (mature fruit with red color and without pathogen infection) was the variable most affected by the independent variables. The impact of these independent variables on the percentage of grade C (green and/or with more than 15% disease infection) tomato yield was of smaller magnitude and showed a trend inverse to that observed for grade A. The level of soluble solids was influenced by weed interference in only two experiments, but the impact was of small magnitude. The impact of the results on current and future crop management practices is discussed.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative for simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. 
We show that this event extraction system has good performance, reaching the first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as shown competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing how the developed approach not only shows good performance, but is generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task are covered in four publications, and the sixth one demonstrates the application of the system to PubMed-scale text mining.
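The nested event structures described above, such as CAUSE(A, BIND(B, C)) for "Protein A causes protein B to bind protein C", can be pictured as a small typed graph. The following is a hypothetical minimal sketch of that idea, not the actual TEES data model: nodes carry a trigger word or protein name and a type, and typed, directed argument edges may point at proteins or at other events.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str               # protein name or event trigger word
    etype: str = "Protein"  # "Protein" or an event type such as "Cause"
    args: list = field(default_factory=list)  # (role, Node) argument edges

def to_string(n):
    """Render a node as a nested functional form, e.g. CAUSE(A, BIND(B, C))."""
    if not n.args:
        return n.name
    inner = ", ".join(to_string(child) for _, child in n.args)
    return f"{n.etype.upper()}({inner})"

# "Protein A causes protein B to bind protein C"
bind = Node("binds", "Bind", [("Theme", Node("B")), ("Theme", Node("C"))])
cause = Node("causes", "Cause", [("Cause", Node("A")), ("Theme", bind)])
print(to_string(cause))  # → CAUSE(A, BIND(B, C))
```

Decomposing extraction over such a graph (first detect triggers, then classify candidate argument edges) mirrors the independent classification tasks the abstract describes.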
Abstract:
The nucleus tractus solitarii (NTS) receives afferent projections from the arterial baroreceptors, carotid chemoreceptors and cardiopulmonary receptors and as a function of this information produces autonomic adjustments in order to maintain arterial blood pressure within a narrow range of variation. The activation of each of these cardiovascular afferents produces a specific autonomic response by the excitation of neuronal projections from the NTS to the ventrolateral areas of the medulla (nucleus ambiguus, caudal and rostral ventrolateral medulla). The neurotransmitters at the NTS level as well as the excitatory amino acid (EAA) receptors involved in the processing of the autonomic responses in the NTS, although extensively studied, remain to be completely elucidated. In the present review we discuss the role of the EAA L-glutamate and its different receptor subtypes in the processing of the cardiovascular reflexes in the NTS. The data presented in this review related to the neurotransmission in the NTS are based on experimental evidence obtained in our laboratory in unanesthetized rats. The two major conclusions of the present review are that a) the excitation of the cardiovagal component by cardiovascular reflex activation (chemo- and Bezold-Jarisch reflexes) or by L-glutamate microinjection into the NTS is mediated by N-methyl-D-aspartate (NMDA) receptors, and b) the sympatho-excitatory component of the chemoreflex and the pressor response to L-glutamate microinjected into the NTS are not affected by an NMDA receptor antagonist, suggesting that the sympatho-excitatory component of these responses is mediated by non-NMDA receptors.
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially that obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques. 
Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within the regular force-directed graph layout algorithm.
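The k-NN imputation mentioned above can be sketched in a few lines: for each incomplete row (gene), find the k most similar complete rows over the observed columns and fill each missing entry with their mean. This is a toy sketch of the general technique under simplifying assumptions (small lists, at least k complete rows), not the algorithms evaluated in the thesis.

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries in a list of numeric rows using the mean of
    the k nearest complete rows, with distance measured over the
    columns that are observed in the incomplete row."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        # rank complete rows by Euclidean distance over observed columns
        neighbours = sorted(
            complete,
            key=lambda c: math.dist([r[j] for j in obs], [c[j] for j in obs]),
        )[:k]
        out.append([
            v if v is not None else sum(n[j] for n in neighbours) / len(neighbours)
            for j, v in enumerate(r)
        ])
    return out

data = [[1.0, 2.0, 3.0],
        [1.1, 2.1, 2.9],
        [9.0, 9.0, 9.0],
        [1.0, None, 3.0]]
print([round(v, 6) for v in knn_impute(data, k=2)[3]])  # → [1.0, 2.05, 3.0]
```

Because the two nearest complete rows to the incomplete gene are the first two, the missing expression value is filled with their column mean rather than being biased by the distant outlier row.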
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, speed and acceleration were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy for the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves calculated from the tracking data. Local Regression filtering and Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results show that tracking and filtering methods are suitable for high-speed hand-tracking and trajectory-data post-processing.
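The post-processing step of extracting speed and acceleration from a tracked trajectory can be illustrated with plain finite differences plus a simple smoothing filter. This is a generic sketch of the idea under assumed names and a fixed frame rate, not the thesis pipeline, which used more sophisticated filters such as local regression and the unscented Kalman smoother.

```python
def derivatives(positions, fps):
    """Central-difference speed and acceleration from a 1-D trajectory
    sampled at a fixed frame rate (units per second, per second^2)."""
    dt = 1.0 / fps
    speed = [(positions[i + 1] - positions[i - 1]) / (2 * dt)
             for i in range(1, len(positions) - 1)]
    accel = [(positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
             for i in range(1, len(positions) - 1)]
    return speed, accel

def moving_average(xs, w=3):
    """Simple smoothing filter to suppress tracking jitter before
    differentiation; a stand-in for more advanced smoothers."""
    half = w // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

# Constant-acceleration motion x(t) = t^2 sampled at 500 fps:
# the recovered acceleration should be 2 everywhere.
fps = 500
xs = [(i / fps) ** 2 for i in range(6)]
speed, accel = derivatives(xs, fps)
print([round(a, 6) for a in accel])  # → [2.0, 2.0, 2.0, 2.0]
```

Differentiation amplifies measurement noise quadratically for acceleration, which is why smoothing the raw trajectory first, as the thesis does with its filtering methods, matters so much in practice.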
Abstract:
This study was designed to evaluate the effect of different conditions of collection, transport and storage on the quality of blood samples from normal individuals in terms of the activity of the enzymes ß-glucuronidase, total hexosaminidase, hexosaminidase A, arylsulfatase A and ß-galactosidase. The enzyme activities were not affected by the different materials used for collection (plastic syringes or vacuum glass tubes). In the evaluation of different heparin concentrations (10% heparin, 5% heparin, and heparinized syringe) in the syringes, it was observed that higher doses resulted in an increase of at least 1-fold in the activities of ß-galactosidase, total hexosaminidase and hexosaminidase A in leukocytes, and ß-glucuronidase in plasma. When the effects of time and means of transportation were studied, samples that had been kept at room temperature showed higher deterioration with time (72 and 96 h) before processing, and in this case it was impossible to isolate leukocytes from most samples. Comparison of heparin and acid citrate-dextrose (ACD) as anticoagulants revealed that ß-glucuronidase and hexosaminidase activities in plasma reached levels near the lower normal limits when ACD was used. In conclusion, we observed that heparin should be used as the preferable anticoagulant when measuring these lysosomal enzyme activities, and we recommend that, when transport time is more than 24 h, samples should be shipped by air in a styrofoam box containing wet ice.