940 results for 080107 Natural Language Processing
Abstract:
The fast spread of the Internet and the increasing demands placed on its services are leading to radical changes in the structure and management of the underlying telecommunications systems. Active networks (ANs) offer the ability to program the network on a per-router, per-user, or even per-packet basis, and thus promise greater flexibility than current networks. For this new paradigm of active networking to become widely accepted, many issues need to be solved, and management of the active network is one of the challenges. This thesis investigates an adaptive management solution based on a genetic algorithm (GA). The solution runs a distributed, bacterium-inspired GA on the active nodes within an active network to provide adaptive management for the network, especially for the service-provision problems associated with future networks. The thesis also reviews the concepts, theories and technologies associated with the management solution. By exploring the implementation of these active nodes in hardware, this thesis demonstrates the possibility of implementing GA-based adaptive management in the real networks in use today. The concurrent programming language Handel-C is used to describe the designed system, and a re-configurable computing platform based on an FPGA processing element is used for the hardware implementation. The experimental results demonstrate both the viability of the hardware implementation and the efficiency of the proposed management solution.
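The abstract does not specify the bacterium-inspired distributed GA itself; as a hedged illustration of the kind of evolutionary loop such a management node might run, here is a minimal generational GA (tournament selection, one-point crossover, bit-flip mutation) applied to a toy "maximise active services" fitness. All names, parameters and the fitness function are assumptions for illustration, not the thesis's design.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=60, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, and occasional bit-flip mutation over bit genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)          # tournament of two
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                child[rng.randrange(genome_len)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: a node prefers configurations with all services "on" (all ones).
best = evolve(fitness=sum)
print(sum(best))
```

In a distributed setting, each active node would run such a loop locally and exchange genomes with neighbours, which is what makes the approach attractive for per-node network management.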
Abstract:
Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with the description of homogeneous texture in terms of a signal model and with developing a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can adequately be represented by their Fourier amplitude spectra only, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, the above model leads to a stationary random process. Results indicate that this assumption is valid for those textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions based on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of the output of each filter is taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial-frequency domains, thus providing high spatial and class resolutions. The outputs of these filters are processed using a multi-resolution classifier which applies a clustering algorithm to the data at a low spatial resolution and then performs a boundary-estimation operation in which processing is carried out over a range of spatial resolutions. Results demonstrate a high performance, in terms of the classification error, for a range of synthetic and natural textures.
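The core idea — band-limited envelope energy deciding class membership per sample — can be sketched in one dimension. The toy below uses an ideal Fourier-domain band-pass in place of the thesis's optimised quadrature filters, and a per-sample energy comparison in place of the multi-resolution classifier; the signals, bands and thresholds are invented for illustration.

```python
import numpy as np

def band_envelope(signal, lo, hi):
    """Amplitude envelope of `signal` within [lo, hi) cycles/sample:
    an ideal band-pass built in the Fourier domain, keeping only
    positive frequencies so the magnitude is a smooth envelope."""
    spec = np.fft.fft(signal)
    f = np.fft.fftfreq(len(signal))
    analytic = np.fft.ifft(np.where((f >= lo) & (f < hi), 2 * spec, 0))
    return np.abs(analytic)

rng = np.random.default_rng(0)
n = 512
t = np.arange(2 * n)
# Two abutting "textures": low-frequency on the left, high-frequency on the right.
x = np.where(t < n, np.sin(0.2 * t), np.sin(2.5 * t)) + 0.1 * rng.standard_normal(2 * n)
low = band_envelope(x, 0.01, 0.1)   # sin(0.2 t) lives near 0.032 cycles/sample
high = band_envelope(x, 0.3, 0.5)   # sin(2.5 t) lives near 0.398 cycles/sample
labels = (high > low).astype(int)   # classify each sample by dominant band energy
print(labels[:n].mean(), labels[n:].mean())
```

Misclassifications cluster at the region boundary, which is exactly the problem the thesis's multi-resolution boundary estimation addresses.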
Abstract:
Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd.
Abstract:
The aim of this work was to develop a generic methodology for evaluating and selecting, at the conceptual design phase of a project, the best process technology for natural gas conditioning. A generic approach would be simple, require less time, and give a better understanding of why one process is to be preferred over another. Such a methodology would be useful in evaluating existing, novel and hybrid technologies. However, to date no information is available in the published literature on such a generic approach to gas processing. It is believed that the generic methodology presented here is the first available for choosing the best or cheapest method of separation for natural gas dew-point control. Process cost data are derived from evaluations carried out by the vendors. These evaluations are then modelled using a steady-state simulation package. From the results of the modelling, the cost data received are correlated and defined with respect to the design or sizing parameters. This allows comparisons between different process systems to be made in terms of the overall process. The generic methodology is based on the concept of a Comparative Separation Cost, which takes into account the efficiency of each process, the value of its products, and the associated costs. To illustrate the general applicability of the methodology, three different cases suggested by BP Exploration are evaluated. This work has shown that it is possible to identify the most competitive process operations at the conceptual design phase and to illustrate why one process has an advantage over another. Furthermore, the same methodology has been used to identify and evaluate hybrid processes, and it has been determined that in some cases they offer substantial advantages over the separate process techniques.
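The abstract does not give the Comparative Separation Cost formula; the sketch below shows one plausible shape for such a ranking metric (net cost credited with product value and penalised by recovery efficiency). The formula, process names and all numbers are invented for illustration only and are not taken from the thesis.

```python
def comparative_separation_cost(capex, opex, product_value, recovery_efficiency):
    """Hypothetical illustration of a CSC-style metric: net annualised
    cost, credited with product value, per unit of recovery efficiency."""
    net_cost = capex + opex - product_value
    return net_cost / recovery_efficiency

# Invented figures for three candidate dew-point-control schemes.
processes = {
    "Joule-Thomson": comparative_separation_cost(5.0, 1.2, 2.0, 0.85),
    "Turbo-expander": comparative_separation_cost(7.0, 0.9, 2.6, 0.95),
    "Membrane": comparative_separation_cost(4.0, 1.5, 1.8, 0.70),
}
best = min(processes, key=processes.get)  # lowest CSC wins
print(best, round(processes[best], 2))
```

The point of such a single scalar is that it lets unlike technologies be ranked on a common basis at the conceptual design stage, before detailed engineering.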
Abstract:
One of the main objectives of this study was to functionalise various rubbers (i.e. ethylene propylene copolymer (EP), ethylene propylene diene terpolymer (EPDM), and natural rubber (NR)) using the functional monomers maleic anhydride (MA) and glycidyl methacrylate (GMA) via reactive processing routes. The functionalisation of the rubber was carried out via different reactive processing methods in an internal mixer. GMA was free-radically grafted onto EP and EPDM in the melt state in the absence and presence of a comonomer, trimethylolpropane triacrylate (TRIS). To optimise the grafting conditions and the compositions, the effects of various parameters on the grafting yields and the extent of side reactions were investigated. Precipitation and Soxhlet extraction methods were established to purify the GMA-modified rubbers, and the grafting degree was determined by FTIR and titration. It was found that without TRIS the grafting degree of GMA increased with increasing peroxide concentration. However, grafting was low, and the homopolymerisation of GMA and crosslinking of the polymers were identified as the main side reactions competing with the desired grafting reaction for EP and EPDM, respectively. The use of the tri-functional comonomer TRIS was shown to greatly enhance the GMA grafting and reduce the side reactions, in terms of a higher GMA grafting degree, less alteration of the rheological properties of the polymer substrates, and very little formation of polyGMA. The grafting mechanisms were investigated. MA was grafted onto NR using both thermal initiation and peroxide initiation. The results showed clearly that the reaction of MA with NR could be thermally initiated above 140°C in the absence of peroxide. At a preferable temperature of 200°C, the grafting degree increased with increasing MA concentration. The grafting reaction could also be initiated with peroxide.
It was found that 2,5-dimethyl-2,5-bis(tert-butylperoxy)hexane (T-101) was a suitable peroxide to initiate the reaction efficiently above 150°C. The second objective of the work was to utilise the functionalised rubbers in a second step to achieve an in-situ compatibilisation of blends based on poly(ethylene terephthalate) (PET), in particular with GMA-grafted-EP and -EPDM, and the reactive blending was carried out in an internal mixer. The effects of the GMA grafting degree, the viscosities of GMA-grafted-EP and -EPDM, and the presence of polyGMA in the rubber samples on the compatibilisation of PET blends, in terms of morphology, dynamic mechanical properties and tensile properties, were investigated. It was found that the GMA-modified rubbers were very efficient in compatibilising the PET blends, and this was supported by the much finer morphology and the better tensile properties. The evidence obtained from the analysis of the PET blends strongly supports the existence of copolymers formed through the interfacial reactions between the grafted epoxy group in the GMA-modified rubber and the terminal groups of PET in the blends.
Abstract:
A large number of compounds containing quinonoid or hindered phenol functions were examined for their roles as antifatigue agents. Among the evaluated quinones and phenols expected to have macroalkyl-radical-scavenging ability, BQ, αTOC, γTOC and GM showed relatively good fatigue resistance (although their performance was slightly less effective than that of the commercial aromatic amine antioxidants IPPD and 6PPD). The compounds shown to have higher reactivity with alkyl radicals (via calculated reactivity indices) showed better fatigue resistance. This supports the suggestion that strong alkyl-radical scavengers should also be effective antifatigue agents. Evidence based on the calculation of reactivity indices suggests that the quinones examined react with alkyl radicals at the meta position of the quinone rings, producing phenoxyl radicals. The phenoxyl radicals are expected either to disproportionate, to recombine with a further alkyl radical, or to abstract a hydrogen from another alkyl radical, producing an olefin. The regeneration of quinones and the formation of the corresponding phenols are expected to occur during the antifatigue activity. The phenol antioxidant HBA is expected to produce a quinonoid compound, and this is also expected to function in a similar way to the other quinones. Another phenol, GM, which is also known to scavenge alkyl radicals, showed good antifatigue performance. Tocopherols had effective antifatigue activity and are expected to have different antifatigue mechanisms from those of the other quinones; hence αTOC was examined for its mechanisms during rubber fatigue using HPLC analysis. Trimers of αTOC which were produced during vulcanisation are suggested to contribute to the antifatigue activity observed. The evidence suggests that the trimers regenerate αTOC, and a mechanism was proposed.
Although the antifatigue agents evaluated showed antifatigue activity, most of them had poor thermoxidative resistance; hence it was necessary to compensate for this by using a combination of antioxidants with the antifatigue agents. Reactive antioxidants, which have the potential to graft onto the polymer chains during reactive processing, were used for this purpose. APMA was the most effective of the evaluated reactive antioxidants. Although a high grafting ratio was achieved after optimisation of the grafting conditions, it is suggested that this was achieved by long branches of APMA due to a large extent of polymerisation. This is expected to cause maldistribution of APMA, reducing the effect of CB-D activity (while CB-A activity showed clear advantages for grafting). Further optimisation of the grafting conditions is required in order to use APMA more effectively. Moreover, although synergistic effects between APMA and the antifatigue agents were expected, none of the evaluated antifatigue agents (BQ, αTOC, γTOC and TMQ) showed significant synergism in either fatigue or thermoxidative resistance; they performed simply as additives.
Abstract:
The main aim of this work was to study the effect of two comonomers, trimethylolpropane trimethacrylate (TRIS) and divinylbenzene (DVB), on the nature and efficiency of grafting of two different monomers, glycidyl methacrylate (GMA) and maleic anhydride (MA), on polypropylene (PP) and on natural rubber (NR) using reactive processing methods. Four different peroxides, benzoyl peroxide (BPO), dicumyl peroxide (DCP), 2,5-dimethyl-2,5-bis-(tert-butyl peroxy) hexane (T-101), and 1,1-di(tert-butylperoxy)-3,3,5-trimethyl cyclohexene (T-29B90), were examined as free-radical initiators. An appropriate methodology was established, and the chemical composition and reactive processing parameters were examined and optimised. It was found that in the absence of the coagents DVB and TRIS, the grafting degree of GMA and MA increased with increasing peroxide concentration, but the level of grafting was low, and the homopolymerisation of GMA and the crosslinking of NR or chain scission of PP were identified as the main side reactions that competed with the desired grafting reaction in the polymers. At high concentrations of the peroxide T-101 (>0.02 mr), crosslinking of NR and chain scission of PP became dominant and unacceptable. An attempt to add a reactive coagent such as TRIS during grafting of GMA on natural rubber resulted in excessive crosslinking because of the very high reactivity of this comonomer with the C=C of the rubber. Therefore, multifunctional and highly reactive coagents such as TRIS could not be used in the grafting of GMA onto natural rubber. In the case of PP, however, the use of TRIS and DVB was shown to greatly enhance the grafting degree and reduce the chain scission, with very little monomer homopolymerisation taking place. The results showed that the grafting degree increased with increasing GMA and MA concentrations.
It was also found that T-101 was a suitable peroxide to initiate the grafting reaction of these monomers on NR and PP, and the optimum temperature for this peroxide was approximately 160°C. Very preliminary work was also conducted on the use of the functionalised PP (f-PP), in the absence and presence of the two comonomers (f-PP-DVB or f-PP-TRIS), for the purpose of compatibilising PP-PBT blends through reactive blending. Examination of the morphology of the blends suggested that effective compatibilisation had been achieved when using f-PP-DVB and f-PP-TRIS; however, more work is required in this area.
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical, self-contained model of concurrency, which enables a simplified second model for implementing the compilation process. There is a further presentation of principles that, if followed, maximise the potential level of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronization of objects. Further, the model is sufficiently complete that a compiler has been practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute-grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects, and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles naturally arise from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e.
no parallel primitives are added, and the parallel program is modelled to execute with equivalent semantics to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
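The command/query division that the programming principles rely on can be illustrated outside Eiffel. In this Python analogue (an illustrative assumption, not the thesis's model or compiler output), side-effect-free queries are safe to run in parallel, while state-changing commands execute in program order:

```python
from concurrent.futures import ThreadPoolExecutor

class Account:
    """Command/query division: queries read state without side effects
    and may run concurrently; commands mutate state and are serialised."""
    def __init__(self, balance):
        self._balance = balance
    def balance(self):           # query: no side effects
        return self._balance
    def deposit(self, amount):   # command: mutates state
        self._balance += amount

acct = Account(100)
acct.deposit(50)                 # commands keep sequential ordering
with ThreadPoolExecutor() as pool:
    # Queries can be fanned out to parallel workers with the same result
    # as sequential execution, which is what makes them a source of
    # "free" parallelism for a compiler.
    results = list(pool.map(lambda a: a.balance(), [acct] * 4))
print(results)
```

A compiler that can prove a method is a query (no writes) can parallelise its calls without altering the program's sequential semantics, which is the essence of the thesis's claim.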
Abstract:
This study is concerned with one of the most interesting and least well-researched areas in contemporary research on classroom interaction: the discourse variability exhibited by participants. It investigates the way in which the language of native speakers (NSs) as well as that of non-native speakers (NNSs) may vary according to the circumstances under which it is produced. The study therefore attempts to characterise the performance of both NSs and NNSs (with particular emphasis placed on the latter) in various types of interaction in and beyond the EFL classroom. These are: Formal Interview (FI), Formal Classroom Interaction (FCI), Informal Classroom Interaction (ICI), Informal Classroom Discussion (ICD), and Informal Conversation (IC). The participants in the study were four NSs and fifteen NNSs. Both a video and a tape recording were made for each type of interaction, with the exception of the IC, which was only audio-recorded so as not to inhibit the natural use of language. Each interaction lasted 35 minutes. The findings of the study mark clearly the distinction between the `artificiality' of classroom interaction and the `naturalness' or `authenticity' of non-classroom discourse. Amongst the most interesting findings are the following: unlike in both FCI and ICI, in the FI, ICD, and IC the language of NNSs was characterised by a greater quantity of oral output, a wider range of errors, the use of natural discourse strategies such as holding the floor and self-correction, and a greater number of initiations in both ICD and IC. It is suggested that if `natural' or `authentic' discourse is to be promoted, the incorporation of FI, ICD, and IC into EFL classroom activities is much needed. The study differs from most studies on classroom interaction in that it attempts to relate work in the EFL classroom to the `real' world as its prime objective.
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve-fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
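The "more channels, more processors" argument can be sketched with a toy data-parallel decomposition: each response channel of a synthetic single-mode FRF is handed to its own worker, which returns a crude peak-picking estimate of the natural frequency. This stands in for a full Rational Fraction Polynomial fit, and all signal parameters below are invented; it illustrates only the work-partitioning idea, not the thesis's implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def peak_frequency(args):
    """One worker's task: estimate the resonance from a single channel's
    FRF magnitude by peak picking (a stand-in for a proper curve fit)."""
    freqs, mag = args
    return freqs[np.argmax(mag)]

freqs = np.linspace(0.0, 50.0, 1001)
f_n, zeta = 12.0, 0.02   # one synthetic mode shared by all channels
h = 1.0 / np.abs(f_n**2 - freqs**2 + 2j * zeta * f_n * freqs)
channels = [h * scale for scale in (1.0, 0.5, 2.0, 1.5)]  # four response channels
with ThreadPoolExecutor(max_workers=4) as pool:           # one task per channel
    estimates = list(pool.map(peak_frequency, [(freqs, c) for c in channels]))
print(estimates)
```

Because the per-channel work is independent, doubling the channel count simply doubles the task list, mirroring the scaling property claimed for the Transputer network.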
Abstract:
Time after time… and aspect and mood. Over the last twenty-five years, the study of time, aspect and - to a lesser extent - mood acquisition has enjoyed increasing popularity and a constant widening of its scope. In such a teeming field, what can be the contribution of this book? We believe that it is unique in several respects. First, this volume encompasses studies from different theoretical frameworks: functionalism vs generativism, or function-based vs form-based approaches. It also brings together various sub-fields (first and second language acquisition, child and adult acquisition, bilingualism) that tend to evolve in parallel rather than learn from each other. A further originality is that it focuses on a wide range of typologically different languages, and features less-studied languages such as Korean and Bulgarian. Finally, the book gathers some well-established scholars, young researchers, and even research students in a rich inter-generational exchange that ensures not only the survival but also the renewal and refreshment of the discipline. The book at a glance. The first part of the volume is devoted to the study of child language acquisition in monolingual, impaired and bilingual acquisition, while the second part focuses on adult learners. In this section, we provide an overview of each chapter. The first study, by Aviya Hacohen, explores the acquisition of compositional telicity in Hebrew L1. Her psycholinguistic approach contributes valuable data to refine theoretical accounts. Through an innovative methodology, she gathers information from adults and children on the influence of definiteness, number, and the mass vs countable distinction on the constitution of a telic interpretation of the verb phrase. She notices that the notion of definiteness is mastered by children as young as 10, while the mass/count distinction does not appear before 10;7. However, this does not entail an adult-like use of telicity.
She therefore concludes that beyond definiteness and noun type, pragmatics may play an important role in the derivation of Hebrew compositional telicity. For the second chapter we move from a Semitic language to a Slavic one. Milena Kuehnast focuses on the acquisition of negative imperatives in Bulgarian, a form that presents the specificity of being grammatical only with the imperfective form of the verb. The study examines how 40 Bulgarian children, distributed across two age groups (15 aged 2;11-3;11 and 25 aged 4;00-5;00), develop with respect to the acquisition of imperfective viewpoints and the use of imperfective morphology. It shows an evolution in the expression of force in the use of negative imperatives, as well as the influence of morphological complexity on the successful production of forms. With Yi-An Lin’s study, we turn to another type of informant and another framework. Indeed, he studies the production of children suffering from Specific Language Impairment (SLI), a developmental language disorder the causes of which exclude cognitive impairment, psycho-emotional disturbance, and motor-articulatory disorders. Using the Leonard corpus in CLAN, Lin aims to test two competing accounts of SLI (the Agreement and Tense Omission Model [ATOM] and his own Phonetic Form Deficit Model [PFDM]) that conflict on the role attributed to spellout in the impairment. Spellout is the point at which the Computational System for Human Language (CHL) passes over the most recently derived part of the derivation to the interface components, Phonetic Form (PF) and Logical Form (LF). ATOM claims that SLI sufferers have a deficit in their syntactic representation, while PFDM suggests that the problem only occurs at the spellout level. After studying the corpus from the point of view of tense/agreement marking, case marking, argument movement and auxiliary inversion, Lin finds further support for his model.
Olga Gupol, Susan Rohstein and Sharon Armon-Lotem’s chapter offers a welcome bridge between child language acquisition and multilingualism. Their study explores the influence of intensive exposure to L2 Hebrew on the development of L1 Russian tense and aspect morphology through an elicited narrative. Their informants are 40 Russian-Hebrew sequential bilingual children distributed across two age groups (4;0-4;11 and 7;0-8;0). They come to the conclusion that bilingual children anchor their narratives in the perfective, like monolinguals. However, while aware of grammatical aspect, bilinguals lack the full form-function mapping and tend to overgeneralise the imperfective on the principles of simplicity (as imperfectives are the least morphologically marked forms), universality (as the imperfective covers more functions) and interference. Rafael Salaberry opens the second section, on foreign language learners. In his contribution, he reflects on the difficulty L2 learners of Spanish encounter when it comes to distinguishing between iterativity (conveyed with the use of the preterite) and habituality (expressed through the imperfect). He examines in turn the theoretical views that see, on the one hand, habituality as part of grammatical knowledge and iterativity as pragmatic knowledge, and, on the other hand, both habituality and iterativity as grammatical knowledge. He comes to the conclusion that the use of the preterite as a default past-tense marker may explain the impoverished system of aspectual distinctions, not only at beginner but also at advanced levels, which may indicate that the system is differentially represented among L1 and L2 speakers. Acquiring the vast array of functions conveyed by a form is therefore no mean feat, as confirmed by the next study. Based on prototype theory, Kathleen Bardovi-Harlig’s chapter focuses on the development of the progressive in L2 English. It opens with an overview of the functions of the progressive in English.
Then, a review of acquisition research on the progressive in English and other languages is provided. The bulk of the chapter reports on a longitudinal study of 16 learners of L2 English and shows how their use of the progressive expands from the prototypical uses of process and continuousness to the less prototypical uses of repetition and future. The study concludes that the progressive spreads in interlanguage in accordance with prototype accounts. However, it suggests additional stages, not predicted by the Aspect Hypothesis, in the development from activities and accomplishments, at least for the meaning of repeatedness. A similar theoretical framework is adopted in the following chapter, but it deals with a lesser-studied language. Hyun-Jin Kim revisits the claims of the Aspect Hypothesis in relation to the acquisition of L2 Korean by two L1 English learners. Inspired by studies on L2 Japanese, she focuses on the emergence and spread of the past/perfective marker -ess- and the progressive -ko iss- in the interlanguage of her informants throughout their third and fourth semesters of study. The data collected through six sessions of conversational interviews and picture description tasks seem to support the Aspect Hypothesis. Indeed, learners show a strong association between past tense and accomplishments/achievements at the start and a gradual extension to other types; a limited use of the past/perfective marker with states; and an affinity of the progressive with activities/accomplishments and later achievements. In addition, -ko iss- moves from progressive to resultative in the specific category of Korean verbs meaning wear/carry. While the previous contributions focus on function, Evgeniya Sergeeva and Jean-Pierre Chevrot’s contribution is interested in form. The authors explore the acquisition of verbal morphology in L2 French by 30 instructed native speakers of Russian distributed across low and high proficiency levels.
They use an elicitation task for verbs with different models of stem alternation and study how token frequency and base forms influence stem selection. The analysis shows that frequency affects correct production, especially among learners with high proficiency. As for substitution errors, it appears that forms with a simple structure are systematically more frequent than the target forms they replace. When a complex form serves as a substitute, it is more frequent only when it is replacing another complex form. As regards the use of base forms, the 3rd person singular of the present - and to some extent the infinitive - plays this role in the corpus. The authors therefore conclude that the processing of surface forms can be influenced positively or negatively by the frequency of the target forms and of other competing stems, and by the proximity of the target stem to a base form. Finally, Martin Howard’s contribution takes up the challenge of focusing on the poorer relation of the TAM system. On the basis of L2 French data obtained through sociolinguistic interviews, he studies the expression of futurity, the conditional and the subjunctive in three groups of university learners with classroom teaching only (two or three years of university teaching) or with a mixture of classroom teaching and naturalistic exposure (two years at university plus one year abroad). An analysis of relative frequencies leads him to suggest a continuum of use going from the futurate present to the conditional with past hypothetical conditional clauses in si, which needs to be confirmed by further studies. Acknowledgements. The present volume was inspired by the conference Acquisition of Tense - Aspect - Mood in First and Second Language, held on 9th and 10th February 2008 at Aston University (Birmingham, UK), where over 40 delegates from four continents and over a dozen countries met for lively and enjoyable discussions.
This collection of papers was double peer-reviewed by an international scientific committee made up of Kathleen Bardovi-Harlig (Indiana University), Christine Bozier (Lund Universitet), Alex Housen (Vrije Universiteit Brussel), Martin Howard (University College Cork), Florence Myles (Newcastle University), Urszula Paprocka (Catholic University of Lublin), †Clive Perdue (Université Paris 8), Michel Pierrard (Vrije Universiteit Brussel), Rafael Salaberry (University of Texas at Austin), Suzanne Schlyter (Lund Universitet), Richard Towell (Salford University), and Daniel Véronique (Université d’Aix-en-Provence). We are very much indebted to the scientific committee for their insightful input at each step of the project. We are also grateful to the Association for French Language Studies for its financial support through its workshop grant, and to the Aston Modern Languages Research Foundation for funding the proofreading of the manuscript.
Abstract:
Adults show great variation in their auditory skills, such as being able to discriminate between foreign speech sounds. Previous research has demonstrated that structural features of auditory cortex can predict auditory abilities; here we are interested in the maturation of 2-Hz frequency-modulation (FM) detection, a task thought to tap into mechanisms underlying language abilities. We hypothesized that an individual's FM threshold would correlate with gray-matter density in left Heschl's gyrus, and that this function-structure relationship would change through adolescence. To test this hypothesis, we collected anatomical magnetic resonance imaging data from participants who were tested and scanned at three time points: at 10, 11.5 and 13 years of age. Participants judged which of two tones contained FM; the modulation depth was adjusted using an adaptive staircase procedure, and the threshold was calculated as the geometric mean of the last eight reversals. Using voxel-based morphometry, we found that FM threshold was significantly correlated with gray-matter density in left Heschl's gyrus at the age of 10 years, but that this correlation weakened with age. While there were no differences between girls and boys at Times 1 and 2, at Time 3 the relationship between FM threshold and gray-matter density in left Heschl's gyrus was present in boys but not in girls. Taken together, our results confirm that the structure of the auditory cortex can predict temporal processing abilities, namely that gray-matter density in left Heschl's gyrus can predict 2-Hz FM detection threshold. This ability is dependent on the processing of sounds changing over time, a skill believed necessary for speech processing. We tested this assumption and found that FM threshold significantly correlated with spelling abilities at Time 1, but that this correlation was found only in boys.
This correlation decreased at Time 2, and at Time 3 we found a significant correlation between reading and FM threshold, but again, only in boys. We examined the sex differences in both the imaging and behavioral data taking into account pubertal stages, and found that the correlation between FM threshold and spelling was strongest pre-pubertally, and the correlation between FM threshold and gray-matter density in left Heschl's gyrus was strongest mid-pubertally.
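The threshold estimate described above (the geometric mean of the modulation depths at the last eight staircase reversals) can be sketched as follows; the function name and input format are illustrative assumptions, not taken from the study:

```python
import math

def fm_threshold(reversal_depths, n_last=8):
    """Estimate an FM detection threshold from an adaptive staircase run.

    reversal_depths: modulation depths recorded at each staircase reversal.
    Returns the geometric mean of the last `n_last` reversal depths.
    """
    if len(reversal_depths) < n_last:
        raise ValueError("need at least %d reversals" % n_last)
    last = reversal_depths[-n_last:]
    # Geometric mean computed in log space for numerical stability.
    return math.exp(sum(math.log(d) for d in last) / n_last)
```

The geometric mean, rather than the arithmetic mean, is the conventional choice here because staircase step sizes are typically adjusted multiplicatively.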
Abstract:
Auditory processing disorder (APD) is diagnosed when a patient presents with listening difficulties which cannot be explained by a peripheral hearing impairment or higher-order cognitive or language problems. This review explores the association between APD and other specific developmental disorders such as dyslexia and attention-deficit hyperactivity disorder. The diagnosis and aetiology of APD are similar to those of other developmental disorders, and it is well established that APD often co-occurs with impairments of language, literacy, and attention. The genetic and neurological causes of APD are poorly understood, but developmental and behavioural genetic research with other disorders suggests that clinicians should expect APD to co-occur frequently with other symptoms. The clinical implications of co-occurring symptoms of other developmental disorders are considered, and the review concludes that a multi-professional approach to the diagnosis and management of APD, involving speech and language therapy and psychology as well as audiology, is essential to ensure that children have access to the most appropriate range of support and interventions.
Abstract:
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities as well as carefully matched low-ambiguity sentences. Results showed ambiguity-related responses in the posterior left inferior frontal gyrus (pLIFG) and posterior left middle temporal regions. The pLIFG activations were present for both syntactic and semantic ambiguities suggesting that this region is not specialised for processing either semantic or syntactic information, but instead performs cognitive operations that are required to resolve different types of ambiguity irrespective of their linguistic nature, for example by selecting between possible interpretations or reinterpreting misparsed sentences. Syntactic ambiguities also produced activation in the posterior middle temporal gyrus. These data confirm the functional relationship between these two brain regions and their importance in constructing grammatical representations of spoken language.
Abstract:
The purpose of this research was to investigate the effects of Processing Instruction (VanPatten, 1996, 2007), an input-based model for teaching second language grammar, on Syrian learners’ processing abilities. The present research investigated the effects of Processing Instruction on the acquisition of English relative clauses by Syrian learners using a quasi-experimental design. Three separate groups were involved in the research (Processing Instruction, Traditional Instruction and a Control Group). For assessment, a pre-test, a direct post-test and a delayed post-test were used as the main tools for eliciting data. A questionnaire was also distributed to participants in the Processing Instruction group to give them the opportunity to provide feedback on the treatment they received in comparison with the Traditional Instruction they were used to. Four hypotheses were formulated on the possible effectiveness of Processing Instruction on Syrian learners’ linguistic system. It was hypothesised that Processing Instruction would improve learners’ processing abilities, leading to an improvement in their linguistic system. This was expected to lead to better performance in the comprehension and production of English relative clauses. The main source of data was analysed statistically using ANOVA, supplemented by Cohen’s d calculations, which showed the magnitude of the effects of the three treatments. Results of the analysis showed that both the Processing Instruction and Traditional Instruction groups improved after treatment. However, the Processing Instruction group significantly outperformed the other two groups in the comprehension of relative clauses. The analysis concluded that Processing Instruction is a useful tool for teaching relative clauses to Syrian learners.
This finding was reinforced by participants’ responses to the questionnaire, which favoured Processing Instruction over Traditional Instruction. This research has theoretical and pedagogical implications. Theoretically, the study supported the Input hypothesis: Processing Instruction was shown to have a positive effect on input processing, as it affected learners’ linguistic system. This was reflected in learners’ performance, where learners were able to produce a structure which they had not been asked to produce. Pedagogically, the present research showed that Processing Instruction is a useful tool for teaching English grammar in the context where the experiment was carried out, as it had a large effect on learners’ performance.
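The Cohen’s d statistic used above to quantify effect magnitude is the standardized difference between two group means; a minimal sketch of the standard pooled-variance form (the function name and sample data are illustrative, not the study’s):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference between two group means divided by the
    pooled standard deviation (Bessel-corrected variances)."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b)
                          / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd
```

By common convention, |d| near 0.2 is read as a small effect, 0.5 as medium, and 0.8 or above as large.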