830 results for TAP MTO
Abstract:
We discuss the application of TAP mean field methods, known from the statistical mechanics of disordered systems, to Bayesian classification with Gaussian processes. In contrast to previous applications, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
Abstract:
We derive a mean field algorithm for binary classification with Gaussian processes based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, computed at no extra computational cost. We show that from the TAP approach it is possible to derive both a simpler 'naive' mean field theory and support vector machines (SVMs) as limiting cases. For both mean field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show (1) that one may get state-of-the-art performance by using the leave-one-out estimator for model selection, and (2) that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The latter result is taken as strong support for the internal consistency of the mean field approach.
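The exact leave-one-out estimate against which built-in estimators are validated can be illustrated with a minimal sketch: refit a classifier n times, each time holding out one example. The classifier below is a generic RBF kernel ridge model with a sign readout on synthetic data, not the authors' TAP algorithm; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + X[:, 1])          # labels in {-1, +1}

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, lam=1e-2):
    """Kernel ridge fit on (Xtr, ytr), sign prediction on Xte."""
    K = rbf(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return np.sign(rbf(Xte, Xtr) @ alpha)

# Exact leave-one-out: n refits, each holding out a single point.
n = len(X)
errs = sum(
    fit_predict(np.delete(X, i, 0), np.delete(y, i), X[i:i + 1])[0] != y[i]
    for i in range(n)
)
loo_error = errs / n
print(f"exact LOO error: {loo_error:.3f}")
```

The n-refit loop is exactly what makes the exact estimate expensive, and what a built-in approximation (as in the TAP approach) avoids.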
Abstract:
This thesis presents details on the fabrication of microwave transversal filters using fibre Bragg grating arrays and the construction of fibre Bragg grating based magnetic-field sensors. Theoretical background on fibre Bragg gratings, photosensitivity, and fibre Bragg grating sensors and filters is presented. Fibre Bragg grating sensors in other industrial applications are highlighted, and some sensing principles are introduced. Experimental work is carried out to demonstrate a magnetic-field sensor based on an established fibre Bragg grating strain sensor; system performance and trade-offs are discussed. The most important part of this thesis concerns the fabrication of a photonic transversal filter using fibre Bragg grating arrays. In order to improve the filter performance, a novel tap multiplexing structure is presented. Further improvements, such as apodisation, are also investigated. The basis of nonrecirculating filters, together with some structures and their performance, is introduced.
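The frequency response of a transversal (tapped delay line) filter, such as one realised with a fibre Bragg grating tap array, is H(f) = Σ_k a_k e^(−j2πfkT). The sketch below uses invented tap weights and spacing (not values from the thesis) to show the periodic passbands, with free spectral range 1/T, and the nulls between them.

```python
import numpy as np

def transversal_response(taps, T, freqs):
    """H(f) = sum_k a_k * exp(-2j*pi*f*k*T) for a tapped delay line."""
    k = np.arange(len(taps))
    return np.array(
        [np.sum(taps * np.exp(-2j * np.pi * f * k * T)) for f in freqs]
    )

taps = np.ones(4) / 4            # four equal-weight taps (illustrative)
T = 1e-10                        # 100 ps tap spacing -> 10 GHz free spectral range
freqs = np.array([0.0, 2.5e9, 1.0e10])
H = transversal_response(taps, T, freqs)
print(np.abs(H).round(6))        # passbands at 0 and one FSR, null at FSR/4
```

Because the taps are discrete delays, the response repeats every 1/T; apodising (tapering) the tap weights suppresses the sidelobes between passbands.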
Abstract:
An intelligent agent, operating in an external world which cannot be fully described in its internal world model, must be able to monitor the success of a previously generated plan and to respond to any errors which may have occurred. The process of error analysis requires the ability to reason in an expert fashion about time and about processes occurring in the world. Reasoning about time is needed to deal with causality. Reasoning about processes is needed since the direct effects of a plan action can be completely specified when the plan is generated, but the indirect effects cannot. For example, the action 'open tap' leads with certainty to 'tap open', whereas whether there will be a fluid flow and how long it might last is more difficult to predict. The majority of existing planning systems cannot handle these kinds of reasoning, thus limiting their usefulness. This thesis argues that both kinds of reasoning require a complex internal representation of the world. The use of Qualitative Process Theory and an interval-based representation of time are proposed as a representation scheme for such a world model. The planning system which was constructed has been tested on a set of realistic planning scenarios. It is shown that even simple planning problems, such as making a cup of coffee, require extensive reasoning if they are to be carried out successfully. The final chapter concludes that the planning system described does allow the correct solution of planning problems involving complex side effects, which planners up to now have been unable to solve.
Abstract:
The present thesis tested the hypothesis of Stanovich, Siegel, & Gottardo (1997) that surface dyslexia is the result of a milder phonological deficit than that seen in phonological dyslexia, coupled with reduced reading experience. We found that a group of adults with surface dyslexia showed a phonological deficit commensurate with that shown by a group of adults with phonological dyslexia (matched for chronological age and verbal and non-verbal IQ) and normal reading experience. We also showed that surface dyslexia cannot be accounted for by a semantic impairment or a deficit in the verbal learning and recall of lexical-semantic information (such as meaningful words), as both dyslexic subgroups performed comparably. This study replicated the results of our published work showing that surface dyslexia is not the consequence of a mild retardation or reduced learning opportunities but a separate impairment linked to a deficit in written lexical learning, an ability needed to create novel lexical representations from a series of unrelated visual units, which is independent from the phonological deficit (Romani, Di Betta, Tsouknida & Olson, 2008). This thesis also provided evidence that a selective nonword reading deficit in developmental dyslexia persists beyond poor phonology. This was shown by finding a nonword reading deficit even in the presence of normal regularity effects in the dyslexics (when compared with both reading-age and spelling-age matched controls). A nonword reading deficit was also found in the surface dyslexics. Crucially, this deficit was as strong as in the phonological dyslexics, despite better functioning of the sublexical route in the former. These results suggest that a nonword reading deficit cannot be solely explained by a phonological impairment. We thus suggested that nonword reading also involves another ability, relating to the processing of novel visual orthographic strings, which we called 'orthographic coding'.
We then investigated the ability to process series of independent units within multi-element visual arrays and its relationship with reading and spelling problems. We identified a deficit in encoding the order of visual sequences (involving both linguistic and nonlinguistic information) which was significantly associated with word and nonword processing. More importantly, we revealed significant contributions of order-encoding skills to orthographic skills in both dyslexic and control individuals, even after age, performance IQ and phonological skills were controlled. These results suggest that spelling and reading tap not only phonological skills but also order-encoding skills.
Abstract:
Adults show great variation in their auditory skills, such as being able to discriminate between foreign speech-sounds. Previous research has demonstrated that structural features of auditory cortex can predict auditory abilities; here we are interested in the maturation of 2-Hz frequency-modulation (FM) detection, a task thought to tap into mechanisms underlying language abilities. We hypothesized that an individual's FM threshold will correlate with gray-matter density in left Heschl's gyrus, and that this function-structure relationship will change through adolescence. To test this hypothesis, we collected anatomical magnetic resonance imaging data from participants who were tested and scanned at three time points: at 10, 11.5 and 13 years of age. Participants judged which of two tones contained FM; the modulation depth was adjusted using an adaptive staircase procedure and their threshold was calculated based on the geometric mean of the last eight reversals. Using voxel-based morphometry, we found that FM threshold was significantly correlated with gray-matter density in left Heschl's gyrus at the age of 10 years, but that this correlation weakened with age. While there were no differences between girls and boys at Times 1 and 2, at Time 3 there was a relationship between FM threshold and gray-matter density in left Heschl's gyrus in boys but not in girls. Taken together, our results confirm that the structure of the auditory cortex can predict temporal processing abilities, namely that gray-matter density in left Heschl's gyrus can predict 2-Hz FM detection threshold. This ability is dependent on the processing of sounds changing over time, a skill believed necessary for speech processing. We tested this assumption and found that FM threshold significantly correlated with spelling abilities at Time 1, but that this correlation was found only in boys.
This correlation decreased at Time 2, and at Time 3 we found a significant correlation between reading and FM threshold, but again, only in boys. We examined the sex differences in both the imaging and behavioral data taking into account pubertal stages, and found that the correlation between FM threshold and spelling was strongest pre-pubertally, and the correlation between FM threshold and gray-matter density in left Heschl's gyrus was strongest mid-pubertally.
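The threshold rule described above, the geometric mean of the stimulus levels at the last eight staircase reversals, can be sketched as follows; the reversal-detection logic and the example track of modulation depths are illustrative, not the authors' procedure.

```python
import math

def staircase_threshold(track, n_reversals=8):
    """Geometric mean of the levels at the last n reversals.

    `track` is the sequence of stimulus levels (here, FM depths) visited
    by an adaptive staircase; a reversal is a change of direction.
    """
    reversals = []
    direction = 0
    for prev, cur in zip(track, track[1:]):
        step = (cur > prev) - (cur < prev)
        if step != 0 and direction != 0 and step != direction:
            reversals.append(prev)  # level at which the track turned
        if step != 0:
            direction = step
    last = reversals[-n_reversals:]
    return math.exp(sum(math.log(x) for x in last) / len(last))

# Hypothetical staircase track of FM depths (arbitrary units)
track = [8, 4, 2, 1, 2, 1, 0.5, 1, 0.5, 1, 0.5, 0.25, 0.5, 0.25]
print(round(staircase_threshold(track), 3))
```

The geometric mean is the natural average here because staircase steps are typically multiplicative (e.g., halving or doubling the depth).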
Abstract:
Purpose - To compare the visual outcomes after verteporfin photodynamic therapy (VPDT) administered in routine clinical practice with those observed in the Treatment of Age-related macular degeneration with Photodynamic therapy (TAP) trials, and to quantify the effects of clinically important baseline covariates on outcome. Design - A prospective longitudinal study of patients treated with VPDT in 45 ophthalmology departments in the United Kingdom with expertise in the management of neovascular age-related macular degeneration (nAMD). Participants - Patients with wholly or predominantly classic choroidal neovascularization (CNV) of any cause, with a visual acuity ≥20/200 in the eye to be treated. Methods - Refracted best-corrected visual acuity (BCVA) and contrast sensitivity were measured in VPDT-treated eyes at baseline and subsequent visits. Eyes were retreated at 3 months if CNV was judged to be active. Baseline angiograms were graded to quantify the percentages of classic and occult CNV. Treated eyes were categorized as eligible or ineligible for TAP, or unclassifiable. Main Outcome Measures - Best-corrected visual acuity and contrast sensitivity during 1 year of follow-up after initial treatment. Results - A total of 7748 treated patients were recruited. Data from 4043 patients with a diagnosis of nAMD were used in the present analysis. Reading center determination of lesion type showed that 87% were predominantly classic CNV. Eyes received a mean of 2.4 treatments in year 1 and 0.4 treatments in year 2. Deterioration of BCVA over 1 year was similar to that observed in the VPDT arms of the TAP trials and was not influenced by TAP eligibility classification. Best-corrected visual acuity deteriorated more quickly in current smokers; with increasing proportion of classic CNV, increasing age, and better baseline BCVA; and when the fellow eye was the better eye.
Abstract:
We experimentally investigate channel estimation and compensation in a chromatic dispersion (CD) limited 20 Gbit/s optical fast orthogonal frequency division multiplexing (F-OFDM) system with up to 840 km transmission. It is shown that a symmetric-extension-based guard interval (GI) is required to enable CD compensation using one-tap equalizers. As few as one optical F-OFDM symbol, with four and six pilot tones per symbol, can achieve near-optimal channel estimation and compensation performance for 600 km and 840 km, respectively.
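A one-tap equalizer corrects each subcarrier with a single complex coefficient, which suffices because chromatic dispersion is (ideally) an all-pass channel with a quadratic phase in frequency. The noiseless frequency-domain sketch below uses an invented dispersion coefficient and a trivial all-ones pilot, not a calibrated 840 km fibre or the authors' F-OFDM system.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                     # subcarriers

# QPSK data on each subcarrier
tx = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)

# CD model: unit magnitude, quadratic phase across subcarriers
# (illustrative coefficient, not a real fibre parameter)
k = np.arange(n)
H = np.exp(1j * 0.01 * (k - n / 2) ** 2)

pilot = np.ones(n, dtype=complex)          # known training symbol
H_est = (H * pilot) / pilot                # channel estimate from the pilot
eq = (H * tx) / H_est                      # one complex multiply per subcarrier

print(np.max(np.abs(eq - tx)))             # residual error after equalization
```

In practice the estimate would be interpolated from a few pilot tones per symbol and smoothed against noise, which is where the four-to-six-pilot result above comes in.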
Abstract:
Over the last decade, television screens and display monitors have increased in size considerably, but has this improved our televisual experience? Our working hypothesis was that audiences adopt a general strategy that “bigger is better.” However, as our visual perceptions do not tap directly into basic retinal image properties such as retinal image size (C. A. Burbeck, 1987), we wondered whether object size itself might be an important factor. To test this, we needed a task that would tap into the subjective experiences of participants watching a movie on different-sized displays with the same retinal subtense. Our participants used a line bisection task to self-report their level of “presence” (i.e., their involvement with the movie) at several target locations that were probed in a 45-min section of the movie “The Good, The Bad, and The Ugly.” Measures of pupil dilation and reaction time to the probes were also obtained. In Experiment 1, we found that subjective ratings of presence increased with physical screen size, supporting our hypothesis. Face scenes also produced higher presence scores than landscape scenes for both screen sizes. In Experiment 2, reaction time and pupil dilation showed the same trends as the presence ratings, and pupil dilation correlated with presence ratings, providing some validation of the method. Overall, the results suggest that real-time measures of subjective presence might be a valuable tool for measuring audience experience for different types of (i) display and (ii) audiovisual material.
Abstract:
The environment may act as a reservoir for pathogens that cause healthcare-associated infections (HCAIs). Approaches to reducing environmental microbial contamination in addition to cleaning are thus worthy of consideration. Copper is well recognised as having antimicrobial activity, but this property has not been applied to the clinical setting. We explored its use in a novel cross-over study on an acute medical ward. A toilet seat, a set of tap handles and a ward entrance door push plate, each containing copper, were sampled for the presence of micro-organisms and compared with equivalent standard, non-copper-containing items on the same ward. Items were sampled once weekly for 10 weeks at 07:00 and 17:00. After five weeks, the copper-containing and non-copper-containing items were interchanged. The total aerobic microbial counts per cm², including the presence of ‘indicator micro-organisms’, were determined. Median numbers of micro-organisms harboured by the copper-containing items were between 90% and 100% lower than those of their control equivalents at both 07:00 and 17:00. This reached statistical significance for each item with one exception. Based on the median total aerobic cfu counts from the study period, five out of ten control sample points, and none of the ten copper sample points, failed the proposed benchmark value of a total aerobic count of <5 cfu/cm². Indicator micro-organisms were isolated only from control items, with the exception of one item during one week. The use of copper-containing materials for surfaces in the hospital environment may therefore be a valuable adjunct for the prevention of HCAIs and requires further evaluation.
Abstract:
Under conditions of hypoxia, most eukaryotic cells undergo a shift in metabolic strategy, which involves increased flux through the glycolytic pathway. Although this is critical for bioenergetic homeostasis, the underlying mechanisms have remained incompletely understood. Here, we report that the induction of hypoxia-induced glycolysis is retained in cells when gene transcription or protein synthesis is inhibited, suggesting the involvement of additional post-translational mechanisms. Post-translational protein modification by the small ubiquitin-related modifier-1 (SUMO-1) is induced in hypoxia, and mass spectrometric analysis using yeast cells expressing TAP-tagged Smt3 (the yeast homolog of mammalian SUMO) revealed hypoxia-dependent modification of a number of key glycolytic enzymes. Overexpression of SUMO-1 in mammalian cancer cells resulted in increased hypoxia-induced glycolysis and resistance to hypoxia-dependent ATP depletion. Supporting this, non-transformed cells also demonstrated increased glucose uptake upon SUMO-1 overexpression. Conversely, cells overexpressing the de-SUMOylating enzyme SENP-2 failed to demonstrate hypoxia-induced glycolysis. SUMO-1 overexpressing cells demonstrated focal clustering of glycolytic enzymes in response to hypoxia, leading us to hypothesize a role for SUMOylation in promoting spatial re-organization of the glycolytic pathway. In summary, we hypothesize that SUMO modification of key metabolic enzymes plays an important role in shifting cellular metabolic strategies toward increased flux through the glycolytic pathway during periods of hypoxic stress. © 2011 by The American Society for Biochemistry and Molecular Biology, Inc.
Abstract:
Purpose: Both phonological (speech) and auditory (non-speech) stimuli have been shown to predict early reading skills. However, previous studies have failed to control for the level of processing required by the tasks administered across the two types of stimuli. For example, phonological tasks typically tap explicit awareness (e.g., phoneme deletion), while auditory tasks usually measure implicit processing (e.g., frequency discrimination). Therefore, the stronger predictive power of speech tasks may be due to their higher processing demands rather than the nature of the stimuli. Method: The present study uses novel tasks that control for level of processing (isolation, repetition and deletion) across speech (phonemes and nonwords) and non-speech (tones) stimuli. A total of 800 beginning readers at the onset of literacy tuition (mean age 4 years and 7 months) were assessed on the above tasks, as well as word reading and letter knowledge, in the first part of a three time-point longitudinal study. Results: Time 1 results reveal a significantly higher association between letter-sound knowledge and all of the speech tasks compared with the non-speech tasks. Performance was better for phoneme than for tone stimuli, and worse for deletion than for isolation and repetition across all stimuli. Conclusions: Results are consistent with phonological accounts of reading and suggest that the level of processing required by a task is less important than the type of stimuli in predicting the earliest stage of reading.
Abstract:
We tested the hypothesis that the differences in performance between developmental dyslexics and controls on visual tasks are specific for the detection of dynamic stimuli. We found that dyslexics were less sensitive than controls to coherent motion in dynamic random dot displays. However, their sensitivity to control measures of static visual form coherence was not significantly different from that of controls. This dissociation of dyslexics' performance on measures that are suggested to tap the sensitivity of different extrastriate visual areas provides evidence for an impairment specific to the detection of dynamic properties of global stimuli, perhaps resulting from selective deficits in dorsal stream functions. © 2001 Lippincott Williams & Wilkins.
Abstract:
We investigated order encoding in developmental dyslexia using a task that presented nonalphanumeric visual characters either simultaneously or sequentially—to tap spatial and temporal order encoding, respectively—and asked participants to reproduce their order. Dyslexic participants performed poorly in the sequential condition, but normally in the simultaneous condition, except for positions most susceptible to interference. These results are novel in demonstrating a selective difficulty with temporal order encoding in a dyslexic group. We also tested the associations between our order reconstruction tasks and: (a) lexical learning and phonological tasks; and (b) different reading and spelling tasks. Correlations were extensive when the whole group of participants was considered together. When dyslexics and controls were considered separately, different patterns of association emerged between orthographic tasks on the one side and tasks tapping order encoding, phonological processing, and written learning on the other. These results indicate that different skills support different aspects of orthographic processing and are impaired to different degrees in individuals with dyslexia. Therefore, developmental dyslexia is not caused by a single impairment, but by a family of deficits loosely related to difficulties with order. Understanding the contribution of these different deficits will be crucial to deepen our understanding of this disorder.
Abstract:
Background - The main processing pathway for MHC class I ligands involves degradation of proteins by the proteasome, followed by transport of products by the transporter associated with antigen processing (TAP) to the endoplasmic reticulum (ER), where peptides are bound by MHC class I molecules, and then presented on the cell surface by MHCs. The whole process is modeled here using an integrated approach, which we call EpiJen. EpiJen is based on quantitative matrices, derived by the additive method, and applied successively to select epitopes. EpiJen is available free online. Results - To identify epitopes, a source protein is passed through four steps: proteasome cleavage, TAP transport, MHC binding and epitope selection. At each stage, different proportions of non-epitopes are eliminated. The final set of peptides represents no more than 5% of the whole protein sequence and will contain 85% of the true epitopes, as indicated by external validation. Compared to other integrated methods (NetCTL, WAPP and SMM), EpiJen performs best, predicting 61 of the 99 HIV epitopes used in this study. Conclusion - EpiJen is a reliable multi-step algorithm for T cell epitope prediction, which belongs to the next generation of in silico T cell epitope identification methods. These methods aim to reduce subsequent experimental work by improving the success rate of epitope prediction.
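The additive method behind EpiJen's quantitative matrices scores a peptide as a sum of independent per-position amino-acid contributions, and a threshold or ranking on that score selects candidates at each stage. The matrix values and three-residue peptides below are invented for illustration; real EpiJen matrices are longer and derived from experimental data.

```python
# Position-specific quantitative matrix: contribution of each amino acid
# at each peptide position (illustrative values only).
QM = {
    0: {"A": 0.3, "L": 0.8, "K": -0.2},
    1: {"A": 0.1, "L": 0.5, "K": 0.9},
    2: {"A": -0.4, "L": 0.2, "K": 0.6},
}

def additive_score(peptide, matrix):
    """Additive method: sum of independent per-position contributions."""
    return sum(matrix[i].get(aa, 0.0) for i, aa in enumerate(peptide))

peptides = ["LKA", "ALK", "KAL"]
ranked = sorted(peptides, key=lambda p: additive_score(p, QM), reverse=True)
print(ranked)
```

Each pipeline stage (proteasome cleavage, TAP transport, MHC binding) can use its own matrix of this form, with survivors of one stage passed to the next.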