967 results for Speaker Recognition, Text-constrained, Multilingual, Speaker Verification, HMMs
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, nor is there any agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? And (2) is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at the grid scale, or downscale it to even higher resolution. An overall statement on the skill of CMIP5 decadal hindcasts is not the aim of this paper; the results presented are only illustrative of the framework, which would enable such studies.
However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
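The deterministic comparison of initialized and uninitialized hindcasts described above is typically summarised with a mean squared error skill score. The sketch below is illustrative only, not the paper's actual metric suite; the function name and the anomaly time series are hypothetical:

```python
import numpy as np

def msss(forecast, reference, observed):
    """Mean squared error skill score of one hindcast set against another.

    Positive values mean `forecast` has lower mean squared error than
    `reference`; zero means no improvement. Illustrative only.
    """
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

# Hypothetical annual-mean temperature anomalies (K)
obs = np.array([0.1, 0.3, 0.2, 0.5, 0.4])
init = obs + np.array([0.05, -0.04, 0.03, -0.02, 0.04])    # initialized hindcast
uninit = obs + np.array([0.15, -0.12, 0.10, -0.11, 0.13])  # uninitialized projection

# Positive skill score: initialization adds value over forcing alone here
print(msss(init, uninit, obs) > 0)
```

A score of this form makes the comparison in question (1) a single number per region or grid point, which is why the framework favours it for mapping where initialization helps or hurts.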
Abstract:
Many researchers have tried to assess the number of words adults know. A general conclusion which emerges from such studies is that the vocabularies of English monolingual adults are very large, with considerable variation. This variation is important given that the vocabulary size of schoolchildren in the early years of school is thought to materially affect subsequent educational attainment. The data are difficult to interpret, however, because of the different methodologies which researchers use. The study in this paper uses the frequency-based vocabulary size test from Goulden et al. (1990) and investigates the vocabulary knowledge of undergraduates in three British universities. The results suggest that monolingual speaker vocabulary sizes may be much smaller than is generally thought, with far less variation than is usually reported. An average figure of about 10,000 English word families emerges for entrants to university. This figure suggests that many students must struggle with the comprehension of university-level texts.
Abstract:
Empathy is the lens through which we view, and respond to, others' emotion expressions. In this study, empathy and facial emotion recognition were investigated in adults with autism spectrum conditions (ASC; N=314), parents of a child with ASC (N=297) and IQ-matched controls (N=184). Participants completed a self-report measure of empathy (the Empathy Quotient [EQ]) and a modified version of the Karolinska Directed Emotional Faces Task (KDEF) using an online test interface. Results showed that mean scores on the EQ were significantly lower in fathers (p<0.05) but not mothers (p>0.05) of children with ASC compared to controls, whilst both males and females with ASC obtained significantly lower EQ scores (p<0.001) than controls. On the KDEF, statistical analyses revealed poorer overall performance by adults with ASC (p<0.001) compared to the control group. When the six distinct basic emotions were analysed separately, the ASC group showed impaired performance across five of the six expressions (happy, sad, angry, afraid and disgusted). Parents of a child with ASC were not significantly worse than controls at recognising any of the basic emotions, after controlling for age and non-verbal IQ (all p>0.05). Finally, results indicated significant differences between males and females with ASC for emotion recognition performance (p<0.05) but not for self-reported empathy (p>0.05). These findings suggest that self-reported empathy deficits in fathers of autistic probands are part of the 'broader autism phenotype'. This study also reports new findings of sex differences amongst people with ASC in emotion recognition, as well as replicating previous work demonstrating empathy difficulties in adults with ASC. The use of empathy measures as quantitative endophenotypes for ASC is discussed.
Abstract:
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by means of initially winnowing down the most promising-looking alternatives to form smaller “consideration sets” (Howard, 1963; Wright & Barbour, 1977). In preference choices with >2 options, it is standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the “fast and frugal” approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
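The two-alternative recognition heuristic described above can be sketched in a few lines. This is an illustrative toy, not the multinomial processing tree model the abstract outlines; the function name, city names and recognition set are all hypothetical:

```python
def recognition_heuristic(option_a, option_b, recognized):
    """If exactly one option is recognized, choose it.

    When both or neither option is recognized, the heuristic gives no
    verdict; here we simply fall back to the first option as a stand-in
    for random guessing or further cues.
    """
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return option_a  # both or neither recognized: heuristic does not apply

# Hypothetical recognition set for a "which city is larger?" judgment
recognized_cities = {"Munich", "Hamburg"}
print(recognition_heuristic("Munich", "Herne", recognized_cities))  # → Munich
```

The modelling question in the abstract is precisely how often this one-cue rule, rather than richer knowledge, drives the choice when more than two options are on the table.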
Abstract:
The development of numerical weather prediction (NWP) models with grid spacing down to 1 km should produce more realistic forecasts of convective storms. However, greater realism does not necessarily mean more accurate precipitation forecasts. The rapid growth of errors on small scales in conjunction with preexisting errors on larger scales may limit the usefulness of such models. The purpose of this paper is to examine whether improved model resolution alone is able to produce more skillful precipitation forecasts on useful scales, and how the skill varies with spatial scale. A verification method will be described in which skill is determined from a comparison of rainfall forecasts with radar using fractional coverage over different sized areas. The Met Office Unified Model was run with grid spacings of 12, 4, and 1 km for 10 days in which convection occurred during the summers of 2003 and 2004. All forecasts were run from 12-km initial states for a clean comparison. The results show that the 1-km model was the most skillful over all but the smallest scales (approximately <10–15 km). A measure of acceptable skill was defined; this was attained by the 1-km model at scales around 40–70 km, some 10–20 km less than that of the 12-km model. The biggest improvement occurred for heavier, more localized rain, despite it being more difficult to predict. The 4-km model did not improve much on the 12-km model because of the difficulties of representing convection at that resolution, which was accentuated by the spin-up from 12-km fields.
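One common realisation of the neighbourhood comparison described above is a Fractions Skill Score: binarise both fields at a rain threshold, compute fractional coverage in n×n windows, and compare the resulting fraction fields. A minimal sketch, assuming synthetic fields in place of real model output and radar:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fractions_skill_score(forecast, observed, threshold, n):
    """Illustrative Fractions Skill Score over n x n neighbourhoods.

    1 = perfect match of neighbourhood coverage, 0 = no skill relative
    to the reference mean squared error.
    """
    def fractions(field):
        binary = (field >= threshold).astype(float)
        windows = sliding_window_view(binary, (n, n))
        return windows.mean(axis=(2, 3))  # coverage per neighbourhood

    pf, po = fractions(forecast), fractions(observed)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref

# Synthetic stand-ins: a "radar" rainfall field and a noisy "forecast"
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=(64, 64))
fct = obs + rng.normal(0.0, 1.0, size=(64, 64))
print(fractions_skill_score(fct, obs, threshold=4.0, n=9))
```

Sweeping `n` from small to large windows is what lets skill be plotted as a function of spatial scale, which is how statements like "acceptable skill at 40–70 km" are obtained.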
Abstract:
A new, healable, supramolecular nanocomposite material has been developed and evaluated. The material comprises a blend of three components: a pyrene-functionalized polyamide, a polydiimide and pyrene-functionalized gold nanoparticles (P-AuNPs). The polymeric components interact by forming well-defined π–π stacked complexes between π-electron-rich pyrenyl residues and π-electron-deficient polydiimide residues. Solution studies in the mixed solvent chloroform–hexafluoroisopropanol (6 : 1, v/v) show that mixing the three components (each of which is soluble in isolation) results in the precipitation of a supramolecular polymer nanocomposite network. The precipitate thus formed can be re-dissolved on heating, with the thermoreversible dissolution/precipitation procedure repeatable over at least 5 cycles. Robust, self-supporting composite films containing up to 15 wt% P-AuNPs could be cast from 2,2,2-trichloroethanol. Addition of as little as 1.25 wt% P-AuNPs resulted in significantly enhanced mechanical properties compared to the supramolecular blend without nanoparticles. The nanocomposites showed a linear increase in both tensile moduli and ultimate tensile strength with increasing P-AuNP content. All compositions up to 10 wt% P-AuNPs exhibited essentially quantitative healing efficiencies. Control experiments on an analogous nanocomposite material containing dodecylamine-functionalized AuNPs (5 wt%) exhibited a tensile modulus approximately half that of the corresponding nanocomposite that incorporated 5 wt% pyrene-functionalized AuNPs, clearly demonstrating the importance of the designed interactions between the gold filler and the supramolecular polymer matrix.
Abstract:
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of the data provenance to the confines of the related publications. Detailed knowledge of a dataset’s provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. It is increasingly important for open-access data to determine their authenticity and quality, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data-compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
Abstract:
The effects of auditory distraction in memory tasks have been examined to date with procedures that minimize participants’ control over their own memory processes; surprisingly little attention has been paid to the metacognitive control factors which might affect memory performance. In this study, we investigate the effects of auditory distraction on metacognitive control of memory in recognition tasks, using the metacognitive framework of Koriat and Goldsmith (1996) to determine whether strategic regulation of memory accuracy is affected by auditory distraction. Results replicated previous findings in showing that auditory distraction impairs memory performance in tasks minimizing participants’ metacognitive control (forced-report tests). However, the results also revealed that when metacognitive control is allowed (free-report tests), auditory distraction impacts upon a range of metacognitive indices: it undermined the accuracy of metacognitive monitoring (resolution), reduced confidence in the responses provided and, correspondingly, increased participants’ propensity to withhold responses in free-report recognition. Crucially, these changes in metacognitive processes were related to impairment in free-report recognition performance, as the use of the ‘don’t know’ option under distraction led to a reduction in the number of correct responses volunteered in free-report tests. Overall, the present results show how auditory distraction exerts its influence on memory performance via both memory and metamemory processes.
Abstract:
Analysis of the forecasts and hindcasts from the ECMWF 32-day forecast model reveals that there is statistically significant skill in predicting weekly mean wind speeds over areas of Europe at lead times of at least 14–20 days. Previous research on wind speed predictability has focused on the short- to medium-range time scales, typically finding that forecasts lose all skill by the later part of the medium-range forecast. To the authors’ knowledge, this research is the first to look beyond the medium-range time scale by taking weekly mean wind speeds, instead of averages at hourly or daily resolution, for the ECMWF monthly forecasting system. It is shown that the operational forecasts have high levels of correlation (~0.6) between the forecasts and observations over the winters of 2008–12 for some areas of Europe. Hindcasts covering 20 winters show a more modest level of correlation but are still skillful. Additional analysis examines the probabilistic skill for the United Kingdom with the application of wind power forecasting in mind. It is also shown that there is forecast “value” for end users (operating in a simple cost/loss ratio decision-making framework). End users that are sensitive to winter wind speed variability over the United Kingdom, Germany, and some other areas of Europe should therefore consider forecasts beyond the medium-range time scale as it is clear there is useful information contained within the forecast.
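The simple cost/loss framework mentioned for end users can be made concrete as the relative economic value of a forecast: compare a user's expected expense when acting on the forecast against always/never protecting (climatology) and against a perfect forecast. A minimal sketch with hypothetical contingency-table counts; the function name and numbers are illustrative, not results from the paper:

```python
def relative_value(hits, misses, false_alarms, correct_negs, cost, loss):
    """Relative economic value of a forecast for a user who can pay
    `cost` to protect against an event that otherwise incurs `loss`.

    1 = as valuable as a perfect forecast, 0 = no better than acting on
    climatology alone. Counts form a hypothetical 2x2 contingency table.
    """
    n = hits + misses + false_alarms + correct_negs
    s = (hits + misses) / n                    # climatological event frequency
    # Mean expense per occasion when protecting whenever the event is forecast
    e_forecast = (cost * (hits + false_alarms) + loss * misses) / n
    e_climate = min(cost, s * loss)            # best of always/never protecting
    e_perfect = s * cost                       # protect only when event occurs
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Hypothetical winter wind-speed exceedance forecasts for one site
print(relative_value(30, 10, 20, 140, cost=1.0, loss=10.0))
```

Repeating this for a range of cost/loss ratios yields the value curves used to decide which classes of user gain from forecasts beyond the medium range.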
Abstract:
Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6 which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles-of-opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range ‘no mitigation’ scenario for greenhouse-gas emissions.
Abstract:
In recent years, ZigBee has been proven to be an excellent solution to create scalable and flexible home automation networks. In a home automation network, consumer devices typically collect data from a home monitoring environment and then transmit the data to an end user through multi-hop communication without the need for any human intervention. However, due to the presence of typical obstacles in a home environment, error-free reception may not be possible, particularly for power-constrained devices. A mobile sink based data transmission scheme can be one solution, but obstacles create significant complexities for the sink movement path determination process. Therefore, an obstacle avoidance data routing scheme is of vital importance to the design of an efficient home automation system. This paper presents a mobile sink based obstacle avoidance routing scheme for a home monitoring system. The mobile sink collects data by traversing the obstacle avoidance path. Through ZigBee-based hardware implementation and verification, the proposed scheme successfully transmits data along the obstacle avoidance path, improving network performance in terms of lifespan, energy consumption and reliability. This work can be applied to a wide range of intelligent pervasive consumer products and services, including robotic vacuum cleaners and personal security robots.
Abstract:
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers on the recognition of simple actions, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour on a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their constituent simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method experiences only a limited decrease in performance when moderate amounts of noise are added.
Abstract:
Background Atypical self-processing is an emerging theme in autism research, suggested by a lower self-reference effect in memory and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, and it is therefore essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in the auditory and visual domains in the general population and tests whether it is associated with autistic traits. Methods Thirty-nine neurotypical adults participated in a two-part study. In the first session, each participant’s voice was recorded and their face photographed; these were then morphed, respectively, with voices and faces from unfamiliar identities. In the second session, participants performed a ‘self-identification’ task, classifying each morph as a ‘self’ voice (or face) or an ‘other’ voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, the slope of the self-recognition curve was used as the individual self-recognition metric. These two self-recognition metrics were tested for association with each other, and with autistic traits. Results The fifty percent ‘self’ response was reached at a higher percentage of self in the auditory domain than in the visual domain (t = 3.142; P < 0.01). No significant correlation was noted between self-recognition bias across sensory modalities (τ = −0.165, P = 0.204). Higher recognition bias for self-voice was observed in individuals higher in autistic traits (τAQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τAQ = −0.020, P = 0.438).
Conclusions Our data shows that recognition bias for physical self-representation is not related across sensory modalities. Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed with self-face. A narrow self-other overlap in the auditory domain seen in individuals with high autistic traits could arise due to enhanced perceptual processing of auditory stimuli often observed in individuals with autism.
Abstract:
Whereas there is substantial scholarship on formulaic language in L1 and L2 English, there is less research on formulaicity in other languages. The aim of this paper is to contribute to learner corpus research into formulaic language in native and non-native German. To this end, a corpus of argumentative essays written by advanced British students of German (WHiG) was compared with a corpus of argumentative essays written by German native speakers (Falko-L1). A corpus-driven analysis reveals a larger number of 3-grams in WHiG than in Falko-L1, which suggests that British advanced learners of German are more likely to use formulaic language in argumentative writing than their native-speaker counterparts. Secondly, by classifying the formulaic sequences according to their functions, this study finds that native speakers of German prefer discourse-structuring devices to stance expressions, whilst British advanced learners display the opposite preference. Thirdly, the results show that learners of German make greater use of macro-discourse-structuring devices and cautious language, whereas native speakers favour micro-discourse-structuring devices and tend to use more direct language. This study increases our understanding of the formulaic language typical of British advanced learners of German and reveals how diverging cultural paradigms can shape written native speaker and learner output.
Abstract:
‘Pragmaticist’ positions posit a three-way division within utterance content between: (i) the standing meaning of the sentence, (ii) a somewhat pragmatically enhanced meaning which captures what the speaker explicitly conveys (following Sperber and Wilson 1986, I label this the ‘explicature’), and (iii) further indirectly conveyed propositions which the speaker merely implies. Here I re-examine the notion of an explicature, asking how it is defined and what work explicatures are supposed to do. I argue that explicatures get defined in three different ways and that these distinct definitions can and do pull apart. Thus the notion of an explicature turns out to be ill-defined.