104 results for robust speaker verification
Abstract:
The development of NWP models with grid spacing down to 1 km should produce more realistic forecasts of convective storms. However, greater realism does not necessarily mean more accurate precipitation forecasts. The rapid growth of errors on small scales, in conjunction with preexisting errors on larger scales, may limit the usefulness of such models. The purpose of this paper is to examine whether improved model resolution alone is able to produce more skillful precipitation forecasts on useful scales, and how the skill varies with spatial scale. A verification method is described in which skill is determined from a comparison of rainfall forecasts with radar using fractional coverage over different sized areas. The Met Office Unified Model was run with grid spacings of 12, 4, and 1 km for 10 days in which convection occurred during the summers of 2003 and 2004. All forecasts were run from 12-km initial states for a clean comparison. The results show that the 1-km model was the most skillful over all but the smallest scales (below approximately 10–15 km). A measure of acceptable skill was defined; this was attained by the 1-km model at scales around 40–70 km, some 10–20 km smaller than that of the 12-km model. The biggest improvement occurred for heavier, more localized rain, even though such rain is more difficult to predict. The 4-km model did not improve much on the 12-km model because of the difficulties of representing convection at that resolution, which was accentuated by the spinup from 12-km fields.
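A minimal sketch of the fractional-coverage comparison described above, assuming gridded forecast and radar rain-rate fields on a common grid; the function name, default threshold, and normalization follow the generic neighbourhood-verification recipe rather than being quoted from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractional_coverage_skill(forecast, observed, threshold, window):
    """Compare fractional rain coverage over window-sized neighbourhoods
    (illustrative sketch; names and details are not from the paper).

    forecast, observed : 2-D rain-rate arrays on a common grid
    threshold          : rain rate defining an "event" (e.g., in mm/h)
    window             : neighbourhood size in grid points
    """
    # Binary exceedance fields
    fx = (forecast >= threshold).astype(float)
    ob = (observed >= threshold).astype(float)
    # Fractional coverage of events within each neighbourhood
    f_frac = uniform_filter(fx, size=window, mode="constant")
    o_frac = uniform_filter(ob, size=window, mode="constant")
    # Skill: 1 - MSE of the fractions, normalized by the no-skill MSE
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

Evaluating such a score over a range of window sizes is what exposes how skill varies with spatial scale, which is the comparison drawn above between the 12-, 4-, and 1-km models.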
Abstract:
This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm that combines the concept of ownership with automatic understanding of social relations in order to infer abandonment of objects. Implementation is achieved through the development of a logic-based inference engine in Prolog. Threat detection performance is evaluated against a range of datasets describing realistic situations and demonstrates a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).
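The inference engine above is written in Prolog; purely as an illustration of the kind of rule such an engine might encode, here is a toy Python rendering in which every name and threshold is invented:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    obj_id: str
    owner_distance: float     # current owner-to-object distance (metres)
    time_static: float        # seconds the object has been stationary
    attended_by_group: bool   # is someone socially related to the owner nearby?

def is_abandoned(obj: TrackedObject,
                 dist_limit: float = 10.0,
                 time_limit: float = 60.0) -> bool:
    """Toy ownership/social-relation rule (thresholds invented here): an
    object is flagged only if its owner has left AND no socially related
    person is attending it."""
    owner_gone = obj.owner_distance > dist_limit
    return owner_gone and not obj.attended_by_group and obj.time_static > time_limit
```

Coupling ownership with social relations in this way is what suppresses false alarms, e.g., for bags left briefly in the care of a companion.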
Abstract:
Climate models consistently predict a strengthened Brewer–Dobson circulation in response to greenhouse gas (GHG)-induced climate change. Although the predicted circulation changes are clearly the result of changes in stratospheric wave drag, the mechanism behind the wave-drag changes remains unclear. Here, simulations from a chemistry–climate model are analyzed to show that the changes in resolved wave drag are largely explainable in terms of a simple and robust dynamical mechanism, namely changes in the location of critical layers within the subtropical lower stratosphere, which are known from observations to control the spatial distribution of Rossby wave breaking. In particular, the strengthening of the upper flanks of the subtropical jets that is robustly expected from GHG-induced tropospheric warming pushes the critical layers (and the associated regions of wave drag) upward, allowing more wave activity to penetrate into the subtropical lower stratosphere. Because the subtropics represent the critical region for wave driving of the Brewer–Dobson circulation, the circulation is thereby strengthened. Transient planetary-scale waves and synoptic-scale waves generated by baroclinic instability are both found to play a crucial role in this process. Changes in stationary planetary wave drag are not so important because they largely occur away from subtropical latitudes.
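The critical-layer mechanism can be stated compactly; the following is the standard textbook condition for Rossby waves, not a formula quoted from the paper:

```latex
% A Rossby wave with zonal phase speed c meets a critical layer where the
% zonal-mean zonal wind matches that phase speed:
\[
  \bar{u}(\phi, z) = c .
\]
% Wave breaking, and hence the drag, concentrates near this surface, so a
% GHG-induced strengthening of the jet's upper flank raises the
% \(\bar{u} = c\) surface and lets more wave activity penetrate into the
% subtropical lower stratosphere.
```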
Abstract:
This Forum challenges and problematizes the term incomplete acquisition, which has been widely used to describe the state of competence of heritage speaker (HS) bilinguals for well over a decade (see, e.g., Montrul, 2008). It is suggested and defended that HS competence, while often different from that of monolingual peers, is in fact not incomplete (given any reasonable definition of the word incomplete), but simply distinct for reasons related to the realities of their environment.
Abstract:
It has been argued that colloquial dialects of Brazilian Portuguese (BP) have undergone significant linguistic change resulting in the loss of inflected infinitives (e.g., Pires, 2002, 2006). Since BP adults, at least educated ones, have complete knowledge of inflected infinitives, the implicit claim is that these are transmitted via formal education in the standard dialect. In the present article, I test one of the latent predictions of such claims, namely that heritage speakers of BP who lack formal education in the standard dialect should never develop native-like knowledge of inflected infinitives. In doing so, I highlight two significant implications: (a) that heritage speaker grammars are a good source for testing proposals about dialectal variation and language change, and (b) that incomplete acquisition and/or attrition are not the only sources of heritage language competence differences. Employing the syntactic and semantic tests of Rothman and Iverson (2007), I compare heritage speakers' knowledge to that of Rothman and Iverson's advanced adult L2 learners and educated native controls. Unlike the latter groups, the heritage speakers do not show target knowledge of inflected infinitives, lending support to Pires's claims and suggesting that literacy plays a significant role in the acquisition of this grammatical property in BP.
Abstract:
In this communication, we describe a new method that has enabled the first patterning of human neurons (derived from the human teratocarcinoma cell line, hNT) on parylene-C/silicon dioxide substrates. We reveal the details of the nanofabrication processes, cell differentiation, and culturing protocols necessary to successfully pattern hNT neurons, each of which is a key aspect of this new method. Patterning human neurons on a silicon chip using an accessible cell line and a robust patterning technology is of widespread value. Thus, a combined technology such as this will facilitate the detailed study of the pathological human brain at both the single-cell and network levels.
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept, which generalizes that of centrality as defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
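A minimal sketch of the communicability-plus-knockout machinery, using one common formulation of dynamic communicability (a resolvent product over time-ordered adjacency snapshots); the paper's exact definitions and parameter choices may differ:

```python
import numpy as np

def dynamic_communicability(adjacency_seq, a=0.1):
    """Running communicability over adjacency snapshots A_1, ..., A_M.
    Requires a < 1 / (max spectral radius of the A_k) so that each
    resolvent has a convergent walk expansion."""
    n = adjacency_seq[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_seq:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    broadcast = Q.sum(axis=1)   # influence of each vertex as a sender
    receive = Q.sum(axis=0)     # influence of each vertex as a receiver
    return Q, broadcast, receive

def knockout(adjacency_seq, vertex):
    """Remove one vertex from every snapshot (one step of a successive
    knockout test applied to the primary network)."""
    keep = [i for i in range(adjacency_seq[0].shape[0]) if i != vertex]
    return [A[np.ix_(keep, keep)] for A in adjacency_seq]
```

Repeatedly knocking out the most influential vertex and re-measuring is what distinguishes a lattice-like primary network, whose scores degrade gracefully, from a tree-like one, whose scores collapse.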
Abstract:
In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss on making large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, whereas use of the Gaussian-based test implies that they are irrational. The difference in the results is due to the nature of the underlying data.
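For orientation, here is the standard weak-form rationality check for fixed-event forecasts, which tests whether forecast revisions are predictable from past revisions; it is a plain regression with heteroskedasticity-robust errors, not the authors' pseudo-maximum-likelihood procedure:

```python
import numpy as np
import statsmodels.api as sm

def revision_efficiency_test(forecasts):
    """Under rationality, revisions r_t = f_t - f_{t-1} of a fixed-event
    forecast sequence should be unpredictable from past revisions, so the
    slope estimated below should be statistically indistinguishable from
    zero.  (Illustrative sketch, not the paper's exact test.)"""
    r = np.diff(np.asarray(forecasts, dtype=float))
    y, x = r[1:], r[:-1]
    fit = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HC1")  # robust errors
    return fit.params, fit.pvalues
```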
Abstract:
This paper presents a neuroscience-inspired, information-theoretic approach to motion segmentation. Robust motion segmentation is a fundamental first stage in many surveillance tasks. Widely adopted individual segmentation approaches are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion; as an alternative, this paper presents a new biologically inspired approach that computes the multivariate mutual information between multiple complementary motion segmentation outputs. Performance evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance.
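As an illustration of the central quantity, the sketch below computes one common multivariate generalization of mutual information (the total correlation) across several binary segmentation masks; the paper may use a different multivariate definition:

```python
import numpy as np

def total_correlation(masks):
    """Total correlation of k equally shaped boolean masks: the sum of the
    marginal entropies minus the joint entropy (in bits, >= 0)."""
    flat = np.stack([m.ravel().astype(int) for m in masks])  # (k, n_pixels)
    k, n = flat.shape
    # Joint distribution over the 2**k possible label tuples
    codes = np.zeros(n, dtype=int)
    for bits in flat:
        codes = codes * 2 + bits
    p = np.bincount(codes, minlength=2 ** k) / n
    h_joint = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # Sum of the marginal entropies
    h_marg = 0.0
    for bits in flat:
        q = np.bincount(bits, minlength=2) / n
        h_marg += -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return h_marg - h_joint
```

The intuition is that regions where the complementary segmenters agree carry most of the shared information, so no single failing method dominates the combined output.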
Abstract:
Analysis of the forecasts and hindcasts from the ECMWF 32-day forecast model reveals statistically significant skill in predicting weekly mean wind speeds over areas of Europe at lead times of at least 14–20 days. Previous research on wind speed predictability has focused on the short- to medium-range time scales, typically finding that forecasts lose all skill by the later part of the medium range. To the authors' knowledge, this research is the first to look beyond the medium-range time scale by taking weekly mean wind speeds, instead of averages at hourly or daily resolution, for the ECMWF monthly forecasting system. The operational forecasts correlate well with observations (~0.6) over the winters of 2008–12 for some areas of Europe. Hindcasts covering 20 winters show a more modest level of correlation but are still skillful. Additional analysis examines the probabilistic skill for the United Kingdom with the application of wind power forecasting in mind. It is also shown that there is forecast "value" for end users operating in a simple cost/loss ratio decision-making framework. End users who are sensitive to winter wind speed variability over the United Kingdom, Germany, and some other areas of Europe should therefore consider forecasts beyond the medium-range time scale, as the forecasts clearly contain useful information.
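The cost/loss "value" mentioned at the end can be made concrete with the standard relative-economic-value calculation; the following is a sketch under that simple framework, with invented function and variable names:

```python
import numpy as np

def relative_value(event, act, cost_loss_ratio):
    """Relative economic value V = (E_clim - E_fcst) / (E_clim - E_perf)
    for a user who pays cost C to protect and loses L on an unprotected
    event, with alpha = C / L.

    event : boolean array, whether the adverse event occurred
    act   : boolean array, whether the forecast triggered protection
    """
    event = np.asarray(event, bool)
    act = np.asarray(act, bool)
    alpha = cost_loss_ratio
    s = event.mean()                                     # climatological base rate
    e_fcst = alpha * act.mean() + (~act & event).mean()  # expense per unit L
    e_clim = min(alpha, s)    # best of always-protect / never-protect
    e_perf = alpha * s        # protect exactly when the event occurs
    return (e_clim - e_fcst) / (e_clim - e_perf) if e_clim > e_perf else np.nan
```

V > 0 means the forecast beats a climatology-only strategy for that user, which is the sense in which the monthly forecasts retain value beyond the medium range.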
Abstract:
We investigate alternative robust approaches to forecasting, using a new class of robust devices contrasted with equilibrium-correction models. Their forecasting properties are derived in the face of a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts, and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods are likely to perform well. The robust methods are applied to forecasting US GDP using autoregressive models, and also autoregressive models with factors extracted from a large dataset of macroeconomic variables. We consider forecasting performance over the Great Recession, and over an earlier, more quiescent period.
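For intuition about the contrast drawn above, here is a sketch comparing an AR(1) forecast (a stand-in for the equilibrium-correction class) with one simple member of the robust class, the double-differenced device, which carries the last observed change forward so that a recent location shift is not averaged away; the paper analyzes several such devices, and this is not its exact setup:

```python
import numpy as np

def ar1_forecast(y):
    """One-step forecast from an AR(1) with intercept, fitted by least
    squares; after a location shift this keeps pulling forecasts back
    toward the old equilibrium, producing systematic bias."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    b0, b1 = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return b0 + b1 * y[-1]

def double_differenced_forecast(y):
    """Robust device: forecast the next change by the last observed change,
    i.e. y_hat[T+1] = y[T] + (y[T] - y[T-1]), at the cost of a larger
    error variance in quiescent periods."""
    y = np.asarray(y, dtype=float)
    return y[-1] + (y[-1] - y[-2])
```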