983 results for sliding vector fields
Abstract:
A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source and utterance duration normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed in which normalization techniques capture source variation from both short and full-length development i-vectors, one based upon pooling (SUN-LDA-pooled) and the other on concatenation (SUN-LDA-concat) across the duration and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to improve over traditional LDA on NIST 08 truncated 10sec-10sec evaluation conditions, with SUN-LDA-concat achieving the highest gain: a relative improvement in EER of 8% for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
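As a rough illustration of the LDA step that the SUN-LDA variants build on, the sketch below trains a standard LDA projection from labelled development i-vectors in NumPy; pooling short and full-length i-vectors into `ivectors` before this step corresponds loosely to the "pooled" idea. The function name and the toy two-speaker data are invented for illustration, not taken from the paper:

```python
import numpy as np

def lda_projection(ivectors, labels, dims):
    """Train an LDA projection matrix from labelled i-vectors."""
    ivectors = np.asarray(ivectors, dtype=float)
    labels = np.asarray(labels)
    mean = ivectors.mean(axis=0)
    d = ivectors.shape[1]
    Sw = np.zeros((d, d))  # within-class (session) scatter
    Sb = np.zeros((d, d))  # between-class (speaker) scatter
    for c in np.unique(labels):
        X = ivectors[labels == c]
        mu = X.mean(axis=0)
        Sw += (X - mu).T @ (X - mu)
        diff = (mu - mean)[:, None]
        Sb += len(X) * (diff @ diff.T)
    # Leading eigenvectors of Sw^-1 Sb maximise between/within scatter ratio.
    w, V = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(w.real)[::-1]
    return V.real[:, order[:dims]]

# Toy usage: six 2-D "i-vectors" from two speakers, projected to 1 dimension.
X = np.array([[1.0, 0.1], [1.1, 0.0], [0.9, 0.2],
              [-1.0, 0.1], [-1.1, -0.1], [-0.9, 0.0]])
y = np.array([0, 0, 0, 1, 1, 1])
P = lda_projection(X, y, dims=1)
proj = X @ P  # the two speakers land on opposite sides of zero
```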
Abstract:
Techniques to improve the automated analysis of natural and spontaneous facial expressions have been developed. The outcome of the research has applications in several fields including national security (eg: expression invariant face recognition); education (eg: affect aware interfaces); mental and physical health (eg: depression and pain recognition).
Abstract:
An experimental dataset representing a typical flow field in a stormwater gross pollutant trap (GPT) was visualised. A technique was developed to apply the image-based flow visualisation (IBFV) algorithm to the raw dataset. Particle image velocimetry (PIV) software was previously used to capture the flow field data by tracking neutrally buoyant particles with a high speed camera. The dataset consisted of scattered 2D point velocity vectors and the IBFV visualisation facilitates flow feature characterisation within the GPT. The flow features played a pivotal role in understanding stormwater pollutant capture and retention behaviour within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate the possible flow paths of pollutants entering the GPT. The investigated flow paths were compared with the behaviour of pollutants monitored during experiments.
Abstract:
This study presented a novel method for the purification of three different grades of diatomite from China by a scrubbing technique using sodium hexametaphosphate (SHMP) as dispersant combined with centrifugation. The effects of pH value and dispersant amount on the grade of purified diatomite were studied and the optimum experimental conditions were obtained. The characterizations of the original diatomite and the derived products after purification were determined by scanning electron microscopy (SEM), X-ray diffraction (XRD), infrared spectroscopy (IR) and specific surface area analysis (BET). The results indicated that the pore size distribution, impurity content and bulk density of the purified diatomite were improved significantly. The dispersive effect of pH and SHMP on the separation of diatomite from clay minerals was discussed systematically through zeta potential tests. Additionally, a possible purification mechanism was proposed in light of the obtained experimental results.
Abstract:
Plants transformed with Agrobacterium frequently contain T-DNA concatemers with direct-repeat (d/r) or inverted-repeat (i/r) transgene integrations, and these repetitive T-DNA insertions are often associated with transgene silencing. To facilitate the selection of transgenic lines with simple T-DNA insertions, we constructed a binary vector (pSIV) based on the principle of hairpin RNA (hpRNA)-induced gene silencing. The vector is designed so that any transformed cells that contain more than one insertion per locus should generate hpRNA against the selective marker gene, leading to its silencing. These cells should, therefore, be sensitive to the selective agent and less likely to regenerate. Results from Arabidopsis and tobacco transformation showed that pSIV gave considerably fewer transgenic lines with repetitive insertions than did a conventional T-DNA vector (pCON). Furthermore, the transgene was more stably expressed in the pSIV plants than in the pCON plants. Rescue of plant DNA flanking sequences from pSIV plants was significantly more frequent than from pCON plants, suggesting that pSIV is potentially useful for T-DNA tagging. Our results revealed a perfect correlation between the presence of tail-to-tail inverted repeats and transgene silencing, supporting the view that read-through hpRNA transcript derived from i/r T-DNA insertions is a primary inducer of transgene silencing in plants. © CSIRO 2005.
Abstract:
We have tested a methodology for the elimination of the selectable marker gene after Agrobacterium-mediated transformation of barley. This involves segregation of the selectable marker gene away from the gene of interest following co-transformation using a plasmid carrying two T-DNAs, which were located adjacent to each other with no intervening region. A standard binary transformation vector was modified by insertion of a small section composed of an additional left and right T-DNA border, so that the selectable marker gene and the site for insertion of the gene of interest (GOI) were each flanked by a left and right border. Using this vector three different GOIs were transformed into barley. Analysis of transgene inheritance was facilitated by a novel and rapid assay utilizing PCR amplification from macerated leaf tissue. Co-insertion was observed in two thirds of transformants, and among these approximately one quarter had transgene inserts which segregated in the next generation to yield selectable marker-free transgenic plants. Insertion of non-T-DNA plasmid sequences was observed in only one of fourteen SMF lines tested. This technique thus provides a workable system for generating transgenic barley free from selectable marker genes, thereby obviating public concerns regarding proliferation of these genes.
Abstract:
The Smart Fields programme has been active in Shell over the last decade and has given large benefits. In order to understand the value and to underpin strategies for the future implementation programme, a study was carried out to quantify the benefits to date. This focused on actually achieved value, through increased production or lower costs. This provided an estimate of the total value achieved to date. Future benefits such as increased reserves or continued production gain were recorded separately. The paper describes the process followed in the benefits quantification. It identifies the key solutions and technologies and describes the mechanism used to understand the relation between solutions and value. Examples have been given of value from various assets around the world, in both existing fields and in green fields. Finally, the study provided the methodology for tracking of value. This helps Shell to estimate and track the benefits of the Smart Fields programme at company scale.
Abstract:
This paper proposes techniques to improve the performance of i-vector based speaker verification systems when only short utterances are available. Short-length utterance i-vectors vary with speaker, session variations, and the phonetic content of the utterance. Well-established methods such as linear discriminant analysis (LDA), source-normalized LDA (SN-LDA) and within-class covariance normalisation (WCCN) exist for compensating the session variation, but we have identified the variability introduced by phonetic content due to utterance variation as an additional source of degradation when short-duration utterances are used. To compensate for utterance variations in short i-vector speaker verification systems using cosine similarity scoring (CSS), we have introduced a short utterance variance normalization (SUVN) technique and a short utterance variance (SUV) modelling approach at the i-vector feature level. A combination of SUVN with LDA and SN-LDA is proposed to compensate for the session and utterance variations and is shown to improve performance over the traditional approach of using LDA and/or SN-LDA followed by WCCN. An alternative approach is also introduced, using a probabilistic linear discriminant analysis (PLDA) approach to directly model the SUV. The combination of SUVN, LDA and SN-LDA followed by SUV PLDA modelling provides an improvement over the baseline PLDA approach. We also show that for this combination of techniques, the utterance variation information needs to be artificially added to full-length i-vectors for PLDA modelling.
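The cosine similarity scoring (CSS) step referred to above is straightforward to sketch: score two i-vectors by the cosine of the angle between them after a session-compensating projection. This is a generic illustration, not the paper's full SUVN pipeline; the function name and the identity projection in the usage example are invented:

```python
import numpy as np

def css_score(w_enrol, w_test, P):
    """Cosine similarity score between two i-vectors after applying a
    session-compensating projection P (e.g. an LDA or SN-LDA matrix)."""
    a, b = P.T @ w_enrol, P.T @ w_test
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# With an identity projection, collinear i-vectors score 1 and
# orthogonal ones score 0.
P = np.eye(3)
s_same = css_score(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0]), P)
s_diff = css_score(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), P)
```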
Abstract:
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors and the mean absolute relative error have been employed to compare the performances of the models. It was concluded that the errors decrease after size reduction and that coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required lower computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
Abstract:
Much of the existing empirical research on journalism focuses largely on hard-news journalism, at the expense of its less traditional forms, particularly the soft-news areas of lifestyle and entertainment journalism. In focussing on one particular area of lifestyle journalism – the reporting of travel stories – this paper argues for renewed scholarly efforts in this increasingly important field. Travel journalism’s location at the intersection between information and entertainment, journalism and advertising, as well as its increasingly significant role in the representation of foreign cultures makes it a significant site for scholarly research. By reviewing existing research about travel journalism and examining in detail the special exigencies that constrain it, the article proposes a number of dimensions for future research into the production practices of travel journalism. These dimensions include travel journalism’s role in mediating foreign cultures, its market orientation, motivational aspects and its ethical standards.
Abstract:
Rakaposhi is a synchronous stream cipher built from three main components: a non-linear feedback shift register (NLFSR), a dynamic linear feedback shift register (DLFSR) and a non-linear filter function (NLF). The NLFSR consists of 128 bits and is initialised by the secret key K. The DLFSR holds 192 bits and is initialised by an initial vector (IV). The NLF takes 8-bit inputs and returns a single output bit. The work identifies weaknesses and properties of the cipher. The main observation is that the initialisation procedure has the so-called sliding property, which can be used to launch distinguishing and key recovery attacks. The distinguisher needs four observations of related (K,IV) pairs. The key recovery algorithm can discover the secret key K after observing 2^9 pairs of (K,IV). Based on the proposed related-key attack, the number of related (K,IV) pairs is 2^((128+192)/4). The cipher is further studied when the registers enter short cycles. When the NLFSR is set to all ones, the cipher degenerates to a linear feedback shift register with a non-linear filter; consequently, the initial state (and hence the secret key and IV) can be recovered with complexity 2^63.87. If the DLFSR is set to all zeros, the NLF reduces to a low-nonlinearity filter function; as a result, the cipher is insecure, allowing the adversary to distinguish it from a random cipher after 2^17 observations of keystream bits. There is also a key recovery algorithm that finds the secret key with complexity 2^54.
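The sliding property exploited here can be illustrated on a toy shift register (much smaller than Rakaposhi's actual registers, and with an invented tap configuration): if one loaded state is the other clocked forward by a step, the two keystreams are shifted copies of each other, which is exactly the kind of relation a related-(K,IV) distinguisher detects:

```python
def clock_once(state, taps):
    """One clock of a toy LFSR: shift left, feed back XOR of tapped bits."""
    fb = 0
    for t in taps:
        fb ^= state[t]
    return state[1:] + [fb]

def keystream(state, taps, n):
    """Output n bits, emitting the leading bit before each clock."""
    out = []
    for _ in range(n):
        out.append(state[0])
        state = clock_once(state, taps)
    return out

iv = [1, 0, 0, 1, 1]
taps = (0, 2)
ks = keystream(iv, taps, 16)
slid = keystream(clock_once(iv, taps), taps, 16)
# The "slid" loading reproduces the original keystream shifted by one bit,
# i.e. ks[1:] == slid[:15].
```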
Abstract:
At Crypto 2008, Shamir introduced a new algebraic attack called the cube attack, which allows us to solve black-box polynomials if we are able to tweak the inputs by varying an initialization vector. In a stream cipher setting where the filter function is known, we can extend it to the cube attack with annihilators: By applying the cube attack to Boolean functions for which we can find low-degree multiples (equivalently annihilators), the attack complexity can be improved. When the size of the filter function is smaller than the LFSR, we can improve the attack complexity further by considering a sliding window version of the cube attack with annihilators. Finally, we extend the cube attack to vectorial Boolean functions by finding implicit relations with low-degree polynomials.
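The core cube-attack idea can be shown on a toy polynomial: XOR-summing a black-box output bit over all assignments of a chosen set of IV bits (the "cube") cancels every term that does not contain all cube variables, leaving a low-degree "superpoly" in the key bits. The polynomial f below is invented purely for illustration:

```python
from itertools import product

def f(k, v):
    # Toy "black-box" output bit:
    # f = v0*v1*k0 XOR v0*k1 XOR k0*k1 XOR v1
    return (v[0] & v[1] & k[0]) ^ (v[0] & k[1]) ^ (k[0] & k[1]) ^ v[1]

def cube_sum(k, n_iv_bits):
    """XOR f over every assignment of the cube (all IV bits here)."""
    s = 0
    for v in product((0, 1), repeat=n_iv_bits):
        s ^= f(k, v)
    return s

# Summing over the cube {v0, v1} isolates the superpoly k0: every key
# yields a cube sum equal to its first bit, so one sum reveals k0.
results = [(k, cube_sum(k, 2)) for k in product((0, 1), repeat=2)]
```

The annihilator and sliding-window refinements in the abstract reduce the cost of this summation when the filter function admits low-degree multiples.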
Abstract:
Suppose two parties, holding vectors A = (a_1, a_2, ..., a_n) and B = (b_1, b_2, ..., b_n) respectively, wish to know whether a_i > b_i for all i, without disclosing any private input. This problem is called the vector dominance problem, and is closely related to the well-studied problem of securely comparing two numbers (Yao's millionaires problem). In this paper, we propose several protocols for this problem, which improve upon existing protocols in round complexity or communication/computation complexity.
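As a plaintext reference for the predicate being computed (not a secure protocol; the point of the paper is to obtain this single bit without either party revealing its vector), the function name below is invented:

```python
def dominates(a, b):
    """Vector dominance predicate: True iff a_i > b_i for every i.

    A secure protocol outputs this one bit while keeping both
    input vectors private.
    """
    assert len(a) == len(b)
    return all(x > y for x, y in zip(a, b))
```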