911 results for stream
Abstract:
The feasibility of a stable soliton transmission system was demonstrated using a practical dispersion map in conjunction with in-line nonlinear optical loop mirrors (NOLMs). The system's performance was examined at a 40 Gbit/s data rate in terms of the maximum propagation distance corresponding to a bit error rate of more than 10⁻⁹. The bit error rate was estimated by means of the standard Q-factor.
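For reference, the standard Q-factor estimate mentioned above maps onto a bit error rate through the complementary error function; a minimal sketch (the Q values below are illustrative, not results from the study):

```python
import math

def ber_from_q(q: float) -> float:
    """Standard Gaussian-noise estimate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# A Q-factor of about 6 corresponds to the BER = 1e-9 threshold commonly used
# to define the maximum propagation distance.
for q in (5.0, 6.0, 7.0):
    print(f"Q = {q:.1f}  ->  BER ~ {ber_from_q(q):.2e}")
```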
Abstract:
We tested the hypothesis that the differences in performance between developmental dyslexics and controls on visual tasks are specific for the detection of dynamic stimuli. We found that dyslexics were less sensitive than controls to coherent motion in dynamic random dot displays. However, their sensitivity to control measures of static visual form coherence was not significantly different from that of controls. This dissociation of dyslexics' performance on measures that are suggested to tap the sensitivity of different extrastriate visual areas provides evidence for an impairment specific to the detection of dynamic properties of global stimuli, perhaps resulting from selective deficits in dorsal stream functions. © 2001 Lippincott Williams & Wilkins.
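To illustrate the kind of stimulus involved, the sketch below updates one frame of a random dot display in which a given fraction of dots moves coherently; the dot count, step size, and coherence level are hypothetical and not taken from the study:

```python
import numpy as np

def update_dots(xy: np.ndarray, coherence: float, rng: np.random.Generator,
                step: float = 0.02) -> np.ndarray:
    """Move a fraction `coherence` of dots rightward; the rest step in random directions."""
    n = xy.shape[0]
    is_coherent = rng.random(n) < coherence
    angles = np.where(is_coherent, 0.0, rng.uniform(0.0, 2.0 * np.pi, n))
    xy = xy + step * np.column_stack((np.cos(angles), np.sin(angles)))
    return np.mod(xy, 1.0)  # wrap positions to keep dots inside a unit square

rng = np.random.default_rng(0)
dots = rng.random((100, 2))                        # 100 dots at random positions in [0, 1)^2
dots = update_dots(dots, coherence=0.3, rng=rng)   # e.g. 30% motion coherence
```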
Abstract:
A sequence of constant-frequency tones can promote streaming in a subsequent sequence of alternating-frequency tones, but why this effect occurs is not fully understood and its time course has not been investigated. Experiment 1 used a 2.0-s-long constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence substantially reduced reported test-sequence segregation. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets build-up, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only three tones—this contrasts with the more gradual build-up typically observed for alternating-frequency sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ~10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an ongoing, preestablished stream, and that a deviant tone may reduce segregation by disrupting this capture. These findings offer new insight into the dynamics of stream segregation, and have implications for the neural basis of streaming and the role of attention in stream formation. (PsycINFO Database Record (c) 2013 APA, all rights reserved)
Abstract:
Three experiments investigated the dynamics of auditory stream segregation. Experiment 1 used a 2.0-s constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence reduced reported test-sequence segregation substantially. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets buildup, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only 3 tone cycles - this contrasts with the more gradual build-up typically observed for alternating sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ∼10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an on-going, pre-established stream and that a deviant tone may reduce segregation by disrupting this capture. © 2013 Acoustical Society of America.
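A minimal sketch of the stimulus structure described in the two abstracts above: a constant-frequency (low-tone) inducer followed by a test sequence of alternating low and high tones. The frequencies, tone/silence durations, and sample rate here are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

FS = 44100  # sample rate (Hz); assumed, not stated in the abstracts

def tone(freq_hz: float, dur_s: float) -> np.ndarray:
    """A pure tone with 10-ms raised-cosine onset/offset ramps."""
    t = np.arange(int(FS * dur_s)) / FS
    y = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.01 * FS)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

LOW, HIGH = 1000.0, 1414.0  # illustrative low/high frequencies (Hz)

def cycle(freq_hz: float) -> np.ndarray:
    """One 0.2-s cycle: a 0.1-s tone followed by 0.1 s of silence (assumed timing)."""
    return np.concatenate([tone(freq_hz, 0.1), np.zeros(int(0.1 * FS))])

inducer = np.concatenate([cycle(LOW) for _ in range(10)])       # ~2.0-s constant-frequency inducer
test = np.concatenate([cycle(LOW if i % 2 == 0 else HIGH)       # alternating low/high test tones
                       for i in range(6)])                      # ~1.2-s test sequence
sequence = np.concatenate([inducer, test])
```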
Abstract:
We examined the impact of permafrost on dissolved organic matter (DOM) composition in the Caribou-Poker Creeks Research Watershed (CPCRW), a watershed underlain by discontinuous permafrost in interior Alaska. We analyzed long-term data from watersheds underlain by varying degrees of permafrost, sampled springs and thermokarsts, used fluorescence spectroscopy, and measured the bioavailability of dissolved organic carbon (DOC). Permafrost-driven patterns in hydrology and vegetation influenced DOM patterns in streams, with the stream draining the high-permafrost watershed having higher DOC and dissolved organic nitrogen (DON) concentrations, higher DOC:DON and greater specific ultraviolet absorbance (SUVA) than the streams draining the low- and medium-permafrost watersheds. Streams, springs and thermokarsts exhibited a wide range of DOC and DON concentrations (1.5–37.5 mg C/L and 0.14–1.26 mg N/L, respectively), DOC:DON (7.1–42.8) and SUVA (1.5–4.7 L mgC⁻¹ m⁻¹). All sites had a high proportion of humic components, a low proportion of protein components, and a low fluorescence index value (1.3–1.4), generally consistent with terrestrially derived DOM. Principal component analysis revealed distinct groups in our fluorescence data determined by diagenetic processing and DOM source. The proportion of bioavailable DOC ranged from 2 to 35%, with the proportion of tyrosine- and tryptophan-like fluorophores in the DOM being a major predictor of DOC loss (p < 0.05, R² = 0.99). Our results indicate that the degradation of permafrost in CPCRW will result in a decrease in DOC and DON concentrations, a decline in DOC:DON, and a reduction in SUVA, possibly accompanied by
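For context, specific ultraviolet absorbance is commonly computed as the decadal UV absorbance at 254 nm normalized by cuvette path length and DOC concentration; a minimal sketch with illustrative values (not data from the study):

```python
def suva254(abs_254: float, path_length_cm: float, doc_mg_per_l: float) -> float:
    """SUVA254 in L mgC^-1 m^-1: absorbance per metre of optical path, divided by DOC (mg C/L)."""
    absorbance_per_m = abs_254 / (path_length_cm / 100.0)
    return absorbance_per_m / doc_mg_per_l

# Illustrative example: A254 = 0.35 in a 1-cm cuvette with DOC = 10 mg C/L -> SUVA254 = 3.5
print(suva254(0.35, 1.0, 10.0))
```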
Abstract:
Ensemble Stream Modeling and Data-cleaning are sensor-information-processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble with the desired quality. The framework for the investigation is an existing data-mining tool. First, to accommodate feature extraction for events such as a bush or natural forest fire, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since non-descriptive features do not yield good results, we resort to temporal features. By doing so we carefully eliminate averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes against single or multiple target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false-alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A variance-sensitive measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data-cleaning, together with the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensors, led to the formation of streams for sensor-enabled applications. This further motivates the novelty of stream-quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
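The F-measure referred to above is the harmonic mean of precision and recall; a minimal sketch with illustrative confusion counts (not figures from the study):

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion counts for detected fire events -> ~0.842
print(f_measure(tp=80, fp=20, fn=10))
```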
Abstract:
With the advent of the Internet, the number of users with real access to the network and the ability to share information with the whole world has grown continuously over the years. With the introduction of social media, moreover, users have come to transfer a large amount of personal information onto the web, making it available to companies. In addition, the Internet of Things, in which sensors and machines act as agents on the network, gives each user a larger number of devices directly connected to one another and to the global network. In proportion to these factors, the volume of data being generated and stored is also growing dramatically, giving rise to a new concept: Big Data. As a consequence, there is a need for new tools that can exploit the computing power offered today by more complex architectures comprising, under a single system, a set of hosts useful for the analysis. In this respect, such a vast quantity of data, routine when speaking of Big Data, combined with equally high transmission and transfer speeds, makes data storage difficult, especially when the storage technology is a traditional DBMS. A classic relational solution, in fact, would allow data to be processed only on request, producing delays, significant latency, and the inevitable loss of fractions of the dataset. It is therefore necessary to turn to new technologies and tools suited to requirements different from classic batch analysis. In particular, this thesis addresses Data Stream Processing, designing and prototyping a system based on Apache Storm, with cyber security chosen as the application domain.
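A minimal Python sketch of the spout/bolt dataflow pattern that Apache Storm implements (generic Python, not the Storm API; the event format and the brute-force-login detection rule are hypothetical stand-ins for the thesis's cyber-security use case):

```python
from collections import deque
from typing import Iterable, Iterator

# "Spout": yields raw events from a source (here an in-memory iterable stands in
# for e.g. a firewall or authentication log feed).
def spout(raw_events: Iterable[dict]) -> Iterator[dict]:
    for event in raw_events:
        yield event

# "Bolt": flags source IPs producing too many failed logins within a sliding window.
def failed_login_bolt(events: Iterator[dict], window: int = 100,
                      threshold: int = 5) -> Iterator[str]:
    recent: deque = deque(maxlen=window)
    for event in events:
        recent.append(event)
        if event.get("status") == "fail":
            failures = sum(1 for e in recent
                           if e.get("src") == event["src"] and e.get("status") == "fail")
            if failures >= threshold:
                yield f"ALERT: {event['src']} exceeded {threshold} failed logins"

sample = [{"src": "10.0.0.7", "status": "fail"}] * 6
for alert in failed_login_bolt(spout(sample)):
    print(alert)
```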