908 results for Real systems
Abstract:
Based on a comprehensive theoretical optical orthogonal frequency division multiplexing (OOFDM) system model rigorously verified by comparing numerical results with end-to-end real-time experimental measurements at 11.25 Gb/s, detailed explorations are undertaken, for the first time, of the impacts of various physical factors on OOFDM system performance over directly modulated DFB laser (DML)-based, intensity modulation and direct detection (IMDD), single-mode fibre (SMF) systems without in-line optical amplification or chromatic dispersion compensation. It is shown that the low extinction ratio (ER) of the DML-modulated OOFDM signal is the predominant factor limiting the maximum achievable optical power budget, and that the subcarrier intermixing effect associated with square-law photon detection in the receiver reduces the optical power budget by at least 1 dB. Results also indicate that, immediately after the DML in the transmitter, the insertion of a 0.02 nm bandwidth optical Gaussian bandpass filter with a 0.01 nm wavelength offset with respect to the optical carrier wavelength can enhance the OOFDM signal ER by approximately 1.24 dB, resulting in a 7 dB optical power budget improvement at a total channel BER of 1 × 10^-3.
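A rough way to see why even a modest ER gain matters is the standard textbook power-penalty formula for a finite extinction ratio, penalty = 10*log10((r+1)/(r-1)), where r is the linear ER. The short Python sketch below evaluates it for an assumed low starting ER and for the same ER improved by 1.24 dB; it is illustrative only and is not the paper's system model, whose 7 dB figure comes from the full, experimentally verified OOFDM link.

    import numpy as np

    def er_power_penalty_db(er_db):
        """Textbook penalty of a finite extinction ratio relative to an ideal
        on-off signal (average-power-limited receiver): 10*log10((r+1)/(r-1))."""
        r = 10 ** (er_db / 10.0)          # linear extinction ratio P1/P0
        return 10 * np.log10((r + 1) / (r - 1))

    # Assumed starting ER of 3 dB, purely illustrative; the paper's 7 dB budget gain
    # comes from its verified OOFDM system model, not from this formula.
    for er in (3.0, 3.0 + 1.24):
        print(f"ER = {er:.2f} dB -> ER-induced penalty = {er_power_penalty_db(er):.2f} dB")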
Abstract:
The feasibility of utilising low-cost, uncooled vertical cavity surface-emitting lasers (VCSELs) as intensity modulators in real-time optical OFDM (OOFDM) transceivers is experimentally explored, for the first time, in terms of achievable signal bit rates, the physical mechanisms limiting transceiver performance, and performance robustness. End-to-end real-time transmission of 11.25 Gb/s 64-QAM-encoded OOFDM signals over simple intensity modulation and direct detection, 25 km SSMF PON systems is experimentally demonstrated with a power penalty of 0.5 dB. The low extinction ratio of the VCSEL intensity-modulated OOFDM signal is identified as the dominant factor determining the maximum obtainable transmission performance. Experimental investigations indicate that, in addition to enhancing transceiver performance, adaptive power loading can also significantly improve the robustness of the system performance to variations in VCSEL operating conditions. As a direct result, the aforementioned capacity-versus-reach performance is retained over a wide range of VCSEL bias currents (4.5 mA to 9 mA) and driving voltages (275 mVpp to 320 mVpp). This work is of great value as it demonstrates the possibility of future mass production of cost-effective OOFDM transceivers for PON applications.
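As an illustration of the adaptive power loading idea (the exact loading algorithm used in the transceiver is not specified here), the sketch below redistributes a fixed transmit power across OFDM subcarriers in inverse proportion to their estimated SNRs, so that weak subcarriers receive more power; the function name and interface are assumptions for illustration.

    import numpy as np

    def equalising_power_loading(snr_per_subcarrier, total_power=1.0):
        """Scale per-subcarrier transmit power inversely with estimated SNR while
        keeping the total power fixed; a simple stand-in for adaptive power loading."""
        snr = np.asarray(snr_per_subcarrier, dtype=float)
        weights = 1.0 / snr                      # weaker subcarriers get more power
        return total_power * weights / weights.sum()

    # Example: subcarriers rolled off by the modulator response get boosted.
    print(equalising_power_loading([20.0, 15.0, 10.0, 6.0]))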
Abstract:
This paper describes a framework for the evaluation of spoken dialogue systems. Typically, evaluation of dialogue systems is performed in a controlled test environment with carefully selected and instructed users. However, this approach is very demanding. An alternative is to recruit a large group of users who evaluate the dialogue systems in a remote setting under virtually no supervision. Crowdsourcing technology, for example Amazon Mechanical Turk (AMT), provides an efficient way of recruiting subjects. This paper describes an evaluation framework for spoken dialogue systems using AMT users and compares the obtained results with a recent trial in which the systems were tested by locally recruited users. The results suggest that the use of crowdsourcing technology is feasible and that it can provide reliable results.
Abstract:
Predictability, the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements, is a crucial, highly desirable property of responsive embedded systems. This paper gives an overview of a development methodology for responsive systems which enhances predictability by eliminating potential hazards resulting from physically unsound specifications. The backbone of our methodology is a formalism that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Unrealistic systems, possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing, cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems, not to mention the elimination of potential hazards that would otherwise have gone unnoticed.
Abstract:
Load balancing is often used to ensure that nodes in a distributed system are equally loaded. In this paper, we show that for real-time systems, load balancing is not desirable. In particular, we propose a new load-profiling strategy that allows the nodes of a distributed system to be unequally loaded. Using load profiling, the system attempts to distribute the load amongst its nodes so as to maximize the chances of finding a node that would satisfy the computational needs of incoming real-time tasks. To that end, we describe and evaluate a distributed load-profiling protocol for dynamically scheduling time-constrained tasks in a loosely coupled distributed environment. When a task is submitted to a node, the scheduling software tries to schedule the task locally so as to meet its deadline. If that is not feasible, it tries to locate another node where this could be done with a high probability of success, while attempting to maintain an overall load profile for the system. Nodes in the system inform each other about their state using a combination of multicasting and gossiping. The performance of the proposed protocol is evaluated via simulation and contrasted with other dynamic scheduling protocols for real-time distributed systems. Based on our findings, we argue that keeping a diverse availability profile and using passive bidding (through gossiping) are both advantageous to distributed scheduling for real-time systems.
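A minimal sketch of the admission logic described above (try locally first, otherwise pick a remote node from the gossiped state) is given below; the utilisation-based schedulability test, the node and task fields, and the random choice among feasible candidates are illustrative simplifications, not the protocol's actual bidding or profiling rules.

    import random

    def fits(task, node, capacity=1.0):
        """Simplified schedulability test: the task's utilisation must fit on the node."""
        return node["load"] + task["utilisation"] <= capacity

    def dispatch(task, local_node, gossiped_nodes):
        """Schedule locally if the deadline can be met; otherwise forward the task to
        a node whose last gossiped load leaves enough headroom. Returns the chosen
        node id, or None if no node is likely to meet the deadline."""
        if fits(task, local_node):
            local_node["load"] += task["utilisation"]
            return local_node["id"]
        candidates = [n for n in gossiped_nodes if fits(task, n)]
        if not candidates:
            return None
        target = random.choice(candidates)       # spread load to keep the profile diverse
        target["load"] += task["utilisation"]
        return target["id"]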
Abstract:
The design of programs for broadcast disks which incorporate real-time and fault-tolerance requirements is considered. A generalized model for real-time fault-tolerant broadcast disks is defined. It is shown that designing programs for broadcast disks specified in this model is closely related to the scheduling of pinwheel task systems. Some new results in pinwheel scheduling theory are derived, which facilitate the efficient generation of real-time fault-tolerant broadcast disk programs.
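The pinwheel connection can be made concrete with a toy scheduler: item i must be served at least once in every windows[i] consecutive slots (for a broadcast disk, "served" means the item is broadcast in that slot). The greedy, earliest-deadline-first generator below is only an illustration, not the construction derived in the paper, and it can fail on instances that are in fact schedulable.

    def pinwheel_schedule(windows, length):
        """Attempt a pinwheel schedule of `length` slots: item i must appear at least
        once in every `windows[i]` consecutive slots. Greedy earliest-deadline-first
        choice, with ties broken toward the item with the larger window; returns the
        slot sequence, or None if some window elapses without service."""
        deadlines = [w - 1 for w in windows]       # last slot of each item's first window
        schedule = []
        for t in range(length):
            i = min(range(len(windows)), key=lambda k: (deadlines[k], -windows[k]))
            if deadlines[i] < t:                   # a window already went unserved
                return None
            schedule.append(i)
            deadlines[i] = t + windows[i]          # must appear again within this many slots
        return schedule

    print(pinwheel_schedule([2, 4, 8], 12))        # item 0 every 2 slots, item 1 every 4, item 2 every 8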
Abstract:
To investigate the neural systems that contribute to the formation of complex, self-relevant emotional memories, dedicated fans of rival college basketball teams watched a competitive game while undergoing functional magnetic resonance imaging (fMRI). During a subsequent recognition memory task, participants were shown video clips depicting plays of the game, stemming either from previously viewed game segments (targets) or from non-viewed portions of the same game (foils). After an old-new judgment, participants provided emotional valence and intensity ratings of the clips. A data-driven approach was first used to decompose the fMRI signal acquired during free viewing of the game into spatially independent components. Correlations were then calculated between the identified components and post-scanning emotion ratings for successfully encoded targets. Two components were correlated with intensity ratings, including temporal lobe regions implicated in memory and emotional functions, such as the hippocampus and amygdala, as well as a midline fronto-cingulo-parietal network implicated in social cognition and self-relevant processing. These data were supported by a general linear model analysis, which revealed additional valence effects in fronto-striatal-insular regions when plays were divided into positive and negative events according to the fan's perspective. Overall, these findings contribute to our understanding of how emotional factors impact distributed neural systems to successfully encode dynamic, personally relevant event sequences.
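To make the data-driven step concrete, the sketch below runs spatial ICA on a (time x voxels) BOLD matrix with scikit-learn and correlates each component's time course, sampled at the encoded-target volumes, with the intensity ratings. The array names, event indexing, and the choice of FastICA with 20 components are assumptions for illustration, not the paper's exact pipeline (which also included a GLM analysis for valence effects).

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.decomposition import FastICA

    def component_intensity_correlations(bold, target_volumes, intensity, n_components=20):
        """Spatial ICA on free-viewing fMRI data (bold: time x voxels), then Pearson
        correlation between each component's time course at the volumes of successfully
        encoded targets and the post-scan intensity ratings."""
        ica = FastICA(n_components=n_components, random_state=0)
        spatial_maps = ica.fit_transform(bold.T)   # voxels x components: independent maps
        timecourses = ica.mixing_                  # time x components: one course per map
        results = []
        for c in range(n_components):
            r, p = pearsonr(timecourses[target_volumes, c], intensity)
            results.append((c, r, p))
        return results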
Abstract:
Virtual manufacturing and design assessment increasingly involve the simulation of interacting phenomena, i.e. multi-physics, an activity which is very computationally intensive. This chapter describes an attempt to address the parallelisation issues associated with a multi-physics simulation approach based upon a range of compatible procedures operating on one mesh using a single database: the distinct physics solvers can operate separately or coupled on sub-domains of the whole geometric space. Moreover, the finite volume unstructured mesh solvers use different discretization schemes (and, particularly, different ‘nodal’ locations and control volumes). A two-level approach to the parallelization of this simulation software is described: the code is restructured into parallel form on the basis of the mesh partitioning alone, that is, without regard to the physics. At run time, however, the mesh is partitioned to achieve a load balance by considering the load per node/element across the whole domain. The latter is, of course, determined by the problem-specific physics at a particular location.
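The run-time load-balancing step can be pictured with a small sketch: each mesh element carries a weight reflecting the physics solved at its location, and elements are assigned to partitions so that the weighted loads come out roughly equal. The greedy assignment below is purely illustrative; a production code would use a graph partitioner that also minimises inter-partition communication, which this sketch ignores.

    def weighted_partition(element_weights, n_parts):
        """Greedy load-balanced partitioning: assign elements, heaviest first, to the
        currently lightest partition. `element_weights[i]` is the per-element cost
        implied by the physics active at element i."""
        loads = [0.0] * n_parts
        assignment = {}
        for e in sorted(range(len(element_weights)), key=lambda i: -element_weights[i]):
            p = min(range(n_parts), key=lambda k: loads[k])
            assignment[e] = p
            loads[p] += element_weights[e]
        return assignment, loads

    # Example with assumed costs: heat-only elements cost 1, flow+heat elements cost 3.
    assignment, loads = weighted_partition([1, 3, 1, 3, 1, 3, 1, 1], n_parts=2)
    print(loads)                                   # roughly equal weighted load per partition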