935 results for LATENCY
Abstract:
Wireless Mesh Networks (WMNs), based on commodity hardware, present a promising technology for a wide range of applications due to their self-configuring and self-healing capabilities, as well as their low equipment and deployment costs. One of the key challenges WMN technology faces is its limited capacity and scalability due to co-channel interference, which is typical of multi-hop wireless networks. A simple and relatively low-cost approach to this problem is the use of multiple wireless network interfaces (radios) per node. Operating the radios on distinct orthogonal channels permits effective use of the frequency spectrum, thereby reducing interference and contention. In this paper, we evaluate the performance of the multi-radio Ad-hoc On-demand Distance Vector (AODV) routing protocol with a specific focus on hybrid WMNs. Our simulation results show that under high mobility and traffic load, multi-radio AODV offers superior performance compared with its single-radio counterpart. We believe that multi-radio AODV is a promising candidate for WMNs that must serve a large number of mobile clients with low-latency and high-bandwidth requirements.
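To make the orthogonal-channel idea concrete, here is a minimal illustrative sketch, not taken from the paper: successive hops of a mesh path are pinned to distinct non-overlapping 2.4 GHz channels so that adjacent links can carry traffic concurrently instead of contending on one shared channel. All names and the assignment policy are assumptions for illustration.

```python
# Illustrative sketch (not the paper's protocol): assign successive hops of
# a mesh path to distinct orthogonal channels so neighbouring links can be
# active at the same time rather than contending on a single channel.

ORTHOGONAL_CHANNELS = [1, 6, 11]  # non-overlapping 802.11b/g channels

def assign_hop_channels(path):
    """Map each hop (u, v) of `path` to a channel, cycling through the
    orthogonal set, so adjacent hops never share a channel."""
    hops = list(zip(path, path[1:]))
    return {hop: ORTHOGONAL_CHANNELS[i % len(ORTHOGONAL_CHANNELS)]
            for i, hop in enumerate(hops)}

path = ["A", "B", "C", "D"]
for (u, v), ch in assign_hop_channels(path).items():
    print(f"{u} -> {v} on channel {ch}")
# A -> B on channel 1, B -> C on channel 6, C -> D on channel 11: all
# three hops can transmit simultaneously, which a single-radio,
# single-channel deployment cannot do.
```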
Abstract:
Retrieving large amounts of information over wide area networks, including the Internet, is problematic due to response latency, the lack of direct memory access to data-serving resources, and fault tolerance concerns. This paper describes a design pattern for handling the results of queries that return large amounts of data. Typically, such queries are made by a client process across a wide area network (or the Internet), through one or more middle tiers, to a relational database residing on a remote server. The solution combines several data retrieval strategies: iterators that traverse the data sets and present an appropriate level of abstraction to the client, double-buffering of data subsets, multi-threaded data retrieval, and query slicing. This design has recently been implemented and incorporated into the framework of a commercial software product developed at Oracle Corporation.
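The paper's actual implementation is not reproduced here, but the combination it names can be sketched briefly: the query is cut into fixed-size slices, a background thread prefetches the next slice (double-buffering) while the client consumes the current one, and an iterator hides all of this behind a plain loop. In this sketch, `fetch_slice` is a hypothetical stand-in for the real remote or middle-tier query.

```python
# Hypothetical sketch of the strategies named above: query slicing,
# double-buffering via a one-slot queue, a background fetch thread, and an
# iterator abstraction for the client. `fetch_slice` stands in for the
# remote database call and is purely illustrative.
import threading
from queue import Queue

def fetch_slice(offset, size):
    """Placeholder for e.g. 'SELECT ... LIMIT :size OFFSET :offset'."""
    return list(range(offset, offset + size)) if offset < 100 else []

def sliced_results(slice_size=25):
    buffer = Queue(maxsize=1)             # holds the one prefetched slice

    def producer():
        offset = 0
        while True:
            chunk = fetch_slice(offset, slice_size)
            buffer.put(chunk)             # blocks until consumer catches up
            if not chunk:                 # empty slice ends the stream
                return
            offset += slice_size

    threading.Thread(target=producer, daemon=True).start()
    while True:
        chunk = buffer.get()              # next slice, already fetched
        if not chunk:
            return
        yield from chunk                  # client sees a flat iterator

for row in sliced_results():
    pass  # process each row; prefetching overlaps with this work
```

The one-slot queue is what makes this double-buffering rather than unbounded prefetch: at most one slice sits ready while another is being consumed, bounding memory on the client.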
Abstract:
Fuzzy signal detection analysis can be a useful complement to traditional signal detection theory analysis, particularly in applied settings. For example, traffic situations are better conceived as lying on a continuum from no potential for hazard to high potential, rather than as either having potential or not. This study examined the relative contributions of sensitivity and response bias to differences in the hazard perception performance of novice and experienced drivers, and the effect of a training manipulation. Novice and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and the other half of the novices remained untrained. Participants completed a hazard perception test and rated the potential for hazard in occluded scenes. Response latencies on the hazard perception test replicated previous findings of experienced/novice and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
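For readers unfamiliar with the technique, here is a minimal sketch of fuzzy signal detection computations in the spirit of Parasuraman, Masalonis and Hancock (2000), which this literature typically follows; the example data are invented. Signal membership s and response membership r both lie in [0, 1], so a traffic scene can be "0.7 hazardous" rather than strictly hazard or no-hazard.

```python
# Minimal fuzzy SDT sketch (invented example data). Each trial contributes
# partial degrees of hit, miss, false alarm and correct rejection rather
# than falling into exactly one cell of the 2x2 table.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def fuzzy_sdt(signal, response):
    H  = sum(min(s, r) for s, r in zip(signal, response))       # fuzzy hits
    FA = sum(max(r - s, 0) for s, r in zip(signal, response))   # fuzzy false alarms
    hit_rate = H / sum(signal)
    fa_rate  = FA / sum(1 - s for s in signal)
    d_prime = z(hit_rate) - z(fa_rate)            # sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))         # response bias
    return d_prime, c

signal   = [0.9, 0.7, 0.2, 0.1, 0.8]  # rated potential for hazard
response = [0.8, 0.9, 0.4, 0.2, 0.7]  # strength of "hazard" response
d, c = fuzzy_sdt(signal, response)
print(f"d' = {d:.2f}, c = {c:.2f}")   # negative c indicates a liberal bias
```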
Abstract:
The Internet of Things (IoT) consists of a worldwide "network of networks" composed of billions of interconnected heterogeneous devices, denoted as things or "Smart Objects" (SOs). Significant research effort has been dedicated to porting the experience gained in the design of the Internet to the IoT, with the goal of maximizing interoperability, using the Internet Protocol (IP) and designing specific protocols like the Constrained Application Protocol (CoAP), which have been widely accepted as drivers for the effective evolution of the IoT. This first wave of standardization can be considered successfully concluded, and we can assume that communication with and between SOs is no longer an issue. At this point, to favor the widespread adoption of the IoT, it is crucial to provide mechanisms that facilitate IoT data management and the development of services enabling real interaction with things. Several reference IoT scenarios have real-time or predictable-latency requirements and deal with billions of devices collecting and sending enormous quantities of data. These features create a need for architectures specifically designed to handle this scenario, here denoted as "Big Stream". In this thesis, a new Big Stream Listener-based Graph architecture is proposed. Another important step is to build more applications around the Web model, bringing about the Web of Things (WoT). Since several IoT testbeds have focused on evaluating lower-layer communication aspects, this thesis proposes a new WoT testbed that allows developers to work at a high level of abstraction, without worrying about low-level details. Finally, an innovative SO-driven User Interface (UI) generation paradigm for mobile applications in heterogeneous IoT networks is proposed, to simplify interaction between users and things.
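The thesis's actual architecture is not reproduced here, but the listener-based graph idea can be sketched under simple assumptions: processing nodes register as listeners of upstream nodes, and each datum is pushed through the graph as soon as it arrives, rather than being stored and polled, which keeps per-item latency low. All class and node names below are illustrative.

```python
# Illustrative sketch (not the thesis's actual API) of a listener-based
# stream-processing graph: data is pushed to registered listeners on
# arrival instead of being polled from storage.
class StreamNode:
    def __init__(self, name, transform=lambda x: x):
        self.name = name
        self.transform = transform
        self.listeners = []           # downstream nodes interested in us

    def listen_to(self, upstream):
        upstream.listeners.append(self)

    def on_data(self, item):
        out = self.transform(item)
        if out is None:               # a node may filter items out
            return
        if not self.listeners:
            print(f"{self.name} received: {out}")
        for node in self.listeners:
            node.on_data(out)         # push, don't poll

# sensor readings -> Celsius conversion -> alert filter -> sink
source = StreamNode("source")
to_celsius = StreamNode("to_celsius", lambda f: (f - 32) * 5 / 9)
alerts = StreamNode("alerts", lambda c: c if c > 30 else None)
sink = StreamNode("sink")

to_celsius.listen_to(source)
alerts.listen_to(to_celsius)
sink.listen_to(alerts)

for reading in (68.0, 95.0, 104.0):   # raw readings in Fahrenheit
    source.on_data(reading)            # only the two hot readings reach the sink
```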
Abstract:
Visual perception depends not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers were free to delay target presentation, by any amount required, until they judged their attentional focus to be complete. Our results show that discrimination performance improves significantly when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial to trial (range 860 ms to 12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model.
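The link between a linear rise-to-threshold account and Gaussian reciprocal latencies can be illustrated with a small simulation in the spirit of LATER-type models (parameter values below are invented, not the paper's): on each trial a decision signal climbs at a rate drawn from a Gaussian and triggers a response when it crosses a fixed threshold, so latency = threshold / rate, and the reciprocal of latency inherits the Gaussian shape of the rate.

```python
# Small simulation of a linear rise-to-threshold account (LATER-style;
# invented parameters). Latency = threshold / rate, so 1/latency simply
# recovers the Gaussian distribution of the rise rate.
import random
from statistics import mean, stdev

random.seed(1)
THRESHOLD = 1.0                 # arbitrary criterion level
RATE_MU, RATE_SD = 0.5, 0.12    # mean and sd of the rise rate (1/s)

rates = [random.gauss(RATE_MU, RATE_SD) for _ in range(10_000)]
latencies = [THRESHOLD / r for r in rates if r > 0]   # seconds
reciprocals = [1 / t for t in latencies]              # equals the rate

print(f"latency: mean={mean(latencies):.2f}s, sd={stdev(latencies):.2f}s (skewed)")
print(f"1/latency: mean={mean(reciprocals):.2f}, sd={stdev(reciprocals):.2f} "
      "(Gaussian, mirroring the rate distribution)")
```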