925 results for Running Lamps.


Relevance:

10.00%

Publisher:

Abstract:

R. Daly and Q. Shen. Methods to accelerate the learning of Bayesian network structures. Proceedings of the 2007 UK Workshop on Computational Intelligence.

Relevance:

10.00%

Publisher:

Abstract:

J. Keppens, Q. Shen and B. Schafer. Probabilistic abductive computation of evidence collection strategies in crime investigation. Proceedings of the 10th International Conference on Artificial Intelligence and Law, pages 215-225.

Relevance:

10.00%

Publisher:

Abstract:

International Journal of Liability and Scientific Enquiry (2007), Vol. 1, No. 1/2, pp. 29-49. RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Thatcher, Rhys, and Alan Batterham, 'Development and validation of a sport-specific exercise protocol for elite youth soccer players', Journal of Sports Medicine and Physical Fitness (2004) 44(1), pp. 15-22. RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Thatcher, Rhys, et al., 'A modified TRIMP to quantify the in-season training load of team sport players', Journal of Sports Sciences (2007) 25(6), pp. 629-634. RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Grande, Manuel; Browning, R.; Waltham, N.; Parker, D., 'The D-CIXS X-ray mapping spectrometer on SMART-1', Planetary and Space Science (2003) 51(6), pp. 427-433. RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Gait patterns have been widely studied in different fields of science for their particular characteristics. A dynamic approach to human locomotion considers walking and running as two stable behaviors adopted spontaneously under certain levels and natures of constraints. When no constraints are imposed, people naturally prefer to walk at the typical speed (i.e., around 4.5 km·h⁻¹) that minimizes metabolic energy cost. The preferred walking speed (PWS) is also known to be an indicator of mobility and an important clinical factor in tracking impairments in motor behaviors. When constrained to move at higher speeds (e.g., being late), people naturally switch their preference to running for similar optimization reasons (e.g., physiological, biomechanical, perceptual, attentional costs). Indeed, the preferred transition speed (PTS) marks the natural separation between walking and running and consistently falls within a speed range around 7.5 km·h⁻¹. This chapter describes the constraint-dependent spontaneous organisation of the locomotor system, specifically on the walk-to-run speed continuum. We provide examples of the possibility of long-term adaptation of preferred behaviors to specific constraints, such as factors related to traditional clothing or practice. We use knowledge from studies on preferred behaviors and on the relationship between affect and exercise adherence as a backdrop to prescribing a walking exercise program, with an emphasis on populations with overweight or obesity.
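As a toy illustration of the preferred-speed figures above, the sketch below labels a movement speed with the gait a person would typically adopt spontaneously. The function name and the fixed 7.5 km·h⁻¹ threshold are illustrative assumptions; the actual PTS varies across individuals.

```python
def preferred_gait(speed_kmh: float, pts_kmh: float = 7.5) -> str:
    """Predict the spontaneously preferred gait from speed alone.

    pts_kmh approximates the preferred transition speed (PTS);
    ~7.5 km/h is the typical value cited above, but it varies
    across individuals (illustrative threshold, not a clinical rule).
    """
    return "walk" if speed_kmh < pts_kmh else "run"

# Around the preferred walking speed (~4.5 km/h) people walk;
# above the PTS they spontaneously switch to running.
for speed in (4.5, 6.5, 8.5):
    print(f"{speed} km/h -> {preferred_gait(speed)}")
```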

Relevance:

10.00%

Publisher:

Abstract:

Postgraduate project/dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Dental Medicine.

Relevance:

10.00%

Publisher:

Abstract:

DSpace is an open source software platform that enables organizations to:

- Capture and describe digital material using a submission workflow module, or a variety of programmatic ingest options
- Distribute an organization's digital assets over the web through a search and retrieval system
- Preserve digital assets over the long term

This system documentation includes a functional overview of the system, which is a good introduction to the capabilities of the system and should be readable by nontechnical personnel. Everyone should read this section first because it introduces some terminology used throughout the rest of the documentation. For people actually running a DSpace service, there is an installation guide, and sections on configuration and the directory structure. Note that as of DSpace 1.2, the administration user interface guide is now on-line help available from within the DSpace system. Finally, for those interested in the details of how DSpace works, and those potentially interested in modifying the code for their own purposes, there is a detailed architecture and design section.

Relevance:

10.00%

Publisher:

Abstract:

Implementations are presented of two common algorithms for integer factorization, Pollard’s “p – 1” method and the SQUFOF method. The algorithms are implemented in the F# language, a functional programming language developed by Microsoft and officially released for the first time in 2010. The algorithms are thoroughly tested on a set of large integers (up to 64 bits in size), running both on a physical machine and a Windows Azure machine instance. Analysis of the relative performance between the two environments indicates comparable performance when taking into account the difference in computing power. Further analysis reveals that the relative performance of the Azure implementation tends to improve as the magnitudes of the integers increase, indicating that such an approach may be suitable for larger, more complex factorization tasks. Finally, several questions are presented for future research, including the performance of F# and related languages for more efficient, parallelizable algorithms, and the relative cost and performance of factorization algorithms in various environments, including physical hardware and commercial cloud computing offerings from the various vendors in the industry.
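The paper's implementations are in F#; purely as a rough illustration, the first of the two algorithms, Pollard's "p − 1" method, can be sketched in Python as below. The smoothness bound, the gcd-check interval, and the base 2 are illustrative choices, not the paper's parameters.

```python
import math

def pollard_p_minus_1(n: int, bound: int = 10_000) -> int | None:
    """Pollard's p-1 method: finds a prime factor p of n whenever
    every prime-power factor of p - 1 is at most `bound`."""
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)             # builds up a = 2^(bound!) mod n
        if k % 128 == 0:             # check the gcd periodically, not every step
            d = math.gcd(a - 1, n)
            if 1 < d < n:
                return d
    d = math.gcd(a - 1, n)
    return d if 1 < d < n else None  # None: no factor found at this bound

# Classic example (Cole, 1903): 2^67 - 1 = 193707721 * 761838257287,
# and 193707721 - 1 is smooth enough for the method to find it quickly.
print(pollard_p_minus_1(2**67 - 1))
```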

Relevance:

10.00%

Publisher:

Abstract:

For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This results in a set of interface and architectural features sufficient for the efficient implementation of the applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layer, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme in supporting large-scale parallel computations.
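For context, fixed-point problems of this kind are solved by iterating x ← f(x) until successive iterates agree. The sequential Python sketch below shows one classic instance of such a parallel iterative method, Jacobi iteration for a linear system: each component of the new iterate depends only on the previous iterate, so a distributed implementation needs one exchange of the iterate per sweep, which is the communication pattern at issue. Names and tolerances are illustrative, not from the paper.

```python
import numpy as np

def jacobi_fixed_point(A, b, tol=1e-8, max_iters=10_000):
    """Jacobi iteration for Ax = b, viewed as the fixed point
    x = D^-1 (b - (A - D) x), with D the diagonal of A."""
    D = np.diag(A)                     # diagonal entries of A
    R = A - np.diagflat(D)             # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iters):
        x_new = (b - R @ x) / D        # every component is independent
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Small diagonally dominant example, where the iteration converges:
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi_fixed_point(A, b))        # matches np.linalg.solve(A, b)
```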

Relevance:

10.00%

Publisher:

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
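DIP's actual analysis is in the paper; the toy model below only illustrates the underlying idea of inferring density from measured contention. If each of n neighbors transmits in a slot independently with probability p, a slot is sensed busy with probability 1 − (1 − p)^n, so an observed busy fraction can be inverted to estimate n. All names and parameters here are illustrative assumptions, not DIP itself.

```python
import math
import random

def estimate_neighbors(busy_fraction: float, p: float) -> float:
    """Invert P(slot busy) = 1 - (1 - p)^n to estimate n from the
    observed fraction of busy slots (toy model, not DIP itself)."""
    return math.log(1.0 - busy_fraction) / math.log(1.0 - p)

def simulate_busy_fraction(n: int, p: float, slots: int = 20_000) -> float:
    """Fraction of slots in which at least one of n neighbors transmits."""
    return sum(
        any(random.random() < p for _ in range(n)) for _ in range(slots)
    ) / slots

n_true, p = 25, 0.05
f = simulate_busy_fraction(n_true, p)
print(f"busy fraction {f:.3f} -> estimated n = {estimate_neighbors(f, p):.1f}")
```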

Relevance:

10.00%

Publisher:

Abstract:

With the increased use of "Virtual Machines" (VMs) as vehicles that isolate applications running on the same host, it is necessary to devise techniques that enable multiple VMs to share underlying resources both fairly and efficiently. To that end, one common approach is to deploy complex resource management techniques in the hosting infrastructure. Alternately, in this paper, we advocate the use of self-adaptation in the VMs themselves based on feedback about resource usage and availability. Consequently, we define a "Friendly" VM (FVM) to be a virtual machine that adjusts its demand for system resources, so that they are both efficiently and fairly allocated to competing FVMs. Such properties are ensured using one of many provably convergent control rules, such as AIMD (additive-increase/multiplicative-decrease). By adopting this distributed, application-based approach to resource management, it is not necessary to make assumptions about the underlying resources or about the requirements of the FVMs competing for them. To demonstrate the elegance and simplicity of our approach, we present a prototype implementation of our FVM framework in User-Mode Linux (UML), an implementation that consists of fewer than 500 lines of code changes to UML. We present an analytic, control-theoretic model of FVM adaptation, which establishes convergence and fairness properties. These properties are also backed up with experimental results using our prototype FVM implementation.
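AIMD itself is easy to sketch: each FVM adds a constant to its demand while the resource looks uncongested and multiplicatively backs off on congestion feedback, which (per Chiu and Jain's classic analysis) drives competing users toward equal demands. A minimal simulation under assumed parameters, not the paper's UML prototype:

```python
def aimd_step(demand: float, congested: bool,
              alpha: float = 1.0, beta: float = 0.5) -> float:
    """One AIMD update: additive increase, multiplicative decrease."""
    return demand * beta if congested else demand + alpha

# Two competing "friendly" VMs sharing a resource of capacity 100,
# starting from very unequal demands:
x1, x2, capacity = 80.0, 10.0, 100.0
for _ in range(500):
    congested = x1 + x2 > capacity      # shared congestion signal
    x1 = aimd_step(x1, congested)
    x2 = aimd_step(x2, congested)
print(round(x1, 1), round(x2, 1))       # the two demands converge to nearly equal values
```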

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of analyzing the performance of WWW servers. The Web has experienced phenomenal growth and has become the most popular Internet application. As a consequence of its large popularity, the Internet has suffered from various performance problems, such as network congestion and overloaded servers. These days, it is not uncommon to find servers refusing connections because they are overloaded. Performance has always been a key issue in the design and operation of on-line systems. With regard to the Internet, performance is also critical, because users want fast and easy access to all objects (i.e., documents, pictures, audio, and video) available on the net. Thus, it is important to understand WWW performance issues. This paper focuses on the performance analysis of a Web server. Using a synthetic benchmark (WebStone), we analyze three different Web server software packages running on top of a Windows NT platform and performing some typical WWW tasks.
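WebStone is a specific benchmark; purely as a stand-in to illustrate the kind of measurement involved, a minimal load generator can time concurrent HTTP requests as below. The URL, client count, and request count are placeholders, not the paper's setup.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> float:
    """Fetch one URL and return the elapsed time in seconds."""
    t0 = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - t0

def benchmark(url: str, clients: int = 8, requests: int = 100) -> None:
    """Issue `requests` GETs across `clients` threads and report latency."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        latencies = list(pool.map(fetch, [url] * requests))
    mean_ms = sum(latencies) / len(latencies) * 1e3
    print(f"{requests} requests, {clients} clients: mean latency {mean_ms:.1f} ms")

benchmark("http://localhost:8080/index.html")   # placeholder server URL
```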

Relevance:

10.00%

Publisher:

Abstract:

Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
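A heavy-tailed file-size distribution of the kind described can be modeled with a Pareto draw: most sampled sizes are small documents, while the tail occasionally produces very large files. The alpha and minimum size below are illustrative, not the paper's workload parameters.

```python
import random

def pareto_file_size(alpha: float = 1.2, min_bytes: int = 1024) -> int:
    """Draw a file size from a Pareto(alpha) distribution with the
    given minimum; smaller alpha means a heavier tail."""
    return int(min_bytes * random.paretovariate(alpha))

sizes = sorted(pareto_file_size() for _ in range(100_000))
print("median:", sizes[len(sizes) // 2], "bytes")
print("max:   ", sizes[-1], "bytes")   # the heavy tail: rare, huge files
```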