927 results for peer-to-peer (P2P) computing


Relevance: 100.00%

Abstract:

The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes, and gouge processes. However, results from those simulations show that in order to make the next step towards more realistic experiments it will be necessary to use models containing a significantly larger number of particles than current models, and thus a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly greater computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
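For reference, the efficiency figure quoted above follows the standard definition of parallel efficiency, E(p) = T(1) / (p * T(p)); a minimal sketch, with illustrative placeholder timings rather than the paper's measurements:

    # Parallel efficiency: E(p) = T(1) / (p * T(p)); E = 0.8 means 80%.
    def parallel_efficiency(t_serial: float, t_parallel: float, num_procs: int) -> float:
        """Runtime on one processor divided by (processor count * parallel runtime)."""
        return t_serial / (num_procs * t_parallel)

    # Illustrative placeholder timings in seconds (not measurements from the paper):
    print(parallel_efficiency(t_serial=3600.0, t_parallel=56.25, num_procs=80))  # 0.8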

Relevance: 100.00%

Abstract:

We present here a new approach to scalable quantum computing - a 'qubus computer' - which realizes qubit measurement and quantum gates by interacting qubits with a quantum communication bus mode. The qubits could be 'static' matter qubits or 'flying' optical qubits, but the scheme we focus on here is particularly suited to matter qubits. There is no requirement for direct interaction between the qubits. Universal two-qubit quantum gates may be effected by schemes which involve measurement of the bus mode, or by schemes where the bus disentangles automatically and no measurement is needed. In effect, the approach integrates qubit degrees of freedom for computation with quantum continuous variables for communication and interaction.
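Schematically, bus-mediated schemes of this kind rely on a qubit-state-conditioned evolution of the bus field. As an illustrative assumption (a generic dispersive coupling, not a quotation of this paper's Hamiltonian), a coherent bus state acquires a qubit-dependent phase-space rotation:

    % Generic conditional-rotation sketch (assumed form, for illustration only):
    H_{\mathrm{int}} = \hbar\chi\,\sigma_z\, a^\dagger a, \qquad
    |0\rangle|\alpha\rangle \mapsto |0\rangle|\alpha e^{+i\theta}\rangle, \quad
    |1\rangle|\alpha\rangle \mapsto |1\rangle|\alpha e^{-i\theta}\rangle, \quad
    \theta = \chi t

A subsequent measurement of the bus (or a sequence after which the bus disentangles automatically) then leaves an effective gate acting on the qubits alone.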

Relevance: 100.00%

Abstract:

We surveyed all nurses working at a tertiary paediatric hospital (except casual staff and those who were on leave) from 27 hospital departments. A total of 365 questionnaires were distributed. There were 40 questions in six sections: demographic details, knowledge of e-health, relevance of e-health to the nursing profession, computing skills, Internet use, and access to e-health education. A total of 253 surveys were completed (69%). Most respondents reported that they had never had e-health education of any sort (87%) and that their e-health knowledge and skills were low (71%). However, 11% of nurses reported some exposure to e-health through their work. Over half (56%) of respondents indicated that e-health was important, very important or critical for the health professions, while 26% were not sure. The lack of education and training was considered by most respondents (71%) to be the main barrier to adopting e-health. While nurses seemed to have moderate awareness of the potential benefits of e-health, their practical skills and knowledge of the topic were very limited.

Relevance: 100.00%

Abstract:

Pervasive computing applications must be sufficiently autonomous to adapt their behaviour to changes in computing resources and user requirements. This capability is known as context-awareness. In some cases, context-aware applications must be implemented as autonomic systems which are capable of dynamically discovering and replacing context sources (sensors) at run time. Unlike other types of application autonomy, this kind of dynamic reconfiguration has not yet been sufficiently investigated by the research community. However, application-level context models are becoming common as a way to ease the programming of context-aware applications and to support evolution by decoupling applications from context sources. We can leverage these context models to develop general (i.e., application-independent) solutions for the dynamic, run-time discovery of context sources (i.e., context management). This paper presents a model and architecture for a reconfigurable context management system that supports interoperability by building on emerging standards for sensor description and classification.
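To make the architecture concrete, below is a minimal sketch of such an application-independent context manager; all names (ContextManager, SensorDescription, bind) are hypothetical illustrations, not the API proposed in the paper:

    # Hypothetical sketch: applications bind to abstract context types, and the
    # manager rebinds them to whatever concrete sensor is currently available.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class SensorDescription:          # standards-based description: id + context type
        sensor_id: str
        context_type: str             # e.g., "location", "temperature"

    class ContextManager:
        def __init__(self) -> None:
            self._sensors: Dict[str, List[SensorDescription]] = {}
            self._bindings: Dict[str, Callable[[SensorDescription], None]] = {}

        def register_sensor(self, desc: SensorDescription) -> None:
            """Called when a new context source is discovered at run time."""
            self._sensors.setdefault(desc.context_type, []).append(desc)
            self._rebind(desc.context_type)

        def unregister_sensor(self, desc: SensorDescription) -> None:
            """Called when a context source disappears; triggers replacement."""
            self._sensors[desc.context_type].remove(desc)
            self._rebind(desc.context_type)

        def bind(self, context_type: str,
                 on_rebind: Callable[[SensorDescription], None]) -> None:
            """An application asks for a context type, never a specific sensor."""
            self._bindings[context_type] = on_rebind
            self._rebind(context_type)

        def _rebind(self, context_type: str) -> None:
            callback = self._bindings.get(context_type)
            available = self._sensors.get(context_type, [])
            if callback and available:
                callback(available[0])    # naive policy: first available source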

Relevance: 100.00%

Abstract:

This research presents several components encompassing the scope of data partitioning and replication management in a distributed GIS database. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in the development of an efficient and scalable solution.

Part of the research is to study the patterns of geographical raster data processing and to propose algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, in order to achieve high data availability and Quality of Service (QoS) in distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach for mosaicking digital images of different temporal and spatial characteristics into tiles is proposed. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to user requirements such as resolution, temporal range, and target bands, in order to reduce redundancy in storage and to utilize available computing and storage resources more efficiently.

Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation.

There are vast numbers of computing, network, and storage resources on the Internet that are idle or not fully utilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context.

The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
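As an illustration of the on-demand mosaicking idea described above, a simplified sketch follows; the selection criteria and all names are hypothetical, not TerraFly's actual implementation:

    # Hypothetical sketch: build a mosaicked tile only for the requested region,
    # reusing whichever source images satisfy the user's constraints.
    from dataclasses import dataclass
    from typing import List, Tuple

    BBox = Tuple[float, float, float, float]   # (min_lon, min_lat, max_lon, max_lat)

    @dataclass
    class SourceImage:
        bbox: BBox
        resolution_m: float        # ground resolution in meters/pixel
        acquired_year: int
        bands: frozenset

    def overlaps(a: BBox, b: BBox) -> bool:
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    def select_sources(catalog: List[SourceImage], region: BBox,
                       max_resolution_m: float, year_range: Tuple[int, int],
                       required_bands: frozenset) -> List[SourceImage]:
        """Pick only the images needed to cover the requested tile."""
        return [img for img in catalog
                if overlaps(img.bbox, region)
                and img.resolution_m <= max_resolution_m
                and year_range[0] <= img.acquired_year <= year_range[1]
                and required_bands <= img.bands]

    # The selected images would then be warped and composited into the tile;
    # nothing outside `region` is generated, avoiding redundant storage.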

Relevance: 100.00%

Abstract:

Smartphones have undergone a remarkable evolution over the last few years, from simple calling devices to full-fledged computing devices where multiple services and applications run concurrently. Unfortunately, battery capacity has increased at a much slower pace, making it the main bottleneck for Internet-connected smartphones. Several software-based techniques have been proposed in the literature for improving battery life. The most common techniques include data compression, packet aggregation or batch scheduling, offloading partial computations to the cloud, and periodically switching off interfaces (e.g., WiFi or 3G/4G) for short intervals. However, there has been no focus on eliminating the energy waste of background applications that extensively utilize smartphone resources such as the CPU, memory, GPS, WiFi, and 3G/4G data connection. In this paper, we propose an Application State Proxy (ASP) that suppresses/stops applications on smartphones and maintains their presence on any other network device. The applications are resumed/restarted on the smartphone only in case of an event, such as the arrival of a new message. We present the key requirements for the ASP service and different possible architectural designs. In short, the ASP concept can significantly improve the battery life of smartphones by minimizing the resource usage of background applications.
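A minimal sketch of the proxy idea follows; the message format, port numbers, and helper names are invented for illustration, since the paper specifies requirements and candidate architectures rather than code:

    # Hypothetical sketch of an Application State Proxy (ASP): the proxy keeps an
    # app's network presence alive while the app is suspended on the phone, and
    # wakes the app only when a relevant event (e.g., a new message) arrives.
    import socket

    SUSPENDED_APPS = {"chat_app": ("10.0.0.5", 5000)}   # app -> phone wake-up address

    def run_proxy(listen_port: int = 6000) -> None:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", listen_port))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            data = conn.recv(4096)              # incoming event for a suspended app
            app = data.split(b":", 1)[0].decode()
            if app in SUSPENDED_APPS:
                wake(app, data)                 # resume the app only now
            conn.close()

    def wake(app: str, event: bytes) -> None:
        host, port = SUSPENDED_APPS[app]
        with socket.create_connection((host, port)) as s:
            s.sendall(event)                    # phone-side agent restarts the app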

Relevance: 100.00%

Abstract:

The constant evolution of technology has made available computational tools that were mere expectations ten years ago. The increase in computational power applied to numerical models that simulate the atmosphere has broadened the study of atmospheric phenomena through the use of high-performance computing tools. This work proposes the development of algorithms based on SIMT architectures and the application of parallelism techniques using the OpenACC tool to process numerical forecast data from the Weather Research and Forecast model. The proposal has a strong interdisciplinary character, seeking interaction between the areas of atmospheric modeling and scientific computing. The influence of the cloud microphysics computation on the temporal degradation of the model was tested. Because the input data for GPU execution was not sufficiently large, the time needed to transfer data from the CPU to the GPU was greater than executing the computation on the CPU. Another determining factor was the addition of CUDA code within an MPI context, causing resource contention among the processors and once again degrading execution time. The proposal of using directives to apply high-performance computing within a CUDA structure seems very promising, but it must still be used with great caution in order to produce good results. A hybrid MPI + CUDA construction was tested, but the results were inconclusive.
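The transfer-time observation above reduces to a simple break-even check: offloading pays off only when the kernel speedup outweighs the host-to-device transfer cost. A back-of-the-envelope sketch with illustrative numbers (not measurements from this work):

    # Offloading is worthwhile only if t_transfer + t_gpu_kernel < t_cpu.
    def offload_pays_off(bytes_moved: float, bandwidth_gbps: float,
                         t_gpu_kernel: float, t_cpu: float) -> bool:
        t_transfer = bytes_moved / (bandwidth_gbps * 1e9)   # seconds over the bus
        return t_transfer + t_gpu_kernel < t_cpu

    # Illustrative: a small 200 MB input over ~12 GB/s effective PCIe bandwidth;
    # the transfer alone (~17 ms) exceeds the 10 ms CPU compute time.
    print(offload_pays_off(200e6, 12, t_gpu_kernel=0.004, t_cpu=0.010))  # False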

Relevance: 100.00%

Abstract:

Heterogeneous computing systems have become common in modern processor architectures. These systems, such as those released by AMD, Intel, and Nvidia, include both CPU and GPU cores on a single die, with reduced communication overhead compared to their discrete predecessors. Currently, discrete CPU/GPU systems are limited: they require large, regular, highly parallel workloads to overcome the communication costs of the system. Without the traditional communication delay assumed between GPUs and CPUs, we believe non-traditional workloads could be targeted for GPU execution. Specifically, this thesis focuses on the execution model of nested parallel workloads on heterogeneous systems. We have designed a simulation flow that utilizes widely used CPU and GPU simulators to model heterogeneous computing architectures. We then applied this simulator to non-traditional GPU workloads using different execution models. We also propose a new execution model for nested parallelism, allowing users to exploit these heterogeneous systems to reduce execution time.
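To illustrate what a nested parallel workload looks like, here is a small sketch: an outer parallel loop whose tasks each open an inner parallel region (the structure is illustrative; the thesis's execution model is not this code):

    # Nested parallelism: outer tasks that each contain an inner parallel region.
    # On a discrete GPU, launching the inner regions is often too costly; on a
    # single-die CPU/GPU system the inner work could be handed to GPU cores.
    from concurrent.futures import ThreadPoolExecutor

    def inner_parallel(chunk: list) -> int:
        # Inner parallel region: element-wise work, a natural GPU candidate.
        return sum(x * x for x in chunk)

    def outer_task(data: list) -> int:
        chunks = [data[i:i + 4] for i in range(0, len(data), 4)]
        with ThreadPoolExecutor() as inner_pool:          # nested level
            return sum(inner_pool.map(inner_parallel, chunks))

    if __name__ == "__main__":
        datasets = [list(range(16)) for _ in range(8)]
        with ThreadPoolExecutor() as outer_pool:          # outer level
            print(list(outer_pool.map(outer_task, datasets)))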

Relevance: 100.00%

Abstract:

One cannot speak of twenty-first-century society without relating it to the use of information and to social access to telecommunications and computing in response to the demands of new citizens. In the countries of the Latin American region, where access to information and to technologies is not easy, these elements must become part of the responsibilities of individuals and of governments, so that public and private policies and programs in education, culture, and science can take shape.

Relevance: 100.00%

Abstract:

Software integration testing plays an increasingly important role as the software industry has undergone a major change from isolated applications to highly distributed computing environments. Conducting integration testing is a challenging task because it is often very difficult to replicate a real enterprise environment. Emulating the testing environment is one of the key solutions to this problem. However, existing specification-based emulation techniques require manual coding of their message processing engines, thereby incurring high development costs. In this paper, we present a suite of domain-specific visual modeling languages to describe emulated testing environments at a high abstraction level. Our solution allows domain experts to model a testing environment from abstract interface layers. These layer models are then transformed into a runtime environment for application testing. Our user study shows that our visual languages are easy to use, yet have sufficient expressive power to model complex testing applications.
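In the spirit of the layered models described above, here is a toy sketch of a declarative endpoint specification being transformed into a runtime responder; the modeling constructs are invented for illustration and are far simpler than a visual language:

    # Hypothetical sketch: a declarative model of an emulated endpoint is
    # transformed into a runtime message-processing stub, so testers do not
    # hand-code the processing engine.
    from typing import Callable, Dict

    # Abstract interface-layer model: message type -> canned response template.
    ENDPOINT_MODEL: Dict[str, str] = {
        "GetAccount": "<account id='{id}' status='active'/>",
        "Ping":       "<pong/>",
    }

    def generate_responder(model: Dict[str, str]) -> Callable[[str, dict], str]:
        """Transform the declarative model into an executable responder."""
        def respond(message_type: str, fields: dict) -> str:
            template = model.get(message_type)
            if template is None:
                return "<fault>unknown message</fault>"
            return template.format(**fields)
        return respond

    responder = generate_responder(ENDPOINT_MODEL)
    print(responder("GetAccount", {"id": "42"}))   # emulated endpoint reply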

Relevance: 100.00%

Abstract:

This paper presents an integrated model for an offshore wind turbine that takes into consideration the contributions of marine waves and of wind speed perturbations to the power quality of the current injected into the electric grid. The paper deals with the simulation of a floating offshore wind turbine equipped with a permanent magnet synchronous generator and a two-level converter connected to an onshore electric grid. Discrete mass modeling is assessed in order to reveal, by computing the total harmonic distortion, how the perturbations of the captured energy are attenuated at the electric grid injection point. Two torque actions are considered for the three-mass modeling: the aerodynamic torque on the flexible part and on the rigid part of the blades. A torque due to the influence of marine waves in deep water is also considered. Proportional integral fractional-order control supports the control strategy. A comparison between the drive train models is presented.
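For reference, the total harmonic distortion mentioned above is conventionally computed from the harmonic content of the injected current; a minimal sketch of the generic definition (not this paper's code):

    # Total harmonic distortion of a current waveform:
    # THD = sqrt(sum of squared harmonic magnitudes) / fundamental magnitude.
    import numpy as np

    def thd(samples: np.ndarray, fundamental_bin: int, n_harmonics: int = 20) -> float:
        spectrum = np.abs(np.fft.rfft(samples))
        fund = spectrum[fundamental_bin]
        harmonics = [spectrum[k * fundamental_bin]
                     for k in range(2, n_harmonics + 1)
                     if k * fundamental_bin < len(spectrum)]
        return np.sqrt(np.sum(np.square(harmonics))) / fund

    # Illustrative: a 50 Hz wave with a 5% fifth harmonic, sampled for 1 s at 10 kHz,
    # so bin k corresponds to k Hz and the fundamental sits in bin 50.
    t = np.arange(0, 1, 1e-4)
    i_grid = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 250 * t)
    print(thd(i_grid, fundamental_bin=50))   # ~0.05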

Relevance: 100.00%

Abstract:

This paper presents an integrated model for an offshore wind energy system that takes into consideration the contributions of marine waves and of wind speed perturbations to the power quality of the current injected into the electric grid. The paper deals with the simulation of a floating offshore wind turbine equipped with a PMSG and a two-level converter connected to an onshore electric grid. Discrete mass modeling is assessed in order to reveal, by computing the THD, how the perturbations of the captured energy are attenuated at the electric grid injection point. Two torque actions are considered for the three-mass modeling: the aerodynamic torque on the flexible part and on the rigid part of the blades. A torque due to the influence of marine waves in deep water is also considered. PI fractional-order control supports the control strategy. A comparison between the drive train models is presented.

Relevance: 80.00%

Abstract:

Despite the abundant availability of protocols and applications for peer-to-peer file sharing, several drawbacks are still present in the field. Among the most notable drawbacks is the lack of a simple and interoperable way to share information among independent peer-to-peer networks. Another drawback is the requirement that the shared content can be accessed only by a limited number of compatible applications, making it inaccessible to other applications and systems. In this work we present a new approach for peer-to-peer data indexing, focused on the organization and retrieval of the metadata which describes the shared content. This approach results in a common and interoperable infrastructure, which provides transparent access to data shared on multiple data sharing networks via a simple API. The proposed approach is evaluated using a case study, implemented as a cross-platform extension to the Mozilla Firefox browser, which demonstrates the advantages of such interoperability over conventional distributed data access strategies.
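A toy sketch of the kind of common, network-neutral metadata layer described above; the record shape and API names are hypothetical, not the paper's actual interface:

    # Hypothetical sketch: a network-neutral metadata record plus a facade API,
    # so one query spans multiple incompatible file-sharing networks.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class MetadataRecord:
        title: str
        content_hash: str
        network: str          # e.g., "gnutella", "bittorrent"

    class SharedIndex:
        def __init__(self) -> None:
            self._by_title: Dict[str, List[MetadataRecord]] = {}

        def publish(self, record: MetadataRecord) -> None:
            self._by_title.setdefault(record.title.lower(), []).append(record)

        def lookup(self, title: str) -> List[MetadataRecord]:
            """One call returns matches regardless of the originating network."""
            return self._by_title.get(title.lower(), [])

    index = SharedIndex()
    index.publish(MetadataRecord("report.pdf", "hash-a", "gnutella"))
    index.publish(MetadataRecord("report.pdf", "hash-b", "bittorrent"))
    print([r.network for r in index.lookup("REPORT.pdf")])  # ['gnutella', 'bittorrent']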

Relevance: 80.00%

Abstract:

Cloud services are becoming ever more important in everyone's life. Cloud storage? Webmail? We don't need to be working in big IT companies to be surrounded by cloud services. Another thing that is growing in importance, or at least should be considered ever more important, is the concept of privacy. The more we rely on services of which we know close to nothing, the more we should be worried about our privacy. In this work, I will analyze a prototype software system based on a peer-to-peer architecture for offering cloud services, to see whether it is possible to make it completely anonymous: not only will its users be anonymous, but the peers composing it will not know each other's real identities. To make this possible, I will use anonymizing networks such as Tor. I will start by studying the state of the art of cloud computing, looking at some real examples, and then analyze the architecture of the prototype, trying to expose the differences between its distributed nature and the somewhat centralized solutions offered by the famous vendors. After that, I will go as deep as possible into the working principles of anonymizing networks, because they are not something that can just be 'applied' mindlessly; some de-anonymization techniques are very subtle, so things must be studied carefully. I will then implement the required changes and test the new anonymized prototype to see how its performance differs from that of the standard one. The prototype will be run on many machines, orchestrated by a tester script that automatically starts, stops, and makes all the required API calls. As for where to find all these machines, I will make use of Amazon EC2 cloud services and their on-demand instances.
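As a concrete hint of what 'applying' an anonymizing network involves, the sketch below routes a hypothetical peer API call through a local Tor SOCKS proxy; it assumes a Tor daemon listening on the default port 9050 and the requests library installed with SOCKS support:

    # Route the prototype's HTTP API calls through Tor's local SOCKS5 proxy.
    # "socks5h" makes DNS resolution happen inside Tor as well, avoiding DNS
    # leaks - exactly the kind of subtle de-anonymization pitfall noted above.
    import requests

    TOR_PROXY = {
        "http":  "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def call_peer_api(url: str) -> str:
        # `url` would be an endpoint of the prototype's peers (hypothetical here).
        response = requests.get(url, proxies=TOR_PROXY, timeout=60)
        response.raise_for_status()
        return response.text

    # Illustrative only; no real endpoint is implied:
    # print(call_peer_api("http://example.onion/api/status"))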