927 results for peer-to-peer (P2P) computing


Relevance:

100.00%

Publisher:

Abstract:

Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as the Smart Grid, smart traffic lights in vehicular networks, and software-defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Security and privacy issues are then examined in the context of the current Fog computing paradigm. As an example, we study a typical attack, the man-in-the-middle attack, to frame the discussion of security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on a Fog device.
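
The stealth argument rests on the attack's small resource footprint. As a minimal sketch of how per-process CPU and memory consumption might be sampled on a Linux-based fog device (illustrative only; psutil-based, and the process-name fragment is a placeholder, not taken from the paper):

```python
# Illustrative only: sample CPU and memory usage of a suspect process on a
# fog device, in the spirit of the stealth analysis described above.
import time
import psutil  # pip install psutil

def sample_processes(name_fragment, interval=1.0, samples=5):
    """Print CPU% and resident memory for processes whose name contains name_fragment."""
    procs = [p for p in psutil.process_iter(['name'])
             if name_fragment in (p.info['name'] or '')]
    for _ in range(samples):
        for p in procs:
            try:
                cpu = p.cpu_percent(interval=None)   # % of one core since last call
                rss = p.memory_info().rss / 1e6      # resident memory, MB
                print(f"{p.pid:6d} {p.info['name']:<20s} cpu={cpu:5.1f}% mem={rss:7.1f} MB")
            except psutil.NoSuchProcess:
                pass                                 # process exited between samples
        time.sleep(interval)

if __name__ == "__main__":
    sample_processes("mitm")   # hypothetical process-name fragment
```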

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the instigation and development of an expert system to aid in the strategic planning of construction projects. The paper consists of four parts: the origin of the project, the development of the concepts needed for the proposed system, the building of the system itself, and an assessment of its performance. The origin of the project is outlined, starting with the Japanese commitment to fifth-generation computing together with the growing local reaction against theory-based prescriptive research in the field. The subsequent development of activities via the Alvey Commission and the RICS, in conjunction with Salford University, is traced, culminating in the proposal and execution of the first major expert system to be built for the UK construction industry, subsequently recognised as one of the most successful of the expert system projects commissioned under the Alvey programme.

Relevance:

100.00%

Publisher:

Abstract:

This research suggests information technology (IT) governance structures to manage cloud computing resources. Interest in acquiring IT resources as a utility from the cloud is gaining momentum. Cloud computing resources present organizations with opportunities to manage their IT expenditure on an ongoing basis, and provide organizations access to modern IT resources with which to innovate and manage their continuity. However, cloud computing resources are no silver bullet. Organizations need to have appropriate governance structures and policies in place to manage cloud resources. The subsequent decisions from these governance structures will ensure effective management of cloud resources. This management will facilitate a better fit of cloud resources into organizations' existing processes to achieve business (process-level) and financial (firm-level) objectives. Using a triangulation approach, we suggest four possible governance structures for managing cloud computing resources: a chief cloud officer, a cloud management committee, a cloud service facilitation centre, and a cloud relationship centre. We also propose that these governance structures would relate directly to organizations' cloud-related business objectives and indirectly to cloud-related financial objectives. Perceptive field survey data from actual and prospective cloud service adopters confirmed that the suggested structures would contribute directly to cloud-related business objectives and indirectly to cloud-related financial objectives.

Relevance:

100.00%

Publisher:

Abstract:

Artist statement – Artisan Gallery

I have a confession to make… I don't wear a FitBit, I don't want an Apple Watch and I don't like bling LEDs. But what excites me is a future where 'wearables' are discreet, seamless and potentially one with our body. Burgeoning e-textiles research will provide the ability to inconspicuously communicate, measure and enhance human health and well-being. Alongside this, next-generation wearables arguably will not be worn on the body, but rather within the body… under the skin. 'Under the Skin' is a polemic piece provoking debate on the future of wearables – a place where they are not overt, not auxiliary and perhaps not apparent. Indeed, a future where wearables are under the skin or one with our apparel. And, as underwear is closest to the skin and is the most intimate and cloaked apparel item we wear, this work unashamedly teases out dialogue to explore how wearables can transcend from the overt to the unseen.

Context

Wearable technology, also referred to as wearable computing or 'wearables', is an embryonic field that has the potential to unsettle conventional notions of how technology can interact with, enhance and augment the human body. Wearable technology is the next generation of ubiquitous consumer electronics: 'wearables' are, in essence, miniature electronic devices that are worn by a person, under clothing, embedded within clothing/textiles, on top of clothing, or as stand-alone accessories/devices. The wearables market is predicted to grow to somewhere between $30 and $50 billion over the next five years (Credit Suisse, 2013). The global wearables market, still emergent in phase, has forecast predictions of vast consumer revenue, with the potential to become a significant cross-disciplinary disruptive space for designers and entrepreneurs. For fashion, the field of wearables is arguably at the intersection of the second and third generations of design innovation: the first phase being purely decorative, with aspects such as LED lighting; the second phase consisting of an array of wearable devices, such as smart watches, that communicate areas such as health and fitness; the third phase involving smart electronics woven into the textile to perform a vast range of functions such as body cooling, fabric colour change or garment silhouette change; and the fourth phase, where wearable devices are surgically implanted under the skin to augment, transform and enhance the human body. Whilst it is acknowledged that the wearable phases are neither clear-cut nor discrete in progression, and that design innovation can still be achieved with first-generation decorative approaches, the later generations of technology that are less overt and at times 'under the skin' provide a uniquely rich point for design innovation where the body and technology intersect as one. With this context in mind, the wearable provocation piece 'Under the Skin' provides a unique opportunity for the audience to question and challenge conventional notions that wearables need to be (a) manifest in nature, (b) worn on or next to the body, and (c) purely functional.

The piece 'Under the Skin' is informed by advances in the marketplace for wearable innovation, such as: the Australian-based wearable design firm Catapult, with their discreet textile biometric sports-tracking innovation; French-based Spinali Design, with their UV app-based textile sensor that provides sunburn alerts; as well as opportunities for design technology innovation through UNICEF's 'Wearables for Good' design challenge to improve the quality of life in disadvantaged communities.

Exhibition

As part of Artisan's Wearnext exhibition, the work was on public display from 25 July to 7 November 2015 and received the following media coverage.

WEARNEXT ONLINE LISTINGS AND MEDIA COVERAGE:
http://indulgemagazine.net/wear-next/
http://www.weekendnotes.com/wear-next-exhibition-gallery-artisan/
http://concreteplayground.com/brisbane/event/wear-next_/
http://www.nationalcraftinitiative.com.au/news_and_events/event/48/wear-next
http://bneart.com/whats-on/wear-next_/
http://creativelysould.tumblr.com/post/124899079611/creative-weekend-art-edition
http://www.abc.net.au/radionational/programs/breakfast/smartly-dressed-the-future-of-wearable-technology/6744374
http://couriermail.newspaperdirect.com/epaper/viewer.aspx

RADIO COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986

TELEVISION COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986
https://au.news.yahoo.com/video/watch/29439742/how-you-could-soon-be-wearing-smart-clothes/#page1

Relevance:

100.00%

Publisher:

Abstract:

Realistic and real-time computational simulation of soft biological organs (e.g., liver, kidney) is necessary when one tries to build a quality surgical simulator that can simulate surgical procedures involving these organs. Since realistic simulation of these soft biological organs should account for both nonlinear material behavior and large deformation, achieving realistic simulations in real time using continuum-mechanics-based numerical techniques necessitates the use of a supercomputer or a high-end computer cluster, which is costly. Hence there is a need to employ soft computing techniques such as Support Vector Machines (SVMs), which can perform function approximation and hence could achieve physically realistic simulations in real time using just a desktop computer. The present work tries to simulate a pig liver in real time. The liver is assumed to be homogeneous, isotropic, and hyperelastic. Hyperelastic material constants are taken from the literature. An SVM is employed to achieve realistic simulations in real time, using just a desktop computer. The code for the SVM is obtained from [1]. The SVM is trained using a dataset generated by performing hyperelastic analyses on the liver geometry using the commercial finite element software package ANSYS. The methodology followed in the present work closely follows that of [2], except that [2] uses Artificial Neural Networks (ANNs) while the present work uses SVMs to achieve realistic simulations in real time. Results indicate the speed and accuracy obtained by employing the SVM for the targeted realistic and real-time simulation of the liver.
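
A minimal sketch of the surrogate-modelling idea (not the authors' code; synthetic data stands in for the ANSYS-generated training set, and the data shapes and names are illustrative):

```python
# Illustrative sketch of the SVM-as-surrogate idea: learn a mapping from
# applied load to nodal displacements from offline FE results, then predict
# in real time. Synthetic data stands in for the ANSYS-generated dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Stand-in training set: applied force vector -> displacements of a few nodes.
X_train = rng.uniform(-1.0, 1.0, size=(500, 3))        # e.g. force components (N)
Y_train = np.tanh(X_train @ rng.normal(size=(3, 6)))   # fake nonlinear "FE" response

# One epsilon-SVR per output component, RBF kernel for the nonlinear behaviour.
surrogate = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=1e-3))
surrogate.fit(X_train, Y_train)

# At simulation time a single prediction replaces a full nonlinear FE solve,
# which is what makes desktop-only, real-time operation plausible.
u = surrogate.predict(np.array([[0.2, -0.4, 0.1]]))
print("predicted nodal displacements:", u)
```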

Relevance:

100.00%

Publisher:

Abstract:

Graph algorithms have been shown to possess enough parallelism to keep several computing resources busy, even hundreds of cores on a GPU. Unfortunately, tuning their implementation for efficient execution on a particular hardware configuration of heterogeneous systems consisting of multicore CPUs and GPUs is challenging, time consuming, and error prone. To address these issues, we propose a domain-specific language (DSL), Falcon, for implementing graph algorithms that (i) abstracts the hardware, (ii) provides constructs to write explicitly parallel programs at a higher level, and (iii) can work with general algorithms that may change the graph structure (morph algorithms). We illustrate the usage of our DSL to implement local computation algorithms (that do not change the graph structure) and morph algorithms such as Delaunay mesh refinement, survey propagation, and dynamic SSSP on GPU and multicore CPUs. Using a set of benchmark graphs, we illustrate that the generated code performs close to the state-of-the-art hand-tuned implementations.
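
Falcon's own syntax is not reproduced in the abstract; as a rough illustration of the kind of local-computation kernel such a DSL abstracts, the sketch below shows a Bellman-Ford-style SSSP relaxation in plain serial Python. The relevant property is that each edge relaxation is independent, which is what allows a DSL backend to map the loop onto GPU threads or CPU cores.

```python
# Rough illustration of a local-computation graph kernel (Bellman-Ford style
# SSSP relaxation); plain serial Python, not Falcon code.
import math

def sssp(num_nodes, edges, source):
    """edges: list of (u, v, w). Returns the list of shortest distances from source."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):          # at most |V|-1 relaxation rounds
        changed = False
        for u, v, w in edges:               # each relaxation is independent, so a
            if dist[u] + w < dist[v]:       # parallel backend can assign one edge
                dist[v] = dist[u] + w       # (or node) per thread
                changed = True
        if not changed:
            break
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0), (2, 3, 8.0)]
print(sssp(4, edges, source=0))   # -> [0.0, 3.0, 1.0, 8.0]
```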

Relevance:

100.00%

Publisher:

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explict update: we first use a Fast-Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, looking at the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several others.
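
A schematic of the staggered implicit/explicit loop described above, with dummy placeholder physics (the stand-in energy, driving forces, thresholds and field sizes are purely illustrative and are not the authors' model):

```python
# Structure-only sketch of a staggered implicit/explicit update:
# implicit equilibrium solve, then explicit stick-slip evolution of the
# internal variables. All "physics" here is a dummy placeholder.
import numpy as np

def solve_equilibrium_fft(lam, gamma, load):
    """Implicit step: stand-in for the Augmented-Lagrangian FFT solve for the
    full-field response given the current internal variables."""
    return load + 0.1 * (lam - gamma)          # dummy strain field

def driving_forces(strain, lam, gamma):
    """Stand-in derivatives of a convexified energy w.r.t. the martensite
    volume fraction and the plastic slip (dummy linear forms)."""
    return strain - lam, 0.5 * strain - gamma

def stick_slip_update(x, force, threshold, dt, mobility=1.0):
    """Explicit stick-slip kinetics: no evolution until |force| exceeds a
    threshold, then rate-dependent flow."""
    excess = np.maximum(np.abs(force) - threshold, 0.0)
    return x + dt * mobility * np.sign(force) * excess

lam = np.zeros(32)      # martensite volume fraction field (1-D slice)
gamma = np.zeros(32)    # plastic slip field
for step in range(100):
    load = 1e-3 * (step + 1) * np.ones(32)             # proportional loading
    strain = solve_equilibrium_fft(lam, gamma, load)   # implicit solve
    f_lam, f_gamma = driving_forces(strain, lam, gamma)
    lam = np.clip(stick_slip_update(lam, f_lam, threshold=0.02, dt=1.0), 0.0, 1.0)
    gamma = stick_slip_update(gamma, f_gamma, threshold=0.03, dt=1.0)

print(lam.max(), gamma.max())
```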

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the $\alpha$--$\gamma$ and $\alpha$--$\varepsilon$ transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the $\alpha$--$\varepsilon$ model.

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the design and implementation of a situation awareness application. The application gathers data from sensors, including accelerometers for monitoring earthquakes, carbon monoxide sensors for monitoring fires, radiation detectors, and dust sensors. The application also gathers data from Internet sources, including data about traffic congestion on daily commute routes, information about hazards, news relevant to the user of the application, and weather. The application sends the data to a Cloud computing service, which aggregates data streams from multiple sites and detects anomalies. Information from the Cloud service is then displayed by the application on a tablet, computer monitor, or television screen. The situation awareness application enables almost all members of a community to remain aware of critical changes in their environments.
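
A hedged client-side sketch of the described data flow, in which a site packages a sensor reading and sends it to the aggregation service (the endpoint URL and field names are hypothetical, not the application's actual API):

```python
# Illustrative client-side sketch: package a local sensor reading and send it
# to an aggregation service that combines streams and flags anomalies.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"   # placeholder aggregation service

def post_reading(site_id, sensor, value, units):
    payload = {
        "site": site_id,
        "sensor": sensor,        # e.g. "accelerometer", "co_ppm", "dust_ugm3"
        "value": value,
        "units": units,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:   # cloud side aggregates
        return resp.status                                 # streams and detects anomalies

if __name__ == "__main__":
    print(post_reading("home-01", "co_ppm", 3.2, "ppm"))
```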

Relevance:

100.00%

Publisher:

Abstract:

Image acquisition using computed tomography has revolutionised the diagnosis of disease in medicine and is widely used in different areas of scientific research. As part of the process of obtaining three-dimensional tomographic images, a set of radiographs is processed by a computational algorithm, the most widely used today being the Feldkamp, Davis and Kress (FDK) algorithm. The use of parallel processing to accelerate the computations of such algorithms, with the different technologies available on the market, has proved useful in reducing processing times. This work presents the parallelisation of the FDK three-dimensional image reconstruction algorithm using graphics processing units (GPUs) and the CUDA-C language. GPUs are presented as a viable option for parallel computing, and the introductory concepts associated with computed tomography, GPUs, CUDA-C and parallel processing are covered. The parallel version of the FDK algorithm executed on the GPU is compared with a serial version of the same algorithm, showing higher processing speed. The performance tests were carried out on two GPUs of different capacities: an NVIDIA GeForce 9400GT card (16 cores) and an NVIDIA Quadro 2000 card (192 cores).
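
As a hedged illustration of why the reconstruction parallelises so well, the sketch below implements a much-simplified backprojection (parallel-beam, without FDK's cosine weighting or ramp filtering): every voxel accumulates its value independently of all others, which is what allows a CUDA-C implementation to assign one GPU thread per voxel.

```python
# Much-simplified backprojection sketch (parallel-beam, no filtering or
# cone-beam weighting) showing the embarrassingly parallel per-voxel work
# that an FDK GPU implementation maps onto one thread per voxel.
import numpy as np

def backproject(sinogram, angles_deg, size):
    """sinogram[i, s]: projection at angle i, detector bin s (centred)."""
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - centre, ys - centre
    n_bins = sinogram.shape[1]
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # detector coordinate of each pixel for this view
        s = xs * np.cos(theta) + ys * np.sin(theta) + (n_bins - 1) / 2.0
        s_idx = np.clip(np.round(s).astype(int), 0, n_bins - 1)
        recon += sinogram[i, s_idx]          # independent accumulation per pixel
    return recon / len(angles_deg)

# Tiny synthetic example: 4 views of a 32x32 volume slice with a bright centre bin.
angles = np.arange(0, 180, 45)
sino = np.zeros((len(angles), 32))
sino[:, 16] = 1.0
print(backproject(sino, angles, 32).shape)
```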

Relevance:

100.00%

Publisher:

Abstract:

Large margin criteria and discriminative models are two effective improvements for HMM-based speech recognition. This paper proposes a large-margin-trained log-linear model with kernels for CSR. To avoid explicit computation in the high-dimensional feature space and to achieve nonlinear decision boundaries, a kernel-based training and decoding framework is proposed in this work. To make the system robust to noise, a kernel adaptation scheme is also presented. Previous work in this area is extended in two directions. First, most kernels for CSR focus on measuring the similarity between two observation sequences; the proposed joint kernels define a similarity between two observation-label sequence pairs at the sentence level. Second, this paper addresses how to efficiently employ kernels in large margin training and decoding with lattices. To the best of our knowledge, this is the first attempt at using large-margin kernel-based log-linear models for CSR. The model is evaluated on a noise-corrupted continuous digit task, AURORA 2.0. © 2013 IEEE.
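
As an illustration of the general model class (not necessarily the paper's exact parameterisation), a kernelised log-linear model over joint observation-label sequence pairs can be written as

```latex
% Generic kernelised log-linear model with a joint kernel K over
% observation-label sequence pairs (illustrative form only).
P(W \mid O; \boldsymbol{\alpha}) =
  \frac{\exp\big(\textstyle\sum_i \alpha_i \, K\big((O, W), (O_i, W_i)\big)\big)}
       {\sum_{W'} \exp\big(\textstyle\sum_i \alpha_i \, K\big((O, W'), (O_i, W_i)\big)\big)}
```

where the sum over i runs over training pairs, so training and decoding only require kernel evaluations rather than explicit features in the high-dimensional space.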

Relevance:

100.00%

Publisher:

Abstract:

For trust evaluation in peer-to-peer computing, a new quantitative trust-management algorithm is proposed. The algorithm addresses problems that existing algorithms do not handle well, namely the time-decay property of trust and node coalitions. Representative network trust-evaluation algorithms are systematically summarised and analysed, current domestic and international research hotspots are classified, trust-related definitions are given together with the issues such an algorithm should consider, and a complete algorithmic solution is proposed. A trust time-correction function, a domain trust-correction function, a trust-value calibration function and an accuracy function are defined, and trust time-correction and domain-correction algorithms are constructed. Derivations show that the algorithm exhibits good time decay, dependence on historical experience, rewards for newly joining nodes, and coalition properties; a general natural trust-decay curve and the coefficient ranges for eight typical characteristic domains are also given. Experiments evaluate the correctness and effectiveness of the algorithm, and a comparison with the Azzedin algorithm shows that the proposed algorithm significantly improves efficiency and accuracy.
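
As a generic illustration of the time-decay idea only (the paper's specific time-correction, domain-correction, calibration and accuracy functions are not reproduced in the abstract), an exponentially decayed trust aggregate might look like this:

```python
# Generic illustration of time-decayed trust aggregation: older interactions
# contribute less to the current trust value. Not the paper's own functions.
import time

def decayed_trust(interactions, now=None, half_life=3600.0):
    """interactions: list of (timestamp, rating) with rating in [0, 1].
    Returns a weighted average whose weights halve every `half_life` seconds."""
    now = time.time() if now is None else now
    num = den = 0.0
    for ts, rating in interactions:
        w = 0.5 ** ((now - ts) / half_life)   # exponential time decay
        num += w * rating
        den += w
    return num / den if den else 0.0

now = time.time()
history = [(now - 10, 1.0), (now - 7200, 1.0), (now - 86400, 0.0)]
print(round(decayed_trust(history, now), 3))   # recent good behaviour dominates
```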

Relevance:

100.00%

Publisher:

Abstract:

Capillary-based systems for measuring the input impedance of musical wind instruments were first developed in the mid-20th century and remain in widespread use today. In this paper, the basic principles and assumptions underpinning the design of such systems are examined. Inexpensive modifications to a capillary-based impedance measurement set-up made possible due to advances in computing and data acquisition technology are discussed. The modified set-up is able to measure both impedance magnitude and impedance phase even though it only contains one microphone. In addition, a method of calibration is described that results in a significant improvement in accuracy when measuring high impedance objects on the modified capillary-based system. The method involves carrying out calibration measurements on two different objects whose impedances are well-known theoretically. The benefits of performing two calibration measurements (as opposed to the one calibration measurement that has been traditionally used) are demonstrated experimentally through input impedance measurements on two test objects and a Boosey and Hawkes oboe. © S. Hirzel Verlag · EAA.
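
The benefit of two calibration measurements can be seen from a simple linear response model, V = A·Z + B, with two unknown complex constants A and B (an assumed model used here for illustration, not necessarily the paper's formulation): two reference objects of known impedance determine both constants, after which the magnitude and phase of an unknown impedance follow directly.

```python
# Illustrative two-point calibration under an assumed linear response model
# V = A*Z + B with complex A, B; not necessarily the paper's exact model.
import numpy as np

def calibrate(Z_ref1, V_ref1, Z_ref2, V_ref2):
    """Solve for A and B from two reference objects of known impedance."""
    A = (V_ref1 - V_ref2) / (Z_ref1 - Z_ref2)
    B = V_ref1 - A * Z_ref1
    return A, B

def measure(V, A, B):
    """Recover the unknown impedance (magnitude and phase) from a reading."""
    Z = (V - B) / A
    return abs(Z), np.angle(Z, deg=True)

# Synthetic example with made-up constants and reference impedances.
A_true, B_true = 0.8 * np.exp(1j * 0.2), 0.05 + 0.02j
Z1, Z2 = 100.0 + 0j, 2000.0 + 500.0j              # "well-known" references
A, B = calibrate(Z1, A_true * Z1 + B_true, Z2, A_true * Z2 + B_true)
print(measure(A_true * (300.0 + 150.0j) + B_true, A, B))   # ~ (335.4, 26.6 deg)
```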

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is a technological advancement that provides resources through the Internet on a pay-as-you-go basis. Cloud computing uses virtualisation technology to enhance the efficiency and effectiveness of its advantages. Virtualisation is the key to consolidating computing resources so that multiple instances run on each piece of hardware, increasing the utilisation rate of every resource and thus reducing the number of resources that need to be bought, racked, powered, cooled, and managed. Cloud computing has very appealing features; however, many enterprises and users are still reluctant to move into the cloud due to serious security concerns related to the virtualisation layer. It is therefore of foremost importance to secure the virtual environment. In this paper, we present an elastic framework to secure the virtualised environment for trusted cloud computing, called the Server Virtualisation Security System (SVSS). SVSS provides security solutions located on the hypervisor for virtual machines by deploying malicious-activity detection techniques, network traffic analysis techniques, and system resource utilisation analysis techniques. SVSS consists of four modules: an Anti-Virus Control Module, a Traffic Behavior Monitoring Module, a Malicious Activity Detection Module and a Virtualisation Security Management Module. An SVSS prototype has been deployed to validate its feasibility, efficiency and accuracy on a Xen virtualised environment.
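
As a rough illustration of the kind of hypervisor-level signal a system-resource-utilisation analysis module could consume (this is not SVSS code, and the connection URI is a placeholder), the libvirt Python bindings can report per-VM CPU time and memory:

```python
# Rough illustration of hypervisor-level resource monitoring using the libvirt
# Python bindings; the connection URI is a placeholder. Not SVSS code.
import libvirt  # pip install libvirt-python

URI = "xen:///system"   # placeholder; adjust for the local hypervisor

def snapshot_domains():
    conn = libvirt.openReadOnly(URI)
    try:
        report = []
        for dom in conn.listAllDomains():
            state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
            report.append({
                "name": dom.name(),
                "active": bool(dom.isActive()),
                "vcpus": vcpus,
                "memory_mb": mem_kb / 1024.0,
                "cpu_time_s": cpu_time_ns / 1e9,   # cumulative guest CPU time
            })
        return report
    finally:
        conn.close()

if __name__ == "__main__":
    for entry in snapshot_domains():
        print(entry)
```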

Relevance:

100.00%

Publisher:

Abstract:

Background: Large-scale biological jobs on high-performance computing systems require manual intervention if one or more of the computing cores on which they execute fail. This imposes not only a cost for maintaining the job, but also a cost in the time taken to reinstate the job and the risk of losing data and work accomplished by the job before it failed. Approaches which can proactively detect computing core failures and take action to relocate the computing core's job onto reliable cores can make a significant step towards automating fault tolerance. Method: This paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single-core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and core levels. Experiments are pursued in the context of genome searching, a popular computational biology application. Result: The key conclusion is that the approaches proposed are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment in which fault tolerance is studied, centralised and decentralised checkpointing approaches on average add 90% to the actual time for executing the job, whereas in the same experiment the multi-agent approaches add only 10% to the overall execution time.
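
A toy sketch of the proactive idea (not the paper's agent implementation; the health heuristic and the migration call are placeholders):

```python
# Toy sketch of proactive fault tolerance: watch per-core health indicators and
# move a job's work off a core that looks likely to fail, avoiding a full restart.
# The health check and migration call are placeholders, not the paper's agents.
import random

def core_likely_to_fail(core_id):
    """Placeholder health heuristic (e.g. temperature or ECC error counts)."""
    return random.random() < 0.05

def relocate(job, from_core, to_core):
    print(f"job {job}: migrating work from core {from_core} to core {to_core}")

def agent_sweep(job, cores):
    healthy = [c for c in cores if not core_likely_to_fail(c)]
    for c in cores:
        if c not in healthy and healthy:
            relocate(job, c, healthy[0])   # keep the reduction running without a restart

agent_sweep("genome-search-001", cores=list(range(8)))
```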