927 results for peer-to-peer (P2P) computing


Relevance:

100.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to provide an overview of advances in pervasive computing.
Design/methodology/approach – The paper provides a critical analysis of the literature.
Findings – Tools expected to support these advances are: a resource location framework, a data management (e.g. replica control) framework, communication paradigms, and smart interaction mechanisms. Infrastructures are also needed to support pervasive computing applications; an information appliance should be easy for anyone to use, and interaction with the device should be intuitive.
Originality/value – The paper shows how everyday devices with embedded processing and connectivity could interconnect as a pervasive network of intelligent devices that cooperatively and autonomously collect, process and transport information in order to adapt to the associated context and activity.

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is an emerging service technology that has ethical and entrepreneurial implications. As technological innovation increases the attention placed on cloud computing services, more people are focusing on the security and privacy issues framed by ethical guidelines and on how the technology is evolving as an entrepreneurial service innovation. This paper presents a theoretical perspective on how a person adopts cloud computing. The literature on technology innovation and adoption behaviour is examined with a focus on social cognitive theory. A theoretical framework is then presented, which sets out a number of propositions describing the intention of a person to adopt cloud computing services. The roles of technology marketing capability, sustained learning and outcome expectancy are included to help understand the adoption of cloud computing applications. Suggestions for future research and practical implications are stated.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates architectural design potentials for a responsive material system combined with physical computing. Contemporary architects and designers are seeking to integrate physical computing into responsive architectural designs; however, they have largely borrowed mechanical devices and components from engineering technology. There is an opportunity to investigate an unexplored design approach that exploits the responsive capacity of material properties as an alternative to the current focus on mechanical components and discrete sensing devices. This opportunity creates a different design paradigm for responsive architecture, one that investigates the potential to integrate physical computing with responsive materials as one integrated material system. Instead of adopting highly intricate and expensive materials, this approach is explored through accessible, off-the-shelf materials that form a responsive material system called Lumina. Lumina is implemented as an architectural installation called Cloud, which serves as a morphing architectural skin. Cloud is a proof of concept that embodies a responsive material system with physical computing to create a reciprocal and luminous architectural intervention for a selected dark corridor. It represents a different design paradigm for responsive architecture through the alternative exploitation of contemporary materials and parametric design tools. © 2014, The Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong.

Relevance:

100.00%

Publisher:

Abstract:

While High Performance Computing clouds allow researchers to process large amounts of genomic data, complex resource and software configuration tasks must be carried out beforehand. The current trend is to expose applications and data as services, simplifying access to clouds. This paper examines commonly used cloud-based genomic analysis services, introduces the approach of exposing data as services, and proposes two new solutions (HPCaaS and Uncinus) that aim to automate service development, the deployment process and data provision. By comparing and contrasting these solutions, we identify the key mechanisms of service creation, execution and data access required to support non-computing specialists in employing clouds.
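
As a concrete illustration of the "data as services" idea, the hedged sketch below exposes a toy dataset through a plain HTTP endpoint, so that users need no local resource or software configuration. Everything here (the GENOMES dictionary, the port, the route convention) is hypothetical; this is not the HPCaaS or Uncinus implementation.

```python
# Minimal "data as a service" sketch: a dataset behind an HTTP endpoint.
# Hypothetical toy, unrelated to the paper's HPCaaS/Uncinus systems.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

GENOMES = {"sample1": "ACGTACGT", "sample2": "TTGACCA"}  # stand-in data

class DataService(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.strip("/")
        # Return the requested record, or list what is available
        body = json.dumps({key: GENOMES[key]} if key in GENOMES
                          else {"available": sorted(GENOMES)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DataService).serve_forever()
```

A client then needs nothing more than an HTTP GET (e.g. `curl localhost:8000/sample1`), which is the access-simplification argument the paper makes.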

Relevance:

100.00%

Publisher:

Abstract:

Multi-tenancy is a cloud computing phenomenon. Multiple instances of an application occupy and share resources from a large pool, allowing different users to have their own version of the same application running and coexisting on the same hardware but in isolated virtual spaces. In this position paper we survey the current landscape of multi-tenancy, laying out the challenges and complexity of software engineering where multi-tenancy issues are involved. Multi-tenancy allows cloud service providers to better utilise computing resources, supporting the development of more flexible services for customers based on economies of scale, reducing overheads and infrastructural costs. Nevertheless, there are major challenges in migrating from single-tenant applications to multi-tenancy, and these have not been fully explored in research or practice to date. In particular, the reengineering effort for multi-tenancy in Software-as-a-Service cloud applications involves many complex and important aspects that must be taken into consideration, such as security, scalability, scheduling and data isolation. Our study emphasizes scheduling policies and cloud provisioning and deployment with regard to multi-tenancy issues. We employ CloudSim and MapReduce in our experiments to simulate and analyse multi-tenancy models, scenarios, performance, scalability, scheduling and reliability on cloud platforms.
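
To make the scheduling discussion concrete, here is a minimal, hypothetical sketch of one fair-share policy a multi-tenant platform might apply: tasks from several tenants share one worker pool, and the tenant with the least accumulated service time goes next. It is a toy illustration, not the paper's CloudSim/MapReduce experimental setup.

```python
import heapq
from collections import defaultdict

# Toy fair-share scheduler across tenants; illustrative only.
def fair_share(tasks, n_workers):
    # tasks: list of (tenant, duration); returns completion time per tenant
    used = defaultdict(float)            # service time consumed per tenant
    workers = [0.0] * n_workers          # next-free time of each worker
    heapq.heapify(workers)
    finish = {}
    pending = list(tasks)
    while pending:
        # Pick the task whose tenant has received the least service so far
        i = min(range(len(pending)), key=lambda k: used[pending[k][0]])
        tenant, dur = pending.pop(i)
        start = heapq.heappop(workers)   # earliest available worker
        heapq.heappush(workers, start + dur)
        used[tenant] += dur
        finish[tenant] = max(finish.get(tenant, 0.0), start + dur)
    return finish

print(fair_share([("A", 4), ("A", 4), ("B", 2), ("C", 1)], n_workers=2))
```

The design choice illustrated is isolation at the scheduling level: no single tenant can monopolise the shared pool, which is one of the fairness concerns multi-tenancy raises.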

Relevance:

100.00%

Publisher:

Abstract:

The simulated annealing optimization technique has been successfully applied to a number of electrical engineering problems, including transmission system expansion planning. The method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Moreover, it has the ability to provide solutions arbitrarily close to an optimum (i.e. it is asymptotically convergent) as the cooling process slows down. The drawback of the approach is the computational burden: finding optimal solutions may be extremely expensive in some cases. This paper presents a Parallel Simulated Annealing (PSA) algorithm for solving the long-term transmission network expansion planning problem. A strategy that does not affect the basic convergence properties of the sequential simulated annealing algorithm has been implemented and tested. The paper investigates the conditions under which the parallel algorithm is most efficient. The parallel implementations have been tested on three example networks: a small 6-bus network and two complex real-life networks. Excellent results are reported in the test section of the paper: in addition to reductions in computing times, the proposed Parallel Simulated Annealing algorithm has shown significant improvements in solution quality for the largest of the test networks.
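
One parallelisation that preserves the convergence properties of each sequential chain is simply to run independent annealing chains in parallel and keep the best solution; the sketch below illustrates that pattern on a toy one-dimensional objective. The cost function and parameters are stand-ins, not the paper's transmission-network model or its particular parallel strategy.

```python
import math, random
from multiprocessing import Pool

# Toy objective standing in for the expansion-planning cost function.
def cost(x):
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)

def anneal(seed, t0=10.0, alpha=0.995, steps=5000):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    best, t = x, t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 1)
        d = cost(cand) - cost(x)
        # Metropolis acceptance: take improvements always, and worse
        # moves with probability exp(-d / t)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand
        if cost(x) < cost(best):
            best = x
        t *= alpha                      # geometric cooling schedule
    return best

if __name__ == "__main__":
    with Pool(4) as pool:
        results = pool.map(anneal, range(4))   # independent chains
    print(min(results, key=cost))
```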

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the rationale for building up a Telematics Engineering curriculum. Telematics is a strongly computing-oriented area, so the authors initially intended to apply the common requirements described in the computing curricula elaborated by the ACM/IEEE-CS Joint Curriculum Task Force. This experience revealed some problematic aspects of the ACM/IEEE-CS proposal. From the analysis of these problems, a model is proposed to guide the selection and, especially, the treatment of the Telematics curriculum contents. This model can easily be generalized to other strongly computing-oriented curricula, whose number is growing every day.

Relevance:

100.00%

Publisher:

Abstract:

We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latency in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and summarize the physics results obtained in four years of operation of this machine. We discuss two types of physics applications: long simulations on very large systems (which try to mimic, and provide understanding of, the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second), while our equilibrium simulations are unprecedented both for the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
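
For readers unfamiliar with the model, the minimal CPU sketch below shows single-spin-flip Metropolis dynamics for a 2D Edwards-Anderson ±J spin glass, the kind of update Janus performs massively in parallel with bit-packed spins and (almost) no floating point. It is a plain illustrative implementation, not the Janus update engine, and the lattice size and temperature are arbitrary.

```python
import numpy as np

# Toy Metropolis sweep for a 2D Edwards-Anderson +/-J spin glass.
rng = np.random.default_rng(0)
L, beta = 32, 1.0
spins = rng.choice([-1, 1], size=(L, L))
# Quenched random couplings on horizontal and vertical bonds
Jx = rng.choice([-1, 1], size=(L, L))
Jy = rng.choice([-1, 1], size=(L, L))

def sweep(s):
    for i in range(L):
        for j in range(L):
            # Local field from the four neighbours (periodic boundaries;
            # negative indices wrap automatically in NumPy)
            h = (Jx[i, j] * s[i, (j + 1) % L] + Jx[i, j - 1] * s[i, j - 1]
                 + Jy[i, j] * s[(i + 1) % L, j] + Jy[i - 1, j] * s[i - 1, j])
            dE = 2 * s[i, j] * h
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]      # accept the spin flip

for _ in range(10):
    sweep(spins)
print("energy per spin:",
      -(np.sum(Jx * spins * np.roll(spins, -1, 1))
        + np.sum(Jy * spins * np.roll(spins, -1, 0))) / (L * L))
```

Because the couplings and spins are all ±1, the energy change dE takes only a few discrete values, which is exactly the property that lets an FPGA replace floating-point acceptance tests with table lookups and bit operations.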

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Doutor Osvaldo Arede dos Santos, Adjunct Professor in the Technical-Scientific Unit of Informatics of the Escola Superior de Tecnologia of the Instituto Politécnico de Castelo Branco.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we use recent census data, supplemented with case study evidence, to investigate the extent to which professional computing occupations in Australia are constructed around the notion of an ‘ideal’ worker. Census data are used to compare computer professionals with other selected professional occupational groups, illustrating different models of accommodating (or not accommodating) workers who do not fit the ideal model. The computer professionals group is shown to be distinctive in combining low but consistent levels of female representation across age groups, average rates of parenthood, and minimal provisions for working-time flexibility. One strategy employed by women in this environment is the selection of relatively routine technical roles over more time-intensive, consultancy-based work.

Relevance:

100.00%

Publisher:

Abstract:

Pervasive computing applications must be engineered to provide unprecedented levels of flexibility in order to reconfigure and adapt in response to changes in computing resources and user requirements. To meet these challenges, appropriate software engineering abstractions and infrastructure are required as a platform on which to build adaptive applications. In this paper, we demonstrate the use of a disciplined, model-based approach to engineer a context-aware Session Initiation Protocol (SIP) based communication application. This disciplined approach builds on our previously developed conceptual models and infrastructural components, which support the description, acquisition, management and exploitation of arbitrary types of context and user preference information, enabling applications to adapt to context changes.
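
As a toy illustration of the kind of preference-driven adaptation described above, the sketch below maps a context snapshot to a communication action through an ordered list of preference rules. The rule set and action names are invented for illustration and are not the paper's SIP infrastructure or models.

```python
# Hypothetical context-triggered call routing: first matching rule wins.
from typing import Callable, Dict, List, Tuple

Rule = Tuple[Callable[[Dict], bool], str]   # (context predicate, action)

PREFERENCES: List[Rule] = [
    (lambda c: c["activity"] == "meeting", "forward-to-voicemail"),
    (lambda c: c["location"] == "office",  "ring-desk-phone"),
    (lambda c: True,                        "ring-mobile"),   # default
]

def route_call(context: Dict) -> str:
    # Evaluate user preferences against the current context snapshot
    for predicate, action in PREFERENCES:
        if predicate(context):
            return action

print(route_call({"activity": "meeting", "location": "office"}))
print(route_call({"activity": "coding",  "location": "home"}))
```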

Relevance:

100.00%

Publisher:

Abstract:

The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework be used to make the development of autonomic solutions cost-effective, and to establish autonomic computing as a major approach to managing the complexity of today’s large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that extends and generalises our autonomic computing framework further.
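
To sketch what a policy-based autonomic architecture can look like in miniature, the hedged example below implements a tiny control loop whose behaviour is defined entirely by swappable policies, in the spirit of a reconfigurable policy engine. The class and policy names are illustrative, not the framework's actual API.

```python
# Minimal policy-driven self-management loop; names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    condition: Callable[[Dict], bool]   # evaluated against monitored state
    action: Callable[[Dict], None]      # reconfiguration to apply

class PolicyEngine:
    def __init__(self, policies: List[Policy]):
        self.policies = policies        # reconfigurable: swap at runtime

    def step(self, state: Dict) -> None:
        # Monitor/analyse: each policy inspects the state snapshot;
        # plan/execute: every matching policy applies its action.
        for p in self.policies:
            if p.condition(state):
                p.action(state)

# Example policy: scale out when utilisation crosses a threshold
state = {"util": 0.93, "replicas": 2}
engine = PolicyEngine([
    Policy(lambda s: s["util"] > 0.9,
           lambda s: s.update(replicas=s["replicas"] + 1)),
])
engine.step(state)
print(state)   # {'util': 0.93, 'replicas': 3}
```

The generality argument is visible even at this scale: the loop itself is domain-independent, and only the policy objects change between application domains.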

Relevance:

100.00%

Publisher:

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent and interface agent. The discussion of the problem domain and the deployment of the computation agent and the information agent are presented with the analysis, design and implementation of experimental systems in high performance Internet computing and in scalable Web searching.

In the computation agent study, high performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution that copes with the growth of the Web and of the information on it.

Our research reveals that, by deploying distributed software agents in Internet computing, we gain a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
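
The computation-agent idea behind the JAM prototype can be illustrated by how a brute-force search partitions its key space across workers. The sketch below does this for a toy single-byte XOR cipher; it is only a schematic stand-in for the dissertation's Java-based system, and the plausibility test is invented for the example.

```python
# Toy distributed brute-force search: each worker scans one slice of
# the key space independently. Illustrative, not the JAM system.
from concurrent.futures import ProcessPoolExecutor

CIPHERTEXT = bytes(b ^ 0x5A for b in b"attack at dawn")

def search(slice_range):
    lo, hi = slice_range
    for key in range(lo, hi):           # this worker's share of the keys
        plain = bytes(b ^ key for b in CIPHERTEXT)
        if plain.isascii() and b" at " in plain:   # crude plausibility test
            return key, plain
    return None

if __name__ == "__main__":
    slices = [(k, k + 64) for k in range(0, 256, 64)]
    with ProcessPoolExecutor(max_workers=4) as ex:
        for hit in ex.map(search, slices):
            if hit:
                print("key=%#x plaintext=%r" % hit)
```

Because the slices share nothing and need no coordination until results are collected, the same partitioning scheme scales naturally to agents dispatched across Internet hosts.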

Relevance:

100.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code.

In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The assimilation results were compared with those of a pure simulation; the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation.

Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. This thesis presents a non-intrusive approach to coupling a model with a DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines that handle input and output. Apart from making the coupling simple, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. The approach does add overhead, as at every assimilation cycle both the model and the DA procedure have to be initialized; nonetheless, it can be an ideal basis for a benchmark platform for testing DA methods.

The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for seven days between May 16 and July 6, 2009; the effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with measurements recorded at an automatic station in the north-western part of the lake, but because the TSM data are sparse in both time and space, the two could not be well matched. Using multiple automatic stations with real-time data would be important to avoid the temporal sparsity problem; combined with DA, this would, for instance, help in better understanding environmental hazard variables. We have found that using a very large ensemble does not necessarily improve the results, because beyond a certain size additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and this performance limit on ensemble size point towards the emerging area of Reduced Order Modeling (ROM), which saves computational resources by avoiding runs of the full model. Applying ROM with the non-intrusive DA approach might yield a cheaper algorithm that relaxes the computational challenges in modelling and DA.
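
For orientation, the sketch below shows the analysis step of a plain stochastic Ensemble Kalman Filter, the standard building block that VEnKF refines with variational resampling. It assumes a linear observation operator and Gaussian observation error, and it is not the thesis's VEnKF code.

```python
import numpy as np

# Plain stochastic EnKF analysis step (not VEnKF): update an n x m
# ensemble X with observations y through a linear operator H.
def enkf_analysis(X, y, H, R, rng):
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    S = H @ A                                   # anomalies in obs space
    C = (S @ S.T) / (m - 1) + R                 # innovation covariance
    K = (A @ S.T) / (m - 1) @ np.linalg.inv(C)  # Kalman gain
    # Perturb the observations so the analysis spread stays consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(3, 50))   # 3-variable state, 50 members
H = np.array([[1.0, 0.0, 0.0]])          # observe the first variable only
Xa = enkf_analysis(X, y=np.array([0.5]), H=H, R=np.eye(1) * 0.1, rng=rng)
print("analysis mean:", Xa.mean(axis=1))
```

The sample covariance here is exactly where the inbreeding problem mentioned above enters: with too few members the spread of A underestimates the true error, which is what VEnKF's per-cycle resampling is designed to counter.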

Relevance:

100.00%

Publisher:

Abstract:

Fog computing is a paradigm that extends Cloud computing and services to the edge of the network. Like the Cloud, Fog provides data, compute, storage and application services to end users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as the Smart Grid, smart traffic lights in vehicular networks, and software-defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. In contrast to other reviews of Fog computing, this paper further discloses the security and privacy issues of the current Fog computing paradigm. As an example, we study a typical attack, the man-in-the-middle attack, in the discussion of system security in Fog computing, and investigate its stealthy features by examining its CPU and memory consumption on a Fog device. In addition, we discuss the authentication and authorization techniques that can be used in Fog computing; an example authentication technique is introduced to address the security scenario where the connection between Fog and Cloud is fragile.
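
As a hedged sketch of one common building block for authenticating a Fog node to the Cloud over a fragile link, the example below implements a shared-secret challenge-response handshake with a freshness check. It is illustrative only and is not the specific scheme introduced in the article.

```python
import hmac, hashlib, os, time

# Toy challenge-response authentication between a Fog node and the Cloud.
SECRET = os.urandom(32)          # pre-provisioned to both parties

def cloud_challenge():
    return os.urandom(16)        # fresh nonce, defeats replay

def fog_response(challenge):
    # Bind the response to the nonce and a timestamp, then tag with HMAC
    msg = challenge + int(time.time()).to_bytes(8, "big")
    return msg, hmac.new(SECRET, msg, hashlib.sha256).digest()

def cloud_verify(challenge, msg, tag, max_skew=30):
    if not msg.startswith(challenge):
        return False             # response must answer our nonce
    ts = int.from_bytes(msg[len(challenge):], "big")
    if abs(time.time() - ts) > max_skew:
        return False             # stale response
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(tag, hmac.new(SECRET, msg, hashlib.sha256).digest())

c = cloud_challenge()
m, t = fog_response(c)
print("authenticated:", cloud_verify(c, m, t))
```

A nonce-bound, keyed response is exactly what a man-in-the-middle on the Fog-Cloud link cannot forge without the shared secret, which is why challenge-response schemes are a natural fit for the attack scenario studied above.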