704 results for cloud computing, hypervisor, virtualisation, live migration, infrastructure as a service
Abstract:
Many cloud-based applications employ a data centre as a central server to process data generated by edge devices such as smartphones, tablets and wearables. This model places ever-increasing demands on communication and computational infrastructure, with an inevitable adverse effect on Quality-of-Service and Quality-of-Experience. The concept of Edge Computing is predicated on moving some of this computational load towards the edge of the network to harness computational capabilities that are currently untapped in edge nodes, such as base stations, routers and switches. This position paper considers the challenges and opportunities that arise from this new direction in the computing landscape.
Abstract:
Provenance plays a pivotal role in tracing the origin of an artefact and determining how and why it came to be. With the emergence of the cloud and the benefits it brings, services have been rapidly adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, where customers and providers are left pointing fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying past events to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required so that operators and users can define policies for compliance purposes. Current policy standards do not cater for this requirement. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and to express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance, and provided a translator that converts cProvl policies and requests into the XACML format.
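As a rough illustration of the provenance-based policy validation the abstract describes, the following sketch checks an artefact's derivation chain against an approved-sources policy. All names and structures here are hypothetical stand-ins; they are not the paper's cProv model or cProvl syntax.

    # Hypothetical sketch of provenance-based policy validation; the graph
    # layout and policy below are illustrative, not the paper's cProv/cProvl.

    # A toy provenance graph: each entity records what it was derived from.
    provenance = {
        "report.pdf": {"wasDerivedFrom": ["dataset-A", "dataset-B"]},
        "dataset-A":  {"wasDerivedFrom": []},
        "dataset-B":  {"wasDerivedFrom": ["raw-feed"]},
        "raw-feed":   {"wasDerivedFrom": []},
    }

    APPROVED_SOURCES = {"dataset-A", "raw-feed"}  # policy: trusted roots only

    def ancestry(entity):
        """All entities an artefact transitively derives from."""
        seen, stack = set(), [entity]
        while stack:
            for parent in provenance.get(stack.pop(), {}).get("wasDerivedFrom", []):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    def complies(entity):
        """Policy check: every root ancestor must be an approved source."""
        roots = {a for a in ancestry(entity)
                 if not provenance.get(a, {}).get("wasDerivedFrom")}
        return roots <= APPROVED_SOURCES

    print(complies("report.pdf"))  # True: the roots are dataset-A and raw-feed

A real deployment would answer such queries over captured cloud events rather than a static dictionary, which is precisely where the expressiveness of the provenance model matters.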
Abstract:
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on placing processing and storage capabilities close to the remote units, which is especially well suited to the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and used in common for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
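To make the fronthaul burden concrete, a back-of-envelope calculation for a classical CPRI-style fronthaul shows why full centralisation is so demanding. The figures below are illustrative assumptions, not values taken from the paper.

    # Rough CPRI-style fronthaul rate for one 20 MHz LTE carrier (illustrative).
    sample_rate = 30.72e6   # I/Q samples per second for a 20 MHz LTE carrier
    iq = 2                  # I and Q components per sample
    bit_width = 15          # bits per component (a common CPRI choice)
    antennas = 2            # 2x2 MIMO
    control = 16 / 15       # CPRI control-word overhead
    line_coding = 10 / 8    # 8b/10b line coding

    rate = sample_rate * iq * bit_width * antennas * control * line_coding
    print(f"{rate / 1e9:.4f} Gbit/s")  # ~2.46 Gbit/s of fronthaul for ~150 Mbit/s of user data

The order-of-magnitude gap between the raw I/Q stream and the user data it carries is what makes local processing attractive for small-cell clusters with limited backhaul.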
Abstract:
Active Grids are a form of grid infrastructure in which the grid network itself is active and programmable. These grids directly support applications with value-added services such as data migration, compression, adaptation and monitoring. Such services are particularly important for eResearch applications, which are by their very nature performance-critical and data-intensive. We propose an architecture for improving the flexibility of Active Grids through web services, which enable Active Grid services to be easily and flexibly configured, monitored and deployed from practically any platform or application. The architecture is called WeSPNI ('Web Services based on Programmable Networks Infrastructure'). We present the architecture together with some early experimental results on using web services to monitor data movement in an active grid.
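A minimal sketch of the monitoring idea follows: a web service on an active grid node exposing data-movement statistics over HTTP. The endpoint path, counter names and JSON shape are illustrative assumptions, not the WeSPNI interface.

    # Minimal sketch of a monitoring web service for data movement on an
    # active grid node. Endpoint and fields are illustrative assumptions.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Stand-ins for counters a programmable network node would maintain.
    transfer_stats = {"node": "grid-node-1", "bytes_in": 0, "bytes_out": 0,
                      "active_migrations": 0}

    class MonitorHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/monitor/transfers":
                body = json.dumps(transfer_stats).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)  # any platform can poll this endpoint
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 8080), MonitorHandler).serve_forever()

Because the interface is plain HTTP and JSON, it can be consumed from practically any platform or application, which is the flexibility argument the architecture makes.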
Abstract:
Access All was a performance produced following a three-month mentorship in web-based performance that I was commissioned to conduct for the performance company Igneous. This live, triple-site performance event for three performers in three remote venues was specifically designed for presentation at Access Grid Nodes: conference rooms located around the globe and equipped with high-end, open source computer teleconferencing technology that allowed multiple nodes to cross-connect with each other. Whilst each room was set up somewhat differently, they all deployed the same basic infrastructure of multiple projectors, cameras and sound, as well as a reconfigurable floorspace. At that time these relatively formal setups imposed a clear series of limitations in terms of software capabilities and basic infrastructure, so there was much interest in understanding how far their capabilities might be pushed.

Numerous performance experiments were undertaken between three Access Grid nodes at QUT Brisbane, VISLAB Sydney and the Manchester Supercomputing Centre, England, culminating in a public performance staged simultaneously between the sites, with local audiences at each venue and others online. Access All was devised in collaboration with the interdisciplinary performance company Bonemap, Kelli Dipple (Interarts curator, Tate Modern London) and Mike Stubbs (British curator and Director of FACT, Liverpool).

This period of research and development was instigated and shaped by a public lecture I had earlier delivered in Sydney for the ‘Global Access Grid Network, Super Computing Global Conference’, entitled 'Performance Practice across Electronic Networks'. The findings of this work went on to inform numerous networked and performative works produced from 2002 onwards.
Abstract:
This paper demonstrates that in order to design successful ubiquitous computing systems, designers must concurrently consider both the end-user interactions in the context of use and the sustainability of the technology and its underlying infrastructure. We describe methods used to create more useful collaboration and communication between users, designers and engineers in designing ubiquitous computing systems. We tested these methods in a real domain in an attempt to create a system that is affordable, minimally disrupts the end-user's workplace and improves human-computer interaction.
Abstract:
Most infrastructure projects share the same characteristics in terms of management aspects and shortcomings. The human factor is believed to be the major drawback, owing to the nature of unstructured problems, which can further contribute to management conflicts. This growing complexity in infrastructure projects has shifted the paradigm of policy makers towards adopting Information and Communication Technology (ICT) as a driving force. For this reason, it is vital to fully utilise recent technologies to accelerate the management process, particularly in the planning phase. To this end, many tools have been developed to assist decision making in construction project management. The variety of uncertainties and alternatives in decision making can be handled by a useful tool such as a Decision Support System (DSS). However, the recent trend shows that most DSSs in this area concentrate only on model development and neglect several fundamentals of computing. Thus, most of them have been found complicated and less efficient at supporting decision making within project teams. Given the current shortcomings of much of this software, it is desirable for a DSS to provide more simplicity, a better collaborative platform, efficient data manipulation and responsiveness to user needs. By considering these factors, the paper illustrates four challenges for future DSS development: requirement engineering, communication frameworks, data management and interoperability, and software usability.
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is a single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems to support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations, including modernised GPS, Galileo, GLONASS, and Compass, with multiple frequencies, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating networks, single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:

• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services with a network of networks;
• complex computation algorithms and processes;
• a greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK).

From these four challenges follow two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using a Grid Computing facility, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed in this research: a layered framework consisting of 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and on Grid Australia. Open source software for the Networked Transport of RTCM via Internet Protocol (Ntrip) is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new NRTK framework based on Grid Computing, whilst some aspects of the system's performance are yet to be improved in future work.
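The Ntrip download step the abstract mentions is an HTTP-like exchange over TCP. A minimal sketch of an Ntrip (v1) client follows; the caster host, port, mountpoint and credentials are placeholders, not the stations used in this research.

    # Minimal Ntrip v1 client sketch: request a mountpoint from a caster and
    # stream raw RTCM data. Host, mountpoint and credentials are placeholders.
    import base64
    import socket

    HOST, PORT = "ntrip.example.org", 2101   # placeholder caster address
    MOUNTPOINT = "MOUNT1"                    # placeholder reference station
    AUTH = base64.b64encode(b"user:password").decode()

    request = (
        f"GET /{MOUNTPOINT} HTTP/1.0\r\n"
        f"User-Agent: NTRIP SimpleClient/0.1\r\n"
        f"Authorization: Basic {AUTH}\r\n\r\n"
    )

    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(request.encode())
        header = sock.recv(1024)        # an Ntrip v1 caster answers "ICY 200 OK"
        if b"200" in header:
            received = 0
            while True:
                chunk = sock.recv(4096)  # raw RTCM frames
                if not chunk:
                    break
                received += len(chunk)   # hand off to job scheduling / RTK computing here
            print(f"streamed {received} bytes of RTCM data")

In the proposed framework, one such stream per reference station would feed the service layer, which then schedules the RTK computation across the Grid nodes.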
Abstract:
Modern enterprise knowledge management systems typically require distributed approaches and the integration of numerous heterogeneous sources of information. Topic Maps can provide a powerful foundation for these tasks: they not only offer a semantic-net-like means of knowledge representation and the possibility of using ontologies to model knowledge structures, but also provide concepts for linking these knowledge structures with unstructured data stored in files, external documents, etc. In this paper, we present the architecture and prototypical implementation of a Topic Map application infrastructure, the ‘Topic Grid’, which enables transparent, node-spanning access to different Topic Maps distributed in a network.
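The core Topic Maps idea the abstract relies on (topics, occurrences that point at unstructured resources, and typed associations between topics) can be sketched in a few lines. The structures below are a toy model for illustration, not the Topic Grid implementation or the XTM serialisation.

    # Illustrative toy model of Topic Maps concepts, not the 'Topic Grid' API.
    # Topics carry names and occurrences; occurrences link each topic to
    # unstructured external resources (files, documents, URLs).
    topics = {
        "knowledge-management": {
            "names": ["Knowledge Management"],
            "occurrences": [("whitepaper", "file:///docs/km-overview.pdf")],
        },
        "topic-maps": {
            "names": ["Topic Maps", "ISO/IEC 13250"],
            "occurrences": [("specification", "https://example.org/iso13250.html")],
        },
    }

    # An association is a typed relationship between topics, with a role
    # for each participating topic.
    associations = [
        {"type": "is-technology-for",
         "roles": {"technology": "topic-maps",
                   "application-area": "knowledge-management"}},
    ]

    def occurrences_of(topic_id):
        """Resolve a topic to the external resources attached to it."""
        return topics.get(topic_id, {}).get("occurrences", [])

    print(occurrences_of("topic-maps"))

A node-spanning infrastructure such as the Topic Grid would answer the same lookup transparently even when the topics and their occurrences live in maps distributed across different nodes.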
Abstract:
The Street Computing workshop, held in conjunction with OZCHI 2009, solicits papers discussing new research directions, early research results, works-in-progress and critical surveys of prior research in the areas of ubiquitous computing and interaction design for urban environments. Urban spaces have unique characteristics: typically, they are densely populated, buzzing with life twenty-four hours a day, seven days a week. These traits afford many opportunities, but they also present many challenges: traffic jams, smog and pollution, stress placed on public services, and more. Computing technology, particularly the kind that can be placed in the hands of citizens, holds much promise in combating some of these challenges. Yet computation is not merely a tool for overcoming challenges; rather, when embedded appropriately in our everyday lives, it becomes a tool of opportunity: for shaping how our cities evolve, for enabling us to interact with our city and its people in new ways, and for uncovering useful but hidden relationships and correlations between elements of the city. The increasing availability of an urban computing infrastructure has led to new and exciting ways for inhabitants to interact with their city. This includes interaction with a wide range of services (e.g. public transport, public services), conceptual representations of the city (e.g. local weather and traffic conditions), the availability of a variety of shared and personal displays (e.g. public, ambient, mobile) and the use of different interaction modes (e.g. tangible, gesture-based, token-based). This workshop solicits papers that address the above themes in some way. We encourage researchers to submit work that deals with the challenges and possibilities posed by the availability of urban computing infrastructure, such as sensors and middleware for sensor networks. This includes new and innovative ways of interacting with and within urban environments; user experience design and participatory design approaches for urban environments; social aspects of urban computing; and other related areas.
Abstract:
Developing research data management infrastructure and services, and making research data more discoverable and accessible to the research community, are key priorities at the national, state and individual university levels. This paper will discuss and reflect upon a collaborative project between Griffith University and the Queensland University of Technology to commission a Metadata Hub, or metadata aggregation service, based upon open source software components. It will describe the role that metadata aggregation services play in modern research infrastructure and argue that this role is a critical one.
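The paper does not name a harvesting protocol here; OAI-PMH is a common open standard for this kind of metadata aggregation, so the sketch below assumes it. The repository endpoint URL is a placeholder.

    # Sketch of one harvesting pass for a metadata aggregation ("Metadata Hub")
    # service, assuming OAI-PMH; the endpoint is a placeholder, not a real
    # Griffith or QUT repository address.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE = "https://repository.example.edu/oai"   # placeholder OAI-PMH endpoint
    NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
          "dc": "http://purl.org/dc/elements/1.1/"}

    url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    # Each record carries a Dublin Core description that the hub would
    # normalise and index to make the data discoverable.
    for record in tree.iterfind(".//oai:record", NS):
        for title in record.iterfind(".//dc:title", NS):
            print(title.text)

    # A full harvester follows the resumptionToken to page through the feed.
    token = tree.find(".//oai:resumptionToken", NS)
    if token is not None and token.text:
        next_url = BASE + "?verb=ListRecords&resumptionToken=" + token.text

Aggregating such feeds from multiple institutional repositories into one searchable index is essentially the role a Metadata Hub plays in the research infrastructure described.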
Abstract:
Most infrastructure project developments are complex in nature, particularly in the planning phase. During this stage, many vague alternatives are tabled, from the strategic to the operational level. Human judgement and decision making are characterised by biases, errors and the use of heuristics. These factors are intangible and hard to measure because they are subjective and qualitative in nature. The problem with human judgement becomes more complex when a group of people is involved, as the variety of stakeholders may cause conflict due to differences in personal judgement. The available alternatives thus increase the complexity of the decision-making process. It is therefore desirable to find ways of enhancing the efficiency of decision making to avoid misunderstandings and conflict within organisations. As a result, numerous attempts have been made to solve problems in this area by leveraging technologies such as decision support systems. However, most construction project management decision support systems concentrate only on model development and neglect fundamentals of computing such as requirement engineering, data communication, data management and human-centred computing. Thus, these decision support systems are complicated and less efficient at supporting the decision making of project team members. It is desirable for decision support systems to be simpler, to provide a better collaborative platform, to allow for efficient data manipulation, and to adequately reflect user needs. In this chapter, a framework for a more desirable decision support system environment is presented. Some key issues related to decision support system implementation are also described.