704 results for cloud computing, hypervisor, virtualization, live migration, infrastructure as a service
Abstract:
Network connectivity offers the potential for geographically separated musicians to play together in real time. This paper describes a trans-Atlantic networked musical livecoding performance between Andrew Sorensen in Germany (at the Schloss Dagstuhl seminar on Collaboration and Learning through Live Coding) and Ben Swift in San Jose (at VL/HCC) in September 2013. We describe the infrastructure developed to enable this performance.
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is achieved through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem, in which the problem is transformed into a constrained combinatorial optimization problem and solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using a heterogeneous MapReduce placement approach that does not consider the spare resources of existing MapReduce computations. The experimental results also demonstrate the good scalability of this new approach.
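The abstract does not spell out the constructive algorithm itself; purely as an illustration of what a constructive placement heuristic of this general kind can look like, the following minimal Python sketch greedily reuses spare slots of already-running VMs before renting the cheapest VM type per slot. The VM types, slot counts, and costs are hypothetical, and this is not the paper's algorithm.

```python
"""Illustrative greedy constructive placement for MapReduce tasks.

A minimal sketch only: the paper's actual constructive algorithm and cost
model are not given in the abstract, so all names (VMType, place_tasks) and
the per-task demand of one slot are hypothetical assumptions."""
from dataclasses import dataclass

@dataclass
class VMType:
    name: str
    slots: int          # map/reduce task slots the VM type offers
    hourly_cost: float  # cost of renting one VM of this type

@dataclass
class VM:
    vm_type: VMType
    used_slots: int = 0
    def spare(self):
        return self.vm_type.slots - self.used_slots

def place_tasks(num_tasks, vm_types, existing_vms):
    """Place tasks, reusing spare slots of existing (already paid for) VMs
    first, then renting the cheapest-per-slot VM type for the remainder."""
    placement, extra_cost = [], 0.0
    # 1) consume spare capacity of VMs that are already running
    for vm in existing_vms:
        while num_tasks > 0 and vm.spare() > 0:
            vm.used_slots += 1
            num_tasks -= 1
            placement.append(("reuse", vm.vm_type.name))
    # 2) rent new VMs of the type with the lowest cost per slot
    cheapest = min(vm_types, key=lambda t: t.hourly_cost / t.slots)
    while num_tasks > 0:
        num_tasks -= min(num_tasks, cheapest.slots)
        extra_cost += cheapest.hourly_cost
        placement.append(("rent", cheapest.name))
    return placement, extra_cost

if __name__ == "__main__":
    types = [VMType("small", 2, 0.05), VMType("large", 8, 0.17)]
    running = [VM(types[0], used_slots=1), VM(types[1], used_slots=6)]
    print(place_tasks(10, types, running))
```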
Abstract:
Recent advances in optical and fluorescent protein technology have rapidly raised expectations in cell biology, allowing quantitative insights into dynamic intracellular processes like never before. However, quantitative live-cell imaging comes with many challenges, including how best to translate dynamic microscopy data into numerical outputs that can be used to make meaningful comparisons rather than relying on representative data sets. Here, we use analysis of focal adhesion turnover dynamics as a specific, straightforward example of how to image, measure, and analyze intracellular protein dynamics; we believe the same thought process can provide guidance for understanding dynamic microscopy data of other intracellular structures.
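As a concrete, purely illustrative example of turning a turnover measurement into a number that can be compared across conditions, the sketch below fits an exponential decay to a background-corrected adhesion intensity trace to estimate a disassembly rate constant and half-life. The trace is synthetic and the model choice is an assumption; the paper's own analysis pipeline is not specified in the abstract.

```python
"""Minimal sketch: estimating a focal adhesion disassembly rate constant by
fitting an exponential decay to a background-corrected intensity trace.
The trace, model, and parameter names are illustrative assumptions."""
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, i0, k, offset):
    return i0 * np.exp(-k * t) + offset

# synthetic example trace: intensity sampled every 30 s during disassembly
t = np.arange(0, 20 * 30, 30.0)                                  # seconds
intensity = exp_decay(t, 100.0, 0.004, 10.0) + np.random.normal(0, 2, t.size)

popt, pcov = curve_fit(exp_decay, t, intensity, p0=(intensity[0], 0.01, 0.0))
i0, k, offset = popt
print(f"disassembly rate k = {k:.4f} 1/s, half-life = {np.log(2) / k:.1f} s")
```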
Abstract:
The north Australian beef industry is complex and dynamic. It is strategically positioned to access new and existing export markets. To prosper in a global economy, it will require strong processing and live cattle sectors, continued rationalisation of infrastructure, uptake of appropriate technology, and the synergy obtained when industry sectors unite and cooperate to maintain market advantage. Strategies to address food safety, animal welfare, the environment and other consumer concerns must be delivered. Strategic alliances with quality assurance systems will develop. These alliances will be based on economies of scale and on vertical cooperation, rather than vertical integration. Industry sectors will need to increase their contribution to Research, Development and Extension. These contributions need to be global in outlook. Industry sectors should also be aware that change (positive or negative) in one sector will impact on other sectors. Feedback along the food chain is essential to maximise productivity and market share.
Abstract:
There have been several studies on the performance of TCP-controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP-controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis combines two models: one is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, in which the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterating between these two models yields the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, due to MAC-layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behavior quite accurately when compared with simulations.
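The abstract describes the analysis as an iteration between two coupled sub-models; the sketch below shows only that fixed-point structure, with deliberately crude stand-in functions for the DCF and TCP-window sub-models (the paper's actual equations are not reproduced here).

```python
"""Sketch of the iterative coupling described in the abstract: alternate
between (a) a WLAN/DCF sub-model mapping per-station TCP window sizes to
head-of-the-line (HOL) probabilities at the AP, and (b) a TCP sub-model
mapping drop probabilities and HOL share to mean window sizes.  Both
sub-model functions below are crude stand-ins, not the paper's equations."""
import numpy as np

def hol_probabilities(mean_windows):
    # stand-in: a station's share of AP HOL slots is taken proportional to
    # its mean TCP window (a proxy for how many of its packets are queued)
    w = np.asarray(mean_windows, dtype=float)
    return w / w.sum()

def mean_tcp_window(drop_probs, hol_probs, w_max=64.0, ap_buffer=100.0):
    # stand-in: window limited by the loss rate and by the station's share
    # of the AP buffer (its HOL probability times the buffer size)
    p = np.asarray(drop_probs, dtype=float)
    loss_limited = np.sqrt(1.5 / np.maximum(p, 1e-6))
    return np.minimum(w_max, np.minimum(loss_limited, hol_probs * ap_buffer))

def iterate(drop_probs, n_iter=200, tol=1e-9):
    windows = np.full(len(drop_probs), 10.0)
    for _ in range(n_iter):
        hol = hol_probabilities(windows)
        new_windows = mean_tcp_window(drop_probs, hol)
        if np.max(np.abs(new_windows - windows)) < tol:
            break
        windows = new_windows
    return hol_probabilities(windows), windows

print(iterate([0.01, 0.1, 0.2]))
```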
Abstract:
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need to process high-velocity data in near real time. Unlike in batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows because of variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable for meeting the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based lookahead approach, addressing variations not only in the input data rates but also in the performance of the underlying cloud infrastructure. In addition, we propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from the Amazon AWS IaaS public cloud. Our results show an improvement of up to 20% in the overall profit compared to a reactive adaptation algorithm.
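As a rough illustration of the prediction-based lookahead idea (not PLAStiCC itself), the sketch below chooses the VM count that maximizes predicted profit over a forecast window and contrasts it with a purely reactive sizing rule; the rate forecasts, prices, and profit model are hypothetical.

```python
"""Illustrative prediction-based lookahead provisioning, in the spirit of
(but not reproducing) the PLAStiCC idea described in the abstract."""

def lookahead_vm_count(predicted_rates, vm_throughput=500.0,
                       vm_cost_per_step=0.10, value_per_msg=0.0005,
                       max_vms=20):
    """Choose one VM count to hold for the whole lookahead window,
    maximizing predicted profit (processing value minus rental cost)."""
    best_n, best_profit = 1, float("-inf")
    for n in range(1, max_vms + 1):
        profit = 0.0
        for rate in predicted_rates:
            processed = min(rate, n * vm_throughput)  # msgs handled this step
            profit += processed * value_per_msg - n * vm_cost_per_step
        if profit > best_profit:
            best_n, best_profit = n, profit
    return best_n

def reactive_vm_count(current_rate, vm_throughput=500.0):
    """Baseline: provision just enough VMs for the rate seen right now."""
    return max(1, -(-int(current_rate) // int(vm_throughput)))  # ceil division

predicted = [1200, 2500, 4000, 3800, 1500]   # forecast msgs per time step
print(lookahead_vm_count(predicted), reactive_vm_count(1200))
```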
Abstract:
Computer-generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential-occluder list for each individual hologram-plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram-plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
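A heavily simplified, CPU-only sketch of the per-sample occluder-list idea is given below: each hologram-plane sample sums spherical-wave contributions only from object points not blocked by any candidate occluder in its list. The Gaussian surface interpolation, GPU parallelization, and nonuniform sampling of the actual method are omitted, and the sphere-occluder visibility test and optical parameters are illustrative assumptions.

```python
"""Simplified point-source hologram computation with per-sample occluder
lists.  Geometry, wavelength, and the sphere-occluder model are assumptions,
and sharing of visibility information between neighboring samples is left out."""
import numpy as np

WAVELENGTH = 633e-9                      # red laser, metres (assumed)
K = 2.0 * np.pi / WAVELENGTH

def segment_blocked(sample, point, occluders, radius=1e-4):
    """True if the segment sample->point passes within `radius` of any
    occluder centre lying strictly between the two endpoints."""
    d = point - sample
    length = np.linalg.norm(d)
    d = d / length
    for c in occluders:
        t = np.dot(c - sample, d)
        if 0.0 < t < length - 1e-9:
            if np.linalg.norm(sample + t * d - c) < radius:
                return True
    return False

def hologram(samples, points, amplitudes, occluder_lists):
    """Sum spherical-wave contributions of visible object points at each
    hologram-plane sample; occluder_lists[i] holds candidate blockers for
    sample i (kept per sample, as in the abstract's description)."""
    field = np.zeros(len(samples), dtype=complex)
    for i, s in enumerate(samples):
        for p, a in zip(points, amplitudes):
            if segment_blocked(s, p, occluder_lists[i]):
                continue                  # point occluded for this sample
            r = np.linalg.norm(p - s)
            field[i] += a * np.exp(1j * K * r) / r
    return field

samples = np.array([[x, 0.0, 0.0] for x in np.linspace(-1e-3, 1e-3, 5)])
points = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.2]])
amps = np.ones(len(points))
occl = [points[:1]] * len(samples)   # nearer point may occlude the farther one
print(np.abs(hologram(samples, points, amps, occl)))
```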
Abstract:
Coastal and marine ecosystems support diverse and important fisheries throughout the nation’s waters, hold vast storehouses of biological diversity, and provide unparalleled recreational opportunities. Some 53% of the total U.S. population live on the 17% of land in the coastal zone, and these areas become more crowded every year. Demands on coastal and marine resources are rapidly increasing, and as coastal areas become more developed, the vulnerability of human settlements to hurricanes, storm surges, and flooding events also increases. Coastal and marine environments are intrinsically linked to climate in many ways. The ocean is an important distributor of the planet’s heat, and this distribution could be strongly influenced by changes in global climate over the 21st century. Sea-level rise is projected to accelerate during the 21st century, with dramatic impacts in low-lying regions where subsidence and erosion problems already exist. Many other impacts of climate change on the oceans are difficult to project, such as the effects on ocean temperatures and precipitation patterns, although the potential consequences of various changes can be assessed to a degree. In other instances, research is demonstrating that global changes may already be significantly impacting marine ecosystems, such as the impact of increasing nitrogen on coastal waters and the direct effect of increasing carbon dioxide on coral reefs. Coastal erosion is already a widespread problem in much of the country and has significant impacts on undeveloped shorelines as well as on coastal development and infrastructure. Along the Pacific Coast, cycles of beach and cliff erosion have been linked to El Niño events that elevate average sea levels over the short term and alter storm tracks that affect erosion and wave damage along the coastline. These impacts will be exacerbated by long-term sea-level rise. Atlantic and Gulf coastlines are especially vulnerable to long-term sea-level rise as well as any increase in the frequency of storm surges or hurricanes. Most erosion events here are the result of storms and extreme events, and the slope of these areas is so gentle that a small rise in sea level produces a large inland shift of the shoreline. When buildings, roads and seawalls block this natural migration, the beaches and shorelines erode, threatening property and infrastructure as well as coastal ecosystems.
Abstract:
We present a video-based system which interactively captures the geometry of a 3D object in the form of a point cloud, then recognizes and registers known objects in this point cloud in a matter of seconds (fig. 1). In order to achieve interactive speed, we exploit both efficient inference algorithms and parallel computation, often on a GPU. The system can be broken down into two distinct phases: geometry capture, and object inference. We now discuss these in further detail. © 2011 IEEE.
Abstract:
A number of methods are commonly used today to collect infrastructure spatial data (time-of-flight, visual triangulation, etc.). However, current practice lacks a solution that is accurate, automatic, and cost-efficient at the same time. This paper presents a videogrammetric framework for acquiring spatial data of infrastructure that holds promise for addressing this limitation. It uses a calibrated set of low-cost, high-resolution video cameras that is progressively traversed around the scene and aims to produce a dense 3D point cloud that is updated in each frame. It allows for progressive reconstruction, as opposed to point-and-shoot followed by point cloud stitching. The feasibility of the framework is studied in this paper. The required steps of the process are presented, and the unique challenges of each step are identified. Results specific to each step are also presented.
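A minimal sketch of the "update the cloud every frame" idea, under the assumption that per-frame projection matrices are already available from the calibrated camera set, might look like the following; the ORB features and OpenCV calls are stand-ins, since the abstract does not prescribe a particular detector or library.

```python
"""Minimal sketch of progressive point-cloud growth from calibrated video.
Assumes 3x4 projection matrices per frame are already known; features and
library calls are stand-ins, not the paper's implementation."""
import cv2
import numpy as np

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def triangulate_frame_pair(img1, img2, P1, P2):
    """Return new 3D points triangulated from one frame pair."""
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2 x N
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    hom = cv2.triangulatePoints(P1, P2, pts1, pts2)             # 4 x N
    return (hom[:3] / hom[3]).T                                 # N x 3

def grow_cloud(frame_pairs, projections):
    """Append each frame pair's points to one progressively built cloud."""
    cloud = np.empty((0, 3))
    for (img1, img2), (P1, P2) in zip(frame_pairs, projections):
        cloud = np.vstack([cloud, triangulate_frame_pair(img1, img2, P1, P2)])
    return cloud
```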
Abstract:
As-built models have been proven useful in many project-related applications, such as progress monitoring and quality control. However, they are not widely produced in most projects because considerable manual effort is still necessary to convert remote sensing data from photogrammetry or laser scanning into an as-built model. In order to automate the generation of as-built models, the first and fundamental step is to automatically recognize infrastructure-related elements in the remote sensing data. This paper outlines a framework for creating visual pattern recognition models that can automate the recognition of infrastructure-related elements based on their visual features. The framework starts with identifying the visual characteristics of infrastructure element types and representing them numerically using image analysis tools. The derived representations, along with their relative topology, are then used to form element visual pattern recognition (VPR) models. So far, the VPR models of four infrastructure-related elements have been created using the framework. The high recognition performance of these models validates the effectiveness of the framework in recognizing infrastructure-related elements.
Abstract:
Infrastructure spatial data, such as the orientation, location, boundaries, and areas of in-place structures, play a very important role in many civil infrastructure development and rehabilitation applications, such as defect detection, site planning, on-site safety assistance, and others. In order to acquire these data, a number of modern optical-based spatial data acquisition techniques can be used. These techniques are based on stereo vision, optics, time of flight, etc., and have distinct characteristics, benefits, and limitations. The main purpose of this paper is to compare these optical-based infrastructure spatial data acquisition techniques against civil infrastructure application requirements. To achieve this goal, the benefits and limitations of these techniques were identified. Subsequently, the techniques were compared according to application requirements such as spatial accuracy, automation of acquisition, and portability of devices. With the help of this comparison, unique characteristics of each technique were identified so that practitioners can select an appropriate technique for their own applications.
Abstract:
Camera motion estimation is one of the most significant steps in structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions were estimated from the corresponding image points by applying the aforementioned algorithms, and the results were evaluated.
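For reference, a typical instance of one of the compared pipelines, feature matching followed by 5-point relative pose estimation inside RANSAC, can be sketched with OpenCV as below. SIFT stands in for SURF (which requires the opencv-contrib nonfree build), and the intrinsic matrix is a placeholder for the calibrated camera, so this reflects the general technique rather than the paper's exact setup.

```python
"""Sketch: two-view relative motion via OpenCV's 5-point essential-matrix
solver.  The intrinsics K and the SIFT stand-in for SURF are assumptions."""
import cv2
import numpy as np

K = np.array([[1200.0, 0.0, 960.0],     # hypothetical calibrated intrinsics
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])

def relative_motion(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # ratio-test matching of descriptors
    good = []
    for pair in cv2.BFMatcher().knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    # 5-point algorithm inside RANSAC, then decompose E into R and t
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # rotation and unit-norm translation direction
```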