974 results for Computing cost
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original grouping genetic algorithm, ours uses an innovative coding scheme and eliminates the inversion operator, which is an essential operator in the original algorithm. The new grouping genetic algorithm is evaluated by experiments, and the experimental results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
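To make the grouping idea concrete, the sketch below shows how a bin-packing-style placement could be encoded group by group, with a crossover that inherits whole groups (machines) from one parent and repairs the rest greedily. The task sizes, capacity, and operators are illustrative assumptions, not the coding scheme actually proposed in the paper.

```python
# Hypothetical sketch of a group-based encoding for a bin-packing-style
# placement problem; task sizes, capacity, and operators are illustrative.
import random

def load(group, size):
    return sum(size[t] for t in group)

def first_fit(task_ids, size, capacity):
    """Greedy decoder: put each task on the first machine where it fits."""
    groups = []
    for t in task_ids:
        for g in groups:
            if load(g, size) + size[t] <= capacity:
                g.append(t)
                break
        else:
            groups.append([t])          # open a new machine
    return groups

def group_crossover(parent_a, parent_b, size, capacity):
    """Grouping crossover: inherit whole groups (machines) from one parent,
    then reinsert the displaced tasks with the greedy decoder."""
    kept = random.sample(parent_a, k=max(1, len(parent_a) // 2))
    placed = {t for g in kept for t in g}
    leftover = [t for g in parent_b for t in g if t not in placed]
    return kept + first_fit(leftover, size, capacity)

size = {0: 4, 1: 8, 2: 1, 3: 4, 4: 2, 5: 1, 6: 7, 7: 3}
pa = first_fit(list(size), size, capacity=10)
pb = first_fit(sorted(size, key=size.get), size, capacity=10)
print(group_crossover(pa, pb, size, capacity=10))
```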
Abstract:
The ability of cloud computing to provide almost unlimited storage, backup and recovery, and quick deployment contributes to its widespread attention and implementation. Cloud computing has also become an attractive choice for mobile users. Due to the limited capabilities of mobile devices, such as power scarcity and inability to handle computation-intensive tasks, selected computation needs to be outsourced to resourceful cloud servers. However, there are many challenges that need to be addressed in computation offloading for mobile cloud computing, such as communication cost, connectivity maintenance and incurred latency. This paper presents a taxonomy of computation offloading approaches that aim to address these challenges. The taxonomy provides guidelines to identify research scopes in computation offloading for mobile cloud computing. We also outline directions and anticipated trends for future research.
Abstract:
During the early design stages of construction projects, accurate and timely cost feedback is critical to design decision making. This is particularly challenging for cost estimators, as they must quickly and accurately estimate the cost of the building while the design is still incomplete and evolving. State-of-the-art software tools typically use a rule-based approach to generate detailed quantities from the design details present in a building model and relate them to the cost items in a cost estimating database. In this paper, we propose a generic approach for creating and maintaining a cost estimate using flexible mappings between a building model and a cost estimate. The approach uses queries on the building design to populate views, and each view is then associated with one or more cost items. The benefit of this approach is that the flexibility of modern query languages allows the estimator to encode a broad variety of relationships between the design and the estimate. It also avoids the use of a common standard to which both designers and estimators must conform, giving the estimator added flexibility and functionality in their work.
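As a rough illustration of the query-to-view-to-cost-item idea, the sketch below runs SQL queries over a toy building-element table and maps each query to a cost item. The table schema, the queries, and the unit rates are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of query-based mappings between a building model and a
# cost estimate; the model fields, queries, and unit rates are assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE element (id INTEGER, type TEXT, area_m2 REAL, fire_rated INTEGER);
INSERT INTO element VALUES
  (1, 'wall', 24.0, 0), (2, 'wall', 18.5, 1), (3, 'slab', 80.0, 0);
""")

# Each 'view' is a query over the design; each view maps to a cost item.
mappings = [
    # (view query, cost item, unit rate in $/m^2) -- rates are made up
    ("SELECT SUM(area_m2) FROM element WHERE type='wall' AND fire_rated=0",
     "standard partition wall", 55.0),
    ("SELECT SUM(area_m2) FROM element WHERE type='wall' AND fire_rated=1",
     "fire-rated partition wall", 90.0),
    ("SELECT SUM(area_m2) FROM element WHERE type='slab'",
     "concrete slab", 120.0),
]

total = 0.0
for query, item, rate in mappings:
    qty = conn.execute(query).fetchone()[0] or 0.0
    cost = qty * rate
    total += cost
    print(f"{item:28s} {qty:7.1f} m^2  ->  ${cost:,.2f}")
print(f"{'estimate total':28s}             ->  ${total:,.2f}")
```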
Abstract:
The generation of a correlation matrix for a set of genomic sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. Each sequence may be millions of bases long and there may be thousands of such sequences to compare, so not all sequences may fit into main memory at the same time. Each sequence needs to be compared with every other sequence, so we will generally need to page some sequences in and out more than once. In order to minimize execution time we need to minimize this I/O. This paper develops an approach for faster and scalable computation of large correlation matrices through maximal exploitation of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory and can be applied to different bioinformatics problems with different correlation matrix sizes. The significant performance improvement of the approach over previous work is demonstrated through benchmark examples.
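A minimal sketch of the block-wise strategy is shown below: sequences are paged in a block at a time so that each block is read from disk a bounded number of times. The load_block and correlate functions are placeholders for the real I/O and comparison routines, and the block size is arbitrary.

```python
# A minimal sketch of block-wise computation of a pairwise correlation matrix
# when not all sequences fit in memory; load_block and correlate are assumed
# placeholders for the real I/O and comparison routines.
import numpy as np

def load_block(ids):
    """Stand-in for reading sequences from disk; here we fabricate toy data."""
    rng = np.random.default_rng(0)
    return {i: rng.integers(0, 4, size=1000) for i in ids}

def correlate(a, b):
    """Stand-in pairwise score, e.g. fraction of matching positions."""
    return float(np.mean(a == b))

def blocked_correlation(n_seqs, block):
    """Compare every pair while holding at most two blocks in memory,
    so each block is paged in O(n_seqs / block) times rather than O(n_seqs)."""
    C = np.eye(n_seqs)
    blocks = [list(range(s, min(s + block, n_seqs)))
              for s in range(0, n_seqs, block)]
    for bi, ids_i in enumerate(blocks):
        rows = load_block(ids_i)                    # one read per block
        for ids_j in blocks[bi:]:
            cols = rows if ids_j is ids_i else load_block(ids_j)
            for i in ids_i:
                for j in ids_j:
                    if j > i:
                        C[i, j] = C[j, i] = correlate(rows[i], cols[j])
    return C

print(blocked_correlation(n_seqs=6, block=2).round(2))
```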
Abstract:
Increased focus on energy cost savings and carbon footprint reduction efforts has improved the visibility of building energy simulation, which has become a mandatory requirement of several building rating systems. Despite developments in building energy simulation algorithms and user interfaces, there are major challenges associated with building energy simulation; an important one is the computational demand and processing time. In this paper, we analyze the opportunities and challenges associated with this topic while executing a set of 275 parametric energy models simultaneously in EnergyPlus using a High Performance Computing (HPC) cluster. Successful parallel computing implementation of building energy simulations will not only reduce the time needed to obtain results and enable scenario development for different design considerations, but may also enable dynamic Building Information Modeling (BIM) integration and near real-time decision-making. This paper concludes with a discussion of future directions and opportunities associated with building energy modeling simulations.
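The sketch below illustrates, in a hedged way, how a batch of parametric runs might be fanned out across workers; run_energy_model is a placeholder for however the EnergyPlus simulations are actually launched on the cluster, and the parameter grid and returned metric are invented for illustration.

```python
# Hedged sketch of fanning out parametric simulation runs across workers.
# run_energy_model stands in for the real simulation launch; its formula
# for "annual_kwh" is fake and only there to make the example runnable.
from concurrent.futures import ProcessPoolExecutor
import itertools

def run_energy_model(params):
    """Placeholder: build an input file from `params`, launch the simulation,
    and return a summary metric."""
    wall_r, glazing, setpoint = params
    return {"params": params,
            "annual_kwh": 100000 / wall_r + 500 * glazing + 80 * setpoint}

# A small parametric grid; the real study sweeps 275 model variants.
grid = list(itertools.product([2.0, 3.5, 5.0],   # wall insulation R-value
                              [1, 2, 3],         # glazing layers
                              [21, 23, 25]))     # cooling setpoint (C)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_energy_model, grid))
    best = min(results, key=lambda r: r["annual_kwh"])
    print(f"{len(results)} runs, best variant: {best['params']}")
```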
Abstract:
As computational models in fields such as medicine and engineering become more refined, their resource requirements increase. Initially, these needs have been met using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility. HPC users are therefore tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems are different, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while the performance of virtual CPUs (vCPUs) is satisfactory, network throughput may lead to difficulties.
Abstract:
Organisations are constantly seeking new ways to improve operational efficiencies. This study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A number of optimisation techniques are proposed to explore and assess alternative execution scenarios. The objective function is represented by a cost structure that captures different process dimensions. An experimental evaluation is conducted to analyse the performance and scalability of the optimisation techniques: integer linear programming (ILP), hill climbing, tabu search, and our previously proposed hybrid genetic algorithm approach. The findings demonstrate that the hybrid genetic algorithm is scalable and performs better than the other techniques. Moreover, we argue that the use of ILP is unrealistic in this setup and cannot handle complex cost functions such as the ones we propose. Finally, we show how cost-related insights can be gained from improved execution scenarios and how these can be used to put forward recommendations for reducing process-related cost and overhead within organisations.
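To illustrate one of the simpler techniques, the sketch below applies hill climbing to a toy task-to-resource assignment with a cost function that trades off resource spend against cycle time. The scenario encoding, durations, rates, and weighting are assumptions for illustration, not the paper's actual cost structure.

```python
# Minimal hill-climbing sketch over alternative execution scenarios;
# tasks, resources, durations and the time/cost weighting are made up.
import random

TASKS = ["check", "assess", "approve", "notify"]
RESOURCES = {"junior": 30, "senior": 70, "bot": 5}          # cost per hour
DURATION = {("check", "junior"): 2, ("check", "senior"): 1, ("check", "bot"): 3,
            ("assess", "junior"): 4, ("assess", "senior"): 2, ("assess", "bot"): 6,
            ("approve", "junior"): 3, ("approve", "senior"): 1, ("approve", "bot"): 8,
            ("notify", "junior"): 1, ("notify", "senior"): 1, ("notify", "bot"): 1}

def cost(scenario, time_weight=10.0):
    """Weighted trade-off between resource spend and total cycle time."""
    hours = sum(DURATION[(t, r)] for t, r in scenario.items())
    spend = sum(DURATION[(t, r)] * RESOURCES[r] for t, r in scenario.items())
    return spend + time_weight * hours

def hill_climb(steps=200, seed=1):
    rng = random.Random(seed)
    current = {t: rng.choice(list(RESOURCES)) for t in TASKS}
    for _ in range(steps):
        neighbour = dict(current)
        neighbour[rng.choice(TASKS)] = rng.choice(list(RESOURCES))  # reassign one task
        if cost(neighbour) < cost(current):
            current = neighbour
    return current, cost(current)

print(hill_climb())
```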
Abstract:
RFID is an important technology that can be used to create the ubiquitous society. However, an RFID system uses an open radio frequency signal to transfer information, which poses many serious threats to its privacy and security. In general, the computing and storage resources of an RFID tag are very limited, which makes it difficult to solve its security and privacy problems, especially for low-cost RFID tags. In order to ensure the security and privacy of low-cost RFID systems, we propose a lightweight authentication protocol based on a hash function. This protocol ensures forward security and prevents information leakage, location tracing, eavesdropping, replay attacks and spoofing. The protocol achieves strong authentication of the reader to the tag through two rounds of authentication, and it transfers only part of the encrypted tag identifier in each session, so it is difficult for an adversary to intercept the whole identifier of a tag. The protocol is simple and requires few computing and storage resources, making it very suitable for low-cost RFID systems.
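A schematic of the partial-identifier idea is sketched below: the tag hashes its identifier with a nonce and a shared key, and reveals only half of the digest per session. The message layout, split sizes, and key handling are illustrative assumptions rather than the exact protocol from the paper.

```python
# Hedged sketch of a hash-based challenge-response in which the tag reveals
# only part of its hashed identifier per session; details are illustrative.
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

TAG_ID = b"tag-0042"
SHARED_KEY = secrets.token_bytes(16)          # provisioned to tag and back-end

def tag_respond(reader_nonce: bytes, session: int) -> bytes:
    """Tag: hash the identifier with the nonce and key, but send only half
    of the digest, alternating halves across sessions."""
    digest = h(TAG_ID, SHARED_KEY, reader_nonce)
    return digest[:16] if session % 2 == 0 else digest[16:]

def reader_verify(reader_nonce: bytes, session: int, half: bytes) -> bool:
    """Back-end: recompute the digest for the known tag and match the half."""
    digest = h(TAG_ID, SHARED_KEY, reader_nonce)
    expected = digest[:16] if session % 2 == 0 else digest[16:]
    return secrets.compare_digest(expected, half)

nonce = secrets.token_bytes(8)
half = tag_respond(nonce, session=7)
print("authenticated:", reader_verify(nonce, session=7, half=half))
```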
Abstract:
The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows subscribers to express their interests very accurately. In order to implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that forward event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network, which is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by offloading some of the content routing cost from the routers to the clients, thereby reducing the routers' workload. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
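One ingredient of poset-based routing tables is the covering relation between subscription filters, sketched below under an assumed attribute-range filter model: a new subscription need not be propagated upstream if an already-forwarded filter covers it.

```python
# Minimal sketch of the subscription "covering" check used to keep
# content-based forwarding tables small; the range-filter model is assumed.
def covers(general, specific):
    """True if every event matching `specific` also matches `general`.
    A filter maps attribute -> (low, high) allowed range."""
    for attr, (lo, hi) in general.items():
        if attr not in specific:
            return False                 # `specific` is looser on this attribute
        slo, shi = specific[attr]
        if slo < lo or shi > hi:
            return False
    return True

routing_table = [
    {"price": (0, 100)},                  # already forwarded upstream
    {"price": (0, 50), "qty": (1, 10)},
]

new_sub = {"price": (10, 40), "qty": (2, 5)}
needs_forwarding = not any(covers(f, new_sub) for f in routing_table)
print("forward upstream:", needs_forwarding)   # False: an existing filter covers it
```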
Abstract:
Background: Advances in cancer diagnosis and treatment have significantly improved survival rates, though survivors' subsequent health needs are often not adequately addressed by current health services. A National Health and Medical Research Council (NHMRC) Partnerships Project awarded a national collaborative project to develop, trial and evaluate the clinical benefits and cost effectiveness of an e-health enabled structured health promotion intervention, the Women's Wellness after Cancer Program (WWACP). The aim of this e-health enabled multimodal intervention is to improve health-related quality of life in women previously treated for the target cancers. Aim: The WWACP is a 12-week web-based, interactive, holistic program. Primary outcomes for this project are to promote a positive change in health-related quality of life (HRQoL) and a reduction in Body Mass Index (BMI) in the women undertaking the WWACP compared to women who receive usual care. Secondary outcomes include managing other side effects of cancer treatment through evidence-based nutrition and exercise practices, and dealing with stress, sleep, menopause and sexuality issues. Methods: The single-blinded multi-center randomized controlled trial recruited a total of 330 women within 24 months of completion of chemotherapy and/or radiotherapy. Women were randomly assigned to either a usual care or an intervention group. Women in the intervention group were provided with an interactive iBook and journal, a web interface, and three virtual consultations by experienced cancer nurses. A variety of methods were utilized to enable positive self-efficacy and lifestyle changes, including online coaching with a registered nurse trained in the intervention, plus written educational and health promotional information. The program has been delivered through e-health enabled interfaces, which enable virtual delivery via desktop and mobile computing devices. Importantly, this enables accessibility for rural and regional women in Australia, who are frequently geographically disadvantaged in terms of health care provision. Results: Research focusing on alternative methods of delivering post-treatment or survivorship care in cancer using web-based interfaces is limited, but emerging evidence suggests that Internet interventions can increase psychological and physical wellbeing in cancer patients. The WWACP trial aims to establish the effectiveness of delivery of the program in terms of positive patient outcomes, cost effectiveness and flexibility. The trial will be completed in September and results will be presented at the conference. Conclusions: Women after acute hematological, breast and gynecological cancer treatments demonstrate good cancer survival rates but face residual health problems which are amenable to behavioral interventions. The conclusion of active treatment is a key 'teachable moment' in which sustainable positive lifestyle change can be achieved if patients receive education and psychological support targeting key treatment-related health problems and known chronic disease risk factors.
Abstract:
The growth of high-performance applications in computer graphics, signal processing and scientific computing is a key driver for high-performance, fixed-latency, pipelined floating-point dividers. Solutions available in the literature use large lookup tables for double-precision floating-point operations. In this paper, we propose a cost-effective, fixed-latency pipelined divider using a modified Taylor-series expansion for double-precision floating-point operations. We reduce chip area by using a smaller lookup table. We show that the latency of the proposed divider is 49.4 times the latency of a full adder. The proposed divider reduces chip area by about 81% compared with the pipelined divider in [9], which is also based on a modified Taylor series.
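For readers unfamiliar with the technique, a generic Taylor-series division scheme (not necessarily the paper's exact modified formulation) looks as follows: the divisor is split so that a small lookup table supplies an approximate reciprocal, and a truncated series refines the quotient.

```latex
% Generic Taylor-series division sketch (assumed notation, not the paper's):
\[
  b = b_{\mathrm{hi}} + b_{\mathrm{lo}}, \qquad
  \varepsilon = \frac{b_{\mathrm{lo}}}{b_{\mathrm{hi}}}, \qquad
  \lvert\varepsilon\rvert \ll 1,
\]
\[
  \frac{a}{b}
  = \frac{a}{b_{\mathrm{hi}}\,(1+\varepsilon)}
  \approx \frac{a}{b_{\mathrm{hi}}}
    \left(1 - \varepsilon + \varepsilon^{2} - \varepsilon^{3}\right).
\]
```

Here $1/b_{\mathrm{hi}}$ is read from a lookup table indexed by the leading bits of the divisor; taking more leading bits shrinks $\lvert\varepsilon\rvert$, trading table size against the number of series terms needed to reach double precision.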
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since that increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors and failure events. This in turn influences an availability-aware middleware to take proactive action (even before the application is affected, in case the system and the application have low recoverability). The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
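As an illustration of the health-vector idea, the sketch below accumulates weighted error events into a crude failure-probability score and triggers a proactive action when a threshold is crossed. The error types, weights, and threshold are assumptions, not the paper's prediction model.

```python
# Illustrative sketch: turn observed hardware error events into a
# failure-probability score that a middleware could act on.
from collections import Counter
from dataclasses import dataclass, field

ERROR_WEIGHTS = {"corrected_ecc": 0.02, "disk_retry": 0.05, "thermal_alarm": 0.15}

@dataclass
class HealthVector:
    events: Counter = field(default_factory=Counter)

    def observe(self, error_type: str):
        self.events[error_type] += 1

    def failure_probability(self) -> float:
        """Crude score: weighted error counts squashed into [0, 1)."""
        score = sum(ERROR_WEIGHTS.get(e, 0.1) * n for e, n in self.events.items())
        return score / (1.0 + score)

def maybe_migrate(node: str, hv: HealthVector, threshold: float = 0.5):
    p = hv.failure_probability()
    if p >= threshold:
        print(f"{node}: predicted failure p={p:.2f} >= {threshold}, migrating workload")
    else:
        print(f"{node}: p={p:.2f}, no action")

hv = HealthVector()
for e in ["corrected_ecc"] * 10 + ["disk_retry"] * 4 + ["thermal_alarm"] * 4:
    hv.observe(e)
maybe_migrate("blade-07", hv)
```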
Abstract:
The management and coordination of business-process collaboration is changing because of globalization, specialization, and innovation. Service-oriented computing (SOC) is a means towards business-process automation, and recently many industry standards have emerged to become part of the service-oriented architecture (SOA) stack. In a globalized world, organizations face new challenges in setting up and carrying out collaborations in semi-automated ecosystems for business services. To be efficient and effective, many companies express their services electronically in what we term business-process as a service (BPaaS). Companies then source BPaaS on the fly from third parties if they are not able to create all service value in-house, for reasons such as lack of resources, lack of know-how, or cost- and time-reduction needs. Thus, a need emerges for BPaaS-HUBs that not only store service offers and requests together with information about their issuing organizations and assigned owners, but that also allow an evaluation of trust and reputation in an anonymized electronic service marketplace. In this paper, we analyze the requirements, design architecture and system behavior of such a BPaaS-HUB to enable a fast setup and enactment of business-process collaboration. Moving into a cloud-computing setting, the results of this paper allow system designers to quickly evaluate which services they need for instantiating the BPaaS-HUB architecture. Furthermore, the results show the protocol of a backbone service bus that allows communication between the services that implement the BPaaS-HUB. Finally, the paper analyzes where an instantiation must assign additional computing resources to avoid performance bottlenecks.
Abstract:
In this paper we present a combination of technologies to provide an Energy-on-Demand (EoD) service that enables low-cost innovation suitable for microgrid networks. The system is designed around the low-cost and simple Rural Energy Device (RED) Box, which, in combination with a Short Message Service (SMS) communication methodology, serves as an elementary proxy for the smart meters typically used in urban settings. Further, customer behavior and familiarity in using such devices, based on mobile-phone experience, has been incorporated into the design philosophy. Customers are incentivized to interact with the system, thus providing valuable behavioral and usage data to the Utility Service Provider (USP). Data collected over time can be used by the USP for analytics, envisioned to run on remote computing services, i.e., cloud computing. Cloud computing allows computational resources to be shared at the virtual level across several networks. The customer-system interaction is facilitated by a third-party Telecom Service Provider (TSP). The approximate cost of the RED Box is envisaged to be under USD 10 at production scale.
Abstract:
Campaigners are increasingly using online social networking platforms to promote products, ideas and information. A popular method of promoting a product or even an idea is to incentivize individuals to evangelize it vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives to the entire population, and hence incentives need to be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We aim to do this by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals to be provided incentives so as to minimize the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized so as to maximize the outreach size for a given cost budget. The optimization problem turns out to be non-trivial; it involves quantities that must be computed by numerically solving a fixed-point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
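Schematically, and under assumed notation, the cost-minimisation variant could be written as below, where p_d is the degree distribution, c_d the per-individual incentive cost, x_d the fraction of degree-d individuals incentivised, and S(x) the expected outreach size obtained from the percolation fixed point; the abstract's result is that such problems reduce to a simple linear program for fairly general cost structures.

```latex
% Schematic (assumed) form of the cost-minimisation variant:
\begin{align*}
  \min_{x}\quad & \sum_{d} p_d\, c_d\, x_d
      && \text{(expected incentive cost)}\\
  \text{s.t.}\quad & S(x) \ge S_{\min}
      && \text{(target outreach size)}\\
  & 0 \le x_d \le 1 \quad \forall d
      && \text{(fraction of each degree class incentivised)}
\end{align*}
```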