893 results for Print on demand


Relevance: 80.00%

Abstract:

In this paper we develop compilation techniques for realizing applications described in a High Level Language (HLL) on a runtime reconfigurable architecture. The compiler determines Hyper Operations (HyperOps), which are subgraphs of an application's data flow graph comprising elementary operations with strong producer-consumer relationships. These HyperOps are hosted on computation structures that are provisioned on demand at runtime. We also report compiler optimizations that collectively reduce the overheads of data-driven computations in runtime reconfigurable architectures. On average, HyperOps offer a 44% reduction in total execution time and an 18% reduction in management overheads compared to using basic blocks as coarse-grained operations. We show that HyperOps formed using our compiler are suitable for supporting data flow software pipelining.
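The HyperOp formation described above (grouping operations with strong producer-consumer relationships into subgraphs) can be sketched as simple graph clustering. This is a hypothetical illustration, not the authors' algorithm; the edge weights and threshold are assumptions:

```python
# Sketch: cluster dataflow-graph nodes into HyperOp candidates by merging
# across high-traffic producer-consumer edges (union-find). The traffic
# weights and the merge threshold are illustrative assumptions.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def form_hyperops(num_ops, edges, threshold):
    """edges: (producer, consumer, traffic) triples; merge heavy edges."""
    uf = UnionFind(num_ops)
    for prod, cons, traffic in edges:
        if traffic >= threshold:
            uf.union(prod, cons)
    groups = {}
    for op in range(num_ops):
        groups.setdefault(uf.find(op), []).append(op)
    return list(groups.values())

# Example: ops 0-1-2 chained by heavy edges, op 3 loosely coupled.
edges = [(0, 1, 10), (1, 2, 8), (2, 3, 1)]
print(form_hyperops(4, edges, threshold=5))  # [[0, 1, 2], [3]]
```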

Relevance: 80.00%

Abstract:

The vision sense of standalone robots is limited by line of sight and onboard camera capabilities, but processing video from remote cameras puts a high computational burden on robots. This paper describes the Distributed Robotic Vision Service (DRVS), which implements an on-demand distributed visual object detection service. Robots specify visual information requirements in terms of regions of interest and object detection algorithms. DRVS dynamically distributes the object detection computation to remote vision systems with processing capabilities, and the robots receive high-level object detection information. DRVS relieves robots of managing sensor discovery and reduces data transmission compared to image-sharing models of distributed vision. As a proof of concept, navigating a sensorless robot using remote vision systems is demonstrated in simulation.
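The request/dispatch flow described above can be sketched as a tiny service that matches a robot's region of interest and requested detector to a remote vision node. All names and fields here are hypothetical illustrations, not the DRVS API:

```python
# Hypothetical sketch of on-demand dispatch: a robot supplies a region of
# interest (ROI) and a detector name, and the service picks a remote vision
# node whose camera coverage contains the ROI and that runs the detector.

from dataclasses import dataclass

@dataclass
class VisionNode:
    name: str
    coverage: tuple   # (x_min, y_min, x_max, y_max) region this camera sees
    detectors: set    # object detection algorithms this node can run

def covers(region, roi):
    return (region[0] <= roi[0] and region[1] <= roi[1]
            and region[2] >= roi[2] and region[3] >= roi[3])

def dispatch(nodes, roi, detector):
    """Return the first node able to run `detector` over `roi`, else None."""
    for node in nodes:
        if detector in node.detectors and covers(node.coverage, roi):
            return node.name
    return None

nodes = [
    VisionNode("cam-hall", (0, 0, 10, 10), {"person"}),
    VisionNode("cam-dock", (10, 0, 30, 20), {"person", "pallet"}),
]
print(dispatch(nodes, roi=(12, 2, 20, 8), detector="pallet"))  # cam-dock
```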

Relevance: 80.00%

Abstract:

Bibliography: p. 144-148.

Relevance: 80.00%

Abstract:

The National Energy Efficient Building Project (NEEBP) Phase One report, published in December 2014, investigated “process issues and systemic failures” in the administration of the energy performance requirements in the National Construction Code. It found that most stakeholders believed that under-compliance with these requirements is widespread across Australia, with similar issues being reported in all states and territories. The report found that many different factors were contributing to this outcome and, as a result, offered many recommendations that together would be expected to remedy the systemic issues reported. To follow up on the Phase One report, three additional projects were commissioned as part of Phase Two of the overall NEEBP project. This report deals with the development and piloting of an Electronic Building Passport (EBP) tool, a project undertaken jointly by pitt&sherry and a team at the Queensland University of Technology (QUT) led by Dr Wendy Miller. The other Phase Two projects cover audits of Class 1 buildings and issues relating to building alterations and additions. The passport concept aims to provide all stakeholders with (controlled) access to the key documentation and information that they need to verify the energy performance of buildings. This trial project deals with residential buildings but could in principle apply to any building type. Nine councils, across all states, were recruited to help develop and test a pilot electronic building passport tool, enabling an assessment of the extent to which these councils currently utilise documentation to track the compliance of residential buildings with the energy performance requirements in the National Construction Code (NCC). Overall we found that none of the participating councils are currently compiling all of the energy performance-related documentation that would demonstrate code compliance.
The key reasons for this include: a major lack of clarity on precisely what documentation should be collected; cost and budget pressures; low public/stakeholder demand for the documentation; and a pragmatic judgement that non-compliance with any regulated documentation requirements represents a relatively low risk for them. Some councils reported producing documentation, such as certificates of final completion, only on demand. Only three of the nine participating councils reported regularly conducting compliance assessments or audits utilising this documentation and/or inspections. Overall we formed the view that documentation and information tracking processes operating within the building standards and compliance system are not working to assure compliance with the Code’s energy performance requirements. In other words, the Code, and its implementation under state and territory regulatory processes, is falling short as a ‘quality assurance’ system for consumers. As a result it is likely that the new housing stock is under-performing relative to policy expectations: consuming unnecessary amounts of energy, imposing unnecessarily high energy bills on occupants, and generating unnecessary greenhouse gas emissions. At the same time, councils noted that demand for documentation relating to building energy performance was low. All the participating councils in the EBP pilot agreed that documentation and information processes need to work more effectively if the potential regulatory and market drivers towards energy efficient homes are to be harnessed. These findings are fully consistent with the Phase One NEEBP report. It was also agreed that an EBP system could potentially play an important role in improving documentation and information processes. However, only one of the participating councils indicated that they might adopt such a system on a voluntary basis.
The majority felt that such a system would only be taken up if it were:
- a nationally agreed system, imposed as a mandatory requirement under state or national regulation;
- capable of being used by multiple parties, including councils, private certifiers, building regulators, builders and energy assessors in particular; and
- fully integrated into their existing document management systems, or at least seamlessly compatible rather than a separate, unlinked tool.

Further, we note that the value of an EBP in capturing statistical information relating to the energy performance of buildings would be much greater if an EBP were adopted on a nationally consistent basis. Councils were clear that a key impediment to the take-up of an EBP system is that they are facing very considerable budget and staffing challenges. They report that they are often unable to meet all community demands from the resources available to them, and are therefore unlikely to provide resources to support the roll-out of an EBP system on a voluntary basis. Overall, we conclude from this pilot that the public good would be well served if the Australian, state and territory governments continued to develop and implement an Electronic Building Passport system in a cost-efficient and effective manner. This development should occur with detailed input from building regulators, the Australian Building Codes Board (ABCB), councils and private certifiers in the first instance. This report provides a suite of recommendations (Section 7.2) designed to advance the development and guide the implementation of a national EBP system.

Relevance: 80.00%

Abstract:

This paper asks a new question: how can RFID technology be used to market products in supermarkets, and how can its performance or ROI (return on investment) be measured? We answer the question by proposing a simulation model in which customers become aware of other customers' real-time shopping behavior and may hence be influenced by their purchases and purchase levels. The proposed model is orthogonal to the sales model and can have similar effects: an increase in overall shopping volume. Managers often struggle to predict the ROI of purchasing such a technology; this simulation sets out to provide answers to questions such as the percentage increase in sales when real-time purchase information is given to other customers. The simulation is also flexible enough to incorporate any given model of customer behavior tailored to a particular supermarket, setting, event or promotion. The results, although preliminary, are promising for using RFID technology to market products in supermarkets, and suggest several dimensions for influencing customers via feedback, real-time marketing, targeted advertisement and on-demand promotions. Several other parameters are discussed, including herd behavior, fake customers, privacy, optimality of the sales-price margin, and the ROI of investing in RFID technology for marketing purposes. © 2010 Springer Science+Business Media B.V.
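The core feedback loop the simulation studies (customers buying with a probability that rises with observed real-time purchases) can be sketched as a toy model. The influence function and all parameters below are assumptions for illustration, not the paper's model:

```python
# Toy purchase-influence simulation: each round, every customer buys with a
# base probability plus a term proportional to the purchases observed in the
# previous round (the "real-time purchase information" channel).

import random

def simulate(num_customers, base_p, influence, rounds, seed=0):
    rng = random.Random(seed)
    total_sales = 0
    recent = 0  # purchases visible from the previous round
    for _ in range(rounds):
        bought = 0
        for _ in range(num_customers):
            p = min(1.0, base_p + influence * recent)
            if rng.random() < p:
                bought += 1
        total_sales += bought
        recent = bought
    return total_sales

no_feedback = simulate(100, 0.10, 0.0, 20)
with_feedback = simulate(100, 0.10, 0.005, 20)
print(no_feedback, with_feedback)  # the feedback run should sell at least as much
```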

Relevance: 80.00%

Abstract:

This work proposes a supermarket optimization simulation model called Swarm-Moves, based on self-organized complex system studies, to identify parameters and values that can influence customers to buy more on impulse in a given period of time. In the proposed model, customers are assumed to have trolleys equipped with technology such as RFID that can pass product information directly from the store to them in real time, and vice versa. Customers can therefore see other customers' purchase patterns while constantly informing the store of their own shopping behavior. This is easily achieved because the trolleys "know" what products they contain at any point. The Swarm-Moves simulation is a virtual supermarket providing the visual display to run and test the proposed model. The simulation is also flexible enough to incorporate any given model of customer behavior tailored to a particular supermarket, setting, event or promotion. The results, although preliminary, are promising for using RFID technology to market products in supermarkets, and suggest several dimensions for influencing customers via feedback, real-time marketing, targeted advertisement and on-demand promotions. ©2009 IEEE.

Relevance: 80.00%

Abstract:

CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted at a single-user environment) such as encyclopedias, magazines and games. With CD-ROM capacities of up to 3 GB available in the near future, they will form an integral part of Video on Demand (VoD) servers for storing full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach and make a detailed study of multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory-hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, in the third section, we discuss the problem of optimal placement of MPEG streams on CD-ROMs.
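The C-SCAN policy the paper adapts can be sketched in its textbook form: serve pending requests in ascending position order during one sweep, then jump back and serve the remainder on the next sweep. This is the generic algorithm, not the paper's CD-ROM-specific adaptation:

```python
# Textbook Circular SCAN (C-SCAN) ordering: requests at or ahead of the head
# are served in ascending order; requests behind the head wait for the next
# sweep, which restarts from the lowest position.

def c_scan_order(requests, head):
    """Return the C-SCAN service order for block positions `requests`."""
    ahead = sorted(r for r in requests if r >= head)   # current sweep
    behind = sorted(r for r in requests if r < head)   # next sweep
    return ahead + behind

print(c_scan_order([95, 180, 34, 119, 11, 123, 62, 64], head=50))
# [62, 64, 95, 119, 123, 180, 11, 34]
```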

Relevance: 80.00%

Abstract:

Sensor network applications such as environmental monitoring demand that the data collection process be carried out for the longest possible time. Our paper addresses this problem by presenting a routing scheme that ensures that the monitoring network remains connected, so that the live sensor nodes deliver data for a longer duration. We analyze the role of relay nodes (neighbours of the base station) in maintaining network connectivity and present a routing strategy that, for a particular class of networks, approaches the optimal as the set of relay nodes becomes larger. We then use these findings to develop an appropriate distributed routing protocol using potential-based routing. The basic idea of potential-based routing is to define a (scalar) potential value at each node in the network and forward data to the neighbour with the highest potential. We propose a potential function and evaluate its performance through simulations. The results show that our approach performs better than the well-known lifetime maximization policy proposed by Chang and Tassiulas (2004), as well as AODV (Ad hoc On-Demand Distance Vector routing) proposed by Perkins (1997).
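The forwarding rule described above (send data to the neighbour with the highest potential) can be sketched directly. The potential values below are made up for illustration; the paper proposes its own potential function:

```python
# Potential-based forwarding sketch: each node hands the packet to whichever
# neighbour has the highest scalar potential, until the sink is reached.
# The topology and potential values are illustrative assumptions.

def next_hop(node, neighbours, potential):
    """Pick the neighbour with the highest potential value."""
    return max(neighbours[node], key=lambda n: potential[n])

def route(src, sink, neighbours, potential, max_hops=10):
    path = [src]
    while path[-1] != sink and len(path) <= max_hops:
        path.append(next_hop(path[-1], neighbours, potential))
    return path

# Tiny network: the sink has the highest potential, relays sit in between.
neighbours = {"a": ["b", "c"], "b": ["a", "sink"], "c": ["a", "b"]}
potential = {"a": 1.0, "b": 2.0, "c": 1.5, "sink": 5.0}
print(route("a", "sink", neighbours, potential))  # ['a', 'b', 'sink']
```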

Relevance: 80.00%

Abstract:

There are several ways of storing electrical energy in chemical and physical forms and retrieving it on demand; ultracapacitors are one of them. This article presents a taxonomy of ultracapacitors and describes various types of rechargeable-battery electrodes that can be used to realize hybrid ultracapacitors in conjunction with a high-surface-area graphitic-carbon electrode. While electrical energy is stored in a battery electrode in chemical form, it is stored in physical form as charge in the electrical double layer formed between the electrolyte and the high-surface-area carbon electrodes. The article discusses various types of hybrid ultracapacitors along with their possible applications.
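The physical, double-layer half of the storage described above can be quantified with the standard capacitor energy relation E = 1/2 · C · V²; the cell values below are illustrative:

```python
# Standard capacitor energy relation applied to an ultracapacitor cell.
# The 3000 F / 2.7 V figures are typical commercial-cell values used here
# purely for illustration, not numbers from the article.

def capacitor_energy_wh(capacitance_f, voltage_v):
    joules = 0.5 * capacitance_f * voltage_v ** 2
    return joules / 3600.0  # 1 Wh = 3600 J

# A 3000 F cell charged to 2.7 V stores roughly 3 Wh.
print(round(capacitor_energy_wh(3000, 2.7), 2))  # 3.04
```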

Relevance: 80.00%

Abstract:

A cooling slope (CS) has been used in this study to prepare semi-solid slurry of A356 Al alloy, with a view to on-demand slurry generation for the rheo-pressure die casting process. Understanding the physics of microstructure evolution during cooling slope slurry formation is important for producing semi-solid slurry with the desired shape, size and morphology of the primary Al phase. A mixture of spherical and rosette-shaped primary Al phase has been observed in the samples collected during melt flow through the slope, as well as in the cast (mould) samples, compared to the dendritic shape observed in conventionally cast A356 alloy. The liquid melt is poured onto the slope at 650 °C and, during flow, its temperature falls below the liquidus temperature of the alloy, which facilitates crystallization of alpha-Al crystals on the cooling slope wall. Crystal separation due to melt flow is found to be responsible for the nearly spherical morphology of the primary Al phase.

Relevance: 80.00%

Abstract:

Peer-to-peer networks are used extensively nowadays for file sharing, video on demand and live streaming. For IPTV, delay deadlines are more stringent than for file sharing. Coolstreaming was the first P2P IPTV system. In this paper, we model New Coolstreaming (a newer version of Coolstreaming) via a queueing network. We use two-time-scale decomposition of Markov chains to compute the stationary distribution of the number of peers and the expected number of substreams in the overlay which are not being received at the required rate due to parent overloading. We also characterize the end-to-end delay encountered by a video packet received by a user and originated at the server. Three factors contribute to the delay. The first is the mean shortest path length between any two overlay peers, in terms of overlay hops of the partnership graph, which is shown to be O(log n) where n is the number of peers in the overlay. The second is the mean number of routers between any two overlay neighbours, which is seen to be at most O(log N_I) where N_I is the number of routers in the Internet. The third is the mean delay at a router in the Internet, for which we provide an approximation E[W]. Thus, the mean end-to-end delay in New Coolstreaming is shown to be upper bounded by O((log E[N])(log N_I) E[W]), where E[N] is the mean number of peers in a channel.
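A back-of-the-envelope illustration of the stated bound O((log E[N])(log N_I) E[W]): overlay hops scale like log E[N], routers per overlay hop like log N_I, and each router adds mean delay E[W]. All numbers and the hidden constant below are assumptions, not results from the paper:

```python
# Numeric illustration of the delay bound's scaling, with an assumed
# constant c = 1 and made-up network sizes.

import math

def delay_bound(mean_peers, num_routers, mean_router_delay_ms, c=1.0):
    overlay_hops = math.log2(mean_peers)       # ~ overlay shortest path
    routers_per_hop = math.log2(num_routers)   # ~ underlay path length
    return c * overlay_hops * routers_per_hop * mean_router_delay_ms

# 10k peers in a channel, 1M routers, 2 ms mean queueing delay per router.
print(round(delay_bound(10_000, 1_000_000, 2.0), 1))  # 529.7 (ms)
```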

Relevance: 80.00%

Abstract:

Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any change in workload leading to a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an under-performing application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resources framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run-time; this prediction is input to the resource manager, which modulates resource allocation accordingly. Because of prediction errors, resources can be over-allocated or under-allocated compared to the actual demand made by the application: over-allocation leads to unused resources, and under-allocation can cause under-performance. To strike a good trade-off between over-allocation and under-performance we derive an excess cost model, in which excess resources allocated are captured as an over-allocation cost and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in resource allocation requirements while restricting application SLA violations to below 2-3%.
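The excess cost model described above (an over-allocation cost for idle resources versus an SLA penalty for under-allocation, with the allocation set from the prediction's confidence interval) can be sketched as follows; the cost values and interval width are illustrative, not the paper's parameters:

```python
# Sketch of the excess-cost trade-off: over-allocating wastes resources at a
# unit cost, under-allocating incurs a (typically larger) SLA penalty per
# unit of unmet demand. Allocating at the upper confidence bound of the
# forecast hedges against under-allocation.

def excess_cost(allocated, demanded, over_unit_cost, sla_penalty):
    if allocated >= demanded:
        return (allocated - demanded) * over_unit_cost  # idle resources
    return (demanded - allocated) * sla_penalty         # SLA violation

def allocate(predicted, interval_halfwidth):
    """Provision at the upper bound of the prediction interval."""
    return predicted + interval_halfwidth

demand = 100
alloc = allocate(predicted=95, interval_halfwidth=10)  # -> 105 units
print(excess_cost(alloc, demand, over_unit_cost=1.0, sla_penalty=5.0))  # 5.0
```

With the interval, the forecast error (95 predicted vs 100 demanded) costs 5 units of idle capacity; without it, the same error would cost 25 in SLA penalties.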

Relevance: 80.00%

Abstract:

In self-organized public key management approaches, public key verification is achieved through verification routes constituted by the transitive trust relationships among the network principals. Most existing approaches do not distinguish among the available verification routes. Moreover, to ensure stronger security, it is important to choose an appropriate metric to evaluate the strength of a route. Furthermore, all of the existing self-organized approaches use certificate chains for authentication, which are highly resource-consuming. In this paper, we present a self-organized certificate-less on-demand public key management (CLPKM) protocol, which aims at providing the strongest verification routes for authentication purposes. It restricts the compromise probability of a verification route by restricting its length, and it evaluates the strength of a verification route using its end-to-end trust value. The other important aspect of the protocol is that it uses a MAC function instead of RSA certificates to perform public key verifications. By doing this, the protocol saves considerable computation power, bandwidth and storage space. We have used an extended strand space model to analyze the correctness of the protocol. The analytical, simulation, and testbed implementation results confirm the effectiveness of the proposed protocol. (c) 2014 Elsevier B.V. All rights reserved.
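The computational argument for MACs over RSA certificates can be illustrated with a standard HMAC binding of an identity to a public key: verification costs a few hash invocations rather than a modular-exponentiation signature check. The key handling and message layout here are assumptions, not the CLPKM wire format:

```python
# Illustrative MAC-based public key verification: a node holding a shared
# (pairwise trust) key can verify an (identity, public key) binding with a
# cheap HMAC instead of an RSA certificate-chain check.

import hmac, hashlib

def mac_binding(shared_key: bytes, identity: bytes, public_key: bytes) -> bytes:
    return hmac.new(shared_key, identity + b"|" + public_key,
                    hashlib.sha256).digest()

def verify_binding(shared_key, identity, public_key, tag) -> bool:
    expected = mac_binding(shared_key, identity, public_key)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

k = b"pairwise-trust-key"
tag = mac_binding(k, b"node-17", b"\x04abcdef")
print(verify_binding(k, b"node-17", b"\x04abcdef", tag))    # True
print(verify_binding(k, b"node-17", b"\x04tampered", tag))  # False
```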