928 results for Data-driven energy efficiency


Relevance:

100.00%

Publisher:

Abstract:

Enterprises are increasingly using a wide range of heterogeneous information systems for executing and governing their business activities. Even though the adoption of service orientation has improved loose coupling and reusability, applications are still isolated data silos whose integration requires complex transformations and mediations. However, by leveraging Linked Data principles those data silos can now be seamlessly integrated, and this opens the door to new data-driven approaches for Enterprise Application Integration (EAI). In this paper we present LDP4j, an open source Java-based framework for the development of interoperable read-write Linked Data applications, based on the W3C Linked Data Platform (LDP) specification.
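To make the Linked Data Platform interaction concrete, the sketch below creates a resource by POSTing Turtle to an LDP container over plain HTTP, which is the interaction model that LDP4j implements on the server side. The container URL, vocabulary and resource body are hypothetical, and the snippet uses the generic Python requests library rather than LDP4j's own Java API.

```python
# Minimal sketch of the W3C LDP interaction model that frameworks such as
# LDP4j implement: create a resource by POSTing RDF (Turtle) to an LDP
# container. The container URL and the resource body are hypothetical.
import requests

CONTAINER_URL = "http://example.org/ldp/orders/"  # hypothetical LDP container

turtle_body = """
@prefix ex: <http://example.org/vocab#> .
<> a ex:PurchaseOrder ;
   ex:customer "ACME Corp" ;
   ex:total 1250.00 .
"""

response = requests.post(
    CONTAINER_URL,
    data=turtle_body.encode("utf-8"),
    headers={
        "Content-Type": "text/turtle",
        "Slug": "order-42",  # hint for the new resource's URI (per the LDP spec)
        "Link": '<http://www.w3.org/ns/ldp#Resource>; rel="type"',
    },
)

# On success the server answers 201 Created and advertises the new
# resource's URI in the Location header.
print(response.status_code, response.headers.get("Location"))
```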

Relevance:

100.00%

Publisher:

Abstract:

The performance of a supply chain depends critically on the coordinating actions and decisions undertaken by the trading partners. The sharing of product and process information plays a central role in this coordination and is a key driver for the success of the supply chain. In this paper we propose the concept of "Linked pedigrees" - linked datasets that enable the sharing of traceability information about products as they move along the supply chain. We present a distributed, decentralised, linked-data-driven architecture that consumes real-time supply chain linked data to generate linked pedigrees. We then present a communication protocol to enable the exchange of linked pedigrees among trading partners. We exemplify the utility of linked pedigrees with examples from the perishable goods logistics supply chain.
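As a rough illustration of what a linked pedigree fragment might look like, the sketch below builds a small RDF graph with rdflib in which one trading partner's pedigree links back to the upstream partner's fragment. The ex: vocabulary, URIs and product details are made-up assumptions, not the ontology used in the paper.

```python
# Minimal sketch of a "linked pedigree" as an RDF graph: each trading
# partner publishes a pedigree fragment that links to the previous
# partner's fragment. The vocabulary (ex:) and URIs are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/pedigree#")

g = Graph()
g.bind("ex", EX)

pedigree = URIRef("http://distributor.example.org/pedigrees/batch-7741")
previous = URIRef("http://producer.example.org/pedigrees/batch-7741")

g.add((pedigree, RDF.type, EX.Pedigree))
g.add((pedigree, EX.batch, Literal("7741")))
g.add((pedigree, EX.product, Literal("Chilled salmon, 20 kg crate")))
g.add((pedigree, EX.receivedAt, Literal("2014-03-02T06:30:00Z", datatype=XSD.dateTime)))
# Link to the upstream partner's pedigree fragment (decentralised chain).
g.add((pedigree, EX.previousPedigree, previous))

print(g.serialize(format="turtle"))
```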

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a study of D3, a JavaScript charting library for the web, and provides a catalogue of the charts implemented with it that are available online. The goal is to evaluate the library and examine its strengths and weaknesses in order to decide whether it is suitable for use within a European project. To this end, the chart classification methods found in the literature are reviewed and the state of the art in data visualization is described. The classification method proposed by the design team is then described, and the chart gallery on the D3 library's website is catalogued. Finally, an algorithm for selecting a chart according to the user's requirements is presented and formally studied.
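As a loose illustration of the kind of chart-selection logic the thesis formalises, the sketch below maps a few properties of the user's data and task to a chart type from the D3 gallery. The rules, task names and chart categories are illustrative assumptions, not the thesis's actual classification or algorithm.

```python
# Hypothetical rule-based chart selector: map simple properties of the data
# and the user's task to a chart type. Rules and names are illustrative only.
def select_chart(task: str, n_variables: int, temporal: bool, hierarchical: bool) -> str:
    if hierarchical:
        return "treemap"
    if temporal:
        return "line chart" if n_variables <= 3 else "stacked area chart"
    if task == "comparison":
        return "bar chart" if n_variables <= 2 else "grouped bar chart"
    if task == "correlation":
        return "scatter plot" if n_variables == 2 else "scatter plot matrix"
    if task == "distribution":
        return "histogram"
    if task == "part-to-whole":
        return "pie chart" if n_variables == 1 else "stacked bar chart"
    return "table"  # fallback when no rule matches

print(select_chart(task="correlation", n_variables=2, temporal=False, hierarchical=False))
# -> "scatter plot"
```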

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

100.00%

Publisher:

Abstract:

Computers employing some degree of data flow organisation are now well established as providing a possible vehicle for concurrent computation. Although data-driven computation frees the architecture from the constraints of the single program counter, processor and global memory, inherent in the classic von Neumann computer, there can still be problems with the unconstrained generation of fresh result tokens if a pure data flow approach is adopted. The advantages of allowing serial processing for those parts of a program which are inherently serial, and of permitting a demand-driven, as well as data-driven, mode of operation are identified and described. The MUSE machine described here is a structured architecture supporting both serial and parallel processing which allows the abstract structure of a program to be mapped onto the machine in a logical way.
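The toy sketch below contrasts the two evaluation modes discussed above: a node that fires as soon as all of its input tokens have arrived (data-driven) versus a node that computes only when its result is demanded (demand-driven). It illustrates the general concepts only, not the MUSE machine's actual organisation.

```python
# Toy contrast of data-driven token firing and demand-driven (lazy) evaluation.
from typing import Callable, Dict

class DataDrivenNode:
    """Fires automatically once every input token is present."""
    def __init__(self, name: str, op: Callable[..., int], arity: int):
        self.name, self.op, self.arity = name, op, arity
        self.tokens: Dict[int, int] = {}
        self.result = None

    def receive(self, port: int, value: int):
        self.tokens[port] = value
        if len(self.tokens) == self.arity:  # all operands available: fire
            self.result = self.op(*(self.tokens[p] for p in sorted(self.tokens)))
            print(f"{self.name} fired -> {self.result}")

class DemandDrivenNode:
    """Computes only when its value is demanded, then caches it."""
    def __init__(self, name: str, op: Callable[..., int], *inputs):
        self.name, self.op, self.inputs = name, op, inputs
        self._cache = None

    def demand(self) -> int:
        if self._cache is None:
            self._cache = self.op(*(i.demand() if hasattr(i, "demand") else i
                                    for i in self.inputs))
            print(f"{self.name} evaluated -> {self._cache}")
        return self._cache

# Data-driven: the multiply node fires as soon as both tokens arrive.
mul = DataDrivenNode("mul", lambda a, b: a * b, arity=2)
mul.receive(0, 6)
mul.receive(1, 7)          # triggers firing: "mul fired -> 42"

# Demand-driven: nothing is computed until the root's value is demanded.
add = DemandDrivenNode("add", lambda a, b: a + b, 40, 2)
print(add.demand())        # "add evaluated -> 42", then 42
```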

Relevance:

100.00%

Publisher:

Abstract:

Collecting and analyzing consumer data is essential in today’s data-driven business environment. However, consumers are becoming more aware of the value of the information they can provide to companies, thereby being more reluctant to share it for free. Therefore, companies need to find ways to motivate consumers to disclose personal information. The main research question of the study was formed as “How can companies motivate consumers to disclose personal information?” and it was further divided into two subquestions: 1) What types of benefits motivate consumers to disclose personal information? 2) How does the disclosure context affect the consumers’ information disclosure behavior? The conceptual framework consisted of a classification of extrinsic and intrinsic benefits, and moderating factors, which were recognized on the basis of prior research in the field. The study was conducted by using qualitative research methods. The primary data was collected by interviewing ten representatives from eight companies. The data was analyzed and reported according to predetermined themes. The findings of the study confirm that consumers can be motivated to disclose personal information by offering different types of extrinsic (monetary saving, time saving, self-enhancement, and social adjustment) and intrinsic (novelty, pleasure, and altruism) benefits. However, not all the benefits are equally useful ways to convince the customer to disclose information. Moreover, different factors in the disclosure context can either alleviate or increase the effectiveness of the benefits and the consumers’ motivation to disclose personal information. Such factors include the consumer’s privacy concerns, perceived trust towards the company, the relevancy of the requested information, personalization, website elements (especially security, usability, and aesthetics of a website), and the consumer’s shopping motivation. This study has several contributions. It is essential that companies recognize the most attractive benefits regarding their business and their customers, and that they understand how the disclosure context affects the consumer’s information disclosure behavior. The likelihood of information disclosure can be increased, for example, by offering benefits that meet the consumers’ needs and preferences, improving the relevancy of the asked information, stating the reasons for data collection, creating and maintaining a trustworthy image of the company, and enhancing the quality of the company’s website.

Relevance:

100.00%

Publisher:

Abstract:

This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each of the steps of the web search evaluation pipeline. As a result of these improvements, the overall efficiency of the entire evaluation pipeline is increased. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represents the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected at the online evaluation step, which increases the efficiency of the entire evaluation pipeline. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, for example in domains with a grid-based representation, such as image search. Our study using datasets of interleaving experiments performed in both the document and image search domains demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time for the interleaving experiments. We adapt two sequential tests for interleaving experimentation. We demonstrate that one can achieve a significant decrease in experiment duration by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: the offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine. These experiments demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
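For readers unfamiliar with interleaving, the sketch below implements classical Team Draft interleaving and click-based crediting, the baseline method that the Generalised Team Draft framework extends; the data-driven optimisation of the interleaving policy and click scoring described in the thesis is not shown. The rankings and clicks are made-up examples.

```python
# Minimal sketch of classical Team Draft interleaving: two rankers each
# contribute results to a combined list, and clicks are credited to the
# ranker ("team") that contributed the clicked result.
import random

def team_draft_interleave(ranking_a, ranking_b, length=10, rng=random):
    interleaved, teams = [], []            # teams[i] records who contributed doc i
    team_a, team_b = [], []
    while len(interleaved) < length and (set(ranking_a) | set(ranking_b)) - set(interleaved):
        # The team with fewer picks goes next; ties are broken by a coin flip.
        if len(team_a) < len(team_b) or (len(team_a) == len(team_b) and rng.random() < 0.5):
            picker, source, team = "A", ranking_a, team_a
        else:
            picker, source, team = "B", ranking_b, team_b
        doc = next((d for d in source if d not in interleaved), None)
        if doc is None:
            break
        interleaved.append(doc)
        teams.append(picker)
        team.append(doc)
    return interleaved, teams

def credit_clicks(teams, clicked_positions):
    """Attribute each click to the team that contributed the clicked result."""
    wins = {"A": 0, "B": 0}
    for pos in clicked_positions:
        wins[teams[pos]] += 1
    return wins

ranking_a = ["d1", "d2", "d3", "d4"]
ranking_b = ["d3", "d1", "d5", "d2"]
shown, teams = team_draft_interleave(ranking_a, ranking_b, length=4)
print(shown, teams, credit_clicks(teams, clicked_positions=[0, 2]))
```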

Relevance:

100.00%

Publisher:

Abstract:

In energy harvesting communications, users transmit messages using energy harvested from nature. In such systems, the transmission policies of the users need to be carefully designed according to the energy arrival profiles. When the energy management policies are optimized, the resulting performance of the system depends only on the energy arrival profiles. In this dissertation, we introduce and analyze the notion of energy cooperation in energy harvesting communications, where users can share a portion of their harvested energy with other users via wireless energy transfer. This energy cooperation enables us to control and optimize the energy arrivals at the users to the extent possible. In the classical setting of cooperation, users help each other in the transmission of their data by exploiting the broadcast nature of wireless communications and the resulting overheard information. In contrast to this usual notion of cooperation, which is at the signal level, the energy cooperation we introduce here is at the battery energy level. In a multi-user setting, energy may be abundant at one user, in which case the loss incurred by transferring it to another user may be less than the gain it yields for the other user. It is this cooperation that we explore in this dissertation for several multi-user scenarios, where energy can be transferred from one user to another through a separate wireless energy transfer unit. We first consider the offline optimal energy management problem for several basic multi-user network structures with energy harvesting transmitters and one-way wireless energy transfer. In energy harvesting transmitters, energy arrivals in time impose energy causality constraints on the transmission policies of the users. In the presence of wireless energy transfer, the energy causality constraints take a new form: energy can flow in time from the past to the future for each user, and from one user to the other at each time. This requires a careful joint management of energy flow in two separate dimensions, and different management policies are required depending on how users share the common wireless medium and interact over it. In this context, we analyze several basic multi-user energy harvesting network structures with wireless energy transfer. To capture the main trade-offs and insights that arise due to wireless energy transfer, we focus our attention on simple two- and three-user communication systems, such as the relay channel, the multiple access channel and the two-way channel. Next, we focus on the delay minimization problem for networks. We consider a general network topology of energy harvesting and energy cooperating nodes. Each node harvests energy from nature, and all nodes may share a portion of their harvested energies with neighboring nodes through energy cooperation. We consider the joint data routing and capacity assignment problem for this setting under fixed data and energy routing topologies. We determine the joint routing of energy and data in a general multi-user scenario with data and energy transfer. Next, we consider the cooperative energy harvesting diamond channel, where the source and two relays harvest energy from nature and the physical layer is modeled as a concatenation of a broadcast and a multiple access channel. Since the broadcast channel is degraded, one of the relays has the message of the other relay. Therefore, the multiple access channel is an extended multiple access channel with common data. We determine the optimum power and rate allocation policies of the users in order to maximize the end-to-end throughput of this system. Finally, we consider the two-user cooperative multiple access channel with energy harvesting users. The users cooperate at the physical layer (data cooperation) by establishing common messages through overheard signals and then cooperatively sending them. For this channel model, we investigate the effect of intermittent data arrivals at the users. We find the optimal offline transmit power and rate allocation policy that maximizes the departure region. When the users can further cooperate at the battery level (energy cooperation), we find the jointly optimal offline transmit power and rate allocation policy, together with the energy transfer policy, that maximizes the departure region.
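As a toy illustration of the offline problems underlying these settings, the sketch below solves a single-user throughput maximisation under energy causality constraints with a generic numerical solver: transmit powers are chosen to maximise the sum of log(1 + p_t) while the cumulative power used up to each slot may not exceed the cumulative energy harvested. Energy cooperation and the multi-user channels studied in the dissertation are not modelled; the energy arrivals and unit normalisations are assumptions.

```python
# Toy single-user offline throughput maximisation with energy causality:
#   maximise  sum_t log(1 + p_t)
#   subject to sum_{i<=t} p_i <= sum_{i<=t} E_i  for every slot t,  p_t >= 0.
# Slot lengths and channel gains are normalised to 1; E is made up.
import numpy as np
from scipy.optimize import minimize

E = np.array([4.0, 0.0, 6.0, 2.0])     # harvested energy per slot (hypothetical)
T = len(E)

def neg_throughput(p):
    return -np.sum(np.log1p(p))

# One inequality constraint per slot t: cumulative harvest - cumulative use >= 0.
constraints = [
    {"type": "ineq", "fun": (lambda p, t=t: np.sum(E[: t + 1]) - np.sum(p[: t + 1]))}
    for t in range(T)
]
bounds = [(0.0, None)] * T

result = minimize(
    neg_throughput,
    x0=E.copy(),               # "spend as harvested" is a feasible starting point
    bounds=bounds,
    constraints=constraints,
    method="SLSQP",
)

print("optimal powers:", np.round(result.x, 3))
print("throughput:", round(-result.fun, 3))
```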

Relevance:

100.00%

Publisher:

Abstract:

Part 14: Interoperability and Integration

Relevance:

100.00%

Publisher:

Abstract:

The Exhibitium Project, awarded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base for a platform for the recording and exploitation of data about art exhibitions available on the Internet. Our proposal therefore aims to expose the methods, procedures and decision-making processes that have governed the technological implementation of this prototype, especially with regard to the reuse of WordPress (WP) as a development framework.

Relevance:

100.00%

Publisher:

Abstract:

Our proposal aims to present the analysis techniques and methodologies, as well as the most relevant results expected within the Exhibitium project framework (http://www.exhibitium.com). Awarded by the BBVA Foundation, the Exhibitium project is being developed by an international consortium of several research groups. Its main purpose is to build a comprehensive and structured data repository about temporary art exhibitions, captured from the web, and to make these data useful and reusable in various domains through open and interoperable data systems.

Relevance:

100.00%

Publisher:

Abstract:

Extracting knowledge from the transaction records and the personal data of credit card holders has great profit potential for the banking industry. The challenge is to detect and predict bankruptcies and to keep and recruit profitable customers. However, grouping and targeting credit card customers by traditional data-driven mining often does not directly meet the needs of the banking industry, because data-driven mining automatically generates classification outputs that are imprecise, meaningless, and beyond users' control. In this paper, we provide a novel domain-driven classification method that takes advantage of multiple criteria and multiple constraint-level programming for intelligent credit scoring. The method involves credit scoring to produce a set of customers' scores that makes the classification results actionable and controllable through human interaction during the scoring process. Domain knowledge and experts' experience parameters are built into the criteria and constraint functions of the mathematical programming, and human-machine conversation is employed to generate an efficient and precise solution. Experiments based on various data sets validated the effectiveness and efficiency of the proposed methods. © 2006 IEEE.
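The sketch below illustrates the linear-programming flavour of such credit scoring: it learns a scoring vector and a cut-off by minimising the total deviation of misclassified applicants, and leaves the cut-off adjustable so that an analyst can keep the outcome controllable. It is a generic minimise-sum-of-deviations LP on toy data, not the paper's exact multiple-criteria, multiple-constraint-level formulation.

```python
# Generic "minimise sum of deviations" LP for scoring: learn weights w and a
# cut-off b such that good applicants score above b and bad ones below it,
# with per-applicant deviations alpha_i penalised. Data are toy values.
import numpy as np
from scipy.optimize import linprog

# Toy applicants: columns = features, y = +1 (good) / -1 (bad).
X = np.array([[0.9, 0.2], [0.8, 0.4], [0.7, 0.1],
              [0.2, 0.8], [0.3, 0.9], [0.1, 0.7]])
y = np.array([1, 1, 1, -1, -1, -1])
n, d = X.shape

# Variables: [w_1..w_d, b, alpha_1..alpha_n]; minimise the sum of deviations.
c = np.concatenate([np.zeros(d + 1), np.ones(n)])

# Require y_i * (w.x_i - b) >= 1 - alpha_i, written for linprog (A_ub x <= b_ub)
# as -y_i*x_i.w + y_i*b - alpha_i <= -1.
A_ub = np.zeros((n, d + 1 + n))
for i in range(n):
    A_ub[i, :d] = -y[i] * X[i]
    A_ub[i, d] = y[i]
    A_ub[i, d + 1 + i] = -1.0
b_ub = -np.ones(n)

bounds = [(None, None)] * (d + 1) + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

w, b = res.x[:d], res.x[d]
scores = X @ w - b
print("scores:", np.round(scores, 2))
print("accepted:", scores >= 0)   # an analyst can raise or lower this cut-off
```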

Relevance:

100.00%

Publisher:

Abstract:

In this paper we develop a data-driven weight learning method for weighted quasi-arithmetic means where the observed data may vary in dimension.
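For context, a weighted quasi-arithmetic mean has the form M_f(x; w) = f^{-1}(sum_i w_i f(x_i)) for a strictly monotone generator f. The sketch below fits the weights to observed input/output pairs by constrained least squares; it assumes a fixed dimension and a logarithmic generator, whereas the paper's method also handles observations of varying dimension. All data below are made up.

```python
# Hedged sketch of data-driven weight learning for a weighted quasi-arithmetic
# mean M_f(x; w) = f_inv(sum_i w_i * f(x_i)), fitted by least squares with
# w >= 0 and sum(w) = 1. Fixed-dimension illustration only.
import numpy as np
from scipy.optimize import minimize

f, f_inv = np.log, np.exp          # generator of the quasi-arithmetic mean

def qa_mean(x, w):
    return f_inv(np.dot(w, f(x)))

# Observed data: rows of inputs and the aggregated value reported for each row.
X = np.array([[1.0, 4.0, 9.0],
              [2.0, 2.0, 8.0],
              [5.0, 1.0, 3.0],
              [9.0, 3.0, 2.0]])
y = np.array([3.30, 3.17, 2.47, 3.56])     # hypothetical observed outputs
d = X.shape[1]

def loss(w):
    return np.sum((np.array([qa_mean(row, w) for row in X]) - y) ** 2)

result = minimize(
    loss,
    x0=np.full(d, 1.0 / d),                              # start from equal weights
    bounds=[(0.0, 1.0)] * d,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    method="SLSQP",
)

print("learned weights:", np.round(result.x, 3))
```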

Relevance:

100.00%

Publisher:

Abstract:

Due to the increasing energy consumption in cloud data centers, energy saving has become a vital objective in designing the underlying cloud infrastructures. A precise energy consumption model is the foundation of many energy-saving strategies. This paper explores the energy consumption of virtual machines running various CPU-intensive activities on a cloud server using two types of models: traditional time-series models, such as ARMA and ES, and time-series segmentation models, such as the sliding-window model and the bottom-up model. We have built a cloud environment using OpenStack and conducted extensive experiments to analyze and compare the prediction accuracy of these strategies. The results indicate that the ES model outperforms the ARMA model in predicting the energy consumption of known activities. When predicting the energy consumption of unknown activities, both the sliding-window and the bottom-up segmentation models achieve satisfactory performance, with the former slightly better than the latter.
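The sketch below shows the two classical time-series approaches named above, an ARMA model (fitted as ARIMA with d = 0) and exponential smoothing, applied to a synthetic stand-in for per-interval VM energy readings. The model orders and the data are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: forecast VM energy consumption with ARMA and exponential
# smoothing. The "energy" trace is synthetic, not an OpenStack measurement.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(0)
# Synthetic per-interval energy readings (Joules) for a CPU-intensive VM.
energy = 120 + 10 * np.sin(np.arange(100) / 6.0) + rng.normal(0, 2, 100)

train, test = energy[:90], energy[90:]

arma = ARIMA(train, order=(2, 0, 1)).fit()      # ARMA(2,1), i.e. ARIMA with d = 0
es = SimpleExpSmoothing(train).fit()            # simple exponential smoothing

arma_pred = arma.forecast(steps=len(test))
es_pred = es.forecast(len(test))

def mape(actual, predicted):
    return np.mean(np.abs((actual - predicted) / actual)) * 100

print(f"ARMA MAPE: {mape(test, arma_pred):.2f}%")
print(f"ES   MAPE: {mape(test, es_pred):.2f}%")
```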

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing, as the latest computing paradigm, has shown its promising future in business workflow systems facing massive concurrent user requests and complicated computing tasks. With the fast growth of cloud data centers, energy management, especially energy monitoring and saving, in cloud workflow systems has been attracting increasing attention. Clearly, the energy for running a cloud workflow instance depends mainly on the energy for executing its workflow activities. However, existing energy management strategies mainly monitor the virtual machines instead of the workflow activities running on them, and hence it is difficult to directly monitor and optimize the energy consumption of cloud workflows. To address this issue, in this paper we propose an effective energy testing framework for cloud workflow activities. The framework helps to accurately test and analyze the baseline energy of physical and virtual machines in the cloud environment, and then obtain the energy consumption data of cloud workflow activities. Based on these data, we can further build energy consumption models and apply energy prediction strategies. Our experiments are conducted in an OpenStack-based cloud computing environment. The effectiveness of our framework has been verified through a detailed case study and a set of energy modelling and prediction experiments based on representative time-series models.
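A minimal sketch of the baseline idea: profile the machine's idle (baseline) power, subtract it from the power measured while a workflow activity runs, and integrate the difference over time to attribute an energy figure to that activity. The sampling interval and readings below are made-up numbers, not measurements from the paper's OpenStack testbed.

```python
# Hedged sketch: attribute energy to a workflow activity by integrating the
# measured power above the machine's idle baseline. Values are illustrative.
import numpy as np

SAMPLE_INTERVAL_S = 1.0                       # one power sample per second

# Power readings (Watts) while the activity executes on a VM/host.
measured_power = np.array([95.0, 128.0, 131.0, 129.5, 127.0, 96.0])

# Baseline power of the same machine when idle, profiled beforehand.
baseline_power_w = 95.0

# Energy attributed to the activity: integral of (measured - baseline) dt,
# approximated with the trapezoidal rule; dips below baseline are clipped.
extra_power = np.clip(measured_power - baseline_power_w, 0.0, None)
activity_energy_j = np.trapz(extra_power, dx=SAMPLE_INTERVAL_S)

print(f"energy attributed to the workflow activity: {activity_energy_j:.1f} J")
```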