97 results for data sharing
at Instituto Politécnico do Porto, Portugal
Abstract:
Consider the problem of scheduling a set of tasks on a single processor such that deadlines are met. Assume that tasks may share data and that linearizability, the most common correctness condition for data sharing, must be satisfied. We find that linearizability can severely penalize schedulability. We identify, however, two special cases where linearizability imposes little or no penalty on schedulability.
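As background for the schedulability setting the abstract describes, the classic utilization test for earliest-deadline-first (EDF) scheduling of independent periodic tasks on one processor can be sketched as follows. This is textbook material, not the paper's analysis; the blocking introduced by linearizable shared objects would add extra terms to a test like this, which is the penalty the paper quantifies.

```python
# Classic EDF utilization test for independent periodic tasks on a single
# processor (no data sharing): schedulable iff total utilization U <= 1.
# Any blocking caused by linearizable shared objects is NOT modeled here.
def edf_schedulable(tasks):
    """tasks: list of (wcet, period) pairs; returns True iff U <= 1."""
    utilization = sum(wcet / period for wcet, period in tasks)
    return utilization <= 1.0
```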
Abstract:
Despite the abundant literature in knowledge management, few empirical studies have explored knowledge management in connection with international assignees. This phenomenon has special relevance in the Portuguese context, since (a) there are no empirical studies on this issue involving international Portuguese companies; (b) the national business reality is still incipient as far as internationalisation is concerned; and (c) the organisational and national culture presents characteristics distinct from the most widely studied contexts (e.g., Asia, the USA, Scandinavian countries, Spain, France, the Netherlands, Germany, England and Russia). We examine the role of expatriates in knowledge transfer and sharing within Portuguese companies with operations abroad. We focus specifically on expatriates' role in knowledge sharing in international Portuguese companies, and our findings take into account both organizational representatives' and expatriates' perspectives. Using a comparative case study approach, we examine how three main dimensions influence the role of expatriates in knowledge sharing between headquarters and their subsidiaries: type of international assignment, reasons for using expatriation, and international assignment characteristics. Data were collected through semi-structured interviews with 30 Portuguese repatriates and 14 organizational representatives from seven Portuguese companies. The findings suggest that the reasons that lead Portuguese companies to expatriate employees are connected to: (1) business expansion needs; (2) control of international operations; and (3) knowledge transfer and sharing. Our study also shows that Portuguese companies use international assignments in order to respond to the increasingly declining domestic markets in the economic areas in which they operate.
Evidence also reveals that expatriation is seen as a strategy to fulfil main organizational objectives through expatriates (e.g., business internationalization, improvement of the coordination and control of the units/subsidiaries abroad, replication of aspects of the home base, and development and incorporation of new organizational techniques and processes). We also conclude that Portuguese companies have developed an International Human Resources Management strategy based on an ethnocentric approach, typically associated with companies in the early stages of internationalization, i.e., authority and decision making are centred at the home base. Expatriates have a central role in transmitting culture and technical knowledge from the company's headquarters to its branches. Based on the findings, the article discusses the main theoretical and managerial implications in detail. Suggestions for further research are also presented.
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware or a product under development. Although this is an interesting solution (low cost, and an easy and fast way to carry out coursework), it has major disadvantages. As everything is currently done with, and in, a computer, students are losing the "feel" for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithms, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, although real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of a random table students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that allows it to be used by different courses in which computers support the teaching/learning process, giving students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light and wind speed, which are either connected to a central server that students access over Ethernet, or connected directly to the student's computer/laptop. These sensors use the communication ports available, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB).
Since a central server is used, students are encouraged to use the sensor values in their different courses and, consequently, in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools: sensors that are not available at a given school can be used by getting the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can use the courses related to electronics development to build new sensor modules and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible, allowing the same materials to be used in several courses and bringing real-world data into the students' computer work.
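A client for such a central sensor server could be sketched as below. The abstract does not specify the server's protocol, so the endpoint path and the CSV payload format here are assumptions for illustration only.

```python
# Minimal sketch of a client for a central sensor server like the one
# described above. The URL scheme and the "sensor,timestamp,value" CSV
# payload are hypothetical; the paper does not define the actual protocol.
import csv
import io
import urllib.request

def parse_readings(csv_text):
    """Parse 'sensor,timestamp,value' CSV rows into a list of dicts."""
    reader = csv.DictReader(io.StringIO(csv_text),
                            fieldnames=["sensor", "timestamp", "value"])
    return [{"sensor": row["sensor"],
             "timestamp": row["timestamp"],
             "value": float(row["value"])} for row in reader]

def fetch_readings(host, sensor="temperature"):
    """Fetch all readings for one sensor from the (assumed) server API."""
    url = f"http://{host}/sensors/{sensor}.csv"
    with urllib.request.urlopen(url) as response:
        return parse_readings(response.read().decode("utf-8"))
```

The parsed list can then be loaded into a spreadsheet or numerical analysis tool, as the abstract suggests.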
Abstract:
Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as the optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult and a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs in real-world environments while reducing energy costs by as much as 21%.
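The general shape of an interference- and power-aware placement decision can be sketched as a greedy score over candidate hosts. This is a simplified illustration of the idea, not the paper's actual estimator or scheduling algorithm; the interference and power-cost functions and the weight `alpha` are placeholders.

```python
# Illustrative greedy VM placement combining an interference estimate with
# a power cost. A sketch of the general idea only; the paper's performance
# deviation estimator and scheduler are more elaborate.
def place_vm(vm, hosts, interference, power_cost, alpha=0.5):
    """Pick the host minimizing a weighted sum of estimated performance
    deviation (interference with co-hosted VMs) and added power cost,
    then place the VM there."""
    def score(host):
        deviation = sum(interference(vm, other) for other in host["vms"])
        return alpha * deviation + (1 - alpha) * power_cost(host, vm)
    best = min(hosts, key=score)
    best["vms"].append(vm)
    return best
```

With `alpha` near 1 the policy prioritizes avoiding co-location of interfering workloads (e.g. two CPU-intensive VMs); with `alpha` near 0 it prioritizes energy.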
Abstract:
C3S2E '16 Proceedings of the Ninth International C* Conference on Computer Science & Software Engineering
Abstract:
Supervisor: Prof. Dr. João Domingues Costa
Abstract:
Paper to be presented at the ESREA Conference Learning to Change? The Role of Identity and Learning Careers in Adult Education, 7-8 December 2006, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
Abstract:
The main purpose of this study was to examine the applicability of geostatistical modeling to obtain valuable information for assessing the environmental impact of sewage outfall discharges. The data set used was obtained in a monitoring campaign at the S. Jacinto outfall, located off the Portuguese west coast near the Aveiro region, using an autonomous underwater vehicle (AUV). Matheron's classical estimator was used to compute the experimental semivariogram, which was fitted to three theoretical models: spherical, exponential and Gaussian. The cross-validation procedure suggested the best semivariogram model, and ordinary kriging was then used to obtain predictions of salinity at unsampled locations. The generated map clearly shows the plume dispersion in the studied area, indicating that the effluent does not reach the nearby beaches. Our study suggests that a design of the AUV sampling trajectory that is optimal from a geostatistical prediction point of view can help to compute more precise predictions and hence to quantify dilution more accurately. Moreover, since accurate measurements of the plume's dilution are rare, these studies might be very helpful in the future for the validation of dispersion models.
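One of the three theoretical models mentioned, the spherical semivariogram, has a standard closed form and can be sketched as follows. The parameter names (nugget, sill contribution, range) follow common geostatistics usage; the fitted values for the S. Jacinto data are of course not reproduced here.

```python
# Standard spherical semivariogram model, one of the three candidates the
# study fits to the experimental semivariogram. Parameters are generic:
# `nugget` is the discontinuity at the origin, `sill` the additional
# variance reached at the `range_` distance, beyond which gamma is flat.
def spherical_semivariogram(h, nugget, sill, range_):
    """Return gamma(h) for lag distance h >= 0 under the spherical model."""
    if h == 0:
        return 0.0                      # gamma(0) = 0 by definition
    if h >= range_:
        return nugget + sill            # flat beyond the range
    r = h / range_
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)
```

Ordinary kriging then uses the fitted gamma(h) to weight neighboring observations when predicting salinity at unsampled locations.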
Abstract:
Business Intelligence (BI) is an emergent area of the Decision Support Systems (DSS) discipline. Over the last years, the evolution in this area has been considerable. Similarly, in recent years there has been huge growth and consolidation of the Data Mining (DM) field. DM is being used with success in BI systems, but a true integration of DM with BI is still lacking; as a result, some BI systems make no effective use of DM. An architecture intended to lead to an effective usage of DM in BI is presented.
Abstract:
Revista Fiscal, May 2006
Abstract:
This paper deals with the establishment of a methodology for characterizing the electric power profiles of medium voltage (MV) consumers. The characterization is supported by the knowledge discovery in databases (KDD) process. Data Mining techniques are used with the purpose of obtaining typical load profiles of MV customers and specific knowledge of their consumption habits. In order to form the different customer classes and to find a set of representative consumption patterns, a hierarchical clustering algorithm and a clustering ensemble combination approach (WEACS) are used. Taking into account the typical consumption profile of the class to which each customer belongs, new tariff options were defined and new energy price coefficients were proposed. Finally, with the results obtained, the consequences these will have on the interaction between customers and electric power suppliers are analyzed.
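The clustering step can be illustrated with a toy agglomerative (single-linkage) procedure over load profiles. This is a minimal sketch of hierarchical clustering in general, assuming simple Euclidean distance; the paper's actual work uses specific hierarchical algorithms plus the WEACS ensemble combination, which are not reproduced here.

```python
# Toy single-linkage agglomerative clustering of load profiles (lists of
# consumption values). Illustrative only; the paper combines hierarchical
# algorithms with a clustering ensemble approach (WEACS).
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hierarchical_cluster(profiles, n_clusters):
    """Repeatedly merge the two closest clusters until n_clusters remain.
    Returns clusters as lists of profile indices."""
    clusters = [[i] for i in range(len(profiles))]
    def linkage(c1, c2):  # single linkage: distance of the closest members
        return min(euclidean(profiles[i], profiles[j]) for i in c1 for j in c2)
    while len(clusters) > n_clusters:
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda pair: linkage(clusters[pair[0]], clusters[pair[1]]))
        clusters[i] += clusters.pop(j)
    return clusters
```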
Abstract:
The introduction of Electric Vehicles (EVs), together with the implementation of smart grids, will raise new challenges for power system operators. This paper proposes a demand response program for electric vehicle users which provides the network operator with another useful resource: reducing the vehicles' charging needs. This demand response program enables vehicle users to earn some profit by agreeing to reduce their travel needs and minimum battery level requirements in a given period. To support network operator actions, the amount of demand response usage can be estimated using data mining techniques applied to a database containing a large set of operation scenarios. The paper includes a case study based on simulated operation scenarios that consider different operating conditions, e.g. available renewable generation, and a diversity of distributed resources and electric vehicles with vehicle-to-grid and demand response capacity in a 33-bus distribution network.
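Estimating demand response usage from a database of past operation scenarios can be sketched, for illustration, as a k-nearest-neighbour regression. The abstract only says "data mining techniques", so k-NN is an assumed stand-in, and the scenario feature vectors here are placeholders.

```python
# Illustrative k-NN estimate of demand response (DR) usage for a new
# operation scenario, from a database of (features, dr_usage) records.
# The choice of k-NN and the feature encoding are assumptions; the paper
# does not specify which data mining technique is applied.
def knn_estimate(scenario, database, k=3):
    """Return the mean DR usage of the k stored scenarios whose feature
    vectors are closest (squared Euclidean distance) to `scenario`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(database, key=lambda rec: dist(rec[0], scenario))[:k]
    return sum(usage for _, usage in nearest) / len(nearest)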
Abstract:
The study of electricity market operation has been gaining increasing importance in recent years, as a result of the new challenges produced by electricity market restructuring. This restructuring increased the competitiveness of the market, but also its complexity. The growing complexity and unpredictability of the market's evolution consequently increase the difficulty of decision making. Therefore, the intervening entities are forced to rethink their behaviour and market strategies. Currently, a great deal of information concerning electricity markets is available. These data, covering innumerable aspects of electricity market operation, are accessible free of charge and are essential for understanding and suitably modelling electricity markets. This paper proposes a tool that is able to handle, store and dynamically update such data. The proposed tool is expected to be of great importance in improving the comprehension of electricity markets and of the interactions among the involved entities.
Abstract:
This paper presents ELECON - Electricity Consumption Analysis to Promote Energy Efficiency Considering Demand Response and Non-technical Losses, an international research project that involves European and Brazilian partners. ELECON focuses on increasing energy efficiency through consumers' active participation, which is a key area for cooperation between Europe and Brazil. The project aims at contributing significantly towards the successful implementation of smart grids, focusing on the use of new methods that allow the efficient use of distributed energy resources, namely distributed generation, storage and demand response. ELECON brings together researchers from seven European and Brazilian partners with consolidated research backgrounds and complementary competences. ELECON involves institutions from three European countries (Portugal, Germany and France) and four Brazilian institutions. The complementary background and experience of the European and Brazilian partners is of major relevance to ensure the capacities required to achieve the proposed goals. In fact, the European Union (EU) and Brazil have very different resources and approaches in this area. Having huge hydro and fossil resources, Brazil has not placed emphasis on distributed renewable-based electricity generation. The EU, on the contrary, has made huge investments in this area, taking into account environmental concerns as well as the economic external dependence dictated by its huge imports of energy-related products. Sharing these different backgrounds allows the project team to propose new methodologies able to efficiently address the new challenges of smart grids.
Abstract:
This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample of databases resulting from a monitoring campaign, Data Mining (DM) techniques are used in order to discover a set of typical MV consumer load profiles and, therefore, to extract knowledge regarding electric energy consumption patterns. In the first stage, several hierarchical clustering algorithms were applied and their clustering performance compared using adequacy measures. In the second stage, a classification model was developed to allow classifying new consumers into one of the clusters that resulted from the previous process. Finally, the interpretation of the discovered knowledge is presented and discussed.
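The second stage, assigning a new consumer to one of the previously discovered clusters, can be illustrated with a nearest-centroid classifier. The paper does not specify the exact classification model, so this is an assumed, minimal example over load profiles represented as numeric vectors.

```python
# Illustrative nearest-centroid classifier for the second stage: assign a
# new consumer's load profile to the cluster with the closest centroid.
# The paper does not name its actual classification model; this is a sketch.
def centroid(profiles):
    """Component-wise mean of a list of equal-length load profiles."""
    n = len(profiles)
    return [sum(p[k] for p in profiles) / n for k in range(len(profiles[0]))]

def classify(new_profile, clusters):
    """clusters: list of lists of load profiles (one list per cluster).
    Returns the index of the cluster whose centroid is nearest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = [centroid(c) for c in clusters]
    return min(range(len(centroids)),
               key=lambda i: sq_dist(new_profile, centroids[i]))
```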