65 results for enterprise network cooperation
at Instituto Politécnico do Porto, Portugal
Abstract:
Because the information processed in a corporate computer network is nowadays increasingly confidential, it must be protected as well as possible. At the same time, this information must be available quickly to the right partners, in an ever more globalised world. This work aims to study and implement security in a small, generic test network that can easily be extrapolated to a network the size of a large company, potentially with branches in several locations. The goal is to implement and monitor security both externally (Internet service provider, ISP) and internally (network equipment, workstations/users). The analysis is based on location (local, wireless or remote) and, whenever an anomaly is detected, its location is identified and protection actions are taken automatically. These anomalies can be managed with open source or commercial tools that collect all the necessary information and take corrective or alerting actions according to the type of anomaly.
Abstract:
Most definitions of virtual enterprise (VE) incorporate the idea of extended and collaborative outsourcing to suppliers and subcontractors in order to achieve a competitive response to market demands (Webster, Sugden, & Tayles, 2004). As suggested by several authors (Browne & Zhang, 1999; Byrne, 1993; Camarinha-Matos & Afsarmanesh, 1999; Cunha, Putnik, & Ávila, 2000; Davidow & Malone, 1992; Preiss, Goldman, & Nagel, 1996), a VE consists of a network of independent enterprises (resources providers) with reconfiguration capability in useful time, permanently aligned with the market requirements, created to profit from a specific market opportunity, and where each participant contributes with its best practices and core competencies to the success and competitiveness of the structure as a whole. Even during the operation phase of the VE, the configuration can change to assure business alignment with market demands, reflected in the identification of reconfiguration opportunities and the continuous readjustment or reconfiguration of the VE network, to meet unexpected situations or to keep permanent competitiveness and maximum performance (Cunha & Putnik, 2002, 2005a, 2005b).
Abstract:
Multi-standard mobile devices are allowing users to enjoy higher data rates with ubiquitous connectivity. However, the benefits gained from multiple interfaces come at the expense of higher energy consumption, in an era where mobile devices need to be energy compliant. One promising solution is the usage of short-range cooperative communication as an overlay for infrastructure-based networks, taking advantage of its context information. However, the node discovery mechanism, which is pivotal to the bearer establishment process, still represents a major burden in terms of the total energy budget. In this paper, we propose a technology-agnostic approach towards enhancing the MAC energy ratings by presenting a context-aware node discovery (CANDi) algorithm, which provides a priori knowledge to the node discovery mechanism by allowing it to search for nodes in the near vicinity at the 'right time and at the right place'. We describe the different beacons required for establishing the cooperation, as well as the context information required, including battery level, modes, location and so on. CANDi uses the long-range network (WiMAX and WiFi) to distribute context information about cooperative clusters (Ultra-wideband-based) in the vicinity. The searching nodes can use this context to locate the cooperative clusters/nodes, which facilitates establishing short-range connections. Analytical and simulation results are obtained, and the energy saving gains are further demonstrated in the laboratory using a customised testbed. CANDi saves up to 50% energy during the node discovery process, while the demonstrative testbed shows up to 75% savings in the total energy budget, thus validating the algorithm, as well as providing viable evidence to support the usage of short-range cooperative communications for energy savings.
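A minimal sketch of the context-aware decision that the abstract describes: only wake the short-range interface when context distributed over the long-range network says a cooperative cluster is nearby and worth discovering. The record fields, thresholds and rule below are illustrative assumptions, not the CANDi protocol itself.

```python
# Hedged sketch of a context-aware discovery decision in the spirit of CANDi.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClusterContext:
    cluster_id: str
    location: tuple          # (x, y) advertised over the long-range link (WiMAX/WiFi)
    battery_level: float     # 0.0 .. 1.0, reported by the cluster head
    mode: str                # e.g. "active", "sleep"

def should_scan(own_location, ctx, max_range_m=20.0, min_battery=0.2):
    """Activate the short-range (UWB) interface only when an advertised cluster
    is close enough and healthy enough to be worth discovering."""
    dx = own_location[0] - ctx.location[0]
    dy = own_location[1] - ctx.location[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return ctx.mode == "active" and ctx.battery_level >= min_battery and distance <= max_range_m

contexts = [
    ClusterContext("c1", (3.0, 4.0), 0.8, "active"),
    ClusterContext("c2", (120.0, 40.0), 0.9, "active"),   # too far: keep the radio off
]
own = (0.0, 0.0)
targets = [c.cluster_id for c in contexts if should_scan(own, c)]
print("scan short-range interface for:", targets)          # -> ['c1']
```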
Abstract:
4th International Conference on Future Generation Communication Technologies (FGCT 2015), Luton, United Kingdom.
Abstract:
Innovation is recognized by academics and practitioners as an essential competitive enabler for any company to survive, remain competitive and grow. Investments in R&D tasks have not always brought the expected results, but that does not mean the outcomes would not be useful to other companies of the same business area or even from another area. Thus, there is much knowledge already available in the market that can be helpful to some and profitable to others. The ideas and expertise can be found outside a company's boundaries and also exported from within. Information, knowledge, experience and wisdom are already available among the millions of human beings on this planet; the challenge is to use them through a network to produce new ideas and tips that can be useful to a company at lower cost. This was the reason for the emergence of the area of crowdsourcing innovation. Crowdsourcing innovation is a way of using Web 2.0 tools to generate new ideas through the heterogeneous knowledge available in the global network of highly qualified individuals with easy access to information and technology. A crowdsourcing innovation broker is therefore an organization that mediates the communication and relationship between the seekers (companies that aspire to solve some problem or to take advantage of a business opportunity) and a crowd that is prone to give ideas based on its knowledge, experience and wisdom. This paper makes a literature review of models of open innovation, crowdsourcing innovation, and technology and knowledge intermediaries, and discusses this new phenomenon as a way to leverage the innovation capacity of enterprises. Finally, the paper outlines a research design agenda for explaining the crowdsourcing innovation brokering phenomenon, exploring its players, main functions, value creation process, and knowledge creation in order to define a knowledge metamodel of such intermediaries.
Abstract:
Value has been defined in different theoretical contexts as need, desire, interest, standards/criteria, beliefs, attitudes, and preferences. The creation of value is key to any business, and any business activity is about exchanging some tangible and/or intangible good or service and having its value accepted and rewarded by customers or clients, either inside the enterprise or collaborative network or outside. "Perhaps surprising then is that firms often do not know how to define value, or how to measure it" (Anderson and Narus, 1998 cited by [1]). Woodruff echoed that we need "richer customer value theory" for providing an "important tool for locking onto the critical things that managers need to know". In addition, he emphasized, "we need customer value theory that delves deeply into customer's world of product use in their situations" [2]. In this sense, we proposed and validated a novel "Conceptual Model for Decomposing the Value for the Customer". To this end, we were aware that time has a direct impact on customer perceived value, and that the suppliers' and customers' perceptions change from the pre-purchase to the post-purchase phases, causing some uncertainty and doubts. We wanted to break down value into all its components, as well as all the assets built and used (from both endogenous and exogenous perspectives). This component analysis was then transposed into a mathematical formulation using the Fuzzy Analytic Hierarchy Process (AHP), so that the uncertainty and vagueness of value perceptions could be embedded in a model that relates the used and built assets in the tangible and intangible deliverable exchange among the involved parties with their actual value perceptions.
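As an illustration of how Fuzzy AHP can turn uncertain value judgements into component weights, the sketch below uses triangular fuzzy pairwise comparisons, centroid defuzzification and geometric-mean weighting. This is one common simplification of Fuzzy AHP; the value components and judgements are invented and the paper's exact variant may differ.

```python
# Hedged sketch: priority weights for value components from a triangular-fuzzy
# pairwise comparison matrix. Components and judgements are illustrative.
import numpy as np

components = ["price", "quality", "relationship"]

# Each judgement is a triangular fuzzy number (l, m, u): how much more valuable
# component i is perceived to be than component j.
fuzzy = {
    (0, 1): (1/2, 1, 2),   # price vs quality: roughly equal, but uncertain
    (0, 2): (2, 3, 4),     # price vs relationship: moderately more important
    (1, 2): (1, 2, 3),
}

n = len(components)
crisp = np.ones((n, n))
for (i, j), (l, m, u) in fuzzy.items():
    value = (l + m + u) / 3.0          # centroid defuzzification
    crisp[i, j] = value
    crisp[j, i] = 1.0 / value          # enforce reciprocity

weights = np.prod(crisp, axis=1) ** (1.0 / n)   # geometric-mean approximation
weights /= weights.sum()
for name, w in zip(components, weights):
    print(f"{name}: {w:.3f}")
```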
Abstract:
In recent years, power systems have experienced many changes in their paradigm. The introduction of new players in the management of distributed generation leads to the decentralization of control and decision-making, so that each player is able to act in the market environment. In this new context, aggregator players will be very relevant in allowing medium-sized, small and micro players to act in a competitive environment. In order to achieve their objectives, virtual power players and single players are required to optimize their energy resource management process. To achieve this, it is essential to have financial resources capable of providing access to appropriate decision support tools. As small players have difficulties in accessing such tools, they must be able to benefit from alternative methodologies to support their decisions. This paper presents a methodology, based on Artificial Neural Networks (ANN), intended to support smaller players. The proposed methodology uses a training set created from energy resource scheduling solutions obtained with a mixed-integer linear programming (MIP) approach as the reference optimization methodology. The trained network is used to obtain locational marginal prices in a distribution network. The main goal of the paper is to verify the accuracy of the ANN-based approach. Moreover, the use of a single ANN is compared with the use of two or more ANNs to forecast the locational marginal price.
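The surrogate idea above can be sketched in a few lines: train a neural network on (scenario features, locational marginal price) pairs that would normally come from the reference MIP scheduling runs, and check its accuracy on held-out scenarios. The features, the synthetic price relation and the scikit-learn model below are illustrative assumptions.

```python
# Sketch of the ANN surrogate: learn a mapping from scheduling-scenario features
# to LMPs. Here the targets are synthetic; in the paper they come from MIP runs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 500
# Illustrative features: total load (MW), wind availability (p.u.), fuel price index.
X = rng.uniform([10.0, 0.0, 20.0], [50.0, 1.0, 60.0], size=(n_samples, 3))
# Stand-in for MIP-derived LMPs at one bus (EUR/MWh): load raises price, wind lowers it.
y = 0.8 * X[:, 0] - 15.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1.0, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("R^2 on held-out scenarios:", round(ann.score(X_test, y_test), 3))
```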
Abstract:
Smart Grids (SGs) have emerged as the new paradigm for power system management and operation, designed to integrate large amounts of distributed energy resources. This new paradigm requires a more efficient Energy Resource Management (ERM) and, simultaneously, makes it a more complex problem, due to the intensive use of distributed energy resources (DER), such as distributed generation, active consumers with demand response contracts, and storage units. This paper presents a methodology to address the energy resource scheduling, considering an intensive use of distributed generation and demand response contracts. A case study of a 30 kV real distribution network, including a substation with 6 feeders and 937 buses, is used to demonstrate the effectiveness of the proposed methodology. This network is managed by six virtual power players (VPP) with the capability to manage the DER and the distribution network.
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of the system component outage parameters are obtained from statistical records. A mixed-integer non-linear optimization technique is developed to identify adequate investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
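One way to obtain the fuzzy memberships mentioned above is to build a triangular fuzzy number from the historical records of each outage parameter. The rule used below (minimum, modal and maximum observed values as the triangle's support and peak) is an illustrative assumption; the abstract only states that the memberships come from statistical records.

```python
# Sketch: triangular fuzzy membership functions for component outage parameters
# (e.g. failure rate) derived from statistical records. Assumed construction rule.
import numpy as np

def triangular_from_records(samples, bins=10):
    samples = np.asarray(samples, dtype=float)
    hist, edges = np.histogram(samples, bins=bins)
    mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    return samples.min(), mode, samples.max()        # (l, m, u)

def membership(x, l, m, u):
    if x <= l or x >= u:
        return 0.0
    return (x - l) / (m - l) if x <= m else (u - x) / (u - m)

# Historical failure rates of one feeder section (occurrences per year), illustrative.
records = [0.08, 0.10, 0.11, 0.12, 0.12, 0.13, 0.15, 0.20]
l, m, u = triangular_from_records(records)
print("failure-rate TFN:", (round(l, 3), round(m, 3), round(u, 3)))
print("membership of 0.14:", round(membership(0.14, l, m, u), 3))
```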
Abstract:
In competitive electricity markets with deep concerns for the efficiency level, demand response programs gain considerable significance. As demand response levels have decreased after the introduction of competition in the power industry, new approaches are required to take full advantage of demand response opportunities. Grid operators and utilities are taking new initiatives, recognizing the value of demand response for grid reliability and for the enhancement of organized spot markets' efficiency. This paper proposes a methodology for the selection of the consumers that participate in an event, which is the responsibility of the Portuguese transmission network operator. The proposed method is intended to be applied in the interruptibility service implemented in Portugal, in convergence with Spain, in the context of the Iberian electricity market. The method is based on the calculation of locational marginal prices (LMP), which are used to support the decision concerning the consumers to be scheduled for participation. The proposed method has been computationally implemented and its application is illustrated in this paper using a 937-bus distribution network with more than 20,000 consumers.
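A toy sketch of the selection step described above: rank interruptible consumers by the LMP at their bus and schedule them until the required load reduction is met. The consumer data and the simple merit-order rule are assumptions for illustration, not the operator's actual procedure.

```python
# Illustrative sketch: LMP-based selection of consumers for an interruptibility event.
consumers = [
    # (consumer_id, bus_lmp_eur_per_mwh, curtailable_load_mw)
    ("C101", 62.4, 1.5),
    ("C087", 71.9, 0.8),
    ("C342", 58.1, 2.0),
    ("C019", 69.3, 1.2),
]

def select_for_event(consumers, target_mw):
    """Pick consumers at the highest-LMP buses first until the target reduction is reached."""
    selected, total = [], 0.0
    for cid, lmp, load in sorted(consumers, key=lambda c: c[1], reverse=True):
        if total >= target_mw:
            break
        selected.append(cid)
        total += load
    return selected, total

print(select_for_event(consumers, target_mw=2.0))   # -> (['C087', 'C019'], 2.0)
```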
Abstract:
This paper presents an artificial neural network applied to the forecasting of electricity market prices, with the special feature of being dynamic. The dynamism is verified at two different levels. The first level is characterized as a re-training of the network in every iteration, so that the artificial neural network is able to consider the most recent data at all times and constantly adapt itself to the most recent happenings. The second level considers the adaptation of the neural network's execution time depending on the circumstances of its use. The execution time adaptation is performed through the automatic adjustment of the amount of data considered for training the network. This is an advantageous and indispensable feature for this neural network's integration in ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system that has the purpose of providing decision support to the market negotiating players of MASCEM (Multi-Agent Simulator of Competitive Electricity Markets).
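The two levels of dynamism can be sketched as a loop that re-fits the model at every market iteration on the most recent data, and grows or shrinks the training window to respect an execution-time budget. The model, the synthetic price series, the window bounds and the timing rule are illustrative assumptions; ALBidS/MASCEM use their own implementations.

```python
# Sketch: per-iteration re-training with an execution-time-driven training window.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
prices = 50 + 10 * np.sin(np.arange(600) / 24.0) + rng.normal(0, 2, 600)  # hourly prices

window = 168          # start with one week of hourly data
time_budget_s = 0.5   # allowed execution time per iteration (assumption)

for step in range(500, 505):                    # a few market iterations
    history = prices[step - window:step]
    X = history[:-1].reshape(-1, 1)             # previous price -> next price
    y = history[1:]
    start = time.perf_counter()
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
    model.fit(X, y)                             # level 1: re-train every iteration
    elapsed = time.perf_counter() - start
    forecast = model.predict(prices[step - 1:step].reshape(-1, 1))[0]
    # Level 2: adapt the amount of training data to the time actually spent.
    window = max(48, int(window * 0.8)) if elapsed > time_budget_s else min(336, window + 24)
    print(f"step {step}: forecast={forecast:.2f} EUR/MWh, next window={window}")
```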
Abstract:
In the smart grid context, distributed generation units based on renewable resources play an important role. Photovoltaic solar units are an evolving technology whose prices have decreased significantly in recent years, due to the high penetration of this technology in low voltage and medium voltage networks supported by governmental policies and incentives. This paper proposes a methodology to determine the maximum penetration of photovoltaic units in a distribution network. The paper presents a case study, with four different scenarios, that considers a 32-bus medium voltage distribution network and the inclusion of storage units.
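The hosting-capacity question can be illustrated by a very simplified search: increase the installed PV capacity at a bus until a voltage-rise limit is violated. The single-feeder linearised voltage approximation and all parameters below are assumptions; the paper uses a full 32-bus network model and storage scenarios.

```python
# Very simplified sketch of a PV hosting-capacity search (illustrative only).
def voltage_rise_pu(p_inject_mw, r_ohm, v_base_kv):
    """Approximate voltage rise caused by active power injection at the feeder end."""
    return (p_inject_mw * 1e6 * r_ohm) / (v_base_kv * 1e3) ** 2

def max_pv_penetration(load_mw, r_ohm=2.0, v_base_kv=15.0, v_limit_pu=0.05, step_mw=0.1):
    pv = 0.0
    while voltage_rise_pu(max(pv - load_mw, 0.0), r_ohm, v_base_kv) <= v_limit_pu:
        pv += step_mw
    return pv - step_mw     # last feasible capacity

print("max PV at this bus: %.1f MW" % max_pv_penetration(load_mw=1.0))
```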
Abstract:
Energy resource scheduling becomes increasingly important as the use of distributed resources is intensified and massive gridable vehicle use is envisaged. The present paper proposes a methodology for day-ahead energy resource scheduling for smart grids considering the intensive use of distributed generation and of gridable vehicles, usually referred to as Vehicle-to-Grid (V2G). This method considers that the energy resources are managed by a Virtual Power Player (VPP) which establishes contracts with V2G owners. It takes into account these contracts, the users' requirements submitted to the VPP, and several discharge price steps. The full AC power flow calculation included in the model allows network constraints to be taken into account. The influence of the successive day requirements on the day-ahead optimal solution is discussed and considered in the proposed model. A case study with a 33-bus distribution network and V2G is used to illustrate the good performance of the proposed method.
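A toy sketch of the dispatch idea: the VPP covers the residual demand of each hour with the cheapest available V2G discharge steps agreed in the owners' contracts, while respecting the energy each vehicle still has on board. The contract data, prices and the greedy rule are illustrative; the paper solves the full AC network-constrained scheduling problem.

```python
# Illustrative greedy dispatch of V2G discharge price steps (not the paper's optimizer).
vehicles = {
    # vehicle_id: available energy and contracted discharge price steps
    "EV1": {"energy_kwh": 20.0, "steps": [(0.10, 3.0), (0.18, 3.0)]},  # (EUR/kWh, kWh/h)
    "EV2": {"energy_kwh": 15.0, "steps": [(0.12, 4.0), (0.20, 4.0)]},
}
residual_demand_kwh = [5.0, 9.0, 2.0]   # demand not met by other resources, per hour

for hour, demand in enumerate(residual_demand_kwh):
    offers = sorted(
        (price, cap, vid) for vid, v in vehicles.items() for price, cap in v["steps"]
    )
    for price, cap, vid in offers:       # cheapest discharge steps first
        if demand <= 0:
            break
        energy = min(cap, demand, vehicles[vid]["energy_kwh"])
        if energy > 0:
            vehicles[vid]["energy_kwh"] -= energy
            demand -= energy
            print(f"hour {hour}: {vid} discharges {energy:.1f} kWh at {price:.2f} EUR/kWh")
```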
Abstract:
The natural gas industry has been confronted with major challenges: strong growth in demand, investments in new GSUs (gas supply units), and efficient technical system management. The right number of GSUs, their best location in the network and their optimal allocation to loads is a decision problem that can be formulated as a combinatorial programming problem, with the objective of minimizing system expenses. Our emphasis is on the formulation, interpretation and development of a solution algorithm that analyzes the trade-off between infrastructure investment expenditure and operating system costs. The location model was applied to a 12-node natural gas network, and its effectiveness was tested in five different operating scenarios.
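A compact way to see the trade-off described above is a small facility-location MILP: binary variables decide which candidate GSU sites are built, continuous variables allocate loads to the built sites, and the objective sums fixed investment and delivery costs. The network, costs and capacities below are invented for illustration; the sketch uses the PuLP library.

```python
# Hedged sketch of a GSU location/allocation MILP (illustrative data).
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, PULP_CBC_CMD

sites = {"A": 100.0, "B": 120.0, "C": 90.0}          # candidate GSUs: fixed investment cost
capacity = {"A": 60.0, "B": 80.0, "C": 50.0}          # supply capacity
loads = {"n1": 30.0, "n2": 25.0, "n3": 40.0}          # demand at network nodes
cost = {                                              # unit delivery cost, site -> node
    ("A", "n1"): 2, ("A", "n2"): 4, ("A", "n3"): 5,
    ("B", "n1"): 3, ("B", "n2"): 1, ("B", "n3"): 3,
    ("C", "n1"): 5, ("C", "n2"): 4, ("C", "n3"): 2,
}

build = {s: LpVariable(f"build_{s}", cat=LpBinary) for s in sites}
flow = {(s, n): LpVariable(f"flow_{s}_{n}", lowBound=0) for s in sites for n in loads}

prob = LpProblem("gsu_location", LpMinimize)
prob += lpSum(sites[s] * build[s] for s in sites) + \
        lpSum(cost[s, n] * flow[s, n] for s in sites for n in loads)
for n, demand in loads.items():                       # every load fully supplied
    prob += lpSum(flow[s, n] for s in sites) == demand
for s in sites:                                       # only built GSUs can supply
    prob += lpSum(flow[s, n] for n in loads) <= capacity[s] * build[s]

prob.solve(PULP_CBC_CMD(msg=False))
print("built GSUs:", [s for s in sites if build[s].value() > 0.5])
```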
Abstract:
This paper presents a methodology for distribution network reconfiguration in the presence of outages, in order to choose the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling for system component outage parameters. Fuzzy membership functions of the system component outage parameters are obtained from statistical records. A hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models allows capturing both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a logical programming algorithm is applied to obtain all possible reconfigurations for every system state. In order to evaluate line flows and bus voltages and to identify any overloading and/or voltage violation, a distribution power flow is applied to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study that considers a real distribution network.
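The hybrid fuzzy/Monte Carlo step can be sketched as follows: each component's unavailability is a triangular fuzzy number, and for every Monte Carlo trial a crisp value is drawn from it (here via a random alpha-cut) before the component state is sampled. The fuzzy values and the sampling rule are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: hybrid fuzzy / Monte Carlo sampling of distribution system states.
import random

components = {
    # component_id: (l, m, u) triangular fuzzy unavailability (illustrative)
    "line_1": (0.01, 0.02, 0.04),
    "line_2": (0.02, 0.03, 0.05),
    "transformer_1": (0.005, 0.01, 0.02),
}

def sample_from_tfn(l, m, u, rng):
    """Pick a crisp value inside a random alpha-cut of the triangular fuzzy number."""
    alpha = rng.random()
    low = l + alpha * (m - l)
    high = u - alpha * (u - m)
    return rng.uniform(low, high)

def sample_system_state(components, rng):
    """Return the set of components that are out of service in this trial."""
    out = set()
    for cid, (l, m, u) in components.items():
        unavailability = sample_from_tfn(l, m, u, rng)
        if rng.random() < unavailability:
            out.add(cid)
    return out

rng = random.Random(42)
states = [sample_system_state(components, rng) for _ in range(10000)]
outage_trials = sum(1 for s in states if s)
print(f"trials with at least one outage: {outage_trials} / {len(states)}")
```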