36 results for Distributed generations
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Currently, a high penetration level of Distributed Generations (DGs) is observed in Danish distribution systems, and even more DGs are foreseen in the upcoming years. How to utilize them to maintain the security of the power supply in emergency situations has been of great interest for study. This master's project develops a control architecture for studying distribution systems with large-scale integration of solar power. As part of the EcoGrid EU Smart Grid project, it focuses on the modelling and simulation of a representative Danish LV network located on the island of Bornholm. Regarding the control architecture, two types of reactive power control techniques are implemented and compared. In addition, network voltage control based on a tap-changer transformer is tested. After applying a genetic algorithm to five typical Danish domestic loads, the optimized results show lower power losses and voltage deviation with Q(U) control, especially under large consumption. Finally, a communication and information exchange system is developed with the objective of regulating the reactive power, and thereby the network voltage, remotely and in real time. Validation tests of the simulated parameters are performed as well.
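As a hedged illustration of the Q(U) technique mentioned in this abstract, the sketch below implements a generic piecewise-linear Q(U) droop curve in Python; the voltage thresholds, dead band and reactive-power limit are invented for the example and are not taken from the thesis.

```python
# Minimal sketch of a piecewise-linear Q(U) droop characteristic for a PV
# inverter. The voltage thresholds and the reactive-power limit q_max are
# illustrative assumptions, not values from the thesis.

def q_of_u(u_pu: float, q_max: float = 0.3,
           u1: float = 0.95, u2: float = 0.98,
           u3: float = 1.02, u4: float = 1.05) -> float:
    """Reactive power set-point (p.u.) as a function of terminal voltage (p.u.).

    Below u1 the inverter injects q_max (voltage support); above u4 it
    absorbs q_max; between u2 and u3 lies a dead band around nominal voltage.
    """
    if u_pu <= u1:
        return q_max
    if u_pu < u2:                      # ramp down from q_max to 0
        return q_max * (u2 - u_pu) / (u2 - u1)
    if u_pu <= u3:                     # dead band: no reactive exchange
        return 0.0
    if u_pu < u4:                      # ramp from 0 to -q_max
        return -q_max * (u_pu - u3) / (u4 - u3)
    return -q_max

for u in (0.94, 0.97, 1.00, 1.03, 1.06):
    print(f"U = {u:.2f} p.u. -> Q = {q_of_u(u):+.3f} p.u.")
```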
Abstract:
Recent developments in power electronics technology have made it possible to develop competitive and reliable low-voltage DC (LVDC) distribution networks. Further, islanded microgrids, i.e. isolated small-scale localized distribution networks, have been proposed to reliably supply power using distributed generation. However, islanded operation faces many issues such as power quality, voltage regulation, network stability, and protection. In this thesis, an energy management system (EMS) that ensures efficient energy and power balancing and voltage regulation is proposed for an LVDC island network utilizing solar panels for electricity production and lead-acid batteries for energy storage. The EMS uses the master/slave method with a robust communication infrastructure to control production, storage, and loads. The logical basis for the EMS operations is established by proposing functionalities for the network components as well as by defining appropriate operation modes that cover all situations. During loss-of-power-supply periods, load prioritization and disconnections are employed to maintain the power supply to at least some loads. The proposed EMS ensures optimal energy balance in the network. A sizing method based on discrete-event simulations is also proposed to obtain reliable capacities for the photovoltaic array and battery. In addition, an algorithm has been developed to determine the number of hours of electric power supply that can be guaranteed to customers at any given location. The successful performance of all the proposed algorithms is demonstrated by simulations.
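The load prioritization and disconnection step lends itself to a small illustration. The following Python sketch shows one plausible priority-based shedding rule; the load names, priorities and power values are invented, and the thesis's actual EMS logic may differ.

```python
# Illustrative sketch of priority-based load shedding for an island EMS when
# production plus storage cannot cover demand. All loads and figures are
# invented for the example.

def shed_loads(loads, available_kw):
    """Keep the highest-priority loads that fit within the available power.

    `loads` is a list of (name, priority, demand_kw); a lower priority number
    means a more critical load. Returns (kept, shed).
    """
    kept, shed, remaining = [], [], available_kw
    for name, prio, demand in sorted(loads, key=lambda l: l[1]):
        if demand <= remaining:
            kept.append(name)
            remaining -= demand
        else:
            shed.append(name)
    return kept, shed

loads = [("heating", 2, 3.0), ("lighting", 1, 0.5),
         ("refrigeration", 1, 0.8), ("sauna", 3, 6.0)]
kept, shed = shed_loads(loads, available_kw=4.0)
print("supplied:", kept, "| disconnected:", shed)
```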
Knowledge Sharing between Generations in an Organisation - Retention of the Old or Building the New?
Abstract:
The study explores knowledge transfer between retiring employees and their successors in expert work. My aim is to ascertain whether there is knowledge development, or building of new knowledge, related to this organisational knowledge transfer between generations; in other words, is the transfer of knowledge from experienced, retiring employees to their successors merely retention of the existing organisational knowledge by distributing it from one individual to another, or does this transfer lead to building new and meaningful organisational knowledge? In this study, I call knowledge transfer between generations, together with the possibly related knowledge building, knowledge sharing between generations. The study examines the organisation and knowledge management from a knowledge-based and constructionist view. From this standpoint, I see knowledge transfer as an interactive process, and the exploration is based on how the people involved in this process understand and experience the phenomenon studied. The research method is organisational ethnography. I conducted the analysis of the data using thematic analysis and the articulation method, which has not been used before in organisational knowledge studies. The primary empirical data consists of theme interviews with twelve employees involved in knowledge transfer in the organisation studied, and of five follow-up theme interviews. Six of the interviewees are employees in expert duties due to retire shortly, and six are their successors. All those participating in the follow-up interviews are successors of those soon to retire from their expert responsibilities. The organisation in the study is a medium-sized Finnish firm which designs and manufactures electrical equipment and systems for the global market. The results of the study show that expert work-related knowledge transfer between generations can mean knowledge building which produces new, meaningful knowledge for the organisation. This knowledge is distributed in the organisation to all those who find it useful in increasing the efficiency and competitiveness of the whole organisation. The transfer and building of knowledge together create an act of knowledge sharing between generations, where the building of knowledge presupposes transfer. Knowledge sharing proceeds between the expert and the novice through eight phases. During the phases of knowledge transfer, the expert guides the novice to absorb the knowledge to be transferred. With the expert's help the novice gradually comes to understand the knowledge, and in the end he or she is capable of using it in his or her work. During the phases of knowledge building, the expert helps the novice to further develop the knowledge being transferred so that it becomes new, useful knowledge for the organisation. After that the novice takes the built knowledge into use in his or her work. Based on the results of the study, knowledge sharing between generations takes place in interaction and ends when the knowledge is taken into use. The results I obtained in the interviews by the articulation method show that knowledge sharing between generations is shaped by the novices' conceptions of their own work goals, knowledge needs and duties. These are based not only on the official definition of the work, but also on how the novices perceive their work and how they prioritise the given objectives and responsibilities. The study shows that the novices see their work primarily as either maintenance or development.
Those primarily involved in maintenance duties do not necessarily need the knowledge defined as transferable between generations. Therefore, they do not necessarily transfer knowledge with their assigned experts, even though this can happen in favourable circumstances. They do not build knowledge, because their view of their work goals and duties does not require the building of new knowledge. Those primarily involved in development duties, however, do need the knowledge available from their assigned experts. Therefore, regardless of circumstances, they transfer knowledge with their assigned experts and also build knowledge, because their work goals and duties create a basis for building new knowledge. The literature on knowledge transfer between generations has focused on describing either the knowledge being transferred or the means by which it is transferred. Based on the results of this study, however, knowledge sharing between generations, that is, transfer and building, is determined by how the novice considers his or her own knowledge needs and work practices. This is why studies on knowledge sharing between generations and its implementation should be based not only on the knowledge content and how it is shared, but also on the context of the work in which the novice interprets and shares knowledge. The existing literature has not considered the possibility that knowledge transfer between generations may mean building knowledge. The results of this study, however, show that this is possible. In knowledge building, the expert's existing organisational knowledge is combined with the new knowledge that the novice brings to the organisation. In their interaction this combination of the expert's "old" and the novice's "new" knowledge becomes new, meaningful organisational knowledge. Previous studies show that knowledge development between the members of an organisation is the prerequisite for organisational renewal, which in turn is essential for improved competitiveness. Against this background, knowledge building enables organisational renewal and thus enhances competitiveness. Hence, when knowledge transfer between generations is followed by knowledge building, the organisation kills two birds with one stone. In knowledge transfer the organisation retains the existing knowledge and thus maintains its competitiveness. In knowledge building the organisation develops new knowledge and thus improves its competitiveness.
Abstract:
The energy system of Russia is the world's fourth largest measured by installed capacity; the largest are those of the United States of America, China and Japan. After 1990, electricity consumption decreased as a result of the crisis in Russian industry. The vivid economic growth of the last few years explains the renewed increase in the demand for energy resources within the state. In 2005 the consumption of electricity reached the maximum level of 1990 and continues to grow. In the 1980s, the renewal of power facilities was already very slow, and it practically stopped in the 1990s. At present, the energy system can be characterized as outdated, inefficient and uneconomic because of old equipment, an ineffective structure and large losses in the transmission lines. The aim of Russia's energy reform, which was started in 2001, is to achieve a market-based energy policy by 2011, thereby removing the significant state-controlled monopoly in Russia's energy sector. The reform will create incentives to decrease losses, improve the energy system and employ energy-saving technologies. The Russian energy system today is still based on the use of fossil fuels, and it almost totally ignores the efficient use of renewable sources such as wind, solar, small hydro and biomass, despite their significant resources in Russia. The main target of this project is to consider opportunities for applying renewable energy production in the North-West Federal Region of Russia to partly solve the above-mentioned problems in the energy system.
Abstract:
The nature of client-server architecture implies that some modules are delivered to customers. These publicly distributed commercial software components are at risk, because users (and thus potential malefactors) have physical access to some components of the distributed system. The problem becomes even worse if interpreted programming languages are used to create the client-side modules. The Java language, which was designed to be compiled into platform-independent byte-code, is no exception and carries this additional risk. Along with advantages such as verifying the code before execution (to ensure that a program does not perform illegal operations), Java has some disadvantages: at the byte-code stage a Java program still contains comments, line numbers and other information that can be used for reverse engineering. This Master's thesis focuses on the protection of Java-based client-server applications. I present a mixture of methods for protecting software from tortious acts, then put the theoretical assumptions into practice and examine their efficiency on examples of Java code. One of the criteria for evaluating the system is that the product is used in the specialized area of interactive television.
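As an analogy to the byte-code inspection risk described above (the thesis itself concerns Java), the following Python sketch uses the standard `dis` module to show how compiled code can retain names and constants that assist a reverse engineer; the licence-check function is invented for the example.

```python
# Analogy to the Java byte-code risk discussed above: compiled code often
# retains identifiers and constants useful to a reverse engineer. Here
# Python's standard `dis` module exposes the same kind of metadata; the
# licence-check function is an invented example.
import dis

def check_licence(user_key: str) -> bool:
    secret = "A1B2-C3D4"               # easily recovered from the constants
    return user_key == secret

dis.dis(check_licence)                 # disassembly shows names and constants
print(check_licence.__code__.co_consts)  # the secret appears in plain text
```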
Abstract:
The structures of the global economy are changing constantly, and companies operate in international markets more than before. To increase production, many companies have outsourced the support and maintenance of their products to low-cost countries, allowing them to concentrate on their core competence. The freed resources can be used in internal product development and invested in developing the next generation of products and technologies. This Master's thesis presents a globally distributed delivery model for an Internet service provider in which product support and maintenance have been outsourced to India. The theoretical part introduces different delivery models, concentrating in particular on the distributed delivery model. In addition, it lists selection criteria for assessing a project's suitability for outsourcing, and presents the opportunities and threats involved in a global outsourcing process. The practical part presents a global service delivery process developed for the needs of the Internet service provider.
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of these environments, the networks connecting the processors were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced: application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize this information to further refine their scheduling properties. A more accurate description of the workload is especially important when the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm combines the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication; a sketch of this idea follows below. Furthermore, the programming paradigm implements an application-specific scheduling algorithm, which is executed by the communication thread so that scheduling does not interrupt the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
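The sketch below is a minimal Python rendering of the overlap idea, written with `mpi4py` rather than the thesis's MPIT paradigm; the tags, queue protocol and toy workload are all invented for illustration.

```python
# Minimal sketch of the MPI-plus-threads overlap idea, using mpi4py (which
# requests full MPI thread support by default). Run e.g. with
# `mpiexec -n 4 python mpit_sketch.py`. Not a reproduction of MPIT itself.
import threading
import queue
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
outbox = queue.Queue()                 # results handed to the comm thread

def comm_thread():
    """Ship finished results to rank 0 while the main thread keeps computing."""
    while True:
        item = outbox.get()
        if item is None:               # sentinel: no more results
            break
        comm.send(item, dest=0, tag=7)

t = threading.Thread(target=comm_thread)
t.start()

if rank != 0:
    for unit in range(3):              # toy "computation": square some numbers
        outbox.put((rank, unit, unit * unit))
    outbox.put(None)
    t.join()
    comm.send(None, dest=0, tag=8)     # tell rank 0 this worker is done
else:
    outbox.put(None)
    t.join()                           # rank 0 only collects results
    finished, size = 0, comm.Get_size()
    while finished < size - 1:
        status = MPI.Status()
        msg = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == 8:
            finished += 1
        else:
            print("result from rank", status.Get_source(), ":", msg)
```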
Abstract:
This thesis examines the history and evolution of information system process innovation (ISPI) processes (adoption, adaptation, and unlearning) within information system development (ISD) work in an internal information systems (IS) department and in two IS software house organisations in Finland over a 43-year period. The study offers insights into the influential actors and their dependencies in deciding over ISPIs. The research uses a qualitative approach, and the methodology involves describing the ISPI processes, how the actors searched for ISPIs, and how the relationships between the actors changed over time. The existing theories were evaluated using conceptual models of the ISPI processes based on the innovation literature in the IS area. The main focus of the study was to observe changes in the main ISPI processes over time. The main contribution of the thesis is a new theory, where the term theory should be understood as 1) a new conceptual framework of the ISPI processes and 2) new ISPI concepts and categories, together with the relationships between the ISPI concepts inside the ISPI processes. The thesis provides a comprehensive and systematic account of the history and evolution of the ISPI processes; reveals the factors that affected ISPI adoption; studies ISPI knowledge acquisition, information transfer, and adaptation mechanisms; and reveals the mechanisms affecting ISPI unlearning, the changes in the ISPI processes, and the diverse actors involved in the processes. The results show that both the internal IS department and the two IS software houses sought opportunities to improve their technical skills and career paths, and this created an innovative culture. When new technology generations come to the market, the platform systems need to be renewed, and therefore the organisations invest in ISPIs in cycles. The extent of internal learning and experimentation was greater than that of external knowledge acquisition. Until the outsourcing event (1984), decision-making was centralised and the internal IS department was very influential over ISPIs. After outsourcing, decision-making became distributed between the two IS software houses, the IS client, and its internal IT department. The IS client wanted to ensure that the information systems would serve the business of the company and thus wanted to co-operate closely with the software organisations.
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its problems, however. In this study, Diworse, a distributed simulation environment, was used to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning and distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when its owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches for load balancing are discussed; a toy sketch of such a policy follows below.
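The following Python sketch illustrates one plausible form of the policy described above: partitions are moved off workstations whose external load rises past a threshold. The load metric, the threshold and the partition names are invented and do not describe Diworse's actual mechanism.

```python
# Toy sketch of load balancing for distributed simulation on idle office
# workstations: when external load on a machine rises (its owner resumes
# work), simulation partitions are migrated to less-loaded machines.

def rebalance(stations, busy_threshold=0.5):
    """Move partitions off stations whose external load exceeds the threshold.

    `stations` maps name -> dict(load=external CPU load in 0..1,
                                 partitions=list of simulation partitions).
    """
    idle = sorted((s for s in stations if stations[s]["load"] < busy_threshold),
                  key=lambda s: stations[s]["load"])
    for name, st in stations.items():
        if st["load"] >= busy_threshold and st["partitions"] and idle:
            target = idle[0]                     # least-loaded idle machine
            moved = st["partitions"].pop()
            stations[target]["partitions"].append(moved)
            print(f"moved {moved} from {name} to {target}")

stations = {
    "ws1": {"load": 0.9, "partitions": ["cells 0-15"]},   # owner came back
    "ws2": {"load": 0.1, "partitions": ["cells 16-31"]},
    "ws3": {"load": 0.2, "partitions": ["cells 32-47"]},
}
rebalance(stations)
```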
Abstract:
The main goal of the study was to examine how national and organisational culture, and the norms and values related to them, promote or hinder the development of trust in multicultural teams in a global organisation. The study also sought to establish how trust develops in distributed multinational teams at WorldCom International. The empirical research method is based on qualitative theme interviews conducted with WorldCom employees. The study found that shared social norms are not as significant for the emergence of trust as might be expected, because WorldCom's uniform operating procedures and the dominant "home culture" of the American parent company create uniform practices within the teams. The continuous use of computer-mediated communication has furthered the development of the employees' so-called social intelligence: the absence of face-to-face meetings correspondingly develops the skills to sense and interpret the "invisible" cues conveyed by the other party in e-mails or conference calls.
Abstract:
The thesis studies the representations of different elements of contemporary work as they appear in Knowledge Management (KM). KM is approached as a management discourse that is seen to affect and influence managerial practices in organizations. As representatives of KM discourse, four journal articles are analyzed using the methodology of Critical Discourse Analysis within the framework of Critical Management Studies, with a special emphasis on the question of structure and agency. The results of the analysis reveal that structural elements such as information technology and organizational structures are strongly present in the most influential KM representations, making their improvement a desirable course of action for managers. In contrast, agentic properties are not in a central role; they are subjugated to structural constraints of varying kind and degree. The thesis claims that one such constraint is KM discourse itself, which influences managerial and organizational choices and decision-making. The thesis concludes that the way human beings are represented, studied and treated in management studies such as KM needs to be re-examined.
Abstract:
Today's commercial web sites are under heavy user load and are expected to be operational and available at all times. Distributed system architectures have been developed to provide a scalable and failure-tolerant, highly available platform for these web-based services. The focus of this thesis was to specify and implement a resilient and scalable, locally distributed high-availability system architecture for a web-based service. The theory part concentrates on the fundamental characteristics of distributed systems and presents common scalable high-availability server architectures used in web-based services. The practical part of the thesis explains the new system architecture that was implemented, and includes two test cases that were performed to test the system's performance capacity.
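As a hedged illustration of the failover behaviour such architectures aim for, the sketch below routes requests round-robin over replicas that pass a health check; the addresses and the probe are invented and do not describe the thesis's implementation.

```python
# Illustrative sketch of the failover idea behind a locally distributed
# high-availability setup: requests go to any healthy replica, and a failed
# health check takes a node out of rotation. Hosts and the probe are invented.
import itertools

replicas = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def health_check(replica: str) -> bool:
    """Stand-in for a real probe (e.g. an HTTP GET against a /health URL)."""
    return replica != "10.0.0.2"        # simulate one failed node

def route(requests: int):
    """Round-robin requests over the replicas that pass their health check."""
    pool = [r for r in replicas if health_check(r)]
    rr = itertools.cycle(pool)
    for i in range(requests):
        print(f"request {i} -> {next(rr)}")

route(4)
```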
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of a problem are considered, the simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating such simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm and the environment. In this thesis a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages; a toy illustration of the null-message idea follows below. The basic Chandy-Misra algorithm is modified using the null message cancellation and multiple message simulation techniques. These modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation groups messages together: several messages are simulated before the newly created messages are released. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. Performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of simulation results is required. Distributed simulation is also analyzed in order to determine the effect of the different elements of the implemented simulation environment. This analysis is performed using critical path analysis, which allows the determination of a lower bound for the simulation time. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
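A toy Python illustration of the null-message idea referenced above: a conservative logical process may safely advance to the smallest timestamp promised on its input channels, and a null message carries that promise (the sender's local time plus lookahead) even when no real event exists. All names and values below are invented.

```python
# Toy illustration of the Chandy-Misra null-message mechanism: null messages
# advance channel clocks so a logical process can compute a safe simulation
# horizon without deadlocking. Not a reproduction of the Diworse environment.

class Channel:
    """One input channel of a logical process; `clock` is the latest
    timestamp seen on it (from a real event or a null message)."""
    def __init__(self):
        self.pending = []              # (timestamp, payload) of real events
        self.clock = 0.0

    def deliver(self, ts, payload=None):
        if payload is not None:        # a null message carries no payload
            self.pending.append((ts, payload))
        self.clock = max(self.clock, ts)

class LogicalProcess:
    def __init__(self, name, inputs, lookahead):
        self.name, self.inputs, self.lookahead = name, inputs, lookahead
        self.lvt = 0.0                 # local virtual time

    def safe_time(self):
        """A conservative LP may process events up to the minimum clock
        over all input channels without risking causality errors."""
        return min(ch.clock for ch in self.inputs)

    def null_message_ts(self):
        """Promise to downstream LPs: no future message from this LP will
        carry a timestamp below lvt + lookahead."""
        return self.lvt + self.lookahead

a, b = Channel(), Channel()
lp = LogicalProcess("cell-0", [a, b], lookahead=2.0)
a.deliver(5.0, "call arrives")         # real event on channel a
b.deliver(3.0)                         # null message on channel b
print("safe to simulate up to t =", lp.safe_time())   # 3.0, thanks to b's null
```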