949 results for "Hydrothermal generation scheduling"
Abstract:
Literature review
Abstract:
The maintenance of electric distribution networks is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation, the maintenance practices of distribution system operators are analyzed, and a theory for scheduling maintenance activities and the reinvestment of distribution components is developed. The scheduling is based on the deterioration of components and on failure rates that increase with aging. Dynamic programming is used to solve the maintenance problem caused by the increasing failure rates of the network. Other drivers of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis. Tree trimming of line corridors and major network disturbances are likewise excluded from the optimization problem. Four dynamic programming models are presented and tested; the models are implemented in VBA. Two different test networks are used for testing. Because electric distribution system operators want to operate with bigger component groups, optimal timing for component groups is also analyzed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is given.
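The dissertation's own DP models are not reproduced in the abstract; as an illustration of the general idea, a minimal sketch of age-based replacement scheduling could look like this. The linear failure-rate model and all cost figures below are invented for illustration, not values from the thesis:

```python
# Minimal sketch of dynamic programming for component replacement
# timing under an age-dependent failure rate. State = component age;
# the Bellman recursion runs backwards over the planning horizon.
def optimal_replacement_cost(horizon, replace_cost, failure_cost,
                             failure_rate):
    """Return, per starting age, the minimal expected cost over the
    horizon when each year we either keep or replace the component."""
    # best[t][age] = minimal expected cost from year t onward
    best = [[0.0] * (horizon + 1) for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for age in range(horizon + 1):
            # keep: risk a failure this year, component ages by one
            keep = (failure_rate(age) * failure_cost
                    + best[t + 1][min(age + 1, horizon)])
            # replace: pay the reinvestment, restart from age 0
            replace = (replace_cost + failure_rate(0) * failure_cost
                       + best[t + 1][1])
            best[t][age] = min(keep, replace)
    return best[0]

costs = optimal_replacement_cost(
    horizon=10, replace_cost=5.0, failure_cost=2.0,
    failure_rate=lambda age: 0.05 + 0.1 * age)  # rate grows with age
print(round(costs[0], 2))  # expected cost for a brand-new component
```

As expected, the computed cost is non-decreasing in the component's starting age, which is what makes age-based replacement timing well-posed.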
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest-precondition predicate transformer semantics and uses guarded commands as the basic modelling construct; examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle to using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting; due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case-study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own.
The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
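As an illustration of the guarded-command execution model described above (a toy interpreter, not code from the thesis), enabled actions can be chosen nondeterministically until no guard holds:

```python
# Toy interpreter for guarded commands in the spirit of Action
# Systems / Event-B: each action is a (guard, command) pair over a
# shared state; an enabled action is picked nondeterministically.
import random

def run_guarded_commands(state, actions, rng=random.Random(0)):
    """Repeatedly execute one randomly chosen enabled action;
    terminate when no guard holds (termination/deadlock)."""
    while True:
        enabled = [cmd for guard, cmd in actions if guard(state)]
        if not enabled:
            return state
        rng.choice(enabled)(state)

# Illustrative example: two actions that both count x up to 5.
state = {"x": 0}
actions = [
    (lambda s: s["x"] < 5, lambda s: s.__setitem__("x", s["x"] + 1)),
    (lambda s: s["x"] % 2 == 1 and s["x"] < 5,
     lambda s: s.__setitem__("x", s["x"] + 1)),
]
final = run_guarded_commands(state, actions)
print(final["x"])  # → 5
```

Note that which action fires at each step is irrelevant to the final state here; proving such properties regardless of the nondeterministic choice is exactly what the correctness arguments above are about.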
Abstract:
At the present stage, the analytical design of wave tolerance for floating structures and vessels remains imperfect because of the complex, nonlinear interactions between structures and waves. Wave tolerance design is therefore usually carried out through iterative evaluation of model-test results in a wave basin until a final structural design is reached. Wave generation has thus become an important technology in coastal and ocean engineering. This paper summarizes the facilities of a test basin and wave maker in Japan and surveys the methodology of generating ocean waves in a test basin.
Abstract:
This paper discusses the effect of tool wear on surface finish in single-point diamond turning of single-crystal silicon. The morphology and topography of the machined surface clearly show the type of cutting-edge wear reproduced in the cutting grooves. Scanning electron microscopy is used to correlate the cutting-edge damage with the microtopography features observed through atomic force microscopy. The possible wear mechanisms affecting tool performance and surface generation during cutting are also discussed. The zero-degree rake angle single-point diamond tool presented small nicks on the cutting edge, whereas the negative rake angle tools exhibited a type of crater wear on the rake face. No wear was detected on the flank face of the diamond tools.
Abstract:
Presentation at the Nordic Perspectives on Open Access and Open Science seminar, Helsinki, October 15, 2013
Abstract:
Kristiina Hormia-Poutanen's presentation at the CBUC conference in Barcelona, 12 April 2013.
Abstract:
New challenges have arisen in the modern work environment, as the workforce is more diverse than ever in terms of generations. Demand for Generation Y employees will grow as baby boomer employees retire at an accelerating rate. The purpose of this study is to investigate the specific characteristics of Generation Y and to identify motivational systems that enhance performance. The research questions are: 1. What are the characteristics of Generation Y? 2. What motivational systems can organizations form to motivate Generation Y employees and, in turn, create better performance? The Generation Y characteristics identified in the literature include being achievement-oriented, confident, educated, able to multitask, in need of feedback and management support, sociable, and tech-savvy. The proposed motivational systems fall into four areas of the organization: HRM, training and development, communication, and decision-making policies. Three focus groups were held to investigate what would motivate Generation Y employees to achieve better performance; two consisted of Finnish natives and the third of international students. The HRM systems included flexibility and a culture of fun. Flexibility within the workplace and role proved to be a great source of motivation. The culture of fun was received less favorably, although most focus group participants rated enjoyableness among their top motivating factors. Training and development systems included training programs and mentoring as potential sources of motivation. Training programs were viewed as a way to gain a better position rather than as motivational systems, and mentoring programs were not found to have a significant effect on motivation. Communication systems included keeping up with technology, clarity and goals, and feedback. Keeping up with technology was seen as an ineffective tool to motivate.
Clarity and goal setting were seen as very important for performance but not necessarily motivating, whereas feedback had a highly motivating effect on these focus groups. Decision-making policies included collaboration and teamwork as well as ownership. Teams were familiar, met the social needs of Generation Y employees, and were motivating. Ownership was equated with trust and responsibility and was both highly valued and motivating to these focus group participants.
Abstract:
Hydrothermal carbonization (HTC) is a thermochemical process used to produce charred matter similar in composition to coal. It involves wet, carbohydrate-rich feedstock, a relatively low-temperature environment (180-350 °C), and high autogenous pressure (up to 2.4 MPa) in a closed system. Various applications of the solid char product exist, opening the way for a range of biomass feedstocks to be exploited that have so far proven troublesome due to high water content or other factors. Sludge materials are investigated as candidates for industrial-scale HTC treatment in fuel production. In general, HTC treatment of pulp and paper industry sludge (PPS) and anaerobically digested municipal sewage sludge (ADS) using existing technology is competitive with traditional treatment options, which range in price from EUR 30 to 80 per ton of wet sludge; PPS and ADS can be treated by HTC for less than EUR 13 and EUR 33 per ton, respectively. Both opportunities and challenges exist as this relatively new technology moves from laboratory- and pilot-scale production to an industrial scale. Feedstock materials, end products, process conditions, and local markets ultimately determine the feasibility of a given HTC operation. However, there is potential for sludge materials to be converted to sustainable bio-coal fuel in a Finnish context.
Abstract:
Innovations diffuse at different speeds among the members of a social system through various communication channels. Early adopters can be seen as the most influential reference group on which the majority of people base their innovation adoption decisions; thus, early adopters can often accelerate the diffusion of innovations. The purpose of this research is to discover means of diffusing an innovative product in the Finnish market through influential early adopters, with respect to the characteristics of the case product. This purpose is pursued through the following sub-objectives: Who are the potential early adopters of the case product, and why? How should the potential early adopters of the case product be communicated with? What are the expectations, preferences, and experiences of the early adopters of the case product? The case product examined in this research is a new board game called Rock Science, which is considered an incremental innovation bringing board gaming and hard rock music together in a new way. The research was conducted in two parts using both qualitative and quantitative methods. This mixed-method research began with interviews of six music industry experts. The information gathered from the interviews enabled the researcher to compose the questionnaire for the quantitative part of the study. An Internet survey yielded a sample of 97 responses from the targeted population. The key findings of the study suggest that (1) the potential early adopters of the case product are most likely young adults from the capital city area with a great interest in rock music, (2) the early adopters can be reached effectively through credible online sources of information, and (3) the respondents' overall product feedback is highly positive, except for the quality-price ratio of the product.
This research indicates that more effective diffusion of the Rock Science board game in Finland can be achieved through (1) strategic alliances with the music industry and media partnerships, (2) pricing adjustments, (3) use of supporting game formats, and (4) innovative use of various social media channels.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent computations and edges represent data dependencies in the form of queues. These queues are the only communication allowed between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform its computation, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model for model checking: the model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, in the context of design space exploration, the program representation is optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
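The core idea of dynamic dataflow scheduling described in the abstract, nodes firing once sufficient tokens are available on their input queues, can be sketched as follows. This toy scheduler is an illustration only; it does not reflect RVC-CAL actors or the thesis's quasi-static techniques:

```python
# Minimal dynamic dataflow scheduler: a node may fire when each of
# its input queues holds at least `needed` tokens; firing consumes
# tokens and appends the result to the output queue.
from collections import deque

class Node:
    def __init__(self, fn, inputs, output, needed=1):
        self.fn, self.inputs = fn, inputs
        self.output, self.needed = output, needed

    def can_fire(self):
        return all(len(q) >= self.needed for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]
        if self.output is not None:
            self.output.append(self.fn(*args))

def schedule(nodes):
    """Naive dynamic scheduling: sweep the nodes, firing each
    enabled one, until no node can fire (a static schedule would
    precompute this firing order instead)."""
    fired = True
    while fired:
        fired = False
        for node in nodes:
            if node.can_fire():
                node.fire()
                fired = True

# Two-node pipeline: double each token, then add one.
src_q, dbl_q, out = deque([1, 2, 3]), deque(), deque()
double = Node(lambda x: 2 * x, [src_q], dbl_q)
inc = Node(lambda x: x + 1, [dbl_q], out)
schedule([double, inc])
print(list(out))  # → [3, 5, 7]
```

The `schedule` loop above is exactly the run-time overhead that quasi-static scheduling tries to minimise: for a fixed pipeline like this, the firing order could be computed once, offline.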
Abstract:
Seven selection indexes based on the phenotypic value of the individual and the mean performance of its family were assessed for their application in the breeding of self-pollinated plants. There is no clear superiority of one index over another, although some show one or more negative aspects, such as favoring the selection of a top-performing plant from an inferior family to the detriment of an excellent plant from a superior family.
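A combined index of the general kind compared in the study, weighting the individual's phenotypic value against its family mean, can be illustrated as follows; the weights are arbitrary example values, not those of the seven assessed indexes:

```python
# Illustrative combined selection index: a weighted sum of the
# plant's own phenotypic value and its family's mean performance.
def family_index(individual_value, family_mean, w_ind=0.6, w_fam=0.4):
    """I = w_ind * (individual phenotype) + w_fam * (family mean)."""
    return w_ind * individual_value + w_fam * family_mean

# With a non-zero family weight, an excellent plant from a superior
# family outranks a slightly better plant from an inferior family,
# avoiding the negative aspect noted in the abstract.
good_family = family_index(9.0, 8.5)   # excellent plant, superior family
poor_family = family_index(9.5, 5.0)   # top plant, inferior family
print(good_family > poor_family)  # → True
```

Which ranking is desirable depends on the heritability structure; the study's point is precisely that some weightings invert this ordering.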