Abstract:
Innovations diffuse at different speeds among the members of a social system through various communication channels. Early adopters can be seen as the most influential reference group on which the majority of people base their innovation adoption decisions, and they can therefore often accelerate the diffusion of innovations. The purpose of this research is to discover means of diffusing an innovative product in the Finnish market through the influential early adopters, with respect to the characteristics of the case product. This purpose is pursued through the following sub-objectives: Who are the potential early adopters for the case product, and why? How should the potential early adopters of the case product be communicated with? What are the expectations, preferences, and experiences of the early adopters of the case product? The case product examined in this research is a new board game called Rock Science, which is considered an incremental innovation that brings board gaming and hard rock music together in a new way. The research was conducted in two parts using both qualitative and quantitative research methods. This mixed-method research began with expert interviews of six music industry experts. The information gathered from the interviews enabled the researcher to compose the questionnaire for the quantitative part of the study. An Internet survey sent to the targeted population resulted in a sample of 97 responses. The key findings of the study suggest that (1) the potential early adopters for the case product are more likely to be young adults from the capital city area with a great interest in rock music, (2) the early adopters can be reached effectively through credible online sources of information, and (3) the respondents' overall product feedback is highly positive, except for the quality-price ratio of the product. This research indicates that more effective diffusion of the Rock Science board game in Finland can be reached through (1) strategic alliances with the music industry and media partnerships, (2) pricing adjustments, (3) use of supporting game formats, and (4) innovative use of various social media channels.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language and in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
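The firing-rule semantics described in this abstract can be made concrete with a small sketch. The following Python fragment is a minimal illustration only, not the thesis's RVC-CAL tooling or compiler infrastructure: two hypothetical actors communicate solely through FIFO queues, each fires when its firing rule is satisfied, and a fully dynamic scheduler polls every firing rule on every round. All actor names, token rates, and values are invented for illustration.

from collections import deque

# Minimal dataflow sketch (illustrative only; names are hypothetical,
# not from the thesis or RVC-CAL). Nodes communicate solely through
# FIFO queues; a node may fire once its firing rule is satisfied.

class Actor:
    def __init__(self, name, inputs, outputs, consume, fire):
        self.name = name
        self.inputs = inputs      # list of input queues (deques)
        self.outputs = outputs    # list of output queues (deques)
        self.consume = consume    # tokens required per input to fire
        self.fire = fire          # function: list of tokens -> list of tokens

    def can_fire(self):
        # Firing rule: enough tokens on every input queue.
        return all(len(q) >= n for q, n in zip(self.inputs, self.consume))

    def step(self):
        tokens = [q.popleft() for q, n in zip(self.inputs, self.consume)
                  for _ in range(n)]
        for q, value in zip(self.outputs, self.fire(tokens)):
            q.append(value)

# A two-actor pipeline: scale -> accumulate.
source = deque([1, 2, 3, 4])
a_to_b = deque()
result = deque()

scale = Actor("scale", [source], [a_to_b], [1], lambda t: [2 * t[0]])

accum_state = {"sum": 0}
def accumulate(tokens):
    accum_state["sum"] += tokens[0]
    return [accum_state["sum"]]

acc = Actor("accumulate", [a_to_b], [result], [1], accumulate)

# Fully dynamic scheduling: poll every actor's firing rule each round.
actors = [scale, acc]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.step()

print(list(result))  # running sums of the doubled inputs: [2, 6, 12, 20]

Because both actors here consume and produce exactly one token per firing, the polling loop could be replaced by the precomputed static schedule "scale, accumulate" repeated four times. Reducing run-time decisions to only the genuinely data-dependent ones is the kind of saving that the quasi-static scheduling described in the abstract aims at.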
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate that there is a potential for savings of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians, and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put both into understanding the minimum features needed to satisfy the scheduling requirements in the industry and into the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It became clear that there is no such system on the marketplace today, and that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
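As a rough illustration of the kind of what-if analysis mentioned above (a minimal sketch with invented figures, not the actual product or its mill model), one can check whether a planned maintenance outage leaves enough capacity to meet committed demand over a scheduling horizon:

# Hypothetical what-if sketch (not the case company's tool): evaluate how
# a planned maintenance window affects producible volume against demand.

HOURS_PER_DAY = 24

def producible_tonnes(days, rate_tph, maintenance_hours):
    """Capacity over a horizon, net of a planned maintenance outage."""
    available_hours = days * HOURS_PER_DAY - maintenance_hours
    return available_hours * rate_tph

def what_if(days, rate_tph, demand_tonnes, maintenance_hours):
    capacity = producible_tonnes(days, rate_tph, maintenance_hours)
    slack = capacity - demand_tonnes
    return slack >= 0, slack

# Example: 7-day horizon, 50 t/h production rate, 7,800 t committed demand.
for outage in (0, 12, 24, 36):
    ok, slack = what_if(days=7, rate_tph=50, demand_tonnes=7800,
                        maintenance_hours=outage)
    print(f"{outage:>2} h outage: {'feasible' if ok else 'infeasible'}, "
          f"slack {slack:+.0f} t")

With these numbers, a 12-hour outage is exactly feasible while a 24-hour outage leaves a 600 t shortfall, which is the sort of timely-feasibility answer the scheduler described above is meant to provide.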
Abstract:
Operational excellence of individual tramp shipping companies is important in today's market, where competition is intense, freight revenues are modest, capital costs are high due to the global financial crisis, and a tighter regulatory framework is generating additional costs and challenges for the industry. This thesis concentrates on tramp shipping, where a tramp operator in the form of an individual case company, specialized in short-sea shipping activities in the Baltic Sea region, is searching for ways to map its current fleet operations and better understand potential ways to improve its overall routing and scheduling decisions. The research problem is related to tramp fleet planning in which several cargoes, here systematically referred to as part cargoes, are carried on board at the same time. The purpose is to determine the pivotal dimensions and characteristics of these part cargo operations in tramp shipping, and to offer both the individual case company and the wider research community a better understanding of the potential risks and benefits related to the utilization of part cargo operations. A mixed-method research approach is utilized, as the objectives are related to complex, real-life business practices in the field of supply chain management and, more specifically, maritime logistics. A quantitative analysis of different voyage scenarios is executed, including alternative voyage legs with varying cost structures and customer involvement. An online questionnaire designed and prepared by the case company's decision group provides the desired data on the predominant attitudes and views of the most important industrial customers regarding part cargo-related operations and the potential future utilization of this business model. The results gained from these quantitative methods are complemented with qualitative data collection tools, along with suitable secondary data sources. Based on the results and a logical analysis of the different data sources, a framework for characterizing the different aspects of part cargo operations is developed, utilizing both existing research and empirical investigation of the phenomenon. In conclusion, part cargoes can be part of viable fleet operations and can even increase fleet flexibility to a certain extent. Naturally, several hindrances to this development are recognized as well, such as potential issues with information gathering and sharing, inefficient port activities, and increased transit times.
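To illustrate the kind of voyage-scenario comparison performed in the quantitative analysis (a hypothetical sketch with invented figures, not the case company's data or model), consider a direct voyage versus the same voyage extended with a part cargo leg:

# Illustrative voyage-scenario comparison (hypothetical figures, not the
# case company's data): does adding a part cargo leg pay off?

def voyage_profit(freight_revenue, sea_days, port_days,
                  day_cost, port_call_cost, n_port_calls):
    duration = sea_days + port_days
    costs = duration * day_cost + n_port_calls * port_call_cost
    return freight_revenue - costs, duration

# Scenario A: single full cargo, direct voyage.
profit_a, days_a = voyage_profit(freight_revenue=180_000, sea_days=4,
                                 port_days=2, day_cost=12_000,
                                 port_call_cost=15_000, n_port_calls=2)

# Scenario B: same voyage plus a part cargo picked up on a detour
# (extra revenue, but an extra port call and a longer transit).
profit_b, days_b = voyage_profit(freight_revenue=180_000 + 60_000,
                                 sea_days=5, port_days=3, day_cost=12_000,
                                 port_call_cost=15_000, n_port_calls=3)

for name, p, d in (("direct", profit_a, days_a),
                   ("part cargo", profit_b, days_b)):
    print(f"{name:>10}: profit {p:,} EUR over {d} days "
          f"({p // d:,} EUR/day)")

With these figures the part cargo voyage earns more in total but less per day, which mirrors the trade-off identified above: extra revenue weighed against increased transit times and additional port activity.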