855 results for Learning Management Systems
Abstract:
The purpose of this bachelor's thesis is to find out how strong a DRM system can be before consumers no longer accept it. DRM systems come in many levels of strength, but they are not suitable as such for all platforms. Digital rights management systems in the games industry follow their own rules, different from those of, for example, the music industry. In addition, there is a certain currently accepted level of DRM from which it can be dangerous to deviate. The study is qualitative in nature and applies the methods of both discourse analysis and content analysis. The research material consists of the texts of various discussion threads, on the basis of which an answer to the research question is sought. The threads were classified by strength according to how strong the DRM-related news item was that gave rise to each thread. Because the material is conversational language that always carries its own meaning in context, the chosen methods are well suited to analyzing it. Based on the results of the analyses of the different threads, it can be said that DRM cannot exceed the currently prevailing level. Even a small deviation from this level can cause great irritation among consumers, even to the point where the company loses revenue. The current level has been reached through various experiments from which consumers have suffered, so they will not willingly accept any level higher than the one prevailing at the moment. If a company sees that the level must be tightened, the tightening must be done gradually and disguised with additional features. Consumers are aware of their rights and are not easily willing to give up any more of them than necessary.
Abstract:
Fisheries play a significant part in the economy of the country, contributing to foreign exchange, food security and employment creation. Lake Victoria contributes over 50% of the total annual fish catch. The purpose of fisheries management is to ensure conservation, protection, proper use, economic efficiency and equitable distribution of the fisheries resources, both for the present and future generations, through sustainable utilization. The earliest fisheries were mainly at the subsistence level. Fishing gear consisted of locally made basket traps, hooks and seine nets of papyrus. Fishing effort began to increase with the introduction of more efficient flax gillnets in 1905. Fisheries management in Uganda started in 1914. Before then, the fishery was under some form of traditional management based on dos and don'ts. History shows that the Baganda had strong spiritual beliefs in respect of "god Mukasa" (god of the Lake), and these indirectly contributed to sustainable management of the lake. If a fisherman neglected to comply with any of the ceremonies related to fishing, he was expected to encounter a bad omen (Rev. Roscoe, 1965). However, with the introduction of nylon gillnets, which could catch more fish, the traditional management regime broke down. By 1955 indigenous fish species like Oreochromis variabilis and Oreochromis esculentus had greatly declined in catches. The decline in catches led to the introduction of poor fishing methods because of competition for fish. The government, in an attempt to regulate the fishing industry, enacted the first Fisheries Ordinance in 1951 and recruited Fisheries Officers to enforce it. The government put in place minimum net mesh-sizes, and Fisheries Officers arrested fishermen without explaining the reason. This led to continued poor fishing practices. The development of government-centred management systems led to increased alienation of resource users and to wilful disregard of specific regulations.
The realisation of the problems faced by the central management system led to the recognition that user groups need to be actively involved in fisheries management if the systems are to be consistent with sustainable fisheries and be legitimate. Community participation in fisheries management under the co-management approach has been adopted on Lake Victoria and other water bodies.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules.
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette-wheel strategy.
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
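The two learning mechanisms described in this abstract can be sketched in a few lines. The sketch below is purely illustrative (names and structure are hypothetical, not the authors' implementation): the BOA part is simplified to independent per-position marginals (the full BOA conditions each node on its parents in the network), and the LCS part implements the three stated steps: constant initial strengths, roulette-wheel selection, and reinforcement of only the rules used in the previous solution.

```python
import random
from collections import Counter

def learn_distributions(good_strings, n_rules):
    # BOA-style "learning by counting": with a known structure and fully
    # observed variables, estimate the probability of each rule at each
    # position from its frequency among the promising rule strings.
    dists = []
    for pos in range(len(good_strings[0])):
        counts = Counter(s[pos] for s in good_strings)
        total = len(good_strings)
        dists.append([counts.get(r, 0) / total for r in range(n_rules)])
    return dists

def sample_string(dists):
    # Generate a new rule string node by node from the learned probabilities.
    return [random.choices(range(len(d)), weights=d)[0] for d in dists]

def init_strengths(n_stages, n_rules, initial=1.0):
    # LCS step 1: every rule at every stage starts with the same strength.
    return [[initial] * n_rules for _ in range(n_stages)]

def roulette_select(strengths):
    # LCS step 2: pick one rule per stage, proportional to its strength.
    return [random.choices(range(len(s)), weights=s)[0] for s in strengths]

def reinforce(strengths, used_rules, reward=0.5):
    # LCS step 3: strengthen only the rules used in the previous solution;
    # unused rules keep their strength unchanged.
    for stage, rule in enumerate(used_rules):
        strengths[stage][rule] += reward
    return strengths
```

For example, if all promising strings end with rule 2, `learn_distributions` assigns probability 1 to rule 2 at the last position, so every sampled string inherits that building block.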
Abstract:
The process of building Data Warehouses (DW) is well known, with well-defined stages, but it is at the same time mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW, which can be configured for different domains through the selection of the web sources and the definition of data processing characteristics. A Business Process Management (BPM) system allows modeling and executing Business Processes (BPs), providing support for the automation of processes. To support the process of building flexible WW we propose two levels of BPs: a configuration process to support the selection of web sources and the definition of schemas and mappings, and a feeding process which takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.
Abstract:
Beef businesses in northern Australia are facing increased pressure to be productive and profitable, with challenges such as climate variability and poor financial performance over the past decade. Declining terms of trade, limited recent gains in on-farm productivity, low profit margins under current management systems and current climatic conditions will leave little capacity for businesses to absorb climate change-induced losses. In order to generate a whole-of-business focus towards management change, the Climate Clever Beef project in the Maranoa-Balonne region of Queensland trialled the use of business analysis with beef producers to improve financial literacy, provide a greater understanding of current business performance and initiate changes to current management practices. Demonstration properties were engaged and a systematic approach was used to assess current business performance, evaluate the impacts of management changes on the business, and trial practices and promote successful outcomes to the wider industry. Focus was concentrated on improving financial literacy skills, understanding the business’s key performance indicators and modifying practices to improve both business productivity and profitability. To best achieve the desired outcomes, several extension models were employed: the ‘group facilitation/empowerment model’, the ‘individual consultant/mentor model’ and the ‘technology development model’. Providing producers with a whole-of-business approach, and using business analysis in conjunction with on-farm trials and various extension methods, proved to be a successful way to encourage producers in the region to adopt new practices in the areas of greatest impact. The areas targeted for development generally led to improvements in animal performance and grazing land management, further improving the prospects for climate resilience.
Abstract:
Internship report presented for the award of the master's degree in Education and Multimedia Communication.
Abstract:
The research investigates the feasibility of using web-based project management systems for dredging. To achieve this objective the research assessed both the positive and negative aspects of using web-based technology for the management of dredging projects. Information gained from a literature review and prior investigations of dredging projects revealed that project performance and the social, political, technical, and business aspects of the organization were important factors in deciding to use web-based systems for the management of dredging projects. These factors were used to develop the research assumptions. An exploratory case study methodology was used to gather the empirical evidence and perform the analysis. An operational prototype of the system was developed to help evaluate developmental and functional requirements, as well as the influence on performance and on the organization. The evidence gathered from three case study projects, and from a survey of 31 experts, was used to validate the assumptions. Baselines representing the assumptions were created as a reference against which to assess the responses and qualitative measures; the deviation of the responses was then used in the analysis. Finally, the conclusions were reached by validating the assumptions against the evidence derived from the analysis. The research findings are as follows: 1. The system would help improve project performance. 2. Resistance to implementation may be experienced if the system is implemented; resistance therefore needs to be investigated further, and more R&D work is needed in order to advance to the final design and implementation. 3. The system may be divided into standalone modules in order to simplify it and facilitate incremental changes. 4. The QA/QC conceptual approach used by this research needs to be redefined during future R&D to satisfy both owners and contractors.
Yin's (2009) Case Study Research: Design and Methods was used to develop the research approach, design, data collection, and analysis. Markus's (1983) resistance theory was used during the definition of the assumptions to predict potential problems with the implementation of web-based project management systems in the dredging industry. Keen's (1981) incremental changes and facilitative approach tactics were used as a basis to classify solutions and determine how to overcome resistance to implementation of the web-based project management system. Davis's (1989) Technology Acceptance Model (TAM) was used to assess the solutions needed to overcome resistance to the implementation of web-based management systems for dredging projects.
Abstract:
Internship report presented for the award of the master's degree in Education and Multimedia Communication.
Abstract:
Climate change and carbon (C) sequestration are a major focus of research in the twenty-first century. Globally, soils store about 300 times the amount of C that is released per annum through the burning of fossil fuels (Schulze and Freibauer 2005). Land clearing and the introduction of agricultural systems have led to rapid declines in soil C reserves. The recent introduction of conservation agricultural practices has not reversed the decline in soil C content, although it has minimized the rate of decline (Baker et al. 2007; Hulugalle and Scott 2008). Lal (2003) estimated the quantum of C pools in the atmosphere, terrestrial ecosystems, and oceans and reported a “missing C” component in the world C budget. Though not yet proven, this could be linked to C losses through runoff and soil erosion (Lal 2005) and a lack of C accounting in inland water bodies (Cole et al. 2007). Land management practices to minimize microbial respiration and soil organic C (SOC) decline, such as minimum tillage or no tillage, were extensively studied in the past, but the soil erosion and runoff studies monitoring those management systems focused on other nutrients, such as nitrogen (N) and phosphorus (P).
Abstract:
Fault tolerance allows a system to remain operational to some degree when some of its components fail. One of the most common fault tolerance mechanisms consists of logging the system state periodically and recovering the system to a consistent state in the event of a failure. This paper describes a general logging-based fault tolerance mechanism, which can be layered over deterministic systems. Our proposal describes how a logging mechanism can recover the underlying system to a consistent state, even if an action or set of actions was interrupted mid-way due to a server crash. We also propose different methods of storing the logging information, and describe how to deploy a fault-tolerant master-slave cluster for information replication. We adapt our model to a previously proposed framework, which provided common relational features, like transactions with atomic, consistent, isolated and durable (ACID) properties, to NoSQL database management systems.
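The abstract describes the mechanism only at a high level. As a purely illustrative sketch (file name, record format, and key-value state are hypothetical, not the authors' design), logging-based recovery over a deterministic system can be reduced to a write-ahead log that is replayed after a crash: because the system is deterministic, replaying the same actions in the same order always reproduces the same consistent state, even if the last action was interrupted mid-way.

```python
import json
import os

LOG = "actions.log"  # hypothetical log file for this sketch

def execute(state, action, value):
    # Write the intent to the log *before* applying it, and force it to
    # disk, so an interrupted action can be redone on recovery.
    with open(LOG, "a") as f:
        f.write(json.dumps({"action": action, "value": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())
    state[action] = value  # apply the action to the deterministic system
    return state

def recover():
    # Rebuild a consistent state by replaying the log from the beginning.
    # Determinism guarantees the replay converges to the pre-crash state.
    state = {}
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                entry = json.loads(line)
                state[entry["action"]] = entry["value"]
    return state
```

A real implementation would also truncate the log at periodic checkpoints; this sketch only shows the log-then-apply ordering that makes crash recovery possible.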
Abstract:
Between 2009 and 2011, a joint academia-industry effort took place to integrate the Second Life and OpenSimulator platforms into a corporate e-learning provider’s learning management platform. The process involved managers and lead developers at the provider and an academic engineering research team. We performed content analysis on the documents produced in this process, seeking data on the corporate perspective of requirements for virtual world platforms to be usable in everyday practice. In this paper, we present the requirements found in the documents, and detail how they emerged and evolved throughout the process.
Abstract:
This dissertation investigates customer behavior modeling in service outsourcing and revenue management in the service sector (i.e., the airline and hotel industries). In particular, it focuses on a common theme of improving firms’ strategic decisions through the understanding of customer preferences. Decisions concerning degrees of outsourcing, such as firms’ capacity choices, are important to performance outcomes. These choices are especially important in high-customer-contact services (e.g., the airline industry) because of the characteristics of services: simultaneity of consumption and production, and intangibility and perishability of the offering. Essay 1 estimates how outsourcing affects customer choices and market share in the airline industry, and consequently the revenue implications of outsourcing. However, outsourcing decisions are typically endogenous: a firm may choose whether or not to outsource based on what it expects to be the best outcome. Essay 2 contributes to the literature by proposing a structural model which can capture a firm’s profit-maximizing decision-making behavior in a market. This makes it possible to predict the consequences (i.e., performance outcomes) of future strategic moves. Another emerging area in service operations management is revenue management. Choice-based revenue systems incorporate discrete choice models into traditional revenue management algorithms. To successfully implement a choice-based revenue system, it is necessary to estimate customer preferences as a valid input to the optimization algorithms. The third essay investigates how to estimate customer preferences when part of the market is consistently unobserved. This issue is especially prominent in choice-based revenue management systems: normally a firm observes only its own purchases, while customers who purchase from competitors or make no purchase are unobserved.
Most current estimation procedures depend on unrealistic assumptions about customer arrivals. This study proposes a new estimation methodology which does not require any prior knowledge of the customer arrival process and allows for arbitrary demand distributions. Compared with previous methods, this model performs better when the true demand is highly variable.
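The abstract does not specify which discrete choice model feeds the revenue optimization, but the multinomial logit with an outside no-purchase option is a common illustration of what such a system takes as input: with utility u_j for product j in the offered set S, the purchase probability is exp(u_j) / (1 + Σ_{k∈S} exp(u_k)), where the 1 is the no-purchase alternative whose utility is normalized to zero. A minimal sketch (illustrative only):

```python
import math

def mnl_probs(utilities):
    # Multinomial logit purchase probabilities over the offered products,
    # with an outside (no-purchase) option whose utility is fixed at 0.
    denom = 1.0 + sum(math.exp(u) for u in utilities)
    probs = [math.exp(u) / denom for u in utilities]
    no_purchase = 1.0 / denom
    return probs, no_purchase
```

By construction the product probabilities and the no-purchase probability sum to one, which is exactly why the unobserved no-purchase customers discussed in the third essay matter for estimation: they occupy probability mass the firm never sees in its own purchase records.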
Abstract:
Nowadays, power grids are critical infrastructures on which everything else relies, and their correct behavior is of the highest priority. New smart devices are being deployed to manage and control power grids more efficiently and avoid instability. However, the deployment of smart devices such as Phasor Measurement Units (PMUs) and Phasor Data Concentrators (PDCs) opens new opportunities for cyber attackers to exploit network vulnerabilities. If a PDC is compromised, all data coming from PMUs to that PDC is lost, reducing network observability. Our approach to this problem is to develop an Intrusion Detection System (IDS) in a Software-Defined Network (SDN), allowing the IDS to detect compromised devices and use that information as an input for a self-healing SDN controller, which redirects the data of the PMUs to a new, uncompromised PDC, maintaining the maximum possible network observability at every moment. The first steps were to implement an existing self-healing SDN controller and to assess the intrinsic vulnerabilities of Wide Area Measurement Systems (WAMS) and SCADA networks. During this research, we successfully implemented self-healing in an example network with an SDN controller based on the Ryu controller. We also integrated the Ryu controller with Snort and developed Snort rules specific to SCADA and WAMS systems and protocols, which protect the vulnerabilities of these networks. The integration of the IDS and the SDN controller was also successful.
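The self-healing behavior described here — dropping a compromised PDC and re-homing its PMUs to preserve observability — can be reduced to a small reassignment step. The sketch below is purely illustrative (the actual work uses the Ryu controller and Snort; the function and device names are hypothetical): on an IDS alert for a PDC, each PMU feeding it is reassigned to the least-loaded remaining healthy PDC.

```python
def reroute_pmus(assignment, healthy_pdcs, compromised_pdc):
    # assignment maps each PMU to the PDC it currently streams to.
    # On an IDS alert for compromised_pdc, move each of its PMUs to the
    # least-loaded healthy PDC, keeping observability as high as possible.
    loads = {p: sum(1 for d in assignment.values() if d == p)
             for p in healthy_pdcs}
    for pmu, pdc in assignment.items():
        if pdc == compromised_pdc:
            target = min(loads, key=loads.get)  # least-loaded healthy PDC
            assignment[pmu] = target
            loads[target] += 1
    return assignment
```

In a real deployment this reassignment would be expressed as new flow rules pushed by the SDN controller; the sketch only shows the placement decision that the controller would act on.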
Abstract:
Regulated Transformer Rectifier Units contain several power electronic boards to facilitate AC to DC power conversion. As these units become smaller, the number of devices on each board increases while their distance from each other decreases, making active cooling essential to maintaining reliable operation. Although it is widely accepted that liquid is a far superior heat transfer medium to air, the latter is still capable of yielding low device operating temperatures with proper heat sink and airflow design. The purpose of this study is to describe the models and methods used to design and build the thermal management system for one of the power electronic boards in a compact, high power regulated transformer rectifier unit. Maximum device temperature, available pressure drop and manufacturability were assessed when selecting the final design for testing. Once constructed, the thermal management system’s performance was experimentally verified at three different power levels.