857 results for clustering and QoS-aware routing
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, thus having the ability to finish a schedule using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work, which used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements this explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate the nurse to low-cost shifts). 
At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus, learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) will be generated in this way; 5. Create a new population P(t+1) by replacing some rule strings from P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2. Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. 
If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we can assign specific rules to the corresponding nurses beforehand, and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,
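The algorithm steps above can be sketched as follows. This is a minimal illustration only, assuming a chain-structured Bayesian network (each nurse's rule conditioned on the previous nurse's rule), a toy fitness function rewarding the 'switch after 2-3 nurses' pattern from the five-nurse example, and reduced parameter values; it is not the paper's actual implementation:

```python
import random

N_NURSES = 5
RULES = (1, 2)        # toy rules: 1 = random allocation, 2 = low-cost shifts
POP_SIZE = 30
GENERATIONS = 200     # the paper runs 2000 generations; reduced for the toy

def fitness(string):
    # Toy fitness: reward exactly one switch between rules along the string,
    # mimicking 'rule 1 for the first 2-3 nurses, then rule 2' (or vice versa).
    switches = sum(1 for a, b in zip(string, string[1:]) if a != b)
    return 1.0 if switches == 1 else 0.1

def select_promising(pop):
    # Step 2: roulette-wheel selection of promising rule strings.
    weights = [fitness(s) for s in pop]
    return random.choices(pop, weights=weights, k=len(pop) // 2)

def count_probabilities(promising):
    # Step 3: learning amounts to 'counting' - estimate
    # P(rule at nurse i | rule at nurse i-1) with a Laplace prior.
    counts = [{(p, r): 1 for p in RULES for r in RULES} for _ in range(N_NURSES)]
    for s in promising:
        for i in range(1, N_NURSES):
            counts[i][(s[i - 1], s[i])] += 1
    first = [sum(s[0] == r for s in promising) + 1 for r in RULES]
    return counts, first

def sample_string(counts, first):
    # Step 4: assign each nurse by roulette wheel on conditional probabilities.
    string = [random.choices(RULES, weights=first)[0]]
    for i in range(1, N_NURSES):
        w = [counts[i][(string[-1], r)] for r in RULES]
        string.append(random.choices(RULES, weights=w)[0])
    return tuple(string)

random.seed(0)
# Step 1: random initial population of rule strings.
pop = [tuple(random.choice(RULES) for _ in range(N_NURSES)) for _ in range(POP_SIZE)]
for t in range(GENERATIONS):
    promising = select_promising(pop)
    counts, first = count_probabilities(promising)
    offspring = [sample_string(counts, first) for _ in range(POP_SIZE)]
    # Step 5: replace the weakest strings, keeping the population size fixed.
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP_SIZE]

best = max(pop, key=fitness)
print("best rule string:", best)
```

Under this toy fitness the search quickly settles on strings with a single rule switch, which is exactly the pattern the abstract describes the network learning.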
Abstract:
Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance of each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialisation step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. 
The next step reinforces the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will act as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
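The three LCS steps described above (initialisation, roulette-wheel selection, reinforcement) can be sketched as follows; the strength constant, reward value and number of stages are illustrative assumptions, since the abstract does not give concrete values:

```python
import random

RULES = ["Random", "Cheapest Cost", "Best Cover", "Balance of Cost and Cover"]
N_STAGES = 5              # e.g. five nurses scheduled one after another
INITIAL_STRENGTH = 10.0   # assumed constant; the abstract gives no value
REWARD = 1.0              # assumed reinforcement amount

# Initialisation step: every rule at every stage starts with the same strength.
strength = [{r: INITIAL_STRENGTH for r in RULES} for _ in range(N_STAGES)]

def select_rules():
    # Roulette Wheel strategy: pick one rule per stage with probability
    # proportional to its current strength.
    return [random.choices(RULES, weights=[strength[i][r] for r in RULES])[0]
            for i in range(N_STAGES)]

def reinforce(used_rules, improved):
    # Reinforcement step: strengthen the rules used in the previous solution
    # when it proved useful; unused rules keep their strength unchanged.
    if improved:
        for stage, rule in enumerate(used_rules):
            strength[stage][rule] += REWARD

random.seed(1)
used = select_rules()
reinforce(used, improved=True)
print(used)
```

Repeated over many solutions, this loop biases the roulette wheel towards rules that kept appearing in improving schedules, which is the 'constantly assessed' strength the abstract refers to.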
Abstract:
During the last decades we have witnessed what is called the "information explosion". With the advent of new technologies and new contexts, the volume, velocity and variety of data have increased exponentially, becoming what is known today as big data. Among the organisations generating such data are telecommunications operators, which gather, using network monitoring equipment, millions of network event records, the Call Detail Records (CDRs) and the Event Detail Records (EDRs), commonly known as xDRs. These records are stored and later processed to compute network performance and quality of service metrics. With the ever-increasing number of collected xDRs, the volume that needs to be stored has grown exponentially, so that current solutions based on relational databases are no longer suitable. To tackle this problem, the relational data store can be replaced by the Hadoop Distributed File System (HDFS). However, HDFS is simply a distributed file system and does not support any aspect of the relational paradigm. To overcome this difficulty, this paper presents a framework that enables systems that currently insert data into relational databases to keep doing so transparently when migrating to Hadoop. As a proof of concept, the developed platform was integrated with Altaia, a performance and QoS management system for telecommunications networks and services.
Abstract:
This study aims to assess the level of command and the influence of the neuromarketing construct among professionals at advertising agencies in Brazil. Concepts related to this new approach are little publicised, and little analysis has been carried out in this area. The research is therefore of a qualitative and exploratory nature, using books and articles related to marketing, neuroscience and psychology as primary sources, as well as secondary sources. In-depth interviews were conducted at the main advertising agencies in Brazil; the interviewees were the managers responsible for planning, and a content analysis was performed afterwards. Advances in brain science have enabled technological innovations aimed primarily at the knowledge and unconscious experiences of consumers, which drive decision making and consumer behavior. These issues relate to neuromarketing, which in turn uses techniques such as fMRI, PET and fDOT. These scan the consumer's brain and produce images of neuronal structures and functioning while activities such as visualising brands, images or products, or watching videos and commercials, are performed. It is observed that the agencies are constantly in search of new technologies and are aware of the limitations of the current research instruments. On the other hand, they are not fully familiar with concepts related to neuromarketing. Regarding neuroimaging techniques, the research points to complete unawareness, although some agencies envisage positive impacts from using these techniques to evaluate films and to get to know the consumer better. Neuroimaging is also perceived as one technique among others, but its application is not yet a reality; there are barriers in the market and within the agencies themselves. 
These barriers, together with some reservations and the scarce knowledge of neuromarketing, make it impossible to put into practice in the advertising market. It is also observed that even with greater use of neuromarketing, there would be no meaningful changes in the functioning and structure of these agencies; the use of neuroimaging machines should take place in research institutes and the research centers of big companies. Results show that command of the neuromarketing construct in Brazilian advertising agencies is only theoretical: little is known of the subject and of the neurological studies, and absolutely nothing of neuroimaging techniques.
Abstract:
Background: Despite a number of programs aimed at the transfer of reproductive health information, adolescents in Zimbabwe still face unprecedented reproductive challenges. Objectives: The study sought to explore adolescent girls' knowledge of their sexual and reproductive health and the factors that influence their sexual behaviors, and to determine the extent to which adolescents had access to sexual and reproductive health information. Methods: The case study methodology was used. The interpretive paradigm served as the methodological theory and Grunig's model of excellence in communication as the substantive theory. Data were obtained through focus group discussions and in-depth interviews. Results: Although adolescents knew the different types of sexually transmitted diseases and were aware of the consequences of engaging in risky sexual behaviors, they still engaged in behaviors with potentially serious consequences. The study established that adolescents did not have adequate access to sexual and reproductive health information; sexual issues were not adequately addressed either at school or at home. Conclusion: Adolescents lack adequate access to reproductive health information, and there is a need for effective communication programs that help audiences understand the communicated messages and help communicators understand their audiences.
Abstract:
Paper presented at PAEE/ALE’2016, 8th International Symposium on Project Approaches in Engineering Education (PAEE) and 14th Active Learning in Engineering Education Workshop (ALE)
Abstract:
The brand/consumer relationship is important and, nowadays, with technological evolution and the spread of social networks, its importance has risen to another level, since the online environment allows consumers to be better informed and more aware of the many options available in the market. Brands therefore take advantage of social networks, such as Facebook, to make themselves noticed, to show their more human side and thus establish communication with users in order to create or maintain a closer relationship with them. However, the freedom of expression that exists on social networks is not always favourable to brands, which leads them to apply gatekeeping criteria to filter some content. The aim of this non-experimental, exploratory, qualitative study is to determine which gatekeeping criteria are most likely to be used by brands to prevent the strong relationship they hold with consumers from being compromised and tainted by hate and, consequently, to understand how their Facebook pages are managed. The object of study comprises two companies in the electronics sector that are direct competitors, have very dedicated consumers, and were selected through an intentional non-random sampling method.
Abstract:
The experience of the loss of an absolute community, which is foundational for pedagogy as a modern theoretical project, arises as an experience of contingency and resistance (Schleiermacher). I aim to show that a conceptualisation of education has developed which, although it knows of this contingency and is conscious of this resistance, at the same time attaches no systematic significance to these phenomena. I argue that these experiences are neutralised because the 'perfect community', as Schleiermacher presents it, remains the guiding point of education, and this 'perfect community' is in fact regarded, in a certain sense, as immanent. The (lost) absolute (in the sense of 'without relation') returns in a double and symmetrical figure of this immanence: the 'educable' individual and the 'self-enlightening public'. I propose an alternative interpretation in which the loss of community is understood as a loss of immanence. This loss is constitutive of community in precisely another sense. The community is not 'perfect'; the principle of community is incompletion and interruption. Contingency and resistance, as expressions of this community, are to be regarded not merely as 'problems' for education but as constitutive of it. The task of education is not to neutralise contingency and resistance, but to keep the community 'open'. (DIPF/Orig.)
Abstract:
The rolling stock circulation depends on two different problems, the rolling stock assignment and the train routing problems, which until now have been solved sequentially. We propose a new approach to obtain better and more robust circulations of the rolling stock train units, solving the rolling stock assignment while accounting for the train routing problem. Here robustness means that difficult shunting operations are selectively penalized, and that propagated delays and the need for human resources are minimized. This integrated approach yields a very large model, so we solve it using Benders decomposition, where the main decision is the rolling stock assignment and the train routing appears at the second level. For computational reasons we propose a heuristic based on Benders decomposition. Computational experiments show how the current solution operated by RENFE (the main Spanish train operator) can be improved: more robust and efficient solutions are obtained.
Abstract:
Over the last few years, football has entered a period of accelerated access to large amounts of match analysis data. Social network methods have been adopted to reveal the structure and organization of the web of interactions, such as the players' passing distribution tendencies. In this study we investigated the influence of ball possession characteristics on the competitive success of Spanish La Liga teams. The sample was composed of OPTA passing distribution raw data (n=269,055 passes) obtained from 380 matches involving all 20 teams of the 2012/2013 season. We then generated 760 adjacency matrices and their corresponding social networks using the NodeXL software. For each network we calculated three team performance measures to evaluate ball possession tendencies: graph density, average clustering and passing intensity. Three levels of competitive success were determined using two-step cluster analysis based on two input variables: the total points scored by each team and the ratio of goals scored to goals conceded. Our analyses revealed significant differences between competitive performance levels on all three team performance measures (p < .001). Bottom-ranked teams had fewer connected players (graph density) and triangulations (average clustering) than intermediate and top-ranked teams. However, all three clusters diverged in terms of passing intensity, with top-ranked teams making more passes per unit of possession time than intermediate and bottom-ranked teams. Finally, similarities and dissimilarities in the signatures of play between the 20 teams were displayed using Cohen's effect size. In sum, the findings suggest that competitive performance was influenced by the density and connectivity of the teams, mainly through the way teams use their possession time to give intensity to their game.
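The three team performance measures can be made concrete on a toy passing network. The pass counts and possession time below are invented for illustration (not OPTA data), and directed passes are collapsed to undirected links before computing density and clustering:

```python
from itertools import combinations

# Hypothetical pass counts between five players over one possession spell.
passes = {(1, 2): 12, (2, 1): 9, (2, 3): 7, (3, 4): 5, (4, 2): 6, (4, 5): 3}
possession_seconds = 180.0

players = {p for edge in passes for p in edge}
# A pair of players is 'connected' if either passed to the other.
edges = {frozenset(e) for e in passes}

n = len(players)
# Graph density: fraction of possible player-pair links actually used.
graph_density = len(edges) / (n * (n - 1) / 2)

def clustering(v):
    # Local clustering: fraction of a player's neighbour pairs that are
    # themselves connected (i.e. how many passing triangles v sits in).
    neigh = {next(iter(e - {v})) for e in edges if v in e}
    k = len(neigh)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(neigh, 2) if frozenset((a, b)) in edges)
    return links / (k * (k - 1) / 2)

average_clustering = sum(clustering(v) for v in players) / n
# Passing intensity: passes per unit of possession time.
passing_intensity = sum(passes.values()) / possession_seconds

print(graph_density, average_clustering, passing_intensity)
```

On this toy network, half of the possible links are used (density 0.5), one third of neighbour pairs close into triangles, and the team plays 42 passes in 180 seconds of possession.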
Abstract:
Doctorate in Forest Engineering and Natural Resources - Instituto Superior de Agronomia - UL
Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Ciências Biológicas, Departamento de Biologia Molecular, 2016.
Abstract:
Master's dissertation—Universidade de Brasília, Departamento de Administração, Programa de Pós-graduação em Administração, 2016.
Abstract:
Organisations and their environments are complex systems. Such systems are difficult to understand and predict. Even so, prediction is a fundamental task for business management and for decision making, which always entails risk. Classical forecasting methods (among them linear regression, the autoregressive moving average and exponential smoothing) rely on assumptions such as linearity and stability in order to be mathematically and computationally tractable. By various means, however, the limitations of these methods have been demonstrated. In recent decades, new forecasting methods have therefore emerged that aim to embrace the complexity of organisational systems and their environments rather than avoid it. Among them, the most promising are the bio-inspired forecasting methods (e.g. neural networks, genetic/evolutionary algorithms and artificial immune systems). This article aims to establish a state of the art of the current and potential applications of bio-inspired forecasting methods in management.
Abstract:
Opportunistic routing (OR) takes advantage of the broadcast nature and spatial diversity of wireless transmission to improve the performance of wireless ad-hoc networks. Instead of using a predetermined path to send packets, OR postpones the choice of the next hop to the receiver side, and lets the multiple receivers of a packet coordinate and decide which of them will be the forwarder. Existing OR protocols choose the next-hop forwarder based on a predefined candidate list calculated using a single network metric. In this paper, we propose TLG, a Topology and Link quality-aware Geographical opportunistic routing protocol. TLG uses multiple network metrics, such as network topology, link quality and geographic location, to implement the coordination mechanism of OR. We compare TLG with well-known existing solutions, and simulation results show that TLG outperforms them in terms of both QoS and QoE metrics.
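A minimal sketch of how a candidate list might be ranked from multiple network metrics, in the spirit of TLG. The positions, link delivery ratios, node degrees, weights and normalisations below are illustrative assumptions; the abstract names the metric families but not the actual scoring formula:

```python
import math

DEST = (100.0, 100.0)   # destination coordinates (hypothetical)
SENDER = (0.0, 0.0)     # current forwarder's coordinates (hypothetical)

# Hypothetical neighbour table: id, position, link delivery ratio, node degree.
neighbours = [
    ("a", (30.0, 25.0), 0.90, 4),
    ("b", (55.0, 50.0), 0.60, 6),
    ("c", (10.0, 80.0), 0.95, 2),
]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def score(pos, ldr, degree, w=(0.5, 0.3, 0.2)):
    # Combine the three metric families with assumed weights:
    # geographic progress towards the destination, normalised to [0, 1] ...
    progress = (dist(SENDER, DEST) - dist(pos, DEST)) / dist(SENDER, DEST)
    # ... link quality (delivery ratio) and topology (relative connectivity).
    topology = degree / max(d for _, _, _, d in neighbours)
    return w[0] * progress + w[1] * ldr + w[2] * topology

# Candidate list ordered by score: the best-ranked receiver of the packet
# acts as the forwarder; the others defer.
candidates = sorted(neighbours, key=lambda n: score(n[1], n[2], n[3]), reverse=True)
print([c[0] for c in candidates])
```

With these toy values, neighbour "b" wins on geographic progress and connectivity despite its weaker link, which illustrates why combining metrics can rank candidates differently from any single metric alone.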