947 results for CBN grinding wheel


Relevance: 10.00%

Abstract:

The highly dynamic nature of some sandy shores, with continuous morphological change, requires efficient and accurate methodological strategies for coastal hazard assessment and morphodynamic characterisation. During past decades, coastal monitoring programmes were generally based on photogrammetry or classical geodetic techniques. With the advent of new space-based and airborne geodetic techniques, new methodologies were introduced into coastal monitoring programmes. This paper describes the development of a monitoring prototype based on the global positioning system (GPS). The prototype has a multi-antenna GPS mounted on a fast surveying platform, a land vehicle suited to driving on sand (a four-wheel quad). The system was conceived to survey a network of shore profiles along sandy shore stretches (the subaerial beach) extending for several kilometres, from which high-precision digital elevation models can be generated. An analysis of the accuracy and precision of several differential GPS kinematic methodologies is presented. The development of an adequate survey methodology is the first step in morphodynamic shore characterisation and in coastal hazard assessment. The sampling method and the computational interpolation procedures are important steps in producing reliable three-dimensional surface maps that are as close to reality as possible. The quality of several interpolation methods used to generate grids was tested in areas with data gaps. The results allow us to conclude that, with the developed survey methodology, sandy shore stretches can be surveyed at spatial scales of kilometres with a vertical accuracy better than 0.10 m in the final digital elevation models.
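The interpolation-quality test described above (comparing interpolators in areas with data gaps) can be sketched as follows. This is a minimal illustration, not the authors' actual procedure: the synthetic surface, the square "gap", and the two interpolators (inverse-distance weighting and nearest-neighbour) are assumptions made for the example.

```python
import math
import random

def surface(x, y):
    # Known synthetic surface used as ground truth (a smooth mound).
    return 2.0 * math.exp(-((x - 5)**2 + (y - 5)**2) / 20.0)

def idw(px, py, pts, power=2):
    # Inverse-distance-weighted estimate at (px, py): a convex combination
    # of the data values, so the result is bounded by min/max of the data.
    num = den = 0.0
    for x, y, z in pts:
        d = math.hypot(px - x, py - y)
        if d < 1e-12:
            return z  # query point coincides with a data point
        w = d ** -power
        num += w * z
        den += w
    return num / den

def nearest(px, py, pts):
    # Nearest-neighbour estimate: value of the closest data point.
    return min(pts, key=lambda p: math.hypot(px - p[0], py - p[1]))[2]

random.seed(1)
pts = [(x, y, surface(x, y))
       for x, y in ((random.uniform(0, 10), random.uniform(0, 10))
                    for _ in range(400))]
# Simulate a data gap: remove all survey points inside a square hole.
kept = [p for p in pts if not (4 < p[0] < 6 and 4 < p[1] < 6)]
# Evaluate both interpolators at check points inside the gap, against truth.
checks = [(4.5, 4.5), (5.0, 5.0), (5.5, 5.5), (4.5, 5.5)]
for name, f in (("IDW", idw), ("nearest", nearest)):
    rmse = math.sqrt(sum((f(x, y, kept) - surface(x, y))**2
                         for x, y in checks) / len(checks))
    print(f"{name}: RMSE in gap = {rmse:.3f} m")
```

The same leave-out-and-compare scheme generalises to any gridding method: withhold points, re-interpolate, and score against the withheld truth.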


Software architecture is a high-level description of a software-intensive system that gives architects better intellectual control over the complete system. It also serves as a communication vehicle among the various system stakeholders. Variability in software-intensive systems is the ability of a software artefact (e.g., a system, subsystem, or component) to be extended, customised, or configured for deployment in a specific context. Although variability in software architecture is recognised as a challenge in multiple domains, there has been no formal consensus on how variability should be captured or represented. In this research, we addressed the problem of representing variability in software architecture through a three-phase approach. First, we examined the existing literature using the Systematic Literature Review (SLR) methodology, which helped us identify the gaps and challenges in the current body of knowledge. Equipped with the findings from the SLR, we formulated a set of design principles and used them to introduce variability management capabilities into an existing Architecture Description Language (ADL). The chosen ADL, ALI, was developed within our research group, to which we have had complete access. Finally, we evaluated the new version of the ADL using two distinct case studies: one from the information systems domain, an Asset Management System (AMS), and another from the embedded systems domain, a Wheel Brake System (WBS). This thesis presents the main findings from the three phases of the research, including a comprehensive study of the state of the art, the complete specification of an ADL focused on managing variability, and the lessons learnt from evaluating two distinct real-life case studies.


Background: Correctly diagnosing the clinical type of basal cell carcinoma (BCC) is crucial for therapeutic management. A systematic description of the variability of all reported BCC dermoscopic features according to clinical type and anatomic location is lacking. Objectives: To describe the dermoscopic variability of BCC according to clinical type and anatomic location, and to test the hypothesis of a clinical/dermoscopic continuum across superficial BCCs (sBCCs) with increasing palpability. Methods: Clinical/dermoscopic images of nodular BCCs (nBCCs) and sBCCs with different degrees of palpability were retrospectively evaluated for the presence of dermoscopic criteria, including degree of pigmentation, BCC-associated patterns, diverse vascular patterns, melanocytic patterns and polarized-light patterns. Results: We examined 501 histopathologically proven BCCs (66.9% sBCCs; 33.1% nBCCs), located mainly on the trunk (46.7%; mostly sBCCs) and face (30.5%; mostly nBCCs). Short fine telangiectasias, leaf-like areas, spoke-wheel areas, small erosions and concentric structures were significantly associated with sBCCs, whereas arborizing telangiectasias, blue-white veil-like structures, white shiny areas and the rainbow pattern were associated with nBCCs. Short fine telangiectasias, spoke-wheel areas and small erosions were independently associated with trunk location, whereas arborizing telangiectasias were associated with facial location. Scalp BCCs had significantly more pigmentation and melanocytic criteria than BCCs located elsewhere. Multiple clinical/dermoscopic parameters displayed a significant linear trend across increasingly palpable sBCCs. Conclusions: Particular dermoscopic criteria are independently associated with the clinical type and anatomic location of BCC. Heavily pigmented scalp BCCs are the most challenging to diagnose. A clinical/dermoscopic continuum across increasingly palpable sBCCs was detected and could be potentially important for the non-surgical management of the disease.


Quenched and tempered high-speed steels obtained by powder metallurgy are commonly used in automotive components such as the valve seats of combustion engines. Machining these components requires tools with high wear resistance and an appropriate cutting-edge geometry. This work investigates the influence of the edge preparation of polycrystalline cubic boron nitride (PCBN) tools on wear behaviour in the orthogonal longitudinal turning of quenched and tempered M2 high-speed steel obtained by powder metallurgy. PCBN tools with high and low CBN content were used. Two cutting-edge geometries, both with a honed radius, were tested: with a ground land (S-shaped) and without one (E-shaped). The cutting speed was varied from 100 to 220 m/min on a rigid CNC lathe. The results showed that the high-CBN, E-shaped tool had the longest life at a cutting speed of 100 m/min. High-CBN tools with a ground land and honed edge radius (S-shaped) showed edge damage and shorter tool life. Low-CBN, S-shaped tools showed similar results, but with inferior performance compared with high-CBN tools for both forms of edge preparation.
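The trade-off between cutting speed and tool life reported above is conventionally modelled by Taylor's tool-life equation, V * T^n = C. The paper does not report Taylor constants, so the exponent n and constant C below are purely illustrative assumptions; the sketch only shows the qualitative effect of raising the speed from 100 to 220 m/min.

```python
def taylor_tool_life(v, n=0.4, c=300.0):
    """Solve Taylor's tool-life equation V * T**n = C for the life T (minutes).

    n and c are illustrative assumptions, not values from the paper."""
    return (c / v) ** (1.0 / n)

# Cutting speeds spanning the range tested in the paper (m/min).
for v in (100, 160, 220):
    print(f"v = {v:3d} m/min -> predicted tool life ~ {taylor_tool_life(v):6.1f} min")
```

Whatever constants are fitted, the model predicts the qualitative result seen experimentally: the lowest tested speed gives the longest life.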


A multistate molecular dyad containing flavylium and viologen units was synthesized, and the pH-dependent thermodynamics of the network was completely characterized by a variety of spectroscopic techniques such as NMR, UV-vis and stopped-flow. The flavylium cation is only stable at acidic pH values. Above pH ≈ 5, hydration of the flavylium leads to formation of the hemiketal, followed by ring-opening tautomerization to give the cis-chalcone. Finally, this last species isomerizes to give the trans-chalcone. For the present system, only the flavylium cation and the trans-chalcone could be detected as thermodynamically stable species. The hemiketal and the cis-chalcone are kinetic intermediates with negligible concentrations at equilibrium. All stable species of the network were found to form 1:1 and 2:1 host:guest complexes with cucurbit[7]uril (CB7), with association constants in the ranges 10^5-10^8 M^-1 and 10^3-10^4 M^-1, respectively. The 1:1 complexes were particularly interesting for devising pH-responsive bistable pseudorotaxanes: at basic pH values (≈12) the flavylium cation interconverts into the deprotonated trans-chalcone in a few minutes, and under these conditions the CB7 wheel was found to be located around the viologen unit. A decrease in pH to values around 1 regenerates the flavylium cation in seconds, and the macrocycle is translocated to the middle of the axle. On the other hand, if the pH is decreased to 6, the deprotonated trans-chalcone is neutralized to give a metastable species that evolves to the thermodynamically stable flavylium cation in ca. 20 hours. By taking advantage of the pH-dependent kinetics of the trans-chalcone/flavylium interconversion, spatiotemporal control of the molecular organization in pseudorotaxane systems can be achieved.


This thesis presents an experimental study of the inlet flow of the draft tube of a bulb turbine exhibiting a sharp drop in performance. Laser Doppler velocimetry (LDV) measurements were carried out on two axes, one downstream of the runner blades and one downstream of the runner hub. A particular feature of this study is the design of a set-up that allows the axial velocity to be measured close to the cone wall. In addition, a method for estimating the mean radial velocity was developed. These measurements made it possible to characterise the primary flow and the secondary flows and to analyse their evolution between the two axes. The evolution of these flows is also analysed as a function of the turbine's performance drop. The main features of the flow are a recirculation zone under the hub, a counter-rotating region, the guide-vane wakes and the blade-tip vortices.


Nitrous oxide (N2O) is a potent greenhouse gas with a global warming potential 298 times that of carbon dioxide. Soils are a natural source of N2O, contributing 65% of global emissions. This paper is the first in Australia to measure and compare N2O emissions from pre-plant controlled-release (CR) and conventional granular (CV) fertilisers in pineapple production, using static PVC chambers to capture N2O emissions. At farm 1, cumulative emissions from the CR fertiliser were 3.22 kg ha-1, compared with 6.09 kg ha-1 from the CV fertiliser. At farm 2, the CV blend emitted 2.36 kg ha-1, compared with 2.92 kg ha-1 from the CR blend. Daily N2O flux rates responded directly to rainfall and soil moisture availability. High emissions were observed in wheel tracks, where increased N2O emissions may be linked to soil compaction and waterlogging that create anaerobic conditions after rain events. Emission measurements over three months highlighted the inconsistencies found in other studies on reducing emissions through controlled-release nitrogen. Further investigation is required to verify the benefits of controlled-release fertiliser use in pineapple production, as well as of placement and seasonal timing, in addressing N2O emissions.
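Using the global warming potential of 298 stated above, the cumulative N2O figures can be expressed in CO2-equivalent terms, which makes the fertiliser comparison easier to read:

```python
GWP_N2O = 298  # 100-year global warming potential of N2O relative to CO2, as stated above

def co2_equivalent(n2o_kg_per_ha):
    """Convert a cumulative N2O emission (kg/ha) to CO2-equivalent (kg/ha)."""
    return n2o_kg_per_ha * GWP_N2O

# Cumulative emissions reported in the abstract (kg N2O per hectare).
emissions = {"farm 1 CR": 3.22, "farm 1 CV": 6.09,
             "farm 2 CV": 2.36, "farm 2 CR": 2.92}
for label, kg in emissions.items():
    print(f"{label}: {co2_equivalent(kg):7.1f} kg CO2-e/ha")
```

On this basis, the CR fertiliser at farm 1 avoids roughly 0.86 t CO2-e per hectare relative to the CV fertiliser over the measurement period.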


Scheduling problems are generally NP-hard combinatorial problems, and much research has been done on solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles arise when using GAs: there is no canonical mechanism for handling constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs operate on a mapping of the solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and restricted to efficiently adjusting the weights of a set of rules used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at every iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is unreasonable and not coherent with human learning. When a human scheduler works, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even before the scheduling process is complete, and can therefore finish a schedule using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable; here, each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance of each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.

In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm.

This is exciting and ambitious research, which might provide the stepping stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.
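The three LCS steps described above (constant initial strengths, roulette-wheel rule selection, then reinforcement of the rules actually used) can be sketched as follows. The number of stages, number of rules, and reinforcement factor are illustrative assumptions, not values from the proposal.

```python
import random

def roulette(strengths, rng):
    # Roulette-wheel selection: rule i is chosen with probability
    # strengths[i] / sum(strengths).
    r = rng.random() * sum(strengths)
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if r <= acc:
            return i
    return len(strengths) - 1

def build_schedule(strengths_per_stage, rng):
    # Construct a solution by picking one rule per stage via roulette wheel.
    return [roulette(s, rng) for s in strengths_per_stage]

def reinforce(strengths_per_stage, used, factor=1.2):
    # Strengthen only the rules used in the previous solution;
    # unused rules keep their strength unchanged.
    for stage, rule in enumerate(used):
        strengths_per_stage[stage][rule] *= factor

rng = random.Random(42)
n_stages, n_rules = 5, 4
strengths = [[1.0] * n_rules for _ in range(n_stages)]  # constant initial strength
solution = build_schedule(strengths, rng)
reinforce(strengths, solution)
print("rules used:", solution)
print("updated strengths at stage 0:", strengths[0])
```

A fitness-based survival step over several such solutions would then complete the loop described in the text.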


A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even before the scheduling process is complete, and can therefore finish a schedule using flexible, rather than fixed, rules. In this paper we design a more human-like scheduling algorithm, using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work, which used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit: we identify and mix building blocks directly. The Bayesian optimisation algorithm implements this explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated; in our case, until new rule strings have been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation; 2: allocate the nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first two or three nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for ours and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover.

In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:
1. Set t = 0 and generate an initial population P(0) at random.
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t).
3. Compute the conditional probabilities of each node according to this set of promising solutions.
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way.
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1.
6. If the termination conditions are not met (we use 2000 generations), go to step 2.

Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good construction sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and extracted as new domain knowledge. Using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and schedule only the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References
[1] Aickelin, U., "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,
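The counting-based learning loop described above can be sketched for the five-nurse, two-rule toy example. The cost function and population sizes are illustrative assumptions (the real problem uses four rules and hospital data), and for brevity the roulette-wheel selection of promising strings is simplified to truncation selection; the 'counting' step for the fully observed multinomial model is the part shown faithfully.

```python
import random

N_NURSES, N_RULES = 5, 2   # toy example from the text: 5 nurses, 2 rules
rng = random.Random(7)

def cost(rule_string):
    # Hypothetical cost: rewards mixing the two rules, echoing the toy
    # example where purely random or purely low-cost allocation scores badly.
    return abs(sum(rule_string) - N_NURSES // 2)

def learn_probabilities(promising):
    # Fully observed model -> learning amounts to counting rule usage
    # per nurse position over the promising set (with Laplace smoothing
    # so no rule's probability collapses to zero).
    probs = []
    for i in range(N_NURSES):
        counts = [1] * N_RULES
        for s in promising:
            counts[s[i]] += 1
        total = sum(counts)
        probs.append([c / total for c in counts])
    return probs

def sample(probs):
    # Build a new rule string nurse by nurse from the learned probabilities.
    return [rng.choices(range(N_RULES), weights=p)[0] for p in probs]

population = [[rng.randrange(N_RULES) for _ in range(N_NURSES)]
              for _ in range(30)]
for generation in range(20):
    population.sort(key=cost)
    promising = population[:10]          # truncation in place of roulette wheel
    probs = learn_probabilities(promising)
    offspring = [sample(probs) for _ in range(20)]
    population = promising + offspring   # partial replacement, as in step 5
best = min(population, key=cost)
print("best rule string:", best, "cost:", cost(best))
```

The probability table plays the role of the Bayesian network's conditional probabilities: it is re-counted from the current promising set every generation.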


In an increasingly complex and evolving society, we are put to the test by many challenges. Despite its dependencies and comforts, the population tends to seek new forms of leisure that involve nature and sport. Bicycle touring meets these demands and has become a growing market, with a rising number of participants; it is a type of tourism with positive outcomes, such as a healthy lifestyle and the use of a low-cost means of transport with a small ecological footprint. Along the same lines, this document covers the entire process of developing a quadricycle. Its general objective is to satisfy the needs of bicycle tourists and to encourage people to use the bicycle as a means of transport. The participants involved were regular bicycle users who ride not only for leisure but for bicycle touring. Users of the designed vehicle thus have the heart of nature as the backdrop to their journeys, enjoying the comfort and safety the vehicle provides and, as a result, a healthier lifestyle. Following the Ulrich and Eppinger methodology in a sequential and organised way, the steps taken to reach the final product are clearly laid out. Beyond the strong concepts of comfort, stability, engineering and ergonomics, the importance of design must be emphasised, since it influenced the layout of the four-wheeled vehicle from beginning to end.


Food is a present-day concern. The emergence of diseases associated with dietary excess, differing cultures and the information being conveyed are constant and worrying problems in society. People must be educated from an early age, valuing the information they gather from their surroundings and putting what they learn into practice. This work assessed the knowledge and practices of a group of individuals regarding healthy eating. The group comprised students from the 5th to the 9th year of schooling in the municipality of Viseu. The objective was to evaluate the knowledge acquired from family and friends, at school and from the surrounding marketing, and to check whether it was put into practice or whether the knowledge was insufficient. Questionnaire surveys were used to collect the necessary information, and SPSS (Statistical Package for the Social Sciences) was used for the analysis. The selected schools, guardians and students were very receptive to the questionnaire, yielding a sample of 852 respondents, of whom 50.12% were female and 49.88% male, aged between 10 and 18. In general, the respondents have some information about healthy eating. Most (93.8%) can identify the current food wheel, and it is through school (60.2%) and parents/family (75.1%) that they obtain their knowledge. However, an overall assessment based on a cluster analysis shows that the individuals with some degree of knowledge represent about a third of the students (38.7%), demonstrating that barriers remain to be overcome in alerting the student population to this issue.


Relevância:

10.00% 10.00%

Publicador:

Resumo:

Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has its strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy. 
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will serve as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References

1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.
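The reinforcement step described in the abstract above can be sketched as a strength update in which only the rules that participated in the previous solution are rewarded. The additive update and the learning rate are assumptions for illustration, not the algorithm's actual update rule.

```python
def reinforce(strengths, used_rules, reward, rate=0.1):
    """Strengthen every rule used in the previous solution in
    proportion to that solution's quality (reward); rules that were
    not used keep their strength unchanged. The additive form and
    the rate of 0.1 are hypothetical choices."""
    updated = list(strengths)
    for i in used_rules:
        updated[i] += rate * reward
    return updated
```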

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The paper studies the influence of rail weld dip on wheel-rail contact dynamics, with particular reference to freight trains, where there is a need to increase both the operating speed and the load transported. The study has produced a precise yet simple and cost-effective model of train-track dynamic interaction over rail welds, which makes it possible to quantify the influence on dynamic forces and displacements of the weld geometry, the position of the weld relative to the sleeper, the vehicle speed, and the axle load and wheelset unsprung mass. The model is vertical, is formulated in the spatial domain, and is derived in a simple fashion from vertical track receptances. For the type of track and vehicle considered, the results quantify the increases in wheel-rail contact force caused by the new speed and load conditions.
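The paper's receptance-based model is not reproduced here, but the kind of quantification it enables can be illustrated with a much cruder single-degree-of-freedom sketch: an unsprung wheel mass on a linearised contact spring rolls over a cosine-shaped weld dip on an otherwise rigid track, and the peak contact force is recorded. Every parameter value below is a hypothetical placeholder, not data from the paper.

```python
import math

def wheel_over_dip(speed, axle_load=1.0e5, m_unsprung=600.0,
                   k_contact=1.2e9, dip_depth=1.0e-3, dip_length=0.1,
                   dt=1.0e-5, t_end=0.02):
    """Crude 1-DOF illustration (NOT the paper's model): returns the
    peak wheel-rail contact force (N) as the wheel traverses a cosine
    weld dip at the given speed (m/s). Downwards is positive."""
    z, v = 0.0, 0.0            # wheel displacement and velocity
    peak, t = axle_load, 0.0
    while t < t_end:
        x = speed * t          # contact-point position along the rail
        # downward rail deviation: cosine dip of given depth and length
        if 0.0 <= x <= dip_length:
            dip = 0.5 * dip_depth * (1.0 - math.cos(2.0 * math.pi * x / dip_length))
        else:
            dip = 0.0
        # contact force, clamped at zero to represent loss of contact
        force = max(0.0, axle_load + k_contact * (z - dip))
        a = (axle_load - force) / m_unsprung   # net downward acceleration
        v += a * dt                            # semi-implicit Euler (stable)
        z += v * dt
        peak = max(peak, force)
        t += dt
    return peak
```

Even this toy model reproduces the qualitative behaviour the paper quantifies rigorously: traversing the dip unloads the wheel and the subsequent impact raises the contact force above its static value.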

Relevância:

10.00% 10.00%

Publicador:

Resumo:

In this paper we present a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and we minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smooth filter based on the covariance of the estimate is employed to handle uncertainty in under-constrained scenarios (e.g. corridors). Simulated and real experiments have been performed to compare our approach with two prominent scan matchers and with wheel odometry. Quantitative and qualitative results demonstrate the superior performance of our approach, which, together with its very low computational cost (0.9 milliseconds on a single CPU core), makes it suitable for robotic applications that require planar odometry. For this purpose, we also provide the code so that the robotics community can benefit from it.
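Each scanned point contributes one linearised constraint on the planar sensor velocity (vx, vy, ω), and the robust minimisation can be sketched with iteratively re-weighted least squares. The Cauchy weight below is an illustrative choice of robust function, and the constraint rows are taken as given; neither is necessarily what the paper implements.

```python
import numpy as np

def estimate_velocity(A, b, iters=15, c=0.5):
    """Robustly solve A @ xi ~ b for the planar velocity
    xi = (vx, vy, omega). Each row of (A, b) stands for the
    linearised range-flow constraint of one scanned point; IRLS with
    Cauchy weights down-weights points that violate the constraints
    (e.g. points on moving objects)."""
    xi = np.linalg.lstsq(A, b, rcond=None)[0]      # plain LS initialisation
    for _ in range(iters):
        r = A @ xi - b                             # per-point residuals
        w = 1.0 / (1.0 + (r / c) ** 2)             # Cauchy weight per point
        Aw = A * w[:, None]
        xi = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted normal equations
    return xi
```

With three unknowns and hundreds of constraints per scan, the per-iteration cost is dominated by forming a 3x3 system, which is consistent with the sub-millisecond runtimes reported above.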