968 results for Classifier Combination Systems


Relevance: 30.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and much research has been done on solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are common in most real-world scheduling problems, and small changes to a solution are difficult to make. To overcome both difficulties, indirect approaches have been presented for nurse scheduling [1] and driver scheduling [2], where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is unreasonable and inconsistent with human learning processes. A human scheduler normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well together, can identify good parts, and is aware of solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
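
Because the structure is known and all variables are observed, model fitting really does reduce to counting. The following minimal Python sketch illustrates the counting-and-sampling loop under simplifying assumptions that are ours, not the proposal's: a fixed chain-structured network (each rule choice depends only on the previous one), Laplace-smoothed counts, and a stand-in fitness function in place of a real schedule decoder.

```python
import random
from collections import defaultdict

N_STEPS = 10   # moves needed to build one schedule (illustrative)
N_RULES = 4    # construction rules available at each move
POP = 50       # population of rule strings
ELITE = 10     # promising solutions used to fit the model

def fit_chain_model(elite):
    """Maximum-likelihood fit of P(rule_t | rule_{t-1}) by counting.

    Assumes a fixed chain-structured network; the full BOA also
    learns the network structure, which is omitted here."""
    counts = defaultdict(lambda: [1] * N_RULES)  # Laplace smoothing
    for string in elite:
        for t in range(1, N_STEPS):
            counts[(t, string[t - 1])][string[t]] += 1
    return counts

def sample_string(counts):
    """Generate a new rule string node by node from the conditionals."""
    string = [random.randrange(N_RULES)]
    for t in range(1, N_STEPS):
        weights = counts[(t, string[t - 1])]
        string.append(random.choices(range(N_RULES), weights=weights)[0])
    return string

def fitness(string):
    # Placeholder: decode the rule string into a schedule and score it.
    return -sum(string)

population = [[random.randrange(N_RULES) for _ in range(N_STEPS)]
              for _ in range(POP)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    model = fit_chain_model(population[:ELITE])     # learning = counting
    offspring = [sample_string(model) for _ in range(POP // 2)]
    population = population[:POP // 2] + offspring  # fitness-based replacement
```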
In the LCS approach, each rule has a strength that indicates its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps (a minimal sketch of this loop follows the references below). The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the roulette-wheel strategy. The reinforcement step strengthens the rules used in the previous solution, leaving the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantú-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth Level Classifier System', Evolutionary Computation 2(1): 1-18.
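
The strength-update loop referred to above can be sketched as follows. This is a hypothetical illustration: the reward constant, the number of stages, the update schedule, and the placeholder evaluation function are all invented for the example rather than taken from the proposal or from [4].

```python
import random

N_STEPS = 10          # construction moves per schedule (illustrative)
N_RULES = 4           # candidate rules at each move
INIT_STRENGTH = 1.0
REWARD = 0.1          # reinforcement for rules used in an improving solution

# Initialization: a constant strength for every rule at every stage.
strength = [[INIT_STRENGTH] * N_RULES for _ in range(N_STEPS)]

def roulette_pick(weights):
    """Roulette-wheel selection: index i chosen with probability
    proportional to weights[i]."""
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def evaluate(rules):
    # Placeholder: decode the rule sequence into a schedule and score it.
    return -sum(rules)

best_fitness = float("-inf")
for episode in range(1000):
    # Build a schedule by picking one rule per stage via the wheel.
    rules = [roulette_pick(strength[t]) for t in range(N_STEPS)]
    fit = evaluate(rules)
    if fit > best_fitness:
        best_fitness = fit
        # Reinforce the rules used in this improving solution;
        # unused rules keep their strengths unchanged.
        for t, r in enumerate(rules):
            strength[t][r] += REWARD
```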

Relevance: 30.00%

Abstract:

The investigation of pathogen persistence in vector-borne diseases is important in different ecological and epidemiological contexts. In this thesis, I develop deterministic and stochastic models to help investigate pathogen persistence in host-vector systems using efficient modelling paradigms. A general introduction, with the aims and objectives of the studies conducted in the thesis, is provided in Chapter 1. The mathematical treatment of the models used in the thesis is provided in Chapter 2, where the models are shown to be locally asymptotically stable. The models used in the rest of the thesis are based on the same or a similar mathematical structure to that studied in this chapter. Three different experiments are then conducted to study pathogen persistence. In Chapter 3, I characterize pathogen persistence in terms of the Critical Community Size (CCS) and find its relationship with the model parameters. In this study, the stochastic versions of two epidemiologically different host-vector models are used for estimating CCS. I note that the model parameters and their algebraic combinations, in addition to the seroprevalence level of the host population, can be used to quantify CCS. The study undertaken in Chapter 4 estimates pathogen persistence using both deterministic and stochastic versions of a model with a seasonal vector birth rate. Through stochastic simulations, I investigate the pattern of epidemics after the introduction of an infectious individual at different times of the year. The results show that the disease dynamics are altered by the seasonal variation, and that higher levels of pre-existing seroprevalence reduce the probability of a dengue invasion. In Chapter 5, I consider two alternative ways to represent the dynamics of a host-vector model. Both approximate models are investigated for the parameter regions where the approximation fails to hold, and three metrics are used to compare them with the Full model. In addition to their computational benefits, these approximations are used to investigate to what degree the inclusion of the vector population in the dynamics of the system is important. Finally, in Chapter 6, I present a summary of the studies undertaken and possible extensions for future work.
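
As context for the stochastic experiments described, the sketch below shows the general shape of a Gillespie-type simulation of a toy SIR-host / SI-vector model; the model structure and all parameter values are illustrative assumptions, not the thesis models themselves. A CCS-style estimate could then be obtained by finding the host community size at which infection typically persists beyond a chosen time horizon across many runs.

```python
import random

# Illustrative rates; the thesis models differ in structure and detail.
BETA_HV = 0.0005   # transmission rate, infectious vector -> susceptible host
BETA_VH = 0.0005   # transmission rate, infectious host -> susceptible vector
GAMMA   = 0.1      # host recovery rate
MU_V    = 0.05     # infected-vector death rate (replaced by a susceptible)

def extinction_time(n_hosts, n_vectors, t_max=1000.0):
    """One Gillespie run started from a single infectious host;
    returns the fade-out time (or t_max if infection persists)."""
    sh, ih = n_hosts - 1, 1          # susceptible / infectious hosts
    sv, iv = n_vectors, 0            # susceptible / infectious vectors
    t = 0.0
    while t < t_max and ih + iv > 0:
        rates = [BETA_HV * sh * iv,  # a host becomes infected
                 BETA_VH * sv * ih,  # a vector becomes infected
                 GAMMA * ih,         # a host recovers
                 MU_V * iv]          # an infected vector dies
        t += random.expovariate(sum(rates))
        event = random.choices(range(4), weights=rates)[0]
        if event == 0:
            sh -= 1; ih += 1
        elif event == 1:
            sv -= 1; iv += 1
        elif event == 2:
            ih -= 1                  # recovered hosts leave the dynamics
        else:
            iv -= 1; sv += 1         # constant vector population
    return t

# Example: fraction of runs persisting to t_max for one community size.
runs = [extinction_time(n_hosts=500, n_vectors=1000) for _ in range(20)]
print(sum(t >= 1000.0 for t in runs) / len(runs), "of runs persist")
```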

Relevance: 30.00%

Abstract:

The dendritic cell algorithm is an immune-inspired technique for processing time-dependent data. Here we propose it as a possible solution to a robotic classification problem. The dendritic cell algorithm is implemented on a real robot, and an investigation is performed into the effects of varying the migration threshold median for the cell population. The algorithm performs well on a classification task with very little tuning. Ways of extending the implementation so that it can be used as a classifier within the field of robotic security are suggested.
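
For readers unfamiliar with the technique, the following is a minimal, hypothetical sketch of a dendritic cell algorithm of the kind described; the signal semantics, the threshold spread, and the toy input stream are all assumed for illustration, and the paper's robot implementation and its actual signal sources are not reproduced here. Each cell accumulates signals until its costimulatory value crosses its migration threshold (drawn around the median that the paper varies), then labels its sampled antigens by context; each antigen type is scored by its fraction of mature (danger-dominated) presentations.

```python
import random

class DendriticCell:
    def __init__(self, threshold):
        self.threshold = threshold   # this cell's migration threshold
        self.csm = 0.0               # accumulated costimulatory signal
        self.k = 0.0                 # running danger-minus-safe balance
        self.antigens = []           # antigen identifiers sampled so far

def new_cell(median_threshold):
    # Thresholds are spread around the population median, the
    # parameter varied in the robot experiments.
    return DendriticCell(random.uniform(0.5, 1.5) * median_threshold)

def dca(stream, median_threshold=10.0, n_cells=20):
    """Simplified DCA: `stream` yields (antigen_id, danger, safe) tuples.
    Returns each antigen's fraction of mature presentations."""
    cells = [new_cell(median_threshold) for _ in range(n_cells)]
    presented = {}                   # antigen_id -> [mature, semi_mature]
    for antigen_id, danger, safe in stream:
        i = random.randrange(n_cells)          # one cell samples the item
        cell = cells[i]
        cell.antigens.append(antigen_id)
        cell.csm += danger + safe
        cell.k += danger - safe
        if cell.csm >= cell.threshold:         # the cell migrates
            mature = cell.k > 0                # danger-dominated context
            for a in cell.antigens:
                counts = presented.setdefault(a, [0, 0])
                counts[0 if mature else 1] += 1
            cells[i] = new_cell(median_threshold)   # replace the cell
    return {a: m / (m + s) for a, (m, s) in presented.items() if m + s > 0}

# Toy stream: antigen 1 tends to co-occur with danger, antigen 0 with safe.
stream = [(1, 2.0, 0.5) if random.random() < 0.5 else (0, 0.5, 2.0)
          for _ in range(2000)]
print(dca(stream))   # antigen 1 should score near 1.0, antigen 0 near 0.0
```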

Relevance: 30.00%

Abstract:

Herbicide runoff from cropping fields has been identified as a threat to the Great Barrier Reef ecosystem. A field investigation was carried out to monitor the changes in runoff water quality resulting from four different sugarcane cropping systems involving different herbicides and contrasting tillage and trash management practices: (i) Conventional - tillage of beds and inter-rows, with residual herbicides; (ii) Improved - only the beds tilled (zonal tillage), with reduced residual herbicide use; (iii) Aspirational - minimum tillage (one pass of a single-tine ripper before planting) with trash mulch, no residual herbicides, and a legume intercrop after cane establishment; and (iv) New Farming System (NFS) - minimum tillage as in the Aspirational practice, with a grain legume rotation and a combination of residual and knockdown herbicides. Results suggest that soil and trash management had a larger effect on herbicide losses in runoff than the physico-chemical properties of the herbicides. The Improved practice, with a 30% lower atrazine application rate than the Conventional system, reduced runoff volume by 40% and atrazine loss by 62%. There was a 2-fold variation in atrazine loads and a >10-fold variation in metribuzin loads in runoff water between reduced-tillage systems differing in soil disturbance and in surface residue cover from the previous rotation crops, despite the same herbicide application rates. The elevated risk of off-site losses of herbicides was illustrated by the high concentration of diuron (14 μg L-1) recorded in runoff that occurred more than 2.5 months after herbicide application in a first-ratoon crop. A cropping system employing less persistent, non-selective herbicides and an inter-row soybean mulch resulted in no residual herbicide contamination in runoff water, but recorded a 12.3% lower yield than the Conventional practice. These findings reveal a trade-off between achieving good water quality with minimal herbicide contamination and maintaining farm profitability with good weed control.


Relevance: 30.00%

Abstract:

In a principal-agent model, we analyze the firm's decision to adopt an informal or a standardized Environmental Management System (EMS). Our results are consistent with empirical evidence in several respects. A standardized EMS increases internal control at the cost of introducing some degree of rigidity that entails an endogenous setup cost. Standardized systems are more prone to be adopted by large, well-established firms and under tougher environmental policies. Firms with a standardized EMS tend to devote more effort to abatement, although this effort results in lower pollution only if public incentives are strong enough, suggesting a complementarity between standardized EMSs and public policies. Emission charges have both a marginal effect on abatement and a qualitative effect on the adoption decision, which may induce a conflict between private and public interests. As a result of the combination of these two effects, it can be optimal for the government to distort the tax in a specific way in order to push the firm to choose the socially optimal EMS. The introduction of standardized systems can result in win-win situations where firms, society, and the environment are all better off.
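
The interplay between the marginal and the qualitative effects of an emission charge can be made concrete with a toy numerical model; the functional forms and all numbers below are our own illustrative assumptions, not the paper's model. A higher tax raises the chosen abatement under either EMS (marginal effect) and, past some level, tips the firm into paying the standardized EMS's setup cost in exchange for cheaper abatement (qualitative effect).

```python
# Toy illustration: the firm picks abatement a under each EMS, then picks
# the more profitable EMS. A standardized EMS halves the abatement cost
# (better internal control) but carries a fixed setup cost F (rigidity).

E0 = 10.0   # baseline emissions (illustrative)
F = 5.0     # endogenous setup cost of the standardized EMS

def profit(a, tau, standardized):
    abatement_cost = (0.5 if standardized else 1.0) * a ** 2
    setup = F if standardized else 0.0
    return -tau * (E0 - a) - abatement_cost - setup

def firm_choice(tau, grid=1000):
    """Marginal effect: abatement chosen under each EMS.
    Qualitative effect: which EMS the firm adopts."""
    best = {}
    for standardized in (False, True):
        candidates = [i * E0 / grid for i in range(grid + 1)]
        a_star = max(candidates, key=lambda a: profit(a, tau, standardized))
        best[standardized] = (a_star, profit(a_star, tau, standardized))
    ems = max(best, key=lambda s: best[s][1])
    return ems, best[ems][0]

for tau in (0.5, 2.0, 6.0):
    ems, a = firm_choice(tau)
    label = "standardized" if ems else "informal"
    print(f"tau={tau}: EMS={label}, abatement={a:.2f}")
```

In this toy setup, raising the tax from 0.5 to 6 both increases the chosen abatement and flips the adoption decision, which is the channel through which a regulator could deliberately distort the tax to steer the firm toward the socially optimal EMS.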

Relevance: 30.00%

Abstract:

Doctoral Thesis in Veterinary Sciences, specializing in Animal Production

Relevance: 30.00%

Abstract:

Purpose: To develop liposome formulations containing the anti-HER2 monoclonal antibody (MabHer2) and paclitaxel (PTX). Methods: Seven different liposomal systems containing PTX, MabHer2, or a combination of PTX and MabHer2 were made using the lipid film hydration technique and sonication. The effects of the liposome preparation conditions and extraction methods on antibody structure were investigated by polyacrylamide gel electrophoresis and the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The characteristics of the liposomes were determined with a zetasizer, while drug-loading efficiency was evaluated by high-performance liquid chromatography. The cytotoxic effect of the liposome formulations was evaluated on the MDA-MB-453 (HER2-positive) and MCF-7 (HER2-negative) breast cancer cell lines by MTT assay. Results: The antibody was not significantly affected by the stress conditions or the method of extraction. The particle size of the liposomes was < 200 nm, while the amount of incorporated PTX was 97.6% for liposomes without a cationic agent and 98.2% for those with a cationic agent. Recovery of MabHer2 was 94.38% after extraction. The combined PTX/MabHer2 liposome was more toxic to the HER2-overexpressing MDA-MB-453 cell line than either PTX-loaded liposomes or MabHer2. MabHer2 and combined PTX/MabHer2 liposomes showed no toxic effect on the HER2-negative MCF-7 cells relative to cationic PTX-loaded liposomes. Conclusions: The results show that PTX can be successfully encapsulated in liposomal systems and that, owing to the HER2-specific antibody, these systems can be delivered directly to the target cells.

Relevance: 30.00%

Abstract:

The Cliff Mine, an archaeological site on the Keweenaw Peninsula of Michigan, is the location of the first successful attempt to mine native copper in North America. Under the management of the Pittsburgh & Boston Mining Company from 1845 to 1879, two-thirds of the Cliff's mineral output was in the form of mass copper, some pieces of which weighed over 5 tons when removed from the ground. The unique nature of mass copper and the Cliff Mine's handling of it make it one of the best examples of early mining processes in the Keweenaw District. Mass copper constituted only 2% of the entire product of the Lake Superior copper districts, and the story of early mining on the Peninsula is generally overshadowed by later, longer-running mines such as the Calumet & Hecla and Quincy Mining Companies. Operating into the mid-twentieth century, the size and duration of these later mines would come to define the region, though they would not have been possible without the Cliff's early success. Research on the Cliff Mine has previously focused on social and popular history, neglecting the structural remains. However, these remains are physical clues to the technical processes that defined early mining on the Keweenaw. Through archaeological investigations, these processes and their associated networks were documented as part of the 2010 Michigan Technological University Archaeology Field School's curriculum. The project will create a visual representation of these processes using Geographic Information Systems (GIS) software. This map will be a useful aid in future research, community engagement, and possible future interpretive planning.

Relevance: 30.00%

Abstract:

A new approach is described herein, in which neutron reflectivity measurements that probe changes in the density profile of thin films as they absorb material from the gas phase are combined with a Love-wave-based gravimetric assay that measures the mass of the absorbed material. This combination of techniques not only determines the spatial distribution of the absorbed molecules but also reveals the amount of void space within the thin film (a quantity that can be difficult to assess using neutron reflectivity measurements alone). The uptake of organic solvent vapours into spun-cast films of polystyrene is used as a model system, with a view to extending the method to the study of other systems. These could include, for example, humidity sensors, hydrogel swelling, biomolecule adsorption, or transformations of electroactive and chemically reactive thin films. This is the first demonstration of combined neutron reflectivity and Love-wave-based gravimetry, and the experimental caveats, limitations, and scope of the method are explored and discussed in detail.
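
A back-of-envelope calculation shows why combining the two measurements constrains void space; the numbers and the simple volume-balance argument below are illustrative assumptions, not results from the paper. Gravimetry fixes the absorbed mass, reflectivity fixes the film's volume change, and any mismatch between the two (taking the solvent at bulk density) must correspond to void volume.

```python
# Hypothetical numbers for a polystyrene film absorbing solvent vapour.
d_dry = 100e-9       # dry film thickness from neutron reflectivity (m)
d_swollen = 112e-9   # swollen thickness from neutron reflectivity (m)
rho_solvent = 870.0  # bulk density of the solvent (kg m^-3), assumed

# The Love-wave device weighs the absorbed material directly:
delta_m = 8.0e-6     # absorbed areal mass (kg m^-2), illustrative

# Volume per unit area the solvent would occupy at bulk density:
v_solvent = delta_m / rho_solvent       # ~9.2 nm equivalent layer

# Volume change per unit area actually seen by reflectivity:
v_swelling = d_swollen - d_dry          # 12 nm

# The shortfall is volume unaccounted for by solvent at bulk density,
# i.e. void space within the swollen film:
void_fraction = (v_swelling - v_solvent) / d_swollen
print(f"apparent void fraction ~ {void_fraction:.1%}")
```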

Relevance: 30.00%

Abstract:

In a previous work (Nicu et al. 2013), the flocculation efficiency of three chitosans differing in molecular weight and charge density was evaluated for their potential use as wet-end additives in papermaking. Following the promising results obtained, chitosan alone (single system) and its combination with bentonite (dual system) were evaluated as retention aids, and their efficiency was compared with poly(diallyldimethylammonium chloride) (PDADMAC) and polyethylenimine (PEI). In single systems, chitosan was clearly more efficient in drainage rate than PDADMAC and PEI, especially the chitosans with the lowest molecular weights; however, its retention was considerably lower. This drawback can be overcome by using dual systems with anionic bentonite microparticles, the optimum polymer:bentonite ratio being 1:4 (wt./wt.). In dual systems, the differences in retention were almost negligible, the difference in drainage rate was even larger, and floc reversibility was better. The most efficient chitosan in single systems was Ch.MMW, while Ch.LMW was the most efficient in dual systems. The flocculation mechanism of chitosan was a combination of patch formation, charge neutralization, and partial bridge formation, with the predominant mechanism depending on the molecular weight and charge density of the chitosan.