980 results for "optimal route finding"


Relevance:

20.00%

Publisher:

Abstract:

We examine the effect of subdividing the potential barrier along the reaction coordinate on Kramers' escape rate for a model potential. Using the known supersymmetric potential approach, we show the existence of an optimal number of subdivisions that maximizes the rate.
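For reference, the escape rate being optimized is of the standard Kramers form (quoted here in the overdamped limit as a textbook expression; the paper's model potential and supersymmetric treatment are not reproduced):

\[
r_K = \frac{\omega_0\,\omega_b}{2\pi\gamma}\, e^{-\Delta E / k_B T},
\]

where \(\omega_0\) and \(\omega_b\) are the angular frequencies at the well bottom and the barrier top, \(\gamma\) is the friction coefficient, and \(\Delta E\) the barrier height. Subdividing the barrier into intermediate wells replaces one large exponential factor by a chain of smaller ones at the price of extra crossings, which is why an optimal number of subdivisions can exist.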


Deriving an estimate of optimal fishing effort, or even an approximate one, is very valuable for managing fisheries with multiple target species. The most challenging task associated with this is allocating effort to individual species when only the total effort is recorded. Spatial information on the distribution of each species within a fishery can be used to justify the allocations, but often such information is not available. To determine the long-term overall effort required to achieve maximum sustainable yield (MSY) and maximum economic yield (MEY), we consider three methods for allocating effort: (i) optimal allocation, which optimally allocates effort among target species; (ii) fixed proportions, which chooses proportions based on past catch data; and (iii) economic allocation, which splits effort based on the expected catch value of each species. Determining the overall fishing effort required to achieve these management objectives is a maximization problem subject to constraints reflecting economic and social considerations. We illustrated the approaches using a case study of the Moreton Bay Prawn Trawl Fishery in Queensland (Australia). The results were consistent across the three methods. Importantly, our analysis demonstrated that the optimal total effort was very sensitive to daily fishing costs: the effort ranged from 9500–11 500 boat-days down to 6000–7000, 4000, and 2500 boat-days using daily cost estimates of $0, $500, $750, and $950, respectively. The zero daily cost corresponds to the MSY, while a daily cost of $750 most closely represents the actual present fishing cost. Given the recent debate on which costs should be factored into the analyses for deriving MEY, our findings highlight the importance of including an appropriate cost function for practical management advice. The approaches developed here could be applied to other multispecies fisheries where only aggregated fishing effort data are recorded, as the literature on this type of modelling is sparse.
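The cost sensitivity described above can be sketched with a toy single-species surplus-production model. All parameter values below (growth rate, catchability, price, costs) are invented for illustration, not the Moreton Bay estimates; the sketch only shows how the profit-maximizing effort (MEY) falls below the MSY effort once a daily cost is charged.

```python
# Toy single-species surplus-production sketch of the cost sensitivity
# described above. All parameter values (r, K, q, price, costs) are
# invented for illustration -- they are NOT the Moreton Bay estimates.
r, K = 0.8, 50_000.0    # intrinsic growth rate, carrying capacity (kg)
q = 1e-4                # catchability coefficient per boat-day
price = 10.0            # landed price, $ per kg

def profit(effort, daily_cost):
    """Equilibrium annual profit under Schaefer (logistic) dynamics."""
    equilibrium_catch = q * effort * K * (1 - q * effort / r)
    return price * equilibrium_catch - daily_cost * effort

def optimal_effort(daily_cost):
    """Grid search over whole boat-days for the profit-maximizing effort."""
    return max(range(int(r / q) + 1), key=lambda e: profit(e, daily_cost))

e_msy = optimal_effort(0.0)   # zero daily cost recovers the MSY effort
e_mey = optimal_effort(5.0)   # a positive daily cost pushes effort lower
```

With these toy numbers the MSY effort is r/(2q) = 4000 boat-days, and charging a daily cost shifts the optimum down to 3600, mirroring the qualitative pattern reported in the study.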


A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. To reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA), which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than the OSLA only when the required distributional assumptions are met, which is, however, often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method's performance is superior to that of several popular Bayesian methods and that the negative impact of prior misspecification can be managed in the design stage.
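A one-step-look-ahead rule of the kind described can be sketched with an independent conjugate Beta working model per dose. The prior, data, and utility function below are illustrative assumptions, not the paper's actual design.

```python
# Sketch of a one-step-look-ahead (OSLA) dose selection rule with an
# independent conjugate Beta working model per dose. The prior, data,
# and utility function are illustrative assumptions, not the paper's.
target = 0.3  # tolerable toxicity probability for the trial

# Posterior (alpha, beta) parameters for each dose, lowest to highest
posteriors = [(1, 9), (2, 8), (3, 5), (6, 4)]

def expected_toxicity(a, b):
    """Posterior mean of a Beta(a, b) toxicity probability (analytic)."""
    return a / (a + b)

def utility(p_tox):
    """Toy utility: penalize distance of the toxicity risk from target."""
    return -abs(p_tox - target)

def osla_dose(posteriors):
    """One step ahead: allocate the next patient to the best-so-far dose."""
    scores = [utility(expected_toxicity(a, b)) for a, b in posteriors]
    return max(range(len(scores)), key=scores.__getitem__)
```

Here the analytic posterior means are 0.10, 0.20, 0.375, and 0.60, so OSLA allocates the next patient to the third dose (index 2), the one closest to the target.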


Activation of midbrain dopamine systems is thought to be critically involved in the addictive properties of abused substances. Drugs of abuse increase dopamine release in the nucleus accumbens and dorsal striatum, which are the target areas of the mesolimbic and nigrostriatal dopamine pathways, respectively. Dopamine release in the nucleus accumbens is thought to mediate the attribution of incentive salience to rewards, and dorsal striatal dopamine release is involved in habit formation. In addition, changes in the function of the prefrontal cortex (PFC), the target area of the mesocortical dopamine pathway, may skew information processing and memory formation such that the addict pays an abnormal amount of attention to drug-related cues. In this study, we wanted to explore how long-term forced oral nicotine exposure or the lack of catechol-O-methyltransferase (COMT), one of the dopamine-metabolizing enzymes, would affect the functioning of these pathways. We also wanted to find out how the forced nicotine exposure or the lack of COMT would affect the consumption of nicotine, alcohol, or cocaine. First, we studied the effect of forced chronic nicotine exposure on the sensitivity of dopamine D2-like autoreceptors in microdialysis and locomotor activity experiments. We found that the sensitivity of these receptors was unchanged after forced oral nicotine exposure, although an increase in sensitivity was observed in mice treated with intermittent nicotine injections twice daily for 10 days. Thus, the effect of nicotine treatment on dopamine autoreceptor sensitivity depends on the route, frequency, and time course of drug administration. Second, we investigated whether the forced oral nicotine exposure would affect the reinforcing properties of nicotine injections. The chronic nicotine exposure did not significantly affect the development of conditioned place preference to nicotine.
In the intravenous self-administration paradigm, however, the nicotine-exposed animals self-administered nicotine at a lower unit dose than the control animals, indicating that their sensitivity to the reinforcing effects of nicotine was enhanced. Next, we wanted to study whether the Comt gene knock-out animals would be a suitable model to study alcohol and cocaine consumption or addiction. Although previous work had shown male Comt knock-out mice to be less sensitive to the locomotor-activating effects of cocaine, the present study found that the lack of COMT did not affect the consumption of cocaine solutions or the development of cocaine-induced place preference. However, the present work did find that male Comt knock-out mice, but not female knock-out mice, consumed ethanol more avidly than their wild-type littermates. This finding suggests that COMT may be one of the factors, albeit not a primary one, contributing to the risk of alcoholism. Last, we explored the effect of COMT deficiency on dorsal striatal, accumbal, and prefrontal cortical dopamine metabolism under no-net-flux conditions and under levodopa load in freely moving mice. The lack of COMT did not affect the extracellular dopamine concentrations under baseline conditions in any of the brain areas studied. In the prefrontal cortex, the dopamine levels remained high for a prolonged time after levodopa treatment in male, but not female, Comt knock-out mice. COMT deficiency induced accumulation of 3,4-dihydroxyphenylacetic acid, which increased further under levodopa load. Homovanillic acid was not detectable in Comt knock-out animals either under baseline conditions or after levodopa treatment. Taken together, the present results show that although forced chronic oral nicotine exposure affects the reinforcing properties of self-administered nicotine, it is not an addiction model in itself.
COMT seems to play a minor role in dopamine metabolism and in the development of addiction under baseline conditions, indicating that dopamine function in the brain is well-protected from perturbation. However, the role of COMT becomes more important when the dopaminergic system is challenged, such as by pharmacological manipulation.


Fluid bed granulation is a key pharmaceutical process that improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting, and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technology (PAT) encompasses physical process measurements and particle size data from a fluid bed granulator, analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase the process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule, and tablet properties. The approach in this thesis was to identify sources of variation that are mainly related to moisture. The aim was to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of a process and the end product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effect of inlet air humidity and granulation liquid feed on the temperature measurements at different locations of a fluid bed granulator system was determined. This revealed dynamic changes in the measurements and enabled identification of the optimal sites for process control.
The moisture originating from the granulation liquid and inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimize particle size. Various drying end-point indication techniques were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and resulted in the most precise estimation of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined. The feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter that describes the behaviour of material in a fluid bed was developed. The process air flow rate and turbine fan speed were used to calculate this parameter, which was compared to the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation were determined based on the fluidisation parameters. With this design space it is possible to avoid both excessive fluidisation and improper fluidisation with bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser. Both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
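A ∆T-style end-point check can be sketched roughly as follows. The thermodynamic idea: while water evaporates, the product stays cooler than the inlet air, so the temperature difference collapsing toward zero signals the drying end-point. The signal names, data, and threshold below are assumptions for illustration, not the thesis's actual implementation.

```python
# Rough sketch of a Delta-T style drying end-point check. While water
# evaporates, evaporative cooling keeps the product cooler than the
# inlet air; drying can be declared finished once the difference falls
# below a small threshold. Signals and threshold are assumed values.
def drying_endpoint_index(t_inlet, t_product, threshold=2.0):
    """First sample index where the inlet-to-product temperature
    difference falls below the threshold, or None if it never does."""
    for i, (ti, tp) in enumerate(zip(t_inlet, t_product)):
        if ti - tp < threshold:
            return i
    return None

inlet_temps   = [60.0, 60.0, 60.0, 60.0, 60.0]   # degrees C
product_temps = [40.0, 45.0, 52.0, 58.5, 59.2]
end = drying_endpoint_index(inlet_temps, product_temps)
```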


This paper considers the one-sample sign test for data obtained from general ranked set sampling when the number of observations for each rank is not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution-free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
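The statistic in question can be sketched as a rank-weighted sum of signs: each observation contributes sign(x − μ₀) scaled by a weight depending only on its within-set rank. The weights used here are illustrative placeholders, not the paper's optimal distribution-free weights.

```python
# Sketch of a weighted sign statistic for ranked set samples: each
# observation contributes sign(x - mu0) scaled by a weight that depends
# only on its within-set rank. The weights below are illustrative
# placeholders, not the optimal weights derived in the paper.
def sign(x):
    """Return -1, 0, or 1 according to the sign of x."""
    return (x > 0) - (x < 0)

def weighted_sign_stat(data, mu0, weights):
    """data: iterable of (rank, value) pairs; weights: rank -> weight."""
    return sum(weights[rank] * sign(x - mu0) for rank, x in data)
```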


Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived using Markov decision processes, to minimize the number of patients required. The minimization is carried out under constraints on the two types of error probabilities (false positive and false negative), with the Lagrange multipliers corresponding to the cost parameters for the two types of errors. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
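The Lagrangian formulation lends itself to a small backward-induction sketch: at each state (patients tested, responses seen) the design chooses accept, reject, or continue to minimize expected cost, with the error costs playing the role of the multipliers. Every parameter below (response rates, prior, costs, cap) is invented for illustration; this is not the paper's actual optimization.

```python
# Illustrative backward-induction sketch of a sequential accept/reject
# design: at state (n patients, s responses) choose accept, reject, or
# continue to minimize expected cost. The error costs act as Lagrange
# multipliers for the two error types. All parameters are invented.
from functools import lru_cache

p0, p1 = 0.2, 0.4        # response rates under null / alternative
prior = 0.5              # prior probability the agent is effective
c_fp, c_fn = 50.0, 50.0  # multipliers for false positive / negative
N_MAX = 10               # hard cap on patients per agent

def posterior_p1(n, s):
    """Posterior probability of the alternative after s responses in n."""
    like0 = p0**s * (1 - p0)**(n - s)
    like1 = p1**s * (1 - p1)**(n - s)
    w1 = prior * like1
    return w1 / (w1 + (1 - prior) * like0)

@lru_cache(maxsize=None)
def cost(n, s):
    """Minimal expected cost (patients plus error penalties) from (n, s)."""
    post = posterior_p1(n, s)
    accept = c_fp * (1 - post)   # expected cost of a false positive
    reject = c_fn * post         # expected cost of a false negative
    if n == N_MAX:
        return min(accept, reject)
    # Continuing costs one more patient plus the expected future cost.
    p_resp = post * p1 + (1 - post) * p0
    cont = 1.0 + p_resp * cost(n + 1, s + 1) + (1 - p_resp) * cost(n + 1, s)
    return min(accept, reject, cont)
```

The optimal policy is read off by checking which of the three terms attains the minimum at each reachable state.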


The primary goal of a phase I trial is to find the maximally tolerated dose (MTD) of a treatment. The MTD is usually defined in terms of a tolerable probability, q*, of toxicity. Our objective is to find the highest dose with a toxicity risk that does not exceed q*, a criterion that is often desired in designing phase I trials. This criterion differs from that of finding the dose with toxicity risk closest to q*, which is used in methods such as the continual reassessment method. We use the theory of decision processes to find optimal sequential designs that maximize the expected number of patients within the trial allocated to the highest dose with toxicity not exceeding q*, among the doses under consideration. The proposed method is very general in the sense that criteria other than the one considered here can be optimized and that optimal dose assignment can be defined in terms of patients within or outside the trial. It includes the continual reassessment method as an important special case. A numerical study indicates that the strategy compares favourably with other phase I designs.
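The allocation criterion itself is easy to sketch: among the candidate doses, take the highest one whose estimated toxicity risk does not exceed q*. The Beta posterior-mean working model and the data below are illustrative assumptions, not the paper's full decision process.

```python
# Sketch of the "highest dose with toxicity not exceeding q*" criterion.
# The Beta posterior-mean working model and the observed data are
# illustrative assumptions, not the paper's sequential design.
q_star = 0.3
# (toxicities, patients treated) observed at each dose, lowest to highest
observed = [(0, 6), (1, 6), (2, 6), (4, 6)]

def best_dose(observed, q_star, a0=0.5, b0=0.5):
    """Highest dose whose posterior-mean toxicity stays at or below q*."""
    best = None
    for d, (tox, n) in enumerate(observed):
        p_hat = (tox + a0) / (n + a0 + b0)  # Beta(a0, b0) posterior mean
        if p_hat <= q_star:
            best = d  # keep climbing while the risk remains tolerable
    return best
```

With these toy counts the estimated risks are roughly 0.07, 0.21, 0.36, and 0.64, so the second dose (index 1) is the highest tolerable one.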


The problem of optimal scheduling of the generation of a hydro-thermal power system that is faced with a shortage of energy is studied. The deterministic version of the problem is first analyzed, and the results are then extended to cases where the loads and the hydro inflows are random variables.


This paper deals with the optimal load flow problem in a fixed-head hydrothermal electric power system. Equality constraints on the volume of water available for active power generation at the hydro plants as well as inequality constraints on the reactive power generation at the voltage controlled buses are imposed. Conditions for optimal load flow are derived and a successive approximation algorithm for solving the optimal generation schedule is developed. Computer implementation of the algorithm is discussed, and the results obtained from the computer solution of test systems are presented.


Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered to be more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these using a study in soft tissue sarcoma.


Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined, constant size, and its computational cost to evaluate one generation is O(mn^2), where m is the number of objective functions and n the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work, several test functions are run with both the original NSGA-II and the VPNSGA-II algorithms; each test is timed to measure the computational cost of each trial, and the results are compared.
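The O(mn^2) term comes from the pairwise dominance comparisons inside NSGA-II's non-dominated sorting, which can be sketched minimally as follows (a generic illustration of the dominance check, not the report's VPNSGA-II code):

```python
# Minimal sketch of the pairwise dominance check behind NSGA-II's
# non-dominated sorting: every pair of solutions is compared on all m
# objectives, which is where the O(m n^2) cost per generation comes from.
def dominates(a, b):
    """a dominates b (minimization) if it is no worse on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(pop):
    """Return the non-dominated (Pareto) front of objective vectors."""
    return [p for p in pop if not any(dominates(q, p) for q in pop)]

pop = [(1, 5), (2, 2), (4, 1), (3, 3)]
front = first_front(pop)   # (3, 3) is dominated by (2, 2)
```

Shrinking the population size n directly attacks the quadratic factor in this loop, which is the motivation for the variable-population variant.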


This thesis discusses the ways in which concepts and methodology developed in evolutionary biology can be applied to the explanation and study of language change. The parallel nature of the mechanisms of biological evolution and language change is explored, along with the history of the exchange of ideas between these two disciplines. Against this background, computational methods developed in evolutionary biology are considered in terms of their applicability to the study of historical relationships between languages. Different phylogenetic methods are explained in common terminology, avoiding the technical language of statistics. The thesis is on the one hand a synthesis of earlier scientific discussion, and on the other an attempt to map out the problems of earlier approaches and to find new guidelines for the study of language change on that basis. The source material consists primarily of literature on the connections between evolutionary biology and language change, along with research articles describing applications of phylogenetic methods to language change. The thesis starts out by describing the initial development of the disciplines of evolutionary biology and historical linguistics, a process which right from the beginning can be seen to have involved an exchange of ideas concerning the mechanisms of language change and biological evolution. The historical discussion lays the foundation for the treatment of the generalised account of selection developed during recent decades. This account aims to create a theoretical framework capable of explaining both biological evolution and cultural change as selection processes acting on self-replicating entities. This thesis focusses on the capacity of the generalised account of selection to describe language change as a process of this kind. In biology, the mechanisms of evolution are seen to form populations of genetically related organisms through time.
One of the central questions explored in this thesis is whether selection theory makes it possible to picture languages as forming populations of a similar kind, and what such a perspective can offer to the understanding of language in general. In historical linguistics, the comparative method and other, complementary methods have traditionally been used to study the development of languages from a common ancestral language. Computational, quantitative methods have not become part of the central methodology of historical linguistics. After the limited popularity enjoyed by the lexicostatistical method since the 1950s faded, it is only in recent years that the computational methods of phylogenetic inference used in evolutionary biology have been applied to the study of early language history. In this thesis the possibilities offered by the traditional methodology of historical linguistics and by the new phylogenetic methods are compared. The methods are approached through the ways in which they have been applied to the Indo-European languages, the language family most thoroughly investigated with both the traditional and the phylogenetic methods. The problems of these applications, along with the optimal form of the linguistic data used in these methods, are explored in the thesis. The mechanisms of biological evolution are seen in the thesis as parallel to the mechanisms of language change only in a limited sense, yet sufficiently so that the development of a generalised account of selection is deemed potentially fruitful for understanding language change. These similarities are also seen to support the validity of using phylogenetic methods in the study of language history, although the use of linguistic data and the models of language change employed by these methods are seen to await further development.


Systems of learning automata have been studied by various researchers to evolve useful strategies for decision making under uncertainty. Considered in this paper is a class of hierarchical systems of learning automata where the system gets responses from its environment at each level of the hierarchy. A classification of such sequential learning tasks based on the complexity of the learning problem is presented. It is shown that none of the existing algorithms performs adequately in the most general type of hierarchical problem. An algorithm for learning the globally optimal path in this general setting is presented, and its convergence is established. This algorithm needs information transfer from the lower levels to the higher levels. Using the methodology of estimator algorithms, this model can be generalized to accommodate other kinds of hierarchical learning tasks.
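The building block of such hierarchical systems, a single learning automaton, can be sketched with the classic linear reward-inaction (L_R-I) update; the learning rate here is an illustrative choice, and this is the textbook scheme rather than the paper's hierarchical algorithm.

```python
# Sketch of a single learning automaton with the classic linear
# reward-inaction (L_R-I) update, the building block of hierarchical
# systems of automata. The learning rate is an illustrative choice.
def lri_update(probs, action, reward, lr=0.1):
    """On reward (reward == 1), move probability mass toward the chosen
    action; on penalty (reward == 0), leave the probabilities unchanged."""
    if not reward:
        return list(probs)
    return [p + lr * (1 - p) if i == action else p * (1 - lr)
            for i, p in enumerate(probs)]
```

The update preserves the total probability mass, so repeated rewarded plays of an action drive its probability toward 1 while never acting on penalties, which is what gives L_R-I its absolute expediency property.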


Australia is the world’s third largest exporter of raw sugar after Brazil and Thailand, with around $2.0 billion in export earnings. Transport systems play a vital role in the raw sugar production process by transporting the sugarcane crop between farms and mills. In 2013, 87 per cent of sugarcane was transported to mills by cane railway. The total cost of sugarcane transport operations is very high. Over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. A cane railway network mainly involves single track sections and multiple track sections used as passing loops or sidings. The cane railway system performs two main tasks: delivering empty bins from the mill to the sidings for filling by harvesters; and collecting the full bins of cane from the sidings and transporting them to the mill. A typical locomotive run involves an empty train (locomotive and empty bins) departing from the mill, traversing some track sections and delivering bins at specified sidings. The locomotive then returns to the mill, traversing the same track sections in reverse order, collecting full bins along the way. In practice, a single track section can be occupied by only one train at a time, while more than one train can use a passing loop (parallel sections) at a time. The sugarcane transport system is a complex system that includes a large number of variables and elements. These elements work together to achieve the main system objectives of satisfying both mill and harvester requirements and improving the efficiency of the system in terms of low overall costs. These costs include delay, congestion, operating and maintenance costs. An effective cane rail scheduler will assist the traffic officers at the mill to keep a continuous supply of empty bins to harvesters and full bins to the mill at minimum cost.
This paper addresses the cane rail scheduling problem under rail siding capacity constraints, where limited and unlimited siding capacities were investigated with different numbers of trains and different train speeds. The total operating time as a function of the number of trains, train shifts, and a limited number of cane bins has been calculated for the different siding capacity constraints. A mathematical programming approach has been used to develop a new scheduler for the cane rail transport system under limited and unlimited constraints. The new scheduler aims to reduce the total costs associated with the cane rail transport system, which are a function of the number of bins and the total operating costs. The proposed metaheuristic techniques have been used to find near-optimal solutions to the cane rail scheduling problem and to provide different possible solutions to avoid becoming stuck in local optima. A numerical investigation and sensitivity analysis are presented to demonstrate that high-quality solutions for large-scale cane rail scheduling problems are obtainable in reasonable time.

Keywords: cane railway, mathematical programming, capacity, metaheuristics
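The local-optima-escaping behaviour of such metaheuristics can be sketched with a toy simulated-annealing search over run orderings. The instance and cost function below are invented for illustration; the paper's actual scheduler and objective are far richer.

```python
# Toy simulated-annealing sketch of the kind of metaheuristic search
# described above: repeatedly swap two locomotive runs in the schedule
# and occasionally accept a worse schedule, so the search can escape
# local optima. The instance and cost function are invented.
import math
import random

random.seed(0)
runs = list(range(6))              # six locomotive runs to be ordered

def cost(order):
    """Toy cost: penalty for each run scheduled away from its ideal slot."""
    return sum(abs(r - i) for i, r in enumerate(order))

def anneal(order, t0=5.0, cooling=0.95, steps=500):
    best = list(order)
    cur = list(order)
    t = t0
    for _ in range(steps):
        cand = list(cur)
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]          # swap two runs
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand                               # accept the move
        if cost(cur) < cost(best):
            best = list(cur)
        t *= cooling                                 # cool the schedule
    return best

start = random.sample(runs, len(runs))               # random initial order
schedule = anneal(start)
```

Early on, the high temperature lets the search accept worse swaps and wander out of local optima; as the temperature cools it behaves increasingly greedily.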