987 results for OPTIMAL FAT LOADS
Abstract:
We consider the development of statistical models for predicting the constituent concentration of riverine pollutants, which is a key step in load estimation from frequent flow rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts past flux according to the time elapsed, giving more weight to more recent fluxes. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the constituent compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient; the R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces estimates of the total sediment loads with biases ranging from -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load arises because the additional predictors greatly improve the predictability of concentration.
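To make the ADF idea concrete, the sketch below (a minimal illustration, not the authors' exact formulation) computes an exponentially discounted average of past flow and selects the discount factor by grid search over the adjusted R² of a simple log-log regression of concentration on flow and ADF; the EWMA form of the ADF, the regression specification and all variable names are assumptions.

```python
import numpy as np

def average_discounted_flow(flow, delta):
    """Exponentially discounted average of past flow; larger delta gives a longer memory."""
    adf = np.empty(len(flow), dtype=float)
    adf[0] = flow[0]
    for t in range(1, len(flow)):
        adf[t] = delta * adf[t - 1] + (1 - delta) * flow[t]
    return adf

def adjusted_r2(y, X):
    """Adjusted R^2 of an ordinary least-squares fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    n, p = X1.shape
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

def choose_discount(conc, flow, grid=np.linspace(0.01, 0.99, 99)):
    """Grid search for the discount factor that maximises the adjusted R^2."""
    scores = [adjusted_r2(np.log(conc),
                          np.column_stack([np.log(flow),
                                           np.log(average_discounted_flow(flow, d))]))
              for d in grid]
    return grid[int(np.argmax(scores))]
```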
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall), and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a dataset from the Burdekin River, consisting of total suspended sediment (TSS), nitrogen oxides (NOx), and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin River, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
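The following sketch illustrates, under assumptions, the general shape of such a rating-curve load estimator: log-concentration is regressed on log-flow plus user-supplied extra predictors (e.g. discounted flow, a rising/falling indicator), predictions are bias-corrected with Duan's smearing factor, and the predicted flux is summed over all days. Function and variable names are illustrative; this is not the paper's exact estimator and it omits the standard-error calculation.

```python
import numpy as np

def estimate_load(flow, extra, sample_idx, conc_sampled):
    """flow: daily flow series; extra: daily extra predictors (n_days x k);
    sample_idx: indices of days with concentration samples; conc_sampled: those concentrations."""
    X_all = np.column_stack([np.ones(len(flow)), np.log(flow), extra])
    beta, *_ = np.linalg.lstsq(X_all[sample_idx], np.log(conc_sampled), rcond=None)
    resid = np.log(conc_sampled) - X_all[sample_idx] @ beta
    smear = np.mean(np.exp(resid))            # Duan's smearing bias correction
    conc_hat = np.exp(X_all @ beta) * smear   # predicted concentration on every day
    return np.sum(conc_hat * flow)            # total load (concentration x flow, summed daily)
```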
Abstract:
This paper considers the one-sample sign test for data obtained from general ranked set sampling when the number of observations for each rank is not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution-free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
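A minimal sketch of the idea, under assumptions: for a ranked-set sample with set size k, the probability that the r-th order statistic exceeds the median depends only on r under the null, so a weighted sign statistic can be centred and standardized without knowing the underlying distribution. The weights are taken as an input here; the paper's optimal rank-dependent weights are not reproduced.

```python
import numpy as np
from scipy.stats import binom, norm

def weighted_sign_test(x, ranks, k, theta0, weights):
    """x: observations; ranks: within-set ranks (1..k); k: set size;
    theta0: hypothesised median; weights: one weight per rank (length k)."""
    x, ranks, weights = np.asarray(x), np.asarray(ranks), np.asarray(weights)
    p_r = binom.cdf(np.arange(k), k, 0.5)          # P(X_(r) > median) under H0, r = 1..k
    p, w = p_r[ranks - 1], weights[ranks - 1]
    s = np.sum(w * ((x > theta0).astype(float) - p))
    z = s / np.sqrt(np.sum(w**2 * p * (1 - p)))
    return z, 2 * norm.sf(abs(z))                  # statistic and two-sided p-value
```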
Abstract:
Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived using Markov decision processes, that minimizes the number of patients required. The minimization is carried out under constraints on the two types of error probabilities (false positive and false negative), with the Lagrangian multipliers corresponding to the cost parameters for the two types of errors. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
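A simplified sketch of the backward-induction idea (a Bayesian-style stand-in, not the paper's exact formulation): at each state (patients tested, responses seen) the expected remaining cost of accepting, rejecting, or testing one more patient is compared, with Lagrange multipliers penalising the two error types. The response rates, horizon, prior weight and multipliers are assumptions.

```python
import numpy as np

p0, p1 = 0.2, 0.4        # hypothetical response rates: ineffective vs promising agent
N_MAX = 30               # maximum number of patients per agent
LAM0, LAM1 = 50.0, 50.0  # Lagrange multipliers for false positive / false negative
PRIOR1 = 0.5             # prior weight on the 'promising' hypothesis

def post1(n, s):
    """Posterior weight of the 'promising' hypothesis after s responses in n patients."""
    a = PRIOR1 * p1**s * (1 - p1) ** (n - s)
    b = (1 - PRIOR1) * p0**s * (1 - p0) ** (n - s)
    return a / (a + b)

# V[n][s] = minimal expected remaining cost; act[n][s] = optimal decision at that state.
V = [np.zeros(n + 1) for n in range(N_MAX + 1)]
act = [[None] * (n + 1) for n in range(N_MAX + 1)]
for n in range(N_MAX, -1, -1):
    for s in range(n + 1):
        q = post1(n, s)
        opts = {"accept": LAM0 * (1 - q),    # penalty if the agent is actually ineffective
                "reject": LAM1 * q}          # penalty if the agent is actually promising
        if n < N_MAX:
            pr = q * p1 + (1 - q) * p0       # predictive probability of a response
            opts["continue"] = 1.0 + pr * V[n + 1][s + 1] + (1 - pr) * V[n + 1][s]
        best = min(opts, key=opts.get)
        V[n][s], act[n][s] = opts[best], best

print("decision after 10 patients with 4 responses:", act[10][4])
```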
Abstract:
The bentiromide test was evaluated using plasma p-aminobenzoic acid as an indirect test of pancreatic insufficiency in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid; and (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic-insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids; and (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes and then hourly for 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated control from pancreatic-insufficient subjects, owing to a higher peak plasma p-aminobenzoic acid level in controls, but sensitivity and specificity remained poor. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects, probably caused by retarded gastric emptying. However, in the pancreatic-insufficient subjects, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic-insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but a higher dose administered with a liquid meal is recommended.
Abstract:
This paper deals with the optimal load flow problem in a fixed-head hydrothermal electric power system. Equality constraints on the volume of water available for active power generation at the hydro plants as well as inequality constraints on the reactive power generation at the voltage controlled buses are imposed. Conditions for optimal load flow are derived and a successive approximation algorithm for solving the optimal generation schedule is developed. Computer implementation of the algorithm is discussed, and the results obtained from the computer solution of test systems are presented.
Abstract:
Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered to be more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these using a study in soft tissue sarcoma.
Abstract:
Joints are primary sources of weakness in structures. Pin joints are very common and are used where periodic disassembly of components is needed. A circular pin in a circular hole in an infinitely large plate is an abstraction of such a pin joint. A two-dimensional plane-stress analysis of such a configuration subjected to pin-bearing and/or biaxial plate loading is carried out here. The pin is assumed to be rigid compared to the plate material. For pin loading, the reactive stresses at the edges of the infinite plate tend to zero, although their integral over the external boundary equals the pin load. The pin-hole interface is unbonded, so beyond some load levels the plate separates from the pin, and the extent of separation is a non-linear function of the load level. The problem is solved by an inverse technique in which the extent of contact is specified and the causative loads are evaluated directly. When a combined load is acting, specifying the separation-contact zone generally requires two parameters (angles). The present report deals with analysing such a situation in metallic (or isotropic) plates. Numerical results are provided for parametric representation and the methodology is demonstrated.
Abstract:
Systems of learning automata have been studied by various researchers to evolve useful strategies for decision making under uncertainty. Considered in this paper is a class of hierarchical systems of learning automata where the system receives responses from its environment at each level of the hierarchy. A classification of such sequential learning tasks based on the complexity of the learning problem is presented. It is shown that none of the existing algorithms can handle the most general type of hierarchical problem. An algorithm for learning the globally optimal path in this general setting is presented, and its convergence is established. This algorithm requires information transfer from the lower levels to the higher levels. Using the methodology of estimator algorithms, this model can be generalized to accommodate other kinds of hierarchical learning tasks.
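As background only (this is the basic single-automaton update, not the paper's hierarchical algorithm or its estimator-based generalization), here is a sketch of the linear reward-inaction (L_RI) scheme: on a favourable environment response, the probability of the chosen action is reinforced; on an unfavourable response, the probabilities are left unchanged. The environment's reward probabilities below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def l_ri_step(p, reward_prob, step=0.05):
    """One interaction: pick an action from p, observe a Bernoulli reward, update p."""
    a = rng.choice(len(p), p=p)
    if rng.random() < reward_prob[a]:                 # favourable response: reinforce action a
        p = p + step * (np.eye(len(p))[a] - p)
    return p                                          # unfavourable response: p unchanged

p = np.full(3, 1 / 3)                     # initial action probabilities
reward_prob = np.array([0.2, 0.8, 0.5])   # hypothetical environment
for _ in range(3000):
    p = l_ri_step(p, reward_prob)
print(p)   # probability mass should concentrate on the best action (index 1)
```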
Abstract:
Background: Segmental biomechanics of the scoliotic spine are important since the overall spinal deformity comprises the cumulative coronal and axial rotations of individual joints. This study investigates the coronal plane segmental biomechanics of adolescent idiopathic scoliosis patients in response to physiologically relevant axial compression. Methods: Individual spinal joint compliance in the coronal plane was measured for a series of 15 idiopathic scoliosis patients using axially loaded magnetic resonance imaging. Each patient was first imaged in the supine position with no axial load, and then again following application of an axial compressive load. Coronal plane disc wedge angles in the unloaded and loaded configurations were measured. Joint moments exerted by the axial compressive load were used to derive estimates of individual joint compliance. Findings: The mean standing major Cobb angle for this patient series was 46°. Mean intra-observer measurement error for endplate inclination was 1.6°. Following loading, initially highly wedged discs demonstrated a smaller change in wedge angle than less wedged discs at certain spinal levels (+2, +1, −2 relative to the apex; p < 0.05). Highly wedged discs were observed near the apex of the curve, which corresponded to lower joint compliance in the apical region. Interpretation: While individual patients exhibit substantial variability in disc wedge angles and joint compliance, overall there is a pattern of increased disc wedging near the curve apex and reduced joint compliance in this region. Approaches such as this can provide valuable biomechanical data on the in vivo biomechanics of the scoliotic spine, for analysis of deformity progression and surgical planning.
Abstract:
Australia is the world's third largest exporter of raw sugar after Brazil and Thailand, with around $2.0 billion in export earnings. Transport systems play a vital role in the raw sugar production process by transporting the sugarcane crop between farms and mills. In 2013, 87 per cent of sugarcane was transported to mills by cane railway. The total cost of sugarcane transport operations is very high; over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. A cane railway network mainly involves single track sections and multiple track sections used as passing loops or sidings. The cane railway system performs two main tasks: delivering empty bins from the mill to the sidings for filling by harvesters, and collecting the full bins of cane from the sidings and transporting them to the mill. A typical locomotive run involves an empty train (locomotive and empty bins) departing from the mill, traversing some track sections and delivering bins at specified sidings. The locomotive then returns to the mill, traversing the same track sections in reverse order and collecting full bins along the way. In practice, a single track section can be occupied by only one train at a time, while more than one train can use a passing loop (parallel sections) at a time. The sugarcane transport system is a complex system that includes a large number of variables and elements. These elements work together to achieve the main system objectives of satisfying both mill and harvester requirements and improving the efficiency of the system in terms of low overall costs. These costs include delay, congestion, operating and maintenance costs. An effective cane rail scheduler will assist the traffic officers at the mill to keep a continuous supply of empty bins to harvesters and full bins to the mill at minimum cost. This paper addresses the cane rail scheduling problem under rail siding capacity constraints; both limited and unlimited siding capacities are investigated with different numbers of trains and different train speeds. The total operating time as a function of the number of trains, train shifts and a limited number of cane bins has been calculated for the different siding capacity constraints. A mathematical programming approach has been used to develop a new scheduler for the cane rail transport system under limited and unlimited siding capacity constraints. The new scheduler aims to reduce the total costs associated with the cane rail transport system, which are a function of the number of bins and total operating costs. The proposed metaheuristic techniques have been used to find near-optimal solutions to the cane rail scheduling problem and to provide different possible solutions, avoiding entrapment in local optima. A numerical investigation and sensitivity analysis study is presented to demonstrate that high quality solutions for large scale cane rail scheduling problems are obtainable in a reasonable time.
Keywords: Cane railway, mathematical programming, capacity, metaheuristics
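The skeleton below illustrates, purely as an assumed example, the metaheuristic side of such a scheduler: a simulated-annealing search over the order in which locomotive runs are dispatched, with a user-supplied cost function standing in for the real evaluation of single-track conflicts, siding capacities and bin delays. It is not the scheduler developed in the paper.

```python
import math
import random

def anneal(runs, cost, n_iter=20000, t0=1.0, alpha=0.9995, seed=1):
    """Simulated annealing over a dispatch order of locomotive runs.
    `cost(order)` is a placeholder for a full schedule evaluation."""
    random.seed(seed)
    order = list(runs)
    current = best = cost(order)
    best_order, t = list(order), t0
    for _ in range(n_iter):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]          # propose: swap two runs
        new = cost(order)
        if new <= current or random.random() < math.exp((current - new) / t):
            current = new                                # accept (possibly uphill) move
            if new < best:
                best, best_order = new, list(order)
        else:
            order[i], order[j] = order[j], order[i]      # reject: undo the swap
        t *= alpha                                       # geometric cooling
    return best_order, best
```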
Abstract:
Typically only a limited number of consortia are able to competitively bid for Public Private Partnership (PPP) projects. Consequently, this may lead to oligopoly pricing constraints and ineffective competition, thus engendering ex ante market failure. In addressing this issue, this paper aims to determine the optimal number of bidders required to ensure that a healthy level of competition is available to procure major infrastructure projects. The Structure-Conduct-Performance (SCP) paradigm, Game Theory, Auction Theory and Transaction Cost Economics are reviewed, discussed and used to determine an optimal level of competition for major infrastructure procurement that prevents market failure ex ante (lack of competition) and market failure ex post (due to asymmetric lock-in).
Abstract:
To remain competitive, many agricultural systems are now being run along business lines. Systems methodologies are being incorporated, and here evolutionary computation is a valuable tool for identifying more profitable or sustainable solutions. However, agricultural models typically pose some of the more challenging problems for optimisation. This chapter outlines these problems, and then presents a series of three case studies demonstrating how they can be overcome in practice. Firstly, increasingly complex models of Australian livestock enterprises show that evolutionary computation is the only viable optimisation method for these large and difficult problems. On-going research is taking a notably efficient and robust variant, differential evolution, out into real-world systems. Next, models of cropping systems in Australia demonstrate the challenge of dealing with competing objectives, namely maximising farm profit whilst minimising resource degradation. Pareto methods are used to illustrate this trade-off, and these results have proved to be most useful for farm managers in this industry. Finally, land-use planning in the Netherlands demonstrates the size and spatial complexity of real-world problems. Here, GIS-based optimisation techniques are integrated with Pareto methods, producing better solutions which were acceptable to the competing organizations. These three studies all show that evolutionary computation remains the only feasible method for the optimisation of large, complex agricultural problems. An extra benefit is that the resultant population of candidate solutions illustrates trade-offs, and this leads to more informed discussions and better education of the industry decision-makers.
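For readers unfamiliar with the method, here is a minimal sketch of the classic DE/rand/1/bin differential evolution scheme (the chapter's specific variant and problem encodings are not reproduced); the bounds, population size, control parameters and the toy objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_evolution(f, bounds, pop_size=30, n_gen=200, F=0.7, CR=0.9):
    """Classic DE/rand/1/bin: differential mutation, binomial crossover, greedy selection."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                        # keep the better of trial and target
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

# Hypothetical usage: a toy quadratic standing in for a farm-profit objective.
x_best, f_best = differential_evolution(lambda x: float(np.sum((x - 3.0) ** 2)),
                                         [(-10, 10)] * 4)
```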
Abstract:
We consider the problem of estimating the optimal parameter trajectory over a finite time interval in a parameterized stochastic differential equation (SDE), and propose a simulation-based algorithm for this purpose. Towards this end, we consider a discretization of the SDE over finite time instants and reformulate the problem as one of finding an optimal parameter at each of these instants. A stochastic approximation algorithm based on the smoothed functional technique is adapted to this setting for finding the optimal parameter trajectory. A proof of convergence of the algorithm is presented and results of numerical experiments over two different settings are shown. The algorithm is seen to exhibit good performance. We also present extensions of our framework to the case of finding optimal parameterized feedback policies for controlled SDE and present numerical results in this scenario as well.
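A minimal sketch, under assumptions, of the smoothed-functional idea for a static parameter (the paper treats a whole parameter trajectory for a discretised SDE, which is not reproduced here): the gradient of the expected cost is estimated from Gaussian perturbations of the parameter via a two-sided difference, and a stochastic-approximation step moves the parameter downhill. The simulator and all constants below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def sf_step(theta, simulate_cost, beta=0.1, step=0.01):
    """One smoothed-functional update: Gaussian perturbation, two-sided gradient estimate."""
    eta = rng.standard_normal(theta.shape)
    g_hat = eta * (simulate_cost(theta + beta * eta)
                   - simulate_cost(theta - beta * eta)) / (2.0 * beta)
    return theta - step * g_hat            # stochastic approximation (descent) step

# Hypothetical usage: a noisy quadratic cost standing in for the simulated SDE objective.
noisy_cost = lambda th: float(np.sum((th - 1.0) ** 2) + 0.1 * rng.standard_normal())
theta = np.zeros(5)
for _ in range(5000):
    theta = sf_step(theta, noisy_cost)
print(theta)   # should approach the minimiser (all ones)
```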
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the acoustic noise produced by fMRI. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on the adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, the fMRI noise had no effect on MMN and P3a, whereas it delayed and suppressed N1 and the exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.