943 results for continuous performance


Relevance:

30.00%

Publisher:

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method.

Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure.

Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
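As a hedged illustration of the core idea (a sketch under assumptions, not the thesis' exact algorithm), the following uses scikit-learn's GraphicalLasso to fit a sparse ℓ1-regularized Gaussian model to the selected individuals in each generation of a simple continuous EDA; the objective function, logarithmic population-sizing constant and regularization strength are all placeholders.

```python
# A minimal sketch of an EDA whose Gaussian model is estimated with
# l1-regularization (graphical lasso); all constants are assumptions.
import numpy as np
from sklearn.covariance import GraphicalLasso

def sphere(x):                        # toy objective to minimize
    return np.sum(x ** 2, axis=1)

def regularized_gaussian_eda(n=20, gens=40, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop_size = int(20 * np.log(n))    # logarithmic population scaling
    pop = rng.uniform(-5, 5, size=(pop_size, n))
    for _ in range(gens):
        sel = pop[np.argsort(sphere(pop))[: pop_size // 2]]  # truncation selection
        model = GraphicalLasso(alpha=alpha).fit(sel)         # sparse precision matrix
        pop = rng.multivariate_normal(model.location_, model.covariance_,
                                      size=pop_size)
    return pop[np.argmin(sphere(pop))]

best = regularized_gaussian_eda()
```

With fewer selected individuals than variables, the empirical covariance is ill-conditioned; the ℓ1 penalty is what keeps the model estimation feasible, which is the motivation the abstract describes.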

Relevance:

30.00%

Publisher:

Abstract:

This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root-mean-square error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90 ± 1.37 mg/dl for the original profiles.
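As a hedged sketch of the smoothing step (assumed window length and smoothing factor, not the paper's exact method), a causal cubic spline can be realised by fitting a smoothing spline only to the most recent samples and keeping the latest fitted value:

```python
# Causal cubic-spline smoothing sketch: at each step, fit a cubic smoothing
# spline to the trailing window and keep only its endpoint value, so no
# future samples are used. Window and s are tuning assumptions.
import numpy as np
from scipy.interpolate import UnivariateSpline

def causal_spline_smooth(t, y, window=12, s=800.0):
    out = np.copy(y)                  # first `window` samples left unsmoothed
    for i in range(window, len(y)):
        ti, yi = t[i - window : i + 1], y[i - window : i + 1]
        spline = UnivariateSpline(ti, yi, k=3, s=s)  # cubic smoothing spline
        out[i] = spline(t[i])                        # causal: endpoint only
    return out

# Example on noisy 5-minute CGM-like predictions
t = np.arange(0, 300, 5, dtype=float)
y = 120 + 40 * np.sin(t / 60) + np.random.default_rng(0).normal(0, 8, t.size)
smoothed = causal_spline_smooth(t, y)
```

The trade-off the abstract measures follows directly: a larger smoothing factor lowers HFCrms but increases the mean delay MD.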

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to build a data set of productive performance and production factor data for growing-finishing (GF) pigs in Spain, in order to produce a representative and reliable description of the traits of the Spanish growing-finishing pig industry. Data from 764 batches from 452 farms belonging to nine companies (1,157,212 pigs) were collected between 2008 and 2010 through a survey comprising five parts: general, facilities, feeding, health status and performance. Most studied farms had only GF pigs on their facilities (94.7%), produced 'industrial' pigs (86.7%), had entire males and females (59.5%) and Pietrain-sired pigs (70.0%), housed between 13 and 20 pigs per pen (87.2%), and had at least 50% slatted floor (70%), single-space dry feeders (54.0%), nipple drinkers (88.7%) and automatic ventilation systems (71.2%). Three feeding phases were used on 75.0% of the farms, mainly with pelleted diets (91.0%); 61.3% performed three or more antibiotic treatments and 36.5% obtained water from the public supply. The continuous variables studied had the following average values: number of pigs placed per batch, 1,515 pigs; initial and final body weight, 19.0 and 108 kg; length of the GF period, 136 days; culling rate, 1.4%; barn occupation, 99.7%; feed intake per pig and fattening cycle, 244 kg; daily gain, 0.657 kg; feed conversion ratio, 2.77 kg kg⁻¹; and mortality rate, 4.3%. These data reflect the practical situation of Spanish growing-finishing pig production and may contribute to the development of new strategies to improve the productive and economic efficiency of GF pig units.

Relevance:

30.00%

Publisher:

Abstract:

The performance of an amperometric biosensor, consisting of a subcutaneously implanted miniature (0.29 mm diameter, 5 × 10⁻⁴ cm² mass-transporting area), 90 s 10–90% rise/decay time glucose electrode, and an on-the-skin electrocardiogram Ag/AgCl electrode, was tested in an unconstrained, naturally diabetic, brittle, type I, insulin-dependent chimpanzee. The chimpanzee was trained to wear on her wrist a small electronic package and to present her heel for capillary blood samples. In five sets of measurements, averaging 5 h each, 82 capillary blood samples were assayed, their concentrations ranging from 35 to 400 mg/dl. The current readings were translated to blood glucose concentration by assaying, at t = 1 h, one blood sample for each implanted sensor. The rms error in the correlation between the sensor-measured glucose concentration and that in capillary blood was 17.2%, 4.9% above the intrinsic 12.3% rms error of the Accu-Chek II reference, through which the illness of the chimpanzee was routinely managed. Linear regression analysis of the data points taken at t > 1 h yielded the relationship (Accu-Chek) = 0.98 × (implanted sensor) + 4.2 mg/dl, r² = 0.94. The capillary blood and the subcutaneous glucose concentrations were statistically indistinguishable when the rate of change was less than 1 mg/(dl·min). However, when the rate of decline exceeded 1.8 mg/(dl·min) after insulin injection, the subcutaneous glucose concentration was transiently higher.
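The one-sample translation of current to concentration described above is a one-point calibration; a minimal sketch under assumptions (hypothetical names, a proportional current-glucose model, made-up numbers) looks like this:

```python
# One-point calibration sketch: a single capillary blood assay at t = 1 h
# fixes the scale between sensor current and glucose. The proportional
# model and all values here are illustrative assumptions.
import numpy as np

def one_point_calibration(current_nA, t_min, ref_glucose_mgdl, t_ref=60.0):
    """Scale raw sensor current so it matches one reference glucose value."""
    i_ref = np.interp(t_ref, t_min, current_nA)   # current at calibration time
    k = ref_glucose_mgdl / i_ref                  # mg/dl per nA, one-point fit
    return k * current_nA

t = np.arange(0, 300, 1.0)                        # minutes
current = 2.0 + 0.01 * t                          # hypothetical sensor current
glucose = one_point_calibration(current, t, ref_glucose_mgdl=120.0)
```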

Relevance:

30.00%

Publisher:

Abstract:

ALICE is one of the four major experiments at the LHC particle accelerator, installed at CERN, the European laboratory. The management committee of the LHC accelerator has just approved a programme update for this experiment. Among the upgrades planned for the coming years of the ALICE experiment are improving the resolution and tracking efficiency while maintaining the excellent particle identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, it is necessary to upgrade the Time Projection Chamber (TPC) and Muon tracking (MCH) detectors, modifying the read-out electronics, which is not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC will support both positive and negative polarities, with 32 channels per chip and continuous data readout, with smaller power consumption than the previous versions. This work covers the design, fabrication and experimental testing of a readout front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity. The new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. In order to integrate 32 channels per chip, the design of the proposed front-end requires small area and low power consumption, but at the same time low noise. To this end, a new noise and PSRR (Power Supply Rejection Ratio) improvement technique for the CSA design, with no power or area impact, is proposed in this work. The analysis and equations of the proposed circuit are presented, and were verified by electrical simulations and by experimental tests of a produced chip containing 5 channels of the designed front-end. The measured equivalent noise charge was < 550 e⁻ for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF. The total core area of the front-end was 2300 µm × 150 µm, and the measured total power consumption was 9.1 mW per channel.
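To make the shaping stage concrete, here is a hedged sketch of a generic CR-(RC)^n semi-Gaussian shaper response to an input charge (a textbook topology with assumed order and parameters, not the actual SAMPA circuit); the peaking time and the 30 mV/fC sensitivity are the configurable quantities mentioned above.

```python
# CR-(RC)^n semi-Gaussian shaper sketch: the CSA turns the input charge
# into a voltage step, so the output pulse is the step response of
# H(s) = s / (1 + s*tau)^(n+1). Topology and parameters are assumptions.
import numpy as np
from scipy import signal

def shaper_charge_response(n=4, peaking_time_ns=160.0, sens_mv_per_fc=30.0):
    """Output pulse (t in ns, v in mV per fC) of a CR-(RC)^n shaper."""
    # Work in units of tau for numerical stability; tau = Tp / n.
    den = np.polynomial.polynomial.polypow([1.0, 1.0], n + 1)[::-1]
    sys = signal.TransferFunction([1.0, 0.0], den)      # H(s) = s/(1+s)^(n+1)
    t, v = signal.step(sys, T=np.linspace(0.0, 10.0, 2000))
    tau_ns = peaking_time_ns / n       # peak of t^n * exp(-t) is at t = n*tau
    return t * tau_ns, v / np.max(v) * sens_mv_per_fc

t, v = shaper_charge_response()
print(f"peaking time ~ {t[np.argmax(v)]:.0f} ns")       # ~160 ns by construction
```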

Relevance:

30.00%

Publisher:

Abstract:

Aerobic Gymnastics is the ability to perform complex movement patterns derived from traditional aerobic exercise, continuously, at high intensity, and in perfect integration with music. The sport is performed under aerobic/anaerobic lactacid conditions and requires the execution of complex movements derived from traditional aerobic exercise, integrated with difficulty elements performed at a high technical level. The name "aerobic" is itself somewhat inaccurate, because Aerobic Gymnastics does not rely on aerobic work alone during competition: routines last between 1'30" and 1'45" at a high rhythm. Competitive Aerobics exploits the basic movements and coordination schemes of amateur Aerobics, yet it is so much more intense that it requires a completely different mix of energy systems. Owing to the complexity and speed with which the technical elements of Aerobic Gymnastics are performed, the introduction of video analysis is essential for a qualitative and quantitative evaluation of athletes' performance during training. Performance analysis allows an accurate description and explanation of the evolution and dynamics of sporting movement. Notational analysis is used by technicians to obtain an objective analysis of performance: tactics, technique and individual movements can be analyzed to help coaches and athletes re-evaluate their performance and gain an advantage in competition. The following experimental work is intended as a starting point for analyzing the performance of athletes objectively, not only during competitions but especially during training. It is therefore advisable to introduce video analysis and notational analysis for a more quantitative and qualitative examination of technical movements. The goal is to improve both the athlete's technique and the coach's teaching.

Relevance:

30.00%

Publisher:

Abstract:

We reviewed the use of advanced display technologies for monitoring in anesthesia. Researchers are investigating displays that integrate information and that, in some cases, also deliver the results continuously to the anesthesiologist. Integrated visual displays reveal higher-order properties of patient state and speed up responding to events, but their benefits under an intensely time-shared load are unknown. Head-mounted displays seem to shorten the time to respond to changes, but their impact on peripheral awareness and attention is unknown. Continuous auditory displays extending pulse oximetry seem to shorten response times and improve the ability to time-share other tasks, but their integration into the already noisy operative environment still needs to be tested. We reviewed the advantages and disadvantages of the three approaches, drawing on findings from other fields, such as aviation, to suggest outcomes where there are still no results for the anesthesia context. Proving that advanced patient monitoring displays improve patient outcomes is difficult, and a more realistic goal is probably to prove that such displays lead to better situational awareness, earlier responding and less workload, all of which keep anesthesia practice away from the outer boundaries of safe operation.

Relevance:

30.00%

Publisher:

Abstract:

Quantile computation has many applications, including data mining and financial data analysis. It has been shown that an ε-approximate summary can be maintained so that, given a quantile query (φ, ε), the data item at rank ⌈φN⌉ may be approximately obtained within rank error precision εN over all N data items in a data stream or in a sliding window. However, scalable online processing of massive continuous quantile queries with different φ and ε poses a new challenge because the summary is continuously updated with new arrivals of data items. In this paper, we first aim to dramatically reduce the number of distinct query results by grouping a set of different queries into a cluster so that they can be processed virtually as a single query while the precision requirements of users are retained. Second, we aim to minimize the total query processing costs. Efficient algorithms are developed to minimize the total number of times clusters are reprocessed and to produce the minimum number of clusters, respectively. The techniques are extended to maintain near-optimal clustering when queries are registered and removed in an arbitrary fashion against whole data streams or sliding windows. In addition to theoretical analysis, our performance study indicates that the proposed techniques are indeed scalable with respect to the number of input queries as well as the number of items and the item arrival rate in a data stream.
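As a hedged sketch of the clustering idea (not the paper's exact algorithm): a query (φ, ε) accepts any item whose rank lies in [(φ−ε)N, (φ+ε)N], so queries whose rank intervals share a common point can be answered as one. For a fixed query set, greedy stabbing by right endpoint gives the minimum number of clusters:

```python
# Group quantile queries so each cluster can be answered by a single rank.
# Classic greedy interval stabbing, sketched here as an illustration of the
# clustering step; the online registration/removal handling is omitted.
def cluster_quantile_queries(queries):
    """queries: list of (phi, eps); returns clusters of query indices."""
    order = sorted(range(len(queries)),
                   key=lambda i: queries[i][0] + queries[i][1])  # right ends
    clusters, current, right = [], [], None
    for i in order:
        phi, eps = queries[i]
        if right is None or phi - eps > right:  # interval misses stab point
            current = [i]
            clusters.append(current)
            right = phi + eps                   # new cluster's common point
        else:
            current.append(i)                   # same answer serves this query
    return clusters

print(cluster_quantile_queries([(0.5, 0.01), (0.51, 0.02), (0.9, 0.01)]))
# -> [[0, 1], [2]]: the first two queries can be processed as one
```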

Relevance:

30.00%

Publisher:

Abstract:

Separate treatment of dewatering liquor from anaerobic sludge digestion significantly reduces the nitrogen load of the main stream and improves overall nitrogen elimination. Such ammonium-rich wastewater is particularly suited to treatment by high-rate processes, which achieve rapid elimination of nitrogen with a minimal COD requirement. Processes whereby ammonium is oxidised to nitrite only (nitritation), followed by denitritation with carbon addition, can achieve this. Nitrogen removal by nitritation/denitritation was optimised using a novel SBR operation with continuous dewatering liquor addition. Efficient and robust nitrogen elimination was obtained at a total hydraulic retention time of 1 day via the nitrite pathway. Around 85-90% nitrogen removal was achieved at an ammonium loading rate of 1.2 g NH4+-N m⁻³ d⁻¹. Ethanol was used as the electron donor for denitritation at a ratio of 2.2 g COD g⁻¹ N removed. Conventional nitritation/denitritation with rapid addition of the dewatering liquor at the beginning of the cycle often resulted in considerable nitric oxide (NO) accumulation during the anoxic phase, possibly leading to unstable denitritation. Some NO production was still observed in the novel continuous mode, but denitritation was never seriously affected. Thus, process stability can be increased, and the high specific reaction rates as well as the continuous feeding result in a decreased reactor size for full-scale operation.
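For background on the nitrite pathway (textbook stoichiometry shown for illustration, not taken from this study), nitritation and heterotrophic denitritation with ethanol can be written as:

```latex
% Textbook stoichiometry of the nitrite pathway (illustrative, not from
% this study): ammonium is oxidised only as far as nitrite, which is then
% reduced to dinitrogen with ethanol as the electron donor.
\begin{align*}
\mathrm{NH_4^+} + \tfrac{3}{2}\,\mathrm{O_2} &\longrightarrow \mathrm{NO_2^-} + 2\,\mathrm{H^+} + \mathrm{H_2O} \\
\mathrm{C_2H_5OH} + 4\,\mathrm{NO_2^-} + 4\,\mathrm{H^+} &\longrightarrow 2\,\mathrm{N_2} + 2\,\mathrm{CO_2} + 5\,\mathrm{H_2O}
\end{align*}
```

The stoichiometric minimum for the second reaction is 96 g O2 per 46 g ethanol over 56 g N, i.e. about 1.7 g COD g⁻¹ N, so the reported 2.2 g COD g⁻¹ N plausibly includes carbon diverted to biomass growth.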

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we extend the state-contingent production approach to principal–agent problems to the case where the state space is an atomless continuum. The approach is modelled on the treatment of optimal tax problems. The central observation is that, under reasonable conditions, the optimal contract may involve a fixed wage with a bonus for above-normal performance. This is analogous to the phenomenon of "bunching" at the bottom in the optimal tax literature.
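As a schematic illustration of the result just stated (notation assumed here for exposition, not the paper's), the contract pays a flat wage up to normal performance and a bonus above it:

```latex
% A schematic fixed-wage-plus-bonus contract (assumed notation, for
% illustration only): \bar{w} is the base wage, y the realised output,
% y^* the normal-performance threshold and b(y) the bonus schedule.
\[
  w(y) =
  \begin{cases}
    \bar{w}, & y \le y^{*},\\[2pt]
    \bar{w} + b(y), & y > y^{*},
  \end{cases}
  \qquad b(y) > 0 \text{ and increasing for } y > y^{*}.
\]
```

The flat segment below y^* is the analogue of the "bunching" at the bottom noted in the optimal tax literature.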

Relevance:

30.00%

Publisher:

Abstract:

This paper presents some initial attempts to mathematically model the dynamics of a continuous estimation of distribution algorithm (EDA) based on a Gaussian distribution and truncation selection. Case studies are conducted on both unimodal and multimodal problems to highlight the effectiveness of the proposed technique and explore some important properties of the EDA. Under some general assumptions, we show that, for 1D unimodal problems with the (μ, λ) scheme: (1) the behaviour of the EDA depends only on the general shape of the test function, rather than on its specific form; (2) when initialized far from the global optimum, the EDA has a tendency to converge prematurely; (3) for a given selection pressure, there is a unique value of the proposed amplification parameter that helps the EDA achieve desirable performance. For 1D multimodal problems: (1) the EDA can get stuck with the (μ, λ) scheme; (2) the EDA never gets stuck with the (μ + λ) scheme.
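To make the (μ, λ) scheme concrete, here is a minimal 1D Gaussian EDA with truncation selection; the amplification factor c applied to the estimated standard deviation is an assumption standing in for the paper's amplification parameter, and all other constants are placeholders.

```python
# A minimal 1D Gaussian EDA with truncation selection. Each generation
# samples lambda offspring, keeps the best mu, and re-estimates the
# Gaussian; c amplifies the std to counter premature convergence.
import numpy as np

def gaussian_eda_1d(f, mu=20, lam=100, gens=60, c=1.5, m0=8.0, s0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    m, s = m0, s0                            # initialized far from the optimum
    for _ in range(gens):
        x = rng.normal(m, s, lam)            # sample lambda offspring
        best = x[np.argsort(f(x))[:mu]]      # truncation: keep the best mu
        m = best.mean()                      # re-estimate the Gaussian model
        s = c * best.std() + 1e-12           # amplified std estimate
    return m

print(gaussian_eda_1d(lambda x: (x - 3.0) ** 2))  # 1D unimodal test, optimum 3
```

Running this with c = 1 instead of c = 1.5 illustrates the premature convergence the paper analyses: the variance collapses before the mean reaches the optimum.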

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we address some issues related to evaluating and testing evolutionary algorithms. A landscape generator based on Gaussian functions is proposed for generating a variety of continuous landscapes as fitness functions. Through some initial experiments, we illustrate the usefulness of this landscape generator in testing evolutionary algorithms.
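A hedged sketch in the spirit of such a generator (the max-of-Gaussians form, ranges and component counts are assumptions, not necessarily the paper's construction): each Gaussian component contributes a bump, and the fitness at a point is the highest bump value, so the first component's center is a known global optimum.

```python
# Gaussian landscape generator sketch: random bump locations, spreads and
# heights; fitness is the maximum over components, giving a tunable
# multimodal test function with a known global optimum.
import numpy as np

def make_gaussian_landscape(dim=2, n_components=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-5, 5, size=(n_components, dim))   # bump locations
    widths = rng.uniform(0.5, 2.0, size=n_components)        # bump spreads
    heights = rng.uniform(0.1, 1.0, size=n_components)       # bump heights
    heights[0] = 1.0                                         # known global optimum

    def fitness(x):
        d2 = np.sum((centers - x) ** 2, axis=1)
        return np.max(heights * np.exp(-d2 / (2 * widths ** 2)))

    return fitness, centers[0]                               # landscape + optimum

f, x_star = make_gaussian_landscape()
print(f(x_star))   # 1.0 at the known global optimum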

Relevance:

30.00%

Publisher:

Abstract:

With the advent of globalisation, companies all around the world must improve their performance in order to survive. Threats come from everywhere and in different forms, such as low-cost products, high-quality products, new technologies and new products. Different companies in different countries use various techniques and quality criteria items to strive for excellence. Continuous improvement techniques are used to enable companies to improve their operations. Therefore, companies use techniques such as TQM, Kaizen, Six Sigma and Lean Manufacturing, and quality award criteria items such as Customer Focus, Human Resources, Information & Analysis, and Process Management. The purpose of this paper is to compare the use of these techniques and criteria items in two countries, Mexico and the United Kingdom, which differ in culture and industrial structure. In terms of the use of continuous improvement tools and techniques, Mexico formally started to deal with continuous improvement by creating its National Quality Award soon after the United States established the Malcolm Baldrige National Quality Award. The United Kingdom formally started by using the European Quality Award (EQA), later modified and renamed as the EFQM Excellence Model. The methodology used in this study was to undertake a literature review of the subject matter and to study some general applications around the world. A questionnaire survey was then designed and undertaken based on the same scale, about the same sample size, and about the same industrial sector within the two countries. The survey presents a brief definition of each of the constructs to facilitate understanding of the questions. The analysis of the data was then conducted with the assistance of a statistical software package. The survey results indicate both similarities and differences in the strengths and weaknesses of the companies in the two countries. One outcome of the analysis is that it enables the companies to use the results to benchmark themselves and thus act to reinforce their strengths and to reduce their weaknesses.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents an experimental investigation into several applications of the Raman scattering effect in communication optical fibres, as well as how some of these applications can be modified to enhance the resulting performance. The majority of the work contained within is based on laboratory results using many commercially available components. The results are presented in three main parts. Firstly, a novel application of a known effect is used to broaden Raman pump light in order to achieve a more continuous distribution of gain with respect to wavelength. Multiple experimental results are presented, all based on prior spreading of the pump spectrum before it is used in the desired transmission fibre. The gathered results show that a notable improvement can be obtained by applying such a technique, along with scope for further optimisation work. Secondly, an investigation into the interaction between the well-known effect of Four Wave Mixing (FWM) and Raman scattering is provided. The work introduces the effect and comments on previous literature regarding the effect and its mitigation. In response to existing research, experimental results are provided detailing some limitations of proposed schemes, along with concepts of how further alleviation of the deleterious effects may be obtained. Lastly, the distributed nature of the Raman gain process is explored. A novel technique for achieving a near-constant distribution of gain is implemented practically. The application of distributed gain is then applied to the generation of optical pulses with special mathematical properties within a laboratory setting, and finally the effect of pump noise on distributed gain techniques is acknowledged.
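For background (standard textbook propagation equations, not a result of this thesis), distributed Raman gain for a signal of power P_s with a counter-propagating pump of power P_p is commonly modelled as:

```latex
% Standard coupled pump-signal equations for distributed Raman gain in a
% fibre (textbook background, not a result of this thesis): g_R is the
% Raman gain efficiency, alpha_s and alpha_p the fibre losses, and the
% minus sign on dP_p/dz corresponds to a backward-propagating pump.
\begin{align*}
\frac{dP_s}{dz} &= g_R\,P_p P_s - \alpha_s P_s, \\
-\frac{dP_p}{dz} &= -\frac{\omega_p}{\omega_s}\,g_R\,P_p P_s - \alpha_p P_p.
\end{align*}
```

The gain term's dependence on the local pump power is what makes the gain distributed along the fibre, and hence what the near-constant gain technique above must shape.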

Relevance:

30.00%

Publisher:

Abstract:

Traditional high-speed machinery actuators are powered and coordinated by mechanical linkages driven from a central drive, but these linkages may be replaced by independently synchronised electric drives. Problems associated with utilising such electric drives for this form of machinery were investigated. The research concentrated on a high-speed rod-making machine, which required control of high inertias (0.01-0.5 kg m²), at continuous high speed (2500 r/min), with low relative phase errors between two drives (0.0025 radians). Traditional minimum-energy drive selection techniques for incremental motions were not applicable to continuous applications, which require negligible energy dissipation, so new selection techniques were developed. A brushless configuration constant enabled the comparison of seven different servo systems; the rare earth brushless drives had the best power rates, power rate being a performance measure. Simulation was used to review control strategies, from which a microprocessor controller with a proportional velocity loop within a proportional position loop with velocity feedforward was designed. Local control schemes were investigated as a means of reducing relative errors between drives: in the master/slave scheme, the slave compensates for the master's errors; the matched scheme pairs drives with similar absolute errors so that the relative error is minimised; and the feedforward scheme minimises error by adding compensation from prior knowledge. Simulation gave the approximate velocity loop bandwidth and position loop gain required to meet the specification. Theoretical limits for these parameters were defined in terms of digital sampling delays, quantisation and system phase shifts. Performance degradation due to mechanical backlash was evaluated. Thus any drive could be checked to ensure that the performance specification could be realised. A two-drive demonstrator was commissioned with 0.01 kg m² loads. By use of simulation, the performance of one drive was improved by increasing the velocity loop bandwidth fourfold. With the master/slave scheme, relative errors were within 0.0024 radians at a constant 2500 r/min for two 0.01 kg m² loads.
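As a hedged sketch of the cascade structure just described (illustrative gains and a pure-inertia load; not the thesis' actual controller or parameters), the following simulates a proportional position loop with velocity feedforward wrapped around a proportional velocity loop:

```python
# Cascade control sketch: P position loop + velocity feedforward feeding a
# P velocity loop, driving a pure-inertia load (0.01 kg m^2) at 2500 r/min.
# Gains, time step and the frictionless load model are all assumptions.
import numpy as np

def simulate_cascade(kp_pos=40.0, kp_vel=5.0, J=0.01, dt=1e-4, t_end=0.5):
    n = int(t_end / dt)
    theta = omega = 0.0
    err_log = np.empty(n)
    ref_vel = 2 * np.pi * 2500 / 60              # 2500 r/min demand (rad/s)
    for i in range(n):
        ref_pos = ref_vel * (i * dt)             # constant-speed trajectory
        vel_cmd = kp_pos * (ref_pos - theta) + ref_vel   # P + feedforward
        torque = kp_vel * (vel_cmd - omega)      # proportional velocity loop
        omega += (torque / J) * dt               # pure-inertia load
        theta += omega * dt                      # Euler integration
        err_log[i] = ref_pos - theta             # phase error in radians
    return err_log

err = simulate_cascade()
print(f"steady-state phase error ~ {err[-1]:.5f} rad")
```

With the feedforward term included, the constant-velocity tracking error decays towards zero; removing it leaves a steady-state phase lag, which illustrates why the thesis' controller combines the two.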