56 results for Robust experimental design
Abstract:
In this brief, we propose an orthogonal forward regression (OFR) algorithm based on the principles of branch and bound (BB) and A-optimality experimental design. At each forward regression step, each candidate from a pool of candidate regressors, referred to as S, is evaluated in turn, with three possible decisions: 1) one of the candidates is selected and included in the model; 2) some remain in S for evaluation at the next forward regression step; and 3) the rest are permanently eliminated from S. Based on the BB principle, in combination with an A-optimality composite cost function for model structure determination, a simple adaptive diagnostic test is proposed to determine the decision boundary between 2) and 3). As such, the proposed algorithm can significantly reduce the computational cost of the A-optimality OFR algorithm. Numerical examples are used to demonstrate the effectiveness of the proposed algorithm.
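A minimal, illustrative Python sketch of one such forward-regression step is given below. The composite cost (residual error plus an A-optimality-style penalty 1/(w^T w) on the orthogonalised regressor) and the simple ratio-based pruning rule are assumptions standing in for the paper's adaptive diagnostic test; the names ofr_step, lam and prune_factor are hypothetical.

```python
import numpy as np

def ofr_step(X_pool, y, selected_orth, lam=1e-3, prune_factor=1.5):
    """One illustrative forward-regression step over the candidate pool S.

    X_pool        : (n, m) matrix whose columns are the candidate regressors in S
    y             : (n,) target (or current residual) vector
    selected_orth : list of already-selected, orthogonalised regressors (1-D arrays)
    lam           : weight of the A-optimality-style penalty in the composite cost
    prune_factor  : candidates whose cost exceeds prune_factor times the best cost
                    are dropped from S (a crude stand-in for the adaptive test)
    """
    n, m = X_pool.shape
    sse0 = float(y @ y)
    costs = np.full(m, np.inf)
    for k in range(m):
        w = X_pool[:, k].astype(float).copy()
        # modified Gram-Schmidt: orthogonalise against regressors already in the model
        for q in selected_orth:
            w -= (q @ w) / (q @ q) * q
        wtw = float(w @ w)
        if wtw < 1e-12:                      # (near-)dependent on the selected set
            continue
        err_reduction = (w @ y) ** 2 / wtw   # classic OFR error-reduction term
        costs[k] = (sse0 - err_reduction) + lam / wtw   # composite cost with A-optimality penalty

    best = int(np.argmin(costs))
    keep = np.where(costs <= prune_factor * costs[best])[0]   # decision boundary between 2) and 3)
    return best, keep
```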
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored through the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy levels. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
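As a rough illustration of how an A-optimality identifiability score for a single rule's weighting matrix might be computed (the function name, normalisation and conditioning guard are assumptions, not the paper's implementation):

```python
import numpy as np

def rule_a_optimality(X, memberships):
    """A-optimality-style identifiability score for one fuzzy rule (sketch only).

    X           : (n, p) input regression matrix over the training data
    memberships : (n,) array of the rule's firing strengths at each sample
    Returns trace((P^T P)^-1) for P = diag(memberships) @ X; a smaller value
    suggests a better-conditioned, more identifiable rule subspace.
    """
    P = memberships[:, None] * X            # weight each data row by the rule's membership
    M = P.T @ P
    if np.linalg.cond(M) > 1e12:            # rule barely fires anywhere: treat as unidentifiable
        return float("inf")
    return float(np.trace(np.linalg.inv(M)))

# Candidate rules could then be ranked by this score when forming an initial rule-base.
```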
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
Abstract:
We compared the quantity of wheat bait consumed by Norway rats (Rattus norvegicus) from: (i) wooden bait trays, made as safe as possible from non-target animals using materials available at trial sites, and (ii) three different, proprietary tamper-resistant rat bait boxes. A balanced Latin square experimental design was used to overcome operational biases that occur when baits of different types are applied simultaneously at the same sites. The consumption of bait from the four different types of bait placement differed significantly and accounted for more than 76% of the total variation. The amount of bait eaten by rats from the bait trays was approximately eight times greater than the quantity eaten from the tamper-resistant bait boxes. The three bait box designs appeared to deter bait consumption by rats to a similar extent. Tamper-resistant bait boxes are essential tools in the application of rodenticides in many circumstances but their use should not be mandatory when it is possible to make baits safe from non-target animals by other means.
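For context, a balanced (Williams-type) Latin square for an even number of treatments can be generated with a simple construction like the sketch below; the four treatments would correspond to the four bait-placement types, and the code is a generic illustration rather than anything used in the study.

```python
def balanced_latin_square(n):
    """Williams-type balanced Latin square for an even number of treatments;
    every treatment follows every other treatment exactly once (generic sketch)."""
    if n % 2:
        raise ValueError("this simple construction assumes an even number of treatments")
    first = []
    lo, hi = 0, n - 1
    for k in range(n):
        if k % 2 == 0:
            first.append(lo)
            lo += 1
        else:
            first.append(hi)
            hi -= 1
    # each subsequent row is a cyclic shift of the first row
    return [[(t + i) % n for t in first] for i in range(n)]

# e.g. four bait-placement types (0-3) rotated across sites/periods:
for row in balanced_latin_square(4):
    print(row)
```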
Abstract:
Six Holstein cows fitted with ruminal cannulas and permanent indwelling catheters in the portal vein, hepatic vein, mesenteric vein, and an artery were used to study the effects of abomasal glucose infusion on splanchnic plasma concentrations of gut peptides. The experimental design was a randomized block design with repeated measurements. Cows were assigned to one of 2 treatments: control or infusion of 1,500 g of glucose/d into the abomasum from the day of parturition to 29 d in milk. Cows were sampled 12 ± 6 d prepartum and at 4, 15, and 29 d in milk. Concentrations of glucose-dependent insulinotropic polypeptide, glucagon-like peptide 1(7–36) amide, and oxyntomodulin were measured in pooled samples within cow and sampling day, whereas active ghrelin was measured in samples obtained 30 min before and after feeding at 0800 h. Postpartum, dry matter intake increased at a lower rate with infusion compared with the control. Arterial, portal venous, and hepatic venous plasma concentrations of the measured gut peptides were unaffected by abomasal glucose infusion. The arterial, portal venous, and hepatic venous plasma concentrations of glucose-dependent insulinotropic polypeptide and glucagon-like peptide 1(7–36) amide increased linearly from 12 d prepartum to 29 d postpartum. Plasma concentrations of oxyntomodulin were unaffected by day relative to parturition. Arterial and portal venous plasma concentrations of ghrelin were lower postfeeding compared with prefeeding concentrations. Arterial plasma concentrations of ghrelin were greatest prepartum and lowest at 4 d postpartum, giving a quadratic pattern of change over the transition period. Positive portal venous–arterial and hepatic venous–arterial concentration differences were observed for glucagon-like peptide 1(7–36) amide. A negative portal venous–arterial concentration difference was observed for ghrelin prefeeding. The remaining portal venous–arterial and hepatic venous–arterial concentration differences of gut peptides did not differ from zero. In conclusion, increased postruminal glucose supply to postpartum transition dairy cows reduced feed intake relative to control cows, but did not affect arterial, portal venous, or hepatic venous plasma concentrations of gut peptide hormones. Instead, gut peptide plasma concentrations increased as lactation progressed. Thus, the lower feed intake of postpartum dairy cows receiving abomasal glucose infusion was not attributable to changes in gut peptide concentrations.
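A minimal analysis skeleton for a randomized block design with repeated measurements of this kind is sketched below, using a linear mixed model with cow as the grouping factor; the file and column names (gut_peptides.csv, glp1, treatment, day, cow) are hypothetical and do not come from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gut_peptides.csv")     # one row per cow x sampling day (illustrative file)

# Fixed effects for treatment, day relative to parturition and their interaction;
# cow as the grouping (repeated-subject / blocking) factor.
model = smf.mixedlm("glp1 ~ treatment * day", data=df, groups=df["cow"])
print(model.fit().summary())
```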
Abstract:
The transport sector emits a wide variety of gases and aerosols with distinctly different characteristics that influence climate directly and indirectly via chemical and physical processes. Tools that allow these emissions to be placed on some kind of common scale in terms of their impact on climate have a number of possible uses, such as: in agreements and emission trading schemes; when considering potential trade-offs between changes in emissions resulting from technological or operational developments; and/or for comparing the different environmental impacts of transport activities. Many of the non-CO2 emissions from the transport sector are short-lived substances not currently covered by the Kyoto Protocol. There are formidable difficulties in developing metrics, and these are particularly acute for such short-lived species. One difficulty concerns the choice of an appropriate structure for the metric (which may depend on, for example, the design of any climate policy it is intended to serve) and the associated value judgements on the appropriate time periods to consider; these choices affect the perception of the relative importance of short- and long-lived species. A second difficulty is the quantification of input parameters (due to underlying uncertainty in atmospheric processes). In addition, for some transport-related emissions, the values of metrics (unlike those of the gases included in the Kyoto Protocol) depend on where and when the emissions are introduced into the atmosphere – both the regional distribution and, for aircraft, the distribution as a function of altitude are important. In this assessment of such metrics, we present Global Warming Potentials (GWPs), as these have traditionally been used in the implementation of climate policy. We also present Global Temperature Change Potentials (GTPs) as an alternative metric, as this, or a similar metric, may be more appropriate for use in some circumstances. We use radiative forcings and lifetimes from the literature to derive GWPs and GTPs for the main transport-related emissions, and discuss the uncertainties in these estimates. We find large variations in metric (GWP and GTP) values for NOx, mainly due to the dependence on the location of emissions, but also because of inter-model differences and differences in experimental design. For aerosols we give only global-mean values, due to an inconsistent picture amongst available studies regarding regional dependence. The uncertainty in the presented metric values reflects the current state of understanding; a ranking of the various components with respect to our confidence in the given metric values is also given. While the focus is mostly on metrics for comparing the climate impact of emissions, many of the issues are equally relevant for stratospheric ozone depletion metrics, which are also discussed.
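The GWP building block is the time-integrated radiative forcing (absolute GWP) of a pulse emission, divided by the absolute GWP of CO2 over the same horizon H. The sketch below shows this for a species with a single e-folding lifetime; the CO2 impulse-response coefficients are the multi-exponential fit commonly quoted from IPCC AR4, radiative efficiencies must be supplied by the user in consistent per-kilogram units, and none of it reproduces this assessment's calculations.

```python
import numpy as np

# Commonly quoted AR4-style CO2 impulse-response fit (illustrative inputs only).
CO2_A   = [0.217, 0.259, 0.338, 0.186]        # dimensionless weights
CO2_TAU = [np.inf, 172.9, 18.51, 1.186]       # decay times in years

def agwp_single_lifetime(radiative_efficiency, lifetime, H):
    """Time-integrated forcing of a pulse for a species with one e-folding lifetime."""
    return radiative_efficiency * lifetime * (1.0 - np.exp(-H / lifetime))

def agwp_co2(radiative_efficiency_co2, H, n=10001):
    """Time-integrated forcing of a CO2 pulse using the multi-exponential response."""
    t = np.linspace(0.0, H, n)
    irf = sum(a * (np.ones_like(t) if np.isinf(tau) else np.exp(-t / tau))
              for a, tau in zip(CO2_A, CO2_TAU))
    return radiative_efficiency_co2 * np.trapz(irf, t)

def gwp(re_x, tau_x, re_co2, H=100.0):
    """GWP over horizon H: ratio of the two absolute GWPs."""
    return agwp_single_lifetime(re_x, tau_x, H) / agwp_co2(re_co2, H)
```

For a short-lived species (small tau_x), the numerator stops growing almost immediately while the CO2 denominator keeps accumulating, which is why the choice of horizon H dominates the perceived importance of short-lived emissions.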
Abstract:
The modelling of nonlinear stochastic dynamical processes from data involves solving the problems of data gathering, preprocessing, model architecture selection, learning or adaptation, parametric evaluation and model validation. For a given model architecture, such as associative memory networks, a common problem in nonlinear modelling is the curse of dimensionality. A series of complementary data-based constructive identification schemes, mainly based on, but not limited to, operating-point-dependent fuzzy models, are introduced in this paper with the aim of overcoming the curse of dimensionality. These include (i) a mixture-of-experts algorithm based on a forward constrained regression algorithm; (ii) an inherently parsimonious Delaunay input space partition based piecewise local linear modelling concept; (iii) a neurofuzzy model constructive approach based on forward orthogonal least squares and optimal experimental design; and finally (iv) a neurofuzzy model construction algorithm based on Bézier-Bernstein polynomial basis functions and additive decomposition. Illustrative examples demonstrate their applicability, showing that the final major hurdle in data-based modelling has almost been removed.
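As a small, generic illustration of item (iv), univariate Bernstein basis functions of the kind used in Bézier-Bernstein modelling can be generated as follows (a sketch under the textbook definition, not the paper's construction algorithm):

```python
import numpy as np
from math import comb

def bernstein_basis(x, degree):
    """Bernstein polynomial basis B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i) on [0,1].
    Returns an (len(x), degree+1) design matrix of basis-function values."""
    x = np.asarray(x, dtype=float)
    return np.stack([comb(degree, i) * x**i * (1.0 - x)**(degree - i)
                     for i in range(degree + 1)], axis=1)

# The basis functions form a partition of unity, a property such models exploit:
x = np.linspace(0.0, 1.0, 5)
assert np.allclose(bernstein_basis(x, degree=3).sum(axis=1), 1.0)
```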
Abstract:
Explosive volcanic eruptions cause episodic negative radiative forcing of the climate system. Using coupled atmosphere-ocean general circulation models (AOGCMs) subjected to historical forcing since the late nineteenth century, previous authors have shown that each large volcanic eruption is associated with a sudden drop in ocean heat content and sea level, from which the subsequent recovery is slow. Here we show that this effect may be an artefact of experimental design, caused by the AOGCMs not having been spun up to a steady state with volcanic forcing before the historical integrations begin. Because volcanic forcing has a long-term negative average, a cooling tendency is imposed on the ocean in the historical simulation. We recommend that an extra experiment be carried out in parallel to the historical simulation, with constant time-mean historical volcanic forcing, in order to correct for this effect and avoid misinterpretation of ocean heat content changes.
Abstract:
Diabetes, like many diseases and biological processes, is not mono-causal. On the one hand, multifactorial studies with complex experimental designs are required for its comprehensive analysis. On the other hand, the data from these studies often include a substantial amount of redundancy, such as proteins that are typically represented by a multitude of peptides. Coping simultaneously with both complexities (experimental and technological) makes data analysis a challenge for bioinformatics.
Abstract:
An NIR reflectance sensor, with a large field of view and a fibre-optic connection to a spectrometer for measuring light backscatter at 980 nm, was used to monitor the syneresis process online during cheese-making, with the goal of predicting syneresis indices (curd moisture content, yield of whey and fat losses to whey) over a range of curd cutting programmes and stirring speeds. A series of trials was carried out in an 11 L cheese vat using recombined whole milk. A factorial experimental design, consisting of three curd stirring speeds and three cutting programmes, was undertaken. Milk was coagulated under constant conditions and the casein gel was cut when the elastic modulus reached 35 Pa. Among the syneresis indices investigated, the most accurate and most parsimonious multivariate model developed was for predicting yield of whey, involving three terms, namely light backscatter, milk fat content and cutting intensity (R2 = 0.83, SEy = 6.13 g/100 g), while the best simple model also predicted this syneresis index using the light backscatter alone (R2 = 0.80, SEy = 6.53 g/100 g). In this model, the main predictor was the light backscatter response from the NIR sensor. The sensor also predicted curd moisture with similar accuracy.
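A minimal sketch of the form of such a three-term calibration model is given below, assuming hypothetical file and column names (syneresis_trials.csv, backscatter_980nm, milk_fat, cutting_intensity, whey_yield) and an ordinary least-squares fit; it illustrates the reported model structure, not the study's actual calibration.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("syneresis_trials.csv")            # illustrative data file
X = df[["backscatter_980nm", "milk_fat", "cutting_intensity"]].to_numpy()
y = df["whey_yield"].to_numpy()

X1 = np.column_stack([np.ones(len(X)), X])          # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)       # ordinary least squares
pred = X1 @ coef

r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
sey = np.sqrt(np.sum((y - pred) ** 2) / (len(y) - X1.shape[1]))
print(f"R2 = {r2:.2f}, SEy = {sey:.2f} g/100 g")
```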
Abstract:
This paper reviews the current state of development of both near-infrared (NIR) and mid-infrared (MIR) spectroscopic techniques for process monitoring, quality control, and authenticity determination in cheese processing. Infrared spectroscopy has been identified as an ideal process analytical technology tool, and recent publications have demonstrated the potential of both NIR and MIR spectroscopy, coupled with chemometric techniques, for monitoring coagulation, syneresis, and ripening as well as determination of authenticity, composition, sensory, and rheological parameters. Recent research is reviewed and compared on the basis of experimental design, spectroscopic and chemometric methods employed to assess the potential of infrared spectroscopy as a technology for improving process control and quality in cheese manufacture. Emerging research areas for these technologies, such as cheese authenticity and food chain traceability, are also discussed.
Abstract:
Dissolved organic carbon (DOC) concentrations in surface waters have increased across much of Europe and North America, with implications for the terrestrial carbon balance, aquatic ecosystem functioning, water treatment costs and human health. Over the past decade, many hypotheses have been put forward to explain this phenomenon, from changing climate and land management to eutrophication and acid deposition. Resolution of this debate has been hindered by a reliance on correlative analyses of time-series data and a lack of robust experimental testing of proposed mechanisms. In a four-year, four-site replicated field experiment involving both acidifying and de-acidifying treatments, we tested the hypothesis that DOC leaching was previously suppressed by high levels of soil acidity in peat and organo-mineral soils, and therefore that the observed DOC increases are a consequence of decreasing soil acidity. We observed a consistent, positive relationship between DOC and acidity change at all sites. Responses were described by similar hyperbolic relationships between standardised changes in DOC and hydrogen ion concentrations at all sites, suggesting potentially general applicability. These relationships explained a substantial proportion of observed changes in peak DOC concentrations in nearby monitoring streams, and application to a UK-wide upland soil pH dataset suggests that recovery from acidification alone could have led to soil solution DOC increases in the range 46-126% by habitat type since 1978. Our findings raise the possibility that changing soil acidity may have wider impacts on ecosystem carbon balances. Decreasing sulphur deposition may be accelerating terrestrial carbon loss, and returning surface waters to a natural, high-DOC condition.
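A generic sketch of fitting a saturating (hyperbolic) response of standardised DOC change to the change in hydrogen-ion concentration is shown below; the functional form, parameter names and data columns are placeholders rather than those used in the study.

```python
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def hyperbolic(delta_h, a, b):
    # placeholder rectangular-hyperbola form: response saturates as |delta_h| grows
    return a * delta_h / (b + np.abs(delta_h))

df = pd.read_csv("acidity_experiment.csv")          # illustrative data file
popt, pcov = curve_fit(hyperbolic, df["delta_H"], df["delta_DOC_std"], p0=[1.0, 1.0])
print("fitted parameters:", popt)
```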
Abstract:
Robotic multiwell planar patch-clamp has become common in drug development and safety programs because it enables efficient and systematic testing of compounds against ion channels during voltage-clamp. It has not, however, been adopted significantly in other important areas of ion channel research, where conventional patch-clamp remains the favored method. Here, we show the wider potential of the multiwell approach with the ability for efficient intracellular solution exchange, describing protocols and success rates for recording from a range of native and primary mammalian cells derived from blood vessels, arthritic joints and the immune and central nervous systems. The protocol involves preparing a suspension of single cells to be dispensed robotically into 4-8 microfluidic chambers each containing a glass chip with a small aperture. Under automated control, giga-seals and whole-cell access are achieved followed by preprogrammed routines of voltage paradigms and fast extracellular or intracellular solution exchange. Recording from 48 chambers usually takes 1-6 h depending on the experimental design and yields 16-33 cell recordings.
Abstract:
People contribute more to experimental public goods the more others contribute, a tendency called “crowding-in.” We propose a novel experimental design to distinguish two possible causes of crowding-in: reciprocity, the usual explanation, and conformity, a neglected alternative. Subjects are given the opportunity to react to contributions of a payoff-irrelevant group, in addition to their own group. We find evidence of conformity, accounting for roughly 1/3 of crowding-in.
Abstract:
Lying to participants offers an experimenter the enticing prospect of making “others' behaviour” a controlled variable, but is eschewed by experimental economists because it may pollute the pool of subjects. This paper proposes and implements a new experimental design, the Conditional Information Lottery, which offers all the benefits of deception without actually deceiving anyone. The design should be suitable for most economics experiments, and works by a modification of an already standard device, the Random Lottery incentive system. The deceptive scenarios of designs which use deceit are replaced with fictitious scenarios, each of which, from a subject's viewpoint, has a chance of being true. The design is implemented in a sequential play public good experiment prompted by Weimann's (1994) result, from a deceptive design, that subjects are more sensitive to free-riding than to cooperation on the part of others. The experiment provides similar results to Weimann's, in that subjects are at least as cooperative when uninformed about others' behaviour as they are when reacting to high contributions. No deception is used and the data cohere well both internally and with other public goods experiments. In addition, simultaneous play is found to be more efficient than sequential play, and subjects contribute less at the end of a sequence than at the start. The results suggest pronounced elements of overconfidence, egoism and (biased) reciprocity in behaviour, which may explain decay in contributions in repeated play designs. The experiment shows there is a workable alternative to deception.