940 results for Box-Behnken experimental design


Relevance: 100.00%

Abstract:

A suite of climate model experiments indicates that 20th-century increases in ocean heat content and sea level (via thermal expansion) were substantially reduced by the 1883 eruption of Krakatoa. The volcanically induced cooling of the ocean surface is subducted into deeper ocean layers, where it persists for decades. Temporary reductions in ocean heat content associated with the comparable eruptions of El Chichon (1982) and Pinatubo (1991) were much shorter lived because they occurred relative to a non-stationary background of large, anthropogenically forced ocean warming. Our results suggest that inclusion of the effects of Krakatoa (and perhaps even earlier eruptions) is important for reliable simulation of 20th-century ocean heat uptake and thermal expansion. Inter-model differences in the oceanic thermal response to Krakatoa are large and arise from differences in external forcing, model physics, and experimental design. Systematic experimentation is required to quantify the relative importance of these factors. The next generation of historical forcing experiments may require more careful treatment of pre-industrial volcanic aerosol loadings.

Relevance: 100.00%

Abstract:

University students suffer from variable sleep patterns, including insomnia;[1] furthermore, the highest incidence of herbal use appears to be among college graduates.[2] Our objective was to test the perceived safety and value of herbal versus conventional medicine for the treatment of insomnia in a non-pharmacy student population. We used an experimental design and bespoke vignettes that relayed the same effectiveness information to test our hypothesis that students would give higher ratings of safety and value to the herbal product than to the conventional medicine. We tested a further hypothesis that the addition of side-effect information would lower perceptions of the safety and value of the herbal product to a greater extent than it would for the conventional medicine.

Relevance: 100.00%

Abstract:

Plant communities of set-aside agricultural land in a European project were managed in order to enhance plant succession towards weed-resistant, mid-successional grassland. Here, we ask whether the management of a plant community affects the earthworm community. Field experiments were established in four countries: the Netherlands, Sweden, the UK, and the Czech Republic. High-diversity (15 plant species) and low-diversity (four plant species) seed mixtures were sown as management practices, with natural colonization as the control treatment in a randomized block design. The response of the earthworms to the management was studied three summers after establishment of the sites. Samples were also taken from plots with continued agricultural practices included in the experimental design and from a site with a late-successional plant community representing the target plant community. The numbers and biomass of individuals were higher in the set-aside plots than in the agricultural treatment in two countries out of four. The numbers of individuals at one site (the Netherlands) were higher in the naturally colonized plots than in the sowing treatments; otherwise there were no differences between the treatments. Species diversity was lower in the agricultural plots in one country. The species composition had changed from the initial community of the agricultural field, but was still different from the late-successional target community. Worm biomass was positively related to legume biomass in Sweden and to grass biomass in the UK. (C) 2005 Elsevier SAS. All rights reserved.

Relevance: 100.00%

Abstract:

Mathematical modeling of bacterial chemotaxis systems has been influential and insightful in helping to understand experimental observations. We provide here a comprehensive overview of the range of mathematical approaches used for modeling, within a single bacterium, the chemotactic processes triggered by changes to external gradients in its environment. Specific areas of the bacterial system that have been studied and modeled are discussed in detail, including the modeling of adaptation in response to attractant gradients, the intracellular phosphorylation cascade, membrane receptor clustering, and spatial modeling of intracellular protein signal transduction. The importance of producing robust models that address adaptation, gain, and sensitivity is also discussed. This review highlights that while mathematical modeling has aided in understanding bacterial chemotaxis on the individual-cell scale and in guiding experimental design, no single model succeeds in robustly describing all of the basic elements of the cell. We conclude by discussing the importance of this and the future of modeling in this area.
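
None of the specific models surveyed above is reproduced here, but as an illustration of the style of single-cell model the review covers, the following is a minimal sketch of an adaptation module: a fast receptor-activity variable driven by the attractant level and a slow methylation-like feedback that returns activity to a set-point after a step change in attractant. All variable names, rate constants, and the set-point are hypothetical choices for illustration, not parameters from any of the reviewed models.

```python
# Minimal sketch of a chemotactic adaptation module (illustrative only; not any
# specific published model). Receptor activity a(t) is pushed down by a step in
# attractant L(t) and restored by a slow methylation-like feedback m(t), so the
# adapted activity is independent of the ambient attractant level.
import numpy as np
from scipy.integrate import solve_ivp

k_act, k_m = 10.0, 0.2           # hypothetical rate constants (fast activity, slow feedback)
a_star = 0.5                     # hypothetical adapted activity set-point

def attractant(t):
    return 1.0 if t < 50 else 5.0    # step increase in external attractant at t = 50

def rhs(t, y):
    a, m = y
    L = attractant(t)
    da = k_act * (m / (m + L) - a)   # activity set by the balance of feedback m and attractant L
    dm = k_m * (a_star - a)          # integral feedback drives activity back to the set-point
    return [da, dm]

sol = solve_ivp(rhs, (0.0, 400.0), [a_star, 1.0], max_step=0.5)
print("activity just after the step:", sol.y[0][np.searchsorted(sol.t, 51.0)])
print("activity at the end (adapted):", sol.y[0][-1])   # returns close to a_star
```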

Relevance: 100.00%

Abstract:

Aims and objectives. To examine the impact of written and verbal education on bed-making practices, in an attempt to reduce the prevalence of pressure ulcers. Background. The Department of Health has set targets for a 5% reduction per annum in the incidence of pressure ulcers. Electric profiling beds with a visco-elastic polymer mattress are a recent innovation in pressure ulcer prevention; however, mattress efficacy is reduced by tightly tucking sheets around the mattress. Design. A prospective randomized pre/post-test experimental design. Methods. Ward managers at a teaching hospital were approached to participate in the study. Two researchers independently examined the tightness of the sheets around the mattresses. Wards were randomized to one of two groups. Groups A and B received written education; in addition, group B received verbal education on alternate days for one week. Beds were re-examined one month later. One researcher was blinded to the educational delivery received by the wards. Results. Twelve wards agreed to participate in the study and 245 beds were examined. Before education, 113 beds (46%) had sheets tucked correctly around the mattresses. Following education, this increased to 215 beds (87.8%) (χ² = 68.03, p < 0.001). There was no significant difference in the number of correctly made beds between the two education groups: 100 (87.72%) beds correctly made in group A vs. 115 (87.79%) beds in group B (χ² = 0, p = 0.987). Conclusions. Clear, concise written instruction improved practice, but verbal education was not additionally beneficial. Relevance to clinical practice. Nurses are receptive to clear, concise written evidence regarding pressure ulcer prevention and incorporate it into clinical practice.

Relevance: 100.00%

Abstract:

Individuals with elevated levels of plasma low density lipoprotein (LDL) cholesterol (LDL-C) are considered to be at risk of developing coronary heart disease. LDL particles are removed from the blood by a process known as receptor-mediated endocytosis, which occurs mainly in the liver. A series of classical experiments delineated the major steps in the endocytotic process: apolipoprotein B-100 present on LDL particles binds to a specific receptor (the LDL receptor, LDL-R) in specialized areas of the cell surface called clathrin-coated pits. The pit containing the LDL-LDL-R complex is internalized, forming a cytoplasmic endosome. Fusion of the endosome with a lysosome leads to degradation of the LDL into its constituent parts (that is, cholesterol, fatty acids, and amino acids), which are released for reuse by the cell or are excreted. In this paper, we formulate a mathematical model of LDL endocytosis consisting of a system of ordinary differential equations. We validate our model against existing in vitro experimental data, and we use it to explore differences in system behavior when a single bolus of extracellular LDL is supplied to cells compared to when a continuous supply of LDL particles is available. Whereas the former situation is common in in vitro experimental systems, the latter better reflects the in vivo situation. We use asymptotic analysis and numerical simulations to study the long-time behavior of model solutions. The implications of model-derived insights for experimental design are discussed.
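
The abstract does not give the model equations; as an illustration of the kind of compartmental ODE system it describes (extracellular LDL, free receptors, bound complexes in coated pits, internalized LDL, with recycling and degradation), here is a hedged sketch in which every state, rate constant, and the single-bolus initial condition is a hypothetical placeholder rather than a value from the paper.

```python
# Illustrative compartmental sketch of receptor-mediated LDL endocytosis
# (in the spirit of the paper; NOT its actual equation set -- all states and
# rate constants here are hypothetical).
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1.0, 0.05   # LDL binding to / unbinding from surface receptors
k_int = 0.5               # internalization of occupied coated pits
k_proc = 0.2              # endosome/lysosome processing (LDL degraded, receptor recycled)
f_ext = 0.0               # external LDL supply: 0.0 = single bolus, >0 = continuous supply

def rhs(t, y):
    L, R, C, I = y        # extracellular LDL, free receptors, surface complexes, internalized complexes
    bind = k_on * L * R - k_off * C
    dL = f_ext - bind
    dR = -bind + k_proc * I          # receptor returns to the surface after processing
    dC = bind - k_int * C
    dI = k_int * C - k_proc * I
    return [dL, dR, dC, dI]

y0 = [5.0, 1.0, 0.0, 0.0]            # a single bolus of extracellular LDL, all receptors free
sol = solve_ivp(rhs, (0.0, 100.0), y0, max_step=0.1)
print("extracellular LDL remaining:", sol.y[0][-1])
print("free receptors at the end:", sol.y[1][-1])
```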

Relevance: 100.00%

Abstract:

The recovery of lactoferrin and lactoperoxidase from sweet whey was studied using colloidal gas aphrons (CGAs), which are surfactant-stabilized microbubbles (10-100 μm). CGAs were generated by intense stirring (8000 rpm for 10 min) of the anionic surfactant AOT (sodium bis-2-ethylhexyl sulfosuccinate). A volume of CGAs (10-30 mL) was mixed with a given volume of whey (1-10 mL), and the mixture was allowed to separate into two phases: the aphron (top) phase and the liquid (bottom) phase. Each phase was analyzed by SDS-PAGE and a surfactant colorimetric assay. A statistical experimental design was developed to assess the effect of different process parameters, including pH, ionic strength, the concentration of surfactant in the CGA-generating solution, the volume of CGAs, and the volume of whey, on separation efficiency. As expected, pH, ionic strength, and the volume of whey (i.e., the amount of total protein in the starting material) were the main factors influencing the partitioning of the Lf·Lp fraction into the aphron phase. Moreover, the best separation performance was achieved at pH 4 and ionic strength 0.1 mol/L, i.e., under conditions favoring electrostatic interactions between the target proteins and the CGAs (recovery was 90%, and the concentration of lactoferrin and lactoperoxidase in the aphron phase was 25 times higher than that in the liquid phase), whereas conditions favoring hydrophobic interactions (pH close to the pI and high ionic strength) led to lower performance. However, under these conditions, as confirmed by zeta potential measurements, the adsorption of both target proteins and contaminant proteins is favored; thus, low selectivity was achieved under all of the studied conditions. These results confirm the initial hypothesis that CGAs act as ion exchangers and that the selectivity of the process can be manipulated by changing the main operating parameters such as the type of surfactant, pH, and ionic strength.
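
The abstract does not say which statistical experimental design was used; purely as an illustration of how a multi-factor screening study over the five factors named above could be laid out and analysed, here is a sketch of a two-level full factorial with a main-effects least-squares fit. The factor coding, the simulated recovery values, and the fitted effects are all invented and carry no relation to the paper's data.

```python
# Sketch of a two-level factorial screening design for the five factors named
# in the abstract (pH, ionic strength, surfactant concentration, CGA volume,
# whey volume). The levels and the simulated "recovery" responses are invented;
# the paper's actual design and data are not reproduced here.
import itertools
import numpy as np

factors = ["pH", "ionic_strength", "surfactant_conc", "cga_volume", "whey_volume"]

# Coded design matrix: every combination of low (-1) and high (+1) levels.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=len(factors))))  # 32 runs

rng = np.random.default_rng(0)
# Invented response: recovery driven mainly by pH and ionic strength, plus noise.
recovery = 60.0 - 12.0 * design[:, 0] - 8.0 * design[:, 1] + rng.normal(0.0, 2.0, len(design))

# Fit an intercept plus main-effects model by least squares.
X = np.hstack([np.ones((len(design), 1)), design])
coef, *_ = np.linalg.lstsq(X, recovery, rcond=None)

print("intercept:", round(coef[0], 2))
for name, c in zip(factors, coef[1:]):
    print(f"main effect of {name}: {round(2 * c, 2)}")   # effect = 2 x coded coefficient
```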

Relevance: 100.00%

Abstract:

We consider a fully complex-valued radial basis function (RBF) network for regression applications. The locally regularised orthogonal least squares (LROLS) algorithm with D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
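
The abstract does not spell out the selection procedure itself, so the following is a simplified, real-valued sketch of forward orthogonal least squares selection in which each candidate regressor is scored by its error reduction plus a D-optimality term. The toy data, the RBF centres and width, the weighting beta, and the particular way the two criteria are combined are all placeholder assumptions; the paper's complex-valued arithmetic and local regularisation are omitted.

```python
# Simplified real-valued sketch of forward orthogonal least squares selection with a
# D-optimality penalty. This illustrates the general idea only: the complex-valued
# arithmetic, local regularisation, and exact stopping rule of the paper are omitted,
# and the RBF centres/width, the weighting beta, and the data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 120)
y = np.sinc(x) + 0.05 * rng.normal(size=x.size)            # toy regression data

centres, width = x[::6], 0.8                               # candidate RBF centres (placeholders)
P = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width**2))  # candidate regressors

beta = 0.5                                                 # placeholder weight on the D-optimality term
selected, W = [], []                                       # chosen columns and their orthogonalised versions
residual_energy = float(y @ y)

for _ in range(P.shape[1]):
    best = None
    for k in range(P.shape[1]):
        if k in selected:
            continue
        w = P[:, k].copy()
        for q in W:                                        # Gram-Schmidt against already-selected columns
            w -= (q @ P[:, k]) / (q @ q) * q
        wTw = float(w @ w)
        if wTw < 1e-10:
            continue
        err = (float(w @ y) ** 2) / wTw                    # output energy explained by this candidate
        score = err + beta * np.log(wTw)                   # D-optimality-penalised score (one simple combination)
        if best is None or score > best[0]:
            best = (score, k, w, err)
    if best is None or best[0] <= 0.0:                     # stop when no candidate still pays its way
        break
    score, k, w, err = best
    selected.append(k)
    W.append(w)
    residual_energy -= err

print("selected centres:", sorted(selected))
print("fraction of output energy explained:", 1.0 - residual_energy / float(y @ y))
```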

Relevance: 100.00%

Abstract:

We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided by the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed fully CVRBF network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ a complex-valued quadrature phase shift keying modulation scheme. (C) 2007 Elsevier B.V. All rights reserved.
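
For the classification part, the abstract mentions ranking terms by a multi-class Fisher ratio of class separability. A minimal sketch of that measure (between-class variance over within-class variance of a candidate term's response across labelled samples) is given below on invented data; the complex-valued forward selection procedure built around it is not reproduced.

```python
# Minimal sketch of a multi-class Fisher-ratio score for ranking candidate model
# terms by class separability (illustrative; the paper's complex-valued forward
# selection procedure built around this measure is not shown).
import numpy as np

def fisher_ratio(response, labels):
    """Between-class variance divided by within-class variance of a 1-D response."""
    response = np.asarray(response, dtype=float)
    labels = np.asarray(labels)
    overall_mean = response.mean()
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        grp = response[labels == c]
        between += grp.size * (grp.mean() - overall_mean) ** 2
        within += ((grp - grp.mean()) ** 2).sum()
    return between / within

# Invented example: four classes (as in QPSK-style problems), two candidate responses.
rng = np.random.default_rng(2)
labels = np.repeat(np.arange(4), 50)
good = labels + rng.normal(0.0, 0.3, labels.size)     # responds strongly to the class
poor = rng.normal(0.0, 1.0, labels.size)              # uninformative candidate

print("Fisher ratio, informative candidate:", round(fisher_ratio(good, labels), 2))
print("Fisher ratio, uninformative candidate:", round(fisher_ratio(poor, labels), 3))
```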

Relevance: 100.00%

Abstract:

This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and is based on M-estimators of the parameters. The M-estimator is a classical robust parameter estimation technique for tackling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion in experimental design used to tackle ill-conditioning in the model structure. Orthogonal forward regression, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions with improved performance via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
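
The abstract names the M-estimator/IRLS machinery without fixing a particular loss function, so the sketch below uses a Huber-type weight function on a toy linear regression with injected outliers. The loss, its tuning constant, the robust scale estimate, and the data are placeholder assumptions, and the integration of this inner loop into orthogonal forward regression is not shown.

```python
# Sketch of an M-estimator fitted by iteratively reweighted least squares (IRLS)
# with a Huber-type weight function, on toy data with outliers. Illustrative only:
# the paper's integration of this inner loop into orthogonal forward regression
# with D-optimality is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)
y[::15] += 15.0                                   # inject a few gross outliers

X = np.column_stack([np.ones_like(x), x])
c = 1.345                                         # usual Huber tuning constant (placeholder choice)

theta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS start (pulled by the outliers)
for _ in range(20):
    r = y - X @ theta
    scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust residual scale (MAD)
    u = np.abs(r) / (c * scale)
    w = np.where(u <= 1.0, 1.0, 1.0 / u)          # Huber weights: downweight large residuals
    WX = X * w[:, None]
    theta = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted least squares update

print("robust estimate of [intercept, slope]:", np.round(theta, 3))  # close to [1, 2]
```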

Relevance: 100.00%

Abstract:

In this brief, we propose an orthogonal forward regression (OFR) algorithm based on the principles of branch and bound (BB) and A-optimality experimental design. At each forward regression step, each candidate from a pool of candidate regressors, referred to as S, is evaluated in turn, with three possible outcomes: 1) one candidate is selected and included in the model; 2) some candidates remain in S for evaluation at the next forward regression step; and 3) the rest are permanently eliminated from S. Based on the BB principle in combination with an A-optimality composite cost function for model structure determination, a simple adaptive diagnostic test is proposed to determine the decision boundary between 2) and 3). As such, the proposed algorithm can significantly reduce the computational cost of the A-optimality OFR algorithm. Numerical examples are used to demonstrate the effectiveness of the proposed algorithm.
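
As a rough sketch of the mechanics described above (score the surviving candidates, select the best, keep the promising ones for the next step, and permanently discard the rest), here is a simplified real-valued version. The A-optimality-style score and the fixed pruning threshold are simple placeholders, not the paper's composite cost function or adaptive diagnostic test.

```python
# Simplified sketch of forward selection with permanent candidate elimination, in the
# spirit of the branch-and-bound idea described above. The A-optimality-style score and
# the fixed elimination threshold are placeholders, not the paper's criterion.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 40))                      # toy candidate regressor pool
y = X[:, [3, 17, 25]] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=100)

alpha, keep_fraction = 0.05, 0.2                    # placeholder A-optimality weight and pruning level
S = set(range(X.shape[1]))                          # pool of surviving candidates
selected, Q = [], []

while S:
    scores = {}
    for k in S:
        w = X[:, k].copy()
        for q in Q:                                 # orthogonalise against already-selected terms
            w -= (q @ X[:, k]) / (q @ q) * q
        wTw = float(w @ w)
        if wTw < 1e-10:
            scores[k] = -np.inf
            continue
        scores[k] = (float(w @ y) ** 2) / wTw - alpha / wTw   # error reduction minus A-optimality penalty
    best = max(scores, key=scores.get)
    if scores[best] <= 0.0:
        break
    selected.append(best)
    w = X[:, best].copy()
    for q in Q:
        w -= (q @ X[:, best]) / (q @ q) * q
    Q.append(w)
    S.remove(best)
    # Permanently eliminate candidates whose score is far below the best one,
    # so they are never re-evaluated at later forward steps.
    S = {k for k in S if scores[k] > keep_fraction * scores[best]}

print("selected regressors:", selected)             # expected to recover columns 3, 17 and 25
```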

Relevance: 100.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored through the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
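
As an illustration of the rule-ranking construction described above (the input regression matrix weighted by each rule's membership values, scored by an A-optimality measure of the resulting matrix), here is a small sketch on invented data. The membership functions, the regressors, and the use of trace((A^T A)^{-1}) as the A-optimality measure are placeholder assumptions rather than the paper's exact formulation.

```python
# Sketch of the rule-ranking idea: for each fuzzy rule, weight the input regression
# matrix by that rule's membership values over the training data and score the
# resulting matrix by an A-optimality measure, trace((A^T A)^{-1}) -- the smaller
# the trace, the better conditioned (more identifiable) the rule's subspace.
# The membership functions, regressors, and data below are invented placeholders.
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-1.0, 1.0, 80))
P = np.column_stack([np.ones_like(x), x])            # local linear (T-S style) regressors: [1, x]

centres, width = np.array([-0.8, 0.0, 0.8]), 0.4     # hypothetical Gaussian membership functions
def membership(c):
    return np.exp(-((x - c) ** 2) / (2 * width**2))

scores = []
for c in centres:
    A = membership(c)[:, None] * P                   # weighting matrix (diag of memberships) times P
    scores.append(np.trace(np.linalg.inv(A.T @ A)))  # A-optimality measure of this rule's subspace

for c, s in zip(centres, scores):
    print(f"rule centred at {c:+.1f}: trace((A^T A)^-1) = {s:.3f}")
# Rules would be ranked by this score (smallest first) when building the initial rule base.
```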

Relevance: 100.00%

Abstract:

A new robust neurofuzzy model construction algorithm is introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximized model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.

Relevance: 100.00%

Abstract:

In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel to parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: combined A-optimality with the regularized orthogonal least squares algorithm, combined D-optimality with the regularized orthogonal least squares algorithm, and the combined PRESS statistic with the regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalization scheme in orthogonal least squares or regularized orthogonal least squares has been extended, so that the new algorithms are computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
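
The PRESS statistic mentioned above need not be computed by refitting the model n times: for a linear-in-the-parameters model, the standard identity e_i^(-i) = e_i / (1 - h_ii), with h_ii the i-th diagonal of the hat matrix, gives every leave-one-out residual from a single fit. A small sketch on invented data:

```python
# Sketch of the PRESS (predicted residual sum of squares) statistic for a
# linear-in-the-parameters model, using the standard leave-one-out identity
# e_loo_i = e_i / (1 - h_ii), where h_ii is the hat-matrix diagonal, so no
# model refitting is needed. The data and regressors below are invented.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=x.size)

def press(X, y):
    H = X @ np.linalg.solve(X.T @ X, X.T)           # hat matrix H = X (X^T X)^{-1} X^T
    e = y - H @ y                                    # ordinary residuals
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))

# Compare polynomial model orders by PRESS: the minimum indicates the order with
# the best estimated generalisation among those tried.
for order in range(1, 8):
    X = np.vander(x, order + 1, increasing=True)     # [1, x, x^2, ..., x^order]
    print(f"polynomial order {order}: PRESS = {press(X, y):.3f}")
```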

Relevance: 100.00%

Abstract:

The identification of non-linear systems using only observed finite data sets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used owing to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts used in various non-linear system-identification algorithms for achieving good model generalisation are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
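
Of the model-selection criteria surveyed, cross-validation is the easiest to illustrate; the sketch below scores a family of candidate model orders by k-fold cross-validation on invented data. The polynomial model family, the data, and the fold count are placeholders chosen only to show the mechanics.

```python
# Minimal k-fold cross-validation sketch for choosing among candidate model orders
# (one of the model-selection criteria reviewed above). The data, candidate models,
# and fold count are invented placeholders.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 - 2.0 * x**2 + 0.1 * rng.normal(size=x.size)

def cv_error(order, k=5):
    idx = np.arange(x.size)
    np.random.default_rng(0).shuffle(idx)                  # fixed shuffle so folds are reproducible
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], order)       # fit on the training folds
        errors.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))  # test on the held-out fold
    return float(np.mean(errors))

for order in range(1, 7):
    print(f"polynomial order {order}: 5-fold CV mean-squared error = {cv_error(order):.4f}")
# The order with the lowest cross-validation error would be selected.
```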