906 results for fixed regression
Statistical evaluation of the fixed concentration procedure for acute inhalation toxicity assessment
Abstract:
The conventional method for the assessment of acute inhalation toxicity (OECD Test Guideline 403, 1981) uses death of animals as an endpoint to identify the median lethal concentration (LC50). A new OECD Testing Guideline called the Fixed Concentration Procedure (FCP) is being prepared to provide an alternative to Test Guideline 403. Unlike Test Guideline 403, the FCP does not provide a point estimate of the LC50, but aims to identify an airborne exposure level that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonized System of Classification and Labelling scheme (GHS). The FCP has been validated using statistical simulation rather than by in vivo testing. The statistical simulation approach predicts the GHS classification outcome and the numbers of deaths and animals used in the test for imaginary substances with a range of LC50 values and dose-response curve slopes. This paper describes the FCP and reports the results from the statistical simulation study assessing its properties. It is shown that the procedure will be completed with considerably fewer deaths and less suffering than Test Guideline 403, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LC50 value.
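The simulation idea can be illustrated with a toy probit dose-response model, a common choice for such simulations (the guideline's actual model is not specified here): the probability of death at a given concentration depends on the substance's LC50 and dose-response slope, and group outcomes are sampled from it. All parameter values below are hypothetical.

```python
import math
import random

def death_probability(conc, lc50, slope):
    """Probit dose-response curve: P(death) at exposure concentration conc.

    Hypothetical model for illustration: the standard normal CDF of
    slope * log10(conc / LC50), so P = 0.5 exactly at the LC50.
    """
    z = slope * math.log10(conc / lc50)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_group(conc, lc50, slope, n_animals=5, rng=None):
    """Simulate one exposure group and return the number of deaths."""
    rng = rng or random.Random(0)
    return sum(rng.random() < death_probability(conc, lc50, slope)
               for _ in range(n_animals))
```

Repeating such simulated tests over a grid of imaginary LC50 values and slopes is what allows the expected classification outcome and animal usage of a procedure to be estimated without in vivo testing.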
Abstract:
The fixed-dose procedure (FDP) was introduced as OECD Test Guideline 420 in 1992, as an alternative to the conventional median lethal dose (LD50) test for the assessment of acute oral toxicity (OECD Test Guideline 401). The FDP uses fewer animals and causes less suffering than the conventional test, while providing information on acute toxicity that allows substances to be ranked according to the EU hazard classification system. Recently the FDP has been revised, with the aim of providing further reductions and refinements, and classification according to the criteria of the Globally Harmonized Hazard Classification and Labelling scheme (GHS). This paper describes the revised FDP and analyses its properties, as determined by a statistical modelling approach. The analysis shows that the revised FDP generally classifies substances for acute oral toxicity in the same, or a more stringent, hazard class than that based on the LD50 value, according to either the GHS or the EU classification scheme. The likelihood of achieving the same classification is greatest for substances with a steep dose-response curve and a median toxic dose (TD50) close to the LD50. The revised FDP usually requires five or six animals, with two or fewer dying as a result of treatment in most cases.
Abstract:
Objectives: To assess the potential source of variation that surgeons may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, so the outcome was a sparse binary variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS®. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by 'surgeon' interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
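One of the workarounds mentioned above, treating all effects as fixed, can be sketched in pure Python: a maximum-likelihood logistic regression on synthetic trial-like data, with surgeon entered as dummy variables and the continuous covariate centred. This is an illustrative sketch with invented data and effect sizes, not the SAS analysis of the trials.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Maximum-likelihood logistic regression via batch gradient ascent.

    X: feature rows (each starting with 1.0 for the intercept); y: 0/1
    outcomes.  All effects are fixed here: no random surgeon term.
    """
    p, n = len(X[0]), len(X)
    beta = [0.0] * p
    for _ in range(epochs):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))   # fitted complication probability
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Invented data: 3 surgeons, binary type of operation, one centred covariate.
rng = random.Random(1)
surgeon_shift = [0.0, 0.5, -0.5]            # hypothetical surgeon effects
rows, outcomes = [], []
for _ in range(400):
    s = rng.randrange(3)
    op = rng.randrange(2)
    age_c = rng.gauss(0.0, 1.0)             # covariate already centred/scaled
    eta = -2.2 + 0.8 * op + surgeon_shift[s] + 0.3 * age_c
    complication = 1 if rng.random() < 1.0 / (1.0 + math.exp(-eta)) else 0
    # Columns: intercept, operation type, centred age, surgeon-1 and surgeon-2 dummies.
    rows.append([1.0, float(op), age_c, float(s == 1), float(s == 2)])
    outcomes.append(complication)

coef = fit_logistic(rows, outcomes)
```

Centring the covariate and treating surgeon as fixed dummies, as above, sidesteps the convergence problems of the random-effects fit at the cost of not estimating a between-surgeon variance component.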
Abstract:
Multiple regression analysis is a statistical technique that allows a dependent variable to be predicted from more than one independent variable and identifies the most influential independent variables. Using experimental data, in this study multiple regression analysis is applied to predict the room mean velocity and to determine the parameters that most influence the velocity. More than 120 experiments for four different heat source locations were carried out in a test chamber with a high-level wall-mounted air supply terminal at air change rates of 3-6 ach. The influence of environmental parameters such as supply air momentum, room heat load, Archimedes number and local temperature ratio was examined by two methods: a simple regression analysis incorporated into scatter matrix plots, and multiple stepwise regression analysis. It is concluded that, when a heat source is located along the jet centre line, the supply momentum mainly influences the room mean velocity regardless of the plume strength. However, when the heat source is located outside the jet region, the local temperature ratio (the inverse of the local heat removal effectiveness) is a major influencing parameter.
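As a minimal illustration of the technique (not the study's actual analysis), the sketch below fits a multiple regression by solving the normal equations with Gaussian elimination; a stepwise variant would additionally add or drop predictors according to a selection criterion.

```python
def multiple_regression(X, y):
    """Least-squares fit y ≈ b0 + b1*x1 + ... via the normal equations.

    X: list of predictor rows; y: list of responses.  A column of ones is
    prepended for the intercept and (X'X) b = X'y is solved by Gaussian
    elimination with partial pivoting.
    """
    A = [[1.0] + list(row) for row in X]
    p, n = len(A[0]), len(A)
    # Build the normal equations M b = v.
    M = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(p)]
         for r in range(p)]
    v = [sum(A[i][r] * y[i] for i in range(n)) for r in range(p)]
    # Forward elimination with partial pivoting.
    for k in range(p):
        piv = max(range(k, p), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        v[k], v[piv] = v[piv], v[k]
        for r in range(k + 1, p):
            f = M[r][k] / M[k][k]
            for c in range(k, p):
                M[r][c] -= f * M[k][c]
            v[r] -= f * v[k]
    # Back substitution on the upper-triangular system.
    beta = [0.0] * p
    for k in reversed(range(p)):
        beta[k] = (v[k] - sum(M[k][c] * beta[c] for c in range(k + 1, p))) / M[k][k]
    return beta
```

With predictors such as supply air momentum and local temperature ratio as the columns of `X` and room mean velocity as `y`, the returned coefficients quantify each parameter's influence.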
Abstract:
The successful implementation of just-in-time (JIT) purchasing policy in many industries has prompted many companies that still use the economic order quantity (EOQ) purchasing policy to ponder whether they should switch to the JIT purchasing policy. Despite existing studies that directly compare the costs of the EOQ and JIT purchasing systems, this decision is still difficult to make, especially when price discounts have to be considered. JIT purchasing may not always be successful, even for plants that have experienced, or can take advantage of, reductions in physical space. Hence, the objective of this study is to extend a classical EOQ model with price discounts in order to derive the EOQ–JIT cost indifference point. The approach was tested through a survey and a case study conducted in the ready-mixed concrete industry in Singapore.
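Under simple textbook assumptions (the EOQ buyer pays a discounted unit price but incurs ordering and holding costs; the JIT buyer pays a higher unit price but holds no stock), a cost indifference point in annual demand can be derived in closed form. The prices and costs below are hypothetical, and the paper's own model may well differ.

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2*D*S / H)."""
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

def annual_cost_eoq(demand, order_cost, holding_cost, unit_price):
    """Total annual cost at the optimal EOQ: ordering + holding + purchase.

    At Q*, the ordering plus holding cost equals sqrt(2*D*S*H).
    """
    q = eoq(demand, order_cost, holding_cost)
    return demand / q * order_cost + q / 2.0 * holding_cost + unit_price * demand

def indifference_demand(order_cost, holding_cost, p_eoq, p_jit):
    """Annual demand at which EOQ (discounted price p_eoq) and JIT
    (price p_jit, no inventory cost) are equally expensive:

        sqrt(2*D*S*H) + p_eoq*D = p_jit*D  =>  D = 2*S*H / (p_jit - p_eoq)**2
    """
    return 2.0 * order_cost * holding_cost / (p_jit - p_eoq) ** 2
```

Below the indifference demand, the price discount cannot offset JIT's inventory savings; above it, the EOQ policy with discount is cheaper.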
Abstract:
The relationships between wheat protein quality and baking properties of 20 flour samples were studied for two breadmaking processes: a hearth bread test and the Chorleywood Bread Process (CBP). The strain hardening index obtained from dough inflation measurements, the proportion of unextractable polymeric protein, and mixing properties were among the variables found to be good indicators of protein quality and suitable for predicting the potential baking quality of wheat flours. By partial least squares regression, flour and dough test variables were able to account for 71-93% of the variation in crumb texture, form ratio and volume of hearth loaves made using optimal mixing and fixed proving times. These protein quality variables were, however, not related to the volume of loaves produced by the CBP using mixing to constant work input and proving to constant height. On the other hand, variation in crumb texture of CBP loaves (54-55%) could be explained by protein quality. The results underline that the choice of baking procedure and loaf characteristics is vital in assessing the protein quality of flours. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
An evaluation of milk urea nitrogen (MUN) as a diagnostic of protein feeding in dairy cows was performed using mean treatment data (n = 306) from 50 production trials conducted in Finland (n = 48) and Sweden (n = 2). Data were used to assess the effects of diet composition and certain animal characteristics on MUN and to derive relationships between MUN and the efficiency of N utilization for milk production and urinary N excretion. Relationships were developed using regression analysis based either on models of fixed factors or on mixed models that account for between-experiment variation. Dietary crude protein (CP) content was the best single predictor of MUN and accounted for proportionately 0.778 of total variance [MUN (mg/dL) = -14.2 + 0.17 × dietary CP content (g/kg dry matter)]. The proportion of variation explained by this relationship increased to 0.952 when a mixed model including the random effects of study was used, but both the intercept and slope remained unchanged. Use of rumen degradable CP concentration in excess of predicted requirements, or the ratio of dietary CP to metabolizable energy, as single predictors did not explain more of the variation in MUN (R-2 = 0.767 or 0.778, respectively) than dietary CP content. Inclusion of other dietary factors with dietary CP content in bivariate models resulted in only marginally better predictions of MUN (R-2 = 0.785 to 0.804). Closer relationships existed between MUN and dietary factors when nutrients (CP to metabolizable energy) were expressed as concentrations in the diet, rather than absolute intakes. Furthermore, both MUN and MUN secretion (g/d) provided more accurate predictions of urinary N excretion (R-2 = 0.787 and 0.835, respectively) than measurements of the efficiency of N utilization for milk production (R-2 = 0.769).
It is concluded that dietary CP content is the most important nutritional factor influencing MUN, and that measurements of MUN can be utilized as a diagnostic of protein feeding in the dairy cow and used to predict urinary N excretion.
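The fixed-factor prediction equation reported above is easy to apply directly; the inverse form (a hypothetical convenience, not given in the abstract) recovers the dietary CP content implied by a target MUN.

```python
def predict_mun(cp_g_per_kg_dm):
    """MUN (mg/dL) from dietary crude protein content (g/kg dry matter),
    using the fixed-factor regression reported above:
        MUN = -14.2 + 0.17 * CP
    """
    return -14.2 + 0.17 * cp_g_per_kg_dm

def cp_for_target_mun(mun_mg_per_dl):
    """Inverse of the equation above (hypothetical convenience): the
    dietary CP content (g/kg DM) implied by a target MUN."""
    return (mun_mg_per_dl + 14.2) / 0.17
```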
Abstract:
We report rates of regression and associated findings in a population-derived group of 255 children aged 9-14 years participating in a prevalence study of autism spectrum disorders (ASD): 53 with narrowly defined autism, 105 with broader ASD and 97 with non-ASD neurodevelopmental problems, drawn from those with special educational needs within a population of 56,946 children. Language regression was reported in 30% of those with narrowly defined autism, 8% with broader ASD and less than 3% with developmental problems without ASD. A smaller group of children was identified who had undergone a less clear-cut setback. Regression was associated with higher rates of autistic symptoms and a deviation in developmental trajectory. Regression was not associated with epilepsy or gastrointestinal problems.
Abstract:
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward constrained regression manner. The leave-one-out (LOO) test score is used for kernel selection. The jackknife parameter estimator, subject to a positivity constraint check, is used to estimate the single parameter introduced at each forward step. As such, the proposed approach is simple to implement and the associated computational cost is very low. An illustrative example is employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to that of the classical Parzen window estimate.
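A highly simplified sketch of the forward-construction idea: treat the full Parzen window estimate as the regression target and greedily add kernels from the data that best match it. The plain squared-error score on a grid below stands in for the paper's leave-one-out test score and jackknife parameter estimator.

```python
import math

def gaussian_kernel(x, c, h):
    """Univariate Gaussian kernel of width h centred at c."""
    return math.exp(-(x - c) ** 2 / (2.0 * h * h)) / (h * math.sqrt(2.0 * math.pi))

def parzen_window(data, h):
    """Full-sample Parzen window density estimate (the regression target)."""
    n = len(data)
    return lambda x: sum(gaussian_kernel(x, c, h) for c in data) / n

def sparse_fit(data, h, n_kernels, grid):
    """Greedy forward selection sketch: add kernel centres one at a time
    to best match the Parzen window target, in squared error on a grid.
    Weights are kept equal and nonnegative; a full implementation would
    re-estimate each new kernel's weight at every step.
    """
    target = parzen_window(data, h)
    t = [target(x) for x in grid]
    chosen = []
    for _ in range(n_kernels):
        best_c, best_err = None, float("inf")
        for c in data:
            if c in chosen:
                continue
            trial = chosen + [c]
            w = 1.0 / len(trial)
            err = sum((ti - w * sum(gaussian_kernel(x, cc, h) for cc in trial)) ** 2
                      for x, ti in zip(grid, t))
            if err < best_err:
                best_c, best_err = c, err
        chosen.append(best_c)
    return chosen
```

On well-separated clusters the greedy pass picks one representative centre per cluster, which is the intuition behind matching a full-sample estimate with far fewer kernels.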
Abstract:
Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
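The multiplicative nonnegative quadratic programming step can be sketched for the problem min ½wᵀBw − bᵀw with w ≥ 0. When B and b have nonnegative entries, as for Gaussian kernel Gram matrices, the multiplicative update keeps the weights nonnegative and drives redundant ones toward zero. This is a bare-bones form; the full algorithm also handles normalisation of the weights.

```python
def mnqp(B, b, iters=500):
    """Multiplicative update for min ½ wᵀBw − bᵀw subject to w ≥ 0,
    assuming all entries of B and b are nonnegative.  The update
    w_i ← w_i * b_i / (Bw)_i leaves w_i unchanged where the gradient
    component (Bw − b)_i is zero and shrinks it where it is positive.
    """
    n = len(b)
    w = [1.0 / n] * n                       # uniform nonnegative start
    for _ in range(iters):
        Bw = [sum(B[i][j] * w[j] for j in range(n)) for i in range(n)]
        w = [wi * bi / bwi if bwi > 0 else 0.0
             for wi, bi, bwi in zip(w, b, Bw)]
    return w
```

Weights driven to zero correspond to kernels dropped from the model, which is how this step can reduce the model size further after selection.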
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression application. The locally regularised orthogonal least squares (LROLS) algorithm with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
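Evaluating a fully complex-valued RBF network is straightforward in a language with native complex arithmetic. The sketch below shows only the forward pass, with a real Gaussian basis applied to the complex distance and complex weights; in the paper, the centres and weights would be supplied by the LROLS-with-D-optimality procedure, and the parameter names here are illustrative.

```python
import math

def cv_rbf(x, centres, weights, width):
    """Fully complex-valued RBF network output y = Σ_k w_k φ(|x − c_k|),
    with complex input x, centres c_k, and weights w_k, and a real
    Gaussian basis φ(r) = exp(−r² / width²).
    """
    out = 0.0 + 0.0j
    for c, w in zip(centres, weights):
        r2 = abs(x - c) ** 2                # squared modulus of x − c
        out += w * math.exp(-r2 / (width * width))
    return out
```

Because only the weights are complex while the basis responses are real, the output inherits both magnitude and phase from the weights, which is what makes such a network suitable for identifying complex-valued channels.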
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
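The D-optimality criterion favours kernel subsets whose design matrix Φₛ has a large det(ΦₛᵀΦₛ). A direct, unoptimised sketch of greedy selection under that criterion follows; the paper embeds the criterion in an orthogonal forward selection, which is far cheaper than recomputing determinants as done here.

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n = len(A)
    d = 1.0
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[piv][k]) < 1e-12:
            return 0.0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return d

def d_optimal_selection(Phi, n_select):
    """Greedy D-optimality sketch: at each step, add the candidate column
    of the kernel design matrix Phi that maximises det(ΦₛᵀΦₛ) over the
    selected set S.  Near-duplicate kernels make the Gram matrix nearly
    singular, so the criterion naturally avoids them.
    """
    n_rows, n_cols = len(Phi), len(Phi[0])
    selected = []
    for _ in range(n_select):
        best_j, best_d = None, -1.0
        for j in range(n_cols):
            if j in selected:
                continue
            S = selected + [j]
            G = [[sum(Phi[i][a] * Phi[i][b] for i in range(n_rows)) for b in S]
                 for a in S]
            dj = det(G)
            if dj > best_d:
                best_j, best_d = j, dj
        selected.append(best_j)
    return selected
```

The selected kernels would then receive nonnegative weights, e.g. via the multiplicative nonnegative quadratic programming algorithm named in the abstract.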