988 results for regression discontinuity design
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
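As a rough illustration of the kind of model the study derives (not its actual data or regressors), the sketch below fits welding time to a few geometric features by linear least squares; all feature names and values are hypothetical.

```python
import numpy as np

# Hypothetical assemblies: [total weld length (m), number of welds, plate thickness (mm)]
X = np.array([
    [2.0, 4, 6.0],
    [5.5, 9, 8.0],
    [1.2, 2, 5.0],
    [7.3, 12, 10.0],
    [3.8, 6, 6.0],
])
y = np.array([0.9, 2.6, 0.5, 3.9, 1.7])  # welding time in hours (illustrative)

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef)

# Predicted welding time for a new assembly (same hypothetical units).
x_new = np.array([1.0, 4.0, 7, 8.0])  # [intercept, length, welds, thickness]
print("predicted welding time:", x_new @ coef)
```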
Abstract:
AIMS: Published incidences of acute mountain sickness (AMS) vary widely. Reasons for this variation, and predictive factors of AMS, are not well understood. We aimed to identify predictive factors that are associated with the occurrence of AMS, and to test the hypothesis that study design is an independent predictive factor of AMS incidence. We did a systematic search (Medline, bibliographies) for relevant articles in English or French, up to April 28, 2013. Studies of any design reporting on AMS incidence in humans without prophylaxis were selected. Data on incidence and potential predictive factors were extracted by two reviewers and cross-checked by four reviewers. Associations between predictive factors and AMS incidence were sought through bivariate and multivariate analyses for different study designs separately. The association between AMS incidence and study design was assessed using multiple linear regression. RESULTS: We extracted data from 53,603 subjects from 34 randomized controlled trials, 44 cohort studies, and 33 cross-sectional studies. In randomized trials, the median of AMS incidences without prophylaxis was 60% (range, 16%-100%); mode of ascent and population were significantly associated with AMS incidence. In cohort studies, the median of AMS incidences was 51% (0%-100%); geographical location was significantly associated with AMS incidence. In cross-sectional studies, the median of AMS incidences was 32% (0%-68%); mode of ascent and maximum altitude were significantly associated with AMS incidence. In a multivariate analysis, study design (p=0.012), mode of ascent (p=0.003), maximum altitude (p<0.001), population (p=0.002), and geographical location (p<0.001) were significantly associated with AMS incidence. Age, sex, speed of ascent, duration of exposure, and history of AMS were inconsistently reported and therefore not further analyzed. CONCLUSIONS: Reported incidences and identifiable predictive factors of AMS depend on study design.
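The multivariate analysis described in the conclusions can be pictured as a multiple linear regression with study design entered as a categorical factor. A hedged sketch on illustrative data (not the review's dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: one row per study.
df = pd.DataFrame({
    "incidence": [60, 55, 48, 35, 30, 70, 52, 28],       # AMS incidence (%)
    "design": ["rct", "rct", "cohort", "cross", "cross",
               "rct", "cohort", "cross"],                 # study design
    "max_altitude": [4500, 5200, 4000, 3500, 3800,
                     5500, 4200, 3600],                   # metres
})

# C(design) enters study design as a categorical factor, so its association
# with incidence is estimated while adjusting for maximum altitude.
model = smf.ols("incidence ~ C(design) + max_altitude", data=df).fit()
print(model.summary())
```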
Abstract:
Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is somewhat impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with a graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter. It is also possible to create experimental designs for cases where the variables are totally user-defined, say for a customised filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications is the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
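The workflow the software supports, a designed experiment followed by a linear regression model of the response, can be sketched as follows. The variables, levels and responses are illustrative, and this is not the LTDoE/LTRead API:

```python
import itertools
import numpy as np

# Coded levels (-1/+1) for three hypothetical filtration variables:
# pressure, slurry concentration, filtration time (two-level full factorial).
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Illustrative measured response (e.g. cake moisture, %) for the 8 runs.
y = np.array([21.3, 19.8, 20.1, 18.2, 22.0, 20.5, 20.9, 18.9])

# Fit a first-order model with intercept by least squares.
A = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("effects (intercept, pressure, concentration, time):", coef)
```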
Abstract:
Introduction: The Chronic Kidney Disease Outcomes and Practice Patterns Study (CKDopps) is an international observational, prospective cohort study involving patients with chronic kidney disease (CKD) stages 3-5 (estimated glomerular filtration rate (eGFR) < 60 ml/min/1.73 m2), with a major focus on care during the advanced CKD period (eGFR < 30 ml/min/1.73 m2). During a 1-year enrollment period, each of the 22 selected clinics will enroll up to 60 advanced CKD patients (eGFR < 30 ml/min/1.73 m2 and not dialysis-dependent) and 20 earlier-stage CKD patients (eGFR between 30 and 59 ml/min/1.73 m2). Exclusion criteria: age < 18 years, chronic dialysis, or prior kidney transplant. The study timeline includes up to one year for enrollment of patients at each clinic, starting at the end of 2013, followed by up to 2-3 years of patient follow-up with collection of detailed longitudinal patient-level data, annual clinic practice-level surveys, and patient surveys. Analyses will apply regression models to evaluate the contribution of patient-level and clinic practice-level factors to study outcomes, and will utilize instrumental-variable-type techniques when appropriate. Conclusion: Launching in 2013, CKDopps Brazil will study advanced CKD care in a random selection of nephrology clinics across Brazil to gain an understanding of variation in care across the country and, as part of a multinational study, to identify optimal treatment practices to slow kidney disease progression and improve outcomes during the transition to end-stage kidney disease.
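The instrumental-variable-type techniques mentioned above can be illustrated with textbook two-stage least squares (2SLS) on synthetic data; the variable names and effect sizes are hypothetical and unrelated to CKDopps:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                        # instrument (e.g. clinic practice)
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous treatment
y = 1.5 * x + u + rng.normal(size=n)          # outcome

# Stage 1: project the endogenous regressor on the instrument.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the fitted values.
X_hat = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print("2SLS estimate of the treatment effect:", beta[1])  # near the true 1.5
```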
Abstract:
With the recent progress and rapid growth of the field of communication, the design of antennas for small mobile terminals with enhanced radiation characteristics is acquiring great importance. Compactness, efficiency and high data rate capacity are the major criteria for new-generation antennas. The challenging task for microwave scientists and engineers is to design a compact printed radiating structure having broadband behavior along with good efficiency and enhanced gain. Printed antenna technology has gained popularity among antenna scientists since the introduction of planar transmission lines in the mid-seventies. When the antenna is viewed through a transmission line concept, the mechanism behind any electromagnetic radiator is quite simple and interesting. Any electromagnetic system with a discontinuity radiates electromagnetic energy; the size, shape and orientation of the discontinuities control the radiation characteristics of the system, such as radiation pattern, gain and polarization, and the system can be either resonant or non-resonant. This thesis deals with antennas that are developed from a class of transmission lines known as the coplanar strip (CPS), a planar analogue of the parallel-pair transmission line. The specialty of the CPS is its symmetric structure compared with other transmission lines, which makes antenna structures developed from the CPS quite simple to design and fabricate. Structural modifications on either metallic strip of the CPS result in different antennas. The first part of the thesis discusses single-band and dual-band designs derived from open-ended slot lines, which are well suited to 2.4 and 5.2 GHz WLAN applications. The second section of the study is directed toward the development of enhanced-gain dipoles. A single-band dipole and a wideband enhanced-gain dipole suitable for the 5.2/5.8 GHz band and imaging applications are developed and discussed. The last part of the thesis discusses the development of directional UWB antennas. Three different types of ultra-compact UWB antennas are developed, and almost all of the frequency-domain and time-domain analyses of the structures are discussed.
Abstract:
Optimum experimental designs depend on the design criterion, the model and the design region. The talk will consider the design of experiments for regression models in which there is a single response with the explanatory variables lying in a simplex. One example is experiments on various compositions of glass, such as those considered by Martin, Bursnall, and Stillman (2001). Because of the highly symmetric nature of the simplex, the class of models that are of interest, typically Scheffé polynomials (Scheffé 1958), are rather different from those of standard regression analysis. The optimum designs are also rather different, inheriting a high degree of symmetry from the models. In the talk I hope to discuss a variety of models for such experiments. Then I will discuss constrained mixture experiments, when not all of the simplex is available for experimentation. Other important aspects include mixture experiments with extra non-mixture factors and the blocking of mixture experiments. Much of the material is in Chapter 16 of Atkinson, Donev, and Tobias (2007). If time and my research allow, I hope to finish with a few comments on design when the responses, rather than the explanatory variables, lie in a simplex. References: Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press. Martin, R. J., M. C. Bursnall, and E. C. Stillman (2001). Further results on optimal and efficient designs for constrained mixture experiments. In A. C. Atkinson, B. Bogacka, and A. Zhigljavsky (Eds.), Optimal Design 2000, pp. 225–239. Dordrecht: Kluwer. Scheffé, H. (1958). Experiments with mixtures. Journal of the Royal Statistical Society, Ser. B 20, 344–360.
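For concreteness, a second-order Scheffé polynomial for three mixture components has no intercept and consists of the linear terms plus all pairwise products. A minimal sketch fitting such a model to illustrative data on a simplex-lattice design with a centroid point:

```python
import numpy as np

# Mixture proportions (each row sums to 1) and a hypothetical response.
X = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([4.1, 5.0, 3.2, 6.3, 4.8, 5.5, 5.9])

# Scheffé quadratic model: no intercept; linear terms plus x_i * x_j products.
x1, x2, x3 = X.T
F = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(F, y, rcond=None)
print("Scheffé coefficients:", coef)
```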
Determinants of fruit and vegetable intake in England: a re-examination based on quantile regression
Abstract:
Objective: To examine the sociodemographic determinants of fruit and vegetable (F&V) consumption in England and determine the differential effects of socioeconomic variables at various parts of the intake distribution, with a special focus on severely inadequate intakes. Design: Quantile regression, expressing F&V intake as a function of sociodemographic variables, is employed. Here, quantile regression flexibly allows variables such as ethnicity to exert effects on F&V intake that vary depending on existing levels of intake. Setting: The 2003 Health Survey for England. Subjects: Data were from 11 044 adult individuals. Results: The influence of particular sociodemographic variables is found to vary significantly across the intake distribution. We conclude that women consume more F&V than men, Asians and Blacks more than Whites, and co-habiting individuals more than single-living ones. Increased incomes and education also boost intake. However, the key general finding of the present study is that the influence of most variables is relatively weak in the area of greatest concern, i.e. among those with the most inadequate intakes in any reference group. Conclusions: Our findings emphasise the importance of allowing the effects of socioeconomic drivers to vary across the intake distribution. The main finding, that variables which exert significant influence on F&V intake at other parts of the conditional distribution have a relatively weak influence at the lower tail, is cause for concern. It implies that in any defined group, those consuming the least F&V are hard to influence using campaigns or policy levers.
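The approach can be sketched with statsmodels' quantile regression: fitting the same specification at a low quantile and at the median shows how coefficients may differ across the intake distribution, which is the paper's central point. Data and variable names below are synthetic, not the HSE 2003 data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "income": rng.normal(30, 10, n),        # illustrative covariates
    "female": rng.integers(0, 2, n),
})
df["intake"] = 2 + 0.05 * df["income"] + 0.5 * df["female"] + rng.gamma(2, 1, n)

# Fit at the 10th percentile (severely inadequate intakes) and at the median;
# the coefficients are allowed to differ across quantiles.
q10 = smf.quantreg("intake ~ income + female", df).fit(q=0.10)
q50 = smf.quantreg("intake ~ income + female", df).fit(q=0.50)
print(q10.params, q50.params, sep="\n")
```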
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression application. The locally regularised orthogonal least squares (LROLS) algorithm with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
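For intuition, here is a heavily simplified, real-valued sketch of orthogonal forward selection with a D-optimality term, the combination described above: each candidate is Gram-Schmidt orthogonalised against the already-selected terms and scored by its error-reduction ratio plus a beta-weighted log-determinant increment. The paper's algorithm is fully complex-valued and locally regularised; the weighting beta and the data here are illustrative assumptions.

```python
import numpy as np

def ofs_d_optimality(P, y, n_terms, beta=1e-3):
    """Greedy forward selection over candidate regressors (columns of P)."""
    n, m = P.shape
    selected, W = [], []
    for _ in range(n_terms):
        best, best_w, best_score = None, None, -np.inf
        for k in range(m):
            if k in selected:
                continue
            w = P[:, k].astype(float).copy()
            for wj in W:                        # orthogonalise vs chosen terms
                w -= (wj @ P[:, k]) / (wj @ wj) * wj
            ww = w @ w
            if ww < 1e-12:                      # numerically dependent; skip
                continue
            err = (w @ y) ** 2 / (ww * (y @ y))  # error-reduction ratio
            score = err + beta * np.log(ww)      # D-optimality contribution
            if score > best_score:
                best, best_w, best_score = k, w, score
        selected.append(best)
        W.append(best_w)
    return selected

rng = np.random.default_rng(2)
P = rng.normal(size=(200, 10))                  # candidate regressor matrix
y = 2 * P[:, 3] - P[:, 7] + 0.1 * rng.normal(size=200)
print("selected terms:", ofs_d_optimality(P, y, n_terms=2))  # expect {3, 7}
```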
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed fully CVRBF network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ a complex-valued quadrature phase shift keying (QPSK) modulation scheme.
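The classifier construction ranks candidates by a multi-class Fisher ratio of class separability. A minimal sketch of that measure on synthetic real-valued data follows; the paper applies it inside a complex-valued orthogonal forward selection loop, which is not reproduced here.

```python
import numpy as np

def fisher_ratio(x, labels):
    """Between-class over within-class variance of one candidate feature x."""
    classes = np.unique(labels)
    mean_all = x.mean()
    between = sum((x[labels == c].mean() - mean_all) ** 2 * (labels == c).sum()
                  for c in classes)
    within = sum(((x[labels == c] - x[labels == c].mean()) ** 2).sum()
                 for c in classes)
    return between / within

rng = np.random.default_rng(3)
labels = np.repeat(np.arange(4), 50)        # four classes, as with QPSK symbols
x = labels + 0.3 * rng.normal(size=200)     # a feature that separates them well
print("Fisher ratio:", fisher_ratio(x, labels))
```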
Abstract:
An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. The model parameters in each forward regression step are initially estimated via orthogonal least squares (OLS) and then tuned with a new gradient-descent learning algorithm, based on basis pursuit, that minimises the l1 norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on the D-optimality criterion that is effective in ensuring model robustness, are integrated with the forward regression. As a consequence, the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
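As a rough stand-in for the l1-norm tuning step, the sketch below initialises the parameters by least squares and refines them with proximal gradient descent (ISTA) on a lasso-type objective, where soft thresholding shrinks small coefficients toward zero. The paper derives its own basis-pursuit-based gradient update, so treat this only as the textbook analogue; the penalty weight and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 8))
theta_true = np.array([3.0, 0, 0, -2.0, 0, 0, 0, 0])
y = A @ theta_true + 0.05 * rng.normal(size=100)

theta = np.linalg.lstsq(A, y, rcond=None)[0]   # OLS initialisation
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant
lam = 0.1                                      # illustrative l1 penalty weight
for _ in range(200):
    grad = A.T @ (A @ theta - y)               # gradient of the LS term
    z = theta - step * grad
    theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("tuned parameters (near-zero entries pruned by the l1 term):")
print(np.round(theta, 3))
```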
Abstract:
This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and is based on M-estimation of the parameters. The M-estimator is a classical robust parameter estimation technique for tackling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion from experimental design that tackles ill-conditioning in the model structure. Orthogonal forward regression, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions, with improved performance via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
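A minimal sketch of the IRLS route to an M-estimator mentioned above, using Huber weights; the threshold c = 1.345 is the usual Huber tuning constant and the data are synthetic.

```python
import numpy as np

def irls_huber(A, y, c=1.345, n_iter=20):
    theta = np.linalg.lstsq(A, y, rcond=None)[0]    # LS initialisation
    for _ in range(n_iter):
        r = y - A @ theta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale estimate (MAD)
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)            # Huber weights: downweight outliers
        Aw = A * w[:, None]                         # weighted normal equations
        theta = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
    return theta

rng = np.random.default_rng(5)
A = np.column_stack([np.ones(100), rng.normal(size=100)])
y = A @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[:5] += 15.0                                       # gross outliers
print("robust estimate:", irls_huber(A, y))         # close to [1, 2]
```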
Abstract:
In this brief, we propose an orthogonal forward regression (OFR) algorithm based on the principles of branch and bound (BB) and A-optimality experimental design. At each forward regression step, each candidate from a pool of candidate regressors, referred to as S, is evaluated in turn with three possible decisions: 1) one of these is selected and included into the model; 2) some of these remain in S for evaluation in the next forward regression step; and 3) the rest are permanently eliminated from S. Based on the BB principle in combination with an A-optimality composite cost function for model structure determination, a simple adaptive diagnostics test is proposed to determine the decision boundary between 2) and 3). As such, the proposed algorithm can significantly reduce the computational cost of the A-optimality OFR algorithm. Numerical examples are used to demonstrate the effectiveness of the proposed algorithm.
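The sketch below illustrates the flavour of one forward step: each candidate gets an A-optimality composite cost (negative error-reduction ratio plus a penalty proportional to 1/(w'w), the candidate's contribution to the trace of the inverse design matrix), the best one is selected, and candidates above an adaptive threshold are permanently pruned. The thresholding rule here is a simple stand-in, not the paper's diagnostic test; the data and weighting are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
P = rng.normal(size=(150, 12))                 # candidate pool S (columns)
y = P[:, 2] - 2 * P[:, 9] + 0.1 * rng.normal(size=150)

lam = 1e-2                                     # illustrative A-optimality weight
costs = {}
for k in range(P.shape[1]):
    w = P[:, k]
    err = (w @ y) ** 2 / ((w @ w) * (y @ y))   # error-reduction ratio
    costs[k] = -err + lam / (w @ w)            # A-optimality penalty: 1/(w'w)

best = min(costs, key=costs.get)               # decision 1): select
threshold = np.median(list(costs.values()))    # stand-in adaptive boundary
keep = [k for k in costs if k != best and costs[k] <= threshold]
print("selected:", best)
print("kept for next step:", keep)             # decision 3): the rest are pruned
```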
Abstract:
In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel to parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: the A-optimality and the D-optimality criteria, each combined with the regularized orthogonal least squares algorithm, and the PRESS statistic combined with the regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalization scheme in orthogonal least squares or regularized orthogonal least squares is preserved, so the new algorithms are computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
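Of the two criteria, the PRESS statistic is easy to show concretely: for a linear-in-the-parameters model, the leave-one-out residuals follow in closed form from the hat matrix, so PRESS needs no refitting. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(60), rng.normal(size=(60, 3))])
y = X @ np.array([1.0, 2.0, 0.0, -1.0]) + 0.2 * rng.normal(size=60)

H = X @ np.linalg.solve(X.T @ X, X.T)          # hat (projection) matrix
resid = y - H @ y                              # ordinary residuals
loo = resid / (1.0 - np.diag(H))               # closed-form leave-one-out residuals
press = np.sum(loo ** 2)
print("PRESS:", press)                         # smaller => better generalisation
```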
Abstract:
In this project, two broad facets of the design of a methodology for performance optimization of indexable carbide inserts were examined: physical destructive testing and software simulation. For the physical testing, statistical research techniques were used for the design of the methodology. A five-step method, beginning with problem definition and proceeding through system identification, statistical model formation, data collection, and statistical analyses and results, was elaborated upon in depth. Set-up and execution of an experiment with a compression machine were examined, together with roadblocks to quality data collection and possible solutions for curbing them. The 2^k factorial design was illustrated and recommended for process improvement. Instances of first-order and second-order response surface analyses were encountered. In the case of curvature, a test for curvature significance with center point analysis was recommended. Process optimization with the method of steepest ascent and central composite designs, or process robustness studies based on response surface analyses, were also recommended. For the simulation test, the AdvantEdge program was identified as the most used software for tool development. Challenges to the efficient application of this software were identified and possible solutions proposed. In conclusion, both software simulation and physical testing were recommended to meet the objective of the project.
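As a small illustration of the recommended designs, the sketch below generates a 2^k two-level factorial with replicated center points; the center points support the curvature significance test mentioned above. The factors and number of replicates are illustrative assumptions.

```python
import itertools
import numpy as np

k = 3                                           # e.g. load, speed, feed (coded units)
corners = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
centers = np.zeros((4, k))                      # replicated center points
design = np.vstack([corners, centers])
print(design)

# Curvature check idea: if the mean response at the center points differs
# significantly from the mean at the corner points, the first-order model is
# inadequate and a second-order design (e.g. central composite) is needed.
```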