883 results for Variable sample size X̄ control chart
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Fisiopatologia em Clínica Médica - FMB
Abstract:
Objective: To test six variations of the Goldberg equation for evaluating the underreporting of energy intake (EI) among obese women on the waiting list for bariatric surgery, considering variations in resting metabolic rate (RMR), physical activity, and food intake levels in group and individual approaches. Methods: One hundred obese women aged 20 to 45 years (33.3 ± 6.08) recruited from a bariatric surgery waiting list participated in the study. The underreporting assessment was based on the ratio between reported energy intake and the RMR measured by indirect calorimetry (rEI:RMR), which should be compatible with the predicted physical activity level (PAL). Six approaches were used to define the cutoff points, taking into account variances in the components of the equation rEI:RMR = PAL as a function of the assumed PAL, sample size (n), and measured or estimated RMR. Results: The underreporting percentage varied from 55% to 97%, depending on the approach used to generate the cutoff points. The ratio rEI:RMR and the estimated PAL of the sample were significantly different (p = 0.001). Sixty-one percent of the women reported an EI lower than their RMR. The PAL variable significantly affected the cutoff point, leading to different proportions of underreporting. Using measured rather than estimated RMR in the equation did not change the proportion of underreporting. The individual approach was less sensitive than the group approach. Conclusion: RMR did not interfere with underreporting estimates. However, PAL variations were responsible for significant differences in the cutoff point. Thus, PAL should be considered when estimating underreporting, and even though the individual approach is less sensitive than the group approach, it may be a useful tool for clinical practice.
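The Goldberg cutoff logic described above can be sketched in code. This is a minimal illustration under stated assumptions, not the study's implementation: the default coefficients of variation are commonly cited literature values rather than values from this abstract, and the function name and defaults are hypothetical.

```python
import math

def goldberg_cutoffs(pal, n, d, cv_ei=23.0, cv_bmr=8.5, cv_pal=15.0):
    """Lower/upper 95% cutoffs for the ratio rEI:RMR (Goldberg-style).

    pal  -- assumed physical activity level
    n    -- number of subjects (n=1 gives the individual approach)
    d    -- number of days of dietary assessment
    cv_* -- within-subject CVs in percent; defaults are illustrative
            literature values, not values from this study
    """
    # combined coefficient of variation (%) of the rEI:RMR ratio
    s = math.sqrt(cv_ei ** 2 / d + cv_bmr ** 2 + cv_pal ** 2)
    half_width = 1.96 * (s / 100.0) / math.sqrt(n)
    return pal * math.exp(-half_width), pal * math.exp(half_width)

# Group approach (n=100) vs individual approach (n=1): the individual
# lower cutoff is much farther below PAL, hence less sensitive to
# under-reporting, matching the abstract's conclusion.
lo_group, _ = goldberg_cutoffs(pal=1.55, n=100, d=3)
lo_indiv, _ = goldberg_cutoffs(pal=1.55, n=1, d=3)
```

A reported rEI:RMR below the lower cutoff flags a subject (or a group mean) as an implausibly low intake report.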
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Engenharia de Produção - FEB
Abstract:
This paper presents a modeling effort for developing safety performance models (SPMs) for urban intersections in three major Brazilian cities. The proposed methodology for calibrating SPMs was divided into the following steps: defining the safety study objective, choosing predictive variables and sample size, data acquisition, defining the model expression and model parameters, and model evaluation. Among the predictive variables explored in the calibration phase were exposure variables (AADT), number of lanes, number of approaches, and central median status. SPMs were obtained for three cities: Fortaleza, Belo Horizonte, and Brasília. The SPMs developed for signalized intersections in Fortaleza and Belo Horizonte had the same structure and the same most significant independent variables (AADT entering the intersection and number of lanes); in addition, the coefficients of the best models fell within the same range of values. For Brasília, because of the sample size, signalized and unsignalized intersections were grouped, and the AADT was split into minor and major approaches, which were the most significant variables. This paper also evaluated SPM transferability to other jurisdictions. The SPMs for signalized intersections from Fortaleza and Belo Horizonte were recalibrated (in terms of the calibration factor C_x) to the city of Porto Alegre. The models were adjusted following the Highway Safety Manual (HSM) calibration procedure and yielded C_x values of 0.65 and 2.06 for the Fortaleza and Belo Horizonte SPMs, respectively. This paper presents the experience and future challenges of initiatives to develop SPMs in Brazil, which can serve as a guide for other countries at the same stage in this subject. (C) 2014 Elsevier Ltd. All rights reserved.
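The HSM recalibration step mentioned above reduces to a simple ratio: total observed crashes at the calibration sites divided by the total predicted by the model. A minimal sketch, with hypothetical crash counts:

```python
def hsm_calibration_factor(observed, predicted):
    """HSM calibration factor C_x: sum of observed crash counts at the
    calibration sites divided by the sum predicted by the SPM."""
    if len(observed) != len(predicted) or not observed:
        raise ValueError("need matching, non-empty crash counts")
    return sum(observed) / sum(predicted)

# Hypothetical counts for four intersections in a new jurisdiction.
# C_x > 1 means the transferred SPM under-predicts there (as with the
# Belo Horizonte model, C_x = 2.06); C_x < 1 means it over-predicts
# (as with the Fortaleza model, C_x = 0.65).
cx = hsm_calibration_factor([4, 7, 2, 5], [5.5, 6.5, 3.5, 4.5])
```

Predictions for the new jurisdiction are then the SPM output multiplied by C_x.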
Abstract:
Introduction: Preterm labor (PTL) and preterm premature rupture of membranes (PPROM) cause severe complications for both mother and fetus. Among the risk factors associated with PTL and PPROM, genetic predisposition has been gaining importance. However, the association between polymorphic genes and the pathogenesis of PTL and PPROM remains elusive. A better understanding of the genetic mechanisms underlying these adverse pregnancy outcomes may enable the identification of high-risk patients and allow new approaches to minimize the deleterious effects of prematurity. Aim: To determine the association between a maternal IL-6 gene polymorphism and the occurrence of PTL and PPROM. Patients and Methods: The study included 109 patients with a prior history of PTL and/or PPROM who delivered prematurely at the Obstetrical Care Unit of Botucatu Medical School, UNESP, between 2003 and 2012. The control group consisted of 68 patients who delivered at term, matched to the case group by age, ethnicity, and sex of the newborn. Oral swabs (Cath-All, Epicentre Biotechnologies) were collected for analysis of genetic polymorphisms by PCR. Statistical tests were performed to compare genotype, clinical, and sociodemographic data between the groups. A p-value of <0.05 was considered significant. Results: The sociodemographic characteristics of both groups were homogeneously distributed. The frequency of the polymorphic allele C, associated with lower production of IL-6 and therefore thought to be protective against PTL and PPROM, was 32.5% in the study group and 30.9% in the control group, without statistically significant differences. Conclusion: Considering the sample size included in this study, the frequency of the mutated allele is similar in pregnant women who delivered at term and in those with gestational complications such as PTL and PPROM.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This article examines new product development (NPD) in small and medium-sized Brazilian enterprises (SMEs) in two technology-based industries: medical devices and process control automation devices. A conceptual model that categorizes the factors contributing to the success of a new product was established. The data were collected from a sample of 62 Brazilian SMEs. The conceptual model was tested to examine the relationships between NPD practices and new product success. Data analysis reveals that new product success in medical device companies is related to organizational characteristics such as NPD proficiency and marketing skills, while in process control automation device companies it is related to a large degree to product differentiation, innovation, and the capability to analyze the targeted market. Due to the relatively small sample size, caution should be exercised when interpreting the results.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Electrical conductivity has been proposed as a rapid test to evaluate seed vigor; however, few studies have addressed methodologies for its use with seeds of medicinal plants such as chamomile. The objective of this research was to evaluate the electrical conductivity of chamomile seeds as affected by different imbibition times and sample sizes. For initial seed characterization, moisture content, germination, and vigor (first count of germination) were evaluated. Electrical conductivity was then evaluated as affected by imbibition time (6, 12, 24, and 48 hours) and the number of seeds per sample (25, 50, 75, and 100). A completely randomized design with four replications was used, arranged as a 4 x 4 factorial. Means were compared by the Tukey test at 5% probability. It was concluded that the electrical conductivity of chamomile seeds is affected by the number of seeds per sample and by imbibition time independently.
Abstract:
The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using several variations of DupER Augmentation (based on different imputation methodologies, deletion rates, and duplication rates), analyzed in BILOG-MG 3, and the results are compared to those obtained from analyzing the raw data. Additional manipulated variables include test length and sample size. Estimates are compared using seven different evaluative criteria. Results are mixed and inconclusive. DupER-augmented data tend to result in larger root mean squared errors (RMSEs) and lower correlations between estimates and parameters for both item and ability parameters. However, some DupER variations produce estimates that are much less biased than those obtained from the raw data alone. For one DupER variation, DupER produced better results for low-ability simulees and worse results for those with high abilities. Findings, limitations, and recommendations for future studies are discussed. Specific recommendations for future studies include applying DupER Augmentation (1) to empirical data and (2) with additional IRT models, and (3) analyzing the efficacy of the procedure for different item and ability parameter distributions.
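The Duplicate-Erase-Replace idea can be sketched as follows. This is an illustrative reconstruction, not the study's code: the marginal proportion-correct imputation shown here is only one of the several imputation methodologies the study varies, and all names are hypothetical.

```python
import random

def duper_augment(responses, erase_rate=0.2, dup_copies=1, seed=0):
    """Sketch of DupER augmentation for a 0/1 item-response matrix:
    Duplicate each examinee's row, Erase a random fraction of the
    duplicated responses, and Replace them by imputation (here, a
    draw from each item's observed proportion-correct)."""
    rng = random.Random(seed)
    n_items = len(responses[0])
    # marginal proportion-correct per item, used for imputation
    p = [sum(row[j] for row in responses) / len(responses)
         for j in range(n_items)]
    augmented = [list(row) for row in responses]  # keep the raw rows
    for _ in range(dup_copies):
        for row in responses:
            dup = list(row)                        # Duplicate
            for j in range(n_items):
                if rng.random() < erase_rate:      # Erase
                    # Replace: impute from the item's marginal
                    dup[j] = 1 if rng.random() < p[j] else 0
            augmented.append(dup)
    return augmented

data = [[1, 0, 1], [0, 0, 1], [1, 1, 1], [0, 1, 0]]
aug = duper_augment(data, erase_rate=0.3, dup_copies=1)
```

The augmented matrix (raw rows plus one perturbed copy of each) would then be passed to the IRT calibration software in place of the raw data.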
Abstract:
Observations of cosmic-ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL using the correlation of ultra-high-energy cosmic rays (UHECRs) with objects drawn from the Veron-Cetty Veron catalog. In this paper we report on the use of three catalog-independent methods to search for anisotropy. The 2pt-L, 2pt+, and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic-ray data sets to study the impact of sample size and magnetic smearing on their results, accounting for both angular and energy resolutions. If the sources of UHECRs follow the same large-scale structure as ordinary galaxies in the local Universe, and if UHECRs are deflected by no more than a few degrees, a study of mock maps suggests that these three methods can efficiently respond to the resulting anisotropy with a P-value of 1.0% or smaller with data sets of as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010, we examined the 20, 30, ..., 110 highest-energy events, with a corresponding minimum energy threshold of about 49.3 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method, and 1.1% using the 3pt method, for the 100 highest-energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest-energy cosmic rays.
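In its simplest form, a two-point self-clustering statistic counts pairs of arrival directions closer than some angle: clustered skies give an excess of close pairs over isotropic expectation. A minimal sketch under that reading (not the collaboration's implementation, which scans over angular scales and energy thresholds and calibrates P-values on isotropic simulations); the event coordinates below are hypothetical.

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation (radians) between two sky directions,
    each given as (right ascension, declination) in radians."""
    cos_psi = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.acos(max(-1.0, min(1.0, cos_psi)))  # clamp rounding

def two_point_counts(events, psi_max):
    """Number of event pairs separated by less than psi_max radians;
    the basic ingredient of a 2pt autocorrelation statistic."""
    n_pairs = 0
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            if angular_separation(*events[i], *events[j]) < psi_max:
                n_pairs += 1
    return n_pairs

# Hypothetical directions: the first two are ~0.57 deg apart, the
# third is far from both, so only one close pair is counted.
events = [(0.0, 0.0), (0.01, 0.0), (1.2, 0.4)]
close_pairs = two_point_counts(events, psi_max=0.1)
```

A P-value would then come from comparing this count against the distribution obtained from many isotropically generated mock data sets of the same size.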
Abstract:
Background: The evaluation of associations between genotypes and diseases in a case-control framework plays an important role in genetic epidemiology. This paper focuses on the evaluation of the homogeneity of both genotypic and allelic frequencies. The traditional test used to check allelic homogeneity is known to be valid only under Hardy-Weinberg equilibrium, a property that may not hold in practice. Results: We first describe the flaws of the traditional (chi-squared) tests for both allelic and genotypic homogeneity. Besides the known problem with the allelic procedure, we show that whenever these tests are used, an incoherence may arise: sometimes the genotypic homogeneity hypothesis is not rejected, but the allelic hypothesis is. As we argue, this is logically impossible. Some recently proposed methods implicitly rely on the assumption that this does not happen. In an attempt to correct this incoherence, we describe an alternative frequentist approach that is appropriate even when Hardy-Weinberg equilibrium does not hold. It is then shown that the problem remains and is intrinsic to frequentist procedures. Finally, we introduce the Full Bayesian Significance Test to test both hypotheses and prove that the incoherence cannot happen with these new tests. To illustrate this, all five tests are applied to real and simulated datasets. Using power analysis, we show that the Bayesian method is comparable to the frequentist one and has the advantage of being coherent. Conclusions: Contrary to more traditional approaches, the Full Bayesian Significance Test for association studies provides a simple, coherent, and powerful tool for detecting associations.
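The two traditional chi-squared tests discussed above can be sketched as follows. The genotype counts are hypothetical; collapsing genotypes into alleles is precisely the step whose validity as a test input requires Hardy-Weinberg equilibrium, which is the flaw the abstract describes. (The Full Bayesian Significance Test itself is not sketched here.)

```python
def chi2_stat(table):
    """Pearson chi-squared statistic for a 2 x K contingency table
    (cases vs controls across K categories)."""
    k = len(table[0])
    row = [sum(r) for r in table]
    col = [sum(table[i][j] for i in range(2)) for j in range(k)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(k):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def allele_table(genotype_rows):
    """Collapse genotype counts [n_AA, n_Aa, n_aa] per group into
    allele counts [n_A, n_a]: each AA carries two A alleles, each
    Aa one of each, each aa two a alleles."""
    return [[2 * g[0] + g[1], g[1] + 2 * g[2]] for g in genotype_rows]

cases = [30, 50, 20]      # hypothetical AA, Aa, aa counts
controls = [25, 50, 25]
geno_stat = chi2_stat([cases, controls])                  # 2 df test
allel_stat = chi2_stat(allele_table([cases, controls]))   # 1 df test
```

Because the two statistics are referred to chi-squared distributions with different degrees of freedom, borderline data can reject one homogeneity hypothesis but not the other, which is the incoherence the paper analyzes.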
Abstract:
The starting point of this article is the question "How can we retrieve fingerprints of rhythm in written texts?" We address this problem in the case of Brazilian and European Portuguese. These two dialects of Modern Portuguese share the same lexicon, and most of the sentences they produce are superficially identical. Yet they are conjectured, on linguistic grounds, to implement different rhythms. We show that this linguistic question can be formulated as a problem of model selection in the class of variable length Markov chains. To carry out this approach, we compare texts from European and Brazilian Portuguese. These texts are first encoded according to some basic rhythmic features of the sentences, which can be retrieved automatically. This is an entirely new approach from the linguistic point of view. Our statistical contribution is the introduction of the smallest maximizer criterion, a constant-free procedure for model selection. As a by-product, this provides a solution to the problem of the optimal choice of the penalty constant when using the BIC to select a variable length Markov chain. Besides proving the consistency of the smallest maximizer criterion as the sample size diverges, we also conduct a simulation study comparing our approach with both standard BIC selection and the Peres-Shields order estimation. Applied to the linguistic sample constituted for our case study, the smallest maximizer criterion assigns different context-tree models to the two dialects of Portuguese. The features of the selected models are compatible with current conjectures discussed in the linguistic literature.
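The BIC side of the model-selection problem can be illustrated for fixed-order Markov chains; a variable length Markov chain generalizes this by letting the context length vary per state, but the penalized-likelihood trade-off is the same. A minimal sketch with a hypothetical binary rhythmic encoding (not the smallest maximizer criterion itself, which removes the need to fix the penalty constant):

```python
import math

def bic_markov(seq, order, alphabet):
    """BIC score of a fixed-order Markov chain fitted to seq:
    maximized log-likelihood minus 0.5 * (free parameters) * log n.
    Higher is better; the penalty grows with the model's order."""
    n = len(seq)
    counts = {}
    for i in range(order, n):
        ctx = tuple(seq[i - order:i])           # conditioning context
        counts.setdefault(ctx, {}).setdefault(seq[i], 0)
        counts[ctx][seq[i]] += 1
    loglik = 0.0
    for nxt in counts.values():
        tot = sum(nxt.values())
        for c in nxt.values():
            loglik += c * math.log(c / tot)     # MLE transition probs
    n_params = (len(alphabet) ** order) * (len(alphabet) - 1)
    return loglik - 0.5 * n_params * math.log(n)

# A strictly alternating sequence: an order-1 chain predicts it
# perfectly, so its BIC beats the order-0 (i.i.d.) model despite
# the larger penalty.
seq = [0, 1] * 50
bic0 = bic_markov(seq, 0, [0, 1])
bic1 = bic_markov(seq, 1, [0, 1])
```

Selecting the order (or context tree) that maximizes such a score is standard BIC selection; the penalty constant 0.5 is the conventional choice whose optimality the smallest maximizer criterion addresses.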