864 results for "minimizes the chi-square"


Relevance: 100.00%

Abstract:

"NAVWEPS report 7770. NOTS TP 2749."

Relevance: 100.00%

Abstract:

When the data are counts or frequencies of particular events and can be expressed as a contingency table, they can be analysed using the chi-square distribution. When applied to a 2 × 2 table the test is approximate, and care is needed when the expected frequencies are small: either apply Yates' correction or use Fisher's exact test. Larger contingency tables can also be analysed with this method. Note that it is a serious statistical error to use any of these tests on measurement data!
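As a sketch of the decision rule above (the counts are hypothetical), SciPy provides both the Yates-corrected chi-square test and Fisher's exact test:

```python
# Hypothetical 2x2 contingency table of counts (e.g. treatment vs outcome).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[12, 5],
                  [8, 15]])

# Chi-square test with Yates' continuity correction (default for 2x2 tables).
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=True)

# If any expected frequency is small (a common rule of thumb is < 5),
# prefer Fisher's exact test instead of the approximate chi-square test.
if expected.min() < 5:
    odds_ratio, p = fisher_exact(table)
else:
    p = p_chi2
```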

Relevance: 100.00%

Abstract:

An unfolding method for linear intercept distributions and section area distributions was implemented for structures with spherical grains. Although the unfolding routine depends on the grain shape, structures with spheroidal grains can also be treated by this routine; grains of non-spheroidal shape can be treated only approximately. Software was developed in two parts: the first part calculates the probability matrix, and the second part uses this matrix and minimizes the chi-square. The results are presented with any number of size classes, as required. The probability matrix was determined by means of linear intercept and section area distributions created by computer simulation. Using curve fitting, the probability matrix for spheres of any size could be determined. Two kinds of tests were carried out to prove the efficiency of the technique. The theoretical tests represent ideal cases; the software was able to find the proposed grain size distribution exactly. In the second test, a structure was simulated in a computer and images of its slices were used to produce the corresponding linear intercept and section area distributions, which were then unfolded. This test is a better simulation of reality. The results show deviations from the real size distribution, caused by statistical fluctuations. The unfolding of the linear intercept distribution works perfectly, but the unfolding of the section area distribution does not, due to a failure in the chi-square minimization: the minimization method uses a matrix inversion routine, and the matrix generated by this procedure cannot be inverted. Another minimization method must be used.
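The chi-square minimization in the second part amounts to a least-squares solve. As a sketch (the probability matrix and distributions below are toy stand-ins, and non-negative least squares via `scipy.optimize.nnls` is merely one candidate for the "other minimization method" the abstract calls for, since it avoids explicit matrix inversion):

```python
# Sketch of the unfolding step: a known probability matrix P maps a
# (hypothetical) true size distribution x to a measured histogram y.
# Minimizing chi-square here is a least-squares solve; scipy.optimize.nnls
# never inverts P explicitly and keeps the class counts non-negative.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_classes = 5
P = np.triu(rng.uniform(0.1, 1.0, (n_classes, n_classes)))  # toy matrix
x_true = np.array([5.0, 10.0, 20.0, 10.0, 5.0])
y = P @ x_true                                              # ideal measurement

x_est, residual = nnls(P, y)  # minimizes ||P x - y||^2 subject to x >= 0
```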


Relevance: 100.00%

Abstract:

In this paper, we consider the non-central chi-square chart with two-stage sampling. During the first stage, one item of the sample is inspected and, depending on the result, the sampling is either interrupted or it goes on to the second stage, where the remaining sample items are inspected and the non-central chi-square statistic is computed. The proposed chart is not only more sensitive than the joint X̄ and R charts, but operationally simpler too, particularly when appropriate devices, such as go/no-go gauges, can be used to decide whether the sampling should go on to the second stage. (c) 2004 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Throughout this article, it is assumed that the non-central chi-square chart with two-stage sampling (TSS chi-square chart) is employed to monitor a process where the observations of the quality characteristic of interest X are independent and identically normally distributed with mean μ and variance σ². The process starts with the mean and the variance on target (μ = μ₀; σ² = σ₀²), but at some random time in the future an assignable cause shifts the mean from μ₀ to μ₁ = μ₀ ± δσ₀, δ > 0, and/or increases the variance from σ₀² to σ₁² = γ²σ₀², γ > 1. Before the assignable cause occurs, the process is in a state of statistical control (the in-control state). As with the Shewhart charts, samples of size n₀ + 1 are taken from the process at regular time intervals. The sampling is performed in two stages. At the first stage, the first item of the i-th sample is inspected. If its X value, say X_i1, is close to the target value (|X_i1 − μ₀| < w₀σ₀, w₀ > 0), the sampling is interrupted. Otherwise, at the second stage, the remaining n₀ items are inspected and the following statistic is computed:

W_i = Σ_{j=2}^{n₀+1} (X_ij − μ₀ + ξ_i σ₀)², i = 1, 2, …

Let d be a positive constant; then ξ_i = d if X_i1 > μ₀, otherwise ξ_i = −d. A signal is given at sample i if |X_i1 − μ₀| > w₀σ₀ and W_i > k_Chi σ₀², where k_Chi is the factor used in determining the upper control limit of the non-central chi-square chart. If devices such as go/no-go gauges can be used, then measurements are required only when the sampling goes to the second stage. Let P be the probability of deciding that the process is in control and P_i, i = 1, 2, the probability of deciding that the process is in control at stage i of the sampling procedure. Thus

P = P₁ + P₂ − P₁P₂, where P₁ = Pr[μ₀ − w₀σ₀ ≤ X ≤ μ₀ + w₀σ₀] and P₂ = Pr[W ≤ k_Chi σ₀²].

During the in-control period, W/σ₀² follows a non-central chi-square distribution with n₀ degrees of freedom and non-centrality parameter λ₀ = n₀d², i.e. W/σ₀² ~ χ²_{n₀}(λ₀). During the out-of-control period, W/σ₁² follows a non-central chi-square distribution with n₀ degrees of freedom and non-centrality parameter λ₁ = n₀(δ + ξ)²/γ². The effectiveness of a control chart in detecting a process change can be measured by the average run length (ARL), i.e. the speed with which the chart detects process shifts. The ARL of the proposed chart is easily determined because the number of samples before a signal is a geometrically distributed random variable with parameter 1 − P, that is, ARL = 1/(1 − P). It is shown that the performance of the proposed chart is better than that of the joint X̄ and R charts. Furthermore, if the TSS chi-square chart is used for monitoring diameters, volumes, weights, etc., then appropriate devices, such as go/no-go gauges, can be used to decide whether the sampling should go to the second stage. When the process is stable and the joint X̄ and R charts are in use, the monitoring becomes monotonous, because an X̄ or R value rarely falls outside the control limits. The natural consequence is that the user pays less and less attention to the steps required to obtain the X̄ and R values, and in some cases this lack of attention can result in serious mistakes. The TSS chi-square chart has the advantage that most samplings are interrupted; consequently, most of the time the user will be working with attributes. Our experience shows that inspecting one item by attribute is much less monotonous than measuring four or five items at each sampling.
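The in-control ARL implied by these formulas can be evaluated numerically; in the sketch below the parameter values n0, d, w0 and kChi are illustrative choices, not values taken from the paper:

```python
# In-control ARL of the TSS chi-square chart:
#   P1 = Pr[|X - mu0| < w0*sigma0] for a standardized first item,
#   P2 = Pr[W/sigma0^2 <= kChi], with W/sigma0^2 non-central chi-square
#        with n0 degrees of freedom and lambda0 = n0*d^2,
#   P  = P1 + P2 - P1*P2, and ARL = 1/(1 - P).
from scipy.stats import norm, ncx2

n0, d, w0, kChi = 4, 1.0, 1.0, 20.0   # illustrative parameters

P1 = norm.cdf(w0) - norm.cdf(-w0)     # stage-1 "interrupt sampling" probability
lambda0 = n0 * d**2                   # in-control non-centrality parameter
P2 = ncx2.cdf(kChi, df=n0, nc=lambda0)
P = P1 + P2 - P1 * P2                 # probability of no signal at a sample
ARL = 1.0 / (1.0 - P)                 # geometric run length
```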

Relevance: 100.00%

Abstract:

Traditionally, an X̄-chart is used to control the process mean and an R-chart to control the process variance. However, these charts are not sensitive to small changes in process parameters. A good alternative is the exponentially weighted moving average (EWMA) control chart for controlling the process mean and variability, which is very effective in detecting small process disturbances. In this paper, we propose a single chart based on the non-central chi-square statistic, which is more effective than the joint X̄ and R charts in detecting assignable causes that change the process mean and/or increase variability. It is also shown that the EWMA control chart based on a non-central chi-square statistic is more effective in detecting both increases and decreases in mean and/or variability.
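A minimal sketch of the generic EWMA recursion that underlies such charts (not the paper's specific non-central chi-square EWMA statistic): each plotted value is z_t = λ·x_t + (1 − λ)·z_{t−1}, so recent observations are weighted more heavily while older ones decay geometrically.

```python
# Generic EWMA smoothing of a monitored statistic; lam (the smoothing
# constant, often 0.05-0.3 in SPC practice) and z0 are illustrative.
def ewma(xs, lam=0.2, z0=0.0):
    z, out = z0, []
    for x in xs:
        z = lam * x + (1 - lam) * z   # z_t = lam*x_t + (1-lam)*z_{t-1}
        out.append(z)
    return out

values = ewma([0.0, 0.0, 1.0, 1.0])
```

Because the statistic accumulates information across samples, a sustained small shift moves the EWMA steadily toward its control limit, which is why it outperforms Shewhart-type charts for small disturbances.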

Relevance: 100.00%

Abstract:

The purpose of this research is to develop a new statistical method to determine the minimum set of rows (R) in an R × C contingency table of discrete data that explains the dependence of the observations. The statistical power of the method will be determined empirically by computer simulation to judge its efficiency over existing methods. The method will be applied to data on DNA fragment length variation at six VNTR loci in over 72 populations from five major racial groups of humans (a total sample size of over 15,000 individuals, each sample having at least 50 individuals). DNA fragment lengths grouped in bins will form the basis for studying inter-population DNA variation within the racial groups and, where variations are significant, will provide a rigorous re-binning procedure for forensic computation of DNA profile frequencies that takes intra-racial DNA variation among populations into account.

Relevance: 100.00%

Abstract:

Objective: This paper explores the effects of perceived stage of cancer (PSOC) on carers' anxiety and depression during the patients' final year. Methods: A consecutive sample of patients and carers (N=98) was surveyed at regular intervals regarding PSOC, anxiety, and depression, using the Hospital Anxiety and Depression Scale. Means were compared by gender using the Mann-Whitney U-test. The chi-square test was used to analyse categorical data. Agreement between carers' and patients' PSOC was estimated using kappa statistics. Correlations between carers' PSOC and their anxiety and depression were calculated using Spearman's rank correlation. Results: Over time, an increasing proportion of carers reported that the cancer was advanced, culminating at 43% near death. Agreement regarding PSOC was fair (kappa=0.29-0.34) until near death (kappa=0.21). Carers' anxiety increased over the year; depression increased in the final 6 months. Females were more anxious (p=0.049, 6 months; p=0.009, 3 months) than males, and more depressed until 1 month before death. The proportion of carers reporting moderate-severe anxiety almost doubled over the year to 27%, with more females in this category at 6 months (p=0.05). Carers with moderate-severe depression increased from 6% to 15% over the year. Increased PSOC was weakly correlated with increased anxiety and depression. Conclusions: Carers' anxiety exceeded depression in severity during advanced cancer. Females generally experienced greater anxiety and depression. Carers were more realistic than patients regarding the ultimate outcome, which was reflected in their declining mental health, particularly near the end.
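Cohen's kappa, the agreement statistic used above, corrects raw agreement for agreement expected by chance; a from-scratch sketch on hypothetical paired PSOC ratings:

```python
# Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is the
# observed agreement and p_e the agreement expected from marginal frequencies.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical carer vs patient perceived-stage ratings.
carers   = ["early", "advanced", "advanced", "early"]
patients = ["early", "early", "advanced", "early"]
k = cohens_kappa(carers, patients)
```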

Relevance: 100.00%

Abstract:

This article introduces a "pseudo-classical" notion of modelling non-separability. This form of non-separability can be viewed as lying between separability and quantum-like non-separability. Non-separability is formalized in terms of the non-factorizability of the underlying joint probability distribution. A decision criterion for determining the non-factorizability of the joint distribution is related to determining the rank of a matrix, alongside another approach based on the chi-square goodness-of-fit test. This pseudo-classical notion of non-separability is discussed in terms of quantum games and concept combinations in human cognition.
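The rank-based criterion can be sketched directly: a joint probability matrix over two variables factorizes as p(x)q(y) exactly when it has rank 1. The distributions below are illustrative, not examples from the article:

```python
# A joint distribution over two variables, written as a probability matrix,
# is separable (factorizable into marginals p(x)q(y)) iff it has rank 1.
import numpy as np

def is_factorizable(P, tol=1e-10):
    return np.linalg.matrix_rank(P, tol=tol) == 1

px = np.array([0.3, 0.7])
qy = np.array([0.6, 0.4])
separable = np.outer(px, qy)          # rank-1 joint: product of marginals

non_separable = np.array([[0.5, 0.0],  # perfectly correlated joint:
                          [0.0, 0.5]]) # cannot be written as p(x)q(y)
```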

Relevance: 100.00%

Abstract:

Objectives To investigate the frequency of the ACTN3 R577X polymorphism in elite endurance triathletes, and whether ACTN3 R577X is significantly associated with performance time. Design Cross-sectional study. Methods Saliva samples, questionnaires, and performance times were collected for 196 elite endurance athletes who participated in the 2008 Kona Ironman championship triathlon. Athletes were of predominantly North American, European, and Australian origin. A one-way analysis of variance was conducted to compare performance times between genotype groups. Multiple linear regression analysis was performed to model the effect of questionnaire variables and genotype on performance time. Genotype and allele frequencies were compared to results from different populations using the chi-square test. Results Performance time did not differ significantly between genotype groups; age, sex, and continent of origin were significant predictors of finishing time (age and sex: p < 5 × 10−6; continent: p = 0.003), whereas genotype was not. Genotype and allele frequencies (RR 26.5%, RX 50.0%, XX 23.5%; R 51.5%, X 48.5%) did not differ significantly from those of Australian, Spanish, and Italian endurance athletes (p > 0.05), but differed significantly from those of Kenyan, Ethiopian, and Finnish endurance athletes (p < 0.01). Conclusions Genotype and allele frequencies agreed with those reported for endurance athletes of similar ethnic origin, supporting previous findings of an association between the 577X allele and endurance. However, the analysis of performance time suggests that ACTN3 does not alone influence endurance performance, or may have a complex effect on endurance performance due to a speed/endurance trade-off.

Relevance: 100.00%

Abstract:

Background The methylenetetrahydrofolate reductase (MTHFR) gene variant C677T has been implicated as a genetic risk factor in migraine susceptibility, particularly in migraine with aura. Migraine with and without aura (MA and MO) have many diagnostic characteristics in common. It is postulated that migraine symptomatic characteristics might themselves be influenced by MTHFR. Here we analysed the clinical profile, migraine symptoms, triggers, and treatments of 267 migraineurs previously genotyped for the MTHFR C677T variant. The chi-square test was used to analyse all potential relationships between genotype and migraine clinical variables. Regression analyses were performed to assess the association of C677T with all migraine clinical variables after adjusting for gender. Findings The homozygous TT genotype was significantly associated with MA (P < 0.0001) and unilateral head pain (P = 0.002), while the CT genotype was significantly associated with physical activity discomfort (P < 0.001) and stress as a migraine trigger (P = 0.002). Females with the TT genotype were significantly associated with unilateral head pain (P < 0.001), and females with the CT genotype were significantly associated with nausea (P < 0.001), osmophobia (P = 0.002), and the use of a natural remedy for migraine treatment (P = 0.003). Conversely, male migraineurs with the TT genotype experienced higher incidences of bilateral head pain (63% vs 34%) and were less likely to use a natural remedy as a migraine treatment compared to female migraineurs (5% vs 20%). Conclusions MTHFR genotype is associated with specific clinical variables of migraine, including unilateral head pain, physical activity discomfort, and stress.

Relevance: 100.00%

Abstract:

The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). Its comparable error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-IPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z=0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet. 
On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.

Relevance: 100.00%

Abstract:

The aim of this research, which focused on the Irish adult population, was to generate information for policymakers by applying statistical analyses and current technologies to oral health administrative and survey databases. Objectives included identifying socio-demographic influences on oral health and utilisation of dental services, comparing epidemiologically estimated dental treatment need with treatment provided, and investigating the potential of a dental administrative database to provide information on utilisation of services and the volume and types of treatment provided over time. Information was extracted from the claims databases for the Dental Treatment Benefit Scheme (DTBS) for employed adults and the Dental Treatment Services Scheme (DTSS) for less-well-off adults, the National Surveys of Adult Oral Health, and the 2007 Survey of Lifestyle Attitudes and Nutrition in Ireland. Factors associated with utilisation and retention of natural teeth were analysed using count data models and logistic regression. The chi-square test and Student's t-test were used to compare epidemiologically estimated need in a representative sample of adults with treatment provided. Differences were found in dental care utilisation and tooth retention by socio-economic status. An analysis of the five-year utilisation behaviour of a 2003 cohort of DTBS dental attendees revealed that age and being female were positively associated with visiting annually and with the number of treatments. The number of adults using the DTBS increased, and the mean number of treatments per patient decreased, between 1997 and 2008. As a percentage of overall treatments, restorations, dentures, and extractions decreased, while prophylaxis increased. Differences were found between epidemiologically estimated treatment need and treatment provided for those using the DTBS and DTSS. This research confirms the utility of survey and administrative data to generate knowledge for policymakers.
Public administrative databases have not been designed for research purposes, but they have the potential to provide a wealth of knowledge on treatments provided and utilisation patterns.