977 results for "Prediction error methods"


Relevance: 20.00%

Abstract:

Background and Purpose. This study evaluated an electromyographic technique for the measurement of muscle activity of the deep cervical flexor (DCF) muscles. Electromyographic signals were detected from the DCF, sternocleidomastoid (SCM), and anterior scalene (AS) muscles during performance of the craniocervical flexion (CCF) test, which involves performing 5 stages of increasing craniocervical flexion range of motion, the anatomical action of the DCF muscles. Subjects. Ten volunteers without known pathology or impairment participated in this study. Methods. Root-mean-square (RMS) values were calculated for the DCF, SCM, and AS muscles during performance of the CCF test. Myoelectric signals were recorded from the DCF muscles using bipolar electrodes placed over the posterior oropharyngeal wall. Reliability estimates of normalized RMS values were obtained by evaluating intraclass correlation coefficients and the normalized standard error of the mean (SEM). Results. A linear relationship was evident between the amplitude of DCF muscle activity and the incremental stages of the CCF test (F=239.04, df=36, P<.0001). Normalized SEMs in the range 6.7% to 10.3% were obtained for the normalized RMS values for the DCF muscles, providing evidence of the reliability of these variables. Discussion and Conclusion. This approach for obtaining a direct measure of the DCF muscles, which differs from those previously used, may be useful for the examination of these muscles in future electromyographic applications.
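The RMS computation at the core of this kind of analysis can be sketched as follows; the synthetic signals and the normalisation against a reference contraction are illustrative assumptions, not the study's recording protocol.

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of a signal epoch."""
    x = np.asarray(signal, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

def normalized_rms(stage_signal, reference_signal):
    """Stage RMS expressed as a percentage of a reference contraction."""
    return 100.0 * rms(stage_signal) / rms(reference_signal)

# Synthetic signals standing in for the recordings: amplitude grows
# with each of the 5 CCF stages (assumed data, not the study's).
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 2000)
stages = [rng.normal(0.0, 0.2 * (i + 1), 2000) for i in range(5)]
values = [normalized_rms(s, reference) for s in stages]
```

With an amplitude that grows across stages, the normalized RMS values increase roughly linearly, mirroring the linear trend reported in the abstract.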

Relevance: 20.00%

Abstract:

In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed for this task, and these algorithms have mainly been applied heuristically to cluster analysis problems. Further, a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
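EMMIX-GENE itself is a dedicated program; a minimal sketch of the underlying model-based idea, fitting normal mixtures and choosing the number of components by BIC (something a heuristic hierarchical clustering cannot do), might look like the following, with scikit-learn's `GaussianMixture` as a stand-in and synthetic data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic "samples x genes" matrix with two well-separated groups
# (assumed data, not the van 't Veer measurements).
group_a = rng.normal(0.0, 1.0, size=(20, 10))
group_b = rng.normal(3.0, 1.0, size=(20, 10))
X = np.vstack([group_a, group_b])

# Fit mixtures with 1-3 components and pick the number by BIC.
bics = {}
for g in (1, 2, 3):
    gm = GaussianMixture(n_components=g, covariance_type="diag",
                         random_state=0).fit(X)
    bics[g] = gm.bic(X)
best_g = min(bics, key=bics.get)
```

The BIC criterion penalises extra components, so the two-group structure is recovered without fixing the number of clusters in advance.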

Relevance: 20.00%

Abstract:

Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between an N-terminal transmembrane helix and a signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies.
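A toy hydrophobicity-based discriminant conveys the idea; the Kyte-Doolittle scale below is standard, but the window size, cutoff, and 12-residue length threshold are illustrative assumptions, not the paper's fitted discriminant functions.

```python
# Kyte-Doolittle hydropathy values (standard scale).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydrophobic_segment(seq, window=7, cutoff=1.5):
    """Return (start, end) of the first N-terminal stretch whose mean
    hydropathy exceeds `cutoff`, or None if there is no such stretch."""
    scores = [KD[a] for a in seq]
    for i in range(len(seq) - window + 1):
        if sum(scores[i:i + window]) / window > cutoff:
            j = i + window
            while j < len(seq) and KD[seq[j]] > 0:
                j += 1
            return i, j
    return None

def classify(seq):
    """Toy discriminant: long hydrophobic cores suggest a transmembrane
    helix; short ones suggest a signal peptide. The 12-residue threshold
    is an illustrative assumption, not the paper's fitted function."""
    seg = hydrophobic_segment(seq)
    if seg is None:
        return "neither"
    start, end = seg
    return "transmembrane helix" if end - start >= 12 else "signal peptide"
```

For example, a long poly-leucine core is flagged as a transmembrane helix, while a short hydrophobic stretch after a charged N-terminus is flagged as a signal peptide.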

Relevance: 20.00%

Abstract:

For zygosity diagnosis in the absence of genotypic data, or in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results that show good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of responses of a single informant (advantageous when one zygosity type is being oversampled); and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Out of 242 twins (from 121 like-sex pairs) where genotypic data were available for zygosity confirmation, only a single case was identified of incorrect zygosity assignment by the latent class algorithm. Zygosity assignment for that single case was identified by the LCA as uncertain (probability of being a monozygotic twin only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
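A minimal EM fit of a 2-class latent class model for binary questionnaire items might be sketched as follows; the item count, endorsement probabilities, and initialisation are assumptions for illustration, not the cohort's data or the authors' software.

```python
import numpy as np

def lca_two_class(X, n_iter=200, seed=0):
    """EM fit of a 2-class latent class model for binary items.
    Returns the class prior pi, item-endorsement probabilities theta
    (2 x p), and per-respondent posterior class probabilities."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.array([0.5, 0.5])
    theta = rng.uniform(0.3, 0.7, size=(2, p))
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class.
        log_post = (X @ np.log(theta).T
                    + (1 - X) @ np.log(1 - theta).T
                    + np.log(pi))
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update the prior and item-endorsement probabilities.
        pi = resp.mean(axis=0)
        theta = np.clip(resp.T @ X / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp

# Synthetic zygosity questionnaire: 300 respondents, 5 binary items;
# class 0 endorses items with probability 0.9, class 1 with 0.1
# (assumed values, not the Australian cohort's responses).
rng = np.random.default_rng(42)
z = rng.integers(0, 2, 300)
item_p = np.where(z[:, None] == 0, 0.9, 0.1)
X = (rng.random((300, 5)) < item_p).astype(float)
pi, theta, resp = lca_two_class(X)
```

The posterior column `resp[i]` is exactly the per-respondent assignment probability the abstract describes: it quantifies how confidently each single informant can be assigned, and flags uncertain cases for genotyping.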

Relevance: 20.00%

Abstract:

There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon, and by Zhang and Einstein, in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, or the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example.
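The classical ML recipe the paper invokes can be illustrated on a simple stand-in model; the exponential trace-length distribution and all numbers below are assumptions for illustration, not Pahl's window-sampling setup.

```python
import numpy as np

# For an exponential trace-length model, the MLE of the mean is the
# sample mean, and a Wald-type 95% interval follows from the Fisher
# information (synthetic data, illustrative only).
rng = np.random.default_rng(3)
true_mean = 2.0
lengths = rng.exponential(true_mean, size=500)

mle = lengths.mean()                      # MLE of the exponential mean
se = mle / np.sqrt(len(lengths))          # from the Fisher information
ci = (mle - 1.96 * se, mle + 1.96 * se)   # approximate 95% interval
```

Once an estimator is identified as the MLE, this asymptotic-normality argument is what makes confidence intervals "available through application of classical ML theory".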

Relevance: 20.00%

Abstract:

A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper seeks to demonstrate why Crofton's theorem should not be used to link the moments of the trace length distribution captured by scan line or areal mapping to the moments of the diametral distribution of joints represented as disks, and why it is incorrect to do so. The valid relationships, for areal or scan line mapping, between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would follow were Crofton's theorem assumed to apply. For areal mapping the relationship is fortuitously correct, but it is incorrect for scan line mapping.

Relevance: 20.00%

Abstract:

There are several competing methods commonly used to solve energy-grained master equations describing gas-phase reactive systems. When it comes to selecting an appropriate method for any particular problem, there is little guidance in the literature. In this paper we directly compare several variants of spectral and numerical integration methods from the point of view of the computer time required to calculate the solution and the range of temperature and pressure conditions under which the methods succeed. The test case used in the comparison is an important reaction in combustion chemistry and incorporates reversible and irreversible bimolecular reaction steps as well as isomerizations between multiple unimolecular species. While numerical integration of the ODE with a stiff ODE integrator is not the fastest method overall, it is the fastest method applicable to all conditions.
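A small example of the stiff-integrator approach singled out above; the two-isomer scheme and rate constants are an illustrative stand-in, not the paper's combustion test case.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-species master equation: isomers A and B interconvert, and B
# is lost irreversibly. Widely separated rates make the system stiff,
# which is why a BDF-type integrator is appropriate.
k_ab, k_ba, k_loss = 5.0, 1.0, 1e4

def rhs(t, y):
    a, b = y
    return [-k_ab * a + k_ba * b,
            k_ab * a - (k_ba + k_loss) * b]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-12)
final_total = sol.y[:, -1].sum()
```

With `k_loss` dominating, B stays in quasi-steady state and the total population decays at an effective rate near `k_ab`, so almost everything has reacted by t = 1; an explicit integrator would need tiny steps to track the fast mode, while BDF handles it cheaply.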

Relevance: 20.00%

Abstract:

Background: Although early in life there is little discernible difference in bone mass between boys and girls, at puberty sex differences are observed. It is uncertain whether these differences represent differences in bone mass or merely differences in anthropometric dimensions. Aim: The study aimed to identify whether sex independently affects bone mineral content (BMC) accrual in growing boys and girls. Three sites are investigated: total body (TB), femoral neck (FN) and lumbar spine (LS). Subjects and methods: 85 boys and 67 girls were assessed annually for seven consecutive years. BMC was assessed by dual-energy X-ray absorptiometry (DXA). Biological age was defined as years from age at peak height velocity (PHV). Data were analysed using a hierarchical (random effects) modelling approach. Results: When biological age, body size and body composition were controlled for, boys had statistically significantly higher TB and FN BMC at all maturity levels (p < 0.05). No independent sex differences were found at the LS (p > 0.05). Conclusion: Although a statistically significant sex effect is observed, it is smaller than the measurement error, and thus the sex difference is debatable. In general, sex differences are explained by anthropometric differences.

Relevance: 20.00%

Abstract:

This study compared accumulated oxygen deficit data derived using two different exercise protocols, with the aim of producing a less time-consuming test specifically for use with athletes. Six road and four track male endurance cyclists performed two series of cycle ergometer tests. The first series involved five 10 min sub-maximal cycle exercise bouts, a V̇O2peak test and a 115% V̇O2peak test. Data from these tests were used to estimate the accumulated oxygen deficit according to the calculations of Medbø et al. (1988). In the second series of tests, participants performed a 15 min incremental cycle ergometer test followed, 2 min later, by a 2 min variable resistance test in which they completed as much work as possible while pedalling at a constant rate. Analysis revealed that the accumulated oxygen deficit calculated from the first series of tests was higher (P < 0.02) than that calculated from the second series: 52.3 +/- 11.7 and 43.9 +/- 6.4 ml·kg⁻¹, respectively (mean +/- s). Other significant differences between the two protocols were observed for V̇O2peak, total work and maximal heart rate; all were higher during the modified protocol (P
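The Medbø-style calculation behind the first protocol can be sketched as follows; all workloads, times, and uptake values below are invented for illustration.

```python
import numpy as np

# Submaximal bouts: steady-state VO2 measured at several workloads
# (synthetic numbers, assumed linear VO2-power relationship).
powers = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # W
vo2 = np.array([1.5, 2.1, 2.7, 3.3, 3.9])                # L/min

# Linear regression of steady-state VO2 on power.
slope, intercept = np.polyfit(powers, vo2, 1)

# Extrapolate the O2 demand at a supramaximal workload (assumed to
# correspond to 115% of VO2peak in the first protocol).
supra_power = 400.0
demand = intercept + slope * supra_power        # L/min

# Deficit = extrapolated demand minus accumulated measured uptake.
test_minutes = 2.5                              # time to exhaustion
measured_o2 = 9.0                               # accumulated uptake, L
deficit = demand * test_minutes - measured_o2   # accumulated O2 deficit, L
```

The five submaximal bouts exist only to anchor this regression line, which is why the abstract's shortened incremental protocol saves so much testing time.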

Relevance: 20.00%

Abstract:

Background and aims: Hip fracture is a devastating event in terms of outcome in the elderly, and the best predictor of hip fracture risk is hip bone density, usually measured by dual X-ray absorptiometry (DXA). However, bone density can also be ascertained from computerized tomography (CT) scans, and mid-thigh scans are frequently employed to assess the muscle and fat composition of the lower limb. Therefore, we examined whether it was possible to predict hip bone density using mid-femoral bone density. Methods: Subjects were 803 ambulatory white and black women and men, aged 70-79 years, participating in the Health, Aging and Body Composition (Health ABC) Study. Bone mineral content (BMC, g) and volumetric bone mineral density (vBMD, mg/cm³) of the mid-femur were obtained by CT, whereas BMC and areal bone mineral density (aBMD, g/cm²) of the hip (femoral neck and trochanter) were derived from DXA. Results: In regression analyses stratified by race and sex, the coefficient of determination was low, with mid-femoral BMC explaining 6-27% of the variance in hip BMC, with a standard error of estimate (SEE) ranging from 16 to 22% of the mean. For mid-femur vBMD, the variance explained in hip aBMD was 2-17%, with a SEE ranging from 15 to 18%. Adjusting aBMD to approximate volumetric density did not improve the relationships. In addition, the utility of fracture prediction was examined. Forty-eight subjects had one or more fractures (various sites) during a mean follow-up of 4.07 years. In logistic regression analysis, there was no association between mid-femoral vBMD and fracture (all fractures), whereas a 1 SD increase in hip BMD was associated with a reduction in the odds of fracture of approximately 60%. Conclusions: These results do not support the use of CT-derived mid-femoral vBMD or BMC to predict DXA-measured hip bone mineral status, irrespective of race or sex, in older adults. Further, in contrast to femoral neck and trochanter BMD, mid-femur vBMD was not able to predict fracture (all fractures).
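The reported effect size can be checked with a line of arithmetic; the odds ratio of 0.4 is an illustrative value consistent with the stated reduction of roughly 60%, not a coefficient taken from the study.

```python
import numpy as np

# In logistic regression, a coefficient of ln(0.4) per 1 SD of hip BMD
# corresponds to an odds ratio of 0.4, i.e. about a 60% reduction in
# the odds of fracture per SD increase (illustrative numbers).
beta_per_sd = np.log(0.4)
odds_ratio = np.exp(beta_per_sd)
reduction = 1.0 - odds_ratio
```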

Relevance: 20.00%

Abstract:

This work evaluates the performance of the MECID (Método dos Elementos de Contorno com Interpolação Direta, the Boundary Element Method with Direct Interpolation) in solving the integral term associated with inertia in the Helmholtz equation, thereby allowing the eigenvalue problem to be modelled and the natural frequencies to be computed, comparing it with the results obtained by the MEF (Método dos Elementos Finitos, the Finite Element Method) under the classical Galerkin formulation. First, several problems governed by the Poisson equation are addressed, allowing an initial performance comparison between the numerical methods considered here. The problems solved arise in different and important areas of engineering, such as heat transfer, electromagnetism, and particular elastic problems. In numerical terms, the difficulties involved in accurately approximating more complex distributions of loads, sources, or sinks inside the domain are well known for any boundary technique. Nevertheless, this work shows that, despite such difficulties, the performance of the Boundary Element Method is superior, both in computing the basic variable and in computing its derivative. To this end, two-dimensional problems are solved involving elastic membranes, stresses in bars under self-weight, and the determination of natural frequencies in acoustic problems on closed domains, among others, using meshes with different degrees of refinement, together with linear elements using radial basis functions for the MECID and degree-one polynomial interpolation basis functions for the MEF. Performance curves are generated by computing the mean percentage error for each mesh, demonstrating the convergence and accuracy of each method. The results are also compared with the analytical solutions, when available, for each example solved in this work.

Relevance: 20.00%

Abstract:

Whey is a by-product of cheese making, whether by acidification or by an enzymatic process. Under suitable conditions, the milk casein aggregates into a gel which, once cut, induces the separation and release of the whey. Whey is used in many forms throughout the food industry and is rich in lactose, mineral salts, and proteins. Dehydration is one of the main processes used for its processing and transformation. The objective of this work was therefore to evaluate the influence of the drying methods, freeze-drying, foam-mat drying (at temperatures of 40, 50, 60, 70 and 80 °C) and spray-drying (at temperatures of 55, 60, 65, 70 and 75 °C), on the moisture, protein, colour and solubility of the whey, and to study its drying process. The whey was obtained and dehydrated after concentration by reverse osmosis, testing 11 treatments in 3 replicates under a completely randomized design. The results showed that the mathematical model with the best fit was the Page model, with an adjusted coefficient of determination above 0.98 and a standard error of the regression below 0.04 at all temperatures for the foam-mat method. For the freeze-drying method the corresponding values were 0.9975 and 0.01612. From this, a generalized mathematical model could be developed, with a coefficient of determination of 0.9888. For foam-mat drying, it was observed that as the drying-air temperature increases, the drying time decreases and the effective diffusion coefficient increases; however, the reduction in drying time between temperature intervals diminishes as the temperature rises. The activation energy for diffusion in the whey drying process was 26.650 kJ/mol, and for all the physico-chemical and technological evaluations the analysis of variance gave a significant F value (p < 0.05), indicating that at least one contrast between treatment means is significant.
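The Page-model fit described above can be sketched as follows; the drying curve, parameter values, and noise level are synthetic, not the whey measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Page thin-layer drying model: moisture ratio MR = exp(-k * t**n),
# fitted to a synthetic drying curve (illustrative data only).
def page(t, k, n):
    return np.exp(-k * t ** n)

t = np.linspace(0.1, 8.0, 30)          # drying time, h
rng = np.random.default_rng(7)
mr = page(t, 0.35, 1.2) + rng.normal(0.0, 0.005, t.size)

(k_hat, n_hat), _ = curve_fit(page, t, mr, p0=(0.1, 1.0))

# Adjusted coefficient of determination, the fit statistic reported
# in the study (2 fitted parameters).
resid = mr - page(t, k_hat, n_hat)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
r2_adj = 1.0 - (1.0 - r2) * (t.size - 1) / (t.size - 3)
```

Fitting the same model at each drying-air temperature and inspecting how `k_hat` grows with temperature is how the effective diffusion behaviour and activation energy are typically extracted.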

Relevance: 20.00%

Abstract:

A numerical comparison is performed between three methods of third order with the same structure, namely BSC, Halley’s and Euler–Chebyshev’s methods. As the behavior of an iterative method applied to a nonlinear equation can be highly sensitive to the starting points, the numerical comparison is carried out, allowing for complex starting points and for complex roots, on the basins of attraction in the complex plane. Several examples of algebraic and transcendental equations are presented.
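For concreteness, Halley's third-order iteration on one common test function might be sketched as follows; f(z) = z**3 - 1 and the starting points are illustrative choices, not necessarily the paper's examples.

```python
import numpy as np

# Halley's iteration for f(z) = z**3 - 1, applied from complex starting
# points as in basin-of-attraction comparisons.
def halley_step(z):
    f = z ** 3 - 1
    fp = 3 * z ** 2
    fpp = 6 * z
    return z - 2 * f * fp / (2 * fp ** 2 - f * fpp)

def converged_root(z0, tol=1e-10, max_iter=50):
    """Iterate from z0 until successive iterates agree to tol."""
    z = z0
    for _ in range(max_iter):
        z_next = halley_step(z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return None

roots = np.exp(2j * np.pi * np.arange(3) / 3)   # cube roots of unity
root = converged_root(1.0 + 0.5j)
```

Evaluating `converged_root` over a grid of complex starting points and colouring each point by the root it reaches produces the basins of attraction used for the comparison.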

Relevance: 20.00%

Abstract:

Abstract. Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. The correctness of GUI code is therefore essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering an abstract model of a user interface directly from the GUI's legacy code. We also present results from a case study. These results are encouraging and give evidence that the goal of reverse engineering user interfaces can be met with further work on this technique.