996 results for APPLIED PROBABILITY
Abstract:
This article develops a weighted least squares version of Levene's test of homogeneity of variance for a general design, available both for univariate and multivariate situations. When the design is balanced, the univariate and two common multivariate test statistics turn out to be proportional to the corresponding ordinary least squares test statistics obtained from an analysis of variance of the absolute values of the standardized mean-based residuals from the original analysis of the data. The constant of proportionality is simply a design-dependent multiplier (which does not necessarily tend to unity). Explicit results are presented for randomized block and Latin square designs and are illustrated for factorial treatment designs and split-plot experiments. The distribution of the univariate test statistic is close to a standard F-distribution, although it can be slightly underdispersed. For a complex design, the test assesses homogeneity of variance across blocks, treatments, or treatment factors and offers an objective interpretation of residual plots.
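The equivalence described above can be sketched for the simplest case, a balanced one-way layout, where Levene's statistic is exactly a one-way ANOVA F on absolute deviations from the group means. This is a minimal illustration of the classical test (synthetic data, not the paper's weighted least squares generalization):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# three synthetic treatment groups, the third with inflated variance
groups = [rng.normal(0, s, 30) for s in (1.0, 1.0, 2.5)]

# Levene's idea: one-way ANOVA on absolute deviations from each group mean
abs_dev = [np.abs(g - g.mean()) for g in groups]
F, p = stats.f_oneway(*abs_dev)

# scipy's built-in version with mean-based residuals gives the same statistic
F2, p2 = stats.levene(*groups, center='mean')
assert abs(F - F2) < 1e-9
```

The article's contribution is to extend this idea to general (possibly multivariate) designs, where a design-dependent constant of proportionality appears.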
Abstract:
Motivation: A major issue in cell biology today is how distinct intracellular regions of the cell, such as the Golgi apparatus, maintain their unique composition of proteins and lipids. The cell differentially separates Golgi-resident proteins from proteins that move through the organelle to other subcellular destinations. We set out to determine whether we could distinguish these two types of transmembrane proteins using computational approaches. Results: A new method has been developed to predict Golgi membrane proteins based on their transmembrane domains. To establish the prediction procedure, we took into consideration the hydrophobicity values and frequencies of different residues within the transmembrane domains. A simple linear discriminant function was developed with a small number of parameters derived from a dataset of Type II transmembrane proteins of known localization. It discriminates between proteins destined for the Golgi apparatus and other (post-Golgi) locations with success rates of 89.3% and 85.2%, respectively, on our redundancy-reduced data sets.
Abstract:
We focus on mixtures of factor analyzers from the perspective of a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. By working in this reduced space, it allows a model for each component-covariance matrix with complexity lying between that of the isotropic and full covariance structure models. We shall illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments. (C) 2002 Elsevier Science B.V. All rights reserved.
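The parameter-reduction argument in the abstract above can be made concrete by counting the free parameters in each component-covariance structure. A minimal sketch (the function name and the latent dimension q = 5 are illustrative choices, not from the paper):

```python
# Free parameters in one component-covariance matrix under three structures:
# full p x p covariance, isotropic (single variance), and factor-analytic
# Sigma = Lambda Lambda' + Psi with Lambda (p x q) and diagonal Psi.
def cov_params(p, structure, q=None):
    if structure == "full":
        return p * (p + 1) // 2
    if structure == "isotropic":
        return 1
    if structure == "factor":
        return p * q + p
    raise ValueError(structure)

p = 1000  # e.g. number of genes in a microarray experiment
print(cov_params(p, "full"))         # 500500
print(cov_params(p, "factor", q=5))  # 6000
print(cov_params(p, "isotropic"))    # 1
```

The factor structure thus interpolates between the isotropic and full models, which is what makes a normal mixture fittable when p is large relative to n.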
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing tradition of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
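The variable-screening step in the second stage can be sketched with a decision tree, one of the non-parametric methods the abstract names. The data here are synthetic stand-ins for clinical variables, and the use of scikit-learn's feature importances is one possible choice, not the paper's procedure:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
# binary response driven by the first two columns only (synthetic)
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# a shallow tree as a non-parametric screen for important variables
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]
print(ranking[:2])  # the informative variables should rank highest
```

Variables surviving such a screen would then feed the final parametric, non-parametric or Bayesian predictive model.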
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analyzing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
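The equilibrium expected rate of a transition i → j in a continuous-time Markov chain is π_i q_ij, with π the stationary distribution of the generator Q. A toy sketch with an invented three-state generator (the numbers are illustrative only):

```python
import numpy as np

# small illustrative CTMC generator; each row sums to zero
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# stationary distribution: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# equilibrium expected rate of the 0 -> 1 transition
expected_rate_01 = pi[0] * Q[0, 1]
assert np.allclose(pi @ Q, 0.0, atol=1e-10)
```

In the paper's scheme, rates of this kind replace the original state-dependent transition structure to yield a more amenable approximating chain.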
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is carried out by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
This study investigated the haemodynamic response to the 90-minute application of 85 Hz transcutaneous electrical nerve stimulation (TENS) to the T1 and T5 nerve roots. Comparison was made between 20 healthy subjects who had TENS stimulation and a separate group of 20 healthy subjects who rested for 90 minutes. Pulse and blood pressure were measured just prior to the start of TENS stimulation, after 30 minutes of stimulation, and after 90 minutes of stimulation (immediately after stopping TENS) or at completion of the rest time, depending on group allocation. The rate pressure product was calculated from the pulse and systolic blood pressure data. Multivariate repeated measures analysis showed a significant group effect for TENS (p = 0.048). Univariate repeated measures analyses showed a significant group-by-time effect due to TENS on systolic blood pressure over the 90-minute time period (p = 0.028). Separate group repeated measures ANOVA showed a significant decline in heart rate (p < 0.001), systolic blood pressure (p = 0.013) and rate pressure product (p < 0.001) for the TENS group, while the control resting group showed a significant decline in heart rate only (p = 0.04). The application of 85 Hz TENS to the upper thoracic nerve roots causes no adverse haemodynamic effects in healthy subjects.
Abstract:
Plasma levels of lipoprotein(a) [Lp(a)] are associated with cardiovascular risk (Danesh et al., 2000) and were long believed to be influenced by the LPA locus on chromosome 6q27 only. However, a recent report of Broeckel et al. (2002) suggested the presence of a second quantitative trait locus on chromosome 1 influencing Lp(a) levels. Using a two-locus model, we found no evidence for an additional Lp(a) locus on chromosome 1 in a linkage study among 483 dizygotic twin pairs.
Abstract:
A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper demonstrates why Crofton's theorem need not be used to link moments of the trace length distribution, captured by scan line or areal mapping, to the moments of the diametral distribution of joints represented as disks, and why it is incorrect to do so. The valid relationships between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled, for both areal and scan line mapping, and compared with those that would follow were Crofton's theorem assumed to apply. For areal mapping the relationship is fortuitously correct, but for scan line mapping it is incorrect.
Abstract:
In order to establish the relationship between solute lipophilicity and skin penetration (including flux and concentration behavior), we examined the in vitro penetration and membrane concentration of a series of homologous alcohols (C2-C10) applied topically in aqueous solutions to human epidermal, full-thickness, and dermal membranes. The partitioning/distribution of each alcohol between the donor solution, stratum corneum, viable epidermis, dermis, and receptor phase compartments was determined during the penetration process and separately for isolated samples of each tissue type. Maximum flux and permeability coefficients are compared for each membrane, and estimates of alcohol diffusivity are made based on flux/concentration data and also the related tissue resistance (the reciprocal of the permeability coefficient) for each membrane type. The permeability coefficient increased with increasing lipophilicity up to alcohol C8 (octanol), with no further increase for C10 (decanol). Log vehicle:stratum corneum partition coefficients were related to log P, and the concentration of alcohols in each of the tissue layers appeared to increase with lipophilicity. No difference was measured in the diffusivity of the smaller, more polar alcohols in the three membranes; however, the larger, more lipophilic solutes showed lower diffusivities. The study showed that the dermis may be a much more lipophilic environment than originally believed and that distribution of smaller nonionized solutes into local tissues below a site of topical application may be estimated based on knowledge of their lipophilicity alone.
Abstract:
Purpose. The flux of a topically applied drug depends on its activity in the skin and the interaction between the vehicle and skin. Permeation of vehicle into the skin can alter the activity of the drug and the properties of the skin barrier. The aim of this in vitro study was to separate and quantify these effects. Methods. The flux of four radiolabeled permeants (water, phenol, diflunisal, and diazepam) with log K_oct/water values from 1.4 to 4.3 was measured over 4 h through heat-separated human epidermis pretreated for 30 min with vehicles having Hildebrand solubility parameters from 7.9 to 23.4 (cal/cm^3)^(1/2). Results. Enhancement was greatest after pretreatment with the more lipophilic vehicles. A synergistic enhancement was observed using binary mixtures. The flux of diazepam was not enhanced to the same extent as the other permeants, possibly because its partitioning into the epidermis is close to optimal (log K_oct = 2.96). Conclusion. An analysis of the permeant remaining in the epidermis revealed that the enhancement can be the result of either increased partitioning of permeant into the epidermis or an increased diffusivity of permeants through the epidermis.
Abstract:
Chlorophyll fluorescence measurements have a wide range of applications, from basic understanding of photosynthesis functioning to plant environmental stress responses and direct assessments of plant health. The measured signal is the fluorescence intensity (expressed in relative units), and the most meaningful data are derived from the time-dependent increase in fluorescence intensity achieved upon application of continuous bright light to a previously dark-adapted sample. The fluorescence response changes over time and is termed the Kautsky curve or chlorophyll fluorescence transient. Recently, Strasser and Strasser (1995) formulated a group of fluorescence parameters, called the JIP-test, that quantify the stepwise flow of energy through Photosystem II (PS II), using input data from the fluorescence transient. The purpose of this study was to establish relationships between the biochemical reactions occurring in PS II and specific JIP-test parameters. This was approached using isolated systems that facilitated the addition of modifying agents, a PS II electron transport inhibitor, an electron acceptor and an uncoupler, whose effects on PS II activity are well documented in the literature. The alteration to PS II activity caused by each of these compounds could then be monitored through the JIP-test parameters and compared and contrasted with the literature. The known alteration in PS II activity of atrazine-resistant and atrazine-sensitive biotypes of Chenopodium album was also used to gauge the effectiveness and sensitivity of the JIP-test. The information gained from the in vitro study was successfully applied to an in situ study. This is the first in a series of four papers. It shows that the trapping parameters of the JIP-test were most affected by illumination and that the reduction in trapping had a run-on effect that inhibited electron transport.
When irradiance exposure proceeded to photoinhibition, the electron transport probability parameter was greatly reduced and dissipation significantly increased. These results illustrate the advantage of monitoring a number of fluorescence parameters over the use of just one, which is often the case when the F-V/F-M ratio is used.
Abstract:
A theta graph is a graph consisting of three pairwise internally disjoint paths with common end points. Methods for decomposing the complete graph K_ν into theta graphs with fewer than ten edges are given.
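As a sketch of the object being decomposed (not of the decomposition method itself), the edge list of a theta graph with three u-v paths of given lengths can be generated as follows; the function name and vertex labels are illustrative:

```python
def theta_edges(a, b, c):
    """Edge list of a theta graph: three internally disjoint u-v paths
    with a, b, c edges respectively (at most one path a single edge)."""
    edges = []
    for label, length in zip("ABC", (a, b, c)):
        nodes = ["u"] + [f"{label}{i}" for i in range(length - 1)] + ["v"]
        edges += list(zip(nodes, nodes[1:]))
    return edges

E = theta_edges(2, 2, 3)
print(len(E))  # 7 edges, matching 2 + 2 + 3
```

A decomposition of K_ν then partitions its ν(ν-1)/2 edges into copies of such graphs, so the theta graph's edge count must divide that total.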
Abstract:
Increased professionalism in rugby has elicited rapid changes in the fitness profile of elite players. Recent research focusing on the physiological and anthropometrical characteristics of rugby players, and on the demands of competition, is reviewed. The paucity of research on contemporary elite rugby players is highlighted, along with the need for standardised testing protocols. Recent data reinforce the pronounced differences in the anthropometric and physical characteristics of the forwards and backs. Forwards are typically heavier, taller, and have a greater proportion of body fat than backs. These characteristics are changing, with forwards developing greater total mass and higher muscularity. The forwards demonstrate superior absolute aerobic and anaerobic power, and muscular strength. Results favour the backs when body mass is taken into account. The scaling of results to body mass can be problematic and future investigations should present results using power function ratios. Recommended tests for elite players include body mass and skinfolds, vertical jump, speed, and the multi-stage shuttle run. Repeat sprint testing is a possible avenue for more specific evaluation of players. During competition, high-intensity efforts are often followed by periods of incomplete recovery. The total work over the duration of a game is lower in the backs compared with the forwards; forwards spend greater time in physical contact with the opposition while the backs spend more time in free running, allowing them to cover greater distances. The intense efforts undertaken by rugby players place considerable stress on anaerobic energy sources, while the aerobic system provides energy during repeated efforts and for recovery. Training should focus on repeated brief high-intensity efforts with short rest intervals to condition players to the demands of the game.
Training for the forwards should emphasise the higher work rates of the game, while extended rest periods can be provided to the backs. Players should not only be prepared for the demands of competition, but also the stress of travel and extreme environmental conditions. The greater professionalism of rugby union has increased scientific research in the sport; however, there is scope for significant refinement of investigations on the physiological demands of the game, and sports-specific testing procedures.