81 results for Immunologic Tests -- methods
Abstract:
Testing ecological models for management is an increasingly important part of the maturation of ecology as an applied science. Consequently, we must take care to apply fair tests of models with adequate data. We demonstrate that a recent test of a discrete-time, stochastic model was biased towards falsifying the predictions: if the model were a perfect description of reality, the test falsified the predictions 84% of the time. We introduce an alternative testing procedure for stochastic models and show that it falsifies the predictions only 5% of the time when the model is a perfect description of reality. The example is used as a point of departure to discuss some of the philosophical aspects of model testing.
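The abstract does not spell out the alternative procedure, but a standard way to obtain the nominal 5% false-falsification rate for a stochastic model is a Monte Carlo simulation-envelope test. The sketch below illustrates that general idea only; the toy growth model and all names are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_model(n_steps, x0=50.0):
    """Hypothetical discrete-time stochastic model: multiplicative
    growth with environmental noise (illustration only)."""
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = x[t - 1] * np.exp(rng.normal(0.05, 0.2))
    return x

def monte_carlo_test(observed, n_sims=199, alpha=0.05):
    """Falsify the model only if the observed summary statistic falls in
    the outer alpha tail of the model's own simulated distribution, so a
    true model is rejected roughly alpha (5%) of the time."""
    stat = observed[-1]   # final population size as the test statistic
    sims = np.array([simulate_model(len(observed))[-1] for _ in range(n_sims)])
    lo, hi = np.quantile(sims, [alpha / 2, 1 - alpha / 2])
    return not (lo <= stat <= hi)   # True => predictions falsified

# Data generated by the model itself should be falsified ~5% of the time.
rejections = sum(monte_carlo_test(simulate_model(30)) for _ in range(200))
print(f"false-falsification rate: {rejections / 200:.2f}")
```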
Abstract:
A number of techniques have been developed to study the disposition of drugs in the head and, in particular, the role of the blood-brain barrier (BBB) in drug uptake. The techniques can be divided into three groups: in-vitro, in-vivo and in-situ. The most suitable method depends on the purpose(s) and requirements of the particular study being conducted. In-vitro techniques involve the isolation of cerebral endothelial cells so that direct investigations of these cells can be carried out. The most recent preparations are able to maintain the structural and functional characteristics of the BBB by culturing endothelial cells together with astrocytic cells. The main advantages of the in-vitro methods are the elimination of anaesthetics and surgery. In-vivo methods consist of a diverse range of techniques and include the traditional Brain Uptake Index and indicator diffusion methods, as well as microdialysis and positron emission tomography. In-vivo methods maintain the cells and vasculature of an organ in their normal physiological states and anatomical position within the animal. However, the shortcomings include renal and hepatic elimination of solutes, as well as the inability to control blood flow. In-situ techniques, including the perfused head, are more technically demanding. However, these models have the ability to vary the composition and flow rate of the artificial perfusate. This review is intended as a guide for selecting the most appropriate method for studying drug uptake in the brain.
Abstract:
Background: A variety of methods for prediction of peptide binding to major histocompatibility complex (MHC) have been proposed. These methods are based on binding motifs, binding matrices, hidden Markov models (HMM), or artificial neural networks (ANN). There has been little prior work on the comparative analysis of these methods. Materials and Methods: We compared the performance of six methods applied to the prediction of binding for two human MHC class I molecules, including binding matrices and motifs, ANNs, and HMMs. Results: The selection of the optimal prediction method depends on the amount of available data (the number of peptides of known binding affinity to the MHC molecule of interest), the biases in the data set, and the intended purpose of the prediction (screening of a single protein versus mass screening). When little or no peptide data are available, binding motifs are the most useful alternative to random guessing or to the use of a complete set of overlapping peptides for selection of candidate binders. As the number of known peptide binders increases, binding matrices and HMMs become more useful predictors. ANNs and HMMs are the predictive methods of choice for MHC alleles with more than 100 known binding peptides. Conclusion: The ability of bioinformatic methods to reliably predict MHC binding peptides, and thereby potential T-cell epitopes, has major implications for clinical immunology, particularly in the area of vaccine design.
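A minimal sketch of the simplest data-driven approach named above, a binding matrix: build a position-specific scoring matrix from known 9-mer binders and score candidate peptides with it. The pseudo-count, background frequency, and toy peptides are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def build_matrix(binders, pseudo=1.0):
    """Build a log-odds position-specific scoring matrix from known
    9-mer binders (pseudo-counts keep unseen residues finite)."""
    length = len(binders[0])
    counts = np.full((length, 20), pseudo)
    for pep in binders:
        for pos, aa in enumerate(pep):
            counts[pos, AA_INDEX[aa]] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / 0.05)  # 0.05 = uniform background frequency

def score(matrix, peptide):
    """Sum the per-position log-odds scores for one peptide."""
    return sum(matrix[pos, AA_INDEX[aa]] for pos, aa in enumerate(peptide))

# Toy example: a few hypothetical binders; higher scores suggest binding.
binders = ["SLYNTVATL", "GLCTLVAML", "FLPSDFFPS"]
pssm = build_matrix(binders)
print(f"{score(pssm, 'SLFNTVATL'):.2f}")   # candidate similar to binders
print(f"{score(pssm, 'QQQQQQQQQ'):.2f}")   # unlikely binder
```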
Abstract:
Estimating energy requirements is necessary in clinical practice when indirect calorimetry is impractical. This paper systematically reviews current methods for estimating energy requirements. Conclusions include: there is a discrepancy between the characteristics of the populations upon which predictive equations are based and those of current populations; the tools are not well understood; and patient care can be compromised by their inappropriate application. Data comparing tools and methods are presented and issues for practitioners are discussed. (C) 2003 International Life Sciences Institute.
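The review does not name specific equations, but the Harris-Benedict equations are a classic example of the predictive tools such reviews assess. The sketch below implements them purely as an illustration; the coefficients are the commonly cited 1919 values, and the activity factor is an assumption.

```python
def harris_benedict(weight_kg, height_cm, age_yr, sex):
    """Resting energy expenditure (kcal/day) from the classic
    Harris-Benedict equations (commonly cited 1919 coefficients)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# Total energy requirement is usually the REE scaled by an activity/stress factor.
ree = harris_benedict(70, 175, 40, "male")
print(f"REE ~ {ree:.0f} kcal/day; with activity factor 1.3: {ree * 1.3:.0f} kcal/day")
```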
Abstract:
Taking functional programming to its extremities in search of simplicity still requires integration with other development methods (e.g. formal methods). Induction is the key to deriving and verifying functional programs, but it can be simplified by packaging proofs with functions, particularly folds, on data (structures). Totally Functional Programming (TFP) avoids the complexities of interpretation by directly representing data (structures) as platonic combinators - the functions characteristic of the data. The link between the two simplifications is that platonic combinators are a kind of partially applied fold, which means that platonic combinators inherit fold-theoretic properties, but with some apparent simplifications due to the platonic combinator representation. However, despite observable behaviour within functional programming that suggests that TFP is widely applicable, significant work remains before TFP as such could be widely adopted.
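A minimal sketch of the "data as platonic combinator" idea: a list is represented directly by its fold function, so operations are obtained by supplying arguments rather than by interpreting a data structure. Python is used here only for illustration; TFP work is normally presented in a typed functional language.

```python
# A list is represented by its right fold: given a 'cons' function and a
# 'nil' value, the combinator replays the list's structure onto them.
def nil(cons, z):
    return z

def cons(head, tail):
    return lambda c, z: c(head, tail(c, z))

# The usual list operations fall out by choosing the fold's arguments:
xs = cons(1, cons(2, cons(3, nil)))             # the list [1, 2, 3]
total = xs(lambda h, t: h + t, 0)               # sum -> 6
as_py = xs(lambda h, t: [h] + t, [])            # back to a Python list
doubled = xs(lambda h, t: cons(2 * h, t), nil)  # map (*2), still a combinator

print(total, as_py, doubled(lambda h, t: [h] + t, []))  # 6 [1, 2, 3] [2, 4, 6]
```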
Abstract:
Background: Recent research has shown that Mulligan's Mobilization With Movement (MWM) treatment technique for the elbow, a peripheral joint mobilization technique, produces substantial and immediate pain relief in chronic lateral epicondylalgia (a 48% increase in pain-free grip strength).(1) This hypoalgesic effect is far greater than that previously reported with spinal manual therapy treatments, prompting speculation that peripheral manual therapy treatments may differ in mechanism of action from spinal manual therapy techniques. Naloxone antagonism and tolerance studies, which employ widely accepted tests for the identification of endogenous opioid-mediated pain control mechanisms, have shown that spinal manual therapy-induced hypoalgesia does not involve an opioid mechanism. Objective: The aim of this study was to evaluate the effect of naloxone administration on the hypoalgesic effect of MWM. Methods: A randomized, controlled trial evaluated the effect of administering naloxone, saline, or a no-substance control injection on MWM-induced hypoalgesia in 18 participants with lateral epicondylalgia. Pain-free grip strength, pressure pain threshold, thermal pain threshold, and upper limb neural tissue provocation test 2b were the outcome measures. Results: The initial hypoalgesic effect of the MWM was not antagonized by naloxone, suggesting a nonopioid mechanism of action. Conclusions: The studied peripheral mobilization treatment technique appears to have an effect profile similar to that of previously studied spinal manual therapy techniques, suggesting a nonopioid-mediated hypoalgesia following manual therapy.
Abstract:
Objective: The Assessing Cost-Effectiveness - Mental Health (ACE-MH) study aims to assess, from a health sector perspective, whether there are options for change that could improve the effectiveness and efficiency of Australia's current mental health services by directing available resources toward 'best practice' cost-effective services. Method: The use of standardized evaluation methods addresses the reservations expressed by many economists about the simplistic use of league tables based on economic studies confounded by differences in methods, context and setting. The cost-effectiveness ratio for each intervention is calculated using economic and epidemiological data. This includes systematic reviews and randomised controlled trials for efficacy, the Australian Surveys of Mental Health and Wellbeing for current practice, and a combination of trials and longitudinal studies for adherence. The cost-effectiveness ratios are presented as cost (A$) per disability-adjusted life year (DALY) saved, with a 95% uncertainty interval based on Monte Carlo simulation modelling. An assessment of interventions on 'second filter' criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') allows broader concepts of 'benefit' to be taken into account, as well as factors that might influence policy judgements in addition to cost-effectiveness ratios. Conclusions: The main limitation of the study is in the translation of the effect size from trials into a change in the DALY disability weight, which required the use of newly developed methods. While comparisons within disorders are valid, comparisons across disorders should be made with caution. A series of articles is planned to present the results.
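As a generic illustration of the Monte Carlo step described above (toy numbers and distributions, not ACE-MH data): draw the uncertain inputs repeatedly and report the 2.5th-97.5th percentile interval of the resulting cost-per-DALY ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical inputs: total intervention cost (A$) and DALYs saved, each
# with uncertainty expressed as an assumed distribution (toy numbers).
cost = rng.normal(5_000_000, 800_000, n)   # A$
dalys = rng.gamma(25, 40, n)               # mean ~1,000 DALYs saved

ratio = cost / dalys                       # A$ per DALY saved
lo, hi = np.percentile(ratio, [2.5, 97.5])
print(f"median A${np.median(ratio):,.0f}/DALY "
      f"(95% uncertainty interval A${lo:,.0f}-A${hi:,.0f})")
```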
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been used successfully to predict T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as an experiment analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good-quality predictions. In this article, we present and discuss a framework for the modelling, testing, and application of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
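The testing and validation the authors call for is commonly implemented as cross-validation with a held-out performance measure such as ROC AUC. The sketch below shows that pattern on synthetic data; the feature encoding, model, and numbers are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for peptide features (e.g., encoded residues) and
# binder / non-binder labels - illustration only.
X = rng.normal(size=(200, 20))
w = rng.normal(size=20)
y = (X @ w + rng.normal(scale=0.5, size=200) > 0).astype(int)

# 5-fold cross-validated AUC: the kind of held-out test that guards
# against overstating a model's predictive accuracy.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```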
Abstract:
Background and Purpose - The Echoplanar Imaging Thrombolysis Evaluation Trial (EPITHET) tests the hypothesis that perfusion-weighted imaging (PWI) - diffusion-weighted imaging (DWI) mismatch predicts the response to thrombolysis. There is no accepted standardized definition of PWI-DWI mismatch. We compared common mismatch definitions in the initial 40 EPITHET patients. Methods - Raw perfusion images were used to generate maps of time to peak (TTP), mean transit time (MTT), time to peak of the impulse response (Tmax) and first moment transit time (FMT). DWI, apparent diffusion coefficient (ADC), and PWI volumes were measured with planimetric and thresholding techniques. Correlations between mismatch volume (PWI volume - DWI volume) and DWI expansion (day-90 T2 volume - acute DWI volume) were also assessed. Results - Mean age was 68 +/- 11 years, mean time to MRI 4.5 +/- 0.7 hours, and median National Institutes of Health Stroke Scale (NIHSS) score 11 (range 4 to 23). Tmax and MTT hypoperfusion volumes were significantly lower than those calculated with TTP and FMT maps (P < 0.001). Mismatch >= 20% was observed in 89% (Tmax) to 92% (TTP/FMT/MTT) of patients. Application of a +4 s (relative to the contralateral hemisphere) PWI threshold reduced the frequency of positive mismatch volumes (TTP 73%/FMT 68%/Tmax 54%/MTT 43%). Mismatch was not significantly different when assessed with ADC maps. Mismatch volume, calculated with all parameters and thresholds, was not significantly correlated with DWI expansion. In contrast, reperfusion was inversely correlated with infarct growth (R = -0.51; P = 0.009). Conclusions - Deconvolution and the application of PWI thresholds provide more conservative estimates of tissue at risk and decrease the frequency of mismatch accordingly. The precise definition may not be critical, however, because reperfusion alters tissue fate irrespective of mismatch.
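The mismatch arithmetic used above can be made concrete with a small sketch (volumes in mL; the >= 20% criterion follows the abstract, while the example numbers and function name are illustrative):

```python
def mismatch(pwi_vol_ml, dwi_vol_ml, threshold=0.20):
    """PWI-DWI mismatch: the volume difference, and whether the perfusion
    lesion exceeds the diffusion lesion by at least the threshold fraction."""
    diff = pwi_vol_ml - dwi_vol_ml
    ratio = diff / dwi_vol_ml if dwi_vol_ml > 0 else float("inf")
    return diff, ratio >= threshold

# Example: Tmax perfusion lesion 120 mL, acute DWI lesion 40 mL.
vol, positive = mismatch(120.0, 40.0)
print(f"mismatch volume = {vol:.0f} mL, mismatch >= 20%: {positive}")
```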
Abstract:
Objectives: To evaluate the effect of a radio and newspaper campaign encouraging Italian-speaking women aged 50-69 years to attend a population-based mammography screening program. Methods: A series of radio scripts and newspaper advertisements ran weekly in the Italian-language media over two four-week periods. Monthly mammography screening counts were analysed, using interrupted time series regression analysis, to determine whether the number of Italian-speaking women in the program increased during the two campaign periods. A survey of Italian-speaking women attending BreastScreen NSW (BSNSW) during the campaign period (n=240) investigated whether individuals had heard or seen the advertisements. Results: There was no statistically significant difference in the number of initial or subsequent mammograms in Italian-speaking women between the campaign periods and the period prior to (or after) the campaign. Twenty per cent of respondents cited the Italian media campaign as a prompt to attend. Fifty per cent had heard the radio advertisement and 30% had seen the newspaper advertisement encouraging Italian-speaking women to attend BSNSW. The most common prompt to attend was the BSNSW invitation letter, followed by information or a recommendation from a GP. Conclusion: Radio and newspaper advertisements developed for the Italian community did not significantly increase attendance at BSNSW. Implications: Measures of program effectiveness based on self-report may not correspond to aggregate screening behaviour. The development of the media campaign in conjunction with the Italian community, and the provision of appropriate levels of resourcing, did not ensure the media campaign's success.
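The interrupted time series regression mentioned above is typically a segmented regression with a level-change and a trend-change term at the start of the campaign. A toy sketch on synthetic monthly counts (all numbers illustrative, not the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic monthly screening counts: 24 months, campaign starts at month 12.
months = np.arange(24)
campaign = (months >= 12).astype(float)              # level-change indicator
time_since = np.clip(months - 12, 0, None)           # trend-change term
counts = 100 + 0.5 * months + rng.normal(0, 5, 24)   # no true campaign effect

X = sm.add_constant(np.column_stack([months, campaign, time_since]))
fit = sm.OLS(counts, X).fit()
# The campaign and time_since coefficients estimate the immediate jump and
# the slope change attributable to the campaign; here both should be ~0.
print(fit.params, fit.pvalues)
```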
Abstract:
This special issue represents a further exploration of some issues raised at a symposium entitled “Functional magnetic resonance imaging: From methods to madness”, presented during the 15th annual Theoretical and Experimental Neuropsychology (TENNET XV) meeting in Montreal, Canada, in June 2004. The special issue’s theme is methods and learning in functional magnetic resonance imaging (fMRI), and it comprises six articles (three reviews and three empirical studies). The first (Amaro and Barker) provides a beginner’s guide to fMRI and the BOLD effect (perhaps an alternative title might have been “fMRI for dummies”). While fMRI is now commonplace, there are still researchers who have yet to employ it as an experimental method and need some basic questions answered before they venture into new territory. This article should serve them well. A key issue of interest at the symposium was how fMRI could be used to elucidate the cerebral mechanisms responsible for new learning. The next four articles address this directly: the first (Little and Thulborn) is an overview of data from fMRI studies of category learning, and the second, from the same laboratory (Little, Shin, Siscol, and Thulborn), is an empirical investigation of changes in brain activity occurring across different stages of learning. While a role for medial temporal lobe (MTL) structures in episodic memory encoding has been acknowledged for some time, the different experimental tasks and stimuli employed across neuroimaging studies have, not surprisingly, produced conflicting data about the precise subregion(s) involved. The next paper (Parsons, Haut, Lemieux, Moran, and Leach) addresses this by examining the effects of stimulus modality during verbal memory encoding. Typically, BOLD fMRI studies of learning are conducted over short time scales; the fourth paper in this series (Olson, Rao, Moore, Wang, Detre, and Aguirre), however, describes an empirical investigation of learning over a longer than usual period, achieved by employing a relatively novel technique called perfusion fMRI. This technique shows considerable promise for future studies. The final article in this special issue (de Zubicaray) represents a departure from the more familiar cognitive neuroscience applications of fMRI, describing instead how neuroimaging studies might be conducted to both inform and constrain information-processing models of cognition.
Abstract:
Minimal perfect hash functions are used for memory-efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order-preserving minimal perfect hash functions. We show that almost all members of the family construct space- and time-optimal order-preserving minimal perfect hash functions, and we identify the member with the smallest constants. Members of the family generate a hash function in two steps. First, a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically into a minimal perfect hash function. We give strong theoretical evidence that the first step runs in expected linear time; the second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
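For the r = 2 member of such a family, the two steps can be illustrated with the sketch below, written in the spirit of the well-known CHM-style construction (keys mapped to edges of a random graph; an acyclic graph is peeled to assign vertex values g so that (g[h1(k)] + g[h2(k)]) mod m recovers each key's position). The hash functions, retry logic, and constant c are simplified assumptions, not necessarily the paper's exact algorithm.

```python
import random

def build_opmph(keys, c=2.1):
    """Order-preserving minimal perfect hash, sketched as the two-step
    scheme: (1) probabilistically map keys to edges of a random 2-graph
    until it is acyclic; (2) deterministically peel the graph, assigning
    vertex values g so that (g[h1(k)] + g[h2(k)]) % m = key index."""
    m = len(keys)
    n = int(c * m) + 1                 # vertices; c > 2 makes acyclicity likely
    while True:                        # step 1: expected O(1) retries
        s1, s2 = random.random(), random.random()
        h1 = lambda k, s=s1: hash((s, k)) % n
        h2 = lambda k, s=s2: hash((s, k)) % n
        edges = [(h1(k), h2(k)) for k in keys]
        if all(u != v for u, v in edges):
            g = peel(edges, n, m)      # step 2: linear deterministic time
            if g is not None:
                return lambda k: (g[h1(k)] + g[h2(k)]) % m

def peel(edges, n, m):
    """Traverse each component, fixing g so every edge i satisfies
    g[u] + g[v] = i (mod m). Returns None iff the graph has a cycle."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    g = [0] * n
    visited = [False] * n
    for root in range(n):
        if visited[root]:
            continue
        visited[root] = True
        stack, seen = [root], set()
        while stack:
            u = stack.pop()
            for v, i in adj[u]:
                if i in seen:
                    continue
                seen.add(i)
                if visited[v]:         # reached a visited vertex: cycle
                    return None
                g[v] = (i - g[u]) % m  # i is the desired index of key i
                visited[v] = True
                stack.append(v)
    return g

keys = ["apple", "banana", "cherry", "date", "elderberry"]
h = build_opmph(keys)
print([h(k) for k in keys])            # [0, 1, 2, 3, 4] -> order preserving
```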