76 results for Appearance-based methods


Relevance:

30.00%

Publisher:

Abstract:

Objective: The Assessing Cost-Effectiveness - Mental Health (ACE-MH) study aims to assess, from a health sector perspective, whether there are options for change that could improve the effectiveness and efficiency of Australia's current mental health services by directing available resources toward 'best practice' cost-effective services. Method: The use of standardized evaluation methods addresses the reservations expressed by many economists about the simplistic use of league tables based on economic studies confounded by differences in methods, context and setting. The cost-effectiveness ratio for each intervention is calculated using economic and epidemiological data. This includes systematic reviews and randomised controlled trials for efficacy, the Australian Surveys of Mental Health and Wellbeing for current practice, and a combination of trials and longitudinal studies for adherence. The cost-effectiveness ratios are presented as cost (A$) per disability-adjusted life year (DALY) saved, with a 95% uncertainty interval based on Monte Carlo simulation modelling. An assessment of interventions on 'second filter' criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') allows broader concepts of 'benefit' to be taken into account, as well as factors that might influence policy judgements in addition to cost-effectiveness ratios. Conclusions: The main limitation of the study is the translation of the effect size from trials into a change in the DALY disability weight, which required the use of newly developed methods. While comparisons within disorders are valid, comparisons across disorders should be made with caution. A series of articles is planned to present the results.
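
The core calculation described here, cost per DALY saved with a Monte Carlo uncertainty interval, can be sketched in a few lines. The distributions and parameter values below are illustrative assumptions, not the ACE-MH study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo draws

# Hypothetical input uncertainty; both distributions are placeholders.
cost = rng.normal(loc=5_000_000, scale=500_000, size=N)   # A$ programme cost
dalys_saved = rng.gamma(shape=25.0, scale=40.0, size=N)   # DALYs saved

ratio = cost / dalys_saved                                # A$ per DALY saved
lo, hi = np.percentile(ratio, [2.5, 97.5])
print(f"median A$/DALY: {np.median(ratio):,.0f}, 95% UI: ({lo:,.0f}, {hi:,.0f})")
```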

Relevance:

30.00%

Publisher:

Abstract:

Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as an experiment analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs is available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good-quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
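
Of the prediction methods listed, the quantitative matrix is the simplest to illustrate: each residue contributes a position-specific score, and the peptide's binding score is the sum. The matrix below is filled with random values purely to show the mechanics; real matrices are trained on measured binding data.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LEN = 9

# Hypothetical quantitative matrix: one log-score per amino acid per
# position, populated randomly for illustration only.
rng = np.random.default_rng(1)
qm = {aa: rng.normal(size=PEPTIDE_LEN) for aa in AA}

def score_peptide(peptide: str) -> float:
    """Sum position-specific scores; higher = predicted stronger binder."""
    assert len(peptide) == PEPTIDE_LEN
    return sum(qm[aa][i] for i, aa in enumerate(peptide))

print(score_peptide("SIINFEKLV"))
```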

Relevance:

30.00%

Publisher:

Abstract:

The optimal dosing schedule for melphalan therapy of recurrent malignant melanoma in isolated limb perfusions has been examined using a physiological pharmacokinetic model with data from isolated rat hindlimb perfusions (IRHP). The study included a comparison of melphalan distribution in IRHP under hyperthermia and normothermia conditions. Rat hindlimbs were perfused with Krebs-Henseleit buffer containing 4.7% bovine serum albumin at 37 or 41.5 degrees C at a flow rate of 4 ml/min. Concentrations of melphalan in perfusate and tissues were determined by high performance liquid chromatography with fluorescence detection. The concentration of melphalan in perfusate and tissues was linearly related to the input concentration. The rate and amount of melphalan uptake into the different tissues were higher at 41.5 degrees C than at 37 degrees C. A physiological pharmacokinetic model was validated from the tissue and perfusate time course of melphalan after melphalan perfusion. Application of the model showed that the amount of melphalan exposure in the muscle, skin and fat in a recirculation system was related to the method of melphalan administration: single bolus > divided bolus > infusion. The peak concentration of melphalan in the perfusate was also related to the method of administration, in the same order. Infusing the total dose of melphalan over 20 min during a 60 min perfusion optimized the exposure of tissues to melphalan whilst minimizing the peak perfusate concentration of melphalan. It is suggested that this method of melphalan administration may be preferable to other methods in terms of optimizing the efficacy of melphalan whilst minimizing the limb toxicity associated with its use in isolated limb perfusion.
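
The bolus-versus-infusion trade-off can be reproduced qualitatively with a toy recirculating perfusate/tissue model. Everything below (volumes, uptake clearance, irreversible uptake, Euler integration) is an illustrative assumption, not the paper's fitted physiological model.

```python
# Minimal recirculating perfusate/tissue model; tissue return and
# elimination are omitted for brevity.
V_perf, V_tis = 100.0, 50.0          # mL, hypothetical volumes
CL_up = 2.0                          # mL/min, perfusate -> tissue uptake
dose, T_end, dt = 10.0, 60.0, 0.01   # mg, min, min

def run(infusion_min):
    """Return peak perfusate concentration and tissue AUC for one schedule."""
    A_p = A_t = peak = auc = 0.0
    rate = dose / infusion_min
    for step in range(int(T_end / dt)):
        t = step * dt
        inp = rate if t < infusion_min else 0.0
        C_p = A_p / V_perf
        A_p += (inp - CL_up * C_p) * dt
        A_t += CL_up * C_p * dt
        peak = max(peak, C_p)
        auc += (A_t / V_tis) * dt
    return peak, auc

for dur in (0.1, 20.0, 60.0):  # ~bolus, 20-min infusion, full-length infusion
    peak, auc = run(dur)
    print(f"input over {dur:4.1f} min: peak {peak:.3f} mg/mL, tissue AUC {auc:.2f}")
```

As in the abstract, the near-bolus input gives both the highest tissue exposure and the highest peak perfusate concentration, with the intermediate infusion trading a little exposure for a much lower peak.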

Relevance:

30.00%

Publisher:

Abstract:

In this paper we discuss implicit Taylor methods for stiff Ito stochastic differential equations. Based on the relationship between Ito stochastic integrals and backward stochastic integrals, we introduce three implicit Taylor methods: the implicit Euler-Taylor method with strong order 0.5, the implicit Milstein-Taylor method with strong order 1.0 and the implicit Taylor method with strong order 1.5. The mean-square stability properties of the implicit Euler-Taylor and Milstein-Taylor methods are much better than those of the corresponding semi-implicit Euler and Milstein methods, and these two implicit methods can be used to solve stochastic differential equations which are stiff in both the deterministic and the stochastic components. Numerical results are reported to show the convergence properties and the stability properties of these three implicit Taylor methods. The stability analysis and numerical results show that the implicit Euler-Taylor and Milstein-Taylor methods are very promising methods for stiff stochastic differential equations.
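
For contrast with the paper's fully implicit schemes (which rest on backward stochastic integrals and are not reproduced here), below is a minimal sketch of the semi-implicit (drift-implicit) Euler baseline on the stiff linear test equation dX = aX dt + bX dW. Because the equation is linear, each implicit step has a closed form; the coefficients are illustrative.

```python
import numpy as np

a, b = -50.0, 1.0          # stiff drift, illustrative values
X0, T, N = 1.0, 1.0, 200
h = T / N
rng = np.random.default_rng(2)

X = X0
for _ in range(N):
    dW = rng.normal(scale=np.sqrt(h))
    # Solve X_new = X + a*X_new*h + b*X*dW (linear, so closed form):
    X = X * (1.0 + b * dW) / (1.0 - a * h)
print("X(T) ≈", X)
```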

Relevance:

30.00%

Publisher:

Abstract:

In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system in solving lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed by employing the combined knowledge representation approach, which uses rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment with implementation of the knowledge-based system approach. For system verification, results from design examples are presented.
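
The combined rules-plus-blackboard idea can be sketched as rules that fire against a shared blackboard until no rule can add anything new. The rule contents below are invented placeholders, not LADOME's actual design knowledge.

```python
# Toy blackboard with forward-chaining rules; values are hypothetical.
blackboard = {"span_m": 40.0, "rise_m": 8.0}

def rule_dome_type(bb):
    if "dome_type" not in bb and "span_m" in bb:
        bb["dome_type"] = "schwedler" if bb["span_m"] > 30 else "lamella"
        return True
    return False

def rule_member_size(bb):
    if "member_mm" not in bb and "dome_type" in bb:
        bb["member_mm"] = 60 if bb["dome_type"] == "schwedler" else 48
        return True
    return False

rules = [rule_dome_type, rule_member_size]
while any(rule(blackboard) for rule in rules):  # fire until quiescent
    pass
print(blackboard)
```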

Relevance:

30.00%

Publisher:

Abstract:

In this paper we discuss implicit methods based on stiffly accurate Runge-Kutta methods and splitting techniques for solving Stratonovich stochastic differential equations (SDEs). Two splitting techniques, the balanced splitting technique and the deterministic splitting technique, are used in this paper. We construct a two-stage implicit Runge-Kutta method with strong order 1.0 which is corrected twice and needs no update. The stability properties and numerical results show that this approach is suitable for solving stiff SDEs. (C) 2001 Elsevier Science B.V. All rights reserved.
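
The deterministic splitting idea can be sketched on the linear Stratonovich test equation dX = aX dt + bX ∘ dW: per step, solve the stiff drift implicitly, then advance the noise part exactly. This is illustrative only; the paper's two-stage stiffly accurate Runge-Kutta scheme is not reproduced here.

```python
import numpy as np

a, b = -100.0, 0.5        # stiff drift, illustrative values
X, T, N = 1.0, 1.0, 400
h = T / N
rng = np.random.default_rng(3)

for _ in range(N):
    # 1) implicit Euler for the drift dX = a*X dt (linear => closed form)
    X = X / (1.0 - a * h)
    # 2) noise flow dX = b*X ∘ dW (exact for linear Stratonovich noise)
    dW = rng.normal(scale=np.sqrt(h))
    X = X * np.exp(b * dW)
print("X(T) ≈", X)
```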

Relevance:

30.00%

Publisher:

Abstract:

Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
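
The overall flow of such a procedure, fit parameters from lab data, scale them with a mill-size criterion, then simulate the full-scale mill, can be sketched as below. The rate values, the D^0.5 scaling rule, and the perfectly mixed mill with first-order breakage (ignoring material arriving from coarser size classes) are all illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

sizes_mm = np.array([4.0, 2.0, 1.0, 0.5])
k_lab = np.array([0.80, 0.55, 0.35, 0.20])   # 1/min, "fitted" lab breakage rates
D_lab, D_full = 0.3, 5.0                     # m, lab and full-scale mill diameters

k_full = k_lab * (D_full / D_lab) ** 0.5     # hypothetical scale-up criterion

feed = np.array([40.0, 30.0, 20.0, 10.0])    # t/h per size class
tau = 5.0                                    # min, mean residence time
# Perfectly mixed mill: product_i = feed_i / (1 + k_i * tau)
product = feed / (1.0 + k_full * tau)
for s, f, p in zip(sizes_mm, feed, product):
    print(f"{s:4.1f} mm: feed {f:5.1f} t/h -> product {p:5.1f} t/h")
```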

Relevance:

30.00%

Publisher:

Abstract:

A new method is presented to determine an accurate eigendecomposition of difficult low temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case of the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. Quadruple precision (16-byte) arithmetic is shown to be required irrespective of the eigensolution method used. (C) 2001 Elsevier Science B.V. All rights reserved.
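
The extended-precision requirement can be imitated in Python with mpmath, whose working precision can be raised to roughly the 34 decimal digits of quadruple precision (versus ~16 for doubles). The 3x3 rate matrix below is a toy stand-in, not an ethane master-equation matrix.

```python
import mpmath as mp

mp.mp.dps = 34   # roughly quadruple-precision decimal digits

# Toy symmetric rate matrix with the tiny near-degenerate eigenvalues
# that make low-temperature master equations numerically difficult.
M = mp.matrix([[-1e-12,  1e-12,      0],
               [ 1e-12, -2e-12,  1e-12],
               [     0,  1e-12, -1e-12]])
E, V = mp.eig(M)
print([mp.nstr(e, 8) for e in E])
```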

Relevance:

30.00%

Publisher:

Abstract:

Problems associated with the stickiness of food in processing and storage practices, along with its causative factors, are outlined. Fundamental mechanisms that explain why and how food products become sticky are discussed. Methods currently in use for characterizing and overcoming stickiness problems in food processing and storage operations are described. The use of the glass transition temperature-based model, which provides a rational basis for understanding and characterizing the stickiness of many food products, is highlighted.
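
A common concrete form of the glass transition approach pairs the Gordon-Taylor equation, which estimates the mixture Tg from composition, with the rule of thumb that stickiness sets in once the product temperature exceeds Tg by roughly 10-20 K. The constants below are illustrative, not fitted values for any real product.

```python
# Gordon-Taylor estimate of a solids/water mixture's glass transition
# temperature, plus a simple T - Tg stickiness check.
def gordon_taylor(w_solids, Tg_solids, Tg_water=138.0, k=5.0):
    """Tg (K) of the mixture; w_solids is the solids mass fraction."""
    w_water = 1.0 - w_solids
    return (w_solids * Tg_solids + k * w_water * Tg_water) / (w_solids + k * w_water)

Tg = gordon_taylor(w_solids=0.97, Tg_solids=335.0)   # e.g. an amorphous sugar
T_process = 323.0                                    # K, dryer outlet (assumed)
print(f"Tg = {Tg:.1f} K, T - Tg = {T_process - Tg:.1f} K",
      "-> sticky" if T_process - Tg > 15.0 else "-> free-flowing")
```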

Relevance:

30.00%

Publisher:

Abstract:

Dispersal, or the amount of dispersion between an individual's birthplace and that of its offspring, is of great importance in population biology, behavioural ecology and conservation; however, obtaining direct estimates from field data on natural populations can be problematic. The prickly forest skink, Gnypetoscincus queenslandiae, is a rainforest endemic skink from the wet tropics of Australia. Because of its log-dwelling habits and lack of definite nesting sites, a demographic estimate of dispersal distance is difficult to obtain. Neighbourhood size, defined as 4πDσ² (where D is the population density and σ² is the mean axial squared parent-offspring dispersal rate), dispersal and density were estimated directly and indirectly for this species using mark-recapture and microsatellite data, respectively, on lizards captured at a local geographical scale of 3 ha. Mark-recapture data gave a dispersal rate of 843 m²/generation (assuming a generation time of 6.5 years), a time-scaled density of 13,635 individuals·generation/km² and, hence, a neighbourhood size of 144 individuals. A genetic method based on the multilocus (10 loci) microsatellite genotypes of individuals and their geographical locations indicated that there is a significant isolation-by-distance pattern, and gave a neighbourhood size of 69 individuals, with a 95% confidence interval between 48 and 184. This translates into a dispersal rate of 404 m²/generation when using the mark-recapture density estimate, or an estimate of time-scaled population density of 6520 individuals·generation/km² when using the mark-recapture dispersal rate estimate. The relationship between the two categories of neighbourhood size, dispersal and density estimates, and reasons for any disparities, are discussed.
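
The abstract's numbers tie together through Nb = 4πDσ²; a quick check (with σ² converted from m² to km² so the units cancel) reproduces both neighbourhood sizes.

```python
import math

def neighbourhood_size(D_per_km2, sigma2_m2):
    """Nb = 4*pi*D*sigma^2, D in individuals*generation/km^2, sigma^2 in m^2."""
    return 4 * math.pi * D_per_km2 * (sigma2_m2 * 1e-6)  # m^2 -> km^2

print(neighbourhood_size(13_635, 843))  # mark-recapture: ~144 individuals
print(neighbourhood_size(13_635, 404))  # genetic sigma^2, MR density: ~69
```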

Relevance:

30.00%

Publisher:

Abstract:

This article describes a new test method for the assessment of the severity of environmental stress cracking of biomedical polyurethanes in a manner that minimizes the degree of subjectivity involved. The effect of applied strain and acetone pre-treatment on degradation of Pellethane 2363 80A and Pellethane 2363 55D polyurethanes under in vitro and in vivo conditions is studied. The results are presented using a magnification-weighted image rating system that allows the semi-quantitative rating of degradation based on the distribution and severity of surface damage. Devices for applying controlled strain to both flat sheet and tubing samples are described. The new rating system consistently discriminated between the effects of acetone pre-treatments, strain and exposure times in both in vitro and in vivo experiments. As expected, P80A underwent considerable stress cracking compared with P55D. P80A produced similar stress crack ratings in both in vivo and in vitro experiments; however, P55D performed worse under in vitro conditions than in vivo. This result indicated that care must be taken when interpreting in vitro results in the absence of in vivo data. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Lateral ventricular volumes based on segmented brain MR images can be significantly underestimated if partial volume effects are not considered. This is because a group of voxels in the neighborhood of the lateral ventricles is often misclassified as gray matter voxels due to partial volume effects. This group of voxels is actually a mixture of ventricular cerebrospinal fluid and white matter and, therefore, a portion of it should be included as part of the lateral ventricular structure. In this note, we describe an automated method for the measurement of lateral ventricular volumes on segmented brain MR images. Image segmentation was carried out using a combination of intensity correction and thresholding. The method features a procedure for addressing misclassified voxels in the surroundings of the lateral ventricles. A detailed analysis showed that lateral ventricular volumes could be underestimated by 10 to 30%, depending upon the size of the lateral ventricular structure, if misclassified voxels were not included. Validation of the method was done through comparison with averaged manually traced volumes. Finally, the merit of the method is demonstrated in the evaluation of the rate of lateral ventricular enlargement. (C) 2001 Elsevier Science Inc. All rights reserved.
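
The correction step can be sketched as a morphological pass: voxels labelled gray matter that border the ventricular CSF mask are treated as partial-volume mixtures and counted toward the ventricular volume. The labels, the random volume, and the one-voxel dilation radius below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from scipy import ndimage

CSF, GM, WM = 1, 2, 3
labels = np.random.default_rng(4).integers(1, 4, size=(64, 64, 64))
ventricle_mask = labels == CSF                 # stand-in ventricle mask

# GM voxels touching the ventricle are reclassified as partial-volume CSF.
near_ventricle = ndimage.binary_dilation(ventricle_mask) & (labels == GM)
voxel_vol_mm3 = 1.0
corrected = (ventricle_mask.sum() + near_ventricle.sum()) * voxel_vol_mm3
print(f"uncorrected {ventricle_mask.sum()} mm^3, corrected {corrected:.0f} mm^3")
```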

Relevance:

30.00%

Publisher:

Abstract:

Germline mutations of the PTEN tumor-suppressor gene, on 10q23, cause Cowden syndrome, an inherited hamartoma syndrome with a high risk of breast, thyroid and endometrial carcinomas and, some suggest, melanoma. To date, most studies which strongly implicate PTEN in the etiology of sporadic melanomas have depended on cell lines, short-term tumor cultures and noncultured metastatic melanomas. The only study which reports PTEN protein expression in melanoma focuses on cytoplasmic expression, mainly in metastatic samples. To determine how PTEN contributes to the etiology or the progression of primary cutaneous melanoma, we examined cytoplasmic and nuclear PTEN expression against clinical and pathologic features in a population-based sample of 150 individuals with incident primary cutaneous melanoma. Among 92 evaluable samples, 30 had no or decreased cytoplasmic PTEN protein expression and the remaining 62 had normal PTEN expression. In contrast, 84 tumors had no or decreased nuclear expression and 8 had normal nuclear PTEN expression. None of the clinical features studied, such as Clark's level, Breslow thickness or sun exposure, was associated with cytoplasmic PTEN expression levels. An association with loss of nuclear PTEN expression was indicated for anatomical site (p = 0.06) and mitotic index (p = 0.02). Melanomas also tended either to lack nuclear PTEN expression or to express p53 alone, rather than both simultaneously (p = 0.02). In contrast with metastatic melanoma, where we have shown previously that almost two-thirds of tumors have some PTEN inactivation, only one-third of primary melanomas had PTEN silencing. This suggests that PTEN inactivation is a late event likely related to melanoma progression rather than initiation. Taken together with our previous observations in thyroid and islet cell tumors, our data suggest that nuclear-cytoplasmic partitioning of PTEN might also play a role in melanoma progression. (C) 2002 Wiley-Liss, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Motivation: This paper introduces the software EMMIX-GENE, which has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples, by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so mixtures of factor analyzers are exploited to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, either consistent with the external classification of the tissues or with background biological knowledge of these sets.
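
The gene-selection step can be sketched per gene as a likelihood-ratio test of one versus two mixture components. The sketch below substitutes Gaussian components (scikit-learn's GaussianMixture) for the t components EMMIX-GENE actually fits, runs on simulated data, and uses an arbitrary threshold.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n_tissues, n_genes = 60, 200
X = rng.normal(size=(n_tissues, n_genes))
X[:30, :10] += 3.0                      # first 10 genes are truly bimodal

def lr_stat(values):
    """2 * (log-lik of 2-component fit - log-lik of 1-component fit)."""
    v = values.reshape(-1, 1)
    ll1 = GaussianMixture(1, random_state=0).fit(v).score(v) * len(v)
    ll2 = GaussianMixture(2, random_state=0).fit(v).score(v) * len(v)
    return 2.0 * (ll2 - ll1)

stats = np.array([lr_stat(X[:, j]) for j in range(n_genes)])
threshold = 8.0                         # illustrative cut-off
print("genes retained:", np.sort(np.where(stats > threshold)[0])[:15])
```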