986 results for "Backward model refinement"
Abstract:
Guided by a modified information-motivation-behavioral skills model, this study identified predictors of condom use with steady partners among heterosexual people living with HIV. Consecutive patients at 14 European HIV outpatient clinics received an anonymous, standardized, self-administered questionnaire between March and December 2007. Data were analyzed using descriptive statistics and two-step backward elimination regression analyses stratified by gender. The survey included 651 participants (n = 364, 56% women; n = 287, 44% men). Mean age was 39 years for women and 43 years for men. Most had acquired HIV sexually and more than half were in a serodiscordant relationship. Sixty-three percent (n = 229) of women and 59% of men (n = 169) reported at least one sexual encounter with a steady partner in the 6 months prior to the survey. Fifty-one percent (n = 116) of women and 59% of men (n = 99) used condoms consistently with that partner. In both genders, condom use was positively associated with a subjective norm conducive to condom use and with self-efficacy to use condoms. Having a partner whose HIV status was positive or unknown reduced condom use. In men, higher education and knowledge about condom use additionally increased condom use, while the use of erectile-enhancing medication decreased it. For women, HIV disclosure to partners additionally reduced the likelihood of condom use. Positive attitudes to condom use and subjective norm increased self-efficacy in both genders; however, a number of gender-related differences appeared to influence self-efficacy. Service providers should pay attention to the identified predictors of condom use and adopt comprehensive and gender-related approaches for preventive interventions with people living with HIV.
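The two-step backward elimination mentioned above is a generic model-selection loop. Below is a minimal sketch, assuming a binary condom-use outcome in a pandas DataFrame and an illustrative 0.05 retention threshold; none of these details are taken from the study itself:

    # Backward elimination for a logistic model: refit, drop the least
    # significant predictor, repeat until every remaining term passes alpha.
    import statsmodels.api as sm

    def backward_eliminate(df, outcome, predictors, alpha=0.05):
        kept = list(predictors)
        while kept:
            X = sm.add_constant(df[kept])
            fit = sm.Logit(df[outcome], X).fit(disp=0)
            pvals = fit.pvalues.drop("const")   # ignore the intercept
            worst = pvals.idxmax()
            if pvals[worst] <= alpha:           # all remaining terms significant
                return fit, kept
            kept.remove(worst)                  # drop the weakest predictor
        return None, []

Stratifying by gender, as in the study, would simply mean running the loop once per subgroup.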
Abstract:
QUESTIONS UNDER STUDY: The starting point of the interdisciplinary project "Assessing the impact of diagnosis related groups (DRGs) on patient care and professional practice" (IDoC) was the lack of a systematic ethical assessment for the introduction of cost containment measures in healthcare. Our aim was to contribute to the methodological and empirical basis of such an assessment. METHODS: Five sub-groups conducted separate but related research within the fields of biomedical ethics, law, nursing sciences and health services, applying a number of complementary methodological approaches. The individual research projects were framed within an overall ethical matrix. Workshops and bilateral meetings were held to identify and elaborate joint research themes. RESULTS: Four common, ethically relevant themes emerged in the results of the studies across sub-groups: (1.) the quality and safety of patient care, (2.) the state of professional practice of physicians and nurses, (3.) changes in incentive structures, (4.) vulnerable groups and access to healthcare services. Furthermore, much-needed data for future comparative research have been collected and some early insights into the potential impact of DRGs are outlined. CONCLUSIONS: Based on the joint results we developed preliminary recommendations related to conceptual analysis, methodological refinement, monitoring and implementation.
Abstract:
Homology modeling is the most commonly used technique to build a three-dimensional model for a protein sequence. It relies heavily on the quality of the sequence alignment between the protein to model and related proteins with a known three-dimensional structure. Alignment quality can be assessed according to the physico-chemical properties of the three-dimensional models it produces. In this work, we introduce fifteen predictors designed to evaluate the properties of the models obtained for various alignments. They consist of an energy value obtained from different force fields (CHARMM, ProsaII or ANOLEA) computed on residues selected around misaligned regions. These predictors were evaluated on ten challenging test cases. For each target, all possible ungapped alignments are generated and their corresponding models are computed and evaluated. The best predictor, retrieving the structural alignment for 9 out of 10 test cases, is based on the ANOLEA atomistic mean force potential and takes into account residues around misaligned secondary structure elements. The performance of the other predictors is significantly lower. This work shows that substantial improvement in local alignments can be obtained by careful assessment of the local structure of the resulting models.
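The exhaustive search over ungapped alignments is straightforward to sketch: slide one sequence along the other and score each overlap. In the sketch below, score_model is a hypothetical stand-in (here just negated identity matches) for the paper's actual step of building a model and evaluating a CHARMM/ProsaII/ANOLEA energy around misaligned regions:

    # Enumerate every ungapped alignment of query vs. template and keep the
    # one whose (placeholder) pseudo-energy is lowest.
    def ungapped_alignments(query, template):
        for offset in range(-(len(query) - 1), len(template)):
            q0, t0 = max(0, -offset), max(0, offset)
            n = min(len(query) - q0, len(template) - t0)
            if n > 0:
                yield offset, query[q0:q0 + n], template[t0:t0 + n]

    def score_model(offset, q_sub, t_sub):
        # Placeholder score; the paper instead evaluates force-field energies
        # on residues selected around misaligned regions of the built model.
        return -sum(a == b for a, b in zip(q_sub, t_sub))

    best = min(ungapped_alignments("MKVLA", "GMKVLAT"),
               key=lambda a: score_model(*a))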
The transtheoretical model in weight management: Validation of the Processes of Change Questionnaire
Abstract:
Objective: The processes of change implied in weight management remain unclear. The present study aimed to identify these processes by validating a questionnaire designed to assess processes of change (the P-Weight) in line with the transtheoretical model. The relationship of processes of change with stages of change and other external variables is also examined. Methods: Participants were 723 people from community and clinical settings in Barcelona. Their mean age was 32.07 (SD = 14.55) years; most of them were women (75.0%), and their mean BMI was 26.47 (SD = 8.52) kg/m². They all completed the P-Weight and the stages of change questionnaire (S-Weight), both applied to weight management, as well as two subscales about concern with dieting from the Eating Disorders Inventory-2 and Eating Attitudes Test-40 questionnaires. Results: A 34-item version of the P-Weight was obtained by means of a refinement process. The principal components analysis applied to half of the sample identified four processes of change. A confirmatory factor analysis was then carried out with the other half of the sample, revealing that the model of four freely correlated first-order factors showed the best fit (GFI = 0.988, AGFI = 0.986, NFI = 0.986, and SRMR = 0.0559). Corrected item-total correlations (0.322-0.865) and Cronbach's alpha coefficients (0.781-0.960) were adequate. The relationship between the P-Weight and the S-Weight and the concern with dieting measures from other questionnaires supported the validity of the scale. Conclusion: The study identified processes of change involved in weight management and reports the adequate psychometric properties of the P-Weight. It also reveals the relationship between processes and stages of change and other external variables.
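Of the psychometrics reported, Cronbach's alpha is the most self-contained to reproduce. A minimal sketch using the standard formula (rows = respondents, columns = items of one scale; nothing here is specific to the P-Weight data):

    import numpy as np

    def cronbach_alpha(items):
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)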
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between adapted meshes of both methods and ground-truth volumes shows that our method reduces reconstruction errors faster.
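The Hausdorff comparison used above is easy to reproduce for two point samples of a surface; scipy provides the directed distance, and the symmetric version is the larger of the two directions:

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(points_a, points_b):
        # Symmetric Hausdorff distance between two (n, 3) point clouds,
        # e.g. mesh vertices vs. a ground-truth surface sample.
        d_ab = directed_hausdorff(points_a, points_b)[0]
        d_ba = directed_hausdorff(points_b, points_a)[0]
        return max(d_ab, d_ba)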
Abstract:
A new algorithm is described for refining the pose of a model of a rigid object, to conform more accurately to the image structure. Elemental 3D forces are considered to act on the model. These are derived from directional derivatives of the image local to the projected model features. The convergence properties of the algorithm are investigated and compared to those of a previous technique. Its use in a video sequence of a cluttered outdoor traffic scene is also illustrated and assessed.
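A toy illustration of image-derived forces acting on a model, reduced to a 2D translation update (the paper derives full 3D forces on a rigid model from directional derivatives near projected features, which this does not attempt):

    import numpy as np

    def translation_step(image, points, step=0.5):
        # Sample the intensity gradient at each projected model point and
        # average it into a single translational "force" on the model.
        gy, gx = np.gradient(image.astype(float))
        px, py = points[:, 0].astype(int), points[:, 1].astype(int)
        force = np.stack([gx[py, px], gy[py, px]], axis=1).mean(axis=0)
        return step * force   # nudge the pose up the local gradient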
Abstract:
Different optimization methods can be employed to optimize a numerical estimate for the match between an instantiated object model and an image. In order to take advantage of gradient-based optimization methods, perspective inversion must be used in this context. We show that convergence can be very fast by extrapolating to maximum goodness-of-fit with Newton's method. This approach is related to methods which either maximize a similar goodness-of-fit measure without use of gradient information, or else minimize distances between projected model lines and image features. Newton's method combines the accuracy of the former approach with the speed of convergence of the latter.
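The Newton extrapolation described above is the generic second-order update x <- x - H^-1 grad g applied to a goodness-of-fit g. A finite-difference sketch follows; the paper's own formulation works through perspective inversion rather than numerical derivatives:

    import numpy as np

    def newton_step(g, x, h=1e-5):
        # One Newton step toward a stationary point of g, using central
        # finite differences for the gradient and Hessian.
        n = len(x)
        grad = np.zeros(n)
        hess = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = h
            grad[i] = (g(x + ei) - g(x - ei)) / (2 * h)
            for j in range(n):
                ej = np.zeros(n); ej[j] = h
                hess[i, j] = (g(x + ei + ej) - g(x + ei - ej)
                              - g(x - ei + ej) + g(x - ei - ej)) / (4 * h * h)
        return x - np.linalg.solve(hess, grad)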
Abstract:
Simulations of the global atmosphere for weather and climate forecasting require fast and accurate solutions, so operational models use high-order finite differences on regular structured grids. This precludes the use of local refinement; techniques allowing local refinement are either expensive (e.g. high-order finite element techniques) or have reduced accuracy at changes in resolution (e.g. unstructured finite volume with linear differencing). We present solutions of the shallow-water equations for westerly flow over a mid-latitude mountain from a finite-volume model written using OpenFOAM. A second/third-order accurate differencing scheme is applied on arbitrarily unstructured meshes made up of various shapes and refinement patterns. The results are as accurate as those of equivalent-resolution spectral methods. Using lower-order differencing reduces accuracy at a refinement pattern, which allows errors from refinement of the mountain to accumulate and reduces the global accuracy over a 15-day simulation. We have therefore introduced a scheme which fits a 2D cubic polynomial approximately on a stencil around each cell. Using this scheme means that refinement of the mountain improves the accuracy after a 15-day simulation. This is a more severe test of local mesh refinement for global simulations than has previously been presented, but a realistic one if these techniques are to be used operationally. These efficient, high-order schemes may make it possible for local mesh refinement to be used by weather and climate forecast models.
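Fitting a full 2D cubic over a stencil is a small least-squares problem with ten monomial terms; a sketch with placeholder stencil coordinates and values:

    import numpy as np

    def cubic_fit_2d(xs, ys, values):
        # Columns: 1, x, y, x^2, xy, y^2, x^3, x^2*y, x*y^2, y^3; needs at
        # least 10 stencil points for a determined fit.
        A = np.column_stack([np.ones_like(xs), xs, ys,
                             xs**2, xs*ys, ys**2,
                             xs**3, xs**2*ys, xs*ys**2, ys**3])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
        return coeffs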
Abstract:
Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinement to resolve a mountain in an otherwise coarse mesh can improve accuracy for the same cost. The model prognostic variables are height and momentum collocated at cell centers, and (to remove grid-scale oscillations of the A grid) the mass flux between cells is advanced from the old momentum using the momentum equation. Quadratic and upwind biased cubic differencing methods are used as explicit corrections to a fast implicit solution that uses linear differencing.
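One way to express "gradual" refinement over a disc is a smooth target-spacing function fed to a Delaunay or Voronoi mesh generator. The radii, spacings, and blend width below are purely illustrative:

    import numpy as np

    def target_spacing(dist, r_disc=0.5, fine=0.02, coarse=0.1, width=0.2):
        # Fine spacing inside the disc, coarse outside, with a cosine-smoothed
        # ramp across the transition band to avoid abrupt resolution changes.
        t = np.clip((dist - r_disc) / width, 0.0, 1.0)
        blend = 0.5 - 0.5 * np.cos(np.pi * t)
        return fine + (coarse - fine) * blend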
Abstract:
A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given and adaptive mesh refinement strategies are presented. The refinement criterion is an a posteriori error estimator based on stratification, shear and distance to the surface. The model's performance is assessed by studying the stress-driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the presented model to follow some internal structures of the flow and paves the way for truly generalized vertical coordinates.
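The abstract does not give the estimator's exact form, so the following is only a guessed shape of such a criterion: a weighted combination of local stratification, shear, and nearness to the surface, with all weights and scales assumed:

    import numpy as np

    def refinement_indicator(n2, s2, depth, w=(1.0, 1.0, 1.0), d0=10.0):
        # n2: buoyancy frequency squared, s2: shear squared, depth in metres;
        # a larger indicator means refine here. Weights and d0 are assumptions.
        near_surface = np.exp(-np.abs(depth) / d0)
        return w[0] * np.abs(n2) + w[1] * np.abs(s2) + w[2] * near_surface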
Abstract:
Parameters to be determined in a least-squares refinement calculation to fit a set of observed data may sometimes usefully be 'predicated' to values obtained from some independent source, such as a theoretical calculation. An algorithm for achieving this in a least-squares refinement calculation is described, which leaves the operator in full control of the weight that they may wish to attach to the predicate values of the parameters.
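Predicated parameters can be implemented as extra weighted observations appended to the design matrix, so the solver minimizes ||Ax - b||^2 + sum_i w_i (x_i - p_i)^2. A minimal sketch of that idea; the original algorithm is formulated for a general least-squares refinement engine rather than this dense solver:

    import numpy as np

    def predicated_lstsq(A, b, predicate, weights):
        # One extra row per parameter: sqrt(w_i) * x_i ~ sqrt(w_i) * p_i,
        # so the operator controls each predicate's pull via its weight.
        sw = np.sqrt(weights)
        A_aug = np.vstack([A, np.diag(sw)])
        b_aug = np.concatenate([b, sw * predicate])
        x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        return x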
Abstract:
Survival times for the Acacia mangium plantation in the Segaliud Lokan Project, Sabah, East Malaysia were analysed based on 20 permanent sample plots (PSPs) established in 1988 as a spacing experiment. The PSPs were established following a complete randomized block design with five levels of spacing randomly assigned to units within four blocks at different sites. The survival times of trees in years are of interest. Since the inventories were only conducted annually, the actual survival time for each tree was not observed; hence, the data set comprises censored survival times. Initial analysis of the survival of the Acacia mangium plantation suggested a block-by-spacing interaction; a Weibull model gives a reasonable fit to the replicate survival times within each PSP, but a standard Weibull regression model is inappropriate because the shape parameter differs between PSPs. In this paper we investigate the form of the non-constant Weibull shape parameter. Parsimonious models for the Weibull survival times have been derived using maximum likelihood methods. The factor selection for the parameters is based on a backward elimination procedure. The models are compared using likelihood ratio statistics. The results suggest that both Weibull parameters depend on spacing and block.
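A right-censored Weibull likelihood with a covariate-dependent shape can be written down directly; the log-link linear forms below are an illustrative parameterization, not the paper's fitted model:

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(theta, t, censored, X):
        # Scale and shape both depend on covariates (e.g. spacing, block)
        # through log-links; censored trees contribute log S(t) only.
        p = X.shape[1]
        scale = np.exp(X @ theta[:p])
        shape = np.exp(X @ theta[p:])
        z = (t / scale) ** shape
        log_pdf = np.log(shape / scale) + (shape - 1) * np.log(t / scale) - z
        return -np.sum(np.where(censored, -z, log_pdf))

    # fit = minimize(neg_loglik, np.zeros(2 * X.shape[1]),
    #                args=(times, censored, X), method="Nelder-Mead")

Backward elimination then proceeds by refitting with terms removed and comparing likelihood-ratio statistics, as the abstract describes.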
Abstract:
Finding the smallest eigenvalue of a given square matrix A of order n is a computationally very intensive problem. The most popular method for this problem is the Inverse Power Method, which uses LU-decomposition and forward and backward solving of the factored system at every iteration step. An alternative to this method is the Resolvent Monte Carlo method, which uses a representation of the resolvent matrix [I - qA]^(-m) as a series and then performs Monte Carlo iterations (random walks) on the elements of the matrix. This leads to great savings in computation, but the method has many restrictions and very slow convergence. In this paper we propose a method that includes a fast Monte Carlo procedure for finding the inverse matrix, a refinement procedure to improve the approximation of the inverse if necessary, and Monte Carlo power iterations to compute the smallest eigenvalue. We provide not only theoretical estimates of accuracy and convergence but also results from numerical tests performed on a number of test matrices.
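For contrast, a sketch of the classical inverse power method referred to above: factor A once, then each iteration costs one forward/backward solve of the factored system (this assumes the smallest eigenvalue is real and simple):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_power(A, iters=200, tol=1e-10):
        lu, piv = lu_factor(A)                 # LU-decomposition, done once
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        mu = 0.0
        for _ in range(iters):
            y = lu_solve((lu, piv), x)         # one forward + backward solve
            new_mu = x @ y                     # estimate of 1/lambda_min
            x = y / np.linalg.norm(y)
            if abs(new_mu - mu) < tol * abs(new_mu):
                break
            mu = new_mu
        return 1.0 / mu                        # smallest eigenvalue of A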
Abstract:
A fast backward elimination algorithm is introduced based on a QR decomposition and Givens transformations to prune radial-basis-function networks. Nodes are sequentially removed using an increment of error variance criterion. The procedure is terminated by using a prediction risk criterion so as to obtain a model structure with good generalisation properties. The algorithm can be used to postprocess radial basis centres selected using a k-means routine and, in this mode, it provides a hybrid supervised centre selection approach.
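The elimination criterion is easy to show with plain least squares, dropping whichever centre's removal increases the residual sum of squares least. The paper's contribution is doing this cheaply with QR/Givens updates rather than refitting from scratch as this sketch does, and terminating with a prediction-risk criterion rather than the fixed n_keep used here:

    import numpy as np

    def rbf_design(X, centres, width):
        # Gaussian RBF design matrix: one column per centre.
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    def prune_centres(X, y, centres, width, n_keep):
        keep = list(range(len(centres)))
        while len(keep) > n_keep:
            trials = []
            for i in keep:
                cols = [j for j in keep if j != i]
                Phi = rbf_design(X, centres[cols], width)
                w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
                trials.append((((y - Phi @ w) ** 2).sum(), i))
            keep.remove(min(trials)[1])   # drop the least-needed centre
        return centres[keep]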
Abstract:
The isotropic crystallographic model of the structure of xylanase I from Thermoascus aurantiacus (TAXI) has now been refined anisotropically at 1.14 Å resolution to a standard residual of R = 11.1% for all data. TAXI is amongst the five largest proteins deposited in the Protein Data Bank to have been refined with anisotropic displacement parameters (ADPs) at this level of resolution. The anisotropy analysis revealed a more isotropic distribution of anisotropy than usually observed. Adding ADPs resulted in high-quality electron-density maps which revealed discrepancies from the previously suggested primary sequences for this enzyme. Side-chain conformational disorder was modelled for 16 residues, including Trp275, a bulky residue at the active site. An unrestrained refinement was consistent with the protonation of the catalytic acid/base glutamate and the deprotonation of the nucleophile glutamate, as required for catalysis. The thermal stability of TAXI is reinterpreted in the light of the new refined model.
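For reference, the standard conversion from an anisotropic displacement tensor to its equivalent isotropic value; this is a textbook formula, not something specific to the TAXI refinement, and it assumes U is already expressed on a Cartesian basis:

    import numpy as np

    def b_equivalent(U):
        # U_eq = trace(U) / 3; B_eq = 8 * pi^2 * U_eq (in A^2 if U is in A^2)
        u_eq = np.trace(np.asarray(U)) / 3.0
        return 8.0 * np.pi ** 2 * u_eq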