938 results for Quasi-analytical algorithms
Abstract:
Protein-ligand docking has made important progress during the last decade and has become a powerful tool for drug development, opening the way to virtual high-throughput screening and in silico structure-based ligand design. Despite this flattering picture, recent publications have shown that the docking problem is far from being solved, and that further developments are still needed to achieve high prediction success rates and accuracy. Introducing an accurate description of the solvation effect upon binding is thought to be essential to achieve this goal. In particular, EADock uses the Generalized Born Molecular Volume 2 (GBMV2) solvent model, which has been shown to reproduce accurately the desolvation energies calculated by solving the Poisson equation. Here, the implementation of the Fast Analytical Continuum Treatment of Solvation (FACTS) as an implicit solvation model in small-molecule docking calculations has been assessed using the EADock docking program. Our results strongly support the use of FACTS for docking. The success rates of EADock/FACTS and EADock/GBMV2 are similar, i.e. around 75% for local docking and 65% for blind docking. However, these results come at a much lower computational cost: FACTS is 10 times faster than GBMV2 in calculating the total electrostatic energy, and allows a speed-up of EADock by a factor of 4. This study also supports the EADock development strategy of relying on the CHARMM package for energy calculations, which enables straightforward implementation and testing of the latest developments in the field of Molecular Modeling.
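As a rough back-of-the-envelope illustration (not stated in the abstract), the two speed-up figures are mutually consistent under an Amdahl's-law argument: if a fraction f of the EADock runtime is spent on the electrostatic/solvation term and that term is accelerated by a factor s, the overall speed-up is

S_{\mathrm{total}} = \frac{1}{(1-f) + f/s}, \qquad \frac{1}{(1-f) + f/10} = 4 \;\Rightarrow\; f = \frac{0.75}{0.9} \approx 0.83

i.e. roughly 80% of the original runtime would have to be dominated by the electrostatics calculation for a 10-fold component speed-up to translate into a 4-fold overall speed-up.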
Abstract:
ABSTRACT: BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis, we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities had no significant effect. Results were similar among 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, or other factors that promote false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
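Since every sample in this design stems from an infection older than 12 months, the specificity of an algorithm reduces to the fraction of samples it does not flag as incident (a "false-incident" call being a false positive for recent infection). A minimal sketch of that computation, with a hypothetical function name and illustrative counts rather than the study's data:

# Minimal sketch (not the authors' code): diagnostic specificity of a STARHS
# algorithm on samples known to be chronic (>12 months infected).
def specificity(classified_incident):
    # classified_incident: list of booleans, True = algorithm calls the sample
    # "incident" although every sample is in fact a chronic infection.
    n = len(classified_incident)
    false_incident = sum(classified_incident)
    return (n - false_incident) / n

# Example (illustrative counts): 14 false-incident calls among 412 chronic
# samples gives a specificity of about 96.6%.
print(specificity([True] * 14 + [False] * 398))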
Abstract:
In applied regional analysis, statistical information is usually published at different territorial levels with the aim of providing information of interest to different potential users. When using this information, there are two different choices: first, to use normative regions (towns, provinces, etc.) or, second, to design analytical regions directly related to the analysed phenomena. In this paper, provincial time series of unemployment rates in Spain are used to compare the results obtained by applying two analytical regionalisation models (a two-stage procedure based on cluster analysis and a procedure based on mathematical programming) with the normative regions available at two different scales: NUTS II and NUTS I. The results show that more homogeneous regions were designed when applying both analytical regionalisation tools. Two other interesting results are that the analytical regions were also more stable over time, and that scale affects the regionalisation process.
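A minimal sketch of the cluster-analysis stage of such a regionalisation, assuming synthetic unemployment series and ignoring the contiguity constraints that a real regionalisation procedure would normally enforce (variable names and the number of clusters are illustrative, not taken from the paper):

# Illustrative sketch only: group provinces whose unemployment series move together.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 50 provinces x 80 quarterly unemployment rates (synthetic data)
rates = rng.normal(loc=15, scale=3, size=(50, 80))

# Hierarchical (Ward) clustering of the provincial series
Z = linkage(rates, method="ward")
regions = fcluster(Z, t=17, criterion="maxclust")  # target number of regions is a free choice

# Homogeneity check: within-region variance of the series
for r in np.unique(regions):
    members = rates[regions == r]
    print(f"region {r}: {members.shape[0]} provinces, within-region variance {members.var():.2f}")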
Abstract:
Many regional governments in developed countries design programs to improve the competitiveness of local firms. In this paper, we evaluate the effectiveness of public programs whose aim is to enhance the performance of firms located in Catalonia (Spain). We compare the performance of publicly subsidised companies (treated) with that of similar, but unsubsidised companies (non-treated). We use the Propensity Score Matching (PSM) methodology to construct a control group which, with respect to its observable characteristics, is as similar as possible to the treated group, and that allows us to identify firms which retain the same propensity to receive public subsidies. Once a valid comparison group has been established, we compare the respective performance of each firm. As a result, we find that recipient firms, on average, change their business practices, improve their performance, and increase their value added as a direct result of public subsidy programs.
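A minimal propensity-score-matching sketch with synthetic data, to make the construction of the control group concrete (variable names, the data-generating process and the matching rule are illustrative assumptions, not the study's specification):

# Illustrative PSM sketch: match each subsidised firm to the unsubsidised firm
# with the closest estimated propensity to receive the subsidy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))                                  # observable firm characteristics
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))        # subsidy more likely for some firms
outcome = X.sum(axis=1) + 0.5 * treated + rng.normal(size=n) # e.g. value-added growth

# 1) Estimate the propensity score P(subsidy | X)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Nearest-neighbour matching on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(ps[treated == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))

# 3) Average treatment effect on the treated: mean outcome difference over matched pairs
att = (outcome[treated == 1] - outcome[treated == 0][idx.ravel()]).mean()
print(f"estimated ATT: {att:.3f}")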
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
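For reference, the signal-to-noise ratio used above follows the standard region-of-interest definition (the formula is generic, not spelled out in the abstract):

\mathrm{SNR} = \frac{\overline{\mathrm{HU}}_{\mathrm{aorta}}}{\sigma_{\mathrm{aorta}}}

where the numerator is the mean attenuation measured in the aorta and the denominator its standard deviation (image noise).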
Abstract:
The real part of the optical potential for heavy-ion elastic scattering is obtained by double folding of the nuclear densities with a density-dependent nucleon-nucleon effective interaction which was successful in describing the binding, size, and nucleon separation energies in spherical nuclei. A simple analytical form is found to differ from the resulting potential by considerably less than 1% throughout the important region. This analytical potential is used so that only a few points of the folding need to be computed. With an imaginary part of the Woods-Saxon type, this potential predicts the elastic scattering angular distribution in very good agreement with experimental data, and little renormalization (the factor is unity in most cases) is needed.
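For orientation, the double-folding construction referred to above has the standard form below (generic notation, not taken from the paper), where \rho_1 and \rho_2 are the densities of the two colliding nuclei and v_{NN} is the density-dependent effective nucleon-nucleon interaction evaluated at the nucleon-nucleon separation:

V_F(\mathbf{R}) = \int d^3r_1 \int d^3r_2 \, \rho_1(\mathbf{r}_1)\, \rho_2(\mathbf{r}_2)\, v_{NN}\!\bigl(\rho, |\mathbf{R} + \mathbf{r}_2 - \mathbf{r}_1|\bigr)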
Abstract:
OBJECTIVES: To show the effectiveness of a brief group alcohol intervention. Aims of the intervention were to reduce the frequency of heavy drinking occasions, maximum number of drinks on an occasion and overall weekly consumption. METHODS: A cluster quasi-randomized control trial (intervention n = 338; control n = 330) among 16- to 18-year-old secondary school students in the Swiss Canton of Zürich. Groups homogeneous for heavy drinking occasions (5+/4+ drinks for men/women) consisted of those having medium risk (3-4) or high risk (5+) occasions in the past 30 days. Groups of 8-10 individuals received two 45-min sessions based on motivational interviewing techniques. RESULTS: Borderline significant beneficial effects (p < 0.10) on heavy drinking occasions and alcohol volume were found 6 months later for the medium-risk group only, but not for the high-risk group. None of the effects remained significant after Bonferroni corrections. CONCLUSIONS: Group intervention was ineffective for all at-risk users. The heaviest drinkers may need more intensive treatment. Alternative explanations were iatrogenic effects among the heaviest drinkers, assessment reactivity, or reduction of social desirability bias at follow-up through peer feedback.
Abstract:
We study the singular effects of vanishingly small surface tension on the dynamics of finger competition in the Saffman-Taylor problem, using the asymptotic techniques described by Tanveer [Philos. Trans. R. Soc. London, Ser. A 343, 155 (1993)] and Siegel and Tanveer [Phys. Rev. Lett. 76, 419 (1996)], as well as direct numerical computation, following the numerical scheme of Hou, Lowengrub, and Shelley [J. Comput. Phys. 114, 312 (1994)]. We demonstrate the dramatic effects of small surface tension on the late time evolution of two-finger configurations with respect to exact (nonsingular) zero-surface-tension solutions. The effect is present even when the relevant zero-surface-tension solution has asymptotic behavior consistent with selection theory. Such singular effects, therefore, cannot be traced back to steady state selection theory, and imply a drastic global change in the structure of phase-space flow. They can be interpreted in the framework of a recently introduced dynamical solvability scenario according to which surface tension unfolds the structurally unstable flow, restoring the hyperbolicity of multifinger fixed points.
Abstract:
A precise and simple computational model to generate well-behaved two-dimensional turbulent flows is presented. The whole approach rests on the use of stochastic differential equations and is general enough to reproduce a variety of energy spectra and spatiotemporal correlation functions. Analytical expressions for both the continuous and the discrete versions, together with simulation algorithms, are derived. Results for two relevant spectra, covering distinct ranges of wave numbers, are given.
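The paper's specific flow model and spectra are not reproduced here, but the basic ingredient of such stochastic-differential-equation generators can be illustrated with an Ornstein-Uhlenbeck process integrated by the Euler-Maruyama scheme, which produces a temporally correlated random signal (all parameter values below are assumptions for the sketch):

# Illustrative sketch: one velocity component at a fixed point generated by an OU process.
import numpy as np

rng = np.random.default_rng(2)
tau, sigma = 1.0, 0.5          # correlation time and stationary amplitude (assumed)
dt, n_steps = 0.01, 10_000

u = np.zeros(n_steps)
for i in range(n_steps - 1):
    # Euler-Maruyama step for du = -(u/tau) dt + sigma * sqrt(2/tau) dW
    u[i + 1] = u[i] - (u[i] / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.normal()

# The resulting signal has exponential temporal correlation ~ exp(-t / tau)
print(u.mean(), u.std())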
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion, and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
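In forensic interpretation, the evidential value of such a comparison is commonly expressed as a likelihood ratio (this is the general framework, not the paper's specific model), where E denotes the observed similarity between the questioned and the reference ink:

LR = \frac{\Pr(E \mid H_{\mathrm{same\ source}})}{\Pr(E \mid H_{\mathrm{different\ sources}})}

Values well above 1 support the proposition that the two inks share a source, values well below 1 support different sources, and values near 1 are uninformative.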
Abstract:
There has been a recent revolution in the ability to manipulate micrometer-sized objects on surfaces patterned by traps or obstacles of controllable configurations and shapes. One application of this technology is to separate particles driven across such a surface by an external force according to some particle characteristic such as size or index of refraction. The surface features cause the trajectories of particles driven across the surface to deviate from the direction of the force by an amount that depends on the particular characteristic, thus leading to sorting. While models of this behavior have provided a good understanding of these observations, the solutions have so far been primarily numerical. In this paper we provide analytic predictions for the dependence of the angle between the direction of motion and the external force on a number of model parameters for periodic as well as random surfaces. We test these predictions against exact numerical simulations.
Abstract:
Biological markers for the status of vitamins B12 and D: the importance of some analytical aspects in relation to the clinical interpretation of results. When vitamin B12 deficiency is expressed clinically, the diagnostic performance of total cobalamin is identical to that of holotranscobalamin II. In subclinical B12 deficiency, the two aforementioned markers perform less well. Additional analysis of a second, functional marker (methylmalonate or homocysteine) is recommended. Different analytical approaches for 25-hydroxyvitamin D quantification, the marker of vitamin D deficiency, are not yet standardized. Measurement biases of up to ±20% compared with the original method used to establish threshold values are still observed.