Abstract:
This paper reviews three different approaches to modelling the cost-effectiveness of schistosomiasis control. Although these approaches vary in their assessment of costs, the major focus of the paper is on the evaluation of effectiveness. The first model presented is a static economic model which assesses effectiveness in terms of the proportion of cases cured. This model is important in highlighting that the optimal choice of chemotherapy regimen depends critically on the level of budget constraint, the unit costs of screening and treatment, the rates of compliance with screening and chemotherapy, and the prevalence of infection. The limitation of this approach is that it models the cost-effectiveness of only one cycle of treatment, and effectiveness reflects only the immediate impact of treatment. The second model presented is a prevalence-based dynamic model which links prevalence rates from one year to the next and assesses effectiveness as the proportion of cases prevented. This model was important in introducing the concept of measuring the long-term impact of control by using a transmission model which can assess the reduction in infection through time, but it is limited to assessing the impact only on the prevalence of infection. The third approach presented is a theoretical framework which describes the dynamic relationships between infection and morbidity, and which assesses effectiveness in terms of case-years of infection and morbidity prevented. The use of this model in assessing the cost-effectiveness of age-targeted treatment in controlling Schistosoma mansoni is explored in detail, with respect to varying frequencies of treatment and the interaction between drug price and drug efficacy.
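To make the static model concrete, a minimal Python sketch of the one-cycle cost-per-case-cured comparison follows; every parameter name and value is an illustrative assumption, not an estimate from the paper.

def cost_per_case_cured(pop, prevalence, cost_screen, cost_treat,
                        p_screen, p_treat, cure_rate, screen_first):
    """Cost per case cured for one cycle of chemotherapy (hypothetical)."""
    if screen_first:
        screened = pop * p_screen                  # compliance with screening
        treated = screened * prevalence * p_treat  # positives who comply
        cost = screened * cost_screen + treated * cost_treat
        cured = treated * cure_rate                # all treated are infected
    else:                                          # mass treatment, no screening
        treated = pop * p_treat
        cost = treated * cost_treat
        cured = treated * prevalence * cure_rate   # only infected can be cured
    return cost / cured

# The cheaper strategy flips as prevalence rises (assumed unit costs):
for prev in (0.1, 0.4, 0.7):
    s = cost_per_case_cured(10_000, prev, 1.0, 2.0, 0.8, 0.9, 0.85, True)
    m = cost_per_case_cured(10_000, prev, 1.0, 2.0, 0.8, 0.9, 0.85, False)
    print(f"prevalence {prev:.0%}: screen-and-treat {s:.2f} vs mass {m:.2f} per cure")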
Abstract:
OBJECTIVES: Darunavir is a protease inhibitor that is administered with low-dose ritonavir to enhance its bioavailability. It is prescribed at standard dosage regimens of 600/100 mg twice daily in treatment-experienced patients and 800/100 mg once daily in naive patients. A population pharmacokinetic approach was used to characterize the pharmacokinetics of both drugs and their interaction in a cohort of unselected patients and to compare darunavir exposure expected under alternative dosage regimens. METHODS: The study population included 105 HIV-infected individuals who provided darunavir and ritonavir plasma concentrations. First, a population pharmacokinetic analysis for darunavir and ritonavir was conducted, with inclusion of patients' demographic, clinical and genetic characteristics as potential covariates (NONMEM(®)). Then, the interaction between darunavir and ritonavir was studied while incorporating levels of both drugs into different inhibitory models. Finally, model-based simulations were performed to compare trough concentrations (Cmin) between the recommended dosage regimen and alternative combinations of darunavir and ritonavir. RESULTS: A one-compartment model with first-order absorption adequately characterized darunavir and ritonavir pharmacokinetics. The between-subject variability in both compounds was substantial [coefficient of variation (CV%) 34% and 47% for darunavir and ritonavir clearance, respectively]. Lopinavir and ritonavir exposure (AUC) affected darunavir clearance, while body weight and darunavir AUC influenced ritonavir elimination. None of the tested genetic variants showed any influence on darunavir or ritonavir pharmacokinetics. The simulations predicted darunavir Cmin well above the IC50 thresholds for wild-type and protease inhibitor-resistant HIV-1 strains (55 and 550 ng/mL, respectively) under standard dosing in >98% of experienced and naive patients. Alternative regimens of darunavir/ritonavir 1200/100 or 1200/200 mg once daily also yielded adequate predicted Cmin (>550 ng/mL) in 84% and 93% of patients, respectively. Reduction of the darunavir/ritonavir dosage to 600/50 mg twice daily led to a 23% reduction in average Cmin, still with only 3.8% of patients having concentrations below the IC50 for resistant strains. CONCLUSIONS: The substantial variability in darunavir and ritonavir pharmacokinetics is poorly explained by clinical covariates and genetic influences. In experienced patients, treatment simplification strategies guided by drug level measurements and adherence monitoring could be proposed.
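The structural model can be sketched in a few lines: a one-compartment model with first-order absorption, with the steady-state trough approximated by superposition of doses. The ka, CL and V values below are assumed round numbers for illustration, not the published population estimates.

import numpy as np

def conc(t, dose, ka, CL, V, F=1.0):
    """Concentration at time t (h) after one oral dose (one compartment)."""
    ke = CL / V
    return (F * dose * ka / (V * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

def cmin_ss(dose, tau, ka, CL, V, n_doses=20):
    """Approximate steady-state trough by superposing n_doses doses."""
    t = n_doses * tau                    # evaluate just before the next dose
    return sum(conc(t - i * tau, dose, ka, CL, V) for i in range(n_doses))

# 600 mg (in ug) twice daily; V in L, so the result is in ug/L = ng/mL
print(f"predicted Cmin ~ {cmin_ss(600e3, 12, ka=0.8, CL=11.0, V=100.0):.0f} ng/mL")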
Abstract:
According to the most widely accepted Cattell-Horn-Carroll (CHC) model of intelligence measurement, each subtest score of the Wechsler Adult Intelligence Scale (3rd ed.; WAIS-III) should reflect both 1st- and 2nd-order factors (i.e., 4 or 5 broad abilities and 1 general factor). To disentangle the contribution of each factor, we applied a Schmid-Leiman orthogonalization transformation (SLT) to the standardization data published in the French technical manual for the WAIS-III. Results showed that the general factor accounted for 63% of the common variance and that the specific contributions of the 1st-order factors were weak (4.7%-15.9%). We also addressed this issue by using confirmatory factor analysis. Results indicated that the bifactor model (with 1st-order group and general factors) better fit the data than did the traditional higher order structure. Models based on the CHC framework were also tested. Results indicated that a higher order CHC model showed a better fit than did the classical 4-factor model; however, the WAIS bifactor structure was the most adequate. We recommend that users do not discount the Full Scale IQ when interpreting the index scores of the WAIS-III because the general factor accounts for the bulk of the common variance in the French WAIS-III. The 4 index scores cannot be considered to reflect only broad ability because they include a strong contribution of the general factor.
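The Schmid-Leiman orthogonalization itself is a short computation, sketched below with invented loadings (not the French WAIS-III estimates): general-factor loadings are the product of the two loading tiers, and group-factor loadings are residualized by the second-order uniquenesses.

import numpy as np

first = np.array([[.80, 0, 0, 0], [.70, 0, 0, 0],   # 8 subtests loading on
                  [0, .75, 0, 0], [0, .70, 0, 0],   # 4 first-order factors
                  [0, 0, .70, 0], [0, 0, .65, 0],
                  [0, 0, 0, .60], [0, 0, 0, .55]])
second = np.array([.85, .80, .75, .70])             # factor loadings on g

g = first @ second                     # subtest loadings on the general factor
group = first * np.sqrt(1 - second**2) # residualized group-factor loadings

var_g, var_group = (g**2).sum(), (group**2).sum()
print(f"general factor: {var_g / (var_g + var_group):.0%} of common variance")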
Abstract:
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D., et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi:10.1029/2011WR011284.
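The roughness-length idea can be illustrated with the log law of the wall: unresolved bedforms are folded into an effective z0, which directly changes the predicted near-bed velocity. The z0 partition and values below are illustrative assumptions.

import numpy as np

KAPPA = 0.41  # von Karman constant

def log_law_velocity(z, u_star, z0):
    """Mean velocity at height z above the bed for roughness length z0."""
    return (u_star / KAPPA) * np.log(z / z0)

z0_grain = 0.002    # m, resolved grain-scale roughness (assumed)
z0_bedform = 0.020  # m, stand-in for "unmeasured" bedform roughness (assumed)
for z0 in (z0_grain, z0_grain + z0_bedform):
    u = log_law_velocity(z=1.0, u_star=0.05, z0=z0)
    print(f"z0 = {z0:.3f} m -> u(1 m) = {u:.2f} m/s")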
Abstract:
To date, there is no widely accepted clinical scale to monitor the evolution of depressive symptoms in demented patients. We assessed the sensitivity to treatment of a validated French version of the Health of the Nation Outcome Scale (HoNOS) 65+ compared to five routinely used scales. Thirty elderly inpatients with an ICD-10 diagnosis of dementia and depression were evaluated at admission and discharge, with changes assessed using paired t-tests. Using the Brief Psychiatric Rating Scale (BPRS) "depressive mood" item as the gold standard, a receiver operating characteristic (ROC) curve analysis assessed the validity of HoNOS65+F "depressive symptoms" item score changes. Unlike the Geriatric Depression Scale, Mini Mental State Examination and Activities of Daily Living scores, BPRS scores decreased and the Global Assessment of Functioning scale score increased significantly from admission to discharge. Amongst the HoNOS65+F items, the "behavioural disturbance", "depressive symptoms", "activities of daily life" and "drug management" items showed highly significant changes between the first and last day of hospitalization. The ROC analysis revealed that changes in the HoNOS65+F "depressive symptoms" item correctly classified 93% of the cases with good sensitivity (0.95) and specificity (0.88) values. These data suggest that the HoNOS65+F "depressive symptoms" item may provide a valid assessment of the evolution of depressive symptoms in demented patients.
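The ROC validation step can be reproduced in outline with scikit-learn; the 30 "patients" below are simulated, not the study data.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
improved = rng.integers(0, 2, 30)              # gold standard (BPRS item)
delta = improved * 1.5 + rng.normal(0, 1, 30)  # HoNOS65+F item score change

fpr, tpr, thr = roc_curve(improved, delta)
best = np.argmax(tpr - fpr)                    # Youden's J statistic
print(f"AUC {roc_auc_score(improved, delta):.2f}; "
      f"cut-off {thr[best]:.2f}: sensitivity {tpr[best]:.2f}, "
      f"specificity {1 - fpr[best]:.2f}")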
Abstract:
Synthesis report. Introduction: The Glasgow Coma Scale (GCS) is a recognized tool for assessing patients after head trauma. It is known for its simplicity and reproducibility, allowing caregivers an appropriate and continuous evaluation of a patient's neurological status. The GCS comprises three categories assessing the ocular, verbal and motor responses. In Switzerland, prehospital care of patients with severe head trauma is provided by physicians, mainly on board medicalized helicopters. Before the general anaesthesia these patients require, a GCS assessment is essential to inform hospital staff of the severity of the brain injury. To evaluate knowledge of the GCS among physicians working on board medicalized helicopters in Switzerland, we designed a questionnaire containing, in a first part, questions on general knowledge of the GCS, followed by a clinical case. Objective: To evaluate the practical and theoretical knowledge of the GCS among physicians working on board medicalized helicopters in Switzerland. Methods: Prospective, anonymized observational study using a questionnaire, evaluating general knowledge of the GCS and its clinical application in a presented case. Results: 16 of the 18 Swiss medicalized helicopter bases participated in our study. 130 questionnaires were sent and the response rate was 79.2%. Theoretical knowledge of the GCS was comparable across all physicians regardless of their level of training. Errors in the assessment of the clinical case were present in 36.9% of participants: 27.2% made errors in the motor score and 18.5% in the verbal score. Errors were recorded most frequently among junior residents (47.5%, p=0.09), followed by chief residents (31.6%, p=0.67) and physicians in private practice (18.4%, p=1.00). Senior staff physicians made significantly fewer errors than the other participants (0%, p<0.05). No significant difference was observed between specialties (anaesthesia, internal medicine, general medicine and "others"). Conclusion: Although theoretical knowledge of the GCS is adequate among physicians working on board medicalized helicopters, errors in its clinical application occur in more than a third of cases. Physicians with the least professional experience make the most errors. Given the importance of a correct initial Glasgow Coma Scale assessment, improved knowledge is essential.
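For reference, the GCS total is a simple sum of three bounded component scores, as the sketch below encodes (standard ranges); the clinical subtleties the questionnaire probed are exactly what this sum hides.

GCS_RANGES = {"eye": (1, 4), "verbal": (1, 5), "motor": (1, 6)}

def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Total Glasgow Coma Scale score, 3 (deep coma) to 15 (fully alert)."""
    for name, value in (("eye", eye), ("verbal", verbal), ("motor", motor)):
        lo, hi = GCS_RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name} must be in [{lo}, {hi}], got {value}")
    return eye + verbal + motor

print(gcs_total(eye=2, verbal=2, motor=4))  # 8 or less: severe head injury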
Abstract:
A factor limiting preliminary rockfall hazard mapping at the regional scale is often the lack of knowledge of potential source areas. Nowadays, high-resolution topographic data (LiDAR) can account for realistic landscape details even at large scale. With such fine-scale morphological variability, quantitative geomorphometric analyses become a relevant approach for delineating potential rockfall instabilities. Using the digital elevation model (DEM)-based "slope families" concept over areas of similar lithology, together with the cliff and scree zones available from the 1:25,000 topographic map, a rockfall hazard susceptibility map was drawn up for the canton of Vaud, Switzerland, in order to provide a relevant hazard overview. Slope surfaces steeper than morphometrically defined threshold angles were considered as rockfall source zones. 3D modelling (CONEFALL) was then applied to each of the estimated source zones in order to assess the maximum runout length. Comparisons with known events and other rockfall hazard assessments show good agreement, indicating that it is possible to assess rockfall activity over large areas from DEM-based parameters and topographical elements.
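The source-area step reduces to thresholding a DEM-derived slope map. A minimal sketch with a toy DEM and an assumed 45° cut-off follows; the study itself derived thresholds per lithology from the "slope families" analysis.

import numpy as np

def slope_degrees(dem, cellsize):
    """Steepest-gradient slope of a DEM via central differences."""
    dzdy = np.gradient(dem, cellsize, axis=0)
    dzdx = np.gradient(dem, cellsize, axis=1)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

rng = np.random.default_rng(1)
dem = np.cumsum(rng.normal(0, 5, (50, 50)), axis=0)  # toy elevation grid (m)
sources = slope_degrees(dem, cellsize=10.0) > 45.0   # assumed threshold angle
print(f"{sources.mean():.1%} of cells flagged as potential source zones")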
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
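In outline, the "ping-pong" integration passes a pattern back and forth between the two paired matrices until a coherent gene/drug co-module stabilizes. The toy sketch below conveys that alternation only; the thresholding and normalization are choices made for illustration, not the published algorithm.

import numpy as np

def zscore(v):
    s = v.std()
    return (v - v.mean()) / (s if s > 0 else 1.0)

def ping_pong(E, R, n_iter=30, tau=1.5):
    """E: genes x cell lines; R: drugs x cell lines (shared columns)."""
    lines = np.ones(E.shape[1])                # start from all cell lines
    for _ in range(n_iter):
        genes = zscore(E @ lines)              # "ping": genes tracking lines
        genes = np.where(np.abs(genes) > tau, genes, 0.0)
        lines = zscore(genes @ E)              # project back to cell lines
        drugs = zscore(R @ lines)              # "pong": drugs tracking lines
        drugs = np.where(np.abs(drugs) > tau, drugs, 0.0)
        lines = zscore(drugs @ R)
    return genes != 0, drugs != 0              # co-module membership masks

rng = np.random.default_rng(0)
E, R = rng.normal(size=(500, 60)), rng.normal(size=(100, 60))
g_mask, d_mask = ping_pong(E, R)
print(g_mask.sum(), "genes and", d_mask.sum(), "drugs in the co-module")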
Abstract:
Rapid response to: Ortegón M, Lim S, Chisholm D, Mendis S. Cost effectiveness of strategies to combat cardiovascular disease, diabetes, and tobacco use in sub-Saharan Africa and South East Asia: mathematical modelling study. BMJ. 2012 Mar 2;344:e607. doi: 10.1136/bmj.e607. PMID: 22389337.
Abstract:
The paper presents an approach for mapping precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of Doppler radar imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
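One such hybrid can be sketched with scikit-learn: a neural network fits the large-scale precipitation trend and a geostatistical model (here a Gaussian process standing in for ordinary kriging) interpolates its residuals. Data and hyperparameters below are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, (200, 2))                  # station coordinates (km)
y = 0.05 * X[:, 0] + np.sin(X[:, 1] / 10) + rng.normal(0, 0.2, 200)  # rain

trend = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                     random_state=0).fit(X, y)      # large-scale trend
residuals = y - trend.predict(X)
gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.05))
gp.fit(X, residuals)                                # local spatial structure

X_new = rng.uniform(0, 100, (3, 2))
print(trend.predict(X_new) + gp.predict(X_new))     # trend + kriged residual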
Abstract:
Cry11Bb is an insecticidal crystal protein produced by Bacillus thuringiensis subsp. medellin during its stationary phase; this δ-endotoxin is active against dipteran insects and has great potential for mosquito-borne disease control. Here, we report the first theoretical model of the three-dimensional structure of a Cry11 toxin. The three-dimensional structure of the Cry11Bb toxin was obtained by homology modelling on the structures of the Cry1Aa and Cry3Aa toxins. In this work we give a brief description of our model and hypothesize which residues of the Cry11Bb toxin could be important in receptor recognition and pore formation. This model will serve as a starting point for the design of mutagenesis experiments aimed at improving toxicity, and provides a new tool for the elucidation of the mechanism of action of these mosquitocidal proteins.
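Homology modelling starts from target-template correspondence; a minimal sketch of percent identity over an alignment is shown below (placeholder fragments, not the real Cry sequences).

def percent_identity(target: str, template: str) -> float:
    """Identity over aligned positions, skipping gap columns ('-')."""
    pairs = [(a, b) for a, b in zip(target, template) if "-" not in (a, b)]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

# Placeholder fragments; real input would be Cry11Bb aligned to Cry1Aa/Cry3Aa
print(f"{percent_identity('MENQIK-LS', 'MDNQIKTLS'):.0f}% identity")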
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
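The generic gradual-deformation idea can be sketched in a few lines: combining the current Gaussian field with an independent prior realization through an angle theta preserves the prior while tuning the perturbation strength. This illustrates the general scheme, not the thesis implementation.

import numpy as np

def gradual_deformation(m, sample_prior, theta, rng):
    """Propose m' = m cos(theta) + u sin(theta), u an independent draw."""
    u = sample_prior(rng)
    return m * np.cos(theta) + u * np.sin(theta)

rng = np.random.default_rng(42)
sample_prior = lambda r: r.standard_normal(1000)  # stand-in for a
m = sample_prior(rng)                             # geostatistical simulator
m_new = gradual_deformation(m, sample_prior, theta=0.1, rng=rng)
print(np.corrcoef(m, m_new)[0, 1])                # close to cos(theta) ~ 0.995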
Abstract:
In the PhD thesis "Sound Texture Modeling" we deal with the statistical modelling of textural sounds such as water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modelling of the resulting coefficient sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, these segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
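The first stage of the initial model can be sketched with the PyWavelets package: decompose the signal into a multi-level wavelet tree whose coefficient sequences would then be fitted with a hidden Markov tree model. The signal and wavelet choice below are illustrative.

import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)       # stand-in for a recorded texture
coeffs = pywt.wavedec(signal, "db4", level=5)

for depth, band in enumerate(coeffs):    # approximation band, then details
    print(f"level {depth}: {band.size} coefficients, std {band.std():.2f}")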
Abstract:
Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects predictive performance of SDMs, and (2) any observed effects are dependent on the type of region, modelling technique, or species considered. Results show that a 10-fold change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effect is on regions and species types, with tree species in the data sets (regions) with highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. The effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data that have an intrinsic error smaller than the coarser grain size.
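The grain manipulation at the core of the design is simple block aggregation; a sketch of a 10-fold coarsening of one environmental layer follows (simulated raster, illustrative resolutions).

import numpy as np

def coarsen(raster, factor):
    """Block-average a 2-D raster by an integer factor."""
    r, c = (s - s % factor for s in raster.shape)
    blocks = raster[:r, :c].reshape(r // factor, factor, c // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
fine = rng.standard_normal((1000, 1000))  # e.g. a 100 m resolution layer
coarse = coarsen(fine, 10)                # -> 1 km resolution
print(fine.shape, "->", coarse.shape)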