954 results for Computational Simulation
Abstract:
The pharmacokinetic determinants of successful antibiotic prophylaxis of endocarditis are not precisely known. Differences in antibiotic half-lives between animals and humans preclude extrapolating animal results to human situations. To overcome this limitation, we mimicked in rats the amoxicillin kinetics observed in humans after a 3-g oral dose (as often used for prophylaxis of endocarditis) by delivering the drug through a computerized pump. Rats with catheter-induced vegetations were challenged with either of two strains of antibiotic-tolerant viridans group streptococci. Antibiotics were given either through the pump (to simulate the whole kinetic profile during prophylaxis in humans) or as an intravenous bolus that imitated only the peak level of amoxicillin (18 mg/liter) in human serum. Prophylaxis by intravenous bolus was inoculum dependent and afforded limited protection only in rats challenged with the minimum inoculum size infecting ≥90% of untreated controls. In contrast, simulation of the human kinetics significantly protected animals challenged with 10 to 100 times the inoculum of either test organism infecting ≥90% of untreated controls. Thus, simulating the full profile of amoxicillin prophylaxis in human serum was more efficacious than merely imitating the transient peak level in rats. This confirms previous studies suggesting that the duration for which the serum amoxicillin level remains detectable (not only the magnitude of the peak) is an important parameter in successful prophylaxis of endocarditis. The results also suggest that single-dose prophylaxis with 3 g of amoxicillin in humans might be more effective than predicted by conventional animal models in which only peak levels of antibiotic in human serum were simulated.
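As a rough illustration of why the full profile and the bolus differ, consider a minimal one-compartment sketch; all rate constants and the MIC threshold below are assumed for illustration, and only the 18 mg/liter peak comes from the abstract.

```python
import numpy as np

# Minimal one-compartment sketch: all rate constants and the MIC are assumed
# for illustration; only the 18 mg/liter peak comes from the abstract.
ka, ke_human = 1.4, 0.6        # assumed human absorption/elimination rates (1/h)
ke_rat = 2.0                   # assumed faster elimination in the rat
peak, mic = 18.0, 1.0          # mg/liter

t = np.linspace(0.0, 12.0, 1201)                 # hours
oral = np.exp(-ke_human * t) - np.exp(-ka * t)   # Bateman profile, unscaled
oral *= peak / oral.max()                        # the pump reproduces this curve in rats

bolus = peak * np.exp(-ke_rat * t)               # a bolus only matches the peak

dt = t[1] - t[0]
print("hours above MIC, pump-simulated human profile:", dt * np.sum(oral > mic))
print("hours above MIC, intravenous bolus:           ", dt * np.sum(bolus > mic))
```

Under these assumed rates, the pump-simulated profile stays above the MIC several times longer than the bolus, which is the duration effect the study identifies.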
Abstract:
Methods like event history analysis can demonstrate the existence of diffusion and describe part of its nature, but they do not study the process itself. Nowadays, thanks to the increasing performance of computers, such processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion inspired mainly by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents its main internal drivers, such as preference for the policy, policy effectiveness, institutional constraints, and ideology, and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed through these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies developing an algorithm and programming it. Once this is done, the different agents are left to interact; a phenomenon of diffusion, derived from learning, then emerges, meaning that the choice made by an agent is conditional on the choices made by its neighbors. As a result, learning follows an S-shaped curve, which leads to partial convergence (local convergence with global divergence) and triggers the emergence of political clusters, i.e., regions sharing the same policy. Furthermore, average effectiveness in this computational world tends to follow a J-shaped curve, meaning that time is needed not only for a policy to deploy its effects but also for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both theoretical expectations and empirical evidence.
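A minimal sketch of this kind of neighbor-conditional learning might look as follows; the ring topology, payoffs, and update rule are illustrative assumptions, not Braun and Gilardi's exact specification.

```python
import random

random.seed(0)

# Minimal sketch of neighbor-conditional policy learning on a ring of
# countries; topology, payoffs, and update rule are illustrative assumptions.
N, ROUNDS, POLICIES = 100, 40, 3
effectiveness = [0.2, 0.5, 0.9]                      # assumed fixed payoff per policy
state = [random.randrange(POLICIES) for _ in range(N)]

adoption = []
for _ in range(ROUNDS):
    for i in random.sample(range(N), N):             # agents update in random order
        left, right = state[(i - 1) % N], state[(i + 1) % N]
        best = max((left, right), key=lambda p: effectiveness[p])
        # Learning: imitate a neighbor whose policy looks more effective,
        # but only imperfectly, which slows diffusion and lets clusters persist.
        if effectiveness[best] > effectiveness[state[i]] and random.random() < 0.3:
            state[i] = best
    adoption.append(state.count(2))                  # adopters of the best policy

# Plotted over time, 'adoption' rises along the S-shaped diffusion curve
# described above, while pockets of other policies linger as local clusters.
print(adoption)
```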
Abstract:
We extend PML theory to account for information on the conditional moments up to order four, without assuming a parametric model, in order to avoid the risk of misspecifying the conditional distribution. The key statistical tool is the quartic exponential family, which allows us to generalize the PML2 and QGPML1 methods proposed in Gourieroux et al. (1984) to PML4 and QGPML2 methods, respectively. An asymptotic theory is developed. The key numerical tool is the Gauss-Freud integration scheme, which solves a computational problem that has previously been raised in several fields. Simulation exercises demonstrate the feasibility and robustness of the methods.
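The numerical task can be illustrated with a quadrature sketch: here NumPy's Gauss-Hermite nodes stand in for the Gauss-Freud scheme (which uses quadrature adapted to exponential-polynomial weights), and the θ values are arbitrary.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Sketch only: Gauss-Hermite nodes stand in for the paper's Gauss-Freud
# scheme to show the idea of integrating a quartic exponential density,
# f(x) ∝ exp(t1*x + t2*x**2 + t3*x**3 + t4*x**4), with t4 < 0.
theta = np.array([0.3, -0.5, 0.1, -0.2])    # illustrative parameter values

x, w = hermgauss(60)                         # nodes/weights for weight exp(-x**2)
poly = theta[0]*x + theta[1]*x**2 + theta[2]*x**3 + theta[3]*x**4
g = np.exp(poly + x**2)                      # undo the Hermite weight exp(-x**2)

Z = np.sum(w * g)                            # normalizing constant
moments = [np.sum(w * g * x**k) / Z for k in range(1, 5)]
print("Z =", Z, "first four moments:", moments)
```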
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
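A schematic sketch of the two-step logic, with simple Gaussian stand-ins for the actual geostatistical machinery (every distribution, size, and coefficient below is invented), could be:

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic two-step sketch (not the authors' algorithm): (1) downscale a
# coarse electrical-conductivity field by adding fine-scale variability that
# preserves the block values; (2) map the result to hydraulic conductivity
# using a relation fitted on sparse collocated borehole measurements.

# --- synthetic inputs --------------------------------------------------
n_blocks, fine_per_block = 20, 10
sigma_coarse = rng.normal(0.0, 1.0, n_blocks)          # low-res ERT estimates
bore_sigma = rng.normal(0.0, 1.0, 8)                   # collocated borehole logs
bore_logK = 1.5 * bore_sigma + rng.normal(0.0, 0.3, 8) # ...and hydraulic conductivity

# --- step 1: stochastic downscaling ------------------------------------
fine = np.repeat(sigma_coarse, fine_per_block)
resid = rng.normal(0.0, 0.4, fine.size)
resid -= np.repeat(resid.reshape(n_blocks, -1).mean(axis=1), fine_per_block)
sigma_fine = fine + resid      # block means of the coarse field are preserved

# --- step 2: geophysics -> hydraulic conductivity -----------------------
b, a = np.polyfit(bore_sigma, bore_logK, 1)            # fitted petrophysical link
logK = a + b * sigma_fine + rng.normal(0.0, 0.3, sigma_fine.size)
print("one realization of log-K, first block:", logK[:fine_per_block])
```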
Abstract:
In networks with small buffers, such as networks based on optical packet switching (OPS), the convolution approach (CA) is regarded as one of the most accurate methods for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic and bufferless networks, the enhanced convolution approach (ECA) is a good solution. However, both methods (CA and ECA) incur a high computational cost when the number of connections is large. Two new mechanisms (UMCA and ISCA) based on the Monte Carlo method are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with only a small stochastic error in the probability estimation.
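The Monte Carlo idea behind such estimators can be sketched as follows; this is a generic overflow-probability estimator with an invented traffic mix, and the specific UMCA/ISCA sampling refinements are not reproduced.

```python
import random

random.seed(0)

# Generic Monte Carlo sketch: estimate the overflow probability of
# heterogeneous on-off connections on a bufferless link, instead of
# convolving the per-connection rate distributions exactly.
connections = [(0.2, 1.0)] * 30 + [(0.05, 4.0)] * 10   # (activity prob, peak rate)
capacity = 12.0
samples = 200_000

overflow = 0
for _ in range(samples):
    load = sum(rate for p, rate in connections if random.random() < p)
    overflow += load > capacity
print("estimated overflow probability:", overflow / samples)
```

The exact convolution would enumerate the distribution of the aggregate load; the sampling estimate trades a small stochastic error for a cost that no longer grows combinatorially with the number of connections.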
Abstract:
The model developed at the Institut universitaire de médecine sociale et préventive de Lausanne uses a computer program to simulate admission and discharge flows in general-care hospitals. The simulation is based on data routinely collected in hospitals; in particular, it takes into account certain daily and seasonal variations, the number of admissions, and the hospital's case mix, i.e., the distribution of cases by clinical group and patient age.
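A schematic sketch of this kind of patient-flow simulation, with invented parameters rather than the Lausanne model's, might be:

```python
import random, math

random.seed(1)

# Schematic sketch: daily admissions follow an approximately Poisson process
# whose mean varies by weekday and season, and each case group has its own
# length-of-stay distribution ("case mix"). All numbers are assumed.
case_mix = {"surgical": (0.4, 5.0), "medical": (0.5, 8.0), "geriatric": (0.1, 15.0)}
base_admissions = 30.0

occupied = []            # remaining lengths of stay of current inpatients
census = []
for day in range(365):
    weekday_factor = 0.7 if day % 7 in (5, 6) else 1.1      # fewer weekend entries
    season_factor = 1.0 + 0.15 * math.cos(2 * math.pi * day / 365)
    mean = base_admissions * weekday_factor * season_factor
    n_adm = sum(1 for _ in range(200) if random.random() < mean / 200)  # ~Poisson
    for _ in range(n_adm):
        group = random.choices(list(case_mix), [w for w, _ in case_mix.values()])[0]
        occupied.append(random.expovariate(1.0 / case_mix[group][1]))
    occupied = [los - 1.0 for los in occupied if los > 1.0]  # discharges
    census.append(len(occupied))
print("mean midnight census:", sum(census) / len(census))
```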
Abstract:
Computational fluid dynamics (CFD) is a tool for analyzing, by means of computers, problems that involve fluid flows. CFD programs use nonlinear mathematical expressions that define the fundamental equations of fluid flow and heat transport in fluids, and these are solved with complex iterative algorithms. This tool is now a fundamental part of the design process in many companies whose work involves fluid dynamics. Simulations performed with these programs have been shown to be reliable and to save time and money, since they avoid costly trial-and-error testing. This project uses the CFD program Ansys CFX 11.0 to simulate two-phase agitation of water and air at room temperature. The objectives are to determine the optimal simulation parameters for reproducing this agitation, in order to subsequently design a new impeller.
Abstract:
To further understand the pharmacological properties of N-oleoylethanolamine (OEA), a naturally occurring lipid that activates peroxisome proliferator-activated receptor alpha (PPARα), we designed sulfamoyl analogs based on its structure. Among the compounds tested, N-octadecyl-N'-propylsulfamide (CC7) was selected for functional comparison with OEA. The studies performed include the following computational and biological approaches: 1) molecular docking analyses; 2) molecular biology studies with PPARα; 3) pharmacological studies on feeding behavior and visceral analgesia. For the docking studies, we compared OEA and CC7 data with crystallization data obtained with the reference PPARα agonist GW409544. OEA and CC7 interacted with the ligand-binding domain of PPARα in a manner similar to GW409544. Both compounds produced similar transcriptional activation in in vitro assays, including the GST pull-down assay and reporter gene analysis. In addition, CC7 and OEA induced the mRNA expression of CPT1a in HepG2 cells through PPARα, and the induction was abolished by PPARα-specific siRNA. In vivo studies in rats showed that OEA and CC7 had anorectic and antiobesity activity and induced both lipopenia and decreases in hepatic fat content. However, different effects were observed when measuring visceral pain: OEA produced visceral analgesia, whereas CC7 showed no effect. These results suggest that OEA activity at the PPARα receptor (e.g., lipid metabolism and feeding behavior) may be dissociated from its other actions at alternative targets (e.g., pain), because other non-cannabimimetic ligands that interact with PPARα, such as CC7, do not reproduce the full spectrum of the pharmacological activity of OEA. These results provide new opportunities for the development of specific PPARα-activating drugs based on sulfamide derivatives with a long alkyl chain for the treatment of metabolic dysfunction.
Abstract:
OBJECTIVES: Human papillomavirus (HPV) is a sexually transmitted infection of particular interest because of its high prevalence and strong causal association with cervical cancer. Two prophylactic vaccines have been developed, and different countries have made or will soon make recommendations for the vaccination of girls. Even if there is a consensus to recommend vaccination before the beginning of sexual activity, there are large discrepancies between countries concerning the perceived usefulness of a catch-up procedure and of boosters. The main objective of this article is to simulate the impact of different vaccination policies on mid- and long-term HPV 16/18 age-specific infection rates. METHODS: We developed an epidemiological model based on the susceptible-infective-recovered approach using Swiss data. The mid- and long-term impact of different vaccination scenarios was then compared. RESULTS: The generalization of a catch-up procedure is always beneficial, whatever its extent. Moreover, depending on the length of the protection offered by the vaccine, boosters may also be very useful. CONCLUSIONS: To be really effective, a vaccination campaign against HPV infection should at least include a catch-up procedure, to achieve an early drop in HPV 16/18 prevalence, and possibly boosters. Otherwise, the protection afforded to women in their 20s could be lower than expected, resulting in a higher risk of later developing cervical cancer.
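A minimal susceptible-infective-recovered sketch with vaccination of entering cohorts and an optional catch-up illustrates the scenario comparison; all rates and the coverage level are assumed, not the Swiss-calibrated values.

```python
# Minimal SIR-type sketch (assumed parameters, not the calibrated model):
# compare mid-term HPV prevalence with and without a catch-up vaccination.
beta, gamma, mu = 0.30, 0.10, 0.02     # transmission, recovery, cohort turnover
coverage = 0.7                          # vaccination of entering cohorts

def prevalence(catch_up, years=20, dt=0.1):
    S, I, R, V = 0.95, 0.05, 0.0, 0.0
    if catch_up:                        # one-off vaccination of current susceptibles
        moved = coverage * S
        S, V = S - moved, V + moved
    for _ in range(int(years / dt)):    # forward Euler integration
        new_inf = beta * S * I
        S += dt * (mu * (1 - coverage) - new_inf - mu * S)
        I += dt * (new_inf - gamma * I - mu * I)
        R += dt * (gamma * I - mu * R)
        V += dt * (mu * coverage - mu * V)
    return I

print("prevalence without catch-up:", prevalence(False))
print("prevalence with catch-up:   ", prevalence(True))
```

With these assumed rates, routine vaccination alone eventually drives the infection out, but the catch-up scenario reaches a low prevalence much earlier, which is the qualitative effect the abstract reports.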
Abstract:
Reverse transcriptase (RT) is a multifunctional enzyme in the human immunodeficiency virus (HIV)-1 life cycle and represents a primary target for drug discovery efforts against HIV-1 infection. Two classes of RT inhibitors, the nucleoside RT inhibitors (NRTIs) and the nonnucleoside RT inhibitors, are prominently used in highly active antiretroviral therapy in combination with other anti-HIV drugs. However, the rapid emergence of drug-resistant viral strains has limited the success rate of anti-HIV agents. Computational methods are a significant part of the drug design process and are indispensable for studying drug resistance. In this review, recent advances in computer-aided drug design for the rational design of new compounds against HIV-1 RT are discussed, covering methods such as molecular docking, molecular dynamics, free energy calculations, quantitative structure-activity relationships, pharmacophore modelling, and absorption, distribution, metabolism, excretion and toxicity prediction. Successful applications of these methodologies are also highlighted.
Abstract:
Objectives: We are interested in the numerical simulation of the anastomotic region between the outflow cannula of an LVAD and the aorta. Segmentation, geometry reconstruction, and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g., metallic noise of the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create grids suitable for numerical simulations.
Methods: Preliminary treatment of the images is performed by reducing the level window and enhancing the contrast of the greyscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is then applied to reduce noise. Next, watershed segmentation algorithms and mathematical morphology filters allow the patient geometry to be reconstructed. This is done using the InsightToolKit library (www.itk.org). Finally, the Vascular Modeling ToolKit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify the boundary layers a priori. The method was tested on five patients with left ventricular assist devices who underwent a CT-scan exam.
Results: The method produced good results in four patients: the anastomosis area was recovered and the generated grids are suitable for numerical simulations. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch relative to the image resolution.
Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the computational grids generated are suitable for simulations that take fluid-structure interactions into account. Finally, the presented method features good reproducibility and fast application.
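The preprocessing and segmentation chain can be sketched with SimpleITK, a simplified Python layer over the same InsightToolKit filters the abstract names; the file names and every parameter value below are illustrative, not the authors'.

```python
import SimpleITK as sitk

# Sketch of the preprocessing/segmentation chain via SimpleITK, which wraps
# the InsightToolKit filters mentioned above; all values are illustrative.
image = sitk.ReadImage("patient_ct.mha", sitk.sitkFloat32)   # hypothetical input

# 1. Reduce the level window, then apply adaptive histogram equalization.
image = sitk.IntensityWindowing(image, -200.0, 600.0, 0.0, 255.0)
clahe = sitk.AdaptiveHistogramEqualizationImageFilter()
clahe.SetAlpha(0.5)      # 0 = classical equalization, 1 = unsharp masking
clahe.SetBeta(0.5)
image = clahe.Execute(image)

# 2. Gradient anisotropic diffusion to reduce noise while preserving edges.
diffusion = sitk.GradientAnisotropicDiffusionImageFilter()
diffusion.SetTimeStep(0.0625)
diffusion.SetConductanceParameter(2.0)
diffusion.SetNumberOfIterations(5)
image = diffusion.Execute(image)

# 3. Watershed segmentation on the gradient magnitude; label selection and
#    mathematical-morphology cleanup would follow, before meshing with
#    the Vascular Modeling ToolKit and gmsh.
gradient = sitk.GradientMagnitude(image)
watershed = sitk.MorphologicalWatershedImageFilter()
watershed.SetLevel(4.0)
labels = watershed.Execute(gradient)
sitk.WriteImage(labels, "aorta_labels.mha")
```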