993 results for Iterative methods


Relevance:

20.00%

Publisher:

Abstract:

A short overview is given of the most important analytical body composition methods. The principles, advantages, and limitations of these methods are discussed, also in relation to other fields of research such as energy metabolism. Attention is given to some new developments in body composition research, such as chemical multiple-compartment models, computerized tomography or nuclear magnetic resonance imaging (tissue level), and multifrequency bioelectrical impedance. Possible future directions of body composition research in the light of these new developments are discussed.


Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is established by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also mention the vibrational Stark effect.
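The finite-field side of this comparison can be illustrated numerically: differentiating a model field-dependent energy by central differences recovers the polarizability and first hyperpolarizability. The expansion coefficients below are invented round numbers, not values for any real molecule.

```python
# Toy finite-field extraction of the polarizability (alpha) and first
# hyperpolarizability (beta) from a model energy E(F), expanded as
# E(F) = E0 - mu*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3.
# All numbers are illustrative.

E0, mu, alpha, beta = -1.0, 0.5, 2.0, 6.0

def energy(F):
    return E0 - mu * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

h = 1e-3  # finite-field step

# alpha = -d2E/dF2 at F = 0 (central difference)
alpha_ff = -(energy(h) - 2 * energy(0.0) + energy(-h)) / h**2

# beta = -d3E/dF3 at F = 0 (central difference)
beta_ff = -(energy(2*h) - 2*energy(h) + 2*energy(-h) - energy(-2*h)) / (2 * h**3)

print(alpha_ff, beta_ff)  # ≈ 2.0, 6.0
```

Since the model energy is a cubic polynomial, the central differences recover the coefficients essentially exactly; for a real electronic-structure energy the choice of field step becomes a numerical-stability issue.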


A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules which have experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of quality similar to that of post-Hartree-Fock generalized densities. For molecules where the Hartree-Fock methodology yields erroneous results, the density functional methodology is shown to usually yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
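The density-comparison idea can be sketched with a Carbó-style similarity index between two model densities on a grid. The one-dimensional Gaussian "densities" below are illustrative stand-ins, not actual ab initio or DFT densities.

```python
import math

# Carbo similarity index C = <rho_a, rho_b> / sqrt(<rho_a, rho_a><rho_b, rho_b>)
# evaluated on a 1D grid for two toy Gaussian densities.

N = 2001
xs = [-10 + 20 * i / (N - 1) for i in range(N)]
dx = xs[1] - xs[0]

def density(center, width):
    rho = [math.exp(-((x - center) / width) ** 2) for x in xs]
    s = sum(rho) * dx
    return [r / s for r in rho]       # normalise to unit "charge"

rho_a = density(0.0, 1.0)             # stand-in for, e.g., an HF density
rho_b = density(0.2, 1.1)             # stand-in for, e.g., a DFT density

overlap = sum(a * b for a, b in zip(rho_a, rho_b)) * dx
norm = math.sqrt(sum(a * a for a in rho_a) * dx *
                 sum(b * b for b in rho_b) * dx)
carbo = overlap / norm
print(carbo)  # close to 1 for nearly identical densities
```

By the Cauchy-Schwarz inequality the index is at most 1, reached only when the two densities coincide, which is what makes it usable as a similarity measure.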


In the present paper we discuss and compare two different energy decomposition schemes: Mayer's Hartree-Fock energy decomposition into diatomic and monoatomic contributions [Chem. Phys. Lett. 382, 265 (2003)], and the Ziegler-Rauk dissociation energy decomposition [Inorg. Chem. 18, 1558 (1979)]. The Ziegler-Rauk scheme is based on a separation of a molecule into fragments, while Mayer's scheme can be used in cases where fragmentation of the system into clearly separable parts is not possible. In the Mayer scheme, the density of a free atom is deformed to give the one-atom Mulliken density, which subsequently interacts to give rise to the diatomic interaction energy. We give a detailed analysis of the diatomic energy contributions in the Mayer scheme and take a close look at the one-atom Mulliken densities. The Mulliken density ρA has a single large maximum around the nuclear position of atom A, but exhibits slightly negative values in the vicinity of neighboring atoms. The main connecting point between the two analysis schemes is the electrostatic energy. Both decomposition schemes utilize the same electrostatic energy expression, but differ in how the fragment densities are defined. In the Mayer scheme, the electrostatic component originates from the interaction of the Mulliken densities, while in the Ziegler-Rauk scheme, the undisturbed fragment densities interact. The values of the electrostatic energy resulting from the two schemes differ significantly but typically have the same order of magnitude. Both methods are useful and complementary: Mayer's decomposition focuses on the energy of the finally formed molecule, whereas the Ziegler-Rauk scheme describes the bond formation starting from undeformed fragment densities.
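The shared electrostatic-energy expression that both schemes evaluate can be illustrated with point charges standing in for fragment densities; the charges and geometry below are arbitrary illustrative values in atomic-style units.

```python
import math

# Electrostatic interaction energy between two rigid "fragment" charge
# distributions: the quantity both decomposition schemes compute, with
# differently defined fragment densities.  Here each fragment is a toy
# pair of point charges (charge, xyz position).

frag_a = [(+0.3, (0.0, 0.0, 0.0)), (-0.3, (0.0, 0.0, 1.0))]
frag_b = [(+0.3, (0.0, 0.0, 4.0)), (-0.3, (0.0, 0.0, 5.0))]

def electrostatic_energy(fa, fb):
    e = 0.0
    for qa, ra in fa:
        for qb, rb in fb:
            e += qa * qb / math.dist(ra, rb)   # pairwise Coulomb terms
    return e

print(electrostatic_energy(frag_a, frag_b))  # negative: head-to-tail dipoles attract
```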


Iterative image reconstruction algorithms provide significant improvements over traditional filtered back projection in computed tomography (CT). Clinically available through recent advances in modern CT technology, iterative reconstruction enhances image quality through cyclical image calculation, suppressing image noise and artifacts, particularly blooming artifacts. The advantages of iterative reconstruction are apparent in traditionally challenging cases, for example in obese patients, those with significant artery calcification, or those with coronary artery stents. In addition, as clinical use of CT has grown, so have concerns over the ionizing radiation associated with CT examinations. Through noise reduction, iterative reconstruction has been shown to permit radiation dose reduction while preserving diagnostic image quality. This approach is becoming increasingly attractive as the routine use of CT for pediatric and repeated follow-up evaluation grows ever more common. Cardiovascular CT in particular, with its focus on detailed structural and functional analyses, stands to benefit greatly from the iterative solutions that are now readily available.
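The cyclical image calculation described above can be sketched with a Landweber/SIRT-style update on a toy linear system: simulated projections are repeatedly compared with measured data and the residual is back-projected into the image. The projection matrix and "image" below are invented, not real CT data or a vendor algorithm.

```python
# Minimal iterative-reconstruction sketch: solve b = A x by the update
# x <- x + step * A^T (b - A x), a Landweber/SIRT-style scheme.

A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]           # toy projection matrix (3 rays, 3 pixels)
x_true = [1.0, 2.0, 3.0]        # toy image
b = [sum(a * v for a, v in zip(row, x_true)) for row in A]  # "measured" data

x = [0.0, 0.0, 0.0]             # start from an empty image
step = 0.2
for _ in range(500):
    # compare simulated projections with measurements
    residual = [bi - sum(a * v for a, v in zip(row, x)) for row, bi in zip(A, b)]
    # back-project the residual onto the image
    for j in range(3):
        x[j] += step * sum(A[i][j] * residual[i] for i in range(3))

print(x)  # converges toward [1.0, 2.0, 3.0]
```

The step size must satisfy a stability bound (here, below 2 divided by the largest eigenvalue of A^T A); clinical algorithms add statistical weighting and regularization on top of this basic loop.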


In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and the AAPM recommend the use of in vivo dosimetry. It is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed during the past years. These methods provide point, line, plane, or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet routinely applied to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, phantom measurements demonstrated that the proposed method allows the detection of some set-up errors and interfraction geometry modifications.
We also discuss the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy MVCT built-in onboard detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
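The quoted local agreement criterion can be illustrated in a simplified, dose-difference-only form (the 3 mm distance-to-agreement part of a full gamma test is omitted); the dose profiles below are made up for illustration.

```python
# Simplified local dose-agreement check: flag points where a reconstructed
# dose profile differs from the planned one by more than 3% of the local
# planned dose.  A clinical check would combine this with a 3 mm
# distance-to-agreement search (the gamma test).

planned       = [0.10, 0.50, 1.00, 0.98, 0.55, 0.12]
reconstructed = [0.10, 0.51, 0.99, 0.97, 0.56, 0.14]

def local_dose_check(plan, recon, tolerance=0.03):
    failures = []
    for i, (p, r) in enumerate(zip(plan, recon)):
        if abs(r - p) > tolerance * p:   # local (not global) normalisation
            failures.append(i)
    return failures

print(local_dose_check(planned, reconstructed))  # [5]: only the low-dose tail fails
```

Local normalisation is deliberately strict in low-dose regions, which is why full gamma tests add the spatial tolerance.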


Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire within that step, with firing counts drawn from a Poisson or binomial distribution. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
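The basic Poisson τ-leap update can be sketched for a single decay channel; the rate, counts, and a simple Knuth-style Poisson sampler (standing in for a library generator) are all illustrative choices, not the paper's RK extension.

```python
import math, random

# Minimal Poisson tau-leap for a single decay channel A -> 0 with
# propensity a(n) = c * n.  Instead of firing one reaction per event as in
# the exact SSA, each leap fires K ~ Poisson(a(n) * tau) reactions at once.

def poisson_sample(lam, rng):
    # Knuth's multiplication method; adequate for the small means used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_decay(n0, c, tau, t_end, seed=1):
    rng, n, t = random.Random(seed), n0, 0.0
    while t < t_end and n > 0:
        fired = poisson_sample(c * n * tau, rng)
        n = max(0, n - fired)          # clamp: counts cannot go negative
        t += tau
    return n

n_final = tau_leap_decay(n0=10_000, c=0.5, tau=0.01, t_end=2.0)
print(n_final)  # near the deterministic value 10000 * exp(-1) ≈ 3679
```

One Poisson draw per channel per step replaces one event per SSA iteration, which is the source of the speed-up; the clamp to non-negative counts hints at the negative-population problem that motivates binomial variants.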


Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are also important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles. Consequently, reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods of calculating heavy-truck annual average daily traffic (AADT), which can subsequently be used to estimate vehicle miles traveled (VMT). Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. The accuracy of the three methods was compared using n-fold cross-validation, in which the data are split into n partitions and each partition in turn is held out to validate estimates derived from the remaining data. A comparison of the accuracy of the three methods was made using the estimates of prediction error obtained from cross-validation. The prediction error was determined by averaging the squared error between the estimated AADT and the actual AADT. Overall, the prediction error was lowest for the method that developed expansion factors separately for the different truck groups, for both single- and multi-unit trucks. This indicates that the use of expansion factors specific to heavy trucks results in better estimates of AADT, and subsequently VMT, than using aggregate expansion factors and applying a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated.
Significant variation exists in the temporal and seasonal patterns of heavy trucks as compared to passenger vehicles. This suggests that the use of aggregate expansion factors fails to adequately describe truck travel patterns.
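The expansion-factor idea behind the first method can be sketched with invented monthly counts: factors derived from continuous-count data expand a short-duration count into an AADT estimate.

```python
# Toy monthly expansion factors for one truck group.  A continuous count
# station gives an average daily truck count per month; AADT is their
# annual average, and the factor for a month converts that month's count
# back to AADT.  All counts are made up.

monthly_adt = {
    "Jan": 800, "Feb": 820, "Mar": 900, "Apr": 980, "May": 1050, "Jun": 1150,
    "Jul": 1200, "Aug": 1180, "Sep": 1060, "Oct": 990, "Nov": 880, "Dec": 810,
}
aadt = sum(monthly_adt.values()) / 12

# expansion factor: multiply a month's daily count by this to recover AADT
factors = {m: aadt / adt for m, adt in monthly_adt.items()}

short_count = 1190                  # a short-duration count taken in August
estimate = short_count * factors["Aug"]
print(round(estimate))              # close to the true AADT of 985
```

In the study's cross-validation, such estimates from held-out stations would be compared against the directly computed AADT via squared error; group-specific factors win because truck seasonality differs from the all-vehicle pattern.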


1. Entomopathogenic nematodes can function as an indirect defence for plants that are attacked by root herbivores. By releasing volatile organic compounds (VOCs), plants signal the presence of host insects and thereby attract nematodes. 2. Nonetheless, how roots deploy indirect defences, how indirect defences relate to direct defences, and the ecological consequences of root defence allocation for herbivores and plant biomass are essentially unknown. 3. We investigated a natural below-ground tritrophic system, involving common milkweed, a specialist root-boring beetle and entomopathogenic nematodes, and asked whether there is a negative genetic correlation between direct defences (root cardenolides) and indirect defences (emission of volatiles in the roots and nematode attraction), and between constitutive and inducible defences. 4. Volatiles of roots were analysed using two distinct sampling methods. First, we collected emissions from living Asclepias syriaca roots by dynamic headspace sampling. This method showed that attacked A. syriaca plants emit five times higher levels of volatiles than control plants. Secondly, we used a solid-phase micro-extraction (SPME) method to sample the full pool of volatiles in roots for genetic correlations of volatile biosynthesis. 5. Field experiments showed that entomopathogenic nematodes prevent the loss of biomass to root herbivory. Additionally, suppression of root herbivores was mediated directly by cardenolides and indirectly by the attraction of nematodes. Genetic families of plants with high cardenolides benefited less from nematodes compared to low-cardenolide families, suggesting that direct and indirect defences may be redundant. Although constitutive and induced root defences traded off within each strategy (for both direct and indirect defence, cardenolides and VOCs, respectively), we found no trade-off between the two strategies. 6. Synthesis.
Constitutive expression and inducibility of defences may trade off because of resource limitation or because they are redundant. Direct and indirect defences do not trade off, likely because they may not share a limiting resource and because independently they may promote defence across the patchiness of herbivore attack and nematode presence in the field. Indeed, some redundancy in strategies may be necessary to increase effective defence, but for each strategy, an economy of deployment reduces overall costs.


This review paper reports the consensus of a technical workshop hosted by the European network, NanoImpactNet (NIN). The workshop aimed to review the collective experience of working at the bench with manufactured nanomaterials (MNMs), and to recommend modifications to existing experimental methods and OECD protocols. Current procedures for cleaning glassware are appropriate for most MNMs, although interference with electrodes may occur. Maintaining exposure is more difficult with MNMs compared to conventional chemicals. A metal salt control is recommended for experiments with metallic MNMs that may release free metal ions. Dispersing agents should be avoided, but if they must be used, then natural or synthetic dispersing agents are possible, and dispersion controls essential. Time constraints and technology gaps indicate that full characterisation of test media during ecotoxicity tests is currently not practical. Details of electron microscopy, dark-field microscopy, a range of spectroscopic methods (EDX, XRD, XANES, EXAFS), light scattering techniques (DLS, SLS) and chromatography are discussed. The development of user-friendly software to predict particle behaviour in test media according to DLVO theory is in progress, and simple optical methods are available to estimate the settling behaviour of suspensions during experiments. However, for soil matrices such simple approaches may not be applicable. Alternatively, a Critical Body Residue approach may be taken in which body concentrations in organisms are related to effects, and toxicity thresholds derived. For microbial assays, the cell wall is a formidable barrier to MNMs and end points that rely on the test substance penetrating the cell may be insensitive. Instead assays based on the cell envelope should be developed for MNMs. In algal growth tests, the abiotic factors that promote particle aggregation in the media (e.g. 
ionic strength) are also important in providing nutrients, and manipulation of the media to control the dispersion may also inhibit growth. Controls to quantify shading effects, and precise details of lighting regimes, shaking or mixing should be reported in algal tests. Photosynthesis may be more sensitive than traditional growth end points for algae and plants. Tests with invertebrates should consider non-chemical toxicity from particle adherence to the organisms. The use of semi-static exposure methods with fish can reduce the logistical issues of waste water disposal and facilitate aspects of animal husbandry relevant to MNMs. There are concerns that the existing bioaccumulation tests are conceptually flawed for MNMs and that new test(s) are required. In vitro testing strategies, as exemplified by genotoxicity assays, can be modified for MNMs, but the risk of false negatives in some assays is highlighted. In conclusion, most protocols will require some modifications, and recommendations are made to aid the researcher at the bench.
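The DLVO-based prediction of particle behaviour mentioned above can be sketched as a sum of van der Waals attraction and screened electrostatic repulsion between equal spheres; all material constants below are illustrative round numbers, not recommendations for any test medium.

```python
import math

# DLVO-style interaction energy between two equal spheres as a function of
# surface separation h: close-approach van der Waals attraction plus
# screened electric double-layer repulsion.

A_H   = 1e-20             # Hamaker constant (J), illustrative
R     = 50e-9             # particle radius (m)
eps   = 78.5 * 8.854e-12  # permittivity of water (F/m)
psi   = 0.025             # surface potential (V)
kappa = 1 / 10e-9         # inverse Debye length (1/m)

def dlvo_energy(h):
    v_vdw = -A_H * R / (12 * h)                          # attraction
    v_edl = 2 * math.pi * eps * R * psi**2 * math.exp(-kappa * h)  # repulsion
    return v_vdw + v_edl

# attraction dominates at contact; a repulsive barrier appears farther out
print(dlvo_energy(0.1e-9), dlvo_energy(2e-9))
```

The height of the repulsive barrier relative to thermal energy is what determines whether a suspension stays dispersed or aggregates during a test, which is why medium ionic strength (through the Debye length) matters so much.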


Background: The objective of the present study was to compare three different sampling and questionnaire administration methods used in the international KIDSCREEN study in terms of participation, response rates, and external validity. Methods: Children and adolescents aged 8–18 years were surveyed in 13 European countries using either telephone sampling and mail administration, random sampling of school listings followed by classroom or mail administration, or multistage random sampling of communities and households with self-administration of the survey materials at home. Cooperation, completion, and response rates were compared across countries and survey methods. Data on non-respondents was collected in 8 countries. The population fraction (PF, respondents in each sex-age, or educational level category, divided by the population in the same category from Eurostat census data) and population fraction ratio (PFR, ratio of PF) and their corresponding 95% confidence intervals were used to analyze differences by country between the KIDSCREEN samples and a reference Eurostat population. Results: Response rates by country ranged from 18.9% to 91.2%. Response rates were highest in the school-based surveys (69.0%–91.2%). Sample proportions by age and gender were similar to the reference Eurostat population in most countries, although boys and adolescents were slightly underrepresented (PFR <1). Parents in lower educational categories were less likely to participate (PFR <1 in 5 countries). Parents in higher educational categories were overrepresented when the school and household sampling strategies were used (PFR = 1.78–2.97). Conclusion: School-based sampling achieved the highest overall response rates but also produced slightly more biased samples than the other methods. The results suggest that the samples were sufficiently representative to provide reference population values for the KIDSCREEN instrument.
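The PF/PFR representativeness check described above can be sketched as a ratio of two proportions with a large-sample log-scale confidence interval; the respondent and census counts below are invented, not KIDSCREEN data.

```python
import math

# Population fraction ratio (PFR): share of respondents in a category
# divided by the share of the same category in census data, with a
# 95% CI computed on the log scale (standard large-sample form for a
# ratio of two independent proportions).

def pf_ratio_ci(resp_k, resp_n, census_k, census_n):
    pfr = (resp_k / resp_n) / (census_k / census_n)
    se = math.sqrt(1/resp_k - 1/resp_n + 1/census_k - 1/census_n)
    lo = pfr * math.exp(-1.96 * se)
    hi = pfr * math.exp(+1.96 * se)
    return pfr, lo, hi

# boys among respondents vs boys in the census population (toy numbers)
pfr, lo, hi = pf_ratio_ci(resp_k=450, resp_n=1000,
                          census_k=51000, census_n=100000)
print(round(pfr, 3), round(lo, 3), round(hi, 3))
```

A PFR below 1 with a confidence interval excluding 1, as in this toy example, is the pattern the study reports for boys and adolescents being slightly underrepresented.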


Pneumocystis jirovecii is a fungus belonging to a basal lineage of the Ascomycotina, the Taphrinomycotina subphylum. It is a parasite specific to humans that dwells primarily in the lung and can cause severe pneumonia in individuals with a debilitated immune system. Despite its clinical importance, many aspects of its biology remain poorly understood, at least in part because of the lack of a continuous in vitro cultivation system. The present thesis consists of the genome reconstruction and comparative genomics of P. jirovecii. It is made of three parts: (i) the de novo sequencing of the P. jirovecii genome starting from a single bronchoalveolar lavage fluid of a single patient, (ii) the de novo sequencing of the genome of the plant pathogen Taphrina deformans, a fungus closely related to P. jirovecii, and (iii) the genome-scale comparison of P. jirovecii to other Taphrinomycotina members. Enrichment in P. jirovecii cells by immunoprecipitation, whole-DNA random amplification, two complementary high-throughput DNA sequencing methods, and in silico sorting and assembly of sequences were used for the de novo reconstruction of the P. jirovecii genome from the microbiota of a single clinical specimen. An iterative ad hoc pipeline, as well as numerical simulations, was used to recover P. jirovecii sequences while purging out contaminants and assembly or amplification chimeras. This strategy produced an 8.1 Mb assembly, which encodes 3,898 genes. Homology searches, mapping onto atlases of biochemical pathways, and manual validations revealed that this genome lacks (i) most of the enzymes dedicated to amino acid biosynthesis, and (ii) most virulence factors observed in other fungi, e.g. the glyoxylate shunt pathway and specific peptidases involved in the degradation of the host cell membrane.
The same analyses applied to the available genomic sequences from Pneumocystis carinii, the species infecting rats, and Pneumocystis murina, the species infecting mice, revealed the same deficiencies. The genome sequencing of T. deformans yielded a 13 Mb assembly, which encodes 5,735 genes. T. deformans possesses enzymes involved in plant cell wall degradation, secondary metabolism, the glyoxylate cycle, detoxification, sterol biosynthesis, as well as the biosynthesis of plant hormones such as abscisic acid or indole-3-acetic acid. T. deformans also harbors gene subsets that have counterparts in plant saprophytes or pathogens, which is consistent with its alternating saprophytic and pathogenic lifestyles. Mating genes were also identified. The homothallism of this fungus suggests a mating-type switching mechanism. Comparative analyses indicated that 81% of P. jirovecii genes are shared with eight other Taphrinomycotina members, including T. deformans, P. carinii and P. murina. These genes are mostly involved in housekeeping activities. The genes specific to the Pneumocystis genus represent 8% and are involved in RNA metabolism and signaling. Signaling is known to be crucial for the interaction of Pneumocystis spp. with their environment. Eleven percent are unique to P. jirovecii and mostly encode proteins of unknown function. These genes, in conjunction with others (e.g. the major surface glycoproteins), might govern the interaction of P. jirovecii with its human host cells and potentially be responsible for the host specificity. P. jirovecii exhibits a genome reduced in size with a low GC content, and most probably scavenges vital compounds such as amino acids and cholesterol from human lungs. Consistently, its genome encodes a large set of transporters (ca. 22% of its genes), which may play a pivotal role in the acquisition of these compounds. All these features are generally observed in obligate parasites of various kingdoms (bacteria, protozoa, fungi).
Moreover, epidemiological studies have failed to demonstrate a free-living form of the fungus, and Pneumocystis spp. were shown to co-evolve with their hosts. Given also the lack of virulence factors, our observations strongly suggest that P. jirovecii is an obligate parasite specialized in the colonization of human lungs, which causes disease only in individuals with a compromised immune system. The same conclusion is most likely true for all other Pneumocystis spp. in their respective mammalian hosts.


The pseudo-spectral time-domain (PSTD) method is an alternative time-marching method to classical leapfrog finite-difference schemes in the simulation of wave-like propagating phenomena. It is based on the fundamentals of the Fourier transform to compute the spatial derivatives of hyperbolic differential equations. Therefore, it results in an isotropic operator that can be implemented in an efficient way for room acoustics simulations. However, one of the first issues to be solved is the modelling of wall absorption. Unfortunately, there are no references in the technical literature concerning this problem. In this paper, assuming real and constant locally reacting impedances, several proposals to overcome this problem are presented, validated, and compared to analytical solutions in different scenarios.
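The core PSTD ingredient, computing the spatial derivative by multiplying each Fourier mode by i times its wavenumber, can be sketched as follows; a naive O(N²) DFT keeps the example dependency-free where a real implementation would use an FFT.

```python
import cmath, math

# Fourier-space spatial derivative on a periodic grid, the operation that
# replaces finite differences in the PSTD method.

N = 32
dx = 2 * math.pi / N
x = [n * dx for n in range(N)]          # periodic grid on [0, 2*pi)
f = [math.sin(v) for v in x]            # test field

def dft(seq, sign):
    n = len(seq)
    return [sum(seq[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

F = dft(f, -1)                          # forward transform
wavenumbers = [k if k < N // 2 else k - N for k in range(N)]
wavenumbers[N // 2] = 0                 # zero the Nyquist mode
dF = [1j * k * c for k, c in zip(wavenumbers, F)]
df = [v.real / N for v in dft(dF, +1)]  # inverse transform (scaled by 1/N)

max_err = max(abs(d - math.cos(v)) for d, v in zip(df, x))
print(max_err)  # error at machine-precision level for a band-limited field
```

For band-limited fields the derivative is exact up to round-off, which is the source of the method's isotropy and its coarse-grid efficiency; the periodicity assumption is also precisely why boundary conditions such as wall absorption need special handling.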