120 results for similarity estimation
Abstract:
A method is proposed for the estimation of the absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, deltaGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method are developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structures, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
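As a hedged illustration of the scoring form sketched in this abstract, the snippet below fits the three descriptors to experimental binding free energies by ordinary least squares; the arrays, coefficient names and any numbers are hypothetical and are not the parameters reported by the study.

    # Minimal sketch, assuming SASbur, Eelec and deltaGsolv have already been
    # computed for each complex; the fitted coefficients are illustrative only.
    import numpy as np

    def fit_linear_scoring(sas_bur, e_elec, dg_solv, dg_exp):
        """Least-squares fit of dG_bind ~ a*SASbur + b*Eelec + c*deltaGsolv + d."""
        X = np.column_stack([sas_bur, e_elec, dg_solv, np.ones(len(dg_exp))])
        coeffs, *_ = np.linalg.lstsq(X, dg_exp, rcond=None)
        return coeffs  # [a, b, c, d]

    def predict_dg(coeffs, sas_bur, e_elec, dg_solv):
        a, b, c, d = coeffs
        return a * sas_bur + b * e_elec + c * dg_solv + d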
Abstract:
A major issue in the application of waveform inversion methods to crosshole ground-penetrating radar (GPR) data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a recently published time-domain inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity of both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little to no trade-off between the wavelet estimation and the tomographic imaging procedures.
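One common way to carry out such a wavelet-estimation step - shown here only as a hedged, generic sketch, not the authors' time-domain iterative scheme - is a damped frequency-domain least-squares deconvolution of the observed traces by synthetics computed with a unit source.

    # Minimal sketch (generic damped least-squares deconvolution, not the published
    # algorithm): observed and greens are arrays of shape (n_traces, n_samples),
    # where greens holds synthetics computed with the current model and a unit source.
    import numpy as np

    def estimate_wavelet(observed, greens, eps=1e-3):
        D = np.fft.rfft(observed, axis=1)
        G = np.fft.rfft(greens, axis=1)
        num = np.sum(np.conj(G) * D, axis=0)          # stack over traces
        den = np.sum(np.abs(G) ** 2, axis=0)
        W = num / (den + eps * den.max())             # damping avoids division by ~0
        return np.fft.irfft(W, n=observed.shape[1])   # time-domain wavelet estimate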
Abstract:
The physical disector is a method of choice for estimating unbiased neuron numbers; nevertheless, calibration is needed to evaluate each counting method. The validity of this method can be assessed by comparing the estimated cell number with the true number determined by a direct counting method in serial sections. We reconstructed one-fifth of rat lumbar dorsal root ganglia taken from two experimental conditions. From each ganglion, images of 200 adjacent semi-thin sections were used to reconstruct a volumetric dataset (stack of voxels). On these stacks the number of sensory neurons was estimated and counted by the physical disector and direct counting methods, respectively. In addition, using the coordinates of the nuclei from the direct counting, we simulated, with a Matlab program, disector pairs separated by increasing distances in a ganglion model. The comparison between the results of these approaches clearly demonstrates that the physical disector method provides a valid and reliable estimate of the number of sensory neurons only when the distance between consecutive disector pairs is 60 µm or smaller. Under these conditions the error between the results of the physical disector and direct counting does not exceed 6%. In contrast, when the distance between two pairs is larger than 60 µm (70-200 µm), the error increases rapidly, up to 27%. We conclude that the physical disector method provides a reliable estimate of the number of rat sensory neurons only when the separating distance between consecutive disector pairs is no larger than 60 µm.
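The disector-spacing simulation described above can be pictured with a short sketch like the one below (an assumed setup, not the authors' Matlab code): nuclei whose unique counting point falls inside a reference section are counted, and the total is scaled by the inverse of the sampling fraction.

    # Minimal sketch, assuming z_tops holds one z-coordinate (in µm) per neuron from
    # the direct count, stack_depth is the reconstructed depth and the reference
    # section has a fixed thickness; all numbers below are illustrative.
    import numpy as np

    def disector_estimate(z_tops, stack_depth, spacing, section_thickness):
        counted = 0
        z0 = 0.0
        while z0 + section_thickness <= stack_depth:
            counted += np.sum((z_tops >= z0) & (z_tops < z0 + section_thickness))
            z0 += spacing                              # next disector pair
        sampling_fraction = section_thickness / spacing
        return counted / sampling_fraction

    # relative error vs. the true number from direct counting:
    # error = abs(disector_estimate(z, depth, 60.0, 2.0) - len(z)) / len(z)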
Abstract:
Extreme centennial daily precipitation was estimated from Gumbel analyses and seven empirical formulas applied to rainfall series at 151 locations in Switzerland for two 50-year periods. These estimates were compared with the maximum daily values measured over the last 100 years (1911-2010) in order to test the performance of the seven formulas. This comparison reveals that the Weibull formula would be the best for estimating centennial daily precipitation from the 1961-2010 rainfall series, but the worst for the 1911-1960 series. The Hazen formula would be the most effective for this latter period. These differences in performance between the empirical formulas for the two periods studied result from the increase in maximum measured daily precipitation from 1911 to 2010 at 90% of the stations in Switzerland. However, the differences between the extreme rainfall estimates obtained from the seven empirical formulas do not exceed 6% on average.
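For readers unfamiliar with these estimators, the sketch below illustrates a method-of-moments Gumbel fit and two of the empirical plotting-position formulas mentioned above (Weibull and Hazen); it is a generic illustration, not the study's code or data.

    # Minimal sketch: 100-year daily rainfall from an annual-maximum series.
    import numpy as np

    def gumbel_quantile(annual_maxima, T=100.0):
        x = np.asarray(annual_maxima, dtype=float)
        alpha = np.sqrt(6.0) * x.std(ddof=1) / np.pi    # scale, method of moments
        u = x.mean() - 0.5772 * alpha                   # location
        y_T = -np.log(-np.log(1.0 - 1.0 / T))           # reduced Gumbel variate
        return u + alpha * y_T

    def empirical_return_periods(annual_maxima, formula="weibull"):
        x = np.sort(np.asarray(annual_maxima, dtype=float))
        n, i = len(x), np.arange(1, len(x) + 1)
        if formula == "weibull":
            p = i / (n + 1.0)                           # non-exceedance probability
        elif formula == "hazen":
            p = (i - 0.5) / n
        else:
            raise ValueError(formula)
        return x, 1.0 / (1.0 - p)                       # sorted maxima, return periods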
Abstract:
Captan and folpet are two fungicides widely used in agriculture, but biomonitoring data are mostly limited to measurements of captan metabolite concentrations in spot urine samples of workers, which complicates interpretation of results in terms of internal dose estimation, daily variations according to the tasks performed, and the most plausible routes of exposure. This study aimed at performing repeated biological measurements of exposure to captan and folpet in field workers (i) to better assess the internal dose along with the main routes of entry according to tasks and (ii) to establish the most appropriate sampling and analysis strategies. The detailed urinary excretion time courses of specific and non-specific biomarkers of exposure to captan and folpet were established in tree farmers (n = 2) and grape growers (n = 3) over a typical workweek (seven consecutive days), including spraying and harvest activities. The impact of how urinary measurements are expressed [excretion rates adjusted or not for creatinine, or cumulative amounts over given time periods (8, 12, and 24 h)] was evaluated. Absorbed doses and the main routes of entry were then estimated from the 24-h cumulative urinary amounts through the use of a kinetic model. The time courses showed that exposure levels were higher during spraying than during harvest activities. Model simulations also suggest a limited absorption in the studied workers and an exposure mostly through the dermal route. They further point out the advantage of expressing biomarker values as body weight-adjusted amounts in repeated 24-h urine collections rather than as concentrations or excretion rates in spot samples, without the need for creatinine corrections.
Abstract:
As a rigorous combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches - with the help of Bayesian networks - to process their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) to specify graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
Abstract:
Knowledge of the relationship linking radiation dose and image quality is a prerequisite to any optimization of medical diagnostic radiology. Image quality depends, on the one hand, on physical parameters such as contrast, resolution, and noise and, on the other hand, on the characteristics of the observer who assesses the image. While the role of contrast and resolution is precisely defined and recognized, the influence of image noise is not yet fully understood. Its measurement is often based on imaging uniform test objects, even though real images contain anatomical backgrounds whose statistical nature is very different from that of the test objects used to assess system noise. The goal of this study was to demonstrate the importance of variations in background anatomy by quantifying their effect on a series of detection tasks. Several types of mammographic backgrounds and signals were examined by psychophysical experiments in a two-alternative forced-choice detection task. According to hypotheses concerning the strategy used by the human observers, their signal-to-noise ratio was determined. This variable was also computed for a mathematical model based on statistical decision theory. By comparing the theoretical model with the experimental results, the way anatomical structure is perceived was analyzed. The experiments showed that the observers' behavior was highly dependent on both the system noise and the anatomical background. The anatomy acts partly as a signal, recognizable as such, and partly as a pure noise that disturbs the detection process. This dual nature of the anatomy is quantified. It is shown that its effect varies according to its amplitude and the profile of the object being detected. The contribution of the noisy part of the anatomy is, in some situations, much greater than that of the system noise. Hence, reducing the system noise by increasing the dose will not improve task performance. This observation indicates that the trade-off between dose and image quality might be optimized by accepting a higher system noise. This could lead to better resolution, more contrast, or less dose.
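As general background on the performance measure used in such experiments (a standard signal-detection relation, not a result specific to this study), the detectability index in a two-alternative forced-choice task follows directly from the proportion of correct responses.

    # d' = sqrt(2) * z(Pc) for a 2AFC task, where z is the inverse standard normal CDF.
    import numpy as np
    from scipy.stats import norm

    def dprime_2afc(proportion_correct):
        return np.sqrt(2.0) * norm.ppf(proportion_correct)

    # dprime_2afc(0.76) is roughly 1.0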
Abstract:
The comparison of cancer prevalence with cancer mortality can, under some hypotheses, lead to an estimate of the registration rate. A method is proposed in which the cases with cancer as a cause of death are divided into 3 categories: (1) cases already known by the registry; (2) unknown cases having occurred before the registry's creation date; (3) unknown cases occurring while the registry operates. The estimate is then the number of cases in the first category divided by the total of those in categories 1 and 3 (only these are to be registered). An application is performed on the data of the Canton de Vaud. Survival rates of the Norwegian Cancer Registry are used for computing the number of unknown cases to be included in the second and third categories, respectively. The discussion focuses on the possible determinants of the obtained comprehensiveness rates for various cancer sites.
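In code, the estimator described above amounts to the following (illustrative counts only, not the Canton de Vaud data):

    # Completeness (registration-rate) estimate: deaths with cancer as cause are
    # split into cases already known to the registry (category 1), unknown cases
    # diagnosed before the registry existed (category 2, excluded) and unknown
    # cases diagnosed while the registry operated (category 3).
    def registration_rate(n_known, n_unknown_during_operation):
        return n_known / (n_known + n_unknown_during_operation)

    # registration_rate(180, 20) -> 0.9  (hypothetical numbers)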
Abstract:
This paper presents a very fine grid hydrological model based on the spatiotemporal repartition of precipitation and on the topography. The goal is to estimate the flood of a catchment area, using a Probable Maximum Precipitation (PMP) leading to a Probable Maximum Flood (PMF). The spatiotemporal distribution of the precipitation was realized using six clouds modeled by the advection-diffusion equation. The equation describes the movement of the clouds over the terrain and also gives the evolution of the rain intensity in time. This hydrological modeling is followed by a hydraulic modeling of the surface and subterranean flows, carried out considering the factors that contribute to the hydrological cycle, such as infiltration, exfiltration and snowmelt. The model was applied to several Swiss basins using measured rainfall, with results showing a good correlation between the simulated and observed flows. This good correlation indicates that the model is valid and gives us confidence that the results can be extrapolated to extreme rainfall phenomena of the PMP type. In this article we present some results obtained using a PMP rainfall and the developed model.
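For reference, the general form of the advection-diffusion equation used to transport such rain cells is (standard form only; the study's specific coefficients and source terms are not given in the abstract):

    ∂C/∂t + u · ∇C = ∇ · (D ∇C) + S

where C is the rain-intensity field of a cloud, u the advection velocity, D the diffusion coefficient and S a source term.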
Abstract:
The aim of this paper is to describe the process and challenges in building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), and included occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.
Abstract:
Lutetium zoning in garnet within eclogites from the Zermatt-Saas Fee zone, Western Alps, reveals sharp, exponentially decreasing central peaks. These can be used to constrain maximum Lu volume diffusion in garnet. A prograde garnet growth temperature interval of 450-600 °C has been estimated based on pseudosection calculations and garnet-clinopyroxene thermometry. The maximum pre-exponential diffusion coefficient that fits the measured central peak is of the order of D0 = 5.7 x 10^-6 m^2/s, taking an estimated activation energy of 270 kJ/mol based on diffusion experiments for other rare earth elements in garnet. This corresponds to a maximum diffusion rate of D(600 °C) = 4.0 x 10^-22 m^2/s. The diffusion estimate for Lu can be used to estimate the minimum closure temperature, Tc, for Sm-Nd and Lu-Hf age data that have been obtained in eclogites of the Western Alps, postulating, based on a literature review, that D(Hf) < D(Nd) < D(Sm) ≤ D(Lu). Tc calculations, using the Dodson equation, yielded minimum closure temperatures of about 630 °C, assuming a rapid initial cooling rate of 50 °C/m.y. during exhumation and an average garnet crystal size of r = 1 mm. This suggests that Sm/Nd and Lu/Hf isochron age differences in eclogites from the Western Alps, where peak temperatures rarely exceeded 600 °C, must be interpreted in terms of prograde metamorphism.
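A hedged sketch of the Dodson calculation mentioned above is given below; the spherical geometry factor, fixed-point iteration and unit conversions are standard assumptions rather than details taken from the paper, but with the values quoted above (E = 270 kJ/mol, D0 = 5.7 x 10^-6 m^2/s, r = 1 mm, 50 °C/m.y.) it returns a closure temperature consistent with the ~630 °C estimate.

    # Minimal sketch: fixed-point solution of Dodson's equation
    #   Tc = E / (R * ln(A * tau * D0 / a^2)),  tau = R * Tc^2 / (E * dT/dt)
    import numpy as np

    R = 8.314                                    # J/(mol K)
    SEC_PER_MYR = 3.156e13                       # seconds per million years

    def closure_temperature(E, D0, a, cooling_K_per_Myr, A=55.0, Tc=900.0):
        """E in J/mol, D0 in m^2/s, grain radius a in m; A = 55 for a sphere."""
        dTdt = cooling_K_per_Myr / SEC_PER_MYR   # K/s
        for _ in range(50):                      # fixed-point iteration
            tau = R * Tc**2 / (E * dTdt)
            Tc = E / (R * np.log(A * tau * D0 / a**2))
        return Tc                                # kelvin

    # closure_temperature(270e3, 5.7e-6, 1e-3, 50.0) - 273.15 is roughly 626,
    # consistent with the ~630 °C quoted above.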
Abstract:
A clear and rigorous definition of muscle moment-arms in the context of musculoskeletal systems modelling is presented, using classical mechanics and screw theory. The definition provides an alternative to the tendon excursion method, which can lead to incorrect moment-arms if used inappropriately due to its dependency on the choice of joint coordinates. The definition of moment-arms, and the presented construction method, apply to musculoskeletal models in which the bones are modelled as rigid bodies, the joints are modelled as ideal mechanical joints and the muscles are modelled as massless, frictionless cables wrapping over the bony protrusions, approximated using geometric surfaces. In this context, the definition is independent of any coordinate choice. It is then used to solve a muscle-force estimation problem for a simple 2D conceptual model and compared with an incorrect application of the tendon excursion method. The relative errors between the two solutions vary between 0% and 100%.
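As an illustrative companion (a generic geometric computation for a revolute joint and a straight muscle segment, not the paper's screw-theory construction), the signed moment arm can be obtained as the moment of a unit force along the muscle's line of action projected onto the joint axis.

    # Minimal sketch with hypothetical geometry: all arguments are 3D numpy arrays.
    import numpy as np

    def moment_arm(axis_point, axis_dir, origin, insertion):
        a = axis_dir / np.linalg.norm(axis_dir)      # unit joint-axis direction
        u = insertion - origin
        u = u / np.linalg.norm(u)                    # unit muscle line of action
        r = origin - axis_point                      # lever from axis to muscle path
        return float(np.dot(np.cross(r, u), a))      # signed moment arm

    # moment_arm(np.zeros(3), np.array([0., 0., 1.]),
    #            np.array([0.05, 0., 0.]), np.array([0.05, 0.1, 0.]))  ->  0.05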
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland, taking into account architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m3. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of the kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns that had already been obtained earlier. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically lower IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC at a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
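To make the approach concrete, the sketch below shows a Nadaraya-Watson kernel regression with one bandwidth per predictor; the kernel shape, variable coding and bandwidths are illustrative assumptions, not the study's actual model.

    # Minimal sketch: X_train is (n_samples, n_variables), y_train holds log IRC (or
    # 0/1 indicators for exceeding 300 Bq/m3), bandwidths is one value per variable.
    import numpy as np

    def kernel_predict(X_train, y_train, x_query, bandwidths):
        z = (X_train - x_query) / bandwidths        # scale each variable by its bandwidth
        w = np.exp(-0.5 * np.sum(z**2, axis=1))     # product Gaussian kernel weights
        return np.sum(w * y_train) / np.sum(w)      # locally weighted average

    # With y_train = (irc > 300).astype(float), the same estimator yields the local
    # probability of exceeding 300 Bq/m3.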
Abstract:
The article constituting the present thesis describes a study evaluating the similarities between behavioural inhibition in children and in their respective parents during childhood, as well as the possible association between inhibition and parental attitudes. Behavioural inhibition - defined as a child's predisposition to react with reticence and distress to unfamiliar situations - is a temperamental characteristic that appears early in life and is relatively stable over time. Its interest lies in its association with negative developmental outcomes for the individual: an increased risk in children of school maladjustment, social dysfunction and feelings of distress, and a risk factor for the later development of adult psychiatric disorders, anxiety disorders in particular. In this study we therefore investigated 1) the presence of an association between the children's current behavioural inhibition and their parents' retrospective inhibition, in its two specific dimensions, social ("school fears") and non-social ("general fears"), and 2) whether these dimensions of inhibition were associated with the level of warmth, affection and support provided by the parents. From self-report questionnaires collected in the context of a large study initiated in Lausanne in 1994, we extracted a sample of 453 children in their 6th/7th school year and 741 of their respective biological parents. The analyses covered the self-report questionnaires assessing, in the children, their degree of behavioural inhibition, their current psychiatric symptomatology and their perception of the warmth, affection and support received from their parents, and, for the parents, self-report questionnaires retrospectively assessing their degree of behavioural inhibition during childhood and their current psychiatric symptomatology. The comparison of the children's scores with those of their parents on the behavioural inhibition scales (CSRCI - Child version of the Self-Report of Child Inhibition - and RSRI - Retrospective Self-Report of Inhibition) shows a significant similarity for each of the specific dimensions, suggesting that children of parents who displayed behavioural inhibition in childhood are at greater risk of developing it in turn. In additional analyses, we were also able to establish that this association persisted after adjusting for the children's and parents' respective degrees of psychiatric symptomatology, ruling out this potential confounding factor. Although the nature of this association cannot be elucidated by this study, the modest effect size suggests that an important role in the development of inhibition is played by environmental factors not shared within a family. One such factor is suggested by the negative association found in this study between the degree of inhibition in its "school fears" dimension and parental attitude: a low level of warmth, affection and support perceived by the child from the parents would thus influence the development of fears in social situations in particular.
Alternatively, this negative association could suggest that the child's fears have a negative influence on the development of a supportive parental attitude or, at least, on the child's perception of it. It is of course also conceivable that these processes interact in a feedback loop. Only longitudinal studies could shed light on the nature of this association. In any case, this result is of clinical interest for guiding interventions aimed at preventing the development of psychiatric disorders in inhibited children, by proposing to act either directly on parental attitudes, or on the child's fears together with the characteristics of the parental attitude as perceived by the child. Longitudinal studies on the resemblance between parents and their children with regard to behavioural inhibition, including twin, adoption and family studies, are needed to better understand the interactions between genetic, familial and environmental factors in the development of behavioural inhibition in children.