57 results for Negative dimensional integration method (NDIM) in Universit
Abstract:
The integration of geophysical data into the subsurface characterization problem has been shown in many cases to significantly improve hydrological knowledge by providing information at spatial scales and locations that is unattainable using conventional hydrological measurement techniques. The investigation of exactly how much benefit can be brought by geophysical data in terms of its effect on hydrological predictions, however, has received considerably less attention in the literature. Here, we examine the potential hydrological benefits brought by a recently introduced simulated annealing (SA) conditional stochastic simulation method designed for the assimilation of diverse hydrogeophysical data sets. We consider the specific case of integrating crosshole ground-penetrating radar (GPR) and borehole porosity log data to characterize the porosity distribution in saturated heterogeneous aquifers. In many cases, porosity is linked to hydraulic conductivity and thus to flow and transport behavior. To perform our evaluation, we first generate a number of synthetic porosity fields exhibiting varying degrees of spatial continuity and structural complexity. Next, we simulate the collection of crosshole GPR data between several boreholes in these fields, and the collection of porosity log data at the borehole locations. The inverted GPR data, together with the porosity logs, are then used to reconstruct the porosity field using the SA-based method, along with a number of other more elementary approaches. Assuming that the grid-cell-scale relationship between porosity and hydraulic conductivity is unique and known, the porosity realizations are then used in groundwater flow and contaminant transport simulations to assess the benefits and limitations of the different approaches.
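The simulated-annealing step at the heart of such conditional stochastic simulation can be illustrated with a generic sketch (not the specific SA algorithm of this study): a porosity field is perturbed by swapping values between non-conditioned cells, and swaps are accepted or rejected with a Metropolis criterion on a user-supplied objective function measuring the mismatch to the target spatial statistics and geophysical constraints. The function name and the form of the objective below are hypothetical.

```python
import numpy as np

def sa_conditional_simulation(field, cond_mask, objective, n_iter=20_000,
                              t0=1.0, cooling=0.999, rng=None):
    """Generic simulated-annealing conditional simulation (illustrative sketch).

    field     : 2D array; initial realization that already honors the hard data
                (e.g. porosity logs) and the target histogram
    cond_mask : boolean array, True where cells are conditioned and must not change
    objective : callable(field) -> float; mismatch to the target spatial statistics
                and geophysical constraints (hypothetical, user-supplied)
    """
    rng = np.random.default_rng() if rng is None else rng
    free = np.argwhere(~cond_mask)                    # cells allowed to change
    energy = objective(field)
    temp = t0
    for _ in range(n_iter):
        (i, j), (k, l) = free[rng.integers(len(free), size=2)]
        field[i, j], field[k, l] = field[k, l], field[i, j]        # swap proposal
        new_energy = objective(field)
        accept = (new_energy <= energy or
                  rng.random() < np.exp((energy - new_energy) / temp))
        if accept:
            energy = new_energy
        else:
            field[i, j], field[k, l] = field[k, l], field[i, j]    # undo the swap
        temp *= cooling                                            # cool down
    return field
```

Because only swaps between free cells are proposed, the conditioning data and the marginal histogram of the initial realization are preserved by construction; all method-specific detail sits in the objective function.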
Abstract:
Midazolam is a widely accepted probe for phenotyping cytochrome P4503A. A gas chromatography-mass spectrometry (GC-MS)-negative chemical ionization method is presented which allows measuring very low levels of midazolam (MID), 1-OH midazolam (1OHMID) and 4-OH midazolam (4OHMID), in plasma, after derivatization with the reagent N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide. The standard curves were linear over a working range of 20 pg/ml to 5 ng/ml for the three compounds, with the mean coefficients of correlation of the calibration curves (n = 6) being 0.999 for MID and 1OHMID, and 1.0 for 4OHMID. The mean recoveries measured at 100 pg/ml, 500 pg/ml, and 2 ng/ml, ranged from 76 to 87% for MID, from 76 to 99% for 1OHMID, from 68 to 84% for 4OHMID, and from 82 to 109% for N-ethyloxazepam (internal standard). Intra- (n = 7) and inter-day (n = 8) coefficients of variation determined at three concentrations ranged from 1 to 8% for MID, from 2 to 13% for 1OHMID and from 1 to 14% for 4OHMID. The percent theoretical concentrations (accuracy) were within +/-8% for MID and 1OHMID, within +/-9% for 4OHMID at 500 pg/ml and 2 ng/ml, and within +/-28% for 4OHMID at 100 pg/ml. The limits of quantitation were found to be 10 pg/ml for the three compounds. This method can be used for phenotyping cytochrome P4503A in humans following the administration of a very low oral dose of midazolam (75 microg), without central nervous system side-effects.
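The precision and accuracy figures quoted above follow the usual definitions: relative standard deviation of replicate measurements and relative deviation of the replicate mean from the nominal concentration. A minimal illustration with made-up replicate values (not data from the study):

```python
import numpy as np

# Hypothetical replicate measurements (pg/ml) at a nominal 500 pg/ml QC level;
# the values are illustrative only.
nominal = 500.0
replicates = np.array([488.0, 512.0, 495.0, 503.0, 490.0, 507.0, 499.0])

cv_percent = 100.0 * replicates.std(ddof=1) / replicates.mean()     # precision (CV)
accuracy_percent = 100.0 * (replicates.mean() - nominal) / nominal  # bias vs nominal

print(f"CV = {cv_percent:.1f} %, accuracy = {accuracy_percent:+.1f} %")
```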
Abstract:
The present paper studies the probability of ruin of an insurer if excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
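For comparison with the stochastic-simulation alternative mentioned above, the finite-time ruin probability of the plain Cramér-Lundberg model (constant premium rate, compound Poisson claims, no reinsurance layer) can be estimated by straightforward Monte Carlo. The sketch below is a generic illustration under those simplifying assumptions, not the paper's recursive algorithm; the function name and defaults are hypothetical.

```python
import numpy as np

def finite_time_ruin_prob(u, c, lam, claim_sampler, horizon,
                          n_paths=10_000, rng=None):
    """Monte Carlo estimate of the finite-time ruin probability in the
    classical Cramer-Lundberg model (no reinsurance layer; sketch only).

    u             : initial surplus
    c             : premium income rate
    lam           : Poisson claim arrival intensity
    claim_sampler : callable(rng) -> one claim amount
    horizon       : finite time horizon T
    """
    rng = np.random.default_rng() if rng is None else rng
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)      # next claim arrival time
            if t > horizon:
                break
            claims += claim_sampler(rng)         # aggregate claims so far
            if u + c * t - claims < 0.0:         # ruin can only occur at claim times
                ruined += 1
                break
    return ruined / n_paths

# Example: exponential(1) claims, 20% loading, initial surplus 10, T = 50
print(finite_time_ruin_prob(10.0, 1.2, 1.0, lambda g: g.exponential(1.0), 50.0))
```

Adding an excess-of-loss layer with reinstatements would modify the claim amounts and premium adjustments entering the surplus process, which is precisely the more general situation the paper treats.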
Abstract:
Background and aim of the study: In Switzerland no HIV test is performed without the patient's consent, based on a Voluntary Counseling and Testing (VCT) policy. We hypothesized that a substantial proportion of patients going through elective surgery falsely believed that an HIV test was performed on a routine basis and interpreted the absence of a communicated result as being HIV negative. Material and method: All patients with elective orthopedic surgery during 2007 were contacted by phone in 2008. A structured questionnaire assessed their beliefs about routine preoperative blood analysis (glycemia, coagulation capacity, HIV serology and cholesterol) as well as result awareness and interpretation. Variables included age and gender. Analyses were conducted using the software JMP 6.0.3. Results: 1123 patients were included. 130 (12%) were excluded (i.e. unreachable, unable to communicate on the phone, not operated). 993 completed the survey (89%). Median age was 51 (16-79). 50% were female. 376 (38%) patients thought they had an HIV test performed before surgery, but none of them had one. 298 (79%) interpreted the absence of a result as a negative HIV test. Age below 50 years was predictive of the belief that an HIV test had been performed (45% vs 33% for the 16-49 and 50-79 year age groups, respectively; p < 0.001). No difference was observed between genders. Conclusion: In Switzerland, nearly 40% of the patients falsely thought an HIV test had been performed on a routine basis before surgery and were erroneously reassured about their HIV status. These results should either lead to better patient information regarding preoperative exams or motivate public health policy to consider opt-out HIV screening, since patients already expect it.
Abstract:
The objective of this work is to present a multitechnique approach to define the geometry, the kinematics, and the failure mechanism of a large retrogressive landslide (upper part of the La Valette landslide, South French Alps) by the combination of airborne and terrestrial laser scanning data and ground-based seismic tomography data. The advantage of combining different methods is to constrain the geometrical and failure mechanism models by integrating different sources of information. Because of a high point density at the ground surface (4.1 points m⁻²), a small laser footprint (0.09 m) and an accurate three-dimensional positioning (0.07 m), airborne laser scanning data are well suited as a source of information to analyze morphological structures at the surface. Seismic tomography surveys (P-wave and S-wave velocities) may highlight the presence of low-seismic-velocity zones that characterize the presence of dense fracture networks at the subsurface. The surface displacements measured from the terrestrial laser scanning data over a period of 2 years (May 2008 to May 2010) allow one to quantify the landslide activity in the direct vicinity of the identified discontinuities. Substantial subsidence of the crown area, with an average subsidence rate of 3.07 m year⁻¹, is determined. The displacement directions indicate that the retrogression is controlled structurally by the preexisting discontinuities. A conceptual structural model is proposed to explain the failure mechanism and the retrogressive evolution of the main scarp. Uphill, the crown area is affected by planar sliding included in a deeper wedge failure system constrained by two preexisting fractures. Downhill, the landslide body acts as a buttress for the upper part. Consequently, the progression of the landslide body downhill allows the development of dip-slope failures, and coherent blocks start sliding along planar discontinuities. The volume of the failed mass in the crown area is estimated at 500,000 m³ with the sloping local base level method.
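For context, the sloping local base level (SLBL) technique used for the volume estimate iteratively lowers a digital elevation model toward a concave potential failure surface; the failed volume is then the difference between the two surfaces integrated over the area. The following is a simplified regular-grid sketch of that idea, with an arbitrary tolerance value, not the implementation used in the study.

```python
import numpy as np

def slbl_surface(dem, tolerance=0.1, max_iter=5000):
    """Simplified sloping-local-base-level (SLBL) surface on a regular grid.

    The surface starts at the DEM and is iteratively lowered: each cell is
    replaced by the mean of its four neighbours minus a tolerance whenever
    that value is lower than the current one. Border cells stay pinned to
    the original topography. Tolerance and iteration count are illustrative.
    """
    dem = np.asarray(dem, dtype=float)
    z = dem.copy()
    for _ in range(max_iter):
        neigh_mean = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                             np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z_new = np.minimum(z, neigh_mean - tolerance)   # only ever lower the surface
        z_new[0, :], z_new[-1, :] = dem[0, :], dem[-1, :]   # fixed borders
        z_new[:, 0], z_new[:, -1] = dem[:, 0], dem[:, -1]
        if np.allclose(z_new, z):
            break
        z = z_new
    return z

# Failed-mass volume = (DEM minus SLBL surface) summed over cells, times cell area:
# volume = float(np.sum(dem - slbl_surface(dem)) * cell_area)
```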
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. 
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
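The gradual-deformation update referred to above is, in its classical form, a weighted combination of two Gaussian realizations that preserves the prior covariance. The sketch below illustrates that generic update for zero-mean (standardized) multi-Gaussian model vectors; the specific parameterization used in the thesis is not given here, and `prior_sampler` is a hypothetical callable.

```python
import numpy as np

def gradual_deformation_proposal(m_current, prior_sampler, theta, rng=None):
    """Classical gradual-deformation proposal for MCMC (sketch).

    Assumes the model vectors are zero-mean multi-Gaussian realizations
    sharing the same covariance; the combination

        m_proposed = m_current * cos(theta) + m_new * sin(theta)

    then has exactly the same prior statistics. theta in (0, pi/2] controls
    the perturbation strength (small theta -> small, highly correlated step).
    """
    rng = np.random.default_rng() if rng is None else rng
    m_new = prior_sampler(rng)                 # independent draw from the prior
    return np.cos(theta) * m_current + np.sin(theta) * m_new
```

Small values of theta give small steps with high acceptance but slow mixing, while theta near pi/2 essentially proposes an independent prior draw; tuning this trade-off is what controls how quickly independent posterior samples are generated.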
Abstract:
Introduction: Ethylglucuronide (EtG) is a direct and specific metabolite of ethanol. Its determination in hair is of increasing interest for detecting and monitoring alcohol abuse. The quantification of EtG in hair requires analytical methods with the highest sensitivity and specificity. We present a fully validated method based on gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS). The method was validated using the French Society of Pharmaceutical Sciences and Techniques (SFSTP) guidelines, which are based on the determination of the total measurement error and accuracy profiles. Methods: Washed and powdered hair is extracted in water using an ultrasonic incubation. After purification by Oasis MAX solid phase extraction, the derivatized EtG is detected and quantified by the GC-NCI-MS/MS method in the selected reaction monitoring mode. The transitions m/z 347 → 163 and m/z 347 → 119 were used for the quantification and identification of EtG. Four quality controls (QC) prepared with hair samples taken post mortem from 2 subjects with a known history of alcoholism were used. A proficiency test with 7 participating laboratories was first run to validate the EtG concentration of each QC sample. Considering the results of this test, these samples were then used as internal controls for validation of the method. Results: The mean EtG concentrations measured in the 4 QC were 259.4, 130.4, 40.8, and 8.4 pg/mg hair. Method validation showed linearity between 8.4 and 259.4 pg/mg hair (r2 > 0.999). The lower limit of quantification was set at 8.4 pg/mg. Repeatability and intermediate precision were found to be less than 13.2% for all concentrations tested. Conclusion: The method proved to be suitable for routine analysis of EtG in hair. The GC-NCI-MS/MS method was then successfully applied to the analysis of EtG in hair samples collected from different alcohol consumers.
Abstract:
Ethyl glucuronide (EtG) is a minor and direct metabolite of ethanol. EtG is incorporated into the growing hair, allowing retrospective investigation of chronic alcohol abuse. In this study, we report the development and the validation of a method using gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS) for the quantification of EtG in hair. EtG was extracted from about 30 mg of hair by aqueous incubation and purified by solid-phase extraction (SPE) using mixed mode extraction cartridges, followed by derivatization with perfluoropentanoic anhydride (PFPA). The analysis was performed in the selected reaction monitoring (SRM) mode using the transitions m/z 347 → 163 (for the quantification) and m/z 347 → 119 (for the identification) for EtG, and m/z 352 → 163 for EtG-d5 used as internal standard. For validation, we prepared quality controls (QC) using hair samples taken post mortem from 2 subjects with a known history of alcoholism. These samples were confirmed by a proficiency test with 7 participating laboratories. The assay linearity of EtG was confirmed over the range from 8.4 to 259.4 pg/mg hair, with a coefficient of determination (r²) above 0.999. The limit of detection (LOD) was estimated at 3.0 pg/mg. The lower limit of quantification (LLOQ) of the method was fixed at 8.4 pg/mg. Repeatability and intermediate precision (relative standard deviation, RSD%), tested at 4 QC levels, were less than 13.2%. The analytical method was applied to several hair samples obtained from autopsy cases with a history of alcoholism and/or lesions caused by alcohol. EtG concentrations in hair ranged from 60 to 820 pg/mg hair.
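Quantification in such isotope-dilution assays typically rests on a linear calibration of the analyte-to-internal-standard peak-area ratio against concentration, from which unknowns are back-calculated. A minimal sketch with illustrative (not measured) area ratios:

```python
import numpy as np

# Hypothetical calibration data: EtG concentration (pg/mg hair) vs. peak-area
# ratio EtG / EtG-d5. The ratios are invented for illustration only.
conc = np.array([8.4, 40.8, 130.4, 259.4])
ratio = np.array([0.021, 0.100, 0.322, 0.640])

slope, intercept = np.polyfit(conc, ratio, 1)      # least-squares calibration line
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2           # coefficient of determination

def quantify(sample_ratio):
    """Back-calculate an EtG concentration from a measured area ratio."""
    return (sample_ratio - intercept) / slope

print(f"r2 = {r2:.4f}, sample = {quantify(0.15):.1f} pg/mg")
```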
Abstract:
Many biological specimens do not arrange themselves in ordered assemblies (tubular or flat 2D crystals) suitable for electron crystallography, nor in perfectly ordered 3D crystals for X-ray diffraction; many others are simply too large to be approached by NMR spectroscopy. Therefore, single-particle analysis has become a progressively more important technique for structural determination of large isolated macromolecules by cryo-electron microscopy. Nevertheless, the low signal-to-noise ratio and the high electron-beam sensitivity of biological samples remain two main resolution-limiting factors when the specimens are observed in their native state. Cryo-negative staining is a recently developed technique that allows the study of biological samples with the electron microscope. The samples are observed at low temperature, in the vitrified state, but in the presence of a stain (ammonium molybdate). In the present work, the advantages of this novel technique are investigated: it is shown that cryo-negative staining can generally overcome most of the problems encountered with cryo-electron microscopy of vitrified native suspensions of biological particles. The specimens are faithfully represented with a 10-times higher SNR than in the case of unstained samples. Beam damage is found to be considerably reduced by comparison of multiple-exposure series of both stained and unstained samples. The present report also demonstrates that cryo-negative staining is capable of high-resolution analysis of biological macromolecules. The vitrified stain solution surrounding the sample does not forbid the access to the internal features (i.e., the secondary structure) of a protein. 
This finding is of direct interest for the structural biologist trying to combine electron microscopy and X-ray data. Finally, several application examples demonstrate the advantages of this newly developed electron microscopy technique.
Abstract:
Background: In the present article, we propose an alternative method for dealing with negative affectivity (NA) biases in research while investigating the association between a deleterious psychosocial environment at work and poor mental health. First, we investigated how strong NA must be to cause an observed correlation between the independent and dependent variables. Second, we subjectively assessed whether NA can have a large enough impact on a large enough number of subjects to invalidate the observed correlations between dependent and independent variables. Methods: We simulated 10,000 populations of 300 subjects each, using the marginal distribution of workers in an actual population that had answered Siegrist's questionnaire on effort-reward imbalance (ERI) and the General Health Questionnaire (GHQ). Results: The results of the present study suggested that simulated NA has a minimal effect on the mean scores for effort and reward. However, the correlations between the effort-reward imbalance (ERI) ratio and the GHQ score might be important, even in simulated populations with a limited NA. Conclusions: When investigating the relationship between the ERI ratio and the GHQ score, we suggest the following rules for the interpretation of the results: correlations with an explained variance of 5% and below should be considered with caution; correlations with an explained variance between 5% and 10% may result from NA, although this effect does not seem likely; and correlations with an explained variance of 10% and above are not likely to be the result of NA biases.
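The simulation logic can be illustrated generically: a latent NA trait is added to otherwise independent exposure and outcome scores, and the explained variance it induces is recorded across simulated populations. The sketch below uses normal scores and an arbitrary effect size rather than the marginal distributions of the actual worker population used in the study.

```python
import numpy as np

def simulate_na_bias(n_pop=10_000, n_subj=300, na_effect=0.3, rng=None):
    """Toy simulation of a negative-affectivity (NA) bias (sketch).

    Independent 'true' exposure (ERI ratio) and outcome (GHQ score) are both
    inflated by a shared NA trait; the explained variance r^2 that NA alone
    induces is returned for each simulated population. Distributions and the
    effect size are illustrative, not those of the original study.
    """
    rng = np.random.default_rng() if rng is None else rng
    r2 = np.empty(n_pop)
    for p in range(n_pop):
        na = rng.normal(size=n_subj)                    # latent NA trait
        eri = rng.normal(size=n_subj) + na_effect * na  # exposure + NA bias
        ghq = rng.normal(size=n_subj) + na_effect * na  # outcome + NA bias
        r2[p] = np.corrcoef(eri, ghq)[0, 1] ** 2
    return r2

# Median and 95th percentile of the NA-induced explained variance
print(np.percentile(simulate_na_bias(n_pop=1000), [50, 95]))
```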
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock-particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end with describing some developments that are currently underway or should take place in the foreseeable future.
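For reference, the heat production-diffusion-advection equation that thermal-kinematic codes of this kind solve is commonly written in the following general form (the notation and source-term convention may differ from the PECUBE documentation):

```latex
% General heat production-diffusion-advection equation (notation may differ
% from the PECUBE documentation):
\rho c \left( \frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T \right)
  = \nabla \cdot \left( k \, \nabla T \right) + \rho H
```

Here T is temperature, v the rock advection velocity, k the thermal conductivity, ρ the density, c the specific heat capacity, and H the radiogenic heat production per unit mass; the time-varying surface topography enters through the upper boundary condition.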
Abstract:
Over the past decades, several sensitive post-electrophoretic stains have been developed for the identification of proteins in general, or for the specific detection of post-translational modifications such as phosphorylation, glycosylation or oxidation. Yet, for the visualization and quantification of protein differences, differential two-dimensional gel electrophoresis, termed DIGE, has become the method of choice for detecting differences between two sets of proteomes. The goal of this review is to evaluate the use of the most common non-covalent and covalent staining techniques in 2D electrophoresis gels, in order to obtain maximal information per electrophoresis gel and to identify potential biomarkers. We also discuss the use of detergents during covalent labeling, the identification of oxidative modifications, and the influence of detergents on fingerprint analysis and MS/MS identification in relation to 2D electrophoresis.
Abstract:
OBJECTIVES: The goal of the present study was to develop a strategy for three-dimensional (3D) volume acquisition along the major axes of the coronary arteries. BACKGROUND: For high-resolution 3D free-breathing coronary magnetic resonance angiography (MRA), coverage of the coronary artery tree may be limited due to excessive measurement times associated with large volume acquisitions. Planning the 3D volume along the major axis of the coronary vessels may help to overcome such limitations. METHODS: Fifteen healthy adult volunteers and seven patients with X-ray angiographically confirmed coronary artery disease underwent free-breathing navigator-gated and corrected 3D coronary MRA. For an accurate volume targeting of the high resolution scans, a three-point planscan software tool was applied. RESULTS: The average length of contiguously visualized left main and left anterior descending coronary artery was 81.8 +/- 13.9 mm in the healthy volunteers and 76.2 +/- 16.5 mm in the patients (p = NS). For the right coronary artery, a total length of 111.7 +/- 27.7 mm was found in the healthy volunteers and 79.3 +/- 4.6 mm in the patients (p = NS). Comparing coronary MRA and X-ray angiography, a good agreement of anatomy and pathology was found in the patients. CONCLUSIONS: Double-oblique submillimeter free-breathing coronary MRA allows depiction of extensive parts of the native coronary arteries. The results obtained in patients suggest that the method has the potential to be applied in broader prospective multicenter studies where coronary MRA is compared with X-ray angiography.
Abstract:
The structure of the yeast DNA-dependent RNA polymerase I (RNA Pol I), prepared by cryo-negative staining, was studied by electron microscopy. A structural model of the enzyme at a resolution of 1.8 nm was determined from the analysis of isolated molecules and showed an excellent fit with the atomic structure of the RNA Pol II Delta4/7. The high signal-to-noise ratio (SNR) of the stained molecular images revealed a conformational flexibility within the image data set that could be recovered in three dimensions after implementation of a novel strategy to sort the "open" and "closed" conformations in our heterogeneous data set. This conformational change maps to the "wall/flap" domain of the second largest subunit (beta-like) and allows better access to the DNA-binding groove. This displacement of the wall/flap domain could play an important role in the transition between the initiation and elongation states of the enzyme. Moreover, a protrusion was apparent in the cryo-negatively stained model, which was absent in the atomic structure and was not detected in previous 3D models of RNA Pol I. This structure could, however, be detected in unstained views of the enzyme obtained from frozen hydrated 2D crystals, indicating that this novel feature is not induced by the staining process. Unexpectedly, negatively charged molybdenum compounds were found to accumulate within the DNA-binding groove, which is best explained by the highly positive electrostatic potential of this region of the molecule, thus suggesting that the stain distribution reflects the overall surface charge of the molecule.
Abstract:
Usually, the differentiation of inks on questioned documents is carried out by optical methods and thin layer chromatography (TLC). Spectrometric methods have also been proposed in the forensic literature for the analysis of dyes. Among these techniques, laser desorption/ionization mass spectrometry (LDI-MS) has demonstrated great versatility thanks to its sensitivity to blue ballpoint ink dyes and minimal sample destruction. Previous research concentrated mostly on the LDI-MS positive mode and has shown that this analytical tool offers higher discrimination power than high performance TLC (HPTLC) for the differentiation of blue ballpoint inks. Although LDI-MS negative mode has already been applied in numerous forensic domains, such as the study of works of art, automotive paints or rollerball pens, its potential for the discrimination of ballpoint pens had never been studied before. The aim of the present paper is therefore to evaluate its potential for the discrimination of blue ballpoint inks. After optimization of the method, ink entries from 33 blue ballpoint pens were analyzed directly on paper in both positive and negative modes by LDI-MS. Several cationic and anionic ink components were identified in the inks; the pens were then classified and compared according to their formulations. The results show that the additional information provided by anionic dyes and pigments significantly increases the discrimination power of the positive mode. In fact, it was demonstrated that the classifications obtained by the two modes were, to some extent, complementary (i.e., inks with specific cationic dyes did not necessarily contain the same anionic components).