990 results for meal pre-analytical variability


Relevance: 30.00%

Abstract:

Voriconazole (VRC) is a broad-spectrum antifungal triazole with nonlinear pharmacokinetics. The utility of measuring voriconazole blood levels for optimizing therapy is a matter of debate. Available high-performance liquid chromatography (HPLC) and bioassay methods are technically complex, time-consuming, or have a narrow analytical range. The objectives of the present study were to develop new, simple analytical methods and to assess the variability of voriconazole blood levels in patients with invasive mycoses. Acetonitrile precipitation, reverse-phase separation, and UV detection were used for HPLC. A voriconazole-hypersusceptible Candida albicans mutant lacking multidrug efflux transporters (cdr1Δ/cdr1Δ, cdr2Δ/cdr2Δ, flu1Δ/flu1Δ, and mdr1Δ/mdr1Δ) and calcineurin subunit A (cnaΔ/cnaΔ) was used for the bioassay. Mean intra-/interrun accuracies over the VRC concentration range from 0.25 to 16 mg/liter were 93.7% ± 5.0%/96.5% ± 2.4% (HPLC) and 94.9% ± 6.1%/94.7% ± 3.3% (bioassay). Mean intra-/interrun coefficients of variation were 5.2% ± 1.5%/5.4% ± 0.9% and 6.5% ± 2.5%/4.0% ± 1.6% for HPLC and bioassay, respectively. The coefficient of concordance between HPLC and bioassay was 0.96. Sequential measurements in 10 patients with invasive mycoses showed important inter- and intraindividual variations of the estimated voriconazole area under the concentration-time curve (AUC): median, 43.9 mg·h/liter (range, 12.9 to 71.1) on the first and 27.4 mg·h/liter (range, 2.9 to 93.1) on the last day of therapy. During therapy, the AUC decreased in five patients, increased in three, and remained unchanged in two. A toxic encephalopathy probably related to the increase of the VRC AUC (from 71.1 to 93.1 mg·h/liter) was observed. The VRC AUC decreased (from 12.9 to 2.9 mg·h/liter) in a patient with persistent signs of invasive aspergillosis. These preliminary observations suggest that voriconazole over- or underexposure resulting from the variability of blood levels might have clinical implications. The simple HPLC and bioassay methods offer new tools for monitoring voriconazole therapy.
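As an illustration of the pharmacokinetic quantities reported above, the short Python sketch below estimates an AUC from sequential blood levels with the trapezoidal rule and derives an intra-run accuracy and coefficient of variation from replicate QC measurements. All concentration values, sampling times and the nominal QC level are hypothetical, not taken from the study, and the trapezoidal rule is only one common way to estimate an AUC.

```python
import numpy as np

# hypothetical sampling times (h) and plasma concentrations (mg/liter)
times = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
conc = np.array([0.3, 4.1, 3.2, 2.4, 1.5, 0.9])

# trapezoidal-rule AUC over the sampled interval (mg*h/liter)
auc = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(times))
print(f"AUC(0-12 h) = {auc:.1f} mg*h/liter")

# intra-run precision/accuracy from replicate measurements of one QC sample
replicates = np.array([4.05, 3.92, 4.20, 3.98, 4.10])   # measured, mg/liter
nominal = 4.0                                            # hypothetical nominal QC level
cv = replicates.std(ddof=1) / replicates.mean() * 100
accuracy = replicates.mean() / nominal * 100
print(f"intra-run CV = {cv:.1f}%, accuracy = {accuracy:.1f}%")
```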

Relevance: 30.00%

Abstract:

Inhibitory control refers to the ability to suppress planned or ongoing cognitive or motor processes. Electrophysiological indices of inhibitory control failure have been found to manifest even before the presentation of the stimuli triggering the inhibition, suggesting that pre-stimulus brain states modulate inhibition performance. However, previous electrophysiological investigations of the state-dependency of inhibitory control were based on averaged event-related potentials (ERPs), a method eliminating the variability in the ongoing brain activity not time-locked to the event of interest. These studies thus left unresolved whether spontaneous variations in the brain state immediately preceding unpredictable inhibition-triggering stimuli also influence inhibitory control performance. To address this question, we applied single-trial EEG topographic analyses to the time interval immediately preceding NoGo stimuli in conditions where the responses to NoGo trials were correctly inhibited [correct rejections (CRs)] vs. committed [false alarms (FAs)] during an auditory spatial Go/NoGo task. We found a specific configuration of the EEG voltage field manifesting more frequently before correctly inhibited responses to NoGo stimuli than before FAs. There was no evidence for an EEG topography occurring more frequently before FAs than before CRs. The visualization of distributed electrical source estimations of the EEG topography preceding successful response inhibition suggested that it resulted from the activity of a right fronto-parietal brain network. Our results suggest that fluctuations in the ongoing brain activity immediately preceding stimulus presentation contribute to the behavioral outcome of an inhibitory control task. They further suggest that the state-dependency of sensory-cognitive processing might not only concern perceptual processes, but also high-order, top-down inhibitory control mechanisms.
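A rough sketch of the single-trial logic described above: assign each pre-stimulus EEG epoch to a template voltage map (k-means is used here as a simple stand-in for the topographic clustering actually employed) and test whether one map occurs more often before correct rejections than before false alarms. The data are simulated noise, and the electrode count, trial numbers and statistical test are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_trials, n_electrodes = 200, 64
maps = rng.standard_normal((n_trials, n_electrodes))   # stand-in pre-stimulus voltage maps
is_cr = rng.random(n_trials) < 0.7                      # simulated trial outcome: CR vs FA

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(maps)

# contingency table: occurrences of map 0 vs the other maps, before CR vs FA trials
table = [[int(np.sum((labels == 0) & is_cr)), int(np.sum((labels != 0) & is_cr))],
         [int(np.sum((labels == 0) & ~is_cr)), int(np.sum((labels != 0) & ~is_cr))]]
odds_ratio, p_value = fisher_exact(table)
print(table, f"p = {p_value:.3f}")
```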

Relevance: 30.00%

Abstract:

Cape Verde is a tropical oceanic ecosystem, highly fragmented and dispersed, with islands physically isolated by distance and depth. To understand how isolation affects the ecological variability in this archipelago, we conducted a research project on the community structure of the 18 most commercially important demersal fishes. An index of ecological distance based on species relative dominance (Di) is developed from Catch Per Unit Effort, derived from an extensive database of artisanal fisheries. Two ecological measures of distance between islands are calculated: at the species level, DDi, and at the community level, DD (the sum of DDi). A physical isolation factor (Idb) combining distance (d) and bathymetry (b) is proposed. Covariance analysis shows that the isolation factor is positively correlated with both DDi and DD, suggesting that Idb can be considered an ecological isolation factor. The effect of Idb varies with season and species. This effect is stronger in summer (May to November) than in winter (December to April), which appears to be more unstable. Species react differently to Idb, independently of season. A principal component analysis on the monthly DDi values for the 12 islands and the 18 species, complemented by an agglomerative hierarchical clustering, shows a geographic pattern of island organization according to Idb. The results indicate that the ecological structure of the demersal fish communities of the Cape Verde archipelago, in both time and space, can be explained by a geographic isolation factor. The analytical approach used here is promising and could be tested in other archipelago systems.
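The abstract does not give the exact formulas behind Di, DDi, DD and Idb, so the sketch below only illustrates the kind of quantities involved under explicit assumptions: Di is taken as a species' share of the total CPUE on an island, DDi as the absolute difference of Di between two islands, DD as the sum of DDi over species, and Idb as a simple combination of inter-island distance and maximum depth. The CPUE values, distance and depth are invented.

```python
import numpy as np

cpue_island_a = np.array([120.0, 40.0, 15.0])   # hypothetical CPUE for 3 species on island A
cpue_island_b = np.array([60.0, 80.0, 10.0])    # same species on island B

di_a = cpue_island_a / cpue_island_a.sum()       # assumed relative dominance Di
di_b = cpue_island_b / cpue_island_b.sum()

ddi = np.abs(di_a - di_b)                        # assumed species-level distance DDi
dd = ddi.sum()                                   # community-level distance DD

d_km, depth_m = 55.0, 1500.0                     # hypothetical distance and maximum depth
idb = d_km * np.log10(depth_m)                   # one arbitrary way to combine d and b

print(f"DDi = {np.round(ddi, 3)}, DD = {dd:.3f}, Idb = {idb:.1f}")
```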

Relevance: 30.00%

Abstract:

BACKGROUND: Protein-energy malnutrition is highly prevalent in aged populations, and the associated clinical, economic, and social burden is substantial. A valid screening method that is robust and precise, but also easy, simple, and rapid to apply, is essential for adequate therapeutic management. OBJECTIVES: To compare the interobserver variability of 2 methods measuring food intake: semiquantitative visual estimations made by nurses versus calorie measurements performed by dieticians on the basis of standardized color digital photographs of servings before and after consumption. DESIGN: Observational monocentric pilot study. SETTING/PARTICIPANTS: A geriatric ward. The meals were randomly chosen from the meal tray; the choice was anonymous with respect to the patients who consumed them. MEASUREMENTS: The test method consisted of the estimation of calorie consumption by dieticians on the basis of standardized color digital photographs of servings before and after consumption. The reference method was based on direct visual estimations of the meals by nurses. Food intake was expressed as a percentage of the serving consumed, and calorie intake was then calculated by a dietician based on these percentages. The methods were applied with no previous training of the observers. Analysis of variance was performed to compare their interobserver variability. RESULTS: Of 15 meals consumed and initially examined, 6 were assessed with each method. Servings not consumed at all (0% consumption) or entirely consumed by the patient (100% consumption) were not included in the analysis so as to avoid systematic error. The digital photography method showed higher interobserver variability in calorie intake estimations, and the difference between the compared methods was statistically significant (P < .03). CONCLUSIONS: Calorie intake measures for geriatric patients are more concordant when estimated in a semiquantitative way. Digital photography for food intake estimation without previous specific training of dieticians should not be considered a reference method in geriatric settings, as it shows no advantage in terms of interobserver variability.
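The exact ANOVA model used in the study is not described in the abstract; the sketch below shows one simple way to contrast the interobserver spread of the two methods, using invented percentage estimates (rows are meals, columns are observers), a pooled within-meal standard deviation, and a nonparametric test on the per-meal observer variances.

```python
import numpy as np
from scipy import stats

# hypothetical % of serving consumed; rows = meals, columns = observers
visual = np.array([[50, 55, 50],
                   [75, 75, 70],
                   [25, 30, 25],
                   [50, 50, 45]], dtype=float)
photo = np.array([[50, 65, 40],
                  [75, 60, 80],
                  [25, 40, 20],
                  [50, 35, 60]], dtype=float)

def pooled_interobserver_sd(x):
    # within-meal (between-observer) standard deviation, pooled over meals
    return float(np.sqrt(np.mean(x.var(axis=1, ddof=1))))

print("visual method, pooled interobserver SD:", round(pooled_interobserver_sd(visual), 1))
print("photo method, pooled interobserver SD:", round(pooled_interobserver_sd(photo), 1))

# simple paired test on the per-meal observer variances of the two methods
stat, p = stats.wilcoxon(visual.var(axis=1, ddof=1), photo.var(axis=1, ddof=1))
print(f"Wilcoxon on per-meal variances: p = {p:.3f}")
```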

Relevance: 30.00%

Abstract:

The region of greatest variability on soil maps is along the edges of their polygons, causing disagreement among pedologists about the appropriate description of soil classes at these locations. The objective of this work was to propose a strategy for data pre-processing applied to digital soil mapping (DSM). Soil polygons on a training map were shrunk by 100 and 160 m. This strategy prevented covariates located near the edges of the soil classes from being used by the Decision Tree (DT) models. Three DT models, derived from eight predictive covariates related to relief and organism factors and sampled on the original polygons of the soil map and on the polygons shrunk by 100 and 160 m, were used to predict soil classes. The DT model derived from observations 160 m away from the edges of the polygons on the original map is less complex and has better predictive performance.
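A minimal sketch of the pre-processing idea, assuming a GeoPandas/scikit-learn workflow: the training soil polygons are shrunk with a negative buffer so that covariate samples near polygon edges never reach the decision tree. The file names and covariate names are placeholders standing in for the actual data of the study.

```python
import geopandas as gpd
from sklearn.tree import DecisionTreeClassifier

# hypothetical inputs: a polygon soil map and a point layer of covariate samples
soil_map = gpd.read_file("soil_map.shp")
soil_map["geometry"] = soil_map.geometry.buffer(-160)    # shrink polygons by 160 m

samples = gpd.read_file("covariate_samples.shp")
inside = gpd.sjoin(samples, soil_map[["soil_class", "geometry"]], predicate="within")

X = inside[["elevation", "slope", "ndvi"]]               # placeholder covariate names
y = inside["soil_class"]
tree = DecisionTreeClassifier(min_samples_leaf=20).fit(X, y)
print(tree.get_depth(), "levels,", tree.get_n_leaves(), "leaves")
```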

Relevance: 30.00%

Abstract:

A simple and sensitive liquid chromatography-electrospray ionization mass spectrometry method was developed for the simultaneous quantification in human plasma of all selective serotonin reuptake inhibitors (citalopram, fluoxetine, fluvoxamine, paroxetine and sertraline) and their main active metabolites (desmethyl-citalopram and norfluoxetine). A stable isotope-labeled internal standard was used for each analyte to compensate for the global method variability, including extraction and ionization variations. After sample (250 μl) pre-treatment with acetonitrile (500 μl) to precipitate proteins, a fast solid-phase extraction procedure was performed using a mixed-mode Oasis MCX 96-well plate. Chromatographic separation was achieved in less than 9.0 min on an XBridge C18 column (2.1 × 100 mm; 3.5 μm) using a gradient of ammonium acetate (pH 8.1; 50 mM) and acetonitrile as mobile phase at a flow rate of 0.3 ml/min. The method was fully validated according to Société Française des Sciences et Techniques Pharmaceutiques protocols and the latest Food and Drug Administration guidelines. Six-point calibration curves were used to cover a large concentration range of 1-500 ng/ml for citalopram, desmethyl-citalopram, paroxetine and sertraline, 1-1000 ng/ml for fluoxetine and fluvoxamine, and 2-1000 ng/ml for norfluoxetine. Good quantitative performances were achieved in terms of trueness (84.2-109.6%), repeatability (0.9-14.6%) and intermediate precision (1.8-18.0%) over the entire assay range, including the lower limit of quantification. Internal standard-normalized matrix effects were lower than 13%. The accuracy profiles (total error) were mainly included in the acceptance limits of ±30% for biological samples. The method was successfully applied to the routine therapeutic drug monitoring of more than 1600 patient plasma samples over 9 months. The β-expectation tolerance intervals determined during the validation phase were coherent with the results of quality control samples analyzed during routine use. This method is therefore precise and suitable both for therapeutic drug monitoring and pharmacokinetic studies in most clinical laboratories.
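As a side illustration of the validation metrics quoted above, the sketch below derives trueness, repeatability and intermediate precision from QC replicates measured over several runs, using a simple one-way variance decomposition. The QC layout, nominal concentration and values are hypothetical and not taken from this validation.

```python
import numpy as np

nominal = 50.0   # ng/ml, hypothetical nominal QC concentration
# hypothetical QC results: rows = runs (days), columns = replicates within a run
qc = np.array([[48.9, 51.2, 49.5],
               [52.0, 50.4, 51.1],
               [47.8, 49.0, 48.5]])

trueness = qc.mean() / nominal * 100

ms_within = np.mean(qc.var(axis=1, ddof=1))                           # within-run variance
var_between = max(qc.mean(axis=1).var(ddof=1) - ms_within / qc.shape[1], 0.0)

repeatability_cv = np.sqrt(ms_within) / qc.mean() * 100
intermediate_cv = np.sqrt(ms_within + var_between) / qc.mean() * 100

print(f"trueness = {trueness:.1f}%")
print(f"repeatability CV = {repeatability_cv:.1f}%, intermediate precision CV = {intermediate_cv:.1f}%")
```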

Relevance: 30.00%

Abstract:

The safe and responsible development of engineered nanomaterials (ENM), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano"-legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to the risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Due to the high batch variability of the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use) or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore, source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data from the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques should be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to fully describe ENMs. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonization should be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with the current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.

Relevance: 30.00%

Abstract:

BACKGROUND: While the assessment of analytical precision within medical laboratories has received much attention in scientific enquiry, the degree of variation between laboratories, as well as its sources, remains incompletely understood. In this study, we quantified the variance components when performing coagulation tests with identical analytical platforms in different laboratories and computed intraclass correlation coefficients (ICC) for each coagulation test. METHODS: Data from eight laboratories measuring fibrinogen twice in twenty healthy subjects with one of 3 different platforms, and single measurements of prothrombin time (PT) and coagulation factors II, V, VII, VIII, IX, X, XI and XIII, were analysed. By platform, the variance components of (i) the subjects, (ii) the laboratory and the technician and (iii) the total variance were obtained for fibrinogen, as well as (i) and (iii) for the remaining factors, using ANOVA. RESULTS: The variability of fibrinogen measurements within a laboratory ranged from 0.02 to 0.04; the variability between laboratories ranged from 0.006 to 0.097. The ICC for fibrinogen ranged from 0.37 to 0.66 and from 0.19 to 0.80 for PT between the platforms. For the remaining factors the ICCs ranged from 0.04 (FII) to 0.93 (FVIII). CONCLUSIONS: Variance components that could be attributed to technicians or laboratory procedures were substantial, led to disappointingly low intraclass correlation coefficients for several factors and were pronounced for some of the platforms. Our findings call for sustained efforts to raise the level of standardization of structures and procedures involved in the quantification of coagulation factors.
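A small sketch of an intraclass correlation of the kind reported here, assuming a one-way random-effects layout in which the within-subject spread is attributed to laboratory/technician effects. The fibrinogen values are invented; the formula is the standard ICC(1,1) built from the between- and within-subject mean squares.

```python
import numpy as np

# hypothetical fibrinogen results (g/l); rows = subjects, columns = laboratories
x = np.array([[2.8, 3.0, 2.7, 3.1],
              [3.6, 3.4, 3.8, 3.5],
              [2.2, 2.5, 2.3, 2.6],
              [4.1, 3.9, 4.3, 4.0]])
n_subj, k = x.shape

ms_between = k * x.mean(axis=1).var(ddof=1)        # between-subject mean square
ms_within = np.mean(x.var(axis=1, ddof=1))         # within-subject (laboratory) mean square

var_subj = (ms_between - ms_within) / k            # between-subject variance component
icc = var_subj / (var_subj + ms_within)            # ICC(1,1)
print(f"between-subject variance = {var_subj:.3f}, within = {ms_within:.3f}, ICC = {icc:.2f}")
```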

Relevance: 30.00%

Abstract:

Drug screening is an important issue in clinical and forensic toxicology. Gas chromatography coupled to mass spectrometry (GC-MS) remains the gold-standard technique for the screening of unknown compounds in urine samples. However, this technique requires substantial sample preparation, which is time-consuming. Moreover, some common drugs such as cannabis cannot be easily detected in urine using general procedures. In this work, a sample preparation protocol for treating 200 μL of urine in less than 30 min is described. The enzymatic hydrolysis of glucuro-conjugates was performed in 5 min thanks to the use of microwaves. The use of deconvolution software allowed the GC-MS run to be reduced to 10 min without impairing the quality of the compound identifications. Comparing the results from 139 authentic urine samples to those obtained using the current routine analysis indicated that this method performed well. Moreover, additional 5-min GC-MS/MS programs are described, enabling a very sensitive target screening of 54 drugs, including THC-COOH and buprenorphine, without further sample preparation. These methods appear to be an interesting alternative to immunoassay-based screening. The analytical strategy presented in this article proved to be a promising approach for the systematic toxicological analysis (STA) of drugs in urine.

Relevance: 30.00%

Abstract:

U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, as they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of highest importance in LA-ICPMS geochronology. We evaluate the statistical distribution of the sweep intensities with goodness-of-fit tests in order to find a model probability distribution that fits the data and allows an appropriate formulation of the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio; thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and averaging related to the dwell-time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach for calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; we show that the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, that such fractionation behaviour requires using a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector-field ICPMS.
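To make the contrast between estimators concrete, the sketch below computes (1) the ratio of the mean intensities and (2) the mean of the per-sweep intensity ratios on simulated, deliberately low-count and drifting sweep data. The signal model and count levels are invented and much simpler than real LA-ICPMS data; the intercept method is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sweeps = 500
true_ratio = 0.16                                   # assumed "true" 206Pb/238U intensity ratio

# simulated, steadily decreasing ablation signal (expected 238U counts per sweep)
lam = np.linspace(400, 60, n_sweeps)
u238 = rng.poisson(lam).astype(float)
pb206 = rng.poisson(true_ratio * lam).astype(float)

ratio_of_means = pb206.mean() / u238.mean()         # method (1): ratio of the mean intensities
mean_of_ratios = np.mean(pb206 / u238)              # method (2): mean of the intensity ratios

print(f"ratio of mean intensities: {ratio_of_means:.5f}")
print(f"mean of intensity ratios : {mean_of_ratios:.5f}")
# the gap between the two estimators grows as counts drop or fluctuations increase
```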

Relevance: 30.00%

Abstract:

Purpose: The increase of the apparent diffusion coefficient (ADC) in treated hepatic malignancies compared to pre-therapeutic values has been interpreted as treatment success; however, the variability of ADC measurements remains unknown. Furthermore, ADC has usually been measured in the whole lesion, while measurements should probably be centered on the area with the most restricted diffusion (MRDA), as it represents potential residual tumor. Our objective was to compare the inter/intraobserver variability of ADC measurements in the whole lesion and in the MRDA. Material and methods: Forty patients previously treated with chemoembolization or radiofrequency ablation were evaluated (20 on 1.5 T and 20 on 3.0 T). After consensual agreement on the best ADC image, two readers measured the ADC values using separate regions of interest that included the whole lesion and the whole MRDA without exceeding their borders. The same measurements were repeated two weeks later. The Spearman test and the Bland-Altman method were used. Results: Interobserver correlation of ADC measurements in the whole lesion and in the MRDA was 0.962 and 0.884, respectively. Intraobserver correlation was, respectively, 0.992 and 0.979. Interobserver limits of variability (10⁻³ mm²/s) were between -0.25/+0.28 in the whole lesion and between -0.51/+0.46 in the MRDA. Intraobserver limits of variability were, respectively, -0.25/+0.24 and -0.43/+0.47. Conclusion: We observed good inter/intraobserver correlation of ADC measurements. Nevertheless, a limited variability does exist, and it should be considered when interpreting ADC values of hepatic malignancies.
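A minimal sketch of the Bland-Altman limits of agreement used to express interobserver variability, applied to two readers' ADC values. The eight measurements below are invented, in units of 10⁻³ mm²/s.

```python
import numpy as np

# hypothetical ADC values (10^-3 mm^2/s) from two readers on the same lesions
reader1 = np.array([1.10, 0.95, 1.42, 1.20, 1.05, 1.33, 0.88, 1.15])
reader2 = np.array([1.18, 0.90, 1.35, 1.28, 1.00, 1.40, 0.95, 1.10])

diff = reader1 - reader2
bias = diff.mean()
sd = diff.std(ddof=1)
low, high = bias - 1.96 * sd, bias + 1.96 * sd       # 95% limits of agreement
print(f"bias = {bias:+.3f}, limits of agreement = [{low:+.3f}, {high:+.3f}]")
```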

Relevance: 30.00%

Abstract:

Well-developed experimental procedures currently exist for retrieving and analyzing particle evidence from the hands of individuals suspected of being associated with the discharge of a firearm. Although analytical approaches (e.g. automated scanning electron microscopy with energy-dispersive X-ray microanalysis, SEM-EDS) allow the determination of the presence of elements typically found in gunshot residue (GSR) particles, such analyses provide no information about a given particle's actual source. Possible origins for which scientists may need to account are a primary exposure to the discharge of a firearm or a secondary transfer due to a contaminated environment. In order to approach such sources of uncertainty in the context of evidential assessment, this paper studies the construction and practical implementation of graphical probability models (i.e. Bayesian networks). These can assist forensic scientists in making the issue tractable within a probabilistic perspective. The proposed models focus on likelihood ratio calculations at various levels of detail as well as on case pre-assessment.
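A toy sketch of the kind of likelihood-ratio reasoning such a Bayesian network can encode: the probability of finding a characteristic GSR particle under the prosecution proposition (the person discharged a firearm) versus the defence proposition (no discharge, with a possible secondary transfer from a contaminated environment). All probabilities are invented for illustration and are not calibrated values from the paper.

```python
# conditional probabilities (all invented, for illustration only)
p_particle_given_discharge = 0.80         # transfer + persistence + recovery under Hp
p_contaminated_environment = 0.05         # prior for a contaminating environment under Hd
p_particle_given_contamination = 0.30     # secondary transfer given contamination
p_particle_background = 0.01              # chance finding with no discharge, no contamination

p_e_given_hp = p_particle_given_discharge
p_e_given_hd = (p_contaminated_environment * p_particle_given_contamination
                + (1 - p_contaminated_environment) * p_particle_background)

likelihood_ratio = p_e_given_hp / p_e_given_hd
print(f"P(E|Hp) = {p_e_given_hp:.3f}, P(E|Hd) = {p_e_given_hd:.3f}, LR = {likelihood_ratio:.1f}")
```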

Relevance: 30.00%

Abstract:

The aim of this work is to study the influence of several analytical parameters on the variability of Raman spectra of paint samples. In the present study, microtome thin sectioning and direct analysis (no preparation) are considered as sample preparation methods. In order to evaluate their influence on the measurements, a fractional factorial experimental design with seven factors (including the sampling process) is applied, for a total of 32 experiments representing 160 measurements. Once the influence of sample preparation was highlighted, a depth profile of a paint sample was carried out by changing the focusing plane in order to measure the colored layer under a clearcoat. This was undertaken in order to avoid sample preparation such as microtome sectioning. Finally, chemometric treatments such as principal component analysis are applied to the resulting spectra. The findings of this study indicate the importance of sample preparation, or more specifically of surface roughness, on the variability of measurements on the same sample. Moreover, the depth profile experiment highlights the influence of the refractive index of the upper layer (clearcoat) when measuring through a transparent layer.
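A short sketch of a 2^(7-2) fractional factorial design of the size used here (7 two-level factors in 32 runs), built from a full factorial on five base factors plus two generated columns. The choice of generators and the factor names are assumptions for illustration, not the study's actual design.

```python
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=5)))   # 32 x 5 full factorial
col_f = np.prod(base[:, [0, 1, 2, 3]], axis=1)                 # generated column F = ABCD
col_g = np.prod(base[:, [0, 1, 2, 4]], axis=1)                 # generated column G = ABCE
design = np.column_stack([base, col_f, col_g])                 # 32 runs x 7 factors

factors = ["preparation", "objective", "laser power", "acquisition time",
           "accumulations", "spot position", "focus"]           # placeholder factor names
print(design.shape)                                            # (32, 7)
print(dict(zip(factors, design[0].tolist())))
```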

Relevance: 30.00%

Abstract:

The establishment of legislative rules about explosives in the eighties has reduced the illicit use of military and civilian explosives. However, bomb-makers have rapidly taken advantage of easily accessible substances intended for licit uses to produce their own explosives. This change in strategy has given rise to an increase in improvised explosive charges, further assisted by the ease of implementation of recipes that are widely available from open sources. While the nature of the explosive charges has evolved, the instrumental methods currently used in routine casework, although more sensitive than before, have a limited power of discrimination and mostly allow the determination of the chemical nature of the substance. Isotope ratio mass spectrometry (IRMS) has been applied to a wide range of forensic materials. Conclusions drawn from the majority of the studies stress its high power of discrimination. Preliminary studies conducted so far on the isotopic analysis of intact explosives (pre-blast) have shown that samples with the same chemical composition and coming from different sources could be differentiated. The measurement of stable isotope ratios therefore appears to be a new and remarkable analytical tool for the discrimination of substances or their identification with a definite source. However, much research is still needed to assess the validity of the results in order to use them either in an operational perspective or in court. Through the isotopic study of black powders and ammonium nitrates, this research aims at evaluating the contribution of isotope ratio mass spectrometry to the investigation of explosives, from both a pre-blast and a post-blast approach. More specifically, the goal of the research is to provide additional elements necessary for a valid interpretation of the results when used in explosives investigation. This work includes a fundamental study on the variability of the isotopic profile of black powder and ammonium nitrate in both space and time. On the one hand, the inter-variability between manufacturers and, particularly, the intra-variability within a manufacturer have been studied. On the other hand, the stability of the isotopic profile over time has been evaluated through the aging of these substances exposed to different environmental conditions. The second part of this project considers the applicability of this high-precision technology to traces and residues of explosives, taking account of the characteristics specific to the field, including their sampling, the probable isotopic fractionation during the explosion, and interferences with the matrix of the site.
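As a brief reminder of the quantity measured by IRMS, stable isotope ratios are conventionally reported in delta notation relative to an international standard, delta = (R_sample / R_standard - 1) * 1000, expressed in per mil. In the sketch below the sample ratio is invented; the VPDB 13C/12C reference value is the commonly quoted one.

```python
# delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil
r_standard = 0.0112372     # commonly quoted 13C/12C ratio of the VPDB standard
r_sample = 0.0111803       # invented sample ratio
delta_permil = (r_sample / r_standard - 1) * 1000
print(f"delta 13C = {delta_permil:+.1f} per mil")
```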

Relevance: 30.00%

Abstract:

"How old is this fingermark?" This question is relatively often raised in trials when suspects admit that they have left their fingermarks on a crime scene but allege that the contact occurred at a time different from that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has been validated and accepted by the whole forensic community. Nevertheless, the review of past American cases conducted in this research highlighted that experts have nonetheless given testimony in court about the age of fingermarks, even though such testimony was mostly based on subjective and badly documented parameters.
It was relatively easy to access fully described American cases, thus explaining the origin of the given examples. However, fingermark dating issues are encountered worldwide, and the lack of consensus among the given answers highlights the necessity to conduct research on the subject. The present work thus aims to study the possibility of developing an objective fingermark dating method. As the questions surrounding the development of dating procedures are not new, different attempts have already been described in the literature. This research proposes a critical review of these attempts and highlights that most of the reported methodologies still suffer from limitations preventing their use in actual practice. Nevertheless, some approaches based on the evolution of intrinsic compounds detected in fingermark residue over time appear to be promising. Thus, an exhaustive review of the literature was conducted in order to identify the compounds available in fingermark residue and the analytical techniques capable of analysing them. It was chosen to concentrate on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted in order to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, many molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of the fingermark residue was significantly lower than the inter-variability, but that it was possible to reduce both kinds of variability using different statistical pre-treatments inspired by drug profiling. It was also possible to propose an objective donor classification model allowing the grouping of donors into two main classes based on their initial lipid composition. These classes correspond to what is relatively subjectively called "good" or "bad" donors. The potential of such a model is high for the fingermark research field, as it allows the selection of representative donors based on compounds of interest. Using GC/MS and FTIR, an in-depth study of the effects of different influence factors on the initial composition and aging of target lipid molecules found in fingermark residue was conducted. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing techniques), but that some influence factors affected these models more than others. In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other tested factors (deposition moment, pressure, temperature and illumination) also affected the residue and its aging, but models combining different values of these factors still proved to be robust. Furthermore, test fingermarks were analysed with GC/MS in order to be dated using some of the generated models. It turned out that correct estimations were obtained for 60% of the dated test fingermarks, and up to 100% when the storage conditions were known. These results are interesting, but further research should be conducted to evaluate whether these models could be used in uncontrolled casework conditions.
From a more fundamental perspective, a pilot study was also conducted on the use of infrared spectroscopy combined with chemical imaging (FTIR-CI) in order to gain information about fingermark composition and aging. More precisely, its ability to highlight influence factors and aging effects over large areas of fingermarks was investigated. This information was then compared with that given by individual FTIR spectra. It was concluded that while FTIR-CI is a powerful tool, its use to study natural fingermark residue for forensic purposes has to be carefully considered. In fact, in this study, this technique did not yield more information on residue distribution than traditional FTIR spectra and also suffered from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas need to be covered. Finally, the results obtained in this research allowed the proposition and discussion of a formal and pragmatic framework for approaching fingermark dating questions. It identifies which type of information the scientist is currently able to bring to investigators and/or the courts. Furthermore, the proposed framework also describes the different iterative development steps that research should follow in order to achieve the validation of an objective fingermark dating methodology whose capacities and limits are well known and properly documented.