887 results for Computer forensic analysis
Abstract:
This communication seeks to draw the attention of researchers and practitioners dealing with forensic DNA profiling analyses to the following question: is a scientist's report, offering support to a hypothesis according to which a particular individual is the source of DNA detected during the analysis of a stain, relevant from the point of view of a Court of Justice? This question relates to skeptical views previously voiced by commentators, mainly in the judicial arena, but is avoided by a large majority of forensic scientists. Notwithstanding, the pivotal role of this question was recently evoked during the international conference "The hidden side of DNA profiles. Artifacts, errors and uncertain evidence" held in Rome (April 27th to 28th, 2012). Indeed, despite the fact that this conference brought together some of the world's leading forensic DNA specialists, it appeared clearly that a huge gap still exists between the questions lawyers are actually interested in and the answers that scientists deliver to Courts in written reports or during oral testimony. Participants in the justice system, namely lawyers and jurors on the one hand and forensic geneticists on the other, unfortunately speak considerably different languages. It is thus fundamental to address this issue of communication about the results of forensic DNA analyses, and to open a dialogue with practicing non-scientists at large who need to make meaningful use of scientific results to approach and help solve judicial cases. This paper intends to emphasize the timeliness of this topic and to suggest beneficial ways forward towards a more reasoned use of forensic DNA in criminal proceedings.
Abstract:
The flourishing number of publications on the use of isotope ratio mass spectrometry (IRMS) in forensic science denotes the enthusiasm and attraction generated by this technology. IRMS has demonstrated its potential to distinguish chemically identical compounds coming from different sources. Despite the numerous applications of IRMS to a wide range of forensic materials, its implementation in a forensic framework is less straightforward than it appears. In addition, each laboratory has developed its own strategy of analysis regarding calibration, sequence design, standards utilisation and data treatment, without a clear consensus. Through the experience acquired from research undertaken in different forensic fields, we propose a methodological framework for the whole process of using IRMS methods. We emphasize the importance of considering isotopic results as part of a whole approach when applying this technology to a particular forensic issue. The process is divided into six different steps, which should be considered for a thoughtful and relevant application. The dissection of this process into fundamental steps, further detailed, enables a better understanding of the essential, though not exhaustive, factors that have to be considered in order to obtain results of sufficient quality and robustness to proceed to retrospective analyses or interlaboratory comparisons.
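The isotopic results this framework deals with are conventionally reported in delta notation relative to an international reference. A minimal sketch of that calculation follows; the sample ratio and the VPDB reference ratio used here are illustrative values, not figures from the paper.

```python
# Delta notation used to report IRMS results: the relative deviation of a
# sample's isotope ratio from a standard, expressed in per mil.

def delta_permil(r_sample: float, r_standard: float) -> float:
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Commonly quoted VPDB 13C/12C reference ratio (illustrative precision)
R_VPDB = 0.0111802

# A hypothetical sample ratio slightly depleted in 13C
print(round(delta_permil(0.0110796, R_VPDB), 1))  # -9.0 per mil
```

A delta value near zero means the sample matches the standard; negative values indicate depletion in the heavy isotope, which is what source discrimination ultimately rests on.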
Abstract:
The determination of dyes present in illicit pills is shown to provide useful and easily obtained information for strategic and tactical drug intelligence. An analytical strategy including solid-phase extraction (SPE), thin-layer chromatography (TLC) and capillary zone electrophoresis equipped with a diode array detector (CZE-DAD) was developed to identify and quantify 14 hydrosoluble, acidic, synthetic food dyes allowed in the European Community. Indeed, these are the dyes most likely to be found in illicit pills, given their availability and ease of use. The results show (1) that this analytical method is well adapted to small samples such as illicit pills, (2) that most dyes actually found belong to the hydrosoluble, acidic, synthetic food dyes allowed in the European Community, and (3) that this evidence turns out to be important in drug intelligence and may be assessed within a Bayesian framework.
Abstract:
This article presents an experimental study of the classification ability of several classifiers for the multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, which makes the classification of cannabis seedlings a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with drug-type cannabis made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for either classification problem.
Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both the 2-class and 12-class classifications, with average classification results of up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified as drug-type cannabis the independent validation set, which consists of cannabis plants coming from police seizures. For forensic casework, this study shows that discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance, whether discriminating between cannabis chemotypes or between drug-type cannabis varieties.
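Nearest Neighbour Classification, one of the best performers above, can be sketched in a few lines of pure Python. The two-dimensional "compound proportion" vectors and class labels below are invented for illustration; the actual study used the relative proportions of eight leaf compounds measured by GC-MS.

```python
import math

def nearest_neighbour(train, query):
    """Return the label of the training sample closest to `query` (1-NN)."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        d = math.dist(features, query)  # Euclidean distance
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical relative proportions of two compounds per seedling
train = [
    ((0.8, 0.1), "drug"), ((0.7, 0.2), "drug"),
    ((0.1, 0.8), "fibre"), ((0.2, 0.7), "fibre"),
]
print(nearest_neighbour(train, (0.75, 0.15)))  # drug
print(nearest_neighbour(train, (0.15, 0.75)))  # fibre
```

A non-linear decision rule like this handles overlapping class regions better than a single linear boundary, which is consistent with the reported advantage of NNC and RBF SVMs over the linear classifiers.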
Abstract:
BACKGROUND: Clinical practice does not always reflect best practice and evidence, partly because of unconscious acts of omission, information overload, or inaccessible information. Reminders may help clinicians overcome these problems by prompting the doctor to recall information that they already know or would be expected to know and by providing information or guidance in a more accessible and relevant format, at a particularly appropriate time. OBJECTIVES: To evaluate the effects of reminders automatically generated through a computerized system and delivered on paper to healthcare professionals on processes of care (related to healthcare professionals' practice) and outcomes of care (related to patients' health condition). SEARCH METHODS: For this update the EPOC Trials Search Co-ordinator searched the following databases between June 11-19, 2012: The Cochrane Central Register of Controlled Trials (CENTRAL) and Cochrane Library (Economics, Methods, and Health Technology Assessment sections), Issue 6, 2012; MEDLINE, OVID (1946- ), Daily Update, and In-process; EMBASE, Ovid (1947- ); CINAHL, EbscoHost (1980- ); EPOC Specialised Register, Reference Manager, and INSPEC, Engineering Village. The authors reviewed reference lists of related reviews and studies. SELECTION CRITERIA: We included individual or cluster-randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) that evaluated the impact of computer-generated reminders delivered on paper to healthcare professionals on processes and/or outcomes of care. DATA COLLECTION AND ANALYSIS: Review authors working in pairs independently screened studies for eligibility and abstracted data. We contacted authors to obtain important missing information for studies that were published within the last 10 years. For each study, we extracted the primary outcome when it was defined or calculated the median effect size across all reported outcomes. 
We then calculated the median absolute improvement and interquartile range (IQR) in process adherence across included studies, using the primary outcome or median outcome as the representative outcome. MAIN RESULTS: In the 32 included studies, computer-generated reminders delivered on paper to healthcare professionals achieved moderate improvement in professional practices, with a median improvement in processes of care of 7.0% (IQR: 3.9% to 16.4%). Implementing reminders alone improved care by 11.2% (IQR 6.5% to 19.6%) compared with usual care, while implementing reminders in addition to another intervention improved care by only 4.0% (IQR 3.0% to 6.0%) compared with the other intervention. The quality of evidence for these comparisons was rated as moderate according to the GRADE approach. Two reminder features were associated with larger effect sizes: providing space on the reminder for the provider to enter a response (median 13.7% versus 4.3% for no response, P value = 0.01) and providing an explanation of the content or advice on the reminder (median 12.0% versus 4.2% for no explanation, P value = 0.02). Median improvement in processes of care also differed according to the behaviour the reminder targeted: for instance, reminders to vaccinate improved processes of care by 13.1% (IQR 12.2% to 20.7%) compared with other targeted behaviours. In the only study that had sufficient power to detect a clinically significant effect on outcomes of care, reminders were not associated with significant improvements. AUTHORS' CONCLUSIONS: There is moderate-quality evidence that computer-generated reminders delivered on paper to healthcare professionals achieve moderate improvement in processes of care. Two characteristics emerged as significant predictors of improvement: providing space on the reminder for a response from the clinician and providing an explanation of the reminder's content or advice.
The heterogeneity of the reminder interventions included in this review also suggests that reminders can improve care in various settings under various conditions.
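The summary statistic the review relies on, a median improvement with its interquartile range across studies, can be computed with the Python standard library. The per-study improvement percentages below are illustrative, not the review's extracted data.

```python
import statistics

# Hypothetical per-study absolute improvements in process adherence (%)
improvements = [3.0, 3.9, 5.5, 7.0, 9.1, 16.4, 21.0]

median = statistics.median(improvements)
# quantiles(n=4) returns the three quartile cut points (exclusive method)
q1, _, q3 = statistics.quantiles(improvements, n=4)
print(f"median {median}% (IQR: {q1}% to {q3}%)")  # median 7.0% (IQR: 3.9% to 16.4%)
```

Reporting the median with its IQR, rather than a pooled mean, is robust to the skewed effect sizes typical of heterogeneous implementation studies like those included here.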
Abstract:
The aim of this study was to evaluate the forensic protocol recently developed by Qiagen for the QIAsymphony automated DNA extraction platform. Samples containing low amounts of DNA were specifically considered, since they represent the majority of samples processed in our laboratory. The analysis of simulated blood and saliva traces showed that the highest DNA yields were obtained with the maximal elution volume available for the forensic protocol, that is, 200 μl. The resulting DNA extracts were too diluted for successful DNA profiling and required a concentration step. This additional step is time consuming and potentially increases inversion and contamination risks. The 200 μl DNA extracts were concentrated to 25 μl, and the DNA recovery was estimated with real-time PCR as well as with the percentage of SGM Plus alleles detected. Results using our manual protocol, based on the QIAamp DNA mini kit, and the automated protocol were comparable. Further tests will be conducted to determine more precisely DNA recovery, contamination risk and PCR inhibitor removal, once a definitive procedure allowing the concentration of DNA extracts from low-yield samples becomes available for the QIAsymphony.
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and that they be compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model.
It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and for the interpretation of their evidential value.
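Probabilistic assignment of evidential value is typically framed as a likelihood ratio: the probability of the observed similarity between two ink samples under the same-source hypothesis, divided by its probability under the different-source hypothesis. The sketch below uses hypothetical Gaussian score models; the parameters are illustrative and are not taken from the cited papers.

```python
import math

def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(score: float) -> float:
    # Assumed score distributions: same-source comparisons cluster near 0.9,
    # different-source comparisons near 0.4 (illustrative parameters only).
    p_same = gaussian_pdf(score, 0.9, 0.05)
    p_diff = gaussian_pdf(score, 0.4, 0.15)
    return p_same / p_diff

print(likelihood_ratio(0.88) > 1)  # high similarity favours same source
print(likelihood_ratio(0.35) < 1)  # low similarity favours different source
```

An LR above 1 lends support to the same-source hypothesis and an LR below 1 to the alternative, which is what makes the resulting statement transparent and balanced rather than a bare expert opinion.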
Abstract:
Due to various contexts and processes, forensic science communities may have different approaches, largely influenced by their criminal justice systems. However, forensic science practices share some common characteristics. One is the assurance of a high (scientific) quality within processes and practices. For most crime laboratory directors and forensic science associations, this issue is conditioned by the triangle of quality, which represents the current paradigm of quality assurance in the field. It consists of the implementation of standardization, certification, accreditation, and an evaluation process. It constitutes a clear and sound way to exchange data between laboratories and enables databasing due to standardized methods ensuring reliable and valid results; but it is also a means of defining minimum requirements for practitioners' skills for specific forensic science activities. The control of each of these aspects offers non-forensic science partners the assurance that the entire process has been mastered and is trustworthy. Most of the standards focus on the analysis stage and do not consider pre- and post-laboratory stages, namely, the work achieved at the investigation scene and the evaluation and interpretation of the results, intended for intelligence beneficiaries or for court. Such localized consideration prevents forensic practitioners from identifying where the problems really lie with regard to criminal justice systems. According to a performance-management approach, scientific quality should not be restricted to standardized procedures and controls in forensic science practice. Ensuring high quality also strongly depends on the way a forensic science culture is assimilated (into specific education training and workplaces) and in the way practitioners understand forensic science as a whole.
Abstract:
A second collaborative exercise on RNA/DNA co-analysis for body fluid identification and STR profiling was organized by the European DNA Profiling Group (EDNAP). Six human blood stains, two blood dilution series (5-0.001 μl blood) and, optionally, bona fide or mock casework samples of human or non-human origin were analyzed by the participating laboratories using a RNA/DNA co-extraction or solely RNA extraction method. Two novel mRNA multiplexes were used for the identification of blood: a highly sensitive duplex (HBA, HBB) and a moderately sensitive pentaplex (ALAS2, CD3G, ANK1, SPTB and PBGD). The laboratories used different chemistries and instrumentation. All of the 18 participating laboratories were able to successfully isolate and detect mRNA in dried blood stains. Thirteen laboratories simultaneously extracted RNA and DNA from individual stains and were able to utilize mRNA profiling to confirm the presence of blood and to obtain autosomal STR profiles from the blood stain donors. The positive identification of blood and good quality DNA profiles were also obtained from old and compromised casework samples. The method proved to be reproducible and sensitive using different analysis strategies. The results of this collaborative exercise involving a RNA/DNA co-extraction strategy support the potential use of an mRNA based system for the identification of blood in forensic casework that is compatible with current DNA analysis methodology.
Abstract:
In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits, but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent.
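One of the standard haplotype-based parameters referred to above is haplotype diversity, commonly computed with Nei's formula H = n/(n-1) * (1 - sum p_i^2). A minimal sketch on made-up haplotype counts (not the study's data):

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's haplotype diversity: n/(n-1) * (1 - sum of squared frequencies)."""
    n = len(haplotypes)
    freqs = [count / n for count in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

# Hypothetical sample: 6 males carrying 5 distinct Y-STR haplotypes
sample = ["h1", "h1", "h2", "h3", "h4", "h5"]
print(round(haplotype_diversity(sample), 3))  # 0.933
```

The closer H gets to 1, the more likely two randomly chosen males carry different haplotypes, which is why adding loci (as PPY23 does) raises discriminatory power.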
Abstract:
The quadrennial need study was developed to assist in identifying county highway financial needs (construction, rehabilitation, maintenance, and administration) and in distributing the road use tax fund (RUTF) among the counties in the state. During the period since the need study was first conducted using HWYNEEDS software, between 1982 and 1998, there have been large fluctuations in the level of funds distributed to individual counties. A recent study performed by Jim Cable (HR-363, 1993) found that one of the major factors affecting the volatility in the level of fluctuations is the quality of the pavement condition data collected and the accuracy of these data. In 1998, Center for Transportation Research and Education researchers (Maze and Smadi) completed a project to study the feasibility of using automated pavement condition data, collected for the Iowa Pavement Management Program (IPMP) for the paved county roads, in the HWYNEEDS software (TR-418). The automated condition data are objective and also more current, since they are collected in a two-year cycle compared to the 10-year cycle currently used by HWYNEEDS. The study proved that the use of the automated condition data in HWYNEEDS would be feasible and beneficial in reducing fluctuations when applied to a pilot study area. In another recommendation from TR-418, the researchers recommended a full analysis and investigation of the HWYNEEDS methodology and parameters (for more information on the project, please review the TR-418 project report). The study reported in this document builds on the previous study on using the automated condition data in HWYNEEDS and covers the analysis and investigation of the HWYNEEDS computer program methodology and parameters.
The underlying hypothesis for this study is that, along with the IPMP automated condition data, some changes need to be made to HWYNEEDS parameters to accommodate the use of the new data, which will stabilize the process of allocating resources and reduce fluctuations from one quadrennial need study to another. Another objective of this research is to investigate gravel road needs and study the feasibility of developing a more objective approach to determining needs on the counties' gravel road networks. This study identifies new procedures by which the HWYNEEDS computer program is used to conduct the quadrennial need study on paved roads. Also, a new procedure will be developed to determine gravel road needs outside of the HWYNEEDS program. Recommendations are identified for the new procedures and also in terms of making changes to the current quadrennial need study. Future research areas are also identified.
Abstract:
This paper presents a new method to analyze time-invariant linear networks allowing the existence of inconsistent initial conditions. This method is based on the use of distributions and state equations. Any time-invariant linear network can be analyzed. The network can involve any kind of pure or controlled sources. Also, the transfers of energy that occur at t=0 are determined, and the concept of connection energy is introduced. The algorithms are easily implemented in a computer program.
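The textbook example of an inconsistent initial condition is two capacitors at different voltages connected at t=0: charge is conserved across the impulsive connection, but stored energy is not, and the deficit is the kind of quantity a connection-energy analysis accounts for. A numerical sketch with illustrative component values:

```python
# Two equal capacitors, one charged and one empty, connected at t=0.
C1, C2 = 1e-6, 1e-6          # capacitances in farads (illustrative)
v1, v2 = 10.0, 0.0           # voltages before connection (inconsistent pair)

q_total = C1 * v1 + C2 * v2  # total charge is conserved across t=0
v_after = q_total / (C1 + C2)  # common voltage after connection

e_before = 0.5 * C1 * v1**2 + 0.5 * C2 * v2**2
e_after = 0.5 * (C1 + C2) * v_after**2
# Half the initial energy is lost in the impulsive redistribution:
print(v_after, e_before - e_after)  # 5.0 V and 25 microjoules dissipated
```

No finite resistor value changes the amount of energy lost here, which is why a distributional (impulse-based) treatment of t=0 is needed to account for it consistently.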
Abstract:
BACKGROUND: Physician training in smoking cessation counseling has been shown to be effective as a means to increase quit success. We assessed the cost-effectiveness ratio of a smoking cessation counseling training programme. Its effectiveness was previously demonstrated in a cluster-randomized controlled trial performed in two Swiss university outpatient clinics, in which residents were randomized to receive training in smoking interventions or a control educational intervention. DESIGN AND METHODS: We used a Markov simulation model for the effectiveness analysis. This model incorporates the intervention efficacy, the natural quit rate, and the lifetime probability of relapse after 1 year of abstinence. We used previously published results in addition to hospital service and outpatient clinic cost data. The time horizon was 1 year, and we opted for a third-party payer perspective. RESULTS: The incremental cost of the intervention amounted to US$2.58 per consultation by a smoker, translating into a cost per life-year saved of US$25.4 for men and US$35.2 for women. One-way sensitivity analyses yielded a range of US$4.0-107.1 in men and US$9.7-148.6 in women. Variations in the quit rate of the control intervention, the length of training effectiveness, and the discount rate had moderately large effects on the outcome. Variations in the natural cessation rate, the lifetime probability of relapse, the cost of physician training, the counseling time, the cost per hour of physician time, and the cost of the booklets had little effect on the cost-effectiveness ratio. CONCLUSIONS: Training residents in smoking cessation counseling is a very cost-effective intervention and may be more efficient than currently accepted tobacco control interventions.
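A Markov model of the kind described tracks the fraction of a cohort in each health state and applies transition rates at every cycle. The sketch below uses two states (smoking, abstinent) with purely illustrative rates; it is not the study's model or its parameters.

```python
def simulate(quit_rate: float, relapse_rate: float, years: int) -> float:
    """Return the fraction of the cohort abstinent after `years` cycles."""
    smoking, quit = 1.0, 0.0  # whole cohort starts as smokers
    for _ in range(years):
        newly_quit = smoking * quit_rate
        relapsed = quit * relapse_rate
        smoking += relapsed - newly_quit
        quit += newly_quit - relapsed
    return quit

# Hypothetical rates: training doubles the annual quit rate
baseline = simulate(quit_rate=0.03, relapse_rate=0.10, years=10)
trained = simulate(quit_rate=0.06, relapse_rate=0.10, years=10)
print(trained > baseline)  # True: training raises long-run abstinence
```

The incremental abstinence between the two arms, combined with per-consultation costs, is what yields a cost-per-life-year-saved figure in an analysis like the one reported.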
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this assignment. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a standard personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates, such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools would possibly not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: Literature and Internet searches were performed to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. In addition, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks.
Other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools would possibly not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been put into them in the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.
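The Bayesian a posteriori step these tools implement can be sketched as combining a population prior on an individual's clearance with one measured concentration, then using the maximum a posteriori (MAP) estimate to adapt the dose. The one-compartment steady-state model and every number below are illustrative assumptions, not taken from any of the evaluated programs.

```python
import math

def normal_pdf(x: float, mean: float, sd: float) -> float:
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

dose, interval = 500.0, 12.0    # mg given every 12 h (hypothetical regimen)
prior_cl, prior_sd = 5.0, 1.5   # population clearance prior, L/h (illustrative)
observed, assay_sd = 12.0, 1.0  # measured average concentration, mg/L

def posterior(cl: float) -> float:
    # Steady-state average concentration for a one-compartment model
    predicted = dose / (cl * interval)
    # Posterior is proportional to prior * likelihood of the observation
    return normal_pdf(cl, prior_cl, prior_sd) * normal_pdf(observed, predicted, assay_sd)

# MAP estimate by a simple grid search over plausible clearances
grid = [1.0 + 0.01 * i for i in range(900)]
cl_map = max(grid, key=posterior)

target = 10.0                        # desired average concentration, mg/L
new_dose = target * cl_map * interval  # daily-interval dose adapted to the patient
print(round(cl_map, 2), round(new_dose))
```

The MAP clearance lands between the population prior and the value implied by the measurement alone, which is exactly the shrinkage behaviour that makes Bayesian adaptation robust to a single noisy concentration.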