887 results for Computer forensic analysis


Relevance:

30.00%

Publisher:

Abstract:

The fight against doping is mainly focused on direct detection, using analytical methods to detect doping agents in biological samples. However, the World Anti-Doping Code also defines doping as possession, administration or attempted administration of prohibited substances or methods, and trafficking or attempted trafficking in any prohibited substance or method. As these issues fall within criminal investigation, a forensic approach can help assess potential violations of these rules. In the context of a rowing competition, genetic analyses were conducted on biological samples collected from infusion apparatus, bags and tubing in order to obtain DNA profiles. As no database of athletes' DNA profiles was available, location and contextual information were key to determining a population of suspected athletes and to obtaining reference DNA profiles for comparison. Analysis of samples from the infusion systems provided 8 different DNA profiles, which could not be distinguished from the 8 reference profiles of the suspected athletes. This case study is one of the first in which a forensic approach was applied for anti-doping purposes. Based on this investigation, the International Rowing Federation authorities decided to ban not only the incriminated athletes, but also the coaches and officials, for 2 years.

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) the suspect is found as a result of a database search and (ii) the remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic, which has repeatedly demonstrated that requirements (i) and (ii) should not be a cause of concern.
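
A small numeric sketch of the point this abstract confirms may help: the likelihood ratio for the observed correspondence needs no division by the database size, and excluding the remaining database members, if anything, slightly strengthens the case. All quantities below (match probability, population size, database size) are hypothetical illustration values, not figures from the paper.

```python
# Worked example: effect of a database search on the case against the
# matching suspect. All numbers are hypothetical.
gamma = 1e-6       # random match probability of the stain's profile (assumed)
N = 1_000_000      # relevant population of potential sources (assumed)
n = 10_000         # size of the searched database (assumed)

# The likelihood ratio for the observed correspondence is unchanged by the
# search itself -- no division by the database size n is warranted:
LR_match = 1 / gamma

# Excluding the other n-1 database members removes them from the pool of
# alternative sources, so the prior odds that the suspect is the source
# improve slightly, from about 1/(N-1) to about 1/(N-n):
prior_odds_no_search = 1 / (N - 1)
prior_odds_after_exclusions = 1 / (N - n)

posterior_odds = LR_match * prior_odds_after_exclusions
```

The posterior odds after the search are (marginally) higher than they would have been had the suspect been found without a search, which is the opposite of what a database-size "correction" would produce.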

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death, or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis of cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, lower respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infection at the time of death. Real-time polymerase chain reaction analysis of cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology, as well as of alternative biological samples in postmortem biochemistry, for obtaining reliable information even in corpses with severe decompositional changes.

Relevance:

30.00%

Publisher:

Abstract:

A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
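
The computation the abstract describes can be sketched in a few lines: each model's scaling factor is recovered as the ratio of its unscaled to its scaled statistic, and the difference statistic is rescaled by a weighted combination of the two. The function name and the numeric fit values below are illustrative, not taken from the paper.

```python
def scaled_diff_chi2(T0, T1, Tbar0, Tbar1, df0, df1):
    """Scaled difference chi-square for nested models M0 (restricted) and M1,
    computed from quantities routinely reported by SEM software.

    T0, T1       -- unscaled (normal-theory ML) chi-square statistics
    Tbar0, Tbar1 -- Satorra-Bentler scaled chi-square statistics
    df0, df1     -- model degrees of freedom (df0 > df1)
    """
    c0 = T0 / Tbar0    # scaling correction factor of M0
    c1 = T1 / Tbar1    # scaling correction factor of M1
    # Scaling factor for the difference test (can be fragile when c0, c1
    # are close and df0 - df1 is small):
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    return (T0 - T1) / cd, df0 - df1   # scaled statistic and its df

# Hypothetical fit results for two nested models:
Td, df_d = scaled_diff_chi2(100.0, 60.0, 80.0, 50.0, 20, 10)
```

With these assumed inputs, c0 = 1.25 and c1 = 1.2, so the naive difference 40 is deflated by cd = 1.3 before being referred to a chi-square with 10 degrees of freedom.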

Relevance:

30.00%

Publisher:

Abstract:

In the Business Management Marketing course, specifically in the Entrepreneurship discipline Simulation - Marketing Games, the assignment was to create a company in the computer business using the online business simulator Marketplace, in order to put into practice the theoretical knowledge acquired during all previous semesters. On this platform we faced decisions across eight quarters, four per year, designed to encourage learning in a practical, virtual and dynamic environment. Each quarter confronted us with well-organized tasks built around defined strategies, such as market research analysis, branding, store management after the store's creation, development of the 4Ps policy, identification of opportunities, monitoring of finances and investment decisions. Decisions were submitted every quarter, after which results were returned, such as market performance, financial performance, future investments and the "health" of the company's marketing efficiency; these were then analysed by our company, by the teaching staff and also against the competition through a Balanced Scorecard, both semi-annual and cumulative. For the start of activities, a total of 2,000,000 was awarded in the first year, corresponding to 500,000 in each of the first 4 quarters, plus 5,000,000 in the fifth quarter, for a total of 7,000,000. The invested capital was used to buy market research, open sales offices, create brands, hire a sales force, advertise the products created and carry out R&D activity, in order to make a profit and become self-sufficient, guaranteeing repayment of the principal invested to Corporate Headquarters.

Relevance:

30.00%

Publisher:

Abstract:

Several ink dating methods based on solvent analysis using gas chromatography/mass spectrometry (GC/MS) have been proposed in recent decades. These methods follow the drying of solvents from ballpoint pen inks on paper and seem very promising. However, several questions have arisen over the last few years among questioned document examiners regarding the transparency and reproducibility of the proposed techniques. These questions should be carefully studied to ensure accurate and ethical application of this methodology in casework. Inspired by a real investigation involving ink dating, the present paper discusses this issue through four main topics: aging processes, dating methods, validation procedures and data interpretation. This work presents a broad picture of the ink dating field, warns about potential shortcomings and proposes some solutions to avoid reporting errors in court.

Relevance:

30.00%

Publisher:

Abstract:

The development of statistical models for forensic fingerprint identification purposes has been the subject of increasing research attention in recent years. This can be partly seen as a response to a number of commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. In addition, key forensic identification bodies such as ENFSI [1] and IAI [2] have recently endorsed and acknowledged the potential benefits of using statistical models as an important tool in support of the fingerprint identification process within the ACE-V framework. In this paper, we introduce a new Likelihood Ratio (LR) model based on Support Vector Machines (SVMs) trained with features discovered via morphometric and spatial analyses of corresponding minutiae configurations for both match and close non-match populations often found in AFIS candidate lists. Computed LR values are derived from a probabilistic framework based on SVMs that discover the intrinsic spatial differences of match and close non-match populations. Lastly, experimentation performed on a set of over 120,000 publicly available fingerprint images (mostly sourced from National Institute of Standards and Technology (NIST) datasets) and a distortion set of approximately 40,000 images is presented, illustrating that the proposed LR model reliably guides towards the correct proposition in the identification assessment of match and close non-match populations. Results further indicate that the proposed model is a promising tool for fingerprint practitioners to use when analysing the spatial consistency of corresponding minutiae configurations.
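
The paper's probabilistic framework is its own; as a generic illustration of one common way to turn a classifier's decision score into a likelihood ratio, here is a Platt-style logistic calibration sketch. The calibration parameters a and b are hypothetical, standing in for values that would be fitted on held-out match and close non-match scores.

```python
import math

def score_to_lr(score, a=-2.0, b=0.0, prior=0.5):
    """Map a raw decision score to a likelihood ratio via a Platt-style
    logistic calibration. a and b are hypothetical calibration parameters,
    standing in for values fitted on held-out match / close non-match scores."""
    posterior = 1.0 / (1.0 + math.exp(a * score + b))   # P(match | score)
    posterior_odds = posterior / (1.0 - posterior)
    prior_odds = prior / (1.0 - prior)
    return posterior_odds / prior_odds   # divide out the training prior

lr_match = score_to_lr(1.5)      # score on the match side of the margin
lr_nonmatch = score_to_lr(-1.5)  # score on the non-match side
```

Scores on the match side of the margin map to LR > 1, scores on the non-match side to LR < 1; with b = 0 and a flat prior the mapping is symmetric about LR = 1.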

Relevance:

30.00%

Publisher:

Abstract:

Determining the time since deposition of fingermarks may prove necessary to assess their relevance to criminal investigations. The crucial factor is the initial composition of fingermarks, because it represents the starting point of any aging model. This study mainly aimed to characterize the initial composition of fingermarks, which shows high variability between donors (inter-variability), but also to investigate the variations among fingermarks from the same donor (intra-variability). Solutions to reduce this initial variability using squalene and cholesterol as target compounds are proposed and should be further investigated. The influence of substrates was also evaluated: initial amounts were observed to be larger on porous than on nonporous surfaces. Preliminary aging of fingermarks over 30 days was finally studied on a porous and a nonporous substrate to evaluate the potential for fingermark dating. Squalene was observed to decrease at a faster rate on the nonporous substrate.

Relevance:

30.00%

Publisher:

Abstract:

Deciding whether two fingerprint marks originate from the same source requires examination and comparison of their features. Many cognitive factors play a major role in such information processing. In this paper we examined the consistency (both between- and within-experts) in the analysis of latent marks, and whether the presence of a 'target' comparison print affects this analysis. Our findings showed that the context of a comparison print affected analysis of the latent mark, possibly influencing allocation of attention, visual search, and the threshold for determining a 'signal'. We also found that even without the context of the comparison print there was still a lack of consistency in analysing latent marks. Not only was this reflected in inconsistency between different experts, but the same experts at different times were inconsistent with their own analysis. However, the characterization of these inconsistencies depends on the standard and definition of what counts as inconsistent. Furthermore, these effects were not uniform; the lack of consistency varied across fingerprints and experts. We propose solutions to mediate variability in the analysis of friction ridge skin.

Relevance:

30.00%

Publisher:

Abstract:

Projective homography sits at the heart of many problems in image registration. In addition to the many methods for estimating the homography parameters (R.I. Hartley and A. Zisserman, 2000), analytical expressions for assessing the accuracy of the transformation parameters have been proposed (A. Criminisi et al., 1999). We show that these expressions provide less accurate bounds than those based on the earlier results of Weng et al. (1989). The discrepancy becomes more critical in applications involving the integration of frame-to-frame homographies and their uncertainties, as in the reconstruction of terrain mosaics and of the camera trajectory from flyover imagery. We demonstrate these issues through selected examples.
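
The integration step the abstract refers to is, at its core, a chained product of 3x3 homography matrices; the uncertainty analysis discussed in the paper attaches a covariance to each factor and propagates it through this product to first order. A minimal numpy sketch of the composition itself (the helper `translation_H` is an illustrative stand-in for estimated frame-to-frame homographies):

```python
import numpy as np

def translation_H(tx, ty):
    """3x3 homography for a pure image-plane translation (illustrative)."""
    H = np.eye(3)
    H[0, 2], H[1, 2] = tx, ty
    return H

def compose(H_list):
    """Chain frame-to-frame homographies H_k (mapping frame k -> frame k+1)
    into a single frame-0 -> frame-n map, normalized so H[2, 2] = 1."""
    H = np.eye(3)
    for Hk in H_list:
        H = Hk @ H          # later maps apply on the left
    return H / H[2, 2]

# Two successive translations compose into their sum:
H = compose([translation_H(1.0, 2.0), translation_H(3.0, 4.0)])
```

Because each estimated factor carries error, the covariance of the composed map grows with the chain length, which is why the tightness of the per-homography accuracy bounds matters for mosaicking and trajectory reconstruction.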