935 results for forensic analysis


Relevance: 30.00%

Publisher:

Abstract:

Previous research into formulaic language has focussed on specialised groups of people (e.g. L1 acquisition by infants and adult L2 acquisition) with ordinary adult native speakers of English receiving less attention. Additionally, whilst some features of formulaic language have been used as evidence of authorship (e.g. the Unabomber’s use of you can’t eat your cake and have it too) there has been no systematic investigation into this as a potential marker of authorship. This thesis reports the first full-scale study into the use of formulaic sequences by individual authors. The theory of formulaic language hypothesises that formulaic sequences contained in the mental lexicon are shaped by experience combined with what each individual has found to be communicatively effective. Each author’s repertoire of formulaic sequences should therefore differ. To test this assertion, three automated approaches to the identification of formulaic sequences are tested on a specially constructed corpus containing 100 short narratives. The first approach explores a limited subset of formulaic sequences using recurrence across a series of texts as the criterion for identification. The second approach focuses on a word which frequently occurs as part of formulaic sequences and also investigates alternative non-formulaic realisations of the same semantic content. Finally, a reference list approach is used. Whilst claiming authority for any reference list can be difficult, the proposed method utilises internet examples derived from lists prepared by others, a procedure which, it is argued, is akin to asking large groups of judges to reach consensus about what is formulaic. The empirical evidence supports the notion that formulaic sequences have potential as a marker of authorship since in some cases a Questioned Document was correctly attributed. 
Although this marker of authorship is not universally applicable, it does promise to become a viable new tool in the forensic linguist’s tool-kit.
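The recurrence criterion used in the first approach can be sketched as an n-gram intersection across a series of texts. The following is an illustrative reconstruction in Python, not the thesis's actual implementation; the function name, the n-gram length and the recurrence threshold are invented for the example.

```python
from collections import defaultdict

def recurrent_ngrams(texts, n=4, min_texts=2):
    """Identify word n-grams that recur across at least `min_texts` texts,
    i.e. the recurrence criterion for candidate formulaic sequences."""
    seen_in = defaultdict(set)  # n-gram -> indices of the texts containing it
    for idx, text in enumerate(texts):
        tokens = text.lower().split()
        for i in range(len(tokens) - n + 1):
            seen_in[tuple(tokens[i:i + n])].add(idx)
    return {ng for ng, docs in seen_in.items() if len(docs) >= min_texts}

# Toy corpus; the study uses 100 specially collected short narratives.
corpus = [
    "at the end of the day it was fine",
    "at the end of the day we went home",
    "the day ended quietly",
]
print(recurrent_ngrams(corpus))
```

In this toy run only the 4-grams shared by the first two narratives survive the threshold, which mirrors the intuition that sequences recurring across an author's texts are candidate formulaic sequences.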


The concept of plagiarism is commonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of 'moral', non-material goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The response to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit in different ways, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that the concept of plagiarism cannot be studied on the assumption that one definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, as did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If among higher education students plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist's unfamiliarity with the text or topic, it should be considered 'positive plagiarism' (Howard, 1995: 796) and hence not an offense. Determining the intention behind instances of plagiarism therefore determines the nature of the disciplinary action adopted.
Unfortunately, in order to demonstrate the intention to deceive and charge students with plagiarism, teachers necessarily have to position themselves as 'plagiarism police', although it has been argued otherwise (Robillard, 2008). Practice demonstrates that in their daily activities teachers find themselves required to command investigative skills and tools that they most often lack. We thus claim that the 'intention to deceive' cannot easily be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism often connotes an illegal appropriation of ideas. Earlier, she (Turell, 2004) had demonstrated, by comparing four Spanish translations of Shakespeare's Julius Caesar, that linguistic evidence can demonstrate instances of plagiarism. This challenge is also reinforced by the practice of international organisations, such as the IEEE, for which plagiarism potentially has 'severe ethical and legal consequences' (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common, and what academia usually lacks, is their focus on the legal dimension. We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed.
In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist's intention or otherwise, this potential is restricted by the ability to identify a text as suspect of plagiarism in the first place. In an era of such massive textual production, 'policing' plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection has of necessity to consider not only concepts from education and computational linguistics, but also forensic linguistics, especially if it is to counter claims of being a 'simplistic response' (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with related words from the same semantic field, i.e. a synonym, a hyponym (subordinate term), a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate between instances of 'patchwriting' and instances of plagiarism. This research demonstrates that referencing verbs are borrowed from the original in an attempt to construct the new text cohesively when the plagiarism is inadvertent, and that the plagiarist makes an effort to prevent the reader from identifying the text as plagiarism when it is intentional. In some of these cases, the referencing elements prove able to identify direct quotations and thus 'betray' and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify proper paraphrasing as such, and not, mistakenly and simplistically, as plagiarism.
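The substitution on the paradigmatic axis described above (synonym, hyponym, superordinate swaps) can be made concrete with a toy token-aligned comparison. The synonym lexicon and function below are entirely hypothetical illustrations of the mechanics; the paper's own analysis is a qualitative forensic linguistic one, not this script.

```python
# Hypothetical mini-lexicon of related words; a real analysis would draw on
# a full lexical resource and on expert judgement, not a hand-made dict.
SYNONYMS = {
    "big": {"large", "huge"},
    "study": {"investigation", "analysis"},
    "show": {"demonstrate", "indicate"},
}

def related(a, b):
    """True if either word lists the other as a paradigmatic alternative."""
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def substitution_profile(source, suspect):
    """Align token-by-token and classify each position as identical,
    synonym-substituted, or other; a high synonym count with preserved
    word order is the pattern described for disguised borrowing."""
    profile = {"identical": 0, "synonym": 0, "other": 0}
    for s, t in zip(source.lower().split(), suspect.lower().split()):
        if s == t:
            profile["identical"] += 1
        elif related(s, t):
            profile["synonym"] += 1
        else:
            profile["other"] += 1
    return profile

print(substitution_profile("the big study will show results",
                           "the large investigation will demonstrate results"))
```

In the toy pair, half of the positions are synonym substitutions while the sentence frame is untouched, the fingerprint of lexical replacement along the paradigmatic axis.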


Purpose: The purpose of this paper is to examine the quality of evidence collected during interviews. Current UK national guidance on the interviewing of victims and witnesses recommends a phased approach, allowing the interviewee to deliver their free report before any questioning takes place, and stipulating that during this free report the interviewee should not be interrupted. Interviewers therefore often find it necessary during questioning to reactivate parts of the interviewee's free report for further elaboration. Design/methodology/approach: The first section of this paper draws on a collection of police interviews with women reporting rape, and discusses one method by which this reactivation is achieved, the indirect quotation of the interviewee by the interviewer, exploring the potential implications for the quality of evidence collected during this type of interview. The second section of the paper draws on the same data set and concerns itself with a particular method by which information provided by an interviewee has its meaning "fixed" by the interviewer. Findings: It is found that "formulating" is a recurrent practice arising from the need to clarify elements of the account for the benefit of what is termed the "overhearing audience", in this context the police scribe, the CPS and, potentially, the Court. Since the means by which this "fixing" is achieved necessarily involves the foregrounding of elements of the account deemed particularly salient, at the expense of other elements which may be entirely deleted, formulations are rarely entirely neutral. Their production therefore has the potential to exert undue interviewer influence over the negotiated "final version" of interviewees' accounts.
Originality/value: The paper highlights the fact that accurate re-presentations of interviewees' accounts are a crucial tool in ensuring smooth progression of interviews and that re-stated speech and formulation often have implications for the quality of evidence collected during significant witness interviews. © Emerald Group Publishing Limited.


This research focuses on Native Language Identification (NLID), and in particular, on the linguistic identifiers of L1 Persian speakers writing in English. This project comprises three sub-studies; the first study devises a coding system to account for interlingual features present in a corpus of L1 Persian speakers blogging in English, and a corpus of L1 English blogs. Study One then demonstrates that it is possible to use interlingual identifiers to distinguish authorship by L1 Persian speakers. Study Two examines the coding system in relation to the L1 Persian corpus and a corpus of L1 Azeri and L1 Pashto speakers. The findings of this section indicate that the NLID method and features designed are able to discriminate between L1 influences from different languages. Study Three focuses on elicited data, in which participants were tasked with disguising their language to appear as L1 Persian speakers writing in English. This study indicated that there was a significant difference between the features in the L1 Persian corpus, and the corpus of disguise texts. The findings of this research indicate that NLID and the coding system devised have a very strong potential to aid forensic authorship analysis in investigative situations. Unlike existing research, this project focuses predominantly on blogs, as opposed to student data, making the findings more appropriate to forensic casework data.
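The discrimination step in Studies One and Two can be pictured, in much simplified form, as scoring a text against per-L1 sets of interlingual markers. Everything below is hypothetical: the marker strings are invented stand-ins, and the thesis's actual coding system covers far richer grammatical and lexical features than surface phrase matching.

```python
# Hypothetical interlingual marker phrases per candidate L1; these strings
# exist only to make the scoring mechanics concrete, not to represent the
# features actually identified for L1 Persian, Azeri or Pashto writers.
L1_MARKERS = {
    "persian": ["discuss about", "more better", "despite of"],
    "azeri": ["despite of", "very much interesting"],
}

def score_l1(text, markers=L1_MARKERS):
    """Count marker hits per candidate L1 and return the best-scoring one
    together with the full score table."""
    text = text.lower()
    scores = {l1: sum(text.count(m) for m in ms) for l1, ms in markers.items()}
    return max(scores, key=scores.get), scores

best, scores = score_l1("We will discuss about the results, despite of the delay.")
print(best, scores)
```

Even this crude tally shows why disguise attempts (Study Three) can fail: a writer faking an L1 tends to over- or under-produce exactly these kinds of markers relative to genuine L1 corpora.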


There are several unresolved problems in forensic authorship profiling, including a lack of research focusing on the types of texts that are typically analysed in forensic linguistics (e.g. threatening letters, ransom demands) and a general disregard for the effect of register variation when testing linguistic variables for use in profiling. The aim of this dissertation is therefore to make a first step towards filling these gaps by testing whether established patterns of sociolinguistic variation appear in malicious forensic texts that are controlled for register. This dissertation begins with a literature review that highlights a series of correlations between language use and various social factors, including gender, age, level of education and social class. This dissertation then presents the primary data set used in this study, which consists of a corpus of 287 fabricated malicious texts from 3 different registers produced by 96 authors stratified across the 4 social factors listed above. Since this data set is fabricated, its validity was also tested through a comparison with another corpus consisting of 104 naturally occurring malicious texts, which showed that no important differences exist between the language of the fabricated malicious texts and the authentic malicious texts. The dissertation then reports the findings of the analysis of the corpus of fabricated malicious texts, which shows that the major patterns of sociolinguistic variation identified in previous research are valid for forensic malicious texts and that controlling register variation greatly improves the performance of profiling. In addition, it is shown that through regression analysis it is possible to use these patterns of linguistic variation to profile the demographic background of authors across the four social factors with an average accuracy of 70%. Overall, the present study therefore makes a first step towards developing a principled model of forensic authorship profiling.
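The regression analysis used for profiling can be sketched with a plain-Python logistic regression on invented data. This is a generic stand-in, not the dissertation's model: the feature rates and the binary label below are toy values, and the real study works with 287 texts, four social factors and register controls.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Logistic regression fitted by per-sample gradient descent; a minimal
    stand-in for the regression analysis used to profile author demographics."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log-loss with respect to z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy data: per-author rates of two linguistic features; the label stands
# for a binary social factor (e.g. one of two age bands). Values invented.
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, x) for x in X])
```

Controlling for register, as the dissertation argues, amounts to fitting or normalising such models within each register rather than pooling texts whose feature rates differ for register reasons alone.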


The process of automatic handwriting investigation in forensic science is described. The general scheme of a computer-based handwriting analysis system is used to point out the basic problems of image enhancement and segmentation, feature extraction and decision-making. Factors that may compromise the accuracy of the expert's conclusion are underlined, and directions for future investigations are outlined.
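The general scheme mentioned above (enhancement, feature extraction, decision-making) can be sketched as a three-stage pipeline. Each stage below is reduced to the simplest possible illustration, with an invented threshold, feature and decision rule; real systems operate on scanned documents with far richer features such as stroke width, slant and curvature.

```python
def enhance(image):
    """Image enhancement reduced to global binarisation: pixels above the
    threshold are treated as ink (1), the rest as paper (0)."""
    return [[1 if px > 0.5 else 0 for px in row] for row in image]

def extract_features(binary):
    """One illustrative feature vector: ink density per row."""
    return [sum(row) / len(row) for row in binary]

def decide(features, reference, tol=0.2):
    """Crude decision stage: mean absolute difference against a reference
    writer's feature vector."""
    diff = sum(abs(a - b) for a, b in zip(features, reference)) / len(features)
    return "same writer" if diff <= tol else "different writer"

# Toy 3x3 greyscale 'scan' with values in [0, 1].
image = [
    [0.9, 0.8, 0.1],
    [0.7, 0.9, 0.6],
    [0.1, 0.2, 0.1],
]
features = extract_features(enhance(image))
print(decide(features, [2 / 3, 1.0, 0.0]))
```

The factors the abstract warns about map directly onto these stages: a poorly chosen binarisation threshold or an unstable feature will propagate into the decision and compromise the expert's conclusion.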


Interactions with second language speakers in public service contexts in England are normally conducted with the assistance of one interpreter. Even in situations where team interpreting would be advisable, for example in lengthy courtroom proceedings, financial considerations mean only one interpreter is normally booked. On occasion, however, more than one interpreter, or an individual (or individuals) with knowledge of the languages in question, may be simultaneously present during an interpreted interaction, either monitoring it or indeed volunteering unsolicited input. During police interviews or trials in England this may happen when the interpreter secured by the defence team to interpret during private consultation with the suspect or defendant is also present in the interview room or the courtroom, but two independently sourced interpreters need not be limited to legal contexts. In healthcare settings, for example, service users sometimes bring friends or relatives along to help them communicate with service providers, only to find that the latter have booked an interpreter as a matter of procedure. By analogy with the nature of the English legal system, I refer to contexts where an interpreter's output is monitored and/or challenged, either during the speech event or subsequently, as 'adversarial interpreting'. This conceptualisation reflects the fact that interpreters in such encounters are sourced independently, often by opposing parties, and as a result can rarely be considered a team. My main concern in this paper is to throw a spotlight on adversarial interpreting as a hitherto rarely discussed problem in its own right. That it is not an anomaly is evidenced by the many cases around the world where the officially recorded interpreted output was challenged, as mentioned in, for example, Berk-Seligson (2002), Hayes and Hale (2010), and Phelan (2011).
This paper reports on the second stage of a research project whose first stage involved the analysis of a transcript of an interpreted police interview with a suspect in a murder case. I will mention the findings of that analysis briefly and introduce some new findings based on input from practising interpreters who have shared their experience of adversarial interpreting by completing an online questionnaire. I will try to answer the question of how the presence of two interpreters, or an interpreter and a monitoring participant, in the same speech event impacts on the communication process. I will also address the issue of forensic linguistic arbitration in cases where incompetent interpreting has been identified or an expert opinion is sought in relation to an adversarial interpreting event of significance to a legal dispute.

References
Berk-Seligson, S. (2002), The Bilingual Courtroom: Court Interpreters in the Judicial Process, University of Chicago Press.
Hayes, A. and Hale, S. (2010), "Appeals on incompetent interpreting", Journal of Judicial Administration 20.2, 119-130.
Phelan, M. (2011), "Legal Interpreters in the news in Ireland", Translation and Interpreting 3.1, 76-105.


The Routledge Handbook of Forensic Linguistics provides a unique work of reference to the leading ideas, debates, topics, approaches and methodologies in Forensic Linguistics. Forensic Linguistics is the study of language and the law, covering topics from legal language and courtroom discourse to plagiarism. It also concerns the applied (forensic) linguist who is involved in providing evidence, as an expert, for the defence and prosecution, in areas as diverse as blackmail, trademarks and warning labels. The Routledge Handbook of Forensic Linguistics includes a comprehensive introduction to the field written by the editors and a collection of thirty-seven original chapters written by the world’s leading academics and professionals, both established and up-and-coming, designed to equip a new generation of students and researchers to carry out forensic linguistic research and analysis. The Routledge Handbook of Forensic Linguistics is the ideal resource for undergraduates or postgraduates new to the area.


The volatile chemicals which comprise the odor of the illicit drug cocaine have been analyzed by adsorption onto activated charcoal followed by solvent elution and GC/MS analysis. A series of field tests was performed to determine the dominant odor compound to which dogs alert. All of our data to date indicate that the dominant odor is due to the presence of methyl benzoate, which is associated with the cocaine, rather than to the cocaine itself. When methyl benzoate and cocaine are spiked onto U.S. currency, the threshold level of methyl benzoate required for a canine to signal an alert is typically 1-10 µg. Humans have been shown to have a sensitivity similar to dogs for methyl benzoate, but with poorer selectivity/reliability. The dominant decomposition pathway for cocaine has been evaluated at elevated temperatures (up to 280 °C): benzoic acid, but no detectable methyl benzoate, is formed. Solvent extraction and SFE were used to study the recovery of cocaine from U.S. currency. The amount of cocaine which could be recovered was found to decrease with time.


Cardiac troponin I (cTnI) is one of the most useful serum marker tests for the determination of myocardial infarction (MI). The first commercial assay for cTnI was released for medical use in the United States and Europe in 1995. It is useful in determining whether the source of chest pains, whose etiology may be unknown, is cardiac related. Cardiac TnI is released into the bloodstream following myocardial necrosis (cardiac cell death) as a result of an infarct (heart attack). In this research project the utility of cardiac troponin I as a potential marker for the determination of time of death is investigated. The approach of this research is not to investigate cTnI degradation in serum/plasma, but to investigate the proteolytic breakdown of this protein in heart tissue postmortem. If our hypothesis is correct, cTnI might show a distinctive temporal degradation profile after death. This temporal profile may have potential as a time of death marker in forensic medicine. The field of time of death markers has lagged behind the great advances in technology since the late 1850s. Today medical examiners are using rudimentary time of death markers that offer limited reliability in the medico-legal arena. Cardiac TnI must be stabilized in order to avoid further degradation by proteases during the extraction process. Chemically derivatized magnetic microparticles were covalently linked to anti-cTnI monoclonal antibodies. A charge capture approach was also used, given the negative charge on the microparticles, to eliminate the antibody from the magnetic microparticles. The magnetic microparticles were used to extract cTnI from heart tissue homogenate for further bio-analysis. Cardiac TnI was eluted from the beads with a buffer and analyzed. This technique exploits the banding pattern on sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), followed by western blot transfer to a polyvinylidene fluoride (PVDF) membrane for probing with anti-cTnI monoclonal antibodies.
Bovine hearts were used as a model to establish the relationship between time of death and concentration/band pattern, given bovine cardiac TnI's homology to human cardiac TnI. The feasibility of the final concept was tested with human heart samples from cadavers with known times of death.


The investigations of human mitochondrial DNA (mtDNA) have considerably contributed to human evolution and migration. The Middle East is considered to be an essential geographic area for human migrations out of Africa since it is located at the crossroads of Africa, and the rest of the world. United Arab Emirates (UAE) population inhabits the eastern part of Arabian Peninsula and was investigated in this study. Published data of 18 populations were included in the statistical analysis. The diversity indices showed (1) high genetic distance among African populations and (2) high genetic distance between African populations and non-African populations. Asian populations clustered together in the NJ tree between the African and European populations. MtDNA haplotypes database of the UAE population was generated. By incorporating UAE mtDNA dataset into the existing worldwide mtDNA database, UAE Forensic Laboratories will be able to analyze future mtDNA evidence in a more significant and consistent manner. ^
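The pairwise genetic distances underlying diversity indices and NJ-tree construction can be illustrated with the simplest measure, the p-distance (proportion of differing aligned sites). The 10 bp fragments below are invented stand-ins for aligned mtDNA control-region data; real analyses use full haplotype alignments and more sophisticated distance models.

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences; the
    simplest distance measure usable for population comparisons."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Invented 10 bp fragments standing in for aligned mtDNA data.
populations = {
    "pop_A": "ACGTACGTAC",
    "pop_B": "ACGTACGTTC",
    "pop_C": "TCGAACGTAC",
}
names = list(populations)
matrix = [[p_distance(populations[a], populations[b]) for b in names]
          for a in names]
for name, row in zip(names, matrix):
    print(name, [round(d, 2) for d in row])
```

A matrix like this, computed over real haplotypes, is the direct input to neighbour-joining: populations with small mutual distances (here pop_A and pop_B) end up clustered together in the tree.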


A comprehensive investigation of sensitive ecosystems in South Florida, with the main goal of determining the identity, spatial distribution, and sources of both organic biocides and trace elements in different environmental compartments, is reported. This study presents the development and validation of a method for the fractionation and isolation of twelve polar acidic herbicides commonly applied in the vicinity of the study areas, including 2,4-D, MCPA, dichlorprop, mecoprop and picloram, in surface water. Solid phase extraction (SPE) was used to isolate the analytes from abiotic matrices containing large amounts of dissolved organic material. Atmospheric-pressure ionization (API) with electrospray ionization in negative mode (ESP-) in a quadrupole ion trap mass spectrometer was used to characterize the herbicides of interest.

The application of Laser Ablation-ICP-MS methodology to the analysis of soils and sediments is also reported. The analytical performance of the method was evaluated on certified standards and on real soil and sediment samples. Residential soils were analyzed to evaluate the feasibility of using this powerful technique as a routine and rapid method to monitor potentially contaminated sites. Forty-eight sediments were also collected from semi-pristine areas in South Florida for a screening of baseline levels of bioavailable elements in support of risk evaluation. The LA-ICP-MS data were used to perform a statistical evaluation of the elemental composition as a tool for environmental forensics.

An LA-ICP-MS protocol was also developed and optimized for the elemental analysis of a wide range of elements in polymeric filters containing atmospheric dust. A quantitative strategy based on internal and external standards allowed for a rapid determination of airborne trace elements in filters containing both contemporary African dust and local dust emissions. These distributions were used to assess, qualitatively and quantitatively, differences in composition and to establish provenance and fluxes to protected regional ecosystems such as coral reefs and national parks.


There is limited scientific knowledge of the composition of human odor from different biological specimens and of the effect that physiological and psychological health conditions could have on it. There is currently no direct comparison of the volatile organic compounds (VOCs) emanating from different biological specimens collected from healthy individuals as well as from individuals with certain diagnosed medical conditions. The question of matching VOCs present in human odor across various biological samples and across health statuses therefore remains unanswered. The main purpose of this study was to use instrumental analytical methods to compare the VOCs from different biological specimens from the same individual, and to compare the populations evaluated in this project. The goals of this study were to utilize headspace solid-phase microextraction gas chromatography mass spectrometry (HS-SPME-GC/MS) to evaluate its potential for profiling VOCs from specimens collected using standard forensic and medical methods across three different populations: a healthy group with no diagnosed medical or psychological condition, one group with diagnosed type 2 diabetes, and one group with diagnosed major depressive disorder. The pre-treatment methods developed for the collection materials allowed for the removal of targeted VOCs from the sampling kits prior to sampling, extraction and analysis. The optimized SPME-GC/MS conditions have been demonstrated to be capable of sampling, identifying and differentiating the VOCs present in the five biological specimens collected from different subjects, and yielded excellent detection limits for the VOCs from buccal swab, breath, blood, and urine, with an average limit of detection of 8.3 ng. Visual, Spearman rank correlation, and PCA comparisons of the most abundant and frequent VOCs from each specimen demonstrated that each specimen has characteristic VOCs that allow the specimens to be differentiated, for both healthy and diseased individuals.
Preliminary comparisons of VOC profiles of healthy individuals, patients with type 2 diabetes, and patients with major depressive disorder revealed compounds that could be used as potential biomarkers to differentiate between healthy and diseased individuals. Finally, a human biological specimen compound database has been created compiling the volatile compounds present in the emanations of human hand odor, oral fluids, breath, blood, and urine.
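The Spearman rank correlation used to compare VOC profiles across specimens can be computed with the standard library alone. The abundance values below are invented for illustration; a real comparison would run over the study's full compound-by-specimen abundance tables.

```python
def rankdata(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Invented abundances (arbitrary units) of four shared VOCs in two specimens.
breath = [120.0, 45.0, 8.0, 30.0]
hand = [110.0, 50.0, 10.0, 25.0]
print(round(spearman(breath, hand), 3))
```

Because Spearman works on ranks rather than raw abundances, it captures whether two specimens order their shared VOCs the same way even when absolute concentrations differ, which is precisely what a cross-specimen profile comparison needs.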