29 results for Digital Forensics, Forensic Computing, Forensic Science
in Aston University Research Archive
Abstract:
Initially the study focussed on the factors affecting the ability of the police to solve crimes. An analysis of over twenty thousand police deployments revealed the proportion of time spent investigating crime, contrasted with its perceived importance and the time spent on other activities. The fictional portrayal of skills believed important in successful crime investigation was identified and compared to the professional training and 'taught skills' given to police and detectives. Police practitioners and middle management provided views on the skills needed to solve crimes. The relative importance of the forensic science role, fingerprint examination and interrogation skills was contrasted with changes in police methods resulting from the Police and Criminal Evidence Act and its effect on confessions. The study revealed that existing police systems for investigating crime, specifically excluding cases of murder and other serious offences, were unsystematic, uncoordinated, unsupervised and unproductive in their use of police resources. The study examined relevant contemporary research in the United States and the United Kingdom and, with organisational support, introduced an experimental system of data capture and initial investigation with features of case screening and management. Preliminary results indicated increases in the collection of essential information and more effective use of investigative resources. Within the managerial framework of this study, research was undertaken in the area of knowledge elicitation as a basis for an expert system of crime investigation, and into the potential organisational benefits of utilising the laptop computer in the first stages of data gathering and investigation. The conclusions demonstrate the need for a totally integrated system of criminal investigation with emphasis on an organisational rather than an individual response. In some areas the evidence produced is sufficient to warrant replication; in others additional research is needed to further explore the concepts and proposed systems pioneered by this study.
Abstract:
This chapter introduces Native Language Identification (NLID) and considers its casework applications with regard to authorship analysis of online material. It presents findings from research identifying which linguistic features were the best indicators of native (L1) Persian speakers blogging in English, and analyses how well these features distinguish native influences from languages that are linguistically and culturally related. The first section outlines the area of Native Language Identification and demonstrates its potential for application through a discussion of relevant case history. The next section discusses the development of a methodology for identifying influence from L1 Persian in an anonymous blog author, and presents findings. The third part discusses the application of these features to casework situations, shows how the features identified can form an easily applicable model, and demonstrates the application of this model to casework. The research presented in this chapter can be considered a case study for the wider potential application of NLID.
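For illustration only, the following Python sketch shows how a feature-based NLID model of the general kind described here might be operationalised. The feature names (e.g. article_omission) and all counts are invented placeholders, not the chapter's actual Persian-L1 indicators or data.

```python
# Hypothetical sketch: feature-based Native Language Identification (NLID).
# All feature names and counts below are invented for illustration; the
# chapter's actual Persian-L1 indicators and data are not reproduced here.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Per-text counts of candidate L1-influence features (all values invented).
training_texts = [
    ({"article_omission": 5, "that_deletion": 1, "prep_substitution": 3}, "L1_Persian"),
    ({"article_omission": 4, "that_deletion": 0, "prep_substitution": 4}, "L1_Persian"),
    ({"article_omission": 0, "that_deletion": 3, "prep_substitution": 1}, "L1_English"),
    ({"article_omission": 1, "that_deletion": 2, "prep_substitution": 0}, "L1_English"),
]

features, labels = zip(*training_texts)
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(features)

model = LogisticRegression().fit(X, labels)

# Classify a queried anonymous blog post (feature counts again invented).
queried = vectorizer.transform([{"article_omission": 6, "that_deletion": 0,
                                 "prep_substitution": 2}])
print(model.predict(queried), model.predict_proba(queried))
```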
Abstract:
The judicial interest in ‘scientific’ evidence has driven recent work to quantify results for forensic linguistic authorship analysis. Through a methodological discussion and a worked example, this paper examines the issues which complicate attempts to quantify results in such work. The solution suggested to some of the difficulties is a sampling and testing strategy which helps to identify potentially useful, valid and reliable markers of authorship. An important feature of the sampling strategy is that markers identified as being generally valid and reliable are retested for use in specific authorship analysis cases. The suggested approach for drawing quantified conclusions combines discriminant function analysis and Bayesian likelihood measures. The worked example starts with twenty comparison texts for each of three potential authors and then uses a progressively smaller comparison corpus, reducing to fifteen, ten, five and finally three texts per author. This worked example demonstrates how reducing the amount of data affects the way conclusions can be drawn. With greater numbers of reference texts, quantified and safe attributions are shown to be possible, but as the number of reference texts reduces, the analysis shows that the conclusion which should be reached is that no attribution can be made. At no point does the testing process result in a misattribution.
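The following Python sketch illustrates, on synthetic data only, the general shape of the approach described: discriminant function analysis over validated authorship markers, with a Bayesian posterior-odds measure (equal to a likelihood ratio under equal priors), run over a shrinking comparison corpus. The marker frequencies, author profiles and figures are all invented; this is not the paper's own data, markers or code.

```python
# Illustrative sketch only: synthetic "marker frequency" vectors stand in for
# stylistic markers validated by a sampling-and-testing strategy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Three candidate authors; each comparison text is described by frequencies
# of (here, four) hypothetical authorship markers.
means = {"A": [2.0, 1.0, 0.5, 3.0], "B": [1.0, 2.5, 1.5, 1.0], "C": [0.5, 0.5, 3.0, 2.0]}

def sample_texts(mean, n):
    return rng.normal(loc=mean, scale=0.5, size=(n, len(mean)))

# Shrinking comparison corpus, echoing the worked example's 20 -> 3 reduction.
for n_texts in (20, 15, 10, 5, 3):
    X = np.vstack([sample_texts(m, n_texts) for m in means.values()])
    y = np.repeat(list(means), n_texts)
    lda = LinearDiscriminantAnalysis().fit(X, y)

    queried = np.array([[2.1, 0.9, 0.6, 2.8]])  # disputed text (synthetic)
    posteriors = lda.predict_proba(queried)[0]

    # Posterior odds of the best candidate against the alternatives; with
    # equal priors this equals a Bayesian likelihood ratio.
    best = posteriors.argmax()
    odds = posteriors[best] / max(1 - posteriors[best], 1e-12)
    print(n_texts, lda.classes_[best], round(float(odds), 2))
```

As the comparison corpus shrinks, the posterior odds for the best candidate typically fall, mirroring the paper's point that with too few reference texts the safe conclusion is that no attribution can be made.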
Abstract:
Outcomes measurement, that is, the measurement of the effectiveness of interventions and services, has been propelled onto the health service agenda since the introduction of the internal market in the 1990s. It arose as a result of the escalating cost of inpatient care, the need to identify which interventions work and in what situations, and the desire of service users for effective information, enabled by the consumerist agenda introduced by the Working for Patients white paper. The research reported in this thesis is an assessment of the readiness of the forensic mental health service to measure outcomes of interventions. The research examines the type, prevalence and scope of use of outcomes measures, and further seeks a consensus of views of key stakeholders on the priority areas for future development. It discusses the theoretical basis for defining health and advances the argument that the present focus on measuring the effectiveness of care is misdirected without the input of users, particularly patients, in their care, drawing together the views of the many stakeholders who have an interest in the provision of care in the service. The research further draws on the theory of structuration to demonstrate the degree to which a duality of action, which is necessary for the development and use of outcomes measures, is in place within the service. Consequently, it highlights some of the hurdles that need to be surmounted before effective measurement of health gain can be developed in the field of study. It concludes by advancing the view that outcomes research can enable practitioners to better understand the relationship between the illness of the patient and the efficacy of treatment. This understanding, it is argued, would contribute to improving dialogue between the health care practitioner and the patient, and further to providing the information necessary for moving away from the untested assumptions, numerous in the field, about the superiority of one treatment approach over another.
Abstract:
Communication in Forensic Contexts provides in-depth coverage of the complex area of communication in forensic situations. Drawing on expertise from forensic psychology, linguistics and law enforcement worldwide, the text bridges the gap between these fields in a definitive guide to best practice.
• Offers best practice for understanding and improving communication in forensic contexts, including interviewing of victims, witnesses and suspects, discourse in courtrooms, and discourse via interpreters
• Bridges the knowledge gaps between forensic psychology, forensic linguistics and law enforcement, with chapters written by teams bringing together expertise from each field
• Published in collaboration with the International Investigative Interviewing Research Group, dedicated to furthering evidence-based practice and practice-based research amongst researchers and practitioners
• International, cross-disciplinary team includes contributors from North America, Europe and Asia Pacific, and from psychology, linguistics and forensic practice
Abstract:
This chapter demonstrates diversity in the activity of authorship and the corresponding diversity of forensic authorship analysis questions and techniques. Authorship is discussed in terms of Love’s (2002) multifunctional description of precursory, executive, declarative and revisionary authorship activities and the implications of this distinction for forensic problem solving. Four different authorship questions are considered. These are ‘How was the text produced?’, ‘How many people wrote the text?’, ‘What kind of person wrote the text?’ and ‘What is the relationship of a queried text with comparison texts?’ Different approaches to forensic authorship analysis are discussed in terms of their appropriateness to answering different authorship questions. The conclusion drawn is that no one technique will ever be appropriate to all problems.
Abstract:
This study investigates plagiarism detection, with an application in forensic contexts. Two types of data were collected for the purposes of this study. Data in the form of written texts were obtained from two Portuguese universities and from a Portuguese newspaper. These data are analysed linguistically to identify instances of verbatim, morpho-syntactical, lexical and discursive overlap. Data in the form of surveys were obtained from two higher education institutions in Portugal and another two in the United Kingdom. These data are analysed using a 2 by 2 between-groups univariate analysis of variance (ANOVA) to reveal cross-cultural divergences in perceptions of plagiarism. The study discusses the legal and social circumstances that may contribute to adopting a punitive approach to plagiarism or, conversely, to rejecting punishment. The research adopts a critical approach to plagiarism detection. On the one hand, it describes the linguistic strategies adopted by plagiarists when borrowing from other sources and, on the other hand, it discusses the relationship between these instances of plagiarism and the context in which they appear. A focus of this study is whether plagiarism involves an intention to deceive and, in that case, whether forensic linguistic evidence can provide clues to this intentionality. It also evaluates current computational approaches to plagiarism detection, and identifies strategies that these systems fail to detect. Specifically, a method is proposed to detect translingual plagiarism. The findings indicate that, although cross-cultural aspects influence the different perceptions of plagiarism, a distinction needs to be made between intentional and unintentional plagiarism. The linguistic analysis demonstrates that linguistic elements can contribute to finding clues to the plagiarist's intentionality. Furthermore, the findings show that translingual plagiarism can be detected using the proposed method, and that plagiarism detection software can be improved using existing computer tools.
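As a generic illustration of the kind of verbatim-overlap analysis mentioned above (a standard word n-gram comparison, not the thesis's own method), the following Python sketch scores how much of a suspect text's word n-grams reappear in a candidate source. The example texts are invented.

```python
# Minimal sketch of verbatim-overlap detection between a suspect text and a
# candidate source, using shared word n-grams. A generic technique offered
# for illustration only; the thesis's actual method is not reproduced here.
def word_ngrams(text, n=4):
    """Return the set of word n-grams occurring in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(suspect, source, n=4):
    """Fraction of the suspect's n-grams that also occur in the source."""
    s, src = word_ngrams(suspect, n), word_ngrams(source, n)
    return len(s & src) / len(s) if s else 0.0

# Invented example texts: a high score flags likely verbatim borrowing.
source = "plagiarism detection systems compare candidate texts against known sources"
suspect = "these plagiarism detection systems compare candidate texts with other documents"
print(round(overlap_score(suspect, source), 2))  # -> 0.43
```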
Abstract:
Current debate within forensic authorship analysis has tended to polarise into those who argue that analysis methods should reflect a strong cognitive theory of idiolect and those who see less need to look behind the stylistic variation of the texts they are examining. This chapter examines theories of idiolect and asks how useful or necessary they are to the practice of forensic authorship analysis. Taking a specific text messaging case, the chapter demonstrates that methodologically rigorous, theoretically informed authorship analysis need not appeal to cognitive theories of idiolect in order to be valid. By considering text messaging forensics, lessons are drawn which can contribute to wider debates on the role of theories of idiolect in forensic casework.