9 results for digital forensic tool testing
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Digital evidence requires the same precautions as any other scientific assessment. This thesis provides an overview of the methodological and applied aspects of digital forensics in light of the recent ISO/IEC 27037:2012 standard on the handling of digital exhibits during the identification, collection, acquisition, and preservation of digital data. These methodologies scrupulously comply with the integrity and authenticity requirements set out by the legislation on digital forensics, in particular Law 48/2008, which ratified the Budapest Convention on Cybercrime. With regard to the offence of child pornography, a review of EU and national legislation is offered, emphasizing the aspects relevant to forensic analysis. Since file sharing over peer-to-peer networks is the channel on which the exchange of illicit material is most heavily concentrated, an overview of the most widespread protocols and systems is given, with emphasis on the eDonkey network and the eMule client, both widely used among Italian users. The difficulties encountered in investigating and repressing the phenomenon, a task entrusted to the police forces, are outlined before turning to the main contribution of this work: the forensic analysis of computer systems seized from suspects (or defendants) in child pornography cases. The design and implementation of eMuleForensic makes it possible to analyse, quickly and with great precision, the events that occur during use of the eMule file-sharing software; the tool is available both online at http://www.emuleforensic.com and within the DEFT forensic distribution. Finally, an operational protocol is proposed for the forensic analysis of computer systems involved in child pornography investigations.
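The abstract does not detail eMuleForensic's internal workings, so as a minimal, hypothetical illustration of the integrity requirement cited above (ISO/IEC 27037), the Python sketch below hashes an acquired image in chunks and re-verifies it against the digest recorded at acquisition time; the file name and reference digest are placeholders.

```python
import hashlib

def image_digest(path: str, algo: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Hash a forensic image in fixed-size chunks to keep memory use flat."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical working copy and the digest noted in the acquisition report.
acquired = image_digest("evidence/disk.dd")
recorded = "0123abcd..."  # placeholder for the report's SHA-256 value
print("integrity preserved" if acquired == recorded else "MISMATCH: copy altered")
```

Verifying a working copy in this way leaves the seized medium untouched, which is in keeping with the repeatability concerns raised by Law 48/2008.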
Abstract:
Digital forensics as a field has progressed alongside technological advancement over the years, just as digital devices have grown more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques to digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time and then predicting the possibility of missing communication(s), along with potential topics of discussion. To do this, we first developed a novel approach and included existing models for comparison. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed the term "Digital Forensics AI" (DFAI) to formalize the application of AI in digital forensics, with the objective of highlighting the instruments that facilitate the best evidential outcomes and presentation mechanisms adaptable to the probabilistic output of AI models. Finally, we strengthened our case for the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
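The abstract does not specify the network architecture or features used, so the toy sketch below, on synthetic data, only illustrates the general idea: train a small neural network on a sliding window of weekly e-mail volume and flag weeks whose observed count falls well below the prediction as candidate gaps in the correspondence.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic weekly e-mail counts with a seasonal pattern; weeks 80-82 are
# artificially depressed to mimic deleted communications.
rng = np.random.default_rng(0)
weeks = 120
counts = 20 + 8 * np.sin(np.arange(weeks) * 2 * np.pi / 12) + rng.normal(0, 2, weeks)
counts[80:83] -= 15

window = 8  # predict each week from the 8 preceding weeks
X = np.array([counts[i:i + window] for i in range(weeks - window)])
y = counts[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X[:70], y[:70])  # train on the early, presumed-intact weeks

pred = model.predict(X[70:])
residual = y[70:] - pred
# Weeks far below prediction are candidates for missing communications.
suspect = np.where(residual < -2 * residual.std())[0] + 70 + window
print("weeks with possibly missing e-mail:", suspect)
```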
Abstract:
The purpose of this research is to deepen the study of the section in architecture. The investigation takes as its subject the project Teatro Domestico by Aldo Rossi, built for the XVII Triennale di Milano in 1986, and, by bringing it to bear on several topics of architecture, verifies the timeliness and fertility of the section in new compositional exercises. Through the study of certain areas of Rossi's theory, we sought a common thread for reading the theatre project. The theatre is the place of the ephemeral and the artificial, which is why its destiny is its end and fatal loss. The design and construction of theatre sets has always carried a double meaning, poised between the value of civil architecture and the testing of newly available technologies. Rossi's experiences in this field are clear examples of the inseparable relationship between the representation of architecture as art and the design of architecture as a model of reality. In the Teatro Domestico, the distinction between representation and the real world is constantly cancelled and restored through the reversal of meaning and through shifts of scale. Current studies of Rossi's work address the relationship between architectural composition and the theory of form, focusing on the compositional development of a creative process poised between typological analysis and formal invention. This research, through the analysis of a few projects and drawings, examines the issue through the rules of composition, both graphic and built, in the hope of deciphering the mechanism underlying invention. The almost total lack of published material on the Teatro Domestico, together with the opportunity to visit the archives that preserve its drawings, allowed the author to explore the themes internal to the project, placing this study as a first step toward further analysis of Rossi's works connected to the world of performance. The final aim is therefore to produce material that best describes Rossi's work. Through the reading of material published by the author himself and the examination of unpublished material preserved in the archives, it was possible to develop new material and increase knowledge of a work otherwise difficult to analyse. The research is divided into two parts. The first, taking into account the close relationship, frequently cited by Rossi himself, between archaeology and architectural composition, stresses the importance of the tipo (type) as a system for reading urban composition as well as an open tool of invention. Returning to Ezio Bonfanti's essay on the architect's work, we investigate how the paratactic method is applied in the early works and how the process later reaches an accentuated complexity while keeping its basic terms stable. After a brief introduction to the concept of the section and the different interpretations the term has acquired over time, we sought in this device a methodology for reading Rossi's projects. The result is a consistently typological interpretation of the term, related not only to composition in plan but also to composition in elevation. The section is thus understood as the overturning of the elevation onto the same plane: the terms used reveal a different approach but a similarity of characters. The identification of architectural phonemes allows comparison with the other arts.
The research moves in the direction of language, seeking to identify the relationship between representation and construction, between the ephemeral and the real world. In this sense it highlights the similarities between the graphic material produced by Rossi and some important examples by contemporary authors. The comparison of his compositional system with the surrealist world of painting and literature facilitates the understanding and identification of the possible rules applied by Rossi. The second part of the research focuses on the chosen project. The Teatro Domestico embodies a number of elements that seem to conclude (marking an end point but also a new beginning) the author's path: with it, the experiments on the theatre begun with the Teatrino Scientifico (1978) and continued with the Teatro del Mondo (1979) culminate in a lay tabernacle representing the collective and private memory of the city. Starting from a reading of the project through the collected published material, we analysed the explicit themes of the work and traced their conceptual references. Following the examination of unpublished original materials kept in the Aldo Rossi Archive Collection of the Canadian Centre for Architecture in Montréal, a virtual reconstruction of the project was carried out using current techniques of digital representation, adding to the existing material a new element for future studies. The reconstruction is part of a broader line of research in which current technologies of composition and representation in architecture stand side by side with the study of this architect's compositional method. The results obtained add to past experiences in reconstructing some of Aldo Rossi's lost works. A partial objective is to reactivate discourse around a work considered minor among the many born of his prolific activity, re-evaluating his ephemeral works and restoring to them the value they have earned. In conclusion, the research aims to open a new field of interest in the section, not only as a technical instrument for representing an idea but as an actual mechanism through which composition takes shape and the idea is developed.
Abstract:
As a positive response to requests coming from the legal world, often too distant from the scientific one, the aim is to develop a system that is technically sound and legally clear, directed at a better search for the truth. The goal is to create a versatile, easy-to-use tool to be made available to the judicial authority (A.G.) and, where appropriate, to the operating judicial police (P.G.), allowing investigations to proceed very rapidly and with a considerable reduction in justice costs compared with a standard court-appointed expert examination (CTU). The project will focus on the forensic analysis of digital media involved in various types of proceedings for which a CTU or expert report would normally be required. The scientific experiment provides for the direct participation of the P.G. and the A.G. in the forensic analysis by making the contents of the seized media available as a virtual machine, so that they can be examined exactly as on the original media. In this way the technical consultant (CT) becomes a mere guide for the P.G. and the A.G. in the digital forensic investigation, accompanying the judge and the parties toward the best understanding of the information requested by the court's questions. The key phases of the experiment are:
• repeatability of the operations performed
• clear guidelines for the chain of custody from the moment the media are taken into charge
• data preservation and transmission methods that guarantee the integrity and confidentiality of the data
• reduced time and costs compared with standard CTU/expert reports
• direct viewing by the parties and the judge of the contents of the analysed media, restricted to the information relevant for the purposes of justice
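As a sketch of how the chain-of-custody and integrity points above might be recorded in practice (the field set below is hypothetical, not the protocol proposed in this thesis), each handover of a seized medium can be appended to a log together with the medium's digest, so integrity can be re-verified at every step:

```python
import hashlib, json, datetime

def sha256_of(path: str) -> str:
    """Digest of the media image, recomputed at every custody change."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_handover(logfile: str, media_image: str, holder: str, action: str) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "holder": holder,      # who takes custody
        "action": action,      # e.g. "acquisition", "handover to A.G."
        "image": media_image,  # hypothetical path to the image file
        "sha256": sha256_of(media_image),
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_handover("custody.jsonl", "evidence/disk.dd", "technical consultant", "acquisition")
```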
Abstract:
Hair cortisol is a novel marker for measuring long-term cortisol secretion, free from many of the methodological caveats associated with other matrices such as plasma, saliva, urine, milk, and faeces. For decades, hair analysis has been used successfully in forensic science and toxicology to evaluate exposure to exogenous substances and to assess endogenous steroid hormones. The evaluation of cortisol in the hair matrix began about a decade ago and has developed remarkably over the past five years, advancing knowledge and establishing the method as a new and efficient way to study hypothalamic-pituitary-adrenal (HPA) axis activity over long time periods. In farm animals, certain environmental or management conditions can activate the HPA axis. Given the importance of cortisol in monitoring HPA axis activity, a first approach involved studying the distribution of hair cortisol concentrations (HCC) in healthy dairy cows, which showed a physiological range of variation for this hormone. HCC were also significantly influenced by changes in environmental conditions, and significantly higher HCC were detected in clinically or physiologically compromised cows, suggesting that these animals were subject to repeated HPA axis activation. Additionally, crossbred F1 heifers showed significantly lower HCC than purebred animals, and a breed influence was also seen on HPA axis activity stimulated by an environmental change, indicating a higher level of resilience and better adaptability to the environment in certain genotypes. Hair also proved to be an excellent matrix for studying activation of the HPA axis during the perinatal period. The use of hair analysis in research holds great promise for significantly enhancing current understanding of the role of the HPA axis over long periods of time.
Abstract:
This PhD thesis discusses the impact of Cloud Computing infrastructures on Digital Forensics in their twofold role as a target of investigations and as a helping hand to investigators. The Cloud offers cheap and almost limitless computing power and storage space, which can be leveraged to commit either new or old crimes and to host the related traces. Conversely, the Cloud can help forensic examiners find clues sooner and more effectively than traditional analysis applications, thanks to its dramatically improved evidence-processing capabilities. In both cases, a new arsenal of software tools needs to be made available. The development of this novel weaponry, and its technical and legal implications from the point of view of the repeatability of technical assessments, is discussed throughout the following pages and constitutes the unprecedented contribution of this work.
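As a toy illustration of the "helping hand" claim (not a tool from the thesis), the sketch below keyword-scans evidence files in parallel, with the local process pool standing in for the horizontally scaled workers a cloud platform would provide; the mount point and search terms are hypothetical:

```python
from multiprocessing import Pool
from pathlib import Path

KEYWORDS = (b"password", b"bitcoin", b"invoice")  # hypothetical search terms

def scan(path: Path):
    """Return (file, matched keywords) if any term appears in the file."""
    data = path.read_bytes()
    hits = [k.decode() for k in KEYWORDS if k in data]
    return (str(path), hits) if hits else None

if __name__ == "__main__":
    files = [f for f in Path("evidence").rglob("*") if f.is_file()]
    with Pool() as pool:  # one worker per core; a cloud deployment scales this out
        for result in filter(None, pool.map(scan, files)):
            print(result)
```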
Abstract:
The research presented herein aims to investigate the strengths and weaknesses of a relatively new technique called phytoscreening. Parallel to the better-known phytoremediation, it exploits the absorbing potential of trees to delineate groundwater contamination plumes, especially for chlorinated ethenes (i.e., PCE, TCE, cis-1,2-DCE, and VC). The latter are prevalent groundwater contaminants, but their fate and transport in surface ecosystems such as trees are still poorly understood and subject to high variability. Moreover, the analytical validity of tree-coring is still limited in many countries due to a lack of knowledge of its application opportunities. Tree-cores are extracted from trunks and generally analyzed by gas chromatography/mass spectrometry. A systematic review of the literature on phytoscreening for chlorinated ethenes is presented in this PhD thesis to evaluate the factors influencing the effectiveness of the technique. In addition, we tested the technique by probing eight sites contaminated by chlorinated ethenes in Italy (Emilia-Romagna) under different hydrogeological and seasonal settings. We coupled the technique with the assessment of gaseous-phase concentrations directly on site, inserting detector tubes or a photoionization detector into the tree-holes left by the coring tool. Finally, we applied rank-order statistics to the field data, along with literature data, to assess under which conditions phytoscreening should be applied to screen or monitor environmental contamination. A relatively high correlation exists between tree-core and groundwater concentrations (Spearman's ρ > 0.6); it is higher for compounds with higher sorption, for sites with shallower and thinner aquifers, and when specific tree types are sampled with standardized sampling and extraction protocols. These results indicate the opportunity to assess the occurrence, type, and concentration of solvents directly from tree stems. This can reduce the costs of characterization surveys, allowing rapid identification of hotspots and plume direction and thus optimizing the drilling of boreholes.
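The rank-order statistic mentioned above can be reproduced in a few lines; the concentrations below are synthetic stand-ins, not the thesis's field data:

```python
import numpy as np
from scipy.stats import spearmanr

# Paired measurements: PCE in tree-cores vs. PCE in nearby monitoring wells.
tree_core_ugkg = np.array([0.5, 1.2, 3.4, 0.1, 7.8, 2.2, 5.1, 0.9])
groundwater_ugl = np.array([12, 60, 95, 4, 210, 30, 150, 22])

rho, p_value = spearmanr(tree_core_ugkg, groundwater_ugl)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
# rho close to 1 means tree coring ranks contamination hotspots the same way
# wells do, even if absolute concentrations differ between the two matrices.
```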
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of a knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process consisting of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing is a crucial and occasionally overlooked step of the process, owing to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires considerable time and effort from ontology engineers. The lack of tool support is also noticeable in the requirement elicitation process. Here, the rising adoption and accessibility of knowledge graphs allow for the development and use of automated tools that assist with eliciting requirements from such a complementary source of data. Therefore, this doctoral research focuses on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method for extracting competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
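The abstract does not detail XDTesting's workflow, but a common way to test an ontology against a competency question is to encode the CQ as a SPARQL query and require a non-empty answer set; a minimal sketch with a hypothetical namespace and toy data:

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.thesis1, RDF.type, EX.Document))
g.add((EX.alice, EX.authored, EX.thesis1))

# CQ: "Which person authored which document?" encoded as SPARQL.
cq = """
SELECT ?person ?doc WHERE {
    ?person a <http://example.org/Person> ;
            <http://example.org/authored> ?doc .
    ?doc a <http://example.org/Document> .
}
"""
results = list(g.query(cq))
print("CQ passes" if results else "CQ fails", "-", len(results), "answer(s)")
```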
Abstract:
Amid the remarkable growth of innovative technologies, particularly immersive technologies such as Extended Reality (XR) (comprising Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR)), a transformation is unfolding in the way we collaborate and interact. The current research explores XR's potential for co-creation activities and proposes XR as a future co-creation platform. It strives to develop an XR-based co-creation system and to actively engage stakeholders in the co-creation process, with the goal of enhancing their creative businesses. The research leverages XR tools to investigate how they can enhance digital co-creation methods and to determine whether the system facilitates efficient and effective value creation during XR-based co-creation sessions. Specifically, the research probes whether the XR-based co-creation method and environment enhance the quality and novelty of ideas, reduce communication challenges by providing a better understanding of the product, problem, or process, and optimize the process in terms of time and cost. The research introduces a multi-user, multi-sensory, collaborative, and interactive XR platform that adapts to various use-case scenarios. This thesis also presents the user testing performed to collect both qualitative and quantitative data, which serve to substantiate the hypothesis. What sets this XR system apart is its incorporation of fully functional prototypes into a mixed reality environment, providing users with a unique dimension within an immersive digital landscape. The outcomes of the experimental studies demonstrate that XR-based co-creation surpasses conventional desktop co-creation methods and that, remarkably, the results are even comparable to a full mock-up test. In conclusion, the research underscores that the use of XR as a tool for co-creation generates substantial value: it serves as a method that enhances the process, an environment that fosters interaction and collaboration, and a platform that equips stakeholders with the means to engage effectively.