953 results for computer forensics tools
Abstract:
The cybernetics revolution of recent years has greatly improved our lives, giving us immediate access to services and a huge amount of information over the Internet. Nowadays users are increasingly asked to enter sensitive information online, leaving traces everywhere. However, some categories of people cannot risk revealing their identities on the Internet. Although originally created to protect U.S. intelligence communications online, Tor is today the most famous low-latency network that guarantees both the anonymity and the privacy of its users. The aim of this thesis project is to understand in depth how the Tor protocol works, not only by studying its theory but also by implementing those concepts in practice, with particular attention to security topics. In order to run a private Tor network that emulates the real one, a virtual testing environment was configured. This setup allows experiments to be conducted without putting the anonymity and privacy of real users at risk. We used a Tor patch that stores TLS and circuit keys, which are given as inputs to a Tor dissector for Wireshark in order to obtain decrypted and decoded traffic. Observing cleartext traffic allowed us to verify the protocol outline and to confirm the format of each cell. In addition, these tools allowed us to identify a traffic pattern, which was used to conduct a traffic correlation attack that passively deanonymizes hidden service clients. An attacker controlling two nodes of the Tor network is able to link a request for a given hidden service to the client who made it, thereby deanonymizing that client. The robustness of the traffic pattern and the statistics of the attack, such as the true positive rate and the false positive rate, are left as potential future work.
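The correlation step described above can be illustrated with a minimal sketch (not taken from the thesis): traffic observed at two vantage points is binned into fixed intervals and the resulting cell-count patterns are compared. Function names, parameters, and the synthetic data are assumptions for illustration only.

```python
# Illustrative sketch (not the thesis code): bin the cells seen at two vantage
# points into fixed intervals and compare the resulting traffic patterns.
import numpy as np

def cell_count_series(timestamps, interval=0.1, duration=30.0):
    """Bin cell timestamps (seconds) into fixed-width intervals."""
    bins = np.arange(0.0, duration + interval, interval)
    counts, _ = np.histogram(timestamps, bins=bins)
    return counts.astype(float)

def correlation_score(ts_a, ts_b, interval=0.1, duration=30.0):
    """Pearson correlation between two binned traffic patterns."""
    a = cell_count_series(ts_a, interval, duration)
    b = cell_count_series(ts_b, interval, duration)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic demo: a delayed, jittered copy of a flow scores noticeably higher
# than an unrelated flow.  The decision threshold (and hence the true/false
# positive rates mentioned above) is an assumption left to the analyst.
rng = np.random.default_rng(0)
flow = np.cumsum(rng.exponential(0.05, size=400))
same_flow = flow + 0.02 + rng.normal(0.0, 0.005, size=400)
other_flow = np.cumsum(rng.exponential(0.05, size=400))
print(correlation_score(flow, same_flow))    # higher score: likely the same circuit
print(correlation_score(flow, other_flow))   # lower score: unrelated traffic
```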
Abstract:
Domain-specific languages (DSLs) are increasingly used as embedded languages within general-purpose host languages. DSLs provide a compact, dedicated syntax for specifying parts of an application related to specialized domains. Unfortunately, such language extensions typically do not integrate well with the development tools of the host language. Editors, compilers and debuggers are either unaware of the extensions, or must be adapted at a non-trivial cost. We present a novel approach to embed DSLs into an existing host language by leveraging the underlying representation of the host language used by these tools. Helvetia is an extensible system that intercepts the compilation pipeline of the Smalltalk host language to seamlessly integrate language extensions. We validate our approach by case studies that demonstrate three fundamentally different ways to extend or adapt the host language syntax and semantics.
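As a rough illustration of the interception idea (Helvetia itself hooks the Smalltalk compiler; the snippet below is a hypothetical Python analogue with an invented `rule` syntax), an embedded DSL can be rewritten into host code before the ordinary compiler runs:

```python
# Illustrative sketch only: the "rule ... -> ..." DSL below is invented.
# The point is that a language extension can be expanded to ordinary host
# code inside the compilation pipeline, so existing tools keep working.
import re

def expand_dsl(source: str) -> str:
    """Rewrite lines of the form `x = rule <pattern> -> <replacement>`
    into plain host-language code (here: a tuple literal)."""
    pattern = re.compile(r"^(\w+)\s*=\s*rule\s+(\S+)\s*->\s*(\S+)\s*$", re.M)
    return pattern.sub(r'\1 = ("\2", "\3")', source)

def compile_with_extensions(source: str, filename="<dsl>"):
    """Intercept the 'compilation pipeline': transform, then compile normally."""
    return compile(expand_dsl(source), filename, "exec")

program = "greeting = rule hello -> world\nprint(greeting)\n"
exec(compile_with_extensions(program))   # prints ('hello', 'world')
```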
Abstract:
The spatio-temporal control of gene expression is fundamental to elucidate cell proliferation and deregulation phenomena in living systems. Novel approaches based on light-sensitive multiprotein complexes have recently been devised, showing promising perspectives for the noninvasive and reversible modulation of the DNA-transcriptional activity in vivo. This has lately been demonstrated in a striking way through the generation of the artificial protein construct light-oxygen-voltage (LOV)-tryptophan-activated protein (TAP), in which the LOV-2-Jα photoswitch of phototropin1 from Avena sativa (AsLOV2-Jα) has been ligated to the tryptophan-repressor (TrpR) protein from Escherichia coli. Although tremendous progress has been achieved in the generation of such protein constructs, a detailed understanding of their functioning as optogenetic tools is still in its infancy. Here, we elucidate the early stages of the light-induced regulatory mechanism of LOV-TAP at the molecular level, using the noninvasive molecular dynamics simulation technique. More specifically, we find that Cys450-FMN-adduct formation in the AsLOV2-Jα-binding pocket after photoexcitation induces the cleavage of the peripheral Jα-helix from the LOV core, causing a change of its polarity and electrostatic attraction of the photoswitch onto the DNA surface. This is accompanied by the flexibilization, through unfolding, of a hairpin-like helix-loop-helix region interlinking the AsLOV2-Jα and TrpR domains, ultimately enabling the condensation of LOV-TAP onto the DNA surface. By contrast, in the dark state the AsLOV2-Jα photoswitch remains inactive and exerts a repulsive electrostatic force on the DNA surface. This leads to a distortion of the hairpin region, which finally relieves its tension by causing the detachment of LOV-TAP from the DNA.
Abstract:
Recent advances in tissue-engineered cartilage open the door to new clinical treatments of joint lesions. Common to all therapies with in-vitro-engineered autografts is the need for an optimal fit of the construct to allow screwless implantation and optimal integration into the live joint. Computer-assisted surgery (CAS) techniques are prime candidates to ensure the required accuracy while at the same time simplifying the procedure. A pilot study was conducted with the aim of assembling a new set of methods to support ankle joint arthroplasty using bioengineered autografts. Computer assistance allows planning of the implant shape on a computed tomography (CT) image, manufacturing the construct according to the plan, and intraoperatively navigating the surgical tools for implantation. A rotationally symmetric model of the joint surface was used to avoid segmentation of the CT image; new software was developed to determine the joint axis and make the implant shape parameterizable. A complete cycle of treatment from planning to operation was conducted on a human cadaveric foot, thus proving the feasibility of computer-assisted arthroplasty using bioengineered autografts.
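One plausible way to determine such a joint axis without segmentation, sketched below purely for illustration (the study's own software is not reproduced here), is to fit a circle to the joint contour in each sagittal slice and then fit a line through the circle centres:

```python
# Hypothetical sketch: estimate the axis of a rotationally symmetric joint
# surface from per-slice contour points.
import numpy as np

def fit_circle_2d(points):
    """Kasa algebraic circle fit: returns (center_y, center_z, radius)."""
    y, z = points[:, 0], points[:, 1]
    A = np.column_stack([2 * y, 2 * z, np.ones_like(y)])
    b = y**2 + z**2
    cy, cz, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cy, cz, np.sqrt(c + cy**2 + cz**2)

def joint_axis(slices):
    """slices: list of (x0, Nx2 array of (y, z) contour points per sagittal slice).
    Returns a point on the axis and its unit direction."""
    centres = []
    for x0, pts in slices:
        cy, cz, _ = fit_circle_2d(pts)
        centres.append([x0, cy, cz])
    centres = np.asarray(centres)
    mean = centres.mean(axis=0)
    _, _, vt = np.linalg.svd(centres - mean)   # direction = first principal component
    return mean, vt[0]

# Tiny demo: three slices of a cylinder of radius 12 centred on the x-axis.
theta = np.linspace(0.3, 2.2, 40)
slices = [(x0, np.column_stack([12 * np.cos(theta) + 1.0, 12 * np.sin(theta) - 2.0]))
          for x0 in (0.0, 5.0, 10.0)]
print(joint_axis(slices))   # axis through (x, 1.0, -2.0), direction ~ (1, 0, 0)
```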
Abstract:
Objective: To benefit from the obvious advantages of minimally invasive liver surgery, high-precision tools for intraoperative anatomical orientation, navigation and safety control are needed. In a pilot study we adapted a newly developed system for computer-assisted liver surgery (CALS), in terms of accuracy and technical feasibility, to the specific requirements of laparoscopy. Here, we present practical aspects of laparoscopic computer-assisted liver surgery (LCALS). Methods: Our video relates to a patient presenting with 3 colorectal liver metastases in Seg. II, III and IVa, who was selected in an appropriate oncological setting for LCALS using the CAScination system combined with 3D MEVIS reconstruction. After minimal laparoscopic mobilization of the liver, a 4-landmark registration method was applied to enable navigation. Microwave needles were placed using the targeting module of the navigation system, and correct needle positioning was confirmed by intraoperative sonography. Ablation of each lesion was carried out by applying microwave energy at 100 Watts for 1 minute. Results: To acquire an accurate (less than 0.5 cm) registration, 4 registration cycles were necessary. In total, seven minutes were required to accomplish precise registration. Successful ablation with complete response in all treated areas was assessed by intraoperative sonography and confirmed by postoperative CT scan. Conclusions: This teaching video demonstrates the theoretical and practical key points of LCALS, with special emphasis on preoperative planning, intraoperative registration and accuracy testing by laparoscopic methodology. In contrast to purely ultrasound-guided ablation of liver lesions, LCALS offers targeting in more dimensions and greater safety control. It is also in routine use to treat vanishing lesions and other difficult-to-target focal lesions within the liver.
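The landmark registration underlying this workflow is, in general terms, a paired-point rigid transform. The following sketch shows the standard Kabsch/SVD formulation together with the residual error used as an accuracy figure; it is illustrative only and not the CAScination implementation.

```python
# Paired-landmark rigid registration (Kabsch/SVD), illustrative sketch.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform mapping src landmarks onto dst landmarks.
    src, dst: (N, 3) arrays of corresponding points. Returns (R, t)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def registration_error(src, dst, R, t):
    """Root-mean-square residual after registration (the 'accuracy' figure)."""
    residuals = (R @ src.T).T + t - dst
    return float(np.sqrt((residuals**2).sum(axis=1).mean()))

# Example with four hypothetical landmarks (mm) before and after a known motion.
src = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], dtype=float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
dst = (R_true @ src.T).T + np.array([10.0, -5.0, 2.0])
R, t = rigid_register(src, dst)
print(registration_error(src, dst, R, t))   # ~0 for noise-free landmarks
```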
Abstract:
This thesis covers a broad part of the field of computational photography, including video stabilization and image warping techniques, introductions to light field photography, and the conversion of monocular images and videos into stereoscopic 3D content. We present a user-assisted technique for stereoscopic 3D conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as guides for an image warp that produces a stereo image pair. Our method is most suitable for scenes with large-scale structures such as buildings, and it skips the step of constructing a depth map. Further, we propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotations. Users can acquire such data easily in a few seconds by moving a hand-held camera. We convert the input into a regularly sampled 3D light field by resampling and aligning the images in the spatio-temporal domain. We also present a novel technique for high-quality disparity estimation from light fields. Finally, we show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
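Digital refocusing from such a light field can be summarized by a shift-and-add operation over the aligned views. The sketch below is a simplified illustration under the assumptions stated in its comments, not the thesis code.

```python
# Minimal sketch of synthetic-aperture refocusing from a 3D (one-dimensional
# parallax) light field: shift each view proportionally to its camera position
# along the path and average.  Assumes views are already resampled and aligned.
import numpy as np

def refocus(views, positions, disparity_per_unit):
    """views: list of HxWx3 float arrays; positions: 1D camera coordinates.
    disparity_per_unit selects the focal plane (pixels of shift per unit baseline)."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, s in zip(views, positions):
        shift = int(round(s * disparity_per_unit))
        acc += np.roll(img, shift, axis=1)   # horizontal shift; wraps at the border
    return acc / len(views)

# Toy usage with random images just to show the call shape.
views = [np.random.rand(4, 6, 3) for _ in range(5)]
print(refocus(views, positions=[-2, -1, 0, 1, 2], disparity_per_unit=1.0).shape)
```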
Abstract:
Introduced about two decades ago, computer-assisted orthopedic surgery (CAOS) has emerged as a new and independent area, due to the importance of treatment of musculoskeletal diseases in orthopedics and traumatology, increasing availability of different imaging modalities, and advances in analytics and navigation tools. The aim of this paper is to present the basic elements of CAOS devices and to review state-of-the-art examples of different imaging modalities used to create the virtual representations, of different position tracking devices for navigation systems, of different surgical robots, of different methods for registration and referencing, and of CAOS modules that have been realized for different surgical procedures. Future perspectives will also be outlined.
Abstract:
Data Management Plans are now more comprehensive and complex than in the past. Libraries around the nation are trying to put together tools to help researchers write plans that conform to the new requirements. This session will look at some of these tools.
Abstract:
At the beginning of the 1990s, ontology development was similar to an art: ontology developers did not have clear guidelines on how to build ontologies, only some design criteria to be followed. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, mentioning the most outstanding and widely used methodologies, languages, and tools for building ontologies. In addition, we briefly discuss how all these elements can be used in the Linked Data initiative.
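For readers unfamiliar with these tools, the toy example below (not from the paper) shows how an ontology fragment can be built and queried with the Python library rdflib; the namespace, class names, and individuals are invented.

```python
# Toy illustration: a tiny ontology fragment built and queried with rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))
g.add((EX.alice, RDF.type, EX.Researcher))
g.add((EX.alice, RDFS.label, Literal("Alice")))

# SPARQL query over the toy ontology.
results = g.query("""
    PREFIX ex: <http://example.org/onto#>
    SELECT ?who WHERE { ?who a ex:Researcher }
""")
for row in results:
    print(row.who)   # http://example.org/onto#alice
```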
Abstract:
The conceptual design phase is only partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing discontinuity of the design information flow: customer needs → functional requirements → key characteristics → design parameters (DPs) → geometric DPs. To address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process; to evaluate, through a pilot development, how the commercial PLM/CAD modules and application programming interface could support the information flow; and, based on the pilot results, to propose a full development framework.
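The information flow named above can be pictured as a simple traceability chain. The sketch below is a hypothetical data model for that chain, not the article's PLM/CAD integration or its API; the example values are invented.

```python
# Hypothetical traceability chain: customer needs -> functional requirements ->
# key characteristics -> design parameters -> geometric DPs (CAD features).
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeometricDP:
    name: str            # e.g. a driving dimension in the CAD model
    cad_reference: str   # identifier of the feature or sketch dimension

@dataclass
class DesignParameter:
    name: str
    geometric_dps: List[GeometricDP] = field(default_factory=list)

@dataclass
class KeyCharacteristic:
    name: str
    failure_modes: List[str] = field(default_factory=list)       # FMEA link
    design_parameters: List[DesignParameter] = field(default_factory=list)

@dataclass
class FunctionalRequirement:
    statement: str
    key_characteristics: List[KeyCharacteristic] = field(default_factory=list)

@dataclass
class CustomerNeed:
    statement: str
    weight: float                                                 # QFD importance
    functional_requirements: List[FunctionalRequirement] = field(default_factory=list)

# Invented example instance of the chain.
dp = DesignParameter("spring stiffness", [GeometricDP("coil diameter", "Sketch1.d3")])
kc = KeyCharacteristic("hinge torque", failure_modes=["hinge jams"], design_parameters=[dp])
fr = FunctionalRequirement("open with a force below 5 N", key_characteristics=[kc])
need = CustomerNeed("easy one-handed operation", weight=4.5, functional_requirements=[fr])
print(need)
```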
Abstract:
Auxetic materials (or metamaterials) are those with a negative Poisson ratio (NPR); they display the unexpected property of lateral expansion when stretched, as well as an equal and opposing densification when compressed. Such geometries are progressively being employed in the development of novel products, especially in the fields of intelligent expandable actuators, shape-morphing structures and minimally invasive implantable devices. Although several auxetic and potentially auxetic geometries have been summarized in previous reviews and research, precise information regarding properties relevant to design tasks is not always provided. Here we present a comparative study of two-dimensional and three-dimensional auxetic geometries carried out by means of computer-aided design and engineering tools (hereafter CAD–CAE). The first part of the study is focused on the development of a CAD library of auxetics. Once the library is developed, we simulate the behavior of the different auxetic geometries and elaborate a systematic comparison, considering relevant properties of these geometries, such as Poisson ratio(s), the maximum attainable volume or area reduction and the equivalent Young's modulus, in the hope that it may provide useful information for future designs of devices based on these interesting structures.
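The central quantity in such a comparison, the effective Poisson ratio, is obtained from the simulated strains of a unit cell. A minimal sketch with made-up numbers:

```python
# Effective Poisson ratio from a simulated unit cell: nu = -lateral/axial strain.
def poisson_ratio(l0_axial, l1_axial, l0_lateral, l1_lateral):
    """Engineering strains from initial (l0) and deformed (l1) dimensions."""
    axial_strain = (l1_axial - l0_axial) / l0_axial
    lateral_strain = (l1_lateral - l0_lateral) / l0_lateral
    return -lateral_strain / axial_strain

# Made-up example: stretching the cell by 2% axially while its width also grows
# by 1% gives a negative Poisson ratio, i.e. auxetic behavior.
print(poisson_ratio(10.0, 10.2, 20.0, 20.2))   # -> -0.5
```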
Abstract:
Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment and, ultimately, to ensuring that software products, contents, and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process where they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack any systematic guidance when facing the implementation of accessibility evaluation functionalities. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process), and providing templates of these artifacts tailored to accessibility evaluation tools. The work presented gives special consideration to the work in progress in this area by the W3C/WAI Evaluation and Report Working Group (ERT WG).
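As a toy example of the kind of automated evaluation functionality such tools implement (not part of the paper or of the ERT WG work), the snippet below flags images without text alternatives:

```python
# Toy accessibility check: report <img> elements lacking alt text
# (in the spirit of WCAG success criterion 1.1.1).
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.issues.append(f"<img src={attrs.get('src', '?')!r}> lacks alt text")

checker = AltTextChecker()
checker.feed('<p><img src="logo.png"><img src="chart.png" alt="Sales chart"></p>')
print(checker.issues)   # one issue reported, for logo.png
```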
Abstract:
The crop simulation model AquaCrop, recently developed by FAO, can be used for a wide range of purposes. However, in its present form, its use over large areas or for applications that require a large number of simulation runs (e.g., long-term analyses) is not practical without software to facilitate such applications. Two tools for managing the inputs and outputs of AquaCrop, named AquaData and AquaGIS, have been developed for this purpose and are presented here. Both software utilities have been programmed in Delphi v. 5; in addition, AquaGIS requires the Geographic Information System (GIS) programming tool MapObjects. These utilities allow the efficient management of input and output files, along with a GIS module to perform spatial analysis and visualization of the results, facilitating knowledge dissemination. A sample application of the utilities is given here: an AquaCrop simulation analysis of the impact of climate change on wheat yield in Southern Spain, which requires extensive input data preparation and output processing. Using AquaCrop without the two utilities would have required approximately 1000 h of work, whereas AquaData and AquaGIS reduced that time by more than 99%. Furthermore, the use of GIS made it possible to perform a spatial analysis of the results, thus providing a new option to extend the use of the AquaCrop model to scales requiring spatial and temporal analyses.
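A conceptual sketch of the batch-management idea follows. AquaData and AquaGIS are Delphi applications; the Python below neither reproduces their code nor AquaCrop's file formats, and the simulation function is a placeholder with invented inputs.

```python
# Conceptual sketch: prepare one input set per grid cell and scenario, run the
# crop model for each combination, and collect the outputs in a single table.
import csv

def run_aquacrop(cell, scenario):
    """Placeholder for invoking the crop model for one grid cell / scenario.
    In practice this would write the model's input files, call the executable,
    and parse its output; here it returns a dummy yield (t/ha)."""
    return 3.0 + 0.1 * cell["rainfall_index"] - 0.2 * scenario["delta_T"]

def batch_run(cells, scenarios, out_path="wheat_yield.csv"):
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["cell_id", "scenario", "yield_t_ha"])
        for cell in cells:
            for scenario in scenarios:
                writer.writerow([cell["id"], scenario["name"],
                                 run_aquacrop(cell, scenario)])

# Invented example grid cells and climate scenarios.
cells = [{"id": 1, "rainfall_index": 5.0}, {"id": 2, "rainfall_index": 3.0}]
scenarios = [{"name": "baseline", "delta_T": 0.0}, {"name": "plus2.5C", "delta_T": 2.5}]
batch_run(cells, scenarios)
```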