Suite of tools for statistical N-gram language modeling for pattern mining in whole genome sequences
Abstract:
Genome sequences contain a number of patterns that have biomedical significance. Repetitive sequences of various kinds are a primary component of most genomic sequence patterns. We extended the suffix-array based Biological Language Modeling Toolkit to compute n-gram frequencies as well as n-gram language-model based perplexity in windows over the whole genome sequence to find biologically relevant patterns. We present the suite of tools and their application to analysis of the whole human genome sequence.
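The windowed n-gram computation the toolkit performs can be illustrated with a minimal Python sketch. This is only an illustration of the idea, not the toolkit's suffix-array implementation; the sequence, window size, and probability model below are made up for the example:

```python
import math
from collections import Counter

def window_ngram_counts(seq, n=3, window=8, step=4):
    """n-gram frequency Counter for each sliding window over seq."""
    out = []
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window]
        out.append((start, Counter(w[i:i + n] for i in range(len(w) - n + 1))))
    return out

def perplexity(window, model, n):
    """Perplexity of a window under an n-gram probability model
    (a dict mapping n-grams to probabilities; unseen grams get a floor)."""
    grams = [window[i:i + n] for i in range(len(window) - n + 1)]
    log_prob = sum(math.log2(model.get(g, 1e-9)) for g in grams)
    return 2.0 ** (-log_prob / len(grams))

# toy example: dinucleotide counts over a short synthetic sequence
profile = window_ngram_counts("ACGTACGTAAAA", n=2, window=8, step=4)
```

Windows whose perplexity under a background model is unusually low are candidates for repetitive or otherwise over-represented patterns.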
Abstract:
CAELinux is a Linux distribution bundled with free software packages related to Computer Aided Engineering (CAE), including software for building three-dimensional solid models, programs for meshing a geometry, software for carrying out Finite Element Analysis (FEA), programs for image processing, etc. The present work has two goals: 1) to give a brief description of CAELinux, and 2) to demonstrate that CAELinux can be useful for Computer Aided Engineering, using the example of the three-dimensional reconstruction of a pig liver from a stack of CT-scan images. Reconstructing the liver with commercial software instead of CAELinux would cost a great deal of money, whereas CAELinux is a free and open source operating system and all of its bundled software packages are also free. CAELinux can therefore be a very useful tool in application areas such as surgical simulation, which require three-dimensional reconstructions of biological organs, and for Computer Aided Engineering in general.
Abstract:
A theoretical study has been carried out at the B3LYP/LANL2DZ level to compare the reactivity of phenyl isocyanate and phenyl isothiocyanate towards titanium(IV) alkoxides. Isocyanates are shown to favour both mono insertion and double insertion reactions. Double insertion in a head-to-tail fashion is shown to be more exothermic than double insertion in a head-to-head fashion. The head-to-head double insertion leads to the metathesis product, a carbodiimide, after the extrusion of carbon dioxide. In the case of phenyl isothiocyanate, calculations favour the formation of only mono insertion products. Formation of a double insertion product is highly unfavourable. Further, these studies indicate that the reverse reaction involving the metathesis of N,N′-diphenyl carbodiimide with carbon dioxide is likely to proceed more efficiently than the metathesis reaction with carbon disulphide. This is in excellent agreement with experimental results as metathesis with carbon disulphide fails to occur. In a second study, multilayer MM/QM calculations are carried out on intermediates generated from reduction of titanium(IV) alkoxides to investigate the effect of alkoxy bridging on the reactivity of multinuclear Ti species. Bimolecular coupling of imines initiated by Ti(III) species leads to a mixture of diastereomers and not diastereoselective coupling of the imine. However, if the reaction is carried out by a trimeric biradical species, diastereoselective coupling of the imine is predicted. The presence of alkoxy bridges greatly favours the formation of the d,l (+/-) isomer, whereas the intermediate without alkoxy bridges favours the more stable meso isomer. As a bridged trimeric species, stabilized by bridging alkoxy groups, correctly explains the diastereoselective reaction, it is the most likely intermediate in the reaction.
Abstract:
In the product conceptualization phase of design, sketches are often used for exploration of diverse behaviour patterns of the components to achieve the required functionality. This paper presents a method to animate the sketch produced using a tablet interface to aid verification of the desired behaviour. A sketch is a spatial organization of strokes whose perceptual organization helps one to visually interpret its components and their interconnections. A Gestalt based segmentation followed by interactive grouping and articulation, presented in this paper, enables one to use a mechanism simulation framework to animate the sketch in a “pick and drag” mode to visualize different configurations of the product and gain insight into the product’s behaviour.
Abstract:
Electrical Impedance Tomography (EIT) is a computerized medical imaging technique which reconstructs the electrical impedance images of a domain under test from the boundary voltage-current data measured by an EIT electronic instrumentation using an image reconstruction algorithm. Being a computed tomography technique, EIT injects a constant current into the patient's body through the surface electrodes surrounding the domain to be imaged (Omega) and calculates the spatial distribution of electrical conductivity or resistivity of the closed conducting domain using the potentials developed at the domain boundary (partial derivative Omega). Practical phantoms are essential for studying, testing and calibrating a medical EIT system in order to certify the system before applying it on patients for diagnostic imaging. EIT phantoms are therefore required to generate boundary data for studying and assessing the instrumentation and inverse solvers in EIT. For proper assessment of an inverse solver of a 2D EIT system, a perfect 2D practical phantom is required. As practical phantoms are assemblies of objects with 3D geometries, developing a practical 2D phantom is a great challenge, and the boundary data generated from practical phantoms with 3D geometry are consequently inappropriate for assessing a 2D inverse solver. Furthermore, the boundary data errors contributed by the instrumentation are difficult to separate from the errors introduced by the 3D phantoms. Hence, error-free boundary data are essential to assess the inverse solver in 2D EIT. In this direction, a MATLAB-based Virtual Phantom for 2D EIT (MatVP2DEIT) is developed to generate accurate boundary data for assessing the 2D-EIT inverse solvers and the image reconstruction accuracy.
MatVP2DEIT is a MATLAB-based computer program which simulates a phantom in the computer and generates the boundary potential data as outputs, using combinations of different phantom parameters as inputs to the program. Phantom diameter, inhomogeneity geometry (shape, size and position), number of inhomogeneities, applied current magnitude, background resistivity and inhomogeneity resistivity are all set as phantom variables, which are provided as input parameters to MatVP2DEIT for simulating different phantom configurations. A constant current injection is simulated at the phantom boundary with different current injection protocols and the boundary potential data are calculated. Boundary data sets are generated for different phantom configurations obtained with different combinations of the phantom variables, and the resistivity images are reconstructed using EIDORS. Boundary data of virtual phantoms containing inhomogeneities with complex geometries are also generated for different current injection patterns using MatVP2DEIT, and the resistivity imaging is studied. The effect of the regularization method on image reconstruction is also studied with the data generated by MatVP2DEIT. Resistivity images are evaluated by studying the resistivity parameters and contrast parameters estimated from the elemental resistivity profiles of the reconstructed phantom domain. Results show that MatVP2DEIT generates accurate boundary data for different types of single or multiple objects, efficient and accurate enough to reconstruct the resistivity images in EIDORS. The spatial resolution studies show that resistivity imaging conducted with the boundary data generated by MatVP2DEIT with 2048 elements can reconstruct two circular inhomogeneities placed with a minimum distance (boundary to boundary) of 2 mm.
It is also observed that, in MatVP2DEIT with 2048 elements, the boundary data generated for a phantom with a circular inhomogeneity whose diameter is less than 7% of that of the phantom domain can produce resistivity images in EIDORS with a 1968-element mesh. Results also show that MatVP2DEIT accurately generates the boundary data for neighbouring, opposite reference and trigonometric current patterns, which are very suitable for resistivity reconstruction studies. MatVP2DEIT-generated data are also found suitable for studying the effect of different regularization methods on the reconstruction process. By comparing the reconstructed image with the original geometry defined in MatVP2DEIT, the resistivity imaging procedures as well as the inverse solver performance can be studied more easily. Using the proposed MatVP2DEIT software with modified domains, the cross-sectional anatomy of a number of body parts can be simulated on a PC and the impedance image reconstruction of human anatomy can be studied.
Abstract:
Polyhedral techniques for program transformation are now used in several proprietary and open source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs which we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x while running on an 8-core Intel system.
Abstract:
Package-board co-design plays a crucial role in determining the performance of high-speed systems. Although several commercial solutions exist for electromagnetic analysis and verification, the lack of Computer Aided Design (CAD) tools for SI-aware design and synthesis leads to longer design cycles and non-optimal package-board interconnect geometries. In this work, the functional similarities between package-board design and radio-frequency (RF) imaging are explored. Consequently, qualitative methods common in the imaging community, such as Tikhonov Regularization (TR) and the Landweber method, are applied to solve multi-objective, multi-variable package design problems. In addition, a new hierarchical iterative piecewise linear algorithm is developed as a wrapper over LBP for an efficient solution in the design space.
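Tikhonov Regularization and the Landweber method are standard linear inverse-problem techniques. A minimal generic sketch follows; it is not the paper's package-design formulation, and `A`, `b`, and `lam` are hypothetical stand-ins for a sensitivity matrix, a target response, and a regularization weight:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations:
    (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def landweber(A, b, step, iters):
    """Landweber iteration: x <- x + step * A^T (b - A x).
    Converges when 0 < step < 2 / sigma_max(A)^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += step * (A.T @ (b - A @ x))
    return x
```

Tikhonov gives a direct regularized solution, while Landweber regularizes implicitly through early stopping, which is often preferred when the forward operator is large or only available as a matrix-vector product.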
Abstract:
Image and video analysis requires rich features that can characterize various aspects of visual information. These rich features are typically extracted from the pixel values of the images and videos, which requires a huge amount of computation and is seldom useful for real-time analysis. In contrast, compressed domain analysis offers relevant information about the visual content in the form of transform coefficients, motion vectors, quantization steps and coded block patterns, with minimal computational burden. The amount of work done in the compressed domain is much less than in the pixel domain. This paper surveys video analysis efforts published during the last decade across the spectrum of video compression standards. The survey covers only the analysis part, excluding the processing aspect of the compressed domain. The analysis spans various computer vision applications such as moving object segmentation, human action recognition, indexing, retrieval, face detection, video classification and object tracking in compressed videos.
Abstract:
Computer Assisted Assessment (CAA) has existed for several years now. While some forms of CAA do not require sophisticated text understanding (e.g., multiple choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research on the latter has to date concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, grammatical correctness and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulations, techniques and evaluation measures. Second, we discuss some potential future directions in this domain which we hope will be helpful for researchers.
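A common baseline for short free-text answer assessment, not a specific technique from the surveyed papers, is lexical overlap between the student answer and a reference answer. A minimal sketch, with a made-up similarity threshold:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(student, reference, threshold=0.5):
    """Accept a short answer if its lexical overlap with the
    reference answer exceeds the threshold."""
    s = Counter(student.lower().split())
    r = Counter(reference.lower().split())
    return cosine_sim(s, r) >= threshold
```

Real systems in this literature go well beyond bag-of-words overlap (synonymy, paraphrase, partial credit), but this baseline is the usual starting point for evaluation.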
Abstract:
The mechanical deformations of a nickel nanowire subjected to uniaxial tensile strain at 300 K are simulated using molecular dynamics with the quantum-corrected Sutton-Chen many-body force field. We have used the common neighbor analysis method to investigate the structural evolution of the Ni nanowire during the elongation process. For a strain rate of 0.1%/ps, the elastic limit is at about 11% strain with a yield stress of 8.6 GPa. In the elastic stage, the deformation is carried mainly through the uniform elongation of the distances between the layers (perpendicular to the Z-axis), while the atomic structure remains basically unchanged. With further strain, slips in the {111} planes begin to take place, partially carrying the deformation to accommodate the applied strain, and subsequently a neck forms. The atomic rearrangements in the neck region result in a zigzag pattern in the stress-strain curve; the atomic structures beyond this region, however, show no significant changes. As the strain approaches the breaking point, we observe the formation of a one-atom-thick necklace in the Ni nanowire. The strain rate has no significant effect on the deformation mechanism, but has some influence on the yield stress, the elastic limit, and the fracture strain of the nanowire.
Abstract:
A Monte Carlo simulation is performed to study the dependence of collision frequency on interparticle distance for a system composed of two hard-sphere particles. The simulation quantitatively shows that the collision frequency drops sharply as the distance between the two particles increases. This characteristic provides useful evidence for the collision-reaction dynamics of the aggregation process for the two-particle system described in the related reference.
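The qualitative result can be reproduced with a minimal Monte Carlo sketch, assuming straight-line trajectories and isotropic random velocity directions; the abstract does not specify the paper's actual simulation scheme, and the radii and sample counts below are made up for the example:

```python
import math
import random

def _rand_unit(rng):
    """Uniform random unit vector in 3D (normalized Gaussian triple)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def collision_probability(d, radius=1.0, samples=20000, seed=0):
    """Fraction of random velocity draws for which two hard spheres,
    with centres a distance d apart, reach centre separation <= 2*radius
    on straight-line trajectories."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        v1, v2 = _rand_unit(rng), _rand_unit(rng)
        rel = [a - b for a, b in zip(v2, v1)]  # relative velocity of sphere 2
        p = [d, 0.0, 0.0]                      # sphere 2 relative to sphere 1
        pv = sum(a * b for a, b in zip(p, rel))
        vv = sum(a * a for a in rel)
        if pv < 0 and vv > 0:                  # spheres are approaching
            # squared centre separation at closest approach
            dmin2 = d * d - pv * pv / vv
            if dmin2 <= (2 * radius) ** 2:
                hits += 1
    return hits / samples
```

Running this for increasing `d` shows the sharp drop the abstract describes: the solid angle of collision-producing directions shrinks roughly as the square of the inverse separation.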