97 results for Computer forensic analysis
Abstract:
The tendency of granular materials in rapid shear flow to form non-uniform structures is well documented in the literature. Through a linear stability analysis of the solution of continuum equations for rapid shear flow of a uniform granular material, performed by Savage (1992) and others subsequently, it has been shown that an infinite plane shearing motion may be unstable in the Lyapunov sense, provided the mean volume fraction of particles is above a critical value. This instability leads to the formation of alternating layers of high and low particle concentrations oriented parallel to the plane of shear. Computer simulations, on the other hand, reveal that non-uniform structures are possible even when the mean volume fraction of particles is small. In the present study, we have examined the structure of fully developed layered solutions, making use of numerical continuation techniques and bifurcation theory. It is shown that the continuum equations do predict the existence of layered solutions of high amplitude even when the uniform state is linearly stable. An analysis of the effect of bounding walls on the bifurcation structure reveals that the nature of the wall boundary conditions plays a pivotal role in selecting the branch of non-uniform solutions that emerges as the primary branch. This demonstrates unequivocally that the results on the stability of bounded shear flow of granular materials presented previously by Wang et al. (1996) are, in general, based on erroneous base states.
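To make the stability criterion concrete: for a linearization J of the governing equations about the uniform shearing state, linear (Lyapunov) stability amounts to every eigenvalue of J having negative real part. A minimal numerical sketch, using purely illustrative matrices rather than the paper's continuum operator:

```python
import numpy as np

def is_linearly_stable(jacobian: np.ndarray, tol: float = 1e-12) -> bool:
    """A uniform base state is linearly stable iff every eigenvalue of
    the linearized operator has strictly negative real part."""
    eigvals = np.linalg.eigvals(jacobian)
    return bool(np.all(eigvals.real < -tol))

# Toy 2x2 linearizations (illustrative numbers, not a granular-flow model):
J_stable = np.array([[-1.0, 0.3], [0.2, -0.5]])
J_unstable = np.array([[0.4, 0.3], [0.2, -0.5]])
print(is_linearly_stable(J_stable))    # True
print(is_linearly_stable(J_unstable))  # False (saddle: one positive eigenvalue)
```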
Abstract:
In this paper we address the problem of forming procurement networks for items with linearly arranged value-adding stages. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent, rational agents who act strategically. We view the problem of procurement network formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus-maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks, and we also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
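As a concrete illustration of the solution concept (a sketch under simplified assumptions, not the paper's extensive-form protocol), the Shapley value averages each agent's marginal contribution over all orders in which the coalition can assemble:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, surplus):
    """Exact Shapley value by enumerating all join orders. `surplus` maps
    a frozenset of players to the surplus that coalition can generate."""
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += surplus[coalition | {p}] - surplus[coalition]
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in value.items()}

# Hypothetical two-stage chain: supplier 's' and assembler 'a' are both
# needed to realize a surplus of 10; alone, each realizes nothing.
v = {frozenset(): 0, frozenset({'s'}): 0, frozenset({'a'}): 0,
     frozenset({'s', 'a'}): 10}
print(shapley_values(['s', 'a'], v))   # {'s': 5.0, 'a': 5.0}
```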
Abstract:
We propose a new abstract domain for static analysis of executable code. Concrete states are abstracted using circular linear progressions (CLPs). CLPs model computations using a finite word length, as in any real-life processor. The finite abstraction allows handling overflow scenarios in a natural and straightforward manner. Abstract transfer functions have been defined for a wide range of operations, which makes this domain easily applicable for analyzing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects, which the analysis uses to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.
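A minimal sketch of the idea (an illustrative encoding, not the paper's exact formalization): a CLP can be viewed as an arithmetic progression living on the circle of w-bit values, so a transfer function such as adding a constant simply rotates it, handling overflow for free:

```python
from dataclasses import dataclass

W = 8              # assumed word width for illustration (8-bit machine)
MOD = 1 << W

@dataclass(frozen=True)
class CLP:
    """Circular linear progression {(base + i*step) % 2^W : 0 <= i < n}."""
    base: int
    step: int
    n: int

    def concretize(self):
        return {(self.base + i * self.step) % MOD for i in range(self.n)}

def add_const(x: CLP, c: int) -> CLP:
    """Transfer function for x + c: the base rotates around the 2^W
    circle, so wraparound needs no special casing."""
    return CLP((x.base + c) % MOD, x.step, x.n)

x = CLP(base=250, step=2, n=5)
print(sorted(x.concretize()))                 # [0, 2, 250, 252, 254] (wrapped)
print(sorted(add_const(x, 10).concretize()))  # [4, 6, 8, 10, 12]
```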
Abstract:
The wireless LAN (WLAN) market consists of IEEE 802.11 MAC standard conformant devices (e.g., access points (APs) and client adapters) from multiple vendors. Certain third-party certifications, such as those specified by the Wi-Fi Alliance, have been widely used by vendors to ensure basic conformance to the 802.11 standard, leading to the expectation that the available devices exhibit identical MAC-level behavior. In this paper, however, we present what we believe to be the first set of experimental results highlighting the fact that WLAN devices from different vendors can have heterogeneous MAC-level behavior. Specifically, we demonstrate with examples and data that in certain cases devices may not be conformant with the 802.11 standard, while in other cases they may differ in significant details that are not part of the mandatory specifications of the standard. We argue that heterogeneous MAC implementations can adversely impact WLAN operations, leading to unfair bandwidth allocation, potential breakdown of related MAC functionality, and difficulties in provisioning the capacity of a WLAN. On the positive side, however, MAC-level heterogeneity can be useful in applications such as vendor/model-level device fingerprinting.
Abstract:
We present a sound and complete decision procedure for the bounded-process cryptographic protocol insecurity problem, based on the notion of normal proofs [2] and classical unification. We also show a result about the existence of attacks with “high” normal cuts. Our correctness argument provides an alternate proof of, and new insights into, the fundamental result of Rusinowitch and Turuani [9] for the same setting.
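Classical unification over message terms is the workhorse such procedures rely on. A minimal sketch (standard syntactic unification with an occurs check; the term encoding is an assumption for illustration):

```python
def walk(t, subst):
    """Follow variable bindings ('?x' strings are variables)."""
    while isinstance(t, str) and t.startswith('?') and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(t1, t2, subst=None):
    """Most general unifier of first-order terms; compound terms are
    tuples (functor, arg1, ..., argk), constants are plain strings."""
    subst = {} if subst is None else subst
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    for a, b in ((t1, t2), (t2, t1)):
        if isinstance(a, str) and a.startswith('?'):
            return None if occurs(a, b, subst) else {**subst, a: b}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Matching an intruder-derived message against a protocol pattern:
print(unify(('enc', '?m', 'k'), ('enc', 'nonce', 'k')))  # {'?m': 'nonce'}
```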
Abstract:
The use of Fourier shape descriptors (FDs) for morphological studies of vectorcardiograms (VCGs) is presented. The FDs can effectively be used as features for classification of VCGs of different clinical categories. In addition, they provide clinically significant qualitative shape information for use by the cardiologist. The initial results of analysis of normal and abnormal VCGs are encouraging.
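A minimal sketch of how such descriptors are commonly computed (a generic FFT-based construction; the paper's exact normalization may differ): sample the closed VCG loop, treat points as complex numbers, and normalize the DFT coefficients for translation and scale invariance:

```python
import numpy as np

def fourier_descriptors(x, y, keep=10):
    """Fourier descriptors of a closed planar curve (e.g., a VCG loop).
    Dropping c0 gives translation invariance; dividing by |c1| gives
    scale invariance; taking magnitudes discards rotation/start point."""
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    c = np.fft.fft(z)
    c[0] = 0.0
    c = c / np.abs(c[1])
    return np.abs(c[1:keep + 1])

# Hypothetical loop: an ellipse sampled at 64 points.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
print(np.round(fourier_descriptors(2 * np.cos(t), np.sin(t)), 3))
```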
Abstract:
Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic as well as unbounded access paths due to recursive data structures. For the sake of precision we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability we prune formulas continually as they get propagated, reducing to true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach and present an evaluation of it on a set of ten real Java programs. Our results show that the design features we have incorporated enable the analysis to (a) explore long, inter-procedural paths to verify each dereference, with (b) reasonable accuracy, and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
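A toy sketch of the demand-driven, backward flavor of such an analysis (illustrative, not the paper's implementation): start at the dereference with the failure condition "p may be null", substitute through assignments, and let allocation statements contradict it:

```python
# Formulas are sets of atoms (var, 'null') meaning "var may be null".

def transfer_assign(formula, lhs, rhs):
    """Backward transfer for `lhs = rhs`: substitute rhs for lhs."""
    return {(rhs if v == lhs else v, k) for (v, k) in formula}

def transfer_new(formula, lhs):
    """Backward transfer for `lhs = new T()`: lhs is definitely non-null,
    so a formula claiming lhs is null becomes infeasible (None)."""
    return None if (lhs, 'null') in formula else formula

# Walking backwards from the dereference `p.f` through:  q = new T(); p = q;
formula = {('p', 'null')}                     # condition for a null dereference
formula = transfer_assign(formula, 'p', 'q')  # now: q may be null
formula = transfer_new(formula, 'q')          # contradicted by the allocation
print(formula)                                # None -> dereference proved safe
```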
Abstract:
The last few decades have witnessed the application of graph theory and of topological indices derived from molecular graphs in structure-activity analysis. Such applications are based on regression and various multivariate analyses. Most topological indices are computed for the whole molecule and used as descriptors for explaining properties/activities of chemical compounds. However, some substructural descriptors, in the form of topological distance based vertex indices, have been found useful in identifying activity-related substructures and in predicting pharmacological and toxicological activities of bioactive compounds. Another important aspect of drug discovery, namely the design of novel pharmaceutical candidates, can also be addressed using the distance distribution associated with such vertex indices. In this article, we review the development and applications of this approach, both in activity prediction and in designing novel compounds.
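For a concrete flavor of whole-molecule topological indices (the classical Wiener index, shown here as a generic example rather than one of the vertex indices the review focuses on), computed from a hydrogen-suppressed molecular graph:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path (topological) distances over
    all unordered atom pairs. `adj` maps each atom to its bonded atoms."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                      # BFS from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2                 # each pair counted twice

# n-butane, hydrogen-suppressed: C1-C2-C3-C4
print(wiener_index({1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}))  # 10
```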
Abstract:
CAELinux is a Linux distribution bundled with free software packages related to Computer Aided Engineering (CAE). The packages include software for building three-dimensional solid models, meshing geometries, carrying out Finite Element Analysis (FEA), image processing, and more. The present work has two goals: 1) to give a brief description of CAELinux, and 2) to demonstrate that CAELinux can be useful for Computer Aided Engineering, using as an example the three-dimensional reconstruction of a pig liver from a stack of CT-scan images. Reconstructing the liver with commercial software instead would cost a lot of money, whereas CAELinux is a free and open source operating system in which all included software packages are also free. Hence CAELinux can be a very useful tool in application areas like surgical simulation, which require three-dimensional reconstruction of biological organs, and for Computer Aided Engineering in general.
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., in identifying cyclic components or propagating information in topological order. We perform a careful study of its structure and propose a new kind of pointer equivalence based on the notion of dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm which uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of the SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
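For context, the baseline such optimizations accelerate is inclusion-based (Andersen-style) propagation over the constraint graph; a minimal sketch, omitting cycle elimination, context sensitivity, and the paper's dominant-pointer machinery:

```python
def propagate(points_to, subset_edges):
    """Iterate until points-to sets have flowed across every subset edge
    p -> q (i.e., pts(q) must include pts(p))."""
    changed = True
    while changed:
        changed = False
        for src, dst in subset_edges:
            before = len(points_to[dst])
            points_to[dst] |= points_to[src]
            changed |= len(points_to[dst]) != before
    return points_to

# p = &a; q = &b; q = p; r = q;
pts = {'p': {'a'}, 'q': {'b'}, 'r': set()}
print(propagate(pts, [('p', 'q'), ('q', 'r')]))
# q and r both end up pointing to {a, b}
```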
Abstract:
Clock synchronisation is an important requirement for various applications in wireless sensor networks (WSNs). Most existing clock synchronisation protocols for WSNs use some hierarchical structure, which introduces extra overhead due to the dynamic nature of WSNs. Moreover, it is difficult to integrate these protocols with sleep scheduling schemes, a major technique for conserving energy. In this paper, we propose a fully distributed, peer-to-peer clock synchronisation protocol, named the Distributed Clock Synchronisation Protocol (DCSP), which uses a novel pullback technique for complete sensor networks. The pullback technique ensures that the synchronisation phases of any pair of clocks always overlap. We have derived an exact expression for a bound on the maximum synchronisation error in the DCSP protocol, and a simulation study verifies that the error is indeed less than the computed upper bound. An experimental study using a few TelosB motes also verifies that the pullback occurs as predicted.
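A toy simulation of the bounded-error phenomenon (a deliberately simplified stand-in; DCSP's actual pullback rule and error bound are derived in the paper): two clocks drift at slightly different rates, and a periodic symmetric adjustment keeps their mutual offset bounded:

```python
def simulate(steps=1000, rates=(0.9999, 1.0002), sync_every=100):
    """Two drifting clocks with periodic pairwise resynchronization
    (illustrative only, not the DCSP pullback rule)."""
    c = [0.0, 0.0]
    worst = 0.0
    for t in range(1, steps + 1):
        c[0] += rates[0]
        c[1] += rates[1]
        worst = max(worst, abs(c[0] - c[1]))
        if t % sync_every == 0:
            mid = (c[0] + c[1]) / 2
            c = [mid, mid]            # symmetric pairwise adjustment
    return worst

# Offset never exceeds sync_every * |rate gap| = 100 * 0.0003 = 0.03
print(simulate())
```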
Abstract:
Knowledge about a program's worst case execution time (WCET) is essential in validating real-time systems and helps in effective scheduling. One popular approach used in industry is to measure the execution time of program components on the target architecture and combine the measurements using static analysis of the program. Measurements need to be taken in the least intrusive way possible, to avoid affecting the accuracy of the estimated WCET. Several programs exhibit phase behavior, wherein the program's dynamic execution is composed of phases; each phase, distinct from the others, exhibits homogeneous behavior with respect to cycles per instruction (CPI), data cache misses, etc. In this paper, we show that phase behavior has important implications for timing analysis. We make use of the homogeneity of a phase to reduce instrumentation overhead while ensuring that the accuracy of the WCET estimate is not largely affected. We propose a model for estimating WCET using static worst-case instruction counts of individual phases and a function of the measured average CPI. We describe a WCET analyzer built on this model which targets two different architectures. The WCET analyzer is observed to give safe estimates for most benchmarks considered in this paper, and the tightness of its WCET estimates improves for most benchmarks compared to Chronos, a well-known static WCET analyzer.
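The model's arithmetic reduces to combining a static per-phase worst-case instruction count with a CPI figure measured per phase. A hedged sketch (the 10% margin and the phase numbers are hypothetical; the paper's actual CPI function may differ):

```python
def wcet_estimate(phases, clock_hz):
    """Phase-based WCET: sum over phases of worst-case instruction count
    times a CPI derived from the measured per-phase average."""
    cycles = 0.0
    for wc_insns, measured_cpi in phases:
        cycles += wc_insns * measured_cpi * 1.10   # assumed safety margin
    return cycles / clock_hz                       # seconds

# Hypothetical program with three phases: (worst-case insns, measured CPI)
phases = [(2_000_000, 1.2), (500_000, 3.5), (1_200_000, 0.9)]
print(f"{wcet_estimate(phases, clock_hz=1_000_000_000) * 1e3:.3f} ms")  # 5.753 ms
```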
Abstract:
Memory models for shared-memory concurrent programming languages typically guarantee sequential consistency (SC) semantics for datarace-free (DRF) programs, while providing very weak or no guarantees for non-DRF programs. In effect, programmers are expected to write only DRF programs, which are then executed with SC semantics. With this in mind, we propose a novel scalable solution for dataflow analysis of concurrent programs, which is proved to be sound for DRF programs with SC semantics. We use the synchronization structure of the program to propagate dataflow information among threads without having to consider all interleavings explicitly. Given a dataflow analysis that is sound for sequential programs and meets certain criteria, our technique automatically converts it into an analysis for concurrent programs.
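A minimal sketch of the synchronization-structure idea (illustrative only, with set union as the merge for a "may" analysis): dataflow facts flow from a release point in one thread to the matching acquire in another, instead of enumerating interleavings:

```python
def propagate_sync(facts, sync_edges, join):
    """Propagate facts along release -> acquire edges until fixpoint.
    `facts` maps (thread, program point) to that point's dataflow facts."""
    changed = True
    while changed:
        changed = False
        for rel, acq in sync_edges:
            merged = join(facts[acq], facts[rel])
            if merged != facts[acq]:
                facts[acq] = merged
                changed = True
    return facts

# Two threads sharing lock l; facts here are simple may-write sets.
facts = {('T1', 'release_l'): {'x'}, ('T2', 'acquire_l'): {'y'}}
edges = [(('T1', 'release_l'), ('T2', 'acquire_l'))]
print(propagate_sync(facts, edges, lambda a, b: a | b))
# the acquire point now also sees the fact 'x' from the releasing thread
```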