858 results for Inter-procedural analysis


Relevance:

100.00%

Publisher:

Abstract:

One of the predictions of the ‘challenge hypothesis’ (Wingfield et al., 1990) is that androgen patterns during the breeding season should vary among species according to the parenting and mating system. Here we assess this prediction of the challenge hypothesis at both the intra- and the inter-specific level. To test the hypothesis at the inter-specific level, a literature survey of published androgen patterns from teleost fish with different mating systems was carried out. The results confirm the predicted effect of mating system on androgen levels. To test the hypothesis at the intra-specific level, a species with flexible reproductive strategies (i.e. monogamy vs. polygyny), the Saint Peter's fish, was studied. Polygynous males had higher 11-ketotestosterone levels. However, males implanted with methyl-testosterone did not become polygynous, and the variation in their tendency to desert their pair mates was better explained by the reproductive state of the female partner. This result stresses the point that the effects of hormones on behaviour cannot be considered without reference to the social context.

Relevance:

100.00%

Publisher:

Abstract:

This study attempts to situate the quality of life and standard of living of local communities in ecotourism destinations in relation to, inter alia, their perception of forest conservation and the satisfaction level of the local community. 650 EDC/VSS members from Kerala, demarcated into three zones, constitute the data source. Four variables were considered for evaluating the quality of life of the stakeholders of ecotourism sites, which were then funnelled through the income-education spectrum for hypothesising within the SLI framework. Zone-wise analysis of the community members working in the tourism sector shows that they have benefited overall from tourism development in the region, gaining both employment and secure livelihood options. Most of the quality-of-life indicators for the community in the ecotourism centres show a promising position. Community perception does not indicate any negative impact on the environment or on the local culture.

Relevance:

100.00%

Publisher:

Abstract:

A method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations is presented. Such binaries may use operators to manipulate the stack directly, instead of using native call and ret instructions, to achieve equivalent behavior. Since the definition of context-sensitivity and the algorithms for context-sensitive analysis have thus far been based on the specific semantics associated with procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in 'calling' context are associated with transfers of control, and hence can be reasoned about in terms of paths in an interprocedural control flow graph (ICFG), the same is not true of changes in 'stack' context. An abstract interpretation based framework is developed to reason about stack contexts and to derive analogues of call-strings based methods for context-sensitive analysis using stack contexts. The method presented is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart. Copyright © 2010 ACM.
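
For intuition, here is a minimal toy sketch (mine, not the paper's) of both the obfuscation and the idea of a stack-based context: a push/jmp pair emulates a native call, and the "stack context" of an instruction is simply the stack contents when control reaches it. The tiny IR and its instruction names are invented for illustration.

    # A tiny invented IR: "push addr; jmp f" behaves exactly like
    # "call f", even though no native call instruction appears.
    program = {
        0: ("push", 2),   # push the would-be return address...
        1: ("jmp", 10),   # ...and jump: together these emulate "call 10"
        2: ("halt",),
        10: ("ret",),     # pops the pushed address and jumps to it
    }

    def run(pc=0):
        """Execute and record the stack context of every instruction."""
        stack, trace = [], []
        while True:
            op = program[pc]
            trace.append((pc, tuple(stack)))   # (instruction, stack context)
            if op[0] == "push":
                stack.append(op[1]); pc += 1
            elif op[0] == "jmp":
                pc = op[1]
            elif op[0] == "ret":
                pc = stack.pop()
            else:                              # halt
                return trace

    # Instruction 10 runs in stack context (2,), just as it would after a
    # native "call 10", yet no call/ret pair is visible to a classic ICFG.
    print(run())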

Relevance:

100.00%

Publisher:

Abstract:

Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, currently such global program analysis requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.
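
One way to picture module-at-a-time analysis is as a fixpoint over per-module summaries: each module is analysed against the current summaries of the modules it imports, and is re-analysed only when one of those summaries changes. The following toy sketch is my own invention, not the paper's model; the module names and "summary" values are placeholders.

    # Toy inter-modular fixpoint. A "summary" is a lattice value (a
    # frozenset); each module's analysis maps the summaries of its
    # imports to the summary it exports.
    modules = {
        # name: (imports, stand-in analysis function)
        "m1": ([],     lambda env: frozenset({"ground"})),
        "m2": (["m1"], lambda env: env["m1"] | frozenset({"linear"})),
        "m3": (["m2"], lambda env: env["m2"]),
    }

    def intermodular_fixpoint():
        summaries = {m: frozenset() for m in modules}
        worklist = list(modules)
        while worklist:
            m = worklist.pop()
            imports, analyze = modules[m]
            new = analyze({d: summaries[d] for d in imports})
            if new != summaries[m]:      # summary changed: record it and
                summaries[m] = new       # re-queue the modules importing m
                worklist += [n for n, (deps, _) in modules.items() if m in deps]
        return summaries

    print(intermodular_fixpoint())
    # -> m2 and m3 both stabilize at {'ground', 'linear'}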

Relevance:

100.00%

Publisher:

Abstract:

Using plant-level data from a global survey with multiple time frames, begun in the late 1990s, this paper introduces measures of supply chain integration (SCI) and discusses the dynamic relationship between the level of integration and a set of internal and external performance measurements. Specifically, data from Hungary, the Netherlands, and the People's Republic of China are used in the analyses. The time frames considered range from the late 1990s to 2009, encompassing major changes and transitions. Our results seem to indicate that SCI has an underlying structure of four sets of indicators, namely: (1) delivery frequency from the supplier or to the customer; (2) sharing internal processes with suppliers; (3) sharing internal processes with buyers; and (4) joint facility location with partners. The differences between groups in terms of several performance measures proved to be small and mostly statistically insignificant, but from the ANOVA table we can conclude that, in this sample of companies, those with joint facility locations with their partners seem to outperform the others.

Relevance:

100.00%

Publisher:

Abstract:

This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow of funds (FOF) accounts. Combined with Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" FOF tables, which are composed of 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power of dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and the light industries in manufacturing fall into the first quadrant group, whereas the heavy and chemical industries are placed in the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are bifurcated into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, and the second group those whose index is less than 1. We found that the net induced investments (NII) to total liabilities ratios of the first group are about half those of the second group, since the former's induced savings are clearly greater than the latter's.
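
For reference, the Rasmussen "power of dispersion" index underlying the first step can be computed from a coefficient matrix A via the Leontief inverse B = (I - A)^(-1): sector j's index is its column sum of B relative to the all-sector average, with values above 1 marking above-average pull on the rest of the system. A minimal numpy sketch with invented 3-sector data:

    # Power of dispersion from a hypothetical I-O coefficient matrix.
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.20]])

    n = A.shape[0]
    B = np.linalg.inv(np.eye(n) - A)     # Leontief inverse
    power = n * B.sum(axis=0) / B.sum()  # index > 1: above-average dispersion
    print(power)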

Relevance:

100.00%

Publisher:

Abstract:

Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic as well as unbounded access paths due to recursive data structures. For the sake of precision we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability we prune formulas continually as they get propagated, reducing to true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach and present an evaluation of it on a set of ten real Java programs. Our results show that the set of design features we have incorporated enables the analysis to (a) explore long inter-procedural paths to verify each dereference, with (b) reasonable accuracy, and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
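
To make the backward, demand-driven flavour concrete, here is a deliberately tiny sketch of my own (the paper's analysis is inter-procedural and works over a lattice of formulas with aliasing and path sensitivity): starting from a dereference of x, the query "x == null here" is pushed backwards through straight-line assignments until it is discharged or reaches entry as a pre-condition.

    # Invented mini-IR: a list of (lhs, rhs) assignments, walked backwards.
    def null_precondition(var, stmts):
        """Return the entry variable whose nullness would make the final
        dereference of `var` fail, or None if it is proved safe."""
        v = var
        for lhs, rhs in reversed(stmts):   # walk backwards from the deref
            if lhs == v:
                if rhs == "new":           # freshly allocated: non-null,
                    return None            # so the dereference is safe
                v = rhs                    # otherwise nullness flows from rhs
        return v                           # pre-condition: v == null at entry

    print(null_precondition("x", [("x", "p"), ("y", "new")]))  # -> 'p'
    print(null_precondition("x", [("x", "new")]))              # -> None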

Relevance:

100.00%

Publisher:

Abstract:

Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of 'valid' paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, such as in programs in low-level languages like assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions nor encapsulated procedures. The framework decouples the transfer-of-control semantics and the context-manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli's interprocedural-path based calling context. An abstract interpretation based framework is developed to reason about stack contexts and to derive analogues of calling-context based algorithms using stack contexts. The framework is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.'s algorithm for analyzing x86 binaries without requiring that a binary conform to a standard compilation model for maintaining procedures, calls, and returns. Experimental results show that a context-sensitive analysis using stack contexts performs just as well for programs where the use of Sharir and Pnueli's calling context produces correct approximations. However, if those programs are transformed to use call obfuscations, a context-sensitive analysis using stack contexts still provides the same correct results, without any additional overhead. © Springer Science+Business Media, LLC 2011.
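
Reusing the toy IR from the earlier sketch, the analysis side can be pictured as reachability over abstract states keyed by (instruction, abstract stack) rather than by call strings. The k-limiting of the stack below is my own simplification, standing in for the paper's abstract interpretation framework; everything here is illustrative, not the paper's algorithm.

    # Explore abstract states keyed by (instruction, k-limited stack).
    K = 3                                  # keep only the top K stack slots

    def stack_contexts(program, entry=0):
        seen, work = set(), [(entry, ())]
        while work:
            pc, st = work.pop()
            if (pc, st) in seen or pc not in program:
                continue
            seen.add((pc, st))
            op = program[pc]
            if op[0] == "push":
                work.append((pc + 1, (st + (op[1],))[-K:]))
            elif op[0] == "jmp":
                work.append((op[1], st))
            elif op[0] == "ret" and st:    # return target read off the stack
                work.append((st[-1], st[:-1]))
        return seen

    # The obfuscated "push 2; jmp 10" emulates "call 10"; the analysis still
    # discovers that instruction 10 runs in stack context (2,).
    prog = {0: ("push", 2), 1: ("jmp", 10), 2: ("halt",), 10: ("ret",)}
    for pc, st in sorted(stack_contexts(prog)):
        print(pc, st)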

Relevance:

100.00%

Publisher:

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins"; it is also multivariant and flow-sensitive. In addition, it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
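
The "parametric in the domain" idea can be sketched generically: a worklist fixpoint that re-queues a node only when its value actually changes, with the abstract domain supplied as a plug-in object. This is a minimal illustration of the general technique, in my own code, not the paper's algorithm (which is multivariant and works over a transformed bytecode representation).

    # Generic worklist fixpoint; the domain plug-in supplies bottom,
    # join, and transfer, so any abstract domain can be dropped in.
    def fixpoint(cfg, entry, domain):
        """cfg: {node: [successors]}."""
        value = {n: domain.bottom() for n in cfg}
        work = [entry]
        while work:
            n = work.pop()
            out = domain.transfer(n, value[n])
            for s in cfg[n]:
                joined = domain.join(value[s], out)
                if joined != value[s]:     # re-queue only on change: the
                    value[s] = joined      # key iteration-saving optimization
                    work.append(s)
        return value

    class ToyDomain:                       # invented plug-in: sets of
        def bottom(self): return frozenset()      # visited node ids
        def join(self, a, b): return a | b
        def transfer(self, node, v): return v | {node}

    cfg = {0: [1], 1: [2], 2: [1, 3], 3: []}      # contains a small loop
    print(fixpoint(cfg, 0, ToyDomain()))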

Relevance:

100.00%

Publisher:

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. Also, the algorithm is parametric in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins". It is also incremental in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
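
The incremental aspect can be illustrated in miniature: persist per-method results keyed by a hash of the method body, so that after a small edit only the changed methods are re-analysed (a real system would also re-queue dependents whose summaries relied on them). This sketch is mine, with invented names, not the paper's mechanism.

    import hashlib

    def incremental_analyze(methods, analyze, cache):
        """methods: {name: source}; cache: {name: (hash, result)}.
        Re-runs `analyze` only for methods whose source changed."""
        results = {}
        for name, src in methods.items():
            h = hashlib.sha256(src.encode()).hexdigest()
            if name in cache and cache[name][0] == h:
                results[name] = cache[name][1]      # reuse saved result
            else:
                results[name] = analyze(name, src)  # re-analyze changed code
                cache[name] = (h, results[name])
        return results

    cache = {}
    incremental_analyze({"m": "x = 1"}, lambda n, s: len(s), cache)
    # Second run is a pure cache hit: the (failing) analyzer is never called.
    incremental_analyze({"m": "x = 1"}, lambda n, s: 1 / 0, cache)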

Relevance:

90.00%

Publisher:

Abstract:

Software transactional memory (STM) has been proposed as a promising programming paradigm for shared-memory multi-threaded programs, as an alternative to conventional lock based synchronization primitives. Typical STM implementations employ a conflict detection scheme which works with uniform access granularity, tracking shared data accesses either at the word/cache-line level or at the object level. It is well known that a single fixed access-tracking granularity cannot meet the conflicting goals of reducing false conflicts without adversely impacting concurrency. A fine granularity, while improving concurrency, can hurt performance due to lock aliasing, lock validation overheads, and additional cache pressure. On the other hand, a coarse granularity can hurt performance due to reduced concurrency. Thus, in general, a fixed or uniform granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or of parts of an application, leading to sub-optimal performance. To mitigate the disadvantages of UGAT schemes, we propose a Variable Granularity Access Tracking (VGAT) scheme in this paper. We propose a compiler based approach wherein the compiler uses inter-procedural whole-program static analysis to select the access tracking granularity for the different shared data structures of the application, based on the application's data access patterns. We describe our prototype VGAT scheme, using TL2 as our STM implementation. Our experimental results reveal that our VGAT-STM scheme can improve the performance of STAMP benchmark applications by between 1.87% and 21.2%.
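
A schematic of what "variable granularity" means for conflict detection: the stripe of memory that shares one conflict-detection lock is chosen per data structure, here via a static table that a compiler analysis would populate. The structure names, grain sizes, and hash-based lock table below are all invented for illustration.

    # Per-structure lock striping: fine grain for the contended array,
    # coarse grain for the rarely-conflicting record.
    GRAIN = {"hot_array": 4, "big_record": 64}    # bytes per lock stripe

    def lock_slot(structure, offset, table_size=1024):
        """Map (structure, byte offset) to a slot in the lock table."""
        stripe = offset // GRAIN.get(structure, 16)   # default 16-byte grain
        return hash((structure, stripe)) % table_size

    # Adjacent words of the array get distinct stripes (likely distinct
    # slots); fields within the record share one slot by construction.
    print(lock_slot("hot_array", 0) == lock_slot("hot_array", 4))    # likely False
    print(lock_slot("big_record", 0) == lock_slot("big_record", 60)) # True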

Relevance:

90.00%

Publisher:

Abstract:

This chapter discusses a code parallelization environment in which a number of tools addressing the main tasks, such as code parallelization, debugging, and optimization, are available. The parallelization tools include ParaWise and CAPO, which enable the near-automatic parallelization of real-world scientific application codes for shared- and distributed-memory parallel systems. The chapter discusses the use of ParaWise and CAPO to transform the original serial code into an equivalent parallel code that contains appropriate OpenMP directives. Additionally, as user involvement can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform near-automatic relative debugging of an OpenMP program that has been parallelized either using the tools or manually. For these tools to be effective in parallelizing a range of applications, a high-quality, fully inter-procedural dependence analysis, as well as user interaction, is vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results for parallelized NASA codes are discussed and show the benefits of using the environment.

Relevance:

90.00%

Publisher:

Abstract:

The study details the development of a fully validated, rapid and portable sensor-based method for the on-site analysis of microcystins in freshwater samples. The process employs a novel lysis method for the mechanical lysis of cyanobacterial cells, with glass beads and a handheld frother, in only 10 min. The assay utilises an innovative planar waveguide device that excites fluorescent probes via an evanescent wave, amplifying the signal in a competitive immunoassay that uses an anti-microcystin monoclonal antibody with cross-reactivity against the most common and most toxic variants. Validation of the assay showed the limit of detection (LOD) to be 0.78 ng/mL and the CCβ to be 1 ng/mL. The robustness of the assay was demonstrated by intra- and inter-assay testing: intra-assay analysis had CVs between 8 and 26% and recoveries between 73 and 101%, while inter-assay analysis showed CVs between 5 and 14% and recoveries between 78 and 91%. Comparison with LC-MS/MS showed a high correlation (R = 0.9954) between the calculated total microcystin concentrations of 5 different Microcystis aeruginosa cultures. Total microcystin content was ascertained by the individual measurement of free and cell-bound microcystins. Free microcystins can be measured down to 1 ng/mL and, with a 10-fold concentration step in the intracellular microcystin protocol (which brings the sample within the range of the calibration curve), intracellular pools may be determined down to 0.1 ng/mL. This allows the determination of microcystins at and below the World Health Organisation (WHO) guideline value of 1 µg/L. This sensor represents a major advancement in portable analysis capabilities and has the potential for numerous other applications.

Relevance:

90.00%

Publisher:

Abstract:

It is often necessary to run response surface designs in blocks. In this paper the analysis of data from such experiments, using polynomial regression models, is discussed. The definition and estimation of pure error in blocked designs are considered. It is recommended that pure error be estimated by assuming additive block and treatment effects, as this is more consistent with designs without blocking. The recovery of inter-block information using REML analysis is discussed, although it is shown that it has very little impact if the design is nearly orthogonally blocked. Finally, prediction from blocked designs is considered, and it is shown that prediction of many quantities of interest is much simpler than prediction of the response itself.
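
Under the recommended assumption, the model behind the pure-error estimate can be written as follows (notation mine, not the paper's):

    % y_{ij}: response of run j in block i; beta_i: additive block effect;
    % tau_{t(i,j)}: effect of the treatment (design point) applied to that run.
    % Replicated design points, even in different blocks, then differ only by
    % block effects plus pure error, so they estimate sigma^2.
    y_{ij} = \beta_i + \tau_{t(i,j)} + \varepsilon_{ij},
    \qquad \varepsilon_{ij} \sim N(0, \sigma^2)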