960 results for Inter-procedural analysis
Abstract:
This study attempts to situate the quality of life and standard of living of local communities in ecotourism destinations, alongside their perceptions of forest conservation and their level of satisfaction. The data source consists of 650 EDC/VSS members from Kerala, demarcated into three zones. Four variables were considered for evaluating the quality of life of the stakeholders of ecotourism sites, which were then funneled into the income-education spectrum to frame hypotheses within the SLI framework. Zone-wise analysis of community members working in the tourism sector shows that they have benefited overall from tourism development in the region, gaining both employment and secure livelihood options. Most quality-of-life indicators of the community in the ecotourism centres show a promising position. Community perception does not indicate any negative impact on the environment or on local culture.
Abstract:
A method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations is presented. Such binaries may use operators to directly manipulate the stack, instead of using native call and ret instructions, to achieve equivalent behavior. Since the definition of context-sensitivity and the algorithms for context-sensitive analysis have thus far been based on the specific semantics associated with procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in 'calling'-context are associated with transfers of control, and hence can be reasoned about in terms of paths in an interprocedural control flow graph (ICFG), the same is not true of changes in 'stack'-context. An abstract-interpretation-based framework is developed to reason about stack-contexts and to derive analogues of call-strings-based methods for context-sensitive analysis using stack-context. The method presented is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart. Copyright © 2010 ACM.
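A minimal sketch of the idea in Python, under assumptions not taken from the paper: a toy instruction set in which a 'call' is obfuscated as an explicit push of the return address followed by a jump, and the context of an instruction is the abstract stack at that point rather than the ICFG path taken. Opcode names and the unbounded-stack simplification are illustrative only.

```python
# Illustrative sketch only: a toy ISA where an obfuscated call is
# "push return_addr; jmp target" and an obfuscated return jumps to a
# popped value. The stack-context of an instruction is the abstract
# stack of pushed return addresses.

def stack_context_analysis(program, entry=0):
    """program: dict addr -> (opcode, operand).
    Returns addr -> set of stack-contexts (tuples of pushed values)."""
    contexts = {addr: set() for addr in program}
    worklist = [(entry, ())]                 # (address, abstract stack)
    while worklist:
        addr, stack = worklist.pop()
        if addr not in program or stack in contexts[addr]:
            continue                         # outside program, or seen before
        contexts[addr].add(stack)
        op, arg = program[addr]
        if op == "push":                     # possible obfuscated call setup
            succs = [(addr + 1, stack + (arg,))]
        elif op == "pop_jmp":                # obfuscated return
            succs = [(stack[-1], stack[:-1])] if stack else []
        elif op == "jmp":
            succs = [(arg, stack)]
        else:                                # ordinary fall-through
            succs = [(addr + 1, stack)]
        worklist.extend(succs)               # NB: stacks are unbounded here;
    return contexts                          # a real analysis would bound them

# Obfuscated call of the code at 100 from address 0, returning to 2:
prog = {0: ("push", 2), 1: ("jmp", 100), 2: ("nop", None),
        100: ("nop", None), 101: ("pop_jmp", None)}
print(stack_context_analysis(prog))
```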
Abstract:
Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, such global program analysis currently requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.
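A minimal sketch, in Python, of one plausible summary-based scheme for this kind of model (the interface-file format and the function names are assumptions for illustration, not the paper's design): each module is analysed on its own, and the abstract summaries inferred for its exported procedures are persisted so later compilations can consume them without re-analysing the whole program.

```python
# Hypothetical summary-based separate analysis: each module writes an
# "interface file" of per-procedure summaries; other modules read them.
import json, os

def analyse_module(module_name, procedures, analyse_proc, cache_dir="."):
    """procedures: dict name -> body. analyse_proc computes an abstract
    (JSON-serializable) summary for one procedure, given the summaries
    of the procedures it may call."""
    iface_path = os.path.join(cache_dir, module_name + ".iface.json")
    imported = {}
    # Load summaries previously computed for other modules, if present.
    for fname in os.listdir(cache_dir):
        if fname.endswith(".iface.json") and fname != os.path.basename(iface_path):
            with open(os.path.join(cache_dir, fname)) as f:
                imported.update(json.load(f))
    summaries = {}
    for name, body in procedures.items():
        # A context-sensitive analysis could keep several variants per
        # procedure here (multi-variant specialization); one is kept
        # for brevity in this sketch.
        summaries[name] = analyse_proc(body, {**imported, **summaries})
    with open(iface_path, "w") as f:
        json.dump(summaries, f)      # persist for later compilations
    return summaries
```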
Abstract:
Using plant-level data from a global survey with multiple time frames, one begun in the late 1990s, this paper introduces measures of supply chain integration (SCI) and discusses the dynamic relationship between the level of integration and a set of internal and external performance measurements. Specifically, data from Hungary, the Netherlands, and the People's Republic of China are used in the analyses. The time frames considered range from the late 1990s to 2009, encompassing major changes and transitions. Our results seem to indicate that SCI has an underlying structure of four sets of indicators, namely: (1) delivery frequency from the supplier or to the customer; (2) sharing internal processes with suppliers; (3) sharing internal processes with buyers; and (4) joint facility location with partners. The differences between groups in terms of several performance measures proved to be small and mostly statistically insignificant; however, the ANOVA results suggest that, in this sample of companies, those having a joint location with their partners seem to outperform the others.
Abstract:
This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow of funds (FOF) accounts. Combined with Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" FOF tables, which are composed of 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power of dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and light industries in manufacturing fall into the first quadrant group, whereas heavy and chemical industries are placed in the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, the investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are bifurcated into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, whereas the second group comprises those whose index is less than 1. We found that the net induced investments (NII)-to-total liabilities ratios of the first group show levels about half those of the second group, since the former's induced savings are clearly greater than the latter's.
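For reference, the standard I–O formulation of the power-of-dispersion index is shown below (Rasmussen's form, given as background; the study applies the analogue over its asset-oriented FOF tables, so this should not be taken as the paper's exact definition).

```latex
% Power-of-dispersion index from I-O analysis: b_{ij} are entries of
% the Leontief inverse B; P_j > 1 marks a sector whose demand pulls
% more than proportionally on the rest of the system.
\[
  B = (I - A)^{-1}, \qquad
  P_j = \frac{\frac{1}{n}\sum_{i=1}^{n} b_{ij}}
             {\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{k=1}^{n} b_{ik}}
\]
```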
Abstract:
Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of 'valid' paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, as with programs in low-level languages like assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions nor encapsulated procedures. The framework decouples the transfer-of-control semantics and the context-manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli's interprocedural-path-based calling-context. An abstract-interpretation-based framework is developed to reason about stack-contexts and to derive analogues of calling-context-based algorithms using stack-context. The framework is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.'s algorithm for analyzing x86 binaries without requiring that a binary conform to a standard compilation model for maintaining procedures, calls, and returns. Experimental results show that a context-sensitive analysis using stack-context performs just as well for programs where the use of Sharir and Pnueli's calling-context produces correct approximations. However, if those programs are transformed to use call obfuscations, a context-sensitive analysis using stack-context still provides the same, correct results without any additional overhead. © Springer Science+Business Media, LLC 2011.
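A minimal sketch of how a call-strings-style bound might carry over to stack-contexts (an assumption for illustration; the paper derives its analogues via abstract interpretation rather than this construction): keep only the top k entries of each abstract stack so the set of contexts stays finite.

```python
# Illustrative k-limiting of stack-contexts, by analogy with Sharir
# and Pnueli's k-limited call strings; K and the tuple encoding are
# assumptions of this sketch, not the paper's construction.
K = 2

def limit(stack, k=K):
    """Abstract a stack-context to its k topmost entries."""
    return stack[-k:] if k else ()

# Two deep stacks that agree near the top collapse to one context:
print(limit((10, 20, 30)), limit((99, 20, 30)))   # both -> (20, 30)
```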
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs, or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric (in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins"), multivariant, and flow-sensitive. It is also based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
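A minimal sketch, in Python, of the generic shape of such a domain-parametric fixpoint computation: a plain worklist solver whose abstract domain is passed in as a "plug-in". The interfaces are assumed for illustration; the paper's algorithm layers iteration-reducing optimizations, multivariance, and flow-sensitivity on top of this basic idea.

```python
# Generic, domain-parametric worklist fixpoint solver (sketch).
def fixpoint(nodes, succs, transfer, bottom, join, entry, init):
    """'transfer', 'bottom', 'join', and 'init' are the abstract-domain
    plug-in; nodes/succs describe the graph being solved."""
    state = {n: bottom for n in nodes}
    state[entry] = init
    worklist = [entry]
    while worklist:
        n = worklist.pop()
        out = transfer(n, state[n])
        for s in succs(n):
            new = join(state[s], out)
            if new != state[s]:          # re-process a node only on change
                state[s] = new
                worklist.append(s)
    return state

# Toy use: which nodes flow into each node of a 3-node chain.
print(fixpoint(nodes=[0, 1, 2],
               succs=lambda n: {0: [1], 1: [2], 2: []}[n],
               transfer=lambda n, v: v | {n},
               bottom=frozenset(), join=lambda a, b: a | b,
               entry=0, init=frozenset({0})))
```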
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs, or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins". It is also incremental, in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
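A minimal sketch of the incremental aspect under assumed interfaces (the function names and the dirtiness propagation are illustrative, not the paper's mechanism): cached per-method summaries survive between runs, and only methods that changed, or that transitively call a changed method, are re-analysed.

```python
# Hypothetical incremental reanalysis: reuse cached summaries and
# re-analyse only methods affected by an edit.
def incremental_analyse(methods, callees, summarise, cache, changed):
    """methods: name -> body; callees(name) -> names it calls; cache
    holds summaries from the previous run; 'changed' lists edited methods."""
    dirty = set(changed)
    # Map each method to its direct callers, then propagate dirtiness
    # from changed methods up through their (transitive) callers.
    callers = {m: [c for c in methods if m in callees(c)] for m in methods}
    work = list(dirty)
    while work:
        m = work.pop()
        for c in callers[m]:
            if c not in dirty:
                dirty.add(c)
                work.append(c)
    for m in methods:
        if m in dirty or m not in cache:   # everything else is reused as-is
            cache[m] = summarise(methods[m])
    return cache
```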
Abstract:
It is often necessary to run response surface designs in blocks. In this paper the analysis of data from such experiments, using polynomial regression models, is discussed. The definition and estimation of pure error in blocked designs are considered. It is recommended that pure error be estimated by assuming additive block and treatment effects, as this is more consistent with designs without blocking. The recovery of inter-block information using REML analysis is discussed, although it is shown to have very little impact if the design is nearly orthogonally blocked. Finally, prediction from blocked designs is considered, and it is shown that prediction of many quantities of interest is much simpler than prediction of the response itself.
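For concreteness, the additive block-and-treatment model under which pure error is estimated can be written as follows (standard notation, assumed here rather than quoted from the paper).

```latex
% Additive block-and-treatment model for a blocked response surface
% design: gamma_i is the effect of block i, and the polynomial terms
% in x_ij carry the treatment effects. Replicates sharing x_ij within
% a block then estimate pure error free of block differences.
\[
  y_{ij} = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}
         + \gamma_i + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N(0, \sigma^2)
\]
```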
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid 80s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations, and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers).
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
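A minimal sketch, in Python, of task granularity control in the spirit described above. The threshold, cost model, and thread-pool backend are all assumptions of this sketch; the compilers discussed operate on logic-programming goals, not Python callables. The idea is simply to spawn a parallel task only when its estimated cost outweighs the scheduling overhead.

```python
# Illustrative granularity control: run cheap tasks inline, spawn
# expensive ones onto a pool.
from concurrent.futures import ThreadPoolExecutor

GRANULARITY_THRESHOLD = 10_000   # hypothetical cost cutoff, in "work units"

def maybe_parallel(tasks, cost, run, pool):
    """tasks: task descriptions; cost(t) estimates a task's work;
    run(t) executes one task. Cheap tasks are executed inline."""
    futures, results = [], []
    for t in tasks:
        if cost(t) >= GRANULARITY_THRESHOLD:
            futures.append(pool.submit(run, t))   # worth the spawn overhead
        else:
            results.append(run(t))                # too fine-grained: inline
    results.extend(f.result() for f in futures)
    return results

with ThreadPoolExecutor() as pool:
    print(maybe_parallel([3, 20_000, 5], cost=lambda n: n,
                         run=lambda n: sum(range(n)), pool=pool))
```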
Abstract:
This study aimed to evaluate the effects of a flavor-containing dentifrice on the formation of volatile sulphur compounds (VSCs) in morning bad breath. A two-step, blinded, crossover, randomized study was carried out in 50 dental students with a healthy periodontium, divided into two experimental groups: flavor-containing dentifrice (test) and non-flavor-containing dentifrice (control). The volunteers received the designated dentifrice and a new toothbrush for a three-times-daily brushing regimen over two periods of 30 days, with a seven-day washout interval between the periods. The assessed parameters were: plaque index (PI), gingival index (GI), organoleptic breath scores (ORG), VSC levels (as measured by a portable sulphide monitor) before (H1) and after (H2) cleaning of the tongue, tongue coating (TC) wet weight, and the BANA test from TC samples. The intra-group analysis showed a decrease in ORG, from 3 to 2, after 30 days for the test group (p < 0.05). The inter-group analysis showed lower values in ORG, H1 and H2 for the test group (p < 0.05). There was no difference in the amount of TC between groups, and the presence of flavor did not interfere with the BANA results between groups (p > 0.05). These findings suggest that a flavor-containing dentifrice may prevent VSC formation in morning bad breath, regardless of the amount of TC, in periodontally healthy subjects.
Abstract:
Forty-five children (31 boys and 14 girls), aged 6-11 years, were included in the study: 15 with a skeletal anterior open bite (SAOB), 15 with a dentoalveolar anterior open bite (DAOB), and 15 with a normal occlusion (CG), defined by clinical evaluation and lateral cephalograms. EMG recordings of the temporal and masseter muscles were performed under maximal voluntary clenching and during chewing. Analysis of variance was used for inter-group analysis, followed by the Tukey post hoc test. A Student's t-test for paired data was used for intra-group analysis. There were statistically significant differences among the three groups (P < 0.05), with the mean EMG activity being highest in the CG and lowest in children with a SAOB. The percentage EMG activity during chewing, relative to that during maximal voluntary clenching, was more than 100 per cent in the SAOB group. The CG and DAOB groups presented higher EMG activity during clenching compared with chewing (P < 0.001), as well as a greater difference between tasks. In the SAOB group, the neuromuscular system appeared to have a lower capacity to produce EMG activity according to the task, while the findings in the DAOB group suggest that their functional capacity during growth should also be carefully observed.
Abstract:
ABSTRACT: Study design: Quantitative, prospective, single-factor experimental study with a pre-test/post-test design. Objectives: To determine the effectiveness of real-time ultrasound, as visual ultrasound extrinsic feedback (IRE-VE), on the performance of the transversus abdominis (TrA) in healthy subjects; to analyse possible differences between IRE-VE and clinical verbal extrinsic feedback (IRE-VC); and to measure the performance of the abdominal musculature through differences in the thickness of the TrA and internal oblique (IO) muscles and the lateral slide of the TrA, at rest and during contraction. Background: Most individuals have neither knowledge nor awareness of the contribution that good TrA performance makes to the stability of the lumbar spine. Several recent studies have addressed this topic and have shown the important contribution of ultrasound as extrinsic feedback (IRE). Since the TrA and IO contribute to lumbopelvic stability, and since learning their motor control is essential for recovery of function, it is relevant to clarify the contribution of feedback in the first phase of learning the performance of these muscles, and to find the best strategies for delivering it. Ultrasound was the instrument chosen to serve this purpose. Methods: Seventy-five subjects with no low-back complaints, aged between 18 and 38 years (mean 21.9 years, ±4.03), took part in the study. They were randomly divided into three groups sharing a common task, the "transversus manoeuvre": one group received no feedback (GC), another received clinical verbal and palpatory feedback (GIRE-VC), and the third received visual ultrasound feedback (GIRE-VE). To analyse the contraction of the abdominal musculature, the thickness of the TrA and IO muscles and the lateral slide of the TrA were studied on real-time ultrasound images, frozen for later measurement. These procedures were validated in a pilot study of the reliability of the measurements concerned. For the statistical treatment of the muscle-performance variables, a parametric one-way analysis of variance for independent samples and a test for the difference of means for paired samples were carried out. Results: In the GC, the absence of feedback was accompanied by identical performance at the two assessment moments; in the two feedback groups, among the performance variables, TrA contraction differed significantly, with a difference of 1.95 mm in the GIRE-VE group (p=0.000) and 0.84 mm in the GIRE-VC group (p=0.000). Comparing the groups with one another, there was a difference at the threshold of significance (p=0.056) in favour of better TrA contraction in the GIRE-VE group. The other variables, IO contraction and TrA slide, revealed no feedback-related effect in any of the groups. Conclusion: From the results obtained, we can conclude that IRE-VE, when used alone during the transversus manoeuvre, produces a greater increase in TrA thickness than IRE-VC. Ultrasound proved effective in facilitating performance of the transversus manoeuvre in healthy subjects.---------------------ABSTRACT: Study Design: Single-Factor Experimental Design: Pre-Test/Post-Test Control Group Design. Objectives: To measure the contribution of different types of biofeedback to Transversus Abdominis (TrA) and Internal Oblique (IO) performance through changes in thickness and lateral slide of the TrA anterior fascia during the abdominal hollowing exercise (AHE).
Background: Increasingly, clinicians are using real-time ultrasound imaging as a form of supplementary feedback when teaching trunk stabilization exercises to patients; however, there has been no evidence of its effectiveness when used alone. Material and Methods: Seventy-five healthy subjects were randomly divided into 3 groups that received: group 1, no feedback; group 2, verbal and palpatory feedback; and group 3, real-time ultrasound feedback. The TrA and IO performance of each subject was assessed twice (before and after receiving feedback) while performing the AHE in a supine hook-lying position. Analysis of variance and the t-test were used for independent and paired samples, respectively, to determine significant changes in the performance of the TrA and IO, based on intra- and inter-group analysis. Results: Group 1 showed no differences between moments; group 2 showed significant differences in TrA thickness (p=0.000), with a 0.84 mm thickness difference; group 3 showed significant differences in TrA thickness (p=0.000), with a 1.94 mm difference. The ability to perform the AHE differed only between group 3 and group 1 (p=0.056), and only for changes in thickness of the TrA muscle. No differences among groups were found for either the lateral slide of the TrA anterior fascia or the internal oblique thickness. Conclusion: From the results of this study we conclude that real-time ultrasound feedback, when used alone during the AHE, can produce a larger increase in TrA thickness than verbal and palpatory feedback. Real-time ultrasound proved effective as a feedback tool for facilitating performance of the AHE in a supine hook-lying position in healthy subjects.