1000 results for Java program


Relevance:

30.00%

Publisher:

Abstract:

Automatic cost analysis of programs has traditionally concentrated on a small number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The proposed analysis is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.
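
To make the annotation interface concrete, here is a minimal Java sketch in the spirit of the approach; the @Cost annotation, its attributes, and the example methods are illustrative assumptions rather than the paper's actual annotation language:

```java
// Minimal sketch, assuming a hypothetical @Cost annotation; the paper's
// actual annotation language may differ.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ResourceAnnotationSketch {

    // Programmer-provided resource definition: states the basic consumption
    // that a program element makes of a named, user-defined resource.
    @Retention(RetentionPolicy.CLASS)
    @interface Cost {
        String resource();   // e.g. "sms", "bytes_sent"
        String amount();     // consumption per call, possibly size-dependent
    }

    @Cost(resource = "sms", amount = "1")
    void sendSms(String number, String text) { /* ... */ }

    // For this method the analysis would derive an upper-bound function
    // over the input size n = numbers.size(), e.g. sms(n) <= n.
    void broadcast(java.util.List<String> numbers, String text) {
        for (String number : numbers) {
            sendSms(number, text);   // consumes one "sms" unit per iteration
        }
    }
}
```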

Relevance:

30.00%

Publisher:

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs, or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins", as well as multivariant and flow-sensitive. It is also based on a program transformation, performed prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
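
The domain-independent "plug-in" design can be pictured with a minimal worklist fixpoint engine in Java; the Domain interface and Block type below are our own illustrative assumptions, and the sketch omits the paper's multivariance, flow-sensitivity, and iteration-reducing optimizations:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// The abstract domain is a plug-in: the engine only sees this interface.
interface Domain<A> {
    A bottom();                      // least element of the lattice
    A join(A x, A y);                // least upper bound
    boolean leq(A x, A y);           // partial-order test
    A transfer(Block b, A input);    // abstract effect of one block
}

final class Block {
    final int id;
    final java.util.List<Block> successors = new java.util.ArrayList<>();
    Block(int id) { this.id = id; }
}

final class FixpointEngine<A> {
    private final Domain<A> dom;
    FixpointEngine(Domain<A> dom) { this.dom = dom; }

    // Iterates transfer functions until no abstract value changes; the
    // result maps each block to an over-approximation of its input state.
    Map<Block, A> analyze(Block entry, A entryState) {
        Map<Block, A> in = new HashMap<>();
        Deque<Block> worklist = new ArrayDeque<>();
        in.put(entry, entryState);
        worklist.add(entry);
        while (!worklist.isEmpty()) {
            Block b = worklist.poll();
            A out = dom.transfer(b, in.get(b));
            for (Block s : b.successors) {
                A old = in.getOrDefault(s, dom.bottom());
                A merged = dom.join(old, out);
                if (!dom.leq(merged, old)) {  // value grew: propagate
                    in.put(s, merged);
                    worklist.add(s);          // revisit successor
                }
            }
        }
        return in;
    }
}
```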

Relevance:

30.00%

Publisher:

Abstract:

Automatic cost analysis of programs has been traditionally studied in terms of a number of concrete, predefined resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering an ample set of interesting resources.
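
The output side of such an analysis can be pictured as a closed-form function from input data sizes to a resource bound; the Java snippet below is a hand-written stand-in with hypothetical names and constants, not output of the actual tool:

```java
// Hand-written stand-in for an inferred bound; names and constants are
// hypothetical, not output of the actual tool.
final class InferredBound {
    static final long HEADER_BYTES = 16;   // assumed fixed per-record header

    // Shape of the analysis result: a closed-form upper bound on the
    // "bytes_sent" resource as a function of the input data sizes.
    static long bytesSentUpperBound(long records, long payloadSize) {
        return records * (HEADER_BYTES + payloadSize);
    }
}
```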

Relevance:

30.00%

Publisher:

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins". It is also incremental, in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, performed prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
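
The incremental aspect can be sketched as a cache of per-method summaries that invalidates only the edited methods and their transitive callers; the string method keys and the Summary type parameter below are illustrative assumptions, not the paper's data structures:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the incremental idea: per-method analysis summaries persist
// between runs, and after a small edit only methods whose summaries are
// invalidated (the edited ones and their transitive callers) are reanalyzed.
final class IncrementalCache<Summary> {
    private final Map<String, Summary> summaries = new HashMap<>();
    private final Map<String, Set<String>> callersOf = new HashMap<>();

    void store(String method, Summary s) { summaries.put(method, s); }

    void recordCall(String caller, String callee) {
        callersOf.computeIfAbsent(callee, k -> new HashSet<>()).add(caller);
    }

    // Drop the summaries that can no longer be trusted after 'changed'
    // was edited; everything else is reused verbatim on the next run.
    void invalidate(String changed) {
        Set<String> stale = new HashSet<>();
        collect(changed, stale);
        summaries.keySet().removeAll(stale);
    }

    private void collect(String m, Set<String> stale) {
        if (!stale.add(m)) return;                 // already visited
        for (String caller : callersOf.getOrDefault(m, Set.of())) {
            collect(caller, stale);                // callers may observe m
        }
    }

    Summary cached(String method) { return summaries.get(method); }
}
```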

Relevance:

30.00%

Publisher:

Abstract:

Subtype polymorphism is a cornerstone of object-oriented programming. By hiding variability in behavior behind a uniform interface, polymorphism decouples clients from providers and thus enables genericity, modularity and extensibility. At the same time, however, it scatters the implementation of the behavior over multiple classes, thus potentially hampering program comprehension. The extent to which polymorphism is used in real programs and the impact of polymorphism on program comprehension are not very well understood. We report on a preliminary study of the prevalence of polymorphism in several hundred open source software systems written in Smalltalk, one of the oldest object-oriented programming languages, and in Java, one of the most widespread ones. Although a large portion of the call sites in these systems are polymorphic, a majority have a small number of potential candidates. Smalltalk uses polymorphism to a much greater extent than Java. We discuss how these findings can be used as input for more detailed studies in program comprehension and for better developer support in the IDE.
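
A minimal Java example of the kind of call site the study counts: the call is polymorphic, yet has only two candidate targets, matching the finding that most polymorphic call sites have few candidates (the class names are invented for illustration):

```java
interface Shape { double area(); }

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

final class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

final class Report {
    // The receiver's dynamic type selects Circle.area or Square.area: the
    // client is decoupled from the concrete implementations, but the
    // behavior is scattered over two classes.
    static double total(java.util.List<Shape> shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();   // polymorphic call site
        return sum;
    }
}
```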

Relevance:

30.00%

Publisher:

Abstract:

We analyzed foraminiferal and nannofossil assemblages and stable isotopes in samples from ODP Hole 807A on the Ontong Java Plateau in order to evaluate productivity and carbonate dissolution cycles over the last 550 kyr (thousand years) in the western equatorial Pacific. Our results indicate that productivity was generally higher in glacials than during interglacials, and gradually increased since MIS 13. Carbonate dissolution was weak in deglacial intervals, but often reached a maximum during interglacial to glacial transitions. Carbonate cycles in the western equatorial Pacific were mainly influenced by changes of deep-water properties rather than by local primary productivity. Fluctuations of the estimated thermocline depth were not related to glacial to interglacial alternations, but changed distinctly at ~280 kyr. Before that time the thermocline was relatively shallow and its depth fluctuated with comparatively high amplitude and low frequency. After 280 kyr, the thermocline was deeper, and its fluctuations were of lower amplitude and higher frequency. These different patterns in productivity and thermocline variability suggest that thermocline dynamics probably were not a controlling factor of biological productivity in the western equatorial Pacific Ocean. In this region, upwelling and the influx of cool, nutrient-rich waters from the eastern equatorial Pacific, or of fresh water from rivers, have probably never been important, and their influence on productivity has been negligible over the studied period. Variations in the inferred productivity are in general well correlated with fluctuations in the eolian flux as recorded in the northwestern Pacific, a proxy for the late Quaternary history of the central East Asian dust flux into the Pacific. We therefore suggest that the dust flux from the central East Asian continent may have been an important driver of productivity in the western Pacific.

Relevance:

30.00%

Publisher:

Abstract:

Boron, Ca, Na, and Gd concentrations and H intensity in sediments obtained during Ocean Drilling Program Leg 192 were determined by prompt gamma neutron activation analysis. The results show strong positive correlation between B content and H intensity in carbonate samples; chalk samples have higher B contents than limestone samples. Average B content is 9.1 ppm for the chalk and 5.2 ppm for the limestone. When chert blocks or clay minerals are present in the carbonate samples, B content increases (up to 91 ppm).

Relevance:

30.00%

Publisher:

Abstract:

We provide a reconstruction of atmospheric CO2 from deep-sea sediments for the past 625,000 years (Milankovitch chron). Our database consists of a Milankovitch template of sea-level variation in combination with a unique data set for the deep-sea record for Ontong Java Plateau in the western equatorial Pacific. We redate the Vostok ice-core data of Barnola et al. (1987, doi:10.1038/329408a0). To make the reconstructions we employ multiple regression between deep-sea data, on one hand, and ice-core CO2 data in Antarctica, on the other. The patterns of correlation suggest that the main factors controlling atmospheric CO2 can be described as a combination of sea-level state and sea-level change. For best results, squared values of state and change are used. The square-of-sea-level rule agrees with the concept that shelf processes are important modulators of atmospheric CO2 (e.g., budgets of shelf organic carbon and shelf carbonate, nitrate reduction). The square-of-change rule implies that, on short timescales, any major disturbance of the system results in a temporary rise in atmospheric CO2.
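
Schematically, and as our own paraphrase rather than the authors' published equation, the square-of-state and square-of-change rules suggest a regression of the form:

```latex
% Schematic regression form; S(t) = sea-level state, \Delta S(t) = sea-level
% change, and the b_i are coefficients fitted against the ice-core CO2
% record. The exact functional form used by the authors is not reproduced.
\mathrm{CO_2}(t) \approx b_0 + b_1 \, S(t)^2 + b_2 \, \bigl(\Delta S(t)\bigr)^2
```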

Relevance:

30.00%

Publisher:

Abstract:

Starting with a UML specification that captures the underlying functionality of some given Java-based concurrent system, we describe a systematic way to construct, from this specification, test sequences for validating an implementation of the system. The approach is to first extend the specification to create UML state machines that directly address those aspects of the system we wish to test. To be specific, the extended UML state machines can capture state information about the number of waiting threads or the number of threads blocked on a given object. Using the SAL model checker we can generate from the extended UML state machines sequences that cover all the various possibilities of events and states. These sequences can then be directly transformed into test sequences suitable for input into a testing tool such as ConAn. As an illustration, the methodology is applied to generate sequences for testing a Java implementation of the producer-consumer system.
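
A minimal Java producer-consumer monitor of the kind such generated sequences would exercise, showing the waiting/blocked-thread states the extended state machines track; this is our own illustrative implementation, not the one tested in the paper:

```java
// Bounded buffer guarded by a single monitor; producers block when the
// buffer is full, consumers block when it is empty.
final class BoundedBuffer<T> {
    private final Object[] items;
    private int head, tail, count;

    BoundedBuffer(int capacity) { items = new Object[capacity]; }

    synchronized void put(T item) throws InterruptedException {
        while (count == items.length) {
            wait();                 // producer waits: buffer full
        }
        items[tail] = item;
        tail = (tail + 1) % items.length;
        count++;
        notifyAll();                // wake any waiting consumers
    }

    @SuppressWarnings("unchecked")
    synchronized T take() throws InterruptedException {
        while (count == 0) {
            wait();                 // consumer waits: buffer empty
        }
        T item = (T) items[head];
        items[head] = null;
        head = (head + 1) % items.length;
        count--;
        notifyAll();                // wake any waiting producers
        return item;
    }
}
```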