866 results for "Lower bounds"
Abstract:
Characteristic remanent magnetizations derived from detailed thermal and alternating-field demagnetization of basalts recovered at Ocean Drilling Program (ODP) Site 807 on the Ontong Java Plateau reveal constant normal polarity consistent with paleontological ages from overlying sediments, suggesting deposition in early Aptian times at the beginning of the Cretaceous Normal Polarity Superchron (K-N). The paleomagnetic data can be divided into 14 distinct inclination groups, which together define a paleolatitude of 18°S, some 16° shallower than expected from a Pacific apparent polar wander path (APWP) based on nonsedimentary data. The data display a trend in paleomagnetic inclination, showing shallower values with increasing depth. We conclude that this trend is a result of local tectonic tilting during the waning phases of volcanism on the plateau. Hotspot-based plate reconstructions for the Early Cretaceous place the Ontong Java Plateau on the Louisville hotspot, presently located at 51°S, whereas the paleolatitude for Site 807 based on the Pacific APWP is 34°S. Because the nominal mean inclination from Site 807 and values derived from Deep Sea Drilling Project (DSDP) sediments of other sites predict shallower paleolatitudes for the Ontong Java Plateau, values from the Pacific APWP provide lower bounds on true polar wander. Considering mantle plume sources on the southern and northern portions of the plateau (DSDP Site 288 and ODP Site 807, respectively), the Louisville hotspot appears to have moved 9°-17° to the south relative to the spin axis since the Early Cretaceous. This sense of motion is consistent with previous results for the Suiko Seamount (65 Ma) of the Hawaiian-Emperor Chain.
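The paleolatitudes quoted above follow from mean inclinations via the standard geocentric axial dipole relation; as a reminder (this is the textbook formula, not something stated in the abstract):

```latex
\tan I = 2\tan\lambda \quad\Longleftrightarrow\quad \lambda = \arctan\!\left(\tfrac{1}{2}\tan I\right)
```

Under this relation, for example, a paleolatitude of 18° corresponds to a mean inclination of roughly 33°.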
Abstract:
Uniaxial strain consolidation experiments were conducted to determine elastic and plastic properties and to estimate the permeability of sediments from 0 to 200 meters below seafloor at Ocean Drilling Program Sites 1194 and 1198. Plastic deformation is described by compression indices, which range from 0.19 to 0.37. Expansion indices, the elastic deformation measured during unload/reload cycles on samples, vary from 0.02 to 0.029. Consolidation experiments provide lower bounds on permeability between 5.4 × 10⁻¹⁶ m² and 1.9 × 10⁻¹⁸ m², depending on the consolidation state of the sample.
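For context, the compression and expansion indices reported above are conventionally defined as slopes of the void ratio versus log effective stress curve; a standard textbook formulation (assumed here, not taken from the abstract itself):

```latex
C_c = -\frac{\Delta e}{\Delta \log_{10}\sigma'_v}\ \text{(virgin, plastic loading)},\qquad
C_e = -\frac{\Delta e}{\Delta \log_{10}\sigma'_v}\ \text{(unload/reload, elastic)}
```

where e is the void ratio and σ'v is the vertical effective stress.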
Abstract:
Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture-independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
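As a rough illustration of the two-stage scheme described above, the toy sketch below combines per-instruction timing constants (the kind of output a one-time profiling stage would produce) with instruction-count functions (the kind of output a compile-time cost analysis would infer). Every name and number is invented for illustration and is not taken from the actual framework.

```python
# Hypothetical sketch: platform-dependent execution-time bounds obtained by
# weighting statically inferred instruction counts with per-instruction timing
# constants measured in a one-time profiling stage. All numbers are made up.

# Profiling stage output: (lower, upper) time per bytecode instruction, in ns.
instr_time_ns = {
    "call":  (12.0, 20.0),
    "unify": (3.0, 6.0),
    "retry": (8.0, 15.0),
}

# Cost-analysis output: instruction counts as functions of input size n
# for some predicate (a hypothetical analysis result).
instr_counts = {
    "call":  lambda n: n + 1,
    "unify": lambda n: 2 * n,
    "retry": lambda n: n,
}

def time_bounds(n):
    """Lower/upper bounds (ns) on execution time for input size n."""
    lo = sum(instr_time_ns[i][0] * instr_counts[i](n) for i in instr_counts)
    hi = sum(instr_time_ns[i][1] * instr_counts[i](n) for i in instr_counts)
    return lo, hi

print(time_bounds(100))  # (2612.0, 4720.0) for this made-up data
```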
Abstract:
Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent from the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to compute statically time bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.
Abstract:
Predicting statically the running time of programs has many applications, ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds of actual costs) has received considerable attention.
Abstract:
Non-failure analysis aims at inferring that predicate calls in a program will never fail. This type of information has many applications in functional/logic programming. It is essential for determining lower bounds on the computational cost of calls, useful in the context of program parallelization, instrumental in partial evaluation and other program transformations, and has also been used in query optimization. In this paper, we re-cast the non-failure analysis proposed by Debray et al. as an abstract interpretation, which not only allows it to be studied within a standard and well-understood theoretical framework, but also has several practical advantages. It allows us to incorporate non-failure analysis into a standard, generic abstract interpretation engine. The analysis thus benefits from the fixpoint propagation algorithm, which leads to improved information propagation. Also, the analysis takes advantage of the multi-variance of the generic engine, so that it is now able to infer separate non-failure information for different call patterns. Moreover, the implementation is simpler, and allows non-failure and covering analyses to be performed alongside other analyses, such as those for modes and types, in the same framework. Finally, besides the precision improvements and the additional simplicity, our implementation (in the Ciao/CiaoPP multiparadigm programming system) also shows better efficiency.
Abstract:
We provide a method whereby, given mode and (upper approximation) type information, we can detect procedures and goals that can be guaranteed to not fail (i.e., to produce at least one solution or not terminate). The technique is based on an intuitively very simple notion, that of a (set of) tests "covering" the type of a set of variables. We show that the problem of determining a covering is undecidable in general, and give decidability and complexity results for the Herbrand and linear arithmetic constraint systems. We give sound algorithms for determining covering that are precise and efficient in practice. Based on this information, we show how to identify goals and procedures that can be guaranteed to not fail at runtime. Applications of such non-failure information include programming error detection, program transformations and parallel execution optimization, avoiding speculative parallelism and estimating lower bounds on the computational costs of goals, which can be used for granularity control. Finally, we report on an implementation of our method and show that better results are obtained than with previously proposed approaches.
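The notion of a set of tests "covering" a type can be pictured with a toy check over a constructor-based type; the sketch below is only an illustrative caricature with invented names, not the algorithm from the paper.

```python
# Toy illustration (hypothetical): tests "cover" a type if every value of the
# type satisfies at least one test. Here the type is a Herbrand-like list type
# with constructors nil and cons, and the tests are clause-head patterns.

LIST_CONSTRUCTORS = {"nil", "cons"}   # every list value is built from these

def covers(test_patterns, constructors=LIST_CONSTRUCTORS):
    """True if every constructor is matched by some test pattern,
    i.e. no value of the type can fail all clause heads."""
    matched = {c for c in constructors if c in test_patterns or "_" in test_patterns}
    return matched == constructors

# A predicate with clauses for both [] (nil) and [H|T] (cons) cannot fail
# the head tests on a well-typed call; one with only a cons clause can.
print(covers({"nil", "cons"}))  # True  -> non-failure can be guaranteed
print(covers({"cons"}))         # False -> a call with [] may fail
```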
Abstract:
Information about the computational cost of programs is potentially useful for a variety of purposes, including selecting among different algorithms, guiding program transformations, granularity control and mapping decisions in parallelizing compilers, and query optimization in deductive databases. Cost analysis of logic programs is complicated by nondeterminism: on the one hand, procedures can return multiple solutions, making it necessary to estimate the number of solutions in order to give nontrivial upper bound cost estimates; on the other hand, the possibility of failure has to be taken into account while estimating lower bounds. Here we discuss techniques to address these problems to some extent.
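The interplay between solution counts and cost bounds described above can be pictured with the usual clause-body recurrence used in this line of work (a generic sketch, not the paper's exact equations): the upper-bound cost of a body accumulates each literal's cost weighted by how many solutions the preceding literals may produce,

```latex
\mathrm{Cost}^{\uparrow}(b_1,\dots,b_k)\;\le\;\sum_{i=1}^{k}\Big(\prod_{j<i}\mathrm{Sols}^{\uparrow}(b_j)\Big)\,\mathrm{Cost}^{\uparrow}(b_i),
```

whereas a sound lower bound may only count literals that are guaranteed to be reached and not to fail, which is why non-failure information matters for lower bounds.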
Abstract:
Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent from the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to compute statically time bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be simultaneously performed for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
Abstract:
Effective static analyses have been proposed which allow inferring functions which bound the number of resolutions or reductions. These have the advantage of being independent from the platform on which the programs are executed, and such bounds have been shown useful in a number of applications, such as granularity control in parallel execution. On the other hand, in certain distributed computation scenarios where different platforms come into play, each with different capabilities, it is more interesting to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution time. With this objective in mind, we propose a method which allows inferring upper and lower bounds on the execution times of the procedures of a program on a given execution platform. The approach combines compile-time cost bounds analysis with a one-time profiling of the platform in order to determine the values of certain constants for that platform. These constants calibrate a cost model which from then on is able to compute statically time bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
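One way to picture the calibration step described above is as fitting per-operation time constants from a handful of profiling runs and then reusing them to turn statically inferred cost functions into time-bound functions. The sketch below does this with a least-squares fit over made-up operation counts and timings; all names and numbers are hypothetical, not the actual CiaoPP cost model.

```python
# Hypothetical sketch of the one-time calibration step: fit per-operation time
# constants from a few profiling runs, then reuse them to convert cost
# (resolution/reduction count) functions into execution-time predictions.
import numpy as np

# Each profiling run: counts of low-level operations and the measured time (µs).
op_counts = np.array([      # columns: resolutions, unifications, backtracks
    [1000,   2500,  100],
    [5000,  12000,  600],
    [20000, 48000, 2500],
])
measured_us = np.array([4.1, 20.3, 82.0])

# Calibrate: least-squares estimate of time per operation on this platform.
time_per_op, *_ = np.linalg.lstsq(op_counts, measured_us, rcond=None)

# From then on, a statically inferred cost function (operation counts as a
# function of input size n) can be converted into a predicted execution time.
def predicted_time_us(n):
    counts = np.array([3 * n, 7 * n, n // 4])   # hypothetical analysis result
    return counts @ time_per_op

print(predicted_time_us(10_000))
```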
Abstract:
The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques rely heavily on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently from precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is the heart of most modern cost analyzers, and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
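As a minimal example of what solving a cost relation into a closed-form bound means (an illustrative recurrence, not one taken from the thesis):

```latex
C(0)=1,\qquad C(n)=C(n-1)+n\ \ (n\ge 1)\;\Longrightarrow\; C(n)=\frac{n(n+1)}{2}+1.
```

Here the closed form is exact, so it serves as both an upper and a lower bound; when the relation involves non-determinism or imprecise size relations, the inferred upper and lower closed forms generally differ.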
Abstract:
In this work we give an example of a set of n points in general position in which the minimum number of points that can belong to some k-set is attained for every k with 1 ≤ k < n/2. We also generalize, to points not in general position, the result of Erdős et al., 1973, on the minimum number of points that can belong to some k-set. The study of k-sets is a very relevant topic in the research area of computational geometry. In particular, the study of the maximum and minimum number of k-sets in sets of points of the plane in general position has been developed at great length in the literature. With respect to the maximum number of k-sets, lower bounds for this maximum have been provided by Erdős et al., Edelsbrunner and Welzl, and later by Tóth. Dey also stated an upper bound for this maximum number of k-sets. With respect to the minimum number of k-sets, this has been stated by Erdős et al. and, independently, by Lovász et al. In this paper the authors give an example of a set of n points in the plane in general position (no three collinear), in which the minimum number of points that can take part in, at least, a k-set is attained for every k with 1 ≤ k < n/2. The authors also extend Erdős's result about the minimum number of points in general position which can take part in a k-set to a set of n points not necessarily in general position. That is why this work complements the classic works we have mentioned before.
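To make the notion of a k-set concrete, the brute-force sketch below enumerates the k-sets of a small point set, using the fact that, for points in general position, every k-set is the set of k points extreme in some direction. It is only an illustrative toy with an invented example set, not the construction from the paper.

```python
# Toy enumeration of k-sets (subsets of k points separable from the rest by a
# line). For points in general position, the "top-k in some direction" set
# changes only at directions perpendicular to a segment joining two points,
# so sampling just around those critical directions finds every k-set.
from itertools import combinations
from math import atan2, cos, sin, pi

def k_sets(points, k, eps=1e-9):
    found = set()
    for p, q in combinations(points, 2):
        base = atan2(q[1] - p[1], q[0] - p[0])
        for perp in (base + pi / 2, base - pi / 2):
            for angle in (perp + eps, perp - eps):
                u = (cos(angle), sin(angle))
                ranked = sorted(points,
                                key=lambda r: r[0] * u[0] + r[1] * u[1],
                                reverse=True)
                found.add(frozenset(ranked[:k]))
    return found

pts = [(0, 0), (4, 1), (1, 3), (3, 4), (2, 2)]   # hypothetical example set
print(len(k_sets(pts, 2)), "distinct 2-sets")
```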
Abstract:
In this paper we examine multi-objective linear programming problems in the face of data uncertainty both in the objective function and the constraints. First, we derive a formula for the radius of robust feasibility guaranteeing constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions which are weakly efficient for any possible instance of the objective matrix within a specified uncertainty set, providing lower bounds for the radius of highly robust efficiency guaranteeing the existence of this type of solutions under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions.
Abstract:
We prove upper and lower bounds relating the quantum gate complexity of a unitary operation, U, to the optimal control cost associated to the synthesis of U. These bounds apply for any optimal control problem, and can be used to show that the quantum gate complexity is essentially equivalent to the optimal control cost for a wide range of problems, including time-optimal control and finding minimal distances on certain Riemannian, sub-Riemannian, and Finslerian manifolds. These results generalize the results of [Nielsen, Dowling, Gu, and Doherty, Science 311, 1133 (2006)], which showed that the gate complexity can be related to distances on a Riemannian manifold.