10 results for gap, minproblem, algoritmi, esatti, lower, bound, posta
at Universidad Politécnica de Madrid
Abstract:
It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
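As a hedged illustration of the divide-and-conquer case (a sketch under assumptions of our own, not an example taken from the paper): suppose type and mode information guarantees that head unification cannot fail for a predicate that splits an input of size $n$ into two halves and combines the partial results in at least linear time. The lower-bound cost recurrence is then

    $C_{lb}(n) \geq C_{lb}(\lceil n/2 \rceil) + C_{lb}(\lfloor n/2 \rfloor) + c\,n$, with $C_{lb}(1) = c_0$,

which solves to $C_{lb}(n) \in \Omega(n \log n)$, a nontrivial bound instead of the trivial 0 obtained when unification failure must be admitted.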
Abstract:
The early 18th-century treatise writer Tomás Vicente Tosca includes in his Tratado de la montea y cortes de Canteria [On Masonry Design and Stone Cutting] what is an important documentary source on the lantern of Valencia Cathedral. Tosca presents this lantern as an example of vaulting over cross arches without the need for buttresses. A geometrical description is followed by an explanation of the structural behavior that shows his deep understanding of the mechanics of masonry structures. He tries to demonstrate that buttresses are unnecessary, supporting his thesis on an appropriate distribution of loads that reduces the "empujos" [horizontal thrusts] to the point of requiring no more than the thickness of the walls to stand (Tosca [1727] 1992, 227-230). The present article assesses Tosca's appraisal by studying how the loads, and the thrusts they generate, are transmitted through the different masonry elements that constitute this ciborium. To do so, we first present a geometrical analysis and consider its materials and construction methods, and subsequently analyze its stability by adopting an equilibrium approach within the theoretical framework of lower bound limit analysis.
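For context, lower bound limit analysis of masonry (standard background in the tradition of Heyman's safe theorem, not a result of this article) states that if any thrust line in equilibrium with the applied loads can be drawn lying wholly within the masonry, the structure is safe. At every cross section of thickness $t$ carrying normal force $N$ and bending moment $M$, this containment condition reads

    $|e| = |M/N| \leq t/2$,

so demonstrating stability reduces to exhibiting one admissible equilibrium state, which is what the equilibrium approach mentioned above sets out to do.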
Abstract:
We present a novel framework for the encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture by assuming unlimited processing capacity in the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency is computed systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency to the encoder. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
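A minimal sketch of the graph idea (hypothetical Python, not the authors' implementation): under unlimited processing capacity, each frame can start as soon as all of its reference frames are encoded, so the encoding latency of the structure is the longest processing-time path through the prediction-dependency DAG.

    # Prediction structure as a DAG: frame -> reference frames it predicts from,
    # with per-frame processing times; all values are illustrative.
    refs = {"I0": [], "P1": ["I0"], "B2": ["I0", "P1"], "P3": ["P1"]}
    proc = {"I0": 10, "P1": 12, "B2": 8, "P3": 12}

    def finish_time(frame):
        # A frame finishes after its slowest reference finishes plus its own time.
        return proc[frame] + max((finish_time(r) for r in refs[frame]), default=0)

    # Encoding latency of the whole structure = worst finish time over all frames.
    print(max(finish_time(f) for f in refs))  # 34, via the chain I0 -> P1 -> P3

Removing or reordering a dependency on the longest chain is exactly the kind of change the model exposes as a latency reduction.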
Abstract:
We present a static analysis that infers both upper and lower bounds on the usage that a logic program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) resource usage functions for all predicates in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed some preliminary experiments with concrete resources such as execution steps, bytes sent or received by an application, number of files left open, number of accesses to a database, number of calls to a procedure, number of asserts/retracts, etc. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.
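As a hypothetical illustration of what such inferred bound functions look like (invented names and numbers, not the system's actual output or syntax):

    # Inferred bounds on "execution steps" as functions of input list length n.
    def append_steps_lb(n): return n + 1   # append/3: one step per element plus the base case
    def append_steps_ub(n): return n + 1   # for append both bounds coincide

    def member_steps_lb(n): return 1       # member/2 may succeed at the first element
    def member_steps_ub(n): return n       # ... or only at the last one

The gap between the two member/2 bounds is the kind of information the assertion language lets one state and the system then check.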
Abstract:
In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware static resource usage verification. Specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, we provide a number of novel contributions in this paper: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
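A minimal sketch of the interval-producing check (hypothetical code, not the Ciao/CiaoPP implementation): compare the inferred lower- and upper-bound functions against a specified budget over a range of input data sizes, and report for which sizes the assertion is proved or disproved.

    # Inferred resource-usage bounds and the specified budget, all as functions
    # of input data size n; the concrete functions are illustrative.
    inferred_lb = lambda n: 2 * n * n
    inferred_ub = lambda n: 3 * n * n
    spec_budget = lambda n: 100 * n        # assertion: usage =< 100*n

    proved    = [n for n in range(1, 201) if inferred_ub(n) <= spec_budget(n)]
    disproved = [n for n in range(1, 201) if inferred_lb(n) > spec_budget(n)]
    print(f"proved for n in [{min(proved)}, {max(proved)}]")           # [1, 33]
    print(f"disproved for n in [{min(disproved)}, {max(disproved)}]")  # [51, 200]

Sizes between the two intervals (here 34-50) remain undetermined, matching the interval-valued outcome described above.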
Abstract:
We present a generic analysis that infers both upper and lower bounds on the usage that a program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) cost functions for all procedures in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed experiments with concrete resource-related properties such as execution steps, bits sent or received by an application, number of arithmetic operations performed, number of calls to a procedure, number of transactions, etc., presenting the resource usage functions inferred and the times taken to perform the analysis. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.
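As a hedged sketch of the user-definable resource notion (invented names, not the paper's assertion language): a resource associates a basic cost function with each elementary operation, and derived cost functions compose them along the program structure.

    # A resource maps elementary operations to basic cost functions of the
    # operation's argument size; names and constants are illustrative.
    bits_sent = {"send_header":  lambda size: 160,        # fixed-size header
                 "send_payload": lambda size: 8 * size}   # 8 bits per byte

    def transmit_cost(n_packets, payload_size, resource):
        # Derived cost function: each packet sends one header and one payload.
        per_packet = resource["send_header"](0) + resource["send_payload"](payload_size)
        return n_packets * per_packet

    print(transmit_cost(10, 512, bits_sent))  # 10 * (160 + 4096) = 42560 bits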
Abstract:
The intermittency phenomenon is a continuous route from regular to chaotic behaviour. Intermittency occurs when a signal alternates chaotic bursts with quasi-regular periods called laminar phases, driven by the so-called reinjection probability density function (RPD). In this paper a new technique to obtain the RPD for type-II and type-III intermittency is introduced. The new RPD is more general than the classical one and includes the classical RPD as a particular case. The probabilities of the laminar length, the average laminar lengths, and the characteristic relations are determined with and without a lower bound of the reinjection, in agreement with numerical simulations. Finally, the effect of noise on intermittency is analyzed. A method to obtain the noisy RPD is developed by extending the procedure used in the noiseless case. The analytical results show good agreement with numerical simulations.
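For orientation, the generalized RPD used in this line of work (stated here as standard background, not quoted from the abstract) is a power law

    $\phi(x) = b\,(x - \hat{x})^{\alpha}$, with $\alpha > -1$,

where $\hat{x}$ is the lower bound of reinjection; the classical uniform RPD is recovered as the particular case $\alpha = 0$.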
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. The analysis of these properties can become more precise and information-rich if it employs program analysis techniques, such as complexity and sharing analyses, which are able to take into account simultaneously the control and data structures, dependencies, and operations in a composition. Computation cost analysis for service compositions can support predictive monitoring and proactive adaptation by automatically inferring computation cost in the form of upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting, among the candidate services, those that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data, which can be used to predict, at invocation time, potential or imminent Service Level Agreement (SLA) violations. In mission-critical compositions, effective and accurate continuous QoS prediction can be achieved by constraint modeling of the composition QoS based on its structure, empirical runtime data, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants engaging in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, based on the functional dependencies and information content of the composition's messages, internal data, and activities, in the presence of complex control constructs such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
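A minimal sketch of the cost-driven candidate selection described above (hypothetical services and cost functions, not from the thesis): given per-candidate cost functions of the input message size, pick the cheapest candidate for the actual data being passed.

    # Candidate services for one composition step, each with an inferred
    # computation-cost function of input message size (illustrative numbers).
    candidates = {
        "svcA": lambda size: 50 + 2 * size,    # high setup cost, cheap per unit
        "svcB": lambda size: 5 + 10 * size,    # cheap setup, expensive per unit
    }

    def pick(size):
        # Adaptation: select the candidate minimizing cost for this message.
        return min(candidates, key=lambda s: candidates[s](size))

    print(pick(2))    # svcB (25 vs 54)
    print(pick(100))  # svcA (250 vs 1005)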
Abstract:
In previous papers, the type-I intermittency phenomenon with continuous reinjection probability density (RPD) has been extensively studied. In this paper, type-I intermittency with a discontinuous RPD function in one-dimensional maps is analyzed. To carry out the present study, the analytic approximation presented by del Río and Elaskar (Int. J. Bifurc. Chaos 20:1185-1191, 2010) and Elaskar et al. (Physica A 390:2759-2768, 2011) is extended to discontinuous RPD functions. The results of this analysis show that the characteristic relation depends only on the position of the lower bound of reinjection (LBR); therefore, for an LBR below the tangent point the relation $\bar{l} \propto \varepsilon^{-1/2}$, where $\varepsilon$ is the control parameter, remains robust regardless of the form of the RPD, although the average laminar length $\bar{l}$ can change. Finally, a study of the discontinuous RPD for the type-I intermittency that occurs in a three-wave truncation model for the derivative nonlinear Schrödinger equation is presented. In all tests the theoretical results agree well with the numerical data.
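As background for that characteristic relation (the standard local analysis for type-I intermittency, not a derivation specific to this paper): near the tangent point the map behaves like $x_{n+1} = x_n + \varepsilon + x_n^2$, whose continuous approximation $dx/dl = \varepsilon + x^2$ yields the laminar length

    $l(x, c) = \frac{1}{\sqrt{\varepsilon}} \left[ \arctan\frac{c}{\sqrt{\varepsilon}} - \arctan\frac{x}{\sqrt{\varepsilon}} \right]$,

for a reinjection at $x$ and laminar interval end $c$, so that averaging over reinjections below the tangent point gives $\bar{l} \propto \varepsilon^{-1/2}$.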