45 results for Failure time analysis

at Universidad Politécnica de Madrid


Relevance: 100.00%

Publisher:

Abstract:

This paper discusses some issues which arise in the dataflow analysis of constraint logic programming (CLP) languages. The basic technique applied is that of abstract interpretation. First, some types of optimizations possible in a number of CLP systems (including efficient parallelization) are presented, and the information that has to be obtained at compile time in order to implement such optimizations is considered. Two approaches are then proposed and discussed for obtaining this information for a CLP program: one based on an analysis of a CLP metainterpreter using standard Prolog analysis tools, and a second one based on direct analysis of the CLP program. For the second approach, an abstract domain which approximates groundness (also referred to as "definiteness") information (i.e., being constrained to a single value) and the related abstraction functions are presented.
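Below is a minimal Python sketch of the kind of groundness ("definiteness") information such a domain tracks. The two-value lattice, the transfer functions and the example constraints are illustrative simplifications, not the abstract domain or abstraction functions defined in the paper.

```python
# Minimal sketch of a groundness ("definiteness") abstract domain.
# Hypothetical names; the paper's actual domain and abstraction
# functions are more refined than this illustration.

GROUND, ANY = "ground", "any"   # abstract values: fixed to one value / unknown

def join(a, b):
    """Least upper bound of two abstract values (lattice join)."""
    return GROUND if a == b == GROUND else ANY

def abstract_eq(env, x, y):
    """Abstract effect of a constraint X = Y: if either side is
    definite, the other becomes definite too."""
    if env.get(x, ANY) == GROUND or env.get(y, ANY) == GROUND:
        env[x] = env[y] = GROUND
    return env

def abstract_linear(env, lhs, rhs_vars):
    """Abstract effect of a linear constraint lhs = f(rhs_vars):
    lhs is definite once every variable on the right is definite."""
    if all(env.get(v, ANY) == GROUND for v in rhs_vars):
        env[lhs] = GROUND
    return env

# Example: analysing  X = 3, Y = 2*X, Z = X + Y
env = {"X": ANY, "Y": ANY, "Z": ANY}
env["X"] = GROUND                        # X = 3 makes X definite
env = abstract_linear(env, "Y", ["X"])   # Y = 2*X  -> Y definite
env = abstract_linear(env, "Z", ["X", "Y"])
print(env)   # {'X': 'ground', 'Y': 'ground', 'Z': 'ground'}
```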

Relevance: 100.00%

Publisher:

Abstract:

According to UN projections, world population will grow to 9,200 million people between 2007 and 2050. In fact, for the first time in history, in 2008 the world urban population became larger than the rural population. The growth of urban areas and their transport infrastructures has influenced agricultural land use because the change is irreversible, especially when such land remains as periurban vacant land, losing its character and identity. In the Europe of the nineties, the traditional urban-rural gradient, characterized by a sharp contact between both land types, became so complex that it turned into a gradient in which it is difficult to separate urban and rural land uses [Antrop 2004]. A literature review was made of the methodologies used for urban-rural gradient analysis, and one was selected that integrates an ecological characterization based on spatial metrics with a geographical characterization based on spatial components. The cartographic sources used were Corine Land Cover at 1:100,000 scale and the Spanish Land Use Information System at 1:25,000 scale. The urban-rural gradient paradigm is an analysis methodology, coming from landscape ecology, which makes it possible to investigate how urbanization provokes changes in ecological patterns and processes in the landscape [Hahs and McDonnell 2006]. The present research adapts this methodology to study the urban-rural gradient in the outskirts of Madrid, Toledo and Guadalajara. Both scales (1:25,000 and 1:100,000) were used simultaneously to meet the following objectives: 1) analysis of landscape pattern dynamics in relation to the distance to the town centre and to major infrastructures; 2) analysis of landscape pattern dynamics in the fringe of protected areas. The paper presents a new approach to the urban-rural relationship which allows better planning and management of urban areas.
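As an illustration of one ingredient of such a gradient analysis, the following Python sketch computes the share of artificial land cover in concentric rings around a town centre from a rasterised land-cover map. The raster, class codes, cell size and ring width are hypothetical; the study itself relies on CORINE Land Cover and Spanish land-use data and on a richer set of spatial metrics.

```python
# Sketch of one step of an urban-rural gradient analysis: the share of
# artificial (urban) land cover in concentric rings around a town centre.
import numpy as np

def urban_share_by_ring(landcover, centre, cell_size_m, ring_width_m,
                        urban_classes, n_rings=10):
    rows, cols = np.indices(landcover.shape)
    dist_m = np.hypot(rows - centre[0], cols - centre[1]) * cell_size_m
    shares = []
    for k in range(n_rings):
        ring = (dist_m >= k * ring_width_m) & (dist_m < (k + 1) * ring_width_m)
        cells = landcover[ring]
        shares.append(np.isin(cells, urban_classes).mean() if cells.size else np.nan)
    return shares   # one value per ring: the urban-rural gradient curve

# Toy example: 200x200 raster, class 1 = artificial, class 2 = agricultural.
rng = np.random.default_rng(0)
lc = rng.choice([1, 2], size=(200, 200), p=[0.3, 0.7])
print(urban_share_by_ring(lc, centre=(100, 100), cell_size_m=25,
                          ring_width_m=500, urban_classes=[1]))
```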

Relevance: 100.00%

Publisher:

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
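The following Python sketch illustrates the general idea under simplifying assumptions: compile-time bounds on the counts of a few abstract operations, combined with per-operation times obtained from a one-time profiling of the platform, yield a statically evaluable time-bound function. The operation set and all numbers are hypothetical; this is not CiaoPP's actual cost model.

```python
# Sketch of platform calibration: a compile-time analysis gives, per
# procedure, bounds on the number of each kind of low-level operation as
# a function of input size n; a one-time profiling run gives the
# per-operation times for the platform.  Names and values are illustrative.

# Profiled once per platform (seconds per operation) -- hypothetical values.
platform_params = {"unify": 4.0e-8, "call": 6.0e-8, "arith": 2.0e-8}

# Compile-time cost bounds: operation counts as functions of input size n
# (e.g. for a list-processing predicate the counts grow linearly in n).
upper_counts = {"unify": lambda n: 3 * n + 2,
                "call":  lambda n: n + 1,
                "arith": lambda n: n}

def time_bound(counts, params, n):
    """Evaluate the statically derived time-bound function at size n."""
    return sum(params[op] * counts[op](n) for op in counts)

for n in (10, 100, 1000):
    print(f"n={n:5d}  predicted upper time bound: "
          f"{time_bound(upper_counts, platform_params, n):.2e} s")
```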

Relevance: 90.00%

Publisher:

Abstract:

We present a novel approach for detecting severe obstructive sleep apnea (OSA) cases by introducing non-linear analysis into sustained speech characterization. The proposed scheme was designed to provide additional information to our baseline system, built on top of state-of-the-art cepstral domain modeling techniques, with the aim of improving accuracy rates. This new information is only lightly correlated with our previous MFCC modeling of sustained speech and uncorrelated with the information in our continuous speech modeling scheme. Tests have been performed to evaluate the improvement for our detection task, based on sustained speech alone as well as combined with a continuous speech classifier, resulting in a 10% relative reduction in classification error for the former and a 33% relative reduction for the fused scheme. These results encourage us to consider the existence of non-linear effects in OSA patients' voices, and to think about tools which could be used to improve short-time analysis.
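As a rough illustration of combining the two subsystems, the following Python sketch performs a weighted late fusion of per-speaker scores. The weighting, threshold and score ranges are hypothetical and do not reproduce the paper's actual fusion scheme.

```python
# Sketch of late fusion of two subsystem scores for severe-OSA detection.
# Weights and threshold are hypothetical, for illustration only.

def fuse_scores(score_sustained, score_continuous, w=0.4, threshold=0.5):
    """Weighted late fusion of two per-speaker scores in [0, 1]."""
    fused = w * score_sustained + (1.0 - w) * score_continuous
    return fused, fused >= threshold   # (fused score, severe-OSA decision)

# Example: sustained-speech (non-linear) subsystem mildly positive,
# continuous-speech subsystem strongly positive.
print(fuse_scores(0.55, 0.80))   # -> (0.7, True)
```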

Relevance: 90.00%

Publisher:

Abstract:

A framework for the automatic parallelization of (constraint) logic programs is proposed and proved correct. Intuitively, the parallelization process replaces conjunctions of literals with parallel expressions. Such expressions trigger at run time the exploitation of restricted, goal-level, independent and-parallelism. The parallelization process proceeds in two steps. The first builds a conditional dependency graph (which can be simplified using compile-time analysis information), while the second transforms the resulting graph into linear conditional expressions, the parallel expressions of the &-Prolog language. Several heuristic algorithms for the latter ("annotation") process are proposed and proved correct. Algorithms are also given which determine whether there is any loss of parallelism in the linearization process with respect to a proposed notion of maximal parallelism. Finally, a system is presented which implements the proposed approach. The performance of the different annotation algorithms is compared experimentally in this system by studying the time spent in parallelization and the effectiveness of the results in terms of speedups.
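The following toy Python sketch illustrates the two steps on an unconditional case: build a dependency graph over body literals from shared variables, then linearize it into groups of independent goals joined by "&". The real annotators also emit run-time independence conditions and work on abstract sharing information rather than raw variable sets.

```python
# Toy sketch of the two parallelization steps: (1) build a dependency
# graph over body literals, (2) linearize it into groups of independent
# goals that may run in parallel ("&").

def build_dep_graph(literals):
    """literals: list of (goal_text, set_of_variables) in textual order.
    A literal depends on every earlier literal it shares a variable with."""
    deps = {i: set() for i in range(len(literals))}
    for i, (_, vars_i) in enumerate(literals):
        for j in range(i):
            if vars_i & literals[j][1]:
                deps[i].add(j)
    return deps

def annotate(literals, deps):
    """Greedy linearization: successive groups of goals whose
    dependencies have already been scheduled."""
    done, groups = set(), []
    while len(done) < len(literals):
        ready = [i for i in range(len(literals))
                 if i not in done and deps[i] <= done]
        groups.append(" & ".join(literals[i][0] for i in ready))
        done |= set(ready)
    return ", ".join(f"({g})" if " & " in g else g for g in groups)

body = [("p(X)", {"X"}), ("q(Y)", {"Y"}), ("r(X)", {"X"}), ("s(Y)", {"Y"})]
print(annotate(body, build_dep_graph(body)))
# -> (p(X) & q(Y)), (r(X) & s(Y))
```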

Relevance: 90.00%

Publisher:

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent of the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that particular platform. The approach has been implemented and integrated in the CiaoPP system.

Relevance: 90.00%

Publisher:

Abstract:

Stereo video techniques are effective for estimating the space–time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula, in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set to provide a global space–time imaging of the sea surface, viz. a simultaneous reconstruction of several spatial snapshots of the surface that guarantees continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding the associated directional spectra and point wave statistics, which agree well with probabilistic models. In particular, WASS stereo imaging is able to capture typical features of the wave surface, especially the crest-to-trough asymmetry due to second-order nonlinearities, and the observed shape of large waves is fairly well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space–time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space–time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.
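As a toy illustration of the space–time versus point comparison, the following Python sketch synthesizes a linear random wave field as a sum of directional components with random phases and compares the maximum elevation recorded by a single point probe with the maximum over the whole area and duration. All parameters are illustrative; this is neither the WASS processing chain nor the stochastic model used in the paper.

```python
# Toy linear random wave field used to compare the maximum surface
# elevation over an area and duration with the maximum seen by a single
# point probe.  Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_comp = 200
k = rng.uniform(0.02, 0.2, n_comp)                   # wavenumbers (rad/m)
theta = rng.uniform(-np.pi / 3, np.pi / 3, n_comp)   # directions (rad)
omega = np.sqrt(9.81 * k)                            # deep-water dispersion
amp = 0.05 * rng.rayleigh(1.0, n_comp)               # amplitudes (m)
phase = rng.uniform(0, 2 * np.pi, n_comp)

x = np.linspace(0, 100, 30)          # 100 m x 100 m area
y = np.linspace(0, 100, 30)
t = np.linspace(0, 600, 600)         # 10-minute record
X, Y, T = np.meshgrid(x, y, t, indexing="ij")

eta = np.zeros_like(X)
for a, ki, th, w, ph in zip(amp, k, theta, omega, phase):
    eta += a * np.cos(ki * (X * np.cos(th) + Y * np.sin(th)) - w * T + ph)

print("point-probe maximum elevation :", eta[0, 0, :].max())
print("space-time maximum elevation  :", eta.max())
```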

Relevance: 90.00%

Publisher:

Abstract:

This paper describes a case study in WCET analysis of an on-board spacecraft software system. The attitude control system of UPMSat-2, an experimental micro-satellite scheduled to be launched in 2013, is used for an experiment on analysing the worst-case execution time of code automatically generated from a Simulink model. In order to properly test the code, a hardware-in-the-loop configuration with a simulation model of the spacecraft environment has been used as a test bench. The code has been analysed with RapiTime, with some modifications to the original instrumentation routines, in order to take into account the particularities of the test configuration. Results from the experiment are described and discussed in the paper.

Relevance: 90.00%

Publisher:

Abstract:

We study the dynamical states of a small-world network of recurrently coupled excitable neurons through both numerical and analytical methods. The dynamics of this system depend mostly on both the number of long-range connections, or "shortcuts", and the delay associated with neuronal interactions. We find that persistent activity emerges at low shortcut density, and that the system undergoes a transition to failure as their density reaches a critical value. The state of persistent activity below this transition consists of multiple stable periodic attractors, whose number increases at least as fast as the number of neurons in the network. At large shortcut density and for long enough delays, the network dynamics exhibit exceedingly long chaotic transients whose failure times follow a stretched exponential distribution. We show that this functional form arises for the ensemble-averaged activity if the failure time for each individual network realization is exponentially distributed.
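The stated mechanism can be checked numerically: if each realization fails exponentially with its own rate, the ensemble-averaged survival curve is well fitted by a stretched exponential with exponent below one. In the Python sketch below the spread of rates is drawn from a lognormal distribution purely for illustration; it is not the distribution measured in the paper.

```python
# Numerical illustration: each network realization has an exponentially
# distributed failure time with its own rate, and the ensemble-averaged
# survival probability is fitted by exp[-(t/tau)^beta].  The lognormal
# spread of rates is an arbitrary choice for illustration.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
rates = rng.lognormal(mean=0.0, sigma=1.0, size=5000)  # one rate per realization
t = np.linspace(0.01, 20, 200)

# Ensemble-averaged survival: mean over realizations of exp(-rate * t).
survival = np.exp(-np.outer(t, rates)).mean(axis=1)

def stretched_exp(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

(tau, beta), _ = curve_fit(stretched_exp, t, survival, p0=(1.0, 0.7))
print(f"fitted tau = {tau:.2f}, stretching exponent beta = {beta:.2f} (< 1)")
```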

Relevance: 90.00%

Publisher:

Abstract:

This paper describes the authors' experience with static analysis of both the WCET and the stack usage of a satellite on-board software subsystem. The work is a continuation of a previous case study that used a dynamic WCET analysis tool on an earlier version of the same software system. In particular, the AbsInt aiT tool has been evaluated by analysing both C and Ada code generated by Simulink within the UPMSat-2 project. Some aspects of the aiT tool, specifically those dealing with SPARC register windows, are compared to another static analysis tool, Bound-T. The results of the analysis are discussed, and some conclusions on the use of static WCET analysis tools on the SPARC architecture are presented in the paper.

Relevance: 80.00%

Publisher:

Abstract:

Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.

Relevance: 80.00%

Publisher:

Abstract:

Dynamic scheduling increases the expressive power of logic programming languages, but also introduces some overhead. In this paper we present two classes of program transformations designed to reduce this additional overhead, while preserving the operational semantics of the original programs, modulo ordering of literals woken at the same time. The first class of transformations simplifies the delay conditions while the second class moves delayed literals later in the rule body. Application of the program transformations can be automated using information provided by compile-time analysis. We provide experimental results obtained from an implementation of the proposed techniques using the CIAO prototype compiler. Our results show that the techniques can lead to substantial performance improvement.

Relevance: 80.00%

Publisher:

Abstract:

This project develops a data-collection and analysis methodology for optimizing production in the advance (heading) phase of the construction of the Seiró tunnel. The tunnel is part of the construction works of the north-northwest corridor of the high-speed railway line as it crosses the province of Ourense. By analysing several blasting design parameters in the construction of a tunnel with a 73 m2 cross-section in granitic rock, the aim is to optimize the excavation cycle time. The study examines the variation of the fragmentation, the contour and the advance length obtained, the influence of the blasting pattern and the type of cut employed on these parameters, and their repercussions on the cycle time. The analysis concludes that the optimal blasting pattern, among those used, consists of 112 blastholes of 51 mm diameter with a parallel cut of 4 empty holes, which yields the best results in advance, contour and fragmentation without negatively affecting the cycle time.

Relevance: 80.00%

Publisher:

Abstract:

The solaR package allows for reproducible research on both photovoltaic (PV) system performance and solar radiation. It includes a set of classes, methods and functions to calculate the sun geometry and the solar radiation incident on a photovoltaic generator, and to simulate the performance of several applications of photovoltaic energy. The package performs the whole calculation procedure, from both daily and intradaily global horizontal irradiation to the final productivity of grid-connected PV systems and water-pumping PV systems. It is designed using a set of S4 classes whose core is a group of slots with multivariate time series. The classes share a variety of methods to access the information and several visualization methods. In addition, the package provides a tool for the visual statistical analysis of the performance of a large PV plant composed of several systems. Although solaR is primarily designed for time series associated with a location defined by its latitude/longitude values and its temperature and irradiation conditions, it can easily be combined with spatial packages for space-time analysis.
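For orientation only, the following Python sketch reduces the end-to-end calculation to the usual performance-ratio approximation for the daily AC yield of a grid-connected system. solaR itself is an R package and resolves sun geometry, effective irradiance and module and inverter behaviour in much more detail; all numbers here are illustrative.

```python
# Very simplified sketch of the calculation chain the package automates:
# daily AC yield of a grid-connected system from daily in-plane
# irradiation, using the performance-ratio approximation
#   E_ac = P_nom * (H_poa / G_stc) * PR
# All figures are illustrative.

G_STC = 1.0          # kW/m2, irradiance at standard test conditions

def daily_energy_kwh(h_poa_kwh_m2, p_nom_kw, performance_ratio=0.80):
    """Daily AC energy (kWh) from daily in-plane irradiation (kWh/m2)."""
    return p_nom_kw * (h_poa_kwh_m2 / G_STC) * performance_ratio

# One month of hypothetical daily in-plane irradiation (kWh/m2/day).
daily_h_poa = [5.8, 6.1, 4.2, 6.5, 5.9, 6.0, 3.1, 5.5, 6.2, 6.4,
               5.7, 6.0, 6.3, 4.8, 5.9, 6.1, 6.2, 5.4, 6.0, 6.5,
               6.4, 6.1, 5.2, 5.8, 6.0, 6.3, 6.1, 5.9, 6.2, 6.0]
p_nom = 100.0        # kW nominal power of the generator
monthly = sum(daily_energy_kwh(h, p_nom) for h in daily_h_poa)
print(f"Estimated monthly AC yield: {monthly:.0f} kWh")
```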

Relevance: 80.00%

Publisher:

Abstract:

Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. The use of such images is having a great impact in the field of medicine and, in particular, stands out in the detection of different types of cancer. Within this field, one of the main current problems is the real-time analysis of these images since, due to the large volume of data they contain, the required computing power is very high. One of the main research lines aimed at reducing this processing time is based on the idea of distributing the analysis over several cores working in parallel.
In line with this research, this work develops a library for the RVC-CAL language, a language especially designed for multimedia applications which allows parallelization to be expressed in an intuitive way. The library gathers the functions needed to implement two of the four stages of the spectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by the one carried out by Raquel Lazcano in her Diploma Project, where the functions needed to complete the other two stages of the unmixing chain are developed. The document is divided into several parts. The first presents the motivation for this Diploma Project and the objectives to be achieved. After that, an extensive study of the current state of the art is made, covering hyperspectral images as well as the tools and platforms that will be used to split the processing across cores and to identify the problems that may arise when doing so. Once the theoretical basis has been laid out, the method followed to compose the unmixing chain and generate the library is explained; an important point in this part is the use of libraries specialized in complex matrix operations, implemented in C++. After explaining the method, the results obtained are presented, first stage by stage and then for the complete processing chain implemented on one or several cores. Finally, a series of conclusions drawn from analysing the different algorithms in terms of quality of results, processing times and resource consumption is given, and a number of possible future lines of work related to these results are proposed.
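As a compact illustration of the two stages covered by the library, the following Python sketch performs a PCA-based dimensionality reduction and a crude endmember-candidate selection. Both are generic stand-ins; the actual RVC-CAL actors and the algorithms selected in the project are not reproduced here.

```python
# Compact sketch of the two stages covered by the library described above:
# dimensionality reduction and endmember extraction.  PCA and the
# "most extreme pixel" selection are generic stand-ins for the project's
# actual algorithms.
import numpy as np

def reduce_dimensionality(cube, n_components):
    """cube: (pixels, bands) hyperspectral matrix -> (pixels, n_components)."""
    x = cube - cube.mean(axis=0)
    # Principal components via SVD of the centred data matrix.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

def extract_endmembers(reduced, n_endmembers):
    """Pick the pixels farthest from the data centroid, one at a time,
    as crude endmember candidates (a stand-in for PPI/N-FINDR/VCA)."""
    chosen = []
    remaining = list(range(reduced.shape[0]))
    for _ in range(n_endmembers):
        dists = np.linalg.norm(reduced[remaining], axis=1)
        idx = remaining.pop(int(np.argmax(dists)))
        chosen.append(idx)
    return chosen   # indices of candidate endmember pixels

# Toy cube: 1,000 pixels x 120 bands of synthetic data.
rng = np.random.default_rng(0)
cube = rng.random((1000, 120))
reduced = reduce_dimensionality(cube, n_components=5)
print(extract_endmembers(reduced, n_endmembers=4))
```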