35 results for Checks


Relevance:

10.00%

Publisher:

Abstract:

We present a tutorial overview of Ciaopp, the Ciao system preprocessor. Ciao is a public-domain, next-generation logic programming system, which subsumes ISO-Prolog and is specifically designed to a) be highly extensible via libraries and b) support modular program analysis, debugging, and optimization. The latter tasks are performed in an integrated fashion by Ciaopp. Ciaopp uses modular, incremental abstract interpretation to infer properties of program predicates and literals, including types, variable instantiation properties (including modes), non-failure, determinacy, bounds on computational cost, bounds on sizes of terms in the program, etc. Using such analysis information, Ciaopp can find errors at compile-time in programs and/or perform partial verification. Ciaopp checks how programs call system libraries and also checks any assertions present in the program or in other modules used by the program. These assertions are also used to generate documentation automatically. Ciaopp also uses analysis information to perform program transformations and optimizations such as multiple abstract specialization, parallelization (including granularity control), and optimization of run-time tests for properties which cannot be checked completely at compile-time. We illustrate "hands-on" the use of Ciaopp in all these tasks. By design, Ciaopp is a generic tool, which can be easily tailored to perform these and other tasks for different LP and CLP dialects.
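The compile-time/run-time division of labour described above can be sketched in a few lines. This is an illustrative Python sketch only, not Ciaopp's interface; the fact representation and return values are invented for the example:

```python
# Illustrative sketch (not Ciaopp's API): an assertion is first compared
# against the analyzer's inferred facts; only if the analysis can neither
# prove nor disprove it is a run-time test generated.

def check_assertion(prop, analysis_facts):
    """Return 'verified', 'violated', or a run-time test to be compiled in."""
    if prop in analysis_facts:              # statically proved
        return "verified"
    if ("not",) + prop in analysis_facts:   # statically disproved: a bug
        return "violated"
    return ("runtime_test", prop)           # undecided: check at run time

# Hypothetical analyzer output for two program variables.
facts = {("list", "Xs"), ("not", "ground", "Ys")}

print(check_assertion(("list", "Xs"), facts))    # verified
print(check_assertion(("ground", "Ys"), facts))  # violated
print(check_assertion(("num", "Zs"), facts))     # ('runtime_test', ('num', 'Zs'))
```

The point of the scheme is that only the third outcome costs anything at run time; everything the analyzer settles is settled once, at compile time.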

Relevance:

10.00%

Publisher:

Abstract:

We present a generic preprocessor for combined static/dynamic validation and debugging of constraint logic programs. Passing programs through the preprocessor prior to execution allows detecting many bugs automatically. This is achieved by performing a repertoire of tests which range from simple syntactic checks to much more advanced checks based on static analysis of the program. Together with the program, the user may provide a series of assertions which trigger further automatic checking of the program. Such assertions are written using the assertion language presented in Chapter 2, which allows expressing a wide variety of properties. These properties extend beyond the predefined set which may be understandable by the available static analyzers and include properties defined by means of user programs. In addition to user-provided assertions, in each particular CLP system assertions may be available for predefined system predicates. Checking of both user-provided assertions and assertions for system predicates is attempted first at compile-time by comparing them with the results of static analysis. This may allow statically proving that the assertions hold (i.e., they are validated) or that they are violated (and thus bugs detected). User-provided assertions (or parts of assertions) which cannot be statically proved or disproved are optionally translated into run-time tests. The implementation of the preprocessor is generic in that it can be easily customized to different CLP systems and dialects and in that it is designed to allow the integration of additional analyses in a simple way. We also report on two tools which are instances of the generic preprocessor: CiaoPP (for the Ciao Prolog system) and CHIPRE (for the CHIP CLP(FD) system). The currently existing analyses include types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.

Relevance:

10.00%

Publisher:

Abstract:

We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer, including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.

Relevance:

10.00%

Publisher:

Abstract:

Proof-carrying code is a general methodology for certifying that the execution of untrusted mobile code is safe, according to a predefined safety policy. The basic idea is that the code supplier attaches a certificate (or proof) to the mobile code which the consumer then checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking, a much simpler task. Recently, the abstract interpretation techniques developed in logic programming have been proposed as a basis for proof-carrying code [1]. To this end, the certificate is generated from an abstract interpretation-based proof of safety. Intuitively, the verification condition is extracted from a set of assertions guaranteeing safety and the answer table generated during the analysis. Given this information, it is relatively simple and fast to verify that the code meets this proof and thus that its execution is safe. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based code certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm (constraint) logic programming system. Then, by means of some experiments, we show how code certification is aided in the implementation of the framework. Finally, we discuss the application of our method within the area of pervasive systems, which may lack the necessary computing resources to verify safety on their own. We illustrate here the relevance of the information inferred by existing cost analysis to control resource usage in this context. Moreover, since the (rather complex) analysis phase is replaced by a simpler, efficient checking process at the code consumer side, we believe that our abstract interpretation-based approach to proof-carrying code becomes practically applicable to this kind of system.
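The supplier/consumer asymmetry can be illustrated with a toy checker (hypothetical names, not the actual CiaoPP certification machinery): the "certificate" is a claimed invariant, and the consumer only verifies, in one pass, that it is inductive and entails the safety policy, which is much cheaper than computing it:

```python
# Toy illustration of the checking side of proof-carrying code.
# The supplier ships a claimed invariant; the consumer never re-runs the
# analysis, it only checks that the invariant is a fixpoint and is safe.

def check_certificate(init, step, invariant, safe):
    if not init.issubset(invariant):                  # covers initial states
        return False
    post = {s2 for s in invariant for s2 in step(s)}  # one-step successors
    if not post.issubset(invariant):                  # inductive (a fixpoint)
        return False
    return all(safe(s) for s in invariant)            # entails the policy

# Invented example: a counter that resets to 0 once it reaches 3.
step = lambda n: {0} if n >= 3 else {n + 1}
print(check_certificate({0}, step, {0, 1, 2, 3}, lambda n: n <= 3))  # True
```

A wrong or tampered certificate simply fails one of the three checks; the consumer never needs the supplier's proof search.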

Relevance:

10.00%

Publisher:

Abstract:

We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer, including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity allows statically detecting bugs in user programs even if they do not contain any assertions.

Relevance:

10.00%

Publisher:

Abstract:

Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, presumably because of the intrinsic complexity of exploiting NSI; this is related to the fact that NSI cannot be determined "a priori" in the way SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. In addition, a transformation procedure to eliminate shared variables among parallel goals is presented, attempting to perform as much work as possible at compile-time. The approach is based on knowledge of certain properties of run-time instantiations of program variables, namely sharing and freeness, for which compile-time technology is available, with new approaches currently being proposed.

Relevance:

10.00%

Publisher:

Abstract:

We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling run-time checks for (parts of) assertions which cannot be verified at compile-time, via program transformation. This transformation allows checking preconditions and postconditions, including conditional postconditions, properties at arbitrary program points, and certain computational properties. The implemented transformation includes several optimizations to reduce run-time overhead. We also propose a minimal addition to the assertion language which allows defining unit tests to be run in order to detect possible violations of the (partial) specifications expressed by the assertions. This language can express, for example, the input data for performing the unit tests or the number of times the unit tests should be repeated. We have implemented the framework within the Ciao/CiaoPP system and effectively applied it to the verification of ISO-Prolog compliance and to the detection of different types of bugs in the Ciao system source code. Several experimental results are presented that illustrate different trade-offs among program size, running time, and the level of verbosity of the messages shown to the user.
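The effect of the run-time checking transformation can be mimicked in Python (the actual framework transforms Prolog programs annotated in the Ciao assertion language; the decorator below is only an analogy): a precondition/postcondition pair is compiled into a wrapper around the original code.

```python
# Analogy only: compiling assertions into run-time checks by wrapping
# the checked code, as the described program transformation does.

import functools

def rt_check(pre=None, post=None):
    def transform(f):
        @functools.wraps(f)
        def wrapper(*args):
            if pre is not None and not pre(*args):
                raise AssertionError(f"{f.__name__}: precondition violated")
            result = f(*args)
            if post is not None and not post(args, result):
                raise AssertionError(f"{f.__name__}: postcondition violated")
            return result
        return wrapper
    return transform

@rt_check(pre=lambda xs: all(isinstance(x, int) for x in xs),
          post=lambda args, r: sorted(args[0]) == r)
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

print(insertion_sort([3, 1, 2]))   # [1, 2, 3]
```

A unit test in this analogy is just a stored input that exercises the wrapper, so a specification violation surfaces as a raised check.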

Relevance:

10.00%

Publisher:

Abstract:

This project studies the specific absorption rate (SAR) emitted by different wireless communication devices. It was carried out at the SETSI radio-frequency laboratory in El Casar, Guadalajara, which belongs to the Ministry of Industry, Trade and Tourism. SAR is a ratio between the electromagnetic energy absorbed and the mass of a given material or tissue. The project therefore first defines SAR and its parameters, and reviews the exposure limits set by the international standards IEC 62209-1 and IEC 62209-2. Next, in accordance with these standards, a SAR measurement bench is defined in detail: each of its components, the systems involved prior to a measurement, the systems used to perform the required verifications, and the uncertainties of certain parameters. A complete SAR measurement process is then carried out in the SETSI laboratory, where the necessary checks are performed before a series of measurements on mobile communication devices. These measurements are first made on a mobile phone at GSM, UMTS, and WiFi frequencies, in the configurations stipulated by the standard ("touch" and "tilted 15°"), comparing the values obtained with the limits set by the international standards. Further measurements are then made with configurations not covered by the standard, in an attempt to obtain the highest possible SAR values. A comparison is also made between two tablet devices, measuring in the WiFi band and discussing the results in relation to the design of each device. Finally, a budget for a SAR bench is prepared, detailing all the components involved in SAR measurements but excluding maintenance costs and costs related to its use, and the conclusions drawn from the project are presented together with the bibliography used.
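For reference, the local quantity being measured follows the standard point definition of SAR in terms of tissue conductivity, electric field strength, and mass density; the sketch below (with invented sample values) simply evaluates that formula:

```python
# Standard point formula for SAR (sample values are invented):
#   SAR = sigma * |E|^2 / rho   [W/kg]
# sigma: tissue conductivity (S/m), E: RMS electric field (V/m),
# rho: mass density (kg/m^3).

def sar(sigma_s_per_m, e_rms_v_per_m, rho_kg_per_m3):
    return sigma_s_per_m * e_rms_v_per_m ** 2 / rho_kg_per_m3

print(sar(1.0, 10.0, 1000.0))  # 0.1 W/kg
```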

Relevance:

10.00%

Publisher:

Abstract:

This Master's Final Project presents the process developed for the functional and electrical characterization of different devices that use the SpaceWire space communications standard, integrated into an evaluation board designed for this purpose. To carry out this characterization, a study of the SpaceWire standard is performed first, followed by a study of the demonstration board with its different SpaceWire interfaces and IPs. The aim is to understand how the SpW devices are structured, especially at the FPGA level, and how they communicate with each other. Based on the knowledge obtained about SpaceWire and the SpW devices integrated into the evaluation board, the set of measurements and the strategy for validating electrical interoperability between the different devices are defined, together with the functional checks required to ensure their correct operation. This also makes it possible to verify compliance with the standard and to search for the operating limits within a communication system representative of existing equipment in a satellite. Once the test plan is finished and applied to the representative hardware, the board is considered characterized at the SpW level, and a report is produced with the conclusions reached about the operation of the board's SpW interfaces and the constraints found.

Relevance:

10.00%

Publisher:

Abstract:

Modern object-oriented languages like C# and Java enable developers to build complex applications in less time. These languages rely on heap-allocated, pass-by-reference objects for user-defined data structures. This simplifies programming by automatically managing memory allocation and deallocation in conjunction with automated garbage collection, but the simplification comes at the cost of performance: using pass-by-reference objects instead of lighter-weight pass-by-value structs can have a significant memory impact in some cases. These costs can be critical when applications run in resource-limited environments such as mobile devices and cloud computing systems. We explore the problem of using a simple and uniform memory model to improve performance. In this work we address this problem by providing an automated and sound static conversion analysis which identifies whether a by-reference type can be safely converted to a by-value type, where the conversion may result in performance improvements. This work focuses on C# programs. Our approach is based on a combination of syntactic and semantic checks to identify classes that are safe to convert. We evaluate the effectiveness of our work in identifying convertible types and the impact of the transformation. The results show that transforming reference types to value types can have a substantial performance impact in practice. In our case studies we optimize the Barnes-Hut program, where total memory allocation decreased by 93% and execution time was reduced by 15%.
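The flavor of a combined syntactic/semantic convertibility decision can be pictured with a toy predicate. This is a hypothetical, heavily simplified Python sketch of the decision shape only; the paper's analysis operates on C# programs and its actual criteria are its own:

```python
# Hypothetical, much-simplified convertibility decision: a class is
# flagged as safely convertible to a value type only if every
# conservative criterion holds (invented criteria for illustration).

def convertible(cls_info):
    checks = [
        cls_info["size_bytes"] <= 16,          # small enough to copy cheaply
        not cls_info["mutated_after_init"],    # copies cannot diverge
        not cls_info["identity_compared"],     # no reference-equality tests
        not cls_info["stored_in_shared_heap"], # no aliasing requirement
    ]
    return all(checks)

point = {"size_bytes": 8, "mutated_after_init": False,
         "identity_compared": False, "stored_in_shared_heap": False}
print(convertible(point))  # True
```

Soundness here means every check errs on the side of "not convertible": a class that fails any criterion keeps its reference semantics.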

Relevance:

10.00%

Publisher:

Abstract:

Underpasses are very common in modern railway lines; wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes their study economically relevant with a view to optimizing their design while maintaining safety. Bridge design in accordance with the Eurocode involves checks on stress limit states under dynamic loading. In the case of underpasses, those checks may be as costly as those for bridges, even though the structures themselves cost much less. Simplified design rules are therefore needed to bring the design effort into line with the cost of the structure; such rules can provide estimates of response parameters based on the key parameters influencing the result. This paper proposes a set of rules based on a parametric study.

Relevance:

10.00%

Publisher:

Abstract:

This report describes the development of a project to create a system deployed in the green spaces of cities. Starting from a market study of Smart Cities, a system is developed to improve the sustainability, efficiency, and safety of irrigation systems, air-quality meters, and weather stations installed in urban green spaces and gardens. It serves as a demonstrator of how to improve the efficiency of water and energy consumption, and it adds the ability to monitor, in real time, the information received from the sensors installed in these spaces. The purpose of the project is to create a demonstrator that serves as an example and inspiration for future projects based on the same tools offered by Telefónica. The demonstrator covers the control, management, and maintenance of these spaces, so that the devices and sensors installed at each location can be controlled and maintained, and the information they supply can be monitored through the M2M platform, using the sensor simulators that Telefónica provides for demonstrations. From the application, communication with the elements offered by Telefónica is performed using REST and Web technologies. The stored sensor information is presented through reports and graphs showing the values received for the available measurement parameters, such as water consumption, temperature, humidity, and air-quality measurements, among others. In addition, there is a space for locating and managing the available devices, with the possibility of creating alerts and warnings according to the user's requirements. The solution created as a demonstrator has two components developed by the student. The first is a web platform for managing in real time the devices installed in these green spaces and their location, producing reports and graphs that display the information received from the sensors, and managing rules to control limit values imposed by the user. The second is a REST service, accessed from the web platform, which communicates with the services offered by Telefónica and identifies the service provider to Telefónica through a private certificate. This component also handles user management, providing an authentication service that restricts the resources offered by the REST service to authorized users.

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the increasing sophistication of embedded multimedia systems and wireless communication technologies has promoted a widespread utilization of video streaming applications. It was reported in 2013 that youngsters aged between 13 and 24 spend around 16.7 hours a week watching online video through social media, business websites, and video streaming sites. Video applications have already been blended into people's daily lives. Traditionally, video streaming research has focused on performance improvement, namely throughput increase and response time reduction. However, most mobile devices are battery-powered, a technology that grows at a much slower pace than either multimedia or hardware developments. Since battery developments cannot satisfy the expanding power demand of mobile devices, research on video application technology has paid growing attention to energy-efficient designs. How to efficiently use the limited battery energy budget has become a major research challenge. In addition, next-generation video standards push toward diversification and personalization. Therefore, it is desirable to have mechanisms that implement energy optimizations with greater flexibility and scalability. In this context, the main goal of this dissertation is to find an energy management and optimization mechanism to reduce the energy consumption of video decoders, based on the idea of functional-oriented reconfiguration. System battery life is prolonged as the result of a trade-off between energy consumption and video quality. Functional-oriented reconfiguration takes advantage of the similarities among standards to build video decoders by reconnecting existing functional units. If a feedback channel from the decoder to the encoder is available, the former can signal to the latter changes in either the encoding parameters or the encoding algorithms for energy-saving adaptation.

The proposed energy optimization and management mechanism is carried out at the decoder end. This mechanism consists of an energy-aware manager, implemented as an additional block of the reconfiguration engine; an energy estimator, integrated into the decoder; and, if available, a feedback channel connected to the encoder end. The energy-aware manager checks the battery level, selects a new decoder description, and signals the reconfiguration engine to build the new decoder. It is worth noting that the analysis of energy consumption is fundamental for the success of the energy management and optimization mechanism. In this thesis, an energy estimation method driven by platform event monitoring is proposed. In addition, an event filter is suggested to automate the selection of the events that most affect energy consumption. Finally, a detailed study on the influence of the training data on model accuracy is presented. The modeling methodology of the energy estimator has been evaluated on different underlying platforms, single-core and multi-core, with different workload characteristics. All the results show good accuracy and low on-line computation overhead. The modifications required on the reconfiguration engine to implement the energy-aware manager have been assessed under different scenarios. The results indicate that the battery lifetime of the system can be lengthened in two different use cases.
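The event-driven energy estimator described above can be pictured as a linear model over performance-counter readings. The counters, sample values, and fitting step below are invented for illustration; the thesis's actual event selection and model are its own:

```python
# Sketch of an energy estimator driven by platform event monitoring:
# energy is modelled as a linear combination of event counts, with
# weights fitted offline by least squares (all numbers invented).

import numpy as np

# Training data: rows = decoding runs, columns = event counts
# (e.g. instructions retired, cache misses, memory accesses).
events = np.array([[1.0e6, 2.0e4, 5.0e4],
                   [2.0e6, 3.0e4, 9.0e4],
                   [3.0e6, 6.0e4, 1.2e5]])
measured_mj = np.array([12.0, 22.5, 35.5])   # measured energy (mJ)

weights, *_ = np.linalg.lstsq(events, measured_mj, rcond=None)

def estimate_energy(counts):
    """On-line estimate: a dot product, cheap enough to run per frame."""
    return float(counts @ weights)
```

The on-line cost is a single dot product, which is why such a model can feed the energy-aware manager without noticeable overhead.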

Relevance:

10.00%

Publisher:

Abstract:

A pesar de los importantes avances en la reducción del hambre, la seguridad alimentaria continúa siendo un reto de dimensión internacional. La seguridad alimentaria es un concepto amplio y multidimensional, cuyo análisis abarca distintas escalas y horizontes temporales. Dada su complejidad, la identificación de las causas de la inseguridad alimentaria y la priorización de las medias para abordarlas, son dos cuestiones que suscitan un intenso debate en la actualidad. El objetivo de esta tesis es evaluar el impacto de la globalización y el crecimiento económico en la seguridad alimentaria en los países en desarrollo, desde una perspectiva macro y un horizonte temporal a largo plazo. La influencia de la globalización se aborda de una manera secuencial. En primer lugar, se analiza la relación entre la inversión público-privada en infraestructuras y las exportaciones agrarias. A continuación, se estudia el impacto de las exportaciones agrarias en los indicadores de seguridad alimentaria. El estudio del impacto del crecimiento económico aborda los cambios paralelos en la distribución de la renta, y cómo la inequidad influye en el comportamiento de la seguridad alimentaria nacional. Además, se analiza en qué medida el crecimiento económico contribuye a acelerar el proceso de mejora de la seguridad alimentaria. Con el fin de conseguir los objetivos mencionados, se llevan a cabo varios análisis econométricos basados en datos de panel, en el que se combinan datos de corte transversal de 52 países y datos temporales comprendidos en el periodo 1991-2012. Se analizan tanto variables en niveles como variables en tasas de cambio anual. Se aplican los modelos de estimación de efectos variables y efectos fijos, ambos en niveles y en primeras diferencias. La tesis incluye cuatro tipos de modelos econométricos, cada uno de ellos con sus correspondientes pruebas de robustez y especificaciones. 
The results qualify the importance of globalization and economic growth as mechanisms for improving food security in developing countries. Two conclusions are drawn regarding globalization. First, the results suggest that promoting private investment in infrastructure contributes to increasing agricultural exports. Second, agricultural exports can have a negative impact on food security indicators. Taken together, these two conclusions suggest that trade and financial openness does not by itself contribute to improving food security in developing countries. The international opening of developing countries must be accompanied by policies and investments that develop high-value-added productive sectors, strengthen the national economy, and reduce its external dependency. Regarding economic growth, despite the unquestionable fact that economic growth is a necessary condition for reducing undernourishment levels, it is not a sufficient condition. Three additional strategies have been identified that must accompany economic growth in order to intensify its positive impact on undernourishment. First, economic growth must be accompanied by a more equitable distribution of income. Second, economic growth must translate into increased investment in health, water and sanitation, and education. It is observed that, even in the absence of economic growth, improvements in access to drinking water contribute to reducing the undernourished population. Third, economic growth that is sustainable in the long term appears to have a greater positive impact on food security than more volatile or unstable short-term growth.
Macroeconomic stability is identified as a necessary condition for achieving further improvements in food security, even once equity in the income distribution has improved. Finally, the thesis finds that the developing countries analyzed have followed different non-linear trajectories in reducing their undernourishment levels. The results suggest that a higher initial level of undernourishment and economic growth drive a faster response to the challenge of improving food security.
ABSTRACT
Despite significant reductions in hunger, food security remains a global challenge. Food security is a broad concept that embraces multiple dimensions and spatial-temporal scales. Because of its complexity, identifying the drivers underpinning food insecurity and prioritizing measures to address them are the subject of intensive debate. This thesis assesses the impact of globalization and economic growth on food security in developing countries at the macro (country) level and with a long-term approach. The influence of globalization is addressed sequentially. First, the impact of public-private investment in infrastructure on agricultural exports in developing countries is analyzed. Second, an assessment is conducted to determine the impact of agricultural exports on food security indicators. The analysis of economic growth focuses on the parallel changes in income inequality and on how the income distribution influences countries' food security performance. Furthermore, the thesis analyzes to what extent economic growth helps accelerate food security improvements. To address these goals, various econometric models are formulated. The models use panel data procedures combining cross-sectional data from 52 countries and time-series data from 1991 to 2012. Yearly data are expressed both in levels and in changes.
The estimation methods applied are random-effects and fixed-effects estimation, both in levels and in first differences. The thesis includes four families of econometric models, each with its own set of robustness checks and specifications. The results qualify the relevance of globalization and economic growth as enabling mechanisms for improving food security in developing countries. Concerning globalization, two main conclusions can be drawn. First, the results show that enhancing foreign private investment in infrastructure contributes to increasing agricultural exports. Second, agricultural exports appear to have a negative impact on national food security indicators. These two conclusions suggest that trade and financial openness per se do not directly improve food security in developing countries. Both measures should be accompanied by investments and policies that support the development of national high-value-added productive sectors, strengthening the domestic economy and reducing its external dependency. Regarding economic growth, despite the unquestionable fact that income growth is a prerequisite for reducing undernourishment, the results suggest that it is a necessary but not a sufficient condition. Three additional strategies should accompany economic growth to intensify its impact on food security. First, income growth should be accompanied by a better distribution of income. Second, income growth needs to be followed by investments and policies in health, sanitation, and education. Even if economic growth falters, sustained improvements in access to drinking water may still reduce the percentage of undernourished people. And third, long-term economic growth appears to have a greater impact on reducing hunger than growth regimes that combine peaks of growth followed by troughs.
Macroeconomic stability is a necessary condition for accelerating food security improvements. Finally, the thesis finds that the developing countries analyzed have experienced different non-linear paths toward improving food security. Results also show that a higher initial level of undernourishment, together with economic growth, results in a faster response in improving food security.
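The fixed-effects (within) and first-difference panel estimators mentioned in the abstract can be sketched as follows. This is a generic illustration with hypothetical data, not the thesis's actual specification:

```python
import numpy as np

def within_estimator(y, X, entity):
    """Fixed-effects (within) panel estimator.

    Demean y and X within each entity (country), then run OLS on the
    demeaned data; the per-entity intercepts are swept out.
    y: (n,) outcome; X: (n, k) regressors; entity: (n,) entity id per row.
    Returns the k slope coefficients.
    """
    yd = y.astype(float).copy()
    Xd = X.astype(float).copy()
    for e in np.unique(entity):
        m = entity == e
        yd[m] -= yd[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

def first_differences(y, X, entity):
    """First-difference transform within each entity: an alternative way to
    remove entity fixed effects, as in the models estimated in differences."""
    dy, dX = [], []
    for e in np.unique(entity):
        m = entity == e
        dy.append(np.diff(y[m]))
        dX.append(np.diff(X[m], axis=0))
    return np.concatenate(dy), np.vstack(dX)
```

Both transforms eliminate time-invariant country effects; estimating in levels versus in first differences corresponds to the two variants of the models described above.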


Resumo:

This thesis was developed in the context of the Cajal Blue Brain project, a European initiative dedicated to the study of the brain. One of the goals of this initiative is to develop new methods and technologies that simplify data analysis in neuroscience. The present work focused on designing tools that combine information from different sensory channels in order to speed up the interaction with, and analysis of, neuroscience images. Specifically, it studies the possibility of combining visual information with haptic information. Dendritic spines are small protrusions covering the dendritic surface of many neurons in the brain, and they are currently believed to play a key role in the transmission of neural signals. For this reason, the scientific community's interest in these structures has grown as image acquisition techniques improved to the point of reaching sufficient quality to analyze them. Neuroscientists often use light microscopy techniques to obtain the data that allow them to analyze neuronal structures such as neurons, dendrites, and dendritic spines. Although these techniques offer certain advantages over their electron-microscopy counterparts, light-based techniques achieve lower resolution. In particular, small structures such as dendritic spines may be captured incorrectly in the resulting images, preventing their analysis. This work presents a new technique that allows volumetric images to be edited with a haptic device in order to reconstruct the necks of dendritic spines. To this end, an algorithm was first developed that provides haptic feedback on volumetric data, complementing the information coming from the visual channel.
This haptic rendering algorithm lets users touch and perceive an isosurface in the data volume, and it ensures robust and efficient rendering. A method based on marching tetrahedra techniques is used for the local extraction of a continuous, piecewise-linear isosurface. Robustness derives both from a continuous collision detection stage on the extracted isosurface and from the use of efficient rendering techniques based on a point proxy. The proposed marching tetrahedra method guarantees that the topology of the extracted isosurface matches the topology of an equivalent isosurface computed using trilinear interpolation. In addition, to improve the coherence between the haptic and visual information, the haptic rendering algorithm computes a second proxy on the isosurface displayed on screen. This work demonstrates experimentally the improvements in, first, the isosurface extraction stage; second, the robustness in keeping the proxy on the desired isosurface; and finally, the efficiency of the algorithm. Second, building on the proposed haptic rendering algorithm, a four-stage procedure was developed for the reconstruction of dendritic spines. This procedure can be integrated into existing automatic and semi-automatic segmentation pipelines as a preprocessing stage. The procedure is designed so that both navigation and the editing process itself are controlled with a haptic device. Two experiments were designed to evaluate this technique: the first evaluates the contribution of haptic feedback, and the second focuses on evaluating the suitability of a haptic device as an input device. In both cases, the results show that our procedure improves reconstruction accuracy.
This work also describes two use cases of our procedure in the field of neuroscience: the first applied to neurons located in the human cerebral cortex, and the second applied to dendritic spines along pyramidal neurons of the rat cerebral cortex. Finally, we present the program, Neuro Haptic Editor, developed throughout this thesis together with the algorithms described above.
ABSTRACT
This thesis took place within the Cajal Blue Brain project, a European initiative dedicated to the study of the brain. One of the main goals of this project is the development of new methods and technologies that simplify data analysis in neuroscience. This thesis focused on the development of tools that combine information originating from distinct sensory channels, with the aim of accelerating both the interaction with neuroscience images and their analysis. In concrete terms, the objective is to study the possibility of combining visual information with haptic information. Dendritic spines are thin protrusions that cover the dendritic surface of numerous neurons in the brain and whose function seems to play a key role in neural circuits. The interest of the neuroscience community in these structures kept increasing as acquisition methods improved, eventually to the point that the produced datasets enabled their analysis. Quite often, neuroscientists use light microscopy techniques to produce the datasets that allow them to analyse neuronal structures such as neurons, dendrites, and dendritic spines. While offering some advantages compared to their electron-microscopy counterparts, light microscopy techniques achieve lower resolutions. In particular, small structures such as dendritic spines might suffer from a very low level of fluorescence in the final dataset, preventing further analysis.
This thesis introduces a new technique enabling the editing of volumetric datasets in order to recreate dendritic spine necks using a haptic device. To fulfil this objective, we first present an algorithm that provides haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise-linear continuous isosurface. Robustness is derived from a continuous collision detection step coupled with established proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface matches the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm also improves the coherence between haptic and visual cues by computing a second proxy on the isosurface displayed on screen. Three experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and efficiency of the complete algorithm. We then introduce our four-step procedure for the complete reconstruction of dendritic spines. Based on our haptic rendering algorithm, this procedure is intended to work as an image-processing stage before the automatic segmentation step that gives the final representation of the dendritic spines. The procedure is designed to allow both the navigation and the volume-image editing to be carried out using a haptic device. We evaluated our procedure through two experiments. The first concerns the benefits of force feedback, and the second checks the suitability of a haptic device as an input device. In both cases, the results show that the procedure improves editing accuracy.
We also report two concrete cases where our procedure was employed in the neuroscience field: the first concerning dendritic spines in the human cortex, and the second referring to an ongoing experiment studying dendritic spines along dendrites of mouse cortical pyramidal neurons. Finally, we present the software program, Neuro Haptic Editor, built during this thesis alongside the different algorithms implemented, and used by neuroscientists to apply our procedure.
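Two of the building blocks described in this abstract, the per-edge isovalue interpolation used by marching-tetrahedra extraction and proxy-based force rendering, can be sketched as follows. This is a generic illustration: the function names and the stiffness value are hypothetical, not taken from the thesis.

```python
import numpy as np

def edge_iso_crossing(p0, p1, v0, v1, iso):
    """Locate the isovalue crossing along one tetrahedron edge by linear
    interpolation, as marching tetrahedra does per edge.
    p0, p1: edge endpoints (3-vectors); v0, v1: scalar field values there.
    Returns the crossing point, or None if the edge does not cross the isovalue."""
    if (v0 - iso) * (v1 - iso) > 0:
        return None          # both endpoints on the same side: no crossing
    if v1 == v0:
        return np.asarray(p0, dtype=float)  # degenerate edge lying on the isosurface
    t = (iso - v0) / (v1 - v0)
    return np.asarray(p0, dtype=float) + t * (np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float))

def proxy_force(device_pos, proxy_pos, stiffness=700.0):
    """Spring force pulling the haptic device toward the surface proxy
    (standard proxy/god-object rendering; the stiffness in N/m is illustrative)."""
    return stiffness * (np.asarray(proxy_pos, dtype=float) - np.asarray(device_pos, dtype=float))
```

Because the crossing is obtained by linear interpolation along each edge, the extracted surface is piecewise linear and continuous across neighbouring tetrahedra, which is the property the abstract's topology guarantee builds on.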