46 results for Spatial Database Systems


Relevance:

30.00%

Abstract:

The integration of correlation processes in design systems aims at direct 3D measurement, according to user-defined criteria, in order to generate the database required for the development of the project. In the photogrammetric phase, interior and exterior orientation parameters are calculated and stereo models are created from standard images. These are integrated into the system, where the selected items are measured by applying purpose-built correlation algorithms. The processing stage provides tools to carry out the calculations easily and automatically, as well as image measurement techniques to acquire the most accurate information possible. The proposed software is developed on the Visual Studio platform for PC, applying the codes and symbols best suited to the terms of reference required for the design. Generating the database interactively, together with the geometric study of the structures, facilitates and improves the quality of the project work.
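The abstract does not specify the correlation algorithm itself; as an illustration of the kind of area-based image matching such systems rely on, the following Python sketch (all names and data hypothetical) locates a template patch inside a search window by maximising the normalized cross-correlation:

```python
import numpy as np

def ncc_match(patch, search):
    """Return the (row, col) offset in `search` that maximizes the
    normalized cross-correlation with `patch`, plus the score."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_score, best_rc = -2.0, (0, 0)
    for r in range(search.shape[0] - ph + 1):
        for c in range(search.shape[1] - pw + 1):
            w = search[r:r + ph, c:c + pw]
            score = np.mean(p * (w - w.mean()) / (w.std() + 1e-9))
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score

# Toy usage: find a 9x9 patch of the left image inside a strip of the same image.
left = np.random.rand(64, 64)
patch = left[20:29, 30:39]
offset, score = ncc_match(patch, left[18:34, 24:48])
```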

Relevance:

30.00%

Abstract:

Aims Dehesas are agroforestry systems characterized by scattered trees among pastures, crops and/or fallows. A study at a Spanish dehesa was carried out to estimate the spatial distribution of the soil organic carbon stock and to assess the influence of the tree cover. Methods The soil organic carbon stock was estimated from the uppermost five cm of the mineral soil at high spatial resolution in two plots with different grazing intensities. The Universal Kriging technique was used to assess the spatial distribution of the soil organic carbon stocks, using tree coverage within a buffering area as an auxiliary variable. Results A significant positive correlation between tree presence and soil organic carbon stocks was found up to distances of around 8 m from the trees. The tree crown cover within a buffer extending to a distance similar to the crown radius around each point absorbed 30 % of the variance in the model for both grazing intensities, but the residual variance showed stronger spatial autocorrelation under regular grazing conditions. Conclusions Tree cover increases soil organic carbon stocks, which can be satisfactorily estimated by means of crown parameters. However, other factors are involved in the spatial pattern of the soil organic carbon distribution; livestock plays an interactive role together with tree presence in that distribution.
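As a hedged illustration of the geostatistical step (not the authors' code), the sketch below fits a Universal Kriging model with an external drift using the pykrige package, assuming SOC samples and a precomputed crown-cover fraction per location; every variable name and value is a placeholder:

```python
import numpy as np
from pykrige.uk import UniversalKriging  # pip install pykrige

# Hypothetical sampled data: coordinates (m), SOC stock in the top 5 cm,
# and tree crown cover fraction within a buffer around each sample point.
x, y = np.random.rand(2, 100) * 100.0
cover = np.random.rand(100)
soc = 1.5 + 2.0 * cover + np.random.normal(0, 0.3, 100)

uk = UniversalKriging(
    x, y, soc,
    variogram_model="spherical",
    drift_terms=["specified"],     # external drift = tree crown cover
    specified_drift=[cover],
)

# Predict on a grid; the drift must also be supplied at prediction locations.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid_cover = np.random.rand(*gx.shape)  # stand-in for a crown-cover raster
z, ss = uk.execute("points", gx.ravel(), gy.ravel(),
                   specified_drift_arrays=[grid_cover.ravel()])
```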

Relevance:

30.00%

Abstract:

This project involves the creation of a graphical user interface (GUI) in the MATLAB environment for the graphic representation of an HRTF (Head-Related Transfer Function) database. The head-related transfer function is a very useful tool in the study of human beings' capacity to perceive their sound environment, as well as of their ability to localise sound sources in the surrounding space. The binaural HRTF (the term for the pair formed by the left-ear and right-ear HRTFs) is in itself of special interest, since the differences between the HRTFs of the two ears carry the information that our auditory system uses to perceive the sound field. For this reason, the graphical interface is of great use in the study of this field. The interaural differences are characterised in amplitude and in time, and vary with frequency. By means of the inverse Fourier transform of the HRTF, the head impulse response, or HRIR (Head-Related Impulse Response), is obtained. Besides being very useful in the creation of surround-sound software and devices, the HRIR is used to obtain the ITD (Interaural Time Difference) and the ILD (Interaural Level Difference), commonly called the "spatial localisation parameters".

The HRTF database contains the binaural information for different sound-source positions, forming a grid of spherical coordinates that surrounds the subject's head. According to the measurements carried out in the anechoic chamber of the EUITT (Escuela Universitaria de Ingeniería Técnica de Telecomunicación), this grid has a resolution of 10° in elevation and 5° in azimuth. The receivers are two microphones housed in the acoustic mannequin known as HATS (Head and Torso Simulator), Brüel & Kjær model 4100D, which reproduces the physical features that influence the perception of the surroundings: the shapes of the pinna, the head, the neck and the human torso. Interpolation must be computed for all points not contained in the HRTF database; this process is extremely important not only to extend the capability of the database, but also for comparisons with other existing databases in this field of study. The graphical user interface is conceived for simple, clear and predictable, yet interactive, use. From the first outline of the program, its philosophy was clear, dictated by the needs of a user looking for a practical tool with intuitive handling. Its single-window design brings together the data-retrieval components and those that produce the graphic representation of the HRTFs, the HRIRs and the spatial localisation parameters, ITD and ILD. The user can alternate between the graphic representations while entering the coordinates of the points to be displayed, defined by phi (elevation) and theta (azimuth); this facet of the interface is what makes the information it presents so easy to access and read. In addition, the user can enter values contained in the database or values intermediate to them, in which case the interface performs the corresponding interpolation. The chosen interpolation method is inverse distance weighting between points: depending on the values entered by the user, an interpolation of two or four points is carried out, these being the points adjacent to the entered phi or theta value. To add versatility, the interface also offers the option of exporting the displayed plots as image files, so that the user can extract the data of interest for any value of phi and theta.

The project is completed by a survey and comparative study of the role and application of HRTF databases within the scientific and research framework. Related material was gathered from research journals such as the JAES (Journal of the Audio Engineering Society) and publications of the ASA (Acoustical Society of America) and the IEEE (Institute of Electrical and Electronics Engineers), from the Web of Knowledge, and from more common channels such as Google Scholar and the "Ingenio" portal giving access to all the electronic resources in the university's databases. The study broadens our knowledge of the practical uses of HRTFs. Most studies focus their efforts on improving the perception of the sound event by simulating it in stereo or multichannel listening; from the HRTFs this is possible through the analysis and computation of data such as regressions, which are very useful for predicting a measurement from the available information. Another field of special interest is the generation of 3D sound: from an HRTF database a binaural signal can be simulated. Algorithms have been designed and implemented on DSP devices so that, by means of interaural delays and spectral differences, an excellent surround-sound result can be reached, without forgetting the importance of reverberation effects for a believable surround effect. Given the computational complexity this requires, much of the literature agrees on developing more efficient systems, reaching goals such as real-time 3D sound generation.
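The interpolation rule described above (two or four neighbouring grid points weighted by inverse distance) lends itself to a short illustration. This is a minimal sketch in Python rather than the project's MATLAB, with hypothetical data layout:

```python
import numpy as np

def idw_hrir(neighbors, target):
    """
    neighbors: list of ((phi, theta), hrir) pairs, the 2 or 4 grid points
               bracketing the requested direction.
    target:    (phi, theta) of the requested direction, in degrees.
    Returns the inverse-distance-weighted HRIR.
    """
    weights, hrirs = [], []
    for (phi, theta), hrir in neighbors:
        d = np.hypot(phi - target[0], theta - target[1])
        if d == 0.0:                 # exact grid point: return it directly
            return hrir
        weights.append(1.0 / d)
        hrirs.append(hrir)
    w = np.asarray(weights) / np.sum(weights)
    return np.tensordot(w, np.asarray(hrirs), axes=1)

# Example: interpolate between the four measured directions around
# phi = 5 deg, theta = 2.5 deg on a 10 deg x 5 deg grid.
grid = [((0, 0), np.random.randn(512)), ((0, 5), np.random.randn(512)),
        ((10, 0), np.random.randn(512)), ((10, 5), np.random.randn(512))]
hrir = idw_hrir(grid, (5.0, 2.5))
```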

Relevance:

30.00%

Abstract:

This doctoral thesis develops original research, within the discipline of construction history, on the constructive principles of the late medieval border fortifications between the Crowns of Castile and Aragon in the present-day province of Soria, Spain. The title already states the main object of study, as well as the time span, from the reconquest of eastern Soria by Alfonso I the Battler at the beginning of the 12th century to the unification of the Hispanic crowns in the 15th century under the joint rule of the Catholic Monarchs, and the territory that delimits the research: the Castilian districts bordering Aragon that belong to the present-day province of Soria. During this late medieval period a series of border conflicts forced the fortification of the frontier and of the communication routes between the two crowns. The lack of comprehensive studies treating these fortifications as parts of a fortified system justifies the research, which is carried out at several levels of analysis: territorial, historical, architectural, poliorcetic and constructive. A certain lack of rigour, together with inaccuracies, has also been detected in the published statements on the construction of some fortifications in the study area, which has led to dating errors, since there are no artistic or stylistic elements that unambiguously assign these buildings to a period. The thesis questions the traditionally accepted datings and poses the hypothesis that drives the research: in the absence of artistic or stylistic elements in these sober, eminently functional buildings, it is possible to establish the date of construction with sufficient approximation on the basis of constructive criteria, once a chronotypological classification of each building technique has been drawn up. The hypothesis therefore sets a main objective, the study of the constructive rationale of the fortified border system, developed through a series of specific objectives whose achievement structures the successive levels of analysis:
- To identify and detail, with historiographical tools, the historical factors that caused the confrontations between the Crowns of Castile and Aragon and their development, and to analyse the natural characteristics of the disputed territory with cartographic tools.
- To identify and analyse the architectural types and building traditions used in military construction during the period under study.
- To locate, document and select for analysis the fortresses and military constructions erected during these border struggles in the present-day province of Soria, through fieldwork and cartographic and bibliographic methods.
- To carry out a general study of the fortified system at territorial scale.
- To investigate the architectural, poliorcetic and constructive typology of this group of late medieval border fortifications.
- To analyse the constructive principles of the selected case studies and to characterise their materials, elements, systems and building processes.
- To organise the scattered historical information and correct errors, so as to establish a basis for the historical account of each case study.
- To compare and relate the building techniques used in these fortresses with those used elsewhere in the same period.
- To disseminate the results of the research for discussion in the usual scientific forums.

The method combines desk work with intensive fieldwork, in which fifty fortifications have been documented and their corresponding data-collection records drawn up. The data have been compiled in a database covering, as an inventory of the fortifications of the province, their general, typological, constructive and basic bibliographic aspects. The selected fortifications are grouped according to a typological and constructive classification that sets out the subsequent lines of study. A chapter of antecedents studies the history of medieval fortified construction in Europe and in Spain, analysing the evolution of the architectural types and the many cultural influences that crossed the Mediterranean, from the Crusader and Islamic East to the West, where the Reconquista kept the Iberian Peninsula in a state of continuous war for eight hundred years. The analysis of the territory as the container of the fortifications reveals an intimate relationship between their location and the natural features that define the communication routes between the valleys of the Duero, the Ebro and the Tajo. Since antiquity the study area has been, above all, a territory of passage, fundamental to the articulation of communications in the Iberian Peninsula. This character of crossing point, more than of frontier, explains the concern for its control shown alike by Rome, by the Caliphate of Córdoba and by the medieval Christian kingdoms. The analysis of the historical factors is complemented by a detailed study of the border confrontations between Castile and Aragon, as well as of the social and political circumstances that made fortification the means of defining the frontier and of organising the territory spatially, jurisdictionally, socially and administratively.

Fortified architecture is essentially functional: its purpose is defence. Accordingly, after a morphological study of the selected castles, an extensive poliorcetic analysis of their elements is carried out, investigating their origin and application so that they can also serve as dating parameters. Since the founding object of the thesis is the study of the constructive principles, the different building materials are explained and the masonries of the selected fortifications are grouped into two large constructive families: coursed masonry and formwork masonry. The historical evolution, typology and constructive metrology of three notable techniques are singled out and studied: the use of brick, lime-and-stone formwork masonry (tapia de cal y canto), and rammed earth (tapia de tierra). To study the historical component and the constructive dimension of each technique it has been necessary to document numerous cases, both in the study area and across the Iberian Peninsula, in order to establish constructive chronotypological groups within which the examples of these techniques present in the study area can be placed. A dimensional evolution of the tapia walls has been observed; it is most evident in the Hispano-Muslim examples, which are modulated in cubits, but is also significant in the late medieval Christian ones. For each of the techniques analysed, a singular and representative case study has been selected: the castle of Arcos de Jalón is a significant example of mixed masonry with brick courses; the walls of the fortified town of Peñalcázar exemplify formwork masonry; and the castle of Serón de Nágima is a characteristic and principal case of the use of rammed earth in late medieval military architecture. Each of these three case studies is examined at the same four levels mentioned above: territorial, historical, architectural and defensive, and constructive. The systematic method of study has brought order to the research and yielded results and conclusions that verify the hypothesis and fulfil the objectives set at the outset. The construction dates of the fortifications analysed have been revised by means of the chronotypological study of their masonry, and the method can be transferred to other fortified systems. Finally, the thesis opens two main lines of research aimed at completing the study of the late medieval fortified border system on the eastern Soria border of Castile: the characterisation and dating, by physico-chemical methods, of the samples of construction timber preserved embedded in the masonry; and the documentary and archival search that may reveal new data on the foundation, repair, sale or any economic, legislative, organisational or administrative aspect of the fortifications in contemporary documents.

Relevance:

30.00%

Abstract:

La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad de recursos limitados, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda. En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en substituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs es empleada en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huellas escalables. En particular, se propone el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones, para este propósito. El tamaño de dichas arquitecturas puede ser modificado mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables, sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración. 
En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas en el estado del arte. A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router que es capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, y que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis. Entre ellas está el soporte para el realojamiento de módulos reconfigurables en posiciones del dispositivo distintas a donde el módulo es originalmente implementado, así como la generación de estructuras de comunicación compatibles con la simetría de la arquitectura. El router has sido empleado también en esta tesis para obtener un rutado simétrico entre nets equivalentes. Dicha posibilidad ha sido explotada para aumentar la protección de circuitos con altos requisitos de seguridad, frente a ataques de canal lateral, mediante la implantación de lógica complementaria con rutado idéntico. Para controlar el proceso de reconfiguración de la FPGA, se propone en esta tesis un motor de reconfiguración especialmente adaptado a los requisitos de las arquitecturas dinámicamente escalables. Además de controlar el puerto de reconfiguración, el motor de reconfiguración ha sido dotado de la capacidad de realojar módulos reconfigurables en posiciones arbitrarias del dispositivo, en tiempo real. De esta forma, basta con generar un único bitstream por cada módulo reconfigurable del sistema, independientemente de la posición donde va a ser finalmente reconfigurado. La estrategia seguida para implementar el proceso de realojamiento de módulos es diferente de las propuestas existentes en el estado del arte, pues consiste en la composición de los archivos de configuración en tiempo real. 
De esta forma se consigue aumentar la velocidad del proceso, mientras que se reduce la longitud de los archivos de configuración parciales a almacenar en el sistema. El motor de reconfiguración soporta módulos reconfigurables con una altura menor que la altura de una región de reloj del dispositivo. Internamente, el motor se encarga de la combinación de los frames que describen el nuevo módulo, con la configuración existente en el dispositivo previamente. El escalado de las arquitecturas de procesamiento propuestas en esta tesis también se puede beneficiar de este mecanismo. Se ha incorporado también un acceso directo a una memoria externa donde se pueden almacenar bitstreams parciales. Para acelerar el proceso de reconfiguración se ha hecho funcionar el ICAP por encima de la máxima frecuencia de reloj aconsejada por el fabricante. Así, en el caso de Virtex-5, aunque la máxima frecuencia del reloj deberían ser 100 MHz, se ha conseguido hacer funcionar el puerto de reconfiguración a frecuencias de operación de hasta 250 MHz, incluyendo el proceso de realojamiento en tiempo real. Se ha previsto la posibilidad de portar el motor de reconfiguración a futuras familias de FPGAs. Por otro lado, el motor de reconfiguración se puede emplear para inyectar fallos en el propio dispositivo hardware, y así ser capaces de evaluar la tolerancia ante los mismos que ofrecen las arquitecturas reconfigurables. Los fallos son emulados mediante la generación de archivos de configuración a los que intencionadamente se les ha introducido un error, de forma que se modifica su funcionalidad. Con el objetivo de comprobar la validez y los beneficios de las arquitecturas propuestas en esta tesis, se han seguido dos líneas principales de aplicación. En primer lugar, se propone su uso como parte de una plataforma adaptativa basada en hardware evolutivo, con capacidad de escalabilidad, adaptabilidad y recuperación ante fallos. En segundo lugar, se ha desarrollado un deblocking filter escalable, adaptado a la codificación de vídeo escalable, como ejemplo de aplicación de las arquitecturas de tipo wavefront propuestas. El hardware evolutivo consiste en el uso de algoritmos evolutivos para diseñar hardware de forma autónoma, explotando la flexibilidad que ofrecen los dispositivos reconfigurables. En este caso, los elementos de procesamiento que componen la arquitectura son seleccionados de una biblioteca de elementos presintetizados, de acuerdo con las decisiones tomadas por el algoritmo evolutivo, en lugar de definir la configuración de las mismas en tiempo de diseño. De esta manera, la configuración del core puede cambiar cuando lo hacen las condiciones del entorno, en tiempo real, por lo que se consigue un control autónomo del proceso de reconfiguración dinámico. Así, el sistema es capaz de optimizar, de forma autónoma, su propia configuración. El hardware evolutivo tiene una capacidad inherente de auto-reparación. Se ha probado que las arquitecturas evolutivas propuestas en esta tesis son tolerantes ante fallos, tanto transitorios, como permanentes y acumulativos. La plataforma evolutiva se ha empleado para implementar filtros de eliminación de ruido. La escalabilidad también ha sido aprovechada en esta aplicación. Las arquitecturas evolutivas escalables permiten la adaptación autónoma de los cores de procesamiento ante fluctuaciones en la cantidad de recursos disponibles en el sistema. 
Por lo tanto, constituyen un ejemplo de escalabilidad dinámica para conseguir un determinado nivel de calidad, que puede variar en tiempo real. Se han propuesto dos variantes de sistemas escalables evolutivos. El primero consiste en un único core de procesamiento evolutivo, mientras que el segundo está formado por un número variable de arrays de procesamiento. La codificación de vídeo escalable, a diferencia de los codecs no escalables, permite la decodificación de secuencias de vídeo con diferentes niveles de calidad, de resolución temporal o de resolución espacial, descartando la información no deseada. Existen distintos algoritmos que soportan esta característica. En particular, se va a emplear el estándar Scalable Video Coding (SVC), que ha sido propuesto como una extensión de H.264/AVC, ya que este último es ampliamente utilizado tanto en la industria, como a nivel de investigación. Para poder explotar toda la flexibilidad que ofrece el estándar, hay que permitir la adaptación de las características del decodificador en tiempo real. El uso de las arquitecturas dinámicamente escalables es propuesto en esta tesis con este objetivo. El deblocking filter es un algoritmo que tiene como objetivo la mejora de la percepción visual de la imagen reconstruida, mediante el suavizado de los "artefactos" de bloque generados en el lazo del codificador. Se trata de una de las tareas más intensivas en procesamiento de datos de H.264/AVC y de SVC, y además, su carga computacional es altamente dependiente del nivel de escalabilidad seleccionado en el decodificador. Por lo tanto, el deblocking filter ha sido seleccionado como prueba de concepto de la aplicación de las arquitecturas dinámicamente escalables para la compresión de video. La arquitectura propuesta permite añadir o eliminar unidades de computación, siguiendo un esquema de tipo wavefront. La arquitectura ha sido propuesta conjuntamente con un esquema de procesamiento en paralelo del deblocking filter a nivel de macrobloque, de tal forma que cuando se varía del tamaño de la arquitectura, el orden de filtrado de los macrobloques varia de la misma manera. El patrón propuesto se basa en la división del procesamiento de cada macrobloque en dos etapas independientes, que se corresponden con el filtrado horizontal y vertical de los bloques dentro del macrobloque. Las principales contribuciones originales de esta tesis son las siguientes: - El uso de arquitecturas altamente regulares, modulares, paralelas y con una intensa localidad en sus comunicaciones, para implementar cores de procesamiento dinámicamente reconfigurables. - El uso de arquitecturas bidimensionales, en forma de malla, para construir arquitecturas dinámicamente escalables, con una huella escalable. De esta forma, las arquitecturas permiten establecer un compromiso entre el área que ocupan en el dispositivo, y las prestaciones que ofrecen en cada momento. Se proponen plantillas de procesamiento genéricas, de tipo sistólico o wavefront, que pueden ser adaptadas a distintos problemas de procesamiento. - Un flujo de diseño y una herramienta que lo soporta, para el diseño de sistemas reconfigurables dinámicamente, centradas en el diseño de las arquitecturas altamente paralelas, modulares y regulares propuestas en esta tesis. - Un esquema de comunicaciones entre módulos reconfigurables que no introduce ningún retardo ni requiere el uso de recursos lógicos propios. 
- Un router flexible, capaz de resolver los conflictos de rutado asociados con el diseño de sistemas reconfigurables dinámicamente. - Un algoritmo de optimización para sistemas formados por múltiples cores escalables que optimice, mediante un algoritmo genético, los parámetros de dicho sistema. Se basa en un modelo conocido como el problema de la mochila. - Un motor de reconfiguración adaptado a los requisitos de las arquitecturas altamente regulares y modulares. Combina una alta velocidad de reconfiguración, con la capacidad de realojar módulos en tiempo real, incluyendo el soporte para la reconfiguración de regiones que ocupan menos que una región de reloj, así como la réplica de un módulo reconfigurable en múltiples posiciones del dispositivo. - Un mecanismo de inyección de fallos que, empleando el motor de reconfiguración del sistema, permite evaluar los efectos de fallos permanentes y transitorios en arquitecturas reconfigurables. - La demostración de las posibilidades de las arquitecturas propuestas en esta tesis para la implementación de sistemas de hardware evolutivos, con una alta capacidad de procesamiento de datos. - La implementación de sistemas de hardware evolutivo escalables, que son capaces de tratar con la fluctuación de la cantidad de recursos disponibles en el sistema, de una forma autónoma. - Una estrategia de procesamiento en paralelo para el deblocking filter compatible con los estándares H.264/AVC y SVC que reduce el número de ciclos de macrobloque necesarios para procesar un frame de video. - Una arquitectura dinámicamente escalable que permite la implementación de un nuevo deblocking filter, totalmente compatible con los estándares H.264/AVC y SVC, que explota el paralelismo a nivel de macrobloque. El presente documento se organiza en siete capítulos. En el primero se ofrece una introducción al marco tecnológico de esta tesis, especialmente centrado en la reconfiguración dinámica y parcial de FPGAs. También se motiva la necesidad de las arquitecturas dinámicamente escalables propuestas en esta tesis. En el capítulo 2 se describen las arquitecturas dinámicamente escalables. Dicha descripción incluye la mayor parte de las aportaciones a nivel arquitectural realizadas en esta tesis. Por su parte, el flujo de diseño adaptado a dichas arquitecturas se propone en el capítulo 3. El motor de reconfiguración se propone en el 4, mientras que el uso de dichas arquitecturas para implementar sistemas de hardware evolutivo se aborda en el 5. El deblocking filter escalable se describe en el 6, mientras que las conclusiones finales de esta tesis, así como la descripción del trabajo futuro, son abordadas en el capítulo 7. ABSTRACT The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has been always a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems cannot anymore be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Differently, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery life-time. All these conditions may vary at run-time, leading to adaptive scenarios. 
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot meet the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, where its functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration. This is a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off with the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques have been also addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device dependent low-level details are addressed. Some of the aspects covered in this thesis are the area constrained routing for reconfigurable modules, or an inter-module communication strategy which does not introduce either extra delay or logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphic user interface. 
The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communications scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has been also exploited in this thesis to obtain symmetric routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side channel attacks. This is achieved by duplicating the logic with an exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also proposed. Therefore, in addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with the online relocation ability, which allows employing a unique configuration bitstream for all the positions where the module may be placed in the device. Differently to the existing relocating solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy allows increasing the speed of the process, while the length of partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging process of the new and the existing configuration frames within each clock region. The process of scaling up and down the hardware cores also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has been also implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked over the speed reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operations at 250 MHz have been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and probably, future FPGAs, has been also considered. The reconfiguration engine can be also used to inject faults in real hardware devices, and this way being able to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable Hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features. 
The proposal has proved to be self-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise removal image filters. Scalability has been also exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, it constitutes an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists in a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach for video compression, which offers scalability at different levels. Differently to non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolutions, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client during run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is the responsible of improving the visual perception of a reconstructed image, by smoothing blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard, and furthermore, it is highly dependent on the selected scalability level in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. Scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are: - The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data intensive applications with flexibility requirements. - The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores, with a scalable footprint. The proposal consists in generic architectural templates, which can be tuned to solve different computational problems. •A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures. - An inter-module communication strategy, which does not introduce delay or area overhead, named Virtual Borders. - A custom and flexible router to solve the routing conflicts as well as the inter-module communication problems, appearing during the design of DPR systems. 
The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists of generic architectural templates that can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, named Virtual Borders, which does not introduce delay or area overhead.
- A custom and flexible router that solves the routing conflicts, as well as the inter-module communication problems, that appear during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, to optimize the system parameters. It is based on a model known as the multi-dimensional multiple-choice knapsack problem (a sketch of this formulation is given at the end of this abstract).
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for modules shorter than a clock region and for replication in multiple positions.
- A fault injection mechanism that takes advantage of the system reconfiguration engine, as well as of the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults on these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt autonomously to fluctuations in the amount of resources available in the system.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.

This document is organized in seven chapters. The first one provides an introduction to the technology framework of this thesis, with special focus on dynamic and partial reconfiguration, and motivates the need for the dynamically scalable processing architectures proposed in this work. Chapter 2 describes the dynamically scalable architectures, including most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMs tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis and a description of future work are addressed in chapter 7.
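
For concreteness, a toy instance of the multi-dimensional multiple-choice knapsack formulation mentioned in the contributions is sketched below: each scalable core contributes one option per supported size, every option consumes several resources at once, and exactly one option per core must be selected. All figures are invented for illustration (real ones would come from synthesis reports), and the exhaustive search shown here only makes sense for small instances; practical solvers use heuristics.

    from itertools import product

    cores = [                       # per core: (utility, area, bram) per size
        [(2, 10, 1), (5, 25, 2), (9, 60, 4)],   # core A at three sizes
        [(1, 8, 0), (4, 20, 2), (7, 45, 3)],    # core B at three sizes
        [(3, 12, 1), (6, 30, 2)],               # core C at two sizes
    ]
    AREA_BUDGET, BRAM_BUDGET = 80, 5

    best = None
    for choice in product(*cores):  # exactly one size per core (multiple-choice)
        area = sum(c[1] for c in choice)
        bram = sum(c[2] for c in choice)
        if area <= AREA_BUDGET and bram <= BRAM_BUDGET:   # multi-dimensional
            utility = sum(c[0] for c in choice)
            if best is None or utility > best[0]:
                best = (utility, choice)
    print(best)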

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In recent years, remote sensing imaging systems for the measurement of oceanic sea states have attracted renewed attention. Imaging technology is economical and non-invasive, and it enables a better understanding of the space-time dynamics of ocean waves over an area, rather than at the selected point locations of previous monitoring methods (buoys, wave gauges, etc.). We present recent progress in the space-time measurement of ocean waves using stereo vision systems on offshore platforms, focusing on sea states with wavelengths in the range of 0.01 m to 1 m. Both traditional disparity-based systems and modern elevation-based ones are presented in a variational optimization framework: the main idea is to pose the stereoscopic reconstruction problem of the surface of the ocean in a variational setting and to design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal smoothness priors. Disparity methods estimate the disparity between images as an intermediate step toward retrieving the depth of the waves with respect to the cameras, whereas elevation methods estimate the ocean surface displacements directly in 3-D space. Both techniques are used to measure ocean waves from real data collected at offshore platforms in the Black Sea (Crimean Peninsula, Ukraine) and the Northern Adriatic Sea (Venice coast, Italy). Then, the statistical and spectral properties of the resulting observed waves are analyzed. We show the advantages and disadvantages of the presented stereo vision systems and discuss future lines of research to improve their performance on critical issues, such as the robustness of the camera calibration in spite of undesired variations of the camera parameters, or the processing time needed to retrieve ocean wave measurements from the stereo videos, which are very large datasets that must be processed efficiently to be of practical use. Multiresolution and short-time approaches would improve the efficiency and scalability of the techniques, so that wave displacements can be obtained in feasible times.
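
In a generic form (our notation; the exact terms used by the authors may differ), such an energy functional over the unknown wave-height field Z(x, t) can be written as

    \[
    E(Z) = \int_0^T \!\! \int_\Omega \big( I_\ell(\mathbf{x},t) - I_r(\pi_Z(\mathbf{x}),t) \big)^2 \, d\mathbf{x}\, dt
         + \alpha \int_0^T \!\! \int_\Omega \lVert \nabla Z \rVert \, d\mathbf{x}\, dt
         + \beta  \int_0^T \!\! \int_\Omega \lvert \partial_t Z \rvert \, d\mathbf{x}\, dt ,
    \]

where the first term penalizes the photometric mismatch between the left image I_ℓ and the right image I_r warped through the reprojection π_Z induced by the surface, and α and β weight the spatial and temporal smoothness priors; the minimizer of E is the desired temporal sequence of wave heights.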

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems, based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open source hypervisor designed as a generic virtualization layer for heterogeneous multicore processors. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance, and automotive domains. The impact of MultiPARTES on the targeted domains will also be discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY), mixed-criticality integration is considered for multicore processors. Key challenges are the combination of software virtualization and hardware segregation, and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight, etc.) along with the development and certification methodology.
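
The temporal half of this separation can be illustrated with a static cyclic schedule of the kind used in partitioned systems. This is a conceptual Python sketch only, not XtratuM's actual interface: a fixed major frame is divided into minor slots, each granted exclusively to one partition, so a misbehaving low-criticality partition cannot consume CPU time budgeted to a critical one.

    import time

    # Invented partition names and budgets, for illustration only.
    MAJOR_FRAME = [("flight_control", 0.005), ("video", 0.003), ("logging", 0.002)]

    def run_slot(partition, budget_s):
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            pass  # stand-in for dispatching the partition's threads

    for frame in range(3):  # three major frames for the demo
        for partition, budget_s in MAJOR_FRAME:
            run_slot(partition, budget_s)
            print(f"frame {frame}: slot '{partition}' expired after {budget_s}s")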

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Partitioning is a common approach to developing mixed-criticality systems, in which partitions are isolated from each other both in the temporal and in the spatial domain, in order to prevent low-criticality subsystems from compromising subsystems with a high level of criticality in case of misbehaviour. The advent of many-core processors, on the other hand, opens the way to highly parallel systems in which all partitions can be allocated to dedicated processor cores. This trend will simplify processor scheduling, although other issues, such as mutual interference in the temporal domain, may arise as a consequence of memory and device sharing. The paper describes an architecture for multi-core partitioned systems that includes critical subsystems built with the Ada Ravenscar profile. Some implementation issues are discussed, and experience in implementing the ORK kernel on the XtratuM partitioning hypervisor is presented.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Nitrate leaching (NL) is an important N loss process in irrigated agriculture that imposes a cost on both the farmer and the environment. A meta-analysis of published experimental results from irrigated agricultural systems was conducted to identify those strategies that have proven effective at reducing NL and to quantify the scale of reduction that can be achieved. Forty-four scientific articles were identified that investigated four main strategies (water management, fertilizer management, use of cover crops, and fertilizer technology), creating a database with 279 observations on NL and 166 on crop yield. Management practices that adjust water application to crop needs reduced NL by a mean of 80% without a reduction in crop yield. Improved fertilizer management reduced NL by 40%, and the best relationship between yield and NL was obtained when applying the recommended fertilizer rate. Replacing a fallow with a non-legume cover crop reduced NL by 50%, while using a legume did not have any effect on NL. Improved fertilizer technology also decreased NL, but was the least effective of the selected strategies. The risk of nitrate leaching from irrigated systems is high, but optimum management practices may mitigate this risk and maintain crop yields while enhancing environmental sustainability.
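
As a hint of how such observations are typically aggregated (the exact effect-size metric of this meta-analysis is not detailed here), a common choice is the log response ratio of treatment to control, averaged across studies and back-transformed into a percentage change. The following sketch uses invented numbers:

    import math

    # Invented (treatment, control) nitrate-leaching pairs, e.g., in kg N/ha.
    observations = [(12.0, 55.0), (20.0, 110.0), (8.0, 43.0)]

    # lnRR = ln(NL_treatment / NL_control); an unweighted mean is used here,
    # whereas real meta-analyses weight each study by its variance.
    ln_rr = [math.log(treat / ctrl) for treat, ctrl in observations]
    mean_ln_rr = sum(ln_rr) / len(ln_rr)
    change = 100.0 * (math.exp(mean_ln_rr) - 1.0)
    print(f"mean change in NL: {change:.0f}%")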

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Lacunarity, as a means of quantifying the textural properties of spatial distributions, suggests a classification into three main classes of the most abundant soils, which cover 92% of Europe. Soils with a well-defined self-similar structure, belonging to the linear class, are related to widespread spatial patterns that are nondominant but ubiquitous at the continental scale. Fractal techniques have been increasingly and successfully applied to identify and describe spatial patterns in the natural sciences. However, objects with the same fractal dimension can show very different optical properties because of their spatial arrangement. This work focuses primarily on the geometrical structure of the geographical patterns of soils in Europe. We made use of the European Soil Database to estimate lacunarity indexes of the most abundant soils, which cover 92% of the surface of Europe, and investigated the textural properties of their spatial distribution. We observed three main classes, corresponding to three different patterns displayed by the graphs of the lacunarity functions: linear, convex, and mixed. These correspond, respectively, to homogeneous or self-similar distributions, to heterogeneous or clustered distributions, and to distributions whose behavior can change at different ranges of scales. Finally, we discuss the pedological implications of this classification.
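
Lacunarity functions of this kind are commonly estimated with the gliding-box algorithm; the Python sketch below applies it, illustratively, to a random toy map rather than to the European Soil Database. It is the shape of log Λ(r) versus log r that is then classified as linear, convex, or mixed:

    import numpy as np

    def lacunarity(binary_map, r):
        """Gliding-box lacunarity Lambda(r) = <M^2> / <M>^2 for r-by-r boxes."""
        rows, cols = binary_map.shape
        # Mass M = number of occupied cells in each position of the gliding box.
        masses = np.array([binary_map[i:i + r, j:j + r].sum()
                           for i in range(rows - r + 1)
                           for j in range(cols - r + 1)], dtype=float)
        mean = masses.mean()
        return (masses ** 2).mean() / mean ** 2 if mean > 0 else float("nan")

    soil = (np.random.rand(64, 64) < 0.3).astype(int)  # toy presence/absence map
    for r in (2, 4, 8, 16):
        print(r, round(lacunarity(soil, r), 3))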

Relevância:

30.00% 30.00%

Publicador:

Resumo:

With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded, because some of the objects they store are frequently accessed, while other cache instances are used far less. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing the stored data. Considering the possible conflict of balancing multiple resources at the same time, we give the CPU and memory resources weighted priorities based on the run-time load distributions: a scarcer resource is given a higher weight than a less scarce resource when load balancing. The system imbalance degree is evaluated from monitoring information and from the utility load of each node, a unified measure of its resource consumption. Moreover, since continuously rebalancing the system may affect the QoS of applications using the cache, our data selection policy ensures that each data migration minimizes the system imbalance degree, so that the total reconfiguration cost is also minimized. An extensive simulation is conducted to compare our policy with other policies; our policy shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
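
The weighting idea can be made concrete with a small sketch; the formulas below are plausible stand-ins rather than the paper's exact definitions. The scarcer resource (here, the one with the higher average utilization) receives the larger weight, each node's utility load blends its CPU and memory usage, and the imbalance degree is the variance of the utility loads:

    # Invented per-node (cpu, mem) utilizations in [0, 1].
    nodes = {"cache1": (0.9, 0.4), "cache2": (0.3, 0.8), "cache3": (0.5, 0.5)}

    def resource_weights(nodes):
        """Give the scarcer (more heavily used) resource the larger weight."""
        avg_cpu = sum(c for c, _ in nodes.values()) / len(nodes)
        avg_mem = sum(m for _, m in nodes.values()) / len(nodes)
        total = avg_cpu + avg_mem
        return avg_cpu / total, avg_mem / total

    def imbalance_degree(nodes):
        """Variance of the per-node utility loads under the current weights."""
        w_cpu, w_mem = resource_weights(nodes)
        loads = [w_cpu * c + w_mem * m for c, m in nodes.values()]
        mean = sum(loads) / len(loads)
        return sum((l - mean) ** 2 for l in loads) / len(loads)

    print(imbalance_degree(nodes))

A data selection policy of the kind described would then evaluate candidate migrations and keep only the one whose simulated imbalance degree is lowest.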

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Aims of study: The goals of this paper are to summarize and compare plant species richness and floristic similarity at two spatial scales, mesohabitat (normal, eutrophic, and oligotrophic dehesas) and dehesa habitat, and to establish guidelines for conserving species diversity in dehesas. Area of study: We considered four dehesa sites in western peninsular Spain, located along a climatic and biogeographic gradient from north to south. Main results: Average alpha richness for mesohabitats was 75.6 species, and average alpha richness for dehesa sites was 146.3. Gamma richness assessed for the overall dehesa habitat was 340.0 species. The species richness of the normal dehesa mesohabitat was significantly lower than that of both the eutrophic and the oligotrophic mesohabitats. No significant differences in species richness were found among dehesa sites. We found more dissimilarity at the local scale (mesohabitat) than at the regional scale (habitat). Finally, the results of the similarity assessment between dehesa sites reflected both the climatic and the biogeographic gradients. Research highlights: Effective conservation of dehesas must take into account local and regional conditions along their entire distribution range, in order to ensure the conservation of the main vascular plant species assemblages as well as of the associated fauna.
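
The richness and similarity bookkeeping behind such figures is straightforward; the sketch below uses toy species sets (not the study's data) and the Jaccard index as one common floristic similarity measure, since the exact index used in the paper is not stated here:

    sites = {  # toy species lists, for illustration only
        "north": {"Quercus ilex", "Trifolium subterraneum", "Poa bulbosa"},
        "south": {"Quercus ilex", "Lolium rigidum", "Poa bulbosa", "Bromus sp."},
    }

    alpha = {name: len(species) for name, species in sites.items()}  # per site
    gamma = len(set.union(*sites.values()))                          # pooled set

    def jaccard(a, b):
        """Shared species over total species of the two sites."""
        return len(a & b) / len(a | b)

    print(alpha, gamma, jaccard(sites["north"], sites["south"]))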

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Remote sensing imaging systems for the measurement of oceanic sea states have recently attracted renewed attention. Imaging technology is economical and non-invasive, and it enables a better understanding of the space-time dynamics of ocean waves over an area, rather than at the selected point locations of previous monitoring methods (buoys, wave gauges, etc.). We present recent progress in the space-time measurement of ocean waves using stereo vision systems on offshore platforms. Both traditional disparity-based systems and modern elevation-based ones are presented in a variational optimization framework: the main idea is to pose the stereoscopic reconstruction problem of the surface of the ocean in a variational setting and to design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal smoothness priors. Disparity methods estimate the disparity between images as an intermediate step toward retrieving the depth of the waves with respect to the cameras, whereas elevation methods estimate the ocean surface displacements directly in 3-D space. Both techniques are used to measure ocean waves from real data collected at offshore platforms in the Black Sea (Crimean Peninsula, Ukraine) and the Northern Adriatic Sea (Venice coast, Italy). Then, the statistical and spectral properties of the resulting observed waves are analyzed. We show the advantages and disadvantages of the presented stereo vision systems and discuss how to improve their performance on critical issues, such as the robustness of the camera calibration in spite of undesired variations of the camera parameters.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Research in stereoscopic 3D coding, transmission and subjective assessment methodology depends largely on the availability of source content that can be used in cross-lab evaluations. While several studies have already been presented using proprietary content, comparisons between the studies are difficult, since differing contents were used. Therefore, in this paper a freely available dataset of high-quality Full-HD stereoscopic sequences, shot with a semiprofessional 3D camera, is introduced in detail. The content was designed to be suitable for use in a wide variety of applications, including high-quality studies. A set of depth maps was calculated from the stereoscopic pairs. As an application example, a subjective assessment has been performed using coding and spatial degradations. The Absolute Category Rating with Hidden Reference method was used, and the observers were instructed to vote on video quality only. The results of this experiment are also freely available and are presented in this paper as a first step towards objective video quality measurement for 3DTV.
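
In the Absolute Category Rating with Hidden Reference method (ITU-T P.910), each observer's vote on a processed sequence is offset by their vote on the hidden reference, yielding a differential score that is averaged into a differential mean opinion score (DMOS). A minimal sketch with invented votes:

    # (vote_on_processed, vote_on_hidden_reference) per observer, 5-point ACR scale
    ratings = [(3, 5), (4, 5), (2, 4), (3, 4)]

    # Differential viewer score DV = V(processed) - V(reference) + 5;
    # values above 5 are commonly clipped to the top of the scale.
    dv = [min(5, pvs - ref + 5) for pvs, ref in ratings]
    dmos = sum(dv) / len(dv)
    print(f"DMOS = {dmos:.2f}")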

Relevância:

30.00% 30.00%

Publicador:

Resumo:

There is a growing call for inventories that evaluate geographic patterns in the diversity of plant genetic resources maintained on farm and in species' natural populations, in order to enhance their use and conservation. Such evaluations are relevant for useful tropical and subtropical tree species, as many of these species are still undomesticated, or in incipient stages of domestication, and local populations can offer as-yet-unknown traits of high value for further domestication. For many outcrossing species, such as most trees, inbreeding depression can be an issue, and genetic diversity is important to sustain local production. Diversity is also crucial for species to adapt to environmental changes. This paper explores the possibilities of incorporating molecular marker data into Geographic Information Systems (GIS) to allow the visualization and a better understanding of spatial patterns of genetic diversity, as a key input to optimize the conservation and use of plant genetic resources, based on a case study of cherimoya (Annona cherimola Mill.), a Neotropical fruit tree species. We present spatial analyses to (1) improve the understanding of the spatial distribution of the genetic diversity of cherimoya natural stands and cultivated trees in Ecuador, Bolivia and Peru, based on microsatellite molecular markers (SSRs); and (2) formulate optimal conservation strategies by revealing priority areas for in situ conservation and identifying existing diversity gaps in ex situ collections. We found high levels of allelic richness, locally common alleles and expected heterozygosity in cherimoya's putative centre of origin, southern Ecuador and northern Peru, whereas the levels of diversity in southern Peru and especially in Bolivia were significantly lower. The application of GIS to a large microsatellite dataset allows a more detailed prioritization of areas for in situ conservation, and a more targeted collection across the Andean distribution range of cherimoya, than previous studies could achieve, i.e. at the province and department level in Ecuador and Peru, respectively.
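
Two of the diversity statistics named above can be computed per SSR locus in a few lines. The sketch below uses toy allele calls, reports the plain allele count as a stand-in for allelic richness (rarefaction to a common sample size is omitted for brevity), and computes the standard expected heterozygosity He = 1 − Σ p_i²:

    from collections import Counter

    def locus_diversity(alleles):
        """Allele count and expected heterozygosity He = 1 - sum(p_i^2)."""
        counts = Counter(alleles)
        n = len(alleles)
        richness = len(counts)                                # distinct alleles
        he = 1.0 - sum((c / n) ** 2 for c in counts.values()) # gene diversity
        return richness, he

    locus = ["a1", "a1", "a2", "a3", "a3", "a3", "a4"]  # toy SSR allele calls
    print(locus_diversity(locus))

Mapping such per-locus statistics onto the sampling coordinates in a GIS is what then reveals the spatial diversity gradients described above.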