12 results for biological reference points

at Universidad Politécnica de Madrid


Relevance:

80.00%

Publisher:

Abstract:

This article presents a cartographic system to facilitate cooperative manoeuvres among autonomous vehicles in a well-known environment. The main objective is to design an extended cartographic system to support the navigation of autonomous vehicles. The system must allow the vehicles not only to access the reference points needed for navigation, but also to retrieve relevant information such as the location and type of traffic signals, the proximity of a crossing, the streets en route, etc. To this end, a hierarchical representation of the information was chosen, with the information stored in two levels. The lower level contains the files with the Universal Transverse Mercator (UTM) coordinates of the points that define the reference segments to follow. The upper level contains a directed graph, backed by a relational database, in which streets, crossings, roundabouts and other points of interest are represented. Using this system it is possible to know when the vehicle approaches a crossing, which other paths arrive at that crossing and, should other vehicles be circulating on those paths and arriving at the crossing, which one has the highest priority. The data obtained from the cartographic system are used by the autonomous vehicles for cooperative manoeuvres.
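The two-level structure described above can be sketched minimally. Everything below (segment names, crossing identifiers, the priority rule) is invented for illustration and is not taken from the article:

```python
# Lower level: UTM coordinates (easting, northing) defining each reference segment.
segments = {
    "S1": [(440100.0, 4474200.0), (440150.0, 4474260.0)],
    "S2": [(440300.0, 4474100.0), (440150.0, 4474260.0)],
}

# Upper level: directed graph; edges are segments, nodes are crossings.
# Lower "priority" number = higher right of way (an illustrative rule).
graph = {
    "S1": {"to": "C1", "priority": 1},
    "S2": {"to": "C1", "priority": 2},
}

def arriving_segments(crossing):
    """Return the segments that arrive at a given crossing."""
    return [s for s, e in graph.items() if e["to"] == crossing]

def highest_priority(crossing):
    """Among vehicles arriving at a crossing, pick the segment with right of way."""
    return min(arriving_segments(crossing), key=lambda s: graph[s]["priority"])

print(arriving_segments("C1"))  # ['S1', 'S2']
print(highest_priority("C1"))   # 'S1'
```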

Relevance:

80.00%

Publisher:

Abstract:

Systems used for the localization of targets such as goods, individuals or animals commonly rely on fully operational means to meet the demands of the final application. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced when deploying a localization network that integrates energy-harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references such as those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts.
Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are therefore not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge most often addressed aims at attaining high precision levels from complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with only part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The use of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; this may also be an advantage in economic terms. A second kind of node is battery-powered (without a kinetic energy harvester) and is therefore dependent on temperature and battery replacements. In addition, its operation is constrained by duty cycles in order to extend node lifetime and, consequently, its autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. Hotspots are also battery-powered and are used to retrieve information from the network so that it can be presented to users.
The system's operational chain starts at the kinetic-powered nodes broadcasting their own identifier. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection registry comprises, at least, a node identifier and the position read from the GPS module by the battery-operated node prior to the detection. The characteristics of the system give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements of the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors grow even further; recall that this behaviour is tied to the aforementioned power-saving policies that extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of the detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically powered target and a population of battery-operated nodes with higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter.
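Of the three Kalman-family variants named above, the alpha-beta filter is the simplest. A minimal one-dimensional sketch, with illustrative gains and invented noisy reference positions (not the Thesis's actual tracker), might look like:

```python
# Minimal 1-D alpha-beta filter: predict with a constant-velocity model,
# then correct position and velocity with fixed gains alpha and beta.
# Gains and data below are illustrative only.

def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Track position from noisy reference points; returns smoothed positions."""
    x, v = measurements[0], 0.0      # initial position and velocity estimates
    out = []
    for z in measurements:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # innovation (residual)
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        out.append(x)
    return out

# Invented noisy readings of a target moving roughly one unit per step.
noisy = [0.0, 1.2, 1.9, 3.1, 4.0, 4.8, 6.2]
smoothed = alpha_beta_track(noisy)
print([round(x, 2) for x in smoothed])
```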
The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model the nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and become more environmentally friendly by diminishing the potential number of batteries that can be lost. Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as has been explained, one of the topics of this Thesis.

Relevance:

80.00%

Publisher:

Abstract:

This study reports the results of a water footprint (WF) assessment of five types of textiles commonly used for the production of jeans, covering two different fibres (cotton and Lyocell) and five corresponding production methods for spinning, dyeing and weaving. The results show that fibre production is the stage with the highest water consumption, with cotton production being particularly relevant. The study therefore pays particular attention to the water footprint of cotton production, analyses the effects of external factors influencing the water footprint of a product (in this case, the incentives provided by the EU Common Agricultural Policy, CAP), and emphasises the relevance of agricultural practices to the water footprint of a product. An extensification of crop production led to a higher WF per unit, but a lower overall pressure on the basin's water resources. The study performs a sustainability assessment of the estimated cotton WFs with the water scarcity index, as proposed by Hoekstra et al. (2011), and shows their variation in different years as a result of different water consumption by crops in the rest of the river basin. In our case, we applied the assessment to the Guadalquivir, Guadalete and Barbate river basins, three semi-arid basins in southern Spain. Because they are found to be relevant, the available water stored in dams and the outflow are also incorporated as reference points for the sustainability assessment. The study concludes that, in the case of Spanish cotton production, the situation of the basin and the policy impact are more relevant for the status of the basin's water resources than the actual WF of cotton production. Therefore, strategies aimed at reducing the impact of the water footprint of a product need to analyse the WF both along the value chain and within the local context.
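The trade-off noted above (extensification raising the WF per unit while lowering basin pressure) reduces to simple arithmetic. The figures below are invented for illustration; the scarcity ratio follows the Hoekstra et al. (2011) definition of blue-water scarcity as consumption over availability:

```python
# Illustrative water-footprint arithmetic; all volumes and yields are invented.

def wf_per_unit(water_used_m3, yield_tonnes):
    """Water footprint per unit of product (m3 per tonne)."""
    return water_used_m3 / yield_tonnes

def blue_water_scarcity(total_blue_wf_m3, blue_water_available_m3):
    """Blue-water scarcity index; > 1.0 means consumption exceeds availability."""
    return total_blue_wf_m3 / blue_water_available_m3

# Extensification: lower total water use and yield -> higher WF per unit,
# yet lower total pressure on the basin.
intensive = wf_per_unit(water_used_m3=5_000_000, yield_tonnes=2_000)  # 2500 m3/t
extensive = wf_per_unit(water_used_m3=3_000_000, yield_tonnes=1_000)  # 3000 m3/t
print(intensive, extensive)
print(blue_water_scarcity(3_000_000, 10_000_000))  # 0.3 -> within availability
```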

Relevance:

80.00%

Publisher:

Abstract:

Dynamic measurements will become a standard for bridge monitoring in the near future, a development that will bring an important reduction in maintenance costs. The US Administration has a long-term intensive research programme aimed at reducing the estimated current maintenance cost of US$7 billion per year over 20 years. An optimal maintenance intervention programme demands a historical dynamic record as well as an updated mathematical model of the structure to be monitored. If a model of the structure is not available it can be produced; missing measurement records from the past, however, cannot be recovered. Current acquisition systems for structural monitoring can be made more efficient by introducing the following improvements, under development in the Spanish research project "Low cost bridge health monitoring by ambient vibration tests using wireless sensors": (a) a fully wireless system to acquire sensor data; (b) a wireless system that permits the localization and hardware identification of the whole sensor network (the localization system applied has been the object of a recent patent); and (c) automation of the modal identification process, aimed at reducing human intervention. The system is assembled from inexpensive components and allows the simultaneous use of a large number of sensors at low placement cost. The engineer's intervention is limited to the selection of sensor positions, probably based on a preliminary FE analysis. In the case of multiple setups, the positions of a number of fixed reference sensors must also be decided. The wireless localization system then obtains the exact coordinates of all these sensor positions. When the selection of optimal positions is difficult, for example because a proper FE model is lacking, this can be compensated by using a higher number of measuring (and reference) points.
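As a rough illustration of the automated modal identification mentioned in point (c), a spectral peak-picking sketch on a synthetic two-mode ambient signal might look like the following; real pipelines use more robust methods such as FDD or SSI, and all values here are synthetic:

```python
# Identify candidate natural frequencies by picking the largest spectral peaks
# of a synthetic ambient-vibration record with modes at 2.0 Hz and 5.5 Hz.
import numpy as np

fs = 100.0                          # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)        # 60 s record
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 2.0 * t)
          + 0.6 * np.sin(2 * np.pi * 5.5 * t)
          + 0.3 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Pick the two largest spectral peaks as candidate natural frequencies.
order = np.argsort(spectrum)[::-1]
peaks = sorted(freqs[order[:2]])
print([round(float(f), 1) for f in peaks])  # [2.0, 5.5]
```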
The described low-cost acquisition system allows the responsible bridge administration to obtain historical dynamic identification records at reasonable cost for use in future maintenance programmes. Given the importance of the baseline monitoring record of a new bridge, a monitoring test immediately after construction is therefore highly recommended, if not made compulsory.

Relevance:

80.00%

Publisher:

Abstract:

Determining with good accuracy the position of a mobile terminal immersed in an indoor environment (shopping centres, office buildings, airports, stations, tunnels, etc.) is the cornerstone on which a large number of applications and services rest. Many of these services are already available in outdoor environments, although indoor environments lend themselves to other services specific to them. Their number, however, could be significantly higher than it is today if a costly infrastructure were not required to perform positioning with the precision appropriate to each hypothetical service; or, equally, if that infrastructure could serve other purposes besides positioning. The usability of the same infrastructure for other ends would mean it is often already present at the different locations, having been previously deployed for those other uses, or would ease its deployment, because the cost of that operation would offer a greater return on usability for whoever carries it out. Wireless radio-frequency communication technologies already in use for voice and data (mobile networks, WLAN, etc.) meet this requirement and would therefore foster the growth of positioning-based applications and services if they could be employed for that purpose. However, determining position with an adequate level of accuracy using these technologies remains a major challenge today. This work aims to contribute significant advances in this field.

The work first surveys the main positioning algorithms and auxiliary techniques applicable in indoor environments. The review focuses on those suitable both for latest-generation mobile technologies (especially LTE femtocells and small cells) and for WLAN environments, highlighting the advantages and disadvantages of each algorithm, and always bearing in mind that the final aim is indoor use. The main conclusion of the review is that triangulation techniques, commonly employed for outdoor localization, prove useless indoors because of adverse effects characteristic of such environments, such as the loss of line of sight and multipath propagation. Radio fingerprinting methods, which compare the signal-strength values received by the mobile terminal at positioning time against the values recorded in a radio power map built during an initial calibration phase, emerge as the best option for indoor scenarios. These systems, however, are affected by other problems, such as the considerable work required to set them up and the variability of the channel. Against this background, this work presents two original contributions to improve fingerprinting-based systems.

The first contribution describes a simple method to determine the basic dimensioning of the system: the number of samples needed to build the reference radio map, together with the minimum number of radio-frequency emitters to deploy, derived from initial requirements on positioning error and precision combined with the dimensions and physical layout of the environment. This provides initial guidelines for dimensioning the system and counters the negative effects on cost and overall performance caused by an inefficient deployment of emitters and map capture points. The second contribution increases the real-time accuracy of the system through a technique for automatic recalibration of the radio power map. This technique uses the measurements continuously reported by a few static reference points, strategically distributed in the environment, to recompute and update the powers recorded in the radio map. An additional operational benefit is the extension of the system's reliable usability period, reducing the frequency with which the complete radio map must be recaptured. These improvements are directly applicable to indoor positioning mechanisms based on wireless voice and data communication infrastructure. From there, they extend to location services (knowing where oneself is), monitoring (knowledge of that location by third parties) and tracking (monitoring prolonged over time), all of which depend on correct positioning for adequate performance.
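The fingerprinting-plus-recalibration idea can be sketched with a toy radio map. All RSSI values, positions and the single-reference-point drift model below are invented for illustration and are not the method proposed in the work:

```python
# Toy RSSI fingerprinting: a calibration-phase radio map (position -> RSSI in dBm
# per emitter), nearest-neighbour matching, and a crude map recalibration using
# the drift observed at one static reference point.

radio_map = {
    (0, 0): [-40, -70],
    (0, 5): [-55, -60],
    (5, 0): [-60, -52],
}

def recalibrate(radio_map, ref_cal, ref_live):
    """Shift the whole map by the per-emitter drift seen at a static reference point."""
    drift = [live - cal for live, cal in zip(ref_live, ref_cal)]
    return {pos: [r + d for r, d in zip(rssi, drift)]
            for pos, rssi in radio_map.items()}

def locate(radio_map, measured):
    """Nearest-neighbour fingerprinting: position whose stored RSSI is closest."""
    return min(radio_map,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(radio_map[p], measured)))

# The static reference point calibrated at (0, 0) now reads 3 dB lower on both emitters.
updated = recalibrate(radio_map, ref_cal=[-40, -70], ref_live=[-43, -73])
print(locate(updated, measured=[-58, -63]))  # (0, 5)
```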

Relevance:

80.00%

Publisher:

Abstract:

La aparición del tren de alta velocidad en Europa en las últimas décadas del siglo XX supuso el resurgir de un medio de transporte en progresivo declive desde la popularización del automóvil y del avión. La decadencia del ferrocarril había supuesto en muchos casos el abandono, o incluso la demolición, de estaciones históricas y el deterioro de su entorno urbano. Como reacción a esa desatención surgió, también en el último cuarto de siglo, una mayor conciencia social preocupada por la conservación del patrimonio construido del ferrocarril. La necesidad de adaptación de las grandes estaciones de ferrocarril para dar servicio al nuevo sistema de transporte, junto con el interés por poner en valor sus construcciones históricas y su céntrico entorno, ha dado como resultado la realización de importantes transformaciones. El objeto de la presente investigación es el estudio de las transformaciones que han sufrido las grandes estaciones europeas del siglo XIX con la llegada del tren de alta velocidad, profundizando de manera especial en el caso más significativo que tenemos en nuestro país: la estación de Atocha. En el ámbito europeo es donde se localizan los ejemplos más relevantes de estaciones que tuvieron gran trascendencia en el siglo XIX y que ahora, con la llegada de la Alta Velocidad, vuelven a recuperar su grandeza. En España, el crecimiento de la Alta Velocidad en los últimos años ha sido extraordinario, hasta situarse como el segundo país del mundo con más kilómetros de líneas de alta velocidad en operación y, en consecuencia, se ha construido un gran número de estaciones adaptadas a este servicio. El caso más notable es el de la estación de Atocha, que desde la llegada del AVE en 1992 hasta el día de hoy, se ha convertido en uno de los complejos ferroviarios más importantes del mundo. 
El trabajo parte del estudio de otros referentes europeos, como las Gares de París, la estación de St Pancras en Londres y de otras cinco estaciones del centro de Europa –Amsterdam Centraal, Antwerpen Centraal, Köln Hauptbahnhof, Frankfurt (Main) Hauptbahnhof y la Gare de Strasbourg–, para establecer el marco analítico sobre el que se profundiza con la estación de Atocha. El proceso de transformación de la estación de Atocha se ha gestado a través de una serie de proyectos que han ido configurando la estación hasta el momento actual y planteando la previsión de futuro: el proyecto del Plan General de Madrid, el concurso de ideas para el diseño de la estación, la estación de Cercanías, la estación de Alta Velocidad y Largo Recorrido, la ampliación de esta para separar los flujos por niveles, los Estudios Informativos del Nuevo Complejo Ferroviario de la Estación de Atocha y su primera fase de construcción. Estos siete proyectos son objeto de un análisis en tres niveles: análisis cronológico, análisis funcional y análisis formal. La estación de Atocha fue la primera estación histórica europea en sufrir una gran transformación vinculada a la llegada de la Alta Velocidad. Aporta el entendimiento de la estación como un todo y la intermodalidad como sus principales valores, además de la gran mejora urbana que supuso la «operación Atocha», y adolece de ciertas carencias en su desarrollo comercial, vinculadas en parte a la presencia del jardín tropical, y de un pobre espacio en las salas de embarque para los pasajeros de salidas. La estación de Atocha completa su transformación a partir de su renovación funcional, manteniendo la carga simbólica de su historia. De la confrontación del caso de Atocha con otras importantes estaciones europeas resulta la definición de las principales consecuencias de la llegada de la Alta Velocidad a las grandes terminales europeas y la identificación de los elementos clave en su transformación. 
Las consecuencias principales son: la potenciación de la intermodalidad con otros medios de transporte, el desarrollo comercial no necesariamente destinado a los usuarios de los servicios ferroviarios, y la puesta en valor de la antigua estación y de su entorno urbano. Por su parte, los elementos clave en la transformación de las grandes estaciones tienen que ver directamente con la separación de flujos, el entendimiento de la estación por niveles, la dotación de nuevos accesos laterales y la construcción de una nueva gran cubierta para los nuevos andenes. La preeminencia de unos elementos sobre otros depende del carácter propio de cada estación y de cada país, de la magnitud de la intervención y, también, de la estructura y composición de los equipos encargados del diseño de la nueva estación. En la actualidad, nos encontramos en un momento interesante respecto a las estaciones de Alta Velocidad. Tras el reciente atentado frustrado en el Thalys que viajaba de Ámsterdam a París, se ha acordado establecer controles de identidad y equipajes en todas las estaciones de la red europea de alta velocidad, lo que implicará modificaciones importantes en las grandes estaciones que, probablemente, tomarán el modelo de la estación de Atocha como referencia. ABSTRACT The emergence of the high speed train in Europe in the last few decades of the 20th century represented the resurgence of a means of transport in progressive decline since the popularization of the car and the airplane. The railway decay brought in many cases the abandonment, or even the demolition, of historical stations and the deterioration of its urban environment. In response to that neglect, a greater social awareness towards the preservation of the railway built heritage raised up, also in the last quarter-century. 
The need for adaptation of the great railway stations to serve the new transport system, along with the interest in enhancing the historical buildings and its central locations, had resulted in important transformations. The subject of current investigation is the study of the transformations that the great 19th century European stations have experienced with the arrival of the high speed rail, deepening in particular in the most significant case we have in Spain: Atocha railway station. At European level is where the most relevant examples of stations which have had a great significance in the 19th century and now, with the arrival of the high speed train, have regain their greatness, are located. In Spain, the growth of the high speed rail over the past few years has been outstanding. Today is the second country in the world with the longest high speed rail network in operation and, therefore, with a great number of new stations adapted to this service. The most remarkable case is Atocha station. Since the arrival of the AVE in 1992, the station has become one of the world's most important railway hub. The research starts with the study of other European reference points, as the Gares of Paris, St Pancras station in London and five other stations of Central Europe –Amsterdam Centraal, Antwerpen Centraal, Köln Hauptbahnhof, Frankfurt (Main) Hauptbahnhof y la Gare de Strasbourg–, to establish the analytical framework that will be deepen with Atocha station. The transformation process of Atocha station has been created through a number of projects that have forged the station to date and have raised the sights in the future: the project of the General Urban Development Plan, the ideas competition for the station design, the Suburban train station, the High Speed and Long Distance station, its enlargement in order to separate passenger flows in different levels, the 'Masterplans' for the new Atocha transport hub and its first phase of construction. 
These seven projects are under scrutiny at three levels: chronological analysis, functional analysis and formal analysis. Atocha station was the first European historical station to undergo a great transformation tied to the arrival of the high speed rail. It brings the understanding of the station as a whole and the intermodality as its greatest values, besides the great urban improvement of the 'Atocha operation', and suffers from certain shortcomings in its commercial development, partly linked to the presence of the tropical garden, and from a poor space in the departure lounges. Atocha station completes its transformation on the basis of its functional renewal, keeping the symbolic charge of its history. The confrontation of Atocha case with the great European stations results in the definition of the principal consequences of the high speed rail arrival to the great European terminals and the identification of the key elements in its transformation. The principal consequences are: the empowering of the intermodality with other means of transport, of the commercial development, not necessarily intended for railway services users, and the enhancement of the old station and its urban environment. On the other hand, the key elements in the transformation of the great stations are directly related with the separation of passenger flows, the understanding of the station in different levels, the placement of new lateral accesses and the construction of a new deck over the new platforms. The pre-eminence of some elements over the others depends on the particular nature of each station and each country, on the scale of the intervention and also in the structure and composition of the teams in charge of the new station design. Nowadays, this is an interesting time concerning the high speed rail stations. 
After the recent foiled terrorist attempt on the Thalys train travelling from Amsterdam to Paris, it was agreed to establish passenger and luggage controls in every European high-speed rail station. This will mean important changes in these great stations, which will probably take Atocha's model as a reference.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Fish communities are a key element in fluvial ecosystems. Their position at the top of the food chain and their sensitivity to a whole range of impacts make them a clear objective for ecosystem conservation and a sound indicator of biological integrity. The EU Water Framework Directive includes fish community composition, abundance and structure as relevant elements for the evaluation of biological condition. Several approaches have been proposed for evaluating the condition of fish communities, from the bio-indicator concept to the IBI (Index of Biotic Integrity) proposals. However, the complexity of fish communities and their ecological responses makes this evaluation difficult, and we must avoid both oversimplified and extreme analytical procedures. In this work we present a new proposal for defining reference conditions in fish communities, discussing them from an ecological viewpoint. The method is a synthetic approach called the Synthetic Open Methodological Framework (SOMF), which has been applied to the rivers of Navarra. As a result, the integration of all the available information from spatial, modelling, historical and expert sources is recommended, as it provides the best approach to fish reference conditions while keeping the highest level of information and meeting the legal requirements of the WFD.
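The integration step at the heart of SOMF can be illustrated with a small, purely hypothetical sketch: each information source (spatial, modelling, historical, expert) contributes an estimate of a reference-condition metric together with a reliability weight, and the consensus reference value is their weighted mean. All function names, weights and figures below are invented for illustration; they are not taken from the study.

```python
# Hypothetical sketch: combining reference-condition estimates for one
# fish-community metric (e.g. expected species richness at a site) from
# the four information sources the SOMF approach integrates.

def combine_reference_estimates(estimates):
    """Weighted consensus of per-source (value, reliability) pairs."""
    total_weight = sum(w for _, w in estimates.values())
    return sum(v * w for v, w in estimates.values()) / total_weight

sources = {
    "spatial":    (12.0, 0.9),  # reference sites of the same river type
    "modelling":  (11.0, 0.7),  # habitat-based predictive model
    "historical": (14.0, 0.5),  # pre-impact survey records
    "expert":     (12.5, 0.6),  # expert judgement
}

expected_richness = combine_reference_estimates(sources)
```

A real application would, of course, derive the reliability weights from data availability and quality for each river type rather than fixing them by hand.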

Relevância:

30.00% 30.00%

Publicador:

Resumo:

A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major challenges still to be solved in the multimedia environment. Video quality has an enormous impact on the end user's (consumer's) perception of services based on the delivery of multimedia content, and it is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system. The most important are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach, in which the quality computation is based on the extraction and comparison of intrinsic image parameters. Despite the advances made in this field in recent years, research on video quality metrics, whether in the presence of the reference (so-called full-reference models), in the presence of part of it (reduced-reference models) or even in its absence (no-reference models), still has a long way to go and many objectives to reach. Among these, the measurement of high-definition signals, especially the very high quality signals used in the early stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models currently exist. 
This doctoral thesis presents a full-reference quality measurement model that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the extraction of the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the set of Haralick texture statistics. The novelty of PARMENIA lies in the use of mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been tied to remote sensing and object segmentation. In addition, the formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural similarity models and on more classical ones based on the perceptibility of the error produced by the compression-related degradation of the signal. PARMENIA shows a very high correlation with the MOS scores obtained from the subjective user tests carried out for its validation. The working corpus comes from internationally validated sequence sets, so that the results reported are of the highest possible quality and rigour. 
The methodology followed consisted of generating a set of test sequences of different qualities by encoding with different quantisation steps, obtaining subjective ratings for them through subjective quality tests (based on International Telecommunication Union Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective values, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimised and their high correlation with perception confirmed, a second evaluation was carried out on sequences from the HDTV test dataset 1 of the Video Quality Experts Group (VQEG), the results obtained showing clear advantages. Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions, together with increasing quality demands (e.g. high definition and better image quality), calls for a redefinition of quality measurement models. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, to sum up the state of the art, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches to measuring video quality. This doctoral thesis then describes an enhanced solution for full-reference objective quality measurement, based on mathematical morphology, texture features and visual similarity information, that provides a normalised metric highly correlated with MOS scores, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis). 
The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric's performance is excellent and improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at the very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty with respect to other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios that uses mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it obtains results with a wide dynamic range that allows the quality of high-definition sequences to be measured from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials. All these results come from a validation carried out on internationally validated datasets on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient was calculated to verify the accuracy of PARMENIA and its reliability.
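The ratio-pooling and Pearson-validation steps described above can be sketched in toy form. The ratio values, weights and MOS figures below are invented for illustration only; the thesis computes the real ratios from Beucher's gradient, local contrast filtering and Haralick texture features.

```python
# Illustrative sketch of a PARMENIA-style score: a weighted combination
# of four quality ratios, validated against MOS via Pearson correlation.
import math

WEIGHTS = {"fidelity": 0.35, "similarity": 0.30, "sharpness": 0.20, "complexity": 0.15}

def pooled_score(ratios):
    """Weighted pooling of the four quality ratios (weights are invented)."""
    return sum(WEIGHTS[k] * ratios[k] for k in WEIGHTS)

def pearson(xs, ys):
    """Pearson correlation coefficient, used to validate a metric against MOS."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Four coded versions of one sequence, from low to high quality (made-up ratios).
sequences = [
    {"fidelity": 0.42, "similarity": 0.40, "sharpness": 0.38, "complexity": 0.45},
    {"fidelity": 0.61, "similarity": 0.58, "sharpness": 0.55, "complexity": 0.60},
    {"fidelity": 0.80, "similarity": 0.77, "sharpness": 0.74, "complexity": 0.79},
    {"fidelity": 0.95, "similarity": 0.93, "sharpness": 0.90, "complexity": 0.96},
]
mos = [2.1, 3.0, 3.9, 4.6]  # hypothetical subjective scores
scores = [pooled_score(s) for s in sequences]
```

With ratios that grow monotonically with quality, the pooled score tracks the MOS scale almost linearly, which is exactly what the Pearson coefficient quantifies in the validation step.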

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Experimental methods based on single particle tracking (SPT) are being increasingly employed in the physical and biological sciences, where nanoscale objects are visualized with high temporal and spatial resolution. SPT can probe interactions between a particle and its environment but the price to be paid is the absence of ensemble averaging and a consequent lack of statistics. Here we address the benchmark question of how to accurately extract the diffusion constant of one single Brownian trajectory. We analyze a class of estimators based on weighted functionals of the square displacement. For a certain choice of the weight function these functionals provide the true ensemble averaged diffusion coefficient, with a precision that increases with the trajectory resolution.
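As a minimal numerical illustration of the problem (not the weighted-functional estimator analyzed in the abstract), the sketch below estimates the diffusion coefficient of a single simulated two-dimensional Brownian trajectory from its time-averaged squared displacements at unit lag, using D ≈ ⟨Δr²⟩ / (2 d Δt):

```python
# Estimate D from one simulated Brownian trajectory (2D, lag-1 MSD).
# All parameter values are illustrative.
import random

random.seed(42)
D_TRUE, DT, N, DIM = 1.0, 0.01, 100_000, 2
sigma = (2 * D_TRUE * DT) ** 0.5          # per-dimension step standard deviation

# Squared displacement of each step: sum over dimensions of Gaussian increments.
sq_disp = [
    sum(random.gauss(0.0, sigma) ** 2 for _ in range(DIM))
    for _ in range(N)
]

# Time-averaged MSD estimator: D_hat = <dr^2> / (2 * d * dt).
D_hat = sum(sq_disp) / (len(sq_disp) * 2 * DIM * DT)
```

For long trajectories this naive time average converges to the true D, but for the short, single trajectories typical of SPT it becomes noisy, which is precisely what motivates the weighted functionals of the squared displacement studied by the authors.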

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Soil quality is an assessment tool that can facilitate the adoption of management practices promoting sustainable agricultural systems. The research began with a participatory diagnosis in 12 rural communities of the province of Las Tunas in 2009, in which producers identified the critical soil quality points of the region; these served as the starting point for selecting the physical, chemical and biological variables to be determined in five agricultural land-use systems (grove, natural pasture, cultivated pasture and two silvopastoral systems) in the La Veguita area, Las Tunas municipality. The grove system was used as a reference for the natural properties of the soil. The natural pasture is characterised by the development of low-productivity species, whereas the cultivated pasture is represented by Pennisetum purpureum cv. CUBA CT-115 and constitutes a contribution to biomass-bank technology for grazing during the dry season. The silvopastoral systems are represented by Leucaena leucocephala Lam. in strips and Panicum maximum cv. Likoni, and differ in their design, management and mineralogical properties. The main objective was to assess quality indicators of haplic Luvisols over granitoids in order to design and implement management technologies capable of increasing the agro-productive capacity of these soils. Principal component analysis yielded a minimum set of physical, chemical and biological indicators that provided useful information on soil processes and were integrated to determine a quality index. In the land-use system characterised by cultivated pasture (Pennisetum purpureum), a short-term trial was established in experimental plots comparing traditional tillage and non-inversion tillage, with and without compost application. 
Crop development and soil quality indicators were evaluated under both tillage systems. The results showed that, of the set of soil indicators studied, six were selected: cation exchange capacity, organic matter, exchangeable potassium, sand content, bulk density and earthworm biomass explained most of the variability and served as the basis for evaluating the quality of these soils. Reference threshold values of the quality indicators were established, which will allow the land-use and management systems of the region to be evaluated and monitored. Silvopastoral system 2 showed the highest soil quality index, taking the grove as reference on account of its natural condition. Silvopastoral management generally led to better productive results, but the soil characteristics, mainly the physical ones, should define its design and management. The cultivated pasture system with Pennisetum purpureum cv. CUBA CT-115 reached the greatest accumulation of organic carbon; however, its management limited the physical quality of the soil and the productive performance of the system. In general, the land-use systems do not by themselves guarantee a soil quality index, since the index is affected by soil properties and management practices. On the biological side, earthworms were the most numerous organisms, predominating in the silvopastoral and grove systems. The higher density and biomass values of oligochaetes and the greater diversity of other macrofauna individuals indicate that the presence of trees in grass pastures enhances and diversifies soil macroinvertebrate communities. Non-inversion tillage provides better physical soil quality, maintaining carbon and increasing the yields of Pennisetum purpureum cv. CUBA CT-115. 
Traditional tillage, based on ploughing and harrowing, affects organic matter content in the short term and maintains compact layers in the underlying horizon; it also adversely affects air and water flow and the root development of the pastures. Compost application favoured better productive results under both management technologies. The results obtained recommend the implementation of conservation management technologies and the application of organic materials that restore the nutrients required by the pastures; the continued use of the traditional tillage practices with soil inversion currently carried out is therefore not justified. ABSTRACT: Soil quality is an assessment tool that can facilitate the adoption of management practices promoting sustainable agricultural systems. The present investigation started with a participatory diagnosis in twelve rural communities of Las Tunas province in 2009, in which producers identified the critical soil quality points of the region; these served as a starting point for selecting the physical, chemical and biological variables to be determined in five agricultural land-use systems (grove, natural pasture, cultivated pasture and two silvopastoral systems) in the La Veguita area of Las Tunas municipality. The grove system was used as a reference for the natural soil properties. The natural pasture is distinguished by the development of low-productivity species, whereas the cultivated pasture is represented by Pennisetum purpureum cv. CUBA CT-115 and constitutes a contribution to biomass-bank technology for grazing during the dry season. The silvopastoral systems are represented by Leucaena leucocephala Lam. in strips and Panicum maximum cv. Likoni, which differ in their design, management and mineralogical properties. 
The main aim of this study was to assess quality indicators of haplic Luvisols over granitoids in order to design and implement management technologies that increase the agro-productive capacity of the soils. A minimal set of physical, chemical and biological indicators was obtained by principal component analysis; these provided useful information on soil processes and were integrated to determine a quality index. In the land-use system characterised by cultivated pasture (Pennisetum purpureum), a short-term trial was established in experimental plots in which traditional tillage and non-inversion tillage were compared, with and without compost application. Crop development and soil quality indicators were evaluated in both tillage systems. The results showed that, from the set of soil indicators studied, six were selected: cation exchange capacity, organic matter, exchangeable potassium, sand content, bulk density and earthworm biomass explained most of the variability and served as the basis for evaluating soil quality. Reference threshold values of the quality indicators were established for evaluating and monitoring the land-use and management systems of the region. Silvopastoral system 2 had the highest soil quality index, taking the grove system as reference on account of its natural condition. Silvopastoral management led to better productive results, but the soil characteristics, particularly the physical properties, should define its design and management. The cultivated pasture system with Pennisetum purpureum cv. CUBA CT-115 reached the greatest accumulation of organic carbon; however, its management limited the physical quality of the soil and the productive performance of the system. In general, the land-use systems alone do not guarantee a soil quality index, since the index is affected by soil properties and management practices. 
From the biological standpoint, earthworms were the most numerous organisms, predominating in the silvopastoral and grove systems. The higher values of oligochaete density and biomass and the greater diversity of other macrofauna individuals indicate that the presence of trees in grass pastures enhances and diversifies soil macroinvertebrate communities. The non-inversion tillage system provides better physical soil quality, maintaining the carbon content and increasing the yields of Pennisetum purpureum cv. CUBA CT-115. Traditional tillage, based on ploughing and harrowing, affects the organic matter content in the short term and maintains compact layers in the underlying horizon, adversely affecting air and water flow and the root development of the pastures. Compost application favoured the best productive results under both management technologies. The results obtained recommend the implementation of conservation management technologies and the application of organic materials that restore the nutrients required by the pastures, so the continued use of the traditional tillage practices with soil inversion currently carried out is not justified.
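The construction of a soil quality index from a minimum data set can be illustrated with a hedged sketch: each indicator is scored against a reference value taken from the grove (arboleda) baseline and the scores are combined with PCA-derived weights. All reference values, weights and sample figures below are hypothetical, not the thesis data.

```python
# Hedged sketch of a minimum-data-set soil quality index. Indicators are
# scored 0-1 against a grove baseline, then combined with weights that a
# real study would derive from principal component loadings.

REFERENCE = {  # baseline values from the undisturbed grove (hypothetical)
    "CEC_cmol_kg": 25.0, "organic_matter_pct": 4.0, "exch_K_cmol_kg": 0.6,
    "sand_pct": 55.0, "bulk_density_g_cm3": 1.3, "earthworm_biomass_g_m2": 40.0,
}
WEIGHTS = {  # hypothetical PCA-derived weights, normalised to sum to 1
    "CEC_cmol_kg": 0.25, "organic_matter_pct": 0.20, "exch_K_cmol_kg": 0.15,
    "sand_pct": 0.10, "bulk_density_g_cm3": 0.15, "earthworm_biomass_g_m2": 0.15,
}

def score(indicator, value):
    """'More is better' scoring, except bulk density and sand ('less is better')."""
    ref = REFERENCE[indicator]
    if indicator in ("bulk_density_g_cm3", "sand_pct"):
        return min(ref / value, 1.0)
    return min(value / ref, 1.0)

def soil_quality_index(sample):
    return sum(WEIGHTS[k] * score(k, v) for k, v in sample.items())

silvopastoral_2 = {  # invented measurements for one land-use system
    "CEC_cmol_kg": 23.0, "organic_matter_pct": 3.6, "exch_K_cmol_kg": 0.55,
    "sand_pct": 58.0, "bulk_density_g_cm3": 1.35, "earthworm_biomass_g_m2": 35.0,
}
sqi = soil_quality_index(silvopastoral_2)
```

By construction the grove baseline itself scores 1.0, so the index expresses each system's quality relative to the natural reference condition.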

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Phaseolus vulgaris L. (common bean) is a legume in high demand for human nutrition and a very important agricultural product. However, bean production is limited by environmental stresses such as drought. In Mexico, 85% of the bean harvest is produced in the spring-summer season, mainly in the semi-arid highland regions with an annual rainfall of between 250 and 400 mm. Despite the introduction of technology in the field, natural factors prevent farmers from reaching the desired yields. The National Institute for Forestry, Agriculture and Livestock Research (INIFAP), as a government research institute in Mexico, aims to improve strategic crops, one of them being P. vulgaris. Drought-related studies focus especially on the selection of tolerant genotypes, which are subjected to stress conditions while parameters such as yield and seed weight are monitored, together with indicators such as harvest index. These works have resulted in varieties with greater drought tolerance, such as Pinto Villa and Pinto Saltillo. In recent years, notable progress has been made in understanding the molecular basis of plant stress responses. Several studies have shown that plants under drought stress undergo changes in the expression of genes involved in signalling, regulation of transcription and translation, water transport and direct cell-protection functions. It has also been observed that water deficit can be caused by extreme temperatures and high salt concentrations, so that, at the molecular level, stress responses have points of specificity and points of cross-talk. Drought can generate secondary stresses, such as nutritional, oxidative and osmotic stress. 
However, many of the components involved in responses to water deficit still need to be identified and characterised; the characterisation of these genes will allow a better understanding of the biochemical and physiological mechanisms involved in stress tolerance. Currently, with the support of molecular biology, some genes that confer advantages for adaptation to unfavourable environments have been identified. The objective of this work is therefore to identify genetic markers associated with phenotypic traits, with emphasis on water-stress tolerance in P. vulgaris. Once markers associated with water stress have been established, it is feasible to consider their use for marker-assisted selection in bean lines or varieties of interest to breeders. A total of 282 F3:5 families derived from the cross between the cultivars Pinto Villa and Pinto Saltillo were evaluated. The families were sown under a 17x17 simple lattice design; the experiment was carried out in the spring-summer seasons of 2010 and 2011 and the autumn-winter season of 2010 at the Bajío Experimental Station of INIFAP, with two replicates for each moisture treatment (full irrigation and terminal drought). All genotypes were phenotyped (phenotypic variables) and genotyped using molecular markers. The statistical analyses were based on principal component analysis (Eigen Analysis Selection Index Method, ESIM), association between SNP markers and phenotype (SNPassoc package for R) and analysis of variance (ANOVA). The ESIM values showed that yield, days to flowering, days to physiological maturity and harvest index were outstanding variables under terminal drought, so it is suggested that they be taken into account in drought studies in P. vulgaris as monitors for evaluating resistance. 
Nine families stood out for their ESIM values (PV/PS6, 22, 131, 137, 149, 154, 201, 236 and 273) and also showed higher yield values than the parents. These genotypes are interesting candidates for studies aimed at identifying loci associated with the stress response, and as potential parents in the development of new bean varieties. The SNPassoc association analyses identified 83 significant SNPs (p<0.0003) associated with the phenotypic traits, yielding a total of 222 associations, in which the codominant genetic model predominates for the variables days to flowering, reproductive period and total biomass. Thirty-seven SNPs were assigned to different biological functions through functional annotation analysis, of which 12 SNPs (9, 18, 28, 39, 61, 69, 80, 106, 115, 128, 136 and 142) stand out for their association with the phenotype; their functional annotation indicates that they lie in genes related to drought tolerance, such as kinase activity, starch, carbohydrate and proline metabolism, and response to oxidative stress, as well as LEA genes and putative transcription factors. In the ANOVA analyses, 72 associations between SNPs and phenotypic variables were identified (F<3.94E-04). These 72 associations correspond to 30 SNPs and 7 phenotypic variables, among which weight of 100 seeds and reproductive period predominate. The traits yield, harvest index and days to physiological maturity showed associations with six SNPs (17, 34, 37, 50, 93 and 107), of which SNP37 and SNP107 were assigned the biological annotation of protein binding. On the other hand, SNP106 and SNP128, associated with the reproductive period, lie in genes with kinase activity and starch metabolic activity, respectively. For the AFLP markers, 271 associations were identified (F<2.34E-04). 
The associations involved 86 AFLPs and all the phenotypic variables evaluated, among which weight of 100 seeds, days to flowering and reproductive period predominate. Since a biological annotation cannot be determined for AFLPs, they are proposed as potential markers related to drought resistance in bean. The candidate AFLPs require further studies, such as sequencing of the respective alleles, identification of these sequences in the reference genome and their biological annotation, among other analyses; in this way the candidate markers could be established for validation for assisted selection. This work proposes both genotypes and genetic markers that must be validated before being used in the P. vulgaris breeding programme, with the objective of developing new drought-tolerant lines or varieties. ABSTRACT: Phaseolus vulgaris L. (common bean) is a legume in great demand for human consumption and an important agricultural product. However, common bean production is limited by environmental stresses such as drought. In Mexico, 85% of the common bean crop is produced in the spring-summer season, mainly in semi-arid highland regions with a rainfall of between 250 and 400 mm per year. In spite of improvements in crop technology, natural factors hamper obtaining optimal yields. The National Institute for Forestry, Agriculture and Livestock (INIFAP) is a government research institute in Mexico whose main objective is the genetic breeding of strategic crops, such as P. vulgaris L. Drought tolerance studies particularly focus on the selection of tolerant bean genotypes, which are subjected to stress conditions while parameters such as yield and seed weight are monitored, together with agronomic indicators such as harvest index. 
These works have led to cultivars with higher drought tolerance, such as Pinto Villa and Pinto Saltillo. Significant achievements have recently been made in understanding the molecular basis of plant stress responses. Several studies have shown that plants under drought stress present changes in the expression of genes related to cell signalling, transcriptional and translational regulation, water transport and cell protection. In addition, it has been observed that extreme temperatures and high salt concentrations can cause water deficit, so that, at the molecular level, stress responses have specific and crossover points. Drought can cause secondary stresses, such as nutritional, oxidative and osmotic stress. More of the components involved in the response to water deficit need to be identified; the characterization of these genes will allow a better understanding of the biochemical and physiological mechanisms involved in stress tolerance. Currently, with the support of molecular biology techniques, some genes that confer an advantage for crop adaptation to unfavourable environments have been identified. The objective of this study is to identify genetic markers associated with phenotypic traits, with emphasis on water-stress tolerance in P. vulgaris. The establishment of molecular markers linked to drought tolerance would make possible their use for marker-assisted selection in bean breeding programs. Two hundred and eighty-two F3:5 families derived from a cross between the drought-resistant cultivars Pinto Villa and Pinto Saltillo were evaluated. The families were sown under a 17x17 simple lattice design. The experiment was conducted in the spring-summer seasons of 2010 and 2011 and the autumn-winter season of 2010 at the Bajío Experimental Station of INIFAP, with two moisture treatments (full irrigation and terminal drought). All families were phenotyped and genotyped using molecular markers. 
Statistical analysis was based on principal component analysis (Eigen Analysis Selection Index Method, ESIM), association analysis between SNP markers and phenotype (SNPassoc package for R) and analysis of variance (ANOVA). The ESIM values showed that seed yield, days to flowering, days to physiological maturity and harvest index were outstanding traits under the terminal drought treatment, so they can be considered suitable parameters for drought-tolerance evaluation in P. vulgaris. Nine families outstanding for their ESIM values were identified (PV/PS6, 22, 131, 137, 149, 154, 201, 236 and 273); in addition, these families showed higher seed yield values than the parental cultivars. These families are promising candidates for studies focused on the identification of loci associated with the stress response, and as potential parental cultivars for the development of new varieties of common bean. In the SNPassoc analysis, 83 SNPs were found significantly associated (p<0.0003) with phenotypic traits, giving a total of 222 associations, most of which involved the traits days to flowering, reproductive period and total biomass under a codominant genetic model. The functional annotation analysis showed 37 SNPs with different biological functions, 12 of which (9, 18, 28, 39, 61, 69, 80, 106, 115, 128, 136 and 142) stand out for their association with the phenotype. The functional annotation suggested a connection with genes related to drought tolerance, such as kinase activity, starch, carbohydrate and proline metabolic processes and responses to oxidative stress, as well as LEA genes and putative transcription factors. In the ANOVA analysis, 72 associations between SNPs and phenotypic traits (F<3.94E-04) were identified. These associations corresponded to 30 SNP markers and seven phenotypic traits; weight of 100 seeds and reproductive period were the traits with the most associations. 
Seed yield, harvest index and days to physiological maturity were associated with six SNPs (17, 34, 37, 50, 93 and 107); SNP37 and SNP107 were identified as located in protein-binding genes. SNP106 and SNP128 were associated with the reproductive period and belonged to genes with kinase activity and genes related to the starch metabolic process, respectively. In the case of the AFLP markers, 271 associations (F<2.34E-04) were identified. The associations involved 86 AFLPs and all phenotypic traits, with weight of 100 seeds, days to flowering and reproductive period being the most frequently associated. Even though it is not possible to perform a functional annotation for AFLP markers, they are proposed as potential markers related to drought resistance in common bean. The candidate AFLPs require additional studies, such as sequencing of the respective alleles, identification of these sequences in the reference genome and gene annotation, before their use in marker-assisted selection. This work, although further validation is required, proposes both genotypes and genetic markers that could be used in breeding programs of P. vulgaris in order to develop new lines or cultivars with enhanced drought tolerance.
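The per-marker association test can be illustrated with a simplified sketch of a one-way ANOVA comparing a quantitative trait across the three genotype classes of a codominant SNP. The genotype labels and yield figures below are invented; in the study the test was run for every marker and trait, retaining only associations beyond the stated significance thresholds.

```python
# Hypothetical sketch: one-way ANOVA F statistic for a single SNP,
# comparing a trait (e.g. seed yield, g/plant) across genotype classes.

def anova_f(groups):
    """One-way ANOVA F = (between-group MS) / (within-group MS)."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented yields by SNP genotype: AA, Aa, aa (codominant model).
aa_hom = [18.2, 19.1, 17.8, 18.9, 18.4]
het    = [20.3, 21.0, 19.8, 20.6, 20.1]
bb_hom = [22.5, 23.1, 21.9, 22.8, 22.2]
f_stat = anova_f([aa_hom, het, bb_hom])
```

A large F statistic indicates that trait means differ by genotype far more than expected from within-class variation, the signal the genome-wide scan looks for marker by marker.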

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Emotion is generally argued to be an influence on the behavior of life systems, largely concerning flexibility and adaptivity. The way in which life systems act in response to particular situations in their environment has revealed the decisive and crucial importance of this feature in the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have undergone such an evolution that every day more of them are integrated into our daily life. They have grown in complexity, with a consequent increase in the demand for systems that ensure resilience, robustness, availability, security or safety, among others: all questions that raise quite fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous System project, a.k.a. the ASys Project. Short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning has shaped the general objective. Both the starting question and the general objective have underlain the study of emotion: its influence on system behavior, the key foundations that justify this feature in life systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. 
By assuming essential differences in structure, purpose and operation between life and artificial systems, the essential motivation has been to explore what emotion solves in nature and afterwards analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation, related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules attending to: (a) the range of processes related to environment affectation, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis, beyond the state of the art, of the most relevant theories of emotion and technical systems, in order to obtain the support required for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a Model-based Cognitive agent that embodies an inner agent, ESys, responsible for performing the emotional operation inside AGSys. The solution consists of multiple computational modules working in federation, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems: while AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment.
This inner environment is built on the basis of those relevances that might occur inside AGSys in its interaction with the external environment. This allows for high-quality separate reasoning concerning mission goals, defined in AGSys, and emotional goals, defined in ESys. In this way, a possible path is provided for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded by the work of MEP, a model of emotion perception conceived as an analogy to a well-known theory in emotion science. The operation of this model is described as a recursive-like process labeled the R-Loop, together with a system of emotional goals that are assumed to be individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect this perception induces on the situational state of the emotional goals. This knowledge enables a high-order system of information that provides the support for high-level reasoning. The extent to which this reasoning might be developed is only delineated here and is assumed as future work. This thesis has drawn on a broad range of fields of knowledge, which can be structured according to two main objectives: (a) the fields of psychology, cognitive science, neurology and the biological sciences, in order to obtain an understanding of the problem of emotional phenomena, and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC) or the models@runtime paradigm, among others, in order to obtain knowledge about tools for designing each part of the solution.
The final approach has been performed mainly on the basis of the entire acquired knowledge, and is described in terms of Artificial Intelligence, Model-Based Systems (MBS), and additional mathematical formalizations that provide precise understanding where required. This approach describes a reference model for feeding systems back with valuable meaning, allowing for reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations concerning the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be assumed as artificial emotion.
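The architecture the abstract describes, an outer agent AGSys embodying an inner agent ESys that appraises relevances against emotional goals and feeds an emotion-like signal back, can be caricatured in a few lines of code. This is a minimal sketch of the general pattern only; all class names, method names and the appraisal rule are hypothetical and are not the thesis's actual design.

```python
# Hypothetical sketch of a mutual feedback loop between an outer agent
# (AGSys-like) and an inner appraisal agent (ESys-like). Names and the
# appraisal rule are illustrative, not taken from the ASys-Project.
class InnerAgent:
    """Inner agent: appraises relevances against emotional goals."""
    def __init__(self, goals):
        self.goals = goals  # e.g. {"integrity": 1.0} target levels

    def appraise(self, relevances):
        # Emotion-like signal: remaining margin of each goal after the
        # perceived relevance is subtracted from its target level.
        return {g: target - relevances.get(g, 0.0)
                for g, target in self.goals.items()}

class OuterAgent:
    """Outer agent embodying the inner agent in a feedback loop."""
    def __init__(self, inner):
        self.inner = inner

    def step(self, percept):
        # (a) environment affectation: map the percept to inner relevances
        relevances = {"integrity": percept.get("threat", 0.0)}
        # (b) emotion-like appraisal by the inner agent, then reassess the
        # mission goal under that emotional state (crudely stubbed here).
        emotion = self.inner.appraise(relevances)
        return "avoid" if emotion["integrity"] < 1.0 else "pursue"

agent = OuterAgent(InnerAgent({"integrity": 1.0}))
action = agent.step({"threat": 0.5})  # a threatening percept
```

The point of the sketch is only the separation of concerns: the outer agent reasons about the external environment, while the inner agent reasons in a conceptualized inner environment of relevances, and the appraisal result feeds back into the outer agent's choice of action.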