970 results for Ground control point
Abstract:
This paper reports the effects of applying two types of poultry manure (sawdust bed and straw bed) to agricultural land on soil organisms (plants, invertebrates and microorganisms). The test was carried out in a terrestrial microcosm, the Multi-Species Soil System (MS3) developed at INIA. There was no difference in germination for any of the three plant species considered in the study. Biomass increased in wheat (Triticum aestivum) grown in soil treated with both kinds of poultry manure. Oilseed rape (Brassica rapa) was not affected, and for vetch (Vicia sativa) only the straw poultry manure produced a significant difference. Regarding length, only Vicia sativa was affected, showing a reduction when exposed to straw poultry manure. When the effect on invertebrates was studied, we observed a reduction in the number of worms during the test, which was highest in the control soil (13.7%) and lower in the soil with sawdust poultry manure (6.7%), whereas in the soil with straw poultry manure there was no reduction. Worm biomass was also affected: at the end of the test the reduction was about 48% in the control soil, compared with 41% and 22% in the soils with sawdust and straw poultry manure, respectively. Finally, regarding the effects on microorganisms, the enzymatic activities dehydrogenase (DH) and phosphatase and the basal respiration rate increased at the beginning of the test, and the differences were statistically significant compared with the control values. During the test all these parameters decreased (except DH activity), but they always remained higher than in the control soil. It can therefore be concluded that the addition of poultry manure improved the fertility and condition of the soil.
Abstract:
The current situation of the Spanish energy market and the relentless increase in tariffs imposed by the electricity companies are encouraging the search for alternative energy sources that allow consumers to obtain electricity without paying such high costs. To meet this need, photovoltaic energy, and in particular self-consumption with grid injection or net metering, is becoming increasingly important in the energy sector; it allows individuals not only to pay less for electricity but also to obtain income from the energy generated in their own homes. However, the penetration of this technology into the Spanish electricity grid faces an obstacle: the distrust of the grid operator, since photovoltaics is an intermittent energy source that can introduce instabilities into the system at high penetration levels. It is therefore necessary to gain the confidence of the utilities by making it a predictable energy source, one that supplies power to the grid when requested and that participates in the frequency regulation of the electrical system. To this end, the Photovoltaic Systems research group of the IES at the UPM is carrying out a research project called PV CROPS, funded by the European Commission, whose objective is to develop these management strategies. In this context, the objective of this Senior Thesis is to implement a Test Bench with Battery Integration in Grid-Connected PV Systems that makes it possible to develop, test and validate such strategies. Taking advantage of the availability of the Digital Home installed at the EUITT of the UPM, we set up the test bench in an adjoining laboratory so that this Home can be used as a real case of household energy consumption. The test bench will provide information on the energy generated by the photovoltaic installation and on the actual consumption of the adjacent "house", in order to subsequently develop electricity management strategies. The Test Bench consists of three interconnected main blocks. The Data Acquisition and Communication Subsystem is responsible for monitoring the energy elements and sending the collected information to the Control Subsystem; it is formed by single-phase AC and DC power analysers and a gateway that converts between the Ethernet physical medium and RS485. The Control Subsystem is the observation and collection point for all the information coming from the energy elements and is where the energy management strategies will be created and implemented; it is composed of a PXIe system, an embedded controller in an industrial-grade chassis, and a host PC consisting of a workstation and three monitors. The Energy Subsystem is formed by the elements that generate, control or consume electrical energy in the test bench: a PV pergola, an inverter, a bidirectional inverter and a battery bank. As a final step, a practical application example was carried out, which showed that the test bench is ready for use and fully operational for monitoring photovoltaic generation and energy consumption.
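As an illustration of how the Data Acquisition and Communication Subsystem described above could be exercised, the following minimal Python sketch polls a set of power analysers and logs PV generation against household consumption. It is a hypothetical example: the meter names, register addresses and the read_power() stub stand in for the actual Modbus traffic through the Ethernet/RS485 gateway and are not the PV CROPS test-bench code.

```python
"""Minimal monitoring sketch for a PV-plus-battery test bench.

Hypothetical illustration only: device names, register addresses and the
read_power() stub are assumptions, not the actual test-bench software.
"""
import csv
import random
import time
from dataclasses import dataclass


@dataclass
class Meter:
    name: str      # e.g. "pv_inverter_ac" or "house_load"
    address: int   # hypothetical register behind the Ethernet/RS485 gateway


def read_power(meter: Meter) -> float:
    """Stand-in for a real analyser read through the gateway (returns watts)."""
    return random.uniform(0.0, 3000.0)  # simulated value for the sketch


def poll(meters, period_s=5.0, samples=3, path="bench_log.csv"):
    """Poll every meter periodically and append the net balance to a CSV log."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + [m.name for m in meters] + ["net_w"])
        for _ in range(samples):
            readings = [read_power(m) for m in meters]
            net = readings[0] - readings[1]  # generation minus consumption
            writer.writerow([time.time()] + readings + [net])
            time.sleep(period_s)


if __name__ == "__main__":
    poll([Meter("pv_inverter_ac", 0x0010), Meter("house_load", 0x0020)])
```

A real deployment would replace read_power() with reads from the single-phase and DC analysers and feed the resulting log into the management strategies running on the PXIe controller.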
Abstract:
The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation and smart cities. However, various kinds of unexpected disturbances may cause node failures in the IoT, for example battery depletion, software or hardware malfunctions and malicious attacks, so the IoT can be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can be further broken down into "self-*" properties such as self-adaptation, self-healing, self-configuration and self-optimization. One consequence of node failure is that some nodes may become disconnected from the others, so that they are no longer capable of providing continuous services to other nodes, networks and applications. In this sense, the main objective of this dissertation is the IoT connectivity problem. A network is regarded as connected if any pair of distinct nodes can communicate with each other, either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented on Wireless Sensor Networks (WSNs), which is a challenging task for two reasons. On the one hand, unlike conventional networked devices, nodes in the IoT are more likely to become disconnected from each other because they are deployed in hostile or isolated environments. On the other hand, the nodes are resource-constrained in terms of processing capability, storage and battery capacity, which requires the design of the resilience management to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making it resilient against node failures from the point of view of network topology control. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis addresses several aspects of network connectivity, including the creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems that respond to node failures, the optimization of the control system parameters, an implementation following a software engineering approach, and an evaluation in a real application. Through mathematical analysis, the thesis justifies the relation between the "node degree" (the number of neighbours of a node) and network connectivity, and proves the effectiveness of various types of controllers that adjust the transmission power of the IoT nodes in response to node failures, taking energy consumption into account as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms; the simulation results show that the proposed approach tolerates more random node failures and saves more energy than those algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the control parameters of large dynamic networks to be optimized. With respect to the implementation in a real system, the proposals are integrated into the OSGi (Open Services Gateway initiative) programming model in order to create a self-adaptive middleware that improves resilience management, in particular the runtime reconfiguration of software components when a failure has occurred. The outcomes of this thesis contribute to theoretical research on, and practical applications of, resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The main contributions of this thesis can be summarized as follows: (1) Properties related to network connectivity are analysed mathematically, for instance how the probability of network connectivity varies with the communication range of the nodes, and what the minimum number of nodes is that must be added to a disconnected system to re-connect it. (2) Fuzzy-logic control systems are proposed to attain the desired node degree and, in turn, maintain full network connectivity when the network is subject to node failures; different types of fuzzy-logic controllers are evaluated by simulation and the results are compared with other representative algorithms. (3) A simpler and more applicable approach, the two-loop control system, is investigated further, and its control parameters are optimized using heuristic algorithms such as the Cross-Entropy method (CE), Particle Swarm Optimization (PSO) and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the work is implemented and validated in a real-world application by combining self-adaptive software techniques, such as those of a Service-Oriented Architecture (SOA), with the control algorithms presented in this thesis.
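To make the topology-control idea above concrete, here is a small, self-contained Python sketch in which every node runs its own PID loop to hold a target node degree by adjusting its transmission range while random nodes fail. The gains, target degree, range limits and failure pattern are illustrative assumptions; this is not the tuned fuzzy-logic or two-loop controller evaluated in the thesis.

```python
"""Sketch of distributed node-degree control: each node adjusts its own
transmission range with a PID loop so its neighbour count tracks a target
degree, even as random nodes fail. All parameters are illustrative."""
import random


class Node:
    def __init__(self, x, y, radius=0.15):
        self.x, self.y, self.radius = x, y, radius
        self.alive = True
        self.integral = 0.0
        self.prev_error = 0.0

    def degree(self, nodes):
        """Number of live nodes within this node's transmission range."""
        return sum(1 for n in nodes
                   if n is not self and n.alive
                   and (n.x - self.x) ** 2 + (n.y - self.y) ** 2 <= self.radius ** 2)

    def update(self, nodes, target=6, kp=0.01, ki=0.001, kd=0.005):
        error = target - self.degree(nodes)
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # A larger range costs more energy, so grow it only when degree is low.
        self.radius += kp * error + ki * self.integral + kd * derivative
        self.radius = min(max(self.radius, 0.05), 0.5)


random.seed(1)
nodes = [Node(random.random(), random.random()) for _ in range(100)]
for step in range(30):
    if step % 5 == 0 and step > 0:  # inject random node failures
        random.choice([n for n in nodes if n.alive]).alive = False
    for n in nodes:
        if n.alive:
            n.update(nodes)

alive = [n for n in nodes if n.alive]
print("mean degree after failures:", sum(n.degree(nodes) for n in alive) / len(alive))
```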
Abstract:
Autonomous landing is a challenging and important technology for both military and civilian applications of Unmanned Aerial Vehicles (UAVs). In this paper, we present a novel online adaptive visual tracking algorithm that allows UAVs to land autonomously on an arbitrary field (usable as a helipad) at real-time frame rates of more than twenty frames per second. The integration of a low-dimensional subspace representation method, an online incremental learning approach and a hierarchical tracking strategy allows the autolanding task to overcome the problems generated by challenging situations such as significant appearance change, varying ambient illumination, partial helipad occlusion, rapid pose variation, onboard mechanical vibration (no video stabilization), low computational capacity and delayed information communication between the UAV and the Ground Control Station (GCS). The tracking performance of the presented algorithm is evaluated on aerial images from real autolanding flights using a manually labelled ground-truth database. The evaluation results show that the new algorithm is highly robust in tracking the helipad and accurate enough for closing the vision-based control loop.
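As a rough illustration of the subspace-based appearance modelling mentioned above, the sketch below keeps a low-dimensional PCA basis of recent target patches and selects the candidate window with the lowest reconstruction error. It is a toy stand-in, not the paper's algorithm: the periodic SVD over a patch buffer approximates online incremental learning, and the hierarchical strategy, patch size and basis rank are simplified or assumed.

```python
"""Toy subspace appearance model for tracking: maintain a PCA basis of
recent target patches and score candidates by reconstruction error."""
import numpy as np


class SubspaceModel:
    def __init__(self, rank=8, buffer_size=50):
        self.rank, self.buffer_size = rank, buffer_size
        self.patches, self.mean, self.basis = [], None, None

    def update(self, patch):
        """Add a new target patch and refresh the low-dimensional basis."""
        self.patches.append(patch.ravel().astype(float))
        self.patches = self.patches[-self.buffer_size:]
        data = np.array(self.patches)
        self.mean = data.mean(axis=0)
        # SVD of the centred patch buffer gives the principal subspace.
        _, _, vt = np.linalg.svd(data - self.mean, full_matrices=False)
        self.basis = vt[: self.rank]

    def error(self, patch):
        """Reconstruction error of a candidate patch in the subspace."""
        v = patch.ravel().astype(float) - self.mean
        coeffs = self.basis @ v
        return float(np.linalg.norm(v - self.basis.T @ coeffs))


def best_candidate(model, candidates):
    """Index of the candidate window best explained by the appearance model."""
    return int(np.argmin([model.error(c) for c in candidates]))


rng = np.random.default_rng(0)
model = SubspaceModel()
for _ in range(20):                      # simulated 16x16 target patches
    model.update(rng.normal(size=(16, 16)))
cands = [rng.normal(size=(16, 16)) for _ in range(5)]
print("selected candidate:", best_candidate(model, cands))
```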
Abstract:
The risk associated with the failure of a water supply tank in an urban area (such as the failure that occurred in the Autonomous City of Melilla in November 1997) and the damage it can cause call into question the safety of this type of infrastructure which, because of the needs of the water supply service, is usually built at high points close to the population centres it serves. However, the low probability of a failure tends to lower the alert levels associated with these tanks: emphasis is placed on improving construction methods, without developing methodologies that, as in the case of dams and irrigation ponds, establish the need to classify the potential risk of these infrastructures according to their location and to study the possible construction of measures to mitigate a failure. Moreover, to establish the damage that such a failure could cause, two-dimensional modelling of the breach wave becomes essential, since the affected urban fabric cannot be represented by one-dimensional simulations, given that there is no channel offering a preferential path to the water. This kind of simulation requires a financial investment that is not always available in the construction of small and medium-sized tanks. The aim of this doctoral thesis is to design a simplified methodology which, by means of charts and using the main variables of the phenomenon, can estimate a value for the risk associated with a possible failure, and which can serve as a guide to establish whether a tank (existing or newly built) requires a detailed model to estimate the risk and whether it is advisable to implement some measure to mitigate the energy released by such a failure. As a preliminary step, it was established that the variables involved in defining the risk associated with the failure are the water depth and the maximum velocity at each point liable to suffer damage (mainly damage associated with the toppling and dragging of people). The equations governing the failure of the tank and the transmission of the breach wave through the adjacent urban fabric were therefore studied, together with the possible methods for solving them and the computational development required for a first approach to the results. In order to analyse the boundary conditions that influence the resulting velocity and depth values, a set of simplified scenarios was designed; after detailed modelling and a dimensionless analysis, it was found that the variables that influence the maximum depth and velocity at each point are the water level in the tank, the slope of the terrain, its roughness, the shape of the terrain (in terms of concavity) and the distance from the point of study to the tank. Once the influential variables had been defined, a second set of simulations of simplified scenarios was carried out, which served to discuss and develop the curves presented as the main product of the simplified methodology. This methodology, which requires only a few simple calculations, gives a first value of depth and velocity from the maximum service water level of the tank whose risk is to be evaluated. Subsequently, using the proposed charts, correction coefficients are obtained for the roughness and mean slope of the terrain under evaluation, as well as for its degree of concavity (through the transverse slope). With the values obtained from these curves, the depth and velocity at the point of study are determined and, by applying the proposed formulation, an estimate of the risk associated with the failure of the infrastructure is obtained. As a corollary to this methodology, a second series of charts is proposed to evaluate, also in a simplified manner, the risk reduction that would be achieved by building a mitigating measure such as a dike or a low perimeter wall around the tank. This method of evaluating possible mitigating measures provides a guide for analysing whether the risk can be reduced by building such elements, or whether another site should be sought which, although perhaps less favourable from the point of view of operating the tank, presents a lower risk in the event of failure. As a complement to the proposed simplified methodology, and in addition to calibrating it with the data obtained after the failure of the Melilla water tank, a series of worked examples has been produced which, besides serving as a user guide, makes it possible to analyse the difference between the results obtained from a detailed two-dimensional simulation of each case and those from the simplified method applied to the same cases.
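The workflow of the simplified methodology (a base depth/velocity estimate from the tank water level, chart-based corrections for roughness, slope and concavity, then a risk estimate from depth and velocity) can be sketched in code as below. Every curve, coefficient and threshold here is a placeholder invented for illustration; the thesis' actual charts, formulation and hazard criterion are not reproduced.

```python
"""Structural sketch of the simplified workflow. All curve shapes,
coefficients and thresholds below are placeholders, not the thesis' charts."""


def base_estimate(water_level_m: float, distance_m: float):
    """Hypothetical base curves: depth and velocity decaying with distance."""
    depth = 0.6 * water_level_m / (1.0 + 0.02 * distance_m)
    velocity = 2.5 * water_level_m ** 0.5 / (1.0 + 0.01 * distance_m)
    return depth, velocity


def correction(roughness_n: float, slope: float, cross_slope: float) -> float:
    """Hypothetical correction factor standing in for the proposed charts."""
    return (0.035 / roughness_n) ** 0.3 * (1.0 + slope) * (1.0 + 0.5 * cross_slope)


def hazard(depth_m: float, velocity_ms: float) -> str:
    """Generic depth*velocity indicator with illustrative thresholds only."""
    hv = depth_m * velocity_ms
    return "high" if hv > 0.5 else "moderate" if hv > 0.25 else "low"


h0, v0 = base_estimate(water_level_m=5.0, distance_m=120.0)
k = correction(roughness_n=0.05, slope=0.04, cross_slope=0.02)
print(f"depth={h0 * k:.2f} m, velocity={v0 * k:.2f} m/s, hazard={hazard(h0 * k, v0 * k)}")
```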
Abstract:
Our recent studies have shown that deregulated expression of R2, the rate-limiting component of ribonucleotide reductase, enhances transformation and malignant potential by cooperating with activated oncogenes. We now demonstrate that the R1 component of ribonucleotide reductase has tumor-suppressing activity. Stable expression of a biologically active ectopic R1 in ras-transformed mouse fibroblast 10T½ cell lines, with or without R2 overexpression, led to significantly reduced colony-forming efficiency in soft agar. The decreased anchorage independence was accompanied by markedly suppressed malignant potential in vivo. In three ras-transformed cell lines, R1 overexpression resulted in abrogation or marked suppression of tumorigenicity. In addition, the ability to form lung metastases by cells overexpressing R1 was reduced by >85%. Metastasis suppressing activity also was observed in the highly malignant mouse 10T½ derived RMP-6 cell line, which was transformed by a combination of oncogenic ras, myc, and mutant p53. Furthermore, in support of the above observations with the R1 overexpressing cells, NIH 3T3 cells cotransfected with an R1 antisense sequence and oncogenic ras showed significantly increased anchorage independence as compared with control ras-transfected cells. Finally, characteristics of reduced malignant potential also were demonstrated with R1 overexpressing human colon carcinoma cells. Taken together, these results indicate that the two components of ribonucleotide reductase both are unique malignancy determinants playing opposing roles in its regulation, that there is a novel control point important in mechanisms of malignancy, which involves a balance in the levels of R1 and R2 expression, and that alterations in this balance can significantly modify transformation, tumorigenicity, and metastatic potential.
Abstract:
CIITA is a master transactivator of the major histocompatibility complex class II genes, which are involved in antigen presentation. Defects in CIITA result in fatal immunodeficiencies. CIITA activation is also the control point for the induction of major histocompatibility complex class II and associated genes by interferon-γ, but CIITA does not bind directly to DNA. Expression of CIITA in G3A cells, which lack endogenous CIITA, followed by in vivo genomic footprinting, now reveals that CIITA is required for the assembly of transcription factor complexes on the promoters of this gene family, including DRA, Ii, and DMB. CIITA-dependent promoter assembly occurs in interferon-γ-inducible cell types, but not in B lymphocytes. Dissection of the CIITA protein indicates that transactivation and promoter loading are inseparable and reveal a requirement for a GTP binding motif. These findings suggest that CIITA may be a new class of transactivator.
Abstract:
1/2-meter resolution 1:5,000 orthophoto image of the Boston region from April 2001. This datalayer is a subset (covering only the Boston region) of the Massachusetts statewide orthophoto image series available from MassGIS. It consists of 23 orthophoto quads mosaicked together (MassGIS orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 233906, 233910, 237890, 237894, 237898, 237902, 237906, 237910, 241890, 241894, 241898, 241902, 245898, 245902). These medium resolution true color images are considered the new "basemap" for the Commonwealth by MassGIS and the Executive Office of Environmental Affairs (EOEA). MassGIS/EOEA and the Massachusetts Highway Department jointly funded the project. The photography for the mainland was captured in April 2001 when deciduous trees were mostly bare and the ground was generally free of snow. The geographic extent of this dataset is the same as that of the MassGIS dataset: Boston, Massachusetts Region LIDAR First Return Elevation Data, 2002 [see cross references].
Abstract:
Contract no DA-44-009 Eng. 2435, Department of the Army Project no. 8-35-11-101.
Abstract:
Structure from Motion (SfM) is a new form of photogrammetry that automates the rendering of georeferenced 3D models of objects using digital photographs and independently surveyed Ground Control Points (GCPs). This project seeks to quantify the error found in Digital Elevation Models (DEMs) produced using SfM. I modeled a rockslide at the Cadman Quarry (Monroe, Washington) because the surface is vegetation-free, which is ideal for SfM and Terrestrial LiDAR Scanner (TLS) surveys. By using SfM, TLS, and GPS positioning at the same time, I attempted to find the deviation of the SfM model from the TLS model and the GPS points. Using the deviation, I found the Root-Mean-Square Error (RMSE) between the SfM DEM and the GPS positions. The RMSE of the SfM model when compared to surveyed GPS points is 17 cm. I propagated the uncertainty of the GPS points with the RMSE of the SfM model to find the uncertainty of the SfM model compared to the NAD 1984 datum. The uncertainty of the SfM model compared to the NAD 1984 datum is 27 cm. This study did not produce a model from the TLS that had sufficient resolution on horizontal surfaces to compare to surveyed GPS points.
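For reference, the two error measures quoted above (the 17 cm RMSE against GPS check points and the 27 cm uncertainty relative to the datum) can be computed as in the following sketch, assuming the GPS uncertainty is propagated in quadrature with the model RMSE; the elevation samples and the assumed GPS sigma are made up for illustration.

```python
"""RMSE of model elevations against check points, plus quadrature
propagation of the check points' own uncertainty (illustrative data)."""
import math


def rmse(model_values, reference_values):
    """Root-mean-square error between model and check-point values (same units)."""
    residuals = [m - r for m, r in zip(model_values, reference_values)]
    return math.sqrt(sum(d * d for d in residuals) / len(residuals))


def combined_uncertainty(model_rmse, reference_uncertainty):
    """Propagate independent errors in quadrature: sqrt(a^2 + b^2)."""
    return math.hypot(model_rmse, reference_uncertainty)


# Illustrative elevations in metres (SfM DEM sampled at GPS check points).
sfm = [101.12, 98.40, 105.77, 99.93]
gps = [101.00, 98.55, 105.60, 100.05]
e = rmse(sfm, gps)
print(f"RMSE vs GPS: {e:.3f} m")
print(f"Uncertainty vs datum (assumed GPS sigma 0.21 m): {combined_uncertainty(e, 0.21):.3f} m")
```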
Abstract:
Generation of neoepitopes on apolipoprotein B within oxidised low-density lipoprotein (LDL) is important in the unregulated uptake of LDL by monocytic scavenger receptors (CD36, SR-AI, LOX-1). Freshly isolated LDL was oxidised by peroxyl radicals generated from the thermal decomposition of an aqueous azo-compound (AAPH). We show that carbonyl groups form early on the protein component, with protein oxidation detected after 90 min; this is associated with an increased propensity for LDL uptake by U937 monocytes. Three classes of antioxidants (quercetin, dehydroepiandrosterone (DHEA) and ascorbic acid) were examined for their capacity to inhibit AAPH-induced protein oxidation (protein carbonyls, Δ electrophoretic mobility and LDL uptake by U937 monocytes). CD36 expression was assessed by flow cytometry and was unaltered by oxidised LDL uptake. All three classes were effective antioxidants: quercetin (P<0.01), ascorbic acid (P<0.01), DHEA (P<0.05). As the LDL protein is the control point for LDL metabolism, the degree of oxidation and the protection afforded by antioxidants are likely to be of great importance for the (patho)physiological uptake of LDL by monocytes. © 2003 Elsevier B.V. All rights reserved.
Abstract:
This thesis begins by providing a review of techniques for interpreting the thermal response at the earth's surface acquired using remote sensing technology. Historic limitations in the precision with which imagery acquired from airborne platforms can be geometrically corrected and co-registered have meant that relatively little work has been carried out on the diurnal variation of surface temperature over wide regions. Although emerging remote sensing systems provide the potential to register temporal image data to satisfactory levels of accuracy, this technology is still not widely available and does not address the issue of historic data sets, which cannot be rectified using conventional parametric approaches. To overcome these problems, the second part of this thesis describes the development of an alternative approach for rectifying airborne line-scanned imagery. The underlying assumption that scan lines within the imagery are straight greatly reduces the number of ground control points required to describe the image geometry. Furthermore, the use of pattern-matching procedures to identify geometric disparities between the raw line-scanned imagery and corresponding aerial photography enables the correction procedure to be almost fully automated. By reconstructing the raw image data on a truly line-by-line basis, it is possible to register the airborne line-scanned imagery to the aerial photography with an average accuracy of better than one pixel. Provided corresponding aerial photography is available, this approach can be applied in the absence of platform altitude information, allowing multi-temporal data sets to be corrected and registered.
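A toy version of the line-by-line correction idea is sketched below: each (assumed straight) scan line is matched against the corresponding row of a reference image by normalised cross-correlation and shifted accordingly. This is only a one-dimensional caricature of the pattern-matching rectification described in the thesis; real scan-line geometry, resampling and the use of aerial photography as the reference involve considerably more.

```python
"""Per-scanline matching and shifting against a reference image
(synthetic data; window sizes and distortion model are assumptions)."""
import numpy as np


def best_offset(line, ref_row, max_shift=10):
    """Return the shift (in pixels) that best aligns `line` with `ref_row`."""
    line = (line - line.mean()) / (line.std() + 1e-9)
    ref = (ref_row - ref_row.mean()) / (ref_row.std() + 1e-9)
    scores = []
    for s in range(-max_shift, max_shift + 1):
        scores.append((np.dot(np.roll(line, s), ref), s))
    return max(scores)[1]


def rectify(image, reference, max_shift=10):
    """Shift every scan line of `image` onto the matching row of `reference`."""
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        out[i] = np.roll(image[i], best_offset(image[i], reference[i], max_shift))
    return out


rng = np.random.default_rng(0)
reference = rng.normal(size=(50, 200))
# Simulate line-scanner distortion as a random horizontal offset per line.
distorted = np.vstack([np.roll(reference[i], rng.integers(-5, 6)) for i in range(50)])
corrected = rectify(distorted, reference)
print("mean abs error after rectification:", float(np.abs(corrected - reference).mean()))
```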
Abstract:
This research develops a low-cost remote sensing system for use in agricultural applications. The important features of the system are that it monitors the near infrared and incorporates position- and attitude-measuring equipment, allowing geo-rectified images to be produced without the use of ground control points. The equipment is designed to be hand-held and hence requires no structural modification to the aircraft. The portable remote sensing system consists of an accelerometer-based inertial measurement unit (IMU), a low-cost GPS device and a small-format false-colour composite digital camera. The total cost of producing such a system is below GBP 3000, far cheaper than equivalent existing systems. The design of the portable remote sensing device has eliminated boresight misalignment errors from the direct geo-referencing process. A new processing technique has been introduced for the data obtained from these low-cost devices, and it is found that using this technique the image can be matched (overlaid) onto Ordnance Survey Master Maps at an accuracy compatible with precision agriculture requirements. The direct geo-referencing has also been improved by introducing an algorithm capable of correcting oblique images directly. This algorithm alters the pixel values, so it is advised that image analysis be performed before image geo-rectification. A drawback of this research is that the low-cost GPS device experienced bad checksum errors, which resulted in missing data. The Wide Area Augmentation System (WAAS) correction could not be employed because the satellites could not be locked onto whilst flying. The best GPS data were obtained from the Garmin eTrex instruments (15 m kinematic and 2 m static), which have a high-sensitivity receiver with good lock-on capability. The limitation of this GPS device is its inability to effectively receive the P-code signal, which is needed to obtain the best accuracy when undertaking differential GPS processing. Pairing the L1 carrier phase with the received C/A-code pseudorange, in order to determine the image coordinates by the differential technique, is still under investigation. To improve the position accuracy, it is recommended that a GPS base station be established near the survey area, instead of using a permanent GPS base station established by the Ordnance Survey.
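The core of direct geo-referencing (projecting a pixel to the ground using only the GPS position and IMU attitude, with no ground control points) can be illustrated with the following sketch, which intersects a pixel viewing ray with a flat ground plane. The camera model, angle convention and example values are assumptions; the system described above additionally handles boresight alignment, oblique-image correction and datum issues.

```python
"""Minimal direct geo-referencing sketch: pixel ray intersected with a flat
ground plane using an assumed pinhole camera and Z-Y-X attitude convention."""
import numpy as np


def rotation(roll, pitch, yaw):
    """Body-to-world rotation from roll/pitch/yaw in radians (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx


def pixel_to_ground(px, py, cam_pos, attitude, focal_px, cx, cy, ground_z=0.0):
    """Intersect the pixel's viewing ray with the plane z = ground_z."""
    ray_cam = np.array([px - cx, py - cy, -focal_px])  # camera looks down -z
    ray_world = rotation(*attitude) @ ray_cam
    t = (ground_z - cam_pos[2]) / ray_world[2]
    return cam_pos + t * ray_world


cam_pos = np.array([451200.0, 5411800.0, 300.0])  # easting, northing, altitude (m)
attitude = np.radians([2.0, -3.0, 45.0])          # roll, pitch, yaw (deg -> rad)
ground = pixel_to_ground(640, 512, cam_pos, attitude, focal_px=1500.0, cx=640, cy=512)
print("ground coordinates (E, N, h):", np.round(ground, 2))
```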
Abstract:
The paper in hand presents a mobile testbed, namely the Heavy Duty Planetary Rover (HDPR), that was designed and constructed at the Automation and Robotics Laboratories (ARL) of the European Space Agency to fulfil the lab's internal needs in the context of long-range rover exploration and to provide the means to perform in-situ testing of novel algorithms. We designed a rover that (a) is able to reliably perform long-range traverses, and (b) carries an abundance of sensors (both current rover technology and more futuristic ones). The testbed includes all the additional hardware and software (i.e. ground control station, UAV, networking, mobile power) needed to allow prompt deployment in the field. The paper describes the system and reports on our experiences during the first experiments with the testbed.