25 results for DROPS
at Universidad Politécnica de Madrid
Abstract:
We study the evolution of a viscous fluid drop rotating about a fixed axis at constant angular velocity $\Omega$ or constant angular momentum $L$, surrounded by another viscous fluid. The problem is considered in the limit of large Ekman number and small Reynolds number. The analysis is carried out by combining asymptotic analysis and full numerical simulation by means of the boundary element method. We pay special attention to the stability/instability of equilibrium shapes and the possible formation of singularities representing a change in the topology of the fluid domain. When the evolution is at constant $\Omega$, depending on its value, drops can take the form of a flat film whose thickness goes to zero in finite time or of an elongated filament that extends indefinitely. When evolution takes place at constant $L$ and axial symmetry is imposed, thin films surrounded by a toroidal rim can develop, but the film thickness does not vanish in finite time. When axial symmetry is not imposed and $L$ is sufficiently large, drops break axial symmetry and, depending on the value of $L$, reach an equilibrium configuration with a 2-fold symmetry or break up into several drops with a 2- or 3-fold symmetry. The mechanism of breakup is also described.
Abstract:
In this contribution we simulate numerically the evolution of a viscous fluid drop rotating about a fixed axis at constant angular velocity $\Omega$ or constant angular momentum $L$, surrounded by another viscous fluid. The problem is considered in the limit of large Ekman number and small Reynolds number. In the lecture we will describe the numerical method we have used to solve the PDE system that describes the evolution of the drop (3D boundary element method). We will also present the results we have obtained, paying special attention to the stability/instability of the equilibrium shapes.
Abstract:
The current I to a cylindrical Langmuir probe with a bias $\Phi_p$ satisfying $\beta \equiv e\Phi_p/m_e c^2 \sim O(1)$ is discussed. The probe is considered at rest in an unmagnetized plasma composed of electrons and ions with temperatures $kT_e \sim kT_i \ll m_e c^2$. For small enough radius, the probe collects the relativistic orbital-motion-limited (OML) current $I_{\mathrm{OML}}$, which is shown to be larger than the non-relativistic result; the OML current is proportional to $\beta^{1/2}$ and $\beta^{3/2}$ in the limits $\beta \ll 1$ and $\beta \gg 1$, respectively. Unlike the non-relativistic case, the electron density can exceed the unperturbed density value. An asymptotic theory allowed computation of the maximum probe radius for OML current collection, of the sheath radius for probe radii well below that maximum, and of how the ratio $I/I_{\mathrm{OML}}$ drops below unity when the maximum radius is exceeded. A numerical algorithm that solves the Vlasov-Poisson system was implemented, and density and potential profiles are presented. The results and their implications for a possible mission to Jupiter with electrodynamic bare tethers are discussed.
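For reference, the bias parameter and the two limiting behaviours quoted above can be summarized as follows (the proportionality constants are not given in the abstract):

$$\beta \equiv \frac{e\Phi_p}{m_e c^2}, \qquad I_{\mathrm{OML}} \propto \beta^{1/2} \ (\beta \ll 1), \qquad I_{\mathrm{OML}} \propto \beta^{3/2} \ (\beta \gg 1).$$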
Abstract:
Structural Health Monitoring (SHM) requires integrated "all in one" electronic devices capable of performing analysis of structural integrity and on-board damage detection in aircraft structures. The PAMELA III (Phased Array Monitoring for Enhanced Life Assessment, version III) SHM embedded system is an example of this type of device. This equipment is capable of generating the excitation signals applied to an array of integrated piezoelectric Phased Array (PhA) transducers bonded to the aircraft structure, acquiring the response signals, and carrying out the advanced signal processing needed to obtain SHM maps. PAMELA III is connected to a host computer in order to receive configuration parameters and to send the obtained SHM maps, alarms and so on. This host can communicate with PAMELA III through an Ethernet interface. To avoid the use of wires where necessary, Wi-Fi capability can be added to PAMELA III by connecting a Wi-Fi node working as a bridge, establishing wireless communication between PAMELA III and the host. However, in a real aircraft scenario, several PAMELA III devices must work together inside closed structures. In this situation, it is not possible for all PAMELA III devices to establish wireless communication directly with the host, due to the signal attenuation caused by the different obstacles of the aircraft structure. To provide communication among all PAMELA III devices and the host, a wireless mesh network (WMN) system has been implemented inside a closed aluminum wingbox. In a WMN, as long as a node is connected to at least one other node, it has full connectivity to the entire network, because each mesh node forwards packets to other nodes as required. Mesh protocols automatically determine the best route through the network and can dynamically reconfigure the network if a link drops out. The advantages and disadvantages of using a wireless mesh network system inside closed aerospace structures are discussed.
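The self-healing behaviour described above, where each node forwards traffic for its neighbours and routes are recomputed when a link drops out, can be illustrated with a minimal shortest-path sketch. The node names, link costs and topology below are hypothetical and unrelated to the actual PAMELA III deployment or its mesh protocol.

import heapq

# Undirected mesh given as {(node_a, node_b): link_cost}; names are hypothetical.
def shortest_path(links, src, dst):
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue = [(0, src, [src])]   # (accumulated cost, node, path so far)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None  # destination unreachable

mesh = {("host", "pamela1"): 1, ("pamela1", "pamela2"): 1,
        ("pamela2", "pamela3"): 1, ("host", "pamela3"): 1}
print(shortest_path(mesh, "host", "pamela3"))   # direct link available
del mesh[("host", "pamela3")]                   # the direct link drops out
print(shortest_path(mesh, "host", "pamela3"))   # traffic re-routed through the mesh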
Abstract:
Impact testing with an instrumented free-falling mass (50.4 g) device was applied to three varieties of pears and two varieties of apples, for increasing ripeness stages and impact energies (2 to 20 cm drops). Impact parameters were studied in relation to bruising and to ripeness, establishing relations between them and with the different characteristics of the fruits.
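For orientation, the impact energies implied by the stated 50.4 g mass and 2 to 20 cm drop heights follow directly from the free-fall relation (a back-of-envelope check, not a figure reported in the study):

$$E = mgh \approx 0.0504\ \mathrm{kg} \times 9.81\ \mathrm{m\,s^{-2}} \times (0.02\text{ to }0.20)\ \mathrm{m} \approx 10\text{ to }99\ \mathrm{mJ}.$$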
Abstract:
Telecommunications networks have always been expanding and, thanks to this, new services have appeared. The old mechanisms for carrying packets have become obsolete due to the requirements of the new services, which have begun to work in real time. Real-time traffic requires strict service guarantees: when this traffic is sent through the network, enough resources must be provided in order to avoid delays and information losses. When browsing the Internet and requesting web pages, data must be sent from a server to the user; if a packet is dropped during the transmission, it is simply sent again, and for the end user it does not matter whether the page loads one or two seconds later. But if the user is holding a conversation with a VoIP program such as Skype, one or two seconds of delay in the conversation may be catastrophic, and neither party can understand the other. In order to support these new services, networks have to evolve; MPLS and QoS were developed for this purpose. MPLS is a packet-carrying mechanism used in high-performance telecommunication networks which directs and carries data along pre-established paths. Packets are forwarded on the basis of labels, making this process faster than routing packets by their IP addresses. MPLS also supports Traffic Engineering (TE), the process of selecting the best paths for data traffic in order to balance the traffic load among the different links. In a network with multiple paths, routing algorithms compute the shortest one, and most of the time all traffic is directed through it, causing overload and packet drops, while the other paths offered by the network carry no traffic. But this is not enough to give real-time traffic the guarantees it needs: these mechanisms improve the network, but they do not change how the traffic is treated. That is why Quality of Service (QoS) was developed. Quality of Service is the ability to give different priority to different applications, users or data flows, or to guarantee a certain level of performance to a data flow. Traffic is distributed into different classes and each of them is treated differently, according to its Service Level Agreement (SLA). Traffic with the highest priority has preference over lower classes, but this does not mean it monopolizes all the resources. To achieve this goal, a set of policies is defined to control and shape how the traffic flows; the possibilities are endless and depend on how the network is to be structured. Using these mechanisms it is possible to give real-time traffic the necessary guarantees, distributing it among categories inside the network and offering the best service for both real-time and non-real-time data.
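The class-based treatment outlined above, in which each traffic class is served according to its SLA and high-priority traffic is favoured without being allowed to monopolize the link, can be sketched with a minimal weighted round-robin scheduler. The class names, weights and packets below are hypothetical, not a real router configuration.

from collections import deque

# Hypothetical per-class queues: real-time traffic (VoIP), web browsing, best effort.
queues = {
    "voip":        deque(f"voip-{i}" for i in range(4)),
    "web":         deque(f"web-{i}" for i in range(6)),
    "best_effort": deque(f"be-{i}" for i in range(6)),
}
weights = {"voip": 3, "web": 2, "best_effort": 1}   # packets served per round

def weighted_round_robin(queues, weights):
    """Serve each class up to its per-round quota until every queue drains."""
    while any(queues.values()):
        for cls, quota in weights.items():
            for _ in range(quota):
                if queues[cls]:
                    yield cls, queues[cls].popleft()

for cls, packet in weighted_round_robin(queues, weights):
    print(cls, packet)

With these weights, VoIP packets leave the queue first in every round, yet the lower classes still receive a guaranteed share of the link instead of being starved.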
Abstract:
This thesis proposes a comprehensive approach to the monitoring and management of Quality of Experience (QoE) in multimedia delivery services over IP. It addresses the problem of preventing, detecting, measuring, and reacting to QoE degradations under the constraints of a service provider: the solution must scale for a wide IP network delivering individual media streams to thousands of users. The solution proposed for the monitoring is called QuEM (Qualitative Experience Monitoring). It is based on the detection of degradations in the network Quality of Service (packet losses, bandwidth drops...) and the mapping of each degradation event to a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This mapping is based on the analysis of the transport and Network Abstraction Layer information of the coded stream, and allows a good characterization of the most relevant defects that exist in this kind of service: screen freezing, macroblocking, audio mutes, video quality drops, delay issues, and service outages. The results have been validated by subjective quality assessment tests. The methodology used for those tests has also been designed to mimic as much as possible the conditions of a real user of those services: the impairments to evaluate are introduced randomly in the middle of a continuous video stream.
Based on the monitoring solution, several applications have been proposed as well: an unequal error protection system which provides higher protection to the parts of the stream that are more critical for the QoE, a solution that applies the same principles to minimize the impact of incomplete segment downloads in HTTP Adaptive Streaming, and a selective scrambling algorithm that encrypts only the most sensitive parts of the media stream. A fast channel change application is also presented, as well as a discussion of how to apply the previous results and concepts to a 3D video scenario.
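The kind of qualitative mapping QuEM performs, translating a network-level degradation event into the impairment a viewer would perceive, can be sketched as a simple classifier. The event fields, thresholds and labels below are illustrative assumptions, not the published QuEM rules.

# Illustrative mapping from network QoS degradation events to qualitative
# QoE impairments; field names and thresholds are assumptions, not QuEM's rules.
def qoe_impairment(event):
    kind = event["kind"]
    if kind == "packet_loss":
        # Losses that damage a reference picture tend to freeze or block the video.
        if event.get("hits_reference_frame"):
            return "video freezing / macroblocking"
        return "brief video artifacts"
    if kind == "bandwidth_drop":
        # A drop the codec can follow degrades quality; a severe one interrupts the service.
        return "video quality drop" if event["remaining_kbps"] > 500 else "service outage"
    if kind == "audio_packet_loss":
        return "audio mute"
    return "unclassified impairment"

events = [
    {"kind": "packet_loss", "hits_reference_frame": True},
    {"kind": "bandwidth_drop", "remaining_kbps": 1200},
    {"kind": "audio_packet_loss"},
]
for e in events:
    print(e["kind"], "->", qoe_impairment(e))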
Abstract:
This thesis is a study of the network accesses used by the services to which users of telecare services are subscribed; its final chapter proposes a connection-drop forecasting model so that the access network does not become an obstacle to providing the service. To reach these objectives, the document begins by presenting what is currently understood as telemedicine and telecare services, paying attention to the actors involved and to the uses and benefits they bring both to patients and to public administrations. Once we know what telecare means and what requirements these services have, we focus on the access networks used to provide telemedicine services, with their advantages and disadvantages. Since not all services have the same general requirements of reliability and transmission speed, we examine how the network provider can guarantee the needs of each type of service. The next step towards the drop forecasting model is to determine the technical requirements, and those of the actors involved, for providing a telecare service in a patient's home. This includes studying what equipment is needed, how to manage it, and how to mark the traffic so that the network operator knows how to treat it according to the telecare service in use, leading to a model for supervising telecare links.
At this point we are ready to establish a forecasting model of connection drops, describing the logic needed for it and putting it into practice with two concrete examples: a home telemonitoring service and an ambulatory telemonitoring service. Finally, we recap what has been studied in this document and make a series of recommendations.
Abstract:
Concentrating Photovoltaics (CPV) is one of the most promising ways of reducing the cost of energy collected from the sun. This is possible thanks to both very high-efficiency solar cells and a large decrease in the size of the cells, which are made of costly semiconductor materials. Both issues are closely linked, since high efficiency values are only possible with expensive cell materials and technologies, implying a compulsory area reduction if cost-effectiveness is desired. The reduction in cell size requires that light coming from the sun be redirected (i.e. concentrated) towards the cell position. This is achieved by placing an optical concentrator on top of the cell. These CPV concentrators consist of different optical elements manufactured in cheap materials in order to keep production costs low. The optimal framework for the design of concentrators is nonimaging optics. Nonimaging optics was first developed in the 1960s and has evolved significantly ever since. The aim of nonimaging devices is the efficient transfer of light power between the source and the receiver (sun and cell, respectively, in the case of CPV), disregarding image formation. Nonimaging systems are usually simple, comprise fewer surfaces than imaging systems, and are more tolerant to manufacturing errors. This renders nonimaging optics a fundamental tool, not only in the design of photovoltaic concentrators, but also in the design of other applications such as illumination, projection and wireless optical communications. Nonimaging optical concentrators are well suited for CPV applications because the goal is not the reproduction of an exact image of the sun (as imaging optics would provide), but simply the collection of its energy on the solar cell. Concentrators for CPV may present very different architectures and optical elements, resulting in a vast variety of possible designs. The first optical element that sunlight goes through is called the Primary Optical Element (POE) and is the most determinant element in defining the shape and properties of the whole concentrator. The POE can be either refractive (a lens) or reflective (a mirror). This thesis focuses on CPV systems based on Fresnel lenses as POE, which are thin and inexpensive refractive lenses able to concentrate sunlight.
Chapter 1 gives a short introduction to geometrical and nonimaging optics, explaining their fundamentals and basic concepts. Köhler integration is then presented in detail, explaining its principles, which are valid for both CPV and illumination applications. An introduction to fundamental CPV concepts is also included in this chapter, analyzing the properties of the multijunction solar cells and optical concentrators employed in CPV systems. The chapter closes with a description of the existing technologies employed for the manufacture of the optical elements composing the concentrator. Chapter 2 is mainly devoted to the design and development of the three advanced Fresnel-Köhler optical concentrators presented in this thesis: Fresnel-Köhler (FK), Dome-shaped Fresnel-Köhler (DFK) and Cavity Fresnel-Köhler (CFK). They all perform Köhler integration and comprise a Fresnel lens as their Primary Optical Element. Each of these CPV concentrators has its own characteristics, properties and design procedure, and all of them address the key requirements of a concentrator: high concentration factor, large tolerances, high optical efficiency, uniform irradiance on the cell surface and low production cost. The FK and DFK concentrators present a 4-fold configuration in order to perform the Köhler integration: the POE and the SOE (Secondary Optical Element) are each divided into four symmetric sectors, with each POE sector working in a pair with its corresponding SOE sector. The main difference between the two concentrators is that the POE of the FK is a flat Fresnel lens, while a dome-shaped (curved) Fresnel lens serves as the DFK's POE. The CFK concentrator includes an integrated external confinement cavity, an optical element able to recover rays reflected by the cell surface so that they are re-absorbed by the cell; this increases light absorption and thus the efficiency of the module. Additionally, an alternative design method for faceted elements is also explained, especially suitable for dome-shaped lenses such as the POE of the DFK. Chapter 3 focuses on the characterization and experimental measurement of the optical concentrators presented in Chapter 2, describing the corresponding procedures. These procedures are in general applicable to any Fresnel-based concentrator and include three main types of experimental measurements: electrical efficiency, acceptance angle and irradiance uniformity at the solar-cell plane. The results shown throughout this chapter validate, through outdoor measurements under real-sun operation, the advanced characteristics of the Köhler concentrators demonstrated in Chapter 2 through ray-tracing simulation: high optical efficiency, large acceptance angle, insensitivity to manufacturing tolerances and very good irradiance uniformity on the cell surface. Each concentrator (FK, DFK and CFK) is designed and optimized for realistic operating conditions, and their performance is modeled exhaustively using ray tracing combined with distributed cell models, taking the major relevant factors into account. Tolerance is a critical issue in the manufacturing process in order to obtain cost-effective mass-production systems: concentrators with limited tolerances cause significant efficiency drops at the array level through current mismatch among the different modules (mainly due to manufacturing alignment errors).
In this sense, Section 3.5 presents two mathematical methods that estimate these mismatch losses for a given array just by analyzing its full-array I-V curve, making individual mono-module measurements unnecessary. Chapter 3 also describes the indoor characterization of the optical elements composing the concentrators, i.e. the Fresnel lenses acting as POEs and the free-form SOEs. The aim of this characterization is to assess the proper surface profiles and optical transmissions of the different elements analyzed, so that the expected module performance can be achieved. The thesis closes with Chapter 4, in which Köhler integration is presented as a good approach to obtaining uniform distributions in Solid State Lighting applications (i.e. illumination with LEDs), being particularly effective when good color mixing is also required. This is shown through the particular example of a DFK concentrator, the same device used for CPV applications in the previous chapters. An alternative known method for color-mixing purposes (anomalous deflections) has also been used to design a thin aplanatic TIR lens. This lens fulfills conservation of étendue, thus ensuring no light blocking and no light dilution at the same time. Both approaches present clear advantages over the classical techniques employed in lighting to obtain uniform illumination distributions: diffusers and kaleidoscopic lightpipe mixing.
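For reference, the étendue conservation invoked above is what bounds the attainable geometrical concentration. In its standard rotationally symmetric form (a textbook relation, not quoted from the thesis), for entry and exit apertures $A_1$, $A_2$, refractive indices $n_1$, $n_2$ and acceptance half-angles $\theta_1$, $\theta_2$, an ideal (loss- and dilution-free) concentrator satisfies

$$n_1^2 A_1 \sin^2\theta_1 = n_2^2 A_2 \sin^2\theta_2 \quad\Longrightarrow\quad C = \frac{A_1}{A_2} = \left(\frac{n_2 \sin\theta_2}{n_1 \sin\theta_1}\right)^{2},$$

which is why a larger acceptance angle can only be bought at the expense of concentration, and vice versa.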
Abstract:
Ultrasonic sound velocity measurements with hand-held equipment remain, due to their simplicity, among the most widely used methods for non-destructive grading of sawn wood, yet a dedicated standardization effort with respect to strength classes for Spanish species is still required. As part of an ongoing project aimed at defining standard testing methods, the effects of the dimensions of commonly tested Scots pine (Pinus sylvestris L.) timbers and of the equipment testing frequency on ultrasonic velocity were investigated. Dedicated full-wave finite-difference time-domain software allowed simulation of pulse propagation through timbers of representative length and section combinations. Sound velocity measurements $v_L$ were performed along the grain with the indirect method at 22 kHz and 45 kHz for grids of measurement points at specific distances. For sample sections larger than the cross-sectional wavelength $\lambda_{RT}$, the simulated sound velocity converges to $v_L = (C_L/\rho)^{0.5}$. For smaller square sections the sound velocity drops to $v_L = (E_L/\rho)^{0.5}$, where $C_L$, $E_L$ and $\rho$ are the stiffness, the modulus of elasticity and the density, respectively. The experiments confirm a linear regression between time of flight and measurement distance even at distances below two wavelengths ($< 2\lambda_L$); the fitted sound speed values increased by 15% between the two tested frequencies.
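A quick numerical check of the two limiting velocities quoted above, using illustrative (not measured) values for Scots pine along the grain:

from math import sqrt

# Illustrative values, not measurements from this study.
rho = 500.0     # density, kg/m^3
E_L = 10e9      # longitudinal modulus of elasticity, Pa
C_L = 12e9      # longitudinal stiffness, Pa (larger than E_L, consistent with the velocity drop described above)

v_bulk = sqrt(C_L / rho)   # section much larger than the cross-sectional wavelength
v_rod  = sqrt(E_L / rho)   # slender (thin-rod) limit
print(f"bulk-like limit: {v_bulk:.0f} m/s, thin-rod limit: {v_rod:.0f} m/s")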
Abstract:
The outstanding problem for useful applications of electrodynamic tethers is obtaining sufficient electron current from the ionospheric plasma. Bare-tether collectors, in which the conducting tether itself, left uninsulated over kilometers of its length, acts as the collecting anode, promise to attain currents of 10 A or more from reasonably sized systems. Current collection by a bare tether is also relatively insensitive to drops in electron density, which are regularly encountered on each orbital revolution; this makes nighttime operation feasible. We show how the bare tether's high efficiency of current collection and its ability to adjust to density variations follow from the orbital-motion-limited collection law for thin cylinders. We consider both upwardly deployed (power-generation mode) and downwardly deployed (reboost mode) tethers, and present results indicating how bare-tether systems would perform as their magnetic and plasma environment varies in low Earth orbit.
Abstract:
This paper describes the authors' experience with static analysis of both the WCET and the stack usage of a satellite on-board software subsystem. The work is a continuation of a previous case study that used a dynamic WCET analysis tool on an earlier version of the same software system. In particular, the AbsInt aiT tool has been evaluated by analysing both C and Ada code generated by Simulink within the UPMSat-2 project. Some aspects of the aiT tool, specifically those dealing with SPARC register windows, are compared with another static analysis tool, Bound-T. The results of the analysis are discussed, and some conclusions on the use of static WCET analysis tools on the SPARC architecture are presented.
Abstract:
The collection of electrons from the ionosphere is the major problem facing high-power electrodynamic tethers. This article discusses a simple electron-collection concept which is free of most of the physical uncertainties associated with plasma contactors in the rarefied, magnetized environment of an orbiting tether. The idea is to leave exposed a fraction of the tether length near its anodic end, such that, when a positive bias develops locally with respect to the ambient plasma, and for a tether radius small compared with both thermal gyroradius and Debye length, electrons are collected in an orbital-motion-limited regime. It is shown that large currents can be drawn in this way with only moderate voltage drops. The concept is illustrated through a discussion of performance characteristics for generators and thrusters.
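For reference, the non-relativistic orbital-motion-limited law underlying these estimates, for a thin cylinder of radius $R$ and length $L$ at bias $\Phi \gg kT_e/e$ in a plasma of electron density $N_\infty$, is usually written as follows (standard form; the abstract itself does not quote it):

$$I_{\mathrm{OML}} \simeq 2 e N_\infty R L \sqrt{\frac{2 e \Phi}{m_e}} = \frac{e N_\infty A_p}{\pi}\sqrt{\frac{2 e \Phi}{m_e}}, \qquad A_p = 2\pi R L,$$

so the collected current scales with the lateral area of the thin cylinder and with the square root of the local bias.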
Abstract:
Plant trichomes play important protective functions and may have a major influence on leaf surface wettability. With the aim of gaining insight into trichome structure, composition and function in relation to water-plant surface interactions, we analyzed the adaxial and abaxial leaf surfaces of Quercus ilex L. (holm oak) as a model. By measuring the leaf water potential 24 h after the deposition of water drops onto the abaxial and adaxial surfaces, evidence for water penetration through the upper leaf side was gained in young and mature leaves. The structure and chemical composition of the abaxial (always present) and adaxial (occurring only in young leaves) trichomes were analyzed by various microscopic and analytical procedures. The adaxial surfaces were wettable and showed a high degree of water drop adhesion, in contrast to the highly unwettable and water-repellent abaxial holm oak leaf sides. The surface free energy, polarity and solubility parameter decreased with leaf age, with generally higher values determined for the abaxial sides. All holm oak leaf trichomes were covered with a cuticle. The abaxial trichomes were composed of 8% soluble waxes, 49% cutin, and 43% polysaccharides. For the adaxial side, it is concluded that trichomes and the scars left after trichome shedding contribute to water uptake, while the abaxial leaf side is highly hydrophobic due to its high degree of pubescence and its different trichome structure, composition and density. Results are interpreted in terms of water-plant surface interactions, plant surface physical chemistry, and plant ecophysiology.
Abstract:
We review previously published results, and present new results, on the way the current to a cylindrical probe drops below the orbital-motion-limited (OML) value for probe cross-sections that are too large or concave. Results on size and shape effects arise from unrelated behavior in the near and far potential fields, and apply to a general cross-section, which can be characterised by the radius $R_{eq}$ and perimeter $p_{eq}$ of equivalent circles. These results are used to discuss collection interference among two or more parallel bare tethers when brought from far apart into contact.