14 results for Polygons of Equal Perimeter
at Universidad Politécnica de Madrid
Abstract:
The current space environment, consisting of man-made debris and micrometeoroids, poses a risk to safe operations in space, and the situation is continuously deteriorating due to in-orbit debris collisions and to new satellite launches. Bare electrodynamic tethers can provide an efficient mechanism for rapid deorbiting of satellites from low Earth orbit at end of life. Because of its particular geometry (length very much larger than cross-sectional dimensions), a tether may have a relatively high risk of being severed by a single impact of small debris. The rates of fatal impact of orbital debris on round and tape tethers of equal length and mass, evaluated with an analytical approximation to the debris flux modeled by NASA's ORDEM2000, show a much higher survival probability for tapes. A comparative numerical analysis using the debris flux models ORDEM2000 and ESA's MASTER2005 validates the analytical result and shows that, for a given time in orbit, a tape has a probability of survival about one and a half orders of magnitude higher than a round tether of equal mass and length. Because deorbiting from a given altitude is much faster for the tape due to its larger perimeter, its probability of survival in a practical sense is quite high.
3-D modeling of perimeter recombination in GaAs diodes and its influence on concentrator solar cells
Abstract:
This paper describes a complete modeling of the perimeter recombination of GaAs diodes which resolves most unknowns and overcomes the limitations of previous models. Because of the three-dimensional nature of the implemented model, it is able to simulate real devices. GaAs diodes on two epiwafers with different base doping levels, sizes and geometries, namely square and circular, were manufactured. The model is validated by fitting the experimental dark I-V curves of the manufactured GaAs diodes. A comprehensive 3-D description of the phenomena affecting the perimeter recombination is supplied with the help of the model. Finally, the model is applied to concentrator GaAs solar cells to assess the impact of their doping level, size and geometry on the perimeter recombination.
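The role of geometry in perimeter recombination can be pictured with a standard two-diode dark I-V model, in which bulk recombination scales with junction area and perimeter recombination with junction perimeter. The sketch below is a toy illustration, not the paper's 3-D model; the saturation-current values are made-up placeholders:

```python
import numpy as np

# Illustrative two-diode dark I-V model (not the paper's 3-D model):
# bulk recombination scales with junction area A, perimeter recombination
# with perimeter P. J01 and J02p below are made-up placeholder values.
kT_q = 0.02585          # thermal voltage at 300 K [V]
J01 = 1e-19             # bulk saturation current density [A/cm^2] (assumed)
J02p = 1e-12            # perimeter saturation current per length [A/cm] (assumed)

def dark_current(V, area_cm2, perim_cm):
    """Dark current: bulk (n = 1) term plus perimeter (n = 2) term."""
    return (J01 * area_cm2 * (np.exp(V / kT_q) - 1.0)
            + J02p * perim_cm * (np.exp(V / (2.0 * kT_q)) - 1.0))

# Equal-area square vs. circular diodes: the circle minimizes perimeter,
# so perimeter recombination is smallest for circular geometry.
area = 0.01                       # 1 mm^2 expressed in cm^2
p_square = 4.0 * np.sqrt(area)
p_circle = 2.0 * np.sqrt(np.pi * area)
print(p_circle / p_square)        # ~0.886: circle has ~11% less perimeter
```

This is why, at equal area, circular devices show a smaller perimeter contribution to the dark current than square ones.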
Abstract:
A space tether is a thin, multi-kilometer-long conductive wire, joining a satellite to an end mass at the opposite extremity and kept vertical in orbit by the gravity gradient. The ambient plasma, being highly conductive, is equipotential in its own co-moving frame. In the tether frame, however, which is in relative motion, there is in the plasma a motional electric field of the order of 100 V/km, the product of the (near) orbital velocity and the geomagnetic field. The electromotive force established over the tether length allows plasma contactor devices to collect electrons at one end, polarized positive (anodic), and eject electrons at the opposite end, setting up a current along a standard, fully insulated tether. The Lorentz force exerted on the current by the geomagnetic field itself is always drag; this relies on just thermodynamics, like air drag. The bare-tether concept, introduced in 1992 at the Universidad Politécnica de Madrid (UPM), removes the insulation and has electrons collected over the tether segment that comes out polarized positive; the concept rests on 2D (Langmuir probe) current collection in plasmas being greatly more efficient than 3D collection. A plasma contactor ejects electrons at the cathodic end. A bare tether with a thin-tape cross section has a much greater perimeter and de-orbits much faster than a corresponding round bare tether of equal length and mass. Further, tethers being long and thin, they are prone to cuts by abundant small space debris, but BETs has shown that a tape's probability of being cut per unit time is smaller by more than one order of magnitude than that of the corresponding round tether (debris comparable to its width are much less abundant than debris comparable to the radius of the corresponding round tether). Also, the tape collects much more current, and de-orbits much faster, than a corresponding multi-line "tape" made of thin round wires cross-connected to survive debris cuts.
Tethers use a dissipative mechanism quite different from air drag and can de-orbit a satellite in just a few months; also, tape tethers are much lighter than round tethers of equal length and perimeter, which capture equal current. The three disparate tape dimensions allow an easily scalable design. Switching the cathodic contactor off and on allows maneuvering to avoid catastrophic collisions with big tracked debris. Lorentz braking is as reliable as air drag. Tethers are still reasonably effective at high inclinations, where the motional field is small, because the geomagnetic field is not just a dipole along the Earth's polar axis. BETs is EC FP7/Space Project 262972, funded with about 1.8 million euros from 1 November 2010 to 31 January 2014 and carrying out RTD work on de-orbiting space debris. Coordinated by UPM, it has as partners Università di Padova, ONERA-Toulouse, Colorado State University, SME Emxys, DLR-Bremen, and Fundación Tecnalia. BETs work involves 1) designing, building, and ground-testing the basic hardware subsystems: cathodic plasma contactor, tether deployment mechanism, power control module, and tape with crosswise and lengthwise structure; 2) testing current collection and verifying tether dynamical stability; 3) preliminary design of tape dimensions for a generic mission, conducive to a low system-to-satellite mass ratio, a low probability of cut by small debris, and an ohmic-effects regime of tether current for fast de-orbiting. Having reached TRL 4-5, BETs appears ready for in-orbit demonstration.
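The "order of 100 V/km" motional field quoted above follows directly from E_m ≈ v × B. A minimal sketch with assumed, representative LEO values (orbital speed ~7.7 km/s, geomagnetic field ~3×10⁻⁵ T, most favorable mutual orientation; both quantities vary with altitude and latitude, and the 5 km tether length is an illustrative choice):

```python
# Assumed, representative LEO values (they vary with altitude/latitude):
v_orb = 7.7e3        # orbital velocity [m/s]
B_geo = 3.0e-5       # geomagnetic field magnitude [T]
L_tether = 5.0e3     # tether length [m] (illustrative example)

# Motional electric field in the tether frame, |E_m| = |v x B| for
# perpendicular vectors, and the resulting EMF over the tether length:
E_m = v_orb * B_geo               # [V/m]
emf = E_m * L_tether              # electromotive force [V]

print(E_m * 1e3)   # ~231 V/km, i.e. "of the order of 100 V/km"
print(emf)         # ~1155 V over a 5 km tether
```

This EMF of order a kilovolt is what drives the deorbiting current along the bare tether.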
Abstract:
In this article, a tool for simulating the channel impulse response for indoor visible light communications using 3D computer-aided design (CAD) models is presented. The simulation tool is based on a previous Monte Carlo ray-tracing algorithm for indoor infrared channel estimation, extended to include wavelength response evaluation. The 3D scene, or simulation environment, can be defined using any CAD software in which the user specifies, in addition to the scene geometry, the reflection characteristics of the surface materials as well as the structures of the emitters and receivers involved in the simulation. In addition, two optimizations are proposed to improve the computational efficiency. The first consists of dividing the scene into cubic regions of equal size, which yields a calculation speed-up of approximately 50% compared to not subdividing the 3D scene. The second involves the parallelization of the simulation algorithm, which provides a computational speed-up proportional to the number of processors used.
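The first optimization amounts to a uniform spatial grid. The sketch below is an illustrative toy (the cell size and patch names are our own assumptions, not the tool's implementation): surface patches are binned into cubic cells so that a ray-surface query inspects only the candidates in nearby cells instead of every patch in the scene:

```python
import math
from collections import defaultdict

# Toy uniform-grid subdivision: bin scene patches into cubic cells of
# equal size so intersection queries only look at local candidates.
# Cell size and patch data are illustrative assumptions.
CELL = 0.5  # cell edge length in meters (assumed)

def cell_of(p):
    """Integer (i, j, k) index of the cubic cell containing point p."""
    return tuple(math.floor(c / CELL) for c in p)

def build_grid(patches):
    """patches: list of (center_xyz, data). Returns dict cell -> data list."""
    grid = defaultdict(list)
    for center, data in patches:
        grid[cell_of(center)].append(data)
    return grid

patches = [((0.1, 0.2, 0.3), "wall-A"), ((0.4, 0.1, 0.2), "wall-B"),
           ((2.0, 2.0, 2.0), "ceiling")]
grid = build_grid(patches)
# A query near the origin touches only one cell's two patches:
print(grid[cell_of((0.2, 0.2, 0.2))])   # ['wall-A', 'wall-B']
```

A ray walk then visits only the cells along its path, which is the source of the reported ~50% saving.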
Abstract:
The current space environment contains a large number of micrometeoroids and man-made space debris, which poses a risk to the safety of operations in space. The situation worsens continuously because of in-orbit debris collisions and new satellite launches. A significant portion of this debris consists of dead satellites and satellite fragments resulting from explosions and collisions of objects in orbit. Mitigation of this problem has become an issue of priority concern for all the institutions involved in space operations. Among the existing solutions, electrodynamic tethers (EDT) provide an efficient device for the rapid de-orbiting of satellites in low Earth orbit (LEO) at the end of their useful life. Research on electrodynamic tethers has been very fruitful since the 1970s. Thanks to theoretical studies and to missions demonstrating tether operation in orbit, this technology has developed very rapidly in recent decades. During this research period, multiple technical problems of diverse nature have been identified and overcome. Much of the basic operation of an EDT system depends on its ability to survive micrometeoroids and space debris. A tether can be completely cut by a particle above some minimum diameter. In case of a cut due to particle impact, a tether could itself become a risk for other operating satellites. Unfortunately, even after several in-orbit demonstrations, no firm conclusion has been reached on the importance of this problem for system operation. This thesis presents a theoretical analysis of the survivability of tethers in space. 
This study demonstrates the advantages of tethers with a rectangular (tape) cross section over conventional tethers (round wires) in terms of survival probability during the mission. Because of its particular geometry (length much greater than cross-sectional dimensions), a tether may have a relatively high risk of being cut by a single impact with a small particle. An analytical calculation of the fatal impact rate for a round and a tape tether of equal length and mass, considering the debris particle flux of NASA's ORDEM2000 model, shows a higher survival probability for tapes. This analysis has been compared with a numerical calculation employing the ORDEM2000 and ESA's MASTER2005 flux models. It is further shown that, for equal time in orbit, a tape has a survival probability one and a half orders of magnitude higher than a round tether of equal mass and length. Moreover, de-orbiting a tape from a given altitude is much faster, owing to its larger perimeter, which allows it to collect more current. This is an additional factor that increases the tape's survival probability, since it is exposed for less time to possible debris impacts. For this reason, it can finally be stated that, in a practical sense, the survivability of the tape is quite high compared with that of the round tether. The second objective of this work is the development of an analytical model, improving the flux approximation of ORDEM2000 and MASTER2009, that allows the fatal impact rate per year to be calculated accurately for a tape over a range of altitudes and inclinations, rather than for particular conditions. The number of cuts over a given time is obtained as a function of the tape geometry and the orbit properties. 
For the same conditions, the analytical model is compared with the results of the numerical analysis. This scalable model has been essential for optimizing the tether design for satellite de-orbit missions, varying the satellite mass and the initial orbit altitude. The survivability model has been used to build an objective function for optimizing tether design: the product of the tether-to-satellite mass ratio and the number of cuts over a given time. Combining the survivability model with a tether dynamic equation involving the Lorentz force eliminates time and expresses the objective function in terms of the tape geometry and the orbit properties. This optimization model led to the development of a software tool currently being registered by UPM. The final stage of this study consists of estimating the number of fatal impacts on a tape using, for the first time, an experimental ballistic limit equation. This equation was derived for tapes and captures the effects of both the impact velocity and the impact angle. The results show that the tape is highly resistant to space debris impacts and that, for a tape with a given cross section, the number of critical impacts due to non-trackable particles is significantly low. ABSTRACT The current space environment, consisting of man-made debris and tiny meteoroids, poses a risk to safe operations in space, and the situation is continuously deteriorating due to in-orbit debris collisions and to new satellite launches. Among this debris, a significant portion is due to dead satellites and satellite fragments resulting from explosions and in-orbit collisions. 
Mitigation of space debris has become an issue of first concern for all the institutions involved in space operations. Bare electrodynamic tethers (EDT) can provide an efficient mechanism for rapid de-orbiting of defunct satellites from low Earth orbit (LEO) at end of life. Research on EDT has been a fruitful field since the 1970s. Thanks to both theoretical studies and in-orbit demonstration missions, this technology has developed very quickly in the decades since. During this period, several technical issues were identified and overcome. The core functionality of an EDT system greatly depends on its survivability against micrometeoroids and orbital debris, and a tether can itself become a kind of debris for other operating satellites if cut by a particle impact; however, this very issue remains inconclusive and contested after a number of space demonstrations. A tether can be completely cut by debris above some minimal diameter. This thesis presents a theoretical analysis of the survivability of tethers in space. The study demonstrates the advantages of tape tethers over conventional round wires, particularly regarding survivability during the mission. Because of its particular geometry (length very much larger than cross-sectional dimensions), a tether may have a relatively high risk of being severed by a single impact of small debris. As a first approach to the problem, the survival probability has been compared for a round and a tape tether of equal mass and length. The rates of fatal impact of orbital debris on the round and tape tethers, evaluated with an analytical approximation to the debris flux modeled by NASA's ORDEM2000, show a much higher survival probability for tapes. A comparative numerical analysis using the debris flux models ORDEM2000 and ESA's MASTER2005 shows good agreement with the analytical result. 
It also shows that, for a given time in orbit, a tape has a probability of survival about one and a half orders of magnitude higher than a round tether of equal mass and length. Because de-orbiting from a given altitude is much faster for the tape due to its larger perimeter, its probability of survival in a practical sense is quite high. As the next step, an analytical model derived in this work allows the fatal impact rate per year to be calculated accurately for a tape tether. The model uses power laws for debris-size ranges, in both the ORDEM2000 and MASTER2009 debris flux models, to calculate tape tether survivability at different LEO altitudes. The analytical model, which depends on the tape dimensions (width, thickness) and orbital parameters (inclination, altitude), is then compared with fully numerical results for different orbit inclinations, altitudes and tape widths for both ORDEM2000 and MASTER2009 flux data. This scalable model not only estimates the fatal impact count but has proved essential in optimizing tether design for satellite de-orbit missions, varying the satellite mass and the initial orbital altitude and inclination. Within the frame of this dissertation, a simple analysis is finally presented that, thanks to the survivability model developed, shows the scalable property of tape tethers and allows de-orbit performance to be analyzed and compared for a large range of satellite masses and orbit properties. The work explicitly derives the product of the tether-to-satellite mass ratio and the fatal impact count as a function of the tether geometry and orbital parameters. Combining the tether dynamic equation involving Lorentz drag with the space debris impact survivability model eliminates time from the expression. Hence the product is independent of the tether de-orbit history and depends only on mission constraints and the tether length, width and thickness. 
This optimization model finally led to the development of a user-friendly software tool named BETsMA, currently in the process of registration by UPM. As a final step, the fatal impact rate on a tape tether has been estimated using, for the first time, an experimental ballistic limit equation that was derived for tapes and accounts for the effects of both the impact velocity and the impact angle. It is shown that tape tethers are highly resistant to space debris impacts and that, for a tape tether with a defined cross section, the number of critical events due to impacts with non-trackable debris is always significantly low.
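The tape-versus-round comparison can be illustrated with a toy calculation. Assuming, purely for illustration, a cumulative debris flux following a single power law F(d) = F0 (d/d0)^(-q) (the actual ORDEM/MASTER fluxes use several power-law ranges) and a lethal debris size equal to a fixed fraction of the relevant transverse dimension, the fatal impact count over a given time is the flux above the lethal size times an effective frontal area. All numbers below are made-up assumptions:

```python
import math

# Toy comparison of fatal-impact counts for a tape vs. a round tether of
# equal mass and length. Flux law, dimensions and the lethal-size
# fraction are illustrative assumptions, not ORDEM2000/MASTER values.
F0, d0, q = 1.0e2, 1.0e-3, 2.5   # cumulative flux F(d) = F0*(d/d0)**-q [1/m^2/yr]

def flux(d):
    """Cumulative flux of debris with diameter >= d (assumed power law)."""
    return F0 * (d / d0) ** (-q)

def fatal_count(width, length, d_crit, years):
    """Impacts by debris above the lethal size on the tether frontal area."""
    area = (width + d_crit) * length       # effective collecting area [m^2]
    return flux(d_crit) * area * years

L = 5.0e3                        # tether length [m]
w, h = 2.0e-2, 5.0e-5            # tape width and thickness [m]
r = math.sqrt(w * h / math.pi)   # round-tether radius of equal mass and length

# Lethal debris size taken (illustratively) as ~1/3 of the transverse size:
n_tape = fatal_count(w, L, w / 3.0, years=1.0)
n_round = fatal_count(2.0 * r, L, 2.0 * r / 3.0, years=1.0)
print(n_round / n_tape)   # >> 1: the round tether is cut far more often
```

Because the flux grows steeply as debris size shrinks, the much smaller lethal size of the thin round wire dominates, reproducing the orders-of-magnitude advantage of the tape reported above.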
Abstract:
Self-compacting concrete (SCC) is a new type of concrete, or cement-based composite, characterized by its ability to flow inside the formwork or mold, filling it naturally, passing between the reinforcement bars and consolidating under the action of its own weight alone, without the help of external compaction means and without segregation of its components. Because of its fresh-state properties (filling ability, passing ability and segregation resistance), SCC contributes significantly to improving the quality of structures and to opening new fields of application for concrete. On the other hand, the usefulness of steel fibre-reinforced concrete (SFRC) is today unquestionable, owing to the significant improvement of its mechanical properties, such as tensile strength, toughness, impact resistance and energy absorption capacity. Compared with SFRC, self-compacting steel fibre-reinforced concrete (SCSFRC) offers higher flowability and cohesion, providing, in addition to good mechanical properties, important advantages for placement on site. The overall objective of this doctoral thesis is the development of new structural solutions using self-compacting, steel fibre-reinforced cement-based composites. The thesis presents a new way of approaching the problem, based on the concept of functionally graded materials (FGM), in order to distribute the fibres efficiently within the structural section. To this end, part of the SCC is replaced by SCSFRC in layers with a gradual transition between them, so as to obtain robust sections free of interlayer stresses and to apply the "laminated-FGM" concept to structural elements such as beams, columns, slabs, etc. 
The process also includes the manufacturing method itself, which, based on SCC technology, allows the development of thin, robust interfaces between layers (1-3 mm) thanks to the rheological properties of the material. To reach these objectives, a wide experimental programme has been carried out, with the following main stages: • Defining and developing a design method that allows the mechanical properties of the "interface" to be properly characterized. This first experimental phase includes: o the general considerations of the manufacturing method itself, based on the functionally-graded-materials production concept called "rheology and gravity", o the specific considerations of the characterization method, o the characterization of the "interface". • Studying the mechanical behaviour of structural elements with different laminated-FGM configurations under both static and dynamic actions, in order to verify the viability of the material for use in structural elements such as beams, plates, columns, etc. The results indicate the viability of the adopted manufacturing methodology, as well as the structural and cost-reduction advantages of the proposed laminated solutions. Noteworthy is the improvement in flexural, compressive and impact strength of the functionally graded self-compacting concrete compared with monolithic SCSFRC solutions, even those with twice the net fibre volume (Vf) or more. Self-compacting concrete (SCC) is an important advance in concrete technology in the last decades. It is a new type of high performance concrete with the ability to flow under its own weight and without the need of vibration. 
Due to its specific fresh or rheological properties, such as filling ability, passing ability and segregation resistance, SCC may contribute to a significant improvement in the quality of concrete structures and open up new fields for the application of concrete. On the other hand, the usefulness of steel fibre-reinforced concrete (SFRC) in civil engineering applications is unquestionable. SFRC can significantly improve hardened mechanical properties such as tensile strength, impact resistance, toughness and energy absorption capacity. Compared with SFRC, self-compacting steel fibre-reinforced concrete (SCSFRC) is a relatively new type of concrete with high flowability and good cohesiveness. SCSFRC offers very attractive economic and technical benefits thanks to SCC rheological properties, which can be further extended when combined with SFRC to improve the mechanical characteristics. However, for the different concrete structural elements, a single concrete mix is usually selected, without any attempt to adapt the diverse fibre-reinforced concretes to the sectional stress-strain demands. This thesis focuses on the development of high-performance cement-based structural composites made of SCC with and without steel fibres, and their application for enhanced mechanical properties under different types of load and pattern configurations. It presents a new direction for tackling the mechanical problem. The approach adopted is based on the concept of the functionally graded cementitious composite (FGCC), where part of the plain SCC is strategically replaced by SCSFRC in order to obtain laminated functionally graded self-compacting cementitious composites, laminated-FGSCC, in single structural elements such as beams, columns, slabs, etc. 
The approach also involves a suitable casting method, which uses SCC technology to eliminate potentially sharp interlayers while easily forming a robust, regular and reproducible graded interlayer of 1-3 mm, by controlling the rheology of the mixes and using gravity, encouraging the use of this powerful concept for designing better-performing and more cost-efficient structural systems. To reach this challenging aim, a wide experimental programme has been carried out, involving two main steps: • The definition and development of a novel methodology for characterizing the main parameter associated with the interface- or laminated-FGSCC solutions: the graded interlayer. Work in this first part includes: o the design considerations of the production method, innovative in the field of concrete, based on "rheology and gravity", for producing FG-SCSFRC (named FGSCC in the thesis) casting processes and elements, o the design of a specific testing methodology, o the characterization of the FGSCC interface using the testing methodology so designed. • The characterization of different medium-size FGSCC samples under different static and dynamic load patterns, to explore their potential for use in structural elements such as beams, columns, slabs, etc. The results revealed the efficiency of the manufacturing methodology, which allows robust structural sections to be created, as well as the feasibility and cost-effectiveness of the proposed FGSCC solutions for different structural uses. Noteworthy is the improvement in flexural, compressive and impact load response of the different FGSCC elements compared with equal-strength-class bulk SCSFRC elements with at least double the overall net fibre volume fraction (Vf).
Abstract:
We introduce an easily computable topological measure which locates the effective crossover between segregation and integration in a modular network. Segregation corresponds to the degree of network modularity, while integration is expressed in terms of the algebraic connectivity of an associated hypergraph. The rigorous treatment of the simplified case of cliques of equal size that are gradually rewired until they become completely merged, allows us to show that this topological crossover can be made to coincide with a dynamical crossover from cluster to global synchronization of a system of coupled phase oscillators. The dynamical crossover is signaled by a peak in the product of the measures of intracluster and global synchronization, which we propose as a dynamical measure of complexity. This quantity is much easier to compute than the entropy (of the average frequencies of the oscillators), and displays a behavior which closely mimics that of the dynamical complexity index based on the latter. The proposed topological measure simultaneously provides information on the dynamical behavior, sheds light on the interplay between modularity and total integration, and shows how this affects the capability of the network to perform both local and distributed dynamical tasks.
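The crossover between cluster and global synchronization can be illustrated numerically. The sketch below is our own toy setup, not the paper's hypergraph construction: two internally well-coupled clusters of Kuramoto phase oscillators with a weak inter-cluster link. With the assumed coupling values, each cluster locks internally while global synchronization remains partial; intracluster and global synchronization are measured with order parameters r = |mean(exp(i·theta))|, the kind of quantities whose product the paper proposes as a dynamical measure of complexity:

```python
import numpy as np

# Toy modular Kuramoto system: two clusters of n oscillators, coupling
# K_in inside each cluster and a weak K_out between them (all values
# are illustrative assumptions).
rng = np.random.default_rng(0)
n, K_in, K_out, dt, steps = 20, 2.0, 0.05, 0.05, 2000

theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
omega = np.concatenate([np.full(n, 1.0),     # cluster 1 natural frequencies
                        np.full(n, 1.3)])    # cluster 2 natural frequencies

def coupling(th):
    """Mean-field coupling: K_in inside each cluster, K_out across."""
    c1, c2 = th[:n], th[n:]
    m1, m2 = np.mean(np.exp(1j * c1)), np.mean(np.exp(1j * c2))
    d1 = K_in * np.imag(m1 * np.exp(-1j * c1)) + K_out * np.imag(m2 * np.exp(-1j * c1))
    d2 = K_in * np.imag(m2 * np.exp(-1j * c2)) + K_out * np.imag(m1 * np.exp(-1j * c2))
    return np.concatenate([d1, d2])

r_glob_sum = 0.0
for step in range(steps):                    # forward Euler integration
    theta += dt * (omega + coupling(theta))
    if step >= steps - 500:                  # time-average the late history
        r_glob_sum += abs(np.mean(np.exp(1j * theta)))

r_intra = 0.5 * (abs(np.mean(np.exp(1j * theta[:n])))
                 + abs(np.mean(np.exp(1j * theta[n:]))))
r_global = r_glob_sum / 500.0
print(r_intra, r_global)   # clusters lock internally; global sync stays partial
```

Sweeping K_out (or rewiring links between the cliques, as in the paper) moves the system from this cluster-synchronized regime toward global synchronization.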
Abstract:
This project deals with network security, and more specifically with perimeter security. To illustrate it, a theoretical and practical definition of a perimeter security system is given. The content is broken down into two main parts: the first covers the theoretical basis of perimeter security and the most important elements involved in it, and the second is the deployment of a typical perimeter security system in a business environment. The first part presents the most important elements of perimeter security, focusing on elements such as firewalls, IDS/IPS, antivirus, proxies, RADIUS servers, bandwidth managers, etc. The operation and possible configuration of each of them is explained. The second, longer and more practical part comprises the whole design, deployment and management of a typical perimeter security system, that is, one applicable to most companies today. This second part first establishes the customer's needs and current security situation, from which the network architecture is designed. To begin, it is necessary to formally define a set of requirements; to satisfy them, the network map is designed with the specific elements selected. These elements are chosen on the basis of a market study, picking the solutions from each manufacturer that best fit the customer's requirements. Once the implementation has been carried out, a test plan is designed, running use-case tests on the different security elements to ensure their correct operation. 
The next step, once all the elements have been verified to work correctly, is to design a management plan for the platform, detailing the routines to follow for each element so that it operates optimally and efficiently. A management methodology is then designed, indicating the procedures for acting on certain security incidents, such as network element failures, vulnerability detection, attack detection, changes in security policies, etc. Finally, the conclusions drawn from this project are detailed. ABSTRACT. This project is based on network security, specifically on perimeter security. To show this, a theoretical and practical definition of a perimeter security system is given. The content has been broken down into two main parts. The first part covers the theoretical basis of perimeter security and the most important elements it involves, and the second part is the implementation of a common perimeter security system in a business environment. The first part presents the most important elements of perimeter security, focusing on elements such as firewalls, IDS/IPS, antivirus, proxies, RADIUS servers, bandwidth managers, etc. The operation and possible configuration of each one is explained. The second part is larger and more practical. It includes all the design, implementation and management of a typical perimeter security system which could be applied in most businesses nowadays. The current security status and the customer needs are established in this second part, and with this information the network architecture is designed. In the first place, it is necessary to formally define the prerequisites; to satisfy these requirements, the network map is designed with the specific elements selected. 
The selection of these elements is based on market research to choose the solutions from each manufacturer that are best suited to the customer's requirements. After carrying out the implementation, a test plan is designed, testing each of the use cases of the different security elements to ensure correct operation. In the next phase, once the proper working of all the elements has been verified, a platform management plan is designed. It details the routines to follow for each item to make it work optimally and efficiently. Then a management methodology is designed, which provides the procedures for action against certain security issues, such as network element failures, vulnerability detection, attack detection, security policy changes, etc. Finally, the conclusions obtained from the implementation of this project are detailed.
Abstract:
We present a novel analysis for relating the sizes of terms and subterms occurring at different argument positions in logic predicates. We extend and enrich the concept of sized type as a representation that incorporates structural (shape) information and allows expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth, for example, bounds on the length of lists of numbers together with bounds on the values of all of their elements. The analysis is developed using abstract interpretation, and the novel abstract operations are based on setting up and solving recurrence relations between sized types. It has been integrated, together with novel resource usage and cardinality analyses, in the abstract interpretation framework of the Ciao preprocessor, CiaoPP, in order to assess both the accuracy of the new size analysis and its usefulness in the resource usage estimation application. We show that the proposed sized types are a substantial improvement over the previous size analyses present in CiaoPP, and also benefit the resource analysis considerably, allowing the inference of bounds equal to or better than those of comparable state-of-the-art systems.
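A sized type of this kind can be pictured as nested interval bounds. The sketch below is our own toy model, not CiaoPP's actual representation: a list of numbers is abstracted by bounds on its length and on every element, and the bounds are combined through a simple recurrence for append (lengths add, element intervals join):

```python
from dataclasses import dataclass

# Toy "sized type" for lists of numbers: interval bounds on list length
# and on every element's value. Representation and the append rule are
# illustrative, not CiaoPP's notation.
@dataclass(frozen=True)
class ListSize:
    len_lo: int
    len_hi: int
    elt_lo: float
    elt_hi: float

def append_size(a: ListSize, b: ListSize) -> ListSize:
    """Recurrence for append: lengths add, element bounds join."""
    return ListSize(a.len_lo + b.len_lo, a.len_hi + b.len_hi,
                    min(a.elt_lo, b.elt_lo), max(a.elt_hi, b.elt_hi))

xs = ListSize(1, 3, 0.0, 9.0)     # lists of 1..3 numbers, each in [0, 9]
ys = ListSize(2, 2, -5.0, 5.0)    # lists of exactly 2 numbers in [-5, 5]
print(append_size(xs, ys))        # length in 3..5, elements in [-5, 9]
```

The actual analysis infers such relations automatically by solving recurrences derived from the predicate definitions.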
Abstract:
An in vitro experiment was carried out using the Hohenheim gas production technique to evaluate 24-h gas production, apparently and truly degraded dry matter (DM), partitioning factor (PF), short chain fatty acids, crude protein (CP) and carbohydrate (CHO) fractionation of grass and multipurpose tree species (MPTS) foliage diets. Four grasses and three MPTS were used to formulate 12 diets of equal mixtures (0.5:0.5 on DM basis) of each grass with each MPTS. In vitro gas production was terminated after 24 h for each diet. True DM degradability was measured from incubated samples and combined with gas volume to estimate PF. Diets had greater (P<0.001) CP (102–183 g/kg DM) content than sole grasses (66–131 g/kg DM) and lower (P<0.001) concentrations of fibre fractions. Contrary to in vitro apparently degraded DM, in vitro truly degraded DM coefficient was greater (P<0.001) in diets (0.63–0.77) than in sole grasses (0.48–0.68). The PF was on average higher in diets than in sole grasses. The proportion of potentially degradable CP fractions (A1, B1, B2 and B3, based on the Cornell Net Carbohydrate and Protein System) in the diets ranged from 971 to 989 g/kg CP. Crude protein fractions, A and B2 were greater in diets but B1 and B3 fractions were less in diets than in sole grasses. A similar trend was also observed in the CHO fractions. Results showed that the nutritive value of the four grasses was improved when MPTS leaves were incorporated into the diet and this could ensure higher productivity of the animals.
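The partitioning factor mentioned above is conventionally computed as the ratio of truly degraded substrate to gas produced at 24 h. A minimal sketch of that ratio, with made-up sample numbers:

```python
# Partitioning factor (PF) as conventionally defined for the Hohenheim
# gas test: mg of truly degraded dry matter per ml of 24-h gas. The
# sample numbers below are made up for illustration.
def partitioning_factor(dm_incubated_mg, true_degradability, gas_ml):
    """PF = truly degraded DM (mg) / gas volume (ml) at 24 h."""
    return dm_incubated_mg * true_degradability / gas_ml

# A grass-plus-foliage diet: 200 mg DM incubated, 70% truly degraded,
# producing 35 ml of gas in 24 h:
print(partitioning_factor(200.0, 0.70, 35.0))   # 4.0 mg/ml
```

A higher PF indicates that more of the degraded substrate went into microbial mass rather than fermentation gas, consistent with the diets outperforming the sole grasses.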
Resumo:
La tesis “1950 En torno al Museo Louisiana 1970” analiza varias obras relacionadas con el espacio doméstico, que se realizaron entre 1950 y 1970 en Dinamarca, un periodo de esplendor de la Arquitectura Moderna. Tras el aislamiento y las restricciones del conflicto bélico que asoló Europa, los jóvenes arquitectos daneses estaban deseosos de experimentar nuevas ideas de procedencia internacional y, favorecidos por diferentes circunstancias, encuentran el mejor campo de ensayo en el espacio doméstico. La mejor arquitectura doméstica en Dinamarca de aquel periodo debe entenderse como un sistema compuesto por diferentes autores que tienen en común muchas más similitudes que diferencias y que se complementan unos a otros. Para su comprensión y entendimiento se hace necesario el estudio de varias figuras y edificios que completan este sistema, cuya investigación está escasamente desarrollada. La tesis propone un viaje para conocer los nombres de algunos de sus protagonistas, que mostraron con su trabajo que tradición y vanguardia no estaban reñidas. El objetivo es desvelar las claves de la Modernidad Danesa: reconocer, descubrir y recuperar el legado de algunos de sus protagonistas en el ámbito doméstico, cuya lección se considera de total actualidad. Una arquitectura que asume las aportaciones extranjeras con moderación y espíritu crítico, y cuya íntima relación con la tradición arquitectónica y la artesanía propias será una de sus notas especiales. Del estudio contrastado de varios proyectos y versiones se obtienen valores comunes entre sus autores, al igual que se descubren sus afinidades o diferencias respecto a los mismos asuntos, que permitirán comprender sus actuaciones según las referencias e influencias y definir las variables que configuran sus espacios arquitectónicos. La línea de conexión entre los edificios elegidos será su particular relación con la naturaleza y el lugar en que se integran.
La fachada, lugar donde se negociará la relación entre el interior y el paisaje, será un elemento entendido de un modo diferente en cada uno de ellos, una relación que se extenderá en todas ellas más allá de su perímetro. La investigación se ha estructurado en seis capítulos, precedidos de una Introducción. En el capítulo primero se estudian y se señalan los antecedentes, las figuras y los edificios más relevantes de la Tradición Danesa, para la comprensión y el esclarecimiento de algunas de las claves de su Modernidad en el campo de la Arquitectura, que se produce con una clara intención de encontrar su propia identidad y expresión. Esta Modernidad floreciente se caracteriza por la asimilación de otras culturas extranjeras desde la moderación y con un punto de vista crítico, y encuentra sus raíces ancladas a la tradición arquitectónica y la artesanía propia, que fragua en la aparición de un ideal común con enorme personalidad y que hoy se valora como una auténtica aportación de una cultura considerada entonces periférica. Se mostrará el debate y el camino seguido por las generaciones anteriores a las obras analizadas. Las sensibilidades por lo vernáculo y lo clásico, que aparentemente son contradictorias, dominarán el debate con la misma veracidad y respetabilidad. La llamada por Sigfried Giedion tercera generación reanudará la práctica entre lo clásico y lo vernáculo, apoyada en el espíritu del trabajo artesanal y de la tradición, con el objetivo de conocer del acto arquitectónico su “verdad” y su “esencia original”. El capítulo segundo analiza la casa Varming, de 1953, situada en un área residencial de Gentofte, obra de Eva y Nils Koppel, que reinterpreta la visión de Asplund de un paisaje interior en continuación del exterior, y donde rompen la caja de ladrillo macizo convencional propia de los años 30. Es el ejemplo más poderoso de la unión de tradición e innovación en su obra residencial.
Sus formas sobrias, entre el Funcionalismo Danés y la Modernidad, se singularizan por su abstracción y volúmenes limpios que acentúan el efecto de su geometría, prismática y brutalista. El desplazamiento de los cuerpos que la componen, unos sobre otros, genera un ritmo que se producirá a otras escalas; ello, unido a las variaciones de sus formas y a la elección de sus materiales, ladrillo y madera, le confiere a la casa un carácter orgánico. El edificio se ancla a la tierra, resolviéndose en diferentes niveles tras el estudio del lugar y su topografía. El resultado es una versión construida del paisaje, en la cual el edificio da forma al lugar y ensalza la experiencia del escenario natural. La unidad de las estructuras primitivas parece estar presente. Constituye un ejemplo de la “idea de Promenade de Asplund”: el proyecto ofrece diferentes recorridos, permitiendo la propia vivencia de la casa, que ofrece la posibilidad vital de decidir. El capítulo tercero trata sobre el pabellón de invitados de Niels Bohr, de 1957, situado en un área boscosa de Tisvilde Hegn; fue el primer edificio del arquitecto danés Vilhelm Wohlert. Arraigado a la Tradición Danesa, representa una renovación basada en la absorción de influencias extranjeras: la Arquitectura Americana y la Tradición Japonesa. La caja de madera, posada sobre un terreno horizontal, tiene el carácter sensible de un organismo vivo, siempre cambiante según las variaciones de la luz del día o la temperatura. Cuando se abre, crea una prolongación del espacio interior que se extiende a la naturaleza circundante y se expande hacia el espacio exterior, permitiendo su movilización. Se establece una arquitectura de flujos. Hay un interés por la materia, su textura y el efecto emocional que emana. Las proporciones y dimensiones del edificio están reguladas por un módulo que se ajusta a la medida del hombre, destacando la gran unidad del edificio.
La clave de su efecto estético está en su armonía y equilibrio, que transmiten serenidad y belleza. El encuentro con la naturaleza es la lección más básica del proyecto, donde un mundo de relaciones es amable al ser humano. El capítulo cuarto analiza el proyecto del Museo Louisiana, de 1958, en Humlebæk, primer proyecto de la pareja de arquitectos daneses Jørgen Bo y Vilhelm Wohlert. La experiencia de Wohlert en California, donde será visitado por Bo, será trascendental para el desarrollo de Louisiana, donde la identidad Danesa se fusiona con la asimilación de otras culturas: la arquitectura de Frank Lloyd Wright, la del área de la Bahía y la Tradición Japonesa, principalmente. La idea del proyecto es la de una obra de arte integral: arquitectura, arte y paisaje coexistirían en un mismo lugar. Diferentes recursos realzarán su carácter residencial, como el uso de los materiales propios de un entorno doméstico, la realización a la escala del hombre o el modo de usar la iluminación. Cubiertas planas que muestran su artificialidad parecen flotar sobre galerías acristaladas; acentuarán la fuerza del plano horizontal y establecerán un recorrido en zigzag de marcado ritmo acompasado. Ritmo que tiene que ver con la encarnación del pulso de la naturaleza, que se acompaña de juegos de luz y de otras vibraciones materiales a diferentes escalas, imagen que encuentra una analogía semejante en la cultura japonesa. Todo se coordina con la trama estructural, que conlleva una construcción y proporción disciplinadas. Louisiana atiende al principio de crecimiento de la naturaleza, con la que su conexión es profunda. Hay un dinamismo expresado por el despliegue del edificio, que evoca algunos proyectos de la Tradición Japonesa.
Los blancos muros tienen su propia identidad como formas en sí mismas: avanzan prolongándose fuera de la línea del vidrio y se mueven libremente siguiendo el orden estructural, acompañando al espacio que fluye, en contacto directo con la naturaleza, que está en un continuo estado de flujos. Se da todo un mundo de relaciones, donde existe un diálogo entre paisaje, arte y arquitectura. El capítulo quinto se dedica a analizar la segunda casa del arquitecto danés Halldor Gunnløgsson, de 1959. Evoca a la Arquitectura Japonesa y Americana, pero es principalmente resultado de una fuerte voluntad y disciplina artística personal. La cubierta, plana, suspendida sobre una gran plataforma pavimentada que continúa la sección del terreno construyendo el lugar, tiene una gran presencia y arroja una profunda sombra bajo ella. En el interior, un espacio único, que se puede dividir eventualmente, discurre en torno a un cuerpo central. El espacio libre fluye, extendiéndose a través de la transparencia de sus ventanales a dos espacios contrapuestos: un patio ajardinado íntimo, que inspira calma y sosiego, y la naturaleza salvaje del mar, que proyecta el color del cielo, ambos en constante estado de cambio. El proyecto se elabora de un modo rigurosamente formal, existiendo al mismo tiempo un perfecto equilibrio entre la abstracción de su estructura y su programa. La estructura de madera, cuyo orden se extiende más allá de los límites de su perímetro y que está formada por pórticos completos como elementos libres, queda expuesta, guardando una estrecha relación con el concepto de modernidad de Mies, equivalente a la arquitectura clásica. La preocupación por el efecto estético es máxima; nada es improvisado. Pero además, en la combinación de materiales y el juego de las texturas hay una cualidad táctil, cierto erotismo, que flota alrededor de ella. La precisión constructiva y su refinamiento se acercan a Mies. La experiencia del espacio arquitectónico es una vivencia global.
La influencia de la arquitectura japonesa es más conceptual que formal, revelada en un respeto por la naturaleza, la búsqueda del refinamiento a través de la moderación, la eliminación de los objetos innecesarios que distraen de la experiencia del lugar y la preocupación por la luz y la sombra, donde se establece cierto paralelismo con el oscuro mundo del invierno nórdico. Hay un entendimiento de que el espacio, en lugar de ser un objeto inmaterial definido por superficies materiales, se entiende como interacciones dinámicas. El capítulo sexto propone un viaje para conocer algunas de las viviendas unifamiliares más interesantes que se construyeron en el periodo y que forman parte del sistema investigado. Del estudio comparado y orientado en varios temas se obtienen diversas conclusiones propias del sistema estudiado. La maestría de la sustancia y la forma será una característica distintiva en Dinamarca; se demuestra que hay un acercamiento a la cultura de Oriente, conceptual y formal, y unos intereses comunes con cierta arquitectura Americana. Su lección nos sensibiliza hacia un sentido fortalecido de proporción, escala, materialidad, textura y peso, y densidad del espacio; se valora lo táctil y lo visual; hay una sensibilidad hacia la naturaleza, hacia lo humano, hacia el paisaje y hacia la integridad de la obra. ABSTRACT The thesis “1950 around the Louisiana Museum 1970” analyses several works related to domestic space, which were carried out between 1950 and 1970 in Denmark, a golden age of modern architecture. After the isolation and limitations brought about by the war that blighted Europe, young Danish architects were keen to experiment with ideas of an international origin, encouraged by different circumstances. They found the best testing ground to be the domestic space.
The best architecture of that period in Denmark should be understood as a system composed of different authors who have many more similarities than differences in common and thus complement each other. For a full understanding, the study of a range of figures and buildings is necessary so that this system, whose research is still in its infancy, can be completed. The thesis proposes a journey of discovery through the names of some of those protagonists, who showed through their work that tradition and avant-garde were not at odds. The objective is to unveil the keys to Danish Modernity: to recognise, discover and revive the legacy of some of its protagonists in the domestic field, whose lessons are seen as entirely relevant today. An architecture that takes on foreign contributions with moderation and a critical eye, and whose intimate relationship with its own architectural tradition and craftsmanship will be one of its hallmarks. From the contrasted study of several projects and versions, common values among their authors can be derived, and in the same way their affinities and differences with respect to the same issues emerge. This will allow an understanding of their approaches in line with references and influences and enable the defining of the variables that configure their architectural spaces. The common thread between the buildings selected will be their particular relationship with nature and the place into which they integrate. The façade, the place where the relationship between the interior and the landscape would be negotiated, would be understood in a distinct way in each one of them, a relationship that in all of them would extend far beyond their physical perimeter. The investigation has been structured into six chapters, preceded by an introduction. The first chapter outlines and analyses the backgrounds, figures and buildings most relevant to the Danish Tradition.
This is to facilitate the understanding and elucidation of some of the keys to its modernity in the field of architecture, which came about with the clear intention to discover its own identity and expression. This thriving modernity is characterized by its moderate assimilation of foreign cultures with a critical eye, and finds its roots anchored in architectural tradition and its own handcraft. It is forged in the emergence of a common ideal of enormous personality, which today has come to be valued as an authentic contribution from a culture that was formerly seen as peripheral. What will be demonstrated is the path taken by previous generations to these works and the debate that surrounds them. The sensibilities for both the vernacular and the classic, which at first glance may seem contradictory, will dominate the debate with the same veracity and respectability. The so-called third generation of Sigfried Giedion will revive the practice between the classic and the vernacular, supported by the spirit of handcraft work and tradition, with the objective of discovering the “truth” and the “original essence” of the architectural act. The second chapter analyzes the Varming house, built by Eva and Nils Koppel in 1953, which is situated in a residential area of Gentofte. This reinterprets Asplund’s vision of an interior landscape extending to the exterior, where we see a break with the conventional sturdy brick shell of the 1930s. It is the most powerful example of the union of tradition and innovation in their residential work. Their sober forms, caught between Danish Functionalism and modernity, are characterized by their abstraction and clean shapes, which accentuate their prismatic and brutalist geometry. The displacement of the parts of which they are composed, one over the other, generates a rhythm.
This is produced at varying scales and is closely linked to its forms and the selection of materials – brick and wood – that confer an organic character on the house. The building is anchored to the earth, finding its solution at different levels through the study of place and topography. The result is a constructed version of the landscape, in which the building gives form to the place and celebrates the experience of the natural setting. The unity of primitive structures appears to be present. It constitutes an example of “Asplund’s Promenade Idea”. Different routes of exploration within are available to the visitor, allowing for one’s own personal experience of the house, allowing in turn for the vital chance to decide. The third chapter deals with Niels Bohr’s guest pavilion. Built in 1957, it is situated in a wooded area of Tisvilde Hegn and was the architect Vilhelm Wohlert’s first building. Rooted in the Danish Tradition, it represents a renewal based on the absorption of foreign influences: American architecture and the Japanese tradition. The wooden box, perched atop a horizontal terrain, possesses the sensitive character of a living organism, ever-changing in accordance with the variations in daylight and temperature. When opened up, it creates an elongation of the interior space which extends into the surrounding nature and expands towards the exterior space, allowing for its mobilisation. It establishes an architecture of flux. There is interest in the material, its texture and the emotional effect it inspires. The building’s proportions and dimensions are regulated by a module, adjusted to the human scale, bringing out the great unity of the building. The key to its aesthetic effect is its harmony and equilibrium, which convey serenity and beauty. The meeting with nature is the most fundamental lesson of the project, where a world of relationships is kind to the human being.
The fourth chapter analyzes the Louisiana Museum project of 1958, in Humlebæk. It was the first project of the Danish architects Jørgen Bo and Vilhelm Wohlert. Wohlert’s experience in California, where he was visited by Bo, would be essential to the development of Louisiana, where the Danish identity is fused in assimilation with other cultures: the architecture of Frank Lloyd Wright, that of the Bay Area and, principally, the Japanese tradition. The idea of the project was for an integrated work of art: architecture, art and landscape would coexist in the same space. A range of different resources would realize the residential character, such as the use of materials taken from a domestic environment, the attainment of human scale and the manner in which light was used. Flat roofs that show their artificiality appear to float over glassed galleries. They accentuate the strength of the horizontal plane and establish a zigzag route of marked and measured rhythm. It is a rhythm that has to do with the incarnation of nature’s pulse, accompanied by plays of light as well as material vibrations at different scales, imagery which finds a close analogy in Japanese culture. Everything is coordinated along a structural frame, which involves a disciplined construction and proportion. Louisiana cherishes nature’s principle of growth, to which its connection is profound. There is a dynamism expressed through the unfolding of the building, which evokes some projects of the Japanese tradition. The white walls possess their own identity as forms in their own right. They advance, extending beyond the line of glass, moving freely along the structural line, accompanying a space that flows and is in direct contact with nature, itself in a constant state of flux. It creates a world of relationships, where dialogue exists between landscape, art and architecture.
The fifth chapter is dedicated to analyzing the Danish architect Halldor Gunnløgsson’s second house, built in 1959. It evokes both Japanese and American architecture but is principally the result of a strong will and personal artistic discipline. The flat roof, suspended above a large paved platform – itself continuing the constructed terrain of the place – has great presence and casts a heavy shadow beneath. In the interior, a single space, which can occasionally be divided, flows around a central core. The free space flows, extending through the transparency of its windows towards two contrasting settings: an intimate garden patio, inspiring calm and tranquillity, and the wild nature of the sea, which projects the colour of the sky, both in a constant state of change. The project is realized in a rigorously formal manner, while at the same time a perfect balance exists between the abstraction of its structure and its programme. The wooden structure, whose order extends beyond the limits of its perimeter, is formed of complete porticos as free elements. It remains exposed, maintaining a close relationship with Mies’ concept of modernity, analogous to classical architecture. The preoccupation with the aesthetic effect is paramount and nothing is improvised. But in addition – in the combination of materials and the play of textures – there is a tactile quality, a certain eroticism, which lingers all about. The constructive precision and its refinement come close to Mies. The experience of the architectural space is an all-encompassing one. The influence of Japanese architecture, more conceptual than formal, is revealed in a respect for nature. It can be seen in the search for refinement through moderation, the elimination of the unnecessary objects that distract from the experience of place and the preoccupation with light and shade, where a certain parallel with the dark world of the Nordic winter is established.
There is an understanding that space, rather than being an immaterial object defined by material surfaces, is instead understood as dynamic interactions. The sixth chapter proposes a journey to discover some of the most interesting single-family houses constructed in the period, which form part of the system being investigated. Through a comparative study geared towards various themes, diverse conclusions are drawn regarding the system being researched. The mastery of substance and form will be a distinctive characteristic in Denmark, demonstrating an approach to the culture of the Orient, both conceptual and formal, and some common interests with certain American architecture. Its teachings sensitize us to a strengthened sense of proportion, scale, materiality, texture, weight and density of space. It values both the tactile and the visual. There is a sensitivity to nature, to the human, to the landscape and to the integrity of the work.
Resumo:
The fermentation stage is considered one of the critical steps in coffee processing due to its impact on the final quality of the product. The objective of this work is to characterise the temperature gradients in a fermentation tank by means of multi-distributed, low-cost and autonomous wireless sensors (23 semi-passive TurboTag® radio-frequency identification (RFID) temperature loggers). Spatial interpolation in polar coordinates and an innovative methodology based on phase space diagrams are used. A real coffee fermentation process was supervised in the Cauca region (Colombia) with sensors submerged directly in the fermenting mass, revealing a 4.6 °C temperature range within the fermentation process. Spatial interpolation shows a maximum instantaneous radial temperature gradient of 0.1 °C/cm from the centre to the perimeter of the tank and a vertical temperature gradient of 0.25 °C/cm for sensors with equal polar coordinates. The combination of spatial interpolation and phase space graphs consistently enables the identification of five local behaviours during fermentation (hot and cold spots).
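The reported gradients are simple temperature differences per unit distance between sensor positions; a minimal sketch, with made-up sensor readings and an assumed tank radius (not values from the study):

```python
def radial_gradient_c_per_cm(t_centre: float, t_perimeter: float,
                             radius_cm: float) -> float:
    # Instantaneous radial gradient (degrees C per cm) between a sensor at the
    # tank centre and one at the perimeter, at the same depth
    return abs(t_centre - t_perimeter) / radius_cm

def vertical_gradient_c_per_cm(t_upper: float, t_lower: float,
                               depth_cm: float) -> float:
    # Vertical gradient between two sensors sharing the same polar coordinates
    return abs(t_upper - t_lower) / depth_cm

# Illustrative values only: a 40 cm radius and a 3.0 degree C radial difference
g_r = radial_gradient_c_per_cm(24.5, 21.5, 40.0)
g_v = vertical_gradient_c_per_cm(24.0, 22.0, 20.0)
```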
Resumo:
La energía eléctrica producida mediante tecnología eólica flotante es uno de los recursos más prometedores para reducir la dependencia de la energía proveniente de combustibles fósiles. Esta tecnología es de especial interés en países como España, donde la plataforma continental es estrecha y existen pocas áreas para el desarrollo de estructuras fijas. Entre los diferentes conceptos flotantes, esta tesis se ha ocupado de la tipología semisumergible. Estas plataformas pueden experimentar movimientos resonantes en largada y arfada. En largada, dado que el periodo de resonancia es largo, estos pueden ser inducidos por efectos de segundo orden de deriva lenta, que pueden tener una influencia muy significativa en las cargas en los fondeos. En arfada, las fuerzas de primer orden pueden inducir grandes movimientos y, por tanto, la correcta determinación del amortiguamiento es esencial para analizar la operatividad de la plataforma. Esta tesis ha investigado estos dos efectos; para ello se ha usado como caso base el diseño de una plataforma desarrollada en el proyecto europeo HiPRWind. La plataforma se compone de tres columnas cilíndricas unidas mediante montantes estructurales horizontales y diagonales. Los cilindros proporcionan flotabilidad y momentos adrizantes. A la base de cada columna se le ha añadido un gran “heave plate” o placa de cierre. El diseño es similar a otros diseños previos (Windfloat). Se ha fabricado un modelo a escala de una de las columnas para el estudio detallado del amortiguamiento mediante oscilaciones forzadas. Las dimensiones del modelo (1 m de diámetro en la placa de cierre) lo hacen, de los conocidos por el candidato, el mayor para el que se han publicado datos. El diseño del cilindro se ha realizado de tal manera que permite la fijación de placas de cierre planas o con refuerzo; ambos modelos se han fabricado y analizado.
El modelo con refuerzos es una reproducción exacta del diseño a escala real, incluyendo detalles distintivos del mismo, siendo el más importante la placa vertical perimetral. Los ensayos de oscilaciones forzadas se han realizado para un rango de frecuencias, tanto para el disco plano como para el reforzado. Se han medido las fuerzas durante los ensayos y se han calculado los coeficientes de amortiguamiento y de masa añadida. Estos coeficientes son necesarios para el cálculo del fondeo mediante simulaciones en el dominio del tiempo. Los coeficientes calculados se han comparado con la literatura existente, con cálculos potenciales y, por último, con cálculos CFD. Para disponer de información relevante para el diseño estructural de la plataforma, se han medido y analizado experimentalmente las presiones en la parte superior e inferior de cada placa de cierre. Para la correcta estimación numérica de las fuerzas de deriva lenta en la plataforma se ha realizado una campaña experimental que incluye ensayos con modelo cautivo de la plataforma completa en olas bicromáticas. Pese a que estos experimentos no reproducen un escenario de oleaje realista, permiten una verificación del modelo numérico mediante la comparación de las fuerzas medidas en el modelo físico y el numérico. Como resultados de esta tesis podemos enumerar las siguientes conclusiones. 1. El amortiguamiento y la masa añadida muestran una pequeña dependencia con la frecuencia pero una gran dependencia con la amplitud del movimiento, siendo coherente con investigaciones existentes. 2. Las medidas con la placa de cierre reforzada con cierre vertical en el borde muestran un amortiguamiento significativamente menor comparadas con las de la placa plana. Esto implica que para ensayos de canal es necesario incluir estos detalles en el modelo. 3. La masa añadida no muestra grandes variaciones al comparar la placa plana y la placa con refuerzos. 4.
Un coeficiente de amortiguamiento del 6% del crítico se puede considerar conservador para el cálculo en el dominio de la frecuencia. Este amortiguamiento es equivalente a un coeficiente de “drag” de 4 en elementos de Morison cuadráticos en las placas de cierre usadas en simulaciones en el dominio del tiempo. 5. Se han encontrado discrepancias en algunos valores de masa añadida y amortiguamiento de la placa plana al comparar con datos publicados. Se han propuesto algunas explicaciones basadas en las diferencias en la relación de espesores, en la distancia a la superficie libre y también relacionadas con efectos de escala. 6. Las presiones en la placa con refuerzos son similares a las de la placa plana, excepto en la zona del borde, donde la placa con refuerzo vertical induce una gran diferencia de presiones entre la cara superior e inferior. 7. La máxima diferencia de presión escala coherentemente con la fuerza equivalente a la aceleración de la masa añadida distribuida sobre la placa. 8. Las masas añadidas calculadas con el código potencial (WADAM) no son suficientemente precisas. Este software no contempla el modelado de placas de pequeño espesor con dipolos; la poca precisión de los resultados pone de relieve la importancia de este tipo de elementos al realizar simulaciones con códigos potenciales para este tipo de plataformas que incluyen elementos de poco espesor. 9. Respecto al código CFD (Ansys CFX), la precisión de los cálculos es razonable para la placa plana; esta precisión disminuye para la placa con refuerzo vertical en el borde, como era de esperar dada la mayor complejidad del flujo. 10. Respecto al segundo orden, los resultados, en general, muestran que, aunque la tendencia en las fuerzas de segundo orden se captura bien con los códigos numéricos, se observan algunas reducciones en comparación con los datos experimentales.
Las diferencias entre simulaciones y datos experimentales son mayores al usar la aproximación de Newman, que usa únicamente resultados de primer orden para el cálculo de las fuerzas de deriva media. 11. Es importante remarcar que las tendencias observadas en los resultados con modelo fijo cambiarán cuando el modelo esté libre; el impacto que los errores en las estimaciones de las fuerzas de segundo orden tienen en el sistema de fondeo depende de las condiciones ambientales que imponen las cargas últimas en dichas líneas. En cualquier caso, los resultados que se han obtenido en esta investigación confirman que es necesaria y deseable una detallada investigación de los métodos usados en la estimación de las fuerzas no lineales en las turbinas flotantes, que pueda servir de guía en futuros diseños de estos sistemas. Finalmente, el candidato espera que esta investigación pueda beneficiar a la industria eólica offshore en la mejora del diseño hidrodinámico del concepto semisumergible. ABSTRACT Electrical power obtained from floating offshore wind turbines is one of the most promising resources for reducing fossil fuel consumption and covering worldwide energy demands. The concept is most competitive in countries, such as Spain, where the continental shelf is narrow and does not provide space for fixed structures. Among the different floating structure concepts, this thesis has dealt with the semisubmersible one. Platforms of this kind may experience resonant motions both in surge and heave directions. In surge, since the platform natural period is long, such resonance can be excited with second order slow drift forces and may have a substantial influence on mooring loads. In heave, first order forces can induce significant motion, whose damping is a crucial factor for the platform downtime. These two topics have been investigated in this thesis. To this aim, a design developed during the HiPRWind EU project has been selected as the reference case study.
The platform is composed of three cylindrical legs, linked together by a set of structural braces. The cylinders provide buoyancy and restoring forces and moments. Large circular heave plates have been attached to their bases. The design is similar to others documented in the literature (e.g. Windfloat), which implies the outcomes could have a general value. A large scale model of one of the legs has been built in order to study heave damping through forced oscillations. The final dimensions of the specimen (one meter diameter discs) make it, to the candidate’s knowledge, the largest for which data has been published. The model design allows for the fitting of either a plain solid heave plate or a flapped reinforced one; both have been built. The latter is a model scale reproduction of the prototype heave plate and includes some distinctive features, the most important being the inclusion of a vertical flap on its perimeter. The forced oscillation tests have been conducted for a range of frequencies and amplitudes, with both the solid plain model and the vertical flap one. Forces have been measured, from which added mass and damping coefficients have been obtained. These are necessary to accurately compute time-domain simulations for mooring design. The coefficients have been compared with the literature, and with potential flow and CFD predictions. In order to provide information for the structural design of the platform, pressure measurements on the top and bottom sides of the heave discs have been recorded and the pressure differences analyzed. In addition, in order to conduct a detailed investigation of the numerical estimations of the slow-drift forces on the HiPRWind platform, an experimental campaign involving captive (fixed) model tests of a model of the whole platform in bichromatic waves has been carried out.
Although these do not reproduce the most realistic scenario, the tests allowed a preliminary verification of the numerical model based directly on the forces measured on the structure. The following outcomes can be enumerated: 1. Damping and added mass coefficients show, on one hand, a small dependence on frequency and, on the other, a large dependence on motion amplitude, which is consistent with previously published research. 2. Measurements with the prototype plate, equipped with the vertical flap, show that damping drops significantly compared to the plain one. This implies that, for tank tests of the whole floater and turbine, the prototype plate, equipped with the flap, should be incorporated into the model. 3. Added mass values do not change substantially between the plain plate and the one equipped with a vertical flap. 4. A conservative damping coefficient equal to 6% of the critical damping can be considered adequate for the prototype heave plate in frequency-domain analysis. A corresponding drag coefficient equal to 4.0 can be used in time-domain simulations to define Morison elements. 5. When comparing with published data, some discrepancies in the added mass and damping coefficients of the plain solid plate have been found. Explanations have been suggested, focusing mainly on differences in thickness ratio and distance to the free surface, and on possible scale effects. 6. Pressures on the plate equipped with the vertical flap are similar in magnitude to those on the plain plate, although substantial differences are present close to the edge, where the flap induces a larger pressure difference in the reinforced case. 7. The maximum pressure difference scales coherently with the force equivalent to the acceleration of the added mass distributed over the disc surface. 8. Added mass coefficient values predicted with the potential-flow solver (WADAM) are not accurate enough.
The solver used does not support modeling thin plates with doublets. The relatively low accuracy of the results highlights the importance of these elements when performing potential-flow simulations of offshore platforms that include thin plates. 9. For the full CFD solver (Ansys CFX), the accuracy of the computations is found to be reasonable for the plain plate. The accuracy diminishes for the disc equipped with a vertical flap, an expected result considering the greater complexity of the flow. 10. Regarding second-order effects, the results showed that, although the main trend in the behavior of the second-order forces is well captured by the numerical predictions, some underprediction of the experimental values is visible. The gap between experimental and numerical results is more pronounced when Newman's approximation is considered, since it makes use exclusively of the mean drift forces calculated in the first-order solution. 11. It should be observed that the trends observed in the fixed-model tests may change when the body is free to float, and that the impact that possible errors in the estimation of the second-order forces may have on the mooring system depends on the characteristics of the sea conditions that ultimately impose the maximum loads on the mooring lines. Nevertheless, the preliminary results obtained in this research do confirm that a more detailed investigation of the methods adopted for the estimation of the nonlinear wave forces on the FOWT would be welcome and may provide further guidance for the design of such systems. As a final remark, the candidate hopes this research can benefit the offshore wind industry in improving the hydrodynamic design of the semisubmersible concept.
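The extraction of added mass and damping coefficients from forced oscillation tests, as described in the abstract, can be sketched as follows. The sketch assumes purely harmonic motion and a linear force model (added mass plus linear damping); all numerical values are illustrative, not data from the thesis:

```python
import numpy as np

def added_mass_damping(t, x_amp, omega, force):
    """Estimate added mass a and linear damping b from a forced
    harmonic oscillation test with motion x(t) = x_amp*sin(omega*t).

    The hydrodynamic force is modelled as F = -a*xddot - b*xdot, i.e.
        F(t) = a*x_amp*omega**2*sin(omega*t) - b*x_amp*omega*cos(omega*t),
    so a least-squares fit of the sin/cos components yields a and b.
    """
    basis = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
    (f_sin, f_cos), *_ = np.linalg.lstsq(basis, force, rcond=None)
    return f_sin / (x_amp * omega**2), -f_cos / (x_amp * omega)

# Synthetic check with known (illustrative) coefficients
t = np.linspace(0.0, 20.0, 4000)
omega, x_amp = 2.0, 0.1                 # rad/s, m
a_true, b_true = 800.0, 150.0           # kg, N*s/m
force = (a_true * x_amp * omega**2 * np.sin(omega * t)
         - b_true * x_amp * omega * np.cos(omega * t))
a_est, b_est = added_mass_damping(t, x_amp, omega, force)
```

In a real test the measured force also contains inertia and restoring contributions, which would be subtracted before the fit; the least-squares projection onto sin/cos components is one common way to isolate the in-phase and quadrature parts.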
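Outcome 4 connects a frequency-domain damping ratio (6% of critical) with a drag coefficient for time-domain Morison elements. A minimal sketch of the two quantities involved, using hypothetical single-degree-of-freedom heave properties (the mass, stiffness, and reference area below are illustrative, not HiPRWind values):

```python
import math

def critical_damping(mass, added_mass, stiffness):
    """Critical damping of a 1-DOF heave oscillator: 2*sqrt((m+a)*k)."""
    return 2.0 * math.sqrt((mass + added_mass) * stiffness)

def morison_linearized_damping(rho, cd, area, omega, x_amp):
    """Energy-equivalent linear damping of the quadratic Morison drag
    F = 0.5*rho*cd*area*|u|*u, for harmonic motion with velocity
    amplitude omega*x_amp (harmonic linearization, factor 8/(3*pi))."""
    u_amp = omega * x_amp
    return (8.0 / (3.0 * math.pi)) * 0.5 * rho * cd * area * u_amp

# Hypothetical platform-leg properties (for illustration only)
m, a, k = 1.0e6, 8.0e5, 5.0e5           # kg, kg, N/m
b_lin = 0.06 * critical_damping(m, a, k)  # 6% of critical, as in item 4

# Equivalent linear damping of a cd = 4.0 Morison element at a given
# motion amplitude and frequency (seawater density assumed 1025 kg/m^3)
b_morison = morison_linearized_damping(1025.0, 4.0, 50.0, 0.5, 1.0)
```

The linearization makes explicit why a single drag coefficient maps onto an amplitude-dependent linear damping, consistent with outcome 1 (damping depends strongly on motion amplitude).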
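Newman's approximation, discussed in outcome 10, builds the difference-frequency (slow-drift) quadratic transfer function from the mean drift forces of the first-order solution alone. A minimal sketch, using a purely illustrative mean-drift curve (a real curve would come from the first-order diffraction solution, not the expression below):

```python
def newman_difference_qtf(omega_i, omega_j, mean_drift):
    """Off-diagonal difference-frequency QTF built, per Newman's
    approximation, as the arithmetic mean of the diagonal
    (mean-drift) values at the two wave frequencies."""
    return 0.5 * (mean_drift(omega_i) + mean_drift(omega_j))

# Purely illustrative mean-drift coefficient curve (N/m^2 vs rad/s)
def mean_drift(w):
    return 1.0e4 * w**2 / (1.0 + w**4)

# Bichromatic wave of amplitudes a1, a2 at frequencies w1, w2: the
# slow-drift force oscillates at w1 - w2 with amplitude 2*a1*a2*|QTF|.
w1, w2, a1, a2 = 0.8, 0.7, 1.0, 1.0
qtf = newman_difference_qtf(w1, w2, mean_drift)
f_slow_amp = 2.0 * a1 * a2 * abs(qtf)
```

Because only the diagonal of the QTF is used, the approximation degrades as the difference frequency grows, which is one plausible source of the underprediction noted in the comparison with the bichromatic-wave tests.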
Resumo:
This paper studies the recombination at the perimeter in the subcells that constitute a GaInP/GaAs/Ge lattice-matched triple-junction solar cell. For that, diodes of different sizes, and consequently different perimeter/area ratios, have been manufactured as single-junction solar cells resembling the subcells of a triple-junction solar cell. It has been found that in neither GaInP nor Ge solar cells is the recombination at the perimeter significant, even in devices as small as 500 μm × 500 μm (2.5·10⁻³ cm²) for GaInP and 250 μm × 250 μm (6.25·10⁻⁴ cm²) for Ge. However, in GaAs, the recombination at the perimeter is not negligible at low voltages even in devices as large as 1 cm², and it is the main recombination factor limiting the open-circuit voltage, even at high concentrations, in solar cells of 250 μm × 250 μm (6.25·10⁻⁴ cm²) or smaller. Therefore, the recombination at the perimeter in GaAs should be taken into account when optimizing triple-junction solar cells.
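The perimeter/area-ratio method described above separates area and perimeter dark-current components by a linear fit of current per unit area against P/A. A minimal sketch for square diodes at a fixed bias; the current densities below are hypothetical values for a synthetic check, not data from the paper:

```python
import numpy as np

def separate_area_perimeter(sizes_cm, currents_A):
    """Separate area and perimeter dark-current components at a fixed
    bias from square diodes of different sizes.

    For a square diode of side L: I = J_A*L**2 + J_P*4*L, so
    I/Area = J_A + J_P*(P/A). A linear fit of I/A against P/A = 4/L
    gives the area density J_A (intercept, A/cm^2) and the perimeter
    density J_P (slope, A/cm).
    """
    L = np.asarray(sizes_cm, dtype=float)
    I = np.asarray(currents_A, dtype=float)
    p_over_a = 4.0 / L                   # perimeter/area ratio of a square
    i_over_a = I / L**2                  # current per unit area
    jp, ja = np.polyfit(p_over_a, i_over_a, 1)  # slope, intercept
    return ja, jp

# Synthetic check: side lengths spanning roughly the range discussed
# in the paper (250 um to 1 cm), with assumed illustrative densities.
ja_true, jp_true = 1.0e-9, 5.0e-10       # A/cm^2, A/cm (hypothetical)
sizes = np.array([0.025, 0.05, 0.1, 1.0])  # cm
currents = ja_true * sizes**2 + jp_true * 4.0 * sizes
ja_est, jp_est = separate_area_perimeter(sizes, currents)
```

The fit makes the size dependence explicit: as P/A grows (smaller devices), the perimeter term dominates, which is why perimeter recombination limits the small GaAs cells but not the 1 cm² ones at high voltage.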