960 results for CRITICAL ASPECTS


Relevance: 60.00%

Publisher:

Abstract:

This research focuses on the taxonomy, phylogeny and reproductive ecology of Gentiana lutea L. Taxonomic analysis is a critical step in botanical studies, as it is necessary to recognize the taxonomic units. Herbarium specimens were examined to assess the reliability of several subspecies-diagnostic characters. The genetic variability of G. lutea was analyzed and compared with that of the other species of sect. Gentiana in order to elucidate the phylogenetic relationships among G. lutea subspecies and to propose a phylogenetic hypothesis for the evolution and colonization dynamics of the section. Appropriate scientific information is critical for the assessment of species conservation status and for effective management plans. I carried out field work on five natural populations and performed laboratory analyses on specific critical aspects, with special regard to the G. lutea breeding system and the type and efficiency of the plant-pollinator system. Bract length is a reliable character for identifying subsp. vardjanii, but it is not exclusive; hence, to clearly identify subsp. vardjanii, other traits have to be considered. The phylogenetic hypotheses obtained from nuclear and chloroplast data are not congruent. Nuclear markers show the monophyly of sect. Gentiana, a strong species identity of G. lutea and a clear genetic identity of subsp. vardjanii. The limited information emerging from plastid markers indicates a weak signal of hybridization and incomplete sorting of ancestral lineages. G. lutea shows a striking variation in intra-floral dichogamy, probably evolved to reduce pollen-stigma interference. Although the species is partially self-compatible, pollen vectors are necessary for successful reproduction, and it shows a strong inbreeding depression. G. lutea is a generalist species: within its spectrum of visitors it is possible to recognize "nectar thieves" and pollinators with sedentary or dynamic behaviour. Pollen limitation is frequent and could be mainly explained by poor pollen quality.

Relevance: 60.00%

Publisher:

Abstract:

The goal of this thesis is the application of opto-electronic numerical simulation to heterojunction silicon solar cells featuring an all-back-contact architecture (Interdigitated Back Contact Hetero-Junction, IBC-HJ). The studied structure exhibits both metal contacts, emitter and base, at the back surface of the cell, with the objective of reducing the optical losses due to shadowing by the front contact of conventional photovoltaic devices. Overall, IBC-HJ cells are promising low-cost alternatives to monocrystalline wafer-based solar cells featuring front and back contact schemes; in IBC-HJ cells the high-concentration doping diffusions are replaced by low-temperature deposition of thin amorphous silicon layers. Furthermore, another advantage of IBC solar cells over conventional architectures is the possibility of low-cost assembly of photovoltaic modules, since all contacts are on the same side. A preliminary extensive literature survey helped to highlight the specific critical aspects of IBC-HJ solar cells as well as the state of the art of their modeling, processing and practical device performance. In order to analyze IBC-HJ devices, a two-dimensional (2-D) numerical simulation flow was set up. A commercial device simulator based on the finite-difference method, which numerically solves the whole set of equations governing electrical transport in semiconductor materials (Sentaurus Device by Synopsys), was adopted. The first activity carried out during this work was the definition of a 2-D geometry corresponding to the simulation domain and the specification of the electrical and optical properties of the materials. In order to calculate the main figures of merit of the investigated solar cells, the spatially resolved photon absorption rate map was computed by means of an optical simulator. Optical simulations were performed using two different methods, depending on the geometrical features of the front interface of the solar cell: the transfer matrix method (TMM) and ray tracing (RT). The first method models light propagation by plane waves within one-dimensional spatial domains, under the assumption that the device consists of stacks of parallel layers with planar interfaces. In addition, TMM is suitable for simulating thin multi-layer anti-reflection coatings that reduce the amount of light reflected at the front interface. Ray tracing is required for three-dimensional optical simulations of upright-pyramid textured surfaces, which are widely adopted to significantly reduce front-surface reflection. The optical generation profiles are interpolated onto the electrical grid adopted by the device simulator, which solves the carrier continuity equations coupled with the Poisson equation in a self-consistent way. The main figures of merit are calculated by post-processing the output data of the device simulation. After validating the simulation methodology by comparing simulation results with literature data, the ultimate efficiency of the IBC-HJ architecture was calculated. Accounting for all optical losses, IBC-HJ solar cells reach a theoretical maximum efficiency above 23.5% (without texturing at the front interface), higher than that of both standard homojunction crystalline silicon (Homogeneous Emitter, HE) and front-contact heterojunction (Heterojunction with Intrinsic Thin layer, HIT) solar cells.
However, it is clear that the critical points of this structure are mainly the defect density and the poor carrier mobility of the amorphous silicon layers. Lastly, the influence of the most critical geometrical and physical parameters on the main figures of merit was investigated by applying the numerical simulation tool set up during the first part of the thesis. Simulations highlighted that the carrier mobility and defect levels in amorphous silicon may lead to a significant reduction of the conversion efficiency.
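
As a concrete illustration of the first optical method mentioned above, the following minimal Python sketch computes the normal-incidence reflectance of a planar layer stack with the transfer matrix method (characteristic-matrix form). It is not the simulation flow used in the thesis, which relies on Sentaurus Device and a commercial optical solver; the layer indices and thicknesses in the example are illustrative values only.

```python
import numpy as np

def tmm_reflectance(n_list, d_list, wavelength):
    """Normal-incidence reflectance of a planar thin-film stack via the
    transfer matrix method. n_list: refractive indices from incidence
    medium to substrate (complex values allowed); d_list: thicknesses of
    the interior layers only, in the same length unit as wavelength."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        delta = 2 * np.pi * n * d / wavelength        # phase thickness
        # characteristic matrix of one homogeneous layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    n0, ns = n_list[0], n_list[-1]
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)                   # reflection coefficient
    return np.abs(r) ** 2

# Example: a single 75 nm anti-reflection layer (n = 2.0) on silicon
# (n = 3.9 at 600 nm); a near-quarter-wave layer gives near-zero reflectance.
print(tmm_reflectance([1.0, 2.0, 3.9], [75e-9], 600e-9))
```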

Relevance: 60.00%

Publisher:

Abstract:

In a situation characterized by the scarcity of the financial resources available to local authorities, which makes private contributions to the construction of public works necessary, and by the scarcity of environmental resources, which requires the sustainability of interventions to be pursued, this thesis aims to make the construction of new road infrastructure "active" with respect to the context in which it is located, securing the commitment of all the parties involved. The goal is to obtain contributions from private developers not only for the primary urbanization works serving the settlement itself, but also for road infrastructure that is not exclusively dedicated to it, yet is necessary to guarantee its sustainability. This principle, also known as the "sustainability contribution" (contributo di sostenibilità), is beginning to find application in planning practice, but still suffers from some critical issues, since the cases developed so far are often based on considerations that lend themselves to disputes between private operators and public administrations. With the objective of defining a methodology to support negotiation, allowing a unique and objective determination of the contribution to be requested from the developers of urban transformations for the construction of new road infrastructure, the work focuses on an operational method based on the adoption of four-stage traffic simulation models, sketched below. The proposed methodology was verified through its application to a case study concerning the construction of a new road axis on the border between the municipalities of Castel Maggiore and Argelato. The axis, indispensable to guarantee accessibility to the new transformation areas in that quadrant, also solves some existing traffic problems. The issue addressed is therefore the determination of the contribution that each user of the new axis should pay in order to allow its construction. In conclusion, some considerations are made on the usefulness of the proposed methodology and on its applicability to similar cases.
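
To make the role of the four-stage (four-step) model concrete, the sketch below implements its trip-distribution step as a doubly constrained gravity model balanced with the Furness (iterative proportional fitting) method. This is a generic textbook formulation, not the specific model calibrated in the thesis, and all zone data are invented for illustration.

```python
import numpy as np

def gravity_trip_distribution(productions, attractions, cost, beta=0.1,
                              iters=100):
    """Trip-distribution step of the four-step travel demand model:
    a doubly constrained gravity model fitted by iterative proportional
    fitting, so row sums match productions and column sums match
    attractions (the two totals must be equal)."""
    trips = np.exp(-beta * cost)                           # deterrence seed
    for _ in range(iters):
        trips *= (productions / trips.sum(axis=1))[:, None]  # match rows
        trips *= (attractions / trips.sum(axis=0))[None, :]  # match columns
    return trips

# Toy example: 3 zones with balanced totals and a symmetric cost skim
P = np.array([1000.0, 500.0, 700.0])
A = np.array([800.0, 600.0, 800.0])
C = np.array([[1.0, 4.0, 6.0],
              [4.0, 1.0, 3.0],
              [6.0, 3.0, 1.0]])
T = gravity_trip_distribution(P, A, C)
print(T.round(0), T.sum(axis=1), T.sum(axis=0))
```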

Relevance: 60.00%

Publisher:

Abstract:

Natural hazards affecting industrial installations can directly or indirectly cause an accident or a series of accidents with serious consequences for the environment and for human health. Accidents initiated by a natural hazard or disaster which result in the release of hazardous materials are commonly referred to as Natech (Natural Hazard Triggering a Technological Disaster) accidents. The conditions brought about by these kinds of events are particularly problematic: the presence of the natural event increases the probability of exposure and causes consequences more serious than those of standard technological accidents. Despite a growing body of research and more stringent regulations for the design and operation of industrial activities, Natech accidents remain a threat. This is partly due to the absence of data and of dedicated risk-assessment methodologies and tools. Even the Seveso Directives for the control of major accident hazards do not include any specific requirements regarding the management of Natech risks in the process industries. Among the few available tools is the European Standard EN 62305, which addresses generic industrial sites, requiring the possibility of lightning to be taken into account and appropriate protection measures to be selected. Since it is intended for generic industrial installations, this standard sets requirements for the design, construction and modification of structures, and is thus mainly oriented towards conventional civil buildings. A first purpose of this project is to study the effects and consequences of lightning on industrial sites, lightning being the most common adverse natural phenomenon in Europe and the cause of several industrial accidents initiated by natural events. The industrial sector most susceptible to accidents triggered by lightning is the petrochemical one, due to the presence of atmospheric tanks (especially floating-roof tanks) containing flammable vapors which can easily be ignited by a lightning strike or by secondary lightning effects (such as electrostatic and electromagnetic pulses or ground currents). A second purpose of this work is to apply the procedure proposed by the European Standard to a specific kind of industrial plant, i.e. a chemical factory, in order to highlight the critical aspects of this implementation. A case-study plant handling flammable liquids was selected. The application of the European Standard made it possible to estimate the contribution of lightning activity to the total default release frequency suggested by guidelines for atmospheric storage tanks. However, it became evident that the European Standard does not introduce any parameter explicitly accounting for the amount of dangerous substances that could be ignited or released. Furthermore, the parameters proposed to describe the characteristics of the structures potentially subjected to lightning strikes are insufficient to capture the specific features of the different chemical equipment commonly present in chemical plants.
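
For orientation, the core of the frequency estimation in EN 62305-2 is the expected number of direct strikes, obtained from the ground flash density and an equivalent collection area around the structure. The sketch below reproduces that calculation for an isolated rectangular structure as we understand the standard (a collection area extending three times the height around the footprint); the dimensions and flash density are hypothetical, and a real assessment must follow the full procedure and correction factors of the standard.

```python
import math

def annual_direct_strikes(length, width, height, flash_density):
    """Expected direct lightning strikes per year to an isolated
    rectangular structure, following the collection-area approach of
    EN 62305-2: A_d = L*W + 6*H*(L+W) + 9*pi*H^2 (dimensions in m),
    N_d = N_g * A_d * 1e-6, with N_g in flashes/km^2/year."""
    a_d = (length * width + 6 * height * (length + width)
           + 9 * math.pi * height ** 2)       # collection area, m^2
    return flash_density * a_d * 1e-6

# Illustrative tank-sized structure: 30 m x 30 m footprint, 15 m high,
# in an area with N_g = 2.5 flashes/km^2/year (hypothetical values)
print(f"N_d = {annual_direct_strikes(30, 30, 15, 2.5):.4f} strikes/year")
```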

Relevance: 60.00%

Publisher:

Abstract:

Municipalities ideally embody public squares in which political debate can develop without particular filters or intermediaries, with a direct relationship between citizens and institutions. They constitute a node of central importance in the exercise of popular sovereignty and, at the same time, fertile ground for experimenting with models of democratic participation. Taking the experience of Italian municipalities as its vantage point, this work focuses on one of the "institutional" and most traditional instruments of popular participation, namely the referendum, in the different forms and meanings that fall within the semantic field of that expression. The term is generally used to indicate all non-electoral popular votes on politically relevant questions, formulated through a question with two or more mutually exclusive answers. The analysis of the legislation on participatory instruments in local government and the study of the statutory and regulatory provisions adopted by individual municipalities, together with the information collected from several case studies, provide the occasion to investigate the peculiar characteristics of the referendum, its effectiveness and its impact on the form of government. In particular, the compatibility of the referendum, classified by the prevailing doctrine as an instrument of direct democracy, with current forms of representative democracy was positively verified. An approach to the concepts of participatory and deliberative democracy was also attempted, highlighting how the referendum procedure (despite its maximum inclusiveness) entirely lacks a moment of "deliberative" exchange. The comparison of the experiences observed in different municipalities also made it possible to investigate the causes of some critical aspects (low turnout, failure to translate votes into political decisions, increased conflict) and, at the same time, to identify possible solutions, outlined on the basis of the best practices observed.

Relevance: 60.00%

Publisher:

Abstract:

«I felt that the time had come to have a fresh look at the European VAT system. There were indeed a number of reasons which in my view justified taking this step»: these words by European Commissioner Algirdas Šemeta inspire this research, whose objective is to retrace, first, the reasons that led to the creation of a multi-stage Community tax on consumption and, second, the reasons why a rethinking of the subject is necessary today. The drive for modernization also comes from the European institutions themselves, which have been engaged for years in discussions with the Member States to arrive at legislation capable of designing a lean and efficient system. The first important step in this direction was taken by the European Commission in 2010 with the Green Paper on the future of VAT, which highlights the critical profiles of the system and possible reform proposals, with the objective of giving rise to an EU VAT system able to make taxation simpler, more effective, neutral and fraud-proof. This work aims to highlight the main critical elements of the Community VAT legislation and to outline the amendments that could improve them, in order to create a normative hypothesis capable of serving as a model for the reform of the indirect taxation system of the Republic of San Marino, which today relies on a single-stage tax on imports that, like VAT, is now in crisis.

Relevance: 60.00%

Publisher:

Abstract:

When we speak of translation, we often think only of the result of a process, evaluated and analyzed as such. We seem to forget that, before arriving at that final result, the translator applies, more or less consciously, the whole series of tools of analysis, criticism, correction, re-reading, reformulation and modification that make up the activity of revision. This first phase, which every translator experiences while working on their own translation, is normally followed by a further phase in which the translation is revised by another professional figure in the publishing chain (usually, and hopefully, another translator), and then by still further stages of the editing and publication process of the translated text. Besides being a crucial part of every translation activity and editorial process, revision also plays a fundamental didactic role in translator training. The idea behind this research project stems from the need to triangulate reflections and concrete data on revision coming from academic research, professional practice and teaching experience, in a threefold approach that informs the entire project, whose main objectives are: • to formulate a new and clear summative definition of the term "revision" that can be used in research, teaching and professional practice; • to provide an updated thematic and critical overview of academic and non-academic research on revision; • to conduct a survey (through diversified questionnaires) on the professional practice of editorial revision in Italy, in order to collect data from translators and revisers, provide a critical reading of it and thus identify the peculiarities and critical points of this phase in the production of translated books; • to present working hypotheses and suggestions on methods and tools to be applied to the teaching of revision in educational and training contexts.

Relevance: 60.00%

Publisher:

Abstract:

Social housing and energy performance in a case study in Queimados within the Programa Minha Casa Minha Vida: analysis and proposals for improvement. The thesis is based on personal experience in Brazil, working with a firm that builds housing under the Programa Minha Casa Minha Vida for households with incomes between 1,600 R$ and 3,100 R$ per month. Access to the construction site and contact with local people made it possible to follow the construction phases and to understand the pros and cons of the Program. Working with the company also made it possible to learn the construction costs and to see that they reached the budget limit imposed by the Program (160,000 R$). Among the critical aspects of the Program is the fact that it does not address the energy consumption of the buildings. For this reason it was interesting to calculate the energy requirements for cooling, using the software EnergyPlus and the Legacy OpenStudio plug-in for Google SketchUp, and then to propose ideas for improving performance and reducing energy consumption by introducing: an increase in wall mass, framed windows and patio doors, exterior blinds, and wall shading on the west side. From the analysis of these simulations, considering the decrease in energy requirements for cooling, the decrease in operative and mean radiant temperatures, and costs, the most cost-effective proposal was the exterior blinds. Since all these measures were too expensive for the Program, it was then analyzed how the behavior of the inhabitants influences energy consumption. With intelligent natural ventilation (opening windows when the outdoor temperature is lower than the indoor one), the reduction in energy requirements for cooling is about 27%. This result is significant, considering that it is obtained at no additional cost.
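
The behavioral measure evaluated at the end of the abstract amounts to a simple control rule. The sketch below states it explicitly; the temperature series and the comfort setpoint are hypothetical, and the 27% saving reported above comes from the EnergyPlus simulations, not from this toy count.

```python
def free_cooling_hours(t_out, t_in, t_comfort=24.0):
    """Count the hours when opening windows could replace mechanical
    cooling: ventilate whenever the outdoor air is cooler than the
    indoor air while the zone still needs cooling (indoor temperature
    above the comfort setpoint)."""
    return sum(1 for to, ti in zip(t_out, t_in)
               if ti > t_comfort and to < ti)

# Hypothetical 24-hour temperature series (degrees C)
outdoor = [24, 23, 22, 22, 21, 21, 22, 24, 27, 29, 31, 32,
           33, 33, 32, 31, 30, 29, 28, 27, 26, 25, 25, 24]
indoor = [27] * 24
print(free_cooling_hours(outdoor, indoor), "h of possible free cooling")
```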

Relevance: 60.00%

Publisher:

Abstract:

Intermediaries permeate modern economic exchange. Most classical models of intermediated exchange are driven by information asymmetry and inventory management, two factors of reduced significance in modern economies. This makes it necessary to develop models that correspond more closely to modern financial marketplaces. The goal of this dissertation is to propose and examine such models in a game-theoretical context. The proposed models are driven by asymmetries in the goals of different market participants; hedging pressure, one of the most critical aspects of the behavior of commercial entities, plays a crucial role. The first market model shows that no equilibrium solution can exist in a market consisting of a commercial buyer, a commercial seller and a non-commercial intermediary. This indicates a clear economic need for non-commercial trading intermediaries: a direct trade from seller to buyer does not result in an equilibrium solution. The second market model has two distinct intermediaries between buyer and seller: a spread trader/market maker and a risk-neutral intermediary. In this model a unique, natural equilibrium solution is identified in which the supply-demand surplus is traded by the risk-neutral intermediary, whilst the market maker trades the remainder from seller to buyer. Since the market maker's payoff for trading at the identified equilibrium price is zero, this second model does not provide any motivation for the market maker to enter the market. The third market model introduces an explicit transaction fee that enables the market maker to secure a positive payoff. Under certain assumptions on this transaction fee, the equilibrium solution of the previous model still applies and now also provides a financial motivation for the market maker to enter the market. If the transaction fee violates an upper bound that depends on the supply, the demand and the risk aversion of buyer and seller, the market will be in disequilibrium.

Relevance: 60.00%

Publisher:

Abstract:

This article analyzes the conceptions of Domingo Faustino Sarmiento and José Martí regarding the role of the intellectual (letrado) in the modernizing program of Latin America during the second half of the nineteenth century. To that end, it takes as its axis the divergent meanings of the categories "civilization/barbarism" in the texts Facundo and Nuestra América. A new reading of these foundational texts of Latin American intellectual history makes it possible to recognize critical aspects often overlooked in the reconstruction of lettered discourse within current trends in Latin American studies.

Relevance: 60.00%

Publisher:

Abstract:

Railway bridges have specific safety-related requirements, which are often critical aspects of their design. In this paper the main phenomena are reviewed: vertical dynamic effects, including the impact of moving loads and resonance under high-speed traffic; serviceability limit states which affect the safety of running traffic; and lateral dynamic effects.
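
The resonance phenomenon mentioned for high-speed traffic admits a compact first estimate: regularly spaced axle groups with spacing d, crossing at speed v, excite a bridge mode of natural frequency f0 when v = f0·d/i for integer i. A minimal sketch of this standard estimate, with hypothetical bridge and train values:

```python
def resonant_speeds(natural_freq_hz, axle_spacing_m, harmonics=range(1, 5)):
    """Critical speeds at which regularly spaced axle loads excite a
    bridge mode at its natural frequency: v_i = f0 * d / i, i = 1, 2, ...
    A common first estimate in high-speed railway bridge dynamics."""
    return {i: natural_freq_hz * axle_spacing_m / i for i in harmonics}

# Example: 5 Hz first bending mode, 25 m coach length (hypothetical)
for i, v in resonant_speeds(5.0, 25.0).items():
    print(f"i={i}: {v * 3.6:.0f} km/h")
```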

Relevance: 60.00%

Publisher:

Abstract:

The objective of this thesis is to develop a new concept of optical label-free biosensor, based on a combination of vertical-interrogation optical techniques and sub-micron structures fabricated on silicon chips. The most important features of this device are its simplicity, both from the point of view of the optical measurement and regarding the introduction of the samples to be measured into the sensing area, aspects that are often critical in the majority of sensors found in the literature. Each of the four fundamental aspects related to the design of a biosensor (photonic design, optical characterization, fabrication and fluidics/chemical immobilization) is developed in detail in the corresponding chapter. The first part of the thesis introduces the concept of a biosensor: what it consists of, which types exist and the most common parameters used to quantify its behavior. Subsequently, an analysis of the state of the art is presented, focusing in particular on label-free optical biosensors. The biochemical reactions to be studied (immunoassays) are also introduced. The second part first describes the optical techniques used in the characterization (reflectometry, ellipsometry and spectrometry) and the reasons that led to their use. Several designs of the so-called "optofluidic cells", the devices in which the biochemical interaction takes place, are then introduced. Four different devices are presented and, together with them, several methods for the theoretical calculation of the expected optical response are proposed. The expected sensitivity of each cell is then calculated, together with an analysis of their fabrication processes and their fluidic behavior. Once all the critical aspects of the biosensor behavior have been analyzed, an optimization of its design can be performed. This is done using a simplified calculation model (a 1.5-D model) that allows parameters such as the sensitivity and the limit of detection of a large number of devices to be obtained in a relatively short time. Two of the proposed optofluidic cells are chosen for this process. The final part of the thesis presents the experimental results. First, a cell based on sub-micrometric holes is characterized as a refractive index sensor using different organic liquids; the experimental results show a good correlation with the previous theoretical calculations, validating the presented conceptual model. Finally, a chemical immunoassay is performed on another of the proposed cells (nanometric pillars of SU-8 polymer), using bovine serum albumin (BSA) and its antibody (antiBSA). The fabrication of the cell, the functionalization of its surface with the bioreceptors (in this case, BSA) and the biorecognition process are detailed. This provides a first estimate of the limit of detection achievable by this type of sensor in a standard immunoassay: in this case, 2.3 ng/mL, which is competitive with similar assays found in the literature. The main conclusion of the thesis is that this type of device can be used as an immunosensor and has certain advantages over existing ones. These advantages are associated, again, with its simplicity, both in the optical measurement and in the introduction of the bioanalytes into the sensing area (by simply depositing a droplet onto the micro-nano-structure). The theoretical calculations performed during the optimization process suggest that the sensor performance, measured in terms of the biological limit of detection, can be greatly improved by a denser packing of the pillars, reaching a minimum value of 0.59 ng/mL.
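
The detection limits quoted above are conventionally derived with the 3σ criterion. A minimal sketch of that calculation follows; the noise and sensitivity figures are illustrative back-calculations, not data from the thesis.

```python
def limit_of_detection(blank_sd, sensitivity):
    """Common 3-sigma estimate of a sensor's limit of detection: the
    smallest analyte concentration whose signal exceeds three standard
    deviations of the blank response. sensitivity = signal change per
    unit concentration (e.g. nm of spectral shift per ng/mL)."""
    return 3.0 * blank_sd / sensitivity

# Illustrative numbers only: 0.01 nm spectral noise and a response of
# 0.013 nm per ng/mL would give a LOD near the reported 2.3 ng/mL.
print(f"LOD = {limit_of_detection(0.01, 0.013):.2f} ng/mL")
```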

Relevance: 60.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material that simulates free-space propagation conditions. Moreover, these facilities can be used regardless of weather conditions and allow interference-free measurements. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated theoretically and numerically. The basis of all of them is the same: to transform the field from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, require no additional measurements. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where a spatial filtering can be applied. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce it is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample, and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second computationally removes the leakage effect without requiring the substitution of the faulty component.
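
As an illustration of the modal-filtering idea behind the first noise-suppression alternative, the sketch below truncates the azimuthal (cylindrical) mode spectrum of a full 360° field cut: an antenna enclosed in a cylinder of radius a only radiates significant modes up to roughly |n| = ka, so higher-order spectral content can be attributed to noise and removed. This is a simplified 1-D version under that assumption, not the thesis implementation.

```python
import numpy as np

def azimuthal_mode_filter(field_phi, k, radius, margin=10):
    """Filter a full 360-degree azimuthal field cut by truncating its
    cylindrical mode spectrum beyond |n| = k*radius plus a safety
    margin; the removed high-order content is assumed to be noise."""
    n_max = int(np.ceil(k * radius)) + margin
    spectrum = np.fft.fft(field_phi)
    n = np.fft.fftfreq(len(field_phi), d=1.0 / len(field_phi))  # mode index
    spectrum[np.abs(n) > n_max] = 0.0
    return np.fft.ifft(spectrum)

# Toy check: a synthetic band-limited pattern plus white noise
phi = np.linspace(0, 2 * np.pi, 720, endpoint=False)
k, a = 2 * np.pi / 0.03, 0.15            # 10 GHz, 15 cm minimum cylinder
clean = np.exp(1j * 20 * np.cos(phi))    # band-limited reference field
noisy = clean + 0.05 * (np.random.randn(720) + 1j * np.random.randn(720))
filtered = azimuthal_mode_filter(noisy, k, a)
# Residual error relative to the unfiltered noise (should be < 1)
print(np.linalg.norm(filtered - clean) / np.linalg.norm(noisy - clean))
```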

Relevance: 60.00%

Publisher:

Abstract:

This thesis studies the role of metaphor as an architectural tool in the debate and proposals to overcome the orthodoxy of modern architecture from the mid-Twentieth Century onwards. In architecture, the procedure of metaphor rests on a semantic understanding of architecture and has been used constantly throughout history to develop new proposals, based on comparison with known models. This work examines the significant presence this tool acquired in the proposals critical of orthodox modern architecture, owing to the importance of semantic aspects in the debate on the continuity and validity of the postulates of modern architecture. The study also looks into the motives and the frequency with which the architects who adopted a critical attitude towards modern architecture alluded to metaphorical relations in their alternative proposals. To demonstrate how metaphors formed part of the mechanisms of architectural change in that period, recovering a greater creative potential for approaching architectural projects, a series of pertinent and relevant examples is studied. Each chapter examines the most important objections to modern architecture raised by an architect or group of architects, selected among those who took part in this debate, and studies the inclusion of metaphor in the alternatives they proposed in each case. Besides a critical attitude towards modern architecture, all the selected architects share a semantic understanding of architecture and have expounded their ideas and positions both in projected and built works and in writings and commentaries on their way of understanding and designing architecture. This double production has allowed a comparative analysis of the role of metaphor in their critical approaches and proposals and in the execution of their works as alternatives to modern architecture. At the same time, the research deepens the knowledge of the use of metaphor as an architectural tool. By reviewing the different ways in which some architects of the second half of the Twentieth Century understood this tool, both the possibilities and the most critical aspects of the use of metaphor in architecture are exposed. The varied use of the notion of metaphor in architecture is reflected in the multiple ways the selected architects consider it. This thesis distinguishes each of those approaches, echoing the plurality with which the concept of metaphor can be addressed both in general terms and in architecture in particular. In this sense, some architects of Team 10 used it as an instrument of semantic extension and renewal of modern architecture, proposing a synthesis of the old and the new. For Robert Venturi it is a resource with the capacity to persuade and recreate, semantically renewing architecture. Charles Jencks considers it an essential semantic procedure of architecture, forgotten by modern architects, whose recovery represents a differential trait of postmodern architecture with respect to modern architecture. For Aldo Rossi, it is a way of materializing the analogical relations that constitute his proposal for an architecture of exalted rationalism as opposed to conventional rationalism. Peter Eisenman values it because it invents architectures different from those previously conceived, even though he rejects its representational and expressive capacity. Rafael Moneo uses it to counter determinism, as an instrument of innovation of architectural form that builds a dynamic between contingency and necessity. Finally, for Frank Gehry it is a creative and subjective procedure with which to confront both modern and postmodern architecture with a new architecture open to unusual references. Thus, through its chapters, this study aims to compose a mosaic of positions that expresses the links and differences between these architects in relation to the topics examined: criticism of and alternatives to modern architecture, the semantics of architecture, metaphor and other related concepts. At the same time, the continued appearance of metaphor throughout the chapters, and of the themes related to it, shows the prominence of this tool in the proposals for the evolution and change of architecture in the period studied.

Relevance: 60.00%

Publisher:

Abstract:

Water is fundamental to human life, and the availability of freshwater is often a constraint on human welfare and economic development. Consequently, the potential effects of global changes on hydrology and water resources are considered among the most severe and vital ones. Water scarcity is one of the main problems in the rural communities of Central America, as a result of significant degradation of catchment areas and the over-exploitation of aquifers. The present Thesis focuses on two critical aspects of global changes over water resources: (1) the potential effects of climate change on water quantity and (2) the impacts of land cover and land use changes on hydrological processes and the water cycle. Costa Rica is among the few developing countries that have recently achieved a land use transition with a net increase in forest cover. The Osa Region in South Pacific Costa Rica is an appealing study site to assess water supply management plans and to measure the effects of deforestation, forest transitions and the climate change projections reported for the region. Rural community water supply systems (ASADAS) in Osa are dealing with an increasing demand for freshwater due to population growth and changing ways of life in rural livelihoods. The land cover mosaics resulting from the above-mentioned processes are characterized by the abandonment of marginal farmland, the spread of high-return crops over former grasslands, and the expansion of secondary forests due to reforestation initiatives. These land use changes have a significant impact on runoff generation in priority water-supply catchments in the humid tropics, as evidenced by the analysis of the Tinoco Experimental Catchment in the Southern Pacific area of Costa Rica. The monitoring system assesses the effects of the different land uses on runoff responses and on the general water cycle of the basin. Runoff responses at plot scale are analyzed for secondary forests, oil palm plantations, forest plantations and grasslands. The oil palm plantation plot presented the highest runoff coefficient (mean RC = 32.6%), twice that measured under grasslands (mean RC = 15.3%) and about 20-fold greater than in secondary forest (mean RC = 1.7%). A Thornthwaite-type water balance is proposed to assess the impact of land cover and climate change scenarios on water availability for rural communities in the Osa Region. Climate change projections were obtained by downscaling the BCM2, CNCM3 and ECHAM5 models. Precipitation and temperature were averaged and projected under the A1B, A2 and B1 IPCC climate scenarios for 2030, 2060 and 2080. Precipitation simulations exhibit an increase during the dry season for the three scenarios and a decrease during the rainy season, with the largest change (up to 25%) by the end of the 21st century under scenario B1. Monthly mean temperature simulations increase for the three scenarios throughout the year, with a maximum increase during the dry season of 5% under the A1B and A2 scenarios and 4% under the B1 scenario. The Thornthwaite-type water balance model indicates important decreases of water surplus for the three climate scenarios during the rainy season, with a maximum decrease in May, dropping by up to 20% under the A1B scenario, up to 40% under A2 and up to almost 60% under B1. Land cover scenarios were created taking into account the current land cover dynamics of the region.
Land cover scenario 1 projects a deforestation situation, with forest cover decreasing by up to 15% due to urbanization of the upper catchment areas; land cover scenario 2 projects a forest recovery situation, in which forested areas increase due to grassland abandonment on slopes above 30%. The deforestation scenario projects an annual water surplus decrease of 15%, while the reforestation scenario projects a water surplus increase of almost 25%. This water balance analysis indicates that climate scenarios contribute as much as land cover scenarios to future water resource estimates.
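
The Thornthwaite-type balance driving these surplus figures can be sketched in a few lines. Below is a simplified monthly version (no day-length correction, a single soil storage bucket assumed initially saturated); the temperatures, rainfall and storage capacity are illustrative, not the calibrated values used in the thesis.

```python
def thornthwaite_pet(monthly_temp_c):
    """Monthly potential evapotranspiration (mm) by the Thornthwaite
    (1948) method, omitting the day-length correction for brevity:
    I = sum((T/5)^1.514) over the 12 months; PET = 16 * (10*T/I)^a."""
    I = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temp_c)
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * max(t, 0.0) / I) ** a for t in monthly_temp_c]

def monthly_surplus(precip_mm, pet_mm, soil_max=100.0):
    """Thornthwaite-type bookkeeping: soil storage fills up to soil_max,
    and precipitation in excess of PET and full storage becomes surplus."""
    storage, surplus = soil_max, []
    for p, pet in zip(precip_mm, pet_mm):
        storage = storage + p - pet
        surplus.append(max(storage - soil_max, 0.0))
        storage = min(max(storage, 0.0), soil_max)
    return surplus

# Illustrative humid-tropical year: warm all year, marked dry season
T = [26.5] * 12
P = [50, 40, 60, 180, 300, 280, 260, 300, 400, 450, 250, 90]  # mm/month
print([round(s) for s in monthly_surplus(P, thornthwaite_pet(T))])
```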