22 results for locational disadvantage

at Universidad Politécnica de Madrid


Relevance: 10.00%

Abstract:

Growing scarcity, increasing demand and poor management of water resources are causing intense competition for water, and consequently managers are facing more and more pressure in an attempt to satisfy users' requirements. In many regions agriculture is one of the most important users at river basin scale, since it concentrates high volumes of water consumption during relatively short periods (the irrigation season), with a significant economic, social and environmental impact. The interdisciplinary character of the related water resources problems requires, as established in the Water Framework Directive 2000/60/EC, an integrated and participative approach to water management, and assigns an essential role to economic analysis as a decision support tool. For this reason, a methodology is developed to analyse the economic and environmental implications of water resource management under different scenarios, with a focus on the agricultural sector. This research integrates both economic and hydrologic components in the modelling, defining scenarios of water resource management with the goal of preventing critical situations such as droughts. The model follows the Positive Mathematical Programming (PMP) approach, an innovative methodology successfully used for agricultural policy analysis in the last decade and also applied in several analyses regarding water use in agriculture. This approach has, among others, the very important capability of perfectly calibrating the baseline scenario using a very limited database. However, one important disadvantage is its limited capacity to simulate activities not observed during the reference period but which could be adopted if the scenario changed. To overcome this problem, the classical methodology is extended in order to simulate a more realistic farmers' response to new agricultural policies or modified water availability. In this way an economic model has been developed to reproduce the farmers' behaviour within two irrigation districts in the Tiber High Valley. This economic model is then integrated with SIMBAT, a hydrologic model developed for the Tiber basin which makes it possible to simulate the balance between the water volumes available at the Montedoglio dam and the water volumes required by the various irrigation users.
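To make the calibration idea behind PMP concrete, the following Python sketch reproduces its two classical stages on a purely invented three-crop example (areas, margins and water requirements are placeholders, not data from the Tiber High Valley model): a calibration LP recovers the duals of the observed-activity constraints, those duals parametrise a quadratic cost term, and the calibrated model is re-run with reduced water availability.

```python
# Minimal PMP sketch on hypothetical data (not the thesis dataset).
import numpy as np
from scipy.optimize import linprog, minimize

x_obs     = np.array([40.0, 30.0, 10.0])      # observed crop areas (ha)
margin    = np.array([900.0, 500.0, 2500.0])  # gross margins (eur/ha)
water_req = np.array([6.0, 2.5, 5.0])         # water use (1000 m3/ha)
land, water = x_obs.sum(), water_req @ x_obs
eps = 1e-4

# Stage 1: calibration LP (maximise margin, i.e. minimise its negative)
res = linprog(-margin,
              A_ub=np.vstack([np.ones(3), water_req, np.eye(3)]),
              b_ub=np.hstack([land, water, x_obs * (1 + eps)]),
              method="highs")
lam = -res.ineqlin.marginals[2:]              # duals of the calibration bounds

# Stage 2: quadratic cost calibrated so its marginal value equals lam at x_obs
gamma = lam / x_obs

def run(water_cap):
    """Maximise calibrated profit subject to land and water constraints."""
    obj = lambda x: -(margin @ x - 0.5 * gamma @ x**2)
    cons = [{"type": "ineq", "fun": lambda x: land - x.sum()},
            {"type": "ineq", "fun": lambda x: water_cap - water_req @ x}]
    return minimize(obj, x_obs, bounds=[(0, None)] * 3, constraints=cons).x

print("baseline   :", np.round(run(water), 1))        # close to observed areas
print("-30% water :", np.round(run(0.7 * water), 1))  # simulated scarcity scenario
```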

Relevance: 10.00%

Abstract:

An aerodynamic optimization of the train aerodynamic characteristics in terms of front wind action sensitivity is carried out in this paper. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, so as to include a wide range of geometries in the design space as possible optimal solutions. When using a GA, the main disadvantage to deal with is the large number of evaluations needed before finding such an optimum. Here the use of metamodels to replace the Navier-Stokes solver is proposed. Among all the possibilities, Response Surface Models and Artificial Neural Networks (ANN) are considered. The best prediction and generalization results are obtained with ANN, which is therefore applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
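As a hedged illustration of the surrogate-assisted loop described above, the sketch below trains an ANN metamodel on samples of a toy objective (a stand-in for the Navier-Stokes evaluation of a Bézier-parametrised nose; the parameter count, sample sizes and GA settings are arbitrary choices, not those of the paper) and then lets a simple real-coded GA search on the surrogate.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_params = 6                                   # Bezier control-point parameters, assumed

def cfd_stand_in(x):
    """Placeholder for the expensive solver: side-wind sensitivity of a shape."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(5 * x).sum(axis=-1)

# 1) Design of experiments + ANN metamodel
X_train = rng.uniform(0.0, 1.0, size=(200, n_params))
y_train = cfd_stand_in(X_train)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X_train, y_train)

# 2) Simple real-coded GA evaluated on the ANN instead of the flow solver
pop = rng.uniform(0.0, 1.0, size=(60, n_params))
for gen in range(80):
    fitness = ann.predict(pop)
    parents = pop[np.argsort(fitness)[:30]]            # keep the best half (minimisation)
    children = []
    for _ in range(60):
        a, b = parents[rng.integers(30, size=2)]
        child = np.where(rng.random(n_params) < 0.5, a, b)          # uniform crossover
        child += rng.normal(0.0, 0.05, n_params) * (rng.random(n_params) < 0.2)  # mutation
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.array(children)

best = pop[np.argmin(ann.predict(pop))]
print("surrogate optimum:", best, "re-checked on the solver:", cfd_stand_in(best))
```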

Relevance: 10.00%

Abstract:

Las FPAAs son dispositivos analógicos programables. Estos dispositivos se basan en el uso de condensadores conmutados junto con amplificadores operacionales. Este tipo de tecnología presenta una serie de ventajas, ya que combinan las ventajas de los dispositivos digitales, como la reprogramación en función de las variables del entorno que los rodea, con la diferencia de ser dispositivos analógicos, permitiendo la realización de una amplia gama de diseños analógicos en un solo chip. En este proyecto se ha realizado un estudio sobre el funcionamiento de los condensadores conmutados y su uso en el dispositivo AN221E04 del fabricante Anadigm. Una vez descrita la arquitectura del AN221E04 y explicadas las bases del funcionamiento de los condensadores conmutados, utilizando como ejemplo los modelos facilitados por Anadigm, se desarrolla un modelo de amplificador de instrumentación teórico y se describe la metodología para su implementación en un AN221E04 con el software Anadigm Designer 2. Una vez implementado este modelo de amplificador de instrumentación se han efectuado una serie de pruebas con el objetivo de estudiar la capacidad de estos dispositivos. Dichas pruebas ponen de manifiesto que las FPAAs tienen una serie de ventajas a tener en cuenta a la hora de realizar diseños analógicos. La precisión obtenida por el modelo de amplificador de instrumentación realizado es más que aceptable, llegando a obtener errores de ganancia inferiores al 1% con ganancias de 200 V/V sin tener la necesidad de realizar grandes ajustes. En las conclusiones de este estudio se exponen tanto ventajas como inconvenientes de la utilización de FPAAs en diseños analógicos. La principal ventaja de este uso es el ahorro de costes, ya que una vez desarrollada una plataforma de diseño, la capacidad de reconfiguración permite utilizar dicha plataforma para un amplio abanico de aplicaciones, reduciendo el número de componentes y simplificando las etapas de diseño. Como desventaja, las FPAAs tienen una serie de limitaciones que hay que tener en cuenta y que en ciertos casos pueden hacer irrealizable un diseño concreto, como puede ser el valor máximo o mínimo de ganancia.
FPAAs are programmable analog devices. These devices rely on the use of switched capacitors together with operational amplifiers. This type of technology has a number of advantages, because it combines the advantages of digital devices, such as reprogramming as a function of the variables of the surrounding environment, with the difference of being analog, allowing a wide range of analog designs to be implemented on a single chip. This project presents a study of the operation of switched capacitors and their use in the AN221E04 device from Anadigm. Having described the architecture of the AN221E04 and explained the operating principles of switched capacitors, using the example models provided by Anadigm, a theoretical instrumentation amplifier model is developed and the methodology for its implementation on an AN221E04 with the Anadigm Designer 2 software is described. Once this instrumentation amplifier model was implemented, a series of tests was carried out in order to study the capabilities of these devices. These tests show that FPAAs have a number of advantages to take into account when making analog designs. The accuracy obtained by the implemented instrumentation amplifier model is more than acceptable, achieving gain errors of less than 1% with gains of 200 V/V without the need for major adjustments. The conclusions of this study present both advantages and disadvantages of using FPAAs in analog designs. The main advantage of this use is the cost saving: once a design platform has been developed, the reconfiguration capability allows that platform to be used for a wide range of applications, reducing the component count and simplifying the design stages. As a disadvantage, FPAAs have a number of limitations which must be taken into account and which, in certain cases, may make a specific design unfeasible, such as the maximum or minimum gain, or the range of possible settings.
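For orientation, the short sketch below puts numbers on the two ideas just discussed: the equivalent resistance of a switched capacitor, R_eq = 1/(f_clk·C), and the gain error of the instrumentation amplifier expressed against its programmed gain. The clock frequency, capacitance and measured output are illustrative assumptions, not values from the AN221E04 datasheet or from the tests reported above.

```python
# Switched-capacitor equivalent resistance and gain-error check (assumed values).
f_clk = 4.0e6          # switching frequency (Hz), assumed
C     = 1.0e-12        # switched capacitance (F), assumed
R_eq  = 1.0 / (f_clk * C)
print(f"equivalent resistance: {R_eq / 1e6:.2f} Mohm")

G_programmed = 200.0            # V/V, gain configured in Anadigm Designer 2
v_in, v_out  = 1.0e-3, 0.1986   # example differential input and measured output (V)
G_measured   = v_out / v_in
error_pct    = 100.0 * abs(G_measured - G_programmed) / G_programmed
print(f"gain error: {error_pct:.2f} %")   # below the 1 % figure reported in the study
```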

Relevance: 10.00%

Abstract:

The twentieth century brought a new sensibility characterized by the discredit of Cartesian rationality and the weakening of universal truths, associated with aesthetic values such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig warned about the transformation taking place in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... Gradually new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche or information theory. Overall, they were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought together all these efforts by proposing a single theoretical frame in his book Opera Aperta. According to his point of view, all of the aesthetic production of the twentieth century had one characteristic in common: its capacity to express multiplicity. For this reason, he considered that the nature of contemporary art was, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should start by making an accurate analysis of this concept. However, this task is quite difficult because ambiguity does not allow itself to be clearly defined. This concept has the disadvantage that its signifier is as imprecise as its signified. In addition, the negative connotations that ambiguity still has outside the aesthetic field stigmatize this term and make its use problematic. Another problem of ambiguity is that the contemporary subject is able to locate it in all situations. This means that, in addition to distinguishing ambiguity in contemporary productions, the subject also does so in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous. And that's correct, because somehow ambiguity is present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between the current and past contexts. One affects the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Secondly, ambiguity was an unexpected aesthetic result in former periods, while in the contemporary object it has been codified and is deliberately present. In any case, as Eco did, we consider the use of the term ambiguity appropriate to refer to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is quite complex and difficult to diagnose. Contrary to what might normally be expected, in this case ambiguity is the term that fits best precisely because of its particular lack of specificity. In fact, this lack of specificity is what makes it possible to assign a dynamic condition to the idea of ambiguity that in other terms would hardly be operative.
Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and its consequences for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant in each moment. We will deal with ambiguity as a constant presence that has always been latent in architectural production but whose nature has been modified over time. Eco, in the mid-twentieth century, discerned between classical ambiguity and contemporary ambiguity. Currently, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that it is possible to detect a new transformation that has much to do with the cultural and aesthetic context of recent decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each one related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who brings to a conclusion an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and François Lyotard shows reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned and, at last, denied. In this period ambiguity seems to have reached a new level in which it is no longer possible to obtain a conclusive and complete interpretation of the object because it has become an unreadable device. The postmodern production offers a kind of inaccessible multiplicity and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has an outstanding analogy with that shown in the poetic analysis made by William Empson, published in 1936 in his Seven Types of Ambiguity. Empson established different levels of ambiguity and classified them according to their poetic effect, in a layout with an ascending logic towards incoherence. In the seventh level, where ambiguity is highest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way that contemporary aesthetics found to confirm the Hegelian judgment according to which art would ultimately reject its capacity to express truth. Much of the transformation of architecture throughout the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture, ambiguity is present a posteriori, in the critical review made by theoreticians such as Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued due to its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking previous dogmas.
The recovery of Mannerism allowed ambiguity and modernity to be joined in the same sentence for the first time. In postmodernism, on the other hand, ambiguity is present ex professo, playing a prominent role in the theoretical discourse of this period. The distance between its analytical identification and its operational use quickly disappeared because of structuralism, an analytical methodology with the aspiration of becoming a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project distinguished between the components of architectural language and developed them separately. Consequently, there is not only one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project. Leading these projects are those prominent architects whose work manifested a special interest in exploring and developing the potential of the use of contradiction in architecture. Thus, it was Robert Venturi, Peter Eisenman and Rem Koolhaas who established the main features through which architecture developed the dialectics of ambiguity, in its last and extreme level, as a theoretical project in each component of architectural language. Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, and Rem Koolhaas did so with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It can also serve to shed light on certain still largely unnoticed aspects that have shaped the architectural heritage of recent decades, a consequence of a fruitful relationship between architecture and ambiguity and of its provocative consummation in a contradictio in terminis. This research focuses fundamentally on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is therefore structured around a main chapter entitled Dialéctica de la ambigüedad como proyecto teórico postmoderno, which is divided into three parts entitled Proyecto semántico. Robert Venturi; Proyecto sintáctico. Peter Eisenman; and Proyecto pragmático. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialéctica de la ambigüedad contemporánea. Una aproximación, carries out a chronological analysis of the evolution that the idea of ambiguity has undergone in twentieth-century aesthetic theory, without yet entering into architectural questions. The second, entitled Dialéctica de la ambigüedad como crítica del proyecto moderno, examines the gradual incorporation of ambiguity into the critical review of modernity, which would be of vital importance in enabling its later operational introduction in postmodernity. A final chapter, placed at the end of the text, proposes a series of Proyecciones which, in light of what has been analysed in the previous chapters, attempt to establish a rereading of the current architectural context and of its possible evolution, considering at all times that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the thesis synthesizes the tripartite structure of the central chapter and, broadly speaking, the main methodological tool used in the research.
In this way, the threefold semantic, syntactic and pragmatic dimension with which the postmodern theoretical project has been identified is reproduced here in a specific arrangement of images, footnotes and the main body of the text. The images accompanying the main text are placed in the left-hand column. Their arrangement follows aesthetic and compositional criteria, qualifying, as far as possible, their semantic condition. Next, to their right, are the footnotes. They are arranged in a column, and each note is placed at the same height as its corresponding reference in the main text. Their regulated distribution, their value as notation and their possible equation with a deep structure allude to their syntactic condition. Finally, the main body of the text completely occupies the right-hand half of each double page. Conceived as a continuous narrative, with hardly any interruptions, its role of satisfying the discursive demands posed by a doctoral investigation corresponds to its pragmatic condition.

Relevance: 10.00%

Abstract:

Esta tesis analiza los elementos que afectan a la evaluación del rendimiento dentro de la técnica de radiodiagnóstico mediante tomografía por emisión de positrones (PET), centrándose en escáneres preclínicos. Se exploran las posibilidades de los protocolos estándar de evaluación sobre los siguientes aspectos: su uso como herramienta para validar programas de simulación Montecarlo, como método para la comparación de escáneres y su validez en el estudio del efecto sobre la calidad de imagen al utilizar radioisótopos alternativos. Inicialmente se estudian los métodos de evaluación orientados a la validación de simulaciones PET, para ello se presenta el programa GAMOS como entorno de simulación y se muestran los resultados de su validación basada en el estándar NEMA NU 4-2008 para escáneres preclínicos. Esta validación se ha realizado mediante la comparación de los resultados simulados frente a adquisiciones reales en el equipo ClearPET, describiendo la metodología de evaluación y selección de los parámetros NEMA. En este apartado también se mencionan las aportaciones desarrolladas en GAMOS para aplicaciones PET, como la inclusión de herramientas para la reconstrucción de imágenes. Por otro lado, la evaluación NEMA del ClearPET es utilizada para comparar su rendimiento frente a otro escáner preclínico: el sistema rPET-1. Esto supone la primera caracterización NEMA NU 4 completa de ambos equipos; al mismo tiempo que se analiza cómo afectan las importantes diferencias de diseño entre ellos, especialmente el tamaño axial del campo de visión y la configuración de los detectores. El 68Ga es uno de los radioisótopos no convencionales en imagen PET que está experimentando un mayor desarrollo, sin embargo, presenta la desventaja del amplio rango o distancia recorrida por el positrón emitido. Además del rango del positrón, otra propiedad física característica de los radioisótopos PET que puede afectar a la imagen es la emisión de fotones gamma adicionales, tal como le ocurre al isótopo 48V. En esta tesis se evalúan dichos efectos mediante estudios de resolución espacial y calidad de imagen NEMA. Finalmente, se analiza el alcance del protocolo NEMA NU 4-2008 cuando se utiliza para este propósito, adaptándolo a tal fin y proponiendo posibles modificaciones. Abstract This thesis analyzes the factors affecting the performance evaluation in positron emission tomography (PET) imaging, focusing on preclinical scanners. It explores the possibilities of standard protocols of assessment on the following aspects: their use as tools to validate Monte Carlo simulation programs, their usefulness as a method for comparing scanners and their validity in the study of the effect of alternative radioisotopes on image quality. Initially we study the methods of performance evaluation oriented to validate PET simulations. For this we present the GAMOS program as a simulation framework and show the results of its validation based on the standard NEMA NU 4-2008 for preclinical PET scanners. This has been accomplished by comparing simulated results against experimental acquisitions in the ClearPET scanner, describing the methodology for the evaluation and selection of NEMA parameters. This section also mentions the contributions developed in GAMOS for PET applications, such as the inclusion of tools for image reconstruction. Furthermore, the evaluation of the ClearPET scanner is used to compare its performance against another preclinical scanner, specifically the rPET-1 system. 
This is the first complete NEMA NU 4 based characterization study of both systems. At the same time, we analyze how the significant design differences between these two systems, especially the size of the axial field of view and the configuration of the detectors, affect their performance characteristics. 68Ga is one of the unconventional radioisotopes in PET imaging whose use is currently increasing significantly; however, it presents the disadvantage of a long positron range (the distance traveled by the emitted positron before annihilating with an electron). Besides the positron range, additional gamma photon emission is another physical property of PET radioisotopes that can affect the reconstructed image quality, as is the case for the isotope 48V. In this thesis we assess these effects through studies of spatial resolution and image quality. Finally, we analyze the scope of the NEMA NU 4-2008 protocol for carrying out such studies, adapting it and proposing possible modifications.
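To give a concrete flavour of one NEMA NU 4-2008 figure of merit mentioned above, the sketch below computes the FWHM of a one-dimensional point-source profile, taking the peak from a parabolic fit through the three highest samples and the width by linear interpolation at half maximum. The Gaussian profile and the 0.4 mm pixel size are synthetic assumptions, used only to exercise the routine.

```python
import numpy as np

def fwhm(profile, pixel_mm):
    """FWHM of a 1-D profile: parabolic apex + linear interpolation at half max."""
    i = int(np.argmax(profile))
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    peak = y1 - 0.125 * (y0 - y2) ** 2 / (y0 - 2 * y1 + y2)   # parabolic apex value
    half = peak / 2.0
    left = np.where(profile[:i] < half)[0][-1]                 # last sample below half (left side)
    right = i + np.where(profile[i:] < half)[0][0]             # first sample below half (right side)
    x_l = left + (half - profile[left]) / (profile[left + 1] - profile[left])
    x_r = right - 1 + (half - profile[right - 1]) / (profile[right] - profile[right - 1])
    return (x_r - x_l) * pixel_mm

x = np.arange(-20, 21) * 0.4                        # 0.4 mm pixels, assumed
profile = np.exp(-0.5 * (x / (1.3 / 2.355)) ** 2)   # synthetic Gaussian with 1.3 mm FWHM
print(f"measured FWHM: {fwhm(profile, 0.4):.2f} mm")
```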

Relevance: 10.00%

Abstract:

Urban areas benefit from significant improvements in accessibility when a new high speed rail (HSR) project is built. These improvements, which are due mainly to a rise in efficiency, produce locational advantages and increase the attractiveness of these cities, thereby possibly enhancing their competitiveness and economic growth. However, there may be equity issues at stake, as the main accessibility benefits are primarily concentrated in urban areas with a HSR station, whereas other locations obtain only limited benefits. HSR extensions may contribute to an increase in spatial imbalance and lead to more polarized patterns of spatial development. Procedures for assessing the spatial impacts of HSR must therefore follow a twofold approach which addresses issues of both efficiency and equity. This analysis can be made by jointly assessing both the magnitude and the distribution of the accessibility improvements deriving from a HSR project. This paper describes an assessment methodology for HSR projects which follows this twofold approach. The procedure uses spatial impact analysis techniques and is based on the computation of accessibility indicators, supported by a Geographical Information System (GIS). Efficiency impacts are assessed in terms of the improvements in accessibility resulting from the HSR project, with a focus on major urban areas; and spatial equity implications are derived from changes in the distribution of accessibility values among these urban agglomerations.
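As a toy illustration of the twofold efficiency/equity assessment described above, the following sketch computes a population-weighted accessibility indicator for a handful of invented cities before and after a hypothetical HSR link, summarising efficiency as the average travel-time saving and equity as the change in dispersion (coefficient of variation) of the indicator. The real methodology computes such indicators on a GIS network, which is not reproduced here.

```python
import numpy as np

cities = ["A", "B", "C", "D"]
pop = np.array([3.0, 1.5, 0.7, 0.3])              # millions, hypothetical
t_before = np.array([[0, 150, 240, 300],
                     [150, 0, 180, 260],
                     [240, 180, 0, 210],
                     [300, 260, 210, 0]], float)   # rail travel times (minutes), hypothetical
t_after = t_before.copy()
t_after[0, 1] = t_after[1, 0] = 70                 # assumed new HSR line between A and B

def accessibility(times):
    """Population-weighted mean travel time from each city to all the others."""
    acc = []
    for i in range(len(cities)):
        acc.append(np.average(np.delete(times[i], i), weights=np.delete(pop, i)))
    return np.array(acc)

acc_b, acc_a = accessibility(t_before), accessibility(t_after)
efficiency = np.average(acc_b - acc_a, weights=pop)        # mean minutes saved
cv = lambda a: a.std() / a.mean()                          # dispersion as an equity proxy
print("accessibility before/after:", np.round(acc_b), np.round(acc_a))
print(f"efficiency gain: {efficiency:.1f} min;  equity (CV): {cv(acc_b):.3f} -> {cv(acc_a):.3f}")
```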

Relevance: 10.00%

Abstract:

La determinación del origen de un material utilizado por el hombre en la prehistoria es de suma importancia en el ámbito de la arqueología. En los últimos años, los estudios de procedencia han utilizado técnicas que suelen ser muy precisas pero con el inconveniente de ser metodologías de carácter destructivo. El fenómeno de la minería a gran escala es una de las características que acompaña al Neolítico, de ahí que la revolución correspondiente a este periodo sea una de las etapas más importantes para la humanidad. El yacimiento arqueológico de Casa Montero es una mina de sílex neolítica ubicada en la Península Ibérica, de gran importancia por su antigüedad y su escala productiva. Este sitio arqueológico corresponde a una cantera de explotación de rocas silícicas desarrollada en el periodo neolítico en la que solamente se han encontrado los desechos de la extracción minera, lo cual incrementa la variabilidad de las muestras analizadas, de las que se desconoce su contexto económico, social y cultural. Es de gran interés arqueológico saber por qué esos grupos neolíticos explotaban de forma tan intensiva determinados tipos de material y cuál era el destino de la cadena productiva del sílex. Además, por ser una excavación de rescate, que ha tenido que procesar varias toneladas de material en un tiempo relativamente corto, requiere de métodos expeditivos de clasificación y manejo de dicho material. Sin embargo, la implementación de cualquier método de clasificación debe evitar la alteración o modificación de la muestra, ya que estudios previos sobre caracterización de rocas silícicas tienen el inconveniente de alterar parcialmente el objeto de estudio. Por lo que el objetivo de esta investigación fue la modelización del registro y procesamiento de datos espectrales adquiridos de rocas silícicas del yacimiento arqueológico de Casa Montero. Se implementó la metodología para el registro y procesamiento de datos espectrales de materiales líticos dentro del contexto arqueológico. Lo anterior se ha conseguido con la aplicación de modelos de análisis espectral, algoritmos de suavizado de firmas espectrales, reducción de la dimensionalidad de las características y la aplicación de métodos de clasificación, tanto de carácter vectorial como raster. Para la mayoría de los procedimientos se ha desarrollado una aplicación informática validada tanto por los propios resultados obtenidos como comparativamente con otras aplicaciones. Los ensayos de evaluación de la metodología propuesta han permitido comprobar la eficacia de los métodos. Por lo que se concluye que la metodología propuesta no solo es útil para materiales silícicos, sino que se puede generalizar en aquellos procesos donde la caracterización espectral puede ser relevante para la clasificación de materiales que no deban ser alterados; además, permite aplicarla a gran escala, dado que los costes de ejecución son mínimos si se comparan con los de métodos convencionales. Así mismo, es de destacar que los métodos propuestos representan la variabilidad del material y permiten relacionarla con el estado del yacimiento, según su contenido respecto de las tipologías de la cadena operativa. ABSTRACT: The determination of the origin of a material used by man in prehistory is very important in the field of archaeology. In recent years, provenance studies have used techniques that tend to be very precise but which have the drawback of being destructive.
The phenomenon of mining on a large scale is a feature that accompanies the Neolithic period; the Neolithic revolution is one of the most important periods of humanity. The archaeological site of Casa Montero is a Neolithic flint mine located in the Iberian Peninsula, of great importance for its antiquity and its scale. This archaeological site corresponds to a quarry for the exploitation of silicic rocks developed in the Neolithic period, in which only the debris from mining has been found, which increases the variability of the samples analyzed, whose economic, social and cultural context is unknown. It is of great archaeological interest to know why these Neolithic groups exploited certain types of material so intensively and what the final destination of the flint was in the production chain. In addition, being a rescue excavation that had to process several tons of material in a relatively short time, it requires expeditious methods for the classification and handling of the material. However, the implementation of any classification method should avoid the alteration or modification of the sample, since previous studies on the characterization of silicic rocks have the disadvantage of destroying or partially modifying the object of study. The objective of this research was therefore the modeling of the registration and processing of spectral data acquired from silicic rocks of the archaeological site of Casa Montero. The methodology implemented for modeling the registration and processing of spectral data of lithic materials within the archaeological context was presented as an alternative to conventional classification methods (destructive and expensive) or to subjective methods that depend on the experience of the expert. This has been achieved with the implementation of spectral analysis models, smoothing of spectral signatures and dimensionality reduction algorithms. Validation trials of the proposed methodology allowed testing the effectiveness of the methods with regard to the spectral characterization of the siliceous materials of Casa Montero. The algorithmic contributions to signal filtering, quality improvement and dimensionality reduction are remarkable, as is the proposal of using raster structures for the efficient storage and analysis of the spectral information. It is therefore concluded that the proposed methodology is not only useful for siliceous materials, but can be generalized to those processes where spectral characterization may be relevant for the classification of materials that must not be altered; it can also be applied on a large scale, given that the implementation costs are minimal when compared with conventional methods.
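As a rough sketch of the kind of processing chain described above (smoothing of the spectral signatures, dimensionality reduction, supervised classification), the following example runs a Savitzky-Golay filter, PCA and an SVM on synthetic reflectance spectra. The band range, the three hypothetical flint types and the noise level are assumptions, not the Casa Montero data, and the raster storage of the real system is omitted.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
bands = np.linspace(400, 2500, 300)                  # nm, assumed sensor range

def synthetic_spectrum(center):
    """Toy flint-type spectrum: one broad absorption feature plus noise."""
    return (1.0 - 0.4 * np.exp(-((bands - center) / 180.0) ** 2)
            + rng.normal(0.0, 0.02, bands.size))

centers = rng.choice([1400, 1900, 2200], size=300)   # three hypothetical material types
X = np.array([synthetic_spectrum(c) for c in centers])
y = centers                                          # type label per spectrum

X_smooth = savgol_filter(X, window_length=11, polyorder=3, axis=1)   # signature smoothing
X_tr, X_te, y_tr, y_te = train_test_split(X_smooth, y, random_state=0)

clf = make_pipeline(PCA(n_components=10), SVC()).fit(X_tr, y_tr)     # reduce + classify
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```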

Relevance: 10.00%

Abstract:

In this paper, a new linear method for optimizing compact low noise oscillators for RF/MW applications is presented. The first part of the paper gives an overview of Leeson's model. It is pointed out, and demonstrated, that the phase noise is always the same inside the oscillator loop. A general phase noise optimization method for reference plane oscillators is then presented. The new method uses Transpose Return Relations (RRT) as true loop gain functions for obtaining the optimum values of the elements of the oscillator, whatever its scheme. With this method, oscillator topologies that until now have been designed and optimized using negative resistance, negative conductance or reflection coefficient methods can be studied as a loop gain method. In this way, the main disadvantage of Leeson's model is overcome, and it is no longer valid only for loop gain methods, but for any oscillator topology. The last section of the paper lists the steps to be performed to use this method for proper phase noise optimization during the linear design process and before the final non-linear optimization. The power of the proposed RRT method is shown through its use for optimizing a common oscillator, which is later simulated using Harmonic Balance (HB) and manufactured. Finally, the linear and HB phase noise predictions are compared with the measurements.
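As a small numerical companion to the discussion of Leeson's model above, the sketch below evaluates the usual Leeson expression for single-sideband phase noise as a function of offset frequency. The loaded Q, noise figure, carrier power and flicker corner are placeholder values, not those of the oscillator optimised in the paper.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant (J/K)

def leeson_dbc_hz(f_offset, f0, q_loaded, noise_figure_db, p_carrier_dbm, f_corner):
    """SSB phase noise L(f_offset) in dBc/Hz from Leeson's expression."""
    F = 10 ** (noise_figure_db / 10)
    P = 1e-3 * 10 ** (p_carrier_dbm / 10)          # carrier power in watts
    s = ((F * k_B * 290.0 / (2 * P))
         * (1 + (f0 / (2 * q_loaded * f_offset)) ** 2)
         * (1 + f_corner / f_offset))
    return 10 * np.log10(s)

fm = np.array([1e3, 1e4, 1e5, 1e6])                # offsets from the carrier (Hz)
L = leeson_dbc_hz(fm, f0=2e9, q_loaded=50, noise_figure_db=6,
                  p_carrier_dbm=5, f_corner=20e3)  # assumed oscillator parameters
for f, l in zip(fm, L):
    print(f"{f / 1e3:7.0f} kHz offset: {l:7.1f} dBc/Hz")
```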

Relevance: 10.00%

Abstract:

The variation in the adoption of a technology as a major source of competitive advantage has been attributed to the wide-ranging strategic foresight and the integrative capability of a firm. These possible areas of competitive advantage can exist in the periphery of the firm's strategic vision, can easily become blurred as a result of rigidity, and can permeate the decision-making process of the firm. This article explores how electric utility firms with a renewable energy portfolio can become strategically rigid in terms of the adoption of newer technologies. The reluctance or delay in the adoption of new technology can be characterized as strategic rigidity, brought about by a firm's core competence or core capability in the other, more conventional technology arrangement. This paper explores the implications of such rigidity on the performance of a firm and consequently on the energy ecosystem. The paper substantiates the results by examining the case of Iberdrola S.A., an incumbent firm and wind energy developer, and its adoption decision behavior. We illustrate that the very routines that create competitive advantage for firms in the electric utility industry are vulnerable, as they might also develop into sources of competitive disadvantage when firms confront environmental change and uncertainty.

Relevance: 10.00%

Abstract:

En los vocabularios biomédicos actuales más utilizados suelen existir mecanismos de composición de términos a partir de términos pre-existentes. Estos mecanismos de composición aumentan la potencia de los lenguajes que los poseen, pero parten con la desventaja de la posibilidad de representar un mismo concepto con diferentes conceptos base, lo que introduce un componente de ambigüedad en los mismos. Este trabajo de fin de grado consiste en la realización de una herramienta que permita reconocer términos de estos vocabularios biomédicos complejos, es decir, vocabularios con términos compuestos por otros términos, como puede ser el caso de SNOMED. Con la consecución de este proyecto, obtendremos una herramienta capaz de identificar las ambigüedades presentes en la representación de estos conceptos compuestos y representar de una forma homogénea dichos conceptos. Para favorecer la interoperabilidad y accesibilidad de la herramienta se ha decidido ofrecerla mediante una interfaz web accesible desde cualquier dispositivo o lugar con acceso a internet. ---ABSTRACT--- In the most widely used current biomedical vocabularies, there are usually mechanisms for composing terms from pre-existing terms. These mechanisms increase the power of the terminologies they belong to. Despite this, these operations present a disadvantage: the possibility of representing the same concept with different base concepts, which introduces a certain degree of ambiguity in those complex terms. The objective of this final degree project consists in developing a tool that allows recognizing terms from those complex biomedical vocabularies, that is, terminologies with terms composed of simpler terms, such as SNOMED. By completing this project, we obtained a tool capable of identifying the ambiguities present in the representation of those composite concepts and of representing those concepts in a homogeneous format. To facilitate the interoperability and accessibility of the tool, it was decided to offer it through a web interface accessible from any place or device with access to the internet.
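As a purely hypothetical sketch of the normalisation idea described above, the snippet below reduces two differently ordered compositions of the same meaning to one canonical, order-independent representation so that the ambiguity can be detected. The concept and attribute names are invented for illustration and are not real SNOMED CT content; the actual tool works against the full terminology through its web interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expression:
    focus: str                 # base concept code
    refinements: frozenset     # (attribute, value) pairs, order-independent

def canonical(focus, refinements):
    """Build a duplicate-free, order-independent form of a composite term."""
    return Expression(focus, frozenset(refinements))

a = canonical("Fracture", [("finding_site", "Femur"), ("laterality", "Left")])
b = canonical("Fracture", [("laterality", "Left"), ("finding_site", "Femur")])

print(a == b)   # True: both compositions map to the same canonical form
```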

Relevance: 10.00%

Abstract:

The project you are about to see is based on the technologies used for object detection and recognition, especially of leaves and chromosomes. This document therefore contains the typical parts of a scientific paper. It is composed of an Abstract, an Introduction, sections related to the research area, future work, conclusions and the references used for the elaboration of the document. The Abstract describes what can be found in this paper, namely the technologies employed in pattern detection and recognition for leaves and chromosomes and the work already done to catalogue these objects. In the Introduction, the meanings of detection and recognition are explained. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection would be recognizing the object's borders. Recognition, on the other hand, refers to the process by which the computer or machine determines what kind of object is being handled. Afterwards we present a compilation of the most used technologies in object detection in general. There are two main groups in this category: those based on derivatives of images and those based on ASIFT points. The methods based on derivatives of images have in common that the image is processed by convolving it with a previously created matrix. This is done to detect borders in the images, which are changes in the intensity of the pixels. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity, as they only use the first derivative, and Laplacian-based methods, which search for zeros (zero crossings) of pixel intensity, as they use the second derivative. Depending on the level of detail that we want in the final result, we will choose one option or the other because, logically, if we use gradient-based methods the computer will consume fewer resources and less time as there are fewer operations, but the quality will be worse; on the other hand, if we use Laplacian-based methods we will need more time and resources as they require more operations, but we will have a much better quality result. After explaining all the derivative-based methods, we take a look at the different algorithms that are available for each group. The other big group of technologies for object recognition is the one based on ASIFT points, which rely on six image parameters and compare them with those of another image, taking these parameters into consideration. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object. So if we are going to recognize two different leaves, even if they belong to the same species, we are not going to be able to recognize them with this method. It is important to mention these types of technologies as we are talking about recognition methods in general. At the end of the chapter we can see a comparison of the pros and cons of all the technologies that are employed, first comparing them separately and then comparing them all together, based on our purposes. Recognition techniques, which are covered in the next chapter, are not really vast because, even though there are general steps for doing object recognition, every single object that has to be recognized has its own method, as they are all different. This is why there is not a general method that we can specify in this chapter.
We now move on to leaf detection techniques on computers. Here we will use the technique explained above based on image derivatives. The next step will be to turn the leaf into several parameters. Depending on the document that you are referring to, there will be more or fewer parameters. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein) and, doing mathematical operations with them, we can get up to 16 secondary features. The next proposition is dividing the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and, from those, extracting 12 secondary features. This second alternative is the most widely used, so it is the one that is going to be the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after clicking on both leaf ends, automatically tells us to which species the leaf that we are trying to recognize belongs. To do so, it only requires having a database. In the tests reported by that document, an accuracy of 90.312% is claimed over 320 total tests (32 plants in the database and 10 tests per species). The next chapter talks about chromosome detection, where we must pass from the metaphase plate, where the chromosomes are disorganized, to the karyotype plate, which is the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and sweeping angles. The skeletonization process consists of suppressing the inside pixels of the chromosome to keep only the silhouette. This method is really similar to the ones based on the derivatives of the image, but the difference is that it does not detect the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into consideration that a single chromosome cannot bend more than X degrees, it detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding that chromosomes have (grey-scale bands) that makes them unique. The program detects the longitudinal axis of the chromosome and reconstructs the band profiles. Then the computer is able to recognize the chromosome. Concerning future work, we generally have two independent techniques that do not unite detection and recognition, so our main focus would be to prepare a program that gathers both techniques. On the leaf matter we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features. The work that would have to be done is to create an algorithm that links both methods, since in the program which recognizes leaves both leaf ends have to be clicked, so it is not an automatic algorithm. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome, then starts to sweep angles, and later gives the parameters to the program that searches for the band profiles. Finally, in the summary, we explain why this type of investigation is needed, and that is because, with global warming, lots of species (animals and plants) are beginning to become extinct. That is the reason why a big database, which gathers all the possible species, is needed. For recognizing animal species, we only have to have the 23 chromosomes.
While recognizing a plant, there are several ways of doing it, but the easiest way to input it into a computer is to scan the leaf of the plant. RESUMEN. El proyecto que se puede ver a continuación trata sobre las tecnologías empleadas en la detección y reconocimiento de objetos, especialmente de hojas y cromosomas. Para ello, este documento contiene las partes típicas de un paper de investigación, puesto que es de lo que se trata. Así, estará compuesto de Abstract, Introducción, diversos puntos que tengan que ver con el área a investigar, trabajo futuro, conclusiones y bibliografía utilizada para la realización del documento. Así, el Abstract nos cuenta qué vamos a poder encontrar en este paper, que no es ni más ni menos que las tecnologías empleadas en el reconocimiento y detección de patrones en hojas y cromosomas y qué trabajos hay existentes para catalogar a estos objetos. En la introducción se explican los conceptos de qué es la detección y qué es el reconocimiento. Esto es necesario ya que muchos papers científicos, especialmente los que hablan de cromosomas, confunden estos dos términos que no podían ser más sencillos. Por un lado tendríamos la detección del objeto, que sería simplemente coger las partes que nos interesasen de la imagen y eliminar aquellas partes que no nos fueran útiles para un futuro. Resumiendo, sería reconocer los bordes del objeto de estudio. Cuando hablamos de reconocimiento, estamos refiriéndonos al proceso que tiene el ordenador, o la máquina, para decir qué clase de objeto estamos tratando. Seguidamente nos encontramos con un recopilatorio de las tecnologías más utilizadas para la detección de objetos, en general. Aquí nos encontraríamos con dos grandes grupos de tecnologías: las basadas en las derivadas de imágenes y las basadas en los puntos ASIFT. El grupo de tecnologías basadas en derivadas de imágenes tienen en común que hay que tratar a las imágenes mediante una convolución con una matriz creada previamente. Esto se hace para detectar bordes en las imágenes, que son básicamente cambios en la intensidad de los píxeles. Dentro de estas tecnologías nos encontramos con dos grupos: los basados en gradientes, los cuales buscan máximos y mínimos de intensidad en la imagen puesto que sólo utilizan la primera derivada; y los Laplacianos, los cuales buscan ceros en la intensidad de los píxeles puesto que utilizan la segunda derivada de la imagen. Dependiendo del nivel de detalle que queramos utilizar en el resultado final nos decantaremos por un método u otro puesto que, como es lógico, si utilizamos los basados en el gradiente habrá menos operaciones, por lo que consumirá menos tiempo y recursos, pero por contra tendremos menos calidad de imagen. Y al revés pasa con los Laplacianos, puesto que necesitan más operaciones y recursos pero tendrán un resultado final con mejor calidad. Después de explicar los tipos de operadores que hay, se hace un recorrido explicando los distintos tipos de algoritmos que hay en cada uno de los grupos. El otro gran grupo de tecnologías para el reconocimiento de objetos son los basados en puntos ASIFT, los cuales se basan en 6 parámetros de la imagen y la comparan con otra imagen teniendo en cuenta dichos parámetros. La desventaja de este método, para nuestros propósitos futuros, es que sólo es válido para un objeto en concreto. Por lo que si vamos a reconocer dos hojas diferentes, aunque sean de la misma especie, no vamos a poder reconocerlas mediante este método.
Aún así es importante explicar este tipo de tecnologías puesto que estamos hablando de técnicas de reconocimiento en general. Al final del capítulo podremos ver una comparación con los pros y las contras de todas las tecnologías empleadas. Primeramente comparándolas de forma separada y, finalmente, compararemos todos los métodos existentes en base a nuestros propósitos. El siguiente apartado, las técnicas de reconocimiento, no es muy extenso puesto que, aunque haya pasos generales para el reconocimiento de objetos, cada objeto a reconocer es distinto, por lo que no hay un método específico que se pueda generalizar. Pasamos ahora a las técnicas de detección de hojas mediante ordenador. Aquí usaremos la técnica previamente explicada basada en las derivadas de las imágenes. La continuación de este paso sería diseccionar la hoja en diversos parámetros. Dependiendo de la fuente a la que se consulte puede haber más o menos parámetros. Unos documentos aconsejan dividir la morfología de la hoja en 3 parámetros principales (forma, dentina y ramificación) y, derivando de dichos parámetros, convertirlos a 16 parámetros secundarios. La otra propuesta es dividir la morfología de la hoja en 5 parámetros principales (diámetro, longitud fisiológica, anchura fisiológica, área y perímetro) y de ahí extraer 12 parámetros secundarios. Esta segunda propuesta es la más utilizada de todas, por lo que es la que se utilizará. Pasamos al reconocimiento de hojas, en el cual nos hemos basado en un documento que provee un código fuente que, clicando en los dos extremos de la hoja, automáticamente nos dice a qué especie pertenece la hoja que estamos intentando reconocer. Para ello sólo hay que formar una base de datos. En los test realizados por el citado documento, nos aseguran que tiene un índice de acierto del 90.312% en 320 test en total (32 plantas insertadas en la base de datos por 10 test que se han realizado por cada una de las especies). El siguiente apartado trata de la detección de cromosomas, en el cual se debe de pasar de la célula metafásica, donde los cromosomas están desorganizados, al cariotipo, que es como solemos ver los 23 cromosomas de forma ordenada. Hay dos tipos de técnicas para realizar este paso: por el proceso de esqueletonización y barriendo ángulos. El proceso de esqueletonización consiste en eliminar los píxeles del interior del cromosoma para quedarse con su silueta; este proceso es similar a los métodos de derivación de los píxeles pero se diferencia en que no detecta bordes sino que detecta el interior de los cromosomas. La segunda técnica consiste en ir barriendo ángulos desde el principio del cromosoma y, teniendo en cuenta que un cromosoma no puede doblarse más de X grados, detecta las diversas regiones de los cromosomas. Una vez tengamos el cariotipo, se continúa con el reconocimiento de cromosomas. Para ello existe una técnica basada en las bandas de blancos y negros que tienen los cromosomas y que son las que los hacen únicos. Para ello el programa detecta los ejes longitudinales del cromosoma y reconstruye los perfiles de las bandas que posee el cromosoma y que lo identifican como único. En cuanto al trabajo que se podría desempeñar en el futuro, tenemos por lo general dos técnicas independientes que no unen la detección con el reconocimiento, por lo que se habría de preparar un programa que uniese estas dos técnicas.
Respecto a las hojas hemos visto que ambos métodos, detección y reconocimiento, están vinculados debido a que ambos comparten la opción de dividir las hojas en 5 parámetros principales. El trabajo que habría que realizar sería el de crear un algoritmo que conectase a ambos, ya que en el programa de reconocimiento se debe clicar en los dos extremos de la hoja, por lo que no es una tarea automática. En cuanto a los cromosomas, se debería crear un algoritmo que busque el inicio del cromosoma y entonces empiece a barrer ángulos para después poder dárselo al programa que busca los perfiles de bandas de los cromosomas. Finalmente, en el resumen se explica por qué hace falta este tipo de investigación, esto es, que con el calentamiento global muchas de las especies (tanto animales como plantas) se están empezando a extinguir. Es por ello que se necesitará una base de datos que contemple todas las posibles especies tanto del reino animal como del reino vegetal. Para reconocer a una especie animal, simplemente bastará con tener sus 23 cromosomas; mientras que para reconocer a una especie vegetal existen diversas formas, aunque la más sencilla de todas es contar con la hoja de la especie, puesto que es el elemento más fácil de escanear e introducir en el ordenador.
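Since both the leaf and the chromosome pipelines above start from derivative-based border detection, the short sketch below contrasts the two operator families on a synthetic silhouette standing in for a scanned leaf: a first-derivative detector (Sobel gradient magnitude, whose maxima mark the border) and a second-derivative detector (Laplacian, whose zero crossings mark it). The image, sizes and thresholds are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

# Synthetic image: an ellipse standing in for a scanned leaf, with a softened edge
yy, xx = np.mgrid[0:200, 0:200]
leaf = (((xx - 100) / 70.0) ** 2 + ((yy - 100) / 40.0) ** 2 < 1).astype(float)
leaf = ndimage.gaussian_filter(leaf, 2)

# First derivative: gradient magnitude (maxima mark the border)
gx, gy = ndimage.sobel(leaf, axis=1), ndimage.sobel(leaf, axis=0)
grad_mag = np.hypot(gx, gy)

# Second derivative: Laplacian of a smoothed image (zero crossings mark the border)
lap = ndimage.laplace(ndimage.gaussian_filter(leaf, 2))

print("pixels above gradient threshold :", int((grad_mag > 0.5 * grad_mag.max()).sum()))
print("row-wise Laplacian sign changes :", int((np.diff(np.sign(lap), axis=1) != 0).sum()))
```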

Relevance: 10.00%

Abstract:

El análisis de estructuras mediante modelos de elementos finitos representa una de las metodologías más utilizadas y aceptadas en la industria moderna. Para el análisis de estructuras tubulares de grandes dimensiones, similares a las sobrestructuras de autobuses y autocares, los elementos de tipo viga son comúnmente utilizados y recomendados debido a que permiten obtener resultados satisfactorios con recursos computacionales reducidos. No obstante, los elementos de tipo viga presentan una importante desventaja, ya que las uniones modeladas presentan un comportamiento infinitamente rígido; esto determina un comportamiento más rígido en las estructuras modeladas, lo que se traduce en fuentes de error para las simulaciones estructurales (hasta un 60%). Mediante el modelado de uniones tubulares utilizando elementos de tipo área o volumen se pueden obtener modelos más realistas, ya que las características topológicas de la unión propiamente dicha pueden ser reproducidas con un mayor nivel de detalle, evitándose de esta manera los inconvenientes de los elementos de tipo viga. A pesar de esto, la modelización de estructuras tubulares de grandes dimensiones con elementos de tipo área o volumen representa una alternativa poco atractiva debido a la complejidad del proceso de modelado y al gran número de elementos resultantes, lo que implica la necesidad de grandes recursos computacionales. El principal objetivo del trabajo de investigación presentado fue el de obtener un nuevo tipo de elemento capaz de proporcionar estimaciones más exactas del comportamiento de las uniones modeladas, manteniendo al mismo tiempo la simplicidad del proceso de modelado propio de los elementos de tipo viga regular. Con el fin de alcanzar los objetivos planteados, fueron desarrolladas diferentes metodologías e investigaciones. En base a las investigaciones realizadas, se obtuvo un modelo de unión viga alternativo en el cual se introdujeron un total de seis elementos elásticos al nivel de la unión, mediante los cuales es posible adaptar el comportamiento local de la misma. Adicionalmente, para la estimación de las rigideces correspondientes a los elementos elásticos se desarrollaron dos metodologías: una primera basada en la caracterización del comportamiento estático de uniones simples y una segunda basada en la caracterización del comportamiento dinámico a través de análisis modales. Las mejoras obtenidas mediante la implementación del modelo de unión alternativa fueron analizadas mediante simulaciones y validación experimental en una estructura tubular compleja representativa de sobrestructuras de autobuses y autocares. En base a los análisis comparativos realizados con las uniones simples modeladas y los experimentos de validación, se determinó que las uniones modeladas con elementos de tipo viga son entre un 5 y un 60% más rígidas que uniones equivalentes modeladas con elementos área o volumen. También se determinó que las uniones área y volumen modeladas son entre un 5 y un 10% más rígidas en comparación con uniones reales fabricadas. En los análisis realizados en la estructura tubular compleja se obtuvieron mejoras importantes mediante la implementación del modelo de unión alternativa: las estimaciones del modelo viga se mejoraron desde un 49% hasta aproximadamente un 14%. ABSTRACT The analysis of structures with finite element models represents one of the most utilized and accepted techniques in modern industry.
For the analysis of large tubular structures similar to bus and coach upper structures, beam-type elements are utilized and recommended due to the fact that these elements provide satisfactory results with relatively reduced computational resources. However, beam-type elements have a main disadvantage determined by the fact that the modeled joints have an infinitely rigid behavior; this shortcoming determines a stiffer behavior of the modeled structures, which translates into error sources for the structural simulations (up to 60%). By modeling tubular junctions with shell and volume elements, more realistic models can be obtained, because the topological characteristics of the junction at the joint level can be reproduced more accurately. In this way, the shortcoming that the beam-type elements present can be solved. Despite this fact, modeling large tubular structures with shell or volume type elements represents an unattractive alternative due to the complexity of the modeling process and the large number of resulting elements, which implies the need for vast computational resources. The main objective of the research presented in this thesis was to develop a new beam-type element able to provide more accurate estimations of the local behavior of the modeled junctions while maintaining the simplicity of the modeling process that regular beam-type elements have. In order to reach the established objectives of the research activities, a series of different methodologies and investigations was necessary. From these investigations an alternative beam T-junction model was obtained, in which a total of six elastic elements at the joint level were introduced; these elastic elements make it possible to adapt the local behavior of the modeled junctions. Additionally, for the estimation of the stiffness values corresponding to the elastic elements, two methodologies were developed: one based on the T-junction's static behavior and a second one based on the T-junction's dynamic behavior by means of modal analysis. The improvements achieved through the implementation of this alternative T-junction model were analyzed through simulation and experimental validation on a complex tubular structure with a configuration representative of bus and coach upper structures. From the comparative analyses of the finite-element-modeled T-junctions and the experimental validation tests, it was determined that the beam-modeled T-junctions show a stiffer behavior than equivalent shell- and volume-modeled T-junctions, with average differences ranging from 5 to 60% depending on the profile configuration. It was also determined that the shell and volume models are between 5 and 10% stiffer than real, manufactured T-junctions, depending on the profile configuration. Based on the analysis of the complex tubular structure, significant improvements were obtained by the implementation of the alternative beam T-junction model: the beam model estimation errors were reduced from 49% to approximately 14%.
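To make the role of the elastic elements tangible, the following analytic sketch (with invented section, load and deflection values, not those of the structures tested in the thesis, and reduced to a single rotational spring rather than the full six-element joint) fits the stiffness of one elastic element so that a simple cantilever beam model reproduces an assumed measured static deflection, illustrating how a finite joint stiffness softens the otherwise infinitely rigid beam-element joint.

```python
# Fitting one rotational joint stiffness from a static test (illustrative values only).
E = 210e9            # steel Young's modulus (Pa)
I = 8.0e-8           # second moment of area of the tube section (m^4), assumed
L = 0.5              # length of the T-branch (m), assumed
F = 1000.0           # applied tip load (N), assumed

delta_rigid = F * L**3 / (3 * E * I)          # classic beam model with a rigid joint
delta_meas  = 1.35 * delta_rigid              # "measured" deflection, assumed 35 % softer

# Rotational stiffness of the elastic element fitted so the model matches the test:
# extra tip deflection from the joint rotation is F*L^2 / k_rot
k_rot = F * L**2 / (delta_meas - delta_rigid)

delta_model = F * L**3 / (3 * E * I) + F * L**2 / k_rot
print(f"rigid-joint model : {delta_rigid * 1e3:.2f} mm")
print(f"measured (assumed): {delta_meas * 1e3:.2f} mm")
print(f"spring-joint model: {delta_model * 1e3:.2f} mm  (k_rot = {k_rot:.3g} N*m/rad)")
```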

Relevance: 10.00%

Abstract:

In the past, mining wastes were left wherever they might lie in the surroundings of the mine area. Unfortunately, inactive and abandoned mines continue to pollute our environment, which is why these sites should be restored with minimum impact. Phytoextraction is an environmentally friendly and cost-effective technology, less harmful than traditional methods, that uses metal-hyperaccumulating or at least tolerant plants to extract heavy metals from polluted soils. One disadvantage of hyperaccumulator species is their slow growth rate and low biomass production. Vetiveria zizanioides (L.) Nash, a perennial species adapted to the Mediterranean climate, has a strong root system which can reach up to 3 m deep, is fast growing, and can survive in sites with high metal levels (Chen et al., 2004). Due to the fact that metals in abandoned mine tailings become strongly bonded to soil solids, humic acids used as chelating agents could increase metal bioavailability (Evangelou et al., 2004; Wilde et al., 2005) and thereby promote higher accumulation in the harvestable parts of the plant. The objective of this study was to examine the performance of humic acid assisted phytoextraction using Vetiveria zizanioides (L.) Nash in heavy metal contaminated soils.

Relevance: 10.00%

Abstract:

Las máquinas síncronas con excitación indirecta sin escobillas (tipo brushless) presentan el problema de que no es posible acceder al devanado de campo para desexcitar la máquina. Esto provoca que, a pesar de la operación correcta del sistema de protecciones, la lenta constante de tiempo de desexcitación pueda producir graves daños en el caso de un cortocircuito interno. En este documento se presentan las pruebas de un novedoso sistema de desexcitación rápida para este tipo de máquinas en un generador de 15 MVA. La desexcitación se consigue introduciendo una resistencia en el circuito de campo, obteniendo una respuesta dinámica similar a la que se consigue en las máquinas con excitación estática. La principal aportación de este estudio es la adaptación del método a máquinas de tamaño industrial y las diversas pruebas realizadas en un generador de 15 MVA, validando el correcto funcionamiento de este sistema. Synchronous machines with brushless excitation have the disadvantage that the field winding is not accessible for de-excitation. This means that, despite the proper operation of the protection system, the slow de-excitation time constant may produce severe damage in the event of an internal short circuit. In this paper, the tests of a novel high-speed de-excitation system for brushless synchronous machines on a 15 MVA generator are presented. The de-excitation is achieved by inserting a resistance into the field circuit, obtaining a dynamic response similar to that achieved in machines with static excitation. The main novelty of this paper is the use of this method in an industrial-size machine. The system has been validated through experimental tests on a 15 MVA generator, with satisfactory results.
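For intuition on why the inserted resistance speeds up de-excitation, the sketch below evaluates the first-order field-current decay i(t) = I0·exp(-t/τ) with τ = L_f/(R_f + R_d) for a few discharge resistances. The field-winding inductance, resistance and initial current are assumed round numbers, not parameters of the 15 MVA generator tested in the paper.

```python
import numpy as np

L_f = 1.2          # field-winding inductance (H), assumed
R_f = 0.15         # field-winding resistance (ohm), assumed
I_0 = 800.0        # field current before the fault (A), assumed

for R_d in (0.0, 5 * R_f, 20 * R_f):          # inserted discharge resistance
    tau = L_f / (R_f + R_d)                   # de-excitation time constant
    i_1s = I_0 * np.exp(-1.0 / tau)           # remaining field current at t = 1 s
    print(f"R_d = {R_d:5.2f} ohm -> tau = {tau:5.2f} s, i(1 s) = {i_1s:6.1f} A")
```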

Relevance: 10.00%

Abstract:

Within the field of the city as a place, the concepts of territorial planning and spatial planning are analysed. Flooding is one of the main risks associated with many urban settlements in Spain and, indeed, elsewhere. The location of cities has traditionally ignored this type of risk as other locational criteria prevailed (communications, crop yields, etc.). Defence engineering has been the customary way to offset the risk but, nowadays, the opportunity costs of engineering works in urban areas have highlighted the interest of “soft measures” based on prevention. Early warning systems plus development planning controls rank among the most favoured ones. This paper reflects the results of a recent EU-financed research project on alternative measures geared to the enhancement of urban resilience against flooding. A city study in Spain is used as an example of those measures.