31 results for Point of interest
at Universidad Politécnica de Madrid
Abstract:
An accurate characterization of the near-region propagation of radio waves inside tunnels is of practical importance for the design and planning of advanced communication systems. However, there is no consensus yet on the propagation mechanism in this region: some authors claim that propagation follows the free-space model, while others interpret it with the multi-mode waveguide model. This paper clarifies the situation in the near-region of arched tunnels by analytically modeling the division point between the two propagation mechanisms. The procedure is based on a combination of propagation theory and three-dimensional solid geometry. Three groups of measurements are employed to verify the model in different tunnels at different frequencies. Furthermore, simplified models of the division point in five specific application situations are derived to facilitate the use of the model. The results of this paper help deepen insight into the propagation mechanisms within tunnel environments.
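As a rough orientation only (the paper's own division-point model is built from propagation theory and solid geometry and is not reproduced here), the two competing mechanisms are commonly written as free-space spreading loss versus modal waveguide attenuation; for a rectangular cross-section used as a stand-in for an arched tunnel of transverse dimensions a and b, an Emslie-type model gives

\[
L_{\mathrm{FS}}(d) = 20\log_{10}\!\left(\frac{4\pi d}{\lambda}\right),
\qquad
\alpha_{mn} \propto \lambda^{2}\left(\frac{m^{2}}{a^{3}} + \frac{n^{2}}{b^{3}}\right),
\]

so near the transmitter the rapidly spreading free-space field dominates, while beyond the division point the slowly attenuating low-order modes take over.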
Abstract:
This paper investigates the propagation of airblast from quarry blasting. Peak overpressure is calculated as a function of blasting parameters (explosive mass per delay and the velocity at which the detonation sequence proceeds along the bench) and of the polar coordinates of the point of interest (distance to the blast and azimuth with respect to the free face of the blast). The model takes the form of the product of a classical scaled-distance attenuation law and a directional correction factor. The latter considers the influence of the bench face, attenuating overpressure at the top level and amplifying it at the bottom. This factor also accounts for the effect of the delay by amplifying the pressure in the direction of the initiation sequence when the initiation velocity exceeds half the speed of sound, up to initiation velocities in the range of the speed of sound. The model has been fitted to an empirical data set composed of 134 airblast records monitored in 47 blasts at two quarries. The measurements were made at distances from the blast of less than 450 m. The model is statistically significant and has a determination coefficient of 0.869.
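The functional form described above can be sketched in a few lines of code. The constants k, alpha and beta and the cosine shape of the directional factor below are hypothetical placeholders for illustration, not the fitted values from the paper:

import math

def peak_overpressure(distance_m, azimuth_rad, charge_kg_per_delay,
                      k=100.0, alpha=1.2, beta=0.5):
    """Illustrative airblast model: a cube-root scaled-distance law
    multiplied by a directional correction factor. k, alpha and beta
    are hypothetical placeholder coefficients."""
    # Cube-root scaled distance, a classical choice for airblast.
    scaled_distance = distance_m / charge_kg_per_delay ** (1.0 / 3.0)
    base = k * scaled_distance ** (-alpha)
    # Hypothetical directional factor: amplifies towards the free face
    # (azimuth = 0) and attenuates behind the bench (azimuth = pi).
    directional = 1.0 + beta * math.cos(azimuth_rad)
    return base * directional

# Example: 50 kg per delay, measured 200 m in front of the free face.
print(peak_overpressure(200.0, 0.0, 50.0))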
Abstract:
This work studies the physiology of Schizosaccharomyces pombe strain 938 in the production of white wine with high malic acid levels, both as the sole fermentative yeast and in mixed and sequential fermentations with Saccharomyces cerevisiae Cru Blanc. The induction of controlled maloalcoholic fermentation through the use of Schizosaccharomyces spp. is now being viewed with much interest. The acetic, malic and pyruvic acid concentrations, relative density and pH of the musts were measured over the entire fermentation period. In all fermentations in which Schizo. pombe 938 was involved, nearly all the malic acid was consumed and moderate acetic acid concentrations were produced. The urea content and alcohol level of these wines were notably lower than in those made with Sacch. cerevisiae Cru Blanc alone. The pyruvic acid concentration was significantly higher in Schizo. pombe fermentations. The sensory properties of the different final wines varied widely.
Abstract:
In this paper we aim to prove, first, that the argument of excess complexity is not a whim. We will focus our attention on a particular and widespread case within the Tool Box, word processors, and on the most widely sold products in this category: WordStar, the leader a few years ago, and WordPerfect, the leader at the present moment. The aspect of their complexity we are interested in is their user interface, because it is the aspect that most influences human work.
Abstract:
Innovation in Software-intensive Systems (SiSs) is becoming relevant for several reasons: software is embedded in many sectors such as automotive, robotics, mobile phones and health care. Firms need knowledge about the factors affecting innovation to increase the probability of success in their product development, and the assessment of innovation in software products is a powerful mechanism for capturing this knowledge. Therefore, companies need to assess products from an innovation perspective to reduce the gap between their developed products and the market. This is even more relevant in the case of SiSs, where real time, timeliness, complexity, interoperability, reactivity and resource sharing are critical features of a new system. Many authors have analysed product innovation assessment and some schemas have been developed, but they are not specific to SiSs; in addition, there is no consensus about the factors or the procedure for performing an assessment.
Therefore, it makes sense to work on the definition of an innovation evaluation framework focused on Software-intensive Systems. This thesis identifies the elements needed to build a framework to assess software products from the innovation perspective. Two components have been identified as parts of the framework: a reference model and an adaptive, customizable tool to perform the assessment and to position product innovation. The reference model is composed of four main elements characterizing product innovation assessment: concepts, innovation models, assessment questionnaires and product assessment. The reference model provides the basis for defining instances of product innovation assessment models that can be assessed and positioned through questionnaires in the proposed tool, which also automates the assessment and the positioning of innovation. The reference model has been rigorously built by applying conceptual modelling and view integration together with qualitative research methods. The tool has been used to assess products such as Skype through models instantiated from the reference model.
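As a sketch only, the relation between the reference model's elements and their instantiation through questionnaires might be captured with data structures along these lines; all class and field names are hypothetical illustrations, not the thesis' actual schema:

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Question:
    text: str
    factor: str            # innovation factor the question measures
    weight: float = 1.0

@dataclass
class InnovationModel:
    name: str
    concepts: List[str]    # concepts characterizing product innovation
    questionnaire: List[Question]

    def assess(self, answers: Dict[str, float]) -> float:
        """Weighted product-assessment score from questionnaire answers
        given as factor -> value in [0, 1]."""
        total = sum(q.weight for q in self.questionnaire)
        return sum(q.weight * answers.get(q.factor, 0.0)
                   for q in self.questionnaire) / total

model = InnovationModel(
    name="SiS innovation model (illustrative)",
    concepts=["novelty", "market fit", "interoperability"],
    questionnaire=[Question("Is the product novel?", "novelty"),
                   Question("Does it interoperate?", "interoperability", 2.0)])
print(model.assess({"novelty": 0.8, "interoperability": 0.6}))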
Abstract:
In this paper we address the new reduction method called Proper Generalized Decomposition (PGD), a discretization technique based on the use of a separated representation of the unknown fields, especially well suited for solving multidimensional parametric equations. Here it is applied to the solution of dynamics problems. We focus on the dynamic analysis of a one-dimensional rod with a unit harmonic load of frequency ω applied at a point of interest. In what follows, we present the application of the PGD methodology to this problem in order to approximate the displacement field as a sum of separated functions. We consider, as new variables of the problem, model parameters associated with the material characteristics, in addition to the frequency. Finally, the quality of the results is assessed with an example.
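The separated representation mentioned above takes the generic form

\[
u(x,\omega,k_1,\dots,k_p) \;\approx\; \sum_{i=1}^{N} F_i(x)\, G_i(\omega) \prod_{j=1}^{p} H_i^{j}(k_j),
\]

where x is the space coordinate, ω the load frequency and the k_j are the material model parameters promoted to coordinates of the same multidimensional problem (the notation is generic, not the paper's own).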
Abstract:
The influence of three training systems on cluster microclimate (temperature and illumination) is studied in the Syrah variety in a warm-climate area.
Abstract:
There are a number of factors that contribute to the success of dental implant operations. Among them is the choice of the location in which the prosthetic tooth is to be implanted. This project offers a new approach to analysing jaw tissue for the purpose of selecting suitable locations for tooth implant operations. The application developed takes as input a computed tomography stack of jaw slices and trims the data outside the jaw area, which is the region of interest. It then reconstructs a three-dimensional model of the jaw, highlighting points of interest on the reconstructed model. In addition, data mining techniques have been utilised to construct a prediction model based on a dataset of previous dental implant operations with observed stability values. The goal is to find patterns within the dataset that help predict the success likelihood of an implant.
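The trimming step can be pictured as cropping the CT volume to the bounding box of voxels above a bone-intensity threshold; the threshold value and the synthetic volume below are illustrative assumptions, not the application's actual parameters:

import numpy as np

def crop_to_jaw(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Trim data outside the region of interest: keep the bounding box
    of all voxels whose intensity exceeds `threshold` (bone-like)."""
    mask = volume > threshold
    if not mask.any():
        return volume
    coords = np.argwhere(mask)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Synthetic stack of 40 slices, 128x128 each, with a bright block as "jaw".
stack = np.zeros((40, 128, 128))
stack[10:30, 40:90, 30:100] = 1500.0    # bone-like intensities
print(crop_to_jaw(stack, threshold=700.0).shape)   # (20, 50, 70)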
Abstract:
This Doctoral Thesis, entitled "Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths", aims to deepen the knowledge of a particular antenna measurement system: the compact range, operating in the millimeter wavelength frequency bands. The thesis has been developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements, currently running four facilities in different configurations: a Gregorian compact antenna test range, a spherical near field range, a planar near field range and a semianechoic arch system. The research work performed in this thesis contributes to the knowledge of the first measurement configuration at higher frequencies, beyond the microwave region where the Radiation Group offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; they are succinctly described in the subsequent paragraphs. A first step dealt with the review of the state of the art. The study of the scientific literature covered measurement practices in compact antenna test ranges together with the particularities of millimeter wavelength technologies. The joint study of both fields of knowledge converged, where these measurement facilities are concerned, on a series of technological challenges which become serious bottlenecks at different stages: analysis, design and assessment. Second, after the overview study, focus was set on electromagnetic analysis algorithms. These formulations make it possible to study electromagnetic features of interest, such as the field distribution phase or the stray signal behaviour of particular structures when they interact with sources of electromagnetic waves. A properly operated CATR facility features electromagnetic wave collimation optics which are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks involve a large number of mathematical unknowns which grows with frequency, following different polynomial laws depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, this flexibility being the core of the algorithm's ability to support the subsequent design tasks. This thesis' contribution to this field consists of a formulation that is powerful both in handling various analysis geometries and in computational terms. Two algorithms were developed; while based on the same hybridization principle, they achieve different orders of physical accuracy at the cost of computational efficiency. An inter-comparison of their CATR design capabilities was performed, reaching both qualitative and quantitative conclusions on their scope. Third, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced at the analysis stage appears as well at the in-chamber field probing stage.
The natural decrease in the dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities exponentially increase the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of millimeter wavelength frequencies; the value of range assessment, on the contrary, moves towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. Fourth, the environmental sensitivity of millimeter wavelengths was approached. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. At millimeter wavelengths this feature relegates many practices that are industrial at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena which completely mask the experimental results. The contribution of this thesis on this aspect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables which affect the range's performance. A simple model was developed, able to relate high-level phenomena, such as feed-probe phase drift, to low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is automatically extended towards higher frequencies. In summary, the purpose of this thesis is to go further into the knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is laid out along the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottleneck phenomena nowadays exist and seriously compromise antenna measurement practices at millimeter wavelengths.
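A minimal sketch of the environmental compensation idea, assuming (as an illustration only) that feed-probe phase drift is linear in temperature and relative humidity; the coefficients are recovered by least squares from logged samples and used to correct the measured phase. The synthetic numbers below are placeholders, not the thesis' actual model:

import numpy as np

rng = np.random.default_rng(0)

# Logged environmental samples: temperature (degC), relative humidity (%).
T = 20.0 + 2.0 * rng.random(200)
RH = 40.0 + 10.0 * rng.random(200)

# Synthetic "measured" phase drift (deg), linear in T and RH plus noise.
true_a, true_b, true_c = 1.5, 0.3, -40.0
phase = true_a * T + true_b * RH + true_c + 0.05 * rng.standard_normal(200)

# Fit phase ~ a*T + b*RH + c by least squares.
A = np.column_stack([T, RH, np.ones_like(T)])
(a, b, c), *_ = np.linalg.lstsq(A, phase, rcond=None)

# Compensation: subtract the modeled environmental drift.
compensated = phase - (a * T + b * RH + c)
print(f"fitted a={a:.3f}, b={b:.3f}, c={c:.2f}; "
      f"residual rms={compensated.std():.4f} deg")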
Abstract:
Object Kinetic Monte Carlo (OKMC) models allow the study of the evolution of irradiation-induced damage up to time scales comparable to those achieved experimentally. The essential OKMC parameters can therefore be validated through comparison with experiments. However, this validation is not trivial, since a large number of parameters is necessary, including the migration energies of point defects and their clusters, the binding energies of point defects in clusters, and the interaction radii. This is particularly cumbersome when describing an alloy such as the Fe–Cr system, which is of interest for fusion energy applications. In this work we describe an OKMC model for Fe–Cr alloys in the dilute limit. The parameters used in the model come either from density functional theory calculations or from empirical interatomic potentials. The model is used to reproduce the isochronal resistivity recovery experiments on electron-irradiated dilute Fe–Cr alloys performed by Abe and Kuramoto. The comparison between the calculated results and the experiments reveals that an important parameter is the capture radius between substitutional Cr and self-interstitial Fe atoms. A parametric study of the effect of the capture radius on the simulated recovery curves is presented.
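The core of an OKMC step is standard: each mobile object jumps with an Arrhenius rate ν·exp(−Em/kBT), an event is drawn with probability proportional to its rate, and time advances by an exponentially distributed increment. The sketch below shows this residence-time loop with placeholder migration energies, not the paper's Fe–Cr parameter set:

import math, random

KB = 8.617e-5           # Boltzmann constant, eV/K
NU = 1.0e13             # attempt frequency, 1/s (typical order of magnitude)

def rate(e_mig_ev: float, temp_k: float) -> float:
    """Arrhenius jump rate for a migration barrier e_mig_ev."""
    return NU * math.exp(-e_mig_ev / (KB * temp_k))

def kmc_step(objects, temp_k, rng=random):
    """One residence-time (BKL) step: pick an event with probability
    proportional to its rate; return (object index, time increment)."""
    rates = [rate(obj["e_mig"], temp_k) for obj in objects]
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, ri in enumerate(rates):
        acc += ri
        if r <= acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

# Placeholder objects: a vacancy and a self-interstitial cluster.
objects = [{"name": "vacancy", "e_mig": 0.65},
           {"name": "SIA cluster", "e_mig": 0.30}]
i, dt = kmc_step(objects, temp_k=300.0)
print(objects[i]["name"], f"jumps after {dt:.3e} s")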
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time it was first published and, at the time of publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it specially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature inferring technique is proposed. In this case, we also rely on the thermal dependence of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage).
This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. To perform the time-to-digital conversion, we employ the same digitization structure used by the former sensor. A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their position and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in area and power. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows the straightforward obtention of a list of values ordered from maximum to minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
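The linearization trick of the first sensor can be reproduced numerically: if the discharge time falls exponentially with temperature (the abstract notes the pulse width's exponential temperature dependence), a logarithmic counter output becomes linear in temperature. The constants below are arbitrary illustrations, not the fabricated sensor's values:

import math

T0_K, W0_S, SLOPE = 300.0, 1.0e-3, 25.0   # arbitrary reference values
T_LSB = 1.0e-7                            # counter time resolution (s)

def pulse_width(temp_k: float) -> float:
    """Discharge time of the floating node: exponential in temperature
    because subthreshold leakage grows exponentially with it."""
    return W0_S * math.exp(-(temp_k - T0_K) / SLOPE)

def log_counter(width_s: float) -> float:
    """Logarithmic time-to-digital conversion linearizes the output."""
    return math.log2(width_s / T_LSB)

for t in (280.0, 300.0, 320.0):
    print(t, round(log_counter(pulse_width(t)), 3))
# The output drops by the same amount per 20 K step: linear in T.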
Abstract:
The fundamental objective of this Ph.D. dissertation is to demonstrate that, under particular circumstances which cover most structures of practical interest, periodic structures can be understood and analyzed by means of closed waveguide theories and techniques. To that end, a transversely periodic cylindrical structure is first considered and the wave equation, under a combination of perfectly conducting and periodic boundary conditions, is studied. This theoretical study runs parallel to the classic analysis of perfectly conducting closed waveguides. In light of this study it is clear that, under certain very common periodicity conditions, transversely periodic cylindrical structures share many properties with closed waveguides. In particular, they can be characterized by a complete set of TEM, TE and TM modes. As a result, this dissertation introduces the transversely periodic waveguide concept. Once the analogies between the modes of a transversely periodic waveguide and those of a closed waveguide have been established, a generalization of a well-known closed waveguide characterization method, the generalized Transverse Resonance Technique, is developed to obtain transversely periodic modes. At this point, all the elements necessary to consider discontinuities between two different transversely periodic waveguides are at our disposal. The analysis of this type of discontinuity is carried out by means of another well-known closed waveguide method, the Mode Matching technique. This dissertation contains a sufficient number of examples, including the analysis of a wire-medium slab, a periodic surface of cross-shaped patches and a parallel plate waveguide with a textured surface, demonstrating that the Transverse Resonance Technique - Mode Matching hybrid is highly precise, efficient and versatile. Thus the initial statement, "periodic structures can be understood and analyzed by means of closed waveguide theories and techniques", is corroborated. Finally, this dissertation contains an adaptation of the aforementioned generalized Transverse Resonance Technique by means of which laterally open periodic waveguides, such as the well-known Substrate Integrated Waveguides, can be analyzed without any approximation. The analysis of this type of structure has attracted considerable interest in the recent past, and the analysis techniques previously proposed have always resorted to some kind of fictitious wall to close the structure.
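The periodicity condition underlying the transversely periodic waveguide concept is the standard Floquet relation: across one transverse period the field is reproduced up to a fixed phase shift,

\[
\mathbf{E}(x + p_x, y, z) = \mathbf{E}(x, y, z)\, e^{-j k_{x0} p_x},
\]

where p_x is the transverse period and k_{x0} the imposed phase constant; combined with the perfectly conducting boundaries, this yields the discrete, complete TEM/TE/TM mode set referred to above (standard notation, not the dissertation's own).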
Abstract:
Characteristics of the impacts suffered by fruit at a transfer point of an experimental fruit packing line were analysed. The transfer is made up of two conveyor belts at different heights forming an angle of 90°. These transfer points are very common in fruit packing lines, and at them fruits receive two impacts: the first onto the belt base and the second onto the lateral plate. Different tests were carried out to study the effect of transfer height, velocity, belt structure and padding on the acceleration values recorded by an instrumented sphere (IS 100). Results showed that transfer height and belt structure mainly affect the impact values on the belt base, while padding mainly affects the impact values registered for lateral contact. The effect of belt velocity on both impacts is less important compared to the rest of the variables. Additionally, two powered transfer decelerators were tested at the same point with the aim of decreasing the impacts suffered by the fruit. Comparing the impacts registered using these decelerators with those analysed in the first part of the study without decelerators, a large reduction of the impact values was observed.
Abstract:
The possibility of using more economical silicon feedstock, i.e. as a support for epitaxial solar cells, is of interest when the cost reduction and the properties are attractive. We have investigated the mechanical behaviour of two blocks of upgraded metallurgical silicon, which is known to present a high content of impurities even after purification by the directional solidification process. These impurities are mainly metals such as Al, and silicon compounds. It is therefore important to characterize their effect in order to improve cell performance and to ensure the survival of the wafers throughout the solar value chain. The microstructure and mechanical properties were studied by means of ring-on-ring and three-point bending tests. Additionally, the elastic modulus and fracture toughness were measured. The results showed that marked improvements in toughness can be obtained when impurities act as microscopic internal crack arrestors. However, the same impurities can initiate damage due to residual thermal stresses introduced during the crystallization process.
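For reference, the flexural strength extracted from a three-point bending test on a beam of rectangular cross-section follows the standard relation

\[
\sigma_f = \frac{3FL}{2bh^{2}},
\]

with F the failure load, L the support span, b the specimen width and h its thickness (the ring-on-ring configuration uses an analogous biaxial expression).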
Abstract:
Coarse particles with aerodynamic diameter between 2.5 and 10 μm (PMc) are produced by a range of natural (windblown dust and sea spray) and anthropogenic processes (non-exhaust vehicle emissions and industrial, agricultural, construction and quarrying activities). Although current ambient air quality regulations focus on PM2.5 and PM10, coarse particles are of interest from a public health point of view, as they have been associated with certain mortality and morbidity outcomes. In this paper, an analysis of coarse particle levels in three European capitals (London, Madrid and Athens) is presented and discussed. For all three cities we analysed data from both traffic and urban background monitoring sites. The results showed that coarse particle levels present significant seasonal, weekly and daily variability. Their wind-driven and non-wind-driven resuspension, as well as their roadside increment due to traffic, were estimated. Both the local meteorological conditions and the air mass history, indicating long-range atmospheric transport of particles of natural origin, are significant parameters influencing coarse particle levels in the three cities, especially during episodic events.