892 results for Portable architecture. Reassemblable structure. Design process


Abstract:

This paper presents the application of the Integral Masonry System (IMS) to the construction of earthquake-resistant houses and its experimental study. To verify the safety of this new type of building in seismic areas of developing countries, two prototypes were tested, one with adobe and the other with hollow brick. In both cases the prototype is a two-story, 6x6x6 m house built at 1/2 scale. The tests were carried out at the Laboratory of Antiseismic Structures of the Department of Engineering, Pontifical Catholic University of Peru (PUCP) in Lima, in collaboration with the UPM (Technical University of Madrid). The article describes the design process of the prototypes, including the sizing of the reinforcements, the characteristics of the tests and the results obtained. These results show that the IMS, with either adobe or brick, remains stable with no significant cracking when subjected to a severe earthquake with an estimated acceleration of 1.8 g.

Abstract:

Computational Fluid Dynamics tools have become a valuable instrument for naval architects during the ship design process, thanks to their accuracy and the computer power now available. Unfortunately, the development of RANSE codes, generally used when viscous effects play a major role in the flow, has not yet reached a mature stage, with the accuracy of the turbulence models and of the free-surface representation being the most important sources of uncertainty. Another level of uncertainty is added when the simulations are carried out for unsteady flows, such as those generally studied in seakeeping and maneuvering analysis, where URANS equation solvers are used. The present work shows the applicability and the benefits of new approaches for turbulence modeling (Detached Eddy Simulation, DES) and free-surface representation (Level Set) in the URANS solver CFDSHIP-Iowa. Compared to URANS, DES is expected to predict a much broader frequency content and to behave better in flows where boundary-layer separation plays a major role. Level Set methods are able to capture very complex free-surface geometries, including breaking and overturning waves. The performance of these improvements is tested on a set of fairly complex flows generated by a Wigley hull in pure drift motion, with drift angles ranging from 10 to 60 degrees, at several Froude numbers to study the impact of their variation. Quantitative verification and validation are performed on the results to guarantee their accuracy. The results show the capability of the CFDSHIP-Iowa code to carry out time-accurate simulations of the complex flows of extreme unsteady ship maneuvers. The Level Set method captures very complex free-surface geometries, and the use of DES in unsteady simulations greatly improves the results. Vortical structures and instabilities are qualitatively identified as a function of the drift angle and Fr. Overall analysis of the flow pattern shows a strong correlation between the vortical structures and the free-surface wave pattern. Karman-like vortex shedding is identified, and the scaled St agrees well with the universal St value. Tip vortices are identified and the associated helical instabilities are analyzed. St based on the hull length decreases with increasing distance along the vortex core (x), which is similar to results from other simulations. However, St scaled using the distance along the vortex core shows strong oscillations, compared to the almost constant values of those previous simulations. The difference may be caused by the effect of the free surface, the grid resolution, and the interaction between the tip vortex and other vortical structures, and needs further investigation. This study is exploratory in the sense that finer grids are desirable and experimental data are lacking for large α, especially for the local flow. More recently, the high-performance computing capability of CFDSHIP-Iowa V4 has been improved such that large-scale computations are possible. DES for DTMB 5415 with bilge keels at α = 20º was conducted using three grids with 10M, 48M and 250M points, and DES of the flow around KVLCC2 at α = 30º was analyzed on a 13M grid and compared with earlier DES results on a 1.6M grid. Both studies are consistent with the conclusions on grid resolution drawn herein, since the dominant frequencies of the shear-layer, Karman-like, horse-shoe and helical instabilities show only marginal variation under grid refinement. The penalties of using coarse grids are smaller frequency amplitudes and less resolved TKE. Therefore, finer grids should be used to improve V&V and to resolve most of the active turbulent scales for all Fr and α, which hopefully can be compared with additional EFD data for large α when they become available.
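For reference, the Strouhal scalings discussed above follow the standard nondimensionalization of a shedding frequency f; taking the hull length L and the inflow speed U as reference scales is the conventional choice and is assumed here:

\mathrm{St}_L = \frac{f\,L}{U}, \qquad \mathrm{St}_x = \frac{f\,x}{U},

where x is the distance along the vortex core.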

Abstract:

The Self-Organizing Map (SOM) is a neural network model that performs an ordered projection of a high-dimensional input space onto a low-dimensional topological structure. The process by which such a mapping is formed is defined by the SOM algorithm, which is a competitive, unsupervised and nonparametric method, since it does not make any assumption about the input data distribution. The feature maps provided by this algorithm have been successfully applied to vector quantization, clustering and high-dimensional data visualization. However, the initialization of the network topology and the selection of the SOM training parameters are two difficult tasks, owing to the unknown distribution of the input signals. A misconfiguration of these parameters can generate a low-quality feature map, so it is necessary to have some measure of the degree of adaptation of the SOM network to the input data model. Topology preservation is the concept most commonly used to implement this measure. Several qualitative and quantitative methods have been proposed for measuring the degree of SOM topology preservation, particularly for Kohonen's model. In this work, two methods for measuring the topology preservation of the Growing Cell Structures (GCS) model are proposed: the topographic function and the topology preserving map.
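The abstract does not reproduce the two proposed measures; as a minimal point of reference, the sketch below implements the classical topographic error for a trained map (a related, commonly used topology-preservation measure for Kohonen-type models, not the GCS-specific measures proposed in the paper). The array shapes are assumptions for illustration.

import numpy as np

def topographic_error(data, weights, grid_pos):
    # weights: (n_units, dim) codebook vectors; grid_pos: (n_units, 2)
    # integer lattice coordinates of each unit on the map.
    errors = 0
    for x in data:
        d = np.linalg.norm(weights - x, axis=1)
        b1, b2 = np.argsort(d)[:2]  # best and second-best matching units
        # count an error when the two best units are not lattice neighbours
        if np.abs(grid_pos[b1] - grid_pos[b2]).max() > 1:
            errors += 1
    return errors / len(data)  # 0 = perfect topology preservation

A value near zero indicates that vectors close in input space are mapped to units that are also close on the map lattice.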

Abstract:

The objective of this paper is to evaluate the behaviour of a controller designed using a parametric eigenstructure assignment method and to assess its suitability for use in flexible spacecraft. The challenge lies in obtaining a suitable controller specifically designed to alleviate the deflections and vibrations suffered by external appendages of flexible spacecraft while performing attitude manoeuvres. One of the main problems in these vehicles is the mechanical cross-coupling between the rigid and flexible parts of the spacecraft. Spacecraft with fine attitude-pointing requirements need precise control of this mechanical coupling to avoid undesired attitude misalignment. In designing an attitude controller, it is necessary to consider the possible vibration of the solar panels and how it may influence the performance of the rest of the vehicle. The nonlinear mathematical model of a flexible spacecraft is considered a close approximation to the real system. During controller evaluation, the design process has also been taken into account as a factor in assessing the robustness of the system.
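The paper's parametric eigenstructure assignment also shapes the closed-loop eigenvectors; as a simpler, hedged illustration of the eigenvalue part of such a design, the sketch below places poles for a hypothetical rigid-flexible model with SciPy. All numerical values are invented for illustration only.

import numpy as np
from scipy.signal import place_poles

# Hypothetical 4-state model: rigid attitude mode coupled to one lightly
# damped flexible appendage mode (all coefficients assumed).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.2, 0.0],      # rigid/flexible coupling (assumed)
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, -4.0, -0.02]])  # flexible mode, little damping
B = np.array([[0.0], [1.0], [0.0], [0.3]])  # control torque path (assumed)

# Assign better-damped closed-loop poles; a full eigenstructure assignment
# would additionally pick eigenvectors that decouple rigid and flexible motion.
fb = place_poles(A, B, [-0.5 + 0.5j, -0.5 - 0.5j, -1.0 + 1.8j, -1.0 - 1.8j])
A_cl = A - B @ fb.gain_matrix
print(np.linalg.eigvals(A_cl))  # should match the requested pole set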

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behaviour of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material, which simulate free-space propagation conditions. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been made in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the error most widely studied in this Thesis; a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply a spatial filter. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filter. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
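As a rough illustration of the third noise-filtering alternative (and of the modal-filtering idea), the sketch below back-propagates a planar near-field scan via its plane-wave spectrum, nulls the field outside the antenna aperture, and returns to the measurement plane. It assumes regular sampling and one common propagation sign convention; it is a didactic sketch, not the exact algorithms developed in the Thesis.

import numpy as np

def backprop_and_filter(E_meas, dx, dy, lam, d, aperture_mask):
    # E_meas: (ny, nx) complex field sampled on the measurement plane at
    # spacings dx, dy; d: plane-to-AUT distance; aperture_mask: 1 inside
    # the antenna aperture on the AUT plane, 0 outside.
    k0 = 2 * np.pi / lam
    ny, nx = E_meas.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k0**2 - KX**2 - KY**2
    visible = kz2 > 0                        # keep propagating modes only
    kz = np.sqrt(np.where(visible, kz2, 0.0))
    A = np.fft.fft2(E_meas) * visible        # modal filter
    E_aut = np.fft.ifft2(A * np.exp(1j * kz * d))  # back-propagate to AUT plane
    E_aut = E_aut * aperture_mask            # spatial filter on the AUT plane
    return np.fft.ifft2(np.fft.fft2(E_aut) * visible * np.exp(-1j * kz * d))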

Abstract:

Objectives. An analysis through the prism of sustainability, with a triple focus: energy consumption, construction and architectural resources. The study covers some outstanding examples of detached houses from 1973 (the oil crisis) to the ideological change of 11 September 2001, located in similar microclimates at different latitudes, so that the results can be extrapolated to the Mediterranean climate. Examples. Seven detached dwellings of different conceptual design, located in subtropical climates and in varied ecosystems, but with very current environmental, constructive and architectural conceptions. The examples, chosen for their intent, their design and their utilitarian, constructive and semiotic sense, are analyzed from the accumulative and the reactive standpoints, by contrasting different sources of information. Goals. The analysis of each dwelling, by different architects, is carried out by simulating models that describe the essential behaviour of the system of interest, and by designing and running experiments with each model in order to draw conclusions from the results and to support design decision-making. Procedure. In a first phase, the natural environment is defined by its location, the interpretation of the place and the specific climate (through general climograms and isopleth charts), leading to an environmental diagnosis from which strategies are established, based on experimental data to be contrasted with the results finally obtained. In a second phase, the most representative LowTech/LowEnergy and HighTech/HighEnergy cases are chosen, and each model is analyzed against one of the predominant elements: sun, air, water, earth, moon, vegetation and miscellany. Results. From the cases studied, applicable principles are extracted in each field: environmental, related to energy adaptability; constructive, related to economy; and architectural, linked to the social dimension, with a different perspective on new livable spaces. Conclusions and relevance. In the locations studied, architects heir to the Modern Movement have used the most up-to-date passive and active environmental resources for each element, as well as orientation, ventilation, radiation and thermal inertia, acting as bioclimatic experts with the most contemporary attitudes. The principles extracted should facilitate a design process based on experimental guidelines, developed without an excessive use of technology. The principles and conclusions obtained will be applicable to new models, once the most relevant parameters are known. Analogue-digital modelling will make it possible to evaluate the most suitable behaviour according to the needs to be satisfied.

Abstract:

The relationship between forms of delimitation and the expropriation of the commons through the management of the "public" has led to a world of enclosures and exact dividing lines. But this is not how the individual perceives and experiences space; this is how bureaucracy builds it. The body's individual spatiality is understood as the complex topological extension configured by the sensible world at every turn, reflecting while allowing the crossings, junctions, intensities, densities and proximities that weave together the experiential fabric in which the individual lives. This individual spatiality, when it resonates with others, produces a form of common spatiality, the understanding of which can and should act as a new frame of reference for intervention strategies and spatial politics in the contemporary world. The roofscape, as a space that does not fit within the canonical public/private division, is a unique case study for framing these new concepts.

Abstract:

Neutron spectrum unfolding and dose-equivalent calculation are complicated tasks in radiation protection; they are highly dependent on the neutron energy, and precise knowledge of neutron spectrometry is essential for all dosimetry-related studies as well as for many nuclear physics experiments. Previous works have reported neutron spectrometry and dosimetry results obtained using ANN technology as an alternative solution, starting from the count rates of a Bonner sphere system with a LiI(Eu) thermal neutron detector, 7 polyethylene spheres and the UTA4 response matrix with 31 energy bins. In this work, an ANN was designed and optimized using the RDANN methodology for the Bonner sphere system used at CIEMAT, Spain, which is composed of a He neutron detector, 12 moderator spheres and a response matrix with 72 energy bins. For the ANN design process, a catalogue of neutron spectra compiled by the IAEA was used. From this compilation, the neutron spectra were converted from lethargy to energy spectra. The resulting energy fluence spectra were then re-binned, using the MCNP code, to the energy bins of the aforementioned He response matrix. With the response matrix and the re-binned spectra, the count rates of the Bonner sphere system were calculated, and the re-binned neutron spectra and calculated count rates were used as the ANN training data set.
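The training-set construction described above is essentially one matrix product per spectrum; a minimal sketch (array shapes assumed from the text: 12 spheres, 72 bins, placeholder data standing in for the real response matrix and spectra catalogue) might look like this:

import numpy as np

response = np.random.rand(12, 72)   # R[s, j]: counts per unit fluence in bin j
spectra = np.random.rand(200, 72)   # Phi[i, j]: re-binned IAEA spectra
counts = spectra @ response.T       # C[i, s] = sum_j R[s, j] * Phi[i, j]
# ANN training set: count rates as inputs, spectra as target outputs.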

Abstract:

With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares and maximum entropy are some of the techniques used for unfolding. In the last decade, methods based on Artificial Intelligence technology have also been used: approaches based on Genetic Algorithms and Artificial Neural Networks have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite their advantages, Artificial Neural Networks still have some drawbacks, mainly in the design process of the network, e.g. the optimal selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining Artificial Neural Networks and Genetic Algorithms have been used for this purpose. In this work, several ANN topologies were trained and tested, using both conventional Artificial Neural Networks and genetically evolved Artificial Neural Networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer, and a comparative study of the two procedures was carried out.
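The abstract does not specify the genetic encoding or the fitness function; the sketch below shows one plausible minimal scheme, assuming the genes are the hidden-layer size and the learning rate and the fitness is the (negated) validation error of the trained network. All choices here are illustrative, not the paper's method.

import numpy as np
from sklearn.neural_network import MLPRegressor

def fitness(genes, X_tr, y_tr, X_val, y_val):
    # Train an MLP with the encoded hyperparameters; higher is better.
    n_hidden, lr = genes
    net = MLPRegressor(hidden_layer_sizes=(int(n_hidden),),
                       learning_rate_init=lr, max_iter=300)
    net.fit(X_tr, y_tr)
    return -np.mean((net.predict(X_val) - y_val) ** 2)

def evolve(X_tr, y_tr, X_val, y_val, pop=10, gens=5, seed=0):
    rng = np.random.default_rng(seed)
    genomes = [(rng.integers(4, 64), 10 ** rng.uniform(-4, -1))
               for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(genomes, reverse=True,
                        key=lambda g: fitness(g, X_tr, y_tr, X_val, y_val))
        parents = ranked[: pop // 2]                       # truncation selection
        children = [(max(2, int(h * rng.normal(1, 0.3))),  # mutate layer size
                     lr * 10 ** rng.normal(0, 0.2))        # mutate learning rate
                    for h, lr in parents]
        genomes = parents + children
    return max(genomes, key=lambda g: fitness(g, X_tr, y_tr, X_val, y_val))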

Abstract:

The purpose of this project is to assess the feasibility of mining a major tin and tantalum deposit found in a pegmatite formation in northern Spain. Based on the estimated reserves, the treatment capacity of the mineral processing plant is defined for an operating period of 10 years. As a first step, the characterization and concentration tests performed in the laboratory on representative hand samples of the ore are studied, together with earlier pilot-plant results. Once the recovery of tin and tantalum has been determined, the conceptual design of the process is carried out. A preliminary design and engineering study is then developed, from which the equipment and operating costs are estimated; based on the returns from the sale of the concentrates, these allow the profitability of the project and the investment risks to be assessed.
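The final profitability assessment reduces to discounting the yearly cash flows implied by concentrate sales against the capital and operating costs; a minimal sketch with invented figures (the project's actual numbers are not given in the abstract):

import numpy as np

def npv(rate, cashflows):
    # cashflows[0] is year 0 (investment, negative); one entry per year.
    years = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows) / (1.0 + rate) ** years))

capex = -20e6                        # assumed initial plant investment
annual = 4.0e6                       # assumed net yearly return from concentrate sales
cashflows = [capex] + [annual] * 10  # 10-year operating life, as in the study
print(npv(0.08, cashflows))          # NPV at an assumed 8% discount rate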

Abstract:

In recent years, space agencies have shown a growing interest in optical wireless as an alternative to wired and radio-frequency communications. The use of these techniques for intra-spacecraft communications reduces the effect of take-off acceleration and vibrations on the systems by avoiding the need for rugged connectors, and provides a significant mass reduction. Diffuse transmission also eases the design process, as terminals can be placed almost anywhere without careful planning to ensure proper system behaviour. Previous studies have compared the performance of radio-frequency and infrared optical communications. In an intra-satellite environment, optical techniques help reduce EMI-related problems, and their main disadvantages, multipath dispersion and the need for line of sight, can be neglected due to the reduced cavity size. Channel studies demonstrate that the effect of the channel can be neglected in small environments if the data bandwidth is lower than some hundreds of MHz.
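That bandwidth claim can be sanity-checked with a back-of-the-envelope coherence-bandwidth estimate; the sketch below assumes an RMS delay spread on the order of the cavity size divided by the speed of light and the common B_c ≈ 1/(5·στ) rule of thumb, so it is an order-of-magnitude argument only.

C = 3e8                    # speed of light, m/s
cavity = 0.5               # assumed intra-satellite cavity dimension, m
tau_rms = cavity / C       # crude RMS delay-spread estimate (~1.7 ns)
B_c = 1 / (5 * tau_rms)    # rule-of-thumb coherence bandwidth
print(f"coherence bandwidth ~ {B_c / 1e6:.0f} MHz")  # ~120 MHz here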

Abstract:

In this paper, a new linear method for optimizing compact low-noise oscillators for RF/MW applications is presented. The first part of the paper gives an overview of Leeson's model. It is pointed out, and demonstrated, that the phase noise is always the same inside the oscillator loop. A general phase noise optimization method for reference-plane oscillators is then presented. The new method uses Transpose Return Relations (RRT) as true loop-gain functions for obtaining the optimum values of the elements of the oscillator, whatever its topology. With this method, oscillator topologies that until now have been designed and optimized using negative-resistance, negative-conductance or reflection-coefficient methods can be studied as loop-gain methods. The main disadvantage of Leeson's model is thereby overcome: it is no longer valid only for loop-gain methods, but for any oscillator topology. The last section of the paper lists the steps to be performed to use this method for proper phase noise optimization during the linear design process, before the final non-linear optimization. The power of the proposed RRT method is shown by using it to optimize a common oscillator, which is later simulated using Harmonic Balance (HB) and manufactured. Finally, the linear and HB phase noise predictions are compared with the measurements.
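For reference, Leeson's model is usually written in a form similar to the following (symbol conventions vary between authors):

L(f_m) = 10 \log_{10}\!\left[\frac{F k T}{2 P_s}\left(1 + \frac{f_c}{f_m}\right)\left(1 + \left(\frac{f_0}{2 Q_L f_m}\right)^{2}\right)\right]

where f_m is the offset from the carrier f_0, Q_L the loaded quality factor, F the noise factor, P_s the signal power and f_c the flicker-noise corner frequency.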

Abstract:

Civil engineering structures such as floor systems with open-plan layouts or lightweight footbridges are susceptible to excessive levels of vibration caused by human loading. Active vibration control (AVC) via inertial mass actuators has been shown to be a viable technique to mitigate vibrations, allowing structures to satisfy vibration serviceability limits. Most AVC applications involve the use of SISO (single-input single-output) strategies based on collocated control. However, in the case of floor structures, in which most of the vibration modes are locally spatially distributed, SISO or multi-SISO strategies are quite inefficient. In this paper, MIMO (multi-input multi-output) controllers in decentralised and centralised configurations are designed. The design process simultaneously finds the placement of multiple actuators and sensors and the output feedback gains. Additionally, actuator dynamics, actuator nonlinearities and frequency and time weightings are considered in the design process. Results with SISO and with decentralised and centralised MIMO control (for a given number of actuators and sensors) are compared, showing the advantages of MIMO control for floor vibration control.
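A minimal sketch of the closed-loop structure underlying such a design: with plant x' = Ax + Bu, y = Cx and static output feedback u = -Ky, the decentralised case constrains K to be diagonal (each actuator driven only by its collocated sensor), while the centralised case allows a full matrix. All numbers below are placeholders for illustration.

import numpy as np

def closed_loop(A, B, C, K):
    # Static output feedback u = -K y gives x' = (A - B K C) x.
    return A - B @ K @ C

A = np.array([[0.0, 1.0],
              [-(2 * np.pi * 2.0) ** 2, -0.5]])  # one 2 Hz floor mode (assumed)
B = np.array([[0.0], [1.0]])                     # inertial mass actuator input
C = np.array([[0.0, 1.0]])                       # collocated velocity sensor
K = np.array([[50.0]])                           # gain found by the optimisation
print(np.linalg.eigvals(closed_loop(A, B, C, K)))  # added damping in Re(lambda)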

Abstract:

This paper reports a high-efficiency class-F power amplifier based on a gallium nitride high-electron-mobility transistor (GaN HEMT), designed at 1640 MHz in the L band. The design is based on source- and load-pull measurements. During the design process, the parasitics of the device package are also taken into account in order to achieve the optimal class-F load condition at the intrinsic drain of the transistor. The fabricated class-F power amplifier achieved a maximum drain efficiency (DE) of 77.8% and an output power of 39.6 W over a bandwidth of 280 MHz. Simulation and measurement results show good agreement.
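The "optimal class-F load condition" refers to the standard textbook harmonic terminations presented at the intrinsic drain, together with the usual drain-efficiency definition (idealized form; real designs control only the first few harmonics):

\mathrm{DE} = \frac{P_{\mathrm{out}}}{P_{\mathrm{DC}}}, \qquad Z_L(f_0) = R_{\mathrm{opt}}, \quad Z_L(2n f_0) = 0, \quad Z_L\big((2n{+}1) f_0\big) = \infty,

i.e. even harmonics are shorted and odd harmonics see an open circuit, shaping a half-sine drain current and a square drain voltage.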

Abstract:

In this work, a new two-dimensional analytic optics design method is presented that enables the coupling of three ray sets with two lens profiles. The method is particularly promising for optical systems designed for a wide field of view and with clearly separated optical surfaces. However, this coupling can only be achieved if different ray sets use different portions of the second lens profile. Based on a very basic example of a single thick lens, the two-dimensional Simultaneous Multiple Surfaces design method (SMS2D) helps to provide a better understanding of the practical implications of an increased lens thickness and a wider field of view on the design process. Fermat's principle is used to deduce a set of functional differential equations that fully describe the entire optical system. The transformation of these functional differential equations into an algebraic linear system of equations allows the successive calculation of the Taylor series coefficients up to an arbitrary order. The evaluation of the solution space reveals the wide range of possible lens configurations covered by this analytic design method. Ray tracing analysis of calculated 20th-order Taylor polynomials demonstrates excellent performance and the versatility of this new analytic optics design concept.
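The functional equations derived from Fermat's principle amount to a constant-optical-path-length condition for each of the three coupled ray sets; schematically, for every ray of set k linking its input wavefront to its output wavefront through the two lens profiles,

\sum_j n_j \, \ell_j(\text{ray}) = L_k, \qquad k = 1, 2, 3,

where n_j and \ell_j are the refractive index and geometric length of each ray segment and L_k is a constant per ray set.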