905 results for design process
Abstract:
In the domain of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be satisfied in every periodic recurrence of the processes, since guaranteeing parallel execution is of utmost importance. Existing approaches can compute design alternatives quickly, but they do not guarantee that the cost of the required hardware changes is minimal. We present an approach that computes cost-minimal solutions to the problem which satisfy all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. The complex constraints that guarantee periodic execution are shifted, via a decomposition of the master problem, into independent subproblems formulated as integer linear programs. Both the analyses of process execution and the methods of signal transmission are examined and linearized representations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analyzing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly to prove the optimality of heuristic solutions. When delivering optimal solutions with response times, our new formulation compares favourably with other approaches in terms of runtime. The best results are obtained with a hybrid approach that combines heuristic start solutions, preprocessing, and a heuristic phase with a short subsequent exact computation phase.
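To illustrate the kind of timing analysis referred to above (not the thesis's ILP encoding of it), the following sketch iterates the classic worst-case response-time recurrence for fixed-priority periodic tasks; the task set in the usage example is hypothetical and deadlines are assumed equal to periods.

```python
import math

def worst_case_response_times(tasks):
    """Classic fixed-priority response-time recurrence:
    R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j.
    `tasks` is a list of (period, wcet) tuples sorted from highest to lowest priority.
    Returns worst-case response times, or None where a deadline (= period) is missed."""
    responses = []
    for i, (period_i, wcet_i) in enumerate(tasks):
        r = wcet_i
        while True:
            interference = sum(math.ceil(r / period_j) * wcet_j
                               for period_j, wcet_j in tasks[:i])
            r_next = wcet_i + interference
            if r_next == r:            # fixpoint reached: converged response time
                responses.append(r)
                break
            if r_next > period_i:      # exceeds the deadline (assumed = period)
                responses.append(None)
                break
            r = r_next
    return responses

# Hypothetical example: three periodic tasks (period, WCET) in rate-monotonic order.
print(worst_case_response_times([(10, 3), (20, 4), (40, 8)]))  # -> [3, 7, 18]
```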
Abstract:
Tissue engineering has been increasingly brought to the scientific spotlight in response to the tremendous demand for regeneration, restoration or substitution of skeletal or cardiac muscle after traumatic injury, tumour ablation or myocardial infarction. In vitro generation of a highly organized and contractile muscle tissue, however, crucially depends on an appropriate design of the cell culture substrate. The present work evaluated the impact of substrate properties, in particular morphology, chemical surface composition and mechanical properties, on muscle cell fate. To this end, aligned and randomly oriented micron (3.3±0.8 μm) or nano (237±98 nm) scaled fibrous poly(ε-caprolactone) non-wovens were processed by electrospinning. A nanometer-thick oxygen functional hydrocarbon coating was deposited by a radio frequency plasma process. C2C12 muscle cells were grown on pure and as-functionalized substrates and analysed for viability, proliferation, spatial orientation, differentiation and contractility. Cell orientation has been shown to depend strongly on substrate architecture, being most pronounced on micron-scaled parallel-oriented fibres. Oxygen functional hydrocarbons, representing stable, non-immunogenic surface groups, were identified as strong triggers for myotube differentiation. Accordingly, the highest myotube density (28±15% of total substrate area), sarcomeric striation and contractility were found on plasma-coated substrates. The current study highlights the manifold material characteristics to be addressed during the substrate design process and provides insight into processes to improve bio-interfaces.
Abstract:
Vibration serviceability is a widely recognized design criterion for assembly-type structures, such as stadiums, that are likely to be subjected to rhythmic human-induced excitation. Human-induced excitation of a structure arises from the movement of occupants, such as walking, running, jumping, or dancing. Vibration serviceability is based on the level of comfort that people have with the vibrations of a structure. Current design guidance uses the natural frequency of the structure to assess vibration serviceability. However, a phenomenon known as human-structure interaction suggests that there is a dynamic interaction between the structure and passive occupants, altering the natural frequency of the system. Human-structure interaction depends on many factors, including the dynamic properties of the structure, the posture of the occupants, and the relative size of the crowd. It is unknown whether the shift in natural frequency due to human-structure interaction is significant enough to warrant consideration in the design process. This study explores the interface of structural and crowd characteristics through experimental testing to determine whether human-structure interaction should be considered because of its potential impact on serviceability assessment. An experimental test structure that represents the dynamic properties of a cantilevered stadium structure was designed and constructed. Experimental modal analysis was implemented to determine the dynamic properties of the test structure both empty and occupied with up to seven people arranged in different locations and postures. Comparisons of the dynamic properties were made between the empty and occupied testing configurations and against analytical results from the use of a dynamic crowd model recommended by the Joint Working Group of Europe. Data trends led to the development of a refined dynamic crowd model. This dynamic model can be used in conjunction with a finite element model of the test structure to estimate the dynamic influence of human-structure interaction due to occupants standing with straight knees. In the future, the crowd model will be refined further and can aid in assessing the dynamic properties of in-service stadium structures.
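As a rough illustration of why occupant dynamics can matter, the sketch below treats both the empty structure and the passive crowd as single-degree-of-freedom oscillators (the general form used by JWG-style crowd models) and computes the natural frequencies of the coupled two-degree-of-freedom system; all masses, stiffnesses and frequencies are hypothetical.

```python
import numpy as np

def natural_frequencies_hz(m_struct, k_struct, m_crowd, k_crowd):
    """Undamped natural frequencies (Hz) of a 2-DOF structure-plus-crowd model:
    an SDOF empty structure with an SDOF passive crowd attached to it."""
    M = np.array([[m_struct, 0.0],
                  [0.0,      m_crowd]])
    K = np.array([[k_struct + k_crowd, -k_crowd],
                  [-k_crowd,            k_crowd]])
    # Generalized eigenvalue problem K x = w^2 M x
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(eigvals.real)) / (2.0 * np.pi)

# Hypothetical numbers: a 5 Hz, 5000 kg empty structure occupied by a 500 kg crowd
# tuned near 5 Hz; the occupied system shows two frequencies straddling the empty one.
m_s, f_s = 5000.0, 5.0
k_s = m_s * (2 * np.pi * f_s) ** 2
m_c, f_c = 500.0, 5.0
k_c = m_c * (2 * np.pi * f_c) ** 2
print("empty structure:", f_s, "Hz")
print("occupied system:", natural_frequencies_hz(m_s, k_s, m_c, k_c), "Hz")
```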
Abstract:
The purpose of this research project is to study an innovative method for the stability assessment of structural steel systems, namely the Modified Direct Analysis Method (MDM). This method is intended to simplify an existing design method, the Direct Analysis Method (DM), by assuming a sophisticated second-order elastic structural analysis will be employed that can account for member and system instability, and thereby allow the design process to be reduced to confirming the capacity of member cross-sections. This last check can be easily completed by substituting an effective length of KL = 0 into existing member design equations. This simplification will be particularly useful for structural systems in which it is not clear how to define the member slenderness L/r when the laterally unbraced length L is not apparent, such as arches and the compression chord of an unbraced truss. To study the feasibility and accuracy of this new method, a set of 12 benchmark steel structural systems previously designed and analyzed by former Bucknell graduate student Jose Martinez-Garcia and a single column were modeled and analyzed using the nonlinear structural analysis software MASTAN2. A series of Matlab-based programs were prepared by the author to provide the code checking requirements for investigating the MDM. By comparing MDM and DM results against the more advanced distributed plasticity analysis results, it is concluded that the stability of structural systems can be adequately assessed in most cases using MDM, and that MDM often appears to be a more accurate but less conservative method in assessing stability.
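As a minimal sketch of the cross-section check that the MDM relies on (not the author's Matlab code-checking programs): in the AISC 360 flexural-buckling equations, substituting KL = 0 drives the elastic buckling stress to infinity, so the nominal compressive strength reduces to the squash load of the cross-section. The section properties in the usage example are hypothetical.

```python
import math

def nominal_compressive_strength(Fy, E, Ag, r, KL):
    """Nominal compressive strength Pn per the AISC 360 flexural-buckling
    equations (Section E3). With KL = 0 the check reduces to the
    cross-section squash load Fy * Ag, which is the MDM-style member check."""
    if KL == 0:
        return Fy * Ag
    Fe = math.pi ** 2 * E / (KL / r) ** 2          # elastic buckling stress
    if Fy / Fe <= 2.25:                            # inelastic buckling branch
        Fcr = 0.658 ** (Fy / Fe) * Fy
    else:                                          # elastic buckling branch
        Fcr = 0.877 * Fe
    return Fcr * Ag

# Hypothetical column section: Fy = 50 ksi, E = 29000 ksi, Ag = 14.4 in^2, r = 2.45 in.
print(nominal_compressive_strength(50, 29000, 14.4, 2.45, KL=0))        # squash load
print(nominal_compressive_strength(50, 29000, 14.4, 2.45, KL=15 * 12))  # 15 ft effective length
```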
Abstract:
With the development of lot-size-independent additive manufacturing processes, entirely new ways of realizing complex integral components at low production volumes are opening up. Compared with traditional manufacturing processes such as injection moulding, fewer manufacturing restrictions have to be observed during part design. Nevertheless, design freedom is not unlimited. Based on the idea of design for manufacturing, this contribution highlights opportunities and challenges in the design of additively manufactured parts. Building on this, potentials are identified for how product developers can be supported in the future in the dimensioning and design of such products.
Abstract:
Not only in medical technology, aviation and the automotive industry are additive (generative) processes increasingly regarded as important production methods. The construction industry, too, is becoming more and more aware of the possibilities and opportunities that these processes open up for different kinds of constructions and details. Investigating the changes and effects of these new technologies on the design and realization of architecture and building construction is the focus of the research activities of Dipl.-Ing. Holger Strauß at TU Delft, the Netherlands, and at the Hochschule Ostwestfalen-Lippe in Detmold. The first comprehensive research project on this topic, "Influence of Additive Processes on the development of facade constructions", was established in 2008 in cooperation with the internationally operating company Kawneer-Alcoa within the research focus "ConstructionLab" at the Detmolder Schule für Architektur und Innenarchitektur. The work focuses initially on exploring possibilities for the additive manufacturing of components as a complement to the standard products in system facades. The use of additive processes and high-tech CAD-CAM applications requires a new way of designing: no longer design for manufacturing, but design for function, that is, "functional design". In addition to enriching research and teaching at the universities through a practice-oriented and goal-driven problem statement, all results feed into the doctoral thesis of Holger Strauß at Delft University of Technology at the chair Design of Construction under Prof. Dr.-Ing. Ulrich Knaack.
Abstract:
The article presents the design process of intelligent virtual human patients that are used for the enhancement of clinical skills. The description covers the development from conceptualization and character creation to technical components and the application in clinical research and training. The aim is to create believable social interactions with virtual agents that help the clinician to develop skills in symptom and ability assessment, diagnosis, interview techniques and interpersonal communication. The virtual patient fulfills the requirements of a standardized patient producing consistent, reliable and valid interactions in portraying symptoms and behaviour related to a specific clinical condition.
Abstract:
The competitive industrial context compels companies to speed up every new product design. In order to keep designing products that meet the needs of the end user, a human-centered concurrent product design methodology has been proposed. Its implementation is complicated by the difficulties of collaboration between the experts involved in the design process. In order to ease this collaboration, we propose the use of virtual reality as an intermediate design representation in the form of lightweight, specialized immersive convergence-support applications. In this paper, we present the As Soon As Possible (ASAP) methodology, which makes the development of these tools possible while ensuring their usefulness and usability. The relevance of this approach is validated by an industrial use case through the design of an ergonomic-style convergence-support tool.
Abstract:
This article focuses on the design process for the transformation of the Rijksmuseum Amsterdam (1885, by P.J.H. Cuypers), with special attention to the evolution of the design by Cruz y Ortiz arquitectos and the associated history of ideas. How did opinions on the intervention evolve from the concept for a masterplan in 1996 to the realized project? To what extent were all those diverse ambitions regarding the city, the monument and the museum realized? What was the role of the designers, referring not only to Cruz y Ortiz, but also to Van Hoogevest Architecten (restoration) and Wilmotte & Associés (interior)? How did the design evolve in a complex and ambitious context involving a great many interested parties, and what effect did this have on the design process from the first sketches to the ultimately realized renovation?
Abstract:
Background: Sensor-based recordings of human movements are becoming increasingly important for the assessment of motor symptoms in neurological disorders beyond rehabilitative purposes. ASSESS MS is a movement recording and analysis system being developed to automate the classification of motor dysfunction in patients with multiple sclerosis (MS) using depth-sensing computer vision. It aims to provide a more consistent and finer-grained measurement of motor dysfunction than currently possible. Objective: To test the usability and acceptability of ASSESS MS with health professionals and patients with MS. Methods: A prospective, mixed-methods study was carried out at 3 centers. After a 1-hour training session, a convenience sample of 12 health professionals (6 neurologists and 6 nurses) used ASSESS MS to capture recordings of standardized movements performed by 51 volunteer patients. Metrics for effectiveness, efficiency, and acceptability were defined and used to analyze data captured by ASSESS MS, video recordings of each examination, feedback questionnaires, and follow-up interviews. Results: All health professionals were able to complete recordings using ASSESS MS, achieving high levels of standardization on 3 of 4 metrics (movement performance, lateral positioning, and clear camera view but not distance positioning). Results were unaffected by patients’ level of physical or cognitive disability. ASSESS MS was perceived as easy to use by both patients and health professionals with high scores on the Likert-scale questions and positive interview commentary. ASSESS MS was highly acceptable to patients on all dimensions considered, including attitudes to future use, interaction (with health professionals), and overall perceptions of ASSESS MS. Health professionals also accepted ASSESS MS, but with greater ambivalence arising from the need to alter patient interaction styles. There was little variation in results across participating centers, and no differences between neurologists and nurses. Conclusions: In typical clinical settings, ASSESS MS is usable and acceptable to both patients and health professionals, generating data of a quality suitable for clinical analysis. An iterative design process appears to have been successful in accounting for factors that permit ASSESS MS to be used by a range of health professionals in new settings with minimal training. The study shows the potential of shifting ubiquitous sensing technologies from research into the clinic through a design approach that gives appropriate attention to the clinic environment.
Abstract:
This paper presents the application of the Integral Masonry System (IMS) to the construction of earthquake-resistant houses and its experimental study. To verify the safety of this new type of building in seismic areas of the third world, two prototypes were tested, one with adobe and the other with hollow brick. In both cases the prototype is a two-story house measuring 6 × 6 × 6 m, built at 1/2 scale. The tests were carried out at the Laboratory of Antiseismic Structures of the Department of Engineering, Pontifical Catholic University of Peru in Lima, in collaboration with the UPM (Technical University of Madrid). This article shows the design process of the prototypes to be tested, including the sizing of the reinforcements, the characteristics of the tests and the results obtained. These results show that the IMS with adobe or brick remains stable with no significant cracks when subjected to a severe earthquake, with an estimated acceleration of 1.8 g.
Abstract:
Computational Fluid Dynamics tools have already become a valuable instrument for naval architects during the ship design process, thanks to their accuracy and the available computing power. Unfortunately, the development of RANSE codes, generally used when viscous effects play a major role in the flow, has not yet reached a mature stage, with the accuracy of the turbulence models and the free-surface representation being the most important sources of uncertainty. Another level of uncertainty is added when the simulations are carried out for unsteady flows, such as those generally studied in seakeeping and maneuvering analyses, where URANS equation solvers are used. The present work shows the applicability and the benefits derived from the use of new approaches for turbulence modeling (Detached Eddy Simulation) and free-surface representation (Level Set) in the URANS equations solver CFDSHIP-Iowa. Compared to URANS, DES is expected to predict much broader frequency contents and to behave better in flows where boundary-layer separation plays a major role. Level Set methods are able to capture very complex free-surface geometries, including breaking and overturning waves. The performance of these improvements is tested in a set of fairly complex flows generated by a Wigley hull in pure drift motion, with drift angles ranging from 10 to 60 degrees and at several Froude numbers to study the impact of their variation. Quantitative verification and validation are performed with the obtained results to guarantee their accuracy. The results show the capability of the CFDSHIP-Iowa code to carry out time-accurate simulations of complex flows of extreme unsteady ship maneuvers. The Level Set method is able to capture very complex geometries of the free surface, and the use of DES in unsteady simulations greatly improves the results obtained. Vortical structures and instabilities as a function of the drift angle and Fr are qualitatively identified. Overall analysis of the flow pattern shows a strong correlation between the vortical structures and the free-surface wave pattern. Karman-like vortex shedding is identified, and the scaled St agrees well with the universal St value. Tip vortices are identified and the associated helical instabilities are analyzed. St scaled with the hull length decreases as the distance along the vortex core (x) increases, which is similar to results from other simulations. However, St scaled using the distance along the vortex cores shows strong oscillations, compared with the almost constant values of those previous simulations. The difference may be caused by the effect of the free surface, grid resolution, and interaction between the tip vortex and other vortical structures, which needs further investigation. This study is exploratory in the sense that finer grids are desirable and experimental data are lacking for large α, especially for the local flow. More recently, the high-performance computational capability of CFDSHIP-Iowa V4 has been improved such that large-scale computations are possible. DES for DTMB 5415 with bilge keels at α = 20º was conducted using three grids with 10M, 48M and 250M points. DES analysis for flows around KVLCC2 at α = 30º is performed using a 13M grid and compared with earlier DES results on a 1.6M grid. Both studies are consistent with what was concluded on grid resolution herein, since the dominant frequencies for shear-layer, Karman-like, horse-shoe and helical instabilities show only marginal variation under grid refinement.
The penalties of using coarse grids are smaller frequency amplitudes and less resolved TKE. Therefore, finer grids should be used to improve V&V by resolving most of the active turbulent scales for all the different Fr and α, which hopefully can be compared with additional EFD data for large α when they become available.
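For reference, the nondimensional groups discussed above are straightforward to evaluate; the sketch below computes the length-based Froude number and the Strouhal number with either the hull length or the distance along a vortex core as the length scale. All numerical values are hypothetical model-scale figures, not data from the study.

```python
import math

def froude_number(speed, length, g=9.81):
    """Length-based Froude number Fr = U / sqrt(g * L)."""
    return speed / math.sqrt(g * length)

def strouhal_number(shedding_freq, length_scale, speed):
    """Strouhal number St = f * L / U; the length scale can be the hull length
    or the distance x along a vortex core, as in the scaling discussed above."""
    return shedding_freq * length_scale / speed

# Hypothetical model-scale values: 2.5 m hull at 1.2 m/s with 0.8 Hz vortex shedding.
U, L, f = 1.2, 2.5, 0.8
print("Fr =", froude_number(U, L))
print("St (hull length) =", strouhal_number(f, L, U))
print("St (x = 0.5 m along vortex core) =", strouhal_number(f, 0.5, U))
```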
Abstract:
An extended 3D distributed model based on distributed circuit units has been developed for the simulation of triple-junction solar cells under realistic light-distribution conditions. Special emphasis has been placed on the capability of the model to accurately account for current-mismatch and chromatic-aberration effects. The model has been validated, as shown by the good agreement between experimental and simulation results, for different light-spot characteristics including spectral mismatch and irradiance non-uniformities. The model is then used to predict the performance of a triple-junction solar cell for a light spot corresponding to a real optical architecture, in order to illustrate its suitability for assisting in concentrator system analysis and design.
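The current-mismatch effect that the distributed model captures in detail can be illustrated with a much cruder lumped sketch: three ideal single-diode junctions in series carry the same current, so the stack current is capped by the junction with the lowest photocurrent while the junction voltages add. The photocurrents, saturation currents and ideality factors below are hypothetical, and series resistance is neglected.

```python
import numpy as np

KT_Q = 0.02585  # thermal voltage at ~300 K, in volts

def subcell_voltage(I, Iph, I0, n):
    """Ideal single-diode junction voltage at a given series current."""
    return n * KT_Q * np.log((Iph - I) / I0 + 1.0)

def stack_iv(Iph_list, I0_list, n_list, points=200):
    """I-V curve of series-connected junctions: the same current flows through
    every junction, so the stack current cannot exceed the smallest photocurrent
    (current mismatch) and the stack voltage is the sum of junction voltages."""
    I = np.linspace(0.0, min(Iph_list) * 0.999, points)
    V = sum(subcell_voltage(I, Iph, I0, n)
            for Iph, I0, n in zip(Iph_list, I0_list, n_list))
    return I, V

# Hypothetical triple-junction cell under a spectrally mismatched, non-uniform spot:
# the middle junction's photocurrent (0.95 A) limits the whole stack.
Iph = [1.00, 0.95, 1.10]
I, V = stack_iv(Iph, I0_list=[1e-19, 1e-12, 1e-6], n_list=[1.0, 1.0, 1.0])
print("stack current capped below", min(Iph), "A; Voc ≈", round(float(V[0]), 2), "V")
```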
Abstract:
The objective of this paper is to evaluate the behaviour of a controller designed using a parametric Eigenstructure Assignment method and to evaluate its suitability for use in flexible spacecraft. The challenge lies in obtaining a suitable controller specifically designed to alleviate the deflections and vibrations suffered by external appendages of flexible spacecraft while performing attitude manoeuvres. One of the main problems in these vehicles is the mechanical cross-coupling that exists between the rigid and flexible parts of the spacecraft. Spacecraft with fine attitude-pointing requirements need precise control of the mechanical coupling to avoid undesired attitude misalignment. In designing an attitude controller, it is necessary to consider the possible vibration of the solar panels and how it may influence the performance of the rest of the vehicle. The nonlinear mathematical model of a flexible spacecraft is considered a close approximation to the real system. During controller evaluation, the design process has also been taken into account as a factor in assessing the robustness of the system.
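As a simplified stand-in for the parametric eigenstructure assignment design (which additionally shapes closed-loop eigenvectors, not only eigenvalues), the sketch below applies plain state-feedback pole placement to a hypothetical linearized single-axis model with one flexible appendage mode; all numerical values are illustrative only.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical linearized single-axis model: rigid attitude dynamics coupled with
# one lightly damped flexible (solar-panel) mode. State: [attitude, attitude rate,
# modal coordinate, modal rate]. Values are illustrative, not from the paper.
A = np.array([[0.0, 1.0,  0.0,   0.0],
              [0.0, 0.0,  0.2,   0.0],    # flexible mode feeds back into the rigid body
              [0.0, 0.0,  0.0,   1.0],
              [0.0, 0.0, -4.0,  -0.04]])  # mode at 2 rad/s with very light damping
B = np.array([[0.0], [1.0], [0.0], [-0.3]])  # the control torque also excites the mode

# Assign well-damped closed-loop poles to both the rigid and the flexible dynamics.
desired = [-0.5, -0.6, -1.5 + 1.5j, -1.5 - 1.5j]
K = place_poles(A, B, desired).gain_matrix

print("state-feedback gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```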
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used regardless of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques and without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to transform the measured field from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, for which a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where a spatial filter can be applied. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then likewise applies a spatial filter. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources associated with the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later replacement easier; the second computationally removes the leakage effect without requiring the replacement of the faulty component.
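A minimal sketch of the modal-filtering idea for planar near-field data, under the simplifying assumptions of ideal λ/2 sampling, no probe correction and synthetic input: the sampled field is transformed to the plane-wave spectrum, modes outside the visible region (which contribute mostly noise to the far field) are zeroed, and the field is transformed back.

```python
import numpy as np

def modal_filter_planar_nearfield(E_plane, dx, dy, wavelength):
    """Plane-wave-spectrum (modal) filtering of planar near-field samples:
    transform to the spectral domain with an FFT, zero the modes outside the
    visible region kx^2 + ky^2 <= k^2, and transform back."""
    k = 2.0 * np.pi / wavelength
    ny, nx = E_plane.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)

    spectrum = np.fft.fft2(E_plane)               # plane-wave spectrum of the samples
    spectrum[KX ** 2 + KY ** 2 > k ** 2] = 0.0    # keep only propagating modes
    return np.fft.ifft2(spectrum)

# Usage with synthetic data: a noisy, lambda/2-sampled field over a 64 x 64 plane.
wavelength = 0.03                                  # 10 GHz, in metres
d = wavelength / 2
field = np.random.normal(size=(64, 64)) + 1j * np.random.normal(size=(64, 64))
filtered = modal_filter_planar_nearfield(field, d, d, wavelength)
```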