74 results for Free-space method

at Universidad Politécnica de Madrid


Relevance:

100.00%

Abstract:

In recent decades, meshless methods (MMs), such as the element-free Galerkin method (EFGM), have been widely studied, and interesting results have been obtained when solving partial differential equations. However, such solutions show a problem near the boundary conditions, where accuracy is not adequately achieved. This is caused by the use of moving least squares (MLS) or reproducing kernel particle method (RKPM) approaches to obtain the shape functions needed in MMs: these methods are accurate enough in the interior of the integration domain, but not at its boundaries. Bernstein curves, which themselves form a partition of unity, can solve this problem with the same accuracy in the interior of the domain and at its boundaries.
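
As a minimal illustration of the property the abstract relies on, the sketch below (Python, with an assumed degree and assumed sample points) evaluates a Bernstein basis and checks that it sums to one everywhere on [0, 1], including at the boundary points:

```python
# Illustrative sketch: Bernstein basis polynomials form a partition of unity
# over [0, 1], including at the boundaries -- the property exploited above to
# keep accuracy at the domain boundary.
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Evaluate the n+1 Bernstein polynomials of degree n at points t."""
    t = np.asarray(t)
    return np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])

t = np.linspace(0.0, 1.0, 11)           # includes the boundary points 0 and 1
B = bernstein_basis(4, t)               # degree-4 basis, shape (5, 11)
print(np.allclose(B.sum(axis=0), 1.0))  # True: partition of unity everywhere
```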

Relevance:

90.00%

Abstract:

A set of measurements of the electromagnetic properties of building materials is presented in this work. The method is based on measuring the polarization state of the signal reflected from the material under study at a fixed angle of incidence. From the measured data, the dielectric constant has been obtained by using the Fresnel equations. Measurements were made using two horn antennas at a frequency of 9 GHz. The results are compared with the free-space reflection and transmission Fresnel method and with other reflection methods based on a conducting waveguide. The method described in this work can be used for other types of materials; its main advantages are its non-destructive character and its ease of implementation.
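
For reference, a minimal sketch of the forward problem that such a measurement inverts, assuming a placeholder complex permittivity and a 45° incidence angle (the paper's fixed angle is not stated):

```python
# Given a relative permittivity and a fixed incidence angle, the Fresnel
# equations predict the reflection coefficients for both polarizations.
# Inverting this relation against the measured polarization state yields the
# dielectric constant. The permittivity below is an assumed placeholder.
import numpy as np

def fresnel_reflection(eps_r, theta_deg):
    """Reflection coefficients (TE, TM) at an air/material interface."""
    th = np.deg2rad(theta_deg)
    root = np.sqrt(eps_r - np.sin(th)**2 + 0j)
    r_te = (np.cos(th) - root) / (np.cos(th) + root)
    r_tm = (eps_r * np.cos(th) - root) / (eps_r * np.cos(th) + root)
    return r_te, r_tm

r_te, r_tm = fresnel_reflection(eps_r=4.5 - 0.2j, theta_deg=45.0)
print(abs(r_te), abs(r_tm))
```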

Relevance:

90.00%

Abstract:

Imaging by computed tomography is a non-invasive alternative for observing soil structures, mainly the pore space. In soil data, the pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids. Since fluid transport in soil depends on the pore space, it is important to identify the regions that correspond to pores. In this paper we present a methodology to detect pore space and solid soil based on the synergy of image processing, pattern recognition and artificial intelligence. Mathematical morphology, an image processing technique, is used for image enhancement. In order to find groups of pixels with similar, more or less homogeneous gray-level intensity, a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is used. Artificial Neural Networks (ANNs) are very efficient in demanding, large-scale and generic pattern recognition applications, so a classifier based on an artificial neural network is finally applied to classify soil images into two classes: pore space and solid soil.
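
A hedged sketch of the clustering step follows, using plain fuzzy c-means on synthetic gray levels; the paper's PFCM variant adds a possibilistic typicality term that is not reproduced here:

```python
# Minimal fuzzy c-means on a 1-D feature (pixel intensity) with two classes,
# pore vs. solid. Intensities are synthetic placeholders, not CT data.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, tol=1e-6):
    """Plain fuzzy c-means on a 1-D array x; returns centers and memberships."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        v = um @ x / um.sum(axis=1)           # update the cluster centers
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # distances to centers
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0)         # update the fuzzy memberships
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return v, u

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 8, 500),      # darker voxels: pores
                         rng.normal(170, 15, 1500)])  # brighter voxels: solid
centers, u = fcm(pixels)
labels = u.argmax(axis=0)    # hard pore/solid labels from fuzzy memberships
```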

Relevance:

90.00%

Abstract:

The increasing demand for security in mobile applications has drawn attention to biometrics as a proper and suitable solution for providing a secure environment on mobile devices. With this aim, this document presents a biometric system based on hand geometry oriented to mobile devices, allowing a high degree of freedom in terms of illumination, hand rotation and distance to the camera. Users take a picture of their own hand in free space, without requiring any flat surface on which to place the hand and without removing rings, bracelets or watches. The proposed biometric system relies on an accurate segmentation procedure, able to isolate the hand from any background; a feature extraction invariant to orientation, illumination, distance to camera and background; and a user classification, based on the k-Nearest Neighbour approach, able to provide accurate results in individual identification. The proposed method has been evaluated with two proprietary databases collected with an HTC mobile phone. The first database contains 120 individuals, with 20 acquisitions of both hands. The second is a synthetic database containing 408,000 images of hand samples against different backgrounds: tiles, grass, water, sand, soil and the like. The system is able to identify individuals properly with a False Reject Rate of 5.78% and a False Acceptance Rate of 0.089%, using 60 features (15 features per finger).
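
As an illustration of the classification stage only, a k-Nearest Neighbour sketch over synthetic 60-dimensional feature vectors; the segmentation and feature-extraction stages are not reproduced, and the data is a placeholder:

```python
# k-NN identification over hand-geometry-like feature vectors: 120 users,
# 20 samples each, 60 features (15 per finger, as in the abstract).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_users, n_samples, n_features = 120, 20, 60
# Synthetic stand-in features: each user gets a cluster around its own mean.
X = np.vstack([rng.normal(loc=u, scale=0.5, size=(n_samples, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), n_samples)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(X[:1]))      # -> [0], the enrolled identity of that sample
```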

Relevance:

90.00%

Abstract:

Biometrics applied to mobile devices are of great interest for security applications. Everyday scenarios can benefit from a combination of the most secure systems and the most simple and widespread devices. This document presents a hand biometric system oriented to mobile devices, proposing a non-intrusive, contact-less acquisition process in which final users take a picture of their hand in free space with a mobile device, without removing rings, bracelets or watches. The main contribution of this paper is threefold: first, a feature extraction method is proposed, providing hand measurements invariant to the aforementioned changes; second, a template creation based on hand geometric distances is provided, requiring information from only one individual, without considering data from the rest of the individuals in the database; finally, a template matching proposal minimizes intra-class variability and maximizes inter-class separability. The proposed method is evaluated using three publicly available contact-less, platform-free databases. In addition, the results obtained with these databases are compared to those provided by two competitive pattern recognition techniques often employed in the literature, namely Support Vector Machines (SVM) and k-Nearest Neighbour. This approach therefore provides an appropriate solution for adapting hand biometrics to mobile devices, with accurate results and a non-intrusive acquisition procedure that increases overall acceptance by the final user.
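
A minimal sketch of the template idea, under the paper's constraint that enrolment uses only the individual's own samples; feature values and the acceptance threshold are assumed placeholders, not the paper's geometric distances:

```python
# Template creation from one user's samples, and matching by thresholding
# the Euclidean distance between a probe vector and the stored template.
import numpy as np

def enroll(samples):
    """Create a template from one user's feature vectors (mean vector)."""
    return samples.mean(axis=0)

def matches(template, probe, threshold=1.0):
    """Accept the probe if it lies within `threshold` of the template."""
    return np.linalg.norm(probe - template) < threshold

rng = np.random.default_rng(1)
enrolment = rng.normal(10.0, 0.1, size=(5, 25))       # 5 samples, 25 distances
template = enroll(enrolment)
print(matches(template, rng.normal(10.0, 0.1, 25)))   # genuine probe -> True
print(matches(template, rng.normal(12.0, 0.1, 25)))   # impostor probe -> False
```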

Relevance:

90.00%

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications set by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behaviour of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetically absorbing material, which simulate free-space propagation conditions thanks to the absorption of that material. Moreover, these facilities can be used regardless of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. All of them share the same basis: performing a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near field, where spatial filtering can be applied. The last back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture, in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second removes the leakage effect computationally, without requiring the substitution of the faulty component.
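
As a condensed sketch of the third noise-filtering alternative for planar ranges (back-propagation plus spatial filtering), assuming a scalar field and placeholder grid, aperture and distance values:

```python
# Plane-wave-spectrum sketch: back-propagate a noisy planar near-field sample
# to the AUT plane, zero the field outside the aperture (where only noise can
# exist), and propagate forward again. Evanescent components are attenuated
# rather than amplified here, which acts as an implicit low-pass filter.
import numpy as np

n, dx, dist, lam = 128, 0.01, 0.10, 0.03     # samples, step, z-distance, wavelength
k = 2 * np.pi / lam
kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent -> imaginary

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (np.abs(X) < 0.08) & (np.abs(Y) < 0.08)    # assumed AUT extent

rng = np.random.default_rng(0)
E_meas = np.exp(-(X**2 + Y**2) / 0.05**2) + rng.normal(0, 0.05, (n, n))

spectrum = np.fft.fft2(E_meas)
E_aut = np.fft.ifft2(spectrum * np.exp(1j * kz * dist))   # back-propagate
E_aut = E_aut * aperture                                  # spatial filter
E_clean = np.fft.ifft2(np.fft.fft2(E_aut) * np.exp(-1j * kz * dist))
```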

Relevance:

90.00%

Abstract:

Protecting signals is one of the main tasks in information transmission, and a large number of different methods have been employed for many centuries. Most of them are based on adding a certain signal to the original one. When the composite signal is received, if the added signal is known, the initial information may be recovered. The main problem is the type of masking signal employed. One possibility is the use of chaotic signals, but they have a first strong limitation: the need to synchronize emitter and receiver. Optical communication systems based on chaotic signals have been proposed in a large number of papers. Moreover, because most communication systems are digital and conventional chaos generators are analogue, an analogue-to-digital conversion is needed. In this paper we report a new system in which digital chaos is obtained from an optically programmable logic structure. This structure has been employed by the authors in optical computing, and some previous results on chaotic signals have been reported. The main advantage of this new system is that no analogue-to-digital conversion is needed. Previous works by the authors employed Self-Electrooptic Effect Devices, but in this case more conventional structures, such as semiconductor laser amplifiers, have been employed. The way to analyze the characteristics of digital chaotic signals is reported, as well as the method to synchronize the chaos generators located at the emitter and at the receiver.
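
A toy sketch of the masking principle with two identically seeded (i.e. already synchronized) digital chaos generators; the logistic map stands in for the optically generated chaos and is an assumption, not the paper's structure:

```python
# XOR-masking with a shared digital chaos stream: the emitter masks the
# message, and a synchronized receiver regenerates the same stream to unmask.
def chaos_bits(n, x=0.4, r=3.99):
    """Digital chaos from a logistic map, thresholded to bits."""
    bits = []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
mask = chaos_bits(len(message))                    # emitter's generator
sent = [m ^ c for m, c in zip(message, mask)]      # masked (transmitted) bits
recovered = [s ^ c for s, c in zip(sent, chaos_bits(len(message)))]  # receiver
assert recovered == message
```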

Relevance:

80.00%

Abstract:

This Doctoral Thesis deals with the application of meshless methods to eigenvalue problems, particularly free vibrations and buckling. The analysis focuses on aspects such as the numerical solution of the problem, the computational cost and the feasibility of using non-consistent mass or geometric stiffness matrices. Furthermore, the error is analyzed, with the aim of identifying its main sources and obtaining the key factors that enable faster convergence for a given problem. Although a wide variety of apparently independent meshless methods can currently be found in the literature, the relationships among them have been analyzed. The outcome of this assessment is that all those methods can be grouped into a limited number of categories, and that the Element-Free Galerkin Method (EFGM) is representative of the most important one. Therefore, the EFGM has been selected as the reference for the numerical analyses. Many of the error sources of a meshless method stem from its interpolation/approximation algorithm. In the EFGM, this algorithm is known as Moving Least Squares (MLS), a particular case of the Generalized Moving Least Squares (GMLS). The accuracy of the MLS depends on the following factors: the order of the polynomial basis p(x), the features of the weight function w(x), and the shape and size of the support domain of this weight function.
The individual contribution of each of these factors, along with the interactions among them, has been studied in both regular and irregular arrangements of nodes, by reducing each contribution to a single quantifiable parameter. This assessment is applied to a range of one- and two-dimensional benchmark cases, and considers the error not only in terms of eigenvalues (natural frequencies or buckling loads), but also in terms of eigenvectors.
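
A minimal 1-D MLS sketch showing the three accuracy factors named above (basis order, weight function, support size); the node layout, weight function and support radius are assumed values:

```python
# 1-D Moving Least Squares shape functions: polynomial basis p(x), weight
# function w(x) with compact support, moment matrix A(x), and the resulting
# shape-function values, which reproduce constants (partition of unity).
import numpy as np

def mls_shape(x, nodes, support=0.35, order=1):
    """MLS shape-function values at point x for all nodes."""
    p = lambda s: np.array([s**k for k in range(order + 1)])   # poly basis
    r = np.abs(x - nodes) / support
    w = np.where(r < 1, (1 - r**2)**2, 0.0)                    # weight fn
    P = np.array([p(xi) for xi in nodes])                      # (N, order+1)
    A = P.T @ (w[:, None] * P)                                 # moment matrix
    return w * (P @ np.linalg.solve(A, p(x)))                  # shape values

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape(0.43, nodes)
print(phi.sum())   # ~1.0: the MLS approximation reproduces constants
```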

Relevance:

80.00%

Abstract:

An accurate characterization of near-region propagation of radio waves inside tunnels is of practical importance for the design and planning of advanced communication systems. However, no consensus has yet been reached on the propagation mechanism in this region: some authors claim that it follows the free-space model, while others interpret it with the multi-mode waveguide model. This paper clarifies the situation in the near region of arched tunnels by analytically modeling the division point between the two propagation mechanisms. The procedure is based on combining propagation theory with three-dimensional solid geometry. Three groups of measurements are employed to verify the model in different tunnels at different frequencies. Furthermore, simplified models of the division point in five specific application situations are derived to facilitate the use of the model. The results in this paper could help deepen the insight into the propagation mechanism within tunnel environments.
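
A toy numerical sketch of locating such a division point by intersecting the two loss models; the frequency and waveguide coefficients are placeholders, and the paper's actual boundary comes from a geometric derivation, not a curve crossing:

```python
# Compare free-space path loss (Friis) with an assumed multi-mode waveguide
# fit (higher insertion loss, much smaller attenuation rate per metre) and
# report the first distance at which the two mechanisms swap.
import numpy as np

f = 2.4e9                                 # assumed carrier frequency, Hz
lam = 3e8 / f
d = np.linspace(1.0, 500.0, 5000)         # axial distance in the tunnel, m

fs_loss = 20 * np.log10(4 * np.pi * d / lam)   # free-space path loss, dB
wg_loss = 70.0 + 0.05 * d                      # dB, placeholder coefficients

idx = np.argmax(fs_loss >= wg_loss)       # first index where the curves cross
print(f"dividing point near {d[idx]:.0f} m")
```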

Relevance:

80.00%

Abstract:

There is no unanimous consensus yet on the propagation mechanism before the break point inside tunnels. Some deem that the propagation mechanism follows the free-space model; others argue that it should be described by the multi-mode waveguide model. This paper first analyzes the propagation loss under the two mechanisms. Then, by jointly using propagation theory and three-dimensional solid geometry, a generic analytical model for the boundary between the free-space mechanism and the multi-mode waveguide mechanism inside tunnels is presented. Three measurement campaigns validate the model in different tunnels at different frequencies. Furthermore, the conditions of validity of the free-space model in tunnel environments are discussed for some specific situations. Finally, through mathematical derivation, the seemingly conflicting viewpoints on the free-space mechanism and the multi-mode waveguide mechanism are unified in some specific situations by the presented generic model. The results in this paper can be helpful for gaining deeper insight into, and a better understanding of, the propagation mechanism inside tunnels.

Relevance:

80.00%

Abstract:

Article on railway communications. Abstract: Along with the increase in operating frequencies in the advanced radio communication systems used inside tunnels, the location of the break point moves further and further away from the transmitter. This means that the near region lengthens considerably, and may even occupy the whole propagation cell or the entire length of some short tunnels. This study first analyses the propagation loss resulting from the free-space mechanism and the multi-mode waveguide mechanism in the near region of circular tunnels. Then, by jointly employing propagation theory and three-dimensional solid geometry, a general analytical model of the dividing point between the two propagation mechanisms is presented for the first time. Moreover, the model is validated by a wide range of measurement campaigns in different tunnels at different frequencies. Finally, simplified formulae for the dividing point in some application situations are discussed. The results of this study can be helpful for grasping the essence of the propagation mechanism inside tunnels.

Relevance:

80.00%

Abstract:

Along with the increase in operating frequencies in advanced radio communication systems, the near region inside tunnels lengthens considerably and may even occupy the whole propagation cell or the entire length of some short tunnels. This paper analytically models the propagation mechanisms and their dividing point in the near region of arbitrary cross-sectional tunnels for the first time. First, the propagation losses due to the free-space mechanism and the multi-mode waveguide mechanism are modeled. Then, by jointly employing propagation theory and three-dimensional solid geometry, the paper presents a general model for the dividing point between the two propagation mechanisms. It is worth mentioning that this model can be applied to tunnels of arbitrary cross-section. Furthermore, the general dividing-point model is specialized for rectangular, circular and arched tunnels. Five groups of measurements are used to justify the model in different tunnels at different frequencies. Finally, in order to facilitate the use of the model, simplified analytical solutions for the dividing point in five specific application situations are derived. The results in this paper could help deepen the insight into the propagation mechanisms in tunnels.

Relevance:

80.00%

Abstract:

Eye-safety requirements in important applications such as LIDAR or free-space optical communications make the generation of high-power, short optical pulses at 1.5 µm particularly interesting. Moreover, high repetition rates reduce the error and/or the measurement time in applications involving pulsed time-of-flight measurements, such as range finders, 3D scanners or traffic speed enforcement. The Master Oscillator Power Amplifier (MOPA) architecture is an interesting source for these applications, since large changes in output power can be obtained at GHz rates with a relatively small modulation of the current in the Master Oscillator (MO). We have recently demonstrated short optical pulses (100 ps) with high peak power (2.7 W) by gain-switching the MO of a monolithically integrated 1.5 µm MOPA. Although in an integrated MOPA the laser and the amplifier are ideally independent devices, compound-cavity effects due to residual reflectance at the different interfaces are often observed, leading to modal instabilities such as self-pulsations.
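
As a rough order-of-magnitude check on the reported figures (assuming an approximately rectangular pulse and, purely for illustration, a 1 GHz repetition rate, which the abstract only implies with "GHz rates"):

```latex
E_{\mathrm{pulse}} \approx P_{\mathrm{peak}}\,\tau
  = 2.7\,\mathrm{W} \times 100\,\mathrm{ps} = 0.27\,\mathrm{nJ},
\qquad
P_{\mathrm{avg}} = E_{\mathrm{pulse}}\, f_{\mathrm{rep}}
  \approx 0.27\,\mathrm{nJ} \times 1\,\mathrm{GHz} = 0.27\,\mathrm{W}
```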

Relevance:

80.00%

Abstract:

Nowadays, interest in high-power semiconductor devices is growing for applications such as telemetry, LIDAR systems and free-space communications. Semiconductor devices can indeed be an alternative to solid-state lasers, because they are more compact and consume less power. These characteristics are very important in constrained and/or low-power-supply environments such as airplanes or satellites. A lot of work has been done in the 800-1200 nm range on integrated and free-space Master Oscillator Power Amplifiers (MOPA) [1]-[3]. At 1.5 µm, the only commercially available MOPA is from QPC [4]: the fibred output power is about 700 mW and the optical linewidth is 500 kHz. In this paper, we first report on the simulations carried out to determine the appropriate vertical structure and architecture for a good MOPA at 1.58 µm (Section II). Then we describe the fabrication of the devices (Section III). Finally, we report on the optical and electrical measurements performed on various devices (Section IV).

Relevance:

80.00%

Abstract:

This research study offers a tool for approaching the morphometric space in which multi-family housing defines high-density cities. Multi-family housing is the basic cell of the city. The configuration and dimension studies of the urban fabric reveal the importance of building depth as a key parameter halfway between the dwelling and the city. The building depth traces the limit of architecture in the city; it qualifies and quantifies the urban territory. Its dynamics characterize the different environments while, in its essence, an adjustment process of continuous verification and adaptation defines the type. The shape of the city and its different configuration possibilities, in terms of built fabric and public space, always keeping an eye on the relationship between them, depend largely on the building depth. Therefore, it is a relevant parameter relating the diverse configurations of interior and exterior space. When designing, once the depth is established, some properties are easily adapted; others, however, require a certain degree of interpretation or have to be left out of the study. Given a certain surface, establishing the depth forces the dimensions of the facade in the different configurations. Both depth and facade dimensions are crucial for the form factor of the built mass, and their relationship produces a complex range of possibilities. From an urban point of view, great depth means multiple uses (making no distinction whatsoever), a lower cost per unit of built area, and a shared facade that optimizes temperature and light exchange. On the contrary, the city of reduced depth adjusts its shape to the use and develops linearly and repetitively along its facades; there, strong energy exchange is set against the great possibilities of free space. From the perspective of the dwelling, the different dimensions of depth are produced under certain determinants: climate, compactness, occupancy, hybridization, dwelling size, etc. Meanwhile, the type is developed based on a related meter (as in poetry). This work starts from this premise. It studies the dependency relation between the conditions of the dwellings and their meter (dimensions). It organizes buildings hierarchically based on the parameter of depth to create a tool that, like an abacus, is able to make visible the relational dynamics between configuration and dimension in high-density conditions. To this end, in the first stage a large group of representative multi-family housing buildings, mostly from Europe, picked from three prestigious books as a repertoire, is compiled. They are categorized and ordered, drawing commensurable data and key issues that link the depth of the footprint to its morphology. Later, this information is studied in depth with diagrams that bring out connections and discrepancies, voids and accumulations, limits, characteristic intervals, margins and axes, parameters, attributes, etc. These relationships attempt to factorize the house, from a morphological and metrical point of view, as a meta-dwelling. This tool is established as a complex relational frame in which case studies are positioned and cross-cutting links are traced; these can concern morphology, climate, technique, regulation or technology. Each new case or link produces affinities and discrepancies that require interpretation and verification. Thus, this instrument of comparative analysis is fine-tuned, specialized and completed as it is used. The way housing is understood in high-density cities is shown as a unitary morphological subsystem, and its understanding is easy to reach and accumulate for future researchers, students or practicing architects.