26 results for Error of measurement
at Universidad Politécnica de Madrid
Abstract:
The verification of compliance with a design specification in manufacturing requires metrological instruments to check whether the magnitude associated with the design specification falls within the tolerance range. Such instrumentation, and its use during the measurement process, carries an associated measurement uncertainty whose value must be related to the tolerance being verified. Most papers that deal jointly with tolerances and measurement uncertainties focus on establishing an uncertainty-tolerance relationship without paying much attention to the impact from the standpoint of process cost. This paper analyzes the cost of measurement uncertainty, considering uncertainty as a productive factor in the process outcome. Starting from a cost-tolerance model associated with the process, the effect of measurement uncertainty is quantified in terms of cost and its impact on the process is analyzed.
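The link between measurement uncertainty and inspection cost can be illustrated with a small sketch. This is not the paper's cost-tolerance model; it only shows, under an assumed Gaussian measurement uncertainty, how the probability of a false reject near a tolerance limit (and hence one cost term) could be computed. The limit, uncertainty and cost values are arbitrary.

```python
import math

def false_reject_risk(true_value, tol_limit, u):
    """Probability that a part whose true value is inside the upper
    tolerance limit is measured as out of tolerance, for a Gaussian
    measurement uncertainty with standard deviation u."""
    z = (tol_limit - true_value) / u
    return 0.5 * math.erfc(z / math.sqrt(2))

# A true value sitting exactly on the limit is rejected half the time.
risk_at_limit = false_reject_risk(1.000, 1.000, 0.010)
# Well inside the tolerance, the risk (and its cost) vanishes.
risk_inside = false_reject_risk(0.900, 1.000, 0.010)
# Expected cost per part if a false reject costs, say, 10 units.
expected_cost = 10.0 * risk_at_limit
```

Larger uncertainty widens the band of values at risk of misclassification, which is how uncertainty enters the process cost in quantitative terms.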
Abstract:
Current trends point to the development of new, economical and ecological materials with optimal mechanical, acoustic and thermal properties. The acoustic characterization of a material usually involves measuring its sound absorption coefficient. The two usual techniques for measuring this parameter are the reverberation chamber and the Kundt tube. However, there are also in-situ techniques for measuring the absorption coefficient that allow the real behaviour of the material to be checked in its final installed form. This paper presents a comparative study of the sound absorption coefficient of a material measured using different measurement techniques.
Abstract:
The aim of this paper is to accurately estimate the local truncation error of partial differential equations that are numerically solved using a finite difference or finite volume approach on structured and unstructured meshes. In this work, we approximate the local truncation error using the τ-estimation procedure, which compares the residuals on a sequence of grids with different spacing. First, we focus the analysis on one-dimensional scalar linear and non-linear test cases to examine the accuracy of the truncation error estimate for both finite difference and finite volume approaches on different grid topologies. Then, we extend the analysis to two-dimensional problems: first to linear and non-linear scalar equations and finally to the Euler equations. We demonstrate that this approach yields a highly accurate estimate of the truncation error provided some conditions are fulfilled. These conditions concern the accuracy of the restriction operators, the choice of boundary conditions, the distortion of the grids and the magnitude of the iteration error.
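A minimal sketch of the two-grid idea, not the paper's method: for a 1-D Poisson problem discretized with second-order central differences, the coarse-grid residual of the restricted fine-grid solution estimates the relative truncation error, which for a second-order scheme is roughly 3/4 of the coarse-grid truncation error. The grid sizes and the injection restriction operator are arbitrary choices of this sketch.

```python
import numpy as np

def laplacian_1d(n, h):
    # Second-order operator for -u'' with homogeneous Dirichlet ends.
    A = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    return A / h**2

# Fine grid: 63 interior points on [0, pi]; -u'' = sin(x), u = sin(x).
nf = 63
hf = np.pi / (nf + 1)
xf = hf * np.arange(1, nf + 1)
uf = np.linalg.solve(laplacian_1d(nf, hf), np.sin(xf))

# Restrict the fine solution by injection to the coarse grid.
hc, xc, uc = 2 * hf, xf[1::2], uf[1::2]

# tau-estimation: coarse-grid residual of the restricted fine solution.
tau_est = laplacian_1d(len(xc), hc) @ uc - np.sin(xc)

# Exact coarse-grid truncation error, for comparison.
tau_exact = laplacian_1d(len(xc), hc) @ np.sin(xc) - np.sin(xc)
```

Away from the boundaries, tau_est tracks tau_exact pointwise up to the known 3/4 factor, which is the sense in which the residual comparison on two grids recovers the truncation error.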
Abstract:
Measuring skin temperature (TSK) provides important information about the complex thermal control system and is of interest in studies of thermoregulation. The most common method of recording TSK involves thermocouples at specific locations; however, the use of infrared thermal imaging (IRT) has increased. The two methods use different physical processes to measure TSK, and each has advantages and disadvantages. Therefore, the objective of this study was to compare mean skin temperature (MTSK) measurements using thermocouples and IRT in three different situations: pre-exercise, exercise and post-exercise. Analysis of the residual scores in Bland-Altman plots showed poor agreement between the MTSK obtained using thermocouples and that obtained using IRT. The average error was -0.75 °C pre-exercise, 1.22 °C during exercise and -1.16 °C post-exercise, and the reliability between the methods was low pre-exercise (ICC = 0.75 [0.12 to 0.93]), during exercise (ICC = 0.49 [-0.80 to 0.85]) and post-exercise (ICC = 0.35 [-1.22 to 0.81]). Thus, there is poor correlation between the MTSK values measured by thermocouples and IRT pre-, during and post-exercise, and low reliability between the two forms of measurement.
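The agreement statistics above come from a Bland-Altman analysis; a minimal sketch of how the bias and 95% limits of agreement are computed is shown below. The temperature values are made up for illustration, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between
    two measurement methods, as in a Bland-Altman analysis."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired readings (°C): thermocouple vs infrared imaging.
tc  = [33.1, 33.5, 34.0, 33.8, 32.9, 33.6]
irt = [33.9, 34.1, 34.7, 34.6, 33.7, 34.3]
bias, lo, hi = bland_altman(tc, irt)
```

A systematic offset between methods appears as a non-zero bias, while wide limits of agreement indicate the poor method-to-method agreement reported in the abstract.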
Abstract:
This Doctoral Thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis was developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and at present runs four facilities in different configurations: a Gregorian compact antenna test range, a spherical near-field system, a planar near-field system and a semianechoic arch system. The research work performed for this thesis contributes to the knowledge of the first measurement configuration at higher frequencies, beyond the microwave region where the Radiation Group offers customer-level performance. To reach this high-level goal, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs. The first step was a review of the state of the art. The study of the scientific literature covered measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. The joint study of both fields of knowledge converged, where such measurement facilities are concerned, on a series of technological challenges that become serious bottlenecks at different stages: analysis, design and assessment. After this overview, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to compute certain electromagnetic features of interest, such as the field distribution phase or the stray-signal behaviour of particular structures, when they interact with sources of electromagnetic waves.
Properly operated, a CATR facility features collimation optics that are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks involve a very large number of mathematical unknowns, which grows with frequency following different polynomial-order laws depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility is the core of an algorithm's ability to support the subsequent design tasks. This thesis' contribution to this field consists of a formulation that is powerful both in dealing with varied analysis geometries and in computational terms. Two algorithms were developed. While based on the same hybridization principle, they achieve different orders of physical fidelity at different computational costs. Their CATR design capabilities were compared, yielding both qualitative and quantitative conclusions on their scope. Next, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced at the analysis stage appears again in the in-chamber field-probing stage. The naturally lower dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities dramatically increase the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength range.
The value of range assessment, however, moves towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data-reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. The fourth issue approached was the environmental sensitivity of millimeter wavelengths. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. This feature relegates many industrial practices of microwave frequencies to the experimental stage at millimeter wavelengths. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena that completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables that affect the range's performance. A simple model was developed that relates high-level phenomena, such as feed-probe phase drift, to low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is automatically extended towards higher frequencies. In summary, the purpose of this thesis is to deepen the knowledge of millimeter wavelengths as applied to compact antenna test ranges. This knowledge is distributed across the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottleneck phenomena currently exist and seriously compromise antenna measurement practice at millimeter wavelengths.
Abstract:
This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and enhanced detection of non-ideal situations, increasing the integrity of surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and a rule-based expert system that adapts the data processing methods.
Abstract:
This paper analyzes the different adjustment methods commonly used to characterize circular features in indirect metrology: the least-squares circle, minimum zone circle, maximum inscribed circle and minimum circumscribed circle. The analysis was performed on images obtained with digital optical machines. The self-developed calculation algorithms were implemented in Matlab® and take as study variables the amplitude of the angular sector of the circular feature, its nominal radius and the magnification used by the optical machine. Under different conditions, the radius and circularity error of different circular standards were determined. Comparing the results obtained by the different adjustment methods with the certified values of the standards has allowed us to determine the accuracy of each method and its scope of application.
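Of the four adjustment methods, the least-squares circle has a simple closed form; the sketch below uses the algebraic (Kasa) formulation, which is one common way to implement it and not necessarily the authors' exact algorithm. The sample points are synthetic.

```python
import numpy as np

def least_squares_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solves a linear
    system for the centre (cx, cy) and radius r."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Noisy points on a circle of radius 5 centred at (1, 2), synthetic data.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
xs = 1 + 5 * np.cos(t) + 0.01 * np.sin(7 * t)
ys = 2 + 5 * np.sin(t) + 0.01 * np.cos(5 * t)
cx, cy, r = least_squares_circle(xs, ys)

# Circularity error: radial peak-to-valley about the fitted centre.
radii = np.hypot(xs - cx, ys - cy)
circ_err = radii.max() - radii.min()
```

The minimum zone, maximum inscribed and minimum circumscribed fits are min-max problems rather than least-squares ones, which is why their results (and accuracy) differ from this fit.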
Abstract:
Knowledge of the uncertainty of measurement of testing results is important when results have to be compared with limits and specifications. In the measurement of sound insulation according to standard UNE-EN ISO 140-4, the uncertainty of the final magnitude is mainly associated with the averaged sound pressure levels L1 and L2. A parameter that allows us to quantify the spatial variation of the sound pressure level is the standard deviation of the pressure levels measured at different points of the room. In this work, for a large number of measurements according to UNE-EN ISO 140-4, we analyzed qualitatively the behaviour of the standard deviation of L1 and L2. The study of sound fields in enclosed spaces is very difficult: there is a wide variety of rooms with different sound fields depending on factors such as volume, geometry and materials. In general, we observe that the L1 and L2 standard deviations contain peaks and dips at single frequencies, independent of the characteristics of the rooms, which could correspond to critical frequencies of walls, floors and windows, or even to temporal alterations of the sound field. Also, in most measurements according to UNE-EN ISO 140-4, a strong similarity between the L1 and L2 standard deviations is found. We believe that this result points to a coupled system between the source and receiving rooms; mainly at low frequencies, the shape of the L1 and L2 standard deviations is comparable to that of the velocity-level standard deviation on a wall.
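The spatial descriptors used above are straightforward to compute; a small sketch, with made-up sound pressure levels, shows the energetic average over microphone positions and the spatial standard deviation analyzed in the paper.

```python
import numpy as np

def spatial_stats(levels_db):
    """Energetic average and spatial standard deviation of sound
    pressure levels measured at several room positions."""
    L = np.asarray(levels_db, float)
    L_avg = 10 * np.log10(np.mean(10 ** (L / 10)))   # energetic mean
    return L_avg, L.std(ddof=1)

# Illustrative levels (dB) at five microphone positions in one band.
L1 = [78.2, 79.5, 77.8, 80.1, 78.9]
avg, sd = spatial_stats(L1)
```

Note that the average is taken over squared pressures (hence the 10^(L/10) terms), not over the dB values themselves, while the spatial standard deviation is conventionally taken directly on the levels.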
Abstract:
We analyze the effect of packet losses in video sequences and propose a lightweight Unequal Error Protection strategy which, by choosing which packet is discarded, strongly reduces the Mean Square Error of the received sequence.
Abstract:
Temperature is a first-class design concern in modern integrated circuits.
The important increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature has a negative impact on several circuit parameters such as speed, cooling budgets, reliability and power consumption. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width depends on the variation of leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output.
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very reduced area, 10250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s. These figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, they still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity: even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations of recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor, the gate voltage, is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, we employ the same digitization structure as in the first sensor.
A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by ultra-low energy per conversion, 48-640 pJ, and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we also consider the power consumption, the sampling frequency, the interconnection costs and the possibility of choosing among different types of monitors. The model is fed into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to other previous works in the literature, the model presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in area and power.
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields a list of values ordered from maximum to minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
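The first sensor's conversion principle (an exponentially temperature-dependent pulse width linearized by a logarithmic counter) can be sketched numerically. The constants and the idealized continuous logarithm below are illustrative assumptions, not the thesis' fitted values or circuit; a real logarithmic counter quantizes to integer counts.

```python
import math

# Hypothetical constants for illustration only.
T0, t0, k = 25.0, 1e-3, 0.04   # reference temp (°C), pulse (s), slope (1/°C)

def pulse_width(T):
    """Discharge time of the floating node: leakage grows roughly
    exponentially with temperature, so the pulse shrinks exponentially."""
    return t0 * math.exp(-k * (T - T0))

def log_counter(width, f_clk=1e6):
    """Idealized logarithmic counter: the log of the cycle count
    linearizes the exponential width-temperature relation."""
    return math.log(width * f_clk)

# The digitized output is then (approximately) linear in temperature.
out = [log_counter(pulse_width(T)) for T in (25, 50, 75)]
step1 = out[0] - out[1]
step2 = out[1] - out[2]
```

Equal temperature steps produce equal output steps, which is exactly the linearization the logarithmic counter provides without a separate calibration stage.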
Abstract:
Article on railway communications. Abstract: Along with the increase in operating frequencies in advanced radio communication systems used inside tunnels, the location of the break point moves further and further away from the transmitter. This means that the near region lengthens considerably and can even occupy the whole propagation cell, or the entire length of some short tunnels. This study first analyses the propagation loss resulting from the free-space mechanism and from the multi-mode waveguide mechanism in the near region of circular tunnels. Then, by jointly employing propagation theory and three-dimensional solid geometry, a general analytical model of the dividing point between the two propagation mechanisms is presented for the first time. Moreover, the model is validated by a wide range of measurement campaigns in different tunnels at different frequencies. Finally, simplified formulae for the dividing point in some application situations are discussed. The results of this study can help grasp the essence of the propagation mechanisms inside tunnels.
Abstract:
We discuss several methods, based on coordinate transformations, for the evaluation of singular and quasi-singular integrals in the direct Boundary Element Method. An intrinsic error of some of these methods is detected. Two new transformations are suggested which improve on those currently available.
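The kind of coordinate transformation discussed can be illustrated with a simple endpoint singularity; the substitution below is a textbook example, not one of the paper's proposed transformations. With x = u² (dx = 2u du), the integrand 1/√x becomes regular and standard quadrature recovers the exact value.

```python
import math

def midpoint_quadrature(f, n=200):
    """Composite midpoint rule on (0, 1)."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

# Integrand with an endpoint singularity: integral of 1/sqrt(x) on (0,1) is 2.
naive = midpoint_quadrature(lambda x: 1.0 / math.sqrt(x))

# After the substitution x = u**2 the integrand is smooth (constant 2),
# so the same quadrature rule becomes essentially exact.
transformed = midpoint_quadrature(lambda u: (1.0 / math.sqrt(u * u)) * 2 * u)
```

The naive rule loses several digits to the singular cell nearest the origin, whereas the transformed integral is exact to rounding; singularity-cancelling transformations in BEM exploit the same mechanism.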
Abstract:
Commerce in rural territories should be considered not merely a needed service but a piece of basic infrastructure that affects not only the existing population but also tourism and rural industrialization. Rural areas therefore need not only agriculture but also industry and services in order to achieve a global and balanced development of the countryside and its population. In the work presented in this paper, we formulate the direct relation between population and the endowment of commerce sites within a geographical territory, the "area of commercial interactions". This is the set of nearby towns that gravitate to each other to cover the needs of the populations within the area. The products retailed range from basic products for daily life to products for industry, agriculture and services. The spatial econometric model developed to evaluate the interactions and estimate the parameters is based on the Spatial Error Model, which allows other hidden spatial effects to be considered without directly interfering with the commercial disposition. The data and territory used to test the model correspond to a rural area in the Spanish province of Palencia (NUTS-3 level). The parameters depend on population levels, local per-capita income, local and regional government budgets, and particular spatial restrictions. Interesting results emerge from the model. The most significant is that spatial effects can substitute for a number of commerce sites in towns, given the right spatial distribution of the sites and the towns. This is equivalent to taking the area of commercial interactions, and not the individual towns, as the unit of measurement for this basic infrastructure.
Abstract:
We have developed a portable, low-power computerized electronic system called the High-Accuracy Ultrasonic Vehicle Speed Meter (VUAE). The high accuracy of the measurement obtained makes the VUAE suitable as a reference measurement of the speed of a vehicle travelling on a road; it can therefore be used as a reference to estimate the error of installed commercial kinemometers. The VUAE consists of n (n≥2) pairs of piezoelectric ultrasonic transmitters and receivers, called E-Rult. The transmitters of the n E-Rult pairs generate n ultrasonic barriers, and the receivers capture the echoes when the vehicle crosses the barriers. These echo signals are processed digitally to obtain representative signals. Then, using the cross-correlation technique, the time difference between the echoes captured at each barrier can be estimated with high accuracy. From the times between echoes and the distance between each of the n ultrasonic barriers, a highly accurate estimate of the vehicle's speed is obtained. The VUAE was compared against a speed reference system based on piezoelectric cables.
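The cross-correlation step can be sketched as follows: the delay between the echoes seen at two barriers is the lag of the cross-correlation peak, and the speed follows from the barrier spacing. The sampling rate, barrier gap and synthetic echo below are assumed values, not VUAE data.

```python
import numpy as np

fs = 100_000.0                  # sampling rate, Hz (assumed)
barrier_gap = 0.5               # metres between ultrasonic barriers (assumed)

# Synthetic echo: the second barrier sees the same waveform 1800 samples
# (18 ms) later, i.e. a vehicle at 100 km/h.
rng = np.random.default_rng(0)
echo = rng.standard_normal(2000)
delay_samples = 1800
s1 = np.concatenate([echo, np.zeros(delay_samples)])
s2 = np.concatenate([np.zeros(delay_samples), echo])

# The cross-correlation peak gives the time shift between the echoes.
xc = np.correlate(s2, s1, mode="full")
lag = np.argmax(xc) - (len(s1) - 1)
dt = lag / fs
speed_kmh = barrier_gap / dt * 3.6
```

Because the peak location is robust to additive noise, the correlation-based delay estimate remains accurate even with noisy echoes, which is what makes the method suitable for a reference-grade speed measurement.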
Abstract:
One of the main factors that have shaped the landscape is human impact, whose clearest indicator is the density of settlements in a given geographic region. In this paper we study all the settlements shown on the map of the Kingdom of Valencia in the Geographic Atlas of Spain (AGE) by Tomas Lopez (1788) and their correspondence with current ones. To meet this goal we developed a specific methodology: the systematic study of all settlements present in the historical cartography, which determines which have disappeared and which have been renamed. The materials used were the historical cartography of Tomas Lopez from the AGE, Kingdom of Valencia (1789), sheets 78, 79, 80 and 81; the current mapping of the provinces of Alicante, Valencia, Castellon, Teruel, Tarragona and Cuenca; and ArcGIS v9.3 as the main software. The steps of the methodology are as follows: 1. Check the scale of the maps and analyze the possible use of a spherical earth model. 2. Georeference the maps in a latitude-longitude framework, moving the historical longitude origin to the origin of the modern cartography. 3. Digitize all population settlements. 4. Identify the historical settlements that correspond to current ones. 5. If the maps share orientation and scale, transform the coordinates of the historical settlements by a translation in latitude and longitude equal to the mean offset computed over all points of the old map that have a modern counterpart. 6. Calculate the absolute accuracy of the two maps, i.e. the linear distance between corresponding points. 7. Draw in the GIS the settlements without correspondence, at their transformed coordinates, each with a circle of the sheet's mean error, in order to locate their present position: actual settlements lying within this circle are candidates to be the searched-for settlements.
We analyzed more than 2000 settlements represented in the Atlas of Tomas Lopez for the Kingdom of Valencia (1789), of which almost 14.5% have no correspondence with existing settlements. Regarding the evolution of the rural landscape of the old Kingdom of Valencia, one can say that it has been severely affected by anthropization in the period from 1789 to the present, since 70% of currently existing settlements appeared after Tomas Lopez's cartography of 1789.
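Steps 5 and 6 of the methodology (translation by the mean offset and computation of the positional error) can be sketched as follows, with made-up coordinates; the flat-earth distance approximation and the kilometres-per-degree constant are assumptions of this sketch, not the paper's GIS computation.

```python
import math

def mean_offset(pairs):
    """Mean (lat, lon) displacement between historical and modern
    coordinates of the settlements identified on both maps."""
    dlat = sum(m[0] - h[0] for h, m in pairs) / len(pairs)
    dlon = sum(m[1] - h[1] for h, m in pairs) / len(pairs)
    return dlat, dlon

def positional_error_km(hist, modern, dlat, dlon):
    """Distance between the shifted historical point and its modern
    counterpart, using a flat-earth approximation (fine at this scale)."""
    lat = math.radians(modern[0])
    dy = (modern[0] - hist[0] - dlat) * 111.32           # km per degree of latitude
    dx = (modern[1] - hist[1] - dlon) * 111.32 * math.cos(lat)
    return math.hypot(dx, dy)

# Toy (historical, modern) coordinate pairs in degrees, purely illustrative.
pairs = [((39.10, -0.45), (39.12, -0.41)),
         ((39.50, -0.40), (39.52, -0.36)),
         ((39.90, -0.30), (39.93, -0.27))]
dlat, dlon = mean_offset(pairs)
errors = [positional_error_km(h, m, dlat, dlon) for h, m in pairs]
```

The mean of these residual distances over a sheet gives the radius of the error circle used in step 7 to search for unmatched settlements.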