953 results for Error Analysis
Abstract:
This article evaluates a gesture-based authentication technique for mobile devices. Users create a memorable identifying gesture to serve as their in-air signature. This work analyzes a database of 120 gestures of varying vulnerability, obtaining an Equal Error Rate (EER) of 9.19% when the robustness of gestures is not verified. Most of the errors behind this EER come from very simple and easily forgeable gestures that should be discarded at the enrollment phase. Therefore, an in-air signature robustness verification system using Linear Discriminant Analysis is proposed to infer automatically whether a gesture is secure or not. Different configurations have been tested, obtaining a lowest EER of 4.01% when 45.02% of gestures were discarded, and an optimal compromise, with an EER of 4.82%, when 19.19% of gestures were automatically rejected.
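The EER figures above are threshold-independent summaries of the trade-off between false acceptances and false rejections. As a minimal sketch (the score distributions below are invented, not the paper's matcher), the operating point where the two error rates meet can be estimated from genuine and impostor score sets like this:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the operating point where the false acceptance
    rate (impostors accepted) equals the false rejection rate (genuine
    users rejected). Higher scores are assumed to mean a better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        far = np.mean(impostor >= t)   # false acceptance rate at t
        frr = np.mean(genuine < t)     # false rejection rate at t
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Synthetic score distributions, purely for illustration
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)
impostor = rng.normal(0.5, 0.15, 500)
print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")
```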
Abstract:
The accuracy of Tomás López's historical cartography of the Canary Islands, included in the "Atlas Particular" of the Kingdoms of Spain, Portugal and Adjacent Islands, is analyzed. For this purpose, we propose a methodology based on Geographic Information Systems (GIS): a comparison of the population centres digitized from the historical cartography with the current ones. This study shows that the linear error is small for the smaller islands: Lanzarote, El Hierro, La Palma and La Gomera. On the large islands of Tenerife, Fuerteventura and Gran Canaria, the error is smaller in the central zones but increases towards the coast. This indicates that Tomás López began his cartography from the central zones of each island, accumulating error due to the lack of geodetic references as he moved toward the coast.
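The linear error in such a GIS comparison is the distance between each settlement's position on the georeferenced historical sheet and its modern counterpart. A minimal sketch of that computation, with invented coordinate pairs rather than the study's data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    rlat1, rlat2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(rlat1) * cos(rlat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical matched pairs: (historical lat, lon, modern lat, lon)
pairs = [
    (28.96, -13.55, 28.95, -13.60),   # e.g. a settlement on Lanzarote
    (28.12, -15.43, 28.10, -15.41),   # e.g. a settlement on Gran Canaria
]
errors = [haversine_km(a, b, c, d) for a, b, c, d in pairs]
print("linear errors (km):", [round(e, 2) for e in errors])
print("mean error (km):", round(sum(errors) / len(errors), 2))
```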
Abstract:
We provide a method whereby, given mode and (upper approximation) type information, we can detect procedures and goals that can be guaranteed not to fail (i.e., to produce at least one solution or not terminate). The technique is based on an intuitively very simple notion, that of a (set of) tests "covering" the type of a set of variables. We show that the problem of determining a covering is undecidable in general, and give decidability and complexity results for the Herbrand and linear arithmetic constraint systems. We give sound algorithms for determining covering that are precise and efficient in practice. Based on this information, we show how to identify goals and procedures that can be guaranteed not to fail at runtime. Applications of such non-failure information include programming error detection, program transformations, and parallel execution optimization: avoiding speculative parallelism and estimating lower bounds on the computational costs of goals, which can be used for granularity control. Finally, we report on an implementation of our method and show that better results are obtained than with previously proposed approaches.
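To make the covering notion concrete: for a single integer variable under linear-arithmetic tests, a set of guards covers the type if every integer satisfies at least one of them. The interval sweep below is a toy illustration of this one special case, not the paper's algorithm (which handles Herbrand terms and sets of variables):

```python
# A toy check of the "covering" notion for a single integer variable:
# each test is a half-open interval [lo, hi) of integers, with None
# standing for -infinity / +infinity (e.g. X < 0 is (None, 0) and
# X >= 0 is (0, None)). The tests cover the type int iff their union
# is all of Z, i.e. the goal cannot fail for lack of a matching test.

def covers_int(intervals):
    INF = float("inf")
    ivs = sorted((-INF if lo is None else lo, INF if hi is None else hi)
                 for lo, hi in intervals)
    reach = -INF                  # everything below `reach` is covered
    for lo, hi in ivs:
        if lo > reach:            # gap: some integer below lo passes no test
            return False
        reach = max(reach, hi)
    return reach == INF

print(covers_int([(None, 0), (0, None)]))   # X<0 or X>=0 -> True (covers)
print(covers_int([(None, 0), (1, None)]))   # X<0 or X>0  -> False (misses 0)
```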
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements forming a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible even to guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterparts. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing sensors based on the numerical error, for instance adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error that arises when discretising a partial differential equation, i.e., the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually tells how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented.
Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted by the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
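As an illustration of the τ-estimation idea at the heart of the thesis: injecting a better solution into the coarse discrete operator leaves a residual that approximates the coarse grid's truncation error. The sketch below uses a 1D centred scheme and a manufactured exact solution as stand-ins for the finite-volume CFD setting described above:

```python
import numpy as np

# The exact solution u = sin(pi x) of -u'' = f stands in for a fine-grid
# solution; applying the coarse discrete operator to it exposes the
# coarse-grid truncation error as a nonzero residual.
def residual(u, h, f):
    """Residual of the centred scheme: -(u[i-1] - 2u[i] + u[i+1])/h^2 - f[i]."""
    r = np.zeros_like(u)
    r[1:-1] = -(u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
    return r

x_fine = np.linspace(0.0, 1.0, 129)
u_fine = np.sin(np.pi * x_fine)                 # "fine" (here: exact) solution

x_coarse = x_fine[::2]                          # restriction to the coarse grid
u_coarse = u_fine[::2]
h = x_coarse[1] - x_coarse[0]
f_coarse = np.pi**2 * np.sin(np.pi * x_coarse)  # manufactured source term

tau = residual(u_coarse, h, f_coarse)           # ~ coarse truncation error
print("max |tau|:", np.abs(tau).max())          # O(h^2), driven by u''''
```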
Abstract:
This paper describes a two-part methodology for managing the risk posed by water supply variability to irrigated agriculture. First, an econometric model is used to explain the variation in the production value of irrigated agriculture. The explanatory variables include an index of irrigation water availability (surface storage levels), a price index representative of the crops grown in each geographical unit, and a time variable. The model corrects for autocorrelation and is applied to 16 Spanish provinces representative in terms of irrigated agriculture. In the second part, the fitted models are used for the economic evaluation of drought risk. Inflow variability in the hydrological system servicing each province is used to perform ex-ante evaluations of the economic output of the upcoming irrigation season. The model's error and the probability distribution functions (PDFs) of the reservoirs' storage variations are used to generate Monte Carlo (Latin Hypercube) simulations of agricultural output 7 and 3 months prior to the irrigation season. The results of these simulations illustrate the different risk profiles of each management unit, which depend on farm productivity and on the probability distribution function of water inflow to reservoirs. The potential for ex-ante drought impact assessments is demonstrated. By complementing hydrological models, this method can assist water managers and decision-makers in managing reservoirs.
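A minimal sketch of the simulation step: Latin Hypercube sampling stratifies the probability space of reservoir storage before propagating it through the fitted model. The log-linear model, its coefficients, and the storage distribution below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def latin_hypercube(n, rng):
    """1D Latin Hypercube sample on (0, 1): one uniform draw inside each
    of n equal-probability strata, in random order."""
    return (rng.permutation(n) + rng.random(n)) / n

# Hypothetical fitted model: log(output) = a + b * storage + model error
a, b, sigma_eps = 5.0, 0.8, 0.15            # illustrative coefficients only

n_sims = 10_000
u = latin_hypercube(n_sims, rng)
# Push the stratified uniforms through an assumed PDF of reservoir storage
# variation (a normal here; the paper fits PDFs per hydrological system).
storage = norm.ppf(u, loc=0.6, scale=0.2)
eps = rng.normal(0.0, sigma_eps, n_sims)    # the econometric model's error
output = np.exp(a + b * storage + eps)

print("expected seasonal output:", round(output.mean(), 1))
print("5th-percentile (drought-risk) output:", round(np.quantile(output, 0.05), 1))
```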
Abstract:
We present a novel approach for the detection of severe obstructive sleep apnea (OSA) based on patients' voices, introducing nonlinear measures to describe sustained speech dynamics. Nonlinear features were combined with state-of-the-art speech recognition systems using statistical modeling techniques (Gaussian mixture models, GMMs) over a cepstral parameterization (MFCC) for both continuous and sustained speech. Tests were performed on a database including speech records from both severe OSA and control speakers. A 10% relative reduction in classification error was obtained for sustained speech when combining MFCC-GMM and nonlinear features, and 33% when fusing nonlinear features with both sustained and continuous MFCC-GMM. Accuracy reached 88.5%, allowing the system to be used in early OSA detection. Tests showed that nonlinear features and MFCCs are only lightly correlated on sustained speech, but uncorrelated on continuous speech. The results also suggest the existence of nonlinear effects in OSA patients' voices, which should also be present in continuous speech.
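A minimal sketch of the MFCC-GMM classification stage, assuming scikit-learn and synthetic stand-ins for the cepstral frames (the nonlinear dynamics features and the fusion step are not reproduced here): one GMM is trained per class and a record is scored by its average log-likelihood ratio.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-ins for 13-dimensional MFCC frames of each class
mfcc_osa = rng.normal(0.5, 1.0, size=(2000, 13))
mfcc_ctrl = rng.normal(-0.5, 1.0, size=(2000, 13))

# One GMM per class, trained on the pooled frames of that class
gmm_osa = GaussianMixture(n_components=8, random_state=0).fit(mfcc_osa)
gmm_ctrl = GaussianMixture(n_components=8, random_state=0).fit(mfcc_ctrl)

def llr(frames):
    """Average per-frame log-likelihood ratio, OSA vs control."""
    return gmm_osa.score(frames) - gmm_ctrl.score(frames)

test_record = rng.normal(0.5, 1.0, size=(300, 13))   # unseen OSA-like frames
print("decide severe OSA" if llr(test_record) > 0.0 else "decide control")
```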
Abstract:
In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound of a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error, not globally for the field in the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, of the field radiated by an antenna, or of the Radar Cross Section in given directions can be minimized. The efficiency of the approach is illustrated with several numerical simulations on two-dimensional problem domains. The results include a comparison with the previously developed energy-norm-based hp-adaptivity.
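The goal-oriented principle, stripped to its algebraic core, is the dual-weighted residual: the error in a quantity of interest J is estimated by weighting the residual of the current solution with an adjoint solution whose load represents J. The 1D Poisson sketch below (assuming numpy) illustrates only this mechanism, far from the paper's hp-FEM electromagnetic setting:

```python
import numpy as np

def laplacian(n):
    """Tridiagonal system for -u'' = f on (0,1), u(0)=u(1)=0, n interior nodes."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, np.linspace(h, 1.0 - h, n)

f = lambda x: np.pi**2 * np.sin(np.pi * x)

# Primal solve on a coarse grid, then inject into a refined grid
A_H, x_H = laplacian(15)
u_H = np.linalg.solve(A_H, f(x_H))
A_h, x_h = laplacian(31)
u_Hh = np.interp(x_h, np.r_[0.0, x_H, 1.0], np.r_[0.0, u_H, 0.0])

# Quantity of interest J(u) = u(0.5): the dual load is a discrete delta there
j = np.zeros_like(x_h)
j[np.argmin(np.abs(x_h - 0.5))] = 1.0
z = np.linalg.solve(A_h.T, j)            # adjoint (dual) solution

r = f(x_h) - A_h @ u_Hh                  # fine residual of the coarse solution
eta = np.abs(z * r)                      # nodal indicators of the error in J
print("estimated error in J:", z @ r)
print("refine first near x =", np.sort(x_h[np.argsort(eta)[-3:]]))
```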
Abstract:
Laparoscopic instrument tracking systems are an essential component in image-guided interventions and offer new possibilities for improving and automating objective assessment methods for surgical skills. In this study we present our system design for applying a third-generation optical pose tracker (MicronTracker®) to laparoscopic practice. A technical evaluation of this design is performed in order to analyze its accuracy in computing the position of the laparoscopic instrument tip. Results show a stable fluctuation error over the entire analyzed workspace. The relative position errors are 1.776±1.675 mm, 1.817±1.762 mm, 1.854±1.740 mm, 2.455±2.164 mm, 2.545±2.496 mm, 2.764±2.342 mm, and 2.512±2.493 mm for distances of 50, 100, 150, 200, 250, 300, and 350 mm, respectively. The accumulated distance error increases with the measured distance. The instrument inclination range covered by the system is wide, from 90 to 7.5 degrees. The system shows low positional accuracy for the instrument tip.
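What these figures measure, concretely: for pairs of tip positions a known distance apart, the error is the difference between the tracked distance and the reference one, summarized as mean ± standard deviation per nominal distance. A toy sketch of that bookkeeping (synthetic data, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

def distance_errors(p_tracked, p_reference):
    """Per-pair absolute error between tracked and reference distances."""
    d_trk = np.linalg.norm(np.diff(p_tracked, axis=0), axis=1)
    d_ref = np.linalg.norm(np.diff(p_reference, axis=0), axis=1)
    return np.abs(d_trk - d_ref)

# Synthetic tip positions 50 mm apart, corrupted by ~2 mm tracking noise
p_ref = np.array([[0.0, 0.0, 50.0 * k] for k in range(8)])
p_trk = p_ref + rng.normal(0.0, 2.0, p_ref.shape)
err = distance_errors(p_trk, p_ref)
print(f"relative position error: {err.mean():.3f} +/- {err.std():.3f} mm")
```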
Abstract:
One of the main factors that have shaped the landscape is human impact, and its clearest indicator is the density of settlements in a given geographic region. In this paper we study all the settlements shown on the map of the Kingdom of Valencia in the Geographic Atlas of Spain (AGE) of Tomás López (1788), and their correspondence with the current ones. To meet this goal we have developed a specific methodology: the systematic study of all settlements present in the historical cartography, determining which have disappeared and which have been renamed. The material used comprises the historical cartography of Tomás López, part of the AGE (1789), for the Kingdom of Valencia (1789), sheets 78, 79, 80 and 81; the current mapping of the provinces of Alicante, Valencia, Castellón, Teruel, Tarragona and Cuenca; and, as the main software, ArcGIS v9.3. The steps followed in the methodology are: 1. Check the scale of the maps and analyze the possible use of a spherical earth model. 2. Georeference the maps within a latitude and longitude framework, moving the historical longitude origin to the longitude origin of the modern cartography. 3. Digitize all population settlements or cities. 4. Identify the historical settlements or cities corresponding with current ones. 5. If the maps have the same orientation and scale, replace the coordinate transformation of the historical settlements with a translation in latitude and longitude equal to the mean offset computed over all matched points. 6. Calculate the absolute accuracy of the two maps, i.e. the linear distance between corresponding points. 7. Draw in the GIS the settlements without correspondence, at their positions transformed to current coordinates, each with a circle whose radius is the mean error of the sheet, in order to search for their present location: any current settlements lying within this circle are candidates to be the searched settlement. We analyzed more than 2000 settlements represented in the Atlas of Tomás López for the Kingdom of Valencia (1789), of which almost 14.5% have no correspondence with existing settlements. Regarding the evolution of the rural landscape of the old Kingdom of Valencia, one can say that it has been severely affected by the anthropization undergone from 1789 to the present, since 70% of the settlements existing today appeared after Tomás López's cartography of 1789.
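Steps 5-7 of this methodology reduce to a mean-offset registration followed by a residual-based search radius. A sketch with invented coordinates, not the Atlas data:

```python
import numpy as np

# Step 5 as a sketch: once both maps share orientation and scale, register
# the historical settlements onto the modern frame by the mean lat/lon
# offset of all matched pairs; the mean residual then gives the radius of
# the search circle used for unmatched settlements (step 7).
hist = np.array([[39.47, -0.38], [38.35, -0.49], [39.99, -0.04]])
modern = np.array([[39.48, -0.40], [38.34, -0.48], [40.00, -0.03]])

offset = (modern - hist).mean(axis=0)      # mean translation (step 5)
hist_reg = hist + offset                   # registered historical points

residual = np.linalg.norm(hist_reg - modern, axis=1)
radius = residual.mean()                   # mean error -> circle radius (step 7)
print("mean offset (deg):", offset.round(4))
print("search-circle radius (deg):", round(radius, 4))

# An unmatched historical settlement: any modern settlement within
# `radius` of its registered position is a candidate identification.
lost = np.array([39.20, -0.55]) + offset
print("candidate search centre:", lost.round(3))
```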
Abstract:
In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arclength-type continuation methods [Carrera, 1994], which make it possible to capture the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need for small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, the numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
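The step-control idea behind such a convergence study can be caricatured in a scalar damage update: accept a step only if the increment of the internal variable stays below a tolerance, otherwise retry with a smaller step. The exponential softening law and all constants below are generic placeholders, not the paper's soft-tissue model:

```python
import math

def damage(r, r0=1.0, a=0.9):
    """Generic exponential softening law: damage d in [0, 1) grows with
    the strain-like internal variable r once it passes the threshold r0."""
    return 0.0 if r <= r0 else 1.0 - (r0 / r) * math.exp(-a * (r - r0))

def march(strain_path, dd_max=0.02, dt0=0.1):
    """Advance pseudo-time, halving the step whenever the damage increment
    would exceed dd_max (the increment-limiting idea described above)."""
    t, dt, r, d, accepted = 0.0, dt0, 1.0, 0.0, 0
    while t < 1.0:
        r_trial = max(r, strain_path(min(t + dt, 1.0)))   # irreversibility
        d_trial = damage(r_trial)
        if d_trial - d > dd_max and dt > 1e-6:
            dt *= 0.5                                     # step too large: retry
            continue
        t, r, d, accepted = min(t + dt, 1.0), r_trial, d_trial, accepted + 1
        dt = min(2.0 * dt, dt0)                           # cautiously re-enlarge
    return d, accepted

d, n = march(lambda t: 1.0 + 2.0 * t)                     # monotonic loading
print(f"final damage {d:.3f} after {n} accepted steps")
```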
Abstract:
This thesis presents a theoretical analysis of the operation of magnetic nozzles for plasma space propulsion. The study is based on a two-dimensional, two-fluid model of the supersonic expansion of a hot plasma in a divergent magnetic field. The basic model is extended progressively to include the dominant electron convective terms, the plasma-induced magnetic field, multi-temperature electron populations, and the capability to integrate the plasma flow in the far expansion region. The hyperbolic plasma response is integrated accurately and efficiently with the method of characteristic lines. The 2D plasma expansion is characterized parametrically in terms of the ion magnetization strength, the magnetic field geometry, and the initial plasma profile. Acceleration mechanisms are investigated, showing that the ambipolar electric field converts the internal electron energy into directed ion energy. The diamagnetic electron Hall current, which can be distributed in the plasma volume or localized in a thin current sheet at the jet edge, is shown to be central to the operation of the magnetic nozzle. The repelling magnetic force on this current is responsible for the radial confinement and axial acceleration of the plasma, and the magnetic thrust is the reaction to this force on the magnetic coils of the thruster. The plasma response exhibits a gradual inward separation of the ion streamtubes from the magnetic streamtubes, which focuses the jet about the nozzle axis, gives rise to the formation of longitudinal currents, and sets the plasma into rotation. The thrust gain obtained in the magnetic nozzle and the radial plasma losses are evaluated as functions of the design parameters. The downstream plasma detachment from the closed magnetic field lines, required for the propulsive application of the magnetic nozzle, is investigated in detail. Three prevailing detachment theories for magnetic nozzles, relying on plasma resistivity, electron inertia, and the plasma-induced magnetic field, are shown to be inadequate for the propulsive magnetic nozzle, as these mechanisms detach the plume outward, increasing its divergence, rather than focusing it as desired. Instead, plasma detachment is shown to occur essentially due to ion inertia and the gradual demagnetization that takes place downstream, which enable the unbounded inward separation of the ions from the magnetic lines beyond the turning point of the outermost plasma streamline under rather general conditions.
The plasma fraction that remains attached to the field and turns around along the magnetic lines back to the thruster is evaluated and shown to be marginal. The plasma-induced magnetic field is shown to increase the divergence of the nozzle, and hence of the resulting plasma plume in the propulsive case, and to enhance the demagnetization of the central part of the plasma jet, contrary to existing predictions. The increased demagnetization favors the earlier inward separation of the ions from the magnetic field. The local current ambipolarity assumption, common to many existing magnetic nozzle models, is critically discussed and shown to be unsuitable for the study of plasma detachment. A grave mathematical inconsistency in a well-accepted model, related to the acceptance of this assumption, is identified and commented on. The formation and 2D shape of electric double layers in the plasma expansion are studied by including an additional suprathermal electron population in the model. When a double layer forms, its curvature is shown to increase the more peripherally the suprathermal electrons are injected, the lower the magnetic field strength, and the more divergent the magnetic nozzle. The two-electron-temperature plasma is seen to have a greater magnetic-to-total thrust ratio. Notwithstanding, no propulsive advantage of the double layer is found, supporting and reinforcing previous critiques of its proposal as a thrust mechanism. Finally, a general framework of self-similar models of the 2D expansion of an unmagnetized plasma plume into vacuum is presented and discussed. The error associated with the self-similarity assumption is calculated and shown to be small for hypersonic plasma plumes. Three models from the literature are recovered as particular cases of the general framework and compared.
Abstract:
One of the most common processes used to develop an architectural project is the trial-and-error method. The selection of trials is usually approached in two ways: either it is carried out in order to refine towards an optimal position, or it serves to explore new lines of research. In order to look into this, the article presents an analysis of two different design processes for houses developed by trial and error, both reference works in the history of architecture: the Villa Stonborough by Wittgenstein and the Villa Moller by Adolf Loos. Although both belong to the same historical period, they were developed in very different, almost opposed, ways. From their study we attempt to identify the concepts that drove their different modes of production, in order to extrapolate them to other similar cases.
Abstract:
This thesis proposes a comprehensive approach to the monitoring and management of Quality of Experience (QoE) in multimedia delivery services over IP. It addresses the problem of preventing, detecting, measuring, and reacting to QoE degradations under the constraints of a service provider: the solution must scale to a wide IP network delivering individual media streams to thousands of users. The solution proposed for the monitoring is called QuEM (Qualitative Experience Monitoring). It is based on detecting degradations in the network Quality of Service (packet losses, bandwidth drops...) and mapping each degradation event to a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This mapping is based on the analysis of the transport and Network Abstraction Layer information of the coded stream, and allows a good characterization of the most relevant defects observed in this kind of service: screen freezing, macroblocking, audio mutes, video quality drops, delay issues, and service outages. The results have been validated by subjective quality assessment tests. The methodology used for those tests was in turn designed to mimic as closely as possible the viewing conditions of a real user of such services: the impairments to be evaluated are introduced randomly in the middle of a continuous video stream.
Based on the monitoring solution, several applications have been proposed as well: an unequal error protection system which provides higher protection to the parts of the stream that are more critical for the QoE, a solution which applies the same principles to minimize the impact of incomplete segment downloads in HTTP Adaptive Streaming, and a selective scrambling algorithm which encrypts only the most sensitive parts of the media stream. A fast channel change application is also presented, as well as a discussion of how to apply the previous results and concepts in a 3D video scenario.
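The core of the QuEM mapping can be pictured as a small classifier from transport-level events to qualitative defect labels. The event fields and thresholds below are invented for illustration; the thesis derives its rules from the transport and NAL information of the actual streams:

```python
from dataclasses import dataclass

@dataclass
class QosEvent:
    kind: str            # "packet_loss" | "bandwidth_drop" | "outage"
    affected: str        # "I", "P", "B" (video frame type) or "audio"
    duration_ms: int

def qoe_effect(e: QosEvent) -> str:
    """Map a detected QoS degradation to a qualitative QoE defect."""
    if e.kind == "outage":
        return "service interruption"
    if e.affected == "audio":
        return "audio mute"
    if e.kind == "packet_loss":
        if e.affected in ("I", "P"):         # reference frames: the damage
            return "freeze / macroblocking"  # propagates until the next intra
        return "isolated video artifact"
    if e.kind == "bandwidth_drop":
        return "video quality drop" if e.duration_ms > 500 else "no visible effect"
    return "unknown"

for ev in (QosEvent("packet_loss", "I", 40),
           QosEvent("bandwidth_drop", "P", 2000),
           QosEvent("packet_loss", "audio", 60)):
    print(ev.kind, "on", ev.affected, "->", qoe_effect(ev))
```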
Abstract:
Laparoscopic instrument tracking systems are a key element in image-guided interventions, which require high accuracy to be usable in a real surgical scenario. In addition, these systems are a suitable option for the objective assessment of laparoscopic technical skills based on instrument motion analysis. This study presents a new approach that improves the accuracy of a previously presented system, which applies an optical pose tracking system to laparoscopic practice. A design enhancement of the artificial markers placed on the laparoscopic instrument, as well as an improvement of the calibration process, are presented as a means to achieve more accurate results. A technical evaluation has been performed in order to compare the accuracy of the previous design and the new approach. Results show a remarkable improvement in the fluctuation error throughout the measurement platform. Moreover, the accumulated distance error and the inclination error have been improved. The tilt range covered by the system is the same for both approaches, from 90° to 7.5°. The relative position error is better for the new approach, mainly at close distances to the camera system.
Abstract:
Purpose: In this paper we study all the settlements shown on the map of the Province of Madrid, sheet number 1 of the AGE (Atlas Geográfico de España of Tomás López, 1804), and their correspondence with the current ones. This map is divided into two zones: Madrid and Almonacid de Zorita. Method: The steps followed in the methodology are as follows: 1. Georeference the maps within a latitude and longitude framework, moving the historical longitude origin to the longitude origin of the modern cartography. 2. Digitize all population settlements or cities (97 in Madrid and 42 in Almonacid de Zorita). 3. Identify the historical settlements or cities corresponding with current ones. 4. If the maps have the same orientation and scale, replace the coordinate transformation of the historical settlements with a translation in latitude and longitude equal to the mean offset computed over all matched points. 5. Calculate the absolute accuracy of the two maps. 6. Draw the settlement accuracy in the GIS. Result: Nearly all AGE settlements correspond well with current ones: only 27 settlements are lost in Madrid and 2 in Almonacid. The average accuracy is 2.3 and 5.7 km for Madrid and Almonacid de Zorita, respectively. Discussion & Conclusion: The final accuracy map shows that the error is smaller in the middle of the map. This study highlights the great work done by Tomás López in producing this map without fieldwork, and demonstrates the great value of his work in the history of cartography.