24 results for Cost estimate accuracy

at Universidad Politécnica de Madrid


Relevance:

90.00%

Publisher:

Abstract:

Under the present circumstances, in which there is, on the one hand, an excess supply of housing (high-price or second homes) and, on the other, a demand for housing (low-cost and/or social), the property market is paradoxically at a standstill. This research arises from this moment in history, in which housing as a product is under economic debate, not only as a consequence of the deep economic crisis but also for the proper management of resources from the standpoint of efficiency and sustainability. The starting hypothesis is that a construction-cost estimator for owner-developed housing is needed as one of the solutions to habitation in rural Extremadura. To this end, the analysis model chosen is the Owner-Developed House subsidized by the Extremadura Regional Government, within the province of Cáceres. This research aims to establish an accurate mathematical tool that allows developers to determine their investment, contractors their potential profit margin, and financial institutions the real value of the loan guarantee. But the result of greatest social relevance of this research is to provide the Extremadura Regional Government with a simple tool for setting subsidies proportionally. Resources are thereby optimized, an even more pressing matter in times of crisis, since knowing the cost of the works beforehand with reasonable accuracy allows subsidies to be allocated in proportion to the real needs of execution. In fact, certain characteristics that are hard to quantify when setting housing subsidies, such as the influence of the number of family members or provision for disability, would be covered indirectly in the cost estimated with the method proposed here, since they always imply an increase in built and usable floor areas, in facade openings or in the size of wet rooms, and are therefore captured by the model equation. Lastly, the availability of a cost estimator reinforces settlement through owner-developed housing, since it supports decision-making by the individual, subsidized or not. The tool is valid to some extent for any owner-development, a building scheme that offers the least scope for speculation, is the most sustainable, is common throughout Extremadura, and makes the construction sector more efficient by optimizing its economic production process.
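
The abstract does not reproduce the calibrated equation, but the kind of model it describes, construction cost estimated from measurable dwelling features, can be sketched as an ordinary least-squares fit. The predictors below mirror the variables the abstract names (built and usable areas, facade openings, wet rooms); all figures and the helper `estimate_cost` are invented stand-ins, not the thesis model.

```python
import numpy as np

# Hypothetical training data: one row per audited subsidized dwelling.
# Columns: built area (m2), usable area (m2), facade openings (m2),
# wet-room area (m2). Targets: audited construction cost (EUR).
X = np.array([
    [120.0,  98.0, 18.5, 12.0],
    [150.0, 121.0, 22.0, 15.5],
    [ 95.0,  80.0, 14.0,  9.0],
    [135.0, 110.0, 20.0, 13.0],
    [110.0,  92.0, 16.5, 11.0],
    [140.0, 115.0, 21.0, 14.0],
])
y = np.array([78000.0, 97500.0, 62000.0, 88000.0, 72500.0, 91000.0])

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_cost(built, usable, openings, wet_rooms):
    """Predicted construction cost (EUR) for a new dwelling."""
    return coef @ np.array([1.0, built, usable, openings, wet_rooms])

print(f"Estimated cost: {estimate_cost(125.0, 103.0, 19.0, 12.5):,.0f} EUR")
```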

Relevance:

80.00%

Publisher:

Abstract:

ENAGAS is expanding its LNG regasification terminal in the Port of Barcelona (Spain). This document reports the Front End Engineering and Design (FEED) work undertaken for one of the LNG storage tanks to be built within the scope of that expansion. The FEED presented here defines the LNG storage tank in sufficient detail to allow ENAGAS to undertake the tasks preceding project execution, namely: 1. Plan and budget the project execution phase. 2. Request the necessary permits and authorizations from the competent authorities. 3. Invite contractors to bid for the turnkey LNG tank EPC contract. The main contents of the FEED document are as follows: background and basic data, design criteria, description of the LNG tank facilities, structural calculations, LNG tank drawings, definition of equipment and materials, Project Execution Plan (PEP), technical specifications for engineering, procurement and construction, EPC Invitation to Tender (ITT) package, particular technical conditions, execution schedule and investment cost estimate.

Relevance:

80.00%

Publisher:

Abstract:

This thesis is based on the study of the two-body, two-point boundary-value problem, initially developed by Lambert, from whom it takes its name. In the past, Lambert's problem was used to determine orbits from astronomical observations of celestial bodies. Today it is in continuous use in orbit determination, planetary and interplanetary missions, space rendezvous and interception, and even orbit corrections. Given its great importance, its solution and its applications to current space missions were chosen as the subject of this research. The open research field is very wide, so it was necessary to set specific, realistic objectives, within the scope of a thesis, that nevertheless show clearly enough the potential of the results provided in this work and even allow them to be extended to other fields of application. As a result of this analysis, the main aim of the thesis is the development of algorithms for solving Lambert's problem that can be applied very efficiently in the real missions where it appears. In all the developments, special attention has been paid to the efficiency of the required computation compared with currently existing methods, highlighting how to avoid the loss of precision inherent in this type of algorithm and the possibility of applying any iterative method that involves derivatives of any order. Pursuing these objectives, several solutions to Lambert's problem are developed, all based on solving transcendental equations, which lead to the following main contributions of this work:
• A completely different generic way of obtaining the various equations for solving Lambert's problem, by analytical development from scratch starting from the known elementary equations of the conics (geometric and temporal), providing in all of them formulas for computing derivatives of any order.
• A unified view of the most relevant existing equations, showing their equivalence to variants of the equations developed here.
• The derivation of a new equation variant, the major achievement of this thesis, which outperforms all the others in efficiency (both computational cost and accuracy).
• A study of the sensitivity of the solution to variations in the initial data, and of how to apply the results to real cases of trajectory optimization.
• Additionally, from the results it is possible to deduce many properties used in the literature to simplify the problem, in particular the invariance property, which leads to the simplified transformed problem.
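
To illustrate the kind of computation involved, here is a minimal single-revolution Lambert solver in the classical universal-variables formulation: a single transcendental time-of-flight equation in the variable z, driven by Newton iteration. This is one of the existing formulations the thesis unifies, not the new equation variant it derives; the numerical derivative used below is precisely the kind of shortcut that the thesis's analytical derivative formulas of any order are meant to avoid.

```python
import numpy as np

def stumpff_C(z):
    if z > 1e-8:
        return (1 - np.cos(np.sqrt(z))) / z
    if z < -1e-8:
        return (np.cosh(np.sqrt(-z)) - 1) / (-z)
    return 0.5

def stumpff_S(z):
    if z > 1e-8:
        s = np.sqrt(z)
        return (s - np.sin(s)) / s**3
    if z < -1e-8:
        s = np.sqrt(-z)
        return (np.sinh(s) - s) / s**3
    return 1.0 / 6.0

def lambert_universal(r1, r2, dt, mu, prograde=True):
    """Single-revolution Lambert solver in the universal variable z."""
    r1n, r2n = np.linalg.norm(r1), np.linalg.norm(r2)
    dtheta = np.arccos(np.clip(np.dot(r1, r2) / (r1n * r2n), -1.0, 1.0))
    if (np.cross(r1, r2)[2] < 0) == prograde:  # pick the transfer direction
        dtheta = 2 * np.pi - dtheta
    A = np.sin(dtheta) * np.sqrt(r1n * r2n / (1 - np.cos(dtheta)))

    def y(z):
        return r1n + r2n + A * (z * stumpff_S(z) - 1) / np.sqrt(stumpff_C(z))

    def F(z):  # time-of-flight residual, zero at the solution
        return ((y(z) / stumpff_C(z)) ** 1.5 * stumpff_S(z)
                + A * np.sqrt(y(z)) - np.sqrt(mu) * dt)

    z = 0.1  # initial guess; Newton with a central-difference derivative
    for _ in range(60):
        h = 1e-6
        step = F(z) / ((F(z + h) - F(z - h)) / (2 * h))
        z -= step
        if abs(step) < 1e-10:
            break

    yz = y(z)
    f, g, gdot = 1 - yz / r1n, A * np.sqrt(yz / mu), 1 - yz / r2n
    return (r2 - f * r1) / g, (gdot * r2 - r1) / g  # v1, v2

# Example (Earth, mu in km^3/s^2, positions in km, time of flight in s):
mu = 398600.0
r1 = np.array([5000.0, 10000.0, 2100.0])
r2 = np.array([-14600.0, 2500.0, 7000.0])
v1, v2 = lambert_universal(r1, r2, dt=3600.0, mu=mu)
print("v1 =", v1, "km/s")
```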

Relevance:

40.00%

Publisher:

Abstract:

Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators of the Large Hadron Collider, the largest particle accelerator, at CERN, where the positioning requirements and the highly radioactive operating environment are unique. The latter forces the use of long cables, which behave as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. Reliable and precise operation of the collimators is nonetheless critical for the machine, requiring step loss in the motors to be prevented and maintenance to be foreseen in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables are combined with high-frequency pulse-width-modulated control voltages, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the case considered only drive-side data are available, the motor-side signals must be estimated. Modelling the entire cable-and-motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model, plus separate motor-side signal estimators, a combination that is computationally less demanding. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally through implementation in a DSP-based drive. A test bench for assessing its performance when driving an axis of a Large Hadron Collider collimator is presented, together with the results achieved. The proposed method is shown to produce position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without physical sensors. Such estimation algorithms often require a precise model of the motor, but the standard electrical model of hybrid stepper motors is limited when the currents are high enough to saturate the magnetic circuit. New model extensions are proposed to obtain a more precise model of the motor regardless of the current level, while maintaining a low computational cost. A significant improvement in model fit is achieved with these extensions, and their computational performance is compared in order to study the trade-off between model improvement and computational cost. The applicability of the proposed model extensions is demonstrated through their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. A further problem arises from the use of stepper motors: the mechanics of the collimators can wear because of the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles, gentler on the mechanics, but requires position feedback; as already mentioned, the use of sensors in radioactive environments is severely limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors of the LHC collimators, the loss of observability prevents its use. To allow position sensors to be used without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated: closed-loop control is used while the position sensors function correctly, and open-loop control when a sensor fails. A different approach to operating the switched drive over long cables is also presented. Switch-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, owing to the large oscillations in the drive-side current. The design of a stepper motor output filter that solves this problem is therefore proposed: a two-stage filter, one stage dealing with the differential mode and the other with the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive without a long cable; radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
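
The abstract does not give the filter equations, but its backbone, an Extended Kalman Filter fed by estimated rather than measured motor-side signals, can be sketched generically. The class below is a textbook discrete-time EKF; the motor model `f`, `h` and their Jacobians are left as user-supplied callables, since the actual stepper model and the cable-side estimators that produce the measurement vector are specific to the thesis.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Discrete-time EKF:  x[k+1] = f(x, u) + w,   z[k] = h(x) + v."""

    def __init__(self, f, h, F_jac, H_jac, Q, R, x0, P0):
        self.f, self.h = f, h                   # nonlinear model functions
        self.F_jac, self.H_jac = F_jac, H_jac   # their Jacobians
        self.Q, self.R = Q, R                   # process / measurement noise
        self.x, self.P = x0, P0                 # state estimate, covariance

    def predict(self, u):
        F = self.F_jac(self.x, u)
        self.x = self.f(self.x, u)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        # In the scheme described above, z would be the motor-side signals
        # reconstructed by the separate cable estimators, not sensor data.
        H = self.H_jac(self.x)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.h(self.x))
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

Splitting the problem this way keeps the filter's state vector, and hence its cost, limited to the motor model alone, which is what makes the approach feasible on a standard embedded real-time platform.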

Relevance:

30.00%

Publisher:

Abstract:

Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties we focus on those related to execution cost and resource usage, in a wide sense, as they can be linked to QoS characteristics. To attain more accuracy, we formulate execution cost and resource usage as functions of the input data (or appropriate abstractions thereof) and show how these functions can be used to make better-informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on the one hand, synthesizing these functions automatically from the definitions of the different orchestrations taking part in a system and, on the other hand, using them effectively to reduce the overall costs of non-trivial service-based systems featuring sensitivity to data and the possibility of failure. We validate the approach by simulating scenarios that require runtime selection of services and adaptation due to service failure, comparing a number of rebinding strategies, including the use of cost functions.
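
A toy illustration of the idea, with hand-written cost functions standing in for the ones the approach synthesizes automatically from the orchestration definitions: because cost is a function of input-data size rather than a constant, the cheapest binding can change with the data, and rebinding after a failure is a matter of re-evaluating the surviving candidates. The service names and cost shapes are invented.

```python
# Hypothetical per-service cost functions: cost as a function of input size n.
candidates = {
    "svcA": lambda n: 5.0 + 0.20 * n,        # cheap setup, linear in data
    "svcB": lambda n: 40.0 + 0.01 * n**1.2,  # costly setup, flatter growth
}

def rebind(n, failed=()):
    """Pick the cheapest live candidate for the given input size."""
    live = {s: c for s, c in candidates.items() if s not in failed}
    return min(live, key=lambda s: live[s](n))

print(rebind(100))                      # small input  -> svcA
print(rebind(10_000))                   # large input  -> svcB
print(rebind(10_000, failed={"svcB"}))  # failure forces the fallback
```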

Relevance:

30.00%

Publisher:

Abstract:

There is growing international interest in energy saving and sustainability in buildings, with a significant impact on architecture. Thermal inertia is a key parameter for assessing the energy performance of a building under real conditions. This requires replacing the traditional steady-state approach to heat transfer with a dynamic one that analyzes the thermal waves and the oscillating heat flux passing through the building envelope. The parameters defining thermal inertia are thickness, diffusivity and the thermal cycle. In turn, diffusivity is determined by the thermal conductivity, density and specific heat of the material. Of these parameters, thermal conductivity is the most complex, variable and difficult to measure, especially in earth walls, owing to their heterogeneity and hygrothermal complexity. In general, the methods for measuring conductivity or transmittance in walls have drawbacks when applied to a building made of earth: mainly implementation difficulties, high cost, or poor reliability of the results. The Thermal Needle Procedure (TNP) is based on the time evolution of the heat emitted by a line source inserted within a material. This method was chosen because it is practical, low-cost and easy to apply on a large scale, but it has serious problems of reliability and accuracy. This thesis develops a laboratory method based on the TNP for measuring the thermal conductivity of unfired-earth masonry units; its reliability is improved, its uncertainty is analyzed, and it is compared with other reference methods and applied to adobes, compressed earth blocks and stabilized-soil specimens with different proportions of straw. The method will form the basis of a later in situ application. Finally, mathematical models are proposed to improve the accuracy of the device used and to estimate the conductivity of earth walls as a function of their density. With the results obtained, the damping and time lag of thermal waves and the energy storage capacity of the walls are analyzed as functions of their density and moisture content.
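
The physics behind the needle method admits a compact illustration. For a line source dissipating q watts per metre, the late-time temperature rise follows T(t) ≈ (q / 4πk)·ln t + C, so the conductivity k comes from the slope of a linear fit of temperature against ln t. The sketch below runs this fit on synthetic data; the probe power, time window and noise level are invented, and a real measurement would need the corrections for finite probe size and contact resistance that the thesis addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

q = 2.5                                   # W/m, line-source power (assumed)
t = np.linspace(30.0, 300.0, 60)          # s, late-time window of the record
k_true = 0.8                              # W/(m K), used only to fake the data
temp = (q / (4 * np.pi * k_true)) * np.log(t) + 20.0
temp += rng.normal(0.0, 0.01, t.size)     # sensor noise

# T(t) is linear in ln(t) with slope q / (4*pi*k).
slope, _ = np.polyfit(np.log(t), temp, 1)
k_est = q / (4 * np.pi * slope)
print(f"estimated conductivity: {k_est:.3f} W/(m K)")
```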

Relevance:

30.00%

Publisher:

Abstract:

The HELLO protocol, or neighborhood discovery, is essential in wireless ad hoc networks: it sets the rules by which nodes announce their existence and aliveness. In the presence of node mobility, no fixed optimal HELLO frequency or transmission range exists that maintains accurate neighborhood tables while reducing energy consumption and bandwidth occupation. A Turnover based Frequency and transmission Power Adaptation algorithm (TFPA) is therefore presented in this paper. The method enables nodes in mobile networks to dynamically adjust both their HELLO frequency and their transmission range depending on the relative speed. In TFPA, each node monitors its neighborhood table to count new neighbors and calculate the turnover ratio. The relationship between relative speed and turnover ratio is formulated, and the optimal transmission range is derived from a battery consumption model to minimize the overall transmission energy. Building on this analysis, the HELLO frequency is adapted dynamically in conjunction with the transmission range to maintain accurate neighborhood tables while allowing significant energy savings. The algorithm is simulated and compared with other state-of-the-art algorithms. The experimental results show that TFPA achieves high neighborhood accuracy with a low HELLO frequency (at least an 11% average reduction) and the lowest energy consumption. Moreover, TFPA does not require any additional GPS-like device to estimate the relative speed of each node, which reduces the hardware cost.
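
The control loop can be sketched as follows. The turnover ratio, the fraction of the current neighbor table that was not there at the previous HELLO round, acts as a proxy for relative speed. The paper derives the optimal frequency and range analytically from this ratio and a battery model; the thresholds and multiplicative updates below are purely illustrative stand-ins for that derivation.

```python
def turnover(prev_neighbors: set, curr_neighbors: set) -> float:
    """Fraction of the current table that is newly discovered."""
    if not curr_neighbors:
        return 0.0
    return len(curr_neighbors - prev_neighbors) / len(curr_neighbors)

def adapt(hello_hz, tx_range, r, low=0.05, high=0.20, step=1.25):
    """Hypothetical control rule: speed up discovery when turnover is high,
    slow down (saving energy and bandwidth) when the neighborhood is stable."""
    if r > high:            # many new neighbors -> relative speed is high
        hello_hz *= step
        tx_range *= 1.05    # widen the range to keep the table accurate
    elif r < low:           # stable neighborhood -> save energy
        hello_hz /= step
        tx_range *= 0.95
    return hello_hz, tx_range

r = turnover({"a", "b"}, {"a", "c", "d"})   # 2 of 3 entries are new
print(adapt(hello_hz=1.0, tx_range=50.0, r=r))
```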

Relevance:

30.00%

Publisher:

Abstract:

Most forestry applications of airborne laser scanning (ALS) require the integration and simultaneous use of various data sources in pursuit of a variety of objectives. Projects based on remotely sensed data generally consist of progressively upscaling data-fusion stages: from the most detailed information, obtained for a limited area (the field plot), to a more uncertain forest response sensed over a much larger extent (the airborne or satellite swath). All data sources ultimately rely on global navigation satellite systems (GNSS), which are especially error-prone when operating under forest canopies. Additional processing stages, such as orthorectification, may also be affected by vegetation, deteriorating the accuracy of the reference coordinates of optical imagery. These errors introduce noise into the models, as the predictors are displaced from the true position of their response variable. The degree to which forest estimations are affected depends on the spatial dispersion of the variables involved and on the scale used in each case. This thesis reviews the sources of positioning error that may affect the different inputs involved in an ALS-assisted forest inventory project and how the properties of the forest canopy itself affect their magnitude, recommending methods for reducing them. It also discusses the most appropriate ways to measure accuracy and precision in each case, and how positioning errors actually affect the quality of the estimations, with a view to cost-efficient planning of data acquisition. The final optimization of GNSS positioning and of the optical sensor's radiometry revealed the importance of the latter in predicting the relative density of a monospecific Pinus sylvestris L. forest.

Relevance:

30.00%

Publisher:

Abstract:

Structural health monitoring (SHM) systems have excellent potential to improve the regular operation and maintenance of structures. Wireless networks (WNs) have been used to avoid the high cost of traditional generic wired systems. The most important limitations of wireless SHM systems are time-synchronization accuracy, scalability, and reliability. A complete wireless system for structural identification under environmental load was designed, implemented, deployed, and tested on three different real bridges. Our contribution ranges from the hardware to the graphical front end, with the goal of avoiding the main limitations of WNs for SHM, particularly with regard to reliability, scalability, and synchronization. We reduce spatial jitter to 125 ns, far below the 120 μs required for high-precision acquisition systems and much better than the 10 μs of current solutions, without adding complexity. The system scales to a large number of nodes, allowing dense sensor coverage of real-world structures, limited only by the compromise between measurement length and the time required to obtain the final result. The system addresses the myriad problems encountered in a real deployment under difficult conditions, rather than in a simulation or a laboratory test bed.

Relevance:

30.00%

Publisher:

Abstract:

Accuracy in liquid-hydrocarbon custody transfer is mandatory because of its great economic impact. By far the most accurate meter is the positive displacement (PD) meter. Increasing that accuracy may adversely affect the cost of the custody transfer unless simple models are developed to lower it, which is the purpose of this work. A PD meter consists of a rotating chamber of fixed volume; a pulse is counted for each turn, so the measured volume is the number of pulses times the chamber volume. This does not coincide with the real volume, so corrections have to be made, all of which are grouped into a meter factor. Foremost among these corrections is the slippage flow. By solving the Navier-Stokes equations one can find an analytical expression for this flow, but applying the slippage correction directly is neither easy nor cheap; we have therefore built a simple model in which slippage is regarded as a single parameter with the dimension of time. The model has been tested on several PD meters. In our experiments, the meter factor grows with temperature at a constant rate of 8×10⁻⁵ °C⁻¹.
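
A numerical sketch of the correction chain just described. The decomposition below, a slip term driven by the single time-dimension parameter τ plus the measured linear growth of the meter factor with temperature, is our reading of the abstract, not the authors' exact formulation, and every figure is invented.

```python
# All figures are invented for illustration.
PULSES = 125_000          # pulses counted during the transfer
V_CHAMBER = 0.01          # m3 swept per chamber turn (one pulse)
DURATION = 1800.0         # s, duration of the transfer
TAU = 1e-5                # s, single slippage parameter (dimension of time)
ALPHA = 8e-5              # 1/degC, reported growth of the meter factor
T_REF, T = 15.0, 22.0     # degC, reference and operating temperatures

v_indicated = PULSES * V_CHAMBER            # pulses x chamber volume
q_mean = v_indicated / DURATION             # indicated mean flow, m3/s
slip_factor = 1 + TAU * q_mean / V_CHAMBER  # slip volume per turn = TAU * q
meter_factor = slip_factor * (1 + ALPHA * (T - T_REF))
print(f"meter factor: {meter_factor:.6f}")
print(f"corrected volume: {meter_factor * v_indicated:.2f} m3")
```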

Relevance:

30.00%

Publisher:

Abstract:

The final aim of the research in this doctoral thesis is the estimation of the total ice volume of the more than 1600 glaciers of Svalbard, in the Arctic, and thus of their potential contribution to sea-level rise under a global-warming scenario. The most accurate calculations of glacier volume are those based on ice thicknesses measured with ground-penetrating radar (GPR). However, such measurements are not viable for very large sets of glaciers, owing to their cost, logistic difficulties and time requirements, especially in polar or mountain regions. By contrast, the calculation of glacier areas from satellite images is perfectly viable at global and regional scales, so volume-area scaling relationships are the most suitable tool for determining glacier volumes at those scales, as done for Svalbard in this thesis. As part of the work, we compiled an inventory of the radio-echo-sounded glaciers of Svalbard and calculated the ice volume of more than 80 glacier basins from GPR data. These volumes were used to calibrate the volume-area relationships derived in this dissertation. The GPR data were obtained during fieldwork campaigns carried out by international teams, often led by the Group of Numerical Simulation in Science and Engineering of the Technical University of Madrid, to which the PhD candidate and her supervisors belong. Furthermore, we developed a methodology for estimating the error in the volume calculation, which includes a novel technique for computing the interpolation error for data sets of the kind produced by GPR profiling, which show very characteristic spatial patterns but very irregular data density.
We derived scaling relationships specific to Svalbard glaciers, exploring the sensitivity of the scaling parameters to different glacier morphologies and incorporating new variables. In particular, we carried out experiments to verify whether scaling relationships obtained by characterizing individual glaciers by size, slope or shape imply significant differences in the estimated volume of the total population of Svalbard glaciers, and whether this partitioning implies any noticeable pattern in the scaling-relationship parameters. Our results indicate that, for a fixed value of the multiplicative factor of the scaling relationship, the exponent of the area in the volume-area relationship decreases as slope and shape factor increase, whereas size-based classifications do not reveal any clear trend. This means that steeper and cirque-type glaciers are less sensitive to changes in area. Moreover, the volumes of the total population of Svalbard glaciers calculated with partitioning into subgroups by size and slope are 1-4% smaller than those obtained considering all glaciers without partitioning, whereas the volumes calculated with partitioning by shape are 3-5% larger. We also carried out multivariate experiments to predict the volume of Svalbard glaciers optimally from a combination of different predictors. Our results show that a simple power-type V-A model explains 98.6% of the variance. Only the predictor glacier length is statistically significant when used in addition to glacier area, although the coefficient of determination decreases compared with the simpler V-A model. The predictor elevation range provides no additional information when used in addition to glacier area. Our estimates of the volume of the entire population of Svalbard glaciers, using the different scaling relationships derived in this thesis, range between 6890 and 8106 km3, with relative errors of the order of 6.6-8.1%. The average of our estimates, which can be taken as our best estimate of the volume, is 7504 km3. In terms of sea-level equivalent (SLE), our estimates correspond to a potential sea-level rise of 17-20 mm SLE, averaging 19 ± 2 mm SLE, where the error corresponds to the relative error in volume quoted above. For comparison, estimates using the V-A scaling relations found in the literature range between 13 and 26 mm SLE, averaging 20 ± 2 mm SLE, where the error represents the standard deviation of the different estimates.
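
The scaling workflow is easy to sketch: calibrate V = c·Aᵞ on the GPR-derived volumes by linear regression in log-log space, then apply the fitted law to the area inventory and sum. The numbers below are fabricated; the thesis calibrates on the more than 80 radio-echo-sounded basins and applies the result to the full Svalbard inventory.

```python
import numpy as np

# Hypothetical calibration set: glacier areas (km2) and GPR-derived
# volumes (km3) standing in for the radio-echo-sounded basins.
A = np.array([1.2, 3.5, 8.0, 15.0, 42.0, 110.0, 260.0])
V = np.array([0.05, 0.21, 0.62, 1.4, 5.3, 17.0, 52.0])

# The power law V = c * A**gamma is linear in log space:
# log V = log c + gamma * log A.
gamma, logc = np.polyfit(np.log(A), np.log(V), 1)
c = np.exp(logc)
print(f"V ~ {c:.4f} * A^{gamma:.3f}")

# Regional upscaling: apply the calibrated law to the (fake) inventory
# of glacier areas and sum the individual volumes.
inventory_areas = np.array([0.5, 2.0, 6.0, 20.0, 75.0, 300.0])
total_volume = np.sum(c * inventory_areas**gamma)
print(f"total volume estimate: {total_volume:.1f} km3")
```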

Relevance:

30.00%

Publisher:

Abstract:

In the last few years, technical debt has been used as a useful means of making visible the intrinsic cost of internal software-quality weaknesses, by quantifying this cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a period of time, of not eliminating it. Previous work on technical debt has mainly focused on estimating principal and interest and on performing a cost-benefit analysis, which makes it possible to determine whether removing technical debt is profitable and to prioritize which debt-incurring items should be fixed first. In these previous works, however, technical debt is flat over time, whereas introducing new factors into the estimation may produce non-flat models that yield more accurate predictions. These factors should be used to estimate principal and interest and to perform the related cost-benefit analysis. In this paper we take a step forward by introducing uncertainty about the interest, together with the time-frame factor, so that a number of possible future scenarios can be depicted. Estimates obtained without considering the possible evolution of the interest over time may be less accurate, as they assume simplistic scenarios without changes.
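
A minimal sketch of the scenario idea: principal is a one-off cost, while interest recurs and is uncertain, so rolling out several interest trajectories over a time frame shows how a non-flat interest model can flip the cost-benefit verdict. All figures and the decision rule are invented for illustration.

```python
# Principal: one-off hours to remove the debt item. Interest: recurring
# hours per release, under three assumed evolution scenarios.
principal = 40.0
horizon = 8                       # number of releases considered
scenarios = {
    "optimistic":  [1, 1, 1, 1, 2, 2, 2, 2],
    "flat":        [3] * 8,       # the constant-interest model of prior work
    "pessimistic": [2, 3, 4, 5, 6, 8, 10, 12],
}

for name, interest in scenarios.items():
    cumulative = sum(interest[:horizon])
    verdict = "fix now" if cumulative > principal else "defer"
    print(f"{name:>12}: cumulative interest {cumulative:>3} h -> {verdict}")
```

With a flat interest of 3 hours the debt looks deferrable over this horizon, while the pessimistic growth path accumulates 50 hours and makes immediate removal profitable, which is exactly the kind of difference a non-flat model exposes.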

Relevance:

30.00%

Publisher:

Abstract:

The assessment of glacier thickness is one of the most widespread applications of radioglaciology and is the basis for estimating glacier volume. The accuracy of the ice-thickness measurements, the distribution of the profiles over the glacier, and the accuracy of the delineation of the glacier boundary are the most important factors determining the error in the evaluation of glacier volume. The aim of this study is to obtain an accurate estimate of the error incurred in estimating glacier volume from GPR-retrieved ice-thickness data.

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the modelling, analysis and optimization of plane steel building frames with regard to the ultimate and serviceability limit states. The general objective is to present an ordered sequential technique of discrete optimization for obtaining the minimum cost of plane steel building frames, taking into account the EC-3 specifications and incorporating semi-rigid joints and non-prismatic elements into the design process, and to assess their degree of influence on the final design. The aim is to draw practical conclusions that are useful and simple to apply in steel-structure projects. The volume of technical and scientific publications on the structural response of steel frames is immense, so an intensive effort was made to compile the current state of knowledge and the present research lines and needs. Information was gathered on modern calculation and design methods, on the factors that influence the structural response, and on modelling and optimization techniques, in the light of the guidance that some current design codes offer on the subject.
In this work, a modelling procedure based on the finite element method, implemented in the MatLab environment, was developed; key aspects were included such as second-order behaviour, the verification against instability, and the search for the cost optimum of the structure with respect to the limit states according to the EC-3 specifications. Joint flexibility was also modelled, and its influence on the structural response and on the final weight and cost was analyzed. Several application examples were run, and the validity of the model was checked against results for structures already analyzed in well-known technical references. Conclusions were drawn on the modelling and analysis process and on the effect of joint flexibility on the structural response, with the purpose of providing useful guidance for the early stages of a project. One of the main contributions of this work, in its optimization approach, is the incorporation of a formulation for non-prismatic elements with semi-rigid joints at their ends, for which an elastic stiffness matrix has been derived. Its validity for non-linear analysis was verified by comparing the results with those obtained from another analytically derived matrix available in the literature and with the commercial software SAP2000. Another contribution of this thesis is the development of a cost-optimization method for plane steel building frames that takes into account aspects such as geometric imperfections, the possibility of incorporating non-prismatic elements, and the characterization of semi-rigid joints, assessing the influence of their flexibility on the structural response. Parametric studies were carried out to assess the sensitivity and stability of the solutions obtained, as well as the ranges of validity of the conclusions.
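
The flavour of such a matrix can be illustrated numerically for the prismatic special case (the thesis derives the matrix analytically and extends it to non-prismatic elements). A beam whose ends attach to the joints through rotational springs of stiffness k1, k2 can be assembled as a rigid-jointed element plus two springs, after which the internal beam-end rotations are condensed out; this is a generic textbook construction, not the thesis formulation.

```python
import numpy as np

def semirigid_beam_stiffness(E, I, L, k1, k2):
    """4x4 bending stiffness for DOFs [v1, phi1, v2, phi2] of a prismatic
    beam connected to the joints through rotational springs k1, k2.
    k -> infinity recovers the rigid-jointed matrix; k -> 0 a pinned end."""
    a = E * I / L**3
    # Standard rigid-jointed beam on internal DOFs [v1, th1, v2, th2].
    Kb = a * np.array([
        [ 12.0,   6*L,  -12.0,   6*L],
        [  6*L, 4*L**2,  -6*L, 2*L**2],
        [-12.0,  -6*L,   12.0,  -6*L],
        [  6*L, 2*L**2,  -6*L, 4*L**2],
    ])
    # Global DOF order: [v1, phi1, v2, phi2, th1, th2].
    K = np.zeros((6, 6))
    beam_dofs = [0, 4, 2, 5]
    for i, gi in enumerate(beam_dofs):
        for j, gj in enumerate(beam_dofs):
            K[gi, gj] += Kb[i, j]
    # Rotational springs couple joint rotations to beam-end rotations.
    for joint, internal, k in [(1, 4, k1), (3, 5, k2)]:
        K[joint, joint] += k
        K[internal, internal] += k
        K[joint, internal] -= k
        K[internal, joint] -= k
    # Static condensation of the internal rotations th1, th2.
    e, c = [0, 1, 2, 3], [4, 5]
    Kee, Kec = K[np.ix_(e, e)], K[np.ix_(e, c)]
    Kce, Kcc = K[np.ix_(c, e)], K[np.ix_(c, c)]
    return Kee - Kec @ np.linalg.solve(Kcc, Kce)

print(semirigid_beam_stiffness(E=210e9, I=8.36e-5, L=6.0, k1=5e7, k2=5e7))
```

Setting k1 = k2 to a very large value (say 1e12·EI/L) reproduces the rigid-jointed matrix to machine precision, while k = 0 yields the pinned-end element, a convenient sanity check for any semi-rigid formulation.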

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is the development of a building-cost estimation model whose purpose is to evaluate quickly and precisely the rebuilding costs of historic heritage buildings affected by catastrophic events. Specifically, the study is applied to the monumental buildings owned by the Catholic Church that were affected by the two earthquakes of May 11, 2011 in the town of Lorca. To estimate the initial total replacement cost, a new calculation model is applied which uses, on the one hand, two-dimensional exterior metric parameters and, on the other, three-dimensional interior cubic parameters. Based on all the buildings analyzed, and considering the damage caused by the seismic event, the final reconstruction cost of the building units ruined by the earthquakes can be estimated. The proposed calculation model can also be applied to other emergency scenarios and situations for the quick estimation of the construction costs necessary to rebuild historic heritage buildings whose structural or constructive configuration has been damaged or ruined by catastrophic events.
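
A schematic of the two-family parameterization described above: exterior two-dimensional quantities priced per square metre, the interior volume priced per cubic metre, and the seismic damage applied as a ratio to the replacement cost. The unit rates, quantities and damage weighting are illustrative assumptions, not values from the study.

```python
def replacement_cost_new(facade_area_m2, roof_area_m2, interior_volume_m3,
                         rate_facade=950.0, rate_roof=600.0, rate_volume=310.0):
    """Initial total replacement cost: 2D exterior terms + 3D interior term.
    Unit rates are hypothetical EUR/m2 and EUR/m3 figures."""
    return (facade_area_m2 * rate_facade
            + roof_area_m2 * rate_roof
            + interior_volume_m3 * rate_volume)

def rebuilding_cost(crn, damage_ratio):
    """Final reconstruction cost for the units ruined by the earthquake."""
    return crn * damage_ratio

crn = replacement_cost_new(facade_area_m2=1800, roof_area_m2=950,
                           interior_volume_m3=12500)
print(f"replacement cost new: {crn:,.0f} EUR")
print(f"rebuilding cost at 35% damage: {rebuilding_cost(crn, 0.35):,.0f} EUR")
```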