929 results for C51 - Model Construction and Estimation


Relevance:

100.00%

Publisher:

Abstract:

The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a grid with 400-meter spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to: (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP) (U.S. Army Corps of Engineers, 1999). The target users are biologists and ecologists examining trophic-level responses to hydrodynamic changes in the Everglades. The first objective of this report is to validate the spatially continuous EDEN water-surface model for the Everglades, Florida, developed by Pearlstine et al. (2007), by using an independent field-measured dataset. The second objective is to demonstrate two applications of the EDEN water-surface model: to estimate site-specific ground elevation by using the validated EDEN water-surface model and observed water-depth data, and to create water-depth hydrographs for tree islands. We found no statistically significant differences between model-predicted and field-observed water-stage data in either southern Water Conservation Area (WCA) 3A or WCA 3B. Tree-island elevations were derived by subtracting field water-depth measurements from the predicted EDEN water surface. Water-depth hydrographs were then computed by subtracting tree-island elevations from the EDEN water stage. Overall, the model is reliable, with a root mean square error (RMSE) of 3.31 cm.
By region, the RMSE is 2.49 cm in WCA 3A and 7.77 cm in WCA 3B. This new landscape-scale hydrological model has wide applications for ongoing research and management efforts that are vital to the restoration of the Florida Everglades. The accurate, high-resolution hydrological data generated over broad spatial and temporal scales by the EDEN model provide a previously missing key to understanding the habitat requirements of, and linkages among, native and invasive populations, including fish, wildlife, wading birds, and plants. The EDEN model is a powerful tool that could be adapted for other ecosystem-scale restoration and management programs worldwide.
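The elevation and hydrograph computations described above reduce to simple arithmetic on the EDEN surfaces: site elevation is the modeled water surface minus the field-measured depth, and a depth hydrograph is the daily stage minus that elevation. A minimal sketch, with all station values invented for illustration (not EDEN data):

```python
import math

def tree_island_elevation(eden_stage_cm, observed_depth_cm):
    """Site elevation = modeled water-surface stage minus field-measured depth."""
    return eden_stage_cm - observed_depth_cm

def depth_hydrograph(eden_stages_cm, elevation_cm):
    """Daily water depth = EDEN stage minus the derived ground elevation."""
    return [s - elevation_cm for s in eden_stages_cm]

def rmse(predicted, observed):
    """Root mean square error between model-predicted and observed stages."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Illustrative values (cm above datum / cm of water)
elev = tree_island_elevation(eden_stage_cm=250.0, observed_depth_cm=40.0)  # 210.0
depths = depth_hydrograph([250.0, 247.5, 245.0], elev)                     # [40.0, 37.5, 35.0]
err = rmse([250.0, 247.5], [252.0, 246.0])
```

The same RMSE function, applied per region, would yield the kind of 2.49 cm / 7.77 cm split reported above.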

Relevance:

100.00%

Publisher:

Abstract:

This research examined the extent to which Health Belief Model (HBM) and socioeconomic variables are useful in explaining whether or not more effective contraceptive methods were used among married fecund women intending no additional births. The data source was the 1976 National Survey of Family Growth, conducted under the auspices of the National Center for Health Statistics. Using the HBM as a framework for multivariate analyses, limited support was found (with the available measures) that the HBM components of motivation and perceived efficacy influence the likelihood of more effective contraceptive method use. Support was also found that modifying variables suggested by the HBM can influence the effects of HBM components on the likelihood of more effective method use. Socioeconomic variables were found, using all cases and some subgroups, to have a significant additional influence on the likelihood of use of more effective methods. Limited support was found for the concept that the greater the opportunity costs of an unwanted birth, the greater the likelihood of use of more effective contraceptive methods. This research supports the use of HBM and socioeconomic variables to explain the likelihood of a protective health behavior: the use of more effective contraception when no additional births are intended.
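Multivariate analysis of a binary outcome like "uses an effective method vs. not" is commonly done with logistic regression. The abstract does not state the exact estimator used, so the following is a generic sketch, not the study's code; the toy data and the single predictor (an HBM "motivation" score) are invented:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(effective method use) = sigmoid(b0 + b1*x) by gradient ascent
    on the log-likelihood; x stands in for an HBM component score."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(b0 + b1 * x)  # gradient of the log-likelihood
            g0 += err
            g1 += err * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Toy data: higher motivation score -> more likely to use an effective method
xs = [0, 0, 1, 1, 2, 2, 3, 3]
ys = [0, 0, 0, 1, 1, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
```

A positive fitted slope (b1 > 0) is what "limited support that motivation influences the likelihood of use" would look like in this framing.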

Relevance:

100.00%

Publisher:

Abstract:

The climate of Marine Isotope Stage (MIS) 11, the interglacial roughly 400,000 years ago, is investigated for four time slices: 416, 410, 400, and 394 ka. The overall picture is that MIS 11 was a relatively warm interglacial compared with the preindustrial climate, with Northern Hemisphere (NH) summer temperatures early in MIS 11 (416-410 ka) warmer than preindustrial, though winters were cooler. Later in MIS 11, especially around 400 ka, conditions were cooler in the NH summer, mainly in the high latitudes. Climate changes simulated by the models were mainly driven by insolation changes, with the exception of two local feedbacks that amplify climate changes. Especially prominent are the NH high latitudes, where reductions in sea-ice cover lead to a winter warming early in MIS 11, and the tropics, where monsoon changes lead to stronger climate variations than one would expect on the basis of latitudinal-mean insolation change alone. The results support a northward expansion of trees at the expense of grasses in the high northern latitudes early during MIS 11, especially in northern Asia and North America.

Relevance:

100.00%

Publisher:

Abstract:

This research investigates the spatial market integration of the Chilean wheat market with its most representative international markets by using a vector error correction model (VECM), and examines how a price-support policy, such as a price band, affects it. The international market was characterized by two relevant wheat prices: PAN from Argentina and Hard Red Winter from the United States. The level of spatial market integration, expressed in the error correction term (ECT), supports the conclusion that there is a high degree of integration among these markets, with a variable influence of the price band mechanism mainly related to its estimation methodology. Moreover, this paper shows that Chile can be seen as a price taker, judging by the speed of its adjustment to international shocks, its reactions being faster than those of the United States and Argentina. Finally, the results validate the "Law of One Price", which assumes price equalization across all local markets in the long run.
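The error correction term can be illustrated with a two-step Engle-Granger sketch for a bivariate case: first estimate the long-run relation between the local and world price, then regress the local price change on the lagged disequilibrium (the ECT). This is a simplified stand-in, not the paper's VECM estimation, which would normally be done with a dedicated econometrics package; the synthetic prices below are invented:

```python
import random

def ols(x, y):
    """Simple OLS intercept/slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def engle_granger_ecm(p_local, p_world):
    """Two-step sketch: (1) long-run cointegrating relation, (2) speed of
    adjustment of the local price to last period's disequilibrium (ECT)."""
    a, b = ols(p_world, p_local)
    ect = [pl - (a + b * pw) for pl, pw in zip(p_local, p_world)]
    d_local = [p_local[t] - p_local[t - 1] for t in range(1, len(p_local))]
    _, gamma = ols(ect[:-1], d_local)  # regress delta on lagged ECT
    return b, gamma  # long-run coefficient and adjustment speed

# Synthetic data: local price tracks a random-walk world price, correcting
# half of last period's gap each period (a strongly integrated market)
random.seed(1)
p_world, p_local = [100.0], [100.0]
for _ in range(200):
    p_world.append(p_world[-1] + random.gauss(0, 1))
    gap = p_local[-1] - p_world[-1]
    p_local.append(p_local[-1] - 0.5 * gap + random.gauss(0, 0.5))
b, gamma = engle_granger_ecm(p_local, p_world)
```

A negative gamma is the price-taker signature: the local price corrects toward the international equilibrium, and the larger its magnitude, the faster the adjustment.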

Relevance:

100.00%

Publisher:

Abstract:

Sediment spectral reflectance measurements were generated aboard the JOIDES Resolution during Ocean Drilling Program Leg 162 shipboard operations. The large size of the raw data set (over 1.3 gigabytes) and limited computer hard disk storage space precluded detailed analysis of the data at sea, although broad-band averages were used as aids in developing splices and determining lithologic boundaries. This data report describes the methods used to collect these data and their shipboard and postcruise processing. These initial results provide the basis for further postcruise research.
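The broad-band averages used at sea reduce to averaging reflectance over wavelength windows of the measured spectra. A minimal sketch; the channel spacing, band limits, and synthetic spectrum are illustrative, not Leg 162 values:

```python
def band_average(wavelengths_nm, reflectance, lo_nm, hi_nm):
    """Mean reflectance over the wavelength band [lo_nm, hi_nm]."""
    vals = [r for w, r in zip(wavelengths_nm, reflectance)
            if lo_nm <= w <= hi_nm]
    if not vals:
        raise ValueError("no channels fall inside the band")
    return sum(vals) / len(vals)

# 10-nm channels from 400 to 690 nm with a synthetic, linearly rising spectrum
wl = list(range(400, 700, 10))
refl = [0.2 + 0.001 * (w - 400) for w in wl]
red = band_average(wl, refl, 650, 690)  # mean over the 650-690 nm window
```

Collapsing each spectrum to a handful of such band means is what makes a multi-gigabyte data set tractable for splicing and lithologic work at sea.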

Relevance:

100.00%

Publisher:

Abstract:

The role of Pre- and Protohistoric anthropogenic land cover changes needs to be quantified (i) to establish a baseline for comparison with current human impact on the environment and (ii) to separate it from naturally occurring changes in our environment. Results are presented from the simple, adaptation-driven, spatially explicit Global Land Use and technological Evolution Simulator (GLUES) for pre-Bronze Age demographic, technological, and economic change. Using scaling parameters from the History Database of the Global Environment, as well as GLUES-simulated population density and subsistence style, the land required for growing crops is estimated. The intrusion of cropland into potentially forested areas is translated into carbon loss due to deforestation with the dynamic global vegetation model VECODE. The land demand in important Prehistoric growth areas - converted from mostly forested areas - led to large-scale regional (country-size) deforestation of up to 11% of the potential forest. In total, 29 Gt of carbon were lost from global forests between 10 000 BC and 2000 BC and replaced by crops; this value is consistent with other estimates of Prehistoric deforestation. The generation of realistic (agri-)cultural development trajectories at a regional resolution is a major strength of GLUES. Most of the pre-Bronze Age deforestation is simulated in a broad farming belt from Central Europe via India to China. Regional carbon losses are, for example, 5 Gt in Europe and the Mediterranean, 6 Gt on the Indian subcontinent, 18 Gt in East and Southeast Asia, and 2.3 Gt in sub-Saharan Africa.
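The translation of cropland intrusion into carbon loss is, at its core, converted area times the difference in carbon density between forest and crops. A unit-keeping sketch; the area and density values are invented for illustration, not GLUES/VECODE output:

```python
def carbon_loss_gt(deforested_area_mkm2, forest_c_density_kg_m2,
                   crop_c_density_kg_m2):
    """Carbon released when forest is replaced by cropland.
    Area in million km^2, carbon densities in kg C / m^2, result in Gt C."""
    delta_kg_m2 = forest_c_density_kg_m2 - crop_c_density_kg_m2
    area_m2 = deforested_area_mkm2 * 1e12   # 1 million km^2 = 1e12 m^2
    return area_m2 * delta_kg_m2 / 1e12     # 1 Gt = 1e12 kg

# Illustrative: 3 million km^2 converted, losing a net 6 kg C/m^2
loss = carbon_loss_gt(3.0, 8.0, 2.0)  # 18.0 Gt C
```

Summing such regional terms is how a global figure like the 29 Gt quoted above is assembled.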

Relevance:

100.00%

Publisher:

Abstract:

The distribution of dissolved aluminium in the West Atlantic Ocean is a mirror image of that of dissolved silicic acid, hinting at intricate interactions between the ocean cycling of Al and Si. The marine biogeochemistry of Al is of interest because of its potential impact on diatom opal remineralisation, and hence on Si availability. Furthermore, the dissolved Al concentration at the ocean surface has been used as a tracer for dust input, dust being the most important source of the bio-essential trace element iron to the ocean. Previously, the dissolved concentration of Al was simulated reasonably well with only a dust source, and with scavenging by adsorption onto settling biogenic debris as the only removal process. Here we explore the impacts on the modelled ocean Al distribution of (i) a sediment source of Al in the Northern Hemisphere (especially north of ~ 40° N), (ii) the imposed velocity field, and (iii) biological incorporation of Al. The sediment source clearly improves the model results, and using a different velocity field shows the importance of advection for the simulated Al distribution. Biological incorporation appears to be a potentially important removal process. However, conclusive independent data to constrain the Al / Si incorporation ratio of growing diatoms are missing. Therefore, this study does not provide a definitive answer to the question of the relative importance of Al removal by incorporation compared to removal by adsorptive scavenging.
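A source-plus-first-order-scavenging balance of the kind described above can be caricatured as a single box: dAl/dt = source - k * Al. The sketch below is a toy zero-dimensional illustration, not the paper's ocean general circulation model, and the rate values are invented:

```python
def simulate_al(source, k_scav, al0=0.0, dt=0.1, steps=1000):
    """Forward-Euler integration of dAl/dt = source - k_scav * Al,
    i.e. a constant dust input balanced by first-order adsorptive
    scavenging onto settling biogenic particles."""
    al = al0
    for _ in range(steps):
        al += dt * (source - k_scav * al)
    return al

# With constant input the concentration relaxes to source / k_scav
steady = simulate_al(source=2.0, k_scav=0.5)  # -> approaches 4.0
```

Adding a second removal pathway (biological incorporation) amounts to a second rate constant in the same balance, which is why its strength trades off so directly against the scavenging rate.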

Relevance:

100.00%

Publisher:

Abstract:

On 12 January 2010, an earthquake hit the city of Port-au-Prince, the capital of Haiti. The earthquake reached a magnitude of Mw 7.0 and its epicenter was located near the town of Léogâne, approximately 25 km west of the capital. The earthquake occurred in the boundary region separating the Caribbean plate from the North American plate. This plate boundary is dominated by left-lateral strike-slip motion and compression, and accommodates about 20 mm/yr of slip, with the Caribbean plate moving eastward with respect to the North American plate (DeMets et al., 2000). Initially, the location and focal mechanism of the earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone (EPGFZ); however, Hayes et al. (2010) combined seismological observations, geologic field data, and space geodetic measurements to show that the rupture process instead involved slip on multiple faults. Moreover, the authors showed that the remaining shallow shear strain will be released in future surface-rupturing earthquakes on the EPGFZ. In December 2010, a Spanish cooperation project financed by the Polytechnic University of Madrid started with a clear objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, and emergency and resource management. One of the tasks of the project was devoted to the vulnerability assessment of the current building stock and the estimation of seismic risk scenarios. The study was carried out by following the capacity spectrum method as implemented in the software SELENA (Molina et al., 2010). The method requires a detailed classification of the building stock into predominant building typologies (according to the materials of the structure and walls, the number of stories, and the age of construction) and the use of the building (residential, commercial, etc.).
Combined with knowledge of the soil characteristics of the city, the simulation of a scenario earthquake then provides the seismic risk scenarios (damaged buildings). The initial results of the study show that one of the largest sources of uncertainty is the difficulty of achieving a precise classification of building typologies, owing to craft construction carried out without any regulations. It is also observed that, although the occurrence of large earthquakes usually helps to decrease the vulnerability of cities, through the collapse of low-quality buildings and the reconstruction of seismically designed ones, in the case of Port-au-Prince the seismic risk in most districts remains high, with very vulnerable areas. The local authorities should therefore direct their efforts towards quality control of new buildings, reinforcement of the existing building stock, the establishment of seismic codes, and the development of emergency planning, also through the education of the population.
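Risk-scenario tools in the SELENA family typically convert the demand on each building typology into damage-state probabilities via lognormal fragility curves. The sketch below shows that last step only; the median displacements and dispersion are invented for illustration and are not Port-au-Prince typology parameters:

```python
import math

def lognormal_cdf(x, median, beta):
    """P(demand >= capacity) for a lognormal fragility curve with the
    given median spectral displacement and log-standard deviation beta."""
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2))))

def damage_state_probs(sd, medians, beta):
    """Probability of each damage state (none .. complete) for one
    typology, given spectral displacement sd and per-state medians."""
    exceed = [lognormal_cdf(sd, m, beta) for m in medians]  # P(>= state)
    probs, prev = [], 1.0
    for e in exceed:
        probs.append(prev - e)  # probability of being exactly in prev state
        prev = e
    probs.append(prev)          # probability of the most severe state
    return probs  # [none, slight, moderate, extensive, complete]

# Illustrative medians (cm) for slight..complete damage of one typology
probs = damage_state_probs(sd=5.0, medians=[2.0, 5.0, 10.0, 20.0], beta=0.6)
```

Multiplying such probabilities by the building count of each typology is what turns a scenario earthquake into a count of damaged buildings per district.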

Relevance:

100.00%

Publisher:

Abstract:

Systems used to localize targets such as goods, individuals, or animals commonly rely on operational means to meet the demands of the final application. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy-harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in, sometimes, extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references such as those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts.
Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are therefore not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge most often addressed aims at attaining high precision levels from complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with only part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The use of harvesting modules under the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; it may also be an advantage in economic terms. There is a second kind of node. These are battery powered (without kinetic energy harvesters) and are therefore dependent on temperature and battery replacements. In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. They are also battery powered, and are used to retrieve information from the network so that it can be presented to users.
The system's operational chain starts at the kinetic-powered nodes broadcasting their own identifiers. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position read from the GPS module of the battery-operated node prior to the detection. The characteristics of the system presented give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules (reindeer movements in our application). Not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors rise even more. Let us recall at this point that such behavior is tied to the aforementioned power-saving policies intended to extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of that detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically powered target and a population of battery-operated nodes with higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter.
The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures for given operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and become more environmentally friendly by diminishing the potential amount of batteries that can be lost. Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as has been explained, one of the topics of this Thesis.
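Of the three trackers compared, the alpha-beta filter is the simplest: a constant-velocity prediction corrected with two fixed gains. A minimal 1-D sketch, with gains and the noise-free measurement sequence chosen for illustration (not values from the Thesis):

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """1-D alpha-beta tracker: predict position with constant velocity,
    then correct position and velocity with fixed gains alpha and beta.
    The measurements play the role of (possibly distorted) references."""
    x, v = measurements[0], 0.0
    estimates = [x]
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # innovation (residual)
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        estimates.append(x)
    return estimates

# Target moving at 1 m/s, sampled once per second
est = alpha_beta_track([float(i) for i in range(12)], dt=1.0)
```

The fixed gains trade responsiveness against noise smoothing, which is why the filter tolerates the distorted reference positions discussed above better than one might expect from so little computation; the Kalman and unscented variants replace the fixed gains with covariance-driven ones.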

Relevance:

100.00%

Publisher:

Abstract:

This paper summarizes the research activities on the behaviour of concrete and concrete structures subjected to blast loading carried out by the Department of Materials Science of the Technical University of Madrid (UPM). These activities comprise the design and construction of a test bench that allows testing up to four planar concrete specimens with a single explosion, the study of the performance of different protection concepts for concrete structures and, finally, the development of a numerical model for the simulation of concrete structural elements subjected to blast. To date, six different types of concrete have been studied, from plain normal-strength concrete to high-strength concrete, including fibre-reinforced concretes with different types of fibres. The numerical model is based on the Cohesive Crack Model approach, and has been developed for the LS-DYNA finite element code through a user-programmed subroutine. Despite its simplicity, the model is able to predict the failure patterns of the tested concrete slabs with a high level of accuracy.
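The Cohesive Crack Model postulates a traction-separation (softening) law on the crack faces: traction decays from the tensile strength at zero opening to zero at a critical opening, and the area under the curve is the fracture energy. A linear-softening sketch; the material values are invented for illustration, not the group's calibration:

```python
def cohesive_traction(w, f_t, w_c):
    """Linear softening law: traction falls from tensile strength f_t
    at zero crack opening to zero at the critical opening w_c."""
    if w < 0:
        raise ValueError("crack opening must be non-negative")
    if w >= w_c:
        return 0.0  # fully open crack transfers no traction
    return f_t * (1.0 - w / w_c)

def fracture_energy(f_t, w_c):
    """Area under the linear softening curve: G_F = f_t * w_c / 2."""
    return 0.5 * f_t * w_c

# Illustrative: f_t = 3 MPa, w_c = 0.1 mm  ->  G_F = 0.15 MPa*mm = 150 J/m^2
g_f = fracture_energy(3.0, 0.1)
```

In an explicit code such as LS-DYNA, a user subroutine would evaluate a law of this kind at each integration point of a cracking element; curved (e.g. exponential) softening laws are common variants.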

Relevance:

100.00%

Publisher:

Abstract:

Floating breakwaters are structures that attenuate wave energy mainly by reflection and turbulence. They offer advantages in terms of construction and ecology, among others; however, their effectiveness is limited, and in practice their use is restricted to areas with mild climatic conditions and low-energy waves. Moreover, ships are the floating structure par excellence, and their use for port and coastal shelter in certain situations could provide the advantages of floating breakwaters while widening the range of waves against which these structures are effective. The purpose of this doctoral thesis is to assess the feasibility of using anchored ships as floating breakwaters for port and coastal protection. To that end, tests on a scaled-down physical model were conducted in a wave flume at the Centre of Port and Coastal Studies (CEPYC), in order to determine the transmission (Ct), reflection (Cr), and dissipation (Cd) coefficients of ships of diverse types and dimensions, under different wave, load, anchoring, and depth conditions. The effectiveness of the ships used in the tests was determined by analyzing these coefficients and their variation with the height and period of the incident waves. In addition, the forces in the anchor chains were recorded to verify the feasibility of the anchoring systems and to provide an estimate of the chain diameter that would be needed in each situation. Subsequently, the results of the tests were applied to two situations of port and coastal protection. The first is the use of ships as a temporary defense during construction phases carried out by maritime means, on the assumption that, acting as floating breakwaters, they can protect the work area and widen the time windows for maritime work. The activities analyzed were dredging, dumping of granular material, and the transport and sinking of large concrete caissons for breakwaters and docks. The second is the use of ships for coastal protection and the formation of salients and tombolos. The transmission coefficients obtained were introduced into analytical formulations that predict the evolution of the coastline under the protection provided by the ship acting as a detached floating breakwater. Finally, the conclusions of the research are presented and new lines of work related to this doctoral thesis are proposed.
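For a breakwater the incident wave energy splits into transmitted, reflected, and dissipated parts, so the three coefficients are linked by the energy balance Ct^2 + Cr^2 + Cd^2 = 1. A small sketch recovering Cd from measured Ct and Cr (the coefficient values below are illustrative, not flume results):

```python
import math

def dissipation_coefficient(ct, cr):
    """Cd from the wave-energy balance Ct^2 + Cr^2 + Cd^2 = 1."""
    rest = 1.0 - ct ** 2 - cr ** 2
    if rest < 0:
        raise ValueError("Ct and Cr imply more energy than was incident")
    return math.sqrt(rest)

# Illustrative flume reading: 60% of the wave height transmitted,
# 50% reflected -> the remaining energy is dissipated
cd = dissipation_coefficient(ct=0.6, cr=0.5)
```

Plotting Ct against incident wave height and period, as described above, is then what shows over which sea states the moored ship is an effective barrier.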

Relevance:

100.00%

Publisher:

Abstract:

El objetivo de la presente investigación es el desarrollo de un modelo de cálculo rápido, eficiente y preciso, para la estimación de los costes finales de construcción, en las fases preliminares del proyecto arquitectónico. Se trata de una herramienta a utilizar durante el proceso de elaboración de estudios previos, anteproyecto y proyecto básico, no siendo por tanto preciso para calcular el “predimensionado de costes” disponer de la total definición grafica y literal del proyecto. Se parte de la hipótesis de que en la aplicación práctica del modelo no se producirán desviaciones superiores al 10 % sobre el coste final de la obra proyectada. Para ello se formulan en el modelo de predimensionado cinco niveles de estimación de costes, de menor a mayor definición conceptual y gráfica del proyecto arquitectónico. Los cinco niveles de cálculo son: dos que toman como referencia los valores “exógenos” de venta de las viviendas (promoción inicial y promoción básica) y tres basados en cálculos de costes “endógenos” de la obra proyectada (estudios previos, anteproyecto y proyecto básico). El primer nivel de estimación de carácter “exógeno” (nivel .1), se calcula en base a la valoración de mercado de la promoción inmobiliaria y a su porcentaje de repercusión de suelo sobre el valor de venta de las viviendas. El quinto nivel de valoración, también de carácter “exógeno” (nivel .5), se calcula a partir del contraste entre el valor externo básico de mercado, los costes de construcción y los gastos de promoción estimados de la obra proyectada. 
Este contraste entre la “repercusión del coste de construcción” y el valor de mercado, supone una innovación respecto a los modelos de predimensionado de costes existentes, como proceso metodológico de verificación y validación extrínseca, de la precisión y validez de las estimaciones resultantes de la aplicación práctica del modelo, que se denomina Pcr.5n (Predimensionado costes de referencia con .5niveles de cálculo según fase de definición proyectual / ideación arquitectónica). Los otros tres niveles de predimensionado de costes de construcción “endógenos”, se estiman mediante cálculos analíticos internos por unidades de obra y cálculos sintéticos por sistemas constructivos y espacios funcionales, lo que se lleva a cabo en las etapas iniciales del proyecto correspondientes a estudios previos (nivel .2), anteproyecto (nivel .3) y proyecto básico (nivel .4). Estos cálculos teóricos internos son finalmente evaluados y validados mediante la aplicación práctica del modelo en obras de edificación residencial, de las que se conocen sus costes reales de liquidación final de obra. Según va evolucionando y se incrementa el nivel de definición y desarrollo del proyecto, desde los estudios previos hasta el proyecto básico, el cálculo se va perfeccionando en su nivel de eficiencia y precisión de la estimación, según la metodología aplicada: [aproximaciones sucesivas en intervalos finitos], siendo la hipótesis básica como anteriormente se ha avanzado, lograr una desviación máxima de una décima parte en el cálculo estimativo del predimensionado del coste real de obra. El cálculo del coste de ejecución material de la obra, se desarrolla en base a parámetros cúbicos funcionales “tridimensionales” del espacio proyectado y parámetros métricos constructivos “bidimensionales” de la envolvente exterior de cubierta/fachada y de la huella del edificio sobre el terreno. 
Los costes funcionales y constructivos se ponderan en cada fase del proceso de cálculo con sus parámetros “temáticos/específicos” de gestión (Pg), proyecto (Pp) y ejecución (Pe) de la concreta obra presupuestada, para finalmente estimar el coste de construcción por contrata, como resultado de incrementar al coste de ejecución material el porcentaje correspondiente al parámetro temático/especifico de la obra proyectada. El modelo de predimensionado de costes de construcción Pcr.5n, será una herramienta de gran interés y utilidad en el ámbito profesional, para la estimación del coste correspondiente al Proyecto Básico previsto en el marco técnico y legal de aplicación. Según el Anejo I del Código Técnico de la Edificación (CTE), es de obligado cumplimiento que el proyecto básico contenga una “Valoración aproximada de la ejecución material de la obra proyectada por capítulos”, es decir , que el Proyecto Básico ha de contener al menos un “presupuesto aproximado”, por capítulos, oficios ó tecnologías. El referido cálculo aproximado del presupuesto en el Proyecto Básico, necesariamente se ha de realizar mediante la técnica del predimensionado de costes, dado que en esta fase del proyecto arquitectónico aún no se dispone de cálculos de estructura, planos de acondicionamiento e instalaciones, ni de la resolución constructiva de la envolvente, por cuanto no se han desarrollado las especificaciones propias del posterior proyecto de ejecución. Esta estimación aproximada del coste de la obra, es sencilla de calcular mediante la aplicación práctica del modelo desarrollado, y ello tanto para estudiantes como para profesionales del sector de la construcción. Como se contiene y justifica en el presente trabajo, la aplicación práctica del modelo para el cálculo de costes en las fases preliminares del proyecto, es rápida y certera, siendo de sencilla aplicación tanto en vivienda unifamiliar (aisladas y pareadas), como en viviendas colectivas (bloques y manzanas). 
También, el modelo es de aplicación en el ámbito de la valoración inmobiliaria, tasaciones, análisis de viabilidad económica de promociones inmobiliarias, estimación de costes de obras terminadas y, en general, cuando no se dispone del proyecto de ejecución y sea preciso calcular los costes de construcción de las obras proyectadas. Además, el modelo puede ser de aplicación para el chequeo de presupuestos calculados por el método analítico tradicional (estado de mediciones pormenorizadas por sus precios unitarios y costes descompuestos), tanto en obras de iniciativa privada como en obras promovidas por las Administraciones Públicas. Por último, como líneas abiertas a futuras investigaciones, el modelo de “predimensionado de costes de referencia con 5 niveles de cálculo” se podría adaptar y aplicar para otros usos y tipologías diferentes a la residencial, como edificios de equipamientos y dotaciones públicas, valoración de edificios históricos, obras de urbanización interior y exterior de parcela, proyectos de parques y jardines, etc. Estas líneas de investigación suponen trabajos paralelos al aquí desarrollado, que a modo de avance parcial se recogen en las comunicaciones presentadas en los congresos internacionales Scieconf/Junio 2013, Rics‐Cobra/Septiembre 2013 y en el IV Congreso nacional de patología en la edificación‐Ucam/Abril 2014. ABSTRACT The aim of this research is to develop a fast, efficient and accurate calculation model to estimate the final costs of construction during the preliminary stages of the architectural project. It is a tool to be used during the preliminary study, drafting and basic project stages. It is therefore not necessary to have the exact graphic definition of the project in order to calculate the cost-scaling. The working hypothesis is that the estimate will deviate by no more than 10% from the final cost of the projected work. 
To that purpose, five levels of cost estimation are formulated in the scaling model, from a lower to a higher conceptual and graphic definition of the architectural project. The five calculation levels are: two that take as their point of reference the “exogenous” values of house sales (initial development and basic development), and three based on the calculation of endogenous costs (preliminary study, drafting and basic project). The first “exogenous” estimation level (level.1) is calculated from the market valuation of the real estate development and the proportion that the cost of land represents within the value of the houses. The fifth valuation level, also an “exogenous” one (level.5), is calculated from the contrast between the basic external market value, the construction costs, and the estimated development costs of the projected work. This contrast between the “repercussion of construction costs” and the market value is an innovation with respect to the existing cost-scaling models, as a methodological process of extrinsic verification and validation of the accuracy and validity of the estimations obtained from the implementation of the model, which is called Pcr.5n (reference cost-scaling with 5 calculation levels according to the stage of project definition / architectural conceptualization). The other three levels of “endogenous” construction cost-scaling are estimated from internal analytical calculations by work units and synthetic calculations by construction systems and functional spaces. This is performed during the initial stages of the project, corresponding to the preliminary study (level.2), drafting (level.3) and basic project (level.4). These theoretical internal calculations are finally evaluated and validated via implementation of the model in residential buildings whose real costs on final settlement of the works are known. 
As the level of definition and development of the project evolves, from preliminary study to basic project, the calculation improves in its efficiency and estimation accuracy, following the applied methodology: [successive approximations at finite intervals]. The basic hypothesis, as stated above, is to achieve a maximum deviation of one tenth in the cost-scaling estimate with respect to the real cost of the works. The cost calculation for the material execution of the works is developed from functional “three-dimensional” cubic parameters of the planned space and constructive “two-dimensional” metric parameters of the exterior roof/facade envelope and the building’s footprint on the plot. The functional and building costs are weighted at every stage of the calculation process with the “thematic/specific” parameters of management (Pg), project (Pp) and execution (Pe) of the work being budgeted, and finally the contract construction cost is estimated by increasing the cost of material execution by the percentage corresponding to the thematic/specific parameter of the projected work. The construction cost-scaling model Pcr.5n will be a useful tool of great interest in the professional field to estimate the cost of the Basic Project as prescribed in the applicable technical and legal framework. According to Annex I of the Technical Building Code (CTE), it is compulsory that the basic project contain an “approximate valuation of the material execution of the projected work, by chapters”, that is, the basic project must contain at least an “approximate estimate” by chapters, trades or technologies. 
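The calculation scheme described above (cubic and metric parameters weighted by the Pg, Pp and Pe percentages, with a target deviation of at most one tenth) can be sketched as follows. All unit costs, percentage values and example figures are hypothetical placeholders; the Thesis derives the actual values from the budgeted works.

```python
# Illustrative sketch of the cost-scaling idea described above.
# Parameter names (Pg, Pp, Pe) come from the text; the concrete
# formulas and coefficient values are hypothetical placeholders.

def material_execution_cost(volume_m3, envelope_m2, footprint_m2,
                            c_volume=95.0, c_envelope=110.0, c_footprint=80.0):
    """Material execution cost from 'three-dimensional' cubic parameters
    of the planned space and 'two-dimensional' metric parameters of the
    envelope and footprint (unit costs are placeholder EUR values)."""
    return (volume_m3 * c_volume
            + envelope_m2 * c_envelope
            + footprint_m2 * c_footprint)

def contract_cost(pem, pg=0.13, pp=0.06, pe=0.06):
    """Contract construction cost: material execution cost increased by
    the thematic/specific management (Pg), project (Pp) and execution
    (Pe) percentages of the budgeted work (example values)."""
    return pem * (1.0 + pg + pp + pe)

def within_tolerance(estimate, final_cost, tol=0.10):
    """Check the model's basic hypothesis: a maximum deviation of one
    tenth between the estimate and the real final cost."""
    return abs(estimate - final_cost) / final_cost <= tol

# Hypothetical single-family house: 900 m3, 620 m2 envelope, 150 m2 footprint.
pem = material_execution_cost(volume_m3=900, envelope_m2=620, footprint_m2=150)
estimate = contract_cost(pem)
ok = within_tolerance(estimate, final_cost=200_000)
```

The point of the sketch is the structure of the calculation, not the numbers: each refinement level of the model would replace the placeholder unit costs with progressively better-informed values.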
This approximate estimate in the Basic Project must be performed through the cost-scaling technique, given that structural calculations, services and installations plans, and the definitive construction details of the envelope are still not available at this stage of the architectural project, insofar as the specifications pertaining to the subsequent execution project have not yet been developed. This approximate estimate of the cost of the works is easy to calculate through the implementation of the given model, both for students and professionals of the building sector. As explained and justified in this work, the implementation of the model for cost-scaling during the preliminary stages is fast and accurate, as well as easy to apply both to single-family houses (detached and semi-detached) and to collective housing (blocks). The model can also be applied in the field of real-estate valuation, official appraisals, analysis of the economic viability of real estate developments, estimation of the cost of finished works and, generally, whenever an execution project is not available and it is necessary to calculate the building costs of the projected works. The model can also be applied to check estimates calculated by the traditional analytical method (itemized measurements with unit prices and cost breakdowns), both in private works and in those promoted by Public Authorities. Finally, as potential lines for future research, the “reference cost-scaling with 5 calculation levels” model could be adapted and applied to uses and typologies other than the residential one, such as service buildings and public facilities, valuation of historical buildings, interior and exterior site development works, park and garden projects, etc. These lines of research run parallel to the one developed here and, by way of a partial preview, are presented in the papers given at the international congresses Scieconf/June 2013, Rics‐Cobra/September 2013 and the IV National Congress on Building Pathology‐Ucam/April 2014.

Resumo:

La última década ha sido testigo de importantes avances en el campo de la tecnología de reconocimiento de voz. Los sistemas comerciales existentes actualmente poseen la capacidad de reconocer habla continua de múltiples locutores, consiguiendo valores aceptables de error, y sin la necesidad de realizar procedimientos explícitos de adaptación. A pesar del buen momento que vive esta tecnología, el reconocimiento de voz dista de ser un problema resuelto. La mayoría de estos sistemas de reconocimiento se ajustan a dominios particulares y su eficacia depende de manera significativa, entre otros muchos aspectos, de la similitud que exista entre el modelo de lenguaje utilizado y la tarea específica para la cual se está empleando. Esta dependencia cobra aún más importancia en aquellos escenarios en los cuales las propiedades estadísticas del lenguaje varían a lo largo del tiempo, como por ejemplo, en dominios de aplicación que involucren habla espontánea y múltiples temáticas. En los últimos años se ha evidenciado un constante esfuerzo por mejorar los sistemas de reconocimiento para tales dominios. Esto se ha hecho, entre otros muchos enfoques, a través de técnicas automáticas de adaptación. Estas técnicas son aplicadas a sistemas ya existentes, dado que exportar el sistema a una nueva tarea o dominio puede requerir tiempo a la vez que resultar costoso. Las técnicas de adaptación requieren fuentes adicionales de información, y en este sentido, el lenguaje hablado puede aportar algunas de ellas. El habla no sólo transmite un mensaje, también transmite información acerca del contexto en el cual se desarrolla la comunicación hablada (e.g. acerca del tema sobre el cual se está hablando). Por tanto, cuando nos comunicamos a través del habla, es posible identificar los elementos del lenguaje que caracterizan el contexto, y al mismo tiempo, rastrear los cambios que ocurren en estos elementos a lo largo del tiempo. 
Esta información podría ser capturada y aprovechada por medio de técnicas de recuperación de información (information retrieval) y de aprendizaje de máquina (machine learning). Esto podría permitirnos, dentro del desarrollo de mejores sistemas automáticos de reconocimiento de voz, mejorar la adaptación de modelos del lenguaje a las condiciones del contexto y, por tanto, robustecer el sistema de reconocimiento en dominios con condiciones variables (tales como variaciones potenciales en el vocabulario, el estilo y la temática). En este sentido, la principal contribución de esta Tesis es la propuesta y evaluación de un marco de contextualización motivado por el análisis temático y basado en la adaptación dinámica y no supervisada de modelos de lenguaje para el robustecimiento de un sistema automático de reconocimiento de voz. Esta adaptación toma como base distintos enfoques de los sistemas mencionados (de recuperación de información y de aprendizaje de máquina) mediante los cuales buscamos identificar las temáticas sobre las cuales se está hablando en una grabación de audio. Dicha identificación, por lo tanto, permite realizar una adaptación del modelo de lenguaje de acuerdo a las condiciones del contexto. El marco de contextualización propuesto se puede dividir en dos sistemas principales: un sistema de identificación de temática y un sistema de adaptación dinámica de modelos de lenguaje. Esta Tesis puede describirse en detalle desde la perspectiva de las contribuciones particulares realizadas en cada uno de los campos que componen el marco propuesto: _ En lo referente al sistema de identificación de temática, nos hemos enfocado en aportar mejoras a las técnicas de pre-procesamiento de documentos, así como en contribuir a la definición de criterios más robustos para la selección de index-terms. 
– La eficiencia de los sistemas basados tanto en técnicas de recuperación de información como en técnicas de aprendizaje de máquina, y específicamente de aquellos sistemas que particularizan en la tarea de identificación de temática, depende, en gran medida, de los mecanismos de preprocesamiento que se aplican a los documentos. Entre las múltiples operaciones que forman parte de un esquema de preprocesamiento, la selección adecuada de los términos de indexado (index-terms) es crucial para establecer relaciones semánticas y conceptuales entre los términos y los documentos. Este proceso también puede verse afectado, o bien por una mala elección de stopwords, o bien por la falta de precisión en la definición de reglas de lematización. En este sentido, en este trabajo comparamos y evaluamos diferentes criterios para el preprocesamiento de los documentos, así como también distintas estrategias para la selección de los index-terms. Esto nos permite no sólo reducir el tamaño de la estructura de indexación, sino también mejorar el proceso de identificación de temática. – Uno de los aspectos más importantes en cuanto al rendimiento de los sistemas de identificación de temática es la asignación de diferentes pesos a los términos de acuerdo a su contribución al contenido del documento. En este trabajo evaluamos y proponemos enfoques alternativos a los esquemas tradicionales de ponderado de términos (tales como tf-idf) que nos permitan mejorar la especificidad de los términos, así como también discriminar mejor las temáticas de los documentos. _ Respecto a la adaptación dinámica de modelos de lenguaje, hemos dividido el proceso de contextualización en varios pasos. – Para la generación de modelos de lenguaje basados en temática, proponemos dos tipos de enfoques: un enfoque supervisado y un enfoque no supervisado. En el primero de ellos nos basamos en las etiquetas de temática que originalmente acompañan a los documentos del corpus que empleamos. 
A partir de estas, agrupamos los documentos que forman parte de la misma temática y generamos modelos de lenguaje a partir de dichos grupos. Sin embargo, uno de los objetivos que se persigue en esta Tesis es evaluar si el uso de estas etiquetas para la generación de modelos es óptimo en términos del rendimiento del reconocedor. Por esta razón, nosotros proponemos un segundo enfoque, un enfoque no supervisado, en el cual el objetivo es agrupar, automáticamente, los documentos en clusters temáticos, basándonos en la similaridad semántica existente entre los documentos. Por medio de enfoques de agrupamiento conseguimos mejorar la cohesión conceptual y semántica en cada uno de los clusters, lo que a su vez nos permitió refinar los modelos de lenguaje basados en temática y mejorar el rendimiento del sistema de reconocimiento. – Desarrollamos diversas estrategias para generar un modelo de lenguaje dependiente del contexto. Nuestro objetivo es que este modelo refleje el contexto semántico del habla, i.e. las temáticas más relevantes que se están discutiendo. Este modelo es generado por medio de la interpolación lineal entre aquellos modelos de lenguaje basados en temática que estén relacionados con las temáticas más relevantes. La estimación de los pesos de interpolación está basada principalmente en el resultado del proceso de identificación de temática. – Finalmente, proponemos una metodología para la adaptación dinámica de un modelo de lenguaje general. El proceso de adaptación tiene en cuenta no sólo al modelo dependiente del contexto sino también a la información entregada por el proceso de identificación de temática. El esquema usado para la adaptación es una interpolación lineal entre el modelo general y el modelo dependiente de contexto. Estudiamos también diferentes enfoques para determinar los pesos de interpolación entre ambos modelos. 
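The two interpolation steps just described (a context-dependent model built as a linear interpolation of topic-based language models, followed by a dynamic adaptation of the general model) can be sketched with toy unigram models. All model contents and weights below are illustrative; in the Thesis the weights are derived from the topic identification scores.

```python
# Toy sketch of the two interpolation steps described above, using
# unigram language models represented as {word: probability} dicts.
# Vocabularies and weights are illustrative placeholders.

def interpolate(models, weights):
    """Linear interpolation: P(w) = sum_i weights[i] * P_i(w)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = set().union(*models)
    return {w: sum(lam * m.get(w, 0.0) for lam, m in zip(weights, models))
            for w in vocab}

# Topic-based unigram models over a tiny shared vocabulary.
lm_politics = {"parliament": 0.5, "budget": 0.4, "model": 0.1}
lm_science  = {"parliament": 0.3, "budget": 0.1, "model": 0.6}
background  = {"parliament": 0.2, "budget": 0.2, "model": 0.6}

# Step 1: context-dependent model from the most relevant topics,
# weighted by (hypothetical) topic-identification scores.
context_lm = interpolate([lm_politics, lm_science], [0.8, 0.2])

# Step 2: dynamic adaptation of the general (background) model.
adapted_lm = interpolate([background, context_lm], [0.4, 0.6])
```

Because each input model is a proper distribution and the weights sum to one, both interpolated models remain proper distributions, which is the property that makes this adaptation scheme safe to plug into a recognizer.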
Una vez definida la base teórica de nuestro marco de contextualización, proponemos su aplicación dentro de un sistema automático de reconocimiento de voz. Para esto, nos enfocamos en dos aspectos: la contextualización de los modelos de lenguaje empleados por el sistema y la incorporación de información semántica en el proceso de adaptación basado en temática. En esta Tesis proponemos un marco experimental basado en una arquitectura de reconocimiento en ‘dos etapas’. En la primera etapa, empleamos sistemas basados en técnicas de recuperación de información y aprendizaje de máquina para identificar las temáticas sobre las cuales se habla en una transcripción de un segmento de audio. Esta transcripción es generada por el sistema de reconocimiento empleando un modelo de lenguaje general. De acuerdo con la relevancia de las temáticas que han sido identificadas, se lleva a cabo la adaptación dinámica del modelo de lenguaje. En la segunda etapa de la arquitectura de reconocimiento, usamos este modelo adaptado para realizar de nuevo el reconocimiento del segmento de audio. Para determinar los beneficios del marco de trabajo propuesto, llevamos a cabo la evaluación de cada uno de los sistemas principales previamente mencionados. Esta evaluación es realizada sobre discursos en el dominio de la política usando la base de datos EPPS (European Parliamentary Plenary Sessions - Sesiones Plenarias del Parlamento Europeo) del proyecto europeo TC-STAR. Analizamos distintas métricas acerca del rendimiento de los sistemas y evaluamos las mejoras propuestas con respecto a los sistemas de referencia. ABSTRACT The last decade has witnessed major advances in speech recognition technology. Today’s commercial systems are able to recognize continuous speech from numerous speakers, with acceptable levels of error and without the need for an explicit adaptation procedure. Despite this progress, speech recognition is far from being a solved problem. 
Most of these systems are adjusted to a particular domain and their efficacy depends significantly, among many other aspects, on the similarity between the language model used and the task that is being addressed. This dependence is even more important in scenarios where the statistical properties of the language fluctuate over time, for example, in application domains involving spontaneous and multi-topic speech. Over the last few years there has been a constant effort to enhance speech recognition systems for such domains. This has been done, among other approaches, by means of automatic adaptation techniques. These techniques are applied to existing systems, especially since exporting the system to a new task or domain may be both time-consuming and expensive. Adaptation techniques require additional sources of information, and the spoken language can provide some of them. It must be considered that speech not only conveys a message; it also provides information on the context in which the spoken communication takes place (e.g. on the subject that is being talked about). Therefore, when we communicate through speech, it is feasible to identify the elements of the language that characterize the context and, at the same time, to track the changes that occur in those elements over time. This information can be extracted and exploited through information retrieval and machine learning techniques. This allows us, within the development of more robust speech recognition systems, to enhance the adaptation of language models to the conditions of the context, thus strengthening the recognition system for domains under changing conditions (such as potential variations in vocabulary, style and topic). 
In this sense, the main contribution of this Thesis is the proposal and evaluation of a framework of topic-motivated contextualization based on the dynamic and unsupervised adaptation of language models for the enhancement of an automatic speech recognition system. This adaptation is based on a combined approach (from the perspective of both the information retrieval and machine learning fields) whereby we identify the topics that are being discussed in an audio recording. The topic identification, therefore, enables the system to perform an adaptation of the language model according to the contextual conditions. The proposed framework can be divided into two major systems: a topic identification system and a dynamic language model adaptation system. This Thesis can be outlined from the perspective of the particular contributions made in each of the fields that compose the proposed framework: _ Regarding the topic identification system, we have focused on the enhancement of the document preprocessing techniques, in addition to contributing to the definition of more robust criteria for the selection of index-terms. – Within both information retrieval and machine learning based approaches, the efficiency of topic identification systems depends, to a large extent, on the preprocessing mechanisms applied to the documents. Among the many operations that make up the preprocessing procedure, an adequate selection of index-terms is critical to establish conceptual and semantic relationships between terms and documents. This process might also be weakened by a poor choice of stopwords or a lack of precision in defining stemming rules. In this regard we compare and evaluate different criteria for preprocessing the documents, as well as for improving the selection of the index-terms. This allows us not only to reduce the size of the indexing structure but also to strengthen the topic identification process. 
– One of the most crucial aspects in relation to the performance of topic identification systems is to assign different weights to different terms depending on their contribution to the content of the document. In this sense we evaluate and propose alternative approaches to traditional weighting schemes (such as tf-idf) that allow us to improve the specificity of terms and to better discriminate the topics related to the documents. _ Regarding the dynamic language model adaptation, we divide the contextualization process into different steps. – We propose supervised and unsupervised approaches for the generation of topic-based language models. The first of them is intended to generate topic-based language models by grouping the documents in the training set according to the original topic labels of the corpus. Nevertheless, a goal of this Thesis is to evaluate whether or not the use of these labels to generate language models is optimal in terms of recognition accuracy. For this reason, we propose a second, unsupervised approach, in which the objective is to group the data in the training set into automatic topic clusters based on the semantic similarity between the documents. By means of clustering approaches we expect to obtain a more cohesive association of the documents that are related by similar concepts, thus improving the coverage of the topic-based language models and enhancing the performance of the recognition system. – We develop various strategies in order to create a context-dependent language model. Our aim is that this model reflects the semantic context of the current utterance, i.e. the most relevant topics that are being discussed. This model is generated by means of a linear interpolation between the topic-based language models related to the most relevant topics. The estimation of the interpolation weights is based mainly on the outcome of the topic identification process. 
– Finally, we propose a methodology for the dynamic adaptation of a background language model. The adaptation process takes into account the context-dependent model as well as the information provided by the topic identification process. The scheme used for the adaptation is a linear interpolation between the background model and the context-dependent one. We also study different approaches to determine the interpolation weights used in this adaptation scheme. Once we have defined the basis of our topic-motivated contextualization framework, we propose its application within an automatic speech recognition system. We focus on two aspects: the contextualization of the language models used by the system, and the incorporation of semantic-related information into a topic-based adaptation process. To achieve this, we propose an experimental framework based on a ‘two-stage’ recognition architecture. In the first stage of the architecture, information retrieval and machine learning techniques are used to identify the topics in a transcription of an audio segment. This transcription is generated by the recognition system using a background language model. According to the confidence on the topics that have been identified, the dynamic language model adaptation is carried out. In the second stage of the recognition architecture, an adapted language model is used to re-decode the utterance. To test the benefits of the proposed framework, we carry out the evaluation of each of the aforementioned major systems. The evaluation is conducted on speeches in the political domain using the EPPS (European Parliamentary Plenary Sessions) database from the European TC-STAR project. We analyse several performance metrics that allow us to compare the improvements of the proposed systems against the baseline ones.
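The preprocessing and term-weighting steps discussed in this abstract (stopword removal, stemming, tf-idf weighting of index-terms) can be illustrated with a minimal, self-contained sketch. The stopword list, the crude suffix-stripping rule and the toy corpus are placeholders, not the resources used in the Thesis.

```python
# Minimal sketch of document preprocessing and classic tf-idf term
# weighting. Stopwords, the naive stemmer and the corpus are toy
# placeholders for illustration only.
import math
from collections import Counter

STOPWORDS = {"the", "a", "of", "on", "in", "and", "is", "are"}

def preprocess(text):
    """Tokenize, drop stopwords and apply a naive plural-stripping rule."""
    tokens = [t.lower() for t in text.split()]
    tokens = [t for t in tokens if t not in STOPWORDS]
    return [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]

def tf_idf(docs):
    """Return one {term: weight} dict per document, tf-idf weighted."""
    toks = [preprocess(d) for d in docs]
    n = len(docs)
    df = Counter(term for doc in toks for term in set(doc))
    weights = []
    for doc in toks:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = ["the parliament debates the budget",
        "the budget deficit grows",
        "speech recognition systems adapt language models"]
w = tf_idf(docs)
```

With this weighting, terms occurring in every document receive zero weight, while topic-specific terms such as "parliament" receive the highest weights, which is exactly the discriminative behavior the topic identification system relies on.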

Resumo:

La presente Tesis analiza y desarrolla metodología específica que permite la caracterización de sistemas de transmisión acústicos basados en el fenómeno del array paramétrico. Este tipo de estructuras es considerado como uno de los sistemas más representativos de la acústica no lineal con amplias posibilidades tecnológicas. Los arrays paramétricos aprovechan la no linealidad del medio aéreo para obtener en recepción señales en el margen sónico a partir de señales ultrasónicas en emisión. Por desgracia, este procedimiento implica que la señal transmitida y la recibida guardan una relación compleja, que incluye una fuerte ecualización así como una distorsión apreciable por el oyente. Este hecho reduce claramente la posibilidad de obtener sistemas acústicos de gran fidelidad. Hasta ahora, los esfuerzos tecnológicos dirigidos al diseño de sistemas comerciales han tratado de paliar esta falta de fidelidad mediante técnicas de preprocesado fuertemente dependientes de los modelos físicos teóricos. Estos están basados en la ecuación de propagación de onda no lineal. En esta Tesis se propone un nuevo enfoque: la obtención de una representación completa del sistema mediante series de Volterra que permita inferir un sistema de compensación computacionalmente ligero y fiable. La dificultad que entraña la correcta extracción de esta representación obliga a desarrollar una metodología completa de identificación adaptada a este tipo de estructuras. Así, a la hora de aplicar métodos de identificación se hace indispensable la determinación de ciertas características iniciales que favorezcan la parametrización del sistema. En esta Tesis se propone una metodología propia que extrae estas condiciones iniciales. 
Con estos datos, nos encontramos en disposición de plantear un sistema completo de identificación no lineal basado en señales pseudoaleatorias, que aumenta la fiabilidad de la descripción del sistema, posibilitando tanto la inferencia de la estructura basada en bloques subyacente, como el diseño de mecanismos de compensación adecuados. A su vez, en este escenario concreto en el que intervienen procesos de modulación, factores como el punto de trabajo o las características físicas del transductor hacen inviables los algoritmos de caracterización habituales, incluido el método de identificación propuesto. Con el fin de eliminar esta problemática se propone una serie de nuevos algoritmos de corrección que permiten la aplicación de la caracterización. Las capacidades de estos nuevos algoritmos se pondrán a prueba sobre un prototipo físico, diseñado a tal efecto. Para ello, se propondrán la metodología y los mecanismos de instrumentación necesarios para llevar a cabo el diseño, la identificación del sistema y su posible corrección, todo ello mediante técnicas de procesado digital previas al sistema de transducción. Los algoritmos se evaluarán en términos de error de modelado a partir de la señal de salida del sistema real frente a la salida sintetizada a partir del modelo estimado. Esta estrategia asegura la posibilidad de aplicar técnicas de compensación, ya que éstas son sensibles a errores de estima en módulo y fase. La calidad del sistema final se evaluará en términos de fase, coloración y distorsión no lineal mediante un test propuesto a lo largo de este discurso, como paso previo a una futura evaluación subjetiva. ABSTRACT This Thesis presents a specific methodology for the characterization of acoustic transmission systems based on the parametric array phenomenon. These structures are well-known representatives of the nonlinear acoustics field and offer broad technological opportunities. 
Parametric arrays exploit the nonlinear behavior of air to obtain, at the receiver’s side, signals in the sonic range from signals emitted in the ultrasonic range. The underlying physical process results in a complex relationship between the transmitted and received signals, which includes both a strong equalization and a distortion appreciable to a human listener. High-fidelity acoustic equipment based on this phenomenon is therefore difficult to design. So far, efforts devoted to this enterprise have focused on fidelity enhancement based on physically informed pre-processing schemes, which derive directly from the nonlinear form of the wave equation. However, only limited enhancement has been achieved. In this Thesis we propose a novel approach: the evaluation of a complete representation of the system through its projection onto the Volterra series, which allows the posterior inference of a computationally light and reliable compensation scheme. The main difficulty in the derivation of such a representation stems from the need for a complete identification methodology suitable for this particular type of structures. For example, whenever identification techniques are involved, we require preliminary estimates of certain parameters that contribute to the correct parameterization of the system. In this Thesis we propose a methodology to derive such initial values from simple measurements. Once this information is made available, a complete nonlinear identification scheme based on pseudorandom signals can be formulated. These signals contribute to the robustness and fidelity of the resulting model, and facilitate both the inference of the underlying structure, which we subdivide into a simple block-oriented construction, and the design of the corresponding compensation structure. In a scenario such as this, where frequency modulations occur, one must control exogenous factors such as the device’s operating point and the physical properties of the transducer. 
These factors may conflict with the principles behind the standard identification procedures, as is the case here. With this idea in mind, the Thesis includes a series of novel correction algorithms that facilitate the application of the characterization results to the system compensation. The proposed algorithms are tested on a prototype that was designed and built for this purpose. The methodology and instrumentation required for its design, the identification of the overall acoustic system and its correction are all based on signal processing techniques, focusing on the system front-end, i.e. prior to transduction. Results are evaluated in terms of modelling error, comparing the output of the real system against the output synthesized from the estimated model. This criterion ensures that compensation techniques may actually be introduced, since these are highly sensitive to estimation errors in both the magnitude and the phase of the signals involved. Finally, the quality of the overall system is evaluated in terms of phase, spectral coloration and nonlinear distortion, by means of a test protocol specifically devised for this Thesis, as a prior step to a future subjective quality evaluation.
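As a minimal numerical sketch of the representation this abstract builds on, the following computes the output of a discrete, second-order Volterra model under pseudorandom excitation. The kernel values are arbitrary placeholders; identifying them from measured input/output data is precisely the task the proposed methodology addresses.

```python
# Sketch of a discrete second-order Volterra model:
#   y[n] = sum_k h1[k]*x[n-k] + sum_{k1,k2} h2[k1][k2]*x[n-k1]*x[n-k2]
# with zero initial conditions (x[m] = 0 for m < 0).
# Kernel values below are arbitrary placeholders.
import random

def volterra2_output(x, h1, h2):
    """Evaluate the second-order Volterra series for input sequence x."""
    m = len(h1)
    def past(n, k):          # x[n-k], zero before the start of the signal
        return x[n - k] if n - k >= 0 else 0.0
    y = []
    for n in range(len(x)):
        lin = sum(h1[k] * past(n, k) for k in range(m))
        quad = sum(h2[k1][k2] * past(n, k1) * past(n, k2)
                   for k1 in range(m) for k2 in range(m))
        y.append(lin + quad)
    return y

# Reproducible pseudorandom excitation and toy kernels.
random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(64)]
h1 = [1.0, 0.5, 0.25]                                   # linear kernel
h2 = [[0.1 if i == j else 0.0 for j in range(3)]        # quadratic kernel
      for i in range(3)]
y = volterra2_output(x, h1, h2)
```

With the quadratic kernel set to zero the model collapses to an ordinary convolution, which is the sanity check one would run before fitting the kernels; the identification and compensation machinery of the Thesis operates on exactly this kind of input/output relationship.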