941 results for Analysis Model


Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

Background. The link between endogenous estrogen, coronary artery disease (CAD), and death in postmenopausal women is uncertain. We analyzed the association between death and blood levels of estrone in postmenopausal women with known CAD or with a high-risk factor score for CAD. Methods. We studied 251 postmenopausal women aged 50-90 years not on estrogen therapy. Fasting blood samples for estrone and heart disease risk factors were collected at baseline. Women were grouped according to their estrone levels (<15 and >=15 pg/mL). Fatal events were recorded after 5.8 ± 1.4 years of follow-up. Results. The Kaplan-Meier survival curve showed a significant trend (P = 0.039) toward greater all-cause mortality in women with low estrone levels (<15 pg/mL). A Cox multivariate regression model adjusted for body mass index, diabetes, dyslipidemia, family history, and estrone showed estrone (OR = 0.45; P = 0.038) to be the only independent variable for all-cause mortality. A multivariate regression model adjusted for age, body mass index, hypertension, diabetes, dyslipidemia, family history, and estrone showed that only age (OR = 1.06; P = 0.017) was an independent predictor of all-cause mortality. Conclusions. Postmenopausal women with known CAD or with a high-risk factor score for CAD and low estrone levels (<15 pg/mL) had increased all-cause mortality.
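For readers unfamiliar with the survival methods named above, here is a minimal sketch of a Kaplan-Meier curve and a covariate-adjusted Cox model in Python using the lifelines package; the data frame, column names and values are invented for illustration and are not the study's dataset.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical follow-up data: one row per woman, with years observed,
# death indicator, estrone group (<15 pg/mL) and one adjustment covariate.
df = pd.DataFrame({
    "years":       [5.2, 6.1, 4.8, 7.0, 5.9, 6.4, 3.5, 6.8],
    "died":        [1,   0,   1,   0,   1,   1,   1,   0],
    "low_estrone": [1,   0,   1,   1,   0,   0,   1,   0],
    "bmi":         [29,  34,  31,  26,  28,  27,  33,  25],
})

km = KaplanMeierFitter()
km.fit(df["years"], event_observed=df["died"])   # overall survival curve
print(km.median_survival_time_)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")  # adjusted hazard ratios
cph.print_summary()
```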

Relevance:

60.00%

Publisher:

Abstract:

In computer systems, specifically in multithreaded, parallel and distributed systems, a deadlock is both a very subtle problem, because it is difficult to prevent during system coding, and a very dangerous one: a deadlocked system easily becomes completely stuck, with consequences ranging from simple annoyances to life-threatening circumstances, with the not negligible scenario of economic losses in between. How, then, can this problem be avoided? Many possible solutions have been studied, proposed and implemented. In this thesis we focus on the detection of deadlocks with a static program analysis technique, i.e. an analysis performed without actually executing the program. To begin, we briefly present the static Deadlock Analysis Model developed for coreABS−− in chapter 1, and we then detail the class-based coreABS−− language in chapter 2. In chapter 3 we lay the foundation for further discussion by analyzing the differences between coreABS−− and ASP, an untyped object-based calculus, so as to show how the Deadlock Analysis could be extended to object-based languages in general. In this regard, we make some hypotheses explicit in chapter 4, first by presenting a possible, unproven type system for ASP, modeled after the Deadlock Analysis Model developed for coreABS−−. We conclude by presenting a simpler hypothesis, which may allow us to circumvent the difficulties that arise from the definition of the "ad-hoc" type system discussed in the preceding chapter.
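As a generic illustration of the circular-wait condition that such analyses rule out (this is not the coreABS−− static analysis itself, which infers deadlocks from the program text without running it), here is a minimal wait-for-graph cycle check in Python; the task names are invented.

```python
# Each task maps to the set of tasks it is currently waiting on.
waits_for = {
    "task_a": {"task_b"},   # a is blocked on a resource held by b
    "task_b": {"task_a"},   # b is blocked on a resource held by a
    "task_c": set(),
}

def has_deadlock(graph):
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:          # back edge: circular wait detected
            return True
        if node in done:
            return False
        visiting.add(node)
        found = any(dfs(nxt) for nxt in graph.get(node, ()))
        visiting.discard(node)
        done.add(node)
        return found

    return any(dfs(n) for n in graph)

print(has_deadlock(waits_for))        # True: task_a and task_b wait on each other
```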

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: to summarize the available evidence on the relationship between risk factors (personal and occupational) and the onset of Carpal Tunnel Syndrome (CTS). METHODS: a systematic review of the literature was conducted on electronic databases, considering case-control and cohort studies. We assessed the reporting quality of the studies with the STROBE checklist. Study-specific estimates were expressed as ORs (95% CI) and combined in a meta-analysis conducted with a random-effects model. The presence of publication bias was assessed by inspecting funnel-plot asymmetry and with Egger's test. RESULTS: 29 studies were selected, of which 19 were included in the meta-analysis: 13 case-control and 6 cohort studies. The meta-analysis showed a significant increase in CTS cases among obese subjects both in case-control studies [OR 2.4 (1.9-3.1); I² = 70.7%] and in cohort studies [OR 2.0 (1.6-2.7); I² = 0%]. Overall heterogeneity was significant (I² = 59.6%). Similar results were obtained for diabetics and for subjects with thyroid disease. Smoking was not associated with CTS in either case-control studies [OR 0.7 (0.4-1.1); I² = 83.2%] or cohort studies [OR 0.8 (0.6-1.2); I² = 45.8%]. Because of the many different assessment methods used, it was not possible to compute a combined estimate of occupational exposures with meta-analytic techniques. The review found that CTS is associated with exposure to vibration, repetitive movements and awkward hand-wrist postures. CONCLUSIONS: The results of the systematic review confirm the evidence of an association between personal risk factors and CTS. Despite the varying quality of the exposure data and the differing effects of the study designs, our results indicate sufficient evidence of a link between occupational risk factors and CTS. Exposure measurement, especially for occupational risk factors, is a necessary goal for future studies.
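As an illustration of the random-effects pooling used above, here is a minimal NumPy sketch of a DerSimonian-Laird meta-analysis of study-level odds ratios; the ORs and confidence intervals are invented, not taken from the reviewed studies.

```python
import numpy as np

# Hypothetical study-level odds ratios and 95% CIs (illustrative only)
or_values = np.array([2.4, 1.9, 2.8, 2.1])
ci_low    = np.array([1.6, 1.2, 1.9, 1.4])
ci_high   = np.array([3.6, 3.0, 4.1, 3.2])

# Work on the log-odds scale; approximate the SE from the 95% CI width
log_or = np.log(or_values)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w_fixed = 1 / se**2

# DerSimonian-Laird estimate of between-study variance (tau^2)
k = len(log_or)
fixed_mean = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_or - fixed_mean) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooled OR and I^2 heterogeneity
w_rand = 1 / (se**2 + tau2)
pooled = np.sum(w_rand * log_or) / np.sum(w_rand)
pooled_se = np.sqrt(1 / np.sum(w_rand))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100

print(f"Pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*pooled_se):.2f}-"
      f"{np.exp(pooled + 1.96*pooled_se):.2f}), I^2 = {i2:.1f}%")
```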

Relevance:

60.00%

Publisher:

Abstract:

Sulfate aerosol plays an important but uncertain role in cloud formation and radiative forcing of the climate, and is also important for acid deposition and human health. The oxidation of SO2 to sulfate is a key reaction in determining the impact of sulfate in the environment through its effect on aerosol size distribution and composition. This thesis presents a laboratory investigation of sulfur isotope fractionation during SO2 oxidation by the most important gas-phase and heterogeneous pathways occurring in the atmosphere. The fractionation factors are then used to examine the role of sulfate formation in cloud processing of aerosol particles during the HCCT campaign in Thuringia, central Germany. The fractionation factor for the oxidation of SO2 by ·OH radicals was measured by reacting SO2 gas, with a known initial isotopic composition, with ·OH radicals generated from the photolysis of water at -25, 0, 19 and 40°C (Chapter 2). The product sulfate and the residual SO2 were collected as BaSO4 and the sulfur isotopic compositions were measured with the Cameca NanoSIMS 50. The measured fractionation factor for 34S/32S during gas-phase oxidation is αOH = (1.0089 ± 0.0007) − ((4 ± 5) × 10⁻⁵)·T (°C). Fractionation during oxidation by the major aqueous pathways was measured by bubbling the SO2 gas through a solution of H2O2.
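As a small worked example of the reported temperature dependence, the sketch below evaluates the measured fractionation factor at a given temperature and expresses the corresponding enrichment in per mil; the helper name and the chosen temperature are illustrative.

```python
def alpha_oh(temp_c):
    """34S/32S fractionation factor for SO2 + OH oxidation, using the central
    values reported in the abstract (uncertainties omitted)."""
    return 1.0089 - 4e-5 * temp_c

# Enrichment of the product sulfate relative to the initial SO2, in per mil,
# at 19 degrees C (small-fraction-reacted limit).
eps_permil = (alpha_oh(19.0) - 1.0) * 1000
print(f"~{eps_permil:.1f} per mil enrichment at 19 C")
```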

Relevance:

60.00%

Publisher:

Abstract:

The last two decades have seen intense scientific and regulatory interest in the health effects of particulate matter (PM). Influential epidemiological studies that characterize chronic exposure of individuals rely on monitoring data that are sparse in space and time, so they often assign the same exposure to participants in large geographic areas and across time. We estimate monthly PM during 1988-2002 in a large spatial domain for use in studying health effects in the Nurses' Health Study. We develop a conceptually simple spatio-temporal model that uses a rich set of covariates. The model is used to estimate concentrations of PM10 for the full time period and PM2.5 for a subset of the period. For the earlier part of the period, 1988-1998, few PM2.5 monitors were operating, so we develop a simple extension to the model that represents PM2.5 conditionally on PM10 model predictions. In the epidemiological analysis, model predictions of PM10 are more strongly associated with health effects than when using simpler approaches to estimate exposure. Our modeling approach supports the application in estimating both fine-scale and large-scale spatial heterogeneity and capturing space-time interaction through the use of monthly-varying spatial surfaces. At the same time, the model is computationally feasible, implementable with standard software, and readily understandable to the scientific audience. Despite simplifying assumptions, the model has good predictive performance and uncertainty characterization.
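As a toy illustration of the conditional step described above (representing PM2.5 given PM10 model predictions plus covariates), here is a simple regression sketch in Python with synthetic data; it is not the paper's spatio-temporal model, which uses monthly-varying spatial surfaces.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic illustration: predict PM2.5 where it is unmonitored by
# conditioning on PM10 model predictions plus a few covariates.
rng = np.random.default_rng(0)
n = 500
pm10_pred = rng.uniform(10, 60, n)        # hypothetical PM10 model predictions
covariates = rng.normal(size=(n, 3))      # e.g. traffic, elevation, season terms
pm25_obs = 0.6 * pm10_pred + covariates @ np.array([1.5, -0.8, 0.5]) \
           + rng.normal(0, 2, n)

X = np.column_stack([pm10_pred, covariates])
model = LinearRegression().fit(X, pm25_obs)
print("Estimated PM2.5 ~ PM10 slope:", round(model.coef_[0], 2))
```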

Relevance:

60.00%

Publisher:

Abstract:

Mixed Reality (MR) aims to link virtual entities with the real world and has many applications such as military and medical domains [JBL+00, NFB07]. In many MR systems and more precisely in augmented scenes, one needs the application to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and validate formally the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements as well as their typical interactions give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on a MR platform. The approach is illustrated first on a small example. A realistic case study is also developed. It is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
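As a loose illustration of the kind of end-to-end timing constraint that the timed-automata models verify formally, here is a toy latency-budget check in Python; the component names, bounds and budget are hypothetical, and the actual verification described above is done on timed automata in UPPAAL.

```python
# Toy latency-budget check for an MR pipeline; numbers are illustrative only.
pipeline = {
    "sensor_acquisition": (5, 12),    # (best-case ms, worst-case ms)
    "tracking":           (3, 8),
    "scene_registration": (4, 10),
    "rendering":          (6, 16),
}
END_TO_END_BUDGET_MS = 50

worst_case = sum(worst for _best, worst in pipeline.values())
print(f"worst-case end-to-end latency: {worst_case} ms")
assert worst_case <= END_TO_END_BUDGET_MS, "latency budget violated"
```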

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the costs and outcomes of rescreening for group B streptococci (GBS) compared to universal treatment of term women with history of GBS colonization in a previous pregnancy. STUDY DESIGN: A decision analysis model was used to compare costs and outcomes. Total cost included the costs of screening, intrapartum antibiotic prophylaxis (IAP), treatment for maternal anaphylaxis and death, evaluation of well infants whose mothers received IAP, and total costs for treatment of term neonatal early onset GBS sepsis. RESULTS: When compared to screening and treating, universal treatment results in more women treated per GBS case prevented (155 versus 67) and prevents more cases of early onset GBS (1732 versus 1700) and neonatal deaths (52 versus 51) at a lower cost per case prevented ($8,805 versus $12,710). CONCLUSION: Universal treatment of term pregnancies with a history of previous GBS colonization is more cost-effective than the strategy of screening and treating based on positive culture results.
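A back-of-the-envelope comparison can be reconstructed from the figures in the abstract by treating cost per case prevented times cases prevented as an approximate total cost per strategy; the sketch below does only that arithmetic and is not the decision-analysis model itself.

```python
# Figures taken from the abstract; the "total" column is a derived
# approximation for illustration, not a reported result.
strategies = {
    "universal treatment": {"cases_prevented": 1732, "cost_per_case": 8_805},
    "rescreen and treat":  {"cases_prevented": 1700, "cost_per_case": 12_710},
}

for name, s in strategies.items():
    total = s["cases_prevented"] * s["cost_per_case"]
    print(f"{name}: ~${total:,.0f} total, "
          f"${s['cost_per_case']:,} per case prevented")
```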

Relevance:

60.00%

Publisher:

Abstract:

Direct measurements of middle-atmospheric wind oscillations with periods between 5 and 50 days in the altitude range between mid-stratosphere (5 hPa) and upper mesosphere (0.02 hPa) have been made using a novel ground-based Doppler wind radiometer. The oscillations were not inferred from measurements of tracers, as the radiometer offers the unique capability of near-continuous horizontal wind profile measurements. Observations from four campaigns at high, mid and low latitudes with an average duration of 10 months have been analyzed. The dominant oscillation has mostly been found to lie in the extra-long period range (20–40 days), while the well-known atmospheric normal modes around 5, 10 and 16 days have also been observed. Comparisons of our results with ECMWF operational analysis model data revealed remarkably good agreement below 0.3 hPa but discrepancies above.
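As a generic illustration of how dominant oscillation periods can be picked out of a wind time series, here is a small periodogram sketch in Python on synthetic daily data; the periods and amplitudes are invented and the analysis is far simpler than that applied to the radiometer measurements.

```python
import numpy as np

# Synthetic daily zonal-wind series containing an extra-long-period
# oscillation and a 16-day normal mode, plus noise.
days = np.arange(0, 300)
wind = (10 * np.sin(2 * np.pi * days / 28)
        + 4 * np.sin(2 * np.pi * days / 16)
        + np.random.default_rng(1).normal(0, 2, days.size))

spectrum = np.abs(np.fft.rfft(wind - wind.mean())) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)      # cycles per day
periods = 1 / freqs[1:]                        # skip the zero frequency
dominant = periods[np.argmax(spectrum[1:])]
print(f"dominant period: ~{dominant:.0f} days")
```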

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a shallow dialogue analysis model, aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, or decision tree classifiers for discourse markers. A rule-based approach is proposed for solving cross-modal references to meeting documents. The methods are trained and evaluated thanks to a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
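Since a maximum-entropy classifier over text features is equivalent to multinomial logistic regression, a minimal dialogue-act tagger can be sketched as below; the utterances, labels and features are invented and much simpler than the trained components described above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy dialogue-act training data (invented).
utterances = ["could you send the report", "I agree with that",
              "let's move to the next item", "what time is the demo"]
acts = ["request", "agreement", "topic-shift", "question"]

# Maximum-entropy classifier = multinomial logistic regression over
# bag-of-words features.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(utterances, acts)
print(clf.predict(["can you share the slides"]))
```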

Relevance:

60.00%

Publisher:

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of these diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research, demonstrating superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exploit their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending two Bayesian statistical methods developed for gene-environment interaction studies to other related types of studies, such as adaptively borrowing historical data. We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, there is a natural hierarchical structure between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which require, respectively, both or at least one of the main effects of interacting factors to be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the considered situations. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models also have the advantage of being able to incorporate useful prior information in the modeling process.
Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases. Our proposed models enforce the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using it to detect true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power in detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in that they balance statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
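To make the 'strong hierarchy' constraint concrete, here is a tiny Python sketch that enumerates candidate models and keeps only those in which a G×E interaction enters together with both of its main effects; the model scoring itself (the Bayesian mixture prior) is omitted and the variable names are illustrative.

```python
from itertools import product

# Strong hierarchy: the interaction "GxE" may enter a candidate model only if
# both of its main effects ("G" and "E") are present. Weak hierarchy would
# require at least one of them instead.
interaction_parents = ("G", "E")

def satisfies_strong_hierarchy(included):
    if "GxE" in included:
        return all(parent in included for parent in interaction_parents)
    return True

candidates = []
for g, e, gxe in product([0, 1], repeat=3):
    model = {name for name, keep in zip(["G", "E", "GxE"], (g, e, gxe)) if keep}
    if satisfies_strong_hierarchy(model):
        candidates.append(sorted(model))

print(candidates)   # models violating the hierarchy are excluded
```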

Relevance:

60.00%

Publisher:

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these links can transfer data at high speed. The concept of distributed systems emerged as systems whose different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability in timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
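As a loose illustration of the two-phase idea described above (reserve resources first, then invoke within a fixed budget), here is a small Python sketch; the class, method names and numbers are hypothetical and do not reflect the thesis middleware's Java API.

```python
class RemoteReference:
    """Toy model of a real-time remote reference with reservation semantics."""

    def __init__(self, endpoint, deadline_ms, max_message_bytes):
        self.endpoint = endpoint
        self.deadline_ms = deadline_ms
        self.max_message_bytes = max_message_bytes
        self.reserved = False

    def reserve(self):
        # Phase 1: admission and allocation (buffers, threads, bandwidth).
        self.reserved = True

    def invoke(self, payload: bytes):
        # Phase 2: the actual call; nothing is allocated here.
        if not self.reserved:
            raise RuntimeError("invoke() before reserve(): timing not guaranteed")
        if len(payload) > self.max_message_bytes:
            raise ValueError("message exceeds negotiated size")
        return (f"sent {len(payload)} bytes to {self.endpoint} "
                f"within a {self.deadline_ms} ms budget")

ref = RemoteReference("node-b/fms", deadline_ms=20, max_message_bytes=4096)
ref.reserve()
print(ref.invoke(b"waypoint-update"))
```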

Relevance:

60.00%

Publisher:

Abstract:

Under the present circumstances, in which there is, on the one hand, an excess in the supply of housing (high-priced or second homes) and, on the other, a demand for housing (low-cost and/or social), the property market is paradoxically at a standstill. This research has come about as a result of this moment in time, in which housing as a product is under economic debate, not only on account of the deep economic crisis but also in the interest of proper management of resources from the point of view of efficiency and sustainability. The starting hypothesis is that a building-cost estimator for owner-developed housing is needed as one of the solutions to housing in rural Extremadura; to this end, the Owner-Developed House subsidized by the Extremadura Regional Government (Junta de Extremadura) in the province of Cáceres has been taken as the analysis model.
This research establishes an accurate mathematical tool to work out the investment for developers, the potential profit margin for contractors and the real value of the loan guarantee for financial institutions. But the result of most social relevance in this research is to provide the Extremadura Regional Government with a simple tool so that it can allocate subsidies proportionally. The resources are thus optimized, an even more vital matter in times of economic slump, since if the cost of the building works is known with reasonable accuracy beforehand, the subsidies can be allocated in proportion to the real needs of execution. In fact, certain characteristics that are hard to quantify when setting housing subsidies, such as the number of family members or provision for disability, would be covered indirectly in the cost estimated with the proposed method, since they always involve an increase in built and usable floor area, in facade openings or in the size of wet rooms, and are therefore captured in the model equation. Lastly, the availability of a cost estimator reinforces owner-developed building as a form of settlement, since it assists decision-making by the individual, whether subsidized or not: the tool is valid to some extent for any owner-development, this building scheme has the least scope for speculation and is the most sustainable, it is common throughout Extremadura, and it makes the building sector more efficient by optimizing its economic production process.
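As a rough illustration of the kind of cost-estimation equation the research derives (cost as a function of built area, facade openings and wet-room size), here is a toy regression sketch in Python; the data, coefficients and variable choice are invented and do not reproduce the thesis's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic self-build projects: cost driven by built area, facade openings
# and wet-room area, plus noise. All numbers are invented.
rng = np.random.default_rng(42)
n = 200
built_area = rng.uniform(80, 180, n)        # m2
facade_openings = rng.integers(6, 20, n)    # window/door count
wet_room_area = rng.uniform(8, 25, n)       # m2 (kitchen + bathrooms)
cost = (550 * built_area + 900 * facade_openings + 1200 * wet_room_area
        + rng.normal(0, 5000, n))

X = np.column_stack([built_area, facade_openings, wet_room_area])
estimator = LinearRegression().fit(X, cost)
print("cost per extra m2 of built area:", round(estimator.coef_[0], 1))
```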

Relevance:

60.00%

Publisher:

Abstract:

Sentiment analysis is an area in which multiple disciplines have provided different approaches to building models able to extract the polarity of analyzed texts. Depending on the domain or category of the text, where examples of categories are Sports or Banking, these models must be adapted to obtain a high-quality opinion analysis. This thesis presents a model that aims to perform category-independent opinion analysis, together with an extensive state of the art on sentiment analysis. A quantitative approach is proposed that uses a polarized seed lexicon as the only qualitative resource. The proposed approach uses a corpus of texts annotated by polarity and category, together with the polarized seed lexicon, to produce a model able to deliver high-quality opinion analysis in the various categories analyzed and to expand the seed lexicon with terms suited to the processed categories.
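As a minimal illustration of the seed-lexicon idea described above, the following Python sketch scores texts with a small polarized seed lexicon and crudely expands it from a polarity- and category-annotated corpus; the lexicon, corpus and expansion rule are invented stand-ins, not the thesis's actual method.

```python
from collections import defaultdict

seed_lexicon = {"good": 1, "excellent": 1, "bad": -1, "terrible": -1}

# (text, category, annotated polarity) -- invented examples
corpus = [
    ("the striker was excellent and the match was good", "Sports", 1),
    ("terrible service from the bank a bad experience", "Banking", -1),
]

def score(text, lexicon):
    return sum(lexicon.get(tok, 0) for tok in text.split())

# Expansion: an out-of-lexicon word inherits the polarity of the annotated
# documents it appears in (simple vote; a crude stand-in for the real method).
votes = defaultdict(int)
for text, _category, polarity in corpus:
    for tok in set(text.split()):
        if tok not in seed_lexicon:
            votes[tok] += polarity

expanded = dict(seed_lexicon)
expanded.update({w: (1 if v > 0 else -1) for w, v in votes.items() if v != 0})

print(score("excellent striker", expanded))   # positive
print(score("bank service", expanded))        # negative via expanded terms
```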