965 results for Probabilistic choice models
Abstract:
The aim of this paper is to predict time series of SO2 concentrations emitted by coal-fired power stations, in order to estimate emission episodes in advance and to analyze the influence of some meteorological variables on the prediction. An emission episode is said to occur when the series of bi-hourly means of SO2 exceeds a specific level. For coal-fired power stations it is essential to predict emission episodes sufficiently in advance so that appropriate preventive measures can be taken. We propose a methodology to predict SO2 emission episodes based on an additive model and an algorithm for variable selection. The methodology was applied to the estimation of SO2 emissions registered at sampling locations near a coal-fired power station in Northern Spain. The results indicate a good performance of the model using only two terms of the time series, and that the inclusion of the meteorological variables in the model is not significant.
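As a concrete illustration of the modelling step described above, here is a minimal sketch of an additive model driven by two lagged terms of the SO2 series. The pyGAM library, the file name and the episode threshold are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch: one-step-ahead prediction of SO2 from two lagged
# terms of the bi-hourly series, with an episode flagged above a level.
import numpy as np
from pygam import LinearGAM, s

def make_lagged(series, lags=(1, 2)):
    """Build a design matrix of lagged values X and the aligned target y."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - lag:-lag] for lag in lags])
    y = series[max_lag:]
    return X, y

so2 = np.loadtxt("so2_bihourly_means.txt")  # hypothetical input series
X, y = make_lagged(so2, lags=(1, 2))

# One smooth term per lag: y_t = f1(y_{t-1}) + f2(y_{t-2}) + error
gam = LinearGAM(s(0) + s(1)).fit(X, y)

# An emission episode is flagged when the forecast exceeds a fixed level.
LEVEL = 150.0  # illustrative threshold, in the units of the series
forecast = gam.predict(X[-1:])
print("episode expected" if forecast[0] > LEVEL else "no episode expected")
```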
Abstract:
Doctoral thesis in Basic Psychology
Abstract:
Master's dissertation in Judiciary Law (Procedural Rights and Judicial Organization)
Abstract:
Under the framework of constraint-based modeling, genome-scale metabolic models (GSMMs) have been used for several tasks, such as metabolic engineering and phenotype prediction. More recently, their application in health-related research has spanned drug discovery, biomarker identification and host-pathogen interactions, targeting diseases such as cancer, Alzheimer's disease, obesity and diabetes. In recent years, the development of novel techniques for genome sequencing and other high-throughput methods, together with advances in Bioinformatics, allowed the reconstruction of GSMMs for human cells. Considering the diversity of cell types and tissues present in the human body, it is imperative to develop tissue-specific metabolic models. Methods to automatically generate these models, based on generic human metabolic models and a plethora of omics data, have been proposed. However, their results have not yet been adequately and critically evaluated and compared. This work presents a survey of the most important tissue- or cell-type-specific metabolic model reconstruction methods, which use literature, transcriptomics, proteomics and metabolomics data, together with a global template model. As a case study, we analyzed the consistency between several omics data sources and reconstructed distinct metabolic models of hepatocytes using different methods and data sources as inputs. The results show that omics data sources overlap poorly and, in some cases, are even contradictory. Additionally, the hepatocyte metabolic models generated are in many cases unable to perform metabolic functions known to be present in liver tissue. We conclude that reliable methods for a priori omics data integration are required to support the reconstruction of complex models of human cells.
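To make the constraint-based step underlying these reconstruction methods concrete, the sketch below loads a metabolic model, constrains an exchange flux and runs flux balance analysis with COBRApy. The file name and reaction identifier are illustrative assumptions, and the check is a simplification of the functional tests discussed in the survey.

```python
# A minimal sketch of flux balance analysis with COBRApy.
import cobra

model = cobra.io.read_sbml_model("hepatocyte_model.xml")  # hypothetical model

# Limit glucose uptake, then optimize the model's objective (for a
# tissue-specific model, a known liver function rather than biomass).
model.reactions.get_by_id("EX_glc__D_e").lower_bound = -10.0
solution = model.optimize()
print("objective flux:", solution.objective_value)

# A reconstructed hepatocyte model that cannot carry flux through a known
# liver function (objective value of zero) fails this kind of check.
```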
Abstract:
OBJECTIVE - This study compared the early and late results of the use of a single stent with those of the use of multiple stents in patients with lesions longer than 20 mm. METHODS - Prospective assessment of patients electively treated with stents, with optimal stent deployment, and followed up for more than 3 months. From February '94 to January '98, 215 patients with lesions >20 mm were treated. These patients were divided into 2 groups as follows: Group A - 105 patients (49%) with one stent implanted; Group B - 110 patients (51%) with multiple stents implanted. RESULTS - The mean length of the lesions was 26 mm in group A (21-48 mm) versus 29 mm in group B (21-52 mm) (p=0.01). Major complications occurred in one patient (0.9%) in group A (subacute thrombosis, myocardial infarction and death) and in 2 patients (1.8%) in group B (one emergency surgery and one myocardial infarction) (p=NS). The results of the late follow-up period (>6 months) were similar for both groups (group A = 82% vs group B = 76%; p=NS), and we observed event-free survival in 89% of the patients in group A and in 91% of the patients in group B (p=NS). Angina (group A = 11% vs group B = 7%) and lesion revascularization (group A = 5% vs group B = 6%; p=NS) also occurred in similar percentages. No infarction or death was observed in the late follow-up period; restenosis was identified in 33% and 29% of the patients in groups A and B, respectively (p=NS). CONCLUSION - The results obtained using one stent and using multiple stents were similar; the greater cost-effectiveness of single-stent implantation, however, seems to make this strategy the first choice.
Abstract:
"Series: Solid mechanics and its applications, vol. 226"
Abstract:
"A workshop within the 19th International Conference on Applications and Theory of Petri Nets - ICATPN’1998"
Abstract:
Risk management is of paramount importance to the success of tunnelling works and is linked to the tunnelling method and to the constraints of the works. The Sequential Excavation Method (SEM) and the Tunnel Boring Machine (TBM) method have been competing for years. This article, part of a wider study on the influence of the "Safety and Health" criterion in the choice of method, reviews the existing literature on the criteria usually employed to choose the tunnelling method and on the "Safety and Health" criterion. This criterion is particularly important due to the financial impacts of work accidents and occupational diseases. This article is especially useful to the scientific and technical community, since it synthesizes the relevance of each of the choice criteria used and shows why "Safety and Health" must be a criterion in the decision-making process for choosing the tunnelling method.
Abstract:
In recent decades, increased interest has been evident in research on multi-scale hierarchical modelling in the field of mechanics, and also in the field of wood products and timber engineering. One of the main motivations for hierarchical modelling is to understand how properties, composition and structure at lower scale levels may influence, and be used to predict, the material properties on a macroscopic and structural engineering scale. This chapter presents the applicability of statistical and probabilistic methods, such as the Maximum Likelihood method and Bayesian methods, to the representation of timber's mechanical properties and their inference, accounting for prior information obtained at different scales of importance. These methods allow the analysis of distinct timber reference properties, such as density, bending stiffness and strength, and hierarchically consider information obtained through different non-destructive, semi-destructive or destructive tests. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts; however, they require expert knowledge to assess the correct statistical fitting and to define the correlation arrangement between properties.
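For illustration, the sketch below applies both inference routes named above to a hypothetical set of timber density measurements: a Maximum Likelihood fit of a lognormal distribution, and a conjugate Bayesian update of the mean that folds in prior information such as might come from another scale. All numbers and the input file are assumptions.

```python
# A minimal sketch of MLE and Bayesian inference for timber density.
import numpy as np
from scipy import stats

density = np.loadtxt("timber_density.txt")  # hypothetical test data, kg/m3

# Maximum Likelihood: fit a lognormal distribution to the measurements.
shape, loc, scale = stats.lognorm.fit(density, floc=0)
print(f"MLE lognormal: sigma = {shape:.3f}, median = {scale:.1f} kg/m3")

# Bayesian update: normal prior on the mean density, normal likelihood
# with the measurement standard deviation treated as known.
mu0, tau0 = 450.0, 30.0       # assumed prior mean and prior std
sigma = density.std(ddof=1)   # measurement std, treated as known
n, xbar = len(density), density.mean()

post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)
print(f"posterior mean density: {post_mean:.1f} +/- {post_var**0.5:.1f} kg/m3")
```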
Abstract:
The need to reduce human error has been growing in every field, and medicine is one of them. Through the implementation of technologies it is possible to support the clinical decision-making process and thereby reduce the difficulties that are typically faced. This study focuses on easing some of those difficulties by presenting real-time data mining models capable of predicting whether a monitored patient, typically admitted to intensive care, will need to take vasopressors. Data mining models were induced using clinical variables such as vital signs and laboratory analyses, among others. The best model presented a sensitivity of 94.94%. With this model it is possible to reduce the misuse of vasopressors, acting as prevention. At the same time, better care is offered to patients by anticipating their treatment with vasopressors.
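As an illustration of the kind of model the study induces, the sketch below trains a simple classifier on hypothetical ICU monitoring data and reports its sensitivity, the metric quoted above. The feature names, data file and classifier choice are assumptions, not the authors' exact setup.

```python
# A minimal sketch: predict vasopressor need from clinical variables.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("icu_monitoring.csv")  # hypothetical dataset
features = ["heart_rate", "mean_bp", "spo2", "lactate", "creatinine"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["needs_vasopressor"],
    test_size=0.3, stratify=df["needs_vasopressor"])

clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

# Sensitivity is recall on the positive class, the metric quoted above.
sensitivity = recall_score(y_test, clf.predict(X_test))
print(f"sensitivity: {sensitivity:.2%}")
```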
Abstract:
Master's dissertation in MPA - Public Administration
Abstract:
The verification and analysis of programs with probabilistic features is a necessary task in current scientific and technological practice. The success, and subsequent widespread adoption, of hardware-level implementations of communication protocols and of probabilistic solutions to distributed problems makes the use of stochastic agents as programming elements more than interesting. In many of these cases the use of random agents produces better and more efficient solutions; in others, they provide solutions where it is impossible to find them by traditional methods. These algorithms are generally embedded in multiple hardware mechanisms, so an error in them can produce an undesired multiplication of their harmful effects. Currently, the greatest effort in the analysis of probabilistic programs is devoted to the study and development of tools called probabilistic model checkers. Given a finite model of the stochastic system, these tools automatically obtain several of its performance measures. Although this can be quite useful for verifying programs, for general-purpose systems it is necessary to check more complete specifications that bear on the correctness of the algorithm. It would even be interesting to obtain the properties of the system automatically, in the form of invariants and counterexamples. This project aims to address the problem of static analysis of probabilistic programs through the use of deductive tools such as theorem provers and SMT solvers, which have shown their maturity and effectiveness in attacking problems of traditional programming. In order not to lose automation in the methods, we will work within the framework of "Abstract Interpretation", which provides an outline for our theoretical development. At the same time, we will put these foundations into practice through concrete implementations that use those tools.
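As a small illustration of the deductive step the project proposes, the sketch below uses the Z3 SMT solver to discharge a verification condition for a toy loop whose probabilistic choice has been over-approximated by nondeterministic choice, a standard abstraction in this setting. The program and candidate invariant are assumptions for illustration.

```python
# A minimal sketch: check that a candidate invariant is inductive by
# asking Z3 for a counterexample to the verification condition.
from z3 import And, Implies, Ints, Not, Solver, unsat

x, n = Ints("x n")
inv = And(0 <= x, x <= n)  # candidate invariant for a counting loop

# Loop body: with some probability x := x + 1, otherwise x is unchanged.
# Abstracting the coin flip, each branch must preserve inv; the unchanged
# branch is trivial, so only the increment branch is checked here.
preserved = Implies(And(inv, x < n),          # inv holds, loop continues
                    And(0 <= x + 1, x + 1 <= n))

s = Solver()
s.add(Not(preserved))  # search for a counterexample to the condition
print("invariant is inductive" if s.check() == unsat else s.model())
```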