923 results for "Teorema de Bayes" (Bayes' theorem)
Abstract:
Security defects are common in large software systems because of their size and complexity. Even with efficient development processes, testing, and maintenance policies, a large number of vulnerabilities can remain in a software system. Some vulnerabilities persist from one release to the next because they cannot be easily reproduced through testing, and these vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective at identifying the types and locations of vulnerabilities at an early stage and at improving the security of the software in subsequent versions (referred to as releases). We extend an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) and develop a classification framework that differentiates between these vulnerabilities based on code fixes and textual reports. We then investigate potential correlations between the vulnerability categories, classical software metrics, and other runtime environmental factors of reproducibility to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases; the prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in subsequent versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected software security defects of Mozilla Firefox.
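A minimal sketch of the kind of prediction pipeline this abstract describes, assuming per-file software metrics as features; the feature names and data below are hypothetical, and scikit-learn's CART tree stands in for C4.5:

```python
# Minimal sketch (not the authors' implementation): train the four classifier
# families named in the abstract on per-file software metrics to predict a
# vulnerability category. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier  # CART stand-in for C4.5
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-file metrics: LOC, cyclomatic complexity, churn, #authors.
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)  # 1 = hard to reproduce, 0 = easily reproducible

models = {
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```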
Abstract:
Data are lacking on the characteristics of atrial activity in centenarians, including interatrial block (IAB). The aim of this study was to describe the prevalence of IAB and atrial arrhythmias in subjects older than 100 years and to elucidate their clinical implications. We studied 80 centenarians (mean age 101.4 ± 1.5 years; 21 men) with follow-ups of 6 to 34 months. Of these 80 centenarians, 71 subjects (88.8%) underwent echocardiography. The control group comprised 269 septuagenarians. A total of 23 subjects (28.8%) had a normal P wave, 16 (20.0%) had partial IAB, 21 (26.3%) had advanced IAB, and 20 (25.0%) had atrial fibrillation/flutter. The IAB groups exhibited premature atrial beats more frequently than did the normal P wave group (35.1% vs 17.4%; P < .001); other measurements in the IAB groups also frequently fell between the values observed in the normal P wave and atrial fibrillation/flutter groups. These measurements included sex preponderance, mental status and dementia, perceived health status, significant mitral regurgitation, and mortality. The IAB group had a higher previous stroke rate (24.3%) than the other groups. Compared with septuagenarians, centenarians less frequently presented a normal P wave (28.8% vs 53.5%) and more frequently presented advanced IAB (26.3% vs 8.2%), atrial fibrillation/flutter (25.0% vs 10.0%), and premature atrial beats (28.3% vs 7.0%) (P < .01). Relatively few centenarians (<30%) had a normal P wave, and nearly half had IAB. Our data suggest that IAB, particularly advanced IAB, is a pre–atrial fibrillation condition associated with premature atrial beats. Atrial arrhythmias and IAB occurred more frequently in centenarians than in septuagenarians.
Abstract:
Despite current recommendations, a high percentage of patients with severe symptomatic aortic stenosis are managed conservatively. The aim of this study was to analyze symptomatic patients managed conservatively in the IDEAS registry, describing their baseline clinical characteristics, mortality, and its causes according to the reason for conservative management. Consecutive patients with severe aortic stenosis diagnosed at 48 centers during January 2014 were included. Baseline clinical characteristics, echocardiographic data, Charlson index, and EuroSCORE-II were registered, as were vital status and performance of valve intervention during one-year follow-up. For the purpose of this substudy we assessed symptomatic patients managed conservatively, dividing them into 5 groups according to the reason for conservative management (I: comorbidity/frailty, 128 [43.8%]; II: dementia, 18 [6.2%]; III: advanced age, 34 [11.6%]; IV: patient refusal, 62 [21.2%]; and V: other reasons, 50 [17.1%]). We included 292 patients aged 81.5 ± 9 years. Patients from group I had a higher Charlson index (4 ± 2.3), higher EuroSCORE-II (7.5 ± 6), and higher overall (42.2%) and non-cardiac mortality (16.4%) than the other groups. In contrast, patients from group III had fewer comorbidities, lower EuroSCORE-II (4 ± 2.5), and low overall (20.6%) and non-cardiac mortality (5.9%). Patients with severe symptomatic aortic stenosis managed conservatively have different baseline characteristics and clinical courses according to the reason for conservative management. A prospective assessment of comorbidity and other geriatric syndromes might help improve the therapeutic strategy in this clinical setting.
Abstract:
Background: Post-LASIK corneal ectasia (ECPL) is an infrequent but devastating complication of LASIK surgery (excimer laser-assisted keratomileusis) for the treatment of myopia with or without astigmatism. Based on Scheimpflug-imaging corneal elevation tomography (Pentacam HR system, Oculus, Wetzlar, Germany), a novel cumulative risk index is proposed for use as a screening diagnostic test to prevent this complication. Methods: An observational, analytical, cross-sectional diagnostic-test study was conducted to evaluate the operating characteristics of the NICE index, with the Belin-Ambrosio module (Pentacam HR) as the reference standard, using a binary logistic regression model, contingency tables, and estimation of the area under the ROC curve. Results: A total of 361 eyes were evaluated, of which 59.3% came from female patients; the overall mean age was 30 years (IQR 11.0). The binary logistic model was built from four quantitative independent variables (K2, PAQUI, EP, and I-S) and one qualitative variable (SEX), and its relation to the dependent variable, NICE (final score), was determined. The predictor variables were statistically significant, correctly classifying 92.9% of the evaluated eyes according to the presence or absence of risk. The Nagelkerke coefficient was 74.4%. Conclusions: The NICE cumulative risk index is a novel diagnostic tool for evaluating candidates for LASIK refractive surgery in order to prevent secondary ectasia.
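A hedged sketch of the statistical analysis named above (binary logistic regression, ROC curve, Nagelkerke coefficient), run on synthetic data; the column names merely mirror the predictors listed in the abstract:

```python
# Hedged sketch of the analysis pipeline: binary logistic regression on
# Pentacam-style predictors, with ROC AUC and Nagelkerke's R^2. Data are
# synthetic; column names mirror the abstract (K2, PAQUI, EP, I_S, SEX).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 361
df = pd.DataFrame({
    "K2": rng.normal(44, 2, n),       # steep keratometry (D)
    "PAQUI": rng.normal(540, 30, n),  # thinnest pachymetry (um)
    "EP": rng.normal(8, 4, n),        # posterior elevation (um)
    "I_S": rng.normal(0.5, 0.7, n),   # inferior-superior asymmetry (D)
    "SEX": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the predictors, for illustration only.
logit = -2 + 0.2 * df["EP"] + 1.2 * df["I_S"] - 0.02 * (df["PAQUI"] - 540)
df["risk"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["K2", "PAQUI", "EP", "I_S", "SEX"]])
fit = sm.Logit(df["risk"], X).fit(disp=0)

auc = roc_auc_score(df["risk"], fit.predict(X))
# Nagelkerke's R^2 from the fitted and null log-likelihoods.
r2_cs = 1 - np.exp(2 * (fit.llnull - fit.llf) / n)
r2_nagelkerke = r2_cs / (1 - np.exp(2 * fit.llnull / n))
print(f"AUC = {auc:.3f}, Nagelkerke R^2 = {r2_nagelkerke:.3f}")
```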
Abstract:
Introduction: Predictive scoring systems have been developed to measure disease severity and patient prognosis in the intensive care unit. These measures are useful for clinical decision-making, standardization of research, and comparison of the quality of critical patient care. Materials and methods: An observational, analytical cohort study in which the clinical records of 283 oncology patients admitted to the intensive care unit (ICU) between January 2014 and January 2016 were reviewed. The probability of mortality was estimated with the APACHE IV and MPM II prognostic scores; logistic regression was performed with the predictor variables from which each model was derived in its original study, calibration and discrimination were determined, and the Akaike (AIC) and Bayesian (BIC) information criteria were calculated. Results: In the performance evaluation of the prognostic scores, APACHE IV showed greater predictive ability (AUC = 0.95) than MPM II (AUC = 0.78); both models showed adequate calibration by the Hosmer-Lemeshow statistic (APACHE IV, p = 0.39; MPM II, p = 0.99). The ΔBIC of 2.9 shows positive evidence against APACHE IV, whereas the AIC was lower for APACHE IV, indicating it is the model with the best fit to the data. Conclusions: APACHE IV performs well in predicting the mortality of critically ill patients, including oncology patients. It is therefore a useful tool for the clinician in daily practice, allowing patients with a high probability of mortality to be identified.
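A hedged sketch of the model-comparison machinery described above (discrimination by AUC, calibration by Hosmer-Lemeshow, and AIC/BIC); the APACHE IV and MPM II scores themselves are proprietary formulas and are not reproduced, so the two risk vectors and parameter counts below are synthetic stand-ins:

```python
# Hedged sketch: evaluate two mortality-risk models on discrimination,
# calibration, and information criteria, as in the abstract.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit test over deciles of predicted risk."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        obs, exp, n_g = y[idx].sum(), p[idx].sum(), len(idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n_g))
    return chi2, stats.chi2.sf(chi2, groups - 2)

def aic_bic(y, p, k, n):
    """Information criteria from the Bernoulli log-likelihood of predictions."""
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(2)
n = 283
true_risk = rng.beta(2, 8, n)
y = (rng.random(n) < true_risk).astype(float)
p_a = np.clip(true_risk + rng.normal(0, 0.02, n), 0.01, 0.99)  # sharper model
p_b = np.clip(true_risk + rng.normal(0, 0.15, n), 0.01, 0.99)  # noisier model

for name, p, k in [("model A", p_a, 20), ("model B", p_b, 14)]:  # k illustrative
    chi2, pval = hosmer_lemeshow(y, p)
    aic, bic = aic_bic(y, p, k, n)
    print(f"{name}: AUC={roc_auc_score(y, p):.2f} "
          f"HL p={pval:.2f} AIC={aic:.1f} BIC={bic:.1f}")
```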
Abstract:
Subtle structural differences can be observed in the islets of Langerhans region of microscopic images of pancreas cells of rats having normal glucose tolerance and rats in pre-diabetic (glucose intolerant) conditions. This paper proposes a way to automatically segment the islets of Langerhans region from the histological image of a rat's pancreas cell and, on the basis of morphological features extracted from the segmented region, classify the images as normal or pre-diabetic. The experiment is done on a set of 134 images, of which 56 are of normal type and the remaining 78 are of pre-diabetic type. The work has two stages: first, segmentation of the region of interest (ROI), i.e. the islets of Langerhans, from the pancreatic cell; second, extraction of the morphological features from the region of interest for classification. Wavelet analysis and connected component analysis have been used for automatic segmentation of the images. A few classifiers such as OneRule, Naïve Bayes, MLP, J48 Tree, and SVM are used for evaluation, among which MLP performed the best.
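A hedged sketch, not the paper's code, of the two-stage idea: wavelet smoothing plus connected component analysis for segmentation, followed by simple morphological features; function names and thresholds are illustrative:

```python
# Sketch under assumptions: wavelet-based smoothing, thresholding, and
# connected component analysis to isolate a candidate islet region, then
# simple morphological features that a classifier could consume.
import numpy as np
import pywt
from scipy import ndimage

def segment_islet(gray_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of the largest bright connected component."""
    # Wavelet decomposition; suppress detail coefficients to smooth texture.
    coeffs = pywt.wavedec2(gray_image, "db2", level=2)
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in c) for c in coeffs[1:]]
    smooth = pywt.waverec2(coeffs, "db2")[: gray_image.shape[0], : gray_image.shape[1]]
    # Threshold and keep the largest connected component as the ROI.
    mask = smooth > smooth.mean() + smooth.std()
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + np.argmax(sizes))

def morphological_features(mask: np.ndarray) -> dict:
    """Area, perimeter proxy, and compactness of the segmented region."""
    area = mask.sum()
    perimeter = (mask ^ ndimage.binary_erosion(mask)).sum()
    return {"area": area, "perimeter": perimeter,
            "compactness": perimeter ** 2 / max(area, 1)}
```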
Abstract:
This article analyzes the restrictions that the American Convention on Human Rights imposes on the construction of a system for electing popular representatives. To do so, tools from Social Choice Theory are used, which allow us to pin down precisely which electoral systems cannot be tolerated in the Inter-American Human Rights System. Keywords: Social Choice Theory, Political Rights, Arrow's Impossibility Theorem, Inter-American Human Rights System.
Abstract:
This thesis analyzes the dynamics of conservative systems with infinitely many degrees of freedom, described by fields. The approach of Lagrangian mechanics is used: starting from a principle, the principle of least action, one arrives at the equations of motion, called the Euler-Lagrange equations, governed by a function called the Lagrangian. The Hamiltonian formalism is then described, in which the Euler-Lagrange equations are rewritten in new coordinates and a new function, the Hamiltonian, comes into play. A further topic is addressed, concerning quantities of the system that are conserved in time. This conservation law, described by Noether's theorem, is due to symmetries of the Lagrangian, that is, continuous transformations of the coordinates of the system that leave the Lagrangian unchanged; to every symmetry of the Lagrangian there corresponds a conserved quantity. Finally, the Lagrangian method is applied to the system described by the electromagnetic field, showing that the Euler-Lagrange equations become the well-known Maxwell equations.
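For reference, the field-theoretic Euler-Lagrange equations and the electromagnetic case mentioned above, in standard textbook form (not quoted from the thesis):

```latex
% Euler-Lagrange equations for a field \phi with Lagrangian density \mathcal{L}:
\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}
  - \frac{\partial \mathcal{L}}{\partial \phi} = 0
% Free electromagnetic field, with F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu:
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
\quad\Longrightarrow\quad
\partial_\mu F^{\mu\nu} = 0 \;\;\text{(vacuum Maxwell equations)}
% Noether current for a symmetry \phi \to \phi + \epsilon\,\delta\phi:
j^\mu = \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}\,\delta\phi,
\qquad \partial_\mu j^\mu = 0
```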
Abstract:
The main purpose of this thesis is to go beyond two usual assumptions that accompany theoretical analysis in spin-glasses and inference: the i.i.d. (independently and identically distributed) hypothesis on the noise elements and the finite-rank regime. The first has been present since the early days of spin-glasses; the second concerns the inference viewpoint. Disordered systems and Bayesian inference have a well-established relation, evidenced by their continuous cross-fertilization. The thesis makes use of techniques coming both from the rigorous mathematical machinery of spin-glasses, such as the interpolation scheme, and from Statistical Physics, such as the replica method. The first chapter contains an introduction to the Sherrington-Kirkpatrick and spiked Wigner models. The first is a mean-field spin-glass where the couplings are i.i.d. Gaussian random variables. The second amounts to establishing the information-theoretic limits in the reconstruction of a fixed low-rank matrix, the “spike”, blurred by additive Gaussian noise. In chapters 2 and 3 the i.i.d. hypothesis on the noise is broken by assuming a noise with an inhomogeneous variance profile. In spin-glasses this leads to multi-species models; the inferential counterpart is called spatial coupling. All the previous models are usually studied in the Bayes-optimal setting, where everything is known about the generating process of the data. In chapter 4, by contrast, we study the spiked Wigner model where the prior on the signal to reconstruct is ignored. In chapter 5 we analyze the statistical limits of a spiked Wigner model where the noise is no longer Gaussian but drawn from a random matrix ensemble, which makes its elements dependent. The thesis ends with chapter 6, where the challenging problem of high-rank probabilistic matrix factorization is tackled. Here we introduce a new procedure called "decimation" and show that it is theoretically possible to perform matrix factorization through it.
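For concreteness, standard definitions of the two models introduced in the first chapter, under one common normalization (conventions vary across the literature):

```latex
% Sherrington-Kirkpatrick Hamiltonian, couplings J_{ij} i.i.d. standard Gaussian:
H_N(\sigma) = -\frac{1}{\sqrt{N}} \sum_{1 \le i < j \le N} J_{ij}\,\sigma_i \sigma_j,
\qquad \sigma \in \{-1, +1\}^N
% Rank-one spiked Wigner model: a signal x drawn from a prior P_X is observed
% through a symmetric Gaussian noise matrix W at signal-to-noise ratio \lambda:
Y = \sqrt{\frac{\lambda}{N}}\, x x^{\mathsf{T}} + W,
\qquad W_{ij} = W_{ji} \sim \mathcal{N}(0, 1) \text{ i.i.d. for } i < j
```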
Abstract:
In this work, we explore and demonstrate the potential for modeling and classification using quantile-based distributions, which are random variables defined by their quantile function. In the first part we formalize a least squares estimation framework for the class of linear quantile functions, leading to unbiased and asymptotically normal estimators. Among the distributions with a linear quantile function, we focus on the flattened generalized logistic distribution (fgld), which offers a wide range of distributional shapes. A novel naïve-Bayes classifier is proposed that utilizes the fgld estimated via least squares, and through simulations and applications we demonstrate its competitiveness against state-of-the-art alternatives. In the second part we consider the Bayesian estimation of quantile-based distributions. We introduce a factor model with independent latent variables, which are distributed according to the fgld. Similar to the independent factor analysis model, this approach accommodates flexible factor distributions while using fewer parameters. The model is presented within a Bayesian framework, an MCMC algorithm for its estimation is developed, and its effectiveness is illustrated with data from the European Social Survey. The third part focuses on depth functions, which extend the concept of quantiles to multivariate data by imposing a center-outward ordering in the multivariate space. We investigate the recently introduced integrated rank-weighted (IRW) depth function, which is based on the distribution of random spherical projections of the multivariate data. This depth function proves to be computationally efficient, and to increase its flexibility we propose different methods to explicitly model the projected univariate distributions. Its usefulness is shown in classification tasks: the maximum depth classifier based on the IRW depth is proven to be asymptotically optimal under certain conditions, and classifiers based on the IRW depth are shown to perform well in simulated and real data experiments.
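A brief sketch of the least-squares idea for linear quantile functions: the sorted sample approximates Q(p) at plotting positions, so the coefficients of a linear-in-parameters quantile function can be fit by ordinary least squares. The basis below is a generic logistic-type stand-in, not the exact fgld parameterization:

```python
# Hedged sketch: least-squares estimation of a linear quantile function.
# The order statistics approximate Q(p_i) at p_i = i/(n+1), so the
# coefficients of Q(p) = theta' g(p) can be fit by OLS.
import numpy as np

def basis(p: np.ndarray) -> np.ndarray:
    """Design matrix g(p) for a linear-in-parameters quantile function."""
    return np.column_stack([np.ones_like(p), np.log(p), -np.log(1 - p), p])

def fit_quantile_function(x: np.ndarray) -> np.ndarray:
    """Least-squares coefficients theta such that Q(p) ~ basis(p) @ theta."""
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)  # plotting positions
    theta, *_ = np.linalg.lstsq(basis(p), np.sort(x), rcond=None)
    return theta

# Usage: recover the quantile function of a logistic-like sample.
rng = np.random.default_rng(3)
x = rng.logistic(loc=2.0, scale=1.5, size=2000)
theta = fit_quantile_function(x)
print("median estimate:", basis(np.array([0.5])) @ theta)
```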
Abstract:
This thesis deals with the isoperimetric problem in the plane, that is, the problem of finding, if it exists, the domain that maximizes the area for a fixed perimeter (understood as the length of the boundary). Intuitively the answer seems rather simple: the disk is the domain of maximal area, but the proof is far from trivial. First, the isoperimetric properties of regular polygons are presented: among all polygons with a fixed number of sides n and fixed perimeter p, the regular polygon with n sides and perimeter p is the unique one that maximizes the area. This fact is then generalized to an arbitrary bounded domain of the plane whose boundary is a closed, simple, absolutely continuous curve. Indeed, under these hypotheses, the area is less than or equal to a certain constant times the square of the length of the boundary, with equality if and only if the boundary is a circle. Finally, the last chapter gives the surprisingly simple proof of the isoperimetric inequality due to Hélein for a Lipschitz open set, using only Stokes' theorem applied to a particular vector field inspired by the theory of calibrations.
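The "certain constant" in the inequality can be made explicit; the classical planar statement (textbook material, not quoted from the thesis) is:

```latex
% Planar isoperimetric inequality: for a bounded domain with boundary
% length L and area A,
4\pi A \le L^2,
\qquad\text{equivalently}\qquad
A \le \frac{L^2}{4\pi},
% with equality if and only if the boundary is a circle.
```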
Abstract:
This work focuses on the study of a local volatility model formulated by A. Conze and P. Henry-Labordère, building on R. Bass's construction for Skorokhod embeddings. Given a price process for which a finite number of marginal distributions are known, it is assumed to be a non-negative martingale expressible as a function of time and of another stochastic process (for example, a Brownian motion): the goal is to identify this function. To achieve this, the problem is reduced to solving a fixed-point equation, for whose solution existence and uniqueness results are provided. Determining this function then makes it possible to compute the sensitivities of the model.
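A minimal sketch of the one-marginal idea underlying Bass's construction (the multi-marginal fixed-point equation of Conze and Henry-Labordère is not reproduced here); the target marginal below is a hypothetical lognormal:

```python
# Minimal sketch, assuming a single known marginal: if W_1 ~ N(0,1), then
# f = F_mu^{-1} o Phi satisfies f(W_1) ~ mu, so the price at time 1 can be
# written as a function of the Brownian endpoint.
import numpy as np
from scipy import stats

target = stats.lognorm(s=0.3, scale=100.0)  # hypothetical marginal of prices

def f(w: np.ndarray) -> np.ndarray:
    """Map a standard Gaussian variate to the target marginal."""
    return target.ppf(stats.norm.cdf(w))

# Check: the transformed Brownian endpoint matches the target distribution.
rng = np.random.default_rng(4)
prices = f(rng.standard_normal(100_000))
print("sample mean vs target mean:", prices.mean(), target.mean())
print("KS statistic:", stats.kstest(prices, target.cdf).statistic)
```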
Abstract:
This work studies commutative unital rings in which every ascending chain or every descending chain of ideals becomes stationary after finitely many steps. A commutative unital ring R satisfying the ascending chain condition, i.e. in which every ascending chain of ideals a_1 ⊆ a_2 ⊆ · · · ⊆ R becomes stationary after finitely many steps or, equivalently, every ideal is generated by finitely many elements, is called Noetherian. This class of rings owes its name to the German mathematician Emmy Noether who, in 1921, while studying a famous result of Lasker for ideals of polynomial rings, realized that it holds in all rings whose ideals are finitely generated. These rings play an important role in algebraic geometry, since algebraic varieties are zero loci of polynomials in several variables with coefficients in a field K, and the properties of the ideals of the ring K[x_1, . . . , x_n] are reflected in the properties of the algebraic varieties of K^n. Moreover, for these rings there exist algorithmic procedures that are possible precisely thanks to the ascending chain condition. A commutative unital ring R satisfying the descending chain condition, i.e. in which every descending chain of ideals . . . ⊆ a_2 ⊆ a_1 ⊆ R becomes stationary after finitely many steps, is called Artinian, after the Austrian mathematician Emil Artin, who introduced these rings and studied their properties. Akizuki's theorem states that a commutative unital ring R is Artinian if and only if it is Noetherian of dimension zero, i.e. every prime ideal of R is maximal.
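A standard pair of examples separating the two chain conditions, added for illustration (not from the abstract):

```latex
% \mathbb{Z} is Noetherian but not Artinian: the descending chain of ideals
(2) \supsetneq (4) \supsetneq (8) \supsetneq \cdots \supsetneq (2^k) \supsetneq \cdots
% never becomes stationary. By contrast, \mathbb{Z}/n\mathbb{Z} has only
% finitely many ideals, so it is both Artinian and Noetherian, and it has
% dimension zero, consistent with Akizuki's theorem.
```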