928 results for python django bootstrap
Abstract:
Long-term survival models have historically been considered for analyzing time-to-event data with a fraction of long-term survivors. However, situations in which a fraction (1 - p) of systems is subject to failure from independent competing causes, while the remaining proportion p is cured or has not presented the event of interest during the study period, have not been fully considered in the literature. To accommodate such situations, we present in this paper a new long-term survival model. The maximum likelihood estimation procedure is discussed, as well as interval estimation and hypothesis tests. A real dataset illustrates the methodology.
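For context, the classical long-term survivor (mixture cure) formulation that models of this kind extend is the Berkson-Gage model, in which a cured fraction p never experiences the event (a standard textbook expression, not the paper's new model):

```latex
S_{\mathrm{pop}}(t) = p + (1 - p)\,S_0(t)
```

Here $S_0(t)$ is the survival function of the susceptible fraction; in the setting described above, the susceptible fraction fails from independent competing causes.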
Abstract:
During 2008-2010, ticks were collected from road-killed wild animals within the Serra dos Orgaos National Park area in the state of Rio de Janeiro, Brazil. In total, 193 tick specimens were collected, including Amblyomma dubitatum Neumann and Amblyomma cajennense (F.) from four Hydrochoerus hydrochaeris (L.), Amblyomma calcaratum Neumann and A. cajennense from four Tamandua tetradactyla (L.), Amblyomma aureolatum (Pallas) and A. cajennense from five Cerdocyon thous L., Amblyomma longirostre (Koch) from one Sphiggurus villosus (Cuvier), Amblyomma varium Koch from three Bradypus variegatus Schinz, and A. cajennense from one Buteogallus meridionalis (Latham). Molecular analyses based on polymerase chain reaction targeting two rickettsial genes (gltA and ompA) on tick DNA extracts showed that 70.6% (12/17) of the A. dubitatum adult ticks, and all Amblyomma sp. nymphal pools collected from capybaras, contained rickettsial DNA, which, after DNA sequencing, proved to be 100% identical to the recently identified Rickettsia sp. strain Pampulha from A. dubitatum ticks collected in the state of Minas Gerais, Brazil. Phylogenetic analysis with concatenated sequences (gltA-ompA) showed that our sequence from A. dubitatum ticks, referred to as Rickettsia sp. strain Serra dos Orgaos, segregated with 99% bootstrap support into the same cluster as Old World rickettsiae, namely R. tamurae, R. monacensis, and Rickettsia sp. strain 774e. Because A. dubitatum is known to bite humans, the potential role of Rickettsia sp. strain Serra dos Orgaos as a human pathogen must be taken into account, since both R. tamurae and R. monacensis have been reported infecting human beings.
Abstract:
Background: Statistical methods for estimating usual intake require at least two short-term dietary measurements in a subsample of the target population. However, the percentage of individuals with a second dietary measurement (the replication rate) may influence the precision of estimates such as percentiles and proportions of individuals below cut-offs of intake. Objective: To investigate the precision of usual food intake estimates under different replication rates and sample sizes. Participants/setting: Adolescents participating in the continuous National Health and Nutrition Examination Survey 2007-2008 (n=1,304) who completed two 24-hour recalls. Statistical analyses performed: The National Cancer Institute method was used to estimate the usual intake of dark green vegetables in the original sample comprising 1,304 adolescents with a replication rate of 100%. A bootstrap with 100 replications was performed to estimate CIs for percentiles and proportions of individuals below cut-offs of intake. Using the same bootstrap replications, four data sets were sampled with different replication rates (80%, 60%, 40%, and 20%). For each data set created, the National Cancer Institute method was applied and percentiles, CIs, and proportions of individuals below cut-offs were calculated. Precision was checked by comparing each CI obtained from the data sets with different replication rates against the CI obtained from the original data set. Further, we sampled 1,000, 750, 500, and 250 individuals from the original data set and performed the same analytical procedures. Results: Percentiles of intake and percentages of individuals below the cut-off points were similar across replication rates and sample sizes, but the CIs widened as the replication rate decreased. The widest CIs were observed at replication rates of 40% and 20%. Conclusions: The precision of the usual intake estimates decreased when low replication rates were used. However, even with different sample sizes, replication rates >40% may not lead to an important loss of precision. J Acad Nutr Diet. 2012;112:1015-1020.
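The resampling scheme described above can be illustrated with a generic percentile bootstrap (a minimal sketch, not the NCI method itself; the data and cut-off below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(data, stat, n_replications=100, alpha=0.05):
    """Percentile-bootstrap CI for any statistic: resample with
    replacement, recompute the statistic, take empirical quantiles."""
    estimates = [stat(rng.choice(data, size=len(data), replace=True))
                 for _ in range(n_replications)]
    return (np.percentile(estimates, 100 * alpha / 2),
            np.percentile(estimates, 100 * (1 - alpha / 2)))

# hypothetical daily intakes (cups of dark green vegetables)
intakes = rng.gamma(shape=0.5, scale=0.4, size=1304)

median_ci = bootstrap_ci(intakes, lambda x: np.percentile(x, 50))
below_cutoff_ci = bootstrap_ci(intakes, lambda x: np.mean(x < 0.1))
print(median_ci, below_cutoff_ci)
```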
Abstract:
The development of new statistical and computational methods is increasingly making it possible to bridge the gap between the hard sciences and the humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of the humanities, from which concepts such as dialectics and opposition are formally defined in mathematical terms. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid the statistical bias caused by the small sample: hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness, and counter-dialectics, we confirmed the intuitive analysis of historians that classical music evolved according to a master-apprentice tradition, while in philosophy changes were driven by opposition. Though these case studies were meant only to show the possibility of treating phenomena in the humanities quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach presented here to many other areas, since it is entirely generic.
Abstract:
The self-consistency of a thermodynamic theory for hadronic systems based on non-extensive statistics is investigated. We show that it is possible to obtain a self-consistent theory according to the asymptotic bootstrap principle if the mass spectrum and the energy density increase q-exponentially. A direct consequence is the existence of a limiting effective temperature for the hadronic system. We show that this result is in agreement with experiments.
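For reference, the q-exponential growth referred to here is the standard function of Tsallis non-extensive statistics (a textbook definition, not a formula quoted from the paper); it reduces to the ordinary exponential in the limit q -> 1:

```latex
e_q(x) = \left[ 1 + (1 - q)\,x \right]^{\frac{1}{1-q}}, \qquad \lim_{q \to 1} e_q(x) = e^{x}
```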
Abstract:
Fraud is a global problem that has demanded more attention due to the accentuated expansion of modern technology and communication. When statistical techniques are used to detect fraud, a critical factor is whether the fraud detection model is accurate enough to classify each case correctly as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve classification accuracy. In this paper we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors for classification. Via a large simulation study and various real datasets, we found that the probabilistic networks are a strong modeling option with high predictive capacity, and that they gain substantially from the bagging procedure when compared to traditional techniques.
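The bagging scheme described here can be sketched in a few lines (a minimal illustration assuming an arbitrary fit/predict learner; the paper's actual base models are k-dependence probabilistic networks, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y):
    """Trivial stand-in learner: classify by the nearer class centroid."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return lambda Xt: (np.linalg.norm(Xt - c1, axis=1)
                       < np.linalg.norm(Xt - c0, axis=1)).astype(int)

def bagging_predict(fit, X_train, y_train, X_test, n_models=25):
    """Bootstrap aggregating: fit one classifier per bootstrap replicate
    of the training data, then combine predictions by majority vote."""
    n = len(y_train)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)   # one bootstrap replicate
        votes.append(fit(X_train[idx], y_train[idx])(X_test))
    return (np.mean(votes, axis=0) >= 0.5).astype(int)

# hypothetical data: legitimate (0) vs fraudulent (1) cases
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(1.5, 1, (40, 4))])
y = np.array([0] * 200 + [1] * 40)
print(bagging_predict(nearest_centroid_fit, X, y, X[:10]))
```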
Abstract:
Background: Vampire bat-related rabies harms both the livestock industry and the public health sector in central Brazil. The geographical distributions of vampire bat-transmitted rabies virus variants are delimited by mountain chains; these findings were elucidated by analyzing a highly conserved nucleoprotein gene. This study aims to elucidate the detailed epidemiological characteristics of vampire bat-transmitted rabies virus by phylogenetic methods based on a 619-nt sequence including the unconserved G-L intergenic region. Findings: The vampire bat-transmitted rabies virus isolates, divided into 8 phylogenetic lineages in the previous nucleoprotein gene analysis, were divided into 10 phylogenetic lineages with significant bootstrap values. The distributions of most variants were reconfirmed to be delimited by mountain chains. Furthermore, variants in undulating areas have narrow distributions and are apparently separated by mountain ridges. Conclusions: This study demonstrates that the 619-nt sequence including the G-L intergenic region is more useful for state-level phylogenetic analysis of rabies virus than the partial nucleoprotein gene, and that the distribution of vampire bat-transmitted RABV variants tends to be separated not only by mountain chains but also by mountain ridges, suggesting that the diversity of vampire bat-transmitted RABV variants was delimited by geographical undulations.
Abstract:
Web frameworks are tools for improving the development and maintenance of websites. Learning to use a framework takes several months, and there are more than 100 web frameworks, so studies showing their differences are valuable. In this project a comparison of web frameworks was carried out to assess their differences, weaknesses, and strengths. To select the web frameworks, variables such as usage statistics, popularity, and results in other comparisons were used. In addition, it was decided that the selected web frameworks should be based on different programming languages. On this basis the following web frameworks were selected: Rails, Grails, Django, and CodeIgniter. To compare them, a very simple application, MyBlog, was implemented with each of them: a user system with blogs, posts, and comments. The preparation for this implementation consisted of reading documentation on the programming language, completing a set of very simple exercises, and reading the web framework's documentation. All these tasks, including the implementation of MyBlog, had to be performed within an assigned time limit. Based on this development it was concluded that Rails, Grails, and Django are frameworks that require a long time to learn, while CodeIgniter is much simpler to learn. However, the former produce more concise and less repetitive code, while the latter results in repetitive and lengthy code. On the other hand, the Grails documentation was of low quality and increased the difficulty of learning it, whereas Rails and Django offer good documentation. Rails is the only framework with strong support for migrations and Javascript. Django is the only one that supports class-based views. Grails is the only one that supports internationalization from code generation onwards.
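As an illustration of the class-based views noted above as Django-specific, a minimal sketch (the Post model and template path are hypothetical):

```python
# views.py -- a minimal Django class-based view (hypothetical blog app)
from django.views.generic import ListView

from .models import Post  # assumed model holding the blog's posts


class PostListView(ListView):
    """Lists posts; the generic base class supplies queryset handling,
    context population, pagination, and template rendering."""
    model = Post
    template_name = "blog/post_list.html"  # template to render
    context_object_name = "posts"          # name exposed to the template
    paginate_by = 10                       # built-in pagination
```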
Abstract:
This project covers the analysis and development process carried out to build a functional prototype of a virtual simulator for single-channel rigid endoscopy, oriented towards hysteroscopy. The prototype is built on the ESQUI environment, an open-source virtual medical simulation environment. This environment provides a library, itself based on the well-known VTK (Visualization ToolKit) graphics library, whose purpose is to make available to the programmer all the algorithms needed to build a virtual medical simulation. In this project, this library was debugged and extended to improve support for the rigid endoscopy techniques to be simulated. In addition, the Simball 4D, a human interface device from the company G-coder Systems, is used to capture user interaction, emulating the morphology and dynamics of a rigid endoscope. All these elements are connected through a simple, intuitive, and practical graphical interface supported by wxWidgets, using Python as the scripting language. Finally, the resulting prototype is analyzed and a series of future lines of work are proposed regarding its educational application, both in relation to the conceptual goals of the prototype and to specific aspects of the ESQUI environment.
Abstract:
This Final Degree Project had as its objective the development of a restaurant menu manager as a web application for a company that offers menu hosting and advertising by publishing those menus on screens and web portals. The associated businesses (bars and restaurants) can create menus composed of two courses (first and second), dessert, and drinks, to be 'sent' to the publication service. The application provides a management system for these menus, facilitating the reuse of dishes across menus, the customization of the representative image of each dish, and various copy, display, and modification operations on menus and dishes. Registered users can recover their password automatically if they forget it. The information related to dishes, menus, and registered users is stored automatically in a database designed for this purpose. In addition, the web application has a page accessible only to the administrator for user management, for example editing, registering, removing, enabling, and disabling user accounts. Finally, the technologies and tools used in this work include PHP, MySQL, jQuery, CSS, HTML and, above all, the Twitter Bootstrap framework, which was a great help in the development of the project.
Abstract:
Stacker Game for HTML5 proposes a web application with two game modes based on the classic Stacker. The classic stacker mode aims to simulate that game: the player must stack a horizontal row of squares, moving horizontally at constant speed, on top of another horizontal row of stationary squares at the bottom. The speed of the moving row increases as levels are cleared. The game ends when no squares remain in the row; squares are lost whenever the stack is not aligned exactly. The other game mode is known as super stacker. In this mode, the player must stack a series of differently shaped figures on top of other static figures that form part of a generated world. The figures the player stacks are subject to forces such as gravity, collisions between objects, and friction. If any of these figures touches one of the world's boundaries, the player loses; the player wins when the final structure holds for a given number of seconds, advancing to another level (scenario) of greater complexity. This game mode required a physics engine ported to Javascript to simulate the forces mentioned above. It is also worth noting that a responsive design was chosen, using frameworks such as Bootstrap 3, given the boom in mobile devices with varying screen sizes.
Abstract:
Re-identification consists of identifying again an individual or object that has already been detected previously from different cameras. This project explores different techniques for person re-identification. Techniques that require no prior learning are implemented and tested to produce an initial ranking, since this type of method has the widest application in real scenarios. Re-ranking techniques are then applied on top of this initial ranking using input from a human operator, applying, among other methods, semi-supervised learning. To carry out the whole process and to ease the combination and automation of the various techniques, a framework called PyReID was created, based on Python and OpenCV, free software and publicly available on Github.
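A learning-free initial ranking of the kind described above can be sketched with plain OpenCV colour histograms (an illustrative baseline only, not PyReID's actual pipeline or API):

```python
import cv2
import numpy as np

def hsv_histogram(image_bgr, bins=(8, 8, 8)):
    """HSV colour histogram, a common learning-free appearance
    descriptor for person re-identification."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def initial_ranking(probe_bgr, gallery_bgr):
    """Rank gallery images by descriptor distance to the probe (most
    similar first); a human operator or a semi-supervised step can
    then re-rank this list."""
    d = hsv_histogram(probe_bgr)
    dists = [np.linalg.norm(d - hsv_histogram(g)) for g in gallery_bgr]
    return np.argsort(dists)
```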
Abstract:
Bioinformatics is a recent and emerging discipline which aims at studying biological problems through computational approaches. Most branches of bioinformatics, such as Genomics, Proteomics, and Molecular Dynamics, are particularly computationally intensive, requiring huge amounts of computational resources for running algorithms of ever-increasing complexity over data of ever-increasing size. In the search for computational power, the EGEE Grid platform, the world's largest community of interconnected clusters load-balanced as a whole, seems particularly promising and is considered the new hope for satisfying the ever-increasing computational requirements of bioinformatics, as well as physics and other computational sciences. The EGEE platform, however, is rather new and not yet free of problems. In addition, specific requirements of bioinformatics need to be addressed in order to use this new platform effectively for bioinformatics tasks. In my three years of Ph.D. work I addressed numerous aspects of this Grid platform, with particular attention to those needed by the bioinformatics domain. I created three major frameworks, Vnas, GridDBManager, and SETest, plus an additional smaller standalone solution, to enhance the support for bioinformatics applications in the Grid environment and to reduce the effort needed to create new applications, additionally addressing numerous existing Grid issues and performing a series of optimizations. The Vnas framework is an advanced system for the submission and monitoring of Grid jobs that provides an abstraction with reliability over the Grid platform. In addition, Vnas greatly simplifies the development of new Grid applications by providing a callback system that eases the creation of arbitrarily complex multistage computational pipelines, and it provides an abstracted virtual sandbox which bypasses Grid limitations. Vnas also reduces the usage of Grid bandwidth and storage resources by transparently detecting the equality of virtual sandbox files based on content, across different submissions, even when performed by different users. BGBlast, an evolution of the earlier GridBlast project, now provides a Grid Database Manager (GridDBManager) component for managing and automatically updating biological flat-file databases in the Grid environment. GridDBManager offers novel features such as an adaptive replication algorithm that constantly optimizes the number of replicas of the managed databases in the Grid environment, balancing response times (performance) against storage costs according to a programmed cost formula. GridDBManager also provides highly optimized automated management of older database versions based on reverse delta files, which reduces the storage cost of keeping such older versions available in the Grid environment by two orders of magnitude. The SETest framework gives the user a way to test and regression-test Python applications riddled with side effects (a common case with Grid computational pipelines), which could not easily be tested using the more standard methods of unit testing or test cases. The technique is based on a new concept of datasets containing the invocations and results of filtered calls. The framework hence significantly accelerates the development of new applications and computational pipelines for the Grid environment, and reduces the effort required for maintenance. An analysis of the impact of these solutions is provided in this thesis. This Ph.D. work originated various publications in journals and conference proceedings, as reported in the Appendix. I also presented my work orally at numerous international conferences related to Grid computing and bioinformatics.
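The record-and-compare idea behind SETest can be illustrated with a small decorator (a hedged sketch of the general technique only; SETest's real design and API are not shown in the abstract):

```python
import functools
import json

recorded = {}  # dataset of invocations and their recorded results

def record_calls(fn):
    """Capture each invocation of a side-effecting function together
    with its result; on a later regression run, compare new results
    against the recorded dataset."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        key = json.dumps([fn.__name__, args, sorted(kwargs.items())],
                         default=str)
        result = fn(*args, **kwargs)
        if key in recorded and recorded[key] != result:
            raise AssertionError(f"regression in {fn.__name__}{args}")
        recorded[key] = result
        return result
    return wrapper

@record_calls
def pipeline_stage(x):
    return x * 2  # stands in for a call with side effects

pipeline_stage(21)  # the first run records the result
pipeline_stage(21)  # later runs are checked against the record
```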
Abstract:
Assessing macroseismic intensity through a formal procedure that is transparent and objective, and that yields numerical values through rigorous choices and criteria, is a key step towards the proper treatment and use of macroseismic information. Macroseismic data can indeed have important applications in seismotectonic analyses and in seismic hazard estimation. This thesis addressed the problem of formalizing intensity estimation, improving both theoretical and practical aspects through three fundamental steps developed in the MS-Excel and Matlab environments: i) the collection and archiving of the macroseismic dataset; ii) the association (membership function) between effects and intensity degrees of the macroseismic scale through the principles of fuzzy set logic; iii) the application of rigorous and objective decision algorithms for the estimation of the final intensity. The whole procedure was applied to seven Italian earthquakes, exploiting various possibilities, including methodological ones such as building membership functions by combining the macroseismic information of several earthquakes: Monte Baldo (1876), Valle d'Illasi (1891), Marsica (1915), Santa Sofia (1918), Mugello (1919), Garfagnana (1920), and Irpinia (1930). The results showed good statistical agreement with the intensities of a reference macroseismic catalogue, confirming the validity of the whole methodology. The derived intensities were then used for seismotectonic analyses in the areas of the earthquakes studied. Statistical analysis methods applied to intensity data points (the geographical distribution of the assigned intensities) have proven in the past to be a powerful tool for seismotectonic analysis and characterization, determining the main parameters (epicentral location, length, width, orientation) of the possible seismogenic source. This thesis enhanced some aspects of these analysis methodologies through specific applications developed in Matlab, which also made it possible to estimate the uncertainties associated with the source parameters by means of statistical resampling techniques. A systematic analysis of the studied earthquakes was carried out by combining the various methods for estimating source parameters with the original intensity data points and with those recalculated through the fuzzy decision procedures. The results made it possible to evaluate the characteristics of the possible sources and to formulate seismotectonic hypotheses, some of which found circumstantial support in geological and structural-geological data. Some events (1915, 1918, 1920) show strong stability of the computed parameters (epicentral location and geometry of the possible source) with small associated uncertainties. Other events (1891, 1919, and 1930) instead showed greater variability both in the epicentral location and in the geometry of the source boxes: for the first event this is probably related to the limited size of the intensity dataset, while for the others it may reflect multiple seismogenic sources. The bootstrap analysis also highlighted, in some cases, possible asymmetries in the distributions of some parameters (e.g., the azimuth of the possible structure), which could suggest rupture mechanisms on several distinct faults.
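The effect-to-intensity association in step ii) can be illustrated with a generic fuzzy membership function (a minimal sketch; the thesis builds its membership functions empirically from macroseismic data, and the parameters below are hypothetical):

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], 1 at the peak b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# degree of membership of an observed effect score in two adjacent
# macroseismic intensity grades (hypothetical parameters)
effect_score = 6.3
print(triangular_membership(effect_score, 5.0, 6.0, 7.0))  # grade VI  -> 0.7
print(triangular_membership(effect_score, 6.0, 7.0, 8.0))  # grade VII -> 0.3
```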
Abstract:
The thesis deals with aspects of structural optimization. Optimization algorithms written in the Python programming language, both based on the simplex method and of the genetic type, were integrated into the Salome-Meca/CAE environment and applied to examples of structural interest.
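As a minimal sketch of the simplex-based, derivative-free branch of such algorithms (assuming the downhill-simplex variant is meant), scipy's Nelder-Mead method can minimize a toy structural objective (the objective and constraint values below are hypothetical, not taken from the thesis):

```python
import numpy as np
from scipy.optimize import minimize

def mass_with_penalty(x):
    """Toy objective: mass of two bar cross-sections, with a penalty
    added when a stress-like constraint is violated."""
    a1, a2 = np.maximum(x, 1e-9)             # keep areas positive
    mass = 7800.0 * (a1 + a2)                # density * total section area
    stress = 1.0 / a1 + 1.0 / a2             # crude stress proxy
    return mass + 1e6 * max(0.0, stress - 50.0)

# Nelder-Mead is the derivative-free downhill-simplex method
result = minimize(mass_with_penalty, x0=[0.1, 0.1], method="Nelder-Mead")
print(result.x, result.fun)
```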