928 results for python django bootstrap
Abstract:
Evaluation of variance estimation methods in complex samples. The need to know a population drives a process of collecting and analysing information. It is usually very difficult or impossible to study the entire population, hence the importance of studies based on samples. Designing a sampling study is a complex process, from before data collection through to the analysis phase. Most studies combine several probabilistic sampling methods to select a sample that is intended to be representative of the population, which is known as a complex sampling design. Knowledge of sampling errors is essential for the correct interpretation of survey results and for the evaluation of their sampling plans. In complex samples, variance estimation relies on approximations adjusted to the complex nature of the sample design, the most common being the Taylor linearization method and resampling/replication techniques. The main objective of this work is to evaluate the performance of the usual variance estimators in complex samples. Inspired by a real data set, three populations with distinct characteristics were generated, from which samples were drawn under different sampling designs, in order to obtain some indication of the situations in which each variance estimator should be preferred. Based on the results obtained, the performance of the Taylor, jackknife and bootstrap estimators of the variance of the sample mean varies with the design and the population. In general, the bootstrap estimator is the least precise, and in stratified designs the Taylor and jackknife estimators give the same results.
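As a rough illustration of the three estimators being compared, the sketch below computes the variance of an estimated mean under a stratified simple random sample with the classical design-based (Taylor-type) formula, the delete-one jackknife, and a naive within-stratum bootstrap. It is not the simulation code of the thesis: the stratum sizes, the generated data and the use of a plain (non-rescaled) bootstrap are assumptions made only for illustration.

```python
# Illustrative sketch: variance of an estimated mean under stratified simple
# random sampling, with three estimators playing the roles of Taylor
# linearization, the delete-1 jackknife, and a naive within-stratum bootstrap.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population stratum sizes and one drawn sample per stratum.
N_h = {"A": 5000, "B": 2000, "C": 800}
samples = {h: rng.normal(loc=mu, scale=sd, size=n)
           for (h, mu, sd, n) in [("A", 10, 2, 50), ("B", 14, 3, 30), ("C", 20, 5, 20)]}
N = sum(N_h.values())

def stratified_mean(samples):
    return sum(N_h[h] / N * y.mean() for h, y in samples.items())

def taylor_variance(samples):
    # Classic stratified-SRS formula: sum_h W_h^2 (1 - f_h) s_h^2 / n_h
    v = 0.0
    for h, y in samples.items():
        n_h, W_h, f_h = len(y), N_h[h] / N, len(y) / N_h[h]
        v += W_h**2 * (1 - f_h) * y.var(ddof=1) / n_h
    return v

def jackknife_variance(samples):
    # Delete one unit at a time within its stratum and rescale that stratum.
    theta_full, v = stratified_mean(samples), 0.0
    for h, y in samples.items():
        n_h = len(y)
        reps = []
        for i in range(n_h):
            y_del = np.delete(y, i)
            reps.append(sum(N_h[g] / N * (y_del.mean() if g == h else yg.mean())
                            for g, yg in samples.items()))
        reps = np.array(reps)
        v += (n_h - 1) / n_h * np.sum((reps - theta_full) ** 2)
    return v

def bootstrap_variance(samples, B=1000):
    # Naive within-stratum bootstrap (resample n_h units with replacement).
    reps = []
    for _ in range(B):
        boot = {h: rng.choice(y, size=len(y), replace=True) for h, y in samples.items()}
        reps.append(stratified_mean(boot))
    return np.var(reps, ddof=1)

print("mean     :", stratified_mean(samples))
print("Taylor   :", taylor_variance(samples))
print("jackknife:", jackknife_variance(samples))
print("bootstrap:", bootstrap_variance(samples))
```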
Abstract:
Exotic animal medicine. This traineeship report describes the activities carried out during the curricular traineeship and develops the theme "Artificial insemination in Python regius". The six-month internship took place at Exoclinic – clínica veterinária de aves e exóticos, and made it possible to consolidate knowledge through clinical practice and to add to it the field of exotic animal medicine, with special attention to reptile medicine. The most frequent clinical cases are discussed: preventive medicine in the various classes of animals, dental pathology in rodents and lagomorphs, psittacine beak and feather disease, and anorexia in reptiles. The monograph addresses the reproductive anatomy, physiology and behaviour of snakes and their influence on reproductive management in captivity, methods for the selection and evaluation of breeding animals, and artificial insemination techniques. A practical trial was carried out with six animals, involving semen collection, evaluation of some semen parameters, follicular monitoring by ultrasound and artificial insemination. The methods, results and corresponding discussion are presented.
Abstract:
3. PRACTICAL RESOLUTION OF DIFFERENTIAL SYSTEMS by Marilia Pires, University of Évora, Portugal. This practice presents the main features of free software for solving mathematical equations derived from concrete problems: i.- Presentation of Scilab (or Python); ii.- Basics (numbers, characters, functions); iii.- Graphics; iv.- Linear and nonlinear systems; v.- Differential equations.
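As a taste of item v, the following sketch solves a nonlinear ODE system with the free scientific Python stack (NumPy, SciPy, Matplotlib). The Lotka–Volterra system and its parameters are chosen here only as an illustration; they are not part of the course material.

```python
# Small example in the spirit of item v: solving a nonlinear ODE system
# (Lotka-Volterra predator-prey, chosen only as an illustration).
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def lotka_volterra(t, z, a=1.0, b=0.1, c=1.5, d=0.075):
    x, y = z                      # prey, predators
    return [a * x - b * x * y,    # dx/dt
            -c * y + d * x * y]   # dy/dt

sol = solve_ivp(lotka_volterra, t_span=(0, 40), y0=[10, 5],
                dense_output=True, rtol=1e-8)

t = np.linspace(0, 40, 1000)
x, y = sol.sol(t)
plt.plot(t, x, label="prey")
plt.plot(t, y, label="predators")
plt.xlabel("t"); plt.legend(); plt.show()
```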
Abstract:
The proposed course is divided into seven chapters, ranging from a presentation of the importance of image analysis in geology to the discussion and application of machine learning in image analysis. I am an advocate of free software, so all the programs used in this course fall into that category. The examples presented are demonstrated with the following programs: QGIS – geographic information systems; GIMP – image processing; R – computation; RStudio – IDE for R; Anaconda Python Notebook – IDE for Python; OpenCV – computer vision. The course that this text supports is intended to be eminently practical, a hands-on course, and each participant is expected to be able to work on topics of personal interest. The first chapter is an introduction to what images are and their importance in geology. The second chapter describes the steps for installing the proposed software and provides small examples of its use. The third chapter describes the methods and limitations of image acquisition and gives some examples of image-acquisition functions; the practical examples in this chapter include examples in Python and R. The fourth chapter deals with the parameters contained in an image file, with examples in Python. The fifth chapter covers the tools that can be applied during the pre-processing of an image. The sixth chapter shows some examples of image analysis, and the seventh chapter addresses the use of machine learning algorithms in image analysis.
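As a flavour of the Python examples in the third and fourth chapters (image acquisition and image-file parameters), here is a minimal OpenCV snippet; the file name thin_section.jpg is a placeholder, not a dataset from the course.

```python
# Illustrative Python/OpenCV snippet: load an image and inspect the basic
# parameters stored with it.  "thin_section.jpg" is a placeholder filename.
import cv2

img = cv2.imread("thin_section.jpg")          # BGR array, or None if missing
if img is None:
    raise FileNotFoundError("thin_section.jpg not found")

height, width, channels = img.shape
print("size      :", width, "x", height, "pixels")
print("channels  :", channels)                 # 3 for a colour image
print("dtype     :", img.dtype)                # usually uint8 (0-255)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # simple pre-processing step
cv2.imwrite("thin_section_gray.png", gray)
```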
Abstract:
In this school, we introduced the basics of the mathematical analysis needed to study differential equations (ordinary and partial). One aim was to prepare students and staff members for more concrete problems arising in mathematical modelling of engineering and biological processes. Theoretical and numerical lectures were given, with a presentation of free scientific computing software using Python.
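To give a flavour of the numerical lectures, here is a minimal sketch of an explicit finite-difference scheme for the 1D heat equation; the equation, parameters and initial condition are illustrative choices, not material taken from the school.

```python
# Explicit finite differences for the 1D heat equation u_t = alpha * u_xx
# with homogeneous Dirichlet boundary conditions.  Parameters are illustrative.
import numpy as np

alpha, L, T = 1.0, 1.0, 0.1
nx, nt = 51, 2000
dx, dt = L / (nx - 1), T / nt
assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability condition"

x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)                 # initial condition
for _ in range(nt):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                # Dirichlet boundaries

# Exact solution for this initial condition: exp(-pi^2 * alpha * T) * sin(pi x)
print("max error:", np.abs(u - np.exp(-np.pi**2 * alpha * T) * np.sin(np.pi * x)).max())
```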
A website and a drive were created to facilitate exchanges between students, lecturers and organizers:
Abstract:
Structured abstract. Purpose: To deepen, in the grocery retail context, the roles of consumer perceived value and consumer satisfaction as antecedent dimensions of customer loyalty intentions. Design/Methodology/Approach: Employing a short version (12 items) of the original 19-item PERVAL scale of Sweeney & Soutar (2001), a structural equation modelling approach was applied to investigate the statistical properties of the indirect influence on loyalty of a reflective second-order customer perceived value model. The performance of three alternative estimation methods was compared through bootstrapping techniques. Findings: The results provided i) support for the use of the short form of the PERVAL scale in measuring consumer perceived value; ii) evidence that the influence of the four highly correlated independent latent predictors on satisfaction is well summarized by a higher-order reflective specification of consumer perceived value; iii) evidence that the emotional and functional dimensions were determinant for the relationship with the retailer; iv) evidence that parameter bias under the three estimation methods was only significant for small bootstrap sample sizes. Research limitations/implications: Future research is needed to explore the use of the short form of the PERVAL scale in more homogeneous groups of consumers. Originality/value: Firstly, to indirectly explain customer loyalty mediated by customer satisfaction, a recent short form of the PERVAL scale and a second-order reflective conceptualization of value were adopted. Secondly, three alternative estimation methods were used and compared through bootstrapping and simulation procedures.
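As a generic illustration of how bootstrapping can expose estimator bias at small sample sizes, the sketch below bootstraps an ordinary regression slope as a stand-in for a structural path coefficient; it does not reproduce the PERVAL measurement model or the three SEM estimation methods compared in the paper.

```python
# Generic sketch of a bootstrap bias check, using a regression slope as a
# stand-in for a structural path coefficient (the actual study fits a
# second-order SEM, which is not reproduced here).
import numpy as np

rng = np.random.default_rng(1)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

def bootstrap_bias(x, y, B=2000):
    est = slope(x, y)
    idx = rng.integers(0, len(x), size=(B, len(x)))
    boot = np.array([slope(x[i], y[i]) for i in idx])
    return boot.mean() - est          # bootstrap estimate of bias

for n in (25, 100, 400):              # "small" vs. larger sample sizes
    x = rng.normal(size=n)
    y = 0.5 * x + rng.normal(scale=1.0, size=n)
    print(f"n={n:4d}  bias~{bootstrap_bias(x, y):+.4f}")
```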
Abstract:
We present an initial version of the solution under development for estimating the desired effects with the univariate animal model, using two distinct approaches to obtain the best linear unbiased predictor (BLUP) of the model parameters.
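As an illustration of one classical route to BLUP in a univariate animal model, the sketch below builds and solves Henderson's mixed model equations for a tiny made-up example; it is not the solution under development, and the pedigree, records and variance ratio are assumptions.

```python
# Minimal sketch of one classical route to BLUP in a univariate animal model:
# building and solving Henderson's mixed model equations.
import numpy as np

# Records for 3 animals, one fixed effect (overall mean).
y = np.array([4.5, 3.8, 5.1])
X = np.ones((3, 1))                 # design matrix of fixed effects
Z = np.eye(3)                       # each animal has one record

# Inverse numerator relationship matrix for 3 unrelated animals (identity);
# a real application would build A^-1 from the pedigree.
Ainv = np.eye(3)
lam = 2.0                           # sigma_e^2 / sigma_a^2 (assumed known)

# Henderson's mixed model equations:
# [X'X   X'Z            ] [b]   [X'y]
# [Z'X   Z'Z + lam*Ainv ] [a] = [Z'y]
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * Ainv]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)

b_hat, a_hat = sol[:1], sol[1:]
print("BLUE of fixed effect :", b_hat)
print("BLUP of breeding vals:", a_hat)
```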
Abstract:
Objective The objective of this study was to develop a clinical nomogram to predict gallium-68 prostate-specific membrane antigen positron emission tomography/computed tomography (68Ga-PSMA-11-PET/CT) positivity in different clinical settings of PSA failure. Materials and methods Seven hundred three (n = 703) prostate cancer (PCa) patients with confirmed PSA failure after radical therapy were enrolled. Patients were stratified according to different clinical settings (first-time biochemical recurrence [BCR]: group 1; BCR after salvage therapy: group 2; biochemical persistence after radical prostatectomy [BCP]: group 3; advanced-stage PCa before second-line systemic therapies: group 4). First, we assessed the 68Ga-PSMA-11-PET/CT positivity rate. Second, multivariable logistic regression analyses were used to determine predictors of a positive scan. Third, regression-based coefficients were used to develop a nomogram predicting a positive 68Ga-PSMA-11-PET/CT result, and 200 bootstrap resamples were used for internal validation. Fourth, receiver operating characteristic (ROC) analysis was used to identify the most informative nomogram-derived cut-off. Decision curve analysis (DCA) was implemented to quantify the nomogram's clinical benefit. Results The overall 68Ga-PSMA-11-PET/CT positivity rate was 51.2%: 40.3% in group 1, 54% in group 2, 60.5% in group 3, and 86.9% in group 4 (p < 0.001). At multivariable analysis, ISUP grade, PSA, PSA doubling time, and clinical setting were independent predictors of a positive scan (all p ≤ 0.04). A nomogram based on the covariates included in the multivariable model demonstrated a bootstrap-corrected accuracy of 82%. The best nomogram-derived cut-off value was 40%. In DCA, the nomogram revealed a clinical net benefit of > 10%. Conclusions This novel nomogram proved accurate in predicting a positive scan, with values ≥ 40% providing the most informative cut-off for counselling patients towards 68Ga-PSMA-11-PET/CT. This tool may serve as a guide to clinicians in the best use of PSMA-based PET imaging.
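The sketch below mimics the modelling steps (multivariable logistic regression plus 200-resample bootstrap validation with a Harrell-style optimism correction) on synthetic data; the covariates and coefficients are invented stand-ins for ISUP grade, PSA, PSA doubling time and clinical setting, and the code is not the study's analysis.

```python
# Sketch of the modelling pipeline: logistic model + bootstrap internal
# validation, on synthetic data rather than the patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 703
# Hypothetical predictors standing in for ISUP grade, PSA, PSA doubling time
# and clinical setting; the true coefficients below are invented.
X = rng.normal(size=(n, 4))
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * X[:, 2] + 0.4 * X[:, 3])))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Optimism correction with 200 bootstrap resamples (Harrell-style).
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print("apparent AUC       :", round(apparent_auc, 3))
print("bootstrap-corrected:", round(apparent_auc - np.mean(optimism), 3))
```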
Abstract:
This thesis provides a necessary and sufficient condition for asymptotic efficiency of a nonparametric estimator of the generalised autocovariance function of a Gaussian stationary random process. The generalised autocovariance function is the inverse Fourier transform of a power transformation of the spectral density, and encompasses the traditional and inverse autocovariance functions. Its nonparametric estimator is based on the inverse discrete Fourier transform of the same power transformation of the pooled periodogram. The general result is then applied to the class of Gaussian stationary ARMA processes and its implications are discussed. We illustrate that, for a class of contrast functionals and spectral densities, the minimum contrast estimator of the spectral density satisfies a Yule-Walker system of equations in the generalised autocovariance estimator. Selection of the pooling parameter, which characterizes the nonparametric estimator of the generalised autocovariance and controls its resolution, is addressed by using a multiplicative periodogram bootstrap to estimate the finite-sample distribution of the estimator. A multivariate extension of recently introduced spectral models for univariate time series is considered, and an algorithm for the coefficients of a power transformation of matrix polynomials is derived, which makes it possible to obtain the Wold coefficients from the matrix coefficients characterizing the generalised matrix cepstral models. This algorithm also allows the definition of the matrix variance profile, providing important quantities for vector time series analysis. A nonparametric estimator based on a transformation of the smoothed periodogram is proposed for estimation of the matrix variance profile.
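A minimal sketch of the nonparametric estimator described above: the generalised autocovariance of order p is read off the inverse DFT of the p-th power of the periodogram. The pooling step, which controls the estimator's resolution and matters most for negative powers, is omitted, and the AR(1) series is simulated only for illustration.

```python
# Minimal sketch: inverse DFT of the p-th power of the periodogram (pooling
# omitted).  p = 1 reproduces the usual sample autocovariances.
import numpy as np

def generalised_autocovariance(x, p=1.0, max_lag=10):
    n = len(x)
    x = np.asarray(x, float) - np.mean(x)
    I = np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * n)       # periodogram
    I[0] = I[1]                                            # guard frequency 0
    gamma = np.real(np.fft.ifft(I ** p)) * (2 * np.pi) ** p
    return gamma[: max_lag + 1]

# Illustration on a simulated AR(1) series (not data from the thesis).
rng = np.random.default_rng(0)
e = rng.normal(size=4000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.6 * x[t - 1] + e[t]

print("estimated gamma_1(0..3):", generalised_autocovariance(x, p=1)[:4].round(3))
print("AR(1) theoretical      :", (1 / (1 - 0.36) * 0.6 ** np.arange(4)).round(3))
```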
Abstract:
This dissertation explores the link between hate crimes that occurred in the United Kingdom in June 2017, June 2018 and June 2019 and the posts of a robust sample of Conservative and radical right users on Twitter. In order to avoid the traditional challenges of this kind of research, I adopted a four-stage research protocol that enabled me to merge content produced by a group of randomly selected users and to observe the phenomenon from different angles. I collected tweets from thirty Conservative/right-wing accounts for each month of June over the three years with the help of programming languages such as Python and CygWin tools. I then examined the language of my data, focussing on humorous content, in order to reveal whether, and if so how, radical users online use humour as a tool to spread their views in conditions of heightened disgust and widespread political instability. A reflection on humour as a moral occurrence, expanding on the works of Christie Davies as well as applying recent findings on the behavioural immune system to online data, offers new insights into the overlooked humorous nature of radical political discourse. An unorthodox take on the moral foundations pioneered by Jonathan Haidt enriched my understanding of the analysed material through the addition of a moral-based layer of enquiry to my more traditional content-based one. This convergence of theory, data and real-life events constitutes a viable "collection of strategies" for academia, data scientists, NGOs fighting hate crimes and the wider public alike. Bringing the ideas of Davies, Haidt and others to bear on my data helps us to perceive humorous online content in terms of complex radical narratives that are all too often compressed into a single tweet.
Abstract:
This thesis deals with optimization techniques and modeling of vehicular networks. Using integer linear programming (ILP) models and heuristic ones, it was possible to study the performance of 5G networks for vehicular communications. Thanks to the Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) paradigms, it was possible to study the performance of different classes of service, such as the Ultra-Reliable Low-Latency Communications (URLLC) class and the enhanced Mobile BroadBand (eMBB) class, and how the functional split can have positive effects on network resource management. Two different protection techniques have been studied: Shared Path Protection (SPP) and Dedicated Path Protection (DPP). These protections make it possible to achieve different network reliability requirements, according to the needs of the end user. Moreover, thanks to a simulator developed in Python, it was possible to study the dynamic allocation of resources in a 5G metro network. Through different provisioning algorithms and different dynamic resource management techniques, useful results have been obtained for understanding the needs of the vehicular networks that will exploit 5G. Finally, two models are shown for reconfiguring backup resources when shared-resource protection is used.
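As a toy version of the kind of dynamic provisioning study described, the sketch below simulates Poisson connection requests on a small four-node ring with a single precomputed path per demand and reports the blocking probability; the topology, capacities and traffic parameters are invented and the code is unrelated to the thesis simulator.

```python
# Toy dynamic-provisioning simulation: Poisson arrivals, exponential holding
# times, first-fit admission on one precomputed path per demand.
import heapq, random

random.seed(0)
CAPACITY = 10                                    # capacity units per link
links = {("A", "B"): 0, ("B", "C"): 0, ("C", "D"): 0, ("D", "A"): 0}
paths = {("A", "C"): [("A", "B"), ("B", "C")],   # one candidate path per demand
         ("B", "D"): [("B", "C"), ("C", "D")],
         ("A", "D"): [("D", "A")]}

def simulate(n_requests=20000, arrival_rate=5.0, mean_holding=1.0):
    t, blocked, releases = 0.0, 0, []
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)
        while releases and releases[0][0] <= t:           # free expired demands
            _, path = heapq.heappop(releases)
            for link in path:
                links[link] -= 1
        path = paths[random.choice(list(paths))]
        if all(links[link] < CAPACITY for link in path):  # admit if capacity left
            for link in path:
                links[link] += 1
            heapq.heappush(releases, (t + random.expovariate(1 / mean_holding), path))
        else:
            blocked += 1
    return blocked / n_requests

print("blocking probability:", simulate())
```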
Abstract:
In the last few decades, the offshore field has grown rapidly, especially after the notable development of technologies, the exploration of oil and gas in deep water, and the strong interest of offshore companies in renewable energy, mainly wind energy. Fatigue damage has been identified as one of the main causes of failure of offshore structures. The purpose of this research is to evaluate the Stress Concentration Factor and its influence on the fatigue life of two tubular KT-joints in an offshore jacket structure using different calculation methods. The work is done using analytical calculations, mainly Efthymiou's formulations, and numerical solutions (FEM analysis) using the ABAQUS software. For the analytical formulations, the calculations were performed according to the geometrical parameters of each method using Excel sheets. For the numerical model, two different types of tubular KT-joints are considered; for each model, 5 shell-element, 3 solid-element and 3 solid-with-weld-element models were built in ABAQUS. Meshing was assigned according to the International Institute of Welding (IIW) recommendations, with 5 mesh element types, to evaluate the hot-spot stresses. 23 different unitary loading conditions were assigned: 9 axial, 7 in-plane bending moment and 7 out-of-plane bending moment loads. The extraction of the hot-spot stresses and the evaluation of the Stress Concentration Factor were done using Python scripting and MATLAB. Then, the fatigue damage of a critical KT tubular joint was evaluated with the Simplified Fatigue Damage Rule and local approaches (strain damage parameter and stress damage parameter), based on the maximum Stress Concentration Factor obtained from the DNV and FEA methods. In conclusion, this research allowed us to compare different results for the Stress Concentration Factor and fatigue life obtained with different methods, and provided a general overview of what to study next in the future.
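As an illustration of the final step (turning a stress concentration factor into hot-spot stress ranges and a fatigue-damage estimate), the sketch below applies a one-slope S-N curve and Miner's rule; the SCF value, S-N constants and load spectrum are placeholders, not DNV parameters or results from the study.

```python
# Illustrative sketch: SCF -> hot-spot stress ranges -> Miner damage with a
# one-slope S-N curve.  All numbers are placeholders, not DNV values.
import math

SCF = 3.2                          # e.g. the maximum SCF found for the joint
sn_m, sn_loga = 3.0, 12.0          # S-N curve: log10(N) = loga - m*log10(S)

def cycles_to_failure(stress_range_mpa):
    return 10 ** (sn_loga - sn_m * math.log10(stress_range_mpa))

# Nominal stress-range spectrum (MPa) and annual number of cycles per block.
spectrum = [(5.0, 2.0e6), (10.0, 5.0e5), (20.0, 5.0e4)]

damage = 0.0
for nominal_range, n_cycles in spectrum:
    hot_spot_range = SCF * nominal_range        # hot-spot stress range
    damage += n_cycles / cycles_to_failure(hot_spot_range)

print(f"annual Miner damage: {damage:.3f}")
print(f"fatigue life       : {1.0 / damage:.1f} years")
```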
Abstract:
The first part of my work presents a study of an initial "from scratch" solution developed by Andrew Karpathy. Two improvements of mine follow: the first directly modifies the code of the previous solution and introduces, as an additional objective for the network in the early stages of play, the interception of the ball by the paddle, which improves the initial training; the second is my own implementation using more complex algorithms, which represent the state of the art on Atari games and lead to much faster training of the network.
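A minimal sketch of the kind of reward shaping described for the first improvement: besides the game score, the agent receives a small bonus whenever the paddle is vertically aligned with the ball. The function name, bonus and tolerance are illustrative and not taken from the actual code.

```python
# Reward shaping sketch: add an interception bonus to the raw Pong reward
# during early training.  Names and thresholds are illustrative only.
def shaped_reward(game_reward, paddle_y, ball_y, bonus=0.1, tolerance=8):
    """Return the game reward plus a bonus when the paddle tracks the ball."""
    intercept_bonus = bonus if abs(paddle_y - ball_y) <= tolerance else 0.0
    return game_reward + intercept_bonus

# Example: a frame where the paddle tracks the ball but no point was scored.
print(shaped_reward(game_reward=0.0, paddle_y=100, ball_y=105))   # -> 0.1
```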
Abstract:
In this thesis work, the aim was to create a solid methodology for generating steady-flow bench geometries, both for tumble and for swirl, at various valve lifts (in this case only the intake valves, but extensible to the exhaust valves as well), using the SALOME software, followed by the creation of the computational grid and finally by simulation in the OpenFOAM environment. First, the geometry created in an external CAD tool was imported into SALOME in STEP format. Next, the valves were positioned at the lift to be simulated, together with the creation of the false cylinder, which differs between the tumble and swirl cases. The flow-bench file was then imported, in STL format, into snappyHexMesh and the grid was generated; this grid was used for the simulation in the OpenFOAM environment with the rhoPorousSimpleFoam utility. Finally, the data were extracted to compute useful quantities, the tumble/swirl torque and the mass flow rate, and images of the flow fields were created with the ParaView post-processor. In parallel, the automation of the various phases was developed using scripts written both in Python and in bash.
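The sketch below shows the kind of Python driver that can chain the meshing and solver phases per valve lift; it assumes one already prepared OpenFOAM case directory per lift and that snappyHexMesh and rhoPorousSimpleFoam are available on the PATH, with directory names and lift values used as placeholders.

```python
# Sketch of a Python driver chaining the meshing and solver phases for each
# valve lift.  Assumes pre-built OpenFOAM cases; names below are placeholders.
import subprocess
from pathlib import Path

valve_lifts_mm = [2.0, 4.0, 6.0, 8.0]

def run(cmd, case_dir):
    """Run one OpenFOAM utility inside the case directory and save its log."""
    log = Path(case_dir) / f"log.{cmd[0]}"
    with open(log, "w") as fh:
        subprocess.run(cmd, cwd=case_dir, stdout=fh, stderr=subprocess.STDOUT, check=True)

for lift in valve_lifts_mm:
    case_dir = f"flowbench_lift_{lift:.1f}mm"       # one pre-built case per lift
    run(["snappyHexMesh", "-overwrite"], case_dir)  # mesh around the STL geometry
    run(["rhoPorousSimpleFoam"], case_dir)          # steady compressible solve
    print(f"lift {lift} mm done, logs in {case_dir}")
```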
Abstract:
The project described in this thesis was carried out at the Centro Protesi INAIL (Vigorso di Budrio, BO). The work was done in support of a research project, funded by the US Department of Defense, in collaboration with Northwestern University in Chicago and the Minneapolis Veterans Affairs Health Care System. The research aims to determine the comparative effectiveness of alternative methods for making the cast of the residual limb of a lower-limb amputee and the subsequent custom socket. The thesis project arises from the absence of commercial software able to analyse how the shape of the residual limb evolves, from the cast to the finished socket, based on three-dimensional digitization of the surfaces. The library developed is implemented in Python and uses computational geometry algorithms and tools to support the data-processing steps. The workflow consists of the following phases:
• acquisition and pre-processing of the data;
• digital identification of the landmarks;
• alignment of the models to orient them in a global reference system according to a common logic;
• registration of two models to align them to each other;
• generation of outcomes and dimensional parameters derived from distance maps, cross-sections, geodesic paths and regions of interest;
• extraction of summary statistical indicators of the differences, related to a set of scans through PCA.
The functionality was validated through dedicated tests on clinical data collected in the study or on synthetic data with known characteristics. The library provides a set of interfaces that makes it accessible to non-expert users as well, and is characterized by modularity, ease of installation and extensibility of its functionality. Future developments include the identification of possible optimizations arising from extending the use of the tools to more use cases.
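As a minimal sketch of one step of this workflow, the code below computes a point-to-point distance map between two already-aligned 3D scans with a k-d tree; the point clouds are synthetic stand-ins for a cast and a socket, not clinical data.

```python
# Minimal sketch of a distance map between two already-aligned 3D scans,
# using a k-d tree.  The point clouds here are synthetic, not clinical scans.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Reference scan: points on a cylinder-like surface (stand-in for the cast).
theta = rng.uniform(0, 2 * np.pi, 5000)
z = rng.uniform(0, 200, 5000)
reference = np.column_stack([40 * np.cos(theta), 40 * np.sin(theta), z])

# Second scan: the same shape slightly shrunk (stand-in for the socket).
compared = reference * [0.97, 0.97, 1.0] + rng.normal(scale=0.2, size=reference.shape)

# For every point of the compared model, distance to the closest reference point.
tree = cKDTree(reference)
distances, _ = tree.query(compared)

print("mean distance [mm]:", distances.mean().round(2))
print("95th percentile   :", np.percentile(distances, 95).round(2))
```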