983 results for Software Packages
Abstract:
This paper reviews nine software packages with particular reference to their GARCH model estimation accuracy when judged against a respected benchmark. We consider the numerical consistency of GARCH and EGARCH estimation and forecasting. Our results have a number of implications for published research and future software development. Finally, we argue that the establishment of benchmarks for other standard non-linear models is long overdue.
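For reference, the two volatility models under study take the following standard forms (a schematic sketch of the usual GARCH(1,1) and EGARCH(1,1) specifications; the exact parameterizations may differ across the reviewed packages):

$\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2$

$\ln \sigma_t^2 = \omega + \beta\,\ln \sigma_{t-1}^2 + \alpha\,\bigl(|z_{t-1}| - \mathrm{E}|z_{t-1}|\bigr) + \gamma\,z_{t-1}, \qquad z_t = \varepsilon_t / \sigma_t$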
Abstract:
The Environmental Data Abstraction Library provides a modular data management library for bringing new and diverse datatypes together for visualisation within numerous software packages, including the ncWMS viewing service, which already has very wide international uptake. The structure of EDAL is presented along with examples of its use to compare satellite, model and in situ data types within the same visualisation framework. We emphasize the value of this capability for cross calibration of datasets and evaluation of model products against observations, including preparation for data assimilation.
Abstract:
Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in the parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood, and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
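As an illustration of the model class being compared (a generic sketch, not the exact specification used in the study), a Poisson regression with a normally distributed random intercept for cluster $i$ and observation $j$ can be written as

$y_{ij} \mid u_i \sim \mathrm{Poisson}(\mu_{ij}), \qquad \log \mu_{ij} = x_{ij}^{\top}\beta + u_i, \qquad u_i \sim N(0, \sigma_u^2).$

The marginal likelihood requires integrating out the random effects $u_i$, an integral with no closed form, which is why the three estimation principles differ in how they approximate or avoid it.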
Abstract:
This dissertation aims to study and improve controller design methods for power systems; it addresses the dynamic stability of power systems and, therefore, the design of controllers that damp electromechanical oscillations in these systems. The choice of the methods studied here was guided by the requirements that a power system stabilizer (PSS) must satisfy, namely robustness, decentralization and coordination, and some of the methods had their characteristics improved to meet these requirements. The treatment of the methods studied was restricted to time-domain analysis, because the time-domain approach facilitates the modeling of parametric uncertainties, meeting the robustness requirement, and also allows decentralized control to be formulated in a simple way. Moreover, the time-domain approach allows the design problem to be formulated using linear matrix inequalities (LMIs), whose advantages are that the solution set is always convex and that efficient algorithms exist for computing a solution. Indeed, several software packages are available for solving linear matrix inequality problems. For this reason, output-feedback controller design methods always seek to cast the problem in LMI form, since this form guarantees that a solution will be obtained whenever one exists.
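As a minimal illustration of the LMI formulation mentioned above (a generic sketch, not the specific design conditions developed in the dissertation), the stabilization of $\dot{x} = Ax + Bu$ by state feedback $u = Kx$ can be cast as a convex feasibility problem through the change of variables $Y = KQ$:

$Q \succ 0, \qquad AQ + QA^{\top} + BY + Y^{\top}B^{\top} \prec 0, \qquad K = YQ^{-1}.$

Robustness to parametric uncertainty is then typically obtained by requiring the same $Q$ and $Y$ to satisfy the inequality at every vertex of a polytope of admissible $(A, B)$ pairs.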
Abstract:
Humans can perceive three dimensions; our world is three-dimensional, and it is becoming increasingly digital too. We have the need to capture and preserve our existence in digital form, perhaps because of our own mortality. We also have the need to reproduce objects, or to create small identical copies of them, in order to prototype, test or study them. Some objects have been lost through time and are only accessible through old photographs. With robust model generation from photographs we can use one of the biggest human data sets and reproduce real-world objects digitally and physically with printers. What is the current state of development in three-dimensional reconstruction from photographs, both in the commercial world and in the open source world? And what tools are available for developers to build their own reconstruction software? To answer these questions several pieces of software were tested, from full commercial software packages to small open source projects, including libraries aimed at computer vision. To bring the 3D models into the real world, a 3D printer was built, tested and analyzed, and its problems and weaknesses were evaluated. Lastly, using a computer vision library, a small piece of software with limited capabilities was developed.
Abstract:
Human T-lymphotropic virus type 1 (HTLV-1) is associated with adult T-cell leukemia (ATL) and HTLV-1 associated myelopathy/tropical spastic paraparesis (HAM/TSP) and has also been implicated in several disorders, including periodontal disease. The proviral load is an important biological marker for understanding HTLV-1 pathogenesis and elucidating whether or not the virus is related to the clinical manifestation of the disease. This study describes the oral health profile of HTLV-1 carriers and HAM/TSP patients in order to investigate the association between the proviral load in saliva and the severity of the periodontal disease and to examine virus intra-host variations from peripheral blood mononuclear cells and saliva cells. It is a cross-sectional analytical study of 90 individuals carried out from November 2006 to May 2008. Of the patients, 60 were HTLV-1 positive and 30 were negative. Individuals from the HTLV-1 positive and negative groups had similar mean age and socio-economic status. Data were analyzed using two available statistical software packages, STATA 8.0 and SPSS 11.0, to conduct frequency analysis. Differences of P < 0.05 were considered statistically significant. HTLV-1 patients had poorer oral health status when compared to seronegative individuals. A weak positive correlation between blood and saliva proviral loads was observed. The mean values of proviral load in blood and saliva in patients with HAM/TSP were greater than those in HTLV-1 carriers. The HTLV-1 molecular analysis from PBMC and saliva specimens suggests that HTLV-1 in saliva is due to lymphocyte infiltration from peripheral blood. A direct relationship between the proviral load in saliva and oral manifestations was observed. J. Med. Virol. 84:1428-1436, 2012. (c) 2012 Wiley Periodicals, Inc.
Abstract:
As time goes by, networks of GNSS (Global Navigation Satellite System) permanent stations are increasingly becoming a valuable support for satellite surveying techniques. They are at the same time an effective materialization of the reference frame and a useful aid to topographic surveying and to monitoring applications for deformation control. Alongside the now classical static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases, the determination of precise coordinates for the permanent stations is very important, to the point that it was decided to carry it out using different processing environments. Bernese and Gamit (which share the differenced approach) and Gipsy (which uses the undifferenced approach) were compared. The use of three software packages made it essential to define a common processing strategy able to guarantee that the ancillary data and the adopted physical parameters would not become a source of divergence between the solutions obtained. The analysis of national-scale networks, or of local networks over long time spans, involves processing thousands if not tens of thousands of files; in addition, sometimes because of trivial errors, or in order to carry out scientific tests, it is often necessary to repeat the processing. Many resources were therefore invested in developing automated procedures aimed, on the one hand, at preparing the archives and, on the other, at analysing the results and comparing them whenever more than one solution is available. These procedures were developed using the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the positions of the permanent stations of some important local and national networks and to compare some of the most important scientific codes that perform this task. As regards the comparison between the different software packages, it was found that:
• the solutions obtained from Bernese and Gamit (the two differenced packages) are always in perfect agreement;
• the Gipsy solutions (obtained with the undifferenced method) are almost always slightly more scattered than those of the other packages and sometimes show appreciable numerical differences from the other solutions, especially in the East coordinate; the differences are, however, limited to a few millimetres, and the lines describing the trends are in any case practically parallel to those of the other two codes;
• the above-mentioned East bias between Gipsy and the differenced solutions is more evident for certain antenna/radome combinations and seems to be related to the way the different packages use the absolute calibrations.
It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure the file of each station for each day is processed independently of the others, with an evidently greater flexibility of management: if an instrumental error is found on a single station, or if it is decided to add or remove a station from the network, it is not necessary to recompute the whole network.
Together with the other networks it was possible to analyse the Rete Dinamica Nazionale (RDN), not only over the 28 days that led to its first definition, but also over four further 28-day intervals, spaced six months apart and therefore covering an overall time span of two years. It was thus possible to verify that the RDN can be used to insert any Italian regional network into ITRF05 (International Terrestrial Reference Frame) despite the still limited time span. On the one hand, the ITRF velocities of the RDN stations were estimated (purely indicative and not official); on the other, a test was carried out in which a regional network was framed into ITRF through the RDN, and it was verified that there are no appreciable differences with respect to framing it into ITRF through an adequate number of IGS/EUREF stations (International GNSS Service / European REference Frame, Sub-Commission for Europe of the International Association of Geodesy).
Abstract:
The COMPASS experiment at the CERN SPS investigates the spin structure of the nucleon by scattering polarized muons off polarized nucleons. The contribution of the quarks to the nucleon spin measured in inclusive deep-inelastic scattering is not sufficient to explain the spin of the nucleon. It must therefore be clarified how the gluon polarization and the orbital angular momenta of quarks and gluons contribute to the total spin of the nucleon. Since the gluon polarization can only be estimated from the $Q^{2}$ dependence of the asymmetries in inclusive scattering, a direct measurement of the gluon polarization is needed. The COMPASS collaboration therefore determines the cross-section asymmetries for photon-gluon fusion processes, using on the one hand open charm production and on the other hand the production of hadron pairs with large transverse momenta. This work presents the measurement of the gluon polarization with the COMPASS data of the years 2003 and 2004. For the analysis, events with large momentum transfer ($Q^{2}>1$ $GeV^{2}/c^{2}$) and with hadron pairs with large transverse momentum ($p_{\perp}>0.7$ $GeV/c$) are used. The photon-nucleon asymmetry was determined from the weighted double ratio of the selected events. The cut $p_{\perp}>0.7$ $GeV/c$ suppresses leading-order processes and QCD Compton processes, so that the asymmetry is directly linked to the gluon polarization via the analyzing power. The measured value is very small and compatible with a vanishing gluon polarization. To avoid false asymmetries caused by changes in the detector acceptance, double ratios were studied in which the cross section cancels and only the detector asymmetries remain. It could be shown that the COMPASS spectrometer exhibits no significant time dependence. For the calculation of the analyzing power, Monte Carlo events were generated with the LEPTO generator and the COMGeant software package. A good description of the data by the Monte Carlo is very important; to improve it, JETSET parameters were optimized. The result is $\frac{\Delta G}{G}=0.054\pm0.145_{(stat)}\pm0.131_{(sys)}\pm0.04_{(MC)}$ at a mean momentum fraction of $\langle x_{gluon}\rangle=0.1$ and $\langle Q^{2}\rangle=1.9$ $GeV^{2}/c^{2}$. This result points to a very small gluon polarization and is consistent with the results of other methods, such as open charm production, and with the results obtained at the doubly polarized RHIC collider at BNL.
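Schematically, and neglecting the background terms that the full analysis must account for, the measured photon-nucleon asymmetry for high-$p_{\perp}$ hadron pairs is related to the gluon polarization through the fraction of photon-gluon fusion events and the analyzing power:

$A^{\gamma N} \approx R_{PGF}\,\langle \hat{a}_{LL} \rangle\,\frac{\Delta G}{G},$

which is why an accurate Monte Carlo description of the data is essential for extracting $\Delta G / G$.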
Abstract:
Bite mark analysis offers the opportunity to identify the biter based on the individual characteristics of the dentitions. Normally, the main focus is on analysing bite mark injuries on human bodies, but bite marks in food may also play an important role in the forensic investigation of a crime. This study presents a comparison of simulated bite marks in different kinds of food with the dentitions of the presumed biter. Bite marks were produced by six adults in slices of buttered bread, apples, different kinds of Swiss chocolate and Swiss cheese. The influence of time lapse on the bite mark in food, under room temperature conditions, was also examined. For the documentation of the bite marks and the dentitions of the biters, 3D optical surface scanning technology was used. The comparison was performed using two different software packages: the ATOS modelling and analysing software and the 3D Studio Max animation software. The ATOS software enables an automatic computation of the deviation between two meshes. In the present study, the bite marks and the dentitions were compared, as well as the meshes of each bite mark recorded at the different stages of time lapse. In the 3D Studio Max software, the act of biting was animated to compare the dentitions with the bite mark. The examined foods recorded the individual characteristics of the dentitions very well. In all cases, the biter could be identified, and the dentitions of the other presumed biters could be excluded. The influence of time lapse on the food depends on the kind of food and is shown in the diagrams. However, the identification of the biter could still be performed after a period of time, based on the recorded individual characteristics of the dentitions.
Abstract:
Analog filters and direct digital filters are implemented using digital signal processing techniques. Specifically, Butterworth, elliptic, and Chebyshev filters are implemented on the Motorola 56001 Digital Signal Processor by integrating three software packages: MATLAB, C++, and Motorola's Application Development System. The integrated environment allows the novice user to design a filter automatically by specifying the filter order and critical frequencies, while permitting more experienced designers to take advantage of MATLAB's advanced design capabilities. This project bridges the gap between the theoretical results produced by MATLAB and the practicalities of implementing digital filters on the Motorola 56001 Digital Signal Processor. While these results are specific to the Motorola 56001, they may be extended to other digital signal processors. MATLAB handles the filter calculations, a C++ routine handles the conversion to assembly code, and the Motorola software compiles and transmits the code to the processor.
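The design stage described above can also be sketched in a scripting environment; the following minimal example is a hypothetical illustration using Python's scipy.signal in place of MATLAB (it stops at coefficient generation and does not cover the conversion to Motorola 56001 assembly):

    # Sketch: design Butterworth, elliptic and Chebyshev low-pass filters from a
    # user-specified order and normalized critical frequency. This mirrors the
    # MATLAB design stage only; generating fixed-point DSP assembly is not shown.
    from scipy import signal

    order = 4       # filter order chosen by the user
    cutoff = 0.25   # critical frequency, as a fraction of the Nyquist rate

    b_butter, a_butter = signal.butter(order, cutoff, btype='low')
    b_ellip, a_ellip = signal.ellip(order, 1, 40, cutoff, btype='low')  # 1 dB ripple, 40 dB stopband
    b_cheby, a_cheby = signal.cheby1(order, 1, cutoff, btype='low')     # 1 dB passband ripple

    # The (b, a) coefficient pairs are what a code-generation routine would
    # translate into difference-equation code for the target processor.
    print(b_butter, a_butter)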
Abstract:
The Franches-Montagnes is an indigenous Swiss horse breed, with approximately 2500 foalings per year. The stud book is closed, and no introgression from other horse breeds has taken place since 1998. Since 2006, breeding values for 43 different traits (conformation, performance and coat colour) have been estimated with a best linear unbiased prediction (BLUP) multiple-trait animal model. In this study, we evaluated the genetic diversity of the breeding population, considering the years from 2003 to 2008. Only horses with at least one progeny during that time span were included. Results were obtained based on pedigree information as well as from molecular markers. A series of software packages was screened to best combine the BLUP methodology with optimal genetic contribution theory. We looked for stallions with the highest breeding values and the lowest average relationship to the dam population. Breeding with such stallions is expected to lead to a selection gain, while lowering the future increase in inbreeding within the breed.
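The optimal genetic contribution framework referred to above can be summarized schematically (the exact constraints and software settings used in the study are not reproduced here) as choosing a vector $c$ of contributions for the candidate sires and dams so as to

maximize $c^{\top}\hat{a}$ subject to $\tfrac{1}{2}\,c^{\top} A\, c \le \bar{C}, \qquad c^{\top}\mathbf{1} = 1, \qquad c \ge 0,$

where $\hat{a}$ contains the BLUP breeding values, $A$ is the additive relationship matrix, and $\bar{C}$ bounds the average relationship, thereby limiting the future rate of inbreeding.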
Abstract:
The ability to make scientific findings reproducible is increasingly important in areas where substantive results are the product of complex statistical computations. Reproducibility can allow others to verify the published findings and conduct alternate analyses of the same data. A question that arises naturally is: how can one conduct and distribute reproducible research? This question is relevant from the point of view of both the authors who want to make their research reproducible and the readers who want to reproduce relevant findings reported in the scientific literature. We present a framework in which reproducible research can be conducted and distributed via cached computations, and describe specific tools for both authors and readers. As a prototype implementation we introduce three software packages written in the R language. The cacheSweave and stashR packages together provide tools for caching computational results in a key-value style database which can be published to a public repository for readers to download. The SRPM package provides tools for generating and interacting with "shared reproducibility packages" (SRPs), which can facilitate the distribution of the data and code. As a case study we demonstrate the use of the toolkit on a national study of air pollution exposure and mortality.
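The key-value caching idea underlying these packages can be illustrated conceptually as follows (a hypothetical Python sketch of the mechanism, not the cacheSweave/stashR API itself):

    # Conceptual sketch: store the result of an expensive computation under a
    # key derived from the code/data that produced it, so readers can load the
    # cached value instead of re-running the computation.
    import hashlib
    import shelve

    def cached(db_path, key_source, compute):
        """Return the cached result for key_source, computing and storing it if absent."""
        key = hashlib.sha256(key_source.encode()).hexdigest()
        with shelve.open(db_path) as db:
            if key not in db:
                db[key] = compute()  # run the expensive computation only once
            return db[key]

    # Usage: the key_source string stands in for the analysis code and data version.
    result = cached("analysis_cache", "fit_model_v1 + exposure_data",
                    lambda: sum(x * x for x in range(10_000)))
    print(result)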
Abstract:
It is an important and difficult challenge to protect modern interconnected power system from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and transient behavior of the system during disturbances, modeling has to be done in the time-domain. This work focuses on modeling of power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling software in which all the power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and control interface. Transient Analysis Control System is used to model the excitation control system, power system stabilizer and the turbine governor system of the synchronous machine. Several base cases of a single machine system are modeled and benchmarked against PSS/E. A two area system is modeled and inter-area and intra-area oscillations are observed. The two area system is reduced to a two machine system using reduced dynamic equivalencing. The original and the reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. Advantages of single-pole tripping and comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behaviors. Other aspects such as relaying can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
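At the core of the electromechanical dynamics benchmarked here is the swing equation of each synchronous machine, written below in a standard per-unit form as a point of reference (not as the specific ATP or PSS/E model equations):

$\frac{2H}{\omega_s}\,\frac{d^{2}\delta}{dt^{2}} = P_m - P_e - D\,\frac{d\delta}{dt},$

where $\delta$ is the rotor angle, $H$ the inertia constant, $\omega_s$ the synchronous speed, $P_m$ and $P_e$ the mechanical and electrical power, and $D$ a damping coefficient; inter-area oscillations appear as lightly damped modes of the coupled set of such equations for all machines.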
Abstract:
The electric utility business is an inherently dangerous area to work in, with employees exposed to many potential hazards daily. One such hazard is an arc flash. An arc flash is a rapid release of energy, referred to as incident energy, caused by an electric arc. Due to the random nature and occurrence of an arc flash, one can only prepare for such a violent event and minimize the harm to oneself and other employees and the damage to equipment. Effective January 1, 2009, the National Electric Safety Code (NESC) requires that an arc-flash assessment be performed by companies whose employees work on or near energized equipment, in order to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) current short circuit and relay coordination software package, ASPEN OneLiner™, one of the first software packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. At the same time, the package is benchmarked against the equations provided in IEEE Std. 1584-2002 and is ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards, the analysis methods (both software-based and empirically derived equations), issues of concern with the calculation methods, and the work conducted at MP. This work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
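For reference, the IEEE Std. 1584-2002 empirical model scales a normalized incident energy to the actual working conditions roughly as follows (quoted schematically; the coefficients, units and validity ranges are defined in the standard itself):

$E = 4.184\, C_f\, E_n \left(\frac{t}{0.2}\right) \left(\frac{610^{x}}{D^{x}}\right),$

where $E_n$ is the incident energy normalized to a 0.2 s arc at a 610 mm distance, $t$ is the arcing time, $D$ the working distance, $x$ an equipment-dependent distance exponent and $C_f$ a calculation factor.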
Abstract:
In this paper we compare the performance of two image classification paradigms (object- and pixel-based) for creating a land cover map of Asmara, the capital of Eritrea, and its surrounding areas, using Landsat ETM+ imagery acquired in January 2000. The image classification methods used were maximum likelihood for the pixel-based approach and Bhattacharyya distance for the object-oriented approach, available in the ArcGIS and SPRING software packages, respectively. Advantages and limitations of both approaches are presented and discussed. Classification outputs were assessed using overall accuracy and Kappa indices. The pixel- and object-based classification methods result in overall accuracies of 78% and 85%, respectively. The Kappa coefficients for the pixel- and object-based approaches were 0.74 and 0.82, respectively. Although the pixel-based approach is the most commonly used method, assessment and visual interpretation of the results clearly reveal that the object-oriented approach has advantages for this specific case study.
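The Kappa values reported above are computed from the classification confusion matrix in the usual way:

$\kappa = \frac{p_o - p_e}{1 - p_e},$

where $p_o$ is the observed proportion of agreement (the overall accuracy) and $p_e$ is the agreement expected by chance given the row and column marginals.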