929 results for source code analysis

Relevance: 90.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance: 90.00%

Publisher:

Abstract:

The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as the alternative to cope with this growing complexity. It is a holistic approach, in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems where the complexity is high. The number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on these types of applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. Anomalies are detected and pinpointed by means of statistical correlation: the data analysis detects changes in the server response time and determines whether those changes are correlated with the workload or are due to a performance anomaly. When a performance anomaly is present, the data analysis pinpoints it, and SHõWA executes a recovery procedure. We also present a study about the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmark applications, exercised through dynamic workloads, and different types of anomaly were considered in the study. The results reveal that (1) SHõWA detects and pinpoints anomalies while the number of affected end users is still low; (2) SHõWA detected anomalies without raising any false alarm; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the response time delay was no more than 2 milliseconds).
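
As a rough illustration of the correlation-based analysis described above (a minimal sketch, not SHõWA's actual code), the following Python snippet flags a window in which response time rises while its correlation with the workload drops; the window contents, thresholds, and baseline rule are illustrative assumptions.

```python
# Hypothetical sketch of workload/response-time correlation analysis,
# in the spirit of the SHõWA data analysis (not the framework's code).
import numpy as np

def detect_anomaly(workload, response_time, corr_threshold=0.6, rt_factor=1.5):
    """Flag a performance anomaly when response time grows but is no
    longer explained by (correlated with) the workload."""
    workload = np.asarray(workload, dtype=float)
    response_time = np.asarray(response_time, dtype=float)
    baseline = np.median(response_time)
    # Pearson correlation between workload and response time in this window
    corr = np.corrcoef(workload, response_time)[0, 1]
    rt_elevated = response_time[-1] > rt_factor * baseline
    # Elevated response time that tracks the workload is load, not a fault
    return rt_elevated and corr < corr_threshold

# Example: response time climbs while the workload stays flat -> anomaly
wl = [100, 102, 98, 101, 99, 100]
rt = [20, 22, 35, 60, 120, 200]
print(detect_anomaly(wl, rt))  # True under these illustrative thresholds
```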

Relevance: 90.00%

Publisher:

Abstract:

Eradication of code smells is often pointed out as a way to improve readability, extensibility, and design in existing software. However, code smell detection remains time-consuming and error-prone, partly due to the inherent subjectivity of the detection processes presently available. To mitigate this subjectivity problem, this dissertation presents a tool, developed as an Eclipse plugin, that automates a technique for the detection and assessment of code smells in Java source code. The technique is based upon a Binary Logistic Regression model that uses complexity metrics as independent variables and is calibrated by experts' knowledge. An overview of the technique is provided, and the tool is described and validated through an example case study.
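
The abstract names Binary Logistic Regression over complexity metrics; a minimal Python sketch of that idea follows. The metric choices, sample values, and labels are invented for illustration — the dissertation's expert-calibrated model is not reproduced here.

```python
# Illustrative Binary Logistic Regression over code-complexity metrics.
# Feature set and training data are hypothetical, not the thesis model.
from sklearn.linear_model import LogisticRegression

# Each row: [lines of code, cyclomatic complexity, number of parameters]
X = [
    [12, 2, 1], [30, 4, 2], [250, 18, 6],
    [15, 3, 1], [400, 25, 8], [60, 7, 3],
]
y = [0, 0, 1, 0, 1, 0]  # 1 = expert judged the method a "long method" smell

model = LogisticRegression().fit(X, y)

candidate = [[180, 15, 5]]  # metrics for a method under review
probability = model.predict_proba(candidate)[0][1]
print(f"smell probability: {probability:.2f}")
```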

Relevance: 90.00%

Publisher:

Abstract:

Master's dissertation in Informatics Engineering

Relevance: 90.00%

Publisher:

Abstract:

The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto

Relevance: 90.00%

Publisher:

Abstract:

Integrated master's dissertation in Telecommunications and Informatics Engineering

Relevance: 90.00%

Publisher:

Abstract:

In terms of execution time and data usage, parallel/distributed applications can behave variably, even when the same input data set is used. Certain environment-related performance aspects can dynamically affect an application's behavior, such as memory capacity, network latency, the number of nodes, and the heterogeneity of those nodes, among others. It is important to consider that the application may run on different hardware configurations, and the application developer cannot guarantee that performance tuning for one particular system remains valid for other configurations. Dynamic analysis of applications has proved to be the best approach to performance analysis, for two main reasons. First, it offers a very convenient solution from the developers' point of view while they design and evaluate their parallel applications. Second, it adapts better to the application during execution. This approach does not require the intervention of developers or even access to the application's source code: the application is analyzed at run time, and possible bottlenecks and optimizations are searched for and analyzed. To optimize the execution of the bioinformatics application mpiBLAST, we analyzed its behavior to identify the parameters involved in its performance, such as memory usage, network usage, I/O patterns, the file system used, the processor architecture, the size of the biological database, the size of the query sequence, the distribution of the sequences within the databases, the number of database fragments, and/or the granularity of the jobs assigned to each process. Our goal is to determine which of these parameters have the greatest impact on application performance and how to adjust them dynamically to improve it. Analyzing the performance of mpiBLAST, we found a data set that reveals a certain level of serialization within the execution. Recognizing the impact of the characterization of the sequences within the different databases, and a relationship between worker capacity and the granularity of the current workload, these could be tuned dynamically. Other improvements also include optimizations related to the parallel file system and the possibility of execution on multiple multicore nodes. The job granularity is influenced by factors such as the database type, the database size, and the ratio between workload size and worker capacity.
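
As a toy illustration of the dynamic tuning idea sketched above (matching job granularity to worker capacity), here is a hypothetical master-side heuristic in Python; the function name, the rate measure, and the doubling/halving rule are assumptions, not mpiBLAST code.

```python
# Hypothetical master-side heuristic: resize database-fragment batches
# per worker based on observed throughput (not actual mpiBLAST code).
def tune_granularity(batch_size, observed_rate, target_rate,
                     min_batch=1, max_batch=64):
    """Grow batches for fast workers, shrink them for stragglers."""
    if observed_rate > 1.2 * target_rate:      # worker finishing early
        batch_size = min(max_batch, batch_size * 2)
    elif observed_rate < 0.8 * target_rate:    # worker falling behind
        batch_size = max(min_batch, batch_size // 2)
    return batch_size

# Example: a worker processing fragments twice as fast as the target
print(tune_granularity(batch_size=8, observed_rate=2.0, target_rate=1.0))  # 16
```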

Relevance: 90.00%

Publisher:

Abstract:

This project develops ServEngine, a framework for building services for portals, virtual communities, and e-commerce systems, based on the MVC pattern and, as chosen later in the project after a process of analysis and comparison, using Struts as the development foundation.

Relevance: 90.00%

Publisher:

Abstract:

Conventional methods of gene prediction rely on the recognition of DNA-sequence signals, the coding potential or the comparison of a genomic sequence with a cDNA, EST, or protein database. Reasons for limited accuracy in many circumstances are species-specific training and the incompleteness of reference databases. Lately, comparative genome analysis has attracted increasing attention. Several analysis tools that are based on human/mouse comparisons are already available. Here, we present a program for the prediction of protein-coding genes, termed SGP-1 (Syntenic Gene Prediction), which is based on the similarity of homologous genomic sequences. In contrast to most existing tools, the accuracy of SGP-1 depends little on species-specific properties such as codon usage or the nucleotide distribution. SGP-1 may therefore be applied to nonstandard model organisms in vertebrates as well as in plants, without the need for extensive parameter training. In addition to predicting genes in large-scale genomic sequences, the program may be useful to validate gene structure annotations from databases. To this end, SGP-1 output also contains comparisons between predicted and annotated gene structures in HTML format. The program can be accessed via a Web server at http://soft.ice.mpg.de/sgp-1. The source code, written in ANSI C, is available on request from the authors.
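
SGP-1 itself is written in ANSI C and available from the authors; purely as a hedged illustration of the homology-based idea (scoring similarity between aligned genomic windows), here is a toy Python sketch with invented sequences and a naive identity score — not the SGP-1 algorithm.

```python
# Toy illustration of homology-based scoring between two genomic windows.
# Naive percent-identity over fixed windows; NOT the SGP-1 algorithm.
def window_identity(seq_a, seq_b, window=12):
    """Yield (offset, fraction of identical bases) for aligned windows."""
    n = min(len(seq_a), len(seq_b))
    for start in range(0, n - window + 1, window):
        a = seq_a[start:start + window]
        b = seq_b[start:start + window]
        matches = sum(x == y for x, y in zip(a, b))
        yield start, matches / window

human = "ATGGCGTACGTTAGCGGATCCTAA"   # invented example sequences
mouse = "ATGGCTTACGTAAGCGGATCTTAA"
for offset, ident in window_identity(human, mouse):
    print(f"offset {offset:3d}: identity {ident:.2f}")
```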

Relevance: 90.00%

Publisher:

Abstract:

This project introduces GNSS-SDR, an open source Global Navigation Satellite System software-defined receiver. The lack of reconfigurability of current commercial off-the-shelf receivers and the advent of new radionavigation signals and systems make software receivers an appealing approach to designing new architectures and signal processing algorithms. With the aim of exploring the full potential of this forthcoming scenario, with a plurality of new signal structures and frequency bands available for positioning, this paper describes the software architecture design and provides details about its implementation, targeting a multiband, multisystem GNSS receiver. The result is a testbed for GNSS signal processing that allows any kind of customization, including interchangeability of signal sources, signal processing algorithms, interoperability with other systems, output formats, and the offering of interfaces to all the intermediate signals, parameters, and variables. The source code release under the GNU General Public License (GPL) secures practical usability, inspection, and continuous improvement by the research community, allowing discussion based on tangible code and the analysis of results obtained with real signals. The source code is complemented by a development ecosystem, consisting of a website (http://gnss-sdr.org), a revision control system, instructions for users and developers, and communication tools. The project presents in detail the design of the initial blocks of the receiver's Signal Processing Plane (the signal conditioner, the acquisition block, and the receiver channel), and it also extends the functionality of the acquisition and tracking modules of the GNSS-SDR receiver to track the new Galileo E1 signals. Each section provides a theoretical analysis, implementation details of each block, and subsequent testing that confirms the calculations with both synthetically generated signals and real signals from satellites in space.
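
As a hedged sketch of what an acquisition block does (searching the code-phase/Doppler space by correlating incoming samples with a local PRN replica), here is a simplified Python example; the sampling rate, stand-in code, and Doppler grid are toy assumptions, not GNSS-SDR's implementation.

```python
# Simplified GNSS acquisition search: correlate received samples against a
# local PRN replica over a Doppler grid (toy parameters, not GNSS-SDR code).
import numpy as np

fs = 4_000_000          # sampling rate [Hz] (illustrative)
n = 4000                # samples per correlation (1 ms at 4 MHz)
rng = np.random.default_rng(0)
prn = rng.choice([-1.0, 1.0], size=n)          # stand-in spreading code

# Fabricate a received signal: delayed code + Doppler shift + noise
delay, doppler = 700, 1500.0
t = np.arange(n) / fs
rx = np.roll(prn, delay) * np.exp(2j * np.pi * doppler * t)
rx += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

best = (0.0, None, None)
for f_d in np.arange(-5000, 5001, 500):        # Doppler bins [Hz]
    wiped = rx * np.exp(-2j * np.pi * f_d * t) # remove trial Doppler
    # Circular correlation via FFT tests all code phases at once
    corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * np.conj(np.fft.fft(prn))))
    peak = corr.argmax()
    if corr[peak] > best[0]:
        best = (corr[peak], peak, f_d)

print(f"code phase ~ {best[1]} samples, Doppler ~ {best[2]} Hz")
```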

Relevance: 90.00%

Publisher:

Abstract:

The use of open source software continues to grow on a daily basis. Today, enterprise applications contain 40% to 70% open source code, and this fact has legal, development, IT security, risk management, and compliance organizations focusing their attention on its use as never before. They increasingly understand that the open source content within an application must be detected. Once it is uncovered, decisions regarding compliance with intellectual-property licensing obligations must be made, and known security vulnerabilities must be remediated. From a risk perspective, it is no longer acceptable to leave either of these open source issues unaddressed.

Relevance: 90.00%

Publisher:

Abstract:

VariScan is a software package for the analysis of DNA sequence polymorphisms at the whole-genome scale. Among other features, the software (1) can conduct many population genetic analyses; (2) incorporates a multiresolution, wavelet transform-based method that captures relevant information from DNA polymorphism data; and (3) facilitates the visualization of the results in the most commonly used genome browsers.
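
As an illustrative nod to the multiresolution wavelet idea (not VariScan's own implementation), this Python sketch decomposes a toy per-window diversity signal with PyWavelets; the signal values, window count, and choice of the Haar wavelet are assumptions.

```python
# Toy multiresolution wavelet decomposition of a per-window nucleotide
# diversity signal (illustrative only; VariScan's method is not shown).
import numpy as np
import pywt

# Hypothetical diversity values for 64 consecutive genomic windows,
# with an elevated-polymorphism region in the middle
diversity = np.full(64, 0.002)
diversity[24:40] = 0.010
diversity += np.random.default_rng(1).normal(0, 0.0005, 64)

# Multiresolution analysis: coarse approximation + detail coefficients
coeffs = pywt.wavedec(diversity, "haar", level=3)
approx, details = coeffs[0], coeffs[1:]

print("coarse trend:", np.round(approx, 4))
for lvl, d in enumerate(details, start=1):
    print(f"detail level {lvl}: max |coeff| = {np.abs(d).max():.4f}")
```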

Relevance: 90.00%

Publisher:

Abstract:

Mobile networks are constantly evolving, offering their customers new services and faster data connections. Protocol analyzers are used in testing the various protocols of these networks; with them, the information flowing across the different interfaces of a mobile network can be examined in detail. The purpose of this work was to design and implement, using the ICONIX process, a testing application for testing a remote monitoring analyzer. The design covered the process phases of requirements definition, analysis and preliminary design, and detailed design. The implementation consisted, correspondingly, of programming work and unit testing. The results of the work were the various diagrams and the program code produced during design and implementation. In addition, the testing application was used in the functional and performance tests of the remote monitoring analyzer, on the basis of which the usability of the implemented testing application was evaluated. The testing application was found suitable for testing the remote monitoring analyzer, since both the functional tests and the load tests were carried out successfully with it. The ICONIX process was found suitable for designing the testing application, even though the application differs in its operating principle from the example applications used in the sources that introduce the process. The design phases took time for someone unaccustomed to the process, but on the other hand the plans did not need to be changed during the implementation phase, and the programming work was very straightforward.

Relevance: 90.00%

Publisher:

Abstract:

In this paper we review the basic techniques of performance analysis within the UNIX environment that are relevant in computational chemistry, with particular emphasis on execution profiling using the gprof tool. Two case studies (in ab initio and molecular dynamics calculations) are presented to illustrate how execution profiling can be used to identify bottlenecks effectively and to guide source code optimization. Using these profiling and optimization techniques, it was possible to obtain significant speedups (of up to 30%) in both cases.
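
The abstract's workflow uses gprof on UNIX; as a stand-in consistent with this document's other sketches, the following Python example demonstrates the same profile-then-optimize loop with the standard library's cProfile. The hot function and its eventual fix are invented for illustration.

```python
# Profile-then-optimize workflow, illustrated with Python's cProfile as a
# stand-in for gprof (the abstract's tool); the hot spot here is invented.
import cProfile
import pstats

def pairwise_distances_naive(points):
    """Deliberately slow O(n^2) Python-loop hot spot."""
    out = []
    for i, (xa, ya) in enumerate(points):
        for xb, yb in points[i + 1:]:
            out.append(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5)
    return out

def simulation():
    pts = [(i * 0.1, i * 0.2) for i in range(400)]
    return pairwise_distances_naive(pts)

# Step 1: profile to find where the time goes (analogous to gprof's
# flat profile); the top cumulative entry points at the bottleneck.
profiler = cProfile.Profile()
profiler.runcall(simulation)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
# Step 2 (not shown): rewrite the hot function, e.g. with NumPy,
# and re-profile to confirm the speedup.
```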

Relevance: 90.00%

Publisher:

Abstract:

Reviews and inspection procedures are part of the quality assurance of the software development process. Static inspection means the visual examination of a software product in order to detect and correct software defects. Inspecting a program's source code can be performed automatically with a program suited to the purpose, i.e., an analysis tool. In this work, an analysis tool for inspecting C# source code was implemented. In field testing performed with the tool, deficiencies affecting software maintainability were found in the inspected programs. In addition, the work examined reviews as part of the quality assurance of the software development process, as well as different kinds of software defects and their sources.
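
As a minimal illustration of automated static source inspection in the spirit described above (a sketch, not the thesis tool), here is a toy Python checker that flags one maintainability issue, overly long methods, in C# source text; the regex, brace-matching heuristic, threshold, and input file name are all assumptions.

```python
# Toy static-analysis check: flag overly long methods in C# source text.
# Illustrative only; the thesis tool and its rule set are not reproduced.
import re

METHOD_RE = re.compile(r"\b(?:public|private|protected|internal)\b[^;{]*\(")
MAX_METHOD_LINES = 40  # hypothetical maintainability threshold

def method_length(lines, start):
    """Lines from the method header through its matching closing brace."""
    depth, opened = 0, False
    for count, line in enumerate(lines[start:], start=1):
        depth += line.count("{") - line.count("}")
        opened = opened or "{" in line
        if opened and depth == 0:
            return count
    return len(lines) - start

def check_long_methods(source):
    lines = source.splitlines()
    return [(i + 1, line.strip())
            for i, line in enumerate(lines)
            if METHOD_RE.search(line)
            and method_length(lines, i) > MAX_METHOD_LINES]

with open("Example.cs", encoding="utf-8") as f:   # hypothetical input file
    for line_no, header in check_long_methods(f.read()):
        print(f"line {line_no}: long method: {header}")
```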