982 results for Automated Software Debugging


Relevance:

30.00%

Publisher:

Abstract:

This master's thesis examines automated testing and how to make user interface testing easier on the Symbian operating system. The thesis introduces Symbian and the challenges encountered in Symbian application development. It also covers testing strategies and methods as well as automated testing. Finally, a tool is presented that makes it easier to create test cases for functional and system testing. Graphical user interfaces pose unique challenges for software testing. They are often built from complex components and are continually redesigned during software development. Capture-and-replay tools are commonly used for testing graphical user interfaces. Designing and implementing test cases for user interface testing requires considerable effort. Since graphical user interfaces make up a large share of the code, significant resources could be saved by making test case creation easier. The project implemented in the practical part pursues this goal by making the creation of test scripts visual. As a result, the test scripting language itself does not need to be understood, and the tests are also easier to comprehend.
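
The capture-and-replay idea above can be made concrete with a small sketch. The following is a minimal, hypothetical Python illustration, not the thesis tool itself; the UiEvent structure, the FakeUi stub, and the widget names are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class UiEvent:
    """One recorded interaction: a widget identifier and an action."""
    widget: str
    action: str
    payload: str = ""

# A "captured" session: a visual tool would emit a structure like this,
# so testers never edit the script language by hand.
recorded_script = [
    UiEvent("menu_file", "tap"),
    UiEvent("item_new_note", "tap"),
    UiEvent("note_body", "type", "hello"),
    UiEvent("button_save", "tap"),
]

def replay(script, ui):
    """Drive the application under test with the recorded events."""
    for event in script:
        ui.dispatch(event.widget, event.action, event.payload)

class FakeUi:
    """Stand-in for the real GUI driver used during functional testing."""
    def __init__(self):
        self.log = []
    def dispatch(self, widget, action, payload):
        self.log.append((widget, action, payload))

ui = FakeUi()
replay(recorded_script, ui)
assert ui.log[-1] == ("button_save", "tap", "")
```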

Relevance:

30.00%

Publisher:

Abstract:

This master's thesis presents general principles of software testing and verification, and discusses the verification of smartphone software in more detail. The Symbian operating system used in smartphones is also introduced. In the practical part of the thesis, a server running on the Symbian operating system was designed and implemented that monitors and records the use of system resources. Verification is an important and costly task in the smartphone software development cycle. Costs can be reduced by automating part of the verification process. The implemented server automates the monitoring of system resources by recording information about them to a file while tests are being run. When the tests are run again, the new results are compared against the baseline recording. If the results are not within the error margins set by the user, the user is notified. Defining the error margins and the baseline recording may prove difficult. However, if they are defined appropriately, the server provides testers with useful information about deviations in system resource consumption.
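
The baseline-comparison step lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of the idea, not the Symbian server itself; the resource names, the check_against_baseline function, and the 10% default margin are invented for the example:

```python
def check_against_baseline(baseline, current, tolerance=0.10):
    """Compare a test run's resource samples against a baseline recording.

    baseline, current: dicts mapping a resource name (e.g. "heap_kb")
    to its measured value. tolerance is the user-set error margin as a
    relative fraction. Returns the list of deviations to report.
    """
    deviations = []
    for resource, expected in baseline.items():
        measured = current.get(resource)
        if measured is None:
            deviations.append((resource, "missing from current run"))
            continue
        # Relative deviation from the baseline value.
        if expected and abs(measured - expected) / abs(expected) > tolerance:
            deviations.append((resource, f"{expected} -> {measured}"))
    return deviations

baseline = {"heap_kb": 1200, "threads": 14}
current = {"heap_kb": 1450, "threads": 14}
for resource, detail in check_against_baseline(baseline, current):
    print(f"deviation in {resource}: {detail}")  # notify the tester
```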

Relevance:

30.00%

Publisher:

Abstract:

TTCN is a language used to define test cases in telecommunications systems. Today, TTCN has become an increasingly popular way to implement test cases. It offers a good and simple way to convert manually executed test cases into automated ones. In this master's thesis, TTCN test cases were implemented for the operation and maintenance (O&M) software of a WCDMA base station. The software has also been used in second-generation base stations, but in third-generation base stations it plays a considerably larger role. In a WCDMA base station, O&M handles, among other things, base station start-up and error situations, and monitors the base station's components. One of the first tasks of the thesis was to select the test cases that would be feasible and useful to implement with TTCN. The test cases were selected from existing test case descriptions. The selected test cases were implemented using concurrent and modular TTCN and were run against a WCDMA base station using the TTCN Tester software. The test cases implemented in this thesis are used to verify that the base station can recover from various error situations with the help of the O&M software. Running the test cases against a WCDMA base station also verifies that the O&M software works according to its specification in different situations. The implemented test cases replace the currently manual O&M test cases used when testing the base station's O&M software. The automated test cases make testing the O&M software significantly faster and easier.
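
TTCN source itself is beyond the scope of this summary, but the recovery-test pattern the abstract describes can be sketched in Python. Everything below is hypothetical: the BaseStationStub, the fault name, and the polling loop merely illustrate the inject-fault-then-verify-recovery structure of such a test case:

```python
import time

class BaseStationStub:
    """Toy stand-in for the system under test; a real TTCN test case
    would exchange O&M messages with actual base station hardware."""
    def __init__(self):
        self.state = "operational"
    def inject_fault(self, fault):
        self.state = "faulted"
    def poll_state(self):
        # Pretend the O&M software restores service after a fault.
        if self.state == "faulted":
            self.state = "operational"
        return self.state

def test_recovery(station, fault, timeout_s=5.0):
    """Inject a fault, then verify O&M brings the station back up."""
    station.inject_fault(fault)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if station.poll_state() == "operational":
            return True
        time.sleep(0.1)
    return False

assert test_recovery(BaseStationStub(), fault="unit_reset")
```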

Relevance:

30.00%

Publisher:

Abstract:

The constantly growing number of mobile phone users and the development of the Internet into a general source of information and entertainment have created a need for a service that connects a mobile workstation to computer networks. GPRS is a new technology that offers a faster, more efficient, and more economical connection to packet data networks, such as the Internet and intranets, than existing mobile networks (e.g., NMT and GSM). The goal of this thesis was to implement the communication drivers needed for testing the GPRS Packet Control Unit (PCU) in a workstation environment. Real mobile networks are too expensive and do not provide enough log output to be used for GPRS testing in the early stages of software development. For this reason, PCU software testing is carried out in a more flexible and more easily controlled environment that does not impose hard real-time requirements. The new operating environment and connection media required a new implementation of the communication drivers, the parts of the software that handle the connections between the PCU and the other units of the GPRS network. The result of this work was workstation versions of the required communication drivers. The thesis examines different data transfer methods and protocols from the viewpoints of the requirements of the software under test, the implemented driver, and testing. For each driver, the thesis presents the interface it implements and the degree of implementation, that is, which functions were implemented and which were left out. The structure and operation of the drivers are explained to the extent that is relevant to the operation of the software.
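
A driver of the kind described might expose a small send/receive interface, with a workstation implementation replacing the real network link. The following Python sketch is purely illustrative and assumes nothing about the actual thesis code; CommDriver, LoopbackDriver, and the message contents are invented:

```python
from abc import ABC, abstractmethod
from queue import Queue, Empty

class CommDriver(ABC):
    """Abstract interface a PCU communication driver might expose;
    the method names here are illustrative, not from the thesis."""
    @abstractmethod
    def send(self, unit: str, message: bytes) -> None: ...
    @abstractmethod
    def receive(self, timeout_s: float) -> bytes | None: ...

class LoopbackDriver(CommDriver):
    """Workstation replacement for the real network link: messages
    are queued locally so tests run without base station hardware."""
    def __init__(self):
        self._queue: Queue[bytes] = Queue()
    def send(self, unit, message):
        self._queue.put(message)  # a real driver would address `unit`
    def receive(self, timeout_s=1.0):
        try:
            return self._queue.get(timeout=timeout_s)
        except Empty:
            return None

driver = LoopbackDriver()
driver.send("BSC", b"\x01PAGING_REQUEST")
assert driver.receive() == b"\x01PAGING_REQUEST"
```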

Relevance:

30.00%

Publisher:

Abstract:

The quality of sample inoculation is critical for achieving an optimal yield of discrete colonies in both monomicrobial and polymicrobial samples to perform identification and antibiotic susceptibility testing. Consequently, we compared the performance between the InoqulA (BD Kiestra), the WASP (Copan), and manual inoculation methods. Defined mono- and polymicrobial samples of 4 bacterial species and cloudy urine specimens were inoculated on chromogenic agar by the InoqulA, the WASP, and manual methods. Images taken with ImagA (BD Kiestra) were analyzed with the VisionLab version 3.43 image analysis software to assess the quality of growth and to prevent subjective interpretation of the data. A 3- to 10-fold higher yield of discrete colonies was observed following automated inoculation with both the InoqulA and WASP systems than that with manual inoculation. The difference in performance between automated and manual inoculation was mainly observed at concentrations of >10⁶ bacteria/ml. Inoculation with the InoqulA system allowed us to obtain significantly more discrete colonies than the WASP system at concentrations of >10⁷ bacteria/ml. However, the level of difference observed was bacterial species dependent. Discrete colonies of bacteria present in 100- to 1,000-fold lower concentrations than the most concentrated populations in defined polymicrobial samples were not reproducibly recovered, even with the automated systems. The analysis of cloudy urine specimens showed that InoqulA inoculation provided a statistically significantly higher number of discrete colonies than that with WASP and manual inoculation. Consequently, the automated InoqulA inoculation greatly decreased the requirement for bacterial subculture and thus resulted in a significant reduction in the time to results, laboratory workload, and laboratory costs.

Relevance:

30.00%

Publisher:

Abstract:

Large enterprises have for many years employed eBusiness solutions in order to improve their efficiency. Smaller companies, however, have not been able to leverage these technologies due to the high level of know-how and resources required to implement them. To solve this, novel software services are being developed to facilitate eBusiness adoption for small enterprises, with the aim of making B2Bi feasible not only between large organisations but also between trading partners of all sizes. The objective of this study was to find which standards and techniques on eBusiness, software testing, and quality assurance fit best for building these new kinds of software, considering the requirements their unique eBusiness approach poses. The research was conducted as a literature study focusing on standards on software testing and quality assurance together with standards on eBusiness. The study showed that the current software testing and quality assurance standards do not possess characteristics that would make particular standards evidently better fitted for building this type of software, which, it was established, is best developed as web services in order to meet its requirements. A selection of eBusiness standards and technologies was proposed to support this approach. The main finding of the study was, however, that web services of this kind, which have high interoperability requirements, will have to be able to carry out automated interoperability and conformance testing as part of their operation; this objective dictates how the software is built and how testing during software development is to be done. The study showed that research on automated interoperability and conformance testing for web services is still limited, and more research is needed to make the building of highly interoperable web services more feasible.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to develop an automated bench-top electronic penetrometer (ABEP) that allows tests to be performed with a high rate of data acquisition (up to 19,600 Hz) and with variation of the displacement velocity and of the base area of cone penetration. The mechanical components of the ABEP are: a supporting structure, a stepper motor, a velocity reducer, a double-nut ball screw, and six penetration probes. The electronic components of the ABEP are: a driver to control rotation and displacement, a power supply, three load cells, two software programs for running and storing data, and a data acquisition module. The penetrometer is compact and portable, and in 32 validation tests it proved easy to operate and showed high resolution, high velocity, and reliability in data collection. During the validation tests the equipment met the objectives: the results showed that the ABEP could use cones of different sizes and work at different velocities, and the errors observed for velocity and displacement were only 1.3% and 0.7%, respectively, at the highest velocity (30 mm s⁻¹) and 1% and 0.9%, respectively, at the lowest velocity (0.1 mm s⁻¹).

Relevance:

30.00%

Publisher:

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
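
The core step, deriving test cases from a behavioral model, can be illustrated with a toy sketch. The state machine, the all-transitions coverage criterion, and the generate_tests function below are invented for illustration and are not the thesis toolchain, which works from UML models:

```python
from collections import deque

# Toy behavioral model: states and labeled transitions. A real
# model-based testing tool would derive this from UML state machines.
transitions = {
    ("Idle", "insert_card"): "WaitPin",
    ("WaitPin", "pin_ok"): "Menu",
    ("WaitPin", "pin_bad"): "Idle",
    ("Menu", "withdraw"): "Dispensing",
    ("Dispensing", "take_cash"): "Idle",
}

def generate_tests(start="Idle"):
    """Breadth-first traversal producing one test (event sequence) per
    transition, a simple all-transitions coverage criterion."""
    tests = []
    queue = deque([(start, [])])
    seen_states = {start}
    while queue:
        state, path = queue.popleft()
        for (src, event), dst in transitions.items():
            if src != state:
                continue
            tests.append(path + [event])  # covers transition (src, event)
            if dst not in seen_states:
                seen_states.add(dst)
                queue.append((dst, path + [event]))
    return tests

for test in generate_tests():
    print(" -> ".join(test))
```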

Relevance:

30.00%

Publisher:

Abstract:

The development of new procedures for quickly obtaining accurate information on the physiological potential of seed lots is essential for developing quality control programs for the seed industry. In this study, the effectiveness of an automated system of seedling image analysis (Seed Vigor Imaging System, SVIS) in determining the physiological potential of sun hemp seeds, and its relationship with electrical conductivity tests, was evaluated. SVIS evaluations were performed three and four days after sowing, and data on the vigor index and the length and uniformity of seedling growth were collected. The electrical conductivity test was performed on replicates of 50 seeds placed in containers with 75 mL of deionised water at 25 ºC, and readings were taken after 1, 2, 4, 8 and 16 hours of imbibition. Electrical conductivity measurements at 4 or 8 hours and the use of the SVIS on 3-day-old seedlings can effectively detect differences in vigor between different sun hemp seed lots.

Relevance:

30.00%

Publisher:

Abstract:

The software Seed Vigor Imaging System (SVIS®) has been successfully used to evaluate seed physiological potential through automated analyses of scanned seedlings. In this research, the efficiency of this system was compared with that of other accepted tests for assessing the vigor of distinct cucumber (Cucumis sativus L.) seed lots of the Supremo and Safira cultivars. Seeds were subjected to germination, traditional and saturated salt accelerated aging, seedling emergence, seedling length, and SVIS analyses (determination of vigor indices, seedling growth uniformity, and lengths of the primary root, hypocotyl, and whole seedling). It was also determined whether the definition of seedling growth/uniformity ratios affects the sensitivity of the SVIS®. Results showed that SVIS analyses provided consistent identification of seed lot performance and produced information comparable to that from recommended seed vigor tests, thus demonstrating suitable sensitivity for a rapid and objective evaluation of the physiological potential of cucumber seeds. Analyses of four-day-old cucumber seedlings using the SVIS® are more accurate, and the growth/uniformity ratio does not affect the precision of the results.

Relevance:

30.00%

Publisher:

Abstract:

Today, user experience and usability are becoming major design issues in software applications as many processes are adapted to new technologies. Therefore, the study of user experience and usability should be included in every software development project, and both should be tested to obtain traceable results. Given the variety of testing methods available to evaluate these concepts, a non-expert might be unsure which option to choose and how to interpret the outcomes of the process. This work aims to create a process that eases the whole testing methodology, based on the process created by Seffah et al., together with a supporting software tool for following the procedure of these testing methods for user experience and usability.

Relevance:

30.00%

Publisher:

Abstract:

Software systems have become increasingly widespread and important in our society, so there is a constant need for high-quality software. One of the most widely used techniques for improving software quality is refactoring, which improves the structure of a program while preserving its external behavior. If applied properly, refactoring promises to improve the understandability, maintainability, and extensibility of the software while improving programmer productivity. In general, refactoring can be applied at the specification, design, or code level. This thesis focuses on automating the refactoring recommendation process at the code level, which proceeds in two main steps: 1) detecting the code fragments that should be improved (e.g., design defects), and 2) identifying the refactoring solutions to apply. For the first step, we capture regularities that can be found in examples of design defects. We use a genetic algorithm to automatically generate detection rules from defect examples. For the second step, we introduce an approach based on heuristic search. The process consists of finding the optimal sequence of refactoring operations that improves software quality by minimizing the number of defects while prioritizing the most critical instances. In addition, we explore other objectives to optimize: the number of changes required to apply the refactoring solution, semantics preservation, and consistency with the change history. Reducing the number of changes keeps the result as close as possible to the initial design. Semantics preservation ensures that the restructured program remains semantically coherent. Furthermore, we use the change history to suggest new refactorings in similar contexts. Finally, we introduce a multi-objective approach to improve software quality attributes (flexibility, maintainability, etc.) and fix "bad" design practices (design defects) while introducing "good" design practices (design patterns).
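
The heuristic search of the second step can be illustrated with a toy sketch. The following Python hill-climbing example is purely hypothetical: the operation names, the stand-in count_defects fitness function, and the parameters are invented, and a real system would evaluate detection rules against an actual code model:

```python
import random

# Hypothetical catalogue of refactoring operations; a real system would
# apply these to a code model and re-run the defect detection rules.
OPERATIONS = ["extract_class", "move_method", "inline_method", "pull_up_field"]

def count_defects(sequence):
    """Stand-in fitness function: pretend each operation removes some
    defects from a model that starts with 20 detected defects."""
    removed = sum(len(op) % 3 for op in sequence)
    return max(0, 20 - removed)

def hill_climb(length=5, iterations=200, seed=42):
    """Local search for a refactoring sequence minimizing defect count."""
    rng = random.Random(seed)
    best = [rng.choice(OPERATIONS) for _ in range(length)]
    best_score = count_defects(best)
    for _ in range(iterations):
        candidate = best[:]
        candidate[rng.randrange(length)] = rng.choice(OPERATIONS)  # mutate one step
        score = count_defects(candidate)
        if score < best_score:  # keep the neighbour if it removes more defects
            best, best_score = candidate, score
    return best, best_score

sequence, remaining = hill_climb()
print(remaining, "defects remaining after:", " -> ".join(sequence))
```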

Relevance:

30.00%

Publisher:

Abstract:

The Central Library of Cochin University of Science and Technology (CUSAT) had been automated with proprietary software (Adlib Library) since 2000. After 11 years, in 2011, the university authorities decided to shift to Koha, an open source software (OSS) integrated library management system (ILMS), for automating the library housekeeping operations. In this context, this study attempts to share the experiences in cataloging with both types of software. The features of the cataloging modules of both software packages are analysed on the basis of certain check points. It is found that the cataloging module of Koha is almost on par with that of proven proprietary software that has been in the market for the past 25 years. Some suggestions made by this study may be incorporated for the further development and perfection of Koha.

Relevance:

30.00%

Publisher:

Abstract:

In this session we look at the sorts of errors that occur in programs, and at how we can use different testing and debugging strategies (such as unit testing and inspection) to track them down. We also look at error handling within the program, and at how we can use exceptions to manage errors in a more sophisticated way. These slides are based on Chapter 6 of the book 'Objects First with BlueJ'.
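
The session itself works in Java with BlueJ; as a language-neutral illustration of the exception-handling idea it covers, here is a minimal Python sketch (the InsufficientFundsError class and the withdraw function are invented for the example):

```python
class InsufficientFundsError(Exception):
    """Domain-specific exception carrying context about the failure."""
    def __init__(self, requested, available):
        super().__init__(f"requested {requested}, only {available} available")
        self.requested = requested
        self.available = available

def withdraw(balance, amount):
    # Raising an exception separates error signalling from normal flow,
    # instead of returning a special value the caller might ignore.
    if amount > balance:
        raise InsufficientFundsError(amount, balance)
    return balance - amount

try:
    withdraw(balance=50, amount=80)
except InsufficientFundsError as err:
    # The handler can react with full knowledge of what went wrong.
    print("withdrawal refused:", err)
```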

Relevance:

30.00%

Publisher:

Abstract:

Constructing biodiversity richness maps from Environmental Niche Models (ENMs) of thousands of species is time consuming. A separate species occurrence data pre-processing phase enables the experimenter to control test AUC score variance due to species dataset size. Besides removing duplicate occurrences and points with missing environmental data, we discuss the need for coordinate precision, wide dispersion, temporal, and synonymity filters. After species data filtering, the final task of a pre-processing phase should be the automatic generation of species occurrence datasets which can then be directly 'plugged in' to the ENM. A software application capable of carrying out all these tasks will be a valuable time-saver, particularly for large-scale biodiversity studies.
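
The filtering pipeline described above can be sketched briefly. The following Python example is a hypothetical illustration only; the Occurrence record, the thresholds, and the integer-degree precision heuristic are invented, and the synonymity and wide-dispersion filters are omitted for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class Occurrence:
    species: str
    lat: float
    lon: float
    year: int | None
    env: dict = field(default_factory=dict)  # environmental covariates

def preprocess(records, min_year=1970):
    """Toy version of the filtering pipeline the abstract sketches:
    drop duplicates, points with missing environmental data, imprecise
    coordinates, and records outside a temporal window."""
    seen, kept = set(), []
    for rec in records:
        key = (rec.species, round(rec.lat, 4), round(rec.lon, 4))
        if key in seen:
            continue  # duplicate occurrence
        if not rec.env or any(v is None for v in rec.env.values()):
            continue  # missing environmental data
        if rec.lat == int(rec.lat) and rec.lon == int(rec.lon):
            continue  # integer-degree coordinates suggest low precision
        if rec.year is None or rec.year < min_year:
            continue  # temporal filter
        seen.add(key)
        kept.append(rec)
    return kept  # ready to be fed to the ENM as one dataset per species
```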