963 results for software quality attribute
Abstract:
Purpose of the study: Basic life support (BLS) and automated external defibrillation (AED) represent important skills to be acquired during pregraduate medical training. Three years ago, our medical school introduced a BLS-AED course (with certification) for all second-year medical students. Few reports about the quality and persistence over time of BLS-AED learning are available to date in the medical literature. A comprehensive evaluation of students' acquired skills was performed at the end of the 2008 academic year, 6 months after certification. Materials and methods: The students (N = 142) were evaluated during a 9-minute "objective structured clinical examination" (OSCE) station. Based on a standardized scenario, they had to recognize a cardiac arrest situation and start a resuscitation process. Their performance was recorded on a PC using an Ambuman(TM) mannequin and the Ambu CPR software kit(TM) during a minimum of 8 cycles (30 compressions:2 ventilations each). BLS parameters were systematically checked. No student-rater interactions were allowed during the whole evaluation. Results: Response of the victim was checked by 99% of the students (N = 140); 96% (N = 136) called for an ambulance and/or an AED. Opening the airway and checking breathing were done by 96% (N = 137); 92% (N = 132) gave 2 rescue breaths. Pulse was checked by 95% (N = 135); 100% (N = 142) began chest compressions, 96% (N = 136) within 1 minute. Chest compression rate was 101 ± 18 per minute (mean ± SD), compression depth 43 ± 8 mm, and 97% (N = 138) respected a compression:ventilation ratio of 30:2. Conclusions: The quality of BLS skills acquisition is maintained over a 6-month period after BLS-AED certification. The main targets of the 2005 AHA guidelines were well respected. This analysis represents one of the largest evaluations of specific BLS teaching efficiency reported. Further follow-up is needed to assess the persistence of these skills over a longer period, notably at the end of the pregraduate medical curriculum.
Abstract:
OBJECTIVE: Imaging during a period of minimal myocardial motion is of paramount importance for coronary MR angiography (MRA). The objective of our study was to evaluate the utility of FREEZE, a custom-built automated tool for identifying the period of minimal myocardial motion, in both a moving phantom at 1.5 T and 10 healthy adults (nine men, one woman; mean age, 24.9 years; age range, 21-32 years) at 3 T. CONCLUSION: Quantitative analysis of the moving phantom showed that dimension measurements approached those obtained in the static phantom when FREEZE was used. In vivo, vessel sharpness, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were significantly improved when coronary MRA was performed during the software-prescribed period of minimal myocardial motion (p < 0.05). Consistent with these objective findings, image quality assessments by consensus review also improved significantly with automated prescription of the period of minimal myocardial motion. The use of FREEZE improves the image quality of coronary MRA; at the same time, operator dependence is reduced and ease of use improved.
Abstract:
Translator training involves the use of procedures and tools that allow students to become familiar with professional contexts. Specialised free software includes professional-quality tools and procedures that are accessible to academic institutions and to distance students working from home. Real projects that use free software and collaborative translation (crowdsourcing) constitute indispensable resources in translator training.
Abstract:
MRI has evolved into an important diagnostic technique in medical imaging. However, the reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully automatic method for measuring the image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5T and 3T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results are compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging, and could greatly improve workflow through their ability to rule out the need for a repeat scan while the patient is still in the magnet bore.
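The air-background idea can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not the paper's actual measures: a 2D magnitude slice is modeled as nested lists, the four corner patches are assumed to contain only air, pure-noise background is assumed Rayleigh-distributed, and the function names and the outlier threshold `k` are invented for illustration.

```python
import math

def background_pixels(slice2d, margin):
    """Collect pixel values from the four corner patches, assumed to be air."""
    vals = []
    h, w = len(slice2d), len(slice2d[0])
    for y in list(range(margin)) + list(range(h - margin, h)):
        for x in list(range(margin)) + list(range(w - margin, w)):
            vals.append(slice2d[y][x])
    return vals

def artifact_index(slice2d, margin=4, k=3.0):
    """Fraction of background pixels exceeding k times the estimated noise
    level -- a crude stand-in for an air-background quality index.
    Ghosting or motion residue raises structured background signal,
    which shows up as outliers above the noise floor."""
    bg = background_pixels(slice2d, margin)
    # For magnitude MRI, a pure-noise background follows a Rayleigh
    # distribution, whose mean relates to sigma by mean = sigma*sqrt(pi/2).
    mean_bg = sum(bg) / len(bg)
    sigma = mean_bg / math.sqrt(math.pi / 2)
    outliers = sum(1 for v in bg if v > k * sigma)
    return outliers / len(bg)
```

A clean slice yields an index near zero, while a slice with ghost signal spilling into the background yields a positive fraction.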
Abstract:
The purpose of this study is to propose a new quantitative approach to assessing the quality of open-access university institutional repositories. The results of this new approach are tested on Spanish university repositories. The assessment method is based on a binary codification of a proposed set of features that objectively describe the repositories. The method aims both to assess quality and to provide a largely automatic system for keeping the characteristic data up to date. First, a database was created with the 38 Spanish institutional repositories. The variables of analysis are presented and explained, whether they come from the bibliography or are a new set of variables. Among the characteristics analyzed are the features of the software, the services of the repository, the features of the information system, Internet visibility, and the licenses of use. Results from Spanish universities are provided as a practical example of the assessment and to give a picture of the state of development of the open access movement in Spain.
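The binary codification can be sketched as a feature checklist whose presence bits are summed into a normalized score. The feature names below are invented placeholders, not the study's actual variables:

```python
# Hypothetical checklist of objectively observable repository features
# (placeholder names; the study defines its own variable set).
FEATURES = ["oai_pmh", "search_interface", "usage_stats",
            "cc_licences", "persistent_ids", "multilingual_ui"]

def score(repo_features):
    """Binary codification: each feature present in the repository
    contributes a 1-bit; the quality score is the fraction of bits set."""
    bits = [1 if f in repo_features else 0 for f in FEATURES]
    return bits, sum(bits) / len(FEATURES)
```

Because each bit corresponds to an objectively checkable property, re-running the codification against a repository's current state updates the data with little manual effort, which is the "almost automatic" updating the abstract refers to.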
Abstract:
Industry and large agencies need "agile" programming resources to reinforce their own development staff and to take advantage of innovative approaches produced by "fresh minds" all over the world. At the same time, they may be reluctant to engage in classical software development calls for tenders and contracts. Such contracts are often "trusted" to large ICT firms, which will deliver according to their own rigid frameworks (often based on alliances with proprietary software vendors), may propose comfortable quality assurances, but will cover their (real) risks and liability with high contingency costs and will charge for any change request whenever the original specifications have not fixed all possible issues. Introducing FLOSS in business implies a new contracting philosophy, based on incentives rather than penalties and liability. Based on 2011 experience with a large Space Agency, Patrice-Emmanuel Schmitz pictures the legal instruments needed for such a novel approach.
Abstract:
This thesis searches for a correlation between results obtained through software metrics and the defects found in the software. Existing software products are used as the test group. The work investigates whether software metrics could have been used to locate the problem areas of the software and thus provide valuable information for software development. Measurement could be used to allocate resources better in code reviews, code integration, system testing, and scheduling; with measurement, these tasks would have more information for targeting resources. The test group consists of various software products, all of which share one characteristic: successive releases. When a new release is made, the previous release is used as the base on top of which new source code is developed. For this reason, software measurement must be able to separate the source code of the previous release from the new source code. The software metrics used in this work are common and widely used in software engineering to measure various source code properties that are believed to affect fault-proneness. The purpose of this work is to study the usability of these metrics in the software environments that form the test group. The practical part of the work succeeded in finding a correlation between some of the metrics and the defects, while other metrics did not give convincing results. Using software metrics, it appears to be possible to identify fault-prone parts of a program and thereby improve the effectiveness of software development. The use of software metrics in product development is justifiable, and with their help it might be possible to influence software quality in future releases.
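The kind of metric-defect correlation the thesis looks for can be illustrated with a plain Pearson correlation over per-module data. This is a minimal sketch; the complexity and defect numbers below are made up for illustration, not taken from the thesis:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-module data: a source-code metric (e.g. cyclomatic
# complexity) against defects found in that module after release.
complexity = [3, 12, 7, 25, 16, 5]
defects    = [0,  4, 1,  9,  5, 1]

r = pearson(complexity, defects)
```

A strong positive `r` for a metric would suggest it can flag fault-prone modules and so guide where review and testing effort is spent, while a weak `r` would match the thesis's finding that some metrics give no convincing results.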
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers do not seem to know exactly how long their projects will last, what they will cost, or whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to collect metrics only afterwards: the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes, like cost and schedule, and on product attributes, like size and quality. Effort estimation can be used for several purposes; this thesis discusses only effort estimation in software projects for project management purposes. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method, and it is relatively easy to use and learn. Effort estimation accuracy has significantly improved since this model was taken into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author of this thesis has developed a three-level solution. All currently used size metrics are static in nature, but the new proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design proceed, and thus 'grows up' along with the software project. Developing the effort estimation model is not possible without gathering and analyzing history data.
However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used properly, that estimates are stored, reported, and analyzed correctly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines; it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and on the model itself, with remarks about the sensitivity of the model. Finally, an example of usage is shown.
Abstract:
Software projects have proved troublesome to implement, and as the size of software keeps increasing, it is more and more important to follow up on projects. The proportion of successful software projects is still quite low in spite of research and the development of project control methodologies. The success and failure factors of projects are known, as are the project risks, but projects nevertheless still have problems keeping to the schedule and the budget and achieving the defined functionality and adequate quality. The purpose of this thesis was to find out what deviations currently occur in projects, what causes them, and what is measured in projects. Project deviation was also defined from the viewpoint of the literature and of field experts. The analysis was made using a qualitative research approach. It was found that software projects still have deviations in schedule, budget, quality, requirements, documentation, effort, and resources; changes in requirements were identified as well. It was also found that, for example, schedule deviations can be reduced by shrinking task sizes and adding measurements.
Abstract:
Calcium sprays have normally improved both the quality and the storage life of apples throughout the world, because Ca helps to prevent many fruit disorders and the Ca taken up from the soil often does not reach the fruit in adequate amounts. Since the efficacy of Ca sprays varies according to soil, apple cultivar, and weather conditions, this study was carried out from 1998 to 2004 in southern Brazil in order to assess the effect of Ca sprays on the quality and storability of 'Gala' fruits. The experiment was set up in an orchard planted in 1988 at a density of 1234 trees/ha. Treatments consisted of 0, 4, 8, and 12 annual sprays of 0.5% CaCl2, regularly distributed from 30 days after petal fall until one week before harvest. Fruits of the same size and maturity level were analyzed annually at harvest and after five months of conventional cold storage (-1ºC and 90-95% RH). In five out of six seasons, fruits from all treatments were free of any physiological disorder, and Ca sprays had no effect on leaf composition or on any fruit attribute (soluble solids, titratable acidity, starch pattern index, flesh firmness, and concentrations of N, K, Ca and Mg). In the 2000/2001 season, however, when the yield was 18 t ha-1 and fruits had an average weight of 175 g, the incidence of bitter pit plus lenticel blotch pit on stored fruits was 24% in the treatment with no calcium sprays and decreased to 2% with 12 sprays. Two seasons later, the yield was also low (25 t ha-1) and fruits were large (168 g each), but they did not show any physiological disorder regardless of the number of Ca sprays. It seems that the incidence of Ca-related disorders in 'Gala' apples grown on limed soils in Brazil, with no excess of any nutrient, occurs only in seasons with low crop yield, as a result of large fruits and a high leaf/fruit ratio, associated with some unknown environmental conditions.
Abstract:
Many software companies have begun to pay increasing attention to the quality of their software products. This has led most of them to choose software testing as the means by which this quality can be improved. Testing should not be restricted to the software product itself; it should cover the entire software development process. Validation testing focuses on ensuring that the end product meets the requirements set for it, whereas verification testing is used as preventive testing that aims to remove defects before they ever reach the source code. The work on which this thesis is based was carried out in early spring and summer of 2003, commissioned by Necsom Oy. Necsom is a small Finnish software company whose research and development unit operates in Lappeenranta. This thesis first introduces software testing and the different ways of organizing it. In addition, general guidelines are given for writing the test plans and test cases that successful and effective testing requires. After this theory, an example is presented of how internal software testing was implemented at Necsom. Finally, the conclusions drawn from following the testing process in practice are presented, together with suggestions for further action.
Abstract:
Objective To develop procedures to ensure consistency of the printing quality of digital images, by means of hardcopy quantitative analysis based on a standard image. Materials and Methods Characteristics of mammography DI-ML and general-purpose DI-HL films were studied with the QC-Test, using different processing techniques in a FujiFilm®-DryPix4000 printer. Software was developed for sensitometric evaluation, generating a digital image that includes a gray scale and a bar pattern to evaluate contrast and spatial resolution. Results Mammography films showed a maximum optical density of 4.11 and general-purpose films, 3.22. The digital image was developed with a 33-step wedge scale and a high-contrast bar pattern (1 to 30 lp/cm) for spatial resolution evaluation. Conclusion Mammographic films presented higher values for maximum optical density and contrast resolution than general-purpose films. The digital processing technique used could only change the values of the image pixel matrix and did not affect the printing standard. The proposed digital image standard allows greater control of the relationship between pixel values and optical density in the quality analysis of films and printing systems.
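Generating the gray-scale portion of such a standard test image can be sketched as follows. This is a hedged illustration, not the authors' actual software: it assumes a 12-bit pixel range (0-4095) and invented patch dimensions, and builds only the 33-step wedge, not the bar pattern:

```python
def step_wedge(n_steps=33, step_width=8, height=16, max_val=4095):
    """Build a digital step-wedge image as a list of pixel rows:
    n_steps uniform gray patches from 0 up to max_val.
    Printing this image and measuring the optical density of each patch
    gives the pixel-value-to-density curve of the printer/film pair."""
    row = []
    for i in range(n_steps):
        # Evenly spaced gray level for step i (0 .. max_val inclusive).
        level = round(i * max_val / (n_steps - 1))
        row.extend([level] * step_width)
    return [list(row) for _ in range(height)]
```

Comparing the measured density of each printed step against a baseline curve is what makes the printing quality check quantitative and repeatable.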
Abstract:
Software faults are expensive and cause serious damage, particularly if discovered late or not at all, and some software faults tend to stay hidden. One goal of the thesis is to establish the status quo in the field of software fault elimination, since there are no recent surveys of the whole area. A basis for a structural framework is proposed for this unstructured field, paying attention to compatibility and to how studies can be found. Bug elimination means are surveyed, including bug know-how, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues for each area are identified and discussed, along with issues that do not get enough attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are covered, and the main emphasis is on technical aspects. The survey was done by performing searches in the IEEE, ACM, Elsevier, and Inspec databases. In addition, a systematic search was done over recent time intervals in a few well-known related journals. Some other journals, conference proceedings, and a few books, reports, and Internet articles were investigated as well. The following problems were found, and solutions for them are discussed. The belief that quality assurance is only testing is a common misunderstanding, and many checks are done, and some methods applied, only in the late testing phase. Many types of static review are almost forgotten, even though they reveal faults that are hard to detect by other means. Other forgotten areas are knowledge of bugs, awareness of continually repeated bugs, and lightweight means of increasing reliability. Compatibility between studies is not always good, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not. The field lacks cross-field research.
Abstract:
Free software has lately been gaining more and more weight in companies, but it is still largely unknown to many people. Since its creation in the 1980s there has been exponential growth of high-quality free software, offering tools for all kinds of needs: office suites, mail clients, file systems, operating systems, and more. This movement has not gone unnoticed by many users and companies, who have taken advantage of it to cover their needs. As for companies, more and more of them use free software to a greater or lesser extent, whether for its lower acquisition cost, its great reliability, its easy adaptability, or to avoid technological lock-in; in short, to have more freedom. When a new company is created, starting from zero in all its computing technology, that is the least costly moment to implement its IT architecture with free software, since the impact on the company, its users, and its customers is smallest. Companies that already have an IT system will need to establish a migration plan, whether total or partial. The purpose of this project is not to say which software is better than another, or which should be installed, but to introduce the world of free software, show part of this software, compare some free software with proprietary software, and provide ideas and a set of solutions for companies, so that a company can take implementation ideas from some of the solutions presented or follow some of the advice proposed. Many companies already use free software.
Some use only a small part of it in their installations; although companies running 100% on free software are beginning to appear, for the moment I consider that somewhat risky, but before long it will become increasingly common.
Abstract:
Software integration is the stage in a software development process in which separate components are assembled into a single product. It is important to manage the risks involved and to be able to integrate smoothly, because software cannot be released without first integrating it. Furthermore, it has been shown that the integration and testing phase can make up 40% of the overall project costs. These issues can be mitigated by a software engineering practice called continuous integration. This thesis work presents how continuous integration was introduced in the author's employer organisation. This includes studying how the continuous integration process works and creating the technical basis for using the process on future projects. The implemented system supports software written in the C and C++ programming languages on the Linux platform, but the general concepts can be applied to any programming language and platform by selecting the appropriate tools. The results demonstrate in detail what issues need to be solved when the process is adopted in a corporate environment. Additionally, they provide an implementation and process description suited to the organisation. The results show that continuous integration can reduce the risks involved in a software process and increase the quality of the product as well.