974 results for outsourcing software testing
Abstract:
Nowadays, software has become a useful element in the lives of people and companies, and there is a growing need for quality applications that allow companies to differentiate themselves in the market. Software producers seek to improve the quality of their development processes in order to guarantee the quality of the final product. The size and complexity of software increase the probability that non-conformities will appear in these products, hence the interest in software testing throughout the entire process of design, development, and maintenance. Many software development projects are delivered late because, at the planned completion date, they are found not to perform satisfactorily, not to be reliable, or to be difficult to maintain. Good planning of software production activities usually means greater efficiency across the whole production process, since it can reduce the number of defects and the costs of correcting them, increasing confidence in the software and the ease of its operation and maintenance. This is why the adoption of good practices in software development matters: to produce quality software, a systematic and organized approach must be used. This thesis describes the main software development models, the importance of requirements engineering, the testing processes and main software quality validations, and how some companies apply these principles in their day-to-day work in order to produce a more reliable final product. It also presents some examples that complement the context of the thesis.
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Computer Engineering (Engenharia Informática).
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
The general objective of this strategic project is to incorporate a high value-added activity, the design of integrated circuits, into the high-technology segment of the national production chain. To do so, the following specific objectives must be met: • Strengthen the research and development groups working in this subject area, both in infrastructure and in human resources; • Strengthen and develop the electronics industry by incorporating these new technologies into its products; • Represent and assist local design groups in the search for opportunities to perform design "outsourcing" for foreign companies; • Establish a first critical mass of designers to act as a driver of the activity in the local environment; • Generate a local network in which companies, universities, and professionals work together. Continuous improvement in product performance and in production processes has made microelectronics present in the most diverse areas of human activity, with the prospect of a constantly growing share. For this very reason, a country that intends to take its place in the world in a sovereign manner cannot underestimate the need to increase its industry's capacity in this area. The components of the microelectronics (ME) value chain are the following: • Circuit design, with the added value of the designer's knowledge and experience; • Design software tools (CAD) with verification and simulation; • Circuit prototyping and testing; • In-line chip fabrication; • Packaging and testing. Except for the first link, the remaining ones require a large investment in infrastructure with permanent updating. Circuit design, however, is perfectly feasible in Argentina, since it only requires knowledge and experience and can be carried out on standard computers.
Abstract:
Today, usability testing is essential in the development of software and systems. A stationary usability lab offers many possibilities for evaluating usability, but it reaches its limits in terms of flexibility and experimental conditions. Mobile usability studies deliberately take outside influences into account, and they require a specially adapted approach to preparation, execution, and evaluation. Using the example of a mobile eye-tracking study, the difficulties and opportunities of mobile testing are considered.
Abstract:
Background: In Switzerland, no HIV test is performed without the patient's consent, based on a Voluntary Counseling and Testing (VCT) policy. We hypothesized that a substantial proportion of patients undergoing elective surgery falsely believed that an HIV test was performed on a routine basis and that the lack of communication of a result was interpreted as being HIV negative. Method: All patients who underwent elective orthopedic surgery during 2007 were contacted by phone in 2008. A structured questionnaire assessed their beliefs about routine preoperative blood analyses (diabetes, coagulation function, HIV test, and cholesterol level) as well as result awareness and interpretation. Variables included age and gender. Analyses were conducted using the software JMP 6.0.3. Results: 1123 patients were included. 130 (12%) were excluded (i.e. unreachable, unable to communicate on the phone, or not operated on). 993 completed the survey (89%). Median age was 51 (16-79); 50% were female. 376 (38%) patients thought they had had an HIV test performed before surgery, but none of them had. 298 (79%) interpreted the absence of a result as a negative HIV test. A predictive factor for believing an HIV test had been done was age below 50 years (45% vs 33% for the 16-49 and 50-79 year age groups, respectively; p < 0.001). No difference was observed between genders. Conclusion: In Switzerland, nearly 40% of patients falsely thought an HIV test had been performed routinely before surgery and were erroneously reassured about their HIV status. These results should either improve the information given to patients regarding preoperative exams or motivate public health policy to consider HIV opt-out screening instead of a VCT strategy.
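The age-group comparison above (45% vs 33%, p < 0.001) is a standard two-proportion test. A minimal sketch in Python, with per-group counts back-calculated from the reported percentages (the exact group sizes are assumptions, not taken from the study):

```python
# Hypothetical counts chosen to match the reported rates; the abstract
# does not give the actual sizes of the two age groups.
from statsmodels.stats.proportion import proportions_ztest

believed = [225, 163]   # patients believing a test was done: <50 vs >=50 yrs (assumed)
totals   = [500, 493]   # assumed group sizes, summing to the 993 respondents

z, p = proportions_ztest(believed, totals)
print(f"z = {z:.2f}, p = {p:.4f}")   # a difference of this size is highly significant
```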
Abstract:
Interpretability and power of genome-wide association studies can be increased by imputing unobserved genotypes, using a reference panel of individuals genotyped at higher marker density. For many markers, genotypes cannot be imputed with complete certainty, and the uncertainty needs to be taken into account when testing for association with a given phenotype. In this paper, we compare currently available methods for testing association between uncertain genotypes and quantitative traits. We show that some previously described methods offer poor control of the false-positive rate (FPR), and that satisfactory performance of these methods is obtained only by using ad hoc filtering rules or by using a harsh transformation of the trait under study. We propose new methods that are based on exact maximum likelihood estimation and use a mixture model to accommodate nonnormal trait distributions when necessary. The new methods adequately control the FPR and also have equal or better power compared to all previously described methods. We provide a fast software implementation of all the methods studied here; our new method requires computation time of less than one computer-day for a typical genome-wide scan, with 2.5 M single nucleotide polymorphisms and 5000 individuals.
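A common baseline for association testing with uncertain genotypes regresses the trait on the expected genotype dosage E[g]. The sketch below illustrates that generic approach in Python; it is not the authors' exact maximum-likelihood or mixture-model method:

```python
import numpy as np
from scipy import stats

def dosage_association(post, y):
    """Regress trait y on the expected genotype dosage of one SNP.

    post -- (n, 3) posterior genotype probabilities P(g = 0, 1, 2) per
            individual, as produced by an imputation program
    y    -- (n,) quantitative trait values
    """
    dosage = post @ np.array([0.0, 1.0, 2.0])   # E[g] for each individual
    result = stats.linregress(dosage, y)
    return result.slope, result.pvalue

# toy example: 5000 individuals, one imputed SNP with a true effect
rng = np.random.default_rng(0)
post = rng.dirichlet([8.0, 3.0, 1.0], size=5000)   # fake imputation posteriors
g = np.array([rng.choice(3, p=row) for row in post])
y = 0.2 * g + rng.normal(size=5000)
print(dosage_association(post, y))
```

The abstract's point is precisely that shortcuts of this kind can lose control of the false-positive rate for nonnormal traits, which motivates the exact maximum-likelihood and mixture-model treatment the authors propose.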
Abstract:
Usability is critical for an interactive software system to be considered successful. Usability testing and evaluation during product development have gained wide acceptance as a strategy for improving product quality. Introducing usability perspectives early in a product is very important in order to give clear visibility of the quality aspects, not only to the developers but to the testing users as well. However, usability evaluation and testing are not commonly considered an essential element of the software development process. This paper therefore presents a proposal to introduce usability evaluation and testing into software development through the reuse of software artifacts. Additionally, it suggests introducing an auditor within the classification of actors for usability tests, and it proposes an improvement of the checklists used for heuristic evaluation, adding quantitative and qualitative aspects to them.
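One way to realize the proposed checklist enrichment is to attach both a numeric score and free-text evidence to each heuristic item. A minimal illustrative sketch (all names and the scoring scale are hypothetical, not taken from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    heuristic: str      # the usability heuristic being checked
    question: str       # what the evaluator verifies
    score: int = 0      # quantitative aspect: 0 (fails) to 4 (fully met)
    evidence: str = ""  # qualitative aspect: observed behaviour, notes

@dataclass
class HeuristicEvaluation:
    auditor: str        # the proposed auditor actor who signs off the review
    items: list[ChecklistItem] = field(default_factory=list)

    def overall_score(self) -> float:
        """Average normalized score across all checked items."""
        if not self.items:
            return 0.0
        return sum(i.score for i in self.items) / (4 * len(self.items))
```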
Abstract:
There is a need for more efficient methods giving insight into the complex mechanisms of neurotoxicity. Testing strategies including in vitro methods have been proposed to meet this requirement. With the present study we aimed to develop a novel in vitro approach that mimics in vivo complexity, detects neurotoxicity comprehensively, and provides mechanistic insight. For this purpose we combined rat primary re-aggregating brain cell cultures with a mass spectrometry (MS)-based metabolomics approach. As a proof of principle, we treated developing re-aggregating brain cell cultures for 48 h with the neurotoxicant methyl mercury chloride (0.1-100 µM) and the brain stimulant caffeine (1-100 µM) and acquired cellular metabolic profiles. To detect toxicant-induced metabolic alterations, the profiles were analysed using commercial software, which revealed patterns in the multi-parametric dataset by principal component analysis (PCA) and identified the most significantly altered metabolites. PCA revealed concentration-dependent cluster formation for methyl mercury chloride (0.1-1 µM) and treatment-dependent cluster formation for caffeine (1-100 µM) at sub-cytotoxic concentrations. Five relevant metabolites responsible for the concentration-dependent alterations following methyl mercury chloride treatment could be identified using MS-MS fragmentation analysis: gamma-aminobutyric acid, choline, glutamine, creatine, and spermine. Their respective mass ion intensities demonstrated metabolic alterations in line with the literature and suggest that these metabolites could be biomarkers for mechanisms of neurotoxicity or neuroprotection. In addition, we evaluated whether the approach could identify neurotoxic potential by testing, at sub-cytotoxic concentrations, eight compounds whose target organ toxicity is in the liver, kidney, or brain. PCA revealed cluster formations largely dependent on target organ toxicity, indicating potential for the development of a neurotoxicity prediction model. Given these results, a validation study would be useful to determine the reliability, relevance, and applicability of this approach to neurotoxicity screening. Thus, for the first time, we show the benefits and utility of in vitro metabolomics for comprehensively detecting neurotoxicity and discovering new biomarkers.
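The pattern-recognition step described above, PCA over multi-parametric metabolite profiles, can be reproduced generically. A minimal sketch with scikit-learn on a synthetic intensity matrix (the study used commercial software, so this is only an analogous workflow):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# synthetic data: rows = culture samples, columns = metabolite ion intensities
intensities = rng.lognormal(mean=2.0, sigma=0.5, size=(24, 200))

X = StandardScaler().fit_transform(np.log(intensities))  # log-transform, then autoscale
scores = PCA(n_components=2).fit_transform(X)            # project onto first two PCs

# concentration-dependent cluster formation would appear as separation of
# treatment groups when these scores are plotted and colour-coded by dose
print(scores[:5])
```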
Abstract:
This paper examines statistical analysis of social reciprocity at the group, dyadic, and individual levels. Given that testing statistical hypotheses regarding social reciprocity can also be of interest, a statistical procedure based on Monte Carlo sampling has been developed and implemented in R, allowing social researchers to describe groups and make statistical decisions.
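A Monte Carlo test of this kind draws random interaction matrices under the null hypothesis and compares an observed reciprocity statistic against the sampled distribution. A minimal Python sketch (the statistic shown is a simple dyadic-reciprocity proxy, not necessarily the authors' measure, and the null model of uniformly scattered interactions is an assumption):

```python
import numpy as np

def reciprocity(m):
    """Proxy statistic: correlation between x_ij and x_ji over all dyads."""
    iu = np.triu_indices_from(m, k=1)
    return np.corrcoef(m[iu], m.T[iu])[0, 1]

def monte_carlo_p(observed, n_actors, total, n_sims=10_000, seed=0):
    """P(reciprocity >= observed) when `total` interactions fall at random."""
    rng = np.random.default_rng(seed)
    n_cells = n_actors * (n_actors - 1)        # off-diagonal cells
    off_diag = ~np.eye(n_actors, dtype=bool)
    count = 0
    for _ in range(n_sims):
        sim = np.zeros((n_actors, n_actors))
        sim[off_diag] = rng.multinomial(total, np.full(n_cells, 1.0 / n_cells))
        count += reciprocity(sim) >= observed
    return count / n_sims

# usage: p = monte_carlo_p(reciprocity(x), x.shape[0], int(x.sum()))
```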
Abstract:
Background: Research into epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the squared number of SNPs, so a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data, and illustrate the software on real-life data for Crohn's disease. Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 2.1 GHz processors. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn's disease (CD) data. Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rate. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 identified epistasis involving regions that are well known in the field and that can be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
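The memory argument can be illustrated generically: instead of storing the full permutation-by-test statistic matrix, one streams over the tests and keeps only the vector of per-permutation maxima. A minimal Python sketch of a maxT-style (Westfall-Young) correction, not MBMDR's actual C++ implementation:

```python
import numpy as np

def maxT_adjusted_pvalues(stat_fn, n_tests, n_perm=999, seed=0):
    """maxT multiplicity correction in O(n_perm + n_tests) memory.

    stat_fn(test_index, rng) -> statistic for that test, computed on the
    trait permuted with `rng`, or on the observed trait when rng is None.
    Re-seeding per permutation reproduces the SAME trait permutation
    across all tests, which maxT requires, without storing it.
    """
    seeds = np.random.default_rng(seed).integers(0, 2**32, size=n_perm)

    # pass 1: per-permutation maximum over all tests (n_perm floats only)
    perm_max = np.full(n_perm, -np.inf)
    for j in range(n_tests):
        for b, s in enumerate(seeds):
            perm_max[b] = max(perm_max[b], stat_fn(j, np.random.default_rng(s)))

    # pass 2: adjusted p = fraction of permutation maxima >= observed statistic
    perm_sorted = np.sort(perm_max)
    observed = np.array([stat_fn(j, None) for j in range(n_tests)])
    exceed = n_perm - np.searchsorted(perm_sorted, observed, side="left")
    return (1 + exceed) / (n_perm + 1)
```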
Abstract:
Recent reports indicate that of the over 25,000 bridges in Iowa, slightly over 7,000 (29%) are either structurally deficient or functionally obsolete. While many of these bridges may be strengthened or rehabilitated, some simply need to be replaced. Before implementing one of these options, one should consider performing a diagnostic load test on the structure to more accurately assess its load-carrying capacity. Frequently, diagnostic load tests reveal strength and serviceability characteristics that exceed the predicted codified parameters. Codified parameters are usually very conservative in predicting lateral load distribution characteristics and the influence of other structural attributes, so the predicted rating factors are typically conservative. In cases where theoretical calculations show a structural deficiency, it may be very beneficial to apply a "tool" that utilizes a more accurate theoretical model incorporating field-test data. At a minimum, this approach results in more accurate load ratings, and it often results in increased rating factors. Bridge Diagnostics, Inc. (BDI) developed hardware and software specially designed for performing bridge ratings based on data obtained from physical testing. To evaluate the BDI system, the research team performed diagnostic load tests on seven "typical" bridge structures: three steel-girder bridges with concrete decks, two concrete slab bridges, and two steel-girder bridges with timber decks. In addition, a steel-girder bridge with a concrete deck previously tested and modeled by BDI was investigated for model verification purposes. The tests were performed by attaching strain transducers to the bridges at critical locations to measure strains resulting from truck loading positioned at various locations on the bridge. The field test results were used to develop and validate analytical rating models. Based on the experimental and analytical results, it was determined that bridge tests could be conducted relatively easily, that accurate models could be generated with the BDI software, and that the load ratings, in general, were greater than the ratings obtained using the codified LFD Method (according to the AASHTO Standard Specifications for Highway Bridges).
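Ratings of this kind come down to the rating-factor equation RF = (C - A1*D) / (A2*L*(1 + I)). A short sketch showing how a field-calibrated live-load effect raises the rating (the member forces below are purely illustrative numbers, not from the report):

```python
def rating_factor(capacity, dead, live, impact, a1=1.3, a2=2.17):
    """LFD rating factor RF = (C - A1*D) / (A2*L*(1 + I)).

    capacity, dead, live -- member force effects in consistent units
    impact               -- live-load impact fraction I
    a2 = 2.17 gives the inventory rating; a2 = 1.3 the operating rating.
    """
    return (capacity - a1 * dead) / (a2 * live * (1 + impact))

# illustrative only: field testing typically justifies a smaller live-load
# moment per girder than the conservative codified distribution assumes
codified = rating_factor(capacity=1200, dead=300, live=220, impact=0.3)
field    = rating_factor(capacity=1200, dead=300, live=180, impact=0.3)
print(f"codified RF = {codified:.2f}, field-calibrated RF = {field:.2f}")
```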
Abstract:
Integral abutment bridges are constructed without an expansion joint in the superstructure of the bridge; therefore, the bridge girders, deck, abutment diaphragms, and abutments are monolithically constructed. The abutment piles in an integral abutment bridge are vertically oriented and embedded into the pile cap. When this type of bridge experiences thermal expansion or contraction, horizontal displacements are induced at the top of the abutment piles. The flexibility of the abutment piles eliminates the need to provide an expansion joint at the inside face of the abutments. Integral abutment bridge construction has been used in Iowa and other states for many years. This research evaluates the performance of integral abutment bridges by investigating thermally induced displacements, strains, and temperatures in two Iowa bridges. Each bridge has a skewed alignment, contains five prestressed concrete girders that support a 30-ft wide roadway for three spans, and involves a water crossing. The bridges will be monitored for about two years. For each bridge, the instrumentation package includes measurement devices plus hardware and software support systems. The measurement devices are displacement transducers, strain gages, and thermocouples. The hardware and software systems include a data-logger; multiplexers; direct-line telephone service and a computer terminal modem; direct-line electrical power; a laptop computer; and an assortment of computer programs for monitoring, transmitting, and managing the data. Instrumentation has been installed on a bridge located in Guthrie County, and similar instrumentation is currently being installed on a bridge located in Story County. Preliminary test results for the Guthrie County bridge have revealed that temperature changes in the bridge deck and girders induce both longitudinal and transverse displacements of the abutments and significant flexural strains in the abutment piles. For an average temperature range of 73°F for the superstructure concrete in the Guthrie County bridge, the change in bridge length was about 1 1/8 in., and the maximum strong-axis flexural strain range for one of the abutment piles was about 400 micro-strains, which corresponds to a stress range of about 11,600 psi.
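The reported numbers are consistent with simple mechanics: the superstructure movement follows delta_L = alpha * L * delta_T, and the pile stress range follows sigma = E * epsilon. A quick check in Python (the bridge length and material constants are assumed typical values, not taken from the report):

```python
ALPHA_CONCRETE = 6.0e-6   # 1/degF, typical thermal expansion coefficient
E_STEEL = 29.0e6          # psi, modulus of elasticity of the steel piles

# thermal movement of the superstructure (total length is an assumed value)
bridge_length_in = 220 * 12   # assume roughly 220 ft over the three spans
delta_T = 73                  # degF average temperature range (reported)
delta_L = ALPHA_CONCRETE * bridge_length_in * delta_T
print(f"length change ~ {delta_L:.2f} in")   # about 1.16 in, i.e. ~1 1/8 in.

# pile stress range from the measured flexural strain range
strain_range = 400e-6         # 400 micro-strains (reported)
print(f"stress range = {E_STEEL * strain_range:,.0f} psi")   # 11,600 psi
```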
Abstract:
BACKGROUND: Current bilevel positive-pressure ventilators for home noninvasive ventilation (NIV) provide physicians with software that records items important for patient monitoring, such as compliance, tidal volume (Vt), and leaks. However, to our knowledge, the validity of this information has not yet been independently assessed. METHODS: Seven home ventilators were tested on a bench model adapted to simulate NIV and generate unintentional leaks (i.e., leaks other than through the mask exhalation valve). Five levels of leaks were simulated using a computer-driven solenoid valve (0-60 L/min) at two levels of inspiratory pressure (15 and 25 cm H2O) and at a fixed expiratory pressure (5 cm H2O), for a total of 10 conditions. Bench data were compared with results retrieved from the ventilator software for leaks and Vt. RESULTS: For assessing leaks, three of the devices tested were highly reliable, with a small bias (0.3-0.9 L/min), narrow limits of agreement (LA), and high correlations (R2, 0.993-0.997) between ventilator software and bench results; conversely, for four ventilators, bias ranged from -6.0 L/min to -25.9 L/min, exceeding -10 L/min for two devices, with wide LA and lower correlations (R2, 0.70-0.98). Bias for leaks increased markedly with the magnitude of the leaks in three devices. Vt was underestimated by all devices, and bias (range, 66-236 mL) increased with higher insufflation pressures. Only two devices had a bias < 100 mL across all testing conditions. CONCLUSIONS: Physicians monitoring patients who use home ventilation must be aware of differences in the estimation of leaks and Vt by ventilator software. Leaks are also reported in different ways depending on the device used.
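The bias and limits of agreement reported above are the standard Bland-Altman quantities. A minimal sketch computing them from paired bench vs. software leak readings (the arrays below are placeholders, not the study's data):

```python
import numpy as np

def bland_altman(reference, device):
    """Return bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(device, dtype=float) - np.asarray(reference, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# placeholder readings (L/min): bench-measured leaks vs. ventilator software
bench    = [0, 15, 30, 45, 60, 0, 15, 30, 45, 60]
software = [0.4, 15.6, 30.2, 46.1, 61.0, 0.1, 15.2, 30.8, 45.5, 60.7]

bias, (lo, hi) = bland_altman(bench, software)
print(f"bias = {bias:.2f} L/min, limits of agreement = [{lo:.2f}, {hi:.2f}] L/min")
```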