896 results for Software testing. Test generation. Grammars
Abstract:
Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with the re-development or replacement of an information system. This research investigates the effects that various factors have on an information system's life span by understanding how the factors affect an information system's stability. The research builds on a previously developed two-stage model of information system change whereby an information system is either in a stable state of evolution, in which the information system's functionality is evolving, or in a state of revolution, in which the information system is being replaced because it is not providing the functionality expected by its users. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (generation of language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.
Abstract:
The forging characteristics of an Al-Cu-Mg-Si-Sn alloy are examined using a new testing strategy which incorporates a double truncated cone specimen and finite element modelling. This sample geometry produces controlled strain distributions within a single specimen and can readily identify the specific strain required to achieve a specific microstructural event by matching the metallographic data with the strain profiles calculated from finite element software. The friction conditions were determined using the conventional friction ring test, which was evaluated using finite element software. The rheological properties of the alloy, evaluated from compression testing of right cylinders, are similar to the properties of conventional aluminium forgings. A hoop strain develops at the outer diameter of the truncated cones and this leads to pore opening at the outer few millimetres. The porosity is effectively removed when the total strain equals the net compressive strain. The strain profiles that develop in the truncated cones are largely independent of the processing temperature and the strain rate, although the strain required for pore closure increases as the forging temperature is reduced. This suggests that the microstructure and the strain rate sensitivity may also be important factors controlling pore behaviour. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Concussion severity grades according to the Cantu, Colorado Medical Society, and American Academy of Neurology systems were not clearly related to the presence or duration of impaired neuropsychological test performance in 21 professional rugby league athletes. The use of concussion severity guidelines and neuropsychological testing to assist return to play decisions requires further investigation.
Abstract:
Landscape metrics are widely applied in landscape ecology to quantify landscape structure. However, many are poorly tested and require rigorous validation if they are to serve as reliable indicators of habitat loss and fragmentation, such as Montreal Process Indicator 1.1e. We apply landscape ecology theory, supported by exploratory and confirmatory statistical techniques, to empirically test landscape metrics for reporting Montreal Process Indicator 1.1e in continuous dry eucalypt forests of sub-tropical Queensland, Australia. Target biota examined included: the Yellow-bellied Glider (Petaurus australis); the diversity of nectar and sap feeding glider species including P. australis, the Sugar Glider P. breviceps, the Squirrel Glider P. norfolcensis, and the Feathertail Glider Acrobates pygmaeus; six diurnal forest bird species; total diurnal bird species diversity; and the density of nectar-feeding diurnal bird species. Two scales of influence were considered: the stand scale (2 ha), and a series of radial landscape extents (500 m - 2 km; 78 - 1250 ha) surrounding each fauna transect. For all biota, stand-scale structural and compositional attributes were found to be more influential than landscape metrics. For the Yellow-bellied Glider, the proportion of trace habitats with a residual element of old spotted-gum/ironbark eucalypt trees was a significant landscape metric at the 2 km landscape extent. This is a measure of habitat loss rather than habitat fragmentation. For the diversity of nectar and sap feeding glider species, the proportion of trace habitats with a high coefficient of variation in patch size at the 750 m extent was a significant landscape metric. None of the landscape metrics tested was important for diurnal forest birds. We conclude that no single landscape metric adequately captures the response of the region's forest biota per se.
This poses a major challenge to regional reporting of Montreal Process Indicator 1.1e (fragmentation of forest types).
Abstract:
Solid earth simulations have recently been developed to address issues such as natural disasters, global environmental destruction and the conservation of natural resources. The simulation of solid earth phenomena involves the analysis of complex structures including strata, faults, and heterogeneous material properties. Simulation of the generation and cycle of earthquakes is particularly important, but such simulations require the analysis of complex fault dynamics. GeoFEM is a parallel finite-element analysis system intended for solid earth field phenomena problems. This paper describes recent development in the GeoFEM project for the simulation of earthquake generation and cycles.
Abstract:
The rise of component-based software development has created an urgent need for effective application program interface (API) documentation. Experience has shown that it is hard to create precise and readable documentation. Prose documentation can provide a good overview but lacks precision. Formal methods offer precision, but the resulting documentation is expensive to develop. Worse, few developers have the skill or inclination to read formal documentation. We present a pragmatic solution to the problem of API documentation. We augment the prose documentation with executable test cases, including expected outputs, and use the prose plus the test cases as the documentation. With appropriate tool support, the test cases are easy to develop and read. Such test cases constitute a completely formal, albeit partial, specification of input/output behavior. Equally important, consistency between code and documentation is demonstrated by running the test cases. This approach provides an attractive bridge between formal and informal documentation. We also present a tool that supports compact and readable test cases and the generation of test drivers and documentation, and we illustrate the approach with detailed case studies. (C) 2002 Elsevier Science Inc. All rights reserved.
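The idea of embedding executable test cases with expected outputs in prose documentation is close in spirit to Python's doctest module; a minimal sketch (the function and values are illustrative, not from the paper):

```python
def reverse_words(s):
    """Return s with the order of its words reversed.

    The examples below serve as both documentation and tests;
    running them demonstrates code/documentation consistency:

    >>> reverse_words("hello world")
    'world hello'
    >>> reverse_words("")
    ''
    """
    return " ".join(reversed(s.split()))

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # executes the examples and reports any mismatch
```

The expected outputs in the docstring are a partial but fully formal input/output specification: if the code drifts away from the documentation, the test run fails.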
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
A stickiness testing device based on the probe tack test has been designed and tested. It was used to perform in situ characterization of drying hemispherical drops with an initial radius of 3.5 mm. Tests were carried out at two drying temperatures, 63 and 95 degrees C. Moisture and temperature histories of the drying drops of fructose, honey, sucrose, maltodextrin and sucrose-maltodextrin mixtures were determined. The rate of moisture evaporation of the fructose solution was the fastest, while that of the maltodextrin solution was the lowest. A profile reversal was observed when the temperature profiles of these materials were compared. Different modes of failure were observed during the stickiness tests. Pure fructose and honey solutions remained completely sticky and failed cohesively until the end of drying. Pure sucrose solution remained sticky and failed cohesively until complete crystallization occurred. The surface of the maltodextrin drops formed a skin shortly after the start of drying. It exhibited adhesive failure and reached a state of non-adhesion. Addition of maltodextrin significantly altered the stickiness of sucrose solution. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This study compares the performance of the Quickscreen and Default protocols of the ILO-96 Otodynamics Analyzer in recording transient evoked otoacoustic emissions (TEOAEs) from adults using clinical decision analysis. Data were collected from 25 males (mean age = 29.0 years, SD = 6.8) and 35 females (mean age = 28.1 years, SD = 9.6). The results showed that the mean signal-to-noise ratios obtained from the Quickscreen were significantly greater than those from the Default protocol at 1, 2, and 4 kHz. The comparison of the performance of the two protocols, based on the results using receiver operating characteristic curves, revealed a higher performance of the Quickscreen than the Default protocol at 1 and 4 kHz but not at 2 kHz. In view of the enhanced performance of the Quickscreen over the Default protocol in general, the routine use of the Default protocol for testing adults in audiology clinics should be reconsidered.
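Comparing protocols via receiver operating characteristic curves reduces, for a single summary number, to the area under the curve (AUC); a minimal sketch of its rank-based estimate (the signal-to-noise scores below are illustrative, not the study's data):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve, estimated as the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case, counting ties as 1/2 (the Mann-Whitney estimator)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical signal-to-noise ratios (dB) for ears with and without
# a recordable emission, under two protocols sharing the same negatives
protocol_1 = auc([9.1, 7.4, 8.2, 6.5], [2.0, 3.1, 1.2, 4.0])
protocol_2 = auc([5.0, 3.9, 4.4, 6.1], [2.0, 3.1, 1.2, 4.0])
```

A protocol whose AUC is closer to 1.0 separates the two populations better; an AUC of 0.5 is chance-level performance.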
Abstract:
Despite widespread awareness that children with Down syndrome are particularly susceptible to hearing pathologies, the audiological status of students with Down syndrome in special schools is all too often unknown. Unfortunately, hearing screening for this population is unable to rely on standard, behavioural test batteries. To facilitate future improvements in screening protocols, this study investigated the results of tympanometry and transient evoked otoacoustic emission (TEOAE) testing for a group of children with Down syndrome. Assessments were not conducted in the artificial context of a clinic or laboratory, but within the school environment. Outcomes are reported for 27 subjects with a mean age of 10 years 5 months (SD = 4;11). Tympanometry testing was failed in at least one ear by 41.7% of subjects, while a failure rate of 81.5% of subjects was observed for TEOAE testing. Therefore, it is concluded that immediate review of hearing screening programs for students with Down syndrome is highly advisable.
Abstract:
Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, correctness of a GUI's code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their most important aspects. This paper presents a generic model for language-independent reverse engineering of graphical user interface based applications, and we explore the integration of model-based testing techniques in our approach, thus allowing us to perform fault detection. A prototype tool has been constructed, which is already capable of deriving and testing a user interface behavioral model of applications written in Java/Swing.
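Model-based testing of a user interface behavioral model can be sketched as replaying event sequences against both a finite-state model and the implementation, flagging a fault whenever the observed state diverges from the predicted one; the dialog, states and events below are hypothetical and not tied to the paper's tool or to Java/Swing:

```python
# Behavioral model of a toy dialog as a finite-state machine:
# (state, event) -> next state; unlisted events leave the state unchanged.
MODEL = {
    ("closed", "open"):   "ready",
    ("ready",  "submit"): "done",
    ("ready",  "cancel"): "closed",
}

class DialogImpl:
    """Toy implementation under test; a real harness would drive
    actual widgets and observe their state."""
    def __init__(self):
        self.state = "closed"
    def fire(self, event):
        if self.state == "closed" and event == "open":
            self.state = "ready"
        elif self.state == "ready" and event == "submit":
            self.state = "done"
        elif self.state == "ready" and event == "cancel":
            self.state = "closed"

def check_against_model(event_sequence):
    """Replay events on the implementation and verify that each
    observed state matches the model's prediction (fault detection)."""
    impl, state = DialogImpl(), "closed"
    for ev in event_sequence:
        state = MODEL.get((state, ev), state)
        impl.fire(ev)
        if impl.state != state:
            return False  # behavioral fault detected
    return True
```

In the paper's setting the model is not hand-written as above but derived by reverse engineering the application, after which test sequences are generated from it in the same replay-and-compare spirit.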
Abstract:
ABSTRACT: The appropriate use of ICT in the teaching of mathematics is nowadays considered by some to be justified and inevitable, in the expectation that it will improve the teaching and learning of mathematics. This investigation set out to test the Winplot software in the teaching and learning of the graph of the quadratic function with 10th-grade students of second-cycle Secondary School nº9099, in order to verify whether it improves the teaching and learning of this topic. For our research we selected two groups of 10th-grade students, which served as a control group and an experimental group; after both groups had taken two pre-tests, the experimental group carried out its learning in the computer laboratory with the aid of Winplot over 8 weeks, during the second term of the 2009/2010 school year. The control group carried out its learning at the same time as the experimental group, in a regular classroom without the aid of Winplot. Comparing the two groups, the t test for independent samples shows that there are no statistically significant differences between them, since the significance levels are greater than p = 0.05. We can therefore say that the experimental group did not obtain better results than the control group, and hence Winplot did not produce the desired effect on the learning of 10th-grade students of second-cycle Secondary School nº9099, located in the municipality of Viana (Luanda, Angola).
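The comparison above rests on an independent-samples t test; a minimal sketch of the test statistic in Welch's unequal-variance form (the post-test scores are invented for illustration, not the study's data):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.
    The p-value would then be read from a t distribution with
    Welch-Satterthwaite degrees of freedom; values of t near zero
    correspond to large p-values, i.e. no significant difference."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = (va / na + vb / nb) ** 0.5                  # standard error of the difference
    return (mean(sample_a) - mean(sample_b)) / se

# hypothetical post-test scores for experimental and control groups
t = welch_t([12, 14, 11, 13, 15], [11, 13, 12, 14, 12])
```

With group means this close, the statistic stays small and the resulting p-value exceeds 0.05, which is exactly the pattern the study reports.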
Abstract:
This study develops a theoretical model that explains the effectiveness of the balanced scorecard approach by means of a system dynamics and feedback learning perspective. Presumably, the balanced scorecard leads to a better understanding of context, allowing managers to externalize and improve their mental models. We present a set of hypotheses about the influence of the balanced scorecard approach on mental models and performance. A test based on a simulation experiment that uses a system dynamics model is performed. The experiment included three types of parameters: financial indicators; balanced scorecard indicators; and balanced scorecard indicators with the aid of a strategy map review. Two out of the three hypotheses were confirmed. It was concluded that a strategy map review positively influences mental model similarity, and mental model similarity positively influences performance.
Abstract:
ABSTRACT: Nowadays software has become a useful element in the lives of people and companies, and there is a growing need for quality applications that allow companies to differentiate themselves in the market. Software-producing companies seek to increase the quality of their development processes in order to guarantee the quality of the final product. The size and complexity of software increase the probability that non-conformities will appear in these products, hence the interest in software testing throughout the entire process of design, development and maintenance. Many software development projects are delivered late because, at the planned completion date, they are found not to perform satisfactorily, not to be reliable, or to be difficult to maintain. Good planning of software production activities usually means an increase in the efficiency of the whole production process, since it can reduce the number of defects and the costs arising from their correction, increasing confidence in the use of the software and the ease of its operation and maintenance. This is why the adoption of good practices in software development is so important; to that end, a systematic and organized approach must be used in order to produce quality software. This thesis describes the main software development models, the importance of requirements engineering, the testing processes and main validations of software quality, and how some companies use these principles in their day-to-day work in order to produce a more reliable final product. It also describes some examples that complement the context of the thesis.
Abstract:
This paper studies the application of commercial biocides to old maritime pine timber structures (Pinus pinaster Ait.) that have previously been impregnated with other products. A method was developed in the laboratory, for use in situ, to determine the impregnation depth achieved by a new-generation biocide product applied to timber from an old building. This timber had once been treated with an unknown product that is difficult to characterize without extensive analysis. The test was initially developed in laboratory conditions and later tested on elements of the roof structure of an 18th century building. In both cases the results were promising and mutually consistent, with penetration depths for some treatments reaching 2.0 cm. The in situ application proved the test's viability and simplicity of execution, giving a clear indication of the feasibility of possible re-treatments.