896 results for Software testing. Test generation. Grammars
Abstract:
The present research represents a coherent approach to understanding the root causes of ethnic group differences in ability test performance. Two studies were conducted, each designed to address a key knowledge gap in the ethnic bias literature. In Study 1, both the Logistic Regression (LR) method of Differential Item Functioning (DIF) detection and Mixture Latent Variable Modelling were used to investigate the degree to which Differential Test Functioning (DTF) could explain ethnic group test performance differences in a large, previously unpublished dataset. Though mean test score differences were observed between a number of ethnic groups, neither technique was able to identify ethnic DTF. This calls into question the practical application of DTF to understanding these group differences. Study 2 investigated whether a number of non-cognitive factors might explain ethnic group test performance differences on a variety of ability tests. Two factors – test familiarity and trait optimism – were able to explain a large proportion of ethnic group test score differences. Furthermore, test familiarity was found to mediate the relationship between socio-economic factors – particularly participant educational level and familial social status – and test performance, suggesting that test familiarity develops over time through exposure to ability testing in other contexts. These findings represent a substantial contribution to the field's understanding of two key issues surrounding ethnic test performance differences. The author calls for a new line of research into these performance-facilitating and performance-debilitating factors, before offering recommendations for practitioners to ensure fairer deployment of ability testing in high-stakes selection processes.
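To make the Study 1 procedure concrete, the sketch below illustrates the standard LR method of DIF detection: nested logistic regressions compared with a likelihood-ratio test. It is an illustration only, not the thesis's actual analysis; the DataFrame and its column names (item_correct, total_score, group) are hypothetical.

```python
# Minimal sketch of the Logistic Regression (LR) method of DIF detection,
# assuming a DataFrame `df` with hypothetical columns: item_correct (0/1),
# total_score (ability proxy) and group (0 = reference, 1 = focal).
import statsmodels.formula.api as smf
from scipy import stats

def lr_dif_test(df):
    # Baseline model: ability only
    m_base = smf.logit("item_correct ~ total_score", data=df).fit(disp=0)
    # Full model: + group (uniform DIF) + interaction (non-uniform DIF)
    m_full = smf.logit("item_correct ~ total_score + group + total_score:group",
                       data=df).fit(disp=0)
    # Likelihood-ratio test of the two added terms (2 degrees of freedom)
    lr_stat = 2 * (m_full.llf - m_base.llf)
    p_value = stats.chi2.sf(lr_stat, df=2)
    return lr_stat, p_value
```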
Abstract:
This research provides data which investigates the feasibility of using fourth generation evaluation during the process of instruction. A semester-length course entitled "Multicultural Communications" (PUR 5406/4934) was designed and used in this study, in response to the need for the communications profession to produce well-trained, culturally sensitive practitioners for the workforce and the marketplace. A revised pause model consisting of three one-on-one in-depth interviews conducted outside of the class, three reflection periods during the class, and a self-reflective essay prepared one week before the end of the course was analyzed. Narrative and graphic summaries of participant responses produced significant results. The revised pause model was found to be an effective evaluation method for use in multicultural education under certain conditions as perceived by the participants in the study. Participant self-perceived behavior change and knowledge acquisition were identified through use of the revised pause model. Study results suggest that using the revised pause model of evaluation offers instructors teaching multicultural education in schools of journalism and mass communication yet another way of enhancing their ability to become both the researcher and the research subject. In addition, the introduction of a qualitative model was found to be a more useful way of generating participant involvement and introspection. Finally, the instructional design of the course used in the study provides communication educators with a practical way of preparing their students to be effective communicators in a multicultural world.
Abstract:
The purpose of this investigation was to develop and implement a general-purpose VLSI (Very Large Scale Integration) test module based on an FPGA (Field Programmable Gate Array) system to verify the mechanical behavior and performance of MEMS sensors, with associated corrective capabilities, and to make use of the evolving SystemC, a new open-source HDL (Hardware Description Language), for the design of the FPGA functional units. SystemC is becoming widely accepted as a platform for modeling, simulating and implementing systems consisting of both hardware and software components. In this investigation, a Dual-Axis Accelerometer (ADXL202E) and a Temperature Sensor (TMP03) were used for test module verification. Results of the test module measurements were analyzed for repeatability and reliability, and then compared to the sensor datasheets. Directions for further study were identified based on the results analysis. ASIC (Application Specific Integrated Circuit) design concepts were also pursued.
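As a rough illustration of the repeatability analysis mentioned above (and only of that step; the actual test module is an FPGA/SystemC design), the sketch below summarizes repeated sensor readings and checks them against a datasheet tolerance. The sample values and limits are placeholders, not the ADXL202E or TMP03 figures.

```python
# Sketch of a repeatability check: repeated readings from one sensor channel
# are summarized and compared against a (placeholder) datasheet tolerance.
import statistics

def check_repeatability(readings, nominal, tolerance):
    """Return (mean, std, within_spec) for a list of repeated measurements."""
    mean = statistics.mean(readings)
    std = statistics.stdev(readings)
    within_spec = abs(mean - nominal) <= tolerance
    return mean, std, within_spec

# Made-up acceleration readings (in g) around a 0 g nominal output
samples = [0.012, -0.008, 0.005, 0.001, -0.003, 0.007]
print(check_repeatability(samples, nominal=0.0, tolerance=0.05))
```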
Abstract:
Abstract not available
Abstract:
Molybdenum is one of the essential micronutrients for soybean, acting directly on nitrogen metabolism as an enzyme cofactor of nitrogenase. Usually, this nutrient is supplied to the plants through seed treatment or foliar application. The aim of this study was to evaluate the effects of foliar-applied molybdenum on the physiological potential of soybean seeds and to verify its influence on the activities of enzymes involved in nitrogen metabolism. Soybean seeds of the BMX Turbo cultivar were used, produced in Erechim, RS, in the 2013 harvest, from plants treated with the following Mo concentrations: 0, 25, 50 and 75 g ha-1, supplied through two commercial products (Biomol and Molybdate) and stored for 0 and 6 months under uncontrolled conditions. The first experiment was conducted at the Seedtes Seed Analysis Laboratory in Pato Branco, PR. The design was completely randomized in a 4 x 2 x 2 factorial arrangement with four replications. The physiological potential of the seeds was evaluated by the germination test, seedling growth, accelerated aging and emergence in soil. The second experiment was conducted in a greenhouse, where the seeds derived from the treatments with different Mo concentrations (0, 25, 50 and 75 g ha-1, supplied through the two commercial products Biomol and Molybdate) were grown in pots. The design was completely randomized in a 4 x 2 factorial arrangement with four replications. Evaluations were performed when the plants reached the R1 phenological stage, covering nodulation, root and shoot dry matter, the activity of the enzymes glutamine synthetase and glutamate synthetase, and the content of total soluble proteins. The data were subjected to analysis of variance and, when significant, were assessed by Tukey's test for comparison of products and seed storage, and by regression analysis for the concentrations, at 5% probability. Analyses were performed using the SISVAR statistical software. Storage of soybean seeds under uncontrolled conditions affected the vigour of seeds produced with Mo, regardless of the commercial product used during production. Foliar application of Mo positively influenced soybean seed production, with increasing germination and vigour responses at Mo rates above 25 g ha-1. Foliar Mo enrichment did not affect nodulation of plants of the next generation; however, Mo rates above 25 g ha-1 increased the activity of enzymes involved in nitrogen metabolism as well as the total protein content.
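For readers unfamiliar with the analysis workflow, the sketch below reproduces the same kind of factorial ANOVA followed by Tukey's test in Python rather than SISVAR. The data file and the column names (germination, dose, product, storage) are hypothetical.

```python
# Sketch of a factorial ANOVA followed by Tukey's test, mirroring the analysis
# described above. Data file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("seed_quality.csv")  # hypothetical dataset

# 4 x 2 x 2 factorial ANOVA: Mo dose x commercial product x storage period
model = smf.ols("germination ~ C(dose) * C(product) * C(storage)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's test comparing the two commercial products at 5% probability
print(pairwise_tukeyhsd(df["germination"], df["product"], alpha=0.05))
```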
Abstract:
The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other. It is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI's input space from the specification of the GUI, visible GUI behaviors and static analysis of the GUI's program code. GUIs are typically dynamic in nature; their user-visible state is guided by the underlying program code and dynamic program state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based application and its underlying program code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI test cases can be created by examining the interactions between the GUI of a GUI-based application and its program code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled and deployed for constructing long GUI test cases. These test cases are able to drive the GUI into states that were not possible using existing models. Implementation and evaluation have been conducted using GUITAR, a fully automated, open-source GUI testing framework.
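As a simplified illustration of how long test cases can be derived from event-handler interactions, the sketch below enumerates event sequences over a small, made-up interaction graph. It is not GUITAR's actual model or API.

```python
# Simplified sketch: generate long GUI test cases by walking a directed graph
# whose edges connect events whose handlers interact (hypothetical events).
from collections import deque

# Edge A -> B means B's handler reads program state written by A's handler
interactions = {
    "open_file": ["edit_text", "save_file"],
    "edit_text": ["save_file", "undo"],
    "undo": ["edit_text"],
    "save_file": [],
}

def generate_testcases(graph, start, max_length):
    """Breadth-first enumeration of event sequences up to max_length events."""
    testcases, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        testcases.append(path)
        if len(path) < max_length:
            for nxt in graph.get(path[-1], []):
                queue.append(path + [nxt])
    return testcases

for tc in generate_testcases(interactions, "open_file", max_length=4):
    print(" -> ".join(tc))
```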
Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.
Abstract:
Ethernet connections, which are widely used in many computer networks, can suffer from electromagnetic interference. Typically, a degradation of the data transmission rate can be perceived as electromagnetic disturbances lead to corruption of data frames on the network medium. In this paper a software-based measuring method is presented which allows a direct assessment of the effects on the link layer. The results can be linked directly to the physical interaction, without the influence of software-related effects on higher protocol layers. This provides a simple tool for a quantitative analysis of the disturbance of an Ethernet connection based on time-domain data. An example shows how the data can be used for further investigation of interference mechanisms and for the detection of intentional electromagnetic attacks.
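One way to approximate such a link-layer view in software is to poll per-interface error counters over time; the sketch below does this on Linux. The interface name is an assumption, and this is not the paper's actual measurement tool.

```python
# Illustrative sketch: observe link-layer frame corruption by polling the
# Linux per-interface error counters. Interface name is a hypothetical choice.
import time
from pathlib import Path

IFACE = "eth0"  # hypothetical interface exposed to electromagnetic disturbance
STATS = Path(f"/sys/class/net/{IFACE}/statistics")

def read_counter(name):
    return int((STATS / name).read_text())

def monitor(interval=1.0, samples=10):
    """Print per-interval deltas of received-frame error counters."""
    names = ("rx_errors", "rx_crc_errors", "rx_dropped")
    prev = {n: read_counter(n) for n in names}
    for _ in range(samples):
        time.sleep(interval)
        cur = {n: read_counter(n) for n in names}
        print({n: cur[n] - prev[n] for n in names})
        prev = cur

if __name__ == "__main__":
    monitor()
```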
Abstract:
Software product lines are families of closely related products, typically formed by combining a set of software features. It is generally not feasible to test every product in the family, since the number of products is very large due to the combinatorial explosion of features. For this reason, coverage criteria have been proposed that aim to test at least all interactions between features without testing every product, for example all pairs of features (pairwise coverage). In addition, it is desirable to test first the products composed of a set of high-priority features. This problem is known as Prioritized Pairwise Test Data Generation. In this work we propose a technique based on integer linear programming to generate this prioritized test suite. Our study reveals that the integer linear programming approach achieves statistically better results, in both quality and computation time, than the existing techniques for this problem.
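To give a flavour of the integer-programming view of the problem (the paper's exact formulation, including the priority weighting, is not reproduced here), the sketch below selects a minimal set of products covering all feature pairs using the third-party PuLP solver. The products, features and the validity of the candidate configurations are made up.

```python
# Simplified set-cover style ILP for pairwise coverage of a product line,
# using PuLP. Candidate products and features are hypothetical.
from itertools import combinations
import pulp

# Candidate products as sets of features (assumed valid w.r.t. the feature model)
products = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"A", "B", "C"}, {"B"}]
pairs = [set(p) for p in combinations(["A", "B", "C"], 2)]

prob = pulp.LpProblem("pairwise_product_selection", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(products))]

# Objective: select as few products as possible
prob += pulp.lpSum(x)

# Each feature pair must be covered by at least one selected product
for pair in pairs:
    prob += pulp.lpSum(x[i] for i, prod in enumerate(products) if pair <= prod) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([products[i] for i in range(len(products)) if x[i].value() == 1])
```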
Abstract:
Generating sample models for testing a model transformation is no easy task. This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations. Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases. From each equivalence class of object models, several representative models are chosen depending on the required sample size. We compare our results with test suites developed using random sampling, and conclude that by using an ordered and stratified approach the coverage and effectiveness of the test suite can be significantly improved.
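A minimal sketch of the stratified-sampling step is shown below: candidate models are partitioned into equivalence classes by a classifying function, and a fixed number of representatives is drawn from each class. The toy models and the classifying function are hypothetical; in the paper the classes come from evaluating classifying terms on generated object models.

```python
# Minimal sketch of stratified sampling over equivalence classes induced by a
# classifying term. Models and the classifying function are hypothetical.
import random
from collections import defaultdict

def stratified_sample(models, classify, per_class):
    """Group models by their classifying-term value and sample from each class."""
    classes = defaultdict(list)
    for m in models:
        classes[classify(m)].append(m)
    sample = []
    for members in classes.values():
        sample.extend(random.sample(members, min(per_class, len(members))))
    return sample

# Example: toy "models" classified by a property of their size
candidate_models = [{"size": n} for n in range(20)]
print(stratified_sample(candidate_models, lambda m: m["size"] % 3, per_class=2))
```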
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective at identifying the types and locations of vulnerabilities at an earlier stage and at improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible versus hard to reproduce) to develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories, classical software metrics, and other runtime environmental factors of reproducibility to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed based on collected software security defects of Mozilla Firefox.
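A generic sketch of the prediction step is given below: a classifier is trained on software metrics to predict whether a vulnerability is easily reproducible. The feature columns and the data file are hypothetical placeholders, not the collected Firefox dataset, and Random Forest stands in for the four learners named above.

```python
# Generic sketch: predict vulnerability reproducibility from software metrics.
# The CSV file and feature names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("vulnerability_reports.csv")  # hypothetical collected defects
features = ["loc", "cyclomatic_complexity", "num_dependencies", "churn"]
X, y = df[features], df["easily_reproducible"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```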
Abstract:
Through modelling activity, experimental campaigns, test-bench and on-field validation, a complete powertrain for a BEV has been designed, assembled and used in a motorsport competition. The activity can be split into three main subjects, representing the three key components of a BEV. First of all, a model of the entire powertrain was developed in order to understand how the various design choices would influence the race lap time. The data obtained were then used to design, build and test a first battery pack. After bench tests and track tests, it was understood that, by fully exploiting the cell characteristics without breaking the rules' limitations, higher energy and power densities could be achieved. An updated battery pack was then designed, produced and raced at Motostudent 2018, resulting in a third place at its debut. The second topic of this PhD was the design of novel inverter topologies. Three inverters have been designed, two of them using Gallium Nitride devices, a promising semiconductor technology that can achieve high switching speeds while maintaining low switching losses. High switching frequency is crucial to reduce the DC-bus capacitor size and thus increase the power density of three-phase inverters. The third inverter uses classic silicon devices but employs a ZVS (Zero Voltage Switching) topology. Despite the increased complexity of both the hardware and the control software, it can offer reduced switching losses by using conventional and established silicon MOSFET technology. Finally, the mechanical parts of a three-phase permanent magnet motor have been designed with the aim of employing it in UniBo Motorsport's 2020 Formula Student car.
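As a very rough illustration of the kind of estimate a powertrain/lap-time model produces, the sketch below integrates tractive energy for a point-mass vehicle over a speed trace. All parameters and the trace are made up; the actual model in this work is far more detailed.

```python
# Highly simplified point-mass energy estimate over a made-up speed trace.
MASS = 180.0   # vehicle + rider mass, kg (made up)
CDA = 0.35     # drag area Cd*A, m^2
RHO = 1.2      # air density, kg/m^3
CRR = 0.015    # rolling resistance coefficient
G = 9.81
DT = 0.1       # time step, s

def lap_energy(speed_trace):
    """Integrate tractive energy (Wh) over a speed trace sampled every DT seconds."""
    energy_j = 0.0
    for v0, v1 in zip(speed_trace, speed_trace[1:]):
        v = 0.5 * (v0 + v1)
        accel = (v1 - v0) / DT
        force = MASS * accel + 0.5 * RHO * CDA * v**2 + CRR * MASS * G
        power = force * v
        if power > 0:            # ignore regeneration in this rough estimate
            energy_j += power * DT
    return energy_j / 3600.0     # J -> Wh

# Made-up trace: accelerate from 10 m/s to 40 m/s, hold, then brake back down
trace = [10 + 0.5 * i for i in range(61)] + [40.0] * 100 + [40 - 0.5 * i for i in range(61)]
print(f"{lap_energy(trace):.1f} Wh")
```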