914 results for Testing and Debugging
Abstract:
International research on both the intended and the unintended outcomes and effects of high-stakes testing shows that such tests have important consequences for the participants in the respective educational systems. The purpose of this special issue is to examine the implementation of high-stakes testing in different national school systems and to discuss its effects from the perspective of Educational Governance. (DIPF/Orig.)
Abstract:
The purpose of this thesis was to examine the mediating effects of job-related negative emotions on the relationship between workplace aggression and outcomes. Additionally, the moderating effects of workplace social support and intensity of workplace aggression are considered. A total of 321 working individuals participated through an online survey. The results of this thesis suggest that job-related negative emotions mediate the relationship between workplace aggression and outcomes, with both full and partial mediation supported. Workplace social support was found to buffer the relationship between workplace aggression and outcomes, regardless of the source of aggression (supervisor or co-worker) or the source of the social support. Finally, intensity of aggression was found to be a strong moderator of the relationship between workplace aggression and outcomes.
Abstract:
Despite the development of improved performance test protocols by renowned researchers, there are still road networks that experience premature cracking and failure. One area of major concern in asphalt science and technology, especially in cold regions of Canada, is thermal (low-temperature) cracking: right after winter periods, severe cracks appear on poorly designed road networks. Government agencies have implemented quality assurance tests based on improved asphalt performance protocols to ensure that newly constructed roads meet the required standard, yet asphalt binders that pass these quality assurance tests still crack prematurely. While it would be easy to question the competence of the quality assurance test protocols, the performance tests used and repeated in this study, namely the extended bending beam rheometer (EBBR) test, the double-edge-notched tension (DENT) test, the dynamic shear rheometer (DSR) test, and X-ray fluorescence (XRF) analysis, have all been verified and proven to successfully predict asphalt pavement behaviour in the field. Hence, this study set out to probe the quality and authenticity of the asphalt binders being used for road paving. It covered thermal cracking and physical hardening phenomena by comparing test results for asphalt binder samples obtained from the storage tank prior to paving (tank samples) with results for recovered samples from the same contracts, with the aim of explaining why asphalt binders that have passed quality assurance tests are still prone to premature failure. The study also attempted to find out whether the short testing time and automated procedure of torsion bar experiments can replace the established but tedious EBBR procedure. In the end, it was discovered that significant differences in performance and composition exist between tank and recovered samples for the same contracts. Torsion bar experimental data also showed some promise in predicting physical hardening.
Abstract:
Master's degree in Electrical and Computer Engineering – Area of Specialization in Automation and Systems
Abstract:
This dissertation describes the system for supporting the rational use of electrical energy developed within the Thesis/Dissertation course unit. The application domain falls within the context of European Union Directive 2006/32/EC, which states that consumers must be given information and means that promote the reduction of consumption and the increase of individual energy efficiency. The goal is to develop a solution that allows the graphical representation of consumption/production, the definition of consumption ceilings, the automatic generation of alerts and alarms, anonymous comparison with customers with an identical profile by region, and consumption/production forecasting in the case of industrial customers. It is a distributed system composed of a front-end and a back-end. The front-end comprises the user-interface applications developed for Android mobile devices and Web browsers. The back-end stores and processes information and is hosted on a cloud computing platform – Google App Engine – which provides a standard Web service interface. This choice ensures interoperability, scalability, and robustness. The design, development, and testing of the prototype are described in detail, including: (i) the implemented features for managing and analysing energy consumption and production; (ii) the data structures; (iii) the database and the Web service; and (iv) the testing and debugging performed. Finally, the project is assessed and suggestions for improvement are made.
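As a rough illustration of one of the features listed above, the sketch below shows how a consumption ceiling could be checked and turned into alerts. The ConsumptionAlerts and Reading names are illustrative assumptions, not the dissertation's actual classes; in the real system this logic sits behind the Google App Engine Web service.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch (assumed names): check metered consumption against a
    // user-defined ceiling and generate an alert message for each violation.
    public class ConsumptionAlerts {

        public record Reading(String meterId, long epochSeconds, double kWh) { }

        private final double ceilingKwh;   // user-defined consumption ceiling

        public ConsumptionAlerts(double ceilingKwh) {
            this.ceilingKwh = ceilingKwh;
        }

        /** Returns an alert message for every reading above the ceiling. */
        public List<String> check(List<Reading> readings) {
            List<String> alerts = new ArrayList<>();
            for (Reading r : readings) {
                if (r.kWh() > ceilingKwh) {
                    alerts.add(String.format(
                            "meter %s exceeded ceiling: %.2f kWh > %.2f kWh",
                            r.meterId(), r.kWh(), ceilingKwh));
                }
            }
            return alerts;
        }
    }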
Abstract:
This dissertation describes the work carried out and the knowledge acquired during the "iCOPE" project, conducted within the Master's programme in Computing and Medical Instrumentation Engineering. The project consisted of developing an application system to support the provision of health care services to patients with psychotic disorders, both through self-management tools and through features that allow a therapist to monitor the occurrences reported by their assigned patients. The tasks under the responsibility of the author of this dissertation comprised the elicitation and specification of functional requirements, the development of the features and interfaces for user management and system administration, the development of the features and interfaces used by therapists, and the creation of tools for installing the central application server, with additional cooperation in developing the features and interfaces used by patients, namely in modelling the database and in testing and error detection. The evaluation of the developed interfaces, based on the answers given by a group of potential users to an anonymous usability survey, showed that they are satisfied with the implemented solution, while leaving room for future improvements and additional features.
Abstract:
At the core of this work is research into methods, techniques, and tools for fault localization in model-based software development processes. To this end, a novel, model-based software development process that I co-developed, the so-called Fujaba Process, is first presented. This process is driven by use case scenarios, which are formalized by special collaboration diagrams. The further artifacts of the process, up to the finished application, are also modelled with UML diagram types; no source-code programming is required. Tool support for the presented process is provided by the Fujaba CASE tool. Large parts of the tool support for the Fujaba Process, including the tool support for testing and debugging, were developed as part of this work. The first part of the thesis explains the Fujaba Process in detail and reports our experience using the process in industrial projects and in teaching. The second part describes the test generation developed in this work, which has become an important part of the Fujaba Process: executable test cases are generated from the formalized use case scenarios. The underlying concept, the concrete technical realization, and practical experience with the developed test generation are presented. The last part deals with debugging in the Fujaba Process. Several concepts and techniques developed in this work are presented that simplify fault localization during application development. Care was taken that debugging, like all other steps in the Fujaba Process, happens exclusively at the model level. Among other things, techniques for the step-wise execution of models, an object browser, and a debugger that allows the backwards execution of programs (back-in-time debugging) are presented. All described concepts were implemented and evaluated in this work as plugins for the Eclipse version of Fujaba, Fujaba4Eclipse, paying attention to tight integration with Fujaba on the one hand and with Eclipse on the other. In summary, a development process is presented, together with the possibility of identifying faults in it with automatic tests and then localizing and finally fixing these faults in the program by means of special debugging techniques, with the complete process taking place at the model level. For the testing and debugging techniques, plugins for Fujaba4Eclipse were developed in this work that support the developer as well as possible in the corresponding activity.
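To make the back-in-time debugging idea above concrete, here is a minimal sketch in Java: before every model-level step the current state is snapshotted onto a history stack, so the debugger can later restore it and appear to execute backwards. The BackInTimeDebugger and ModelState names are hypothetical illustrations, not part of Fujaba's actual API.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.UnaryOperator;

    // Sketch of the back-in-time principle: snapshot before each step,
    // pop a snapshot to step backwards. ModelState is a placeholder.
    public class BackInTimeDebugger {

        /** Hypothetical immutable snapshot of the running model. */
        public record ModelState(String description) { }

        private final Deque<ModelState> history = new ArrayDeque<>();
        private ModelState current;

        public BackInTimeDebugger(ModelState initial) {
            this.current = initial;
        }

        /** Step forward: remember the old state, then apply one model step. */
        public void stepForward(UnaryOperator<ModelState> step) {
            history.push(current);          // snapshot before the step
            current = step.apply(current);  // execute one model step
        }

        /** Step backward by restoring the most recent snapshot. */
        public void stepBackward() {
            if (!history.isEmpty()) {
                current = history.pop();
            }
        }

        public ModelState current() { return current; }
    }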
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards investigating the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without some kind of robust (semi)automatic parser. The need for the parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences containing a "real" grammatical error; these may be corrected in small-scale texts, but not, in general, in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable number of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Building the formal grammar was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors: without precise knowledge of the nature and location of syntactic errors, it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar as one huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later, which pays off especially when testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process that would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development. The consistency of new and modified rules of the formal grammar with the already existing rules is one of the crucial problems of every project aiming at a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems for other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
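The grammar testing and debugging method described above boils down to a regression harness: after every grammar change, a test-bed of sentences with known expected outcomes is re-parsed and any discrepancy is reported. The sketch below, in Java with a hypothetical Parser interface standing in for the project's parser, illustrates the idea.

    import java.util.List;
    import java.util.Map;

    // Sketch of a grammar regression harness over a test-bed of sentences.
    // Parser is a hypothetical stand-in, not the project's actual API.
    public class GrammarTestBed {

        /** Hypothetical parser API: returns true if the sentence parses. */
        public interface Parser {
            boolean parses(String sentence);
        }

        // Each test-bed entry maps a sentence to whether it should parse.
        private final Map<String, Boolean> expected;

        public GrammarTestBed(Map<String, Boolean> expected) {
            this.expected = expected;
        }

        /** Returns the sentences whose outcome disagrees with the test-bed. */
        public List<String> discrepancies(Parser parser) {
            return expected.entrySet().stream()
                    .filter(e -> parser.parses(e.getKey()) != e.getValue())
                    .map(Map.Entry::getKey)
                    .toList();
        }
    }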
Abstract:
The article presents a new log-merging tool for multi-blade telecommunication systems, based on a newly developed approach. The new log-merging tool (the Log Merger) can help engineers build a timeline of process behaviour, with a flexible system of information structuring used to assess changes in the analyzed system. Building on the experts' experience and analytical skills, the log-merging system generates a knowledge base that could be advantageous in the further development of a decision-making expert system. This paper proposes and discusses the design and implementation of the Log Merger, its architecture, its multi-board analysis capability, and its application areas. The paper also presents possible ways of further improving the tool, e.g. extending its functionality and covering additional system platforms. The possibility of adding an analysis module for further expert system development is also considered.
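At its core, building a single timeline from per-board logs is a k-way merge by timestamp. The following sketch illustrates that core step under simplified assumptions; LogEntry is an invented stand-in for the paper's actual data model, which additionally carries the flexible information structuring described above.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.Iterator;
    import java.util.List;
    import java.util.PriorityQueue;

    // Sketch: merge k timestamp-sorted per-board log streams into one
    // timeline with a priority queue (k-way merge).
    public class LogTimelineMerger {

        public record LogEntry(long timestampMillis, String board, String message) { }

        /** Merges per-board logs (each sorted by timestamp) into one timeline. */
        public static List<LogEntry> merge(List<List<LogEntry>> perBoardLogs) {
            record Head(LogEntry entry, Iterator<LogEntry> rest) { }

            PriorityQueue<Head> heads = new PriorityQueue<>(
                    Comparator.comparingLong(h -> h.entry().timestampMillis()));
            for (List<LogEntry> log : perBoardLogs) {
                Iterator<LogEntry> it = log.iterator();
                if (it.hasNext()) heads.add(new Head(it.next(), it));
            }

            List<LogEntry> timeline = new ArrayList<>();
            while (!heads.isEmpty()) {
                Head h = heads.poll();
                timeline.add(h.entry());   // next event in global time order
                if (h.rest().hasNext()) heads.add(new Head(h.rest().next(), h.rest()));
            }
            return timeline;
        }
    }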
Supporting Run-time Monitoring of UML-RT through Customizable Monitoring Configurations in PapyrusRT
Abstract:
Model Driven Engineering uses the principle that code can be generated automatically from software models, which can potentially save development time and cost. With this methodology, a system's structure and behaviour can be expressed in more abstract, high-level terms, without some of the accidental complexity that the use of a general-purpose language can bring. Models are the actual implementation of the system, unlike in traditional software development, where models are often used for documentation purposes only. However, once the code is generated from the model, testing and debugging activities tend to happen at the code level, and the model is not updated. We believe that monitoring at the model level could facilitate quality assurance activities, as errors are detected in the early phases of development. In this thesis, we create a Monitoring Configuration for PapyrusRT, an open-source model driven engineering tool in Eclipse. We support run-time monitoring of UML-RT elements with the tracing tool LTTng. We annotate the model with monitoring information to be used by the code generator for adding tracepoint statements for the corresponding elements. We provide the option of a timing specification to discover latency errors on the model. We validate the results by creating and tracing real-time models in PapyrusRT.
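As an illustration of what a timing specification can amount to at run time, the sketch below checks the latency observed between an entry and an exit tracepoint against a declared budget. It is written in Java for consistency with the other sketches in this listing; PapyrusRT's generated code and LTTng tracepoints are C/C++, and LatencyMonitor is an invented illustration, not generated code.

    // Sketch: latency budget check between two tracepoints on a model element.
    public class LatencyMonitor {

        private final String element;      // qualified name of the UML-RT element
        private final long budgetNanos;    // latency budget from the timing spec
        private long startNanos;

        public LatencyMonitor(String element, long budgetNanos) {
            this.element = element;
            this.budgetNanos = budgetNanos;
        }

        /** Called where the generator inserts the "entry" tracepoint. */
        public void enter() {
            startNanos = System.nanoTime();
        }

        /** Called where the generator inserts the "exit" tracepoint. */
        public void exit() {
            long observed = System.nanoTime() - startNanos;
            if (observed > budgetNanos) {
                System.err.printf("latency violation in %s: %d ns > %d ns%n",
                        element, observed, budgetNanos);
            }
        }
    }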
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the keys to succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are now asking for these high-quality software products at an ever-increasing pace, leaving companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those that have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly; many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges of non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
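The simplest performance-testing setup described above can be pictured as follows: run several generated test sequences concurrently and observe their response times. The sketch assumes a hypothetical TestSequence interface in place of the thesis' model-generated test cases.

    import java.util.List;
    import java.util.concurrent.*;

    // Sketch: execute multiple test sequences at once and record each
    // one's latency, observing responsiveness rather than output.
    public class LoadRunner {

        public interface TestSequence {
            void run() throws Exception;
        }

        /** Runs all sequences at once and returns each one's latency in ms. */
        public static List<Long> runConcurrently(List<TestSequence> sequences)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(sequences.size());
            try {
                List<Callable<Long>> tasks = sequences.stream()
                        .map(seq -> (Callable<Long>) () -> {
                            long start = System.nanoTime();
                            seq.run();
                            return (System.nanoTime() - start) / 1_000_000;
                        })
                        .toList();
                return pool.invokeAll(tasks).stream()
                        .map(f -> {
                            try { return f.get(); }
                            catch (Exception e) { return -1L; } // failed sequence
                        })
                        .toList();
            } finally {
                pool.shutdown();
            }
        }
    }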
Abstract:
OBJECTIVE: Voluntary HIV counseling and testing are provided to all Brazilian pregnant women with the purpose of reducing mother-to-child HIV transmission. The purpose of the study was to assess characteristics of HIV testing and identify factors associated with HIV counseling and testing. METHODS: A cross-sectional study was carried out comprising 1,658 mothers living in Porto Alegre, Brazil. Biological, reproductive and social variables were obtained from mothers by means of a standardized questionnaire. Having received counseling about HIV testing was the dependent variable. Confidence intervals, the chi-square test, and a hierarchical logistic model were used to determine the association between counseling and maternal variables. RESULTS: Of the 1,658 mothers interviewed, 1,603 or 96.7% (95% CI: 95.7-97.5) underwent HIV testing, and 51 or 3.1% (95% CI: 2.3-4.0) were not tested. Four (0.2%) refused to undergo testing after counseling. Of the 51 women not tested in this study, 30 had undergone testing previously. Of the 1,603 women tested, 630 or 39.3% (95% CI: 36.9-41.7) received counseling, 947 or 59.2% (95% CI: 56.6-61.5) did not, and 26 (1.6%) did not inform. Low income, lack of prenatal care, late beginning of prenatal care, use of rapid testing, and receiving prenatal care in the public sector were variables independently associated with a lower probability of receiving counseling about HIV testing. CONCLUSIONS: The study findings confirmed the high rate of prenatal HIV testing in Porto Alegre. However, women coming from less privileged social groups were less likely to receive information and benefit from counseling.
Abstract:
OBJECTIVE: To assess rates of offering and uptake of HIV testing and their predictors among women who attended prenatal care. METHODS: A population-based cross-sectional study was conducted among postpartum women (N=2,234) who attended at least one prenatal care visit in 12 cities. Independent and probabilistic samples were selected in the cities studied. Sociodemographic data, information about prenatal care, and access to HIV prevention interventions during the current pregnancy were collected. Bivariate and multivariate analyses were carried out to assess independent effects of the covariates on offering and uptake of HIV testing. Data collection took place between November 1999 and April 2000. RESULTS: Overall, 77.5% of the women reported undergoing HIV testing during the current pregnancy. Offering of HIV testing was positively associated with: previous knowledge about prevention of mother-to-child transmission of HIV; a higher number of prenatal care visits; a higher level of education; and being white. The HIV testing acceptance rate was 92.5%. CONCLUSIONS: The study results indicate that dissemination of information about prevention of mother-to-child transmission among women may contribute to increasing HIV testing coverage during pregnancy. Non-white women with a lower level of education should be prioritized. Strategies to increase vulnerable women's attendance at prenatal care and to raise awareness among health care workers are of the utmost importance.
Abstract:
In the investigation and diagnosis of damage to historical masonry structures, the state of stress of the masonry is an important characteristic that must be determined with as much accuracy as possible. Flat-jack testing is a traditional method used to determine the state of stress in historical masonry structures; however, when irregular masonry is tested, the method can damage the masonry units and its accuracy is reduced. An enhanced technique, called tube-jack testing, is being developed at the University of Minho to reduce the damage caused during testing and to improve accuracy on irregular masonry. This method uses multiple cylindrical jacks inserted in a line of holes drilled in the mortar joints of the masonry, avoiding damage to the masonry units. Concurrently with the development of tube-jack testing, the effect of the stress state on sonic testing is being studied. Sonic testing is often used to locate voids and damage in masonry, and the focus of these studies was to determine whether the state of stress influences the sonic test results. In this paper, the results of tube-jack testing and sonic testing on masonry walls loaded in compression, built in the laboratory for the purpose of this study, are presented. Tube-jack testing is used to estimate the state of stress in the masonry, and the sonic test results are evaluated with respect to the effect of the load applied to the wall. Future testing and study are suggested for the continued development of these test methods.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general purposes and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multi-Processing (SMP) unit. Nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration. Currently available GPUs can run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have been denoted General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices which can implement hardware logic with low latency, high parallelism, and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, their programming is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. We look at identifying the algorithms that fit best on a given architecture, starting from the characteristics of the hardware to determine the properties a parallel algorithm must have in order to be accelerated, as well as at combining architectures so that they complement each other beneficially. In particular, we take into account the degree of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
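As a minimal illustration of the SMP case discussed above, the sketch below spreads a data-parallel computation with no data dependencies across all available cores using Java's parallel streams. Workloads with heavy data dependencies or synchronization, two of the criteria named above, would not speed up this way.

    import java.util.stream.IntStream;

    // Sketch: the same dependency-free computation run sequentially and
    // in parallel on an SMP machine, timing both variants.
    public class SmpExample {
        public static void main(String[] args) {
            int n = 50_000_000;

            long t0 = System.nanoTime();
            double seq = IntStream.range(0, n)
                    .mapToDouble(i -> Math.sin(i) * Math.cos(i))
                    .sum();
            long t1 = System.nanoTime();

            double par = IntStream.range(0, n)
                    .parallel()                 // fork-join across all cores
                    .mapToDouble(i -> Math.sin(i) * Math.cos(i))
                    .sum();
            long t2 = System.nanoTime();

            System.out.printf("sequential: %.0f in %d ms%n", seq, (t1 - t0) / 1_000_000);
            System.out.printf("parallel:   %.0f in %d ms%n", par, (t2 - t1) / 1_000_000);
        }
    }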