952 results for user testing, usability testing, system integration, thinking aloud, card sorting
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received much attention in recent years, with several methods, techniques, and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an Architecture Description Language (ADL) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For formal verification, the technique applied was model checking, with Spin as the model checker of choice. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for correctness with respect to temporal properties. For testing, an approach for SAM architectures was defined that includes the evaluation of test cases, based on Petri net testing theory, for use in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to the testing and verification of SAM models with the aid of the SAM tool.
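Neither the SAM-to-Promela translation nor the Spin models are reproduced in the abstract. Purely as an illustration of the underlying idea, the Python sketch below enumerates the reachable markings of a toy Petri net modelling a shared connector and checks a simple safety property over them, the kind of temporal-logic property (here, an invariant) that a model checker such as Spin verifies automatically for translated models. All place and transition names are hypothetical.

```python
from collections import deque

# Toy Petri net: two components competing for a shared connector.
# Places hold token counts; a transition fires when its input places are marked.
PLACES = ["idle_A", "idle_B", "free", "busy_A", "busy_B"]
TRANSITIONS = {
    "acquire_A": ({"idle_A": 1, "free": 1}, {"busy_A": 1}),
    "release_A": ({"busy_A": 1}, {"idle_A": 1, "free": 1}),
    "acquire_B": ({"idle_B": 1, "free": 1}, {"busy_B": 1}),
    "release_B": ({"busy_B": 1}, {"idle_B": 1, "free": 1}),
}
INITIAL = {"idle_A": 1, "idle_B": 1, "free": 1, "busy_A": 0, "busy_B": 0}

def enabled(marking, pre):
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    new = dict(marking)
    for p, n in pre.items():
        new[p] -= n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

def check_invariant(invariant):
    """Breadth-first search of the reachability graph, checking a safety
    property (an invariant that must hold in every reachable marking)."""
    seen = {tuple(INITIAL[p] for p in PLACES)}
    queue = deque([INITIAL])
    while queue:
        m = queue.popleft()
        if not invariant(m):
            return False, m
        for pre, post in TRANSITIONS.values():
            if enabled(m, pre):
                nxt = fire(m, pre, post)
                key = tuple(nxt[p] for p in PLACES)
                if key not in seen:
                    seen.add(key)
                    queue.append(nxt)
    return True, None

# Safety property: the connector is never held by A and B at the same time,
# the kind of property stated in LTL as [](!(busy_A && busy_B)).
ok, witness = check_invariant(lambda m: not (m["busy_A"] and m["busy_B"]))
print("property holds" if ok else f"violated in marking {witness}")
```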
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors must be considered, listed here from high to low priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed, one for each character case. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. In the testing phase, the database consists of 20,000 handwritten characters, with 10,000 for each case. Recognizing the 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
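The exact loss formulation and network architecture are not given in the abstract. As a minimal sketch of the central idea, for residuals r = y - t the mean quartic error (1/N) Σ r^4 has non-zero third and fourth derivatives with respect to r, unlike the mean square error, whose derivatives above the second vanish; large residuals are also penalized far more strongly. The comparison below uses hypothetical outputs and targets.

```python
import numpy as np

def mse(y, t):
    """Mean square error and its gradient with respect to the outputs y."""
    r = y - t
    return np.mean(r ** 2), 2 * r / r.size

def mqe(y, t):
    """Mean quartic error and its gradient with respect to the outputs y.
    Its third and fourth derivatives in the residual are non-zero, which is
    what higher-order training methods can exploit."""
    r = y - t
    return np.mean(r ** 4), 4 * r ** 3 / r.size

y = np.array([0.9, 0.2, 0.7])   # hypothetical network outputs
t = np.array([1.0, 0.0, 1.0])   # hypothetical targets
print("MSE:", mse(y, t)[0], "MQE:", mqe(y, t)[0])
```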
Abstract:
As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities for software reliability. Autonomic computing (AC) describes a new generation of software systems that are characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing. This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting the development and performance overhead through software reuse.
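The dissertation's design models and tooling are not shown in the abstract. The sketch below is only an illustration of the general idea of an implicit self-test: a component that runs its own runtime tests after every dynamic reconfiguration, times them, and rolls back an adaptation whose tests fail. All class, configuration, and test names are hypothetical.

```python
import time

class SelfTestingComponent:
    """Toy autonomic component: after every dynamic reconfiguration it runs
    its own runtime test suite (an 'implicit self-test') before the new
    configuration is accepted. Names and tests are purely illustrative."""

    def __init__(self):
        self.config = {"pool_size": 4}

    def adapt(self, new_config):
        old = dict(self.config)
        self.config.update(new_config)
        start = time.perf_counter()
        passed, results = self.run_self_tests()
        elapsed = time.perf_counter() - start
        if not passed:
            self.config = old          # roll back the unsafe adaptation
        return {"accepted": passed, "results": results, "test_time_s": elapsed}

    def run_self_tests(self):
        tests = [self.test_pool_size_positive, self.test_pool_size_bounded]
        results = {t.__name__: t() for t in tests}
        return all(results.values()), results

    def test_pool_size_positive(self):
        return self.config["pool_size"] > 0

    def test_pool_size_bounded(self):
        return self.config["pool_size"] <= 64

c = SelfTestingComponent()
print(c.adapt({"pool_size": 16}))   # valid change: tests pass, accepted
print(c.adapt({"pool_size": -1}))   # invalid change: tests fail, rolled back
```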
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and mobile devices. Context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed to take user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis so that it may contribute to improving the results of a subsequent search. On the basis of these developments at the server side, various solutions are consequently provided at the client side. A software-based proxy component is set up for data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component. The implementation of such a component lends credence to this belief, in that the context-aware applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from a user's daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided to evaluate the effectiveness of the design approach as realistically as possible. The integration of the Yahoo search engine into the context-aware architecture demonstrates how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
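The actual scoring algorithm and cache policy are not given in the abstract. The sketch below illustrates, with hypothetical names and weights, the feedback idea described above: prior selections stored in a per-user context profile boost matching results returned by an external search backend (a stand-in for the Yahoo search API used in the thesis).

```python
from collections import defaultdict

class ContextAwareSearch:
    """Toy re-ranking layer over an external search engine. The thesis's
    actual scoring model is not given; here each prior selection recorded in
    a user's context profile simply boosts that result. All names and the
    boost weight are illustrative assumptions."""

    def __init__(self, backend_search):
        self.backend_search = backend_search            # e.g. a web search API
        self.profiles = defaultdict(lambda: defaultdict(int))

    def record_selection(self, user, result_id):
        # Feedback loop: each click strengthens the user's context profile.
        self.profiles[user][result_id] += 1

    def search(self, user, query, boost=0.5):
        results = self.backend_search(query)            # [(result_id, base_score), ...]
        profile = self.profiles[user]
        rescored = [(rid, score + boost * profile[rid]) for rid, score in results]
        return sorted(rescored, key=lambda pair: pair[1], reverse=True)

def fake_backend(query):
    # Stand-in for a real engine; returns (id, base relevance score) pairs.
    return [("restaurant_A", 1.0), ("restaurant_B", 0.9), ("restaurant_C", 0.8)]

engine = ContextAwareSearch(fake_backend)
engine.record_selection("alice", "restaurant_C")
engine.record_selection("alice", "restaurant_C")
print(engine.search("alice", "dinner nearby"))   # restaurant_C now ranked first
```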
Abstract:
This thesis research describes the design and implementation of a Semantic Geographic Information System (GIS) and the creation of its spatial database. The database schema is designed and created, and all textual and spatial data are loaded into the database with the help of the Semantic DBMS's Binary Database Interface, currently being developed at FIU's High Performance Database Research Center (HPDRC). A user-friendly graphical interface is created together with the system's other main areas: display processing, data animation, and data retrieval. All these components are tightly integrated to form a novel and practical semantic GIS that has facilitated the interpretation, manipulation, analysis, and display of spatial data such as ocean temperature, ozone (TOMS), and simulated SeaWiFS data. At the same time, this system has played a major role in the testing of the HPDRC's high-performance, efficient parallel Semantic DBMS.
Abstract:
During the summer of 2016, Duke University Libraries staff began a project to update the way that research databases are displayed on the library website. The new research databases page is a customized version of the default A-Z list that Springshare provides for its LibGuides content management system. Duke Libraries staff made adjustments to the content and interface of the page. In order to see how Duke users navigated the new interface, usability testing was conducted on August 9th, 2016.
Abstract:
With the importance of renewable energy well established worldwide, and targets for such energy quantified in many cases, there is considerable interest in the assessment of wind and wave devices. While the individual components of these devices are often relatively well understood and the aspects of energy generation well researched, there seems to be a gap in the understanding of these devices as a whole, especially in the field of their dynamic responses under operational conditions. The mathematical modelling and estimation of their dynamic responses are more evolved, but research directed towards the testing of these devices still requires significant attention. Model-free indicators of the dynamic responses of these devices are important since they reflect the as-deployed behaviour of the devices when the exposure conditions, along with the structural dimensions, are scaled reasonably correctly. This paper demonstrates how the Hurst exponent of the dynamic responses of a monopile exposed to different exposure conditions in an ocean wave basin can be used as a model-free indicator of various responses. The scaled model is exposed to Froude-scaled waves and tested under different exposure conditions. The analysis and interpretation are carried out in a model-free and output-only environment, with only some preliminary ideas regarding the input of the system. The analysis indicates how the Hurst exponent can be an interesting descriptor for comparing and contrasting various scenarios of dynamic response conditions.
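The paper's exact estimator is not reproduced in the abstract. A common way to obtain a model-free Hurst exponent from an output-only response record is rescaled-range (R/S) analysis, sketched below on synthetic data; the slope of log(R/S) against log(window size) approximates H.

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, dtype=float)
    rs_means = []
    for n in window_sizes:
        blocks = x[: (len(x) // n) * n].reshape(-1, n)
        rs_vals = []
        for block in blocks:
            dev = np.cumsum(block - block.mean())       # mean-adjusted cumulative deviates
            r = dev.max() - dev.min()                   # range
            s = block.std(ddof=1)                       # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        rs_means.append(np.mean(rs_vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(0)
response = rng.standard_normal(4096)          # stand-in for a measured response record
print(round(hurst_rs(response), 2))           # roughly 0.5 for uncorrelated noise
```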
Testing a gravity-based accessibility instrument to engage stakeholders into integrated LUT planning
Abstract:
The paper starts from the concern that, while there is a large body of literature focusing on the theoretical definitions and measurements of accessibility, the extent to which such measures are used in planning practice is less clear. Previous reviews of accessibility instruments have in fact identified a gap between the clear theoretical assumptions and the infrequent application of accessibility instruments in spatial and transport planning. In this paper we present the results of a structured workshop involving private and public stakeholders to test the usability of gravity-based accessibility measures (GraBaM) for assessing integrated land-use and transport policies. The research is part of the COST Action TU1002 “Accessibility Instruments for Planning Practice”, during which different accessibility instruments were tested on different case studies. Here we report on the empirical case study of Rome.
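The GraBaM formulation used in the workshop is not spelled out in the abstract. A standard gravity-based (potential) accessibility measure takes the form A_i = Σ_j O_j f(c_ij), with O_j the opportunities in zone j, c_ij the travel cost from zone i to j, and f an impedance function, often a negative exponential. The sketch below uses illustrative zones and a negative-exponential impedance; a policy package is compared by recomputing A after changing land use (O) or the transport network (c).

```python
import numpy as np

def gravity_accessibility(opportunities, cost, beta=0.1):
    """Potential (gravity-based) accessibility: A_i = sum_j O_j * exp(-beta * c_ij).
    opportunities: O_j per zone; cost: matrix of travel costs c_ij; beta: decay parameter."""
    return np.exp(-beta * np.asarray(cost)) @ np.asarray(opportunities)

jobs = [1200, 800, 400]                    # opportunities per zone (illustrative)
travel_time = [[5, 20, 35],                # minutes between zones i and j
               [20, 5, 25],
               [35, 25, 5]]
print(gravity_accessibility(jobs, travel_time))   # accessibility score per zone
```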
Abstract:
The domestication of plants and animals marks one of the most significant transitions in human, and indeed global, history. Traditionally, study of the domestication process was the exclusive domain of archaeologists and agricultural scientists; today it is an increasingly multidisciplinary enterprise that has come to involve the skills of evolutionary biologists and geneticists. Although the application of new information sources and methodologies has dramatically transformed our ability to study and understand domestication, it has also generated increasingly large and complex datasets, the interpretation of which is not straightforward. In particular, challenges of equifinality, evolutionary variance, and emergence of unexpected or counter-intuitive patterns all face researchers attempting to infer past processes directly from patterns in data. We argue that explicit modeling approaches, drawing upon emerging methodologies in statistics and population genetics, provide a powerful means of addressing these limitations. Modeling also offers an approach to analyzing datasets that avoids conclusions steered by implicit biases, and makes possible the formal integration of different data types. Here we outline some of the modeling approaches most relevant to current problems in domestication research, and demonstrate the ways in which simulation modeling is beginning to reshape our understanding of the domestication process.
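None of the cited modeling approaches are reproduced in the abstract. As a minimal illustration of the kind of explicit, generative model meant, the sketch below runs a Wright-Fisher forward simulation of a single, hypothetical domestication allele under weak selection and drift, the sort of simulation whose output can be compared against patterns observed in genetic data rather than inferring process directly from pattern.

```python
import numpy as np

def simulate_allele_frequency(p0=0.05, s=0.02, pop_size=500,
                              generations=300, seed=1):
    """Wright-Fisher forward simulation of one locus under selection and drift:
    a toy generative model of the spread of a favoured (e.g. domestication)
    allele. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Deterministic change due to genic selection, then binomial drift.
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = rng.binomial(2 * pop_size, p) / (2 * pop_size)
        trajectory.append(p)
    return trajectory

print(round(simulate_allele_frequency()[-1], 2))   # final frequency of the favoured allele
```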
Abstract:
PCR-based immunoglobulin (Ig)/T-cell receptor (TCR) clonality testing in suspected lymphoproliferations has largely been standardized and has consequently become technically feasible in a routine diagnostic setting. Standardization of the pre-analytical and post-analytical phases is now essential to prevent misinterpretation and incorrect conclusions derived from clonality data. As clonality testing is not a quantitative assay, but rather concerns recognition of molecular patterns, guidelines for reliable interpretation and reporting are mandatory. Here, the EuroClonality (BIOMED-2) consortium summarizes important pre- and post-analytical aspects of clonality testing, provides guidelines for interpretation of clonality testing results, and presents a uniform way to report the results of the Ig/TCR assays. Starting from an immunobiological concept, two levels to report Ig/TCR profiles are discerned: the technical description of individual (multiplex) PCR reactions and the overall molecular conclusion for B and T cells. Collectively, the EuroClonality (BIOMED-2) guidelines and consensus reporting system should help to improve the general performance level of clonality assessment and interpretation, which will directly impact on routine clinical management (standardized best-practice) in patients with suspected lymphoproliferations.
Abstract:
Tumor genomic instability and selective treatment pressures result in clonal disease evolution; molecular stratification for molecularly targeted drug administration requires repeated access to tumor DNA. We hypothesized that circulating plasma DNA (cpDNA) in advanced cancer patients is largely derived from tumor, has prognostic utility, and can be utilized for multiplex tumor mutation sequencing when repeat biopsy is not feasible. We utilized the Sequenom MassArray System and OncoCarta panel for somatic mutation profiling. Matched samples, acquired from the same patient but at different time points, were evaluated; these comprised formalin-fixed paraffin-embedded (FFPE) archival tumor tissue (primary and/or metastatic) and cpDNA. The feasibility, sensitivity, and specificity of this high-throughput, multiplex mutation detection approach were tested utilizing specimens acquired from 105 patients with solid tumors referred for participation in Phase I trials of molecularly targeted drugs. The median cpDNA concentration was 17 ng/ml (range: 0.5-1600); this was 3-fold higher than in healthy volunteers. Moreover, higher cpDNA concentrations were associated with worse overall survival (OS): the OS hazard ratio was 2.4 (95% CI 1.4, 4.2) for each 10-fold increase in cpDNA concentration, and in multivariate analyses cpDNA concentration, albumin, and performance status remained independent predictors of OS. These data suggest that plasma DNA in these cancer patients is largely derived from tumor. We also observed high detection concordance for critical 'hot-spot' mutations (KRAS, BRAF, PIK3CA) in matched cpDNA and archival tumor tissue, as well as important differences between archival tumor and cpDNA. This multiplex sequencing assay can be utilized to detect somatic mutations from plasma in advanced cancer patients when safe repeat tumor biopsy is not feasible and genomic analysis of archival tumor is deemed insufficient. Overall, circulating nucleic acid biomarker studies have clinically important multi-purpose utility in advanced cancer patients, and further studies to pursue their incorporation into the standard of care are warranted.
Abstract:
Ageing and deterioration of infrastructure is a challenge facing transport authorities. In particular, there is a need for increased bridge monitoring in order to provide adequate maintenance and to guarantee acceptable levels of transport safety. The Intelligent Infrastructure group at Queen's University Belfast (QUB) is working on a number of aspects of infrastructure monitoring, and this paper presents summarised results from three distinct monitoring projects carried out by the group. Firstly, the findings from a project on next-generation Bridge Weigh-in-Motion (B-WIM) are reported; this includes full-scale field testing using fibre-optic strain sensors. Secondly, results from early-phase testing of a computer vision system for bridge deflection monitoring are reported. This research seeks to exploit recent advances in image processing technology with a view to developing contactless bridge monitoring approaches. Considering the logistical difficulty of installing sensors on a 'live' bridge, contactless monitoring has some inherent advantages over conventional contact-based sensing systems. Finally, the last section of the paper presents some recent findings on drive-by bridge monitoring. In practice, a drive-by monitoring system will likely require GPS to allow the response of a given bridge to be identified; this study looks at the feasibility of using low-cost GPS sensors for this purpose via field trials. The three topics outlined above cover a spectrum of SHM approaches, namely wired monitoring, contactless monitoring, and drive-by monitoring.
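Details of the three projects are in the underlying papers. Purely as an illustration of the B-WIM principle mentioned first, the sketch below applies the classic influence-line least-squares idea (often attributed to Moses): measured bridge strain is modelled as a sum of axle weights times the strain influence line evaluated at each axle position, and the weights are recovered by least squares. The influence line, truck, and noise are synthetic stand-ins, not data from the QUB field trials.

```python
import numpy as np

def influence_line(x, span=20.0):
    """Synthetic triangular strain influence line for a simply supported span."""
    x = np.clip(x, 0.0, span)
    return np.where(x < span / 2, x, span - x) / (span / 2)

def estimate_axle_weights(strain, positions, axle_offsets):
    """B-WIM in the spirit of Moses' algorithm: model the measured strain as a
    sum of axle weights times the influence line at each axle position, and
    recover the weights by linear least squares."""
    A = np.column_stack([influence_line(positions - d) for d in axle_offsets])
    weights, *_ = np.linalg.lstsq(A, strain, rcond=None)
    return weights

# Simulate a two-axle truck crossing, then recover its axle weights (kN).
positions = np.linspace(-5.0, 25.0, 300)        # front-axle position along the span
axle_offsets = np.array([0.0, 4.0])             # 4 m axle spacing (illustrative)
true_weights = np.array([60.0, 110.0])
design = np.column_stack([influence_line(positions - d) for d in axle_offsets])
strain = design @ true_weights + np.random.default_rng(0).normal(0.0, 1.0, positions.size)
print(np.round(estimate_axle_weights(strain, positions, axle_offsets), 1))  # ~[60. 110.]
```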
Abstract:
Automated acceptance testing is higher-level testing of software that checks whether the system meets the requirements of the business clients, using scripts separate from the software itself. This project is a study of the feasibility of acceptance tests written according to Behavior Driven Development principles. The project includes an implementation part in which automated acceptance testing is written for the Touch-point web application developed by Dewire (a software consultancy) for Telia (a telecom company), based on the requirements received from the customer (Telia). The automated acceptance testing uses the Cucumber-Selenium framework, which enforces Behavior Driven Development principles. The purpose of the implementation is to verify the practicability of this style of acceptance testing. From the completed implementation, it was concluded that all real-world customer requirements can be converted into executable specifications, and that the process was neither time-consuming nor difficult for a relatively inexperienced programmer such as the author. The project also includes a survey to measure the learnability and understandability of Gherkin, the language that Cucumber understands. The survey consists of Gherkin examples followed by questions that involve making changes to those examples. The survey had three parts: the first easy, the second medium, and the third most difficult. It also included a linear scale from 1 to 5 to rate the difficulty of each part, where 1 stood for very easy and 5 for very difficult. The time at which participants began the survey was also recorded in order to calculate the total time they took to learn the material and answer the questions. The survey was taken by 18 Dewire employees whose primary working role was programmer, tester, or project manager. In the results, testers and project managers were grouped as non-programmers. The survey concluded that Gherkin is very easy and quick to learn, with participants rating it as very easy.
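The Touch-point test suite itself is not reproduced here. As a self-contained illustration of the executable-specification idea behind Cucumber and Gherkin, the sketch below binds Given/When/Then steps to Python functions with regular expressions and runs a toy scenario; a real suite would use Cucumber step definitions with Selenium driving the browser, and all step text and names here are hypothetical.

```python
import re

# A Gherkin-style scenario as it might appear in a .feature file (illustrative).
SCENARIO = """
Given the Touch-point login page is open
When the user logs in as "demo"
Then the dashboard greets "demo"
"""

STEPS = []          # registry of (pattern, function) pairs

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

class World(dict):
    """Shared state passed between steps, like Cucumber's scenario context."""

@step(r'Given the Touch-point login page is open')
def open_login(world):
    world["page"] = "login"

@step(r'When the user logs in as "(\w+)"')
def log_in(world, user):
    world["page"], world["user"] = "dashboard", user

@step(r'Then the dashboard greets "(\w+)"')
def check_greeting(world, user):
    assert world["page"] == "dashboard" and world["user"] == user

def run(scenario):
    world = World()
    for line in filter(None, (raw.strip() for raw in scenario.splitlines())):
        for pattern, fn in STEPS:
            match = pattern.fullmatch(line)
            if match:
                fn(world, *match.groups())
                break
        else:
            raise RuntimeError(f"undefined step: {line}")
    print("scenario passed")

run(SCENARIO)
```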
Abstract:
Internal curing is a relatively new technique being used to promote hydration of Portland cement concretes. The fundamental concept is to provide reservoirs of water within the matrix such that the water does not increase the initial water/cementitious materials ratio of the mixture but is available to help hydration continue once the system starts to dry out. The reservoirs used in the US are typically in the form of lightweight fine aggregate (LWFA) that is saturated prior to batching. Considerable work has been conducted both in the laboratory and in the field to confirm that this approach is fundamentally sound and yet practical for construction purposes. A number of bridge decks have been successfully constructed around the US, including one in Iowa in 2013. It is reported that inclusion of about 20% to 30% LWFA will not only improve strength development and potential durability but, more importantly, will significantly reduce shrinkage, thus reducing the risk of cracking. The aim of this work was to investigate the feasibility of such an approach in a bridge deck.
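The Iowa mixture proportions are not given in the abstract. A commonly cited relation for sizing the internal-curing reservoir (e.g., Bentz, Lura, and Roberts) chooses the lightweight fine aggregate so that its absorbed water offsets the water consumed by chemical shrinkage: M_LWA = (C_f * CS * alpha_max) / (S * phi_LWA). The quick calculation below uses textbook default values, not the project's actual mixture.

```python
def lwfa_for_internal_curing(cement_content, chemical_shrinkage=0.07,
                             alpha_max=1.0, saturation=1.0, absorption=0.175):
    """Mass of dry lightweight fine aggregate (kg per m^3 of concrete) needed so
    that its absorbed water offsets chemical shrinkage:
        M_LWA = (C_f * CS * alpha_max) / (S * phi_LWA)
    C_f: cement content (kg/m^3); CS: chemical shrinkage (kg water / kg cement);
    alpha_max: expected maximum degree of hydration; S: degree of saturation of
    the LWFA; phi_LWA: absorption (kg water / kg dry LWFA). Default values are
    commonly used estimates, not the Iowa bridge-deck mixture."""
    return cement_content * chemical_shrinkage * alpha_max / (saturation * absorption)

print(round(lwfa_for_internal_curing(cement_content=360), 1))  # ~144 kg/m^3 of dry LWFA
```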