271 results for Computer aided software engineering
Abstract:
This report describes the available functionality and use of the ClusterEval evaluation software. It implements novel and standard measures for the evaluation of cluster quality. This software has been used at the INEX XML Mining track and in the MediaEval Social Event Detection task.
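For context, here is a minimal sketch of one standard cluster-quality measure that such a tool typically implements, purity; the example clustering and labels are invented, and this is not ClusterEval's actual interface:

```python
from collections import Counter

def purity(clusters, labels):
    """Purity: the fraction of items that carry the majority ground-truth
    label of the cluster they were assigned to (parallel input lists)."""
    total = 0
    for c in set(clusters):
        members = [lab for cl, lab in zip(clusters, labels) if cl == c]
        total += Counter(members).most_common(1)[0][1]
    return total / len(labels)

# Invented example: six items in two clusters, ground-truth labels a/b.
print(purity([0, 0, 0, 1, 1, 1], ["a", "a", "b", "b", "b", "a"]))  # 0.666...
```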
Abstract:
Digital Human Models (DHM) have been used for over 25 years. They have evolved from simple drawing templates, which are nowadays still used in architecture, to complex, Computer Aided Engineering (CAE) integrated design and analysis tools for various ergonomic tasks. DHM are most frequently used for applications in product design and production planning, with many successful implementations documented. DHM from other domains, for example computer user interfaces, artificial intelligence, training and education, or the entertainment industry, show that there is also an ongoing development towards a comprehensive understanding and holistic modeling of human behavior. While the development of DHM for the game sector has seen significant progress in recent years, advances of DHM in the area of ergonomics have been comparatively modest. As a consequence, we need to question whether current DHM systems are fit for the design of future mobile work systems. So far it appears that DHM in ergonomics are rather limited to some traditional applications. According to Dul et al. (2012), future characteristics of Human Factors and Ergonomics (HFE) can be assigned to six main trends: (1) global change of work systems, (2) cultural diversity, (3) ageing, (4) information and communication technology (ICT), (5) enhanced competitiveness and the need for innovation, and (6) sustainability and corporate social responsibility. Based on a literature review, we systematically investigate the capabilities of current ergonomic DHM systems against the ‘Future of Ergonomics’ requirements. It is found that DHM already provide broad functionality in support of trends (1) and (2), and more limited options with regard to trend (3). Today’s DHM provide access to a broad range of national and international databases for correct differentiation and characterization of anthropometry for global populations. Some DHM explicitly address social and cultural modeling of groups of people. In comparison, the trends of the growing importance of ICT (4), the need for innovation (5) and sustainability (6) are addressed primarily from a hardware-oriented engineering perspective and are not reflected in DHM. This reflects a persistent separation between hardware design (engineering) and software design (information technology) in the view of DHM, a disconnection that urgently needs to be overcome in the era of software-defined user interfaces and mobile devices. The design of a mobile ICT device is discussed to exemplify the need for a comprehensive future DHM solution. Designing such mobile devices requires an approach that includes organizational aspects as well as technical and cognitive ergonomics. Multiple interrelationships between the different aspects result in a challenging setting for future DHM. In conclusion, the ‘Future of Ergonomics’ poses particular challenges for DHM with regard to the design of mobile work systems and, moreover, mobile information access.
Abstract:
The relationship between coronal knee laxity and the restraining properties of the collateral ligaments remains unknown. This study investigated correlations between the structural properties of the collateral ligaments and stress angles used in computer-assisted total knee arthroplasty (TKA), measured with an optically based navigation system. Ten fresh-frozen cadaveric knees (mean age: 81 ± 11 years) were dissected to leave the menisci, cruciate ligaments, posterior joint capsule and collateral ligaments. The resected femur and tibia were rigidly secured within a test system which permitted kinematic registration of the knee using a commercially available image-free navigation system. Frontal plane knee alignment and varus-valgus stress angles were acquired. The force applied during varus-valgus testing was quantified. Medial and lateral bone-collateral ligament-bone specimens were then prepared, mounted within a uni-axial materials testing machine, and extended to failure. Force and displacement data were used to calculate the principal structural properties of the ligaments. The mean varus laxity was 4 ± 1° and the mean valgus laxity was 4 ± 2°. The corresponding mean manual force applied was 10 ± 3 N and 11 ± 4 N, respectively. While measures of knee laxity were independent of the ultimate tensile strength and stiffness of the collateral ligaments, there was a significant correlation between the force applied during stress testing and the instantaneous stiffness of the medial (r = 0.91, p = 0.001) and lateral (r = 0.68, p = 0.04) collateral ligaments. These findings suggest that clinicians may perceive a rate of change of ligament stiffness as the end-point during assessment of collateral knee laxity.
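As a rough illustration of how structural properties can be derived from force-displacement data of the kind collected here, the sketch below computes stiffness as the slope of an assumed linear region and ultimate tensile strength as the peak force; the curve values and the choice of linear region are hypothetical:

```python
import numpy as np

# Hypothetical force-displacement record from a bone-ligament-bone specimen
# extended to failure (displacement in mm, force in N); values are invented.
displacement = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
force = np.array([0.0, 15.0, 55.0, 110.0, 170.0, 220.0, 240.0, 180.0])

# Ultimate tensile strength: the peak force reached before failure.
uts = force.max()

# Stiffness: slope of a least-squares line over the (assumed) linear region
# of the loading curve, here taken as the middle of the ramp.
linear = slice(2, 6)
stiffness, _ = np.polyfit(displacement[linear], force[linear], 1)

print(f"Ultimate tensile strength: {uts:.0f} N")
print(f"Stiffness (linear region): {stiffness:.1f} N/mm")
```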
Abstract:
The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
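The abstract does not spell the representation out, but one plausible reading of a rank-sum encoding, sketched here purely as an assumption, replaces each raw metric value with its rank across modules and sums the ranks per module:

```python
import numpy as np

# Hypothetical metrics matrix: rows are modules, columns are raw static
# metrics (e.g. LOC, cyclomatic complexity, fan-out); values are invented.
metrics = np.array([
    [120,  4,  3],
    [950, 27, 11],
    [ 60,  2,  1],
    [400, 15,  6],
])

# Rank each metric column across modules (1 = smallest; ties ignored for
# brevity). Ranks are robust to the noisy, heavy-tailed distributions and
# class imbalance typical of software metrics data.
ranks = metrics.argsort(axis=0).argsort(axis=0) + 1

# Rank-sum feature: one score per module; a higher sum marks the module as
# a stronger candidate for inspection.
print(ranks.sum(axis=1))  # [ 6 12  3  9]
```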
Abstract:
The article introduces a novel platform for conducting controlled and risk-free driving and traveling behavior studies, called Cyber-Physical System Simulator (CPSS). The key features of CPSS are: (1) simulation of multiuser immersive driving in a three-dimensional (3D) virtual environment; (2) integration of traffic and communication simulators with human driving based on dedicated middleware; and (3) accessibility of the multiuser driving simulator on popular software and hardware platforms. This combination of features allows us to easily collect large-scale data on interesting phenomena regarding the interaction between multiple user drivers, which is not possible with current single-user driving simulators. The core original contribution of this article is threefold: (1) we introduce a multiuser driving simulator based on DiVE, our original massively multiuser networked 3D virtual environment; (2) we introduce OpenV2X, a middleware for simulating vehicle-to-vehicle and vehicle-to-infrastructure communication; and (3) we present two experiments based on our CPSS platform. The first experiment investigates the “rubbernecking” phenomenon, where a platoon of four user drivers experiences an accident in the oncoming direction of traffic. The second reports on a pilot study of the effectiveness of a Cooperative Intelligent Transport Systems advisory system.
Abstract:
Software development settings provide a great opportunity for CSCW researchers to study collaborative work. In this paper, we explore a specific work practice called bug reproduction that is part of the software bug-fixing process. Bug reproduction is a highly collaborative process by which software developers attempt to locally replicate the ‘environment’ within which a bug was originally encountered. Customers, who encounter bugs in their everyday use of systems, play an important role in bug reproduction as they provide useful information to developers, in the form of steps for reproduction, software screenshots, trace logs, and other ways to describe a problem. Bug reproduction, however, poses major hurdles in software maintenance as it is often challenging to replicate the contextual aspects that are at play at the customers’ end. To study the bug reproduction process from a human-centered perspective, we carried out an ethnographic study at a multinational engineering company. Using semi-structured interviews, a questionnaire and half-day observations of sixteen software developers working on different software maintenance projects, we studied bug reproduction. In this paper, we present a holistic view of bug reproduction practices from a real-world setting and discuss implications for designing tools to address the challenges developers face during bug reproduction.
Abstract:
We present a method for calculating odometry in three dimensions for car-like ground vehicles with an Ackerman-like steering model. In our approach we use the information from a single camera to derive the odometry in the plane and fuse it with roll and pitch information derived from an on-board IMU to extend to three dimensions, thus providing odometric altitude as well as traditional x and y translation. We have mounted the odometry module on a standard Toyota Prado SUV and present results from a car-park environment as well as from an off-road track.
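A minimal sketch of this kind of fusion, under an assumed z-up convention and a simple tilt-projection model (the function name and signs are illustrative, not the authors' formulation):

```python
import numpy as np

def extend_odometry_to_3d(dx, dy, roll, pitch):
    """Project a planar odometry increment (dx forward, dy lateral, metres)
    into 3D using IMU roll and pitch (radians, z-up, nose-up pitch positive),
    returning (dx', dy', dz) with dz the altitude change."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    dz = dx * sp - dy * sr * cp      # climb from pitch, shift from roll
    return dx * cp, dy * cr, dz      # ground components shrink with tilt

# Example: 1 m forward on a 5-degree upslope gains roughly 8.7 cm of altitude.
print(extend_odometry_to_3d(1.0, 0.0, 0.0, np.radians(5)))
```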
Abstract:
The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by doing many satisfiability checks, each of which only reasons about a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches, and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems none of the eager solvers can and that the portfolio solver outperforms other solvers both in terms of total number of problems solved and the time taken to solve them.
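The portfolio idea itself is simple to realize: launch both configurations on the same input and report whichever verdict arrives first. A minimal sketch follows; the solver commands, flags, and input file are placeholders, not the paper's actual tooling:

```python
import subprocess
import concurrent.futures as cf

# Placeholder commands for an eager and a lazy bit-vector solver run on the
# same benchmark; the binary name, flags, and file are assumptions.
SOLVERS = {
    "eager": ["solver", "--bv-mode=eager", "problem.smt2"],
    "lazy": ["solver", "--bv-mode=lazy", "problem.smt2"],
}

def run(name, cmd):
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
    return name, out.stdout.strip()

# Portfolio: launch both solvers in parallel and report the first verdict.
# In this simple sketch the slower run is merely ignored, not killed.
with cf.ThreadPoolExecutor() as pool:
    futures = [pool.submit(run, name, cmd) for name, cmd in SOLVERS.items()]
    name, verdict = next(cf.as_completed(futures)).result()
    print(f"{name} solver answered first: {verdict}")
```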
Abstract:
This paper describes a software architecture for real-world robotic applications. We discuss issues of software reliability, testing and realistic off-line simulation that allows the majority of the automation system to be tested off-line in the laboratory before deployment in the field. A recent project, the automation of a very large mining machine, is used to illustrate the discussion.
Abstract:
In 2005, Ginger Myles and Hongxia Jin proposed a software watermarking scheme based on converting jump instructions or unconditional branch statements (UBSs) into calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and an integrity check. If the program is tampered with, the fingerprint and integrity values change and the target address is no longer computed correctly. In this paper, we present an attack that breaks the scheme by tracking stack pointer modifications, and we provide implementation details. The key element of the attack is to remove the fingerprint and integrity check generating code from the program after disassociating the target address from the fingerprint and integrity value. Using debugging tools that give the attacker extensive control over stack pointer operations, we perform both subtractive and watermark replacement attacks. The major steps of the attack are automated, resulting in a fast and low-cost attack.
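As a toy model of the FBF mechanism described above (not the actual Myles-Jin construction), a branch target can be stored XOR-encoded against the fingerprint and an integrity value, so that tampering with the protected code yields a wrong target; every name and the encoding are hypothetical:

```python
# Toy fingerprint branch function: the real jump target is stored XOR-encoded
# and only decodes correctly while the fingerprint and the integrity value
# over the protected code are unmodified. Illustrative only.

CODE = b"protected instruction bytes"
FINGERPRINT = 0x5A17
REAL_TARGET = 0x4010

def integrity(code: bytes) -> int:
    return sum(code) & 0xFFFF        # toy integrity check over the code

ENCODED = REAL_TARGET ^ FINGERPRINT ^ integrity(CODE)

def fbf(code: bytes) -> int:
    # Tampering with `code` changes integrity(code), so the decoded branch
    # target is wrong and control flow breaks, as the scheme intends.
    return ENCODED ^ FINGERPRINT ^ integrity(code)

assert fbf(CODE) == REAL_TARGET
assert fbf(CODE + b" tampered") != REAL_TARGET
```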
Abstract:
The primary aim of this multidisciplinary project was to develop a new generation of breast implants. Disrupting the currently prevailing paradigm of silicone implants, which permanently introduce a foreign body into mastectomy patients, the highly porous implants developed as part of this PhD project are biodegradable by the body and augment the growth of natural tissue. Our technology platform leverages computer-assisted design, which allows us to manufacture fully patient-specific implants based on a personalised medicine approach. Multiple animal studies conducted in this project have shown that the polymeric implant degrades slowly and harmlessly within the body while the body's own tissue forms concurrently.
Abstract:
Bug fixing is a highly cooperative work activity in which developers, testers, product managers and other stakeholders collaborate using a bug tracking system. In the context of Global Software Development (GSD), where software development is distributed across different geographical locations, we focus on understanding the role of bug trackers in supporting software bug-fixing activities. We carried out small-scale ethnographic fieldwork in a software product team distributed between Finland and India at a multinational engineering company. Using semi-structured interviews and in-situ observations of 16 bug cases, we show that the bug tracker 1) supported the information needs of different stakeholders, 2) established common ground, and 3) reinforced issues related to ownership, performance and power. Consequently, we provide implications for design around these findings.
Abstract:
I am suspicious of tools without a purpose, tools that are not developed in response to a clearly defined problem. Of course, tools without a purpose can still be useful. However, the development of first-generation CAD was seriously impeded because the solution came before the problem. We are in danger of repeating this mistake if we do not clarify the nature of the problem that we are trying to solve with the next generation of tools. Back in the 1980s I used to add a postscript slide at the end of CAD conference presentations, and the applause would invariably turn to concern. The slide simply asked: can anyone remember what it was about design that needed aiding before we had computer aided design?