25 results for Software engineering estimation model
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
Abstract:
This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today's most widespread energy estimation model in order to investigate whether the current methodology of pure software-based energy estimation running on a sensor node itself can indeed reliably and accurately determine its energy consumption - independent of the particular node instance, the traffic load the node is exposed to, or the MAC protocol the node is running. The paper enhances today's widely used energy estimation model by integrating radio transceiver switches into the model, and proposes a methodology to find the optimal estimation model parameters. It demonstrates, through statistical validation against experimental data, that the proposed model enhancement and parameter calibration methodology significantly increase the estimation accuracy.
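In essence, the enhanced model described in this abstract is a linear combination of per-state energy terms plus a per-transition cost for radio transceiver switches. The following minimal sketch illustrates that structure; the state names, current draws, supply voltage, and switch cost are hypothetical placeholders, not the calibrated parameters from the paper.

# Minimal sketch of a software-based on-line energy estimator on a sensor node.
# All numeric constants below are hypothetical placeholders, not calibrated values.
STATE_CURRENT_MA = {"cpu_active": 8.0, "cpu_sleep": 0.02,
                    "radio_rx": 19.7, "radio_tx": 17.4, "radio_off": 0.0}
RADIO_SWITCH_COST_MJ = 0.012   # energy charged per radio transceiver switch
SUPPLY_VOLTAGE_V = 3.0

def estimate_energy_mj(state_times_s, radio_switches):
    """Estimate consumed energy from accumulated per-state times and switch counts."""
    energy = sum(STATE_CURRENT_MA[state] * seconds * SUPPLY_VOLTAGE_V   # mA * s * V = mJ
                 for state, seconds in state_times_s.items())
    energy += radio_switches * RADIO_SWITCH_COST_MJ                     # transition overhead
    return energy

# Example: a node that mostly sleeps, with short rx/tx bursts and 40 radio switches
print(estimate_energy_mj({"cpu_sleep": 9.0, "cpu_active": 1.0, "radio_rx": 0.5,
                          "radio_tx": 0.1, "radio_off": 9.4}, radio_switches=40))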
Abstract:
Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
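As an informal illustration of what a causally connected, context-scoped run-time model could look like, the sketch below keeps an application model available at run time and scopes a behavioural change to one execution context; the class and method names are hypothetical and not taken from the paper.

# Hypothetical sketch: a run-time model whose adaptations are scoped to the
# current execution context rather than applied globally.
class RuntimeModel:
    def __init__(self, default_behaviour):
        self.default = default_behaviour
        self.context_overrides = {}          # execution context -> behaviour

    def adapt(self, context, behaviour):
        """Deploy a fine-grained adaptation visible only within `context`."""
        self.context_overrides[context] = behaviour

    def behaviour_for(self, context):
        return self.context_overrides.get(context, self.default)

model = RuntimeModel(default_behaviour="verbose-logging")
model.adapt(context="mobile-client", behaviour="compact-logging")
print(model.behaviour_for("mobile-client"))   # compact-logging
print(model.behaviour_for("batch-job"))       # verbose-logging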
Abstract:
The optical quality of the human eye mainly depends on the refractive performance of the cornea. The shape of the cornea is a mechanical balance between intraocular pressure and tissue intrinsic stiffness. Several surgical procedures in ophthalmology alter the biomechanics of the cornea to provoke local or global curvature changes for vision correction. Legitimated by the large number of surgical interventions performed every day, the demand for a deeper understanding of corneal biomechanics is rising to improve the safety of procedures and medical devices. The aim of our work is to propose a numerical model of corneal biomechanics, based on the stromal microstructure. Our novel anisotropic constitutive material law features a probabilistic weighting approach to model collagen fiber distribution as observed on the human cornea by X-ray scattering analysis (Aghamohammadzadeh et al., Structure, February 2004). Furthermore, collagen cross-linking was explicitly included in the strain energy function. Results showed that the proposed model is able to successfully reproduce both inflation and extensiometry experimental data (Elsheikh et al., Curr Eye Res, 2007; Elsheikh et al., Exp Eye Res, May 2008). In addition, the mechanical properties calculated for patients of different age groups (Group A: 65-79 years; Group B: 80-95 years) demonstrate increased collagen cross-linking and decreased collagen fiber elasticity from younger to older specimens. These findings correspond to what is known about maturing fibrous biological tissue. Since the presented model can handle different loading situations and includes the anisotropic distribution of collagen fibers, it has the potential to simulate clinical procedures involving nonsymmetrical tissue interventions. In the future, such a mechanical model could be used to improve surgical planning and the design of next-generation ophthalmic devices.
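A generic form of a fiber-dispersed strain energy function of the kind referred to above can be sketched as follows; this is an illustrative Holzapfel-type formulation with an angular density weighting, not the exact constitutive law proposed in the paper.

\Psi = \frac{\mu}{2}\,(I_1 - 3) + \int_{0}^{\pi} \rho(\theta)\,\frac{k_1}{2 k_2}\Bigl[\exp\!\bigl(k_2\,(I_4(\theta) - 1)^2\bigr) - 1\Bigr]\,\mathrm{d}\theta

Here I_1 is the first invariant of the right Cauchy-Green tensor, I_4(\theta) the squared stretch along the fiber direction at angle \theta, \rho(\theta) the fiber orientation density (e.g. fitted to X-ray scattering data), and \mu, k_1, k_2 material parameters; an explicit cross-linking contribution, as mentioned in the abstract, would add a further term to \Psi.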
Abstract:
Software repositories have been getting a lot of attention from researchers in recent years. In order to analyze software repositories, it is necessary to first extract raw data from the version control and problem tracking systems. This poses two challenges: (1) extraction requires a non-trivial effort, and (2) the results depend on the heuristics used during extraction. These challenges burden researchers who are new to the community and make it difficult to benchmark software repository mining, since it is almost impossible to reproduce experiments done by another team. In this paper we present the TA-RE corpus. TA-RE collects extracted data from software repositories in order to build a collection of projects that will simplify the extraction process. Additionally, the collection can be used for benchmarking. As a first step we propose an exchange language capable of making sharing and reusing data as simple as possible.
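To make the idea of a shared exchange language concrete, the snippet below sketches what a single extracted commit record might look like in such a corpus; the field names are illustrative assumptions, not the actual TA-RE schema.

# Hypothetical sketch of one exchange-format record for a mined commit;
# field names are illustrative, not the actual TA-RE schema.
import json

commit_record = {
    "project": "example-project",
    "revision": "r1042",
    "author": "dev42",
    "timestamp": "2006-11-03T14:21:00Z",
    "changed_files": ["src/Parser.java", "test/ParserTest.java"],
    "linked_issues": ["BUG-317"],               # link into the problem tracking system
    "extraction_heuristic": "message-id-match"  # records which heuristic produced the link
}

print(json.dumps(commit_record, indent=2))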
Abstract:
Automated identification of vertebrae from X-ray image(s) is an important step for various medical image computing tasks such as 2D/3D rigid and non-rigid registration. In this chapter we present a graphical model-based solution for automated vertebra identification from X-ray image(s). Our solution does not require a training process or training data and can automatically determine the number of vertebrae visible in the image(s). This is achieved by combining a graphical model-based maximum a posteriori (MAP) estimate with mean-shift based clustering. Experiments conducted on simulated X-ray images, as well as on a low-dose, low-quality spinal X-ray image of a scoliotic patient, verified its performance.
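One building block named in this abstract, mean-shift clustering, can be sketched generically as below; this is a plain Gaussian-kernel implementation meant only to illustrate how candidate vertebra detections could be grouped without fixing their number in advance, not the authors' exact formulation.

# Generic mean-shift sketch (Gaussian kernel): groups candidate detections
# without specifying the number of clusters in advance. Illustrative only.
import numpy as np

def mean_shift(points, bandwidth=5.0, iters=50):
    modes = points.astype(float)
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((points - m) ** 2, axis=1) / (2 * bandwidth ** 2))
            modes[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    clusters = []                                 # merge modes that converged together
    for m in modes:
        if not any(np.linalg.norm(m - c) < bandwidth / 2 for c in clusters):
            clusters.append(m)
    return np.array(clusters)

# Example: candidate vertebra centres along an image column (pixel coordinates)
candidates = np.array([[100.0], [103.0], [151.0], [149.0], [200.0]])
print(mean_shift(candidates, bandwidth=10.0))     # roughly three cluster centres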
Computer model simulation of alveolar phase III slopes: Implications for tidal single-breath washout
Abstract:
Course materials for e-learning are a special type of information system (IS). Thus, the development of educational material can draw on principles, methods, and tools that originated in the Software Engineering (SE) discipline and that apply in similar ways to "Instructional Engineering". An important SE principle is modularization, which supports properties like reusability and adaptability of code. To foster the adaptability of courseware, we present a concept in which learning material is organized as a library of modular course objects. A lecturer may customize the courseware according to specific course requirements, but must consider the logical dependencies of, and relationship integrity between, the selected course objects. We discuss the integrity issues that must be considered when composing consistent course materials.
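A minimal sketch of the kind of integrity check described above might verify that every selected course object's logical prerequisites are also part of the composed course; the course-object names and the dependency table below are hypothetical examples.

# Hypothetical sketch: check that a selection of modular course objects is
# closed under its logical dependencies (relationship integrity).
DEPENDENCIES = {                     # illustrative course-object dependency table
    "recursion": {"functions"},
    "functions": {"variables"},
    "variables": set(),
    "sorting": {"recursion"},
}

def missing_prerequisites(selection):
    """Return course objects required by the selection but not contained in it."""
    selected = set(selection)
    return {dep for obj in selected
                for dep in DEPENDENCIES.get(obj, set())
                if dep not in selected}

print(missing_prerequisites(["sorting", "recursion", "functions"]))  # {'variables'}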
Abstract:
A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired both with a desktop PC and with mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using a desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.
Abstract:
This paper presents our ongoing work on enterprise IT integration of sensor networks, based on the idea of service descriptions to which linked data principles are applied. We argue that using linked service descriptions facilitates a better integration of sensor nodes into enterprise IT systems and allows SOA principles to be used both within the enterprise IT and within the sensor network itself.
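As an informal illustration, a linked service description for a sensor node could be assembled as a small JSON-LD-style document, as sketched below; the vocabulary terms and URIs are hypothetical placeholders rather than an actual schema used in this work.

# Illustrative sketch: a JSON-LD-style linked service description for a sensor
# node. All URIs and vocabulary terms are hypothetical placeholders.
import json

service_description = {
    "@context": {"svc": "http://example.org/vocab/service#"},
    "@id": "http://example.org/nodes/42/temperature-service",
    "@type": "svc:SensorService",
    "svc:exposedBy": {"@id": "http://example.org/nodes/42"},
    "svc:observes": "temperature",
    "svc:endpoint": "coap://node42.example.org/temperature",
}

print(json.dumps(service_description, indent=2))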