19 results for Domain Knowledge
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Software must be constantly adapted due to evolving domain knowledge and unanticipated requirements changes. To adapt a system at run-time we need to reflect on its structure and its behavior. Object-oriented languages introduced reflection to deal with this issue; however, no reflective approach up to now has tried to provide a unified solution to both structural and behavioral reflection. This paper describes Albedo, a unified approach to structural and behavioral reflection. Albedo is a model of fine-grained, unanticipated, dynamic structural and behavioral adaptation. Instead of providing reflective capabilities as an external mechanism, we integrate them deeply into the environment. We show how explicit meta-objects allow us to provide a range of reflective features and thereby evolve both application models and environments at run-time.
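Albedo itself is a Smalltalk/Pharo model, but the central idea of the abstract — an explicit per-object meta-object through which behavior can be adapted at run time, without anticipating the change in the class — can be sketched in Python. All names below are illustrative, not Albedo's API:

```python
class MetaObject:
    """Explicit meta-object: stores behavioral adaptations for one base object."""
    def __init__(self):
        self.overrides = {}

    def adapt(self, name, fn):
        """Install or replace a method on the base object at run time."""
        self.overrides[name] = fn


class Reflective:
    """Base objects route method lookup through their meta-object first."""
    def __init__(self):
        object.__setattr__(self, 'meta', MetaObject())

    def __getattribute__(self, name):
        meta = object.__getattribute__(self, 'meta')
        if name in meta.overrides:
            fn = meta.overrides[name]
            return lambda *args: fn(self, *args)
        return object.__getattribute__(self, name)


class Greeter(Reflective):
    def greet(self):
        return "hello"


g = Greeter()
assert g.greet() == "hello"
# Unanticipated, per-instance behavioral adaptation at run time:
g.meta.adapt('greet', lambda self: "bonjour")
assert g.greet() == "bonjour"
```

The point of the sketch is that the adaptation is scoped to one instance and installed while the object is live; other `Greeter` instances keep their original behavior.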
Abstract:
Features encapsulate the domain knowledge of a software system and thus are valuable sources of information for a reverse engineer. When analyzing the evolution of a system, we need to know how and which features were modified to recover both the change intention and its extent, namely which source artifacts are affected. Typically, the implementation of a feature crosscuts a number of source artifacts. To obtain a mapping from features to the source artifacts, we exercise the features and capture their execution traces. However, this results in large traces that are difficult to interpret. To tackle this issue we compact the traces into simple sets of source artifacts that participate in a feature's runtime behavior. We refer to these compacted traces as feature views. Within a feature view, we partition the source artifacts into disjoint sets of characterized software entities. The characterization defines the level of participation of a source entity in the features. We then analyze the features over several versions of a system and we plot their evolution to reveal how and which features were affected by changes in the code. We show the usefulness of our approach by applying it to a case study where we address the problem of merging parallel development tracks of the same system.
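The compaction and characterization steps described above can be sketched with hypothetical trace data (the method names and labels are invented for illustration; the paper's actual characterization is finer-grained):

```python
# Hypothetical trace data: each feature maps to the raw sequence of methods
# executed when exercising it.
traces = {
    'login':  ['Session.open', 'User.check', 'Log.write', 'User.check'],
    'logout': ['Session.close', 'Log.write'],
}

# Compact each large trace into a feature view: the set of source artifacts
# that participate in the feature's run-time behavior.
views = {f: set(t) for f, t in traces.items()}

# Characterize participation: an artifact used by exactly one feature is
# feature-specific; one used by several features is shared infrastructure.
def characterize(views):
    usage = {}
    for f, artifacts in views.items():
        for a in artifacts:
            usage.setdefault(a, set()).add(f)
    return {a: ('specific' if len(fs) == 1 else 'shared')
            for a, fs in usage.items()}

labels = characterize(views)
```

Comparing the resulting `views` across versions of the system is what makes the evolution plots possible: a feature changed between versions exactly when its set of participating artifacts changed.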
Plectin interacts with the rod domain of type III intermediate filament proteins desmin and vimentin
Abstract:
Plectin is a versatile cytolinker protein critically involved in the organization of the cytoskeletal filamentous system. The muscle-specific intermediate filament (IF) protein desmin, which progressively replaces vimentin during differentiation of myoblasts, is one of the important binding partners of plectin in mature muscle. Defects of either plectin or desmin cause muscular dystrophies. By cell transfection studies, yeast two-hybrid, overlay and pull-down assays for binding analysis, we have characterized the functionally important sequences for the interaction of plectin with desmin and vimentin. The association of plectin with both desmin and vimentin predominantly depended on its fifth plakin repeat domain and downstream linker region. Conversely, the interaction of desmin and vimentin with plectin required sequences contained within the segments 1A-2A of their central coiled-coil rod domain. This study furthers our knowledge of the interaction between plectin and IF proteins important for maintenance of cytoarchitecture in skeletal muscle. Moreover, binding of plectin to the conserved rod domain of IF proteins could well explain its broad interaction with most types of IFs.
Abstract:
Image denoising methods have been implemented in both spatial and transform domains. Each domain has its advantages and shortcomings, which can be complemented by each other. State-of-the-art methods like block-matching 3D filtering (BM3D) therefore combine both domains. However, implementation of such methods is not trivial. We offer a hybrid method that is surprisingly easy to implement and yet rivals BM3D in quality.
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
Abstract:
Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be efficiently used by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
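The abstract does not give the paper's actual coupling measure, but the general idea — domain entities that repeatedly occur together in the same domain-level records are likely to be coupled in the implementation — can be sketched with invented data and an invented threshold:

```python
from itertools import combinations
from collections import Counter

# Hypothetical domain-level records: which domain entities appear together in
# the same business transaction (visible to domain experts, no source needed).
transactions = [
    {'Order', 'Customer', 'Invoice'},
    {'Order', 'Invoice'},
    {'Customer', 'Account'},
]

# Count entity co-occurrences; pairs above a threshold are predicted to
# correspond to dependencies between the implementing source artifacts.
pairs = Counter()
for t in transactions:
    for a, b in combinations(sorted(t), 2):
        pairs[(a, b)] += 1

predicted = {p for p, n in pairs.items() if n >= 2}
```

Here only `('Invoice', 'Order')` clears the threshold, so only that pair would be flagged as a probable architectural dependency; everything the sketch uses is visible to a domain expert, which is the property the approach relies on.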
Abstract:
Answering run-time questions in object-oriented systems involves reasoning about and exploring connections between multiple objects. Developer questions exercise various aspects of an object and require multiple kinds of interactions depending on the relationships between objects, the application domain and the differing developer needs. Nevertheless, traditional object inspectors, the essential tools often used to reason about objects, favor a generic view that focuses on the low-level details of the state of individual objects. This leads to an inefficient effort, increasing the time spent in the inspector. To improve the inspection process, we propose the Moldable Inspector, a novel approach for an extensible object inspector. The Moldable Inspector allows developers to look at objects using multiple interchangeable presentations and supports a workflow in which multiple levels of connecting objects can be seen together. Both these aspects can be tailored to the domain of the objects and the question at hand. We further exemplify how the proposed solution improves the inspection process, introduce a prototype implementation and discuss new directions for extending the Moldable Inspector.
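The Moldable Inspector itself is implemented in Pharo; the following Python sketch only illustrates the core mechanism the abstract describes — multiple interchangeable presentations registered per domain type, with a generic fallback. All names are assumptions:

```python
# Registry of presentations, keyed by object type and presentation name.
presentations = {}

def present(type_, name):
    """Register a named presentation for objects of the given type."""
    def register(fn):
        presentations.setdefault(type_, {})[name] = fn
        return fn
    return register

@present(dict, 'raw')
def raw_view(obj):
    return repr(obj)

@present(dict, 'keys')
def keys_view(obj):
    return ', '.join(sorted(obj))

def inspect(obj, view='raw'):
    """Pick the most specific registered presentation, else a generic view."""
    for type_ in type(obj).__mro__:
        if type_ in presentations and view in presentations[type_]:
            return presentations[type_][view](obj)
    return repr(obj)  # low-level generic view, as in traditional inspectors
```

The design point is that adding a domain-specific view is a small, local registration rather than a change to the inspector itself, which is what makes tailoring to "the question at hand" cheap.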
Abstract:
Debuggers are crucial tools for developing object-oriented software systems as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements and show how our approach improves debugging by adapting the debugger to several domains.
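The "adapts itself to a domain by selecting, at run time, appropriate debugging operations and views" part can be sketched as extensions guarded by activation predicates over the current run-time context. The Moldable Debugger is a Pharo framework; the names and context keys below are invented:

```python
class DebuggerExtension:
    """A domain-specific debugger: operations plus an activation predicate."""
    def __init__(self, name, applies, operations):
        self.name = name
        self.applies = applies        # predicate over the current run-time context
        self.operations = operations  # domain-specific debugging operations

extensions = [
    DebuggerExtension('events',
                      lambda ctx: ctx.get('framework') == 'announcements',
                      ['step-to-next-event']),
    DebuggerExtension('generic',
                      lambda ctx: True,
                      ['step-into', 'step-over']),
]

def active_operations(ctx):
    """Collect operations from every extension matching the current context."""
    ops = []
    for ext in extensions:
        if ext.applies(ctx):
            ops.extend(ext.operations)
    return ops
```

When the debugged program is inside the hypothetical event framework, the domain-specific `step-to-next-event` operation becomes available alongside the generic stepping operations; elsewhere only the generic ones remain, which is the abstraction-gap fix the abstract argues for.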
Abstract:
The implementation of new surgical techniques offers chances but carries risks. Usually, several years pass before a critical appraisal and a balanced opinion of a new treatment method are available, relying on evidence from the literature and experts' opinions. The frozen elephant trunk (FET) technique has been increasingly used to treat complex pathologies of the aortic arch and the descending aorta, but there is still an ongoing discussion within the surgical community about the optimal indications. This paper represents a common effort of the Vascular Domain of EACTS together with several surgeons with particular expertise in aortic surgery, and summarizes the current knowledge and the state of the art of the FET technique. The majority of the information about the FET technique has been extracted from 97 focused publications already available in the PubMed database (cohort studies, case reports, reviews, small series, meta-analyses and best evidence topics) published in English.
Abstract:
Several theories assume that successful team coordination is partly based on knowledge that helps anticipate the individual contributions necessary in a situational task. It has been argued that a more ecological perspective needs to be considered in contexts that evolve dynamically and unpredictably. In football, defensive plays are usually coordinated according to strategic concepts spanning all members and large areas of the playing field. On the other hand, fewer people are involved in offensive plays, as these are less projectable and strongly constrained by ecological characteristics. The aim of this study is to test the effects of ecological constraints and player knowledge on decision making in offensive game scenarios. It is hypothesized that both knowledge about team members and situational constraints will influence decisional processes. Effects of situational constraints are expected to be of higher magnitude. Two teams playing in the fourth league of the Swiss Football Federation participate in the study. Forty customized game scenarios were developed based on the coaches' information about player positions and game strategies. Each player was shown in ball possession four times. Participants were asked to take the perspective of the player on the ball and to choose a passing destination and a recipient. Participants then rated domain-specific strengths (e.g., technical skills, game intelligence) of each of their teammates. Multilevel models for categorical dependent variables (team members) will be specified. Player knowledge (rated skills) and ecological constraints (operationalized as each player's proximity and availability for ball reception) are included as predictor variables. Data are currently being collected. Results will yield effects of parameters that are stable across situations as well as of variable parameters that are bound to situational context. These will enable insight into the degree to which ecological constraints and more enduring team knowledge are involved in decisional processes aimed at coordinating interpersonal action.
Abstract:
We introduce gradient-domain rendering for Monte Carlo image synthesis. While previous gradient-domain Metropolis Light Transport sought to distribute more samples in areas of high gradients, we show, in contrast, that estimating image gradients is also possible using standard (non-Metropolis) Monte Carlo algorithms, and furthermore, that even without changing the sample distribution, this often leads to significant error reduction. This broadens the applicability of gradient rendering considerably. To gain insight into the conditions under which gradient-domain sampling is beneficial, we present a frequency analysis that compares Monte Carlo sampling of gradients followed by Poisson reconstruction to traditional Monte Carlo sampling. Finally, we describe Gradient-Domain Path Tracing (G-PT), a relatively simple modification of the standard path tracing algorithm that can yield far superior results.
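The "sampling of gradients followed by Poisson reconstruction" step can be illustrated in one dimension: given noisy primal samples p and independently sampled finite-difference gradients g, a screened Poisson solve minimizes alpha * sum_i (I_i - p_i)^2 + sum_i (I_{i+1} - I_i - g_i)^2. This pure-Python Gauss-Seidel sketch is a toy, not G-PT itself (a real renderer solves this in 2D, per color channel):

```python
def screened_poisson_1d(p, g, alpha=0.2, iters=2000):
    """Reconstruct I from primal samples p and gradient samples g (len(p)-1)."""
    I = list(p)
    for _ in range(iters):
        for i in range(len(p)):
            # Accumulate the quadratic terms that touch pixel i.
            num = alpha * p[i]
            den = alpha
            if i > 0:                      # term (I_i - I_{i-1} - g_{i-1})^2
                num += I[i - 1] + g[i - 1]
                den += 1.0
            if i < len(p) - 1:             # term (I_{i+1} - I_i - g_i)^2
                num += I[i + 1] - g[i]
                den += 1.0
            I[i] = num / den
    return I

# Noisy primal estimates of the true signal [0, 1, 2] ...
p = [0.3, 0.9, 2.2]
# ... but accurately sampled gradients (differences of 1.0 each).
g = [1.0, 1.0]
I = screened_poisson_1d(p, g)
```

Because the gradient samples are low-noise, the reconstruction recovers a nearly linear ramp and pushes the primal noise into the low-frequency offset, which is exactly why gradient sampling reduces error when gradients are sparse.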
Abstract:
We propose dual-domain filtering, an image processing paradigm that couples spatial-domain with frequency-domain filtering. Our dual-domain filter removes artifacts such as the residual noise of other image denoising methods and compression artifacts. Moreover, iterating the filter achieves state-of-the-art image denoising results, but with a much simpler algorithm than competing approaches. The simplicity and versatility of the dual-domain filter make it an attractive tool for image processing.
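The published filter couples a bilateral spatial kernel with windowed frequency-domain shrinkage in a single guided pass; the 1D sketch below is only a loose illustration of alternating the two domains (range-weighted spatial averaging, then hard-thresholded Fourier shrinkage), not the actual algorithm:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n) for f in range(n)).real / n
            for k in range(n)]

def spatial_step(x, sigma=1.0):
    """Range-weighted (bilateral-like) average over a 3-sample window."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - 1), min(len(x), i + 2)):
            w = math.exp(-((x[j] - x[i]) ** 2) / (2 * sigma ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

def frequency_step(x, thresh):
    """Hard-threshold small Fourier coefficients (classic shrinkage denoising)."""
    X = dft(x)
    X = [c if abs(c) > thresh else 0 for c in X]
    return idft(X)

def dual_domain_denoise(x, iters=2, thresh=1.0):
    for _ in range(iters):
        x = frequency_step(spatial_step(x), thresh)
    return x

noisy = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.0]
denoised = dual_domain_denoise(noisy)
```

The intuition the sketch preserves is the complementarity the abstract mentions: the spatial step suppresses isolated outliers that frequency shrinkage smears, while the frequency step removes the low-amplitude residual noise that survives spatial averaging.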
Abstract:
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
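The abstract does not give the paper's formulation, but one common way to turn per-pixel feature vectors into regularization weights — strong smoothness constraints between pixels whose features agree, weak ones across feature discontinuities — can be sketched as follows (feature values and the bandwidth are invented):

```python
import math

def feature_weight(f_i, f_j, sigma=0.5):
    """Similar feature vectors -> strong smoothness constraint between pixels."""
    d2 = sum((a - b) ** 2 for a, b in zip(f_i, f_j))
    return math.exp(-d2 / (2 * sigma ** 2))

# Feature images (normals, textures, positions) are near-noise-free auxiliary
# renderer outputs: two pixels on the same flat surface share a normal, while
# pixels across a geometric edge do not.
same_surface = feature_weight((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
across_edge = feature_weight((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

Weights like these, plugged into the smoothness terms of a screened Poisson solve, regularize noisy regions without blurring across texture edges and geometric discontinuities, which is the behavior the abstract describes.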