958 results for Parallel programming (computer science)


Abstract:

Many schools do not begin to introduce college students to software engineering until they have had at least one semester of programming. Since software engineering is a large, complex, and abstract subject, it is difficult to construct active learning exercises that build on the students' elementary knowledge of programming and still teach basic software engineering principles. It is also the case that beginning students typically know how to construct small programs, but they have little experience with the techniques necessary to produce reliable and long-term maintainable modules. I have addressed these two concerns by defining a local standard (Montana Tech Method (MTM) Software Development Standard for Small Modules Template) that step-by-step directs students toward the construction of highly reliable small modules using well-known, best-practice software engineering techniques. A small module is defined here as a coherent development task that can be unit tested and can be carried out by a single software engineer (or a pair) in at most a few weeks. The standard describes the process to be used and also provides a template for the top-level documentation. The mini-lectures and exercises associated with the use of this (and other) local standards are spread throughout the course, which perforce covers more abstract software engineering material through traditional reading and writing assignments. Together, this sequence of mini-lectures and hands-on assignments (many of which are done in small groups) constitutes an instructional module that can be used in any similar software engineering course.
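
To make concrete the kind of artifact the standard aims at, here is a minimal sketch of a "small module" in the sense defined above: one coherent, unit-testable task with its contract documented up front. The documentation header and the BoundedStack example are generic illustrations, not the actual MTM template.

```python
# bounded_stack.py -- a hypothetical "small module": one coherent,
# unit-testable task. The header below is a generic illustration,
# NOT the actual MTM template.
"""Bounded stack module.

Purpose:       provide a LIFO stack with a fixed capacity.
Preconditions: capacity > 0.
Error policy:  raises IndexError on overflow and underflow.
"""
import unittest


class BoundedStack:
    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be positive")
        self._items = []
        self._capacity = capacity

    def push(self, item):
        if len(self._items) >= self._capacity:
            raise IndexError("stack overflow")
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("stack underflow")
        return self._items.pop()


class BoundedStackTest(unittest.TestCase):
    """Unit tests exercising the documented contract, including error cases."""

    def test_push_then_pop_returns_last_item(self):
        s = BoundedStack(2)
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_overflow_is_rejected(self):
        s = BoundedStack(1)
        s.push(1)
        self.assertRaises(IndexError, s.push, 2)


if __name__ == "__main__":
    unittest.main()
```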

Abstract:

Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
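
A minimal sketch of the central idea, in Python rather than the authors' infrastructure: behaviour is looked up in a causally connected run-time model, and behavioural variants can be scoped to the current execution context. All names here (ContextualModel, _context) are illustrative assumptions.

```python
# Sketch: a causally-connected, context-aware behavioural model.
# Changing the model at run-time immediately changes application
# behaviour; variants can be scoped to the execution context.
import threading

_context = threading.local()  # the current run-time execution context


class ContextualModel:
    def __init__(self):
        self._variants = {}  # (method name, context) -> implementation

    def define(self, name, impl, context=None):
        # The model is available (and modifiable) at run-time.
        self._variants[(name, context)] = impl

    def lookup(self, name):
        ctx = getattr(_context, "value", None)
        # Prefer a context-scoped variant, fall back to the default.
        return self._variants.get((name, ctx)) or self._variants[(name, None)]


model = ContextualModel()
model.define("greet", lambda: "hello")
model.define("greet", lambda: "bonjour", context="fr")


def greet():
    return model.lookup("greet")()


print(greet())         # hello
_context.value = "fr"  # adaptation scoped to this thread's context
print(greet())         # bonjour
```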

Abstract:

Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing away too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach, and we present benchmarks that demonstrate that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than with other approaches.
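
The key design decision can be sketched in a few lines. The paper's implementation works inside the virtual machine; this Python illustration only mimics the reachability argument: because each object's history hangs off the object itself, unreachable history is reclaimed by the ordinary garbage collector, with no heuristic pruning of a global trace.

```python
# Sketch: past states are stored *with* the object, so when the object
# becomes unreachable its history dies with it and is garbage collected.
class Tracked:
    def __init__(self):
        object.__setattr__(self, "_history", [])  # (field, old value) pairs

    def __setattr__(self, name, value):
        # Record the previous value before overwriting it.
        old = self.__dict__.get(name)
        self._history.append((name, old))
        object.__setattr__(self, name, value)

    def past_values(self, name):
        return [v for (n, v) in self._history if n == name]


p = Tracked()
p.balance = 10
p.balance = 42
print(p.past_values("balance"))  # [None, 10] -- history reachable from p
# Once p itself becomes unreachable, p._history is collected along with it:
# irrelevant past data disappears without any explicit discarding policy.
```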

Abstract:

The demands of developing modern, highly dynamic applications have led to an increasing interest in dynamic programming languages and mechanisms. Not only must applications evolve over time, but the object models themselves may need to be adapted to the requirements of different run-time contexts. Class-based models and prototype-based models, for example, may need to co-exist to meet the demands of dynamically evolving applications. Multi-dimensional dispatch, fine-grained and dynamic software composition, and run-time evolution of behaviour are further examples of diverse mechanisms which may need to co-exist in a dynamically evolving run-time environment. How can we model the semantics of these highly dynamic features, yet still offer some reasonable safety guarantees? To this end we present an original calculus in which objects can adapt their behaviour at run-time. Both objects and environments are represented by first-class mappings between variables and values. Message sends are dynamically resolved to method calls. Variables may be dynamically bound, making it possible to model a variety of dynamic mechanisms within the same calculus. Despite the highly dynamic nature of the calculus, safety properties are assured by a type assignment system.
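
A toy rendering of the calculus' core ingredients, using Python dictionaries in place of the formal first-class mappings (the type assignment system that provides the safety guarantees is not modelled here):

```python
# Sketch: objects are first-class mappings from variables to values, and
# message sends are dynamically resolved to method calls against whatever
# mapping is current.
def send(obj, msg, *args):
    method = obj[msg]          # dynamic resolution of a message to a method
    return method(obj, *args)  # the receiver is passed explicitly


point = {
    "x": 1,
    "describe": lambda self: f"point at x={self['x']}",
}

print(send(point, "describe"))  # point at x=1

# Run-time evolution of behaviour: rebinding a variable in the mapping
# changes what the same message send does from now on, e.g. switching
# to a prototype-style variant of the same message.
point["describe"] = lambda self: f"<proto x={self['x']}>"
print(send(point, "describe"))  # <proto x=1>
```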

Abstract:

The debugger is an essential tool in any programming environment, as it helps developers understand the dynamic behaviour of software systems. However, traditional debuggers fail in answering domain-specific questions, as the semantics of what they show and do are fixed. In this paper we introduce our work towards a moldable debugger which, unlike traditional debuggers, both adapts itself and can be adapted to a particular debugging context. Thus, it allows developers to answer their questions by using concepts from their own application domains.
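
A minimal sketch of what "molding" a debugger to a domain could look like: the debugger exposes extension points so that domain-specific predicates and actions can be plugged in. The class and method names below are hypothetical, not the paper's actual interface.

```python
# Sketch: a debugger with pluggable, domain-specific breakpoints.
import sys


class MoldableDebugger:
    def __init__(self):
        self._breakpoints = []  # (predicate over frames, action) pairs

    def break_when(self, predicate, action):
        """Plug in a domain-specific debugging concern."""
        self._breakpoints.append((predicate, action))

    def _trace(self, frame, event, arg):
        for predicate, action in self._breakpoints:
            if event == "call" and predicate(frame):
                action(frame)
        return self._trace

    def run(self, thunk):
        sys.settrace(self._trace)
        try:
            return thunk()
        finally:
            sys.settrace(None)


# Domain-specific question: "notify me whenever a parser rule is entered".
dbg = MoldableDebugger()
dbg.break_when(
    lambda f: f.f_code.co_name.startswith("parse_"),
    lambda f: print("entered rule:", f.f_code.co_name),
)


def parse_expr():
    return 42


print(dbg.run(parse_expr))  # entered rule: parse_expr, then 42
```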

Abstract:

The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely on either freehand 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods together in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations to be used in clinics (within an average error of 2 mm) in less than 5 min.

Abstract:

Images of an object under different illumination are known to provide strong cues about the object surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameter space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the identification of the GBR parameters can be achieved very efficiently; finally, the algorithm is consistent, as it always returns the same solution given the same data. Our algorithm is also remarkably robust: it can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80% of the observations). The method is validated on real data and achieves state-of-the-art results.
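
For reference, the GBR ambiguity has a standard closed form in the photometric stereo literature (this is classical background, due to Belhumeur et al., not a result of the abstract above). With image intensities I = b^T s, where b is the albedo-scaled normal and s the light direction, the following three-parameter family leaves every image unchanged:

```latex
% GBR ambiguity, parameters \mu, \nu, \lambda with \lambda \neq 0.
% All images are invariant since
% \bar{b}^{\top}\bar{s} = b^{\top} G^{-1} G\, s = b^{\top} s.
\[
G =
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
\mu & \nu & \lambda
\end{pmatrix},
\qquad
\bar{b} = G^{-\mathsf{T}} b,
\qquad
\bar{s} = G\, s,
\]
% equivalently, on the depth map:
% \bar{z}(x,y) = \lambda\, z(x,y) + \mu x + \nu y.
\]
```

Each LDR maximum (normal parallel to the light) constrains (μ, ν, λ) to the semi-circle mentioned in the abstract, so two maxima from different images already pin down a unique solution, which is what makes the method efficient and consistent.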

Abstract:

In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
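
A minimal sketch of the island-grammar idea expressed with hand-rolled parser combinators: "islands" (here, Java-like class headers) are parsed precisely, while everything else is treated as "water" and skipped. The grammar and all combinator names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: island grammar via tiny parser combinators.
import re


def token(pattern):
    rx = re.compile(pattern)

    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(), m.end()) if m else None

    return parse


def water(island):
    """Skip characters (the 'water') until the island parser succeeds."""
    def parse(text, pos):
        while pos < len(text):
            result = island(text, pos)
            if result:
                return result
            pos += 1
        return None

    return parse


def many(parser):
    def parse(text, pos):
        results = []
        while True:
            r = parser(text, pos)
            if not r:
                return results, pos
            value, pos = r
            results.append(value)

    return parse


# Island: a class header. A coarse importer is just "many islands in water";
# refinement means swapping in more precise island parsers later.
class_header = token(r"class\s+\w+")
importer = many(water(class_header))

src = "/* noise */ class Parser { int x; } junk class Model {}"
print(importer(src, 0)[0])  # ['class Parser', 'class Model']
```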

Abstract:

The future Internet architecture aims to reformulate the way content and services are requested, making them location-independent. Information-Centric Networking is a new network paradigm which tries to achieve this goal by making content objects identified and requested by name instead of by address. In this paper, we extend the Information-Centric Networking architecture to support services, so that they too can be requested and invoked by name. We present the NextServe framework, a service framework with a human-readable, self-explanatory naming scheme. NextServe is inspired by the object-oriented programming paradigm and is applicable to real-world scenarios.
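
A hypothetical illustration of name-based service invocation in this spirit. The abstract does not give the concrete NextServe syntax, so the name format and service table below are invented purely for the example:

```python
# Sketch: a human-readable name carries provider, service, method and
# arguments, and is resolved to an invocation instead of an address.
services = {
    ("weather", "temperature"): lambda city: f"17C in {city}",
}


def invoke(name):
    # Invented format: "<provider>/<service>/<method>/<arg>..."
    _, service, method, *args = name.split("/")
    return services[(service, method)](*args)


print(invoke("ch.unibe/weather/temperature/Bern"))  # 17C in Bern
```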

Abstract:

The Mobile Cloud Networking project develops, among others, several virtualized services and applications, in particular: (1) IP Multimedia Subsystem as a Service, which gives the possibility to deploy a virtualized and on-demand instance of the IP Multimedia Subsystem platform; (2) Digital Signage Service as a Service, which is based on a re-designed Digital Signage Service architecture adopting cloud computing principles; and (3) Information-Centric Networking/Content Delivery Network as a Service, which is used for distributing, caching and migrating content from other services. Possible designs for these virtualized services and applications have been identified and are being implemented. In particular, the architectures of the mentioned services were specified, adopting cloud computing principles such as infrastructure sharing, elasticity, on-demand provisioning and pay-as-you-go. The benefits of the Reactive Programming paradigm are presented in the context of interactive cloudified Digital Signage services in a Mobile Cloud Platform, as well as the benefit of interworking between different Mobile Cloud Networking services, such as the Digital Signage Service and the Content Delivery Network Service, for better performance of Video on Demand content delivery. Finally, the management of Service Level Agreements and the support of rating, charging and billing have also been considered and defined.

Abstract:

We review our recent work on protein-ligand interactions in vitamin transporters of the Sec14-like protein family. Our studies focused on the cellular retinaldehyde-binding protein (CRALBP) and the alpha-tocopherol transfer protein (alpha-TTP). CRALBP is responsible for mobilisation and photo-protection of short-chain cis-retinoids in the dim-light visual cycle of rod photoreceptors. alpha-TTP is a key protein responsible for selection and retention of RRR-alpha-tocopherol, the most active isoform of vitamin E in higher animals. Our simulation studies show how subtle chemical variations in the substrate can lead to significant distortion in the structure of the complex, and how these changes can either lead to new protein function or be used to model engineered protein variants with tailored properties. Finally, we show how the integration of computational and experimental results can contribute in synergy to the understanding of fundamental processes at the biomolecular scale.

Abstract:

Femoroacetabular impingement (FAI) before or after periacetabular osteotomy (PAO) is surprisingly frequent, and surgeons need to be aware of the risk preoperatively and be able to avoid it intraoperatively. In this paper we present a novel computer-assisted planning and navigation system for PAO with impingement analysis and range of motion (ROM) optimization. Our system starts with a fully automatic detection of the acetabular rim, which allows for quantifying the acetabular morphology with parameters such as acetabular version, inclination and femoral head coverage ratio for computer-assisted diagnosis and planning. The planned situation was then optimized with impingement simulation by balancing acetabular coverage with ROM. Intra-operatively, navigation was conducted until the optimized planning situation was achieved. Our experimental results demonstrate that: 1) the fully automated acetabular rim detection was validated with an accuracy of 1.1 ± 0.7 mm; 2) the optimized PAO planning improved ROM significantly compared to planning without ROM optimization; 3) by comparing the pre-operatively planned situation and the intra-operatively achieved situation, sub-degree accuracy was achieved for all directions.

Abstract:

Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be used efficiently by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
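
A minimal sketch of the underlying intuition, with made-up data: if domain entities that co-occur in domain-level artifacts (screens, business documents) tend to be coupled in code, then co-occurrence counts can serve as a cheap dependency predictor. This illustrates the idea only; it is not the authors' algorithm.

```python
# Sketch: predict code-level dependencies from domain-level coupling.
from collections import Counter
from itertools import combinations

# Domain-level information visible to non-technical users: which domain
# entities appear together on the same screens. (Invented example data.)
screens = [
    {"Order", "Customer", "Invoice"},
    {"Order", "Product"},
    {"Invoice", "Customer"},
]

# Count co-occurrences of domain entities as a proxy for coupling.
coupling = Counter()
for entities in screens:
    for pair in combinations(sorted(entities), 2):
        coupling[pair] += 1

# Predict a dependency wherever domain coupling exceeds a threshold,
# without ever looking at source code or the database schema.
predicted = [pair for pair, n in coupling.items() if n >= 2]
print(predicted)  # [('Customer', 'Invoice')] -- predicted dependency
```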