36 results for Parallel programming (computer science)
Abstract:
Images of an object under different illumination are known to provide strong cues about the object surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency, and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameter space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the GBR parameters can be estimated very efficiently; finally, the algorithm is consistent, as it always returns the same solution given the same data. Our algorithm is also remarkably robust: it can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80% of the observations). The method is validated on real data and achieves state-of-the-art results.
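To make the detection step concrete, here is a minimal Python sketch that finds candidate LDR maxima as local brightness maxima; the window size, the brightness threshold, and the function name are illustrative assumptions, and the closed-form GBR estimation itself is not reproduced here.

```python
# Minimal sketch (assumed helper, not the paper's code): detect candidate
# LDR maxima as local brightness maxima in a grayscale image.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_ldr_maxima(image, window=15, min_intensity=0.5):
    """Return (row, col) coordinates of candidate diffuse-reflectance maxima.

    window and min_intensity are illustrative tuning assumptions."""
    local_max = maximum_filter(image, size=window) == image
    bright = image >= min_intensity * image.max()
    return np.argwhere(local_max & bright)

image = np.random.default_rng(1).random((64, 64))
print(detect_ldr_maxima(image)[:3])
```

Given the reported tolerance to up to 80% outliers among the detected maxima, a robust consensus step over the per-maximum semi-circles would plausibly follow; the paper's exact procedure is not shown here.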
Abstract:
In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
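As a rough illustration of the island-grammar idea described above, the following self-contained Python sketch treats Java-like class headers as "islands" and everything else as "water"; the hand-rolled combinators and the toy grammar are assumptions for illustration, far simpler than a real importer.

```python
import re

def token(pattern):
    """Combinator: match a regex at the current position."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(), m.end()) if m else None
    return parse

def seq(*parsers):
    """Combinator: match a sequence of parsers, threading the position."""
    def parse(text, pos):
        values = []
        for p in parsers:
            result = p(text, pos)
            if result is None:
                return None
            value, pos = result
            values.append(value)
        return values, pos
    return parse

# Island: a class header. Water: skip one character and retry.
island = seq(token(r'\bclass\s+'), token(r'\w+'))

def extract_classes(text):
    """Coarse-grained model: just the class names found in the source."""
    pos, found = 0, []
    while pos < len(text):
        result = island(text, pos)
        if result:
            (_, name), pos = result
            found.append(name)
        else:
            pos += 1  # water: tolerate unparsed input
    return found

print(extract_classes("junk { class Foo ... } class Bar"))  # ['Foo', 'Bar']
```

Refinement then means replacing the water rule or the island rule with progressively more detailed parsers, without ever needing a full grammar up front.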
Abstract:
The future Internet architecture aims to reformulate the way content and services are requested, making them location-independent. Information-Centric Networking is a new network paradigm that tries to achieve this goal by allowing content objects to be identified and requested by name instead of by address. In this paper, we extend the Information-Centric Networking architecture to support services, so that they too can be requested and invoked by name. We present NextServe, a service framework with a human-readable, self-explanatory naming scheme. NextServe is inspired by the object-oriented programming paradigm and is applicable to real-world scenarios.
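The sketch below is a hypothetical illustration of what an OOP-inspired, human-readable service name could look like; the `/provider/service.method(args)` syntax and the parser are assumptions for illustration, not NextServe's actual grammar.

```python
# Hypothetical OOP-style service name: /provider/service.method(arg1,arg2)
import re

NAME = re.compile(r'^(?P<path>(?:/\w+)+)\.(?P<method>\w+)\((?P<args>[^)]*)\)$')

def parse_service_name(name):
    """Split a human-readable service name into receiver, method, args."""
    m = NAME.match(name)
    if not m:
        raise ValueError(f"not a service name: {name!r}")
    args = [a for a in m.group('args').split(',') if a]
    return m.group('path'), m.group('method'), args

print(parse_service_name("/unibe/printer.print(report.pdf,duplex)"))
# ('/unibe/printer', 'print', ['report.pdf', 'duplex'])
```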
Abstract:
The Mobile Cloud Networking project develops, among others, several virtualized services and applications, in particular: (1) IP Multimedia Subsystem as a Service, which makes it possible to deploy a virtualized, on-demand instance of the IP Multimedia Subsystem platform; (2) Digital Signage Service as a Service, which is based on a re-designed Digital Signage Service architecture adopting cloud computing principles; and (3) Information-Centric Networking/Content Delivery Network as a Service, which is used for distributing, caching and migrating content from other services. Possible designs for these virtualized services and applications have been identified and are being implemented. In particular, the architectures of the mentioned services were specified following cloud computing principles such as infrastructure sharing, elasticity, on-demand provisioning and pay-as-you-go. The benefits of the Reactive Programming paradigm are presented in the context of interactive cloudified Digital Signage services in a Mobile Cloud Platform, as well as the benefit of interworking between different Mobile Cloud Networking services, such as the Digital Signage Service and the Content Delivery Network Service, for better performance of Video-on-Demand content delivery. Finally, the management of Service Level Agreements and the support of rating, charging and billing have also been considered and defined.
Abstract:
We review our recent work on protein-ligand interactions in vitamin transporters of the Sec-14-like protein family. Our studies focused on the cellular retinaldehyde-binding protein (CRALBP) and the alpha-tocopherol transfer protein (alpha-TTP). CRALBP is responsible for mobilisation and photo-protection of short-chain cis-retinoids in the dim-light visual cycle of rod photoreceptors. alpha-TTP is a key protein responsible for the selection and retention of RRR-alpha-tocopherol, the most active isoform of vitamin E in higher animals. Our simulation studies show how subtle chemical variations in the substrate can lead to significant distortion in the structure of the complex, and how these changes can either lead to new protein function or be used to model engineered protein variants with tailored properties. Finally, we show how the integration of computational and experimental results can contribute, in synergy, to the understanding of fundamental processes at the biomolecular scale.
Abstract:
Femoroacetabular impingement (FAI) before or after periacetabular osteotomy (PAO) is surprisingly frequent, and surgeons need to be aware of the risk preoperatively and be able to avoid it intraoperatively. In this paper we present a novel computer-assisted planning and navigation system for PAO with impingement analysis and range-of-motion (ROM) optimization. Our system starts with a fully automatic detection of the acetabular rim, which allows the acetabular morphology to be quantified with parameters such as acetabular version, inclination and femoral head coverage ratio for computer-assisted diagnosis and planning. The planned situation was optimized by impingement simulation, balancing acetabular coverage against ROM. Intra-operatively, navigation was conducted until the optimized planning situation was achieved. Our experimental results demonstrated that: 1) the fully automatic acetabular rim detection was validated with an accuracy of 1.1 ± 0.7 mm; 2) the optimized PAO planning improved ROM significantly compared to planning without ROM optimization; 3) by comparing the pre-operatively planned situation and the intra-operatively achieved situation, sub-degree accuracy was achieved for all directions.
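As an illustration of how detected rim points can feed the morphological parameters, the sketch below fits a least-squares plane to the rim and derives two orientation angles from its normal; the coordinate conventions and angle definitions are simplifying assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: quantify acetabular orientation from detected rim
# points by fitting a least-squares plane. Assumed axes: x = lateral,
# y = anterior, z = superior (a simplification).
import numpy as np

def rim_plane_orientation(rim_points):
    """rim_points: (n, 3) array. Returns (inclination, version) in degrees,
    derived from the fitted plane normal."""
    centered = rim_points - rim_points.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[0] < 0:                  # orient the unit normal laterally
        n = -n
    inclination = np.degrees(np.arctan2(n[2], n[0]))
    version = np.degrees(np.arcsin(np.clip(n[1], -1.0, 1.0)))
    return inclination, version
```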
Abstract:
Software dependencies play a vital role in program comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be used efficiently by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely on the basis of domain information.
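A minimal sketch of the underlying intuition, assuming co-occurrence of domain entities in the same business task as the coupling signal (the records and threshold below are fabricated for illustration):

```python
# Hedged sketch: approximate dependencies from domain-level coupling.
from collections import Counter
from itertools import combinations

def predict_dependencies(usage_records, threshold=2):
    """usage_records: iterable of sets of domain entity names.
    Returns entity pairs whose co-occurrence count >= threshold."""
    cooc = Counter()
    for record in usage_records:
        for a, b in combinations(sorted(record), 2):
            cooc[(a, b)] += 1
    return [pair for pair, count in cooc.items() if count >= threshold]

records = [{"Order", "Customer", "Invoice"},
           {"Order", "Customer"},
           {"Invoice", "Payment"}]
print(predict_dependencies(records))
# [('Customer', 'Order')] -- the only pair co-occurring in two records
```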
Abstract:
Software corpora facilitate the reproducibility of analyses; however, static analysis of an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for a single language, increasing the effort required for cross-language analysis. To address these aspects we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be directly loaded into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
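In the spirit of the runner described above, here is a minimal Python sketch that applies one analysis to every project snapshot in parallel and exports the results as CSV; the file layout and the toy analysis are assumptions, not Pangea's actual API.

```python
# Minimal corpus runner: parallel execution of one analysis, CSV export.
import csv
from concurrent.futures import ProcessPoolExecutor

def count_entities(snapshot_path):
    """Toy analysis: pretend each line of the snapshot is one entity."""
    with open(snapshot_path) as f:
        return snapshot_path, sum(1 for _ in f)

def run_corpus(snapshot_paths, analysis, out_csv="results.csv"):
    with ProcessPoolExecutor() as pool:          # one worker per core
        results = list(pool.map(analysis, snapshot_paths))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["project", "entities"])
        writer.writerows(results)

# run_corpus(["projA.snapshot", "projB.snapshot"], count_entities)
```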
Abstract:
The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as the input to a parsing function, we use a parsing context that provides access to the stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple, without mind-blowing context-sensitive rules. We show that our approach can be used for various purposes, such as indent-sensitive parsing, high-precision island parsing, or parsing XML with arbitrary element names. We demonstrate our solution with PetitParser, a top-down parser combinator framework based on parsing expression grammars and written in Smalltalk.
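A minimal Python sketch of the idea: the parsing function receives a context carrying the input position plus an indentation stack, which is enough for a toy indent-sensitive block rule (the two-function grammar is an illustrative assumption, far simpler than PetitParser).

```python
# A parse rule takes a Context instead of the bare stream, so it can
# consult context-sensitive state (here, an indentation stack).
class Context:
    def __init__(self, lines):
        self.lines = lines
        self.pos = 0
        self.indents = [0]          # context-sensitive state

def indent_of(line):
    return len(line) - len(line.lstrip())

def parse_block(ctx):
    """Consume consecutive lines at least as indented as the block start."""
    ctx.indents.append(indent_of(ctx.lines[ctx.pos]))
    body = []
    while (ctx.pos < len(ctx.lines)
           and indent_of(ctx.lines[ctx.pos]) >= ctx.indents[-1]):
        body.append(ctx.lines[ctx.pos].strip())
        ctx.pos += 1
    ctx.indents.pop()
    return body

ctx = Context(["  a", "  b", "c"])
print(parse_block(ctx))  # ['a', 'b'] -- stops when indentation drops
```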
Abstract:
Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously, to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easy to identify. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
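A toy sketch of the core idea: rank candidate types for a variable by how often, across the ecosystem, the messages sent to it resolve to each type. The corpus table and message names below are fabricated for illustration.

```python
# ecosystem[message] = per-type frequency of that message, corpus-wide.
from collections import Counter

ecosystem = {
    "add:":        Counter({"OrderedCollection": 950, "Set": 400}),
    "removeFirst": Counter({"OrderedCollection": 120}),
}

def infer_type(messages_sent):
    """Combine per-message evidence; the highest score is the best guess."""
    scores = Counter()
    for msg in messages_sent:
        scores.update(ecosystem.get(msg, Counter()))
    return scores.most_common()

print(infer_type(["add:", "removeFirst"]))
# [('OrderedCollection', 1070), ('Set', 400)]
```

Popular standard-library types dominate such a table, which matches the reported behaviour: good results for popular types, false positives for rare, domain-specific ones.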
Abstract:
We address the extraction of both pelvic and femoral surface models of the hip joint from CT data for computer-assisted pre-operative planning of hip arthroscopy. We present a method for fully automatic image segmentation of the hip joint. Our method works by combining fast random forest (RF) regression-based landmark detection and atlas-based segmentation with articulated statistical shape model (aSSM)-based hip joint reconstruction. The two fundamental contributions of our method are: (1) an improved fast Gaussian transform (IFGT) is used within the RF regression framework for fast and accurate landmark detection, which then allows for a fully automatic initialization of the atlas-based segmentation; and (2) aSSM-based fitting is used to preserve the hip joint structure and to avoid penetration between the pelvic and femoral models. Validation on 30 hip CT images shows that our method achieves high performance in segmenting the pelvis, left proximal femur, and right proximal femur surfaces, with an average accuracy of 0.59 mm, 0.62 mm, and 0.58 mm, respectively.
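As a heavily reduced sketch of RF-regression landmark detection, the snippet below trains a forest to map local features to landmark offsets and averages the test-time "votes"; the features and data are synthetic placeholders, and the IFGT speed-up, atlas, and aSSM stages are not shown.

```python
# Reduced sketch: random-forest regression voting for landmark detection.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))   # per-voxel feature vectors (synthetic)
offsets = rng.normal(size=(500, 3))    # displacement from voxel to landmark

forest = RandomForestRegressor(n_estimators=50).fit(X_train, offsets)

# At test time each sampled voxel predicts where the landmark lies;
# averaging the votes gives the detected position.
positions = rng.uniform(0, 100, size=(200, 3))
votes = positions + forest.predict(rng.normal(size=(200, 10)))
print(votes.mean(axis=0))
```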
Abstract:
This work deals with the parallel optimization of expensive objective functions which are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear to be the most prominent approaches for batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis’ formula was recently established. Since the computational burden of this selection rule is still an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which aims at facilitating its maximization using gradient-based ascent algorithms. Substantial computational savings are shown in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs relying on UCB with gradient-based EI local optimization finally appears to be a sound option for batch design in distributed Gaussian process optimization.
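For reference, here is a Monte Carlo estimator of the multipoint (q-point) EI under an assumed Gaussian posterior at the batch locations; the paper's contribution is the closed form and its gradient, which this sampling baseline does not reproduce.

```python
# Monte Carlo multipoint EI: E[max(f_min - min_j Y_j, 0)], Y ~ N(mu, cov),
# where mu and cov are the GP posterior mean and covariance at the batch.
import numpy as np

def q_ei_mc(mu, cov, f_min, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(mu, cov, size=n_samples)
    improvement = np.maximum(f_min - y.min(axis=1), 0.0)
    return improvement.mean()

mu = np.array([0.2, 0.0, -0.1])
cov = 0.05 * np.eye(3) + 0.01      # mildly correlated batch of q = 3
print(q_ei_mc(mu, cov, f_min=0.0))
```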
Abstract:
Libraries of learning objects may serve as a basis for deriving course offerings that are customized to the needs of different learning communities or even individuals. Several ways of organizing this course composition process are discussed. Course composition needs a clear understanding of the dependencies between the learning objects. Therefore we discuss the metadata for object relationships proposed in different standardization projects, especially those suggested in the Dublin Core Metadata Initiative. Based on these metadata we construct adjacency matrices and graphs. We show how Gozinto-type computations can be used to determine direct and indirect prerequisites for certain learning objects. The metadata may also be used to define integer programming models which can be applied to support the instructor in formulating specifications for selecting objects, or which allow a computer agent to select learning objects automatically. Such decision models could also be helpful for a learner navigating through a library of learning objects. We also sketch a graph-based procedure for manual or automatic sequencing of the learning objects.
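A small sketch of a Gozinto-type computation: starting from a direct-prerequisite adjacency matrix, the transitive closure yields all direct and indirect prerequisites (the four-object example is illustrative).

```python
import numpy as np

# A[i][j] = 1 if learning object j is a direct prerequisite of object i.
A = np.array([[0, 1, 0, 0],     # object 0 needs object 1
              [0, 0, 1, 1],     # object 1 needs objects 2 and 3
              [0, 0, 0, 0],
              [0, 0, 0, 0]])

# Transitive closure: direct plus indirect prerequisites.
reach = A.copy()
for _ in range(len(A) - 1):
    reach = ((reach + reach @ A) > 0).astype(int)

print(np.nonzero(reach[0])[0])  # all prerequisites of object 0: [1 2 3]
# For "how many times required" rather than mere reachability, the
# classical Gozinto total-requirements matrix inv(I - A) applies.
```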
Abstract:
Analyzing how software engineers use the Integrated Development Environment (IDE) is essential to better understanding how engineers carry out their daily tasks. Spotter is a code search engine for the Pharo programming language. Since its inception, Spotter has been rapidly and broadly adopted within the Pharo community. However, little is known about how practitioners employ Spotter to search and navigate within the Pharo code base. This paper evaluates how software engineers use Spotter in practice. To achieve this, we remotely gather user actions, called events, which are then rendered visually using a suitable navigation tool chain. Sequences of events are represented using a visual alphabet. We found a number of usage patterns and identified underused Spotter features. Such findings are essential for improving Spotter.