22 results for Visual Basic (Programming Language)

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 100.00%

Abstract:

Analyzing how software engineers use the Integrated Development Environment (IDE) is essential to better understanding how engineers carry out their daily tasks. Spotter is a code search engine for the Pharo programming language. Since its inception, Spotter has been rapidly and broadly adopted within the Pharo community. However, little is known about how practitioners employ Spotter to search and navigate within the Pharo code base. This paper evaluates how software engineers use Spotter in practice. To achieve this, we remotely gather user actions called events. These events are then visually rendered using an adequate navigation tool chain. Sequences of events are represented using a visual alphabet. We found a number of usage patterns and identified underused Spotter features. Such findings are essential for improving Spotter.
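The core of the analysis described above is mapping sequences of user events onto a visual alphabet and then looking for recurring patterns. A minimal sketch of that idea in Python (the event names, symbols, and pattern-mining approach are invented for illustration, not Spotter's actual instrumentation):

```python
from collections import Counter

# Hypothetical mapping from Spotter-style user events to single-letter
# symbols, loosely mimicking the paper's "visual alphabet" for sequences.
ALPHABET = {"open": "O", "type": "T", "dive_in": "D", "select": "S", "close": "C"}

def encode(events):
    """Encode a sequence of event names as a compact symbol string."""
    return "".join(ALPHABET[e] for e in events)

def frequent_patterns(sessions, length=3):
    """Count fixed-length sub-sequences (n-grams) across encoded sessions."""
    counts = Counter()
    for session in sessions:
        symbols = encode(session)
        for i in range(len(symbols) - length + 1):
            counts[symbols[i:i + length]] += 1
    return counts

# Three invented search sessions; recurring n-grams hint at usage patterns.
sessions = [
    ["open", "type", "select", "close"],
    ["open", "type", "type", "select", "close"],
    ["open", "type", "dive_in", "select", "close"],
]
patterns = frequent_patterns(sessions)
```

With this toy data, the pattern "type, select, close" (`TSC`) occurs in all but one session, which is the kind of signal the paper visualizes to identify common and underused features.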

Relevance: 100.00%

Abstract:

Lint-like program checkers are popular tools that ensure code quality by verifying compliance with best practices for a particular programming language. The proliferation of internal domain-specific languages and models, however, poses new challenges for such tools. Traditional program checkers produce many false positives and fail to accurately check constraints, best practices, common errors, possible optimizations and portability issues particular to domain-specific languages. We advocate the use of dedicated rules to check domain-specific practices. We demonstrate the implementation of domain-specific rules, the automatic fixing of violations, and their application to two case studies: (1) Seaside defines several internal DSLs through a creative use of the syntax of the host language; and (2) Magritte adds meta-descriptions to existing code by means of special methods. Our empirical validation demonstrates that domain-specific program checking significantly improves code quality when compared with general purpose program checking.
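The pairing of a domain-specific rule with an automatic fix can be sketched in a few lines. The rule below is entirely invented (it is not one of the paper's Seaside or Magritte rules): it assumes a hypothetical DSL convention where the deprecated spelling `html.divClass:` should be rewritten as `html.div:`.

```python
import re

# Hypothetical domain-specific rule: flag the deprecated `html.divClass:`
# spelling and rewrite it to the preferred `html.div:` form.
RULE = re.compile(r"html\.divClass:")

def check(source):
    """Return (line_number, line) pairs that violate the rule."""
    return [(n, line) for n, line in enumerate(source.splitlines(), 1)
            if RULE.search(line)]

def fix(source):
    """Automatically rewrite all violations to the preferred form."""
    return RULE.sub("html.div:", source)

code = "html.divClass: 'menu'\nhtml.span: 'ok'\n"
violations = check(code)   # one violation, on line 1
fixed = fix(code)          # violation rewritten in place
```

A general-purpose checker would see nothing wrong with either spelling; only a rule that knows the DSL's conventions can flag and repair it, which is the paper's central point.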

Relevance: 100.00%

Abstract:

Code profiling is an essential activity to increase software quality. It is commonly employed in a wide variety of tasks, such as supporting program comprehension, determining execution bottlenecks, and assessing code coverage by unit tests. Spy is an innovative framework to easily build profilers and visualize profiling information. The profiling information is obtained by inserting dedicated code before or after method execution. The gathered profiling information is structured in line with the application structure in terms of packages, classes, and methods. Spy has been instantiated on four occasions so far. We created profilers dedicated to test coverage, execution time, type feedback, and profiling evolution across versions. We also integrated Spy in the Pharo IDE. Spy has been implemented in the Pharo Smalltalk programming language and is available under the MIT license.
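Spy's core mechanism, inserting dedicated code before and after method execution, can be illustrated with a Python decorator. This is a rough transposition of the idea, not Spy's actual Smalltalk API; the aggregation structure and names are invented.

```python
import functools
import time

# Per-method profiling data: method name -> {"calls": int, "time": float}.
profile_data = {}

def profiled(fn):
    """Wrap a function with code before and after execution, Spy-style."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()          # code inserted *before* execution
        try:
            return fn(*args, **kwargs)
        finally:                             # code inserted *after* execution
            entry = profile_data.setdefault(
                fn.__qualname__, {"calls": 0, "time": 0.0})
            entry["calls"] += 1
            entry["time"] += time.perf_counter() - start
    return wrapper

@profiled
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)   # populates profile_data["fib"] with call count and elapsed time
```

Grouping `profile_data` keys by package and class rather than by a flat name would mirror the paper's point that profiling data should follow the application structure.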

Relevance: 100.00%

Abstract:

Background: The goal of our work was to develop a simple method to evaluate a compensation treatment after unplanned treatment interruptions with respect to its tumour and normal-tissue effects.

Methods: We developed a software tool in the Java programming language, based on existing recommendations for compensating treatment interruptions. To express and visualize the deviations from the originally planned tumour and normal-tissue effects, we defined the compensability index.

Results: The compensability index evaluates the suitability of a compensatory radiotherapy in a single number, based on the number of days used for compensation and on whether preserving the originally planned tumour effect or not exceeding the originally planned normal-tissue effect is preferred. An automated tool provides a method for quick evaluation of compensation treatments.

Conclusions: The compensability index calculation may serve as a decision support system based on existing and established recommendations.

Relevance: 100.00%

Abstract:

We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip2Norm is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway) for the graphical user interface (GUI) and is transportable to any platform.

Relevance: 100.00%

Abstract:

Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.

Relevance: 100.00%

Abstract:

The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as input to a parsing function, we use a parsing context that provides access to the stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple, without convoluted context-sensitive rules. We show that our approach can be used for various purposes, such as indent-sensitive parsing, high-precision island parsing, or parsing XML with arbitrary element names. We demonstrate our solution with PetitParser, a parsing-expression-grammar-based, top-down parser combinator framework written in Smalltalk.
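The idea of threading a parsing context, rather than just the raw stream, through a top-down parser can be shown with a toy indent-sensitive parser. This is a schematic Python illustration of the concept, not PetitParser's API; all names are invented.

```python
# A parse function receives a context object carrying the stream position;
# the indentation level is the extra, context-sensitive information that a
# purely context-free rule could not consult.

class Context:
    """Input stream (a list of lines) plus the current position."""
    def __init__(self, text):
        self.lines = [l for l in text.splitlines() if l.strip()]
        self.pos = 0

def indent_of(line):
    return len(line) - len(line.lstrip(" "))

def parse_block(ctx, level=0):
    """Collect lines at `level`; a deeper indent opens a nested sub-block."""
    items = []
    while ctx.pos < len(ctx.lines):
        ind = indent_of(ctx.lines[ctx.pos])
        if ind < level:
            break                                 # dedent ends this block
        if ind == level:
            items.append(ctx.lines[ctx.pos].strip())
            ctx.pos += 1
        else:
            items.append(parse_block(ctx, ind))   # indent: nested block
    return items

tree = parse_block(Context("a\n  b\n  c\nd"))
# tree nests "b" and "c" under the same sub-block between "a" and "d"
```

The grammar-like rule (`parse_block`) stays simple; only the context it reads (the indentation) is sensitive to where in the input the rule is applied.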

Relevance: 100.00%

Abstract:

Background: In a high proportion of patients with favorable outcome after aneurysmal subarachnoid hemorrhage (aSAH), neuropsychological deficits, depression, anxiety, and fatigue are responsible for the inability to return to their regular premorbid life and pursue their professional careers. These problems often remain unrecognized, as no recommendations concerning a standardized comprehensive assessment have yet found entry into clinical routines.

Methods: To establish a nationwide standard concerning a comprehensive assessment after aSAH, representatives of all neuropsychological and neurosurgical departments of the eight Swiss centers treating acute aSAH have agreed on a common protocol. In addition, a battery of questionnaires and neuropsychological tests was selected that is optimally suited to the deficits found most prevalent in aSAH patients, available in different languages, and standardized.

Results: We propose a baseline inpatient neuropsychological screening using the Montreal Cognitive Assessment (MoCA) between days 14 and 28 after aSAH. In an outpatient setting at 3 and 12 months after bleeding, we recommend a neuropsychological examination testing all relevant domains, including attention, speed of information processing, executive functions, verbal and visual learning/memory, language, visuo-perceptual abilities, and premorbid intelligence. In addition, a detailed assessment capturing anxiety, depression, fatigue, symptoms of frontal lobe affection, and quality of life should be performed.

Conclusions: This standardized neuropsychological assessment will lead to a more comprehensive assessment of the patient, facilitate the detection and subsequent treatment of previously unrecognized but relevant impairments, and help to determine the incidence, characteristics, modifiable risk factors, and clinical course of these impairments after aSAH.

Relevance: 30.00%

Abstract:

When healthy observers make a saccade that is erroneously directed toward a distracter stimulus, they often produce a corrective saccade within 100 ms after the end of the primary saccade. Such short inter-saccadic intervals indicate that programming of the secondary saccade has been initiated prior to the execution of the primary saccade and hence that the two saccades have been programmed concurrently. Here we show that concurrent saccade programming is bilaterally impaired in left spatial neglect, a strongly lateralized disorder of visual attention resulting from extensive right cerebral damage. Neglect patients were asked to make saccades to targets presented left or right of fixation while disregarding a distracter presented in the opposite hemifield. We examined those experimental trials on which participants first made a saccade to the distracter, followed by a secondary (corrective) saccade to the target. Compared to healthy and right-hemisphere damaged control participants, the proportion of secondary saccades directing gaze to the target instead of bringing it even closer to the distracter was bilaterally reduced in neglect patients. In addition, the characteristic reduction of secondary saccade latency observed in both control groups was absent in neglect patients, whether the secondary saccade was directed to the left or right hemifield. This pattern is consistent with a severe, bilateral impairment of concurrent saccade programming in left spatial neglect.

Relevance: 30.00%

Abstract:

Context-dependent behavior is becoming increasingly important for a wide range of application domains, from pervasive computing to common business applications. Unfortunately, mainstream programming languages do not provide mechanisms that enable software entities to adapt their behavior dynamically to the current execution context. This leads developers to adopt convoluted designs to achieve the necessary runtime flexibility. We propose a new programming technique called Context-oriented Programming (COP) which addresses this problem. COP treats context explicitly, and provides mechanisms to dynamically adapt behavior in reaction to changes in context, even after system deployment at runtime. In this paper we lay the foundations of COP, show how dynamic layer activation enables multi-dimensional dispatch, illustrate the application of COP by examples in several language extensions, and demonstrate that COP is largely independent of other commitments to programming style.
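COP's central mechanism, dynamically activating layers of behavior variations at runtime, can be mimicked in plain Python. This is an illustrative sketch of the concept, not any of the paper's actual language extensions; all names are invented.

```python
# A "layer" groups behavior variations; activating a layer at runtime makes
# its variations take precedence over the base behavior.

active_layers = []          # runtime stack of active layer names

def layered(base):
    """Make a function layer-aware: dispatch to the innermost active variation."""
    variations = {}
    def dispatch(*args, **kwargs):
        for layer in reversed(active_layers):   # most recently activated wins
            if layer in variations:
                return variations[layer](*args, **kwargs)
        return base(*args, **kwargs)
    dispatch.variations = variations
    return dispatch

@layered
def greeting():
    return "Hello"

# Register a context-specific variation for a hypothetical "formal" context.
greeting.variations["formal"] = lambda: "Good day"

base_result = greeting()            # no layer active: base behavior
active_layers.append("formal")      # dynamic layer activation
formal_result = greeting()          # variation now takes precedence
active_layers.remove("formal")      # deactivation restores base behavior
```

Because activation is a runtime operation on `active_layers`, behavior can change after deployment without touching the call sites, which is the flexibility the paper argues mainstream languages lack.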

Relevance: 30.00%

Abstract:

Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we have introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations, and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and therefore can be changed at the language level. We report on a practical case study, benchmarks, and further ongoing work.
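The optimistic commit protocol mentioned above can be sketched in a few lines: a transaction records the version of each cell it reads, buffers its writes, and at commit time re-checks those versions, aborting if anything changed underneath it. This is a schematic Python illustration of the protocol, not the paper's Smalltalk implementation; all names are invented.

```python
class Cell:
    """A transactional memory cell with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.reads = {}    # cell -> version observed at first read/write
        self.writes = {}   # cell -> buffered new value

    def read(self, cell):
        self.reads.setdefault(cell, cell.version)
        return self.writes.get(cell, cell.value)   # see own buffered writes

    def write(self, cell, value):
        self.reads.setdefault(cell, cell.version)
        self.writes[cell] = value                  # buffered, not yet visible

    def commit(self):
        # Optimistic check: abort if any observed cell changed since we read it.
        if any(cell.version != seen for cell, seen in self.reads.items()):
            return False
        for cell, value in self.writes.items():    # publish buffered writes
            cell.value = value
            cell.version += 1
        return True

account = Cell(100)
tx = Transaction()
tx.write(account, tx.read(account) + 50)
ok = tx.commit()   # no concurrent change, so the commit succeeds
```

In the paper's setting, the read/write indirection would be injected by parse tree transformation into annotated methods rather than written by hand as above.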

Relevance: 30.00%

Abstract:

The present study shows that different neural activity during mental imagery and abstract mentation can be assigned to well-defined steps of the brain's information-processing. During randomized visual presentation of single, imagery-type and abstract-type words, 27-channel event-related potential (ERP) field maps were obtained from 25 subjects (sequence-divided into a first and a second group for statistics). The brain field map series showed a sequence of typical map configurations that were quasi-stable for brief time periods (microstates). The microstates were concatenated by rapid map changes. As different map configurations must result from different spatial patterns of neural activity, each microstate represents different active neural networks. Accordingly, microstates are assumed to correspond to discrete steps of information-processing. Comparing microstate topographies (using centroids) between imagery- and abstract-type words, significantly different microstates were found in both subject groups at 286–354 ms, where imagery-type words were more right-lateralized than abstract-type words, and at 550–606 ms and 606–666 ms, where anterior-posterior differences occurred. We conclude that language-processing consists of several well-defined steps and that the brain states incorporating those steps are altered by the stimuli's capacities to generate mental imagery or abstract mentation in a state-dependent manner.

Relevance: 30.00%

Abstract:

By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment where the participants were presented with a color change detection task. Task complexity was manipulated through varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators for the amount of information retained in VSTM and speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiate between high- and low-intelligent individuals. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.