502 results for runtime assertions


Relevance: 10.00%

Publisher:

Abstract:

Compiler optimizations help code run faster. When compilation is done before the program is run, compilation time is less of an issue, but how do on-the-fly compilation and optimization affect the overall runtime? If the compiler must compete with the running application for resources, the application will take longer to complete. This paper investigates the impact of specific compiler optimizations on the overall runtime of an application. A foldover Plackett-Burman design is used to choose compiler optimizations that appear to contribute to shorter overall runtimes. The selected optimizations are compared with the default optimization levels in the Jikes RVM, and they result in a shorter overall runtime than the default O0, O1, and O2 levels. This shows that careful selection of compiler optimizations can have a significant, positive impact on overall runtime.
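
To make the screening step concrete, here is a minimal sketch (Python, not the paper's actual tooling): it builds a 12-run Plackett-Burman design, appends its foldover, and ranks two-level flag settings by their estimated main effect on measured runtime. The flag names and the measure_runtime() stub are hypothetical placeholders; in the study the measurements come from benchmarks running on the Jikes RVM.

    # Illustrative sketch, not the paper's code: foldover Plackett-Burman
    # screening of two-level compiler-flag settings by estimated main effect
    # on runtime. Flag names and measure_runtime() are hypothetical stubs.
    import random

    FLAGS = ["inline", "unroll", "licm", "gcse", "sched", "regalloc_hi",
             "devirt", "escape", "osr", "peephole", "tailcall"]   # 11 factors

    GEN = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]   # standard PB12 generator row

    def pb12():
        rows = [GEN[i:] + GEN[:i] for i in range(11)]    # cyclic shifts
        rows.append([-1] * 11)                           # final all-minus run
        return rows

    def foldover(design):
        # Appending the sign-reversed runs decouples main effects from
        # two-factor interactions.
        return design + [[-x for x in row] for row in design]

    def measure_runtime(settings):
        # Stub: the study would run a benchmark under the JIT with these
        # optimizations enabled and report the overall runtime.
        return random.uniform(9.0, 11.0) - 0.3 * settings["inline"]

    design = foldover(pb12())
    times = [measure_runtime(dict(zip(FLAGS, row))) for row in design]

    # Main effect of each flag: mean runtime with the flag on minus off.
    for j, flag in enumerate(FLAGS):
        on = [t for t, row in zip(times, design) if row[j] == +1]
        off = [t for t, row in zip(times, design) if row[j] == -1]
        print(f"{flag:12s} estimated runtime effect: {sum(on)/len(on) - sum(off)/len(off):+.3f}")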

Relevance: 10.00%

Publisher:

Abstract:

Self-stabilization is a property of a distributed system whereby, regardless of the legitimacy of its current state, the system eventually reaches a legitimate state and remains legitimate thereafter. The elegance of self-stabilization stems from the fact that it gives distributed systems a strong fault tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most existing techniques for the verification and design of self-stabilization are either brute-force or adopt manual approaches that are not amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics exploit both the computational power of a single workstation and the parallelism available on computer clusters. We obtain existing and new stabilizing solutions for classical protocols such as maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., studying the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions, verifiable in the local state space of every process, for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, so our assertions about deadlocks and livelocks apply to rings of arbitrary size without worrying about state explosion.
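
As a reminder of the kind of protocol involved, the following is a textbook illustration, in Python, of self-stabilizing mutual exclusion on a ring (Dijkstra's K-state algorithm): from an arbitrary initial state the ring converges to configurations with exactly one enabled process. It is not the dissertation's synthesis heuristics or local-reasoning machinery, just a small runnable example of convergence and closure.

    # Textbook example (Dijkstra's K-state token ring), not the dissertation's
    # heuristics: from an arbitrary start, the ring converges to exactly one
    # enabled process ("token"), and stays that way.
    import random

    N = 5          # ring size
    K = N + 1      # K > N guarantees convergence

    def enabled(state, i):
        if i == 0:                                   # distinguished "bottom" process
            return state[0] == state[N - 1]
        return state[i] != state[i - 1]

    def step(state, i):
        if i == 0:
            state[0] = (state[0] + 1) % K
        else:
            state[i] = state[i - 1]

    state = [random.randrange(K) for _ in range(N)]  # arbitrary, possibly corrupted
    for _ in range(100):
        movers = [i for i in range(N) if enabled(state, i)]
        print(f"state={state} tokens={len(movers)}")
        if len(movers) == 1:
            print("stabilized: exactly one token circulates from now on")
            break
        step(state, random.choice(movers))           # a central daemon picks one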

Relevance: 10.00%

Publisher:

Abstract:

File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, different organizations choose and deploy particular security schemes in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies that focus on one or more of the security principles: confidentiality, integrity and availability. A security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of data directly. The integrity of the data is protected indirectly by only allowing trusted users to operate on the objects. The access control decisions of these models depend either on the identity of the user, or on the attributes of the processes the user can execute and the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can base access control decisions on any system-generated attribute of an access request, e.g., time of day. Access decisions are not tied to a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made based on situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object and the system, and the file system firewall specifies the appropriate action for each situation. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, the remaining resources are not affected, which enables a relatively smooth adoption. This, together with the fact that the model is familiar to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found the firewall easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
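
The following sketch illustrates what rule evaluation over programmable situations could look like; the rule syntax, attribute names and example policy are invented for illustration and are not the paper's actual prototype interface.

    # Hypothetical rule syntax and policy, sketching "situations" as predicates
    # over subject, object and system attributes; not the paper's prototype.
    from dataclasses import dataclass
    from datetime import time
    from typing import Callable, List

    @dataclass
    class Request:
        user: str      # subject attribute
        path: str      # object attribute
        op: str        # "read" | "write" | "exec"
        now: time      # system-generated attribute (time of day)

    @dataclass
    class Rule:
        situation: Callable[[Request], bool]   # predicate over the request
        action: str                            # "allow" | "deny"

    def decide(rules: List[Rule], req: Request, default: str = "deny") -> str:
        for rule in rules:                     # first matching situation wins,
            if rule.situation(req):            # as in a network firewall chain
                return rule.action
        return default

    POLICY = [
        # Deny writes to /etc outside business hours, whoever the subject is.
        Rule(lambda r: r.path.startswith("/etc") and r.op == "write"
                       and not time(8) <= r.now <= time(18), "deny"),
        # The backup account may read anything.
        Rule(lambda r: r.user == "backup" and r.op == "read", "allow"),
    ]

    print(decide(POLICY, Request("alice", "/etc/passwd", "write", time(23, 30))))  # deny
    print(decide(POLICY, Request("backup", "/home/alice/data", "read", time(2))))  # allow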

Relevance: 10.00%

Publisher:

Abstract:

In the Andean highlands, indigenous environmental knowledge is currently undergoing major changes as a result of various external and internal factors. As in other parts of the world, an overall process of erosion of local knowledge can be observed. In response to this trend, some initiatives that adopt a biocultural approach aim at actively strengthening local identities and revalorizing indigenous environmental knowledge and practices, assuming that such practices can contribute to more sustainable management of biodiversity. However, these initiatives usually lack a sound research basis, as few studies have focused on the dynamics of indigenous environmental knowledge in the Andes and on its links with biodiversity management. Against this background, the general objective of this research project was to contribute to the understanding of the dynamics of indigenous environmental knowledge in the Andean highlands of Peru and Bolivia by investigating how local medicinal knowledge is socially differentiated within rural communities, how it is transformed, and which external and internal factors influence these transformation processes. The project adopted an actor-oriented perspective and emphasized the concept of knowledge dialogue by analyzing the integration of traditional and formal medicinal systems within family therapeutic strategies. It also aimed at grasping some of the links between the dynamics of medicinal knowledge and the types of land use systems and biodiversity management. Research was conducted in two case study areas of the Andes, both Quechua-speaking and situated in comparable agro-ecological production belts: Pitumarca District, Department of Cusco (Southern Peruvian Highlands) and the Tunari National Park, Department of Cochabamba (Bolivian inner-Andean valleys). In each case study area, the land use systems and strategies of 18 families from two rural communities, their environmental knowledge related to medicine and to the local therapeutic flora, and an appreciation of the dynamics of this knowledge were assessed. Data were collected through a combination of disciplinary and participatory action-research methods, and were mostly analyzed using qualitative methods, though some quantitative ethnobotanical methods were also used. In both case studies, traditional medicine still constitutes the preferred option for the families interviewed, independently of their age, education level, economic status, religion, or migration status. Surprisingly and contrary to general assertions among local NGOs and researchers, results show that there is a revival of Andean medicine within the younger generation, who have greater knowledge of medicinal plants than the previous generation, value this knowledge as an important element of their way of life and relationship with “Mother Earth” (Pachamama), and, at least in the Bolivian case, prefer to consult the traditional healer rather than go to the health post. Migration to the urban centres and the Amazon lowlands, commonly thought to be an important factor in local medicinal knowledge loss, only affects people’s knowledge in the case of families who migrate over half of the year or permanently. Migration does not influence the knowledge of medicinal plants or the therapeutic strategies of families who migrate temporarily for shorter periods of time.
Finally, economic status influences neither the status of people’s medicinal knowledge, nor families’ therapeutic strategies, even though the financial factor is often mentioned by practitioners and local people as the main reason for not using the formal health system. The influence of the formal health system on traditional medicinal knowledge varies in each case study area. In the Bolivian case, where it was only introduced in the 1990s and access to it is still very limited, the main impact was to give local communities access to contraceptive methods and to vaccination. In the Peruvian case, the formal system had a much greater impact on families’ health practices, due to local and national policies that, for instance, practically prohibit some traditional practices such as home birth. But in both cases, biomedicine is not considered capable of responding to cultural illnesses such as “fear” (susto), “bad air” (malviento), or “anger” (colerina). As a consequence, Andean farmers integrate the traditional medicinal system and the formal one within their multiple therapeutic strategies, reflecting an inter-ontological dialogue between different conceptions of health and illness. These findings reflect a more general trend in the Andes, where indigenous communities are currently actively revalorizing their knowledge and taking up traditional practices, thus strengthening their indigenous collective identities in a process of cultural resistance.

Relevance: 10.00%

Publisher:

Abstract:

Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
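
A minimal sketch of the two ingredients argued for above, using Python's contextvars as a stand-in: a model of the application that the running code consults (so editing the model changes behavior), and adaptations whose visibility is scoped to the current execution context. The model contents and names are hypothetical.

    # Hypothetical names; a stand-in using Python's contextvars for a run-time
    # model that the code consults, with changes scoped to an execution context.
    import contextvars
    from contextlib import contextmanager

    _model = contextvars.ContextVar("model", default={"renderer": "plain"})

    @contextmanager
    def adapted(**changes):
        """Scope a model change to the current execution context only."""
        token = _model.set({**_model.get(), **changes})
        try:
            yield
        finally:
            _model.reset(token)

    def render(text):
        # Causally connected in a minimal sense: behavior is read off the model.
        return f"<p>{text}</p>" if _model.get()["renderer"] == "html" else text

    print(render("hello"))             # plain
    with adapted(renderer="html"):     # adaptation visible only in this scope
        print(render("hello"))         # <p>hello</p>
    print(render("hello"))             # plain again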

Relevance: 10.00%

Publisher:

Abstract:

A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow by tracing method execution events, or they analyze object graphs of heap snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to the exchange of objects as object flow, and we claim that analyzing object flow is necessary if we are to understand the runtime of an object-oriented application. We propose and detail Object Flow Analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach on three case studies.
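
As a toy illustration of what recording object flow can mean, the following Python sketch intercepts method calls and notes which class handed which kind of object to which other class. The example classes are invented, and a real Object Flow Analysis implementation would use far more complete instrumentation.

    # Invented example classes; a toy stand-in for object-flow instrumentation
    # that records which class passes which kind of object to which other class.
    import inspect
    from collections import defaultdict
    from functools import wraps

    object_flow = defaultdict(int)   # (sender, receiver, object class) -> count

    def track_flow(method):
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            caller = inspect.currentframe().f_back.f_locals.get("self")
            sender = type(caller).__name__ if caller is not None else "<module>"
            for arg in args:
                object_flow[(sender, type(self).__name__, type(arg).__name__)] += 1
            return method(self, *args, **kwargs)
        return wrapper

    class Order: pass

    class Cart:
        @track_flow
        def add(self, order):
            self.last = order

    class Checkout:
        def run(self, cart):
            cart.add(Order())          # Checkout passes an Order into Cart

    Checkout().run(Cart())
    for (sender, receiver, obj), n in object_flow.items():
        print(f"{sender} -> {receiver}: {obj} x{n}")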

Relevance: 10.00%

Publisher:

Abstract:

Context-dependent behavior is becoming increasingly important for a wide range of application domains, from pervasive computing to common business applications. Unfortunately, mainstream programming languages do not provide mechanisms that enable software entities to adapt their behavior dynamically to the current execution context. This leads developers to adopt convoluted designs to achieve the necessary runtime flexibility. We propose a new programming technique called Context-oriented Programming (COP) which addresses this problem. COP treats context explicitly, and provides mechanisms to dynamically adapt behavior in reaction to changes in context, even after system deployment at runtime. In this paper we lay the foundations of COP, show how dynamic layer activation enables multi-dimensional dispatch, illustrate the application of COP by examples in several language extensions, and demonstrate that COP is largely independent of other commitments to programming style.
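
A minimal Python approximation of the core idea, dynamically scoped layer activation, is sketched below. The Layer, with_layer, and layered names are invented for this sketch; the paper's actual language extensions differ in syntax and scope.

    # Invented names (Layer, with_layer, layered); a minimal Python stand-in for
    # dynamically scoped layer activation, not one of the paper's language extensions.
    from contextlib import contextmanager

    _active_layers = []

    class Layer:
        def __init__(self, name):
            self.name = name
            self.partial_methods = {}            # (class, method name) -> function

        def refine(self, cls, name):
            def register(fn):
                self.partial_methods[(cls, name)] = fn
                return fn
            return register

    @contextmanager
    def with_layer(layer):
        _active_layers.append(layer)             # activation has dynamic extent
        try:
            yield
        finally:
            _active_layers.pop()

    def layered(method):
        def dispatch(self, *args, **kwargs):
            for layer in reversed(_active_layers):               # innermost wins
                fn = layer.partial_methods.get((type(self), method.__name__))
                if fn is not None:
                    return fn(self, *args, **kwargs)
            return method(self, *args, **kwargs)                 # base behavior
        return dispatch

    class Greeter:
        @layered
        def greet(self):
            return "Hello"

    formal = Layer("formal")

    @formal.refine(Greeter, "greet")
    def formal_greet(self):
        return "Good evening"

    g = Greeter()
    print(g.greet())               # Hello
    with with_layer(formal):
        print(g.greet())           # Good evening, without touching Greeter
    print(g.greet())               # Hello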

Relevance: 10.00%

Publisher:

Abstract:

Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics can improve the understanding of the behavior and structure of a system. We rely on dynamic data gathering based on aspects to analyze running Java systems. By solving concrete use cases we illustrate how dynamic metrics directly available in the IDE are useful. We also comprehensively report on the efficiency of our approach to gathering dynamic metrics.
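
As a simplified stand-in for the aspect-based instrumentation described above (which targets running Java systems), the following Python decorator records the kind of per-method dynamic metrics, call counts and observed argument types, that could be fed back into a static source view.

    # Simplified stand-in (Python decorator) for aspect-based gathering of
    # per-method dynamic metrics; the invented render() call site is polymorphic.
    from collections import Counter, defaultdict
    from functools import wraps

    call_counts = Counter()
    observed_types = defaultdict(set)

    def profiled(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            call_counts[fn.__qualname__] += 1
            observed_types[fn.__qualname__].update(type(a).__name__ for a in args)
            return fn(*args, **kwargs)
        return wrapper

    @profiled
    def render(item):
        return str(item)

    for value in (1, "two", 3.0, [4]):
        render(value)

    for name, count in call_counts.items():
        print(f"{name}: {count} calls, argument types seen: {sorted(observed_types[name])}")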

Relevance: 10.00%

Publisher:

Abstract:

Maintaining object-oriented systems that use inheritance and polymorphism is difficult, since runtime information, such as which methods are actually invoked at a call site, is not visible in the static source code. We have implemented Senseo, an Eclipse plugin enhancing Eclipse's static source views with various dynamic metrics, such as runtime types, the number of objects created, or the amount of memory allocated in particular methods.

Relevance: 10.00%

Publisher:

Abstract:

In conventional software applications, synchronization code is typically interspersed with functional code, thereby impacting the understandability and maintainability of the code base. At the same time, synchronization defined statically in the code cannot adapt to different runtime situations. We propose a new approach to concurrency control which strictly separates the functional code from the synchronization requirements to be used, and which adapts objects to be synchronized dynamically to their environment. First-class synchronization specifications express safety requirements, and a dynamic synchronization system adapts objects to different runtime situations. We present an overview of a prototype of our approach together with several classical concurrency problems, and we discuss open issues for further research.
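
A minimal sketch of the proposed separation, under invented names and an invented spec format: the class below contains no locking, and a separately specified synchronization requirement is applied to an instance at run-time.

    # Invented spec format and names; a minimal sketch of keeping synchronization
    # out of the functional code and applying it to an object dynamically.
    import threading

    class Counter:
        """Purely functional code: no synchronization logic in here."""
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1

    def synchronize(obj, spec):
        """Apply a separately defined synchronization spec to one object."""
        lock = threading.Lock()
        for name in spec["mutual_exclusion"]:
            original = getattr(obj, name)
            def guarded(*args, _original=original, **kwargs):
                with lock:                      # all listed methods share one lock
                    return _original(*args, **kwargs)
            setattr(obj, name, guarded)
        return obj

    # First-class synchronization specification, kept apart from Counter.
    spec = {"mutual_exclusion": ["increment"]}
    counter = synchronize(Counter(), spec)

    threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter.value)   # 40000: the increments were serialized by the spec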

Relevance: 10.00%

Publisher:

Abstract:

This article investigates barriers to a wider utilization of a Learning Management System (LMS). The study aims to identify the reasons why some tools in the LMS are rarely used, in spite of assertions that the learning experience and students’ performance can be improved by the interaction and collaboration the LMS facilitates. Lecturers’ perceptions about the use of LMSs over the last four years at the School of Engineering, University of Borås were investigated. Seventeen lecturers who were interviewed in 2006 were interviewed again in 2011. The lecturers still use the LMS primarily for distribution of documents and course administration. The results indicate that their attitudes have not changed significantly. The apparent reluctance to utilize interactive features in the LMS is analyzed by looking at the expected impact on the lecturers’ work situation. The author argues that the main barrier to a wider utilization of the LMS is the lecturers’ fear of additional demands on their time. Hence, if educational institutions want a wider utilization of the LMS, some kind of incentive for lecturers is needed, in addition to support and training.

Relevance: 10.00%

Publisher:

Abstract:

Computer science education, and programming instruction in particular, is an important part of school curricula today. Simplified development environments that abstract typical programming concepts into graphical building blocks support this trend. Additional appeal is created by the use of exotic runtime environments (e.g., robots). The platform “ScratchDrone” presented in this paper complements these offerings by introducing a modern flying drone as an innovative runtime environment for Scratch programs. Thanks to a modular system architecture, programming can take place at different levels of abstraction, depending on the students’ learning progress. Combined with a multi-stage didactic model, the challenge of movement in 3D space, and the natural human fascination with flying, this achieves high learning motivation among young novice programmers.

Relevance: 10.00%

Publisher:

Abstract:

Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs for controlling the scaling of distributed services by combining data analysis mechanisms with application benchmarking using multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used to control the management of distributed services scaling.
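
A toy sketch of the last step, assuming a hypothetical predictor metric (queued requests per service instance) and thresholds that would, in the described approach, be inferred from benchmark data rather than hand-picked: the controller compares the metric against the rule and decides whether to scale the service out or in.

    # Hypothetical metric names, thresholds and controller interface; a toy
    # stand-in for an inferred SLA scaling rule driving scale-out / scale-in.
    from dataclasses import dataclass

    @dataclass
    class ScalingRule:
        metric: str             # predictor metric discovered from benchmark data
        scale_out_above: float  # thresholds chosen so the SLA target still holds
        scale_in_below: float
        min_instances: int = 1
        max_instances: int = 10

    def decide_scaling(rule: ScalingRule, metrics: dict, instances: int) -> int:
        value = metrics[rule.metric] / instances       # load per running instance
        if value > rule.scale_out_above and instances < rule.max_instances:
            return instances + 1                       # scale out
        if value < rule.scale_in_below and instances > rule.min_instances:
            return instances - 1                       # scale in
        return instances

    rule = ScalingRule(metric="queued_requests", scale_out_above=50, scale_in_below=10)

    instances = 2
    for load in (40, 150, 260, 180, 30):               # simulated workload samples
        instances = decide_scaling(rule, {"queued_requests": load}, instances)
        print(f"load={load:3d} -> run {instances} instance(s)")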

Relevance: 10.00%

Publisher:

Abstract:

We introduce a version of operational set theory, OST−, without a choice operation, which has a machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke–Platek set theory KP with infinity are pairwise Π1 equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways, and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make “a detour via intuitionistic theories”. The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new way of constructing models for classical theories and could be said to be the third kind of model construction ever used that is non-trivial at the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability model.
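
In symbols, and merely restating the claim above (writing KPω for Kripke–Platek set theory with the axiom of infinity), the announced result is the pairwise Π1-equivalence:

    % Notation only: a restatement of the equivalences claimed in the abstract,
    % with KP\omega denoting Kripke-Platek set theory with the axiom of infinity.
    \[
      \mathsf{OST}^{-} \;\equiv_{\Pi_1}\; \mathsf{WEST} \;\equiv_{\Pi_1}\; \mathsf{KP}\omega
    \]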

Relevance: 10.00%

Publisher:

Abstract:

The thesis that entities exist in, at, or in relation to logically possible worlds is criticized. The suggestion that actually nonexistent fictional characters might nevertheless exist in nonactual, merely logically possible worlds runs afoul of the most general transworld identity requirements. An influential philosophical argument for the concept of world-relativized existence is examined in Alvin Plantinga’s formal development and explanation of modal semantic relations. Although it proposes an attractive unified semantics of alethic modality, Plantinga’s argument is rejected on formal grounds as supporting materially false actual existence assertions in the case of actually nonexistent objects, within the framework of Plantinga’s own underlying classical predicate-quantificational logic.