167 results for Abstractions
Abstract:
This is the River Yealm Salmon Action Plan Consultation document produced by the Environment Agency in 2003. The report concerns the external consultation on the River Yealm Salmon Action Plan (SAP). This strategy represents an entirely new approach to salmon management within the UK and introduces the concept of river-specific salmon spawning targets as a salmon management tool. The River Yealm SAP follows the format of those completed for the rivers Tamar, Lynher, Tavy, Camel, Fowey and Plym, and is the last of seven action plans to be produced for salmon rivers managed by Cornwall Area. The plan contains a description of the river catchment and highlights particular features that are relevant to the salmon population and the associated fishery. Notably, there are potable water abstractions in the headwaters, china clay workings on the main tributary, two inert-waste landfill sites and an aggregate quarry adjacent to the main river, and inputs from several Sewage Treatment Works (STWs) and two industrial estates. The main River Yealm has been designated River Ecosystem Class 1 for its water quality objectives, the highest water quality target set for rivers.
Abstract:
While a large amount of research over the past two decades has focused on discrete abstractions of infinite-state dynamical systems, many structural and algorithmic details of these abstractions remain unknown. To clarify the computational resources needed to perform discrete abstractions, this paper examines the algorithmic properties of an existing method for deriving finite-state systems that are bisimilar to linear discrete-time control systems. We explicitly characterize the structure of the finite-state system, show that it can be enormous compared to the original linear system, and give conditions that guarantee the finite-state system is reasonably sized and efficiently computable. Although constructing the finite-state system is generally impractical, special cases may be amenable to satisfiability-based verification techniques. ©2009 IEEE.
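To make the object of study concrete, the sketch below builds a finite-state abstraction of a scalar linear discrete-time system by quantizing a bounded state set into cells. This is a generic illustration with made-up numbers, not the paper's bisimulation construction (which requires extra conditions); a plain quantization like this generally yields only a nondeterministic over-approximation.

```python
import itertools

# Toy abstraction of the scalar system x' = a*x + b*u: partition a bounded
# state set into uniform cells and record which cells each (cell, input)
# pair can reach. All numbers here are illustrative assumptions.
a, b = 0.5, 1.0
cells = [(-2 + i, -1 + i) for i in range(4)]   # uniform partition of [-2, 2]
inputs = [-1.0, 0.0, 1.0]                      # finite input alphabet

def successors(cell, u):
    lo, hi = cell
    img_lo, img_hi = a * lo + b * u, a * hi + b * u   # image of the cell (a > 0)
    return [j for j, (clo, chi) in enumerate(cells)
            if img_lo < chi and img_hi > clo]         # cells overlapping the image

transitions = {(i, u): successors(c, u)
               for (i, c), u in itertools.product(enumerate(cells), inputs)}
print(transitions[(0, 1.0)])   # successors of cell 0 under input u = 1.0
```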
Abstract:
We present some recent developments in automated computational modelling with an emphasis on solid mechanics applications. The automation process permits an abstract mathematical model of a physical problem to be translated into computer code rapidly and trivially, and can lead to code that is faster than hand-written, optimised code. Crucial to the approach is ensuring that mathematical abstractions inherent in the mathematical model are inherited by the software library. © Springer Science+Business Media B.V. 2008.
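As a generic illustration of the automation idea (assuming nothing about the authors' toolchain), a symbolic description of a simple constitutive law can be turned directly into low-level code; the 1-D law and symbol names below are illustrative assumptions.

```python
import sympy as sp

# Write the mathematical model symbolically and emit C code from it, so the
# abstraction, not hand-coding, drives the implementation (illustration only).
E, nu, eps = sp.symbols('E nu eps')        # Young's modulus, Poisson ratio, strain

mu = E / (2 * (1 + nu))                    # Lame parameters derived from (E, nu)
lam = E * nu / ((1 + nu) * (1 - 2 * nu))

# 1-D stress-strain law derived from a symbolic strain-energy density W
W = lam / 2 * eps**2 + mu * eps**2
sigma = sp.diff(W, eps)                    # sigma = dW/d(eps)

print(sp.ccode(sp.simplify(sigma), assign_to='sigma'))
```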
Abstract:
Motivated by the design and development challenges of the BART case study, an approach for developing and analyzing a formal model for reactive systems is presented. The approach makes use of a domain-specific language for specifying control algorithms able to satisfy competing properties such as safety and optimality. The domain language, called SPC, offers several key abstractions, such as the state, the profile, and the constraint, to facilitate problem specification. Using a high-level program transformation system such as HATS, under development at the University of Nebraska at Omaha, specifications in this modelling language can be transformed to ML code. The resulting executable specification can be further refined by applying generic transformations to the abstractions provided by the domain language. Problem-dependent transformations utilizing domain-specific knowledge and properties may also be applied. The result is a significantly more efficient implementation which can be used for simulation and for gaining deeper insight into design decisions and various control policies. The correctness of transformations can be established using the rewrite-rule-based induction theorem prover Rewrite Rule Laboratory (RRL), developed at the University of New Mexico.
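A hypothetical miniature of the transformation style (not HATS or RRL): terms are rewritten bottom-up by rules until no rule fires, which is the mechanism by which generic and problem-dependent transformations refine a specification.

```python
# Terms are nested tuples; a rule is (matcher, builder). The matcher returns
# bindings (or None), and the builder constructs the replacement term.
def rewrite(term, rules):
    """Normalize children first, then apply rules at this node to a fixed point."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    changed = True
    while changed:
        changed = False
        for matches, build in rules:
            m = matches(term)
            if m is not None:
                term, changed = build(m), True
    return term

# Example rule: simplify ('add', x, 0) -> x, a classic cleanup transformation.
rules = [(lambda t: t[1] if isinstance(t, tuple) and len(t) == 3
          and t[0] == 'add' and t[2] == 0 else None,
          lambda x: x)]
print(rewrite(('add', ('add', 'v', 0), 0), rules))   # -> 'v'
```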
Abstract:
We describe an approach to parallel compilation that seeks to harness the vast amount of fine-grain parallelism that is exposed through partial evaluation of numerically-intensive scientific programs. We have constructed a compiler for the Supercomputer Toolkit parallel processor that uses partial evaluation to break down data abstractions and program structure, producing huge basic blocks that contain large amounts of fine-grain parallelism. We show that this fine-grain prarllelism can be effectively utilized even on coarse-grain parallel architectures by selectively grouping operations together so as to adjust the parallelism grain-size to match the inter-processor communication capabilities of the target architecture.
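A minimal sketch of the key step, with illustrative names and data: partially evaluating a dot product against known coefficients removes the loop and the data abstraction, leaving a flat expression whose independent multiplies expose fine-grain parallelism for a scheduler to regroup.

```python
# Specialize dot(known, x) for a fixed coefficient vector: the residual
# program is straight-line code with no loop and no zero-coefficient work.
def specialize_dot(known):
    """Generate and compile source code for dot(known, x)."""
    terms = [f"{c} * x[{i}]" for i, c in enumerate(known) if c != 0]
    src = "def dot_spec(x):\n    return " + " + ".join(terms or ["0"])
    env = {}
    exec(src, env)              # compile the residual program
    return env["dot_spec"], src

f, src = specialize_dot([2.0, 0.0, 3.0])
print(src)                      # residual code: 2.0 * x[0] + 3.0 * x[2]
print(f([1.0, 5.0, 4.0]))       # -> 14.0
```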
Abstract:
Recent developments in microfabrication and nanotechnology will enable the inexpensive manufacturing of massive numbers of tiny computing elements with sensors and actuators. New programming paradigms are required for obtaining organized and coherent behavior from the cooperation of large numbers of unreliable processing elements that are interconnected in unknown, irregular, and possibly time-varying ways. Amorphous computing is the study of developing and programming such ultrascale computing environments. This paper presents an approach to programming an amorphous computer by spontaneously organizing an unstructured collection of processing elements into cooperative groups and hierarchies. It introduces a structure called an AC hierarchy, which logically organizes processors into groups at different levels of granularity. The AC hierarchy simplifies programming of an amorphous computer through new language abstractions, facilitates the design of efficient and robust algorithms, and simplifies the analysis of their performance. Several example applications that greatly benefit from the AC hierarchy are presented, along with three algorithms for constructing multiple levels of the hierarchy from an unstructured collection of processors.
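A toy sketch of one grouping step under assumed rules (random leader election plus nearest-leader membership), not the paper's actual algorithms: re-running the step on the leaders alone would yield the next, coarser level of a hierarchy.

```python
import random

# One level of grouping: some processors become leaders at random; each
# remaining processor joins its nearest leader (illustrative rules only).
def group_once(points, leader_prob=0.2, rng=None):
    rng = rng or random.Random(1)
    leaders = [p for p in points if rng.random() < leader_prob] or [points[0]]
    groups = {l: [] for l in leaders}
    for p in points:
        nearest = min(leaders, key=lambda l: (l[0]-p[0])**2 + (l[1]-p[1])**2)
        groups[nearest].append(p)
    return groups

rng = random.Random(7)
points = [(rng.random(), rng.random()) for _ in range(20)]   # processor positions
for leader, members in group_once(points, rng=random.Random(2)).items():
    print(leader, "->", len(members), "members")
```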
Abstract:
Lee M.H., Model-Based Reasoning: A Principled Approach for Software Engineering, Software - Concepts and Tools, 19(4), pp. 179-189, 2000.
Abstract:
Lee M.H., Qualitative Modelling of Linear Networks in ECAD Applications, Expert Update, Vol. 3, No. 2, pp. 23-32, BCS SGES, Summer 2000. An earlier version appeared as: Lee M., Qualitative Modeling of Linear Networks in ECAD Applications, in Proceedings of the 13th International Workshop on Qualitative Reasoning (QR '99), 1999, pp. 146-152.
Abstract:
Current low-level networking abstractions on modern operating systems are commonly implemented in the kernel to provide sufficient performance for general-purpose applications. However, it is desirable for high-performance applications to have more control over the networking subsystem to support optimizations for their specific needs. One approach is to allow networking services to be implemented at user level. Unfortunately, this typically incurs costs due to scheduling overheads and unnecessary data copying via the kernel. In this paper, we describe a method to implement efficient application-specific network service extensions at user level that removes the cost of scheduling and provides protected access to lower-level system abstractions. We present a networking implementation that, with minor modifications to the Linux kernel, passes data between "sandboxed" extensions and the Ethernet device without copying or processing in the kernel. Using this mechanism, we put a customizable networking stack into a user-level sandbox and show how it can be used to efficiently process and forward data via proxies, or intermediate hosts, in the communication path of high-performance data streams. Unlike other user-level networking implementations, our method imposes no special hardware requirements to avoid unnecessary data copies. Results show that our networking stack implementation achieves a substantial increase in throughput over comparable user-space methods.
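Purely for orientation, the sketch below is an ordinary user-level forwarding proxy over standard kernel sockets: it shows the kind of in-path service the paper targets, while still paying the copying and scheduling costs that the paper's sandboxed, copy-free mechanism is designed to remove.

```python
import socket, threading

def pipe(src, dst):
    # Relay bytes one way; application-specific processing could go here.
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def proxy(listen_port, target_host, target_port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(16)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# proxy(8080, "example.org", 80)   # forward local port 8080 to example.org:80
```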
Abstract:
A foundational issue underlying many overlay network applications, ranging from routing to P2P file sharing, is that of connectivity management, i.e., folding new arrivals into the existing mesh and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are tractable to address via theoretical analyses, especially game-theoretic analysis. Our work unifies these two thrusts by distilling insights gleaned from clean theoretical models, notably that, under natural resource constraints, selfish players can select neighbors so as to efficiently reach near-equilibria that also provide high global performance. Using Egoist, a prototype overlay routing system we implemented on PlanetLab, we demonstrate that our neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics; that Egoist is competitive with an optimal, but unscalable, full-mesh approach; and that it remains highly effective under significant churn. We also describe variants of Egoist's current design that would enable it to scale to much larger overlays and allow it to cater effectively to applications, such as P2P file sharing in unstructured overlays, based on the use of primitives such as scoped flooding rather than routing.
Abstract:
A foundational issue underlying many overlay network applications ranging from routing to P2P file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist – a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications.
Abstract:
A foundational issue underlying many overlay network applications ranging from routing to peer-to-peer file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and rewiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist – a distributed overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using extensive measurements of paths between nodes, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we use a multiplayer peer-to-peer game to demonstrate the value of Egoist to end-user applications. This technical report supersedes BUCS-TR-2007-013.
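To fix intuition across these three Egoist abstracts, here is a hypothetical best-response neighbor-selection step under an assumed cost model (link delay to a neighbor plus the neighbor's current distance to each destination); the paper's actual primitives, cost metrics, and measurements differ.

```python
import itertools, random

# Node i keeps k links and greedily picks the neighbor set minimizing its
# total delay to every destination, given current routing state (toy model).
def best_response(i, nodes, link, dist, k):
    others = [n for n in nodes if n != i]
    def cost(S):
        return sum(min(link[i][n] + dist[n][d] for n in S) for d in others)
    return min(itertools.combinations(others, k), key=cost)

random.seed(0)
nodes = list(range(6))
link = [[abs(a - b) + random.random() for b in nodes] for a in nodes]  # link delays
dist = [[abs(a - b) for b in nodes] for a in nodes]  # stand-in routing distances
print(best_response(0, nodes, link, dist, k=2))      # node 0's selfish choice
```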
Abstract:
Model Driven Architecture (MDA) supports the transformation from reusable models to executable software. Business knowledge, however, cannot be fully and explicitly represented in such models for direct transformation into running systems. Thus, once business needs change, the language abstractions used by MDA (e.g. Object Constraint Language / Action Semantics), being low level, have to be edited directly. We therefore describe an Agent-oriented Model Driven Architecture (AMDA) that uses a set of business models under continuous maintenance by business people, reflecting the current business needs and associated with adaptive agents that interpret the captured knowledge to behave dynamically. Three contributions of the AMDA approach are identified: 1) to Agent-oriented Software Engineering, a method of building adaptive Multi-Agent Systems; 2) to MDA, a means of abstracting high-level business-oriented models to align executable systems with their requirements at runtime; 3) to distributed systems, the interoperability of disparate components and services via the agent abstraction.
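A minimal sketch of the runtime-interpretation idea with an assumed rule format (not the paper's notation): because agents interpret business rules kept as data, editing the rules redirects behavior at runtime without regenerating code.

```python
# Business rules as data; an agent evaluates them on each request, so the
# rule set can be maintained live by business people (illustrative format).
rules = [
    {"when": lambda order: order["amount"] > 1000, "then": "require_approval"},
    {"when": lambda order: True,                   "then": "auto_fulfil"},
]

def agent_decide(order):
    """First matching rule wins; rules can be edited while running."""
    for rule in rules:
        if rule["when"](order):
            return rule["then"]

print(agent_decide({"amount": 1500}))   # -> require_approval
rules.insert(0, {"when": lambda o: o["amount"] > 500, "then": "discount_check"})
print(agent_decide({"amount": 600}))    # -> discount_check (behavior changed live)
```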
Abstract:
Recent trends in computing systems, such as multi-core processors and cloud computing, expose tens to thousands of processors to the software. Software developers must respond by introducing parallelism into their software. To obtain the highest performance, it is not only necessary to identify parallelism, but also to reason about synchronization between threads and the communication of data from one thread to another. This entry gives an overview of some of the most common abstractions used in parallel programming, namely explicit versus implicit expression of parallelism, and shared versus distributed memory. Several parallel programming models are reviewed and categorized by means of these abstractions. The pros and cons of parallel programming models from the perspectives of performance and programmability are discussed.
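A small illustration of the two memory abstractions the entry contrasts, using only Python's standard library: threads mutating one shared counter under a lock versus processes exchanging data chunks by message passing.

```python
import threading, multiprocessing

def shared_memory_sum(values, nthreads=4):
    # Shared memory: threads update one total, synchronized by a lock.
    total, lock = [0], threading.Lock()
    def work(chunk):
        s = sum(chunk)
        with lock:                  # explicit synchronization on shared state
            total[0] += s
    step = (len(values) + nthreads - 1) // nthreads
    threads = [threading.Thread(target=work, args=(values[i:i+step],))
               for i in range(0, len(values), step)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total[0]

def distributed_sum(values, nprocs=4):
    # Distributed memory: chunks are sent to workers; results come back
    # as messages, with no state shared between address spaces.
    step = (len(values) + nprocs - 1) // nprocs
    chunks = [values[i:i+step] for i in range(0, len(values), step)]
    with multiprocessing.Pool(nprocs) as pool:
        return sum(pool.map(sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    print(shared_memory_sum(data), distributed_sum(data))
```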