897 results for abstraction


Relevance:

10.00%

Publisher:

Abstract:

Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change, and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that matches the application's requirements.
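A minimal sketch of the two-phase idea described above, assuming invented resource attributes and a cost-based heuristic; the class and field names are illustrative, not the paper's actual abstraction layer.

```python
# Hypothetical two-phase resource selection: phase 1 filters candidates
# against the application's hard constraints, phase 2 ranks the survivors
# with a pluggable heuristic (cost-based in this example).

from dataclasses import dataclass

@dataclass
class Resource:
    provider: str
    cores: int
    memory_gb: int
    hourly_cost: float

def phase1_filter(resources, min_cores, min_memory_gb):
    """Phase 1: keep only resources that satisfy the hard constraints."""
    return [r for r in resources
            if r.cores >= min_cores and r.memory_gb >= min_memory_gb]

def phase2_select(candidates, heuristic):
    """Phase 2: pick the best candidate according to the chosen heuristic."""
    return min(candidates, key=heuristic) if candidates else None

catalogue = [
    Resource("ProviderA", 8, 32, 0.40),
    Resource("ProviderB", 16, 64, 0.90),
    Resource("ProviderC", 4, 16, 0.20),
]

feasible = phase1_filter(catalogue, min_cores=8, min_memory_gb=32)
best = phase2_select(feasible, heuristic=lambda r: r.hourly_cost)  # cost-based
print(best)
```

A performance-based heuristic would simply swap the key function, e.g. ranking by cores or by a benchmark score, without changing the two-phase structure.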

Relevance:

10.00%

Publisher:

Abstract:

The prevalence of multicore processors is bound to drive most kinds of software development towards parallel programming. To limit the difficulty and overhead of parallel software design and maintenance, it is crucial that parallel programming models allow an easy-to-understand, concise and dense representation of parallelism. Parallel programming models such as Cilk++ and Intel TBBs attempt to offer a better, higher-level abstraction for parallel programming than threads and locking synchronization. It is not straightforward, however, to express all patterns of parallelism in these models. Pipelines are an important parallel construct, yet they are difficult to express in Cilk and TBBs in a straightforward way without a verbose restructuring of the code. In this paper we demonstrate that pipeline parallelism can be easily and concisely expressed in a Cilk-like language, which we extend with input, output and input/output dependency types on procedure arguments, enforced at runtime by the scheduler. We evaluate our implementation on real applications and show that our Cilk-like scheduler, extended to track and enforce these dependencies, has performance comparable to Cilk++.
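The following toy Python sketch is not the authors' Cilk extension; it only illustrates how input/output/inout tags on procedure arguments let a scheduler derive dependencies between tasks. Here the tasks are simply run in program order after the dependencies are computed; a real runtime would execute independent tasks concurrently.

```python
# Toy dependency-typed task scheduler: arguments are tagged IN, OUT or INOUT
# and read-after-write dependencies are derived from the tags.

IN, OUT, INOUT = "in", "out", "inout"

class TinyScheduler:
    def __init__(self):
        self.tasks = []                       # each entry: (function, tagged args)

    def spawn(self, fn, *tagged_args):
        # tagged_args are (object name, dependency mode, value) triples
        self.tasks.append((fn, tagged_args))

    def run(self):
        last_writer = {}                      # object name -> index of last writer
        for i, (fn, args) in enumerate(self.tasks):
            deps = set()
            for name, mode, _ in args:
                if mode in (IN, INOUT) and name in last_writer:
                    deps.add(last_writer[name])   # read-after-write dependency
                if mode in (OUT, INOUT):
                    last_writer[name] = i
            print(f"task {i} depends on tasks {sorted(deps)}")
            fn(*[value for _, _, value in args])  # run in program order

# Two-stage "pipeline": stage 1 writes buf, stage 2 reads it.
buf = []
sched = TinyScheduler()
sched.spawn(lambda b: b.append(1), ("buf", OUT, buf))
sched.spawn(lambda b: print("stage 2 consumed", b), ("buf", IN, buf))
sched.run()
```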

Relevance:

10.00%

Publisher:

Abstract:

The concept of space entered architectural history as late as 1893. Studies in art opened up the discussion, and it has been studied in various ways in architecture ever since. This article aims to instigate an additional reading of architectural history, one that is not supported by "isms" but based on space theories of the 20th century. The objectives of the article are to bring the concept of space and its changing paradigms to the attention of architectural researchers, to introduce a conceptual framework to classify and clarify theories of space, and to enrich the discussions on 20th-century architecture through theories that are beyond styles. The introduction of space in architecture will revolve around subject-object relationships, three-dimensionality and the senses. Modern space will be discussed through concepts such as empathy, perception, abstraction, and geometry. A scientific approach will follow to study the concept of place through environment, event, behavior, and design methods. Finally, the research will look at contemporary approaches related to digitally supported space via concepts like reality-virtuality, mediated experience, and relationship with machines.

Relevance:

10.00%

Publisher:

Abstract:

The performance of exchange-correlation (xc) functionals of the generalized gradient approximation (GGA) type and of the meta-GGA type in the calculation of chemical reactions is related to topological features of the electron density which, in turn, are connected to the orbital structure of chemical bonds within Kohn-Sham (KS) theory. Seventeen GGA and meta-GGA xc functionals are assessed for 15 hydrogen abstraction reactions and 3 symmetrical S_N2 reactions. Systems that are problematic for standard GGAs characteristically have enhanced values of the dimensionless gradient argument s_σ² with local maxima in the bonding region. The origin of this topological feature is the occupation of valence KS orbitals with an antibonding or essentially nonbonding character. The local enhancement of s_σ² yields too negative exchange-correlation energies with standard GGAs for the transition state of the S_N2 reaction, which leads to reduced calculated reaction barriers. The unwarranted localization of the effective xc hole of the standard GGAs, i.e., the nondynamical correlation that is built into them but is spurious in this case, wields its effect through their s_σ² dependence. Barriers are improved for xc functionals with the exchange functional OPTX as the x component, which has a modified dependence on s_σ². Standard GGAs also underestimate the barriers for the hydrogen abstraction reactions. In this case the barriers are improved by correlation functionals, such as the Laplacian-dependent (LAP3) functional, which has a modified dependence on the Coulomb correlation of the opposite- and like-spin electrons. The best overall performance is established for the combination OLAP3 of OPTX and LAP3.
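For readers unfamiliar with the dimensionless gradient argument s_σ², the sketch below evaluates one common spin-resolved convention; the exact prefactor used in the paper may differ, so treat this as an assumption rather than the authors' definition.

```python
# Spin-resolved reduced (dimensionless) density gradient, one common convention:
#   s_sigma = |grad rho_sigma| / (2 * (6*pi^2)**(1/3) * rho_sigma**(4/3))
# The paper's exact normalisation may differ; this is illustrative only.

import math

def reduced_gradient(rho_sigma: float, grad_rho_sigma: float) -> float:
    """Dimensionless gradient s_sigma from a spin density and its gradient norm."""
    prefactor = 2.0 * (6.0 * math.pi ** 2) ** (1.0 / 3.0)
    return grad_rho_sigma / (prefactor * rho_sigma ** (4.0 / 3.0))

# A point with modest density but a large gradient gives a large s_sigma^2,
# the regime where standard GGAs overbind S_N2 transition states.
s = reduced_gradient(rho_sigma=0.05, grad_rho_sigma=0.20)
print(s, s ** 2)
```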

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Variation across research ethics boards (REBs) in conditions placed on access to medical records for research purposes raises concerns around negative impacts on research quality and on human subject protection, including privacy. Aim: To study variation in REB consent requirements for retrospective chart review and who may have access to the medical record for data abstraction. Methods: Thirty 90-min face-to-face interviews were conducted with REB chairs and administrators affiliated with faculties of medicine in Canadian universities, using structured questions around a case study with open-ended responses. Interviews were recorded, transcribed and coded manually. Results: Fourteen sites (47%) required individual patient consent for the study to proceed as proposed. Three (10%) indicated that their response would depend on how potentially identifying variables would be managed. Eleven sites (38%) did not require consent. Two (7%) suggested a notification and opt-out process. Most stated that consent would be required if identifiable information was being abstracted from the record. Among those not requiring consent, there was substantial variation in recognising that the abstracted information could potentially indirectly re-identify individuals. Concern over access to medical records by an outside individual was also associated with requirement for consent. Eighteen sites (60%) required full committee review. Sixteen (53%) allowed an external research assistant to abstract information from the health record. Conclusions: Large variation was found across sites in the requirement for consent for research involving access to medical records. REBs need training in best practices for protecting privacy and confidentiality in health research. A forum for REB chairs to confidentially share concerns and decisions about specific studies could also reduce variation in decisions.

Relevance:

10.00%

Publisher:

Abstract:

Introducing automation into a managed environment involves significant initial overhead and abstraction, creating a disconnect between the administrator and the system. In order to facilitate the transition to automated management, this paper proposes an approach whereby automation increases gradually, gathering data from the task deployment process. This stored data is analysed to determine the task outcome status and can then be used for comparison against future deployments of the same task and for alerting the administrator to deviations from the expected outcome. Using a machine-learning approach, the automation tool can learn from the administrator's reaction to task failures and eventually react to faults autonomously.
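An illustrative sketch of the baseline-and-deviation idea described above (not the paper's tool): the first observed outcome of a task becomes its stored baseline, and later deployments are compared against it, alerting the administrator on any mismatch. The file name and outcome fields are assumptions.

```python
# Record each automated task's outcome; alert when a later run deviates
# from the stored baseline. Purely illustrative of the described approach.

import json
from pathlib import Path

BASELINE = Path("task_baselines.json")   # hypothetical baseline store

def load_baselines():
    return json.loads(BASELINE.read_text()) if BASELINE.exists() else {}

def record_or_check(task_name: str, outcome: dict) -> None:
    baselines = load_baselines()
    if task_name not in baselines:
        baselines[task_name] = outcome               # first run becomes the baseline
        BASELINE.write_text(json.dumps(baselines))
        return
    expected = baselines[task_name]
    deviations = {k: (expected.get(k), v) for k, v in outcome.items()
                  if expected.get(k) != v}
    if deviations:
        print(f"ALERT: {task_name} deviated from expected outcome: {deviations}")

record_or_check("deploy_webserver", {"exit_code": 0, "services_started": 3})
record_or_check("deploy_webserver", {"exit_code": 1, "services_started": 2})
```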

Relevance:

10.00%

Publisher:

Abstract:

On multiprocessors with explicitly managed memory hierarchies (EMM), software has the responsibility of moving data in and out of fast local memories. This task can be complex and error-prone even for expert programmers. Before we can allow compilers to handle the complexity for us, we must identify the abstractions that are general enough to allow us to write applications with reasonable effort, yet specific enough to exploit the vast on-chip memory bandwidth of EMM multiprocessors. To this end, we compare two programming models against hand-tuned codes on the STI Cell, paying attention to programmability and performance. The first programming model, Sequoia, abstracts the memory hierarchy as private address spaces, each corresponding to a parallel task. The second, Cellgen, is a new framework which provides OpenMP-like semantics and the abstraction of a shared address space divided into private and shared data. We compare three applications programmed using these models against their hand-optimized counterparts in terms of abstractions, programming complexity, and performance.
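The staging pattern that these frameworks hide can be sketched conceptually as below; the "local store" is just a copy in plain Python, standing in for the explicit DMA transfers an EMM machine such as the Cell requires. This is an assumption-laden illustration, not Sequoia or Cellgen code.

```python
# Conceptual sketch of explicitly managed data movement: data is staged in
# fixed-size chunks into a small "local store" before compute, which is the
# bookkeeping that Sequoia- or Cellgen-style abstractions take off the
# programmer's hands.

LOCAL_STORE_ELEMS = 4   # pretend the fast local memory holds only 4 elements

def process(local_chunk):
    return [x * x for x in local_chunk]           # compute on local data only

def run_on_emm(big_array):
    result = []
    for start in range(0, len(big_array), LOCAL_STORE_ELEMS):
        local = list(big_array[start:start + LOCAL_STORE_ELEMS])  # "DMA in"
        result.extend(process(local))                             # compute
    return result                                                 # "DMA out"

print(run_on_emm(range(10)))
```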

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined in terms of CPU/GPU combinations, for example. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, etc. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
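As a minimal illustration of the algorithmic-skeleton idea the project builds on, the sketch below implements a "farm" skeleton: the programmer names the pattern and supplies a worker function, and the skeleton supplies the parallel plumbing. It is not the ParaPhrase toolchain; names and the worker are invented.

```python
# Farm skeleton sketch: apply `worker` to a stream of inputs using a pool of
# worker processes, so the pattern rather than the threading is expressed.

from concurrent.futures import ProcessPoolExecutor

def farm(worker, inputs, nworkers=4):
    """Farm skeleton: map `worker` over `inputs` with `nworkers` processes."""
    with ProcessPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, inputs))

def heavy_task(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    print(farm(heavy_task, [10_000, 20_000, 30_000]))
```

A refactoring tool in this style would rewrite a sequential loop over `inputs` into the `farm(...)` call, guided by cost information about the target hardware.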

Relevance:

10.00%

Publisher:

Abstract:

Explanations for the causes of famine and food insecurity often reside at a high level of aggregation or abstraction. Popular models within famine studies have often emphasised the role of prime movers such as population stress, or the political-economic structure of access channels, as key determinants of food security. Explanation typically resides at the macro level, obscuring the presence of substantial within-country differences in the manner in which such stressors operate. This study offers an alternative approach to analyse the uneven nature of food security, drawing on the Great Irish famine of 1845–1852. Ireland is often viewed as a classical case of Malthusian stress, whereby population outstripped food supply under a pre-famine demographic regime of expanded fertility. Many have also pointed to Ireland's integration with capitalist markets through its colonial relationship with the British state, and country-wide system of landlordism, as key determinants of local agricultural activity. Such models are misguided, ignoring both substantial complexities in regional demography, and the continuity of non-capitalistic, communal modes of land management long into the nineteenth century. Drawing on resilience ecology and complexity theory, this paper subjects a set of aggregate data on pre-famine Ireland to an optimisation clustering procedure, in order to discern the potential presence of distinctive social–ecological regimes. Based on measures of demography, social structure, geography, and land tenure, this typology reveals substantial internal variation in regional social–ecological structure, and vastly differing levels of distress during the peak famine months. This exercise calls into question the validity of accounts which emphasise uniformity of structure, by revealing a variety of regional regimes, which profoundly mediated local conditions of food security. Future research should therefore consider the potential presence of internal variations in resilience and risk exposure, rather than seeking to characterise cases based on singular macro-dynamics and stressors alone.
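The "optimisation clustering procedure" mentioned above can be sketched generically as below; the indicator variables and data are invented purely for illustration and are not the study's dataset, variables, or chosen algorithm.

```python
# Generic k-means clustering of regional indicator vectors (e.g. demography,
# social structure, land tenure), standing in for the study's optimisation
# clustering procedure. Data and variables are invented.

import random

def kmeans(points, k, iters=50, seed=0):
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        centres = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres, clusters

# Each tuple: (population density, share of smallholdings, share of communal
# tenure) -- invented values for four hypothetical regions.
regions = [(0.9, 0.8, 0.7), (0.8, 0.9, 0.6), (0.2, 0.3, 0.1), (0.1, 0.2, 0.2)]
centres, clusters = kmeans(regions, k=2)
print(centres)
```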

Relevance:

10.00%

Publisher:

Abstract:

African coastal regions are expected to experience the highest rates of population growth in coming decades. Fresh groundwater resources in the coastal zone of East Africa (EA) are highly vulnerable to seawater intrusion. Increasing water demand is leading to unsustainable and ill-planned well drilling and abstraction. Wells supplying domestic, industrial and agricultural needs are, or have become, in many areas too saline for use. Climate change, including weather changes and sea level rise, is expected to exacerbate this problem. The multiplicity of physical, demographic and socio-economic driving factors makes this a very challenging issue for management. At present the state and probable evolution of coastal aquifers in EA are not well documented. The UPGro project 'Towards groundwater security in coastal East Africa' brings together teams from Kenya, Tanzania, the Comoros Islands and Europe to address this knowledge gap. An integrative multidisciplinary approach, combining the expertise of hydrogeologists, hydrologists and social scientists, is investigating selected sites along the coastal zone in each country. Hydrogeologic observatories have been established in different geologic and climatic settings representative of the coastal EA region, where focussed research will identify the current status of groundwater and future threats based on projected demographic and climate change scenarios. Researchers are also engaging with end users as well as local community and stakeholder groups in each area in order to understand the issues most affecting the communities and to search for sustainable strategies for addressing these.

Relevance:

10.00%

Publisher:

Abstract:

The reverse engineering of a skeleton-based programming environment, and its redesign to distribute management activities of the system and thereby remove a potential single point of failure, is considered. The Orc notation is used to facilitate abstraction of the design and analysis of its properties. It is argued that Orc is particularly suited to this role, as this type of management is essentially an orchestration activity. The Orc specification of the original version of the system is modified via a series of semi-formally justified derivation steps to obtain a specification of the decentralized management version, which is then used as a basis for its implementation. Analysis of the two specifications allows qualitative prediction of the expected performance of the derived version with respect to the original, and this prediction is borne out in practice.

Relevance:

10.00%

Publisher:

Abstract:

The scale of BT's operations necessitates the use of very large-scale computing systems, and the storage and management of large volumes of data. Customer product portfolios are an important form of data which can be difficult to store in a space-efficient way. The difficulties arise from the inherently structured form of product portfolios, and the fact that they change over time as customers add or remove products. This paper introduces a new data-modelling abstraction called the List_Tree. It has been designed specifically to support the efficient storage and manipulation of customer product portfolios, but may also prove useful in other applications with similar general requirements.
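The abstract does not spell out the List_Tree layout, so the sketch below only guesses at one plausible shape: a tree of product categories whose nodes hold lists of product instances, supporting add and remove as a portfolio changes over time. All names and products are hypothetical.

```python
# Hypothetical tree-of-lists portfolio structure (not the paper's List_Tree):
# internal nodes are product categories, and each node holds a list of the
# product instances a customer has in that category.

class ListTreeNode:
    def __init__(self, name):
        self.name = name
        self.children = {}     # category name -> ListTreeNode
        self.items = []        # product instances held at this node

    def add(self, path, product):
        if not path:
            self.items.append(product)
            return
        child = self.children.setdefault(path[0], ListTreeNode(path[0]))
        child.add(path[1:], product)

    def remove(self, path, product):
        node = self
        for part in path:
            node = node.children[part]
        node.items.remove(product)

portfolio = ListTreeNode("customer-123")
portfolio.add(["broadband"], "FTTC 80Mb")
portfolio.add(["telephony", "mobile"], "SIM-only plan")
portfolio.remove(["broadband"], "FTTC 80Mb")
```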

Relevance:

10.00%

Publisher:

Abstract:


Herein a new double O-directed free radical hydrostannation reaction is reported on the structurally complex dialkyldiyne 11. Through our use of a conformation-restraining acetal to help prevent stereocenter-compromising 1,5-H-atom abstraction reactions by vinyl radical intermediates, the two vinylstannanes of 10 were concurrently constructed with high stereocontrol using Ph3SnH/Et3B/O2. Distannane 10 was thereafter elaborated into the bis-vinyl iodide 9 via O-silylation and double I–Sn exchange; double Stille coupling of 9, O-desilylation, and oxidation thereafter furnished 8.

Relevance:

10.00%

Publisher:

Abstract:

Ubiquitous parallel computing aims to make parallel programming accessible to a wide variety of programming areas using deterministic and scale-free programming models built on a task abstraction. However, it remains hard to reconcile these attributes with pipeline parallelism, where the number of pipeline stages is typically hard-coded in the program and defines the degree of parallelism.

This paper introduces hyperqueues, a programming abstraction that enables the construction of deterministic and scale-free pipeline parallel programs. Hyperqueues extend the concept of Cilk++ hyperobjects to provide thread-local views on a shared data structure. While hyperobjects are organized around private local views, hyperqueues require shared concurrent views on the underlying data structure. We define the semantics of hyperqueues and describe their implementation in a work-stealing scheduler. We demonstrate scalable performance on pipeline-parallel PARSEC benchmarks and find that hyperqueues provide comparable or up to 30% better performance than POSIX threads and Intel's Threading Building Blocks. The latter are highly tuned to the number of available processing cores, while programs using hyperqueues are scale-free.
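To make the contrast concrete, the sketch below shows the pipeline pattern written with an explicit shared queue and threads, i.e. the plumbing that hyperqueues abstract behind reducer-like views scheduled by work stealing. It is a plain illustration, not the hyperqueue implementation.

```python
# Two-stage pipeline with an explicit shared queue: a producer stage pushes
# work items and a consumer stage drains them, terminated by a sentinel.

import queue
import threading

SENTINEL = object()

def producer(q):
    for i in range(5):
        q.put(i * i)              # stage 1: produce work items in order
    q.put(SENTINEL)               # signal end of stream

def consumer(q):
    while (item := q.get()) is not SENTINEL:
        print("stage 2 consumed", item)

q = queue.Queue()
threads = [threading.Thread(target=producer, args=(q,)),
           threading.Thread(target=consumer, args=(q,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note how the number of stages and threads is fixed in the code; the scale-free property claimed for hyperqueues is precisely that this degree of parallelism is not hard-coded by the programmer.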

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we have developed a low-complexity algorithm for epileptic seizure detection with a high degree of accuracy. The algorithm has been designed to be feasibly implementable as a battery-powered, low-power implantable epileptic seizure detection system, or epilepsy prosthesis. This is achieved by utilizing design optimization techniques at different levels of abstraction. In particular, user-specific critical parameters are identified at the algorithmic level and are explicitly used along with multiplier-less implementations at the architecture level. The system has been tested on neural data obtained from in-vivo animal recordings and has been implemented in 90 nm bulk-Si technology. The results show up to 90% savings in power as compared to the prevalent wavelet-based seizure detection technique, while achieving a 97% average detection rate. Copyright 2010 ACM.
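For illustration only (this is not the paper's algorithm), a low-complexity detector in the same spirit can be sketched as a windowed amplitude test with a patient-specific threshold; the "multiplier-less" flavour is mimicked by using sums and a bit shift instead of multiplications. The window length and threshold values are assumptions.

```python
# Windowed mean-absolute-amplitude detector with a patient-specific threshold.
# Division by the window size is done with a bit shift (window = 256 = 2**8).

def detect_windows(samples, window=256, threshold=40):
    """Return indices of windows whose mean |amplitude| exceeds the threshold."""
    flagged = []
    for w, start in enumerate(range(0, len(samples) - window + 1, window)):
        acc = sum(abs(x) for x in samples[start:start + window])
        if (acc >> 8) > threshold:        # mean = acc / 256 via shift
            flagged.append(w)
    return flagged

# Toy signal: quiet background with one burst of high-amplitude activity.
signal = [1] * 512 + [80] * 256 + [1] * 256
print(detect_windows(signal))             # flags the burst window
```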