206 results for Android, Componenti, Sensori, IPC, Shared memory

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

This thesis presents a novel program parallelization technique that combines dynamic and static scheduling. It utilizes a problem-specific pattern developed from prior knowledge of the targeted problem abstraction. Suitable for solving complex parallelization problems such as memory-constrained, data-intensive all-to-all comparison, the technique delivers more robust and faster task scheduling than state-of-the-art techniques. The technique achieves good performance in data-intensive bioinformatics applications.
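As a hedged illustration of the problem class the thesis targets (not its scheduling algorithm), the sketch below tiles a memory-constrained all-to-all comparison into block pairs so that only two blocks of data need be resident at once, then dispatches the block pairs to a process pool with a simple static schedule; load_block and compare are hypothetical stand-ins.

```python
# Hypothetical sketch: tiled all-to-all comparison under a memory budget.
# Items are loaded in blocks so at most two blocks are in memory at once;
# block pairs go to a process pool (a simple static schedule, not the
# thesis's hybrid static/dynamic scheduler).
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations_with_replacement

def load_block(block_id, block_size):
    """Stand-in loader: returns the items of one block (e.g. sequences read from disk)."""
    return [f"item-{block_id * block_size + i}" for i in range(block_size)]

def compare(a, b):
    """Stand-in pairwise comparison (e.g. an alignment score)."""
    return abs(len(a) - len(b))

def compare_block_pair(args):
    bi, bj, block_size = args
    left, right = load_block(bi, block_size), load_block(bj, block_size)
    results = {}
    for i, a in enumerate(left):
        for j, b in enumerate(right):
            if bi != bj or i <= j:          # upper triangle only
                results[(bi * block_size + i, bj * block_size + j)] = compare(a, b)
    return results

if __name__ == "__main__":
    n_blocks, block_size = 4, 8             # chosen so two blocks fit in memory
    tasks = [(i, j, block_size)
             for i, j in combinations_with_replacement(range(n_blocks), 2)]
    scores = {}
    with ProcessPoolExecutor() as pool:
        for partial in pool.map(compare_block_pair, tasks):
            scores.update(partial)
    print(len(scores), "pairwise comparisons computed")
```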

Relevance: 100.00%

Abstract:

The Dynamic Data eXchange (DDX) is our third-generation platform for building distributed robot controllers. DDX allows a coalition of programs to share data at run-time through an efficient shared memory mechanism managed by a store. Further, stores on multiple machines can be linked by means of a global catalog, and data is moved between the stores on an as-needed basis by multicasting. Heterogeneous computer systems are handled. We describe the architecture of DDX and the standard clients we have developed that let us rapidly build complex control systems with minimal coding.
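A minimal sketch of the underlying idea, not DDX's actual API: two local programs exchange a value through operating-system shared memory. The store, global catalog and multicast transport described in the abstract are not reproduced, and the shared variable here (a robot speed setpoint) is hypothetical.

```python
# Illustrative only (not DDX's API): two local processes share one variable
# through a shared-memory value, loosely analogous to a DDX store entry.
from multiprocessing import Process, Value

def writer(speed):
    with speed.get_lock():          # guard the shared double
        speed.value = 0.75          # e.g. publish a robot speed setpoint

def reader(speed):
    with speed.get_lock():
        print("reader saw speed =", speed.value)

if __name__ == "__main__":
    speed = Value("d", 0.0)         # one double allocated in shared memory
    w = Process(target=writer, args=(speed,))
    r = Process(target=reader, args=(speed,))
    w.start(); w.join()             # write first, then read
    r.start(); r.join()
```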

Relevance: 100.00%

Abstract:

X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained by using local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions, the isotropy index and the elongation index. For such a high-resolution data set, the typical data sizes of the CT images are on the order of gigabytes to tens of gigabytes, so an extremely large number of calculations is required. To resolve this large memory problem, parallelization with OpenMP was used to optimally harness the shared-memory infrastructure of cache-coherent Non-Uniform Memory Access (ccNUMA) architecture machines such as the iVEC SGI Altix 3700Bx2 supercomputer. We see adequate visualization of the results as an important element in this first pioneering study.
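As a rough, small-scale illustration of two of the measures named above (not the authors' code, and without the OpenMP/ccNUMA parallelization), the sketch below labels pore clusters on a synthetic binary volume and computes a local porosity distribution over cubic measurement cells; scipy.ndimage.label stands in for a hand-written Hoshen-Kopelman implementation.

```python
# Hedged sketch: z-direction percolation check and local porosity
# distribution on a synthetic 64^3 binary pore volume.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
pore = rng.random((64, 64, 64)) < 0.3           # True = pore voxel, ~30% porosity

# Percolation: does any pore cluster connect the bottom and top z-faces?
labels, _ = ndimage.label(pore)                  # 6-connected clusters
spanning = np.intersect1d(labels[0][labels[0] > 0], labels[-1][labels[-1] > 0])
print("percolates in z:", spanning.size > 0)

# Local porosity distribution: porosity of non-overlapping L^3 measurement cells.
L = 16
cells = pore.reshape(64 // L, L, 64 // L, L, 64 // L, L)
local_phi = cells.mean(axis=(1, 3, 5)).ravel()   # one porosity value per cell
hist, edges = np.histogram(local_phi, bins=10, range=(0.0, 1.0), density=True)
print("mean local porosity:", local_phi.mean())
print("local porosity histogram:", np.round(hist, 2))
```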

Relevance: 100.00%

Abstract:

This paper introduces a parallel implementation of an agent-based model applied to electricity distribution grids. A fine-grained shared memory parallel implementation is presented, detailing the way the agents are grouped and executed on a multi-threaded machine, as well as the way the model is built (in a composable manner), which aids the parallelisation. Current results show a moderate speedup of 2.6, but improvements are expected by incorporating newer distributed or parallel ABM schedulers into this implementation. While domain-specific, this parallel algorithm can be applied to similarly structured ABMs (directed acyclic graphs).
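As a hedged sketch of the kind of grouping the abstract describes for DAG-structured ABMs (not the paper's scheduler), the code below groups the agents of a hypothetical feeder topology into dependency levels and steps each level with a thread pool; note that CPython threads serialise CPU-bound agent updates under the GIL, so this only shows the structure.

```python
# Illustrative grouping of DAG-structured agents into levels; all agents in a
# level are independent, so each level can be stepped in parallel.
from collections import defaultdict, deque
from concurrent.futures import ThreadPoolExecutor

# Hypothetical feeder topology: edges point from parent node to child node.
edges = [("substation", "feeder1"), ("substation", "feeder2"),
         ("feeder1", "house_a"), ("feeder1", "house_b"), ("feeder2", "house_c")]

def levels(edges):
    """Group the nodes of a DAG into dependency levels via Kahn's algorithm."""
    indeg, children, nodes = defaultdict(int), defaultdict(list), set()
    for u, v in edges:
        children[u].append(v); indeg[v] += 1; nodes.update((u, v))
    frontier = deque(n for n in nodes if indeg[n] == 0)
    out = []
    while frontier:
        level = list(frontier); out.append(level); frontier = deque()
        for u in level:
            for v in children[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    frontier.append(v)
    return out

def step_agent(name):
    return f"{name} stepped"          # stand-in for an agent's update rule

with ThreadPoolExecutor(max_workers=4) as pool:
    for level in levels(edges):       # each level acts as a barrier
        print(list(pool.map(step_agent, level)))
```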

Relevance: 30.00%

Abstract:

Software transactional memory has the potential to greatly simplify development of concurrent software, by supporting safe composition of concurrent shared-state abstractions. However, STM semantics are defined in terms of low-level reads and writes on individual memory locations, so implementations are unable to take advantage of the properties of user-defined abstractions. Consequently, the performance of transactions over some structures can be disappointing.

We present Modular Transactional Memory, our framework which allows programmers to extend STM with concurrency control algorithms tailored to the data structures they use in concurrent programs. We describe our implementation in Concurrent Haskell, and two example structures: a finite map which allows concurrent transactions to operate on disjoint sets of keys, and a non-deterministic channel which supports concurrent sources and sinks.

Our approach is based on previous work by others on boosted and open-nested transactions, with one significant development: transactions are given types which denote the concurrency control algorithms they employ. Typed transactions offer a higher level of assurance for programmers reusing transactional code, and allow more flexible abstract concurrency control.
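The framework itself is implemented in Concurrent Haskell. As a loose, lock-based analogue in Python (not STM and not the paper's API), the sketch below shows the property the finite-map example exploits: operations that touch disjoint key sets never contend, because each "transaction" acquires only the per-key locks it needs, in a global order that rules out deadlock.

```python
# Loose, lock-based analogue (not STM, not the paper's Haskell API): a map
# whose per-key locks are acquired in a global order, so transactions over
# disjoint key sets run fully in parallel and never deadlock.
import threading

class KeyLockedMap:
    def __init__(self):
        self._data = {}
        self._locks = {}
        self._meta = threading.Lock()          # guards the lock table itself

    def _lock_for(self, key):
        with self._meta:
            return self._locks.setdefault(key, threading.Lock())

    def transact(self, keys, fn):
        """Run fn(view) atomically with respect to the given key set."""
        ordered = sorted(set(keys), key=repr)  # global acquisition order
        locks = [self._lock_for(k) for k in ordered]
        for lk in locks:
            lk.acquire()
        try:
            view = {k: self._data.get(k) for k in ordered}
            fn(view)                           # fn mutates its private view
            for k in ordered:
                self._data[k] = view[k]
        finally:
            for lk in reversed(locks):
                lk.release()

m = KeyLockedMap()
m.transact(["a", "b"], lambda v: v.update(a=1, b=2))
m.transact(["c"], lambda v: v.update(c=3))     # disjoint keys: no contention
print(m._data)
```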

Relevance: 30.00%

Abstract:

The symbolic and improvisational nature of livecoding requires a shared networking framework to be flexible and extensible, while at the same time providing support for synchronisation, persistence and redundancy. Above all, the framework should be robust and available across a range of platforms. This paper proposes tuple space as a suitable framework for network communication in ensemble livecoding contexts. The role of tuple space as a concurrency framework and the associated timing aspects of the tuple space model are explored through Spaces, an implementation of tuple space for the Impromptu environment.
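A minimal tuple-space sketch in Python (not the Spaces implementation for Impromptu): out publishes a tuple, rd reads a matching tuple without removing it, in_ removes one, and None fields act as wildcards; blocking reads wait on a condition variable until a matching tuple arrives.

```python
# Minimal tuple space: publish, read and take tuples with wildcard matching.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def _find(self, pattern, remove):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                self._cond.wait()               # block until a new tuple arrives

    def rd(self, pattern):
        return self._find(pattern, remove=False)

    def in_(self, pattern):
        return self._find(pattern, remove=True)

ts = TupleSpace()
ts.out(("beat", 1, 120))                        # e.g. bar number and tempo
print(ts.rd(("beat", None, None)))              # -> ('beat', 1, 120)
```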

Relevance: 30.00%

Abstract:

The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources because, typically, only a few gene sequences can be stored in primary memory simultaneously. The standard practice in such computation is to use frequent input/output (I/O) operations; therefore, minimizing the number of these operations yields much faster run-times. This paper develops an approach for faster and scalable computing of large correlation matrices through full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory and can be applied to problems with different correlation matrix sizes. The significant performance improvement of the approach over existing approaches is demonstrated through benchmark examples.
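A hedged sketch of the general blocking idea (not the paper's algorithm): the sequence set is split into B blocks sized so that two fit in memory, each block pair is read once, and whole blocks of the correlation matrix are filled per read, so the number of reads grows with the number of blocks rather than the number of sequence pairs; read_block is a hypothetical stand-in for disk I/O.

```python
# Hedged sketch: block-pair construction of a correlation matrix with a
# bounded memory footprint and a counted number of block reads.
import numpy as np

N, LEN, B = 40, 1000, 4                  # sequences, sequence length, blocks
BS = N // B                              # sequences per block
rng = np.random.default_rng(1)
data = rng.random((N, LEN))              # stand-in for encoded gene sequences

io_reads = 0
def read_block(b):
    """Stand-in for reading one block of sequences from disk."""
    global io_reads
    io_reads += 1
    return data[b * BS:(b + 1) * BS]

corr = np.empty((N, N))
for i in range(B):
    left = read_block(i)
    corr[i*BS:(i+1)*BS, i*BS:(i+1)*BS] = np.corrcoef(left)    # diagonal block
    for j in range(i + 1, B):
        right = read_block(j)
        block = np.corrcoef(left, right)[:BS, BS:]            # left rows vs right rows
        corr[i*BS:(i+1)*BS, j*BS:(j+1)*BS] = block
        corr[j*BS:(j+1)*BS, i*BS:(i+1)*BS] = block.T
print(f"{io_reads} block reads to fill a {N}x{N} correlation matrix")
```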

Relevance: 30.00%

Abstract:

Transactive memory system (TMS) theory explains how expertise is recognized and coordinated in teams. Extending current TMS research from a group information-processing perspective, our article presents a theoretical model that considers TMS development from a social identity perspective. We discuss how two features of communication (quantity and quality) important to TMS development are linked to TMS through the group identification mechanism of a shared common team identity. Informed by social identity theory, we also differentiate between intragroup and intergroup contexts and outline how, in multidisciplinary teams, professional identification and perceived equality of status among professional subgroups have a role to play in TMS development. We provide a theoretical discussion of future research directions aimed at testing and extending our model.

Relevance: 30.00%

Abstract:

In-memory databases have become a mainstay of enterprise computing, offering significant performance and scalability boosts for online analytical and (to a lesser extent) transactional processing, as well as improved prospects for integration across different applications through an efficient shared database layer. Significant research and development has been undertaken over several years concerning the data management considerations of in-memory databases. However, limited insights are available on the impacts on applications and their supportive middleware platforms, and how these need to evolve to fully function through, and leverage, in-memory database capabilities. This paper provides a first, comprehensive exposition of how in-memory databases impact Business Process Management, as a mission-critical and exemplary model-driven integration and orchestration middleware. Through it, we argue that in-memory databases will render some prevalent uses of legacy BPM middleware obsolete, but will also open up exciting possibilities for tighter application integration, better process automation performance and some entirely new BPM capabilities such as process-based application customization. To validate the feasibility of in-memory BPM, we develop a surprisingly simple BPM runtime embedded into SAP HANA that provides BPMN-based process automation capabilities.
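As a toy illustration only (not the paper's SAP HANA-embedded runtime), the sketch below keeps process instances and their current step in an in-memory SQLite database and advances them along a fixed, hypothetical BPMN-like task sequence with plain SQL, hinting at how process state and control flow can live in the same in-memory database.

```python
# Toy in-memory "process engine": instance state lives in an in-memory SQL
# database and is advanced along a fixed task sequence with plain SQL.
import sqlite3

STEPS = ["receive_order", "check_credit", "ship", "done"]   # hypothetical process

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, step TEXT)")
db.executemany("INSERT INTO instance (step) VALUES (?)",
               [(STEPS[0],), (STEPS[0],)])                  # two running instances

def advance_all():
    """Move every unfinished instance to its next task."""
    rows = db.execute("SELECT id, step FROM instance WHERE step != 'done'").fetchall()
    for iid, step in rows:
        db.execute("UPDATE instance SET step = ? WHERE id = ?",
                   (STEPS[STEPS.index(step) + 1], iid))
    db.commit()

for _ in range(len(STEPS) - 1):
    advance_all()
print(db.execute("SELECT id, step FROM instance").fetchall())   # both at 'done'
```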

Relevance: 30.00%

Abstract:

Memory, time and metaphor are central triggers for artists in exploring and shaping their creative work. This paper examines the place of artists as ‘memory-keepers’ and ‘memory-makers’, in particular through engagement with the time-based art of site-specific performance. Naik Naik (Ascent) was a multi-site performance project in the historic setting of Melaka, Malaysia, and is partially recaptured here through the presence and voices of its collaborating artists. Distilled from moments recalled, this paper seeks to uncover the poetics of memory that emerged from the project: one steeped in metaphor rather than narrative. It elicits some of the complex and interdependent layers of experience revealed by the artists in Naik Naik: cultural, ancestral, historical, personal, instinctual and embodied memories connected to sound, smell, touch, sensation and light, in a spatiotemporal context for which site is the catalyst. The liminal nature of memory at the heart of Naik Naik provides a shared experience of past, present and future, performatively interwoven.

Relevance: 20.00%

Abstract:

Shared Services (SS) involves the convergence and streamlining of an organisation’s functions to ensure timely service delivery as effectively and efficiently as possible. As a management structure designed to promote value generation, cost savings and improved service delivery by leveraging economies of scale, the idea of SS is driven by cost reduction and improvements in the quality and efficiency of services. Current conventional wisdom is that the potential for SS is increasing due to the rising costs organisations face in changing systems, meeting business requirements, and implementing and running information systems. In addition, due to the commoditisation of large information systems such as enterprise systems, many common supporting functions across organisations are becoming more similar than not, leading to an increasing overlap in processes and fuelling the notion that organisations can derive benefits from collaborating and sharing their common services through an inter-organisational shared services (IOSS) arrangement. While there is some research on traditional SS, very little research has been done on IOSS. In particular, it is unclear what the potential drivers and inhibitors of IOSS are. As the concepts of IOSS and SS are closely related to that of outsourcing, and their distinction is sometimes blurred, the first objective of this research is to develop a clear conceptual understanding of the differences between SS and outsourcing (in motivators, arrangements, benefits, disadvantages, etc.). Based on this conceptual understanding, the second objective is to develop a decision model (the Shared Services Potential model) to aid organisations in deciding which arrangement is more appropriate for them to adopt in pursuit of process improvements for their operations. As the context of the study is universities in higher education sharing administrative services common to or across them, with the assumption that such services are homogeneous in nature, this thesis also reports on a case study. The case study involved face-to-face interviews with representatives of an Australian university to explore the potential for IOSS. Our key findings suggest that it is possible for universities to share services common across them, as most of them were using the same systems, although independently.

Relevance: 20.00%

Abstract:

For Bakhtin, it is always important to know from where one speaks. The place from which I speak is that of a person who grew up in Italy during the economic miracle (pre-1968) in a working-class family, watching film matinees on television during school holidays. All sorts of films and genres were shown: from film noir to westerns, to Jean Renoir's films, German expressionism, Italian neorealism and Italian comedy. Cinema has come to represent over time a sort of memory extension that supplements lived memory of events, and one which, especially, mediates the intersection of many cultural discourses. When later in life I moved to Australia and started teaching in film studies, my choice of a film emblematic of neorealism went naturally to Roma città aperta (hereafter Open City) by Roberto Rossellini (1945), and not to Paisan, Sciuscià or Bicycle Thieves. My choice was certainly grounded in my personal memory, especially those aspects transmitted to me by my parents, who lived through the war and maintained that Open City had truly made them cry. With a mother who voted for the Christian Democratic Party and a father who was a unionist, I thought that this was normal in Italian families and society. In the early 1960s, the Resistance still offered a narrative of suffering and redemption shared by Catholics and Communists alike. This construction of psychological realism is what I believe Open City continues to offer in time.