231 results for Software Transactional Memory (STM)
Abstract:
"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up and the role of the LGPL in an open source workflow toolkit produced in a University environment. Many ubiquitous systems have followed this route, (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." Michiel van Genuchten and Les Hatton Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that it was possible to have a system support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment
Abstract:
Objective: Alcohol-related implicit (preconscious) cognitive processes are established and unique predictors of alcohol use, but most research in this area has focused on alcohol-related implicit cognition and anxiety. This study extends this work into the area of depressed mood by testing a cognitive model that combines traditional explicit (conscious and considered) beliefs, implicit alcohol-related memory associations (AMAs), and self-reported drinking behavior. Method: Using a sample of 106 university students, depressed mood was manipulated using a musical mood induction procedure immediately prior to completion of implicit then explicit alcohol-related cognition measures. A bootstrapped two-group (weak/strong expectancies of negative affect and tension reduction) structural equation model was used to examine how mood changes and alcohol-related memory associations varied across groups. Results: Expectancies of negative affect moderated the association of depressed mood and AMAs, but there was no such association for tension reduction expectancy. Conclusion: Subtle mood changes may unconsciously trigger alcohol-related memories in vulnerable individuals. Results have implications for addressing subtle fluctuations in depressed mood among young adults at risk of alcohol problems.
Abstract:
This paper investigates virtual reality representations of performance in London’s late sixteenth-century Rose Theatre, a venue that, by means of current technology, can once again challenge perceptions of space, performance, and memory. The VR model of The Rose becomes a Camillo device in that it represents a virtual recreation of this venue in as much detail as possible and attempts to recover graphic demonstrations of the trace memories of the performance modes of the day. The VR model is based on accurate archaeological and theatre-historical records and is easy to navigate. The introduction of human figures onto The Rose’s stage via motion capture allows us to explore the relationships between space, actor and environment. The combination of venue and actors facilitates a new way of thinking about how the work of early modern playwrights can be stored and recalled. This virtual theatre is thus activated to intersect productively with contemporary studies in performance; as such, our paper provides a perspective on and embodiment of the relation between technology, memory and experience. It is, at its simplest, a useful archiving project for theatrical history, but it is directly relevant to contemporary performance practice as well. Further, it reflects upon how technology and ‘re-enactments’ of sorts mediate the way in which knowledge and experience are transferred, and even what may be considered ‘knowledge.’ Our work provides opportunities to begin addressing what such intermedial confrontations might produce for ‘remembering, experiencing, thinking and imagining.’ We contend that these confrontations will enhance live theatre performance rather than impeding or disrupting contemporary performance practice. This paper intersects with the CFP’s ‘Performing Memory’ and ‘Memory Lab’ themes. Our presentation (which includes a demonstration of the VR model and the motion capture it requires) takes the form of two closely linked papers that share a single abstract. The two papers will be given by two people, one of whom will be physically present in Utrecht, the other participating via Skype.
Abstract:
A Simulink/Matlab control system model of a heavy vehicle suspension has been developed. The aim of the exercise presented in this paper was to produce a working model of a heavy vehicle (HV) suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time-invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed validation of its output against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter, when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
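The second-order differential equation referred to above did not survive in this abstract; as an illustration only (an assumed canonical notation, not necessarily the authors' exact formulation), a second-order linear time-invariant suspension model is commonly written as:

```latex
\ddot{y}(t) + 2\zeta\omega_n\,\dot{y}(t) + \omega_n^{2}\,y(t) = \omega_n^{2}\,u(t)
```

where u(t) is the recorded input signal, y(t) the suspension output, \zeta the damping ratio and \omega_n the undamped natural frequency; fitting such a model to the empirical input/output data amounts to estimating \zeta and \omega_n.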
Abstract:
This paper considers the problem of building a software architecture for a human-robot team. The objective of the team is to build a multi-attribute map of the world by performing information fusion. A decentralized approach to information fusion is adopted to achieve the system properties of scalability and survivability. Decentralization imposes constraints on the design of the architecture and its implementation. We show how a Component-Based Software Engineering approach can address these constraints. The architecture is implemented using Orca – a component-based software framework for robotic systems. Experimental results from a deployed system comprising an unmanned air vehicle, a ground vehicle, and two human operators are presented. A section on lessons learned, which may be applicable to other distributed systems with complex algorithms, is included. We also compare Orca to the Player software framework in the context of distributed systems.
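As a rough sketch of the component-based, decentralized-fusion style of architecture described above (illustrative Python under assumed interfaces; this is not Orca's actual API, which is a C++ framework), each platform publishes observations and any peer fusion node refines its own local estimate:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Observation:
    """A single map-attribute observation with an attached uncertainty."""
    attribute: str   # e.g. "occupancy"
    value: float
    variance: float


class Component:
    """Minimal publish/subscribe component (illustrative only)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Observation], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[Observation], None]) -> None:
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, obs: Observation) -> None:
        for handler in self._subscribers.get(topic, []):
            handler(obs)


class FusionNode(Component):
    """Decentralised fusion: each node keeps and refines its own estimate,
    so no central fusion server is required."""

    def __init__(self) -> None:
        super().__init__()
        self.estimate = 0.0
        self.variance = 1e6  # start effectively uninformed

    def on_observation(self, obs: Observation) -> None:
        # Standard scalar Kalman-style update.
        gain = self.variance / (self.variance + obs.variance)
        self.estimate += gain * (obs.value - self.estimate)
        self.variance *= (1.0 - gain)


if __name__ == "__main__":
    uav, ugv = Component(), Component()   # hypothetical sensor platforms
    node = FusionNode()                   # one of many peer fusion nodes
    uav.subscribe("obs", node.on_observation)
    ugv.subscribe("obs", node.on_observation)
    uav.publish("obs", Observation("occupancy", 0.8, 0.2))
    ugv.publish("obs", Observation("occupancy", 0.6, 0.1))
    print(node.estimate, node.variance)
```

Because every node applies the update locally, the sketch has no central server, which is the property that gives a decentralized architecture its scalability and survivability.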
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
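A minimal sketch of the Rank Sum idea as described above (per-feature bin densities for each class, with classification by summing density ranks over features) might look as follows; the binning scheme and tie handling here are assumptions, not necessarily the thesis's exact procedure:

```python
import numpy as np
from scipy.stats import rankdata


class RankSumClassifier:
    """Illustrative Rank Sum classifier: for each feature, estimate per-class
    bin densities; a test point scores, per feature, the rank of each class's
    density in the bin it falls into, and the class with the largest rank sum
    wins."""

    def __init__(self, n_bins: int = 10):
        self.n_bins = n_bins

    def fit(self, X: np.ndarray, y: np.ndarray):
        self.classes_ = np.unique(y)
        self.edges_, self.ranks_ = [], []
        for j in range(X.shape[1]):
            edges = np.histogram_bin_edges(X[:, j], bins=self.n_bins)
            # density of each class in each bin, shape (n_classes, n_bins)
            dens = np.stack([
                np.histogram(X[y == c, j], bins=edges, density=True)[0]
                for c in self.classes_
            ])
            # rank the classes within each bin (higher density -> higher rank)
            ranks = np.apply_along_axis(rankdata, 0, dens)
            self.edges_.append(edges)
            self.ranks_.append(ranks)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        scores = np.zeros((X.shape[0], len(self.classes_)))
        for j, (edges, ranks) in enumerate(zip(self.edges_, self.ranks_)):
            bins = np.clip(np.digitize(X[:, j], edges) - 1, 0, self.n_bins - 1)
            scores += ranks[:, bins].T   # rank of each class for each sample
        return self.classes_[np.argmax(scores, axis=1)]
```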
Abstract:
The number of software vendors offering ‘Software-as-a-Service’ has been increasing in recent years. In the Software-as-a-Service model, software is operated by the software vendor and delivered to the customer as a service. Existing business models and industry structures are challenged by the changes to the deployment and pricing model compared to traditional software. However, the full implications for the way companies create, deliver and capture value have not yet been sufficiently analyzed. Current research is scattered across specific aspects; only a few studies provide a more holistic view of the impact from a business model perspective. For vendors it is, however, crucial to be aware of the potentially far-reaching consequences of Software-as-a-Service. Therefore, a literature review and three exploratory case studies of leading software vendors are used to evaluate possible implications of Software-as-a-Service on business models. The results show an impact on all business model building blocks and highlight in particular the often less articulated impact on key activities, customer relationships and key partnerships for leading software vendors, and show related challenges, for example with regard to the integration of development and operations processes. The observed implications demonstrate the disruptive character of the concept and identify future research requirements.
Abstract:
In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty and interpret ‘desirable’ as a reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds — for hypotheses and for data — is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
Abstract:
Corporate sponsorship of events contributes significantly to marketing aims, including brand awareness as measured by recall and recognition of sponsor‐event pairings. Unfortunately, resultant advantages accrue disproportionately to brands having a natural or congruent fit with the available sponsorship properties. In three cued‐recall experiments, the effect of articulation of sponsorship fit on memory for sponsor‐event pairings is examined. While congruent sponsors have a natural memory advantage, results demonstrate that memory improvements via articulation are possible for incongruent sponsor‐event pairings. These improvements are, however, affected by the presence of competitor brands and the way in which memory is accessed.
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼10^4–10^5 particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich, GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼10^2), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
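As an illustration of the cross-correlation approach to particle picking (a generic sketch built on standard image-processing routines, not the SwarmPS implementation itself), candidate particle coordinates can be obtained by normalised template matching followed by peak detection:

```python
import numpy as np
from skimage.feature import match_template, peak_local_max


def pick_particles(micrograph: np.ndarray, template: np.ndarray,
                   threshold: float = 0.4, min_separation: int = 20):
    """Return (row, col) coordinates of candidate particles.

    micrograph     : 2-D array, the electron micrograph.
    template       : 2-D array, a reference projection or disc roughly the
                     size of the particle.
    threshold      : minimum normalised cross-correlation score to accept.
    min_separation : minimum distance in pixels between picks, so the same
                     particle is not boxed twice.
    """
    # Normalised cross-correlation; pad_input=True keeps the output the same
    # size as the micrograph so peak coordinates map back directly.
    ncc = match_template(micrograph, template, pad_input=True)

    # Local maxima above the score threshold become particle candidates.
    peaks = peak_local_max(ncc, min_distance=min_separation,
                           threshold_abs=threshold)
    return peaks  # shape (n_particles, 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.normal(size=(512, 512))     # stand-in for a micrograph
    disc = np.ones((15, 15))                # crude particle template
    print(pick_particles(image, disc).shape)
```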
Abstract:
Teachers are under increasing pressure from government and school management to incorporate technology into lessons. They need to consider which technologies can most effectively enhance subject learning, encourage higher order thinking skills and support the performance of authentic tasks. This chapter reviews the practical and theoretical tools that have been developed to aid teachers in selecting software and reviews the software assessment methodologies from the 1980s to the present day. It concludes that teachers need guidance to structure the evaluation of technology, to consider its educational affordances, its usability, its suitability for the students and the classroom environment and its fit to the teachers’ preferred pedagogies.
Abstract:
Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. The spreading activation, spooky-action-at-a-distance and entanglement models have all been used to model the activation of a word. Recently a hypothesis was put forward that the mean activation levels of the respective models are ordered as follows: Spreading activation ≤ Entanglement ≤ Spooky-action-at-a-distance. This article investigates this hypothesis by means of a substantial empirical analysis of each model using the University of South Florida word association, rhyme and word norms.
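For concreteness, a minimal sketch of how an activation level might be computed under a simple spreading activation rule over free-association norms follows; the two-step formulation and the toy matrix are assumptions for illustration, not the article's exact models:

```python
import numpy as np


def spreading_activation(strength: np.ndarray, cue: int, target: int) -> float:
    """Activation of `target` given `cue` under a simple spreading rule:
    direct associative strength plus strength mediated by one intermediate
    associate (a common two-step formulation, assumed here for illustration).

    strength[i, j] is the free-association strength from word i to word j,
    e.g. estimated from the University of South Florida norms.
    """
    direct = strength[cue, target]
    mediated = sum(strength[cue, k] * strength[k, target]
                   for k in range(strength.shape[0]) if k not in (cue, target))
    return direct + mediated


if __name__ == "__main__":
    # Toy 4-word association matrix (rows: cues, columns: targets).
    S = np.array([[0.0, 0.3, 0.2, 0.0],
                  [0.1, 0.0, 0.4, 0.2],
                  [0.0, 0.5, 0.0, 0.1],
                  [0.2, 0.0, 0.3, 0.0]])
    print(spreading_activation(S, cue=0, target=3))  # 0.3*0.2 + 0.2*0.1 = 0.08
```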
Abstract:
Self-segregation and compartmentalisation are observed experimentally to occur spontaneously on live membranes as well as reconstructed model membranes. It is believed that many of these processes are caused or supported by anomalous diffusive behaviours of biomolecules on membranes due to the complex and heterogeneous nature of these environments. These phenomena are on the one hand of great interest in biology, since they may be an important way for biological systems to selectively localize receptors, regulate signaling or modulate kinetics; and on the other, they provide an inspiration for engineering designs that mimic natural systems. We present an interactive software package we are developing for the purpose of simulating such processes numerically using a fundamental Monte Carlo approach. This program includes the ability to simulate kinetics and mass transport in the presence of either mobile or immobile obstacles and other relevant structures such as liquid-ordered lipid microdomains. We also present preliminary simulation results regarding the selective spatial localization and chemical-kinetics-modulating power of immobile obstacles on the membrane, obtained using the program.
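A minimal sketch of the kind of lattice Monte Carlo simulation described above (a random walk among immobile obstacles; the lattice representation and parameters are assumptions for illustration, not the package's actual implementation):

```python
import numpy as np


def simulate_diffusion(size: int = 100, n_particles: int = 50,
                       obstacle_fraction: float = 0.2, steps: int = 1000,
                       seed: int = 0) -> np.ndarray:
    """Random walk of point particles on a periodic 2-D lattice in which a
    fraction of sites is blocked by immobile obstacles. Returns the mean
    squared displacement (MSD) at every step; sub-linear growth of the MSD at
    high obstacle density is the signature of anomalous (hindered) diffusion."""
    rng = np.random.default_rng(seed)
    blocked = rng.random((size, size)) < obstacle_fraction

    # Start particles on free (unblocked) sites.
    free = np.argwhere(~blocked)
    pos = free[rng.choice(len(free), n_particles)]

    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    msd = np.zeros(steps)
    disp = np.zeros_like(pos)  # unwrapped displacement, unaffected by wrapping

    for t in range(steps):
        step = moves[rng.integers(4, size=n_particles)]
        trial = (pos + step) % size
        ok = ~blocked[trial[:, 0], trial[:, 1]]   # reject moves onto obstacles
        pos[ok] = trial[ok]
        disp[ok] += step[ok]
        msd[t] = np.mean(np.sum(disp ** 2, axis=1))
    return msd


if __name__ == "__main__":
    print(simulate_diffusion(steps=200)[-1])
```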