19 results for SOFTWARE APPLICATIONS
in Aston University Research Archive
Abstract:
Higher education institutions are increasingly using social software tools to support teaching and learning. Although social software is often used in a social context, these applications can contribute significantly to a student's educational experience. However, as the social software domain comprises a considerable diversity of tools, the respective tools can be expected to differ in how they contribute to teaching and learning. In this review of the educational use of social software, we systematically analyze and compare the diverse social software tools and identify their contributions to teaching and learning. By integrating established learning theory and the extant literature on the individual social software applications, we seek to contribute to a theoretical foundation for social software use and the choice of tools. Case vignettes from several UK higher education institutions are used to illustrate the different applications of social software tools in teaching and learning.
Abstract:
Developers of interactive software are confronted by an increasing variety of software tools to help engineer the interactive aspects of software applications. Not only do these tools fall into different categories in terms of functionality, but within each category there is a growing number of competing tools with similar, although not identical, features. Choice of user interface development tool (UIDT) is therefore becoming increasingly complex.
Abstract:
Research on large firms suggests that dedicated customer relationship management (CRM) software applications play a critical role in creating and sustaining customer relationships. CRM is also of strategic importance to small and medium-sized enterprises (SMEs), but most of them do not employ dedicated CRM software. Instead they use generic Internet-based technologies to manage customer relationships with electronic CRM (eCRM). There has been little research on the extent to which the use of generic Internet technologies contributes to SME performance. The present study fills the gap, building upon the literature on organizational capabilities, marketing, and SMEs to develop a research model with which to explore the relationships between generic Internet technologies, eCRM capabilities, and the resulting performance benefits in the SME context. A survey across 286 SMEs in Ireland finds strong empirical evidence in support of the hypotheses regarding these benefits. The study contributes to managerial decision making by showing how SMEs can use generic Internet technologies to advance their customer relationships and contributes to theory development by conceptualizing eCRM capabilities in an SME context.
Abstract:
Developers of interactive software are confronted by an increasing variety of software tools to help engineer the interactive aspects of software applications. Typically resorting to ad hoc means of tool selection, developers are often dissatisfied with their chosen tool because it lacks required functionality or does not fit seamlessly within the context in which it is to be used. This paper describes a system for evaluating the suitability of user interface development tools for use in software development organisations and projects, such that the selected tool appears 'invisible' within its anticipated context of use. The paper also outlines and presents the results of an informal empirical study and a series of observational case studies of the system.
Abstract:
Dimensional and form inspections are key to the manufacturing and assembly of products. Product verification can involve a number of different measuring instruments operated using their dedicated software. Typically, each of these instruments with its associated software is more suitable than others for the verification of a pre-specified quality characteristic of the product. The number of different systems and software applications needed to perform a complete measurement of products and assemblies within a manufacturing organisation is therefore expected to be large, and it becomes even larger as advances in measurement technologies are made. The idea of a universal software application for any instrument still appears to be only a theoretical possibility, so a need for information integration is apparent. In this paper, the design of an information system to consistently manage (store, search, retrieve, secure) measurement results from various instruments and software applications is introduced. The proposed system rests on two main ideas. First, the structures and formats of measurement files are abstracted from the data, so that complexity and incompatibility between different approaches to measurement data modelling are avoided. Second, the information within a file is enriched with meta-information to facilitate its consistent storage and retrieval. To demonstrate the designed information system, a web application is implemented. © Springer-Verlag Berlin Heidelberg 2010.
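By way of illustration only (the paper does not publish its schema), the fragment below sketches the second idea: an otherwise opaque measurement file is wrapped with meta-information so it can be stored and retrieved consistently, regardless of the instrument's native format. All names and fields are hypothetical.

```python
# Illustrative sketch only, not the authors' implementation.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def enrich_measurement_file(path: str, instrument: str, characteristic: str) -> dict:
    """Attach meta-information to a measurement file without parsing its native format."""
    raw = Path(path).read_bytes()
    return {
        "meta": {
            "instrument": instrument,              # e.g. a CMM or a laser scanner
            "characteristic": characteristic,      # quality characteristic being verified
            "original_format": Path(path).suffix,  # file structure stays opaque to the store
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "checksum": hashlib.sha256(raw).hexdigest(),
        },
        "payload_path": str(Path(path).resolve()),  # raw data kept as-is, by reference
    }

if __name__ == "__main__":
    Path("part_42_flatness.dat").write_text("0.012 0.009 0.015\n")  # dummy payload
    record = enrich_measurement_file("part_42_flatness.dat", "CMM", "flatness")
    print(json.dumps(record["meta"], indent=2))
```

Because the payload is never interpreted, the same storage and retrieval path works for any instrument's file format; only the meta-information is queried.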
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
Abstract:
The inclusion of high-level scripting functionality in state-of-the-art rendering APIs indicates a movement toward data-driven methodologies for structuring next generation rendering pipelines. A similar theme can be seen in the use of composition languages to deploy component software using selection and configuration of collaborating component implementations. In this paper we introduce the Fluid framework, which places particular emphasis on the use of high-level data manipulations in order to develop component based software that is flexible, extensible, and expressive. We introduce a data-driven, object oriented programming methodology to component based software development, and demonstrate how a rendering system with a similar focus on abstract manipulations can be incorporated, in order to develop a visualization application for geospatial data. In particular we describe a novel SAS script integration layer that provides access to vertex and fragment programs, producing a very controllable, responsive rendering system. The proposed system is very similar to developments speculatively planned for DirectX 10, but uses open standards and has cross platform applicability. © The Eurographics Association 2007.
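As a rough, hypothetical sketch of the composition-by-data idea described above (not the Fluid framework's actual API), the fragment below selects and configures collaborating component implementations purely from a data description; the roles, implementations and configuration keys are invented for illustration.

```python
# Illustrative sketch of data-driven component composition; all names are hypothetical.
from typing import Callable, Dict

# Registry of competing component implementations, keyed by role and implementation name.
REGISTRY: Dict[str, Dict[str, Callable[..., object]]] = {
    "renderer": {
        "wireframe": lambda **cfg: f"Wireframe(line_width={cfg.get('line_width', 1)})",
        "shaded":    lambda **cfg: f"Shaded(shader={cfg.get('shader', 'phong')})",
    },
    "data_source": {
        "geotiff": lambda **cfg: f"GeoTIFF(path={cfg['path']!r})",
    },
}

def compose(description: dict) -> dict:
    """Instantiate components purely from a data description (selection + configuration)."""
    return {
        role: REGISTRY[role][spec["impl"]](**spec.get("config", {}))
        for role, spec in description.items()
    }

if __name__ == "__main__":
    app_description = {  # could equally be loaded from an XML/JSON content asset
        "data_source": {"impl": "geotiff", "config": {"path": "terrain.tif"}},
        "renderer": {"impl": "shaded", "config": {"shader": "fragment_hillshade"}},
    }
    print(compose(app_description))
```

The point of the sketch is that changing the application means editing the description data, not the component code, which is the flexibility the paper attributes to high-level data manipulations.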
Abstract:
Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques in use were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson Systems Development (JSD). As current cost estimating methods do not take account of these developments, their non-universality means they cannot provide adequate estimates of effort and hence cost. In order to address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other method, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. The method uses counts of various attributes of a JSD specification to develop a metric which provides an indication of the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and utilising this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
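The general shape of the top-down calculation (counts of specification attributes combined into a size metric, then converted to effort using historical productivity) can be sketched as follows. The attribute names, weights and project figures below are invented for illustration and are not the calibrated JSD-FPA values.

```python
# Illustrative sketch of counts -> size metric -> effort via historical productivity.
def size_metric(counts: dict[str, int], weights: dict[str, float]) -> float:
    """Combine counts of specification attributes into a single size figure."""
    return sum(weights[attr] * n for attr, n in counts.items())

def estimate_effort(size: float, past_sizes: list[float], past_efforts: list[float]) -> float:
    """Productivity is historical size delivered per unit of effort (e.g. per person-month)."""
    productivity = sum(past_sizes) / sum(past_efforts)
    return size / productivity

if __name__ == "__main__":
    counts = {"entities": 12, "actions": 30, "interfaces": 8}       # hypothetical counts
    weights = {"entities": 4.0, "actions": 1.5, "interfaces": 6.0}  # hypothetical weights
    size = size_metric(counts, weights)
    effort = estimate_effort(size, past_sizes=[120, 95, 140], past_efforts=[10, 8, 13])
    print(f"size = {size:.1f}, estimated effort = {effort:.1f} person-months")
```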
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment in the main is philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because, as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by the minimal provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which probably results in reducing the number of degrees of freedom of its user. Fourth, it increases the dependence of its user upon its supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed which is consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments: first, the user characteristics and the user's decision-making practice and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding support, control and operation of the package. The framework is built gradually as the discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute the nucleus around which packages can not only be designed more effectively, but which is also applicable to the design of many man-machine systems.
Abstract:
A small lathe has been modified to work under microprocessor control in order to enhance the facilities which the lathe offers and to provide a wider operating range with the associated economic gains. These modifications give the machine better operating characteristics. A system of electronic circuits has been developed, utilising the latest technology, to replace the pegboard and its associated obsolete electrical components. Software for the system includes control programs implementing the original pegboard operation, and several sample machine code programs are included, covering a wide spectrum of applications, including diagnostic testing of the control system. It is concluded that it is possible to carry out a low-cost retrofit on existing machine tools to enhance their range of capabilities.
Abstract:
This thesis describes an investigation into methods for controlling the mode distribution in multimode optical fibres. The major contributions presented in this thesis are summarised below. Emerging standards for Gigabit Ethernet transmission over multimode optical fibre have led to a resurgence of interest in the precise control, and specification, of modal launch conditions. In particular, commercial LED and OTDR test equipment does not, in general, comply with these standards. There is therefore a need for mode control devices, which can ensure compliance with the standards. A novel device consisting of a point-load mode-scrambler in tandem with a mode-filter is described in this thesis. The device, which has been patented, may be tuned to achieve a wide range of mode distributions and has been implemented in a ruggedised package for field use. Various other techniques for mode control have been described in this work, including the use of Long Period Gratings and air-gap mode-filters. Some of the methods have been applied to other applications, such as speckle suppression and in sensor technology. A novel, self-referencing, sensor comprising two modal groups in the Mode Power Distribution has been designed and tested. The feasibility of a two-channel Mode Group Diversity Multiplexed system has been demonstrated over 985m. A test apparatus for measuring mode distribution has been designed and constructed. The apparatus consists of a purpose-built video microscope, and comprehensive control and analysis software written in Visual Basic. The system may be fitted with a Silicon camera or an InGaAs camera, for measurement in the 850nm and 1300nm transmission windows respectively. A limitation of the measurement method, when applied to well-filled fibres, has been identified and an improvement to the method has been proposed, based on modelled Laguerre Gauss field solutions.
Abstract:
OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extensible and easy-to-maintain clinical CDSS for point-of-care support. The proposed solution was evaluated in a proof-of-concept implementation. METHODS: Based on our earlier research on the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent the essential components of a CDSS. Models of clinical decision problems were derived from the ontology and processed into executable applications at runtime. This allowed the applications' functionality to be scaled to the capabilities of the computing platforms. A prototype of the system was implemented using an extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited-connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated the development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of CDSS heterogeneity was satisfied by the ontology-driven design. Processing application models described with the help of ontological models made it possible to run a complex system on multiple computing platforms with different capabilities. Finally, the separation of models and runtime components contributed to the improved extensibility and maintainability of the system.
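A highly simplified sketch of the scaling idea described in the methods (not the authors' implementation; the model contents and capability scale below are hypothetical): a decision-problem model, conceptually derived from the ontology, is filtered at runtime against the capabilities of the target platform.

```python
# Illustrative sketch only: scaling an application model to platform capabilities.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    min_capability: int  # hypothetical scale: 1 = handheld, 2 = desktop

TRIAGE_MODEL = [  # stands in for a model generated from the ontology
    Feature("collect_pain_score", 1),
    Feature("suggest_triage_level", 1),
    Feature("display_full_guideline_text", 2),
    Feature("plot_vital_sign_trends", 2),
]

def build_application(model: list, platform_capability: int) -> list:
    """Keep only the model features the current platform can support."""
    return [f.name for f in model if f.min_capability <= platform_capability]

if __name__ == "__main__":
    print("handheld:", build_application(TRIAGE_MODEL, platform_capability=1))
    print("desktop :", build_application(TRIAGE_MODEL, platform_capability=2))
```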
Abstract:
Contemporary software systems are becoming increasingly large, heterogeneous, and decentralised. They operate in dynamic environments, and their architectures exhibit complex trade-offs across the dimensions of goals, time, and interaction, which emerge internally from the systems and externally from their environment. This gives rise to the vision of self-aware architecture, in which design decisions and execution strategies for these concerns are dynamically analysed and seamlessly managed at run-time. Drawing on the concept of self-awareness from psychology, this paper extends the foundation of software architecture styles for self-adaptive systems to arrive at a new principled approach for architecting self-aware systems. We demonstrate the added value and applicability of the approach in the context of service provisioning to cloud-reliant service-based applications.
Abstract:
Traditionally, research on model-driven engineering (MDE) has mainly focused on the use of models at the design, implementation, and verification stages of development. This work has produced relatively mature techniques and tools that are currently being used in industry and academia. However, software models also have the potential to be used at runtime, to monitor and verify particular aspects of runtime behavior, and to implement self-* capabilities (e.g., adaptation technologies used in self-healing, self-managing, self-optimizing systems). A key benefit of using models at runtime is that they can provide a richer semantic base for runtime decision-making related to runtime system concerns associated with autonomic and adaptive systems. This book is one of the outcomes of the Dagstuhl Seminar 11481 on models@run.time held in November/December 2011, discussing foundations, techniques, mechanisms, state of the art, research challenges, and applications for the use of runtime models. The book comprises four research roadmaps, written by the original participants of the Dagstuhl Seminar over the course of two years following the seminar, and seven research papers from experts in the area. The roadmap papers provide insights to key features of the use of runtime models and identify the following research challenges: the need for a reference architecture, uncertainty tackled by runtime models, mechanisms for leveraging runtime models for self-adaptive software, and the use of models at runtime to address assurance for self-adaptive systems.
Abstract:
This chapter provides the theoretical foundation and background on the Data Envelopment Analysis (DEA) method, together with some variants of the basic DEA models and applications to various sectors. Some illustrative examples and helpful resources on DEA, including DEA software packages, are also presented in this chapter. DEA is useful for measuring relative efficiency for a variety of institutions and has its own merits and limitations. The chapter concludes that DEA results should be interpreted with caution to avoid giving wrong signals and providing inappropriate recommendations.
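For reference, the relative-efficiency measure at the heart of basic DEA is usually formulated as the CCR ratio model below (a standard textbook formulation, not reproduced from the chapter): the efficiency of decision-making unit o, with m inputs x_{io} and s outputs y_{ro}, is the best achievable ratio of weighted outputs to weighted inputs, with the weights chosen as favourably as possible for that unit.

```latex
\[
  \theta_o^{*} \;=\; \max_{u,\,v}\;
    \frac{\sum_{r=1}^{s} u_r\, y_{ro}}{\sum_{i=1}^{m} v_i\, x_{io}}
  \qquad \text{subject to} \qquad
  \frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \;\le\; 1
  \quad (j = 1,\dots,n), \qquad u_r,\, v_i \;\ge\; 0 .
\]
```

A unit is rated relatively efficient when the optimal value equals 1; in practice the ratio form is linearised and solved as a linear programme for each unit, which is what the DEA software packages mentioned above automate.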