483 results for SOFTWARE APPLICATIONS
at Queensland University of Technology - ePrints Archive
Abstract:
We do not commonly associate software engineering with philosophical debate. Indeed, software engineers ought to be concerned with building software systems, not with settling philosophical questions. I attempt to show that software engineers do, in fact, take philosophical sides when designing software applications. In particular, I look at how the problem of vagueness arises in software engineering and argue that when software engineers solve it, they commit to philosophical views of which they are seldom aware. In the second part of the paper, I suggest a way of dealing with vague predicates without having to confront the problem of vagueness itself. The purpose of my paper is to highlight the currently prevalent disconnect between philosophy and software engineering. I claim that a better knowledge of the philosophical debate is important, as it can have ramifications for crucial software design decisions. Better awareness of philosophical issues not only produces better software engineers but also better-engineered products.
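For illustration only (this example is not taken from the paper), consider how a vague predicate such as "tall" must be made precise before it can be implemented. The Python sketch below, in which the 180 cm cutoff and the membership function are arbitrary assumptions, shows two common ways of doing so: a sharp threshold, roughly in the spirit of bivalent (epistemicist-style) treatments of vagueness, and a graded membership value, roughly in the spirit of degree-theoretic (fuzzy) treatments.

```python
# Illustrative only: two ways a developer might operationalise the vague
# predicate "tall". The thresholds are arbitrary assumptions, not from the paper.

def is_tall_sharp(height_cm: float) -> bool:
    # Sharp cutoff: every height is classified one way or the other,
    # echoing a bivalent treatment of vagueness.
    return height_cm >= 180.0

def is_tall_degree(height_cm: float) -> float:
    # Graded membership between 0 and 1, echoing a degree-theoretic
    # (fuzzy) treatment: borderline cases receive intermediate values.
    if height_cm <= 170.0:
        return 0.0
    if height_cm >= 190.0:
        return 1.0
    return (height_cm - 170.0) / 20.0

print(is_tall_sharp(179.5))   # False: the cutoff decides the borderline case
print(is_tall_degree(179.5))  # 0.475: the borderline case stays borderline
```

Whichever option is chosen, the borderline case is handled by a design decision that quietly takes a side in the philosophical debate.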
Abstract:
The role of sustainability in urban design is becoming increasingly important as Australia’s cities continue to grow, putting pressure on existing infrastructure such as water, energy and transport. To optimise an urban design, many different aspects, such as water, energy, transport and costs, need to be taken into account in an integrated way. Integrated software applications that assess urban designs on a wide variety of aspects are scarce. With the upcoming next generation of the Internet, often referred to as the Semantic Web, data can become more machine-interpretable through the development of ontologies that can support the development of integrated software systems. Software systems can use these ontologies to perform intelligent tasks such as assessing an urban design on a particular aspect. When the ontologies of different applications are aligned, the applications can share information, resulting in interoperability. Inference, such as compliance checking and classification, can support aligning the ontologies. A proof-of-concept implementation has been made to demonstrate and validate the usefulness of machine-interpretable ontologies for urban designs.
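As a rough illustration of the kind of machine-interpretable assessment the abstract describes (not the proof-of-concept system itself), the following Python sketch uses the open-source rdflib library to query a tiny, hypothetical urban-design ontology; the ex: namespace, the properties and the 200 L/day threshold are all invented for illustration.

```python
# A minimal sketch, not the paper's proof of concept. The ex: namespace,
# properties and the compliance threshold are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/urban-design#")

g = Graph()
g.bind("ex", EX)

# Two hypothetical dwellings with an estimated daily water demand.
g.add((EX.dwelling1, RDF.type, EX.Dwelling))
g.add((EX.dwelling1, EX.waterDemandLitresPerDay, Literal(150)))
g.add((EX.dwelling2, RDF.type, EX.Dwelling))
g.add((EX.dwelling2, EX.waterDemandLitresPerDay, Literal(240)))

# A simple "compliance check": flag dwellings above an assumed 200 L/day cap.
query = """
PREFIX ex: <http://example.org/urban-design#>
SELECT ?dwelling ?demand WHERE {
    ?dwelling a ex:Dwelling ;
              ex:waterDemandLitresPerDay ?demand .
    FILTER (?demand > 200)
}
"""
for dwelling, demand in g.query(query):
    print(f"Non-compliant: {dwelling} uses {demand} L/day")
```

In a full system of the kind described, such checks would run over aligned ontologies shared by several assessment applications rather than over a single hand-built graph.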
Abstract:
“SOH see significant benefit in digitising its drawings and operation and maintenance manuals. Since SOH do not currently have digital models of the Opera House structure or other components, there is an opportunity for this national case study to promote the application of Digital Facility Modelling using standardized Building Information Models (BIM)”. The digital modelling element of this project examined the potential of building information models for Facility Management (FM), focusing on the following areas:
• The re-usability of building information for FM purposes
• BIM as an integrated information model for facility management
• Extendibility of the BIM to cope with business-specific requirements
• Commercial facility management software using standardised building information models
• The ability to add (organisation-specific) intelligence to the model
• A roadmap for SOH to adopt BIM for FM
The project has established that BIM – building information modelling – is an appropriate and potentially beneficial technology for the storage of integrated building, maintenance and management data for SOH. Based on the attributes of a BIM, several advantages can be envisioned: consistency in the data, intelligence in the model, multiple representations, and a source of information for intelligent programs and intelligent queries. The IFC specification – the open building exchange standard – provides comprehensive support for asset and facility management functions, and offers new management, collaboration and procurement relationships based on sharing of intelligent building data. The major advantages of using an open standard are: information can be read and manipulated by any compliant software; reduced user “lock-in” to proprietary solutions; third-party software can be the “best of breed” to suit the process and scope at hand; standardised BIM solutions consider the wider implications of information exchange outside the scope of any particular vendor; information can be archived as ASCII files; and data quality can be enhanced, as a single source of users’ information has improved accuracy, correctness, currency, completeness and relevance. SOH’s current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. There have been remarkably few technical difficulties in converting the House’s existing conventions and standards to the new model-based environment. This demonstrates that the IFC model represents world practice for building data representation and management (see Sydney Opera House – FM Exemplar Project Report Number 2005-001-C-3, Open Specification for BIM: Sydney Opera House Case Study). Availability of FM applications based on BIM is in its infancy, but focussed systems are already in operation internationally and show excellent prospects for implementation at SOH. In addition to the generic benefits of standardised BIM described above, the following FM-specific advantages can be expected from this new integrated facilities management environment: faster and more effective processes, controlled whole-life costs and environmental data, better customer service, a common operational picture for current and strategic planning, visual decision-making and a total ownership cost model.
Tests with partial BIM data – provided by several of SOH’s current consultants – show that the creation of a complete SOH model is realistic, but subject to resolution of compliance, and to detailed functional support, by participating software applications. The showcase has successfully demonstrated that IFC-based exchange is possible with several common BIM-based applications through the creation of a new partial model of the building. The data exchanged has been geometrically accurate (the SOH building structure includes some of the most complex building elements) and supports rich information describing the types of objects, together with their properties and relationships.
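As an illustrative sketch (not part of the SOH project itself), the snippet below shows how an IFC model exchanged between applications can be read programmatically in Python, assuming the open-source ifcopenshell library; the file name "model.ifc" is a placeholder.

```python
# A minimal sketch of reading an IFC (open BIM exchange standard) model,
# assuming the open-source ifcopenshell library; "model.ifc" is a placeholder.
import ifcopenshell

model = ifcopenshell.open("model.ifc")

# Walk the walls in the model and print identity and property-set data,
# the kind of object, property and relationship information the report describes.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)
    for rel in getattr(wall, "IsDefinedBy", []):
        if not rel.is_a("IfcRelDefinesByProperties"):
            continue
        definition = rel.RelatingPropertyDefinition
        if definition.is_a("IfcPropertySet"):
            for prop in definition.HasProperties:
                if prop.is_a("IfcPropertySingleValue") and prop.NominalValue:
                    print(" ", prop.Name, "=", prop.NominalValue.wrappedValue)
```

Because IFC is an open standard, the same file can be read by any compliant tool, which is the "reduced lock-in" benefit noted above.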
Abstract:
This paper describes an approach to introducing fraction concepts using generic software tools, such as Microsoft Office's PowerPoint, to create "virtual" materials for mathematics teaching and learning. This approach replicates existing concrete materials and integrates virtual materials with current non-computer methods of teaching primary students about fractions. The paper reports a case study of a 12-year-old student, Frank, who had an extremely limited understanding of fractions. Frank also lacked motivation for learning mathematics in general and interacted with his peers in a negative way during mathematics lessons. In just one classroom session involving the seamless integration of off-computer and on-computer activities, Frank acquired a basic understanding of simple common equivalent fractions. Further, as the session progressed, he was observed to be an enthusiastic learner who offered to share his learning with his peers. The study's "virtual replication" approach for fractions involves the manipulation of concrete materials (folding paper regions) alongside the manipulation of their virtual equivalents (shading screen regions). As researchers have pointed out, the emergence of new technologies does not mean old technologies become redundant. Learning technologies have not replaced print and oral language or basic mathematical understanding. Instead, they are modifying, reshaping, and blending the ways in which humankind speaks, reads, writes, and works mathematically. Constructivist theories of learning and teaching argue that mathematics understanding develops from concrete to pictorial to abstract and that, ultimately, mathematics learning and teaching are about the refinement and expression of ideas and concepts. Therefore, by seamlessly integrating the use of concrete materials and virtual materials generated by computer software applications, an opportunity arises to enhance the teaching and learning value of both.
Abstract:
Understanding the future development of interaction design as it applies to learning and training scenarios is crucial to the effective development of curriculum and the appropriate application of social and mobile communication technologies. As Attewell and Saville-Smith (2004) have recognised, the use of mobile communication devices for improved literacy and numeracy is a desirable prospect among young people of typical undergraduate age. Further, with the growing penetration of broadband internet access, the ubiquity of wireless access in educational locations, the rise of ultra-mobile portable computers and the proliferation of social software applications in educational contexts, there is a growing number of channels for the facilitation of learning. Nevertheless, there has been insufficient consideration of the interaction design issues that affect the effective facilitation of such learning. This paper contends that there is a clear need to design mobile and social learning to accommodate the benefits of these diverse channels for interaction. Additionally, there is a need to implement suitable testing processes to ensure participants in mobile and social learning are contributing effectively and maximising their learning. Through the presentation of case studies in mobile and social learning, the paper attempts to demonstrate how considered interaction design techniques can improve the effectiveness of new learning channels.
Abstract:
From location-aware computing to mining the social web, representations of context have promised to make better software applications. The opportunities and challenges of context-aware computing from representational, situated and interactional perspectives have been well documented, but arguments from the perspective of design are somewhat disparate. This paper draws on both theoretical perspectives and a design framing, using the problem of designing a social mobile agile ridesharing system, in order to reflect upon and call for broader design approaches for context-aware computing and human-computer interaction research in general.
Abstract:
Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interposition, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution’s correctness. This thesis concludes that not only is it feasible to create such isolation domains for individual components, but that this should also be a fundamental, operating-system-supported abstraction, which would lead to more stable and secure applications.
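For context (this is not LibVM itself), the sketch below shows the conventional pattern the thesis starts from: a host process loading a shared library directly into its own address space, here via Python's ctypes; the library name and the process_buffer function are hypothetical placeholders. Everything the component does then happens with the host's full privileges, which is exactly the exposure an isolation domain is meant to remove.

```python
# Conventional single-address-space loading, the baseline that an isolation
# architecture such as LibVM addresses. "libplugin.so" and "process_buffer"
# are hypothetical placeholders, not part of LibVM.
import ctypes

# The component is mapped straight into the host's address space...
plugin = ctypes.CDLL("./libplugin.so")

# ...so sharing pointers and complex data structures is trivial,
buffer = ctypes.create_string_buffer(b"untrusted input")
plugin.process_buffer(buffer, len(buffer))

# ...but any bug or malicious behaviour in the library runs with the host's
# memory and privileges: there is no isolation boundary at all.
```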
Abstract:
Advances in mobile telephone technology and available dermoscopic attachments for mobile telephones have created a unique opportunity for consumer-initiated mobile teledermoscopy. At least two companies market a dermoscope attachment for an iPhone (Apple), forming a mobile teledermoscope. These devices and the corresponding software applications (apps) enable (1) lesion magnification (at least ×20) and visualization with polarized light; (2) photographic documentation using the telephone camera; (3) lesion measurement (ruler); (4) adding of image and lesion details; and (5) e-mailing of data to a teledermatologist for review. For lesion assessment, the asymmetry-color (AC) rule has 94% sensitivity and 62% specificity for melanoma identification by consumers [1]. Thus, consumers can be educated to recognize asymmetry and color patterns in suspect lesions. However, we know little about consumers' use of mobile teledermoscopy for lesion assessment.
Abstract:
Process-Aware Information Systems (PAISs) support executions of operational processes that involve people, resources, and software applications on the basis of process models. Process models describe vast, often infinite, sets of process instances, i.e., workflows supported by the systems. With the increasing adoption of PAISs, large process model repositories have emerged in companies and public organizations. These repositories constitute significant information resources. Accurate and efficient retrieval of process models and/or process instances from such repositories is interesting for multiple reasons, e.g., searching for similar models/instances, filtering, reuse, standardization, process compliance checking, verification of formal properties, etc. This paper proposes a technique for indexing process models that relies on their alternative representations, called untanglings. We show the use of untanglings for retrieval of process models based on the process instances that they specify, via a solution to the total executability problem. Experiments with industrial process models confirm that the proposed retrieval approach is up to three orders of magnitude faster than the state of the art.
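The untangling representation itself is not reproduced here; as a deliberately simplified stand-in (not the paper's technique), the sketch below illustrates the general idea of indexing a model repository so that instance-based queries avoid scanning every model, using a plain inverted index over activity labels. All model and activity names are invented examples.

```python
# A deliberately simplified stand-in for index-based process model retrieval:
# an inverted index from activity labels to the models that contain them.
# Model names and activities are invented examples, not from the paper.
from collections import defaultdict

repository = {
    "claim_handling": {"register claim", "assess claim", "pay claim"},
    "order_to_cash": {"receive order", "ship goods", "send invoice"},
    "procurement": {"create purchase order", "receive goods", "send invoice"},
}

index = defaultdict(set)
for model, activities in repository.items():
    for activity in activities:
        index[activity].add(model)

def models_supporting(trace):
    """Return models whose activity sets cover every activity in the trace."""
    candidates = set(repository)
    for activity in trace:
        candidates &= index.get(activity, set())
    return candidates

print(models_supporting(["receive goods", "send invoice"]))  # {'procurement'}
```

The paper's untangling-based index plays an analogous role, but operates on behavioural representations of the models rather than on bare activity labels.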
Abstract:
As a key element in their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but to set out to be dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a “wireless, streamlined electronic news production process” (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australia Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services in their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive ‘wiring’ of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, together with one other, The Courier Mail (recognising the diversification of types of product in this field by including it as a representative of Newscorp, now a major participant). The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000, as described in the earlier work, and the situation in 2014, after a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up study on the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of these three newspapers; then the three mastheads as print and online entities, among themselves; and finally it compares one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on the developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions is conducted as a systematic appraisal of the first-level, or principal, screens of the three publications over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were made through a preliminary examination of the products over three days in the week before.
That process identified significant elements of media production, such as: variegated sourcing of materials; randomness in the presentation of items; differential production values among the media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of top news stories of the day; and the presence of standard news values – once again drawn out of the trial ‘bundle’ of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products have been adapted to print, to give the closest match of sets of variables. This device, to study the two sets of publications on like standards – essentially production values and news values – has enabled the comparisons to be made. This comparison of the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, as ones that carry very diverse information by subject and level of depth and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on main points of comparison. This takes the form of a field for text, for the insertion of notes, in the table employed for summarising the features of each product for each day. When the sets of comparisons outlined above are noted, the process then becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of findings support the proposition that a review of the earlier study, and a forensic review of new models, does provide evidence of the character and content of change, especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change and future prospects? Preliminary observations suggest a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services. The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies’ experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the ‘time capsule’ factor: the availability of well-gathered and processed information on major newspaper company production at the threshold of a transformational decade of change in their industry. The comparison stands to identify key changes.
It should also be useful as a reference for further inquiries of the same kind that might be made, and for monitoring the situation regarding newspaper portals online into the future.
Abstract:
Background: The capacity to diagnose, quantify and evaluate movement beyond the general confines of a clinical environment, under effectiveness conditions, may alleviate rampant strain on limited, expensive and highly specialized medical resources. An iPhone 4® incorporates a three-dimensional accelerometer subsystem supported by highly robust software applications. The present study aimed to evaluate the reliability and concurrent criterion-related validity of accelerations recorded with an iPhone 4® during an Extended Timed Get Up and Go test. The Extended Timed Get Up and Go is a clinical test in which the patient gets up from a chair, walks ten metres, turns and returns to the chair. Methods: A repeated-measures, cross-sectional, analytical study. Test-retest reliability of the kinematic measurements of the iPhone 4® was assessed against a standard validated laboratory device. We calculated the Coefficient of Multiple Correlation between the acceleration signals of the two sensors for each subject, in each sub-phase, in each of the three Extended Timed Get Up and Go test trials. To investigate statistical agreement between the two sensors we used the Bland-Altman method. Results: The Coefficients of Multiple Correlation for the five subjects across their triplicate trials were as follows: in the Sit to Stand sub-phase, r = 0.991 to 0.842; in Gait Go, r = 0.967 to 0.852; in Turn, 0.979 to 0.798; in Gait Come, 0.964 to 0.887; and in Turn to Stand to Sit, 0.992 to 0.877. All correlations between the sensors were significant (p < 0.001). The Bland-Altman plots showed a solid tendency to stay close to zero, especially on the y- and x-axes, during the five phases of the Extended Timed Get Up and Go test. Conclusions: The inertial sensor in the iPhone 4® is sufficiently reliable and accurate to evaluate and identify kinematic patterns in an Extended Timed Get Up and Go test. While analysis and interpretation of 3D kinematic data remain dauntingly complex, the iPhone 4® makes acquiring the data relatively inexpensive and easy.
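As a generic illustration of the agreement analysis mentioned above (not the study's own code or data), the following Python sketch computes the Bland-Altman bias and 95% limits of agreement for two synchronized acceleration signals; the signals here are synthetic and numpy is assumed.

```python
# Generic Bland-Altman agreement analysis between two sensors' signals.
# The signals below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 500)            # laboratory sensor (m/s^2)
iphone = reference + rng.normal(0.0, 0.05, 500)  # phone sensor with small error

diffs = iphone - reference

bias = diffs.mean()              # mean difference between the two devices
sd = diffs.std(ddof=1)
loa_lower = bias - 1.96 * sd     # lower 95% limit of agreement
loa_upper = bias + 1.96 * sd     # upper 95% limit of agreement

print(f"bias = {bias:.4f}, limits of agreement = [{loa_lower:.4f}, {loa_upper:.4f}]")
```

A Bland-Altman plot then charts the per-sample differences against the per-sample means, with horizontal lines at the bias and the two limits of agreement.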
Abstract:
The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing data sets and analytical techniques in software applications that are very large and complex, owing to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information with the security requirements expected by stakeholders. When comparing big data analysis with other sectors, the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, the implementation of Information Accountability measures for healthcare big data might be a practical solution in support of information security, privacy and traceability measures. Transparency is one important measure that can demonstrate integrity, a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2]. According to current studies [3], Electronic Health Records (EHR) are key information resources for big data analysis and are also composed of varied co-created values [3]. Common healthcare information originates from and is used by different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. As a remedy, this research work therefore focuses on a systematic approach containing comprehensive guidelines, with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.
Abstract:
Many software applications extend their functionality by dynamically loading libraries into their allocated address space. However, shared libraries are also often of unknown provenance and quality and may contain accidental bugs or, in some cases, deliberately malicious code. Most sandboxing techniques which address these issues require recompilation of the libraries using custom tool chains, require significant modifications to the libraries, do not retain the benefits of single-address-space programming, do not completely isolate guest code, or incur substantial performance overheads. In this paper we present LibVM, a sandboxing architecture for isolating libraries within a host application without requiring any modifications to the shared libraries themselves, while still retaining the benefits of a single address space and also introducing a system call interposition layer that allows complete arbitration over a shared library’s functionality. We show how to utilize contemporary hardware virtualization support towards this end with reasonable performance overheads; in the absence of such hardware support, our model can also be implemented using a software-based mechanism. We ensure that our implementation conforms as closely as possible to existing shared library manipulation functions, minimizing the amount of effort needed to apply such isolation to existing programs. Our experimental results show that it is easy to gain immediate benefits in scenarios where the goal is to guard the host application against unintentional programming errors when using shared libraries, as well as in more complex scenarios where a shared library is suspected of being actively hostile. In both cases, no changes are required to the shared libraries themselves.
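To give a flavour of call arbitration in general (a deliberately simplified analogue, not LibVM's system call interposition layer), the sketch below wraps a loaded component behind a proxy that only forwards calls permitted by a policy; the wrapped object and function names are placeholders.

```python
# A simplified analogue of call interposition: a proxy that arbitrates which
# of a component's functions the host may invoke. Not LibVM itself; the
# wrapped object and the allowed function names are placeholders.
class InterposedLibrary:
    def __init__(self, library, allowed):
        self._library = library
        self._allowed = set(allowed)

    def __getattr__(self, name):
        # Called only for attributes not found on the proxy itself,
        # so every lookup on the wrapped component passes the policy check.
        if name not in self._allowed:
            raise PermissionError(f"call to '{name}' blocked by policy")
        return getattr(self._library, name)

# Example with a stand-in "library": only 'upper' may be called.
safe = InterposedLibrary("hello", allowed={"upper"})
print(safe.upper())   # 'HELLO'
# safe.split()        # would raise PermissionError
```

In LibVM the equivalent arbitration happens at the system call boundary of the isolated library rather than at the level of Python attribute access.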
Abstract:
The aim of this study was to examine the actions taken by geographically dispersed process stakeholders (doctors, community pharmacists and residential aged care facilities, RACFs) to cope with the information silos that exist within and across different settings. The study setting involved three metropolitan RACFs in Sydney, Australia, and employed a qualitative approach using semi-structured interviews, non-participant observations and artefact analysis. Findings showed that medication information was stored in silos, which required specific actions in each setting to translate this information to fit local requirements. A salient example of this was the way in which community pharmacists used the RACF medication charts to prepare residents' pharmaceutical records. This translation of medication information across settings was often accompanied by telephone or face-to-face conversations to cross-check, validate or obtain new information. Findings highlighted that technological interventions that work in silos can negatively impact the quality of medication management processes in RACF settings. The implementation of commercial software applications, such as electronic medication charts, needs to be appropriately integrated to satisfy the collaborative information requirements of the RACF medication process.