426 results for software creation methodology
Abstract:
This article examines the design of ePortfolios for music postgraduate students using a practice-led, iterative design research process. It is suggested that the availability of Web 2.0 technologies such as blogs and social network software potentially provides creative artists with an opportunity to engage in a dialogue about art, with artefacts of the artist's products and processes present in that discussion. The design process applied the Software Development as Research (SoDaR) methodology to develop design and pedagogy simultaneously. The approach to designing ePortfolio systems applied four theoretical protocols to examine the use of digitized artefacts to enable a dynamic and inclusive dialogue around representations of the students' work. A negative case analysis identified a disjuncture between university access and control policy and the relative openness of Web 2.0 systems outside the institution, which led to the design of an integrated ePortfolio model.
Abstract:
Addressing possibilities for authentic combinations of diverse media within an installation setting, this research tested hybrid blends of the physical, digital and temporal to explore liminal space and image. The practice-led research reflected on the creation of artworks from three perspectives – material, immaterial and hybrid – and in doing so developed a new methodological structure that extends conventional forms of triangulation. This study explored how physical and digital elements each sought hierarchical presence yet simultaneously coexisted, thereby extending the visual and conceptual potential of the work. Outcomes demonstrated how utilising and recording transitional processes of hybrid imagery achieved a convergence of diverse, experiential forms. "Hybrid authority" – an authentic convergence of disparate elements – was articulated in the creation and public sharing of processual works and in the creation of an innovative framework for hybrid art practice.
Abstract:
Software development settings provide a great opportunity for CSCW researchers to study collaborative work. In this paper, we explore a specific work practice called bug reproduction that is part of the software bug-fixing process. Bug reproduction is a highly collaborative process by which software developers attempt to locally replicate the ‘environment’ within which a bug was originally encountered. Customers, who encounter bugs in their everyday use of systems, play an important role in bug reproduction, as they provide useful information to developers in the form of steps for reproduction, software screenshots, trace logs, and other ways to describe a problem. Bug reproduction, however, poses major hurdles in software maintenance, as it is often challenging to replicate the contextual aspects at play at the customers’ end. To study the bug reproduction process from a human-centered perspective, we carried out an ethnographic study at a multinational engineering company. Using semi-structured interviews, a questionnaire and half-day observations of sixteen software developers working on different software maintenance projects, we studied bug reproduction. In this paper, we present a holistic view of bug reproduction practices from a real-world setting and discuss implications for designing tools to address the challenges developers face during bug reproduction.
Abstract:
The analysis of content and metadata has long been the subject of most Twitter studies; however, such research tells only part of the story of the development of Twitter as a platform. In this work, we introduce a methodology for determining the growth patterns of individual users of the platform, a technique we refer to as follower accession, and through a number of case studies consider the factors that lead to follower growth and the identification of non-authentic followers. Finally, we consider what such an approach tells us about the history of the platform itself and the way in which changes to the new-user signup process have affected users.
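A minimal sketch of how follower accession might be computed, assuming a follower list returned most-recent-first (as Twitter's followers API historically did) and numeric account IDs that rise roughly with account creation date; the function names, window size, and threshold below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: reconstructs the order in which followers
# arrived and flags windows of suspiciously similar account IDs, one
# plausible signature of non-authentic (e.g. purchased) followers.

def accession_series(follower_ids):
    """Given follower IDs listed most-recent-first, return
    (accession_index, follower_id) pairs in the order they followed."""
    return list(enumerate(reversed(follower_ids)))

def flag_tight_id_runs(series, window=100, spread_threshold=0.01):
    """Flag accession windows whose account IDs are tightly clustered,
    suggesting a burst of accounts created at nearly the same time."""
    ids = [fid for _, fid in series]
    flagged = []
    for start in range(0, max(len(ids) - window + 1, 0), window):
        chunk = ids[start:start + window]
        spread = (max(chunk) - min(chunk)) / max(chunk)
        if spread < spread_threshold:
            flagged.append((start, start + window))
    return flagged
```

Plotting the accession series (position versus account ID) is one way such case studies can surface growth patterns and anomalies over an account's history.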
Abstract:
Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (EDs) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims: This research reports an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence matching the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were validated by a third, expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach. Results: The automated method, which attempts to identify limb abnormalities by searching for keywords expressed by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement, using advanced natural language processing (NLP) and machine learning techniques, were identified.
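A minimal sketch of the gazetteer-based classification described above, assuming simple token matching; the keyword list here is illustrative and does not reproduce the study's clinician-derived gazetteer.

```python
import re

# Illustrative stand-in for the clinician-derived gazetteer of
# radiological finding keywords; the study's actual terms are not shown.
GAZETTEER = {"fracture", "dislocation", "lucency", "avulsion", "displacement"}

def is_abnormal(report_text: str) -> bool:
    """Classify an X-ray report as abnormal if any gazetteer term appears."""
    tokens = set(re.findall(r"[a-z]+", report_text.lower()))
    return bool(tokens & GAZETTEER)

# Example: a report noting a fracture is flagged as abnormal.
print(is_abnormal("Transverse fracture of the distal radius."))  # True
```

Note that naive keyword matching cannot handle negation ("no fracture seen"), which is one reason the abstract points to advanced NLP and machine learning techniques as avenues for improvement.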
Abstract:
New technical and procedural interventions are less likely to be adopted in industry unless they are smoothly integrated into the existing practices of professionals. In this paper, we provide a case study of the use of ethnographic methods for studying software bug-fixing activities at an industrial engineering conglomerate. We aimed to gain an in-depth understanding of software developers' everyday practices in bug-fixing-related projects and, in turn, to inform the design of novel productivity tools. The use of ethnography has allowed us to look at the social side of software maintenance practices. In this paper, we highlight: 1) organizational issues that influence bug-fixing activities; 2) the social role of bug-tracking systems; and 3) social issues specific to different phases of bug-fixing activities.
Abstract:
This study seeks to fill a gap in the existing literature by examining how and whether disclosure of social value creation becomes part of the legitimation strategies of social enterprises. Using legitimacy reasoning, this study shows that three global social organizations, Grameen Bank, Charity Water, and the Bill and Melinda Gates Foundation, provide evidence of the use of disclosures of social value creation to conform with the expectations of the broader community, a community that wants to see a world free of poverty and injustice.
Abstract:
Copyright, it is commonly said, matters in society because it encourages the production of socially beneficial, culturally significant expressive content. Our focus on copyright's recent history, however, blinds us to the social information practices that have always existed. In this Article, we examine these social information practices, and query copyright's role within them. We posit a functional model of what is necessary for creative content to move from creator to user. These are the functions dealing with the creation, selection, production, dissemination, promotion, sale, and use of expressive content. We demonstrate how centralized commercial control of information content has been the driving force behind copyright's expansion. All of the functions that copyright industries once controlled, however, are undergoing revolutionary decentralization and disintermediation. Different aspects of information technology, notably the digitization of information, widespread computer ownership, the rise of the Internet, and the development of social software, threaten the viability and desirability of centralized control over every one of the content functions. These functions are increasingly being performed by individuals and disaggregated groups. This raises an issue for copyright as the main regulatory force in information practices: copyright assumes a central control requirement that no longer holds for the development of expressive content. We examine the normative implications of this shift for our information policy in this new post-copyright era. Most notably, we conclude that copyright law needs to be adjusted in order to recognize the opportunity and desirability of decentralized content, and the expanded marketplace of ideas it promises.
Abstract:
Social media tools are starting to become mainstream, and those working in the software development industry are often ahead of the game in using current technological innovations to improve their work. With the advent of outsourcing and distributed teams, the software industry is ideally placed to take advantage of social media technologies, tools and environments. This paper looks at how social media is being used by early adopters within the software development industry. Current tools and trends in social media tool use are described and critiqued: what works and what doesn't. We use industrial case studies from platform development, commercial application development and government contexts, which provide a clear picture of the emergent state of the art. These real-world experiences are then used to show how working collaboratively in geographically dispersed teams, enabled by social media, can enhance and improve the development experience.
Abstract:
The Informed Systems Approach offers models for advancing workplace learning within collaboratively designed systems that promote using information to learn through collegial exchange and reflective dialogue. This systemic approach integrates theoretical antecedents and process models: Peter Checkland's Soft Systems Methodology, which advances systems design and informed action, and Christine Bruce's informed learning theory, which generates information experiences and professional practices. Ikujiro Nonaka's SECI model and Mary Crossan's 4i framework further animate workplace knowledge creation through learning relationships that engage individuals with ideas.
Abstract:
Most existing motorway traffic safety studies using disaggregate traffic flow data aim at developing models for identifying real-time traffic risks by comparing pre-crash and non-crash conditions. One serious shortcoming of those studies is that non-crash conditions are arbitrarily selected and hence not representative: the selected non-crash data might not be comparable with the pre-crash data, and the non-crash/pre-crash ratio is decided arbitrarily, neglecting the abundance of non-crash relative to pre-crash conditions. Here, we present a methodology for developing a real-time MotorwaY Traffic Risk Identification Model (MyTRIM) using individual vehicle data, meteorological data, and crash data. Non-crash data are clustered into groups called traffic regimes. Thereafter, pre-crash data are classified into regimes to match with the relevant non-crash data. Of the eight traffic regimes obtained, four highly risky regimes were identified, and three regime-based Risk Identification Models (RIM) with sufficient pre-crash data were developed. MyTRIM memorizes the latest risk evolution identified by RIM to predict near-future risks. Traffic practitioners can choose MyTRIM's memory size based on the trade-off between detection and false alarm rates: decreasing the memory size from 5 to 1 increases the detection rate from 65.0% to 100.0% and the false alarm rate from 0.21% to 3.68%. Moreover, critical factors differentiating pre-crash from non-crash conditions are recognized and can be used for developing preventive measures. MyTRIM can be used by practitioners in real time as an independent tool for online decision-making or integrated with existing traffic management systems.
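A minimal sketch of a memory mechanism consistent with the trade-off reported above, assuming regime-based RIM components that emit a per-interval risk flag; the class, window semantics, and parameters are hypothetical stand-ins, not the paper's actual model.

```python
from collections import deque

# Hypothetical sketch of a risk memory, consistent with the reported
# trade-off: predict risk only when every interval in the memory window
# was flagged risky by a regime-based RIM, so a smaller memory raises
# both the detection rate and the false alarm rate.

class RiskMemory:
    def __init__(self, memory_size=3):
        self.recent = deque(maxlen=memory_size)

    def update(self, rim_flag):
        """Record the latest RIM output (True = risky) and return the
        current prediction: risky only if the whole window agrees."""
        self.recent.append(rim_flag)
        return len(self.recent) == self.recent.maxlen and all(self.recent)

# Example: with memory_size=3, three consecutive risky intervals are
# required before an alarm is raised.
memory = RiskMemory(memory_size=3)
for flag in [True, True, False, True, True, True]:
    print(memory.update(flag))  # False, False, False, False, False, True
```

Under this reading, a memory of 1 raises an alarm on any single risky interval (maximum detection, but more false alarms), while a memory of 5 demands sustained evidence before alarming.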
Abstract:
QUT Software Finder is a searchable repository of metadata describing software and source code created as a result of QUT research activities. It was launched in December 2013: https://researchdatafinder.qut.edu.au/scf The registry was designed to aid the discovery and visibility of QUT research outputs and to encourage sharing and re-use of code and software throughout the research community, both nationally and internationally. The repository platform used is VIVO, an open-source product initially developed at Cornell University. QUT Software Finder records that describe software or code are connected to information about the researchers involved, their research groups, related publications and related projects. Links to where the software or code can be accessed are also provided, alongside licensing and re-use information.
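A purely illustrative sketch of the kind of linked record such a registry might hold, expressed as a plain Python structure; the field names and values are hypothetical and do not reflect QUT's or VIVO's actual schema (VIVO stores records as RDF).

```python
# Hypothetical record shape for a software-metadata registry entry,
# linking a software item to people, groups, publications and projects.
# Field names and values are illustrative only, not the VIVO/QUT schema.
record = {
    "title": "Example Analysis Toolkit",               # invented entry
    "description": "Scripts for processing survey data.",
    "access_url": "https://example.org/code/toolkit",   # placeholder URL
    "licence": "MIT",                                   # re-use information
    "researchers": ["A. Researcher"],
    "research_groups": ["Example Research Group"],
    "related_publications": ["doi:10.0000/example"],
    "related_projects": ["Example Project"],
}

# A registry search could then match queries against titles and descriptions.
def matches(rec, query):
    q = query.lower()
    return q in rec["title"].lower() or q in rec["description"].lower()

print(matches(record, "survey"))  # True
```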