981 results for Software metrics
Abstract:
Enterprise resource planning (ERP) software is a dominant approach for dealing with legacy information system problems. In order to avoid invalidating maintenance and development support from the ERP vendor, most organizations reengineer their business processes in line with those implicit within the software. Nevertheless, some customization is typically required. This paper presents two case studies of ERP projects where customizations have been performed. The case analysis suggests that while customizations can deliver genuine organizational benefits, careful consideration is required to determine whether a customization is viable given its potential impact upon future maintenance. Copyright © 2001 John Wiley & Sons, Ltd.
Abstract:
Over the past few years many organizations that directly or indirectly interact with consumers have invested heavily in a social media presence. As a consequence, some success indicators are openly available to users of many social media platforms, such as the number of fans (or followers, members, visitors and others) or the amount of content produced (tweets, images, shares and the like). Many organizations additionally track their social activities internally to understand audience reach, consumer influence, brand image, consumer preference or other key metrics that make sense for a business. However, most of the immediately available social media success metrics are activity-based, and many organizations struggle to establish a direct relationship to business success. This paper systematically reviews some of the common social media metrics and ratings used by organisations, critically analyses their business value, and identifies gaps, formulating research questions for empirical study and concluding with recommendations and suggestions for future research.
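As a small illustration of the kind of activity-based metric reviewed above, the sketch below computes a common "engagement rate" (interactions per post, normalised by audience size). The definition, figures and function name are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: an activity-based social media metric of the kind
# the paper reviews. The "engagement rate" convention used here (interactions
# per post, divided by audience size) is an assumption, not the paper's.

def engagement_rate(likes, comments, shares, posts, followers):
    """Average interactions per post as a share of the audience."""
    interactions = likes + comments + shares
    return (interactions / posts) / followers

# A page with 50,000 followers, 40 posts and 12,000 total interactions.
print(f"{engagement_rate(9000, 2000, 1000, 40, 50000):.2%}")
```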
Abstract:
Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster will present key features of two of the data management services offered by research support staff at QUT Library. The first service is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, which is currently under development with the sole purpose of improving discovery of source code and software as QUT research outputs.

RESEARCH DATA FINDER: As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research data sets.

SOFTWARE AND CODE REGISTRY: The QUT Library software and code registry project stems from researchers’ concerns regarding development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The underpinning software for both registries is VIVO, open source software developed by Cornell University. The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers’ code and software within QUT and the wider research community; increasing the profile of QUT research outputs at a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.
Abstract:
Packaged software is pre-built with the intention of licensing it to users in domestic settings and work organisations. This thesis focuses upon the work organisation where packaged software has been characterised as one of the latest ‘solutions’ to the problems of information systems. The study investigates the packaged software selection process that has, to date, been largely viewed as objective and rational. In contrast, this interpretive study is based on a 2½-year-long field study of organisational experiences with packaged software selection at T.Co, a consultancy organisation based in the United Kingdom. Emerging from the iterative process of case study and action research is an alternative theory of packaged software selection. The research argues that packaged software selection is far from the rationalistic and linear process that previous studies suggest. Instead, the study finds that aspects of the traditional process of selection incorporating the activities of gathering requirements, evaluation and selection based on ‘best fit’ may or may not take place. Furthermore, even where these aspects occur they may not have equal weight or impact upon implementation and usage as may be expected. This is due to the influence of those multiple realities which originate from the organisational and market environments within which packages are created, selected and used, the lack of homogeneity in organisational contexts and the variously interpreted characteristics of the package in question.
Abstract:
Software as a Service (SaaS) is anticipated to provide significant benefits to small and medium enterprises (SMEs) due to ease of access to high-end applications, 24/7 availability, utility pricing, etc. However, underlying SaaS is the assumption that SMEs will directly interact with the SaaS vendor and use a self-service model. In practice, we see the rise of SaaS intermediaries who support SMEs in using SaaS. This paper reports on an empirical study of the role of intermediaries in terms of how they support SMEs in sourcing and leveraging SaaS for their business. The knowledge contributions of this paper are: (1) the identification and description of the role of SaaS intermediaries; (2) the specification of different roles of SaaS intermediaries, in particular a more basic role with a technology orientation and an operational alignment perspective; and (3) a more value-adding role with a customer orientation and a strategic alignment perspective.
Abstract:
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low-multiplex analyses as well as high-multiplex approaches where software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of and need for an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories, and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
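To make the quoted pass/fail criteria concrete, here is a minimal sketch (not the published SSP software) that checks replicate measurements for one monitored peptide against those thresholds. The function name, input format and the interpretation of RT drift as the retention-time span across replicates are assumptions.

```python
# Illustrative sketch (not the published SSP software): checks replicate
# measurements for one monitored peptide against the pass/fail thresholds
# quoted in the abstract.
from statistics import mean, pstdev

def cv(values):
    """Coefficient of variation (population SD / mean)."""
    return pstdev(values) / mean(values)

def check_system_suitability(peak_areas, peak_widths, retention_times):
    """Return a dict of criterion -> bool, True meaning the criterion passes."""
    rt_drift = max(retention_times) - min(retention_times)  # span in minutes (assumed definition)
    return {
        "peak_area_cv_lt_0.15": cv(peak_areas) < 0.15,
        "peak_width_cv_lt_0.15": cv(peak_widths) < 0.15,
        "rt_sd_lt_0.15_min": pstdev(retention_times) < 0.15,
        "rt_drift_lt_0.5_min": rt_drift < 0.5,
    }

# Example: five replicate injections of one SSP peptide (toy numbers).
print(check_system_suitability(
    peak_areas=[1.02e6, 0.98e6, 1.05e6, 0.99e6, 1.01e6],
    peak_widths=[0.21, 0.22, 0.20, 0.21, 0.22],           # minutes
    retention_times=[18.42, 18.45, 18.40, 18.47, 18.44],  # minutes
))
```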
Abstract:
As the systematic investigation of Twitter as a communications platform continues, the question of developing reliable comparative metrics for the evaluation of public, communicative phenomena on Twitter becomes paramount. What is necessary here is the establishment of an accepted standard for the quantitative description of user activities on Twitter. This needs to be flexible enough to be applied to a wide range of communicative situations, such as the evaluation of individual users’ and groups of users’ Twitter communication strategies, the examination of communicative patterns within hashtags and other identifiable ad hoc publics on Twitter (Bruns & Burgess, 2011), and even the analysis of very large datasets of everyday interactions on the platform. By providing a framework for the quantitative analysis of Twitter communication, researchers in different areas (e.g., communication studies, sociology, information systems) are enabled to adapt methodological approaches and to conduct analyses on their own. Besides general findings about communication structure on Twitter, large amounts of data might be used to better understand issues or events retrospectively, detect issues or events at an early stage, or even to predict certain real-world developments (e.g., election results; cf. Tumasjan, Sprenger, Sandner, & Welpe, 2010, for an early attempt to do so).
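As an illustration of the kind of standardised activity metrics such a framework would cover, the sketch below computes per-user proportions of original tweets, @replies and retweets from a set of tweet records. The record fields and function name are assumptions, not part of the cited metrics.

```python
# Illustrative sketch only: per-user activity metrics of the kind such a
# framework would standardise. The tweet record fields are assumptions.
from collections import defaultdict

def user_activity_metrics(tweets):
    """tweets: iterable of dicts with 'user', 'is_retweet', 'reply_to' keys.
    Returns per-user proportions of original tweets, @replies and retweets."""
    counts = defaultdict(lambda: {"original": 0, "reply": 0, "retweet": 0})
    for t in tweets:
        user = t["user"]
        if t.get("is_retweet"):
            counts[user]["retweet"] += 1
        elif t.get("reply_to"):
            counts[user]["reply"] += 1
        else:
            counts[user]["original"] += 1
    metrics = {}
    for user, c in counts.items():
        total = sum(c.values())
        metrics[user] = {kind: n / total for kind, n in c.items()}
    return metrics

# Toy example.
tweets = [
    {"user": "alice", "is_retweet": False, "reply_to": None},
    {"user": "alice", "is_retweet": True,  "reply_to": None},
    {"user": "bob",   "is_retweet": False, "reply_to": "alice"},
]
print(user_activity_metrics(tweets))
```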
Abstract:
Fire incidents in buildings are common, so the fire safety design of framed structures is imperative, especially for unprotected or partly protected bare steel frames. However, software for structural fire analysis is not widely available. As a result, performance-based structural fire design based on user-friendly, conventional nonlinear analysis programs is advocated, so that engineers do not need to acquire new structural analysis software for structural fire analysis and design. Such a tool should be able to simulate different fire scenarios and the associated detrimental effects efficiently, including second-order P-Δ and P-δ effects and material yielding. Further, the nonlinear behaviour of large-scale structures becomes complicated under fire, so their simulation relies on an efficient and effective numerical analysis to cope with the intricate nonlinear effects due to fire. To this end, the present fire study uses the second-order elastic/plastic analysis software NIDA to predict the structural behaviour of bare steel framed structures at elevated temperatures. The study considers thermal expansion and material degradation due to heating. Degradation of material strength with increasing temperature is modelled mainly by a set of temperature-stress-strain curves according to BS5950 Part 8, which implicitly allow for creep deformation. The finite element stiffness formulation of the beam-column elements is derived from the fifth-order PEP element, which facilitates computer modelling with one member per element. The Newton-Raphson method is used in the nonlinear solution procedure to trace the nonlinear equilibrium path at specified elevated temperatures. Several numerical and experimental verifications of framed structures are presented and compared against solutions in the literature. The proposed method permits engineers to adopt performance-based structural fire analysis and design using typical second-order nonlinear structural analysis software.
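To illustrate the kind of nonlinear solution procedure described above, the sketch below runs a single-degree-of-freedom Newton-Raphson iteration with a temperature-dependent strength reduction. The softening law and reduction factors are placeholders, not the BS5950 Part 8 curves or the NIDA formulation.

```python
# Illustrative sketch only: a single-degree-of-freedom Newton-Raphson
# iteration of the kind used to trace a nonlinear equilibrium path at a
# fixed elevated temperature. The reduction factors and softening law are
# placeholders, not the BS5950 Part 8 curves used in the study.

def strength_reduction(temp_c):
    """Placeholder strength-reduction factor vs. temperature (illustrative)."""
    return max(0.05, min(1.0, 1.0 - 0.0013 * max(0.0, temp_c - 100.0)))

def internal_force(u, k0, temp_c):
    """Nonlinear resisting force: degraded stiffness with mild softening."""
    k = k0 * strength_reduction(temp_c)
    return k * u * (1.0 - 0.1 * u)

def tangent_stiffness(u, k0, temp_c):
    k = k0 * strength_reduction(temp_c)
    return k * (1.0 - 0.2 * u)

def newton_raphson(load, k0, temp_c, tol=1e-8, max_iter=50):
    """Solve internal_force(u) = load at the given temperature."""
    u = 0.0
    for _ in range(max_iter):
        residual = load - internal_force(u, k0, temp_c)
        if abs(residual) < tol:
            return u
        u += residual / tangent_stiffness(u, k0, temp_c)
    raise RuntimeError("Newton-Raphson did not converge")

# Displacement under the same load grows as the temperature rises.
for T in (20.0, 400.0, 600.0):
    print(T, newton_raphson(load=50.0, k0=100.0, temp_c=T))
```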
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception by anticipating situations in which laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial when conservative decisions are the most appropriate.
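A minimal sketch of this kind of approach, assuming the visual quality features have already been extracted per image and that labels are available from annotated runs; it uses scikit-learn's k-NN classifier and synthetic placeholder data rather than the authors' implementation.

```python
# Illustrative sketch only: training a k-NN classifier to flag camera frames
# whose co-registered laser scan is likely to be smoke-affected. The quality
# features are assumed to be precomputed per image; labels would come from
# hand-annotated training runs (synthetic placeholders are used here).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data: rows = images, columns = visual quality features.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = scan likely affected

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At run time, a conservative policy might discard or down-weight laser
# scans whose paired image is classified as smoke-affected.
```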
Abstract:
This paper proposes an experimental study of quality metrics that can be applied to visual and infrared images acquired from cameras onboard an unmanned ground vehicle (UGV). The relevance of existing metrics in this context is discussed and a novel metric is introduced. Selected metrics are evaluated on data collected by a UGV in clear and challenging environmental conditions, represented in this paper by the presence of airborne dust or smoke. An example application is given, with monocular SLAM estimating the pose of the UGV while smoke is present in the environment. It is shown that the proposed novel quality metric can be used to anticipate situations where the quality of the pose estimate will be significantly degraded due to the input image data. This enables decisions to advantageously switch between data sources (e.g. using infrared images instead of visual images).
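A minimal sketch of the kind of data-source switching motivated above, assuming per-frame quality scores are already available for both streams; the threshold and policy are illustrative placeholders, not the paper's novel metric.

```python
# Illustrative sketch only: a simple switching policy between visual and
# infrared streams. Per-frame quality scores are assumed to come from an
# image quality metric; the threshold is a placeholder.

def choose_source(visual_quality, infrared_quality, threshold=0.6):
    """Prefer the visual camera unless its quality score drops below the
    threshold and the infrared stream currently scores better."""
    if visual_quality < threshold and infrared_quality > visual_quality:
        return "infrared"
    return "visual"

# Example: dust/smoke degrades the visual stream mid-sequence.
frames = [(0.9, 0.7), (0.8, 0.7), (0.4, 0.7), (0.3, 0.6), (0.85, 0.7)]
for vis_q, ir_q in frames:
    print(vis_q, ir_q, "->", choose_source(vis_q, ir_q))
```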
Abstract:
IEEE 802.11p is the new standard for intervehicular communications (IVC) using the 5.9 GHz frequency band; it is planned to be widely deployed to enable cooperative systems. The use and performance of 802.11p have been studied theoretically and in simulation over the past years. Unfortunately, many of these results have not been confirmed by on-track experimentation. In this paper, we describe field trials of 802.11p technology with our test vehicles; metrics such as maximum range, latency and frame loss are examined. We then propose a detailed model of 802.11p that can be used to accurately simulate its performance within Cooperative Systems (CS) applications.
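To make the examined metrics concrete, the sketch below computes frame loss and mean latency from matched transmit/receive logs. The log format (sequence number mapped to timestamp) is an assumption, not the trial setup.

```python
# Illustrative sketch only: computing frame loss and latency from matched
# transmit/receive logs of the kind collected during such field trials.
# The log format (sequence number -> timestamp in seconds) is an assumption.

def link_metrics(tx_log, rx_log):
    """tx_log, rx_log: dicts mapping frame sequence number -> timestamp (s)."""
    delivered = [seq for seq in tx_log if seq in rx_log]
    frame_loss = 1.0 - len(delivered) / len(tx_log)
    latencies = [rx_log[seq] - tx_log[seq] for seq in delivered]
    avg_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return frame_loss, avg_latency

tx = {1: 0.000, 2: 0.100, 3: 0.200, 4: 0.300}
rx = {1: 0.004, 2: 0.105, 4: 0.303}              # frame 3 lost
loss, latency = link_metrics(tx, rx)
print(f"frame loss: {loss:.0%}, mean latency: {latency * 1000:.1f} ms")
```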
Abstract:
The history of impact metrics as a field driven by the sciences presents real problems for Arts and Humanities scholars. Whereas scientists have long depended on journal articles as a primary mechanism for publishing findings, researchers in the Arts and Humanities tend to publish in a much wider range of formats. For many Arts and Humanities scholars, conference presentations, creative works, reports and scholarly monographs are legitimate, valuable and valued forms of publication. Bizarre as it may seem, even the best-established and most respected format for the publication of Humanities scholarship, the scholarly monograph, is often invisible within digital metrics landscapes. As a result, although some information about Arts and Humanities scholars may be captured by impact metrics, academics from these fields always appear to perform less well than colleagues in the Sciences when measured using tools designed for scientists.
Abstract:
Games and the broader interactive entertainment sector constitute the major ‘born global/born digital’ creative industry. The videogame industry (formally referred to as interactive entertainment) is the economic sector that develops, markets and sells videogames to millions of people worldwide. There are over 11 countries with revenues of over $1 billion. This figure was expected to grow 9.1 per cent annually, to $48.9 billion in 2011 and $68 billion in 2012, making it the fastest-growing component of the international media sector (Scanlon, 2007; Caron, 2008).
Abstract:
Introduction: This study examines and compares the dosimetric quality of radiotherapy treatment plans for prostate carcinoma in a cohort of 163 patients treated across 5 centres: 83 treated with three-dimensional conformal radiotherapy (3DCRT), 33 treated with intensity-modulated radiotherapy (IMRT) and 47 treated with volumetric-modulated arc therapy (VMAT). Methods: Treatment plan quality was evaluated in terms of target dose homogeneity and organ-at-risk sparing, through the use of a set of dose metrics. These included the mean, maximum and minimum doses; the homogeneity and conformity indices for the target volumes; and a selection of dose coverage values relevant to each organ-at-risk. Statistical significance was evaluated using two-tailed Welch’s t-tests. The Monte Carlo DICOM ToolKit software was adapted to permit the evaluation of dose metrics from DICOM data exported from a commercial radiotherapy treatment planning system. Results: The 3DCRT treatment plans offered greater planning target volume dose homogeneity than the other two treatment modalities. The IMRT and VMAT plans offered greater dose reduction in the organs-at-risk, with increased compliance with recommended organ-at-risk dose constraints, compared to conventional 3DCRT treatments. When compared to each other, IMRT and VMAT did not provide significantly different treatment plan quality for like-sized tumour volumes. Conclusions: This study indicates that IMRT and VMAT provide similar dosimetric quality, which is superior to the dosimetric quality achieved with 3DCRT.
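For readers unfamiliar with the target dose metrics named above, the sketch below computes a homogeneity index and a conformity index from per-voxel dose arrays. The ICRU 83 and RTOG definitions used here are common conventions assumed for illustration; the abstract does not state the study's exact formulae.

```python
# Illustrative sketch only: two commonly used definitions of the target dose
# metrics named in the abstract. The ICRU 83 homogeneity index and the RTOG
# conformity index are assumptions, not necessarily the study's formulae.
import numpy as np

def homogeneity_index(target_doses):
    """ICRU 83 form: (D2% - D98%) / D50%, from per-voxel target doses in Gy."""
    d2, d50, d98 = np.percentile(target_doses, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(body_doses, target_voxels, prescription_dose):
    """RTOG form: voxels receiving the prescription dose / target voxels."""
    covered = np.count_nonzero(body_doses >= prescription_dose)
    return covered / target_voxels

# Toy example: a 74 Gy prescription with a slightly hot, near-uniform target.
rng = np.random.default_rng(1)
target = rng.normal(76.0, 1.0, size=10_000)                  # target voxels
body = np.concatenate([target, rng.normal(40.0, 5.0, size=40_000)])
print("HI:", round(homogeneity_index(target), 3))
print("CI:", round(conformity_index(body, target.size, 74.0), 3))
```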