983 results for Software Fault Isolation
Abstract:
We do not commonly associate software engineering with philosophical debate. Indeed, software engineers ought to be concerned with building software systems and not settling philosophical questions. I attempt to show that software engineers do, in fact, take philosophical sides when designing software applications. In particular, I look at how the problem of vagueness arises in software engineering and argue that when software engineers solve it, they commit to philosophical views that they are seldom aware of. In the second part of the paper, I suggest a way of dealing with vague predicates without having to confront the problem of vagueness itself. The purpose of my paper is to highlight the currently prevalent disconnect between philosophy and software engineering. I claim that a better knowledge of the philosophical debate is important as it can have ramifications for crucial software design decisions. Better awareness of philosophical issues not only produces better software engineers, it also produces better engineered products.
Abstract:
Background Maintenance of communication is important for people with dementia living in long-term care. The purpose of this study was to assess the feasibility of using “Giraff”, a telepresence robot, to enhance engagement between family and a person with dementia living in long-term care. Methods A mixed-methods approach involving semi-structured interviews, call records and video observational data was used. Five people with dementia and their family member participated in a discussion via the Giraff robot a minimum of six times over a six-week period. Feasibility was assessed using a framework that included video analysis of emotional response and engagement. Results Twenty-six calls with an average duration of 23 minutes took place. Residents showed a general state of positive emotions across the calls, with a high level of engagement and a minimal level of negative emotions. Participants enjoyed the experience, and families reported that the Giraff robot offered an opportunity to reduce social isolation. A number of software and hardware challenges were encountered. Conclusions Participants perceived this novel approach to engaging families and people with dementia as a feasible option. Participants were observed to enjoy the experience, and also reported doing so. The technical challenges identified have been addressed in a newer version of the robot. Future research should include a feasibility trial of longer duration, with a larger sample and a cost analysis.
Abstract:
Software as a Service (SaaS) can provide significant benefits to small and medium enterprises (SMEs) due to advantages like ease of access, 24/7 availability, and utility pricing. However, underlying the SaaS delivery model is often the assumption that SMEs will directly interact with the SaaS vendor and use a self-service approach. In practice, we see the rise of SaaS intermediaries who can support SMEs with sourcing and leveraging SaaS. This paper reports on the roles of intermediaries and how they support SMEs with using SaaS. We conducted an empirical study of two SaaS intermediaries and analysed their business models, in particular their value propositions. We identified orientation (technology or customer) and alignment (operational or strategic) as themes for understanding their roles. The contributions of this paper include: (1) the identification and description of SaaS intermediaries for SMEs based on an empirical study and (2) an understanding of the different roles of SaaS intermediaries, in particular a more basic role based on technology orientation and operational alignment and a more value-adding role based on customer orientation and strategic alignment. We propose that SaaS intermediaries can address the SaaS adoption and implementation challenges of SMEs by playing a basic role, and can also aim to support SMEs in creating business value with SaaS-based solutions by playing a value-adding role.
Abstract:
Introduction: Research on the ability of self-report assessment tools to predict crash outcomes has produced mixed results. As a result, researchers are now beginning to explore whether examining culpability for crash involvement can improve this predictive efficacy. This study reports on the application of the Manchester Driver Behaviour Questionnaire (DBQ) to predict crash involvement among a sample of general Queensland motorists, and in particular, whether including a crash culpability variable improves predictive outcomes. Surveys were completed by 249 general motorists online or via a pen-and-paper format. Results: Consistent with previous research, a factor analysis revealed a three-factor solution for the DBQ accounting for 40.5% of the overall variance. However, multivariate analysis revealed little ability of the DBQ to predict crash involvement. Rather, exposure to the road was found to be predictive of crashes. An analysis of culpability revealed that 88 participants reported being “at fault” for their most recent crash. Corresponding bivariate and multivariate analyses that included the culpability variable did not improve identification of those involved in crashes. Conclusions: While preliminary, the results suggest that including crash culpability may not necessarily improve predictive outcomes in self-report methodologies, although it is noted that the current small sample size may also have had a deleterious effect on this endeavour. This paper also outlines the need for future research (which also includes official crash and offence outcomes) to better understand the actual contribution of self-report assessment tools, and culpability variables, to understanding and improving road safety.
Abstract:
Limbal microvascular endothelial cells (L-MVEC) contribute to formation of the corneal-limbal stem cell niche and to neovascularization of diseased and injured corneas. Nevertheless, despite these important roles in corneal health and disease, few attempts have been made to isolate L-MVEC with a view to studying their biology in vitro. We therefore explored the feasibility of generating primary cultures of L-MVEC from cadaveric human tissue. We commenced our study by evaluating growth conditions (MesenCult-XF system) that have previously been found to be associated with expression of the endothelial cell surface marker thrombomodulin/CD141, in crude cultures established from collagenase digests of limbal stroma. The potential presence of L-MVEC in these cultures was examined by flow cytometry using a more specific marker for vascular endothelial cells, CD31/PECAM-1. These studies demonstrated that the presence of CD141 in crude cultures established using the MesenCult-XF system is unrelated to L-MVEC. We therefore subsequently explored the use of magnetic-assisted cell sorting (MACS) for CD31 as a tool for generating cultures of L-MVEC, in conjunction with more traditional endothelial cell growth conditions. These conditions consisted of gelatin-coated tissue culture plastic and MCDB-131 medium supplemented with fetal bovine serum (10% v/v), D-glucose (10 mg/mL), epidermal growth factor (10 ng/mL), heparin (50 μg/mL), hydrocortisone (1 μg/mL) and basic fibroblast growth factor (10 ng/mL). Our studies revealed that use of endothelial growth conditions alone is insufficient to generate significant numbers of L-MVEC in primary cultures established from cadaveric corneal stroma. Nevertheless, through use of positive MACS selection for CD31 we were able to routinely observe L-MVEC in cultures derived from collagenase digests of limbal stroma.
The presence of L-MVEC in these cultures was confirmed by immunostaining for von Willebrand factor (vWF) and by ingestion of acetylated low-density lipoprotein. Moreover, the vWF+ cells formed aligned cell-to-cell ‘trains’ when grown on Geltrex™. The purity of L-MVEC cultures was found to be unrelated to tissue donor age (32 to 80 years) or duration in eye bank corneal preservation medium prior to use (3 to 10 days in Optisol), as assessed by multiple regression. Optimal purity of L-MVEC cultures was achieved through two rounds of positive MACS selection for CD31 (mean ± s.e.m., 65.0 ± 20.8%; p<0.05). We propose that human L-MVEC cultures generated through these techniques, in conjunction with other cell types, will provide a useful tool for exploring the mechanisms of blood vessel cell growth in vitro.
Abstract:
This paper proposes a highly reliable fault diagnosis approach for low-speed bearings. The proposed approach first extracts wavelet-based fault features that represent diverse symptoms of multiple low-speed bearing defects. The most useful fault features for diagnosis are then selected by utilizing a genetic algorithm (GA)-based kernel discriminative feature analysis cooperating with one-against-all multicategory support vector machines (OAA MCSVMs). Finally, each support vector machine is individually trained with its own feature vector that includes the most discriminative fault features, offering the highest classification performance. In this study, the effectiveness of the proposed GA-based kernel discriminative feature analysis and the classification ability of individually trained OAA MCSVMs are addressed in terms of average classification accuracy. In addition, the proposed GA-based kernel discriminative feature analysis is compared with four other state-of-the-art feature analysis approaches. Experimental results indicate that the proposed approach is superior to other feature analysis methodologies, yielding an average classification accuracy of 98.06% and 94.49% under rotational speeds of 50 revolutions per minute (RPM) and 80 RPM, respectively. Furthermore, the individually trained MCSVMs with their own optimal fault features based on the proposed GA-based kernel discriminative feature analysis outperform the standard OAA MCSVMs, showing an average accuracy of 98.66% and 95.01% for bearings under rotational speeds of 50 RPM and 80 RPM, respectively.
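The selection step described above (a genetic algorithm searching for the feature subset that maximizes classification accuracy) can be sketched in miniature. The following is an illustrative toy, not the paper's implementation: it uses a nearest-centroid classifier as a stand-in for the kernel SVMs, and the data, population size and generation count are invented.

```python
import random

def fitness(mask, X, y):
    # Accuracy of a nearest-centroid classifier restricted to the
    # features selected by the binary mask (the GA's fitness function).
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    classes = sorted(set(y))
    centroids = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    correct = 0
    for x, label in zip(X, y):
        v = [x[i] for i in feats]
        pred = min(classes, key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centroids[c])))
        correct += pred == label
    return correct / len(y)

def ga_select(X, y, n_feats, pop=20, gens=30, seed=0):
    # Standard GA loop: keep the fittest half, refill the population
    # with one-point crossover children, flip one bit as mutation.
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: fitness(m, X, y), reverse=True)
        survivors = popn[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_feats)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_feats)] ^= 1  # mutation
            children.append(child)
        popn = survivors + children
    return max(popn, key=lambda m: fitness(m, X, y))
```

On a toy set where only the first two features separate the classes, the returned mask selects at least one of them and reaches perfect fitness; the paper's method differs in using kernel discriminative analysis and OAA SVMs as the fitness backbone.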
Abstract:
Critical to the research of urban morphologists is the availability of historical records that document the urban transformation of the study area. However, thus far little work has been done towards an empirical approach to the validation of archival data in this field. Outlined in this paper, therefore, is a new methodology for validating the accuracy of archival records and mapping data, accrued through the process of urban morphological research, so as to establish a reliable platform from which analysis can proceed. The paper particularly addresses the problems of inaccuracies in existing curated historical information, as well as errors in archival research by student assistants, which together give rise to unacceptable levels of uncertainty in the documentation. The paper discusses the problems relating to the reliability of historical information, demonstrates the importance of data verification in urban morphological research, and proposes a rigorous method for objective testing of collected archival data through the use of qualitative data analysis software.
Abstract:
BIM as a suite of technologies has been enabled by significant improvements in IT infrastructure, the capabilities of computer hardware and software, the increasing adoption of BIM, and the development of Industry Foundation Classes (IFC), which facilitate the sharing of information between firms. The report highlights the advantages of BIM, particularly the increased utility and speed, better data quality and enhanced fault finding in all construction phases. Additionally, BIM promotes enhanced collaboration and visualisation of data, mainly in the design and construction phases. There are a number of barriers to the effective implementation of BIM. These include, somewhat paradoxically: a single detailed model (which precludes scenarios and the development of detailed alternative designs); the need for three different interoperability standards for effective implementation; added work for the designer, which needs to be recognised and remunerated; and the size and complexity of BIM, which requires significant investment in human capital to realise its full potential. There are also a number of challenges to implementing BIM. The report identifies these as a range of issues concerning IP, liability, risks and contracts, and the authenticity of users. Additionally, implementing BIM requires investment in new technology, skills training and the development of new ways of collaborating. Finally, there are likely to be Trade Practices concerns, as requiring certain technology owned by relatively few firms may limit competition.
Abstract:
This paper proposes a recommendation system that supports process participants in taking risk-informed decisions, with the goal of reducing risks that may arise during process execution. Risk reduction involves decreasing the likelihood and severity of process faults. Given a business process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process and, whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we suggest to the participant the action that minimizes the predicted process risk. Risks are predicted by traversing decision trees generated from the logs of past process executions, which consider process data, involved resources, task durations and other information elements such as task frequencies. When applied in the context of multiple process instances running concurrently, a second technique is employed that uses integer linear programming to compute the optimal assignment of resources to tasks, in order to deal with the interplay between risks relative to different instances. The recommendation system has been implemented as a set of components on top of the YAWL BPM system, and its effectiveness has been evaluated using a real-life scenario in collaboration with risk analysts of a large insurance company. The results, based on a simulation of the real-life scenario and its comparison with the event data provided by the company, show that concurrently executed process instances complete with significantly fewer faults and lower fault severities when the recommendations provided by our recommendation system are taken into account.
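The single-instance recommendation step described above (walk a decision tree learned from past executions to predict the fault probability of each candidate action, then suggest the action with the lowest predicted risk) can be sketched as follows. This is an illustrative toy, not the YAWL-based implementation: the attribute names, probabilities and severity value are invented, and the ILP-based multi-instance resource assignment is omitted.

```python
# Hand-built stand-in for a decision tree mined from execution logs:
# internal nodes test a process attribute, leaves hold the observed
# fault probability for that branch of past executions.
TREE = {
    "attr": "next_task",
    "branches": {
        "manual_review": {"fault_prob": 0.05},
        "auto_approve": {
            "attr": "claim_amount_high",
            "branches": {True: {"fault_prob": 0.40}, False: {"fault_prob": 0.02}},
        },
    },
}

FAULT_SEVERITY = 1000.0  # assumed monetary severity of a fault

def predict(tree, case):
    # Traverse the tree using the case's attribute values until a leaf.
    while "fault_prob" not in tree:
        tree = tree["branches"][case[tree["attr"]]]
    return tree["fault_prob"]

def recommend(case, candidates):
    # Suggest the candidate next task with the lowest predicted risk,
    # where risk = fault probability x fault severity.
    def risk(task):
        return predict(TREE, {**case, "next_task": task}) * FAULT_SEVERITY
    return min(candidates, key=risk)
```

For a high-value claim the sketch steers the participant to manual review (predicted risk 50 vs 400); for a low-value claim it recommends auto-approval. The real system additionally mines the trees from logs and reconciles recommendations across concurrent instances.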
Abstract:
The characterisation of facial expression through landmark-based analysis methods such as FACEM (Pilowsky & Katsikitis, 1994) has a variety of uses in psychiatric and psychological research. In these systems, important structural relationships are extracted from images of facial expressions by the analysis of a pre-defined set of feature points. These relationship measures may then be used, for instance, to assess the degree of variability and similarity between different facial expressions of emotion. FaceXpress is a multimedia software suite that provides a generalised workbench for landmark-based facial emotion analysis and stimulus manipulation. It is a flexible tool that is designed to be specialised at runtime by the user. While FaceXpress has been used to implement the FACEM process, it can also be configured to support any other similar, arbitrary system for quantifying human facial emotion. FaceXpress also implements an integrated set of image processing tools and specialised tools for facial expression stimulus production including facial morphing routines and the generation of expression-representative line drawings from photographs.
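The core idea of landmark-based analysis described above (derive structural relationship measures from a pre-defined set of feature points so that expressions can be compared across images) can be sketched briefly. This is an illustrative toy, not FaceXpress or FACEM code: the landmark names and the choice of inter-ocular distance as the normalisation factor are assumptions.

```python
import math

def distance(p, q):
    # Euclidean distance between two (x, y) landmark coordinates.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def expression_measures(landmarks):
    # Normalise inter-landmark distances by a stable reference length
    # (here, the inter-ocular distance) so measures are comparable
    # across images of different scale.
    norm = distance(landmarks["left_eye_outer"], landmarks["right_eye_outer"])
    return {
        "mouth_width": distance(landmarks["mouth_left"],
                                landmarks["mouth_right"]) / norm,
        "mouth_opening": distance(landmarks["upper_lip"],
                                  landmarks["lower_lip"]) / norm,
    }
```

Measures of this kind, computed over a user-defined landmark scheme, are what a workbench like FaceXpress lets researchers compare across facial expressions of emotion.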