Abstract:
The vibration serviceability limit state is an important design consideration for two-way, suspended concrete floors that is not always well understood by many practicing structural engineers. Although the field of floor vibration has been extensively developed, at present there are no convenient design tools that deal with this problem. Results from this research have enabled the development of a much-needed new method for assessing the vibration serviceability of flat, suspended concrete floors in buildings. This new method has been named the Response Coefficient-Root Function (RCRF) method. Full-scale laboratory tests have been conducted on a post-tensioned floor specimen at Queensland University of Technology's structural laboratory. Special support brackets were fabricated to act as frictionless, pinned connections at the corners of the specimen. A series of static and dynamic tests was performed in the laboratory to obtain the basic material and dynamic properties of the specimen. Finite-element models have been calibrated against data collected from the laboratory experiments. Computational finite-element analysis has been extended to investigate a variety of floor configurations. Field measurements of floors in existing buildings are in good agreement with the computational studies. Results from this parametric investigation have led to the development of a new approach for predicting the design frequencies and accelerations of flat concrete floor structures. The RCRF method is a convenient tool to assist structural engineers in designing for the vibration serviceability limit state of in-situ concrete floor systems.
Abstract:
Objective: Given the increasing popularity of motorcycle riding and the heightened risk of injury or death associated with being a rider, this study explored rider behaviour as a determinant of rider safety and, in particular, the key beliefs and motivations which influence such behaviour. To enhance the effectiveness of future education and training interventions, it is important to understand riders' own views about what influences how they ride. Specifically, this study sought to identify key determinants of riders' behaviour in relation to the social context of riding, including social and identity-related influences relating to the group (group norms and group identity) as well as the self (moral/personal norm and self-identity). Method: Qualitative research was undertaken via group discussions with motorcycle riders (n = 41). Results: The findings revealed that those in the group with which one rides represent an important source of social influence. Also, the motorcyclist (group) identity was associated with a range of beliefs, expectations, and behaviours considered to be normative. Exploration of the construct of personal norm revealed that riders were most cognizant of the "wrong things to do" when riding; among the issues raised was the importance of protective clothing (albeit for the protection of others and, in particular, pillion passengers). Finally, self-identity as a motorcyclist appeared to be important to a rider's self-concept and was likely to influence their on-road behaviour. Conclusion: Overall, the insight provided by the current study may facilitate the development of interventions including rider training as well as public education and mass media messages. The findings suggest that these interventions should incorporate factors associated with the social nature of riding in order to best align them with some of the key beliefs and motivations underpinning riders' on-road behaviours.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open-source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and the predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into a 2D rank-sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
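The Rank Sum idea outlined above can be sketched in a few lines. The code below is an illustrative reconstruction from the abstract's one-sentence description only: per-feature, per-class bin densities are estimated from training data, and a test point is assigned to the class whose density rank, summed over features, is highest. Function names, the binning scheme, and tie handling are assumptions, not the thesis's actual implementation.

```python
import numpy as np

def fit_bin_densities(X, y, n_bins=10):
    """For each feature, estimate per-class bin densities (illustrative sketch)."""
    classes = np.unique(y)
    edges, dens = [], []
    for j in range(X.shape[1]):
        e = np.histogram_bin_edges(X[:, j], bins=n_bins)
        edges.append(e)
        # density of each class within each bin of feature j
        d = np.array([np.histogram(X[y == c, j], bins=e, density=True)[0]
                      for c in classes])
        dens.append(d)
    return classes, edges, dens

def rank_sum_predict(x, classes, edges, dens):
    """Sum, over features, each class's density rank in the bin containing x."""
    totals = np.zeros(len(classes))
    for j, (e, d) in enumerate(zip(edges, dens)):
        # locate the bin of x[j]; clip so out-of-range values use an edge bin
        b = np.clip(np.searchsorted(e, x[j], side="right") - 1, 0, len(e) - 2)
        # rank classes by density in this bin (0 = lowest density)
        totals += np.argsort(np.argsort(d[:, b]))
    return classes[np.argmax(totals)]
```

On well-separated data the class with the higher density in a point's bins accumulates the higher rank on most features, so the sum of ranks acts as a simple vote across software metrics.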
Abstract:
Background and purpose: The appropriate fixation method for hemiarthroplasty of the hip as it relates to implant survivorship and patient mortality is a matter of ongoing debate. We examined the influence of fixation method on revision rate and mortality. Methods: We analyzed approximately 25,000 hemiarthroplasty cases from the AOA National Joint Replacement Registry. Deaths at 1 day, 1 week, 1 month, and 1 year were compared for all patients and among subgroups based on implant type. Results: Patients treated with cemented monoblock hemiarthroplasty had a 1.7-times higher day-1 mortality compared to uncemented monoblock components (p < 0.001). This finding was reversed by 1 week, 1 month, and 1 year after surgery (p < 0.001). Modular hemiarthroplasties did not reveal a difference in mortality between fixation methods at any time point. Interpretation: This study shows lower (or similar) overall mortality with cemented hemiarthroplasty of the hip.
Abstract:
Spatially offset Raman spectroscopy (SORS) is a powerful new technique for the non-invasive detection and identification of concealed substances and drugs. Here, we demonstrate the SORS technique in several scenarios that are relevant to customs screening, postal screening, drug detection and forensics applications. The examples include analysis of a multi-layered postal package to identify a concealed substance; identification of an antibiotic capsule inside its plastic blister pack; analysis of an envelope containing a powder; and identification of a drug dissolved in a clear solvent, contained in a non-transparent plastic bottle. As well as providing practical examples of SORS, the results highlight several considerations regarding the use of SORS in the field, including the advantages of different analysis geometries and the ability to tailor instrument parameters and optics to suit different types of packages and samples. We also discuss the features and benefits of SORS in relation to existing Raman techniques, including confocal microscopy, wide-area illumination, and conventional backscattered Raman spectroscopy. The results will contribute to the recognition of SORS as a promising method for the rapid, chemically specific analysis and detection of drugs and pharmaceuticals.
Abstract:
While the importance of literature studies in the IS discipline is well recognized, little attention has been paid to the underlying structure and method of conducting effective literature reviews. Despite the fact that literature is often used to refine the research context and direct the pathways for successful research outcomes, there is very little evidence of the use of resource management tools to support the literature review process. In this paper we aim to advance the way in which literature studies in Information Systems are conducted, by proposing a systematic, pre-defined and tool-supported method to extract, analyse and report literature. This paper presents how to best identify relevant IS papers to review within a feasible and justifiable scope, how to extract relevant content from identified papers, how to synthesise and analyse the findings of a literature review, and how to effectively write and present the results of a literature review. The paper is specifically targeted towards novice IS researchers who seek to conduct a systematic, detailed literature review in a focused domain. Specific contributions of our method are extensive tool support, the identification of appropriate papers including primary and secondary paper sets, and a pre-codification scheme. We use a literature study on shared services as an illustrative example to present the proposed approach.
Researching employment relations: a self-reflexive analysis of a multi-method, school-based project
Abstract:
Drawing on primary data and adjunct material, this article adopts a critical self-reflexive approach to a three-year, Australian Research Council-funded project that explored themes around 'employment citizenship' for high school students in Queensland. The article addresses three overlapping areas that reflect some of the central dilemmas and challenges arising through the project: consent in the context of research ethics, questionnaire administration in schools, and focus group research practice. It contributes to the broader methodological literature addressing research with young people by canvassing pragmatic suggestions for future school-based research, and research addressing adolescent employment.
Abstract:
Background: The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Due to the problems caused by the uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment cannot be directly applied to the whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has successfully been applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results: In this paper, the DL method is used to analyze the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions: The present method provides a new way for recovering the phylogeny of large dsDNA viruses and parvoviruses, and also offers some insights into the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used for recovering the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses with a much smaller genome size.
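The family of alignment-free approaches mentioned above can be illustrated with a minimal sketch: represent each sequence by a k-mer (word) frequency vector and compare vectors with a cosine-based distance. This is a generic composition-vector-style example, not the authors' dynamical language (DL) method; the k value, alphabet, and distance choice are assumptions for illustration only.

```python
from itertools import product
import numpy as np

def kmer_freq(seq, k=3, alphabet="ACGT"):
    """Normalized k-mer frequency vector for one sequence (illustrative)."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    idx = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if w in idx:          # skip words with non-alphabet characters
            v[idx[w]] += 1
    total = v.sum()
    return v / total if total else v

def af_distance(a, b, k=3):
    """Alignment-free distance: 1 - cosine similarity of k-mer vectors."""
    va, vb = kmer_freq(a, k), kmer_freq(b, k)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return 1.0 - (va @ vb / denom if denom else 0.0)
```

A pairwise distance matrix built this way can be fed to a standard tree-building method (e.g. neighbor joining) to recover a phylogeny without any multiple alignment, which is the appeal of this class of methods for highly divergent viral genomes.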
Analytical Solution for the Time-Fractional Telegraph Equation by the Method of Separating Variables