271 results for Human Factor Analysis
Abstract:
In order to drive sustainable financial profitability, service firms make significant investments in creating service environments that consumers will prefer over those of their competitors. To date, servicescape research has focused largely on consumers’ emotional and physiological responses to servicescape attributes, rather than taking a holistic view of how consumers cognitively interpret servicescapes. This thesis argues that consumers cognitively ascribe symbolic meanings to servicescapes and then evaluate whether those meanings are congruent with their sense of Self in order to form a preference for a servicescape. Consequently, this thesis takes a Self Theory approach to servicescape symbolism to address the following broad research question: How do ascribed symbolic meanings influence servicescape preference? Using a three-study, mixed-method approach, this thesis investigates the symbolic meanings consumers ascribe to servicescapes and empirically tests whether the joint effects of congruence between the consumer’s Self and the symbolic meanings ascribed to servicescapes influence consumers’ servicescape preference. First, Study One identifies the symbolic meanings ascribed to salient servicescape attributes using a combination of repertory tests and laddering techniques within 19 semi-structured individual depth interviews. Study Two modifies an existing scale to create a symbolic servicescape meaning scale that measures the symbolic meanings ascribed to servicescapes. Finally, Study Three utilises the Self-Congruity Model to empirically examine the joint effects of consumer Self and servicescape on consumers’ preference for servicescapes. Using polynomial regression with response surface analysis, 14 joint-effect models demonstrate that both Self-Servicescape congruity and incongruity influence consumers’ preference for servicescapes. Combined, the findings of the three studies suggest that the symbolic meanings ascribed to servicescapes, and their (in)congruities with consumers’ sense of Self, can be used to predict consumers’ preferences for servicescapes. These findings offer several key theoretical and practical contributions to services marketing.
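As a rough illustration of the analysis named in Study Three, the sketch below fits one hypothetical joint-effect model with polynomial regression and derives the usual response-surface slope and curvature terms along the lines of congruence and incongruence. The data file and column names are placeholders, not the thesis’ actual measures.

```python
# Hypothetical sketch: polynomial regression with response surface analysis
# for a self-congruity joint-effect model. Column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("servicescape_survey.csv")   # hypothetical data file

# Mean-centre predictors before forming quadratic terms.
df["S"] = df["self_rating"] - df["self_rating"].mean()     # consumer Self
df["P"] = df["scape_rating"] - df["scape_rating"].mean()   # ascribed servicescape meaning

model = smf.ols("preference ~ S + P + I(S**2) + I(S*P) + I(P**2)", data=df).fit()
b0, b1, b2, b3, b4, b5 = model.params   # intercept and the five surface coefficients

# Response-surface quantities along the congruence (S = P) and incongruence (S = -P) lines.
a1, a2 = b1 + b2, b3 + b4 + b5          # slope / curvature where Self matches servicescape
a3, a4 = b1 - b2, b3 - b4 + b5          # slope / curvature where they diverge
print(model.summary())
print(dict(a1=a1, a2=a2, a3=a3, a4=a4))
```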
Abstract:
The availability of health information is rapidly increasing; its expansion and proliferation is inevitable. At the same time, the breeding of health information silos continues unabated. Information security and privacy concerns are therefore major barriers in the eHealth socio-ecosystem. We propose Information Accountability as a measurable human factor that should mitigate, and where possible eliminate, security concerns. Information accountability measures would be practicable and feasible if legislative requirements are also embedded. In this context, information accountability constitutes a key component for the development of effective information technology requirements for health information systems. Our conceptual approach to measuring human factors related to information accountability in eHealth is presented in this paper, along with its limitations.
Abstract:
The latest paradigm shift in government, termed Transformational Government, puts the citizen at the centre of attention. Including citizens in the design of online one-stop portals can help governmental organisations become more customer-focussed. This study describes the initial efforts of an Australian state government to develop an information architecture for structuring the content of its future one-stop portal. To this end, card sorting exercises were conducted and analysed, utilising contemporary approaches found in academic and non-scientific literature. This paper describes the findings of the card sorting exercises in this particular case and discusses the suitability of the applied approaches in general, distinguishing between non-statistical, statistical, and hybrid approaches. On the one hand, this paper contributes to academia by describing the application of different card sorting approaches and discussing their strengths and weaknesses. On the other hand, it contributes to practice by explaining the approach taken by the authors’ research partner to develop a customer-focussed governmental one-stop portal, thereby providing decision support for practitioners with regard to the different analysis methods that can complement recent approaches in Transformational Government.
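One common statistical treatment of card-sort data is hierarchical clustering of an item co-occurrence matrix. The sketch below only illustrates that generic approach on made-up data; it is not necessarily the specific method applied in the study above.

```python
# Illustrative card-sort analysis: build an item co-occurrence matrix from
# participants' groupings, convert it to distances, and cluster hierarchically.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

items = ["births", "licences", "payments", "schools", "transport"]  # hypothetical cards
# Each participant's sort is a list of groups (hypothetical example data).
sorts = [
    [["births", "licences"], ["payments"], ["schools", "transport"]],
    [["births", "licences", "payments"], ["schools", "transport"]],
]

n = len(items)
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a in group:
            for b in group:
                co[items.index(a), items.index(b)] += 1

dissimilarity = 1 - co / len(sorts)       # items sorted together often are "close"
np.fill_diagonal(dissimilarity, 0)
tree = linkage(squareform(dissimilarity), method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")   # candidate top-level categories
print(dict(zip(items, clusters)))
```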
Abstract:
There is no doubt that social engineering plays a vital role in compromising most security defenses and in attacks on people, organizations, companies, and even governments. It is the art of deceiving and tricking people into revealing critical information or performing an action that benefits the attacker in some way. Fraudulent and deceptive actors use social engineering traps and tactics delivered through information technology such as e-mail, social networks, web sites, and applications to trick victims into obeying them, accepting threats, and falling victim to crimes and attacks such as phishing, sexual abuse, financial abuse, identity theft, impersonation, and physical crime, among many other forms of attack. Although organizations, researchers, practitioners, and lawyers recognize the severe risk of social engineering-based threats, there is a severe lack of understanding and control of such threats. Part of the problem is the unclear concept of social engineering itself, as well as the complexity of human behavior in approaching, accepting, and failing to recognize threats or the deception behind them. The aim of this paper is to define social engineering by drawing on theories from related disciplines such as psychology, sociology, information technology, marketing, and behaviourism. We hope this work helps researchers, practitioners, lawyers, and other decision makers gain a fuller picture of social engineering and thereby open new directions of collaboration toward detecting and controlling it.
Abstract:
The internationalisation process of firms has attracted much research interest since the 1970s. A significant gap remains, however, in studies focusing primarily on the pre-internationalisation behaviour of firms. This paper proposes incorporating a pre-internationalisation phase into the traditional Uppsala model of firm internationalisation to address the issue of export readiness. Through an extensive literature review, the concepts fundamental to the ability of an Uppsala-type firm to begin internationalisation through an export entry mode are identified: exposure to stimuli factors, the attitudinal commitment of decision makers towards exporting, the firm’s resource capabilities, and the moderating effect of lateral rigidity. The concept of export readiness is operationalised through the construction of an export readiness index (ERI) using exploratory and confirmatory factor analysis. The index is then applied to representative cases and tested using logistic regression to establish its validity as a diagnostic tool. The proposed ERI not only offers a more practical approach to analysing firms’ export readiness but also has major public policy implications as a possible tool for government export promotion agencies.
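The two analytical steps named above, factor analysis of pre-internationalisation items followed by logistic regression on the resulting index, might look roughly like the sketch below. The data file, item names, factor count and composite scoring are illustrative assumptions rather than the paper’s actual procedure.

```python
# Hypothetical sketch: exploratory factor analysis of survey items, a simple
# composite export readiness index (ERI), and a logistic regression check of
# whether the ERI separates exporters from non-exporters.
import pandas as pd
import statsmodels.api as sm
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("pre_internationalisation_survey.csv")   # hypothetical file
items = [c for c in df.columns if c.startswith("item_")]  # hypothetical item columns

fa = FactorAnalyzer(n_factors=4, rotation="varimax")      # e.g. stimuli, attitude,
fa.fit(df[items])                                          # resources, lateral rigidity
scores = fa.transform(df[items])
df["ERI"] = scores.mean(axis=1)    # crude unweighted composite, for illustration only

logit = sm.Logit(df["is_exporter"], sm.add_constant(df[["ERI"]])).fit()
print(logit.summary())
```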
Abstract:
Engineering design processes are necessary to attain the requisite standards of integrity for high-assurance safety-related systems. Human factors design initiatives can additionally provide critical insights that parameterise their development. Unfortunately, the popular perception of human factors as a “forced marriage” between engineering and psychology often provokes views in which the ‘human factor’ is perceived as a threat to systems design. Some popular performance-based standards for developing safety-related systems advocate identifying and managing human factors throughout the system lifecycle, yet they tend to fall short in their guidance on applying human factors methods and tools, let alone on how the outputs generated can be integrated into the various stages of the design process. This case study describes a project that converged engineering with human factors to develop a safety argument for new low-cost railway level crossing technology for system-wide implementation in Australia. The paper joins the perspectives of a software engineer and a cognitive psychologist and their involvement in the project over two years of collaborative work to develop a safety argument for low-cost level crossing technology. Safety and reliability requirements were informed by applying human factors analytical tools that supported the evaluation and quantification of human reliability where users interfaced with the technology. The project team was confronted with significant challenges in cross-disciplinary engagement, particularly the complexities of dealing with incongruences in disciplinary language. They were also encouraged to think ‘outside the box’ about how users of a system interpreted system states and behaviour. Importantly, some of these states, while considered safe within the boundary of the constituent systems that implemented safety-related functions, could actually lead users to engage in deviant behaviour. Psychology explained how user compliance could be eroded to levels that effectively undermined the risk reduction afforded by the systems. By linking the engineering and psychology disciplines, overall safety performance was improved through technical requirements and design decisions that minimised the system states and behaviours leading to user deviancy. As a commentary on the utility of transdisciplinary collaboration for technical specification, the processes used to bridge the two disciplines are conceptualised in a graphical model.
Abstract:
This study reports on the use of the Manchester Driver Behaviour Questionnaire (DBQ) to examine the self-reported driving behaviours of a large sample of Australian fleet drivers (N = 3414). Surveys were completed by employees before they commenced a one-day safety workshop intervention. Factor analysis identified a three-factor solution similar to previous research, comprising: (a) errors, (b) highway-code violations and (c) aggressive driving violations. Two items traditionally associated with highway-code violations were found to be associated with aggressive driving behaviours in the current sample. Multivariate analyses revealed that exposure to the road, errors and self-reported offences predicted crashes at work in the last 12 months, while gender, highway-code violations and crashes predicted offences incurred while at work. Importantly, those who received more fines at work were at an increased risk of crashing the work vehicle. Overall, however, the DBQ demonstrated limited efficacy in predicting these two outcomes. This paper outlines the major findings of the study with regard to identifying and predicting aberrant driving behaviours, and highlights implications for the future use of the DBQ within fleet settings.
Abstract:
Meta-analysis is a method to obtain a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations to stabilise the variance. We note the special situation of multiple category prevalence, and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and the example of multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
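For readers unfamiliar with the transformation, the sketch below shows a minimal fixed-effect pooling of prevalence under the Freeman–Tukey double arcsine transform with Miller’s back-transformation. The counts are invented, and MetaXL’s quality-effects and multiple-category machinery are not reproduced here.

```python
# Minimal sketch: pool prevalence estimates on the double arcsine scale,
# then back-transform the pooled value to a proportion.
import numpy as np

cases = np.array([12, 30, 7])       # hypothetical numbers of cases per study
n     = np.array([240, 510, 95])    # hypothetical sample sizes

# Freeman-Tukey double arcsine transform; its variance is approximately 1/(n + 0.5).
t   = np.arcsin(np.sqrt(cases / (n + 1))) + np.arcsin(np.sqrt((cases + 1) / (n + 1)))
var = 1.0 / (n + 0.5)

w     = 1.0 / var                        # inverse-variance (fixed-effect) weights
t_bar = np.sum(w * t) / np.sum(w)
n_bar = len(n) / np.sum(1.0 / n)         # harmonic mean, one common choice here

# Miller's (1978) back-transformation to the prevalence scale.
sin_t, cos_t = np.sin(t_bar), np.cos(t_bar)
p_pooled = 0.5 * (1 - np.sign(cos_t) *
                  np.sqrt(1 - (sin_t + (sin_t - 1 / sin_t) / n_bar) ** 2))
print(round(float(p_pooled), 4))
```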
Abstract:
This is a methodological paper describing when and how manifest items dropped from a latent construct measurement model (e.g., factor analysis) can be retained for additional analysis. We present protocols for assessing whether items should be retained in the measurement model, for evaluating dropped items as potential variables separate from the latent construct, and for post hoc analyses that can be conducted using all retained (manifest or latent) variables. The protocols are then applied to data relating to the impact of the NAPLAN test. The variables examined are teachers’ achievement goal orientations and teachers’ perceptions of the impact of the test on curriculum and pedagogy. It is suggested that five attributes be considered before retaining dropped manifest items for additional analyses. (1) Items can be retained when employed in service of an established or hypothesised theoretical model. (2) Items should only be retained if sufficient variance is present in the data set. (3) Items can be retained when they provide a rational segregation of the data set into subsamples (e.g., a consensus measure). (4) The value of retaining items can be assessed using latent class analysis or latent mean analysis. (5) Items should be retained only when post hoc analyses with these items produce significant and substantive results. These exploratory strategies are presented so that other researchers using survey instruments might explore their data in similar and more innovative ways. Finally, suggestions for future use are provided.
Abstract:
There are currently 23,500 level crossings in Australia, broadly divided into active level crossings with flashing lights and passive level crossings controlled by stop and give-way signs. The current strategy is to upgrade passive level crossings with active controls annually within a given budget, but the 5,900 public passive crossings are too numerous for all of them to be upgraded. The rail industry is therefore considering alternative options to treat more crossings. One option is to use lower-cost equipment with a reduced safety integrity level but with a design that fails to a safe state: if the system cannot determine whether a train is approaching, the crossing reverts to a passive crossing. This is implemented by having a STOP sign come into place in front of the flashing lights. While such a design is considered safe in terms of engineering, questions remain about the human factors. To evaluate whether this approach is safe, we conducted a driving simulator study in which participants were familiarised with the new active crossing before the signage was changed to a passive crossing. Our results show that drivers treated the new crossing as an active crossing once the novelty effect had passed. While most participants did not experience difficulties when the crossing was turned back into a passive crossing, a number of participants had difficulty stopping in time at their first encounter with the passive crossing. Worse, a number of drivers never realised the signage had changed, highlighting the link between the decision to brake and stop at an active crossing and the flashing lights. These results show the potential human factors issues of changing an active crossing to a passive crossing when train detection fails.
Abstract:
Human factors such as distraction, fatigue, alcohol and drug use are generally ignored in car-following (CF) models. This omission overestimates driver capability and leaves most CF models unable to realistically explain human driving behaviors. This paper proposes a novel car-following modeling framework that introduces the difficulty of the driving task, measured as the dynamic interaction between driving task demand and driver capability. Task difficulty is formulated based on the well-known Task Capability Interface (TCI) model, which explains the motivations behind a driver’s decision making. The proposed method is applied to enhance two popular CF models, Gipps’ model and the IDM, yielding models named TDGipps and TDIDM respectively. The behavioral soundness of TDGipps and TDIDM is discussed and their stability is analyzed. The enhanced models are calibrated with vehicle trajectory data and validated against both regular CF behavior and CF behavior influenced by human factors (here, distraction caused by hand-held mobile phone conversation). Both models perform better than their predecessors, especially in the presence of human factors.
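Because the paper’s exact TDIDM formulation is not given here, the following sketch only illustrates the general idea: the standard IDM acceleration function plus a crude task-difficulty ratio (demand over capability) that, purely as an assumption, inflates the desired time headway when demand exceeds capability.

```python
# Sketch of the Intelligent Driver Model (IDM) with an illustrative
# task-difficulty modifier in the spirit of the TCI-based enhancement above.
import math

def idm_acceleration(v, gap, dv, *, v0=30.0, T=1.5, a_max=1.4, b=2.0,
                     s0=2.0, delta=4, task_demand=1.0, capability=1.0):
    """v: speed (m/s), gap: spacing to the leader (m), dv: closing speed (m/s)."""
    difficulty = task_demand / capability     # >1 means demand exceeds capability
    T_eff = T * max(difficulty, 1.0)          # assumption: a distracted driver behaves
                                              # as if a longer time headway is needed
    s_star = s0 + v * T_eff + v * dv / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: a distracted follower (reduced capability) closing on its leader.
print(idm_acceleration(v=20.0, gap=25.0, dv=2.0, capability=0.7))
```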
Abstract:
Objective. To undertake a systematic whole-genome screen to identify regions exhibiting genetic linkage to rheumatoid arthritis (RA). Methods. Two hundred fifty-two RA-affected sibling pairs from 182 UK families were genotyped using 365 highly informative microsatellite markers. Microsatellite genotyping was performed using fluorescent polymerase chain reaction primers and semiautomated DNA sequencing technology. Linkage analysis was undertaken using MAPMAKER/SIBS for single-point and multipoint analysis. Results. Significant linkage (maximum logarithm of odds score 4.7 [P = 0.000003] at marker D6S276, 1 cM from HLA-DRB1) was identified around the major histocompatibility complex (MHC) region on chromosome 6. Suggestive linkage (P < 7.4 × 10-4) was identified on chromosome 6q by single- and multipoint analysis. Ten other sites of nominal linkage (P < 0.05) were identified on chromosomes 3p, 4q, 7p, 2 regions of 10q, 2 regions of 14q, 16p, 21q, and Xq by single-point analysis and at 3 sites (1q, 14q, and 14q) by multipoint analysis. Conclusion. Linkage to the MHC region was confirmed. Eleven non-HLA regions demonstrated evidence of suggestive or nominal linkage, but none reached the genome-wide threshold for significant linkage (P = 2.2 × 10-5). Results of previous genome screens have suggested that 6 of these regions may be involved in RA susceptibility.
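For reference, the logarithm of odds (LOD) scores quoted above are base-10 log-likelihood ratios comparing the evidence for linkage against the null of no linkage; in generic two-point form,

$$\mathrm{LOD} = \log_{10}\frac{L(\hat{\theta})}{L(\theta = \tfrac{1}{2})},$$

where $\theta$ is the recombination fraction between marker and disease locus. The suggestive (P < 7.4 × 10-4) and significant (P = 2.2 × 10-5) thresholds cited correspond to the widely used Lander–Kruglyak genome-wide criteria for sib-pair linkage studies.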
Abstract:
It has been 10 years since the seminal paper by Morrison and colleagues reporting the association of alleles of the vitamin D receptor with bone density [1], a paper which arguably kick-started the study of osteoporosis genetics. Since that report there have been literally thousands of osteoporosis genetic studies published, and large numbers of genes have been reported to be associated with the condition [2]. Although some of these reported associations are undoubtedly true, this snow-storm of papers and abstracts has clouded the field to such an extent that it is very difficult to be certain of the veracity of most genetic associations reported to date. The field needs to take stock and reconsider the best way forward, taking into account the biology of skeletal development and technological and statistical advances in human genetics, before more effort and money is wasted on continuing a process whose primary achievement could be said to be a massive paper mountain. I propose in this review that the primary reasons for the paucity of success in osteoporosis genetics have been:
• the absence of a major gene effect on bone mineral density (BMD), the most commonly studied bone phenotype;
• failure to consider issues such as genetic heterogeneity, gene–environment interaction, and gene–gene interaction;
• small sample sizes and over-optimistic data interpretation; and
• incomplete assessment of the genetic variation in candidate genes studied.
Abstract:
This review focuses on the impact of chemometrics for resolving data sets collected from investigations of the interactions of small molecules with biopolymers. Such samples have been analyzed with various instrumental techniques, including fluorescence, ultraviolet–visible spectroscopy, and voltammetry. The impact of two powerful and demonstrably useful multivariate methods for the resolution of complex data, multivariate curve resolution–alternating least squares (MCR–ALS) and parallel factor analysis (PARAFAC), is highlighted through applications involving the interactions of small molecules with the biopolymers serum albumin and deoxyribonucleic acid. The outcomes illustrate that the chemometric methods extracted significant information that was unattainable by simple, univariate data analysis. In addition, although the techniques used to collect the reviewed data were confined to ultraviolet–visible spectroscopy, fluorescence spectroscopy, circular dichroism, and voltammetry, data profiles produced by other techniques may also be processed. Topics considered include binding sites and modes, cooperative and competitive small-molecule binding, kinetics and thermodynamics of ligand binding, and the folding and unfolding of biopolymers. The MCR–ALS and PARAFAC applications reviewed were primarily published between 2008 and 2013.
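To make the MCR–ALS idea concrete, here is a bare-bones alternating least squares loop for a bilinear data matrix D ≈ C Sᵀ with non-negativity as the only constraint. Published MCR–ALS implementations add closure, unimodality and selectivity constraints and proper initial estimates (e.g., from evolving factor analysis or SIMPLISMA), all of which are omitted in this assumed sketch.

```python
# Bare-bones MCR-ALS: alternately solve for spectra S and concentration
# profiles C under a bilinear model D ~ C @ S.T, clipping to non-negative values.
import numpy as np

def mcr_als(D, C0, n_iter=200):
    """D: data matrix (samples x wavelengths); C0: initial concentration guesses."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0].T      # spectra given concentrations
        S = np.clip(S, 0, None)                         # non-negativity constraint
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T    # concentrations given spectra
        C = np.clip(C, 0, None)
    lack_of_fit = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
    return C, S, lack_of_fit
```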