896 results for Trustworthiness judgment
Abstract:
Guided by theory in both the trust and leadership domains, the overarching aim of this thesis was to answer a fundamental question: namely, how and when does trust-building between leaders and followers enhance leader-member exchange (LMX) development and organisational trust? Although trust is considered to be at the crux of the leader-follower relationship, surprisingly little theoretical or empirical attention has been devoted to understanding the precise nature of this relationship. By integrating both a typology of trustworthy behaviour and a process model of trust development with LMX theory, study one developed and tested a new model of LMX development with leader-follower trust-building as the primary mechanism. In a three-wave cross-lagged design, 294 student dyads in a business simulation completed measures of trust perceptions and LMX across the first 6 months of the LMX relationship. Trust-building was found to account for unexplained variance in the LMX construct over time, while controlling for initial relationship quality, thus confirming the critical role of the trust-building process in LMX development. The strongest evidence was found for the role of integrity-based trust-building behaviour, albeit only when such behaviour was not attributed to insincere motives. The results for ability- and benevolence-based trustworthy behaviour revealed valuable insights into the developmental nature of trustworthiness perceptions within LMX relationships. Thus, the pattern of results in study one provided a more comprehensive and nuanced understanding of the dynamic interplay between trust and LMX. In study two, leader trust-building was investigated cross-sectionally within an organisational sample of 201 employees. The central aim of this study was to investigate whether leader trust-building within leader-follower relationships could be leveraged for organisational trust. As expected, the trust-building process instigated by members in study one was replicated for leaders in study two. In addition, the results were most consistent for benevolence-based trust-building, whereas both integrity- and ability-based trust-building were moderated by the position of the leader within the organisation's hierarchy. Overall, the findings of this thesis shed considerable light on the richness of trusting perceptions in organisations, and the critical role of trust-building in LMX development and organisational trust.
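A minimal sketch of the cross-lagged logic this design implies, using simulated data and hypothetical variable names (the thesis's actual model specification is not reproduced here): later LMX is regressed on earlier trust-building while controlling for initial LMX, so trust-building can only explain variance left over after initial relationship quality is accounted for.

```python
# Sketch of one cross-lagged regression step (simulated data; variable
# names are hypothetical). Predicting time-3 LMX from time-2
# trust-building while controlling for time-1 LMX mirrors the idea of
# trust-building accounting for otherwise unexplained variance in LMX.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 294  # matches the reported number of student dyads
df = pd.DataFrame({
    "lmx_t1": rng.normal(size=n),
    "trust_building_t2": rng.normal(size=n),
})
# Simulated outcome: stability in LMX plus a trust-building effect.
df["lmx_t3"] = (0.5 * df["lmx_t1"]
                + 0.3 * df["trust_building_t2"]
                + rng.normal(scale=0.5, size=n))

model = smf.ols("lmx_t3 ~ lmx_t1 + trust_building_t2", data=df).fit()
print(model.params)  # trust_building_t2 coefficient = cross-lagged effect
```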
Abstract:
Trust is a critical component of business-to-consumer (B2C) e-Commerce success. In the absence of the typical environmental cues that consumers use to assess vendor trustworthiness in the offline retail context, online consumers often rely on trust triggers embedded within e-Commerce websites to help establish sufficient trust to make an online purchase. This paper presents and discusses the results of a study that took an initial look at the extent to which the context or manner in which trust triggers are evaluated influences the importance attributed to individual triggers.
Abstract:
Projection of a high-dimensional dataset onto a two-dimensional space is a useful tool to visualise structures and relationships in the dataset. However, a single two-dimensional visualisation may not display all the intrinsic structure. Therefore, hierarchical/multi-level visualisation methods have been used to extract a more detailed understanding of the data. Here we propose a multi-level Gaussian process latent variable model (MLGPLVM). MLGPLVM works by segmenting data (e.g. with K-means, a Gaussian mixture model or interactive clustering) in the visualisation space and then fitting a visualisation model to each subset. To measure the quality of multi-level visualisation (with respect to parent and child models), metrics such as trustworthiness, continuity, mean relative rank errors, visualisation distance distortion and the negative log-likelihood per point are used. We evaluate the MLGPLVM approach on the 'Oil Flow' dataset and a dataset of protein electrostatic potentials for the 'Major Histocompatibility Complex (MHC) class I' of humans. In both cases, visual observation and the quantitative quality measures show better visualisation at lower levels.
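As a rough, self-contained illustration of the multi-level idea and of the trustworthiness metric named above, the sketch below segments a global two-dimensional embedding with K-means and fits a separate child embedding per cluster, scoring parent and children with scikit-learn's trustworthiness measure. PCA stands in for the GP-LVM purely to keep the example short; the actual MLGPLVM is not reproduced here.

```python
# Two-level visualisation sketch scored with the trustworthiness metric.
# PCA is a stand-in for the GP-LVM; the structure (segment the
# visualisation space, fit a child model per subset) follows the
# multi-level scheme described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))  # placeholder high-dimensional data

# Parent model: one global 2-D projection.
Z = PCA(n_components=2).fit_transform(X)
print("parent trustworthiness:", trustworthiness(X, Z, n_neighbors=10))

# Child models: segment in the visualisation space, one model per subset.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
for k in range(3):
    Xk = X[labels == k]
    Zk = PCA(n_components=2).fit_transform(Xk)
    print(f"child {k} trustworthiness:", trustworthiness(Xk, Zk, n_neighbors=10))
```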
Abstract:
Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label that will visually summarise and allow interrogation of key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated into the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby hopefully improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation. When integrated into the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can be either a standard ISO-compliant metadata record supplied with the dataset or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality that will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
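Purely by way of illustration, a facet-availability check of the kind the label generation implies might look as follows; the field names are invented here and do not reflect GeoViQua's actual metadata encoding.

```python
# Hypothetical sketch: derive per-facet availability flags for a GEO
# label from a parsed metadata record. Field names are invented for
# illustration; GeoViQua's actual encoding differs.
FACETS = [
    "producer_profile", "producer_comments", "standards_compliance",
    "community_advice", "ratings", "citations",
    "expert_reviews", "quantitative_quality",
]

def facet_availability(metadata: dict) -> dict:
    """Flag each of the eight informational facets as available or not,
    based on whether the record carries any content for it."""
    return {facet: bool(metadata.get(facet)) for facet in FACETS}

record = {
    "producer_profile": {"name": "Example Agency"},
    "standards_compliance": ["ISO 19115"],
    "ratings": [],  # present but empty, so flagged unavailable
}
print(facet_availability(record))
```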
Abstract:
This article explores the implications of how US family physicians make decisions about ordering diagnostic tests for their patients. The data are based on a study of 256 physicians interviewed after viewing a video vignette of a presenting patient. The qualitative analysis of 778 statements relating to trustworthiness of evidence for their decision making, the use of any kind of technology, and diagnostic testing suggests a range of internal and external constraints on physician decision making. Test-ordering for family physicians in the United States is significantly influenced both by hidden cognitive processes related to the physician's calculation of patient resources and by a health insurance system that requires certain types of evidence in order to permit further tests or particular interventions. The need for physicians to meet multiple forms of proof that may not always relate to relevant treatment delays a diagnosis and treatment plan agreed not only by the physician and patient but also by the insurance company. The result is a patient journey made up of stuttering steps to a confirmed diagnosis and treatment, undermining patient-centred practice, compromising patient care, constraining physician autonomy and creating additional expense.
Abstract:
Purpose. The purpose of this study was to evaluate the longitudinal changes in ocular physiology, tear film characteristics, and symptomatology experienced by neophyte silicone hydrogel (SiH) contact lens wearers in a daily-wear compared with a continuous-wear modality, and with the different commercially available lenses, over an 18-month period. Methods. Forty-five neophyte subjects were enrolled in the study and randomly assigned to wear one of two SiH materials, lotrafilcon A or balafilcon A lenses, on either a daily-wear (LDW; BDW) or continuous-wear (LCW; BCW) basis. Additionally, a group of non-contact-lens-wearing subjects (control group) was recruited and followed over the same study period. Objective and subjective grading of ocular physiology was carried out together with measurements of tear meniscus height (TMH) and noninvasive tear breakup time (NITBUT). Subjects also subjectively rated symptoms and judgments associated with lens wear. After initial screening, subsequent measurements were taken after 1, 3, 6, 12, and 18 months. Results. Subjective and objective grading of ocular physiology revealed a small increase in bulbar, limbal, and palpebral hyperemia as well as corneal staining over time with both lens materials and regimes of wear (p < 0.05). No significant changes in NITBUT or TMH were found (p > 0.05). Subjective symptoms and judgments were not material- or modality-specific. Conclusions. Daily and continuous wear of SiH contact lenses induced small but statistically significant changes in ocular physiology and symptomatology. Clinical measures of tear film characteristics were unaffected by lens wear. Both materials and regimes of wear showed similar clinical performance. Long-term SiH contact lens wear is shown to be a successful option for patients.
Abstract:
The rhythm created by spacing a series of brief tones in a regular pattern can be disguised by interleaving identical distractors at irregular intervals. The disguised rhythm can be unmasked if the distractors are allocated to a separate stream from the rhythm by integration with temporally overlapping captors. Listeners identified which of two rhythms was presented, and the accuracy and rated clarity of their judgments were used to estimate the fusion of the distractors and captors. The extent of fusion depended primarily on onset asynchrony and degree of temporal overlap. Harmonic relations had some influence, but only an extreme difference in spatial location was effective (dichotic presentation). Both preattentive and attentionally driven processes governed performance.
Abstract:
PURPOSE: The Bonferroni correction adjusts probability (p) values to account for the increased risk of a type I error when making multiple statistical tests. The routine use of this test has been criticised as deleterious to sound statistical judgment, as testing the wrong hypothesis, and as reducing the chance of a type I error only at the expense of an increased chance of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing while others used the method uncritically, with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple t-tests or as a post hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely, and should be considered only if: (1) a single test of the 'universal null hypothesis' (H0) that all tests are not significant is required, (2) it is imperative to avoid a type I error, and (3) a large number of tests are carried out without preplanned hypotheses.
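For authors weighing up the correction, the arithmetic is simply a comparison of each p value against alpha/m for m tests (equivalently, each p value is multiplied by m). A short sketch using statsmodels, with invented p values:

```python
# Bonferroni in practice: with m tests, compare each p value against
# alpha/m, or equivalently multiply each p value by m (capped at 1).
# The p values below are invented for illustration.
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.049, 0.003, 0.20]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p={p:.3f}  adjusted p={pa:.3f}  reject H0: {r}")
# With m = 4 tests, only raw p values below 0.05 / 4 = 0.0125 survive.
```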
Abstract:
Case: Beardsley Theobalds Retirement Benefit Scheme Trustees v Yardley [2011] EWHC 1380 (QB) (QBD). The recent case of Beardsley Theobalds Retirement Benefit Scheme Trustees v Yardley nicely illustrates, inter alia, the impact of the contractual defences of undue influence and the plea of non est factum in the context of avoiding liability under leasehold guarantees, within the setting of the landlord and tenant relationship. Additionally, the case gives us an insight into the possible application of other technical defences relating to the law of formalities for leases. Judgment in this case was handed down on September 30, 2011.
Abstract:
Trust is a critical component of successful e-Commerce. Given the impersonality, anonymity, and automation of transactions, online vendor trustworthiness cannot be assessed by means of body language and other environmental cues that consumers typically use when deciding to trust offline retailers. It is therefore essential that the design of e-Commerce websites compensate by incorporating circumstantial cues in the form of appropriate trust triggers. This paper presents and discusses the results of a study that took an initial look at whether consumers with different personality types (a) are generally more trusting and (b) rely on different trust cues during their assessment of first-impression vendor trustworthiness in B2C e-Commerce.
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection and other applications. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods, where we use log-transformations at certain steps of the expectation maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants both on synthetic data and on the electrostatic potential dataset of MHC class I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which is not addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, where appropriate noise models are used for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM). We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models both for synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use the quality metrics of KL divergence and nearest-neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models both for synthetic and real biological datasets, with a main focus on the MHC class I dataset.
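The log-transformation device mentioned above is essentially the standard log-sum-exp trick: E-step responsibilities are computed entirely in log space so that products of many small per-dimension likelihood terms do not underflow. A minimal sketch of a generic mixture-model E-step in this style (not the thesis's exact GTM/LTM update):

```python
# Log-space E-step for a mixture model: in high dimensions the
# per-component likelihoods underflow in linear space, so the
# responsibilities are normalised with log-sum-exp instead.
import numpy as np
from scipy.special import logsumexp

def log_responsibilities(log_pdf, log_weights):
    """log_pdf: (N, K) per-point, per-component log-densities.
    log_weights: (K,) log mixing proportions.
    Returns (N, K) log responsibilities; each row sums to 1 after exp."""
    joint = log_pdf + log_weights  # log p(x_n, component k)
    return joint - logsumexp(joint, axis=1, keepdims=True)

# Densities this small are unrepresentable in linear space
# (exp(-900) underflows to 0.0), yet the log-space computation
# recovers the correct responsibilities.
log_pdf = np.array([[-900.0, -905.0], [-1200.0, -1190.0]])
log_w = np.log([0.5, 0.5])
print(np.exp(log_responsibilities(log_pdf, log_w)))
```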
Abstract:
The paper analyzes auctions that are not completely enforceable. In such auctions, economic agents may fail to carry out their obligations, and the parties involved cannot rely on external enforcement or control mechanisms to back up a transaction. We propose two mechanisms that make bidders directly or indirectly reveal their trustworthiness. The first mechanism is based on discriminating bidding schedules that separate trustworthy from untrustworthy bidders. The second mechanism is a generalization of the Vickrey auction to the case of untrustworthy bidders. We prove that, if the winner is considered to have the trustworthiness of the second-highest bidder, truthfully declaring one's trustworthiness becomes a dominant strategy. We expect the proposed mechanisms to reduce the cost of trust management and to help agent designers avoid many market failures caused by a lack of trust.
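Read literally, the second mechanism admits a simple sketch: the highest bidder wins, pays the second-highest bid as in a standard Vickrey auction, and is treated as having the trustworthiness declared by the second-highest bidder, which is what removes the incentive to overstate one's own trustworthiness. The following is an illustrative reading of the abstract, not the paper's formal mechanism:

```python
# Illustrative sketch of the trustworthiness-adjusted Vickrey rule as
# described in the abstract: second-price payment, with the runner-up
# bidder's declared trustworthiness imputed to the winner.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    amount: float
    declared_trust: float  # self-declared trustworthiness in [0, 1]

def run_auction(bids):
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # Winner pays the second-highest bid and is imputed the
    # second-highest bidder's declared trustworthiness.
    return winner.bidder, runner_up.amount, runner_up.declared_trust

bids = [Bid("A", 100, 0.9), Bid("B", 80, 0.6), Bid("C", 120, 0.7)]
print(run_auction(bids))  # ('C', 100, 0.9): C wins, pays 100, imputed trust 0.9
```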
Abstract:
Examines the Court of Appeal judgment in Tesla Motors Ltd v BBC on whether the claim that a review of a vehicle on the BBC "Top Gear" programme constituted malicious falsehood should be struck out under CPR 3.4(2) on the ground there was insufficient evidence to show that any loss in revenue suffered by the manufacturer was attributable to the review. Considers the implications of the decision for commercial claimants seeking to establish that defamation caused them "serious harm", which, pursuant to the Defamation Act 2013 s.1(2), requires evidence of actual or likely serious financial loss.
Abstract:
The evaluation of geospatial data quality and trustworthiness presents a major challenge to geospatial data users when making a dataset selection decision. The research presented here therefore focused on defining and developing a GEO label – a decision support mechanism to assist data users in efficient and effective geospatial dataset selection on the basis of quality, trustworthiness and fitness for use. This thesis thus presents six phases of research and development conducted to: (1) identify the informational aspects upon which users rely when assessing geospatial dataset quality and trustworthiness; (2) elicit initial user views on the GEO label role in supporting dataset comparison and selection; (3) evaluate prototype label visualisations; (4) develop a Web service to support GEO label generation; (5) develop a prototype GEO label-based dataset discovery and intercomparison decision support tool; and (6) evaluate the prototype tool in a controlled human-subject study. The results of the studies revealed, and subsequently confirmed, eight geospatial data informational aspects that were considered important by users when evaluating geospatial dataset quality and trustworthiness, namely: producer information, producer comments, lineage information, compliance with standards, quantitative quality information, user feedback, expert reviews, and citations information. Following an iterative user-centred design (UCD) approach, it was established that the GEO label should visually summarise availability and allow interrogation of these key informational aspects. A Web service was developed to support generation of dynamic GEO label representations and integrated into a number of real-world GIS applications. The service was also utilised in the development of the GEO LINC tool – a GEO label-based dataset discovery and intercomparison decision support tool. The results of the final evaluation study indicated that (a) the GEO label effectively communicates the availability of dataset quality and trustworthiness information and (b) GEO LINC successfully facilitates ‘at a glance’ dataset intercomparison and fitness for purpose-based dataset selection.
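To make the on-demand generation concrete, a client call to such a label service might take roughly the following shape, passing the locations of the producer and feedback metadata documents and receiving a rendered label back. The endpoint and parameter names below are hypothetical, invented for illustration only.

```python
# Hypothetical sketch of requesting a GEO label on demand. The endpoint
# and parameter names are invented; the real service interface may differ.
import requests

SERVICE_URL = "https://example.org/geolabel"  # placeholder endpoint

def fetch_geo_label(producer_metadata_url, feedback_metadata_url):
    """Ask the label service to render a GEO label (e.g. as SVG) from
    the producer and feedback metadata documents for a dataset."""
    response = requests.get(
        SERVICE_URL,
        params={
            "metadata": producer_metadata_url,
            "feedback": feedback_metadata_url,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.content

# label_svg = fetch_geo_label("https://example.org/meta.xml",
#                             "https://example.org/feedback.xml")
```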
Abstract:
The Semantic Web has come a long way since its inception in 2001, especially in terms of technical development and research progress. However, adoption by non-technical practitioners is still an ongoing process, and in some areas this process is only now starting. Emergency response is an area where reliability and timeliness of information and technologies are of the essence. It is therefore quite natural that more widespread adoption in this area has not been seen until now, when Semantic Web technologies are mature enough to support the high requirements of the application area. Nevertheless, to leverage the full potential of Semantic Web research results for this application area, there is a need for an arena where practitioners and researchers can meet and exchange ideas and results. Our intention is for this workshop, and hopefully coming workshops in the same series, to be such an arena for discussion. The Extended Semantic Web Conference (ESWC - formerly the European Semantic Web Conference) is one of the major research conferences in the Semantic Web field, which makes it a suitable venue for discussing the application of Semantic Web technology to our specific application area. Hence, we chose to arrange our first SMILE workshop at ESWC 2013. However, this workshop does not focus solely on semantic technologies for emergency response, but rather on Semantic Web technologies in combination with technologies and principles for what is sometimes called the "social web". Social media has already been used successfully in many cases as a tool for supporting emergency response. The aim of this workshop is therefore to take this to the next level and answer questions like: "How can we make sense of, and furthermore make use of, all the data that is produced by different kinds of social media platforms in an emergency situation?"
For the first edition of this workshop the chairs collected the following main topics of interest:
• Semantic annotation for understanding the content and context of social media streams.
• Integration of social media with Linked Data.
• Interactive interfaces and visual analytics methodologies for managing multiple large-scale, dynamic, evolving datasets.
• Stream reasoning and event detection.
• Social data mining.
• Collaborative tools and services for citizens, organisations and communities.
• Privacy, ethics, trustworthiness and legal issues in the Social Semantic Web.
• Use case analysis, with specific interest in use cases that involve the application of social media and Linked Data methodologies in real-life scenarios.
All of these, applied in the context of:
• Crisis and disaster management
• Emergency response
• Security and citizen journalism
The workshop received six high-quality paper submissions and, following a thorough review process, thanks to our program committee, the decision was made to accept four of these papers for the workshop (67% acceptance rate). These four papers can be found later in this proceedings volume. Three of the four papers particularly discuss the integration and analysis of social media data using Semantic Web technologies, e.g. for detecting complex events in social media streams, for visualising and analysing sentiments with respect to certain topics in social media, or for detecting small-scale incidents entirely through the use of social media information. Finally, the fourth paper presents an architecture for using Semantic Web technologies in resource management during a disaster.
Additionally, the workshop featured an invited keynote speech by Dr. Tomi Kauppinen from Aalto University. Dr. Kauppinen shared experiences from his work on applying Semantic Web technologies to application fields such as geoinformatics and scientific research, i.e. so-called Linked Science, as well as recent ideas and applications in the emergency response field. His input was also highly valuable for the roadmapping discussion, which was held at the end of the workshop. A separate summary of the roadmapping session can be found at the end of these proceedings. Finally, we would like to thank our invited speaker Dr. Tomi Kauppinen, all our program committee members, and the workshop chair of ESWC 2013, Johanna Völker (University of Mannheim), for helping us to make this first SMILE workshop a highly interesting and successful event!