78 results for One-shot information theory


Relevance: 30.00%

Abstract:

EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective, because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, "Information Security Evaluation issues", consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security Program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk Management and Information Security Management.
Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, "Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains", consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter. For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:
- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension.
The second phase concerns the evaluation of each Information Security dimension by:
- the implementation of the evaluation model, based on the elements identified for each dimension within the first phase, identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- the proposal of a maturity model for each dimension as a basis for reliance on security.
For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations.
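The dimension → Focus Area → Specific Factor hierarchy and the weakest-link principle described above can be sketched as a small data structure. This is a hypothetical illustration, not the actual ISAAM implementation: the class names, the example dimensions and the integer maturity scores are all invented for the example.

```python
# Hypothetical sketch of the evaluation hierarchy: each dimension holds
# Focus Areas, each Focus Area holds Specific Factors with a maturity
# score; the overall level follows the weakest-link principle stated in
# the summary. All names and scores below are illustrative.
from dataclasses import dataclass

@dataclass
class FocusArea:
    name: str
    specific_factors: dict  # Specific Factor -> illustrative maturity score

    def maturity(self) -> int:
        # a Focus Area is only as mature as its weakest Specific Factor
        return min(self.specific_factors.values())

@dataclass
class Dimension:
    name: str
    focus_areas: list

    def maturity(self) -> int:
        return min(fa.maturity() for fa in self.focus_areas)

def overall_maturity(dimensions) -> int:
    # the global security level is that of the weakest dimension
    return min(d.maturity() for d in dimensions)

# invented example data
human = Dimension("Human", [FocusArea("Awareness",
                                      {"training": 3, "phishing drills": 2})])
legal = Dimension("Legal", [FocusArea("Compliance", {"data protection": 4})])
print(overall_maturity([human, legal]))  # → 2
```

An organization would populate such a structure with its own Focus Areas and Specific Factors; the min-aggregation makes the weakest element visible at every level of the hierarchy.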
The value added by our evaluation model is that it is easy to implement and operate, and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ: General context of the thesis. The evaluation of security in general, and of Information Security in particular, has become for organizations not only a crucial mission but also an increasingly complex one. At present, this evaluation is based mainly on methodologies, best practices, norms or standards which address separately the different aspects that make up Information Security. We believe that this way of evaluating security is inefficient, because it does not take into account the interaction of the different dimensions and components of security with one another, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain. We have identified the need for a global, integrated, systemic and multidimensional approach to the evaluation of Information Security. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security makes it possible to meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new paradigm for security evaluation, in order to satisfy the effectiveness and efficiency needs of a given organization.
We then propose a model that aims to evaluate all the dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers Information Security from a global perspective. Structure of the thesis and topics addressed. Our document is structured in three parts. The first, entitled "The problem of Information Security evaluation", consists of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented, so as to outline the expected results of this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but rather a putting into perspective of the dimensions, criteria and indicators to be used as a reference base, in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called the "roots of trust".
Chapter 3 presents and analyses the difference and the relationships that exist between the processes of Risk Management and Security Management, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way in which it meets the evaluation requirements presented earlier. In this chapter, the underlying concepts of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform that offers a certain level of assurance, relying on three evaluation attributes, namely: "assurance structure", "process quality", and "achievement of requirements and objectives". The issues related to each of these evaluation attributes are analysed on the basis of the state of the art in research and the literature, the various existing methods, and the most common norms and standards in the security domain. On this basis, three different evaluation levels are constructed, namely: the assurance level, the quality level and the maturity level, which form the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", also consists of four chapters. The evaluation model already constructed and analysed is, in this part, placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension.
Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation:
- identification of the key elements of the evaluation;
- identification of the Focus Areas for each dimension, which represent the security issues found in that dimension;
- identification of the Specific Factors for each Focus Area, which represent the security measures and controls that help resolve or reduce the impact of the risks.
The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of the implementation of the general evaluation model for the dimension concerned by:
- building on the elements specified in the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.
On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference base for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and annexes.
Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods, and scientific research expertise in the domain. Our constructive proposal responds to a genuine, as yet unsolved problem faced by all organizations, regardless of size and profile. It would allow them to specify their particular requirements for the security level to be attained and to instantiate an evaluation process specific to their needs, so that they can ensure that their Information Security is managed appropriately, thus offering a certain level of confidence in the degree of protection provided. We have integrated into our model the best of the know-how, experience and expertise currently available internationally, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while responding to the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool derived from a coherent evaluation approach. As a result, our evaluation system can be implemented internally by the company itself, without recourse to additional resources, and thus also gives it the opportunity to better govern its Information Security.

Relevance: 30.00%

Abstract:

A series of 4 experiments examined the performance of rats with retrohippocampal lesions on a spatial water-maze task. The animals were trained to find and escape onto a hidden platform after swimming in a large pool of opaque water. The platform was invisible and could not be located using olfactory cues. Successful escape performance required the rats to develop strategies of approaching the correct location with reference solely to distal extramaze cues. The lesions encompassed the entire rostro-caudal extent of the lateral and medial entorhinal cortex, and included parts of the pre- and para-subiculum, angular bundle and subiculum. Groups ECR 1 and 2 sustained only partial damage of the subiculum, while Group ECR+S sustained extensive damage. These groups were compared with sham-lesion and unoperated control groups. In Expt 1A, a profound deficit in spatial localisation was found in groups ECR 1 and ECR+S, the rats receiving all training postoperatively. In Expt 1B, these two groups showed hyperactivity in an open-field. In Expt 2, extensive preoperative training caused a transitory saving in performance of the spatial task by group ECR 2, but comparisons with the groups of Expt 1A revealed no sustained improvement, except on one measure of performance in a post-training transfer test. All rats were then given (Expt 3) training on a cueing procedure using a visible platform. The spatial deficit disappeared but, on returning to the normal hidden platform procedure, it reappeared. Nevertheless, a final transfer test, during which the platform was removed from the apparatus, revealed a dissociation between two independent measures of performance: the rats with ECR lesions failed to search for the hidden platform but repeatedly crossed its correct location accurately during traverses of the entire pool. This partial recovery of performance was not (Expt 4) associated with any ability to discriminate between two locations in the pool. 
The apparently selective recovery of aspects of spatial memory is discussed in relation to O'Keefe and Nadel's (1978) spatial mapping theory of hippocampal function. We propose a modification of the theory in terms of a dissociation between procedural and declarative subcomponents of spatial memory. The declarative component is a flexible access system in which information is stored in a form independent of action. It is permanently lost after the lesion. The procedural component is "unmasked" by the retrohippocampal lesion giving rise to the partial recovery of spatial localisation performance.

Relevance: 30.00%

Abstract:

The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meaning of the component items of a sentence and their relationship to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on a meaning element that is taken as the principal qualificand (mukhyaviśeṣya), qualified by the other meaning elements. This analysis is the object of continuous debate, over a period of more than a thousand years, between the philosophers of the schools of Mīmāṃsā, Nyāya (mainly in its Navya form) and Vyākaraṇa. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of the theory. The Mīmāṃsakas argue that the principal qualificand is what they call bhāvanā ('bringing into being', 'efficient force' or 'productive operation'), expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaraṇas take it to be the operation expressed by the verbal root.
All the participants rely on the Pāṇinian grammar, insofar as the Mīmāṃsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but they use different interpretive strategies in order to justify their views, which are often in overt contradiction with the interpretation of the Pāṇinian rules accepted by the Vaiyākaraṇas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory; yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicalities. Historically, this theory arises first in Mīmāṃsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a by-product of the debate over Vedic authority. The Navya-Vaiyākaraṇas enter this debate last (with Bhaṭṭoji Dīkṣita and Kauṇḍa Bhaṭṭa), with the declared aim of refuting the arguments of the Mīmāṃsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāṇinian grammar. The central argument focuses on the capacity of these initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of the debate.
Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.

Relevance: 30.00%

Abstract:

We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome-Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates: one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, between three approaches of very different resource requirements. The two methods that included external information performed best, but only marginally, despite substantial differences in complexity.
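The second technique, a weighted 'risk' score built solely from external meta-analysed effect estimates, can be sketched as follows. This is a hedged illustration on simulated data, not the REDUCE cohort or the study's actual analysis: the effect sizes, the single stand-in clinical predictor and the outcome model are all invented, and the AUC is computed in its rank-based (Mann-Whitney) form.

```python
# Sketch: weighted genetic risk score from external per-allele log-odds
# ratios, plus a rank-based AUC to compare clinical-only vs clinical +
# genetics. All data below are simulated for illustration.
import numpy as np

def auc(y, score):
    # Mann-Whitney form of the ROC AUC: probability that a random case
    # outranks a random control (ties counted half)
    pos, neg = score[y == 1], score[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(42)
n, n_snps = 1547, 33
dosage = rng.integers(0, 3, size=(n, n_snps)).astype(float)  # risk-allele counts
beta = rng.normal(0.08, 0.04, n_snps)   # hypothetical meta-analysed log-ORs
clinical = rng.normal(size=n)           # stand-in for the clinical predictor

risk_score = dosage @ beta              # dosages weighted by external estimates

# simulate an outcome in which both components genuinely matter
true_logit = clinical + (risk_score - risk_score.mean()) - 0.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

auc_clin = auc(y, clinical)
auc_both = auc(y, clinical + risk_score)  # add genetics to the clinical score
print(f"clinical only: {auc_clin:.2f}, with risk score: {auc_both:.2f}")
```

Because the score uses only external weights, no model refitting is needed in the current data; the Bayesian alternative described above would instead treat those external estimates as informative priors.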

Relevance: 30.00%

Abstract:

Bacterial reporter cells (i.e. strains engineered to produce easily measurable signals in response to one or more chemical targets) can in principle be used to quantify chemical signals and analytes, physicochemical conditions and gradients on a microscale (i.e. micrometer to submillimeter distances), when the reporter signal is determined in individual cells. This makes sense, as bacterial life essentially thrives in microheterogeneous environments, and single-cell reporter information can help us to understand the microphysiology of bacterial cells and its importance for macroscale processes such as pollutant biodegradation, beneficial bacteria-eukaryote interactions, and infection. Recent findings, however, showed that clonal bacterial populations are essentially always physiologically, phenotypically and genotypically heterogeneous, thus emphasizing the need for sound statistical approaches to the interpretation of reporter responses in individual bacterial cells. Serious attempts have been made to measure and interpret single-cell reporter gene expression and to understand variability in reporter expression among individuals in a population.

Relevance: 30.00%

Abstract:

INTRODUCTION: Dietary supplement (DS) use has increased rapidly in recent years. However, evidence of benefits of many DSs for healthy users is scarce and may not counterbalance the known risks of overdose, drug interaction and recently discovered negative long-term effects. This exploratory study aimed to investigate the perceptions and motivations of DS users in Lausanne, Switzerland. METHOD: A convenience sample (n = 147) was recruited at the entrances of local sales points. Data were collected in on-site semi-structured interviews that assessed dietary supplementation habits. RESULTS: The majority of DSs were all-in-one products containing a mixture of minerals and vitamins, or products containing only minerals. Among the 147 users, 72 (49%) used one all-in-one product and 3 (2%) used two all-in-one products. Thirty-one (21%) consumers did not know, for at least one product, the purpose of their DS use. Seventy-five percent of participants thought that DS use presents no risk or nearly no risk. Only 49% of participants stated that their physicians were informed about their consumption. Although men searched more often for potential risks (p <0.001), they turned less frequently to health professionals to get this information (p = 0.007). DISCUSSION: As in surveys performed elsewhere, our study shows that, in Lausanne (Switzerland), DSs are commonly used as mixed products. Risk perception seems generally low among DS users. Physicians should be trained to evaluate patients' perceived needs and DS consumption in order to provide good evidence-based information or to propose alternatives to DS use.

Relevance: 30.00%

Abstract:

OBJECTIVES: Our analysis assessed the impact of information on patients' preferences in prescription versus over-the-counter (OTC) delivery systems. METHODS: A contingent valuation (CV) study was implemented, randomly assigning 534 lay people into the receipt of limited or extended information concerning new influenza drugs. In each information arm, people answered two questions: the first asked about willingness to pay (WTP) for the new prescription drug; the second asked about WTP for the same drug sold OTC. RESULTS: We show that WTP is higher for the OTC scenario and that the level of information plays a significant role in the evaluation of the OTC scenario, with more information being associated with an increase in the WTP. In contrast, the level of information provided has no impact on WTP for prescription medicine. Thus, for the kind of drug considered here (i.e. safe, not requiring medical supervision), a switch to OTC status can be expected to be all the more beneficial, as the patient is provided with more information concerning the capability of the drug. CONCLUSIONS: Our results shed light on one of the most challenging issues that health policy makers are currently faced with, namely the threat of a bird flu pandemic. Drug delivery is a critical component of pandemic influenza preparedness. Furthermore, the congruence of our results with the agency and demand theories provides an important test of the validity of using WTP based on CV methods.
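Since each respondent stated a WTP for both delivery scenarios, the prescription-versus-OTC comparison is naturally a paired one. The sketch below shows how such a paired comparison might be computed; the data, the gamma-distributed WTP values and the size of the OTC premium are all invented, and this is not the study's actual estimation procedure.

```python
# Sketch of a paired comparison of willingness-to-pay (WTP) under two
# delivery scenarios, on simulated data: each of the 534 respondents
# values the same drug under prescription and over-the-counter (OTC).
import numpy as np

rng = np.random.default_rng(7)
n = 534
wtp_rx = rng.gamma(shape=2.0, scale=15.0, size=n)   # WTP, prescription scenario
wtp_otc = wtp_rx + rng.normal(4.0, 6.0, size=n)     # OTC valued higher on average

diff = wtp_otc - wtp_rx                             # within-respondent premium
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
print(f"mean OTC premium: {diff.mean():.1f}, paired t = {t_stat:.1f}")
```

Pairing within respondents removes between-person variation in baseline WTP, which is why the paired t statistic is the natural test here rather than a two-sample comparison.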

Relevance: 30.00%

Abstract:

The current study aimed to explore the validity of an adaptation into French of the self-rated form of the Health of the Nation Outcome Scales for Children and Adolescents (F-HoNOSCA-SR) and to test its usefulness in routine clinical use. One hundred and twenty-nine patients admitted to two inpatient units were asked to participate in the study. One hundred and seven patients filled out the F-HoNOSCA-SR (for a subsample (N=17), on two occasions one week apart) and the Strengths and Difficulties Questionnaire (SDQ). In addition, the clinician rated the clinician-rated form of the HoNOSCA (HoNOSCA-CR, N=82). The reliability analyses (assessed with the split-half coefficient, item response theory (IRT) models and intraclass correlations (ICC) between the two occasions) revealed that the F-HoNOSCA-SR provides reliable measures. The concurrent validity, assessed by correlating the F-HoNOSCA-SR and the SDQ, revealed a good convergent validity of the instrument. The relationship analyses between the F-HoNOSCA-SR and the HoNOSCA-CR revealed weak but significant correlations. The comparison between the F-HoNOSCA-SR and the HoNOSCA-CR with paired-sample t-tests revealed a higher score for the self-rated version. In addition to providing reliable measures, the F-HoNOSCA-SR allows us to capture complementary information when used together with the HoNOSCA-CR.
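Two of the reliability statistics mentioned above, the split-half coefficient (with Spearman-Brown correction) and a two-occasion intraclass correlation, can be sketched as follows. The data are simulated and the item count and score model are illustrative assumptions; this is not the study's actual analysis.

```python
# Sketch: split-half reliability and test-retest ICC on simulated
# questionnaire data (107 "patients", illustrative item count).
import numpy as np

def split_half(items):
    # correlate odd- vs even-item half scores, then apply the
    # Spearman-Brown correction for full test length
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

def icc_two_way(x):
    # ICC(2,1): two-way random effects, absolute agreement, single measure
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)
n_patients, n_items = 107, 13
severity = rng.normal(0.0, 1.0, n_patients)                 # latent severity
items = severity[:, None] + rng.normal(0, 0.8, (n_patients, n_items))
t1 = items.sum(axis=1)                                      # first occasion
t2 = t1 + rng.normal(0, 6.0, n_patients)                    # retest, one week apart
icc = icc_two_way(np.column_stack([t1, t2]))
print(f"split-half: {split_half(items):.2f}, test-retest ICC: {icc:.2f}")
```

The ICC(2,1) form is shown because it treats occasions as a random facet and penalises systematic shifts between occasions, which suits a one-week test-retest design.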

Relevance: 30.00%

Abstract:

1 Summary. This dissertation deals with two major aspects of corporate governance that grew in importance during the last years: the internal audit function and financial accounting education. In three essays, I contribute to research on these topics, which are embedded in the broader corporate governance literature. The first two essays consist of experimental investigations of internal auditors' judgments. They deal with two research issues for which accounting research lacks evidence: the effectiveness of internal controls and the potentially conflicting role of the internal audit function between management and the audit committee. The findings of the first two essays contribute to the literature on internal auditors' judgment and the role of the internal audit function as a major cornerstone of corporate governance. The third essay theoretically examines a broader issue but also relates to the overall research question of this dissertation: What contributes to effective corporate governance? This last essay takes the perspective that the root of quality corporate governance is appropriate financial accounting education. I develop a public interest approach to accounting education that contributes to the literature on adequate accounting education with respect to corporate governance and accounting harmonization. The increasing importance of both the internal audit function and accounting education for corporate governance can be explained by the same recent fundamental changes that still affect accounting research and practice. First, the Sarbanes-Oxley Act of 2002 (SOX, 2002) and the 8th EU Directive (EU, 2006) have led to a bigger role for the internal audit function in corporate governance. Their implications regarding the implementation of audit committees and their oversight over internal controls are extensive. As a consequence, the internal audit function has become increasingly important for corporate governance and serves a new master (i.e.
the audit committee) within the company, in addition to management. Second, SOX (2002) and the 8th EU Directive introduced additional internal control mechanisms that are expected to contribute to the reliability of financial information. As a consequence, the internal audit function is expected to contribute to a greater extent to the reliability of financial statements. Therefore, effective internal control mechanisms that strengthen objective judgments and independence become important. This is especially true when external auditors rely on the work of internal auditors in the context of the International Standard on Auditing (ISA) 610 and the equivalent US Statement on Auditing Standards (SAS) 65 (see IFAC, 2009 and AICPA, 1990). Third, the harmonization of international reporting standards is increasingly promoted by means of a principles-based approach. It has been the leading approach since a study by the SEC (2003), required by SOX (2002) in section 108(d), came out in its favor. As a result, the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) have committed themselves to the development of compatible accounting standards based on a principles-based approach. Moreover, since the Norwalk Agreement of 2002, the two standard setters have developed exposure drafts for a common conceptual framework that will be the basis for accounting harmonization. The new framework will be in favor of fair value measurement and accounting for real-world economic phenomena. These changes in standard setting lead to a trend towards more professional judgment in the accounting process. They affect internal and external auditors, accountants, and managers in general. As a consequence, a new competency set for preparers and users of financial statements is required. The basis for this new competency set is adequate accounting education (Schipper, 2003).
These three issues, which affect corporate governance, are the starting point of this dissertation and constitute its motivation. Two broad questions motivated a scientific examination in three essays: 1) What are the major aspects to be examined regarding the new role of the internal audit function? 2) How should major changes in standard setting affect financial accounting education? The first question became apparent due to two published literature reviews by Gramling et al. (2004) and Cohen, Krishnamoorthy & Wright (2004). These studies raise various questions for future research that are still relevant and which motivate the first two essays of my dissertation. In the first essay, I focus on the role of the internal audit function as one cornerstone of corporate governance and its potentially conflicting role of serving both management and the audit committee (IIA, 2003). In an experimental study, I provide evidence on the challenges for internal auditors in their role as servant of two masters, the audit committee and management, and how this influences internal auditors' judgment (Gramling et al. 2004; Cohen, Krishnamoorthy & Wright, 2004). I ask whether there is an expectation gap between what internal auditors should provide for corporate governance in theory and what internal auditors are able to provide in practice. In particular, I focus on the effect of serving two masters on the internal auditor's independence. I argue that independence is hardly achievable if the internal audit function serves two masters with conflicting priorities. The second essay provides evidence on the effectiveness of accountability as an internal control mechanism. In general, internal control mechanisms based on accountability were enforced by the SOX (2002) and the 8th EU Directive. Subsequently, many companies introduced sub-certification processes that should contribute to an objective judgment process. 
Thus, these mechanisms are important to strengthen the reliability of financial statements. Based on a need for evidence on the effectiveness of internal control mechanisms (Brennan & Solomon, 2008; Gramling et al. 2004; Cohen, Krishnamoorthy & Wright, 2004; Solomon & Trotman, 2003), I designed an experiment to examine the joint effect of accountability and obedience pressure in an internal audit setting. I argue that obedience pressure can negatively influence accountants' objectivity (e.g. DeZoort & Lord, 1997), whereas accountability can mitigate this negative effect. My second main research question, how major changes in standard setting should affect financial accounting education, is investigated in the third essay. It is motivated by the observation during my PhD that many conferences deal with the topic of accounting education but very little is published about what needs to be done. Moreover, the findings in the first two essays of this thesis and their literature review suggest that financial accounting education can contribute significantly to quality corporate governance, as argued elsewhere (Schipper, 2003; Boyce, 2004; Ghoshal, 2005). In the third essay of this thesis, I therefore focus on approaches to financial accounting education that account for the changes in standard setting and also contribute to corporate governance and accounting harmonization. I argue that the competency set required in practice changes due to major changes in standard setting. As the major contribution of the third article, I develop a public interest approach for financial accounting education. The major findings of this dissertation can be summarized as follows. The first essay provides evidence on an important research question raised by Gramling et al. (2004, p. 240): "If the audit committee and management have different visions for the corporate governance role of the IAF, which vision will dominate?" 
According to the results of the first essay, internal auditors do follow the priorities of either management or the audit committee based on the guidance provided by the Chief Audit Executive. The study's results question whether the independence of the internal audit function is actually achievable. My findings contribute to research on internal auditors' judgment and the internal audit function's independence in the broader frame of corporate governance. The results are also important for practice because independence is a major justification for a positive contribution of the internal audit function to corporate governance. The major findings of the second essay indicate that the duty to sign work results, a means of holding people accountable, mitigates the negative effect of obedience pressure on reliability. Hence, I found evidence that control mechanisms relying on certifications may enhance the reliability of financial information. These findings contribute to the literature on the effectiveness of internal control mechanisms. They are also important in the light of the sub-certification processes that resulted from the Sarbanes-Oxley Act and the 8th EU Directive. The third essay contributes to the literature by developing a measurement framework that accounts for the consequences of major trends in standard setting. Moreover, it shows how these trends affect the required competency set of people dealing with accounting issues. Based on this work, my main contribution is the development of a public interest approach for the design of adequate financial accounting curricula.

2 Serving two masters: Experimental evidence on the independence of internal auditors Abstract Twenty-nine internal auditors participated in a study that examines the independence of internal auditors in their potentially competing roles of serving two masters: the audit committee and management. 
Our main hypothesis suggests that internal auditors' independence is not achievable in an institutional setting in which internal auditors are accountable to two different parties with potentially differing priorities. We test our hypothesis in an experiment in which the treatment consisted of two different instructions from the Chief Audit Executive: one stressing the priority of management (cost reduction) and one stressing the priority of the audit committee (effectiveness). Internal auditors had to evaluate the internal controls and inherent costs of different processes, which varied in their degree of task complexity. Our main results indicate that internal auditors' evaluations of the processes differ significantly when task complexity is high. Our findings suggest that internal auditors do follow the priorities of either management or the audit committee depending on the instructions of a superior internal auditor. The study's results question whether the independence of the internal audit function is actually achievable. With our findings, we contribute to research on internal auditors' judgment and the internal audit function's independence in the frame of corporate governance.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Emotion communication research strongly focuses on the face and voice as expressive modalities, leaving the rest of the body relatively understudied. Contrary to the early assumption that body movement only indicates emotional intensity, recent studies show that body movement and posture also convey emotion-specific information. However, a deeper understanding of the underlying mechanisms is hampered by a lack of production studies informed by a theoretical framework. In this research we adopted the Body Action and Posture (BAP) coding system to examine the types and patterns of body movement that are employed by 10 professional actors to portray a set of 12 emotions. We investigated to what extent these expression patterns support explicit or implicit predictions from basic emotion theory, bi-dimensional theory, and componential appraisal theory. The overall results showed partial support for the different theoretical approaches. They revealed that several patterns of body movement systematically occur in portrayals of specific emotions, allowing emotion differentiation. While a few emotions were prototypically encoded by one particular pattern, most emotions were variably expressed by multiple patterns, many of which can be explained as reflecting functional components of emotion such as modes of appraisal and action readiness. It is concluded that further work in this largely underdeveloped area should be guided by an appropriate theoretical framework to allow a more systematic design of experiments and clear hypothesis testing.
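The pattern-by-emotion analysis described above boils down to tallying how often each coded movement pattern co-occurs with each portrayed emotion, then asking whether an emotion is prototypically encoded by one pattern or spread over several. A minimal sketch of such a tally; the emotion and pattern labels below are hypothetical illustrations, not the BAP categories or the study's data:

```python
from collections import Counter

# Hypothetical coded portrayals: (emotion, observed movement pattern).
# A real BAP-coded corpus would contain far more categories and portrayals.
portrayals = [
    ("anger", "arms_forward"), ("anger", "arms_forward"), ("anger", "body_lean"),
    ("fear", "body_retract"), ("fear", "body_retract"), ("fear", "arms_forward"),
]

# Count how often each pattern occurs per emotion: the raw material for
# deciding between prototypical and variable encoding.
co_occurrence = Counter(portrayals)
for (emotion, pattern), n in sorted(co_occurrence.items()):
    print(emotion, pattern, n)
```

Here "anger" is expressed by two distinct patterns, so under this toy coding it would count as variably rather than prototypically encoded.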

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Objective information for the groups exposed to the disease and for the public in general is currently the only possible step in the prevention of AIDS. A number of information and support actions have been developed in response to the appearance of AIDS in Switzerland, the AIDS information hot-line at the CHUV being one of them. With the aim of orienting the information according to demand and examining the utility of this means, we carried out a prospective evaluation of the calls received between 23 October 1985 (the inception of the line) and 31 March 1986. Out of a total of 535 calls, 317 (59%) resulted in requests for appointments (tests, consultation) or written documentation, and 218 (41%) were transferred to the doctor; 39% of the calls came from people who were directly concerned (ill, with a positive test, exposed groups), 11% from health professionals, and 47% from the general public. 56% of the calls were concerned with transmission of the disease (sexual, blood-borne, indirect), 22% with the meaning of the detection test, and 22% with the symptoms of the disease. According to the doctor's estimate, although the standard of knowledge was satisfactory in 55% of the cases, a considerable number of false ideas that generate irrational fear still persist. This hot-line thus provides a sympathetic ear and individual support, particularly for the exposed groups, rather than information about the disease. The existence of this action therefore appears justified, but it must be integrated into a global strategy of information promotion.
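The reported breakdown of the 535 calls can be checked arithmetically; a minimal sketch using only the counts given in the abstract (218 transferred calls, 317 other requests):

```python
# Call counts from the abstract (23 October 1985 - 31 March 1986).
TOTAL_CALLS = 535
APPOINTMENT_OR_DOC_REQUESTS = 317  # appointments (tests, consultation) or documentation
TRANSFERRED_TO_DOCTOR = 218        # reported as 41% of all calls

def share(part: int, total: int) -> int:
    """Return part/total as a percentage rounded to the nearest integer."""
    return round(100 * part / total)

print(share(TRANSFERRED_TO_DOCTOR, TOTAL_CALLS))        # 41
print(share(APPOINTMENT_OR_DOC_REQUESTS, TOTAL_CALLS))  # 59
```

The two categories sum to 535, confirming that every call falls into exactly one of them.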

Relevância:

30.00% 30.00%

Publicador:

Resumo:

OBJECTIVES: To prospectively assess the stiffness of incidentally discovered focal liver lesions (FLL) in patients with no history of chronic liver disease or extrahepatic cancer using shear wave elastography (SWE). METHODS: Between June 2011 and May 2012, all FLL fortuitously discovered on ultrasound examination were prospectively included. For each lesion, stiffness was measured (kPa). Characterization of the lesion relied on magnetic resonance imaging (MRI) and/or contrast-enhanced ultrasound, or biopsy. Tumour stiffness was analysed using ANOVA and non-parametric Mann-Whitney tests. RESULTS: 105 lesions were successfully evaluated in 73 patients (61 women, 84%) with a mean age of 44.8 years (range: 20‒75). The mean stiffness was 33.3 ± 12.7 kPa for the 60 focal nodular hyperplasias (FNH), 19.7 ± 9.8 kPa for the 17 hepatocellular adenomas (HCA), 17.1 ± 7 kPa for the 20 haemangiomas, 11.3 ± 4.3 kPa for the five areas of focal fatty sparing, 34.1 ± 7.3 kPa for the two cholangiocarcinomas, and 19.6 kPa for the one hepatocellular carcinoma (p < 0.0001). There was no difference between the benign and the malignant groups (p = 0.64). FNHs were significantly stiffer than HCAs (p < 0.0001). Telangiectatic/inflammatory HCAs were significantly stiffer than steatotic HCAs (p = 0.014). The area under the ROC curve (AUROC) for differentiating FNH from other lesions was 0.86 ± 0.04. CONCLUSION: SWE may provide additional information for the characterization of FLL, may help in differentiating FNH from HCA, and may assist in subtyping HCA. KEY POINTS: • SWE might be helpful for the characterization of solid focal liver lesions. • SWE cannot differentiate benign from malignant liver lesions. • FNHs are significantly stiffer than other benign lesions. • Telangiectatic/inflammatory HCAs are significantly stiffer than steatotic ones.
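The AUROC reported above (0.86 for differentiating FNH from other lesions using stiffness alone) is equivalent to the probability that a randomly chosen FNH is stiffer than a randomly chosen non-FNH lesion, which follows directly from the Mann-Whitney U statistic. A minimal sketch; the stiffness values below are illustrative, not the study's measurements:

```python
def auroc(positives, negatives):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count as one half)."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Illustrative stiffness values (kPa): FNHs tend to be stiffer than
# other benign lesions, with some overlap.
fnh = [30.1, 35.4, 28.9, 41.2, 33.0]
other = [18.5, 31.0, 15.2, 27.4, 11.3]
print(auroc(fnh, other))  # 0.92 for this toy sample
```

With perfectly separated groups the value reaches 1.0; overlapping stiffness distributions pull it towards 0.5, which is why the study's 0.86 indicates good but imperfect discrimination.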

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The evolution of the economic environment and of the value chains and business models of organizations increases the importance of coordination, which can be defined as the management of interdependencies between tasks carried out by different actors and contributing to a common goal. In organizations, a considerable number of means are put into action in order to manage such interdependencies. In this regard, information and communication technologies (ICT), whose various forms are nowadays disseminated, integrated and connected in both private and professional environments, offer important support to coordination activities. In this work, we have investigated the following research question: how do the ubiquity and the interconnectivity of ICT modify coordination mechanisms? Throughout four information systems studies conducted according to a design science methodology, we have looked into this question at two different levels: that of strategic alignment between business and information systems strategy, where coordination concerns interdependencies between activities; and that of tasks, where coordination concerns interdependencies between individual interactions. At the strategic level, we observe that ubiquity and interconnectivity allow coordination mechanisms to be transposed from one field to another. By facilitating various forms of copresence and visibility, they also increase proximity in asynchronous or distant coordination situations. At the tasks level, ICT offer the actors a very high potential for participation and proximity. 
Such technologies make it possible to establish accountability, improve common understanding and anticipate the unfolding and integration of tasks. The main contribution emerging from these four studies is that practitioners can use the ubiquity and interconnectivity of ICT in order to allow individuals to communicate and adjust their actions to define, reach and redefine the goals of common work.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

We evaluated the performance of an optical camera-based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2×2×3 factorial design with the following factors: PMC on or off; 3.0mm or 1.5mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5mm resolution comparing PMC on versus PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the temporal signal-to-noise ratio of the time series by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p<0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow-motion cases. The PMC system is a robust solution for decreasing the motion sensitivity of multi-shot 3D EPI sequences and thereby overcomes one of the main roadblocks to their widespread use in fMRI studies.
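The temporal signal-to-noise quality metric referred to above is conventionally computed per voxel as the mean of the time series divided by its standard deviation over time. A minimal sketch with a synthetic time series (the signal values are illustrative, not fMRI data):

```python
import statistics

def tsnr(timeseries):
    """Temporal SNR of one voxel's time series: mean over time divided
    by the (population) standard deviation over time."""
    return statistics.fmean(timeseries) / statistics.pstdev(timeseries)

# Synthetic time series: baseline 1000 with alternating +/-10 "noise",
# so the mean is exactly 1000 and the standard deviation exactly 10.
signal = [1000 + (10 if t % 2 == 0 else -10) for t in range(200)]
print(round(tsnr(signal)))  # 100 (= 1000 / 10)
```

A 30-40% tSNR improvement, as reported, would mean the same mean signal with a correspondingly smaller temporal standard deviation after motion correction.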

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This thesis is composed of three main parts. The first consists of a state of the art of the different notions that are significant to understanding the elements surrounding art authentication in general, and signatures in particular, and that the author deemed necessary to fully grasp the microcosm that makes up this particular market. Individuals with a solid knowledge of the art and expertise area who are particularly interested in the present study are advised to advance directly to Chapter 4. The expertise of signatures, its reliability, and the factors impacting the expert's conclusions are brought forward. The final aim of the state of the art is to offer a general list of recommendations based on an exhaustive review of the current literature and given in light of all of the exposed issues. These guidelines are specifically formulated for the expertise of signatures on paintings, but can also be applied to wider themes in the area of signature examination. The second part of this thesis covers the experimental stages of the research. It consists of the method developed to authenticate painted signatures on works of art. This method is articulated around several main objectives: defining measurable features on painted signatures and establishing their relevance in order to determine the separation capacity between groups of authentic and simulated signatures. For the first time, numerical analyses of painted signatures have been obtained and are used to attribute their authorship to given artists. An in-depth discussion of the developed method constitutes the third and final part of this study. It evaluates the opportunities and constraints of its application by signature and handwriting experts in forensic science. The outlines presented below summarize the aims and main themes addressed in each chapter. 
Part I - Theory

Chapter 1 presents the legal aspects surrounding the authentication of works of art by art experts: the definition of what is legally authentic, the quality and types of experts who can express an opinion concerning the authorship of a specific painting, and standard deontological rules. The practices applied in Switzerland are specifically dealt with.

Chapter 2 presents an overview of the different scientific analyses that can be carried out on paintings (from the canvas to the top coat). Scientific examinations of works of art have become more common as more and more museums equip themselves with laboratories, so an understanding of their role in the art authentication process is vital. The added value that a signature expertise can have in comparison to other scientific techniques is also addressed.

Chapter 3 provides a historical overview of the signature on paintings throughout the ages, in order to offer the reader an understanding of the origin of the signature on works of art and its evolution through time. An explanation is given of the transitions that the signature went through from the 15th century on and how it progressively took on its widely known modern form. Both this chapter and Chapter 2 show the reader the rich sources of information that can be drawn upon to describe a painting, and how the signature is one of these sources.

Chapter 4 focuses on the different hypotheses the forensic handwriting examiner (FHE) must keep in mind when examining a painted signature, since a number of scenarios can be encountered when dealing with signatures on works of art. The different forms of signatures, as well as the variables that may have an influence on painted signatures, are also presented. Finally, the current state of knowledge of the examination procedure for signatures in forensic science in general, and for painted signatures in particular, is presented. 
The state of the art of the assessment of the authorship of signatures on paintings is established and discussed in light of the theoretical facets mentioned previously.

Chapter 5 considers key elements that can have an impact on the FHE during his or her examinations. This includes a discussion of elements such as the skill, confidence and competence of an expert, as well as the potential bias effects he or she might encounter. A better understanding of the elements surrounding handwriting examinations is also sought, in order to better communicate results and conclusions to an audience.

Chapter 6 reviews the judicial acceptance of signature analysis in courts and closes the state-of-the-art section of this thesis. This chapter brings forward the current issues pertaining to the appreciation of this expertise by the non-forensic community, and discusses the increasing number of claims about the unscientific nature of signature authentication. The necessity of aiming for more scientific, comprehensive and transparent authentication methods is discussed. The theoretical part of this thesis concludes with a series of general recommendations for forensic handwriting examiners, specifically for the expertise of signatures on paintings. These recommendations stem from the exhaustive review of the literature and the issues it exposed, and can also be applied to the traditional examination of signatures (on paper).

Part II - Experimental part

Chapter 7 describes and defines the sampling, extraction and analysis phases of the research. The sampling of artists' signatures and their respective simulations is presented, followed by the steps undertaken to extract and determine sets of characteristics, specific to each artist, that describe their signatures. The method is based on a study of five artists and a group of individuals acting as forgers for the sake of this study. 
Finally, the analysis procedure applied to these characteristics to assess the strength of evidence, based on a Bayesian reasoning process, is presented.

Chapter 8 outlines the results concerning both the artist and simulation corpuses after their optical observation, followed by the results of the analysis phase of the research. The feature selection process and the likelihood ratio evaluation are the main themes addressed. The discrimination power between the two corpuses is illustrated through multivariate analysis.

Part III - Discussion

Chapter 9 discusses the materials, the methods, and the obtained results of the research. The opportunities, but also the constraints and limits, of the developed method are presented. Future work that can be carried out following the results of the study is also outlined.

Chapter 10, the last chapter of this thesis, proposes a strategy to incorporate the model developed in the preceding chapters into traditional signature expertise procedures. The strength of this expertise is thus discussed in conjunction with the traditional conclusions reached by forensic handwriting examiners. Finally, this chapter summarizes and advocates a list of formal recommendations for good practice for handwriting examiners.

In conclusion, the research highlights the interdisciplinary nature of the examination of signatures on paintings. The current state of knowledge of the judicial quality of art experts, along with the scientific and historical analysis of paintings and signatures, is reviewed to give the reader a sense of the different factors that have an impact on this particular subject. The hesitant acceptance of forensic signature analysis in court, also presented in the state of the art, explicitly demonstrates the necessity of better recognition of signature expertise by courts of law. 
This general acceptance, however, can only be achieved by producing high quality results through a well-defined examination process. This research offers an original approach to attributing a painted signature to a certain artist: for the first time, a probabilistic model used to measure the discriminative potential between authentic and simulated painted signatures is studied. The opportunities and limits that lie within this method of scientifically establishing the authorship of signatures on works of art are thus presented. In addition, as the second key contribution of this work, a procedure is proposed to combine the developed method with that traditionally used by signature experts in forensic science. Such an implementation into holistic traditional signature examination casework is a large step towards providing the forensic, judicial and art communities with a solidly grounded reasoning framework for the examination of signatures on paintings. The framework and preliminary results associated with this research have been published (Montani, 2009a) and presented at international forensic science conferences (Montani, 2009b; Montani, 2012).
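In its simplest univariate form, the Bayesian evaluation of a measured signature feature reduces to a likelihood ratio: the density of the observation under the "authentic" hypothesis divided by its density under the "simulated" hypothesis. A minimal sketch with Gaussian score models; the distribution parameters below are illustrative assumptions, not estimates from the study's corpuses:

```python
import math

def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    """Density of a normal distribution with the given mean and sd at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(x: float, authentic=(10.0, 2.0), simulated=(4.0, 3.0)) -> float:
    """LR = p(feature | authentic) / p(feature | simulated).
    LR > 1 supports the authentic-authorship hypothesis; LR < 1 supports
    the simulation hypothesis. The (mean, sd) pairs are hypothetical."""
    return gaussian_pdf(x, *authentic) / gaussian_pdf(x, *simulated)

print(likelihood_ratio(9.0))  # well above 1: supports the authentic hypothesis
print(likelihood_ratio(3.0))  # well below 1: supports the simulation hypothesis
```

In a full multivariate analysis, as in Chapter 8, the same ratio would be evaluated over the selected feature set rather than a single score.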