820 results for "stress based approach"
Abstract:
Rectangular hollow section (RHS) members are widely used in engineering applications because of their appealing appearance, good structural properties, and low cost compared with members of other cross-sections. The increasing use of RHS in load-bearing structures makes it necessary to analyze the fatigue behavior of RHS members. This thesis concentrates on the fatigue behavior of RHS members under variable amplitude pure torsional loading. For RHS members, failure normally occurs in the corner region if the welds are fully penetrated. This is because of the complicated distribution of stress components at the RHS corners, where all three fracture mechanics modes occur. Mode I is mainly caused by the residual stresses introduced by the manufacturing process; modes II and III are caused by the applied torsional loading. The stress-based Findley model is used to analyze the stress components. Constant amplitude fatigue tests have been carried out as well as variable amplitude fatigue tests. The specimens under variable amplitude loading gave longer fatigue lives than those under constant amplitude loading. The test results yield an S-N curve with a slope of about 5.
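For context, the Findley criterion referenced above is a standard critical-plane, stress-based multiaxial fatigue model, and an S-N curve with slope m relates stress range to life. The forms below are the usual textbook ones, with generic symbols; they are not values or notation taken from the thesis itself, other than the reported slope m ≈ 5.

$$ \max_{\theta}\left(\tau_a(\theta) + k\,\sigma_{n,\max}(\theta)\right) \le f, \qquad N\,(\Delta\tau)^{m} = C \quad (m \approx 5) $$

Here \(\tau_a\) is the shear stress amplitude on a candidate plane, \(\sigma_{n,\max}\) the maximum normal stress on that plane, and \(k\), \(f\), \(C\) material constants.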
Abstract:
1 Summary This dissertation deals with two major aspects of corporate governance that have grown in importance in recent years: the internal audit function and financial accounting education. In three essays, I contribute to research on these topics, which are embedded in the broader corporate governance literature. The first two essays consist of experimental investigations of internal auditors' judgments. They deal with two research issues for which accounting research lacks evidence: the effectiveness of internal controls and the potentially conflicting role of the internal audit function between management and the audit committee. The findings of the first two essays contribute to the literature on internal auditors' judgment and the role of the internal audit function as a major cornerstone of corporate governance. The third essay theoretically examines a broader issue but also relates to the overall research question of this dissertation: What contributes to effective corporate governance? This last essay takes the perspective that the root of quality corporate governance is appropriate financial accounting education. I develop a public interest approach to accounting education that contributes to the literature on adequate accounting education with respect to corporate governance and accounting harmonization. The increasing importance of both the internal audit function and accounting education for corporate governance can be explained by the same recent fundamental changes that still affect accounting research and practice. First, the Sarbanes-Oxley Act of 2002 (SOX, 2002) and the 8th EU Directive (EU, 2006) have led to a greater role for the internal audit function in corporate governance. Their implications regarding the implementation of audit committees and their oversight over internal controls are extensive. As a consequence, the internal audit function has become increasingly important for corporate governance and serves a new master (i.e. the audit committee) within the company in addition to management. Second, SOX (2002) and the 8th EU Directive introduced additional internal control mechanisms that are expected to contribute to the reliability of financial information. As a consequence, the internal audit function is expected to contribute to a greater extent to the reliability of financial statements. Therefore, effective internal control mechanisms that strengthen objective judgments and independence become important. This is especially true when external auditors rely on the work of internal auditors in the context of the International Standard on Auditing (ISA) 610 and the equivalent US Statement on Auditing Standards (SAS) 65 (see IFAC, 2009 and AICPA, 1990). Third, the harmonization of international reporting standards is increasingly promoted by means of a principles-based approach. It has been the leading approach since a study by the SEC (2003), required by SOX (2002) in section 108(d), came out in its favor. As a result, the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) have committed themselves to the development of compatible accounting standards based on a principles-based approach. Moreover, since the Norwalk Agreement of 2002, the two standard setters have developed exposure drafts for a common conceptual framework that will be the basis for accounting harmonization. The new framework will be in favor of fair value measurement and accounting for real-world economic phenomena.
These changes in terms of standard setting lead to a trend towards more professional judgment in the accounting process. They affect internal and external auditors, accountants, and managers in general. As a consequence, a new competency set for preparers and users of financial statements is required. The basis for this new competency set is adequate accounting education (Schipper, 2003). These three issues affecting corporate governance are the starting point of this dissertation and constitute its motivation. Two broad questions motivated a scientific examination in three essays: 1) What are the major aspects to be examined regarding the new role of the internal audit function? 2) How should major changes in standard setting affect financial accounting education? The first question became apparent due to two published literature reviews by Gramling et al. (2004) and Cohen, Krishnamoorthy & Wright (2004). These studies raise various questions for future research that are still relevant and which motivate the first two essays of my dissertation. In the first essay, I focus on the role of the internal audit function as one cornerstone of corporate governance and its potentially conflicting role of serving both management and the audit committee (IIA, 2003). In an experimental study, I provide evidence on the challenges for internal auditors in their role as servants of two masters - the audit committee and management - and how this influences internal auditors' judgment (Gramling et al. 2004; Cohen, Krishnamoorthy & Wright, 2004). I ask whether there is an expectation gap between what internal auditors should provide for corporate governance in theory and what internal auditors are able to provide in practice. In particular, I focus on the effect of serving two masters on the internal auditor's independence. I argue that independence is hardly achievable if the internal audit function serves two masters with conflicting priorities. The second essay provides evidence on the effectiveness of accountability as an internal control mechanism. In general, internal control mechanisms based on accountability were enforced by SOX (2002) and the 8th EU Directive. Subsequently, many companies introduced sub-certification processes that should contribute to an objective judgment process. Thus, these mechanisms are important to strengthen the reliability of financial statements. Based on the need for evidence on the effectiveness of internal control mechanisms (Brennan & Solomon, 2008; Gramling et al. 2004; Cohen, Krishnamoorthy & Wright, 2004; Solomon & Trotman, 2003), I designed an experiment to examine the joint effect of accountability and obedience pressure in an internal audit setting. I argue that obedience pressure can have a negative influence on accountants' objectivity (e.g. DeZoort & Lord, 1997), whereas accountability can mitigate this negative effect. My second main research question - How should major changes in standard setting affect financial accounting education? - is investigated in the third essay. It is motivated by the observation during my PhD that many conferences deal with the topic of accounting education but very little is published about what needs to be done. Moreover, the findings in the first two essays of this thesis and their literature reviews suggest that financial accounting education can contribute significantly to quality corporate governance, as argued elsewhere (Schipper, 2003; Boyce, 2004; Ghoshal, 2005).
In the third essay of this thesis, I therefore focus on approaches to financial accounting education that account for the changes in standard setting and also contribute to corporate governance and accounting harmonization. I argue that the competency set required in practice changes due to major changes in standard setting. As the major contribution of the third article, I develop a public interest approach for financial accounting education. The major findings of this dissertation can be summarized as follows. The first essay provides evidence on an important research question raised by Gramling et al. (2004, p. 240): "If the audit committee and management have different visions for the corporate governance role of the IAF, which vision will dominate?" According to the results of the first essay, internal auditors do follow the priorities of either management or the audit committee based on the guidance provided by the Chief Audit Executive. The study's results question whether the independence of the internal audit function is actually achievable. My findings contribute to research on internal auditors' judgment and the internal audit function's independence in the broader frame of corporate governance. The results are also important for practice because independence is a major justification for a positive contribution of the internal audit function to corporate governance. The major findings of the second essay indicate that the duty to sign work results - a means of holding people accountable - mitigates the negative effect of obedience pressure on reliability. Hence, I found evidence that control mechanisms relying on certifications may enhance the reliability of financial information. These findings contribute to the literature on the effectiveness of internal control mechanisms. They are also important in the light of the sub-certification processes that resulted from the Sarbanes-Oxley Act and the 8th EU Directive. The third essay contributes to the literature by developing a measurement framework that accounts for the consequences of major trends in standard setting. Moreover, it shows how these trends affect the required competency set of people dealing with accounting issues. Based on this work, my main contribution is the development of a public interest approach for the design of adequate financial accounting curricula. 2 Serving two masters: Experimental evidence on the independence of internal auditors Abstract Twenty-nine internal auditors participated in a study that examines the independence of internal auditors in their potentially competing roles of serving two masters: the audit committee and management. Our main hypothesis suggests that internal auditors' independence is not achievable in an institutional setting in which internal auditors are accountable to two different parties with potentially differing priorities. We test our hypothesis in an experiment in which the treatment consisted of two different instructions from the Chief Audit Executive: one stressing the priority of management (cost reduction) and one stressing the priority of the audit committee (effectiveness). Internal auditors had to evaluate the internal controls, and their inherent costs, of different processes which varied in their degree of task complexity. Our main results indicate that internal auditors' evaluations of the processes differ significantly when task complexity is high.
Our findings suggest that internal auditors do follow the priorities of either management or the audit committee depending on the instructions of a superior internal auditor. The study's results question whether the independence of the internal audit function is actually achievable. With our findings, we contribute to research on internal auditors' judgment and the internal audit function's independence in the frame of corporate governance.
Abstract:
The aim of the theoretical part of this thesis is to present the basic principles of banks' capital adequacy regulation and risk management, and to examine the current Basel I framework and its reform, the Basel II framework. The thesis concentrates on the first pillar of the new framework and the minimum capital requirements it prescribes. Particular attention is paid to the methods for calculating the minimum capital requirement for credit risk: the standardised approach and the internal ratings-based approach. The standardised approach relies on external credit ratings, whereas the more advanced internal ratings-based approach makes use of banks' own information systems and the estimates of customers' creditworthiness they produce. In the empirical part of the thesis, the calculation of credit-risk capital requirements is studied for an example bank under the Basel I framework and under the Basel II calculation methods. Following the internal ratings-based approach, risk weights are determined for the bank's balance-sheet items, and it is also examined whether the bank, with its current balance-sheet structure, could achieve a better result by optimising its risk profile when using the more advanced internal ratings-based approach instead of the standardised approach.
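As background to the internal ratings-based calculation mentioned above, the sketch below implements the Basel II IRB risk-weight function for corporate exposures in the form published by the Basel Committee. It is a minimal illustration only: the input figures (PD, LGD, exposure) are hypothetical and the thesis's own example-bank calculations are not reproduced here.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def irb_capital_requirement(pd_, lgd, maturity=2.5):
    """Basel II IRB capital requirement K for a corporate exposure (illustrative)."""
    # Asset correlation, interpolated between 12% and 24% as a function of PD
    r = (0.12 * (1 - exp(-50 * pd_)) / (1 - exp(-50))
         + 0.24 * (1 - (1 - exp(-50 * pd_)) / (1 - exp(-50))))
    # Maturity adjustment factor
    b = (0.11852 - 0.05478 * log(pd_)) ** 2
    # Conditional expected loss at the 99.9% confidence level, minus expected loss
    k = lgd * (norm.cdf((norm.ppf(pd_) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r)) - pd_)
    return k * (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)

# Risk-weighted assets for a 1 MEUR exposure with PD = 1%, LGD = 45% (hypothetical figures)
print(12.5 * irb_capital_requirement(0.01, 0.45) * 1_000_000)
```

Under the standardised approach, by contrast, the risk weight would simply be looked up from the external rating of the counterparty.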
Abstract:
The aim of the thesis was to examine what kinds of questions the valuation of patents raises from a tax perspective in corporate restructuring situations. The study is descriptive, qualitative and normative. Corporate restructurings carried out in accordance with the tax laws are, with the exception of liquidation, tax-neutral transactions in which no taxable income arises for the parties. If, on the other hand, a restructuring is not carried out in accordance with the Business Income Tax Act, taxable income is realised; in that case patents, too, are valued at fair value under the Business Income Tax Act. There is no single correct way to determine the fair value of a patent, but income-based valuation methods are considered the best. The key questions in patent valuation from the perspective of the taxation of corporate restructurings are therefore how to preserve tax neutrality and, if tax neutrality cannot be preserved, how fair value is to be determined.
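As background to the income-based valuation methods mentioned above, a generic present-value formulation (the standard textbook form, not a formula taken from the thesis) is:

$$ V_{\text{patent}} = \sum_{t=1}^{T} \frac{E[CF_t]}{(1+r)^t} $$

where \(E[CF_t]\) is the expected net cash flow attributable to the patent in year t (e.g. licensing income or royalty savings), r a risk-adjusted discount rate, and T the remaining economic or legal life of the patent.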
Abstract:
Summary: The rapid development of new technologies such as medical imaging has allowed functional brain studies to expand. The main role of functional brain studies is to compare neuronal activation between different individuals. In this context, the anatomical variability of brain size and shape poses a major problem. Current methods allow inter-individual comparisons by normalising brains to a standard brain. The most widely used standard brains are currently the Talairach brain and the brain of the Montreal Neurological Institute (MNI) (SPM99). Registration methods that use the Talairach or MNI brain are not precise enough to superimpose the more variable parts of the cerebral cortex (e.g., the neocortex or the perisylvian zone), or regions that are strongly asymmetric between the two hemispheres. The aim of this project is to evaluate a new image-processing technique based on non-rigid registration using anatomical landmarks. First, we identify and extract the anatomical structures (the anatomical landmarks) in the brain to be deformed and in the reference brain. The correspondence between these two sets of landmarks allows us to determine the appropriate deformation in 3D. As anatomical landmarks, we use six control points located bilaterally: one on Heschl's gyrus, one on the motor hand area and one on the sylvian fissure. Our registration program is evaluated on the MRI and fMRI images of nine of the eighteen subjects who participated in a previous study by Maeder et al. On the anatomical (MRI) images, the results show the displacement of the anatomical landmarks of the brain being deformed to the positions of the anatomical landmarks of the reference brain. The distance of the deformed brain from the reference brain decreases after registration. Registration of the functional (fMRI) images shows no significant change; the small number of landmarks, six control points, is not sufficient to modify the statistical maps. This thesis opens the way to a new cortical registration technique whose main direction is the registration of several points representing a cerebral sulcus. Abstract: The fast development of new technologies such as digital medical imaging has brought about the expansion of brain functional studies. One of the key methodological issues in brain functional studies is to compare neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by means of normalisation of subjects' brains in relation to a standard brain. The most widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute standard brain (SPM99). However, there is a lack of more precise methods for the superposition of the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) and of brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image processing technique based on non-linear, model-based registration.
In contrast to intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points situated bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure. The evaluation of this model-based approach is performed on the MRI and fMRI images of nine of the eighteen subjects participating in the Maeder et al. study. Results on anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points. The distance of the deforming brain to the reference brain is smaller after registration than before. Registration of functional (fMRI) images does not show a significant variation. The small number of registration landmarks, i.e. six, is obviously not sufficient to produce significant modifications of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration in which the main direction will be improvement of the registration algorithm, using not just one point as a landmark but many points representing a particular sulcus.
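To make the landmark-correspondence idea concrete, the sketch below estimates a simple affine 3-D transform from paired control points by least squares. It is only an illustration of the general principle: the thesis uses a non-rigid, model-based deformation, and the six landmark coordinates here are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical 3-D control points (mm): rows are landmarks in the deforming
# brain and their anatomical correspondences in the reference brain.
moving = np.array([[42.0, -18.0, 8.0], [-44.0, -20.0, 9.0],
                   [38.0, -22.0, 55.0], [-40.0, -24.0, 57.0],
                   [50.0, -8.0, 2.0], [-52.0, -10.0, 3.0]])
reference = moving + np.array([2.0, -1.5, 1.0])   # placeholder displacement

# Least-squares affine fit x_ref ~ A @ x_mov + t from the landmark pairs
X = np.hstack([moving, np.ones((len(moving), 1))])       # homogeneous coordinates
params, *_ = np.linalg.lstsq(X, reference, rcond=None)   # (4 x 3) parameter matrix
A, t = params[:3].T, params[3]

registered = moving @ A.T + t
print("mean landmark distance after registration:",
      np.linalg.norm(registered - reference, axis=1).mean())
```

A non-rigid scheme would replace the single affine transform with a spatially varying deformation interpolated between the landmarks, which is what allows local cortical structures such as a sulcus to be matched.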
Abstract:
The advent of new advances in mobile computing has changed the way we do our daily work, even enabling us to perform collaborative activities. However, current groupware approaches do not offer an integrated and efficient solution that jointly tackles the flexibility and heterogeneity inherent to mobility as well as the awareness aspects intrinsic to collaborative environments. Issues related to the diversity of contexts of use are collected under the term plasticity. A great number of tools have emerged that offer solutions to some of these issues, although always focused on individual scenarios. We are working on reusing and specializing some already existing plasticity tools for groupware design. The aim is to offer the benefits of plasticity and awareness jointly, trying to achieve real collaboration and a deeper understanding of multi-environment groupware scenarios. In particular, this paper presents a conceptual framework intended as a reference for the generation of plastic user interfaces for collaborative environments in a systematic and comprehensive way. Starting from a previous conceptual framework for individual environments, inspired by the model-based approach, we introduce specific components and considerations related to groupware.
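To illustrate the model-based idea described above, the toy sketch below keeps a single abstract interaction model and maps it to different target platforms ("plasticity"), adding a groupware awareness cue. All names and the widget-selection rule are hypothetical, invented for illustration; they are not part of the paper's framework.

```python
from dataclasses import dataclass

@dataclass
class AbstractInteractor:
    task: str            # e.g. "select shared document"
    awareness: bool      # whether to show who else is working on the element

def reify(interactor: AbstractInteractor, platform: str) -> str:
    """Toy concretisation step: choose a widget that fits the target platform."""
    widget = "drop-down list" if platform == "phone" else "thumbnail grid"
    cue = " + co-author presence badge" if interactor.awareness else ""
    return f"{widget} for '{interactor.task}'{cue}"

shared_doc = AbstractInteractor("select shared document", awareness=True)
for platform in ("desktop", "phone"):
    print(platform, "->", reify(shared_doc, platform))
```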
Abstract:
This Perspective discusses the pertinence of variable dosing regimens with anti-vascular endothelial growth factor (VEGF) agents for neovascular age-related macular degeneration (nAMD) with regard to real-life requirements. After the initial pivotal trials of anti-VEGF therapy, the variable dosing regimens pro re nata (PRN), Treat-and-Extend, and Observe-and-Plan, a recently introduced regimen, aimed to optimize the anti-VEGF treatment strategy for nAMD. The PRN regimen showed good visual results but requires monthly monitoring visits and can therefore be difficult to implement. Moreover, application of the PRN regimen revealed inferior results in real-life circumstances due to problems with resource allocation. The Treat-and-Extend regimen uses an interval-based approach and has become widely accepted for its ease of preplanning and the reduced number of office visits required. The parallel development of the Observe-and-Plan regimen demonstrated that the future need for retreatment (the interval) could be reliably predicted. Studies investigating the Observe-and-Plan regimen also showed that it could be used in individualized fixed treatment plans, allowing for a dramatically reduced clinical burden and good outcomes, thus meeting real-life requirements. This progressive development of variable dosing regimens is a response to the real-life circumstances of limited human, technical, and financial resources. It includes an individualized treatment approach, optimization of the number of retreatments, a minimal number of monitoring visits, and ease of planning ahead. The Observe-and-Plan regimen achieves this goal with good functional results. Translational Relevance: This Perspective reviews the process from the pivotal clinical trials to the development of treatment regimens adjusted to real-life requirements. The article discusses this translational process, which - although not the classical interpretation of translation from fundamental to clinical research, but a subsequent process after the pivotal clinical trials - represents an important translational step from the clinical proof of efficacy to optimization in terms of patients' and clinics' needs. The related scientific procedure includes the exploration of the concept, evaluation of safety, and finally proof of efficacy.
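As a schematic illustration of the interval-based logic behind a Treat-and-Extend style regimen (extend the injection interval when the disease is quiescent, shorten it when activity recurs), a toy update rule is sketched below. The step size and interval bounds are illustrative placeholders, not taken from this Perspective, and the sketch is in no way treatment guidance.

```python
def next_interval(current_weeks: int, disease_active: bool,
                  step: int = 2, lo: int = 4, hi: int = 16) -> int:
    """Treat-and-Extend style interval update (illustrative values only):
    extend the interval when the macula is dry, shorten it when disease
    activity recurs, within fixed lower and upper bounds."""
    delta = -step if disease_active else step
    return max(lo, min(hi, current_weeks + delta))

interval = 8
for active in (False, False, True):          # hypothetical follow-up findings
    interval = next_interval(interval, active)
    print("next visit in", interval, "weeks")
```

The Observe-and-Plan regimen described above goes one step further by observing the individually needed interval first and then fixing a plan of several pre-scheduled treatments, reducing monitoring visits.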
Abstract:
This study analyses the evolution of greenhouse gas (GHG) and acidifying emissions for Italy over the period 1995-2005. The data show that while the emissions contributing to acidification have decreased steadily, GHG emissions have increased owing to the rise in carbon dioxide. The aim of the study is to highlight how different economic factors, in particular economic growth, the development of less polluting technology and the structure of consumption, have driven the evolution of emissions. The proposed methodology is structural decomposition analysis (SDA), a method that makes it possible to decompose changes in the variable of interest among the different driving forces and to reveal the importance of each factor. In addition, the study considers the importance of international trade and attempts to incorporate the "responsibility problem": through international trade relations, a country may be exporting polluting production processes without any real reduction in the pollution embodied in its consumption pattern. To this end, following first a "producer responsibility" approach, the SDA is applied to the emissions caused by domestic production. The analysis then moves to a "consumer responsibility" approach, and the decomposition is applied to the emissions associated with the domestic or foreign production that satisfies domestic demand. In this way, the exercise provides a first check of the importance of international trade and highlights some results at the aggregate and sectoral levels.
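As background to the SDA method mentioned above, emissions in an environmentally extended input-output framework are commonly written as E = e'Ly, with e the vector of emission intensities, L the Leontief inverse and y final demand; the change in E is then split into the contributions of each driver. The sketch below uses a standard average of the two "polar" decompositions on a made-up two-sector example; none of the figures come from the study.

```python
import numpy as np

def leontief(A):
    return np.linalg.inv(np.eye(A.shape[0]) - A)

# Toy two-sector data for two years (all figures invented for illustration)
e0, e1 = np.array([0.9, 0.4]), np.array([0.7, 0.4])          # emissions per unit output
A0 = np.array([[0.20, 0.10], [0.15, 0.25]])
A1 = np.array([[0.18, 0.10], [0.15, 0.22]])
y0, y1 = np.array([100.0, 80.0]), np.array([110.0, 95.0])    # final demand
L0, L1 = leontief(A0), leontief(A1)

dE = e1 @ L1 @ y1 - e0 @ L0 @ y0
# Average of the two polar decompositions: contribution of each driver
intensity = ((e1 - e0) @ L1 @ y1 + (e1 - e0) @ L0 @ y0) / 2   # cleaner technology
structure = (e0 @ (L1 - L0) @ y1 + e1 @ (L1 - L0) @ y0) / 2   # production structure
demand    = (e0 @ L0 @ (y1 - y0) + e1 @ L1 @ (y1 - y0)) / 2   # economic growth / demand
print(dE, intensity + structure + demand)   # the three effects sum exactly to dE
```

Under the "consumer responsibility" variant, the same decomposition is applied to the emissions embodied in domestic demand, whether produced at home or abroad.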
Abstract:
The aim of this thesis is to describe and analyse the different areas of the forthcoming reform of the banks' capital adequacy framework, particularly from the viewpoint of improving the preconditions for market discipline, with a focus on disclosure requirements. In addition, the thesis assesses the consequences of the reform as a whole for a bank and its various stakeholder groups, the strengths and weaknesses of the reform, and possible problem areas. The thesis is based on the second proposals of the Basel Committee on Banking Supervision and the European Commission for reforming the capital adequacy framework. Additional perspective is provided by articles and publications on the subject, as well as by interviews. The reform of the banks' capital adequacy framework consists of three mutually complementary so-called pillars: 1) the calculation of minimum capital requirements, 2) the strengthening of the supervisory review process, and 3) the improvement of the preconditions for market discipline. The reform is still in progress and its content changes continuously as the positions of the various parties become clearer. It is therefore too early to draw definitive conclusions, but it is already clear that the reform is extensive and significant. Among other things, it allows the use of internal risk ratings, encourages banks towards more effective risk management, and multiplies the amount of information to be disclosed compared with the current rules. There is international consensus on the broad outlines of the reform, but many problems remain to be solved. The drafting bodies have therefore decided to issue a third proposal before the final decision. The greatest concerns at present are uniform international implementation and even-handed compliance with the rules. The level of detail of the framework also worries many.
Abstract:
This review presents the evolution of steroid analytical techniques, including gas chromatography coupled to mass spectrometry (GC-MS), immunoassay (IA) and targeted liquid chromatography coupled to mass spectrometry (LC-MS), and it evaluates the potential of extended steroid profiles by a metabolomics-based approach, namely steroidomics. Steroids regulate essential biological functions including growth and reproduction, and perturbations of the steroid homeostasis can generate serious physiological issues; therefore, specific and sensitive methods have been developed to measure steroid concentrations. GC-MS measuring several steroids simultaneously was considered the first historical standard method for analysis. Steroids were then quantified by immunoassay, allowing a higher throughput; however, major drawbacks included the measurement of a single compound instead of a panel and cross-reactivity reactions. Targeted LC-MS methods with selected reaction monitoring (SRM) were then introduced for quantifying a small steroid subset without the problems of cross-reactivity. The next step was the integration of metabolomic approaches in the context of steroid analyses. As metabolomics tends to identify and quantify all the metabolites (i.e., the metabolome) in a specific system, appropriate strategies were proposed for discovering new biomarkers. Steroidomics, defined as the untargeted analysis of the steroid content in a sample, was implemented in several fields, including doping analysis, clinical studies, in vivo or in vitro toxicology assays, and more. This review discusses the current analytical methods for assessing steroid changes and compares them to steroidomics. Steroids, their pathways, their implications in diseases and the biological matrices in which they are analysed will first be described. Then, the different analytical strategies will be presented with a focus on their ability to obtain relevant information on the steroid pattern. The future technical requirements for improving steroid analysis will also be presented.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown steadily since its appearance in the early 1970s. Nowadays its use has become indispensable, thanks among other things to its ability to produce diagnostic images of high quality. However, and despite an indisputable benefit for patient management, the substantial increase in the number of CT examinations performed raises questions about the potentially harmful effect of ionising radiation on the population. Among these adverse effects, the induction of cancers linked to exposure to ionising radiation remains one of the major risks. To keep the benefit-risk ratio in the patient's favour, it is therefore necessary to ensure that the delivered dose allows the correct diagnosis to be made while avoiding images of unnecessarily high quality. This optimisation process, which is an important concern for adult patients, must become a priority when children or adolescents are examined, in particular in follow-up studies requiring several examinations over their lifetime. Children and young adults are indeed much more sensitive to radiation because their metabolism is faster than that of adults. Moreover, the probability of the adverse events to which they are exposed is also higher because of their longer life expectancy. The introduction of iterative reconstruction algorithms, designed to reduce patient exposure, is certainly one of the greatest advances in CT, but it brings certain difficulties regarding the assessment of the quality of the images produced. The aim of this work is to put in place a strategy for investigating the potential of iterative algorithms for dose reduction without compromising diagnostic quality. The difficulty of this task lies mainly in having a method for assessing image quality in a clinically relevant way. The first step consisted in characterising image quality in musculoskeletal examinations. This work was carried out in close collaboration with radiologists to ensure a relevant choice of image-quality criteria. Particular attention was paid to the noise and resolution of images reconstructed with iterative algorithms. The analysis of these parameters allowed the radiologists to adapt their protocols thanks to a possible estimate of the loss of image quality associated with dose reduction. Our work also allowed us to investigate the decrease in low-contrast detectability associated with dose reduction, a major difficulty when examining the abdominal region. Knowing that alternatives to the standard way of characterising image quality (Fourier-space metrics) had to be used, we relied on mathematical model observers. Our experimental parameters then determined the type of model to use.
Ideal model observers were used to characterise image quality when purely physical parameters of signal detectability had to be estimated, whereas anthropomorphic model observers were used in clinical contexts in which the results had to be compared with those of human observers, taking advantage of the properties of this type of model. This study confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, thereby establishing a link between medical physicists and radiologists. We also showed that iterative reconstructions have the potential to reduce dose without altering diagnostic quality. Among the different iterative reconstructions, model-based ones offer the greatest optimisation potential, since the images produced with this modality lead to a correct diagnosis even for acquisitions at very low dose. This work has also clarified the role of the medical physicist in CT: standard metrics remain useful for assessing the compliance of a unit with legal requirements, but the use of model observers is unavoidable for optimising imaging protocols. -- Computed tomography (CT) is an imaging technique in which interest has been growing quickly since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio still remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in continuous, close collaboration with radiologists. The work began by tackling the way to characterise image quality when dealing with musculo-skeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, which is a major concern when dealing with patient dose reduction in abdominal investigations. Knowing that alternative ways had to be used to assess image quality rather than classical Fourier-space metrics, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists in CT imaging: the standard metrics used in the field remain quite important for assessing unit compliance with legal requirements, but the use of a model observer is the way to go when optimising imaging protocols.
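To make the anthropomorphic model-observer idea concrete, the sketch below computes the detectability index d' of a channelized Hotelling observer, a common task-based image-quality metric, on toy images. The channels, image size and signal used here are crude placeholders, not the observer model or data used in the thesis.

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability index d'.

    signal_imgs, noise_imgs: (n_images, n_pixels) arrays of signal-present /
    signal-absent images; channels: (n_pixels, n_channels) channel matrix."""
    vs = signal_imgs @ channels            # channel outputs, signal present
    vn = noise_imgs @ channels             # channel outputs, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0) # mean channel-output difference
    S = (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False)) / 2
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Toy example: 16x16 images, a faint disc signal in white noise, 3 Gaussian channels
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(16) - 7.5, np.arange(16) - 7.5)
signal = 0.5 * (xx**2 + yy**2 < 9).astype(float).ravel()
signal_imgs = rng.normal(0, 1, (250, 256)) + signal          # hypothetical low-contrast lesion
noise_imgs = rng.normal(0, 1, (250, 256))
channels = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)).ravel()
                     for s in (1.5, 3.0, 6.0)], axis=1)
print("d' =", cho_detectability(signal_imgs, noise_imgs, channels))
```

Comparing d' across dose levels and reconstruction algorithms is what allows a task-based statement such as "this low-dose iterative protocol preserves low-contrast detectability".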
Abstract:
Changes in the cumulative cost of children's caries treatment and the association between caries-treatment practices and costs. The aim of the study was to measure the cumulative costs of the caries treatment of children treated in health centres and to compare them between two different practice models. The dental health of the children was also examined. The study was conducted from the perspective of the public service provider. The data were collected from the oral health care patient records of the Kemi and Tornio health centres. The Kemi birth cohorts 1980, 1983 and 1986 (n = 600) and the Tornio cohorts 1980 and 1992 (n = 400) represented the traditional practice model, and the Kemi cohorts 1989, 1992 and 1995 (n = 600) the new practice model with respect to the division of labour and the timing of prevention. The cohorts and towns were compared in terms of dental health (dmft/DMFT = 0 and mean dmft and DMFT at the ages of 5 and 12) and the use of resources. Resource use was derived from the numbers of visits via imputed working time. Cumulative costs were constructed using provider-specific unit costs calculated from personnel expenditure. The relationships between costs and health effects were assessed in a cost-effectiveness analysis. The early-prevention practice model, which made use of the work input of dental hygienists, achieved better dental health before school age and equally good dental health at school age at lower cost than the traditional model, which relied more on the work input of dentists. The number of caries-related visits was smaller in the youngest birth cohorts than in the oldest cohorts, with visits to the dentist decreasing the most. The practice model had a significant effect on the total costs of a child's caries treatment. According to the sensitivity analysis, the costs of caries treatment were one third lower when the division of labour was utilised than if the treatment had been provided solely by a dentist-nurse pair. The cost-effectiveness of children's caries treatment improved in both health centres in the younger cohorts compared with the older ones. Oral health care patient records should be used in the development of services. With early prevention, the work input of all oral health care professionals could be allocated cost-effectively.
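For reference, the comparison of costs and health effects mentioned above is conventionally summarised with an incremental cost-effectiveness ratio; the definition below is the standard one, not a figure reported by the study:

$$ \text{ICER} = \frac{C_{\text{new}} - C_{\text{traditional}}}{E_{\text{new}} - E_{\text{traditional}}} $$

where C denotes the cumulative treatment cost per child under each practice model and E the corresponding health effect (e.g. the share of children with dmft/DMFT = 0).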
Abstract:
The scientific community has been suffering from peer review for decades. This process (also called refereeing) subjects an author's scientific work or ideas to the scrutiny of one or more experts in the field. Publishers use it to select and screen manuscript submissions, and funding agencies use it to award research funds. The goal is to get authors to meet their discipline's standards and thus achieve scientific objectivity. Publications and awards that haven't undergone peer review are often regarded with suspicion by scholars and professionals in many fields. However, peer review, although universally used, has many drawbacks. We propose replacing peer review with an auction-based approach: the better the submitted paper, the more scientific currency the author will likely bid to have it published. If the bid correctly reflects the paper's quality, the author is rewarded in this new scientific currency; otherwise, the author loses this currency. We argue that citations are an appropriate currency for all scientists. We believe that citation auctions encourage scientists to better control the quality of their submissions. They also inspire them to prepare more exciting talks for accepted papers and to invite discussion of their results at congresses and conferences and among their colleagues. In the long run, citation auctions could have the power to greatly improve scientific research.
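One possible reading of the bidding-and-settlement idea sketched in the abstract is given below. The scoring rule, tolerance and reward cap are invented for illustration; they are not the authors' actual mechanism.

```python
def settle_auction(bid: int, realised_citations: int, tolerance: int = 5) -> int:
    """Toy settlement rule for a citation auction (illustrative only): the author
    stakes `bid` citation credits; if the paper's realised citations come close
    to or exceed the bid, the stake is returned with a capped reward, otherwise
    the stake is lost."""
    if realised_citations + tolerance >= bid:
        return bid + max(0, min(realised_citations - bid, bid))   # stake back plus reward
    return -bid                                                   # overbidding is penalised

for bid, cites in [(10, 25), (30, 8)]:
    print(f"bid={bid}, citations={cites}, credit change={settle_auction(bid, cites)}")
```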
Abstract:
The thesis examines the effect of Auditing Standard No. 5 (AS 5) under the Sarbanes-Oxley Act on the audits of Finnish companies. The theoretical part covers corporate governance and auditing as part of it, the COSO ERM framework and controls, and the Sarbanes-Oxley Act. The thesis focuses on the content areas of AS 5, including the top-down, risk-based approach and the elimination of unnecessary audit procedures. The research material was collected in 2007-2009 from various professional journals and blogs and by interviewing three auditors from large audit firms. AS 5 appears to have emphasised the risk-based nature of auditing and made the audit process more efficient, for example by allowing broader use of audit evidence from previous years. It also appears that AS 5 has lowered audit fees.