109 results for strategy based organization
Abstract:
Rapid amplification of cDNA ends (RACE) is a widely used approach for transcript identification. Random clone selection from the RACE mixture, however, is an ineffective sampling strategy if the dynamic range of transcript abundances is large. To improve sampling efficiency of human transcripts, we hybridized the products of the RACE reaction onto tiling arrays and used the detected exons to delineate a series of reverse-transcriptase (RT)-PCRs, through which the original RACE transcript population was segregated into simpler transcript populations. We independently cloned the products and sequenced randomly selected clones. This approach, RACEarray, is superior to direct cloning and sequencing of RACE products because it specifically targets new transcripts and often results in overall normalization of transcript abundance. We show theoretically and experimentally that this strategy indeed leads to efficient sampling of new transcripts, and we investigated multiplexing the strategy by pooling RACE reactions from multiple interrogated loci before hybridization.
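The sampling argument is easy to see in a small simulation: when abundances span several orders of magnitude, random clones are dominated by the most abundant transcripts, whereas segregating the mixture into simpler, more homogeneous sub-populations (as the exon-directed RT-PCRs do) lets the same sequencing effort reach rare species. Below is a minimal Monte Carlo sketch in Python; the number of transcripts, the abundance distribution, and the pooling rule are invented for illustration and are not the paper's data or model.

```python
import random

def distinct_sampled(abundances, n_clones, rng):
    # Draw clones with probability proportional to transcript abundance;
    # count how many distinct transcripts are seen at least once.
    ids = list(range(len(abundances)))
    picks = rng.choices(ids, weights=abundances, k=n_clones)
    return len(set(picks))

rng = random.Random(0)
# 200 hypothetical transcripts spanning ~4 orders of magnitude in abundance.
abundances = [10 ** rng.uniform(0, 4) for _ in range(200)]
n_clones = 400  # total sequencing effort, identical in both scenarios

# Direct cloning and sequencing: one draw from the whole RACE mixture.
direct = distinct_sampled(abundances, n_clones, rng)

# Segregated sampling: partition transcripts into pools of similar abundance
# (a stand-in for the simpler RT-PCR sub-populations) and spread the same
# effort evenly across pools, which flattens the effective dynamic range.
n_pools = 8
ranked = sorted(range(len(abundances)), key=abundances.__getitem__)
size = len(ranked) // n_pools
pools = [ranked[i * size:(i + 1) * size] for i in range(n_pools)]
segregated = sum(
    distinct_sampled([abundances[i] for i in pool], n_clones // n_pools, rng)
    for pool in pools
)

print(f"direct sampling:     {direct} distinct transcripts")
print(f"segregated sampling: {segregated} distinct transcripts")
```

With these illustrative numbers, the segregated draw recovers substantially more distinct transcripts than the direct draw for the same number of sequenced clones, which is the normalization effect the abstract describes.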
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; it is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective.

This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic, baseline Information Security program are defined. On this basis, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model.

Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:
- identification of the key elements within the dimension;
- identification of the Focus Areas of the dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors of the dimension, consisting of the security measures or controls addressing those security issues.
The second phase concerns the evaluation of each Information Security dimension through:
- the implementation of the evaluation model, based on the elements identified in the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- a maturity model for each dimension as a basis for reliance on security; for each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements.
Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach.

Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.

SUMMARY: General context of the thesis. The evaluation of security in general, and of Information Security in particular, has become for organizations not only a crucial mission but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address the various aspects of Information Security separately. We consider this way of evaluating security inefficient, because it does not take into account the interaction between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We have identified the need for a global, integrated, systemic and multidimensional approach to Information Security evaluation. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security makes it possible to meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new evaluation paradigm for security, designed to satisfy the effectiveness and efficiency needs of a given organization. We propose a model that aims to evaluate all the dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model rests on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers Information Security from a global perspective.

Structure of the thesis and topics addressed. Our document is structured in three parts. The first, entitled "The problem of Information Security evaluation", consists of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model, together with its expected operation, are then presented in order to outline the results expected from this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference baseline in order to determine the object of the evaluation used throughout our work. The concepts inherent in the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships that exist between Risk Management and Security Management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way it meets the evaluation requirements as previously presented. In this chapter the underlying concepts relating to the notions of assurance and trust are analysed. Building on these two concepts, the structure of the evaluation model is developed to obtain a platform offering a certain level of guarantee, relying on three evaluation attributes, namely: "trust structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art in research and the literature, of the various existing methods, and of the norms and standards most current in the security field. On this basis, three different evaluation levels are constructed, namely the assurance level, the quality level and the maturity level, which constitute the basis for evaluating the overall security state of an organization.

The second part, "Application of the Information Security Assurance Assessment Model by security domain", likewise consists of four chapters. The evaluation model already constructed and analysed is, in this part, placed in a specific context according to the four predefined security dimensions, which are: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation:
- identification of the key elements of the evaluation;
- identification of the "Focus Areas" of each dimension, which represent the issues found within that dimension;
- identification of the "Specific Factors" of each Focus Area, which represent the security and control measures that help to resolve, or to reduce the impact of, the risks.
The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of the implementation of the general evaluation model for the dimension concerned by:
- building on the elements specified during the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.
On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference baseline for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization in order to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight.

The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work together with final remarks. This last part is completed by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal answers a genuine, as yet unresolved problem that all organizations face, regardless of size and profile.
It would allow them to specify their particular requirements as to the security level to be met and to instantiate an evaluation process specific to their needs, so that they can make sure their Information Security is managed appropriately, thereby gaining a certain level of confidence in the degree of protection provided. We have integrated into our model the best know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool stemming from a coherent evaluation approach. Consequently, our evaluation system can be implemented internally by the organization itself, without recourse to additional resources, and likewise gives it the ability to better govern its Information Security.
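To make the model's structure concrete, the sketch below encodes the hierarchy described above (dimensions containing Focus Areas, which contain Specific Factors rated on a maturity scale) together with a weakest-link aggregation, in line with the thesis's premise that overall security is only as strong as its weakest link. All names, the 0-5 maturity scale, and the min-based aggregation are illustrative assumptions, not ISAAM's actual scoring rules.

```python
from dataclasses import dataclass, field

@dataclass
class SpecificFactor:          # a security measure or control
    name: str
    maturity: int              # assumed 0 (absent) .. 5 (optimized) scale

@dataclass
class FocusArea:               # a security issue within a dimension
    name: str
    factors: list = field(default_factory=list)

    def maturity(self) -> int:
        return min(f.maturity for f in self.factors)

@dataclass
class Dimension:               # Organizational, Functional, Human or Legal
    name: str
    focus_areas: list = field(default_factory=list)

    def maturity(self) -> int:
        return min(a.maturity() for a in self.focus_areas)

def overall_security_level(dimensions) -> int:
    # "Only as strong as its weakest link": aggregate with min, not mean.
    return min(d.maturity() for d in dimensions)

org = Dimension("Organizational", [
    FocusArea("Policy", [SpecificFactor("Approved security policy", 4)]),
    FocusArea("Governance", [SpecificFactor("Defined roles", 2)]),
])
legal = Dimension("Legal", [
    FocusArea("Compliance", [SpecificFactor("Data-protection review", 3)]),
])
print(overall_security_level([org, legal]))  # -> 2, set by the weakest factor
```

A mean-based aggregate would hide the weak Governance factor behind stronger ones; the min keeps the weakest link visible, which is the behaviour the thesis argues an evaluation should have.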
Abstract:
Summary: Metropolization, urban morphology and sustainable development. Urban transformations and the regulation of sprawl: the case of the Lausanne agglomeration. This thesis takes the perspective of a strategic analysis aiming to define and make explicit the links between knowledge, expertise and political decision-making. The fundamental hypothesis guiding this entire work is the following: the urbanization regime that has taken hold over the past thirty years corresponds to a transformation of the morphogenetic principle of the spatial development of agglomerations, which tends to worsen their ecological balance and to degrade the quality of the living environment of city dwellers. These environmental stakes linked to urban change, and particularly to changes in urban form, constitute an increasingly important theme in the search for urban planning solutions within a perspective of sustainable development. In this context, urban planning becomes a mode of action and a first-order component of public policies aiming at sustainable development at the local and global scales. These modes of spatial development of agglomerations indisputably emerge at the heart of the environmental question. Yet while the concept of sustainable development gives us a new way of reading territories and their transformations, by advocating the model of the compact city and its corollary, densification, how this strategic principle should be translated into practice remains controversial, notably from the angle of spatial planning and of the urban development strategies that would allow an adequate implementation of the proposed solutions. In this work we have therefore tried to answer a number of questions: what validity should be granted to the compact city model? Is densification an adequate answer? If so, under what terms? What are, in terms of planning strategies, the sustainable alternatives to the sprawling city model? Should we really densify, or simply control dispersion? Our main objective is ultimately to determine the planning orientations and contents of public policies aiming to regulate urban sprawl, to validate the feasibility of these principles and to define the conditions of their implementation in the case of an agglomeration. To this end, and after choosing the Lausanne agglomeration as our field of experimentation, three complementary approaches proved indispensable in this work: 1. a theoretical approach aiming to define an interdisciplinary conceptual framework for analysing the urban phenomenon in its relation to sustainable development, linking urbanization regime, urban form and sustainable development; 2. a methodological approach proposing simple and effective analytical tools for describing the new urban morphologies, for a better management of the urban environment and of urban planning practice; 3. a pragmatic approach aiming to deepen the reflection on the sprawling city by moving from a descriptive approach to the consequences of the new urbanization regime to an operational approach aiming to identify possible lines of action within a perspective of sustainable development. This analytical process led us to three major results, enabling us to define a strategy against sprawl.
First, if densification is accepted as a strategic objective of urban planning, the dense city model cannot be applied without taking other planning objectives into consideration. Densifying is not in itself enough to reduce the ecological footprint of the city and improve the quality of life of its inhabitants. The search for a more sustainable urban form depends on a multiplicity of factors and synergy effects, and controlling the negative effects of urban sprawl requires the implementation of integrated and concerted urban policies: for example, promoting qualified densification as the outcome of a purposeful process, integrating and enhancing public transport, and even more so the pedestrian metric, together with urban planning, and systematically integrating diversity through the physical and social dimensions of the territory. Second, the future of these sprawling territories is not fixed. Our field survey showed an evolution of housing modes linked to lifestyles, the organization of work and mobility, suggesting that part of the population may return to the central cities (the end of the supremacy of the single-family-house model). Thus, the diagnosis and the search for effective and viable planning solutions cannot be dissociated from the demands of the inhabitants and from the behaviour of the actors who produce the built environment. In this perspective, any urban program must necessarily build on knowledge of the population's aspirations. Third, the successful implementation of a global policy for controlling the negative effects of urban sprawl is strongly conditioned by the adaptation of the housing supply to the demand for new housing models that respond both to the need to control the costs of urbanization (economic, social, environmental) and to the emerging aspirations of households. These results allowed us to define the orientations of a strategy against sprawl, whose feasibility and conditions of implementation we tested on the territory of the Lausanne agglomeration.

Abstract: This dissertation takes the perspective of a strategic analysis aiming at specifying the links between knowledge, expertise and political decision. The fundamental hypothesis directing this study assumes that the urban dynamics that has characterized the past thirty years signifies a transformation of the morphogenetic principle of agglomerations' spatial development, one that results in a worsening of their ecological balance and of city dwellers' quality of life. The environmental implications linked to urban changes, and particularly to changes in urban form, constitute an ever greater share of research into sustainable urban planning solutions. In this context, urban planning becomes a mode of action and an essential component of public policies aiming at local and global sustainable development. These patterns of spatial development indisputably emerge at the heart of environmental issues. If the concept of sustainable development provides us with a new understanding of territories and their transformations, by arguing in favor of densification, its concretization remains at issue, especially in terms of urban planning and of the urban development strategies allowing the appropriate implementation of the solutions offered.
Thus, this study tries to answer a certain number of questions: what validity should be granted to the model of the dense city? Is densification an adequate answer? If so, under what terms? What are the sustainable alternatives to urban sprawl in terms of planning strategies? Should densification really be pursued, or should we simply try to master urban sprawl? Our main objective is, in fine, to determine the directions and urban contents of public policies aiming at regulating urban sprawl, to validate the feasibility of these principles and to define the conditions of their implementation in the case of one agglomeration. Once the Lausanne agglomeration had been chosen as the experimentation field, three complementary approaches proved to be essential to this study: 1. a theoretical approach aiming at defining an interdisciplinary conceptual framework of the urban phenomenon in its relation to sustainable development, linking urban dynamics, urban form and sustainable development; 2. a methodological approach proposing simple and effective tools for analyzing and describing new urban morphologies, for a better management of the urban environment and of urban planning practices; 3. a pragmatic approach aiming at deepening reflection on urban sprawl by switching from a descriptive approach to the consequences of the new urban dynamics to an operational approach aiming at identifying possible avenues of action respecting the principles of sustainable development. This analysis provided us with three major results, allowing us to define a strategy to curtail urban sprawl. First, if densification is accepted as a strategic objective of urban planning, the model of the dense city cannot be applied without taking other urban planning objectives into consideration. Densification does not suffice to reduce the ecological impact of the city and improve the quality of life of its dwellers. The search for a more sustainable urban form depends on a multitude of factors and effects of synergy. Reducing the negative effects of urban sprawl requires the implementation of integrated and concerted urban policies: for example, encouraging qualified densification as the outcome of a purposeful process, integrating and developing collective forms of transportation, and even more so the pedestrian metric, with urban planning, and integrating diversity on a systematic basis through the physical and social dimensions of the territory. Second, the future of such sprawling territories is not fixed. Our research on the ground revealed an evolution in the modes of habitat related to ways of life, work organization and mobility that suggests the possibility of the return of a part of the population to the center of cities (the end of the rule of the model of the individual home). Thus, the diagnosis and the search for effective and sustainable solutions cannot be conceived independently of the needs of the inhabitants and of the behavior of the actors behind the production of the built territory. In this perspective, any urban program must necessarily be based upon knowledge of the population's wishes. Third, the successful implementation of a global policy of control of urban sprawl's negative effects is highly influenced by the adaptation of the housing supply to the demand for new habitat models satisfying both the necessity of urbanization cost controls (economic, social, environmental) and people's emerging aspirations. These results allowed us to define a strategy to curtail urban sprawl.
Its feasibility and conditions of implementation were tested on the territory of the Lausanne agglomeration.
Abstract:
BACKGROUND: Drug therapy in high-risk individuals has been advocated as an important strategy to reduce cardiovascular disease in low-income countries. We determined, in a low-income urban population, the proportion of persons who utilized health services after having been diagnosed as hypertensive and advised to seek health care for further hypertension management. METHODS: A population-based survey of 9254 persons aged 25-64 years was conducted in Dar es Salaam. Among the 540 persons with high blood pressure (defined here as BP ≥ 160/95 mmHg) at the initial contact, 253 (47%) still had high BP on a 4th visit 45 days later. Among them, 208 were untreated and were advised to attend a health center of their choice for further management of their hypertension. One year later, 161 were seen again and asked about their use of health services during the interval. RESULTS: Among the 161 hypertensive persons advised to seek health care, 34% reported having attended a formal health care provider during the 12-month interval (63% a public facility; 30% private; 7% both). Antihypertensive treatment was taken by 34% at some point in time (suggesting poor uptake of health services) and by 3% at the end of the 12-month follow-up (suggesting poor long-term compliance). Health services utilization tended to be associated with older age, a previous history of high BP, being overweight and non-smoking, but not with education or wealth. Lack of symptoms and the cost of treatment were the reasons reported most often for not attending health care. CONCLUSION: Low utilization of health services after hypertension screening suggests a small impact of a patient-centered screen-and-treat strategy in this low-income population. These findings emphasize the need to identify and address barriers to health care utilization for non-communicable diseases in this setting and, indirectly, the importance of public health measures for the primary prevention of these diseases.
Abstract:
A crucial step in the arenavirus life cycle is the biosynthesis of the viral envelope glycoprotein (GP) responsible for virus attachment and entry. Processing of the GP precursor (GPC) by the cellular proprotein convertase site 1 protease (S1P), also known as subtilisin-kexin-isozyme 1 (SKI-1), is crucial for cell-to-cell propagation of infection and production of infectious virus. Here, we sought to evaluate arenavirus GPC processing by S1P as a target for antiviral therapy using a recently developed peptide-based S1P inhibitor, decanoyl (dec)-RRLL-chloromethylketone (CMK), and the prototypic arenavirus lymphocytic choriomeningitis virus (LCMV). To control for off-target effects of dec-RRLL-CMK, we employed arenavirus reverse genetics to introduce a furin recognition site into the GPC of LCMV. The rescued mutant virus grew to normal titers, and the processing of its GPC critically depended on cellular furin, but not S1P. Treatment with the S1P inhibitor dec-RRLL-CMK resulted in specific blocking of viral spread and virus production of LCMV. Combination of the protease inhibitor with ribavirin, currently used clinically for treatment of human arenavirus infections, resulted in additive drug effects. In cells deficient in S1P, the furin-dependent LCMV variant established persistent infection, whereas wild-type LCMV underwent extinction without the emergence of S1P-independent escape variants. Together, the potent antiviral activity of an inhibitor of S1P-dependent GPC cleavage, the additive antiviral effect with ribavirin, and the low probability of emergence of S1P-independent viral escape variants make S1P-mediated GPC processing by peptide-derived inhibitors a promising strategy for the development of novel antiarenaviral drugs.
Abstract:
BACKGROUND: Predicting the outcome of breast cancer (BC) patients based on sentinel lymph node (SLN) status without axillary lymph node dissection (ALND) is an area of uncertainty. It influences the decision-making for regional nodal irradiation (RNI). The aim of the NORA (NOdal RAdiotherapy) survey was to examine the patterns of RNI. METHODS: A web questionnaire, including several clinical scenarios, was distributed to 88 EORTC-affiliated centers. Responses were received between July 2013 and January 2014. RESULTS: A total of 84 responses were analyzed. While three-dimensional (3D) radiotherapy (RT) planning is carried out in 81 (96%) centers, nodal areas are delineated in only 51 (61%) centers. Only 14 (17%) centers routinely link internal mammary chain (IMC) and supraclavicular node (SCN) RT indications. In patients undergoing total mastectomy (TM) with ALND, SCN-RT is recommended by 5 (6%), 53 (63%) and 51 (61%) centers for patients with pN0(i+), pN(mi) and pN1, respectively. Extra-capsular extension (ECE) is the main factor influencing decision-making for RNI after breast-conserving surgery (BCS) and TM. After primary systemic therapy (PST), 49 (58%) centers take nodal fibrotic changes in ypN0 patients into account for RNI indications. In ypN0 patients with inner/central tumors, 23 (27%) centers indicate SCN-RT and IMC-RT. In ypN1 patients, SCN-RT is delivered by fewer than half of the centers for ypN(i+) and ypN(mi). Twenty-one (25%) centers recommend ALN-RT in patients with ypN(mi) or 1-2N+ after ALND. Seventy-five (90%) centers state that age is not considered a limiting factor for RNI. CONCLUSION: The NORA survey is unique in evaluating the impact of SLNB/ALND status on adjuvant RNI decision-making and volumes after BCS/TM with or without PST. ALN-RT is often indicated in pN1 patients, particularly in the case of ECE. Alongside the ongoing NSABP-B51/RTOG and ALLIANCE trials, NORA could help to design future RNI-specific trials in the SLNB-without-ALND era in patients receiving or not receiving PST.
Abstract:
This Perspective discusses the pertinence of variable dosing regimens with anti-vascular endothelial growth factor (VEGF) for neovascular age-related macular degeneration (nAMD) with regard to real-life requirements. After the initial pivotal trials of anti-VEGF therapy, the variable dosing regimens pro re nata (PRN), Treat-and-Extend, and Observe-and-Plan, a recently introduced regimen, aimed to optimize the anti-VEGF treatment strategy for nAMD. The PRN regimen showed good visual results but requires monthly monitoring visits and can therefore be difficult to implement. Moreover, application of the PRN regimen yielded inferior results in real-life circumstances owing to problems with resource allocation. The Treat-and-Extend regimen uses an interval-based approach and has become widely accepted for its ease of preplanning and the reduced number of office visits required. The parallel development of the Observe-and-Plan regimen demonstrated that the future need for retreatment (the interval) can be reliably predicted. Studies investigating the Observe-and-Plan regimen also showed that it can be used in individualized fixed treatment plans, allowing a dramatically reduced clinical burden with good outcomes, thus meeting real-life requirements. This progressive development of variable dosing regimens is a response to the real-life circumstances of limited human, technical, and financial resources. It includes an individualized treatment approach, optimization of the number of retreatments, a minimal number of monitoring visits, and ease of planning ahead. The Observe-and-Plan regimen achieves this goal with good functional results. Translational Relevance: This Perspective reviews the process from the pivotal clinical trials to the development of treatment regimens adjusted to real-life requirements. The article discusses this translational process which, although not the classical translation from fundamental to clinical research but rather a subsequent process after the pivotal clinical trials, represents an important translational step from the clinical proof of efficacy to optimization in terms of patients' and clinics' needs. The related scientific procedure includes exploration of the concept, evaluation of safety, and finally proof of efficacy.
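As a concrete reading of the interval-based logic mentioned for Treat-and-Extend, the sketch below encodes the commonly described rule: treat at every visit, extend the interval while the macula is quiet, and shorten it when exudative activity recurs. The 4-week floor, 2-week step, and 12-week cap are illustrative assumptions, not parameters taken from this Perspective.

```python
def next_interval(current_weeks: int, disease_active: bool,
                  step: int = 2, floor: int = 4, cap: int = 12) -> int:
    # An injection is given at every visit; only the time to the next
    # visit changes, based on the activity seen today.
    if disease_active:
        return max(floor, current_weeks - step)   # recurrence: shorten
    return min(cap, current_weeks + step)         # dry macula: extend

interval = 4
for active in (False, False, True, False):        # a hypothetical course
    interval = next_interval(interval, active)
    print(f"next visit and injection in {interval} weeks")
# -> 6, 8, 6, 8
```

Observe-and-Plan differs in that the individually observed recurrence interval is used to fix a short series of future treatment dates in advance, which is what removes most of the monitoring visits.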
Abstract:
Anthropomorphic model observers are mathematical algorithms applied to images with the ultimate goal of predicting human signal detection and classification accuracy across varieties of backgrounds, image acquisitions and display conditions. A limitation of current channelized model observers is their inability to handle irregularly shaped signals, which are common in clinical images, without a high number of directional channels. Here, we derive a new linear model observer based on convolution channels, which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening with an eye filter (NPWE) observer. By analogy with the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning sizes from those of lesions down to those of microcalcifications, in 4-AFC breast tomosynthesis detection tasks with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO outperformed humans for every signal but one. Additive internal noise in the models allowed us to degrade model performance and match human performance. We could not match all the human performances with a single internal noise component across all signal shape, size and contrast conditions. This suggests either that the internal noise varies across signals or that the model cannot entirely capture the human detection strategy. Nevertheless, the FCO model offers an efficient way to capture human observer performance for non-symmetric signals.
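For readers unfamiliar with this family of models, the toy computation below builds a channelized-Hotelling-style template and adds the kind of internal-noise term the abstract uses to degrade model performance toward human performance. The Gaussian channel profiles, image dimensions, and noise level are invented for the example; they are not the paper's filtered channels, calibration, or task images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detection task: signal-present vs signal-absent noise patches.
n, size = 200, 16
signal = np.zeros((size, size))
signal[6:10, 6:10] = 1.0                      # illustrative compact signal
absent = rng.normal(0.0, 1.0, (n, size, size))
present = absent + signal

# Illustrative channels: isotropic Gaussians of increasing width (stand-ins
# for the directional or convolution channels discussed in the abstract).
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2
channels = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)).ravel()
                     for s in (1.0, 2.0, 4.0)])

def channel_outputs(imgs):
    return imgs.reshape(len(imgs), -1) @ channels.T   # (n_images, n_channels)

va, vp = channel_outputs(absent), channel_outputs(present)
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))               # average scatter matrix
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))       # Hotelling template

# Decision variables with additive internal noise, used to bring the
# model's performance down toward a human level.
internal_sd = 0.5 * np.std(np.r_[va @ w, vp @ w])
t_a = va @ w + rng.normal(0.0, internal_sd, n)
t_p = vp @ w + rng.normal(0.0, internal_sd, n)
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_a.var() + t_p.var()))
print(f"detectability d' = {d_prime:.2f}")
```

Raising internal_sd lowers d', which is how a single scalar can be fitted per condition; the abstract's finding is precisely that no single value fits all signal shapes, sizes and contrasts at once.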
Abstract:
In the context of globalized competition among territories, cities, regions and countries have to find new ways to be attractive to companies, investors, tourists and residents. In that perspective, major sports events (such as the Olympic Games or the FIFA World Cup) are often seen as a lever for territorial development. Based on that idea, many sports events hosting strategies emerged in the 1980s and 1990s. However, the growing competition in the sports events market and the gigantism of those major events forced some territories to turn to smaller events. This necessary resizing of their strategy raises the question of their capacity to meet the initial objectives, which usually aim at developing the economy and promoting the image of the host destination. This essay sketches out the evolution of a sports events hosting strategy in a city that does not have the resources (financial, human, or in terms of infrastructure) to attract major international sports events. The challenges such cities face, and a possible solution based on the event portfolio perspective, are discussed throughout the article.
Abstract:
The advent of multiparametric MRI has made it possible to change the way in which prostate biopsy is done, allowing biopsies to be directed to suspicious lesions rather than taken randomly. The subject of this review is a computer-assisted strategy, MRI/US fusion software-based targeted biopsy, and its performance compared with the other sampling methods. Different devices with different methods of registering MR images to live TRUS are currently in use to enable software-based targeted biopsy. The main clinical indications for MRI/US fusion software-based targeted biopsy are re-biopsy in men with persistent suspicion of prostate cancer after a first negative standard biopsy, and the follow-up of patients under active surveillance. Some studies have compared MRI/US fusion software-based targeted biopsy with standard biopsy. In men at risk with an MRI-suspicious lesion, targeted biopsy consistently detects more men with clinically significant disease than standard biopsy; some studies have also shown decreased detection of insignificant disease. Only two studies have directly compared MRI/US fusion software-based targeted biopsy with MRI/US fusion visual targeted biopsy, and the diagnostic ability seems to favor the software approach. To date, no study comparing software-based targeted biopsy against in-bore MRI biopsy is available. The new software-based targeted approach seems to have the characteristics to be added to the standard pathway for achieving accurate risk stratification. Once reproducibility and cost-effectiveness have been verified, the actual issue will be to determine whether MRI/TRUS fusion software-based targeted biopsy represents an add-on test or a replacement for standard TRUS biopsy.
Abstract:
The increase in seafood production, especially in mariculture worldwide, has brought out the need for continued monitoring of shellfish production areas in order to ensure safety for human consumption. The purpose of this research was to evaluate contamination by pathogenic protozoa, viruses and bacteria in oysters before and after a UV depuration procedure, and in brackish waters at all stages of cultivation and treatment, and to enumerate microbiological indicators of fecal contamination from the production site up to the depuration site in an oyster cooperative located in a Southeastern estuarine area of Brazil. Oysters and brackish water were collected monthly from September 2009 to November 2010. Four sampling sites were selected for enteropathogen analysis: site 1, oyster growth; site 2, catchment water (before the UV depuration procedure); site 3, filtration stage of water treatment (protozoa analysis only); and site 4, the oysters' depuration tank. Three microbiological indicators were examined at sites 1, 2 and 4. The following pathogenic microorganisms were investigated: Giardia cysts, Cryptosporidium oocysts, Human Adenovirus (HAdV), Hepatitis A virus (HAV), Human Norovirus (HNoV) (genogroups I and II), JC strain Polyomavirus (JCPyV) and Salmonella sp. Analysis consisted of molecular detection (qPCR) for viruses (oyster and water samples); immunomagnetic separation followed by direct immunofluorescence assay for Cryptosporidium oocysts and Giardia cysts, with additional molecular detection (PCR) for the latter (oyster and water samples); and a commercial kit (Reveal, Neogen) for Salmonella analysis (oysters). Giardia was the most prevalent pathogen at all sites where it was detected: in 36.3%, 18.1%, 36.3% and 27.2% of water samples from sites 1, 2, 3 and 4 respectively; 36.3% of oysters from site 1 and 54.5% of depurated oysters were harboring Giardia cysts. The large majority of contaminated samples were classified as Giardia duodenalis. HAdV was detected in water and oysters from the growth site, and HNoV GI in two batches of oysters (site 1) at very high concentrations (2.11 × 10^13 and 3.10 × 10^12 gc/g). At the depuration tank site, Salmonella sp., HAV (4.84 × 10^3) and HNoV GII (7.97 × 10^14) were each detected once in different batches of oysters. Cryptosporidium spp. oocysts were present in 9.0% of water samples from site 4. These results reflect the contamination of oysters even when UV depuration procedures are employed in this shellfish treatment plant. Moreover, a molecular understanding of the sources of contamination is necessary to develop an efficient management strategy, allied to improvements in shellfish treatment, to prevent foodborne illnesses.
Abstract:
This review presents the evolution of steroid analytical techniques, including gas chromatography coupled to mass spectrometry (GC-MS), immunoassay (IA) and targeted liquid chromatography coupled to mass spectrometry (LC-MS), and it evaluates the potential of extended steroid profiles obtained by a metabolomics-based approach, namely steroidomics. Steroids regulate essential biological functions including growth and reproduction, and perturbations of steroid homeostasis can generate serious physiological issues; therefore, specific and sensitive methods have been developed to measure steroid concentrations. GC-MS, measuring several steroids simultaneously, was considered the first historical standard method of analysis. Steroids were then quantified by immunoassay, allowing a higher throughput; however, major drawbacks included the measurement of a single compound instead of a panel, and cross-reactivity. Targeted LC-MS methods with selected reaction monitoring (SRM) were then introduced for quantifying a small steroid subset without the problems of cross-reactivity. The next step was the integration of metabolomic approaches in the context of steroid analyses. As metabolomics tends to identify and quantify all the metabolites (i.e., the metabolome) in a specific system, appropriate strategies were proposed for discovering new biomarkers. Steroidomics, defined as the untargeted analysis of the steroid content of a sample, has been implemented in several fields, including doping analysis, clinical studies, and in vivo or in vitro toxicology assays, among others. This review discusses the current analytical methods for assessing steroid changes and compares them to steroidomics. Steroids, their pathways, their implications in diseases and the biological matrices in which they are analysed will first be described. Then, the different analytical strategies will be presented, with a focus on their ability to obtain relevant information on the steroid pattern. The future technical requirements for improving steroid analysis will also be presented.
Abstract:
Key Messages:
- A fundamental failure of high-risk prevention strategies is their inability to prevent disease in the large part of the population that is at relatively small average risk but from which most cases of disease originate.
- The development of individual predictive medicine and the widening of high-risk categories for numerous (chronic) conditions lead to the application of pseudo-high-risk prevention strategies.
- Widening the criteria that justify individual preventive interventions, and the related pseudo-high-risk strategies, leads to treating, individually, ever healthier and larger strata of the population.
- Pseudo-high-risk prevention strategies raise problems similar to those of high-risk strategies, but on a larger scale and without any of the benefits of population-based strategies.
Some 30 years ago, the strengths and weaknesses of population-based and high-risk prevention strategies were brilliantly delineated by Geoffrey Rose in several seminal publications (Table 1) [1,2]. His work had major implications not only for epidemiology and public health but also for clinical medicine. In particular, Rose demonstrated the fundamental failure of high-risk prevention strategies, namely that they miss a large number of preventable cases.
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which offers not only a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
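For intuition, the two gene statistics can be sketched as below, under the simplifying (and in practice wrong) assumption of independent SNPs; Pascal itself accounts for linkage disequilibrium between SNPs using a reference panel, which this toy version omits entirely.

```python
import numpy as np
from scipy import stats

def gene_scores(snp_pvalues):
    """Toy gene p-values from SNP p-values, assuming independent SNPs."""
    p = np.asarray(snp_pvalues, dtype=float)
    chi2 = stats.chi2.isf(p, df=1)    # map each p-value to a 1-df chi-squared

    k = len(chi2)
    # Sum statistic: the sum of k independent chi^2(1) variables ~ chi^2(k);
    # sensitive to the average association signal across the gene.
    p_sum = stats.chi2.sf(chi2.sum(), df=k)
    # Max statistic: Sidak-style correction of the single best SNP;
    # sensitive to the strongest association signal in the gene.
    p_max = 1.0 - (1.0 - p.min()) ** k
    return p_sum, p_max

print(gene_scores([0.20, 0.003, 0.45, 0.07]))
```

Under correlated SNPs neither null distribution holds, which is why Pascal's analytic and numerical solutions work with the LD correlation matrix instead; the modified Fisher method for pathways then combines gene scores without requiring any significance threshold.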