208 results for "Overcome"
Abstract:
Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization. To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems. Three studies explain the application of decision theory and influence diagrams to specific decisions: one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
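As a minimal illustration of the expected utility maximization principle mentioned in this abstract, the Python sketch below works through a one-stage decision problem; the actions, states, probabilities and utilities are invented for illustration and do not come from the thesis.

# Minimal sketch of expected utility maximization for a one-stage decision
# problem (hypothetical states, actions, probabilities and utilities).

# Posterior probabilities over the states of nature (must sum to 1).
posterior = {"same_source": 0.92, "different_source": 0.08}

# Utility of each (action, state) pair on an arbitrary 0-1 scale.
utility = {
    ("report_match", "same_source"): 1.0,
    ("report_match", "different_source"): 0.0,   # false identification
    ("report_inconclusive", "same_source"): 0.6,
    ("report_inconclusive", "different_source"): 0.6,
}

def expected_utility(action):
    return sum(posterior[s] * utility[(action, s)] for s in posterior)

actions = {"report_match", "report_inconclusive"}
best = max(actions, key=expected_utility)
for a in sorted(actions):
    print(a, round(expected_utility(a), 3))
print("most rational course of action:", best)

A two-stage problem would repeat this evaluation at each stage, folding the expected utility of the later decision back into the evaluation of the earlier one.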
Abstract:
Myocardial tagging has been shown to be a useful magnetic resonance modality for the assessment and quantification of local myocardial function. Many myocardial tagging techniques suffer from rapid fading of the tags, restricting their application mainly to the systolic phases of the cardiac cycle. However, left ventricular diastolic dysfunction has been increasingly appreciated as a major cause of heart failure. Subtraction-based slice-following CSPAMM myocardial tagging has been shown to overcome limitations such as fading of the tags. Remaining impediments to this technique, however, are extensive scanning times (approximately 10 min), the requirement of repeated breath-holds using a coached breathing pattern, and enhanced sensitivity to artifacts related to poor patient compliance or inconsistent depths of end-expiratory breath-holds. We therefore propose a combination of slice-following CSPAMM myocardial tagging with a segmented EPI imaging sequence. Together with an optimized RF excitation scheme, this makes it possible to acquire as many as 20 systolic and diastolic grid-tagged images per cardiac cycle with high tagging contrast during a short period of sustained respiration.
Abstract:
A major challenge in studying social behaviour stems from the need to disentangle the behaviour of each individual from the resulting collective. One way to overcome this problem is to construct a model of the behaviour of an individual, and observe whether combining many such individuals leads to the predicted outcome. This can be achieved by using robots. In this review we discuss the strengths and weaknesses of such an approach for studies of social behaviour. We find that robots, whether studied in groups of simulated or physical robots or used to infiltrate and manipulate groups of living organisms, have important advantages over conventional individual-based models and have contributed greatly to the study of social behaviour. In particular, robots have increased our understanding of self-organization and the evolution of cooperative behaviour and communication. However, the resulting findings have not had the desired impact on the biological community. We suggest reasons why this may be the case, and how the benefits of using robots can be maximized in future research on social behaviour.
Abstract:
Scientific discoveries that provide strong evidence of antitumor effects in preclinical models often encounter significant delays before being tested in patients with cancer. While some of these delays have a scientific basis, others do not. We need to do better. Innovative strategies need to move into early-stage clinical trials as quickly as is safely possible, and, if successful, these therapies should efficiently obtain regulatory approval and widespread clinical application. In late 2009 and 2010, the Society for Immunotherapy of Cancer (SITC) convened an "Immunotherapy Summit" with representatives from immunotherapy organizations representing Europe, Japan, China and North America to discuss collaborations to improve the development and delivery of cancer immunotherapy. One of the concepts raised by SITC and defined as critical by all parties was the need to identify hurdles that impede the effective translation of cancer immunotherapy. With consensus on these hurdles, international working groups could be developed to make recommendations vetted by the participating organizations. These recommendations could then be considered by regulatory bodies, governmental and private funding agencies, pharmaceutical companies and academic institutions to facilitate the changes necessary to accelerate the clinical translation of novel immune-based cancer therapies. The critical hurdles identified by representatives of the collaborating organizations, now organized as the World Immunotherapy Council, are presented and discussed in this report. Some of the identified hurdles impede all investigators; others hinder investigators only in certain regions or institutions, or are more relevant to specific types of immunotherapy or first-in-human studies. Each of these hurdles can significantly delay the clinical translation of promising advances in immunotherapy, yet each, if overcome, has the potential to improve outcomes for patients with cancer.
Abstract:
Over the past two decades, intermittent hypoxic training (IHT), that is, a method where athletes live at or near sea level but train under hypoxic conditions, has gained unprecedented popularity. By adding the stress of hypoxia during 'aerobic' or 'anaerobic' interval training, it is believed that IHT would potentiate greater performance improvements compared to similar training at sea level. A thorough analysis of studies including IHT, however, reveals strikingly poor benefits for sea-level performance improvement compared to the same training method performed in normoxia. Despite the positive molecular adaptations observed after various IHT modalities, the characteristics of the optimal training stimulus in hypoxia are still unclear, and their functional translation in terms of whole-body performance enhancement is minimal. To overcome some of the inherent limitations of IHT (a lower training stimulus due to hypoxia), recent studies have successfully investigated a new training method based on the repetition of short (<30 s) 'all-out' sprints with incomplete recoveries in hypoxia, the so-called repeated sprint training in hypoxia (RSH). The aims of the present review are therefore threefold: first, to summarise the main mechanisms of interval training and repeated sprint training in normoxia; second, to critically analyse the results of studies involving high-intensity exercise performed in hypoxia for sea-level performance enhancement, differentiating between IHT and RSH; and third, to discuss the potential mechanisms underpinning the effectiveness of these methods and their inherent limitations, along with the new research avenues surrounding this topic.
Abstract:
Cooperation among unrelated individuals can arise if decisions to help others can be based on reputation. While this works for dyadic interactions, the use of reputation in social dilemmas involving many individuals (e.g. public goods games) becomes increasingly difficult as groups become larger and errors more frequent. Reputation is therefore believed to have played a minor role in the evolution of cooperation in collective action dilemmas such as those faced by early humans. Here, we show in computer simulations that a reputation system based on punitive actions can overcome these problems and, compared to a reputation system based on generous actions, (i) is more likely to lead to the evolution of cooperation in sizable groups, (ii) more effectively sustains cooperation within larger groups, and (iii) is more robust to errors in reputation assessment. Punishment and punishment reputation could therefore have played crucial roles in the evolution of cooperation within larger groups of humans.
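To make the mechanism concrete, here is a toy Python sketch of a single public goods round in which punitive acts build reputation and can be mis-assessed; the group size, costs, fine and error rate are illustrative values, not the parameters of the simulations reported in the study.

import random

# Toy sketch of one public goods round with punishment-based reputation.
# All parameters are illustrative, not the study's simulation settings.

N, CONTRIB, FACTOR = 8, 1.0, 3.0       # group size, contribution, multiplier
PUNISH_COST, FINE, ERROR = 0.2, 0.8, 0.05

random.seed(1)
strategies = [random.random() < 0.6 for _ in range(N)]   # True = cooperate
payoffs = [0.0] * N
reputation = [0] * N                                      # punitive reputation

# Public goods stage: contributions are multiplied and shared equally.
pot = FACTOR * CONTRIB * sum(strategies)
for i, coop in enumerate(strategies):
    payoffs[i] = pot / N - (CONTRIB if coop else 0.0)

# Punishment stage: cooperators pay a cost to fine defectors; observers
# record the punitive act, but with probability ERROR they mis-assess it.
for i, coop_i in enumerate(strategies):
    if not coop_i:
        continue
    for j, coop_j in enumerate(strategies):
        if i != j and not coop_j:
            payoffs[i] -= PUNISH_COST
            payoffs[j] -= FINE
            if random.random() > ERROR:      # punitive act correctly credited
                reputation[i] += 1

print("payoffs:   ", [round(p, 2) for p in payoffs])
print("reputation:", reputation)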
Abstract:
Meta-analysis of genome-wide association studies (GWASs) has led to the discovery of many common variants associated with complex human diseases. There is a growing recognition that identifying "causal" rare variants also requires large-scale meta-analysis. The fact that association tests with rare variants are performed at the gene level rather than at the variant level poses unprecedented challenges for meta-analysis. First, different studies may adopt different gene-level tests, so the results are not compatible. Second, gene-level tests require multivariate statistics (i.e., the components of the test statistic and their covariance matrix), which are difficult to obtain. To overcome these challenges, we propose to perform gene-level tests for rare variants by combining the results of single-variant analysis (i.e., p values of association tests and effect estimates) from participating studies. This simple strategy is possible because of the insight that multivariate statistics can be recovered from single-variant statistics together with the correlation matrix of the single-variant test statistics, which can be estimated from one of the participating studies or from a publicly available database. We show both theoretically and numerically that the proposed meta-analysis approach provides accurate control of the type I error and is as powerful as joint analysis of individual participant data. This approach accommodates any disease phenotype and any study design and produces all commonly used gene-level tests. An application to the GWAS summary results of the Genetic Investigation of ANthropometric Traits (GIANT) consortium reveals rare and low-frequency variants associated with human height. The relevant software is freely available.
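As a hedged illustration of how a gene-level statistic can be recovered from single-variant summary results, the Python sketch below forms a simple weighted burden test from per-variant p values, effect directions and a correlation matrix of the test statistics; the numbers are invented, and the burden test stands in for the broader family of gene-level tests the approach supports.

import numpy as np
from scipy.stats import norm

# Minimal sketch of a gene-level burden test reconstructed from single-variant
# summary statistics. The correlation matrix R plays the role of the matrix
# estimated from a participating study or a reference database.

p_values = np.array([0.04, 0.20, 0.01])   # per-variant association p values
betas    = np.array([0.30, 0.15, 0.45])   # per-variant effect estimates
weights  = np.array([1.0, 1.0, 1.0])      # e.g. equal or frequency-based weights

# Signed z-scores recovered from p values and effect directions.
z = norm.isf(p_values / 2.0) * np.sign(betas)

# Correlation of the single-variant test statistics (illustrative values).
R = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

# Weighted burden statistic: standard normal under the null hypothesis.
T = weights @ z / np.sqrt(weights @ R @ weights)
p_gene = 2.0 * norm.sf(abs(T))
print(f"burden z = {T:.2f}, gene-level p = {p_gene:.3g}")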
Abstract:
Most of the novel targeted anticancer agents share the classical characteristics that define drugs as candidates for blood concentration monitoring: long-term therapy; high interindividual but restricted intraindividual variability; significant drug-drug and drug-food interactions; correlations between concentration and efficacy/toxicity with a rather narrow therapeutic index; reversibility of effects; and absence of early markers of response. Surprisingly, though, therapeutic concentration monitoring has received little attention for these drugs despite reiterated suggestions from clinical pharmacologists. Several issues explain the lack of clinical research and development in this field: a global tradition of empiricism regarding treatment monitoring, the lack of a formal conceptual framework, ethical difficulties in the elaboration of controlled clinical trials, disregard from both drug manufacturers and public funders, limited encouragement from regulatory authorities, and practical hurdles making dosage adjustment based on concentration monitoring a difficult task for prescribers. However, new technologies will soon help us overcome these obstacles, with the advent of miniaturized measurement devices able to quantify circulating drug concentrations at the point of care, to evaluate their plausibility given the actual dosage and sampling time, to determine their appropriateness with reference to therapeutic targets, and to advise on suitable dosage adjustment. Such evolutions could bring conceptual changes to the clinical development of drugs such as anticancer agents, while increasing the therapeutic impact of population PK-PD studies and systematic reviews. Research efforts in that direction from the clinical pharmacology community will be essential for patients to receive the greatest benefit and the least harm from new anticancer treatments. The example of imatinib, the first commercialized tyrosine kinase inhibitor, will be outlined to illustrate a potential research agenda for the rational development of therapeutic concentration monitoring.
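As a minimal sketch of the kind of concentration-guided dosage adjustment described here, assuming roughly dose-proportional pharmacokinetics, the following Python snippet scales a daily dose toward a target trough concentration; the dose, measured level and target are illustrative placeholders, not clinical recommendations.

# Minimal sketch of concentration-guided dose adjustment assuming roughly
# linear (dose-proportional) pharmacokinetics. Values are illustrative only.

def proportional_dose_adjustment(current_dose_mg, measured_trough, target_trough):
    """Scale the daily dose so the predicted trough moves to the target."""
    return current_dose_mg * target_trough / measured_trough

current_dose = 400            # mg/day, hypothetical current regimen
measured = 620                # ng/mL, observed trough concentration
target = 1000                 # ng/mL, hypothetical therapeutic target

suggested = proportional_dose_adjustment(current_dose, measured, target)
print(f"suggested dose: {suggested:.0f} mg/day")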
Abstract:
PURPOSE OF REVIEW: To review the current experience with angiogenesis inhibitors in the treatment of gliomas. RECENT FINDINGS: Antiangiogenic therapy has recently reached the clinic with the approval of bevacizumab for recurrent glioblastomas. A number of promising antiangiogenic and vasculature-modifying agents are under investigation for newly diagnosed and recurrent malignant gliomas. Recurrence during or after antiangiogenic therapy is often characterized by a more aggressive and, in particular, more invasive phenotype. SUMMARY: Despite impressively high radiological response rates in patients with recurrent malignant glioma, the duration of response is usually short-lived, and the observed effect may to a large extent be due to normalization of the disrupted blood-brain barrier rather than to a direct antitumor effect. Overall survival remains poor. Induction of invasive phenotypes and escape through alternative proangiogenic pathways contribute to resistance. Investigation of combination regimens targeting several pathways will determine whether the resistance to antiangiogenic therapy in malignant gliomas can be overcome. This article summarizes the results of recent clinical trials in this field, points towards mechanisms of resistance arising under angiogenesis inhibition, and discusses the challenges for the future.
Abstract:
BACKGROUND AND PURPOSE: We previously reported increased benefit and reduced mortality after ultra-early stroke thrombolysis in a single center. We now explored in a large multicenter cohort whether the extra benefit of treatment within 90 minutes from symptom onset, compared with later thrombolysis, is uniform across predefined stroke severity subgroups. METHODS: Prospectively collected data of consecutive ischemic stroke patients who received IV thrombolysis in 10 European stroke centers were merged. Logistic regression tested the association of treatment delay with excellent 3-month outcome (modified Rankin Scale 0-1) and with mortality. The association was tested separately in tertiles of baseline National Institutes of Health Stroke Scale. RESULTS: In the whole cohort (n=6856), shorter onset-to-treatment time as a continuous variable was significantly associated with excellent outcome (P<0.001). Every fifth patient had an onset-to-treatment time of ≤90 minutes, and these patients had a lower frequency of intracranial hemorrhage. After adjusting for age, sex, admission glucose level, and year of treatment, onset-to-treatment time ≤90 minutes was associated with excellent outcome in patients with National Institutes of Health Stroke Scale 7 to 12 (odds ratio, 1.37; 95% confidence interval, 1.11-1.70; P=0.004), but not in patients with baseline National Institutes of Health Stroke Scale >12 (odds ratio, 1.00; 95% confidence interval, 0.76-1.32; P=0.99) or baseline National Institutes of Health Stroke Scale 0 to 6 (odds ratio, 1.04; 95% confidence interval, 0.78-1.39; P=0.80). In the latter group, however, an independent association (odds ratio, 1.51; 95% confidence interval, 1.14-2.01; P<0.01) was found when modified Rankin Scale 0 was considered as the outcome (to overcome the possible ceiling effect from the spontaneously better prognosis of patients with mild symptoms). Ultra-early treatment was not associated with mortality. CONCLUSIONS: IV thrombolysis within 90 minutes is, compared with later thrombolysis, strongly and independently associated with excellent outcome in patients with moderate and mild stroke severity.
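For readers who want to see the shape of such an adjusted analysis, the following Python sketch fits a logistic regression of excellent outcome on an onset-to-treatment ≤90 minutes indicator with the listed covariates, using synthetic data; the variable names, effect sizes and data are invented, and only the modelling pattern mirrors the analysis described above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Adjusted logistic regression of the kind described in the abstract, run on
# synthetic data (the real analysis used the merged multicenter cohort).

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ott90":   rng.integers(0, 2, n),            # onset-to-treatment <= 90 min
    "age":     rng.normal(70, 12, n),
    "male":    rng.integers(0, 2, n),
    "glucose": rng.normal(7.0, 2.0, n),
    "year":    rng.integers(2003, 2012, n),
})
# Synthetic outcome with a modest benefit of ultra-early treatment.
logit_p = -0.5 + 0.3 * df["ott90"] - 0.02 * (df["age"] - 70)
df["mrs01"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("mrs01 ~ ott90 + age + male + glucose + year", data=df).fit(disp=0)
or_ott90 = np.exp(model.params["ott90"])
ci = np.exp(model.conf_int().loc["ott90"])
print(f"adjusted OR for OTT<=90 min: {or_ott90:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")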
Abstract:
The large spatial inhomogeneity in the transmit B(1) field (B(1)(+)) observable in human MR images at high static magnetic fields (B(0)) severely impairs image quality. To overcome this effect in brain T(1)-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times, MP2RAGE. By combining the two images in a novel fashion, it was possible to create a T(1)-weighted image that was free of proton density contrast, T(2) contrast, reception bias field, and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit of time between brain tissues and to minimize the effect of B(1)(+) variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T(1)-weighted images, acquired within 12 min, high-resolution 3D T(1) maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). The T(1) maps were validated in phantom experiments. In humans, the T(1) values obtained at 7 T were 1.15+/-0.06 s for white matter (WM) and 1.92+/-0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T(1) values obtained (0.81+/-0.03 s for WM and 1.35+/-0.05 s for GM) were once again found to be in very good agreement with values in the literature.
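As a hedged sketch of the kind of two-image combination MP2RAGE relies on, the Python snippet below implements the ratio commonly associated with the method, which cancels proton density weighting and reception bias; it is a generic illustration on toy arrays, not the authors' implementation, and the exact combination used in the paper may differ.

import numpy as np

# Illustrative sketch of the image combination commonly used for MP2RAGE:
# the two complex inversion-time images are combined into a "uniform"
# T1-weighted image. Generic implementation, not the authors' exact code.

def mp2rage_uniform(s_ti1, s_ti2, eps=1e-12):
    """Combine the two complex GRE images acquired at the two inversion times."""
    num = np.real(np.conj(s_ti1) * s_ti2)
    den = np.abs(s_ti1) ** 2 + np.abs(s_ti2) ** 2 + eps
    return num / den          # bounded to [-0.5, 0.5] by construction

# Toy complex "images" standing in for the two inversion-time volumes.
rng = np.random.default_rng(0)
shape = (4, 4)
s1 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s2 = rng.normal(size=shape) + 1j * rng.normal(size=shape)
print(mp2rage_uniform(s1, s2).round(2))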
Abstract:
In the form of an essay, the author proposes a review of the reasons behind the recent changes in forestry policy. He identifies two explanatory elements and examines them in more detail: the loss of cohesion in sectoral forestry logic since the 1980s, and the internal division which has arisen on account of the failure to put through the partial revision of the Federal Forestry Law. Firstly, it is clear that forestry practice continues to function on a sectoral basis, even though the management of resources increasingly extends across sectors. Accordingly, he also sees forestry management as being restricted by external influences. Secondly, the dichotomy between production and protection weakens the forestry community. The author hopes forestry will be able to overcome these problems and thus become (once more) an influential political protagonist.
Abstract:
This research aims to contribute to understanding how and why certain people can display disobedience behaviors to overcome unjust situations and withstand persecution deployed by authority. The paper presents a hermeneutic content analysis of the autobiographical speeches and texts of Gandhi, M. L. King and Mandela. Our results show that parents' value orientation, the experience of injustice during childhood and the exploration of alternative viewpoints during adolescence play a crucial role in structuring prosocial disobedience. The findings also show that social responsibility and in-group communication are important conditions for facing persecution without abandoning original goals.
Abstract:
Rupture of unstable plaques may lead to myocardial infarction or stroke and is the leading cause of morbidity and mortality in Western countries. Thus, there is a clear need to identify these vulnerable plaques before rupture occurs. Atherosclerotic plaques are a challenging imaging target as they are small and move rapidly, especially in the coronary tree. Many of the imaging tools currently available for clinical use still provide minimal information about the biological characteristics of plaques, because they are limited with respect to spatial and temporal resolution. Moreover, many of these imaging tools are invasive. The new generation of imaging modalities, such as magnetic resonance imaging, nuclear imaging (positron emission tomography and single photon emission computed tomography), computed tomography, fluorescence imaging, intravascular ultrasound, and optical coherence tomography, offers opportunities to overcome some of these limitations. This review discusses the potential of these techniques for imaging the unstable plaque.
Abstract:
This paper presents a pilot project to reinforce participatory practices in standardization. The INTERNORM project is funded by the University of Lausanne, Switzerland. It aims to create an interactive knowledge center based on the sharing of academic skills and the experience accumulated by civil society, especially consumer associations, environmental associations and trade unions, to strengthen the participatory process of standardization. The first objective of the project is action-oriented: INTERNORM provides a common knowledge pool supporting the participation of civil society actors in international standard-setting activities by bringing them together with academic experts in working groups and by providing logistic and financial support for their participation in meetings of national and international technical committees. The second objective of the project is analytical: the standardization action initiated through INTERNORM provides a research field for a better understanding of the participatory dynamics underpinning international standardization. The paper presents three incentives that explain civil society (non-)involvement in standardization and that go beyond conventional resource-based hypotheses: an operational incentive, related to the use of standards in the selective goods provided by associations to their membership; a thematic incentive, provided by the setting of priorities by the strategic committees created in some standardization organizations; and a rhetorical incentive, related to the discursive resource that civil society concerns offer to the different stakeholders.