44 results for AFAS (ASEAN Framework Agreement on Services)
Abstract:
Increasingly, patients with unhealthy alcohol and other drug use are being seen in primary care and other non-specialty addiction settings. Primary care providers are well positioned to screen, assess, and treat patients with alcohol and other drug use because this use, and substance use disorders, may contribute to a host of medical and mental health harms. We sought to identify and examine important recent advances in addiction medicine in the medical literature that have implications for the care of patients in primary care or other generalist settings. To accomplish this aim, we selected articles in the field of addiction medicine, critically appraised and summarized them, and highlighted their implications for generalist practice. During an initial review, we identified articles through an electronic Medline search (limited to English-language human studies) using search terms for alcohol and other drugs of abuse, covering articles published from January 2010 to January 2012. After this initial review, we searched web-based and journal resources for other potential articles of interest. From the list of articles identified in these initial reviews, each of the six authors independently selected articles for more intensive review and identified those they judged to have a potential impact on generalist practice. The identified articles were then ranked by the number of authors who selected each article. Through a consensus process over four meetings, the authors reached agreement on the articles with implications for practice for generalist clinicians that warranted inclusion for discussion.
The authors then grouped the articles into five categories: 1) screening and brief interventions in outpatient settings, 2) identification and management of substance use among inpatients, 3) medical complications of substance use, 4) use of pharmacotherapy for addiction treatment in primary care and its complications, and 5) integration of addiction treatment and medical care. The authors discuss each selected article's merits, limitations, conclusions, and implications for advancing the screening, assessment, and treatment of addiction in generalist physician practice environments.
Abstract:
Purpose: The increase of the apparent diffusion coefficient (ADC) in treated hepatic malignancies compared to pre-therapeutic values has been interpreted as treatment success; however, the variability of ADC measurements remains unknown. Furthermore, ADC has usually been measured in the whole lesion, while measurements should probably be centered on the area with the most restricted diffusion (MRDA), as it represents potential tumoral residue. Our objective was to compare the inter-/intraobserver variability of ADC measurements in the whole lesion and in the MRDA. Material and methods: Forty patients previously treated with chemoembolization or radiofrequency ablation were evaluated (20 at 1.5T and 20 at 3.0T). After consensual agreement on the best ADC image, two readers measured the ADC values using separate regions of interest that included the whole lesion and the whole MRDA without exceeding their borders. The same measurements were repeated two weeks later. The Spearman test and the Bland-Altman method were used. Results: Interobserver correlation of ADC measurements was 0.962 in the whole lesion and 0.884 in the MRDA; intraobserver correlation was, respectively, 0.992 and 0.979. Interobserver limits of variability (×10⁻³ mm²/s) were -0.25/+0.28 in the whole lesion and -0.51/+0.46 in the MRDA; intraobserver limits were, respectively, -0.25/+0.24 and -0.43/+0.47. Conclusion: We observed good inter- and intraobserver correlation in ADC measurements. Nevertheless, a limited variability does exist, and it should be considered when interpreting ADC values of hepatic malignancies.
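The Bland-Altman limits of variability reported here are straightforward to compute; a minimal sketch with invented paired readings (illustrative values, not the study's data):

```python
import numpy as np

def bland_altman(reader_a, reader_b):
    """Bias and 95% limits of agreement between two sets of paired measurements."""
    d = np.asarray(reader_a, dtype=float) - np.asarray(reader_b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)                       # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired ADC readings (x10^-3 mm^2/s), not the study's data
reader1 = [1.10, 1.42, 0.95, 1.60, 1.25]
reader2 = [1.05, 1.50, 1.00, 1.55, 1.20]
bias, (low, high) = bland_altman(reader1, reader2)
```

The interval (low, high) is what the abstract reports as "limits of variability": the range within which about 95% of inter- or intraobserver differences are expected to fall.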
Abstract:
In order to identify the main social policy tools that can efficiently combat working poverty, it is essential to identify its main driving factors. More importantly, this work shows that all poverty factors identified in the literature have a direct bearing on working households through three mechanisms, namely being badly paid, having below-average workforce participation, and having high needs. One of the main purposes of this work is to assess whether the policies put forward in the specialist literature as potentially efficient really work. This is done in two ways. A first empirical prong provides an evaluation of the employment and antipoverty effects of these instruments, based on a meta-analysis of four instruments: minimum wages, tax credits for working households, family cash benefits, and childcare policies. The second prong relies on a broader framework based on welfare regimes. This work contributes to the identification of a typology of welfare regimes that is suitable for the analysis of working poverty, and four countries are chosen to exemplify each regime: the US, Sweden, Germany, and Spain. It then moves on to show that the weight of the three working poverty mechanisms varies widely from one welfare regime to the other. This second empirical contribution clearly shows that there is no "one-size-fits-all" approach to the fight against working poverty. But none of this is possible without having properly defined the phenomenon. Most of the literature is characterized by a "definitional chaos" that probably does more harm than good to social policy efforts. Hence, this book provides a conceptual reflection pleading for the use of a very encompassing definition of being in work. It shows that "the working poor" is too broad a category to be used for meaningful academic or policy discussion, and that a distinction must be made between different categories of the working poor. Failing to acknowledge this prevents the design of an efficient policy mix.
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meaning of the component items of a sentence and their relationship to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on a meaning element that is taken as the principal qualificand (mukhyaviśesya), qualified by the other meaning elements. This analysis is the object of continuous debate over a period of more than a thousand years between the philosophers of the schools of Mīmāmsā, Nyāya (mainly in its Navya form) and Vyākarana. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of this theory. The Mīmāmsakas argue that the principal qualificand is what they call bhāvanā ('bringing into being', 'efficient force' or 'productive operation'), expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaranas take it to be the operation expressed by the verbal root.
All the participants rely on the Pāninian grammar, insofar as the Mīmāmsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but they use different interpretive strategies to justify their views, which are often in overt contradiction with the interpretation of the Pāninian rules accepted by the Vaiyākaranas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts, of each school present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicalities. Historically, this theory arises in Mīmāmsā (with Sabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaranas enter this debate last (with Bhattoji Dīksita and Kaunda Bhatta), with the declared aim of refuting the arguments of the Mīmāmsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāninian grammar. The central argument has focused on the capacity of these initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate.
Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but also makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.
Abstract:
Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, where the source is abstracted to be point-like and fixed in time, and where only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling of diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of the establishment of the Bicoid gradient in Drosophila embryos. Gradient formation arises by protein translation from an mRNA distribution, followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with a decreasing exponential behavior of the profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to the ones obtained from static sources can be achieved.
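The Green's-function construction can be sketched numerically. In the sketch below (parameter values are arbitrary, not fitted to the Bicoid system), the steady-state profile for any source s(x) under diffusion with linear degradation is the convolution of s with the kernel G(x) = exp(-|x|/λ)/(2√(Dk)), where λ = √(D/k):

```python
import numpy as np

# Illustrative parameters (arbitrary units, not fitted to Bicoid data)
D, k = 1.0, 0.25                 # diffusion coefficient and degradation rate
lam = np.sqrt(D / k)             # decay length lambda = sqrt(D/k)

x = np.linspace(-20.0, 20.0, 2001)
dx = x[1] - x[0]

# Steady-state Green's function of D*C'' - k*C + delta(x) = 0
G = np.exp(-np.abs(x) / lam) / (2.0 * np.sqrt(D * k))

def steady_profile(source):
    """Steady-state profile: convolution of the source with the Green's function."""
    return np.convolve(source, G, mode="same") * dx

# Point-like source versus a spatially extended (Gaussian) source
point = np.zeros_like(x)
point[len(x) // 2] = 1.0 / dx                    # discrete approximation of delta(x)
gauss = np.exp(-x**2 / (2.0 * 2.0**2))
gauss /= gauss.sum() * dx                        # normalize to unit total production

C_point = steady_profile(point)                  # equals G itself
C_gauss = steady_profile(gauss)                  # smoother near origin, same tail
```

Far from the source, both profiles decay as exp(-x/λ), consistent with the decreasing exponential behavior noted above; only the shape near the source differs.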
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research, from the single-gene, single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology, such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines, reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of regulatory networks, such as steady-state behavior, stochasticity, and gene perturbation experiments.
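As a toy illustration of the Boolean/finite-state-machine view (the network and its rules below are invented for illustration, not taken from the chapter), the steady-state behavior of a two-gene toggle switch can be found by exhaustively walking the finite state space:

```python
from itertools import product

# Toy synchronous Boolean network: a two-gene toggle switch (mutual repression).
def update(state):
    a, b = state
    return (int(not b), int(not a))   # each gene is ON iff its repressor is OFF

def attractors():
    """Follow every state of the finite state space until a state repeats."""
    found = set()
    for s in product((0, 1), repeat=2):
        seen = []
        while s not in seen:
            seen.append(s)
            s = update(s)
        found.add(tuple(sorted(seen[seen.index(s):])))  # the cycle that was reached
    return found
```

This tiny state machine has two fixed points, (0, 1) and (1, 0) (the two stable expression patterns of the switch), plus one oscillating cycle; steady-state analysis, stochastic variants, and gene-perturbation experiments can all be phrased as operations on the same transition function.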
Abstract:
Nanotechnology is becoming part of our daily life in a wide range of products such as computers, bicycles, sunscreens or nanomedicines. While these applications are already becoming reality, considerable work awaits the scientists, engineers, and policy makers who want such nanotechnological products to yield a maximum of benefit at a minimum of social, environmental, economic and (occupational) health cost. Considerable efforts at coordination and collaboration in research are needed to reach these goals within a reasonable time frame and at an affordable cost. This is recognized in Europe by the European Commission, which not only funds research projects but also supports the coordination of research efforts. One of these coordination efforts is NanoImpactNet, a researcher-operated network, which started in 2008 to promote scientific cross-talk across all disciplines on the health and environmental impact of nanomaterials. Stakeholders contribute to these activities, notably the definition of research and knowledge needs. Initial discussions in this domain focused on finding agreement on common metrics and on which elements are needed for standardized approaches to hazard and exposure identification. There are many nanomaterial properties that may play a role. Hence, to gain the time needed to study this complex matter full of uncertainties, researchers and stakeholders unanimously called for simple, easy and fast risk assessment tools that can support decision making in this rapidly moving and growing domain. Today, several projects are starting or already running that will develop such assessment tools. At the same time, other projects investigate in depth which factors and material properties can lead to unwanted toxicity or exposure, what mechanisms are involved, and how such responses can be predicted and modelled.
A vision for the future is that once these factors, properties and mechanisms are understood, they can and will be accounted for in the development of new products and production processes following the idea of "Safety by Design". The promise of all these efforts is a future with nanomaterials where most of their risks are recognized and addressed before they even reach the market.
Abstract:
1. Species distribution models are increasingly used to address conservation questions, so their predictive capacity requires careful evaluation. Previous studies have shown how individual factors used in model construction can affect prediction. Although some factors probably have negligible effects compared to others, their relative effects are largely unknown. 2. We introduce a general "virtual ecologist" framework to study the relative importance of factors involved in the construction of species distribution models. 3. We illustrate the framework by examining the relative importance of five key factors (a missing covariate, spatial autocorrelation due to a dispersal process in presences/absences, sample size, sampling design, and modeling technique) in a real study framework based on plants in a mountain landscape at regional scale, and show that, for the parameter values considered here, most of the variation in prediction accuracy is due to sample size and modeling technique. Contrary to repeatedly reported concerns, spatial autocorrelation has comparatively small effects. 4. This study shows the importance of using a nested statistical framework to evaluate the relative effects of factors that may affect species distribution models.
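A minimal "virtual ecologist" experiment along these lines isolates one factor, sample size, by simulating a species whose true response is known, sampling it, fitting a model, and averaging the prediction error over replicates. The true response curve, sampling scheme, and linear probability model below are assumptions for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_prob(env):
    """Assumed 'virtual species': occurrence probability along one gradient."""
    return 1.0 / (1.0 + np.exp(-(4.0 * env - 2.0)))

def replicate(n):
    """Sample n sites, fit a simple linear probability model, return MSE vs truth."""
    env = rng.uniform(0.0, 1.0, n)
    occ = (rng.random(n) < true_prob(env)).astype(float)   # presence/absence data
    X = np.column_stack([np.ones(n), env])
    beta, *_ = np.linalg.lstsq(X, occ, rcond=None)
    grid = np.linspace(0.0, 1.0, 101)
    p_hat = beta[0] + beta[1] * grid
    return np.mean((p_hat - true_prob(grid)) ** 2)

# Average prediction error over replicates for two sample-size settings
err_small = np.mean([replicate(25) for _ in range(200)])
err_large = np.mean([replicate(400) for _ in range(200)])
```

Nesting further factor settings (sampling design, modeling technique, an omitted covariate) inside the same replication loop reproduces the kind of factorial comparison the framework describes.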
Abstract:
Electricity is a strategic service in modern societies. Thus, it is extremely important for governments to be able to guarantee an affordable and reliable supply, which depends to a great extent on an adequate expansion of the generation and transmission capacities. Cross-border integration of electricity markets creates new challenges for the regulators, since the evolution of the market is now influenced by the characteristics and policies of neighbouring countries. There is still no agreement on why and how regions should integrate their electricity markets. The aim of this thesis is to improve the understanding of integrated electricity markets and how their behaviour depends on the prevailing characteristics of the national markets and the policies implemented in each country. We developed a simulation model to analyse under what circumstances integration is desirable. This model is used to study three cases of interconnection between two countries. Several policies regarding interconnection expansion and operation, combined with different generation capacity adequacy mechanisms, are evaluated. The thesis is composed of three papers. The first paper presents a detailed description of the model and an analysis of the case of Colombia and Ecuador. It shows that market coupling can bring important benefits, but the relative size of the countries can lead to import dependency issues in the smaller country. The second paper compares the case of Colombia and Ecuador with the case of Great Britain and France. These countries are significantly different in terms of electricity sources, hydro-storage capacity, complementarity and demand growth. We show that complementarity is essential in order to obtain benefits from integration, while higher demand growth and hydro-storage capacity can lead to counterintuitive outcomes, thus complicating policy design.
In the third paper, an extended version of the model presented in the first paper is used to analyse the case of Finland and its interconnection with Russia. Different trading arrangements are considered. We conclude that unless interconnection capacity is expanded, the current trading arrangement, where a single trader owns the transmission rights and limits the flow during peak hours, is beneficial for Finland. In case of interconnection expansion, market coupling would be preferable. We also show that the costs of maintaining a strategic reserve in Finland are justified in order to limit import dependency, while still reaping the benefits of interconnection. In general, we conclude that electricity market integration can bring benefits if the right policies are implemented. However, a large interconnection capacity is only desirable if the countries exhibit significant complementarity and trust each other. The outcomes of policies aimed at guaranteeing security of supply at a national level can be quite counterintuitive due to the interactions between neighbouring countries and their effects on interconnection and generation investments. Thus, it is important for regulators to understand these interactions and coordinate their decisions in order to take advantage of the interconnection without putting security of supply at risk. But it must be taken into account that even when integration brings benefits to the region, some market participants lose and might try to hinder the integration process.
Abstract:
Accountability and transparency are of growing importance in contemporary governance. The academic literature has broadly studied the two concepts separately, defining and redefining them and incorporating them into various frameworks, sometimes mistakenly using them as synonyms. The relationship between the two concepts has, curiously, only been studied by a few scholars, with preliminary approaches. This theoretical paper focuses on both concepts, first describing them in light of their evolution in the literature, including recent developments and the first attempts to link the two concepts. To show a new approach linking the concepts, four cases from the Swiss context are portrayed, demonstrating the necessity to reconsider the relationship between transparency and accountability. Consequently, a new framework, based on Fox's (2007) framework, is presented and theoretically delimited.
Abstract:
Idiopathic scoliosis (IS) is a three-dimensional deformity of the spine and trunk. The most common form involves adolescents (AIS). The prevalence of AIS is 2-3% of the population, with 1 out of 6 patients requiring treatment, of whom 25% progress to surgery. Physical and rehabilitation medicine (PRM) plays a primary role in the so-called conservative treatment of adolescents with AIS, since all the therapeutic tools used (exercises and braces) fall into the PRM domain. According to a Cochrane systematic review, there is evidence in favor of bracing, even if it is of low quality. Another shows that there is evidence in favor of exercises as an adjunctive treatment, but again of low quality. Three meta-analyses have been published on bracing: one shows that bracing does not reduce surgery rates, although studies of bracing plus exercises, which had the highest effectiveness, were not included; another shows that full-time is better than part-time bracing; the last focuses on observational studies following the SRS criteria and shows that not all full-time rigid braces are the same: some have the highest effectiveness, while others perform worse than elastic and nighttime bracing. Two very important RCTs failed in recruitment, showing that in the field of bracing for scoliosis RCTs are not accepted by patients. Consensuses by the international Society on Scoliosis Orthopaedic and Rehabilitation Treatment (SOSORT) show that there is no agreement among experts either on the best braces or on their biomechanical action, and that compliance is a matter of clinical management more than of patients' behavior (there is strong agreement on the management criteria to achieve the best results with bracing). A systematic review of all the existing studies shows the effectiveness of exercises, and that auto-correction is the main goal of exercises. Another systematic review shows that there are no studies on manual treatment.
Research on the conservative treatment of AIS decreased continuously from the 1980s, a trend that has reversed only recently. The SOSORT Guidelines offer the current standard of conservative care.
Accelerated Microstructure Imaging via Convex Optimisation for regions with multiple fibres (AMICOx)
Abstract:
This paper reviews and extends our previous work to enable fast axonal diameter mapping from diffusion MRI data in the presence of multiple fibre populations within a voxel. Most of the existing microstructure imaging techniques use non-linear algorithms to fit their data models and are consequently computationally expensive and usually slow. Moreover, most of them assume a single axon orientation, while numerous regions of the brain actually present more complex configurations, e.g. fibre crossings. We present a flexible framework, based on convex optimisation, that enables fast and accurate reconstruction of the microstructure organisation, not limited to areas where the white matter is coherently oriented. We show through numerical simulations the ability of our method to correctly estimate the microstructure features (mean axon diameter and intra-cellular volume fraction) in crossing regions.
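The core idea of replacing a slow non-linear fit with a convex one can be sketched with a linear dictionary and non-negative least squares. The decay model, b-values, and diffusivity grid below are illustrative assumptions, not the actual AMICOx signal model:

```python
import numpy as np
from scipy.optimize import nnls

# Dictionary of mono-exponential decay atoms exp(-b*d): the non-linear
# parameter d is enumerated on a grid, so fitting reduces to finding
# non-negative atom weights -- a convex problem solved in one shot.
b = np.linspace(0.0, 3000.0, 30)                # acquisition b-values (s/mm^2)
diffs = np.linspace(0.2e-3, 3.0e-3, 15)         # candidate diffusivities (mm^2/s)
A = np.exp(-np.outer(b, diffs))                 # one column per candidate atom

true_idx = 7
signal = A[:, true_idx]                         # noise-free single-atom signal

weights, residual = nnls(A, signal)             # fast convex fit
```

Microstructure summaries (e.g. a mean parameter value) then follow as weighted averages over the recovered atom weights; extending the dictionary with atoms for several fibre orientations is what lifts the single-orientation restriction.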
Abstract:
The World Health Organization (WHO) plans to submit the 11th revision of the International Classification of Diseases (ICD) to the World Health Assembly in 2018. The WHO is working toward a revised classification system that has an enhanced ability to capture health concepts in a manner that reflects current scientific evidence and that is compatible with contemporary information systems. In this paper, we present recommendations made to the WHO by the ICD revision's Quality and Safety Topic Advisory Group (Q&S TAG) for a new conceptual approach to capturing healthcare-related harms and injuries in ICD-coded data. The Q&S TAG has grouped causes of healthcare-related harm and injuries into four categories that relate to the source of the event: (a) medications and substances, (b) procedures, (c) devices and (d) other aspects of care. Under the proposed multiple coding approach, one of these sources of harm must be coded as part of a cluster of three codes to depict, respectively, a healthcare activity as a 'source' of harm, a 'mode or mechanism' of harm and a consequence of the event summarized by these codes (i.e. injury or harm). Use of this framework depends on the implementation of a new and potentially powerful code-clustering mechanism in ICD-11. This new framework for coding healthcare-related harm has great potential to improve the clinical detail of adverse event descriptions, and the overall quality of coded health data.
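The proposed three-code cluster can be modelled as a small data structure; in the sketch below, the category names paraphrase the four source-of-harm categories above, and the field values in the usage are invented placeholders, not real ICD-11 codes:

```python
from dataclasses import dataclass

# The four proposed source-of-harm categories, paraphrased from the text
SOURCES = {"medication_or_substance", "procedure", "device", "other_care"}

@dataclass(frozen=True)
class HarmCluster:
    """One healthcare-related harm event as a cluster of three linked codes."""
    source: str       # healthcare activity that was the source of harm
    mechanism: str    # mode or mechanism of harm
    consequence: str  # resulting injury or harm

    def __post_init__(self):
        if self.source not in SOURCES:
            raise ValueError(f"unknown source-of-harm category: {self.source}")

# Placeholder values, not real ICD-11 codes
event = HarmCluster(source="device", mechanism="mechanical failure",
                    consequence="laceration")
```

Keeping the three codes in one record, rather than as three unrelated codes, is the point of the code-clustering mechanism: the source, mechanism, and consequence of a single event stay linked in the coded data.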