137 results for equipartition principle


Relevance: 10.00%

Abstract:

TERMINOLOGY AND PRINCIPLES OF COMBINING ANTIPSYCHOTICS WITH A SECOND MEDICATION: The term "combination" includes virtually all the ways in which one medication may be added to another. The other commonly used terms are "augmentation", which implies an additive effect from adding a second medicine to that obtained from prescribing the first, and "add-on", which implies adding a medicine to an existing, possibly effective treatment that, for one reason or another, cannot or should not be stopped. The issues that arise in all potential indications are: a) how long it is reasonable to wait to establish insufficiency of response to monotherapy; b) by what criteria that response should be defined; c) how optimal the dose of the first monotherapy was and, therefore, how confident one can be that its lack of effect reflects a truly inadequate response. Before combination treatment is considered, one or more of the following criteria should be met: a) monotherapy has been only partially effective on core symptoms; b) monotherapy has been effective on some concurrent symptoms but not others, for which a further medicine is believed to be required; c) a particular combination might be indicated de novo in some indications; d) the combination could improve tolerability, because the two compounds may be employed below their individual dose thresholds for side effects. Regulators have been concerned primarily with a) and, in principle at least, c) above. In clinical practice, the use of combination treatment reflects the often unsatisfactory outcome of treatment with single agents.

ANTIPSYCHOTICS IN MANIA: There is good evidence that most antipsychotics tested show efficacy in acute mania when added to lithium or valproate in patients showing no or only a partial response to lithium or valproate alone. Conventional two-armed trial designs could benefit from a third, antipsychotic-monotherapy arm. In the long-term treatment of bipolar disorder, in patients responding acutely to the addition of quetiapine to lithium or valproate, this combination reduces the subsequent risk of relapse into depression, mania or mixed states compared with lithium or valproate monotherapy. Comparable data are not available for combinations with other antipsychotics.

ANTIPSYCHOTICS IN MAJOR DEPRESSION: Some atypical antipsychotics have been shown to induce remission when added to an antidepressant (usually an SSRI or SNRI) in unipolar patients in a major depressive episode unresponsive to antidepressant monotherapy. Refractoriness is defined as at least 6 weeks without meeting an adequate pre-defined treatment response. Long-term data are not yet available to support continuing efficacy.

SCHIZOPHRENIA: There is only limited evidence to support the combination of two or more antipsychotics in schizophrenia. Any monotherapy should be given at the maximal tolerated dose, and at least two antipsychotics of different action/tolerability profiles, as well as clozapine, should be tried as monotherapy before a combination is considered. The addition of a high-potency D2/3 antagonist to a low-potency antagonist such as clozapine or quetiapine is the logical combination for treating positive symptoms, although further evidence from well-conducted clinical trials is needed. Mechanisms of action other than D2/3 blockade, and hence other combinations, might be more relevant for negative, cognitive or affective symptoms.

OBSESSIVE-COMPULSIVE DISORDER: SSRI monotherapy has a moderate overall average benefit in OCD, and it can take as long as 3 months before the benefit can be judged. Antipsychotic addition may be considered in OCD with tic disorder and in refractory OCD. For OCD with poor insight (OCD with "psychotic features"), the treatment of choice should be a medium to high dose of an SSRI, and only in refractory cases might augmentation with antipsychotics be considered. Augmentation with haloperidol and risperidone was found to be effective (symptom reduction of more than 35%) for patients with tics. For refractory OCD, there are data suggesting a specific role for haloperidol and risperidone as well, and some data pointing to a potential therapeutic benefit of olanzapine and quetiapine.

ANTIPSYCHOTICS AND ADVERSE EFFECTS IN SEVERE MENTAL ILLNESS: Cardiometabolic risks in patients with severe mental illness, especially when treated with antipsychotic agents, are now much better recognized, and efforts to ensure improved physical health screening and prevention are becoming established.

Relevance: 10.00%

Abstract:

We show proof of principle for assessing compound biodegradation at 1-2 mg C per L by measuring microbial community growth over time with direct cell counting by flow cytometry. The concept is based on the assumption that the microbial community will increase in cell number by incorporating carbon from the added test compound into new cells, in the (as far as possible) absence of other assimilable carbon. We show on pure cultures of the bacterium Pseudomonas azelaica that specific population growth can be measured with as little as 0.1 mg 2-hydroxybiphenyl per L, whereas in a mixed community 1 mg 2-hydroxybiphenyl per L still supported growth. Growth was also detected with a set of fragrance compounds dosed at 1-2 mg C per L into diluted activated sludge and freshwater lake communities at starting densities of 10^4 cells per mL. Yield approximations from the observed community growth were to some extent in agreement with standard OECD biodegradation test results for all except one of the examined compounds.
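As a rough check on why flow cytometry can resolve growth at such low substrate levels, one can estimate the expected cell yield from the added carbon. The sketch below is ours, not the authors'; the yield coefficient and per-cell carbon content are typical literature assumptions, not values from the study.

```python
# Hypothetical yield estimate: how many new cells can 1 mg C per L support?
# Assumed values (typical for heterotrophic bacteria, not from the study):
ADDED_CARBON_MG_PER_L = 1.0      # test compound dose, mg C per L
YIELD_C_TO_BIOMASS = 0.5         # g cell C formed per g substrate C consumed
CARBON_PER_CELL_FG = 30.0        # fg C per bacterial cell (~3e-14 g)

def expected_cell_increase(added_c_mg_per_l: float,
                           yield_coeff: float = YIELD_C_TO_BIOMASS,
                           c_per_cell_fg: float = CARBON_PER_CELL_FG) -> float:
    """Expected increase in cells per mL if the substrate is fully assimilated."""
    biomass_c_g_per_l = added_c_mg_per_l * 1e-3 * yield_coeff
    cells_per_l = biomass_c_g_per_l / (c_per_cell_fg * 1e-15)
    return cells_per_l / 1000.0  # convert cells per L to cells per mL

if __name__ == "__main__":
    growth = expected_cell_increase(ADDED_CARBON_MG_PER_L)
    print(f"~{growth:.1e} new cells per mL")  # ~1.7e7 under these assumptions
```

Under these assumptions, complete degradation of 1 mg C per L would yield on the order of 10^7 cells per mL, well above the 10^4 cells per mL starting density, which is consistent with community growth being detectable by direct cell counting.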

Relevance: 10.00%

Abstract:

In many animals, melanin-based coloration is strongly heritable and largely insensitive to the environment and body condition. According to the handicap principle, such a trait may not reveal individual quality, because the production of different melanin-based colorations often entails similar costs. However, a recent study showed that the production of eumelanin pigments requires relatively large amounts of calcium, potentially implying that melanin-based coloration is associated with physiological processes requiring calcium. If this is the case, eumelanism may be traded off against other metabolic processes that require the same element. We used a correlative approach to examine, for the first time, this proposition in the barn owl, a species in which individuals vary in the amount, size, and blackness of eumelanic spots. For this purpose, we measured the calcium concentration in the left humerus of 85 dead owls. Results showed that the humeri of heavily spotted individuals had a higher concentration of calcium. This suggests either that plumage spottiness signals the ability to absorb calcium from the diet for both eumelanin production and storage in bones, or that lightly spotted individuals use more calcium for metabolic processes at the expense of calcium storage in bones. Our study supports the idea that eumelanin-based coloration is associated with a number of physiological processes requiring calcium.

Relevance: 10.00%

Abstract:

Despite numerous discussions, workshops, reviews and reports on the responsible development of nanotechnology, information describing the health and environmental risks of engineered nanoparticles or nanomaterials is severely lacking and thus insufficient for completing a rigorous risk assessment of their use. However, since preliminary scientific evaluations indicate reasonable suspicion that activities involving nanomaterials might have damaging effects on human health, the precautionary principle must be applied. Public and private institutions as well as industries have the duty to adopt preventive and protective measures proportionate to the risk intensity and the desired level of protection. In this work, we present a practical, 'user-friendly' procedure for university-wide safety and health management of nanomaterials, developed as a multi-stakeholder effort (government, accident insurance, researchers and experts in occupational safety and health). The process starts with a schematic decision tree that classifies a nano laboratory into one of three hazard classes, similar to a control banding approach (from Nano 3, highest hazard, to Nano 1, lowest hazard). Classifying laboratories into risk classes would require considering actual or potential exposure to the nanomaterial as well as statistical data on the health effects of exposure. Because these data (as well as exposure limits for each individual material) are not available, risk classes could not be determined. For each hazard level we then provide a list of required risk mitigation measures (technical, organizational and personal). The target 'users' of this safety and health methodology are researchers and safety officers. They can rapidly identify the precautionary hazard class of their activities and the corresponding adequate safety and health measures. We succeeded in convincing scientists engaged in nano-activities that adequate safety measures and management promote innovation and discovery by ensuring a safe environment, even in the case of very novel products. The proposed measures are not seen as constraints but as a support to their research. This methodology is being implemented at the Ecole Polytechnique de Lausanne in over 100 research labs dealing with nanomaterials. In our opinion it would be useful to other research and academic institutions as well.
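The abstract does not reproduce the decision tree itself, but a control-banding classification of this kind is straightforward to encode. The sketch below is a hypothetical illustration: the branching criteria (dry powders, suspensions, matrix-bound materials) and the mitigation lists are our assumptions about what such a tree might ask, not the published scheme.

```python
from dataclasses import dataclass

@dataclass
class NanoActivity:
    """Hypothetical description of a lab's nanomaterial handling."""
    handles_dry_powders: bool   # free nanopowders, e.g. weighing, transfer
    handles_suspensions: bool   # nanoparticles dispersed in a liquid
    matrix_bound_only: bool     # nano-objects fixed in a solid matrix

def hazard_class(activity: NanoActivity) -> str:
    """Assign a precautionary hazard band (Nano 3 = highest, Nano 1 = lowest).

    Illustrative only: real control-banding trees also consider quantities,
    aerosolization potential, and the specific nanomaterial.
    """
    if activity.handles_dry_powders:
        return "Nano 3"   # highest hazard: airborne release is plausible
    if activity.handles_suspensions:
        return "Nano 2"   # intermediate: release possible via aerosols/spills
    return "Nano 1"       # lowest: material immobilized or absent

MITIGATION = {  # one required-measures list per band, abbreviated
    "Nano 3": ["closed system or glove box", "full PPE", "access control"],
    "Nano 2": ["fume hood or local exhaust", "gloves and lab coat"],
    "Nano 1": ["standard lab hygiene"],
}

lab = NanoActivity(handles_dry_powders=True,
                   handles_suspensions=False,
                   matrix_bound_only=False)
band = hazard_class(lab)
print(band, "->", MITIGATION[band])
```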

Relevance: 10.00%

Abstract:

BACKGROUND: A primary goal of clinical pharmacology is to understand the factors that determine the dose-effect relationship and to use this knowledge to individualize drug dose. METHODS: A principle-based criterion is proposed for deciding among alternative individualization methods. RESULTS: Safe and effective variability defines the maximum acceptable population variability in drug concentration around the population average. CONCLUSIONS: A decision on whether patient covariates alone are sufficient, or whether therapeutic drug monitoring in combination with target concentration intervention is needed, can be made by comparing the remaining population variability after a particular dosing method with the safe and effective variability.
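A minimal sketch of how such a criterion might be applied, assuming variability is summarized as a coefficient of variation (CV) of steady-state concentrations; the function name and the numbers are ours, for illustration only.

```python
def choose_dosing_method(cv_after_covariate_dosing: float,
                         cv_after_tdm_tci: float,
                         safe_effective_cv: float) -> str:
    """Pick the simplest dosing method whose remaining population
    variability stays within the safe and effective variability (SEV)."""
    if cv_after_covariate_dosing <= safe_effective_cv:
        return "covariate-based dosing is sufficient"
    if cv_after_tdm_tci <= safe_effective_cv:
        return "use TDM with target concentration intervention"
    return "no available method meets the SEV criterion"

# Illustrative numbers only (not from the paper):
print(choose_dosing_method(cv_after_covariate_dosing=0.45,
                           cv_after_tdm_tci=0.25,
                           safe_effective_cv=0.30))
```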

Relevance: 10.00%

Abstract:

While cell sorting usually relies on cell-surface protein markers, molecular beacons (MBs) offer the potential to sort cells based on the presence of any expressed mRNA, and in principle could be extremely useful for sorting rare cell populations from primary isolates. We show here how stem cells can be purified from mixed cell populations by sorting based on MBs. Specifically, we designed molecular beacons targeting Sox2, a well-known marker for murine embryonic stem (mES) cells and neural stem cells (NSCs). One of our designed molecular beacons displayed an increase in fluorescence compared to a nonspecific molecular beacon, both in vitro and in vivo, when tested in mES cells and NSCs. We sorted Sox2-MB(+)SSEA1(+) cells from a mixed population of 4-day retinoic acid-treated mES cells and effectively isolated live undifferentiated stem cells. Additionally, Sox2-MB(+) cells sorted from primary mouse brains generated neurospheres with higher efficiency than Sox2-MB(-) cells. These results demonstrate the utility of MBs for stem cell sorting in an mRNA-specific manner.

Relevance: 10.00%

Abstract:

On 21 January 2011, the Grand Chamber of the European Court of Human Rights delivered its judgment in the case of MSS v. Belgium and Greece. This judgment calls into question the practices followed by many national authorities in implementing the Dublin system. Particularly noteworthy are its effects on the "safety presumption" that Member States accord to each other in the field of asylum. The authors explore the implications of the MSS decision, first with regard to the evidentiary requirements imposed on asylum seekers to rebut the safety presumption. They come to the conclusion that through the decision a real paradigm shift has taken place: from the theoretical to the actual supremacy of the non-refoulement principle in Dublin matters. This is also true in light of the increased requirements imposed by the Court as regards the scope and depth of judicial review of transfer decisions. Moreover, the MSS judgment could give new impetus to the stalled reform process concerning the Dublin Regulation. Indeed, the Court's decision seems to enshrine in positive ECHR law the most progressive elements of the Commission's proposal, including procedural guarantees and, de facto, the mechanism for the temporary suspension of transfers to Member States not offering adequate protection.

Relevance: 10.00%

Abstract:

The Demographic Study of European Footballers is an annual publication intended for anyone who wishes to acquire a scientific understanding of the European football players' labour market. It presents the dynamics at work in 36 first-division leagues in UEFA member countries. This edition covers our biggest survey to date, comprising 528 clubs and 12,524 footballers. Statistical indicators relating to nine themes (morphology, age, experience, training, origin, etc.) allow the comparison of player profiles and squad compositions at league and club level. Through easily understandable regression analyses, the Study brings to light the principal differences between clubs and leagues according to the economic and sporting level of the championships. The final part presents the list of the most promising players under 23 years of age by league and position.

Relevance: 10.00%

Abstract:

Pontryagin's maximum principle from optimal control theory is used to find the optimal allocation of energy between growth and reproduction when lifespan may be finite and the trade-off between growth and reproduction is linear. Analyses of the optimal allocation problem to date have generally yielded bang-bang solutions, i.e. determinate growth: life histories in which growth is followed by reproduction, with no intermediate phase of simultaneous reproduction and growth. Here we show that an intermediate strategy (indeterminate growth) can be selected for if the rates of production and mortality either both increase or both decrease with increasing body size; this arises as a singular solution to the problem. Our conclusion is that indeterminate growth is optimal in more cases than was previously realized. The relevance of our results to natural situations is discussed.
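To make the structure of the argument concrete, here is a minimal formulation of this type of allocation problem in our own notation (a sketch, not necessarily the authors' exact model). Let u(t) in [0,1] be the fraction of production allocated to reproduction, w body size, P(w) the production rate, mu(w) the mortality rate, and l survivorship:

```latex
\max_{u(\cdot)}\; R = \int_0^T u\, P(w)\, \ell \,\mathrm{d}t,
\qquad \dot{w} = (1-u)\, P(w),
\qquad \dot{\ell} = -\mu(w)\, \ell .
```

The Hamiltonian H = u P l + lambda_w (1-u) P - lambda_l mu l is linear in the control, so the maximum principle ordinarily forces u = 0 or u = 1 (bang-bang, i.e. determinate growth). An intermediate allocation can only be optimal along a singular arc, where the switching function dH/du = P(w)(l - lambda_w) vanishes over an interval; that singular solution corresponds to the indeterminate growth discussed in the abstract.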

Relevance: 10.00%

Abstract:

Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the categories' prototypical characteristics to individuals. Hence, it is a theory of the individual that is intended to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications. In this thesis, we present a formal model of a part of self-categorization theory called the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to systematically describe the predictions of the theory from which it derives, especially in previously unexamined situations. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which rises as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional colour space, the model provides a filter that can attenuate noise, help detect objects, and simulate perception biases such as chromatic induction.
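As a concrete illustration of the metacontrast principle (our sketch, not the thesis's actual model): a candidate grouping scores highly when differences between categories are large relative to differences within them, which is commonly expressed as a metacontrast ratio.

```python
import itertools
import numpy as np

def metacontrast_ratio(positions: np.ndarray, labels: np.ndarray) -> float:
    """Mean inter-category distance divided by mean intra-category distance.

    positions: (n, d) array of individuals on d comparison dimensions.
    labels:    (n,) array of category assignments.
    Higher values indicate a partition with clearer category boundaries.
    """
    inter, intra = [], []
    for i, j in itertools.combinations(range(len(labels)), 2):
        d = float(np.linalg.norm(positions[i] - positions[j]))
        (intra if labels[i] == labels[j] else inter).append(d)
    if not inter or not intra:
        return float("inf") if not intra else 0.0
    return float(np.mean(inter) / np.mean(intra))

# Toy example: six individuals on one attitude dimension.
x = np.array([[0.0], [0.2], [0.4], [2.0], [2.1], [2.3]])
two_groups = np.array([0, 0, 0, 1, 1, 1])
one_group = np.array([0, 0, 0, 0, 0, 0])
print(metacontrast_ratio(x, two_groups))  # large ratio: clear partition
print(metacontrast_ratio(x, one_group))   # no inter-category pairs -> 0.0
```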

Relevance: 10.00%

Abstract:

Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and it remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time.

Within DEA, the two-stage model developed by Ray (1991) is the most convincing model that allows for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores; a sketch of this two-stage approach follows below. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to counter this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size to be reached in terms of pupils and teachers.

The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Child-rearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies, for instance pre-school, family, health, housing and benefits policies, in order to improve the conditions of disadvantaged children.
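Here is a minimal sketch of a Ray-style two-stage analysis under our own simplifying assumptions: an input-oriented, constant-returns DEA envelopment model solved per school, followed by an OLS regression of the efficiency scores on a multi-site dummy. The variable names and data are illustrative, not the thesis's.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Input-oriented CRS (CCR) DEA efficiency score for each unit.

    X: (n, m) inputs, Y: (n, s) outputs; returns theta in (0, 1].
    For unit o: min theta s.t. X'lam <= theta * x_o, Y'lam >= y_o, lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.zeros(n + 1)
        c[0] = 1.0                                     # minimize theta
        # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # outputs: -sum_j lam_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.x[0]
    return scores

rng = np.random.default_rng(0)
n = 30
inputs = rng.uniform(1, 10, size=(n, 2))               # e.g. staff, budget
multi_site = rng.integers(0, 2, size=n)                # environmental dummy
outputs = (inputs.sum(axis=1) * (1 - 0.15 * multi_site)
           * rng.uniform(0.7, 1.0, n)).reshape(n, 1)   # e.g. test results

theta = dea_input_efficiency(inputs, outputs)
# Second stage: explain efficiency with the environmental variable (OLS).
Z = np.column_stack([np.ones(n), multi_site])
beta, *_ = np.linalg.lstsq(Z, theta, rcond=None)
print(f"multi-site effect on efficiency: {beta[1]:+.3f}")
```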

Relevance: 10.00%

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics, first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the proposed model from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether two distributions are indeed different, which, combined with visual inspection, allowed us to demonstrate that the proposed way of aggregating performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds of the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
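A small sketch of the dominance checks described in Chapter 2, under our own assumptions about how the curves are computed empirically (the series names and simulated data are illustrative only):

```python
import numpy as np
from scipy import stats

def absolute_lorenz(returns: np.ndarray) -> np.ndarray:
    """Empirical absolute (generalized) Lorenz curve: cumulative sums of the
    sorted returns divided by n; pointwise comparison of these curves is the
    second-order stochastic dominance test described above."""
    r = np.sort(returns)
    return np.cumsum(r) / len(r)

def second_order_dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if a's absolute Lorenz curve lies weakly above b's pointwise.

    For simplicity this assumes equal sample sizes."""
    return bool(np.all(absolute_lorenz(a) >= absolute_lorenz(b)))

rng = np.random.default_rng(42)
aggregated = rng.normal(0.006, 0.03, 1000)  # returns, aggregated measures
single = rng.normal(0.004, 0.05, 1000)      # returns, one single measure

# First: are the two distributions different at all? (Kolmogorov-Smirnov)
print(stats.ks_2samp(aggregated, single))
# Second: pointwise absolute-Lorenz comparison for 2nd-order dominance.
print("SSD:", second_order_dominates(aggregated, single))
```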

Relevance: 10.00%

Abstract:

The likelihood of significant exposure to drugs in infants through breast milk is poorly defined, given the difficulties of conducting pharmacokinetic (PK) studies. Using fluoxetine (FX) as an example, we conducted a proof-of-principle study applying population PK (popPK) modeling and simulation to estimate drug exposure in infants through breast milk. We simulated data for 1,000 mother-infant pairs, assuming conservatively that FX clearance in an infant is 20% of the allometrically adjusted adult value. The model-generated estimate of the milk-to-plasma ratio for FX (mean: 0.59) was consistent with those reported in other studies. The median infant-to-mother ratio of FX steady-state plasma concentrations predicted by the simulation was 8.5%. Although the disposition of the active metabolite, norfluoxetine, could not be modeled, popPK-informed simulation may be valid for other drugs, particularly those without active metabolites, thereby providing a practical alternative to conventional PK studies for exposure risk assessment in this population.
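To make the scaling assumption concrete, here is a rough sketch of the kind of calculation involved (not the study's popPK model). The allometric exponent of 0.75, the milk intake of 150 mL/kg/day, and the adult clearance and maternal concentration values are standard textbook assumptions of ours, for illustration only; the M/P ratio of 0.59 is taken from the abstract.

```python
# Hypothetical infant-exposure sketch via allometric scaling (not the
# study's popPK model; parameter values are illustrative assumptions).
ADULT_CL_L_PER_H = 10.0        # assumed adult fluoxetine clearance
ADULT_WT_KG = 70.0
INFANT_WT_KG = 4.0
MATURATION_FACTOR = 0.20       # the study's conservative 20% assumption
MILK_TO_PLASMA = 0.59          # model-generated M/P ratio from the abstract
MILK_INTAKE_L_PER_KG_DAY = 0.150
MOTHER_CSS_MG_PER_L = 0.1      # assumed maternal steady-state concentration

# Allometric scaling with the usual 0.75 exponent, then maturation scaling.
infant_cl = (ADULT_CL_L_PER_H * (INFANT_WT_KG / ADULT_WT_KG) ** 0.75
             * MATURATION_FACTOR)

# Daily drug dose the infant receives through milk (mg/day).
milk_conc = MOTHER_CSS_MG_PER_L * MILK_TO_PLASMA
infant_dose_per_day = milk_conc * MILK_INTAKE_L_PER_KG_DAY * INFANT_WT_KG

# Infant steady-state concentration = dosing rate / clearance.
infant_css = (infant_dose_per_day / 24.0) / infant_cl
print(f"infant/mother plasma ratio: {infant_css / MOTHER_CSS_MG_PER_L:.1%}")
```

Under these assumptions the crude calculation gives a ratio of about 6%, the same order of magnitude as the 8.5% median predicted by the popPK simulation.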

Relevance: 10.00%

Abstract:

Surgical tumor removal is often the treatment of choice in patients with head and neck squamous cell carcinoma. Depending on the extent of tumor resection, large defects are often produced in the individual head and neck regions, necessitating reconstructive surgery to avoid further functional impairment. In principle, the choice of reconstruction depends on the size and location of the defect, the aesthetic importance of the region, and the functional significance of the area to be replaced. Reconstructive free flap procedures are particularly challenging in patients who have undergone radiotherapy or who exhibit vessel depletion in the neck due to multiple previous surgical interventions. To ensure the best possible outcome of surgical oncological treatment under these difficult circumstances, this paper discusses the important factors and variables that can increase the success rate of microvascular grafts in irradiated or multiply resected patients.

Relevance: 10.00%

Abstract:

In Switzerland, the land management regime is characterized by a liberal attitude towards the institution of property rights, which is guaranteed by the Constitution. Under the present Swiss constitutional arrangement, authorities (municipalities) are required to take landowners' interests into account when implementing their spatial planning policy. In other words, the institution of property rights cannot easily be restricted in order to implement zoning plans and planning projects. This situation causes many problems. One of them is the gap between the way land is actually used by landowners and the way it should be used according to zoning plans. In fact, zoning plans only describe how landowners should use their property; there is no sufficient provision for handling cases where the use does not conform to the zoning plan. In particular, landowners may not be expropriated for a non-conforming use of their land. This situation often leads to the opening of new building areas in greenfields, and to urban sprawl, in contradiction with the goals set in the Federal Law on Spatial Planning. In order to identify legal strategies of intervention to solve the problem, our paper is structured in three main parts. First, we give a short description of the Swiss land management regime. Then, we focus on an innovative land management approach designed to implement zoning plans in accordance with property rights. Finally, we present a case study that shows the usefulness of the presented land management approach in practice. We develop three main results. First, the land management approach provides a mechanism for involving landowners in planning projects; the principle of coordination between spatial planning goals and landowners' interests is the cornerstone of the whole process. Second, land use is improved in terms of both space and time. Finally, the institution of property rights is not challenged, since there is no expropriation and the market stays free.