961 results for Precautionary principle
Abstract:
Voluntariness is the premise of any analysis of mediation. Among its various manifestations, the freedom to enter the procedure and the possibility of withdrawing from it at any time stand out. However, with regard to its first manifestation (the freedom to enter mediation), the principle of voluntariness seems to admit certain modulations, owing, on the one hand, to the legal requirement in some cases that mediation be used as a precondition for court proceedings (mandatory mediation) and, on the other, to the contested admissibility of so-called "mediation clauses". The study focuses on the latter. In particular, it analyses the debate these clauses have generated, their effects, both ordinary (arising from their performance) and extraordinary (a consequence of their breach), and the question of their possible content.
Abstract:
BACKGROUND: A primary goal of clinical pharmacology is to understand the factors that determine the dose-effect relationship and to use this knowledge to individualize drug dose. METHODS: A principle-based criterion is proposed for deciding among alternative individualization methods. RESULTS: Safe and effective variability defines the maximum acceptable population variability in drug concentration around the population average. CONCLUSIONS: A decision on whether patient covariates alone are sufficient, or whether therapeutic drug monitoring in combination with target concentration intervention is needed, can be made by comparing the remaining population variability after a particular dosing method with the safe and effective variability.
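The decision rule in the conclusion can be sketched as a simple comparison. A minimal sketch follows; the function name, variable names and numeric values are illustrative assumptions, not taken from the paper:

```python
def choose_individualization(sd_after_covariates: float,
                             sd_after_tdm_tci: float,
                             safe_effective_sd: float) -> str:
    """Pick a dose-individualization method by comparing the population
    variability remaining after each method with the safe and effective
    variability (the maximum acceptable spread around the population
    average concentration)."""
    if sd_after_covariates <= safe_effective_sd:
        # Covariate-based dosing already keeps variability acceptable.
        return "covariate dosing"
    if sd_after_tdm_tci <= safe_effective_sd:
        # Feedback via therapeutic drug monitoring plus target
        # concentration intervention is needed.
        return "TDM + target concentration intervention"
    return "no method achieves safe and effective variability"

print(choose_individualization(0.45, 0.20, 0.30))
```

The ordering encodes the criterion's parsimony: the cheaper covariate-only method is preferred whenever it alone brings variability within the safe and effective bound.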
Abstract:
While cell sorting usually relies on cell-surface protein markers, molecular beacons (MBs) offer the potential to sort cells based on the presence of any expressed mRNA and in principle could be extremely useful to sort rare cell populations from primary isolates. We show here how stem cells can be purified from mixed cell populations by sorting based on MBs. Specifically, we designed molecular beacons targeting Sox2, a well-known stem cell marker for murine embryonic (mES) and neural stem cells (NSC). One of our designed molecular beacons displayed an increase in fluorescence compared to a nonspecific molecular beacon both in vitro and in vivo when tested in mES and NSCs. We sorted Sox2-MB(+)SSEA1(+) cells from a mixed population of 4-day retinoic acid-treated mES cells and effectively isolated live undifferentiated stem cells. Additionally, Sox2-MB(+) cells isolated from primary mouse brains were sorted and generated neurospheres with higher efficiency than Sox2-MB(-) cells. These results demonstrate the utility of MBs for stem cell sorting in an mRNA-specific manner.
Abstract:
On 21 January 2011, the Grand Chamber of the European Court of Human Rights delivered its judgment in the case of MSS v. Belgium and Greece. This judgment calls into question the practices followed by many national authorities in implementing the Dublin system. Particularly noteworthy are the effects on the "safety presumption" that Member States accord to each other in the field of asylum. The authors explore the implications of the MSS decision, first, with regard to the evidentiary requirements imposed on asylum seekers to rebut the safety presumption. They come to the conclusion that through the decision a real paradigm shift has taken place, from the theoretical to the actual supremacy of the non-refoulement principle in Dublin matters. This is also true in light of the increased requirements imposed by the Court as regards the scope and depth of judicial review of transfer decisions. Moreover, the MSS judgment could give new impetus to the stalled reform process concerning the Dublin Regulation. Indeed, the Court's decision seems to enshrine in positive ECHR law the most progressive elements of the Commission's proposal, including procedural guarantees and, de facto, the mechanism for the temporary suspension of transfers to Member States not offering adequate protection.
Abstract:
We work out a semiclassical theory of shot noise in ballistic n+-i-n+ semiconductor structures, aiming to study two fundamental physical correlations arising from the Pauli exclusion principle and the long-range Coulomb interaction. The theory provides a unifying scheme which, in addition to the current-voltage characteristics, describes the suppression of shot noise due to Pauli and Coulomb correlations over the whole range of system parameters and applied bias. The overall scenario is summarized by a phase diagram in the plane of two dimensionless variables related to the sample length and the contact chemical potential. Different regions of physical interest can be identified where only Coulomb or only Pauli correlations are active, or where both are present with different relevance. The predictions of the theory are fully corroborated by Monte Carlo simulations.
Abstract:
Ni(II)-Fe(II)-Fe(III) layered double hydroxide (LDH) or Ni-containing sulfate green rust (GR2) samples were prepared from Ni(II), Fe(II) and Fe(III) sulfate salts and analyzed by X-ray diffraction. Nickel is readily incorporated into the GR2 structure and forms a solid solution between GR2 and a Ni(II)-Fe(III) LDH. There is a correlation between the unit-cell a-value and the fraction of Ni(II) incorporated into the Ni(II)-GR2 structure. Since there is strong evidence that the divalent/trivalent cation ratio in GR2 is fixed at 2, it is in principle possible to determine the extent of divalent cation substitution for Fe(II) in GR2 from the unit-cell a-value. Oxidation forms a mixture of minerals, but the LDH structure is retained if at least 20% of the divalent cations in the initial solution are Ni(II). It appears that Ni(II) is incorporated in a stable LDH structure. This may be important for two reasons: first, for understanding the formation of LDHs, which are anion exchangers, in the natural environment; second, for understanding the fate of transition metals in the environment, particularly in the presence of reduced Fe compounds.
Abstract:
This article designs what it calls a Credit-Risk Balance Sheet (the risk being that of default by customers), a tool which, in principle, can contribute to revealing, controlling and managing the bad-debt risk arising from a company's commercial credit, whose amount can represent a significant proportion of both its current and total assets. To construct it, we start from the duality observed in any credit transaction of this nature, whose basic identity can be summed up as Credit = Risk. "Credit" is granted by a company to its customer, and can be ranked by quality (we suggest the credit scoring system), and "risk" can either be assumed (interiorised) by the company itself or transferred to third parties (exteriorised). What allows us to talk with confidence of a real Credit-Risk Balance Sheet, with its methodological robustness, is that the dual vision of the credit transaction is not, as we demonstrate, merely a classificatory duality (a double risk-credit classification of reality) but rather a true causal relationship, that is, a risk-credit causal duality. Once said Credit-Risk Balance Sheet (which bears a certain structural similarity to the classic net asset balance sheet) has been built, and its methodological coherence demonstrated, its properties, static and dynamic, are studied. Analysis of the temporal evolution of the Credit-Risk Balance Sheet and of its applications will be the object of subsequent works.
Abstract:
In the analysis of equilibrium policies in a differential game, if agents have different time preference rates, the cooperative (Pareto optimum) solution obtained by applying Pontryagin's Maximum Principle becomes time inconsistent. In this work we derive a set of dynamic programming equations (in discrete and continuous time) whose solutions are time-consistent equilibrium rules for N-player cooperative differential games in which agents differ in their instantaneous utility functions and also in their discount rates of time preference. The results are applied to the study of a cake-eating problem describing the management of a common-property exhaustible natural resource. The extension of the results to a simple common-property renewable natural resource model in infinite horizon is also discussed.
Abstract:
This guide was created to aid communities in the process of smart planning and is organized around the 10 Smart Planning Principles signed into Iowa law in 2010. A general description of the concept, strategies for encouraging use, policy tools for implementation, and a current Iowa example are presented for each Principle. In addition, a brief list of resources is provided to help local governments, community organizations and citizen planners find information and ideas on community involvement and on incorporating smart planning concepts into everyday decisions.
Abstract:
A global existence and uniqueness result for the solution of multidimensional, time-dependent stochastic differential equations driven by a fractional Brownian motion with Hurst parameter H > 1/2 is proved. It is also shown that the solution has finite moments. The result is based on a deterministic existence and uniqueness theorem whose proof uses a contraction principle and a priori estimates.
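The class of equations in question is presumably of the standard form below (a sketch; the notation and the pathwise reading of the integral are the usual ones for this setting, with H > 1/2, not details taken from the abstract):

```latex
% Multidimensional, time-dependent SDE driven by a fractional
% Brownian motion B^H with Hurst parameter H > 1/2:
\[
  X_t = x_0 + \int_0^t b(s, X_s)\,\mathrm{d}s
            + \int_0^t \sigma(s, X_s)\,\mathrm{d}B^H_s,
  \qquad t \in [0, T],\quad X_t \in \mathbb{R}^d,
\]
% with the second integral understood pathwise (Riemann-Stieltjes /
% Young sense), which is what makes a deterministic contraction
% argument with a priori estimates applicable.
```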
Abstract:
The Demographic Study of European Footballers is an annual publication intended for anyone who wishes to acquire a scientific understanding of the European football players' labour market. It presents the dynamics at work in 36 first-division leagues in UEFA member countries. This edition covers our biggest survey yet, comprising 528 clubs and 12,524 footballers. Statistical indicators relating to nine themes (morphology, age, experience, training, origin, etc.) allow the comparison of player profiles and squad compositions at league and club level. Through easily understandable regression analyses, the Study brings to light the principal differences between clubs and leagues according to the economic and sporting level of the championships. The final part presents the list of the most promising players under 23 years of age by league and position.
Abstract:
Pontryagin's maximum principle from optimal control theory is used to find the optimal allocation of energy between growth and reproduction when lifespan may be finite and the trade-off between growth and reproduction is linear. Analyses of the optimal allocation problem to date have generally yielded bang-bang solutions, i.e. determinate growth: life histories in which growth is followed by reproduction, with no intermediate phase of simultaneous reproduction and growth. Here we show that an intermediate strategy (indeterminate growth) can be selected for if the rates of production and mortality either both increase or both decrease with increasing body size; this arises as a singular solution to the problem. Our conclusion is that indeterminate growth is optimal in more cases than was previously realized. The relevance of our results to natural situations is discussed.
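A minimal form of the allocation problem described (the notation here is an illustrative assumption, not the paper's) is the optimal control problem:

```latex
% W(t): body size;  u(t) in [0,1]: fraction of production P(W)
% allocated to growth;  mu(W): size-dependent mortality;  T: lifespan.
\[
  \max_{u(\cdot)} \int_0^T \bigl(1 - u(t)\bigr)\,P\bigl(W(t)\bigr)\,
      \exp\!\Bigl(-\!\int_0^t \mu\bigl(W(s)\bigr)\,\mathrm{d}s\Bigr)\,\mathrm{d}t
  \quad\text{subject to}\quad
  \dot W(t) = u(t)\,P\bigl(W(t)\bigr).
\]
```

Because the objective and dynamics are linear in u, the Hamiltonian is linear in the control, so the maximum principle generically yields bang-bang control (u = 1 then u = 0: grow, then reproduce); a singular arc with 0 < u < 1, i.e. indeterminate growth, can occur where the switching function vanishes over an interval, which is the case the abstract identifies.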
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the categories' prototypical characteristics to the individuals. Hence, it is a theory of the individual that is intended to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications. In this thesis, we present a formal model of a part of self-categorization theory called the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates the categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to systematically describe the predictions of the theory from which it is derived, especially in previously unencountered situations. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed types of convergence depend little on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. The prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if the set of pixels of an image is considered as individuals in a three-dimensional colour space, the model provides a filter that makes it possible to reduce noise, to help detect objects and to simulate perception biases such as chromatic induction.
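One common formalization of the metacontrast principle scores each candidate prototype by the ratio of its mean distance to non-members over its mean distance to fellow category members. The sketch below uses that ratio in one dimension as an illustrative assumption; it is not the thesis's actual model:

```python
def metacontrast_ratio(positions, members, i):
    """Mean distance from individual i to non-members, divided by its
    mean distance to the other members of the candidate category."""
    inside = [abs(positions[i] - positions[j]) for j in members if j != i]
    outside = [abs(positions[i] - positions[j])
               for j in range(len(positions)) if j not in members]
    return (sum(outside) / len(outside)) / (sum(inside) / len(inside))

def prototype(positions, members):
    """The prototype is the member that maximizes the metacontrast ratio."""
    return max(members, key=lambda i: metacontrast_ratio(positions, members, i))

# Symmetric case: the prototype of {1, 2, 3} facing {10, 11, 12} is central.
print(prototype([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], {0, 1, 2}))  # -> 1
# Asymmetric case: facing an outgroup at 10, the prototype of {4, 6}
# shifts away from the outgroup (index 0, value 4), not to the mean.
print(prototype([4.0, 6.0, 10.0], {0, 1}))  # -> 0
```

The second example illustrates the point made above: prototypes accentuate what distinguishes the group from outsiders, so they need not be central.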
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and it remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school's operations are spread over multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached.
The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis, for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. To define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions of disadvantaged children.
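The two-stage approach attributed to Ray (1991), DEA scores first and then a regression on environmental variables, might be sketched as follows. The input-oriented constant-returns (CCR) envelopment model and the tiny school data set below are illustrative assumptions, not the thesis's data:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (m, n) inputs, Y: (s, n) outputs for n decision-making units.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)            # decision vector z = [theta, lam]
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]         # X @ lam - theta * x_o <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y              # -Y @ lam <= -y_o
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Stage 1: efficiency scores for three hypothetical schools.
X = np.array([[1.0, 2.0, 1.5]])    # one input (e.g. spending per pupil)
Y = np.array([[1.0, 1.0, 1.5]])    # one output (e.g. attainment)
scores = np.array([ccr_input_efficiency(X, Y, o) for o in range(3)])

# Stage 2: explain the scores with an environmental variable
# (1 = school operates on multiple sites, 0 = single site).
multi_site = np.array([0.0, 1.0, 0.0])
design = np.column_stack([np.ones(3), multi_site])
coef, *_ = np.linalg.lstsq(design, scores, rcond=None)
print(scores.round(3), coef.round(3))
```

In this toy data the multi-site dummy gets a negative coefficient, mirroring the thesis's finding that operating on several sites depresses measured efficiency; a real second stage would use a censored (e.g. Tobit) or truncated regression rather than OLS, since DEA scores are bounded.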
Abstract:
In this paper we examine in detail the implementation, with its associated difficulties, of the Killing conditions and gauge fixing into the variational principle formulation of Bianchi-type cosmologies. We address problems raised in the literature concerning the Lagrangian and the Hamiltonian formulations: We prove their equivalence, make clear the role of the homogeneity preserving diffeomorphisms in the phase space approach, and show that the number of physical degrees of freedom is the same in the Hamiltonian and Lagrangian formulations. Residual gauge transformations play an important role in our approach, and we suggest that Poincaré transformations for special relativistic systems can be understood as residual gauge transformations. In the Appendixes, we give the general computation of the equations of motion and the Lagrangian for any Bianchi-type vacuum metric and for spatially homogeneous Maxwell fields in a nondynamical background (with zero currents). We also illustrate our counting of degrees of freedom in an appendix.