907 results for Triple Consistency Principle
Abstract:
The precautionary principle has the potential to act as a valuable tool in food law. It operates in areas of scientific uncertainty, calling for protective measures where there are potential threats to human health (or the environment). However, the manner of the principle's incorporation and implementation within legislation is key to its effectiveness and general legitimacy. Specific considerations include the role and nature of risk assessments, assessors, sources of evidence, divergent opinions, risk communication, other legitimate factors and the weighting of interests. More fundamentally, however, the crystallisation of approaches and the removal of all flexibility would undermine the principle's central tenets. Firstly, principles crucially play a guiding and interpretative role. Secondly, reflexive modernisation and continuing scientific uncertainty call for the precautionary principle's continued application: precautionary measures do not end the principle's relevance. This can be partially achieved through legislation designed to facilitate later precautionary measures, e.g. through temporary authorisations, derogations and safeguard clauses. Crucially, however, it also requires that the legislation be interpreted in light of the precautionary principle. This paper investigates the logic behind the Court of Justice of the EU's judgments and the circumstances that enable or deter the Court in taking, or permitting, stronger precautionary approaches. Although the judgments appear inconsistent, a number of contextual factors, including the legislative provisions and the actors involved, influence them substantially. The analysis provides insight into improving the principle's incorporation so as to facilitate its continued application and the maintenance of flexibility, whilst bearing in mind the general desirability of objectivity and legal certainty.
Abstract:
BACKGROUND: The value of adjuvant radiotherapy in triple negative breast cancer (TNBC) remains unclear. A systematic review and meta-analysis was conducted in TNBC patients to assess survival and recurrence outcomes associated with radiotherapy following either breast conserving therapy (BCT) or post-mastectomy radiotherapy (PMRT). METHODS: Four electronic databases were searched from January 2000 to November 2015 (PubMed, MEDLINE, EMBASE and Web of Science). Studies investigating overall survival and/or recurrence in TNBC patients according to radiotherapy administration were included. A random effects meta-analysis was conducted using mastectomy-only patients as the reference. RESULTS: Twelve studies were included. The pooled hazard ratio (HR) for locoregional recurrence comparing BCT and PMRT to mastectomy only was 0.61 (95% confidence interval [CI] 0.41-0.90) and 0.62 (95% CI 0.44-0.86), respectively. Adjuvant radiotherapy was not significantly associated with distant recurrence. The pooled HRs for overall survival comparing BCT and PMRT to mastectomy only were 0.57 (95% CI 0.36-0.88) and 1.12 (95% CI 0.75-1.69), respectively. Comparing PMRT to mastectomy only, tests for interaction were not significant for stage (p=0.98) or age at diagnosis (p=0.85). However, overall survival was improved in patients with late-stage disease (T3-4, N2-3), pooled HR 0.53 (95% CI 0.32-0.86), and in women <40 years, pooled HR 0.30 (95% CI 0.11-0.82). CONCLUSIONS: Adjuvant radiotherapy was associated with a significantly lower risk of locoregional recurrence in TNBC patients, irrespective of the type of surgery. While radiotherapy was not consistently associated with an overall survival gain, benefits may be obtained in women with late-stage disease and in younger patients.
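The random effects pooling reported above can be illustrated with a short sketch. Below is a minimal DerSimonian-Laird implementation operating on log hazard ratios, with standard errors recovered from the reported 95% confidence intervals; the study-level numbers are hypothetical, not the paper's data.

```python
import numpy as np

def pool_hazard_ratios(hrs, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of log hazard ratios.

    Standard errors are recovered from the 95% CIs:
    se = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    y = np.log(hrs)                                  # study log-HRs
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                  # inverse-variance weights
    y_fixed = np.sum(w * y) / np.sum(w)              # fixed-effect estimate
    q = np.sum(w * (y - y_fixed)**2)                 # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(y_re), (np.exp(y_re - 1.96 * se_re),
                          np.exp(y_re + 1.96 * se_re))

# Hypothetical study-level HRs (illustration only, not the paper's data):
hr, ci = pool_hazard_ratios(np.array([0.55, 0.70, 0.62]),
                            np.array([0.35, 0.48, 0.40]),
                            np.array([0.86, 1.02, 0.96]))
print(f"pooled HR {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```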
Abstract:
This paper examines the nature of monetary policy decisions in Mexico using discrete choice models applied to the central bank's explicit monetary policy instrument. We find that monetary policy adjustments in Mexico have been strongly consistent with the central bank's inflation targeting strategy. We also find evidence that monetary policy responds in a forward-looking manner to deviations of inflation from the target, and that observed policy adjustments exhibit asymmetric features: responses are stronger to positive than to negative deviations of inflation from the target, and policy persistence is more likely during periods of tightening than during periods of loosening.
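A hedged sketch of the kind of discrete choice specification described above, assuming an ordered probit over three policy decisions (loosen / hold / tighten) fitted with statsmodels' OrderedModel, with the asymmetric response captured by splitting the inflation gap into positive and negative parts. The data are simulated and the variable names are illustrative assumptions; the paper's actual specification may differ.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
infl_gap = rng.normal(0.0, 1.0, n)        # expected inflation minus target
pos_gap = np.maximum(infl_gap, 0.0)       # positive deviations
neg_gap = np.minimum(infl_gap, 0.0)       # negative deviations
# Simulated asymmetry: positive deviations move the latent index more
latent = 1.2 * pos_gap + 0.5 * neg_gap + rng.normal(0.0, 1.0, n)
decision = pd.Series(pd.cut(latent, [-np.inf, -0.5, 0.5, np.inf],
                            labels=["loosen", "hold", "tighten"]))

X = pd.DataFrame({"pos_gap": pos_gap, "neg_gap": neg_gap})
res = OrderedModel(decision, X, distr="probit").fit(method="bfgs", disp=False)
# Unequal coefficients on pos_gap and neg_gap indicate an asymmetric response
print(res.params)
```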
Abstract:
In R v McNally, gender deception was found capable of vitiating consent to sexual intercourse, thereby restricting the freedom of transgender individuals in favour of cisgender freedom. This paper challenges the standing of this decision by adopting a combined methodological approach drawing on Deleuzian post-structuralism and Gewirthian legal idealism. We attempt to show that the combination offers a novel and productive approach to contentious decisions such as that in McNally. Our approach brings together post-structuralist corporeality, which conceives of the body as material and productive, and Gewirth's 'agent' to conceptualise the legal body as an entity which can, and should, shape judicial reasoning. It does this by imposing the criterion of categorically necessary freedom on institutionalised practical reasoning. These 'bodies of agents' can be conceived as the underpinning and justificatory basis for the authority of law, subject to the morally rational Principle of Generic Consistency. This egalitarian condition precedent requires individualisation and the ability to accept self-differentiation in order to return to a status which can validly be described as 'law'. Ultimately, we argue that this theoretical combination responds to a call to problematise the connection made between gender discourse and judicial reasoning, whilst offering the opportunity to further our conceptions of law and broaden the theoretical armoury with which to challenge the judicial reasoning in McNally: a 'good faith' attempt to further and guarantee transgender freedoms.
Abstract:
The law regulating the availability of abortion is problematic both legally and morally. It is dogmatic in its requirements of women and doctors and ignores would-be fathers. In practice, its usage is liberal: with s1(1)(a) of the Abortion Act 1967 treated as a 'catch all' ground, it allows abortion on demand. Yet this is not reflected in the 'law'. Against this outdated legislation I propose a model of autonomy which seeks to tether our moral concerns to a new legal approach to abortion. I do so by maintaining that a legal conception of autonomy is derivable from the categorical imperative resulting from Gewirth's argument to the Principle of Generic Consistency: act in accordance with the generic rights of your recipients as well as of yourself. This model of Gewirthian Rational Autonomy, I suggest, provides a guide for both public and private notions of autonomy and for how our autonomous interests can be balanced across social structures in order to legitimately empower choice. I claim, ultimately, that the relevant rights in the context of abortion are derivable from this model.
Abstract:
The paper concerns the moral status of persons for the purposes of rights-holding and duty-bearing. Developing Gewirth's argument to the Principle of Generic Consistency (PGC) and Beyleveld et al.'s Principle of Precautionary Reasoning, I argue in favour of a capacity-based assessment of the task competencies required for choice-rights and certain duties (within the Hohfeldian analytic). Unlike other, traditional, theories of rights, I claim that precautionary reasoning as to agentic status provides the basic justification for rights-holding. If this is the basis for generic legal rights, then the contingent argument must be used to explain communities of rights. In much the same way as two 'normal' adult agents may not have an equal right to be an aeroplane pilot, not all adults hold the same task competencies in relation to the exercise of the generic rights to freedom derived from the PGC. In this paper, I consider the rights held by children, persons suffering from mental illness, and generic 'full' agents. By mapping the developing 'portfolio' of rights and duties that a person carries during their life, we might better understand the legal relations of those who do not ostensibly fulfil the criteria of a 'full' agent.
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. The computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If the required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids with appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken with the grid structure in order to keep the truncation error under a given accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
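As a concrete illustration of the interpolating subdivision step that defines the wavelet coefficients, here is a minimal sketch of the four-point (cubic Deslauriers-Dubuc) midpoint prediction, with the interpolation error playing the role of the wavelet coefficient. The edge-padding boundary treatment and the test function are simplifying assumptions for illustration, not the paper's setup.

```python
import numpy as np

def dd4_midpoints(coarse):
    """Predict midpoints from 4 coarse neighbours with weights
    (-1/16, 9/16, 9/16, -1/16); edge padding handles the boundary."""
    p = np.pad(coarse, 1, mode="edge")
    return (-p[:-3] + 9.0 * p[1:-2] + 9.0 * p[2:-1] - p[3:]) / 16.0

# Test function: smooth except for a sharp front near x = 0.5
x = np.linspace(0.0, 1.0, 129)
f = np.tanh(40.0 * (x - 0.5))
coarse, fine_odd = f[::2], f[1::2]

# Wavelet coefficients = interpolation errors of the subdivision prediction
d = fine_odd - dd4_midpoints(coarse)
significant = np.abs(d) > 1e-4            # SPR retains these points
print(f"{significant.sum()} of {d.size} detail points retained, near the front")
```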
Abstract:
A new general fitting method based on the Self-Similar (SS) organization of random sequences is presented. The proposed analytical function helps to fit the response of many complex systems when their recorded data form a self-similar curve. The verified SS principle opens new possibilities for fitting economic, meteorological and other complex data when a mathematical model is absent but a reduced description in terms of some universal set of fitting parameters is necessary. The fitting function is verified on economic data (the price of a commodity versus time) and weather data (the Earth's mean surface temperature versus time); for these nontrivial cases it proves possible to obtain a very good fit of the initial data set. The general conditions of application of this fitting method to the response of many complex systems, and its forecasting possibilities, are discussed.
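The abstract does not reproduce the proposed analytical fitting function, so the sketch below substitutes a simple self-similar scaling law, f(t) = A*t^H + C, fitted with scipy.optimize.curve_fit to synthetic "price versus time" data; it illustrates the fitting workflow only, not the paper's actual function.

```python
import numpy as np
from scipy.optimize import curve_fit

def ss_law(t, A, H, C):
    """Stand-in self-similar scaling law: f(t) = A * t**H + C."""
    return A * np.power(t, H) + C

rng = np.random.default_rng(1)
t = np.linspace(1.0, 100.0, 300)
data = 2.0 * t**0.6 + 5.0 + rng.normal(0.0, 0.5, t.size)  # synthetic series

(A, H, C), _ = curve_fit(ss_law, t, data, p0=(1.0, 0.5, 0.0))
print(f"A={A:.2f}, H={H:.2f} (scaling exponent), C={C:.2f}")
```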
Abstract:
Dissertation submitted to obtain the degree of Doctor in Informatics Engineering
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obfuscating distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far this was typically based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at an average event-related potential (ERP) level. Methods: Nine healthy subjects participated to the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. On each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject including single-trial of the two conditions, living and man-made. The analysis comprised two steps. In the first part, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. Similar analysis was conducted at group-level. Results: Results show that the occurrence of each map is structured in time and consistent across trials both at the single-subject and at group level. Conducting separate analyses of ERPs at single-subject and group levels, we could quantify the consistency of identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analysis at average ERP level. Conclusions: This novel approach to single-trial analysis promises to have impact on several domains. In clinical research, it gives the possibility to statistically evaluate single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding behaviour and brain activity interdependencies at both single-subject and at group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of its generation and inter-individual variability.
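A minimal sketch of the two-step procedure described above: a mixture of Gaussians clusters 64-channel scalp maps into template topographies, then the per-template posterior probabilities are reshaped so that their structure across trials and time can be examined. The data are simulated and all shapes and names are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
n_trials, n_times, n_channels, n_maps = 120, 50, 64, 5
# Stack all single-trial scalp maps: one 64-channel vector per trial x timepoint
maps = rng.normal(size=(n_trials * n_times, n_channels))

# Step 1: mixture of Gaussians yields representative template topographies
gmm = GaussianMixture(n_components=n_maps, covariance_type="diag",
                      random_state=0).fit(maps)

# Step 2: conditional (posterior) probabilities of each template,
# reshaped so their structure across trials and time can be tested
post = gmm.predict_proba(maps).reshape(n_trials, n_times, n_maps)
occurrence = post.mean(axis=0)            # mean map occurrence over time
print(occurrence.argmax(axis=1))          # dominant template at each timepoint
```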
Abstract:
The aim of this study is to examine the factor structure and internal consistency of the TAS-20 in a sample of adolescents (n = 264), and to describe the distribution of alexithymic characteristics in this sample. The three-factor structure of the TAS-20 was confirmed by our confirmatory factor analysis. Internal consistency, measured with Cronbach's alpha, is acceptable for the first factor (difficulty identifying feelings (DIF)), good for the second (difficulty describing feelings (DDF)), but weak for the third factor (externally oriented thinking (EOT)). The results of an ANOVA reveal a linear trend: as age increases, the overall level of alexithymia (TAS-20 total score), difficulty identifying feelings, and externally oriented thinking decrease. Regarding the prevalence of alexithymia, 38.5% of adolescents under 16 are classified as alexithymic, compared with 30.1% of 16-17 year-olds and 22% of those over 17. Our study therefore indicates that the TAS-20 is an adequate instrument for assessing alexithymia in adolescence, while suggesting some caution given the developmental nature of this period.
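As an illustration of the internal-consistency statistic reported above, here is a minimal Cronbach's alpha computation per TAS-20 factor. The item-to-factor assignments follow the commonly cited TAS-20 scoring key; the responses are simulated, so the resulting alphas are not the study's values.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Commonly cited TAS-20 factor structure (1-based item numbers)
factors = {
    "DIF": [1, 3, 6, 7, 9, 13, 14],
    "DDF": [2, 4, 11, 12, 17],
    "EOT": [5, 8, 10, 15, 16, 18, 19, 20],
}

rng = np.random.default_rng(7)
responses = rng.integers(1, 6, size=(264, 20))   # simulated 5-point Likert data
for name, items in factors.items():
    cols = [i - 1 for i in items]                # convert to 0-based columns
    print(name, round(cronbach_alpha(responses[:, cols]), 2))
```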