68 results for Leakage resilience
Abstract:
Blood-brain barrier (BBB) hyperpermeability in multiple sclerosis (MS) is associated with lesion pathogenesis and has been linked to pathology in microvascular tight junctions (TJs). This study quantifies the uneven distribution of TJ pathology and its association with BBB leakage. Frozen sections from plaque and normal-appearing white matter (NAWM) in 14 cases were studied together with white matter from six neurological and five normal controls. Using single and double immunofluorescence and confocal microscopy, the TJ-associated protein zonula occludens-1 (ZO-1) was examined across lesion types and tissue categories, and in relation to fibrinogen leakage. Confocal image data sets were analysed for 2198 MS and 1062 control vessels. Significant differences in the incidence of TJ abnormalities were detected between the different lesion types in MS and between MS and control white matter. Abnormalities were frequent in oil-red O (ORO)+ active plaques, affecting 42% of vessel segments, but less frequent in ORO- inactive plaques (23%), NAWM (13%), and normal (3.7%) and neurological (8%) controls. A similar pattern was found irrespective of vessel size, supporting a causal role for diffusible inflammatory mediators. In both NAWM and inactive lesions, dual labelling showed that the vessels with the most TJ abnormality also showed the most fibrinogen leakage. This was even more pronounced in active lesions, where 41% of vessels in the highest grade for TJ alteration showed severe leakage. It is concluded that disruption of TJs in MS, affecting both paracellular and transcellular paths, contributes to BBB leakage. TJ abnormality and BBB leakage in inactive lesions suggest either failure of TJ repair or a continuing pathological process; in NAWM, they suggest either pre-lesional change or secondary damage. Clinically inapparent TJ pathology has prognostic implications and should be considered when planning disease-modifying therapy.
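To make the scale of these incidence differences concrete, here is a minimal sketch in Python of the kind of between-category comparison the abstract reports. The per-category vessel counts are hypothetical, chosen only so the proportions match the reported percentages; the abstract gives only the overall totals of 2198 MS and 1062 control vessels.

# Hedged sketch: comparing the incidence of TJ abnormalities across
# tissue categories. Counts are HYPOTHETICAL; only the percentages
# (42%, 23%, 13%, 3.7%, 8%) come from the abstract.
from scipy.stats import chi2_contingency

counts = {                                  # (abnormal, normal) vessel segments
    "ORO+ active plaque":   (210, 290),    # ~42% abnormal
    "ORO- inactive plaque": (115, 385),    # ~23%
    "NAWM":                 (91, 609),     # ~13%
    "normal control":       (22, 578),     # ~3.7%
    "neurological control": (37, 425),     # ~8%
}
chi2, p, dof, _ = chi2_contingency([list(v) for v in counts.values()])
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2e}")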
Abstract:
Suicide attacks have raised the stakes for officers deciding whether or not to shoot a suspect (the 'Police Officer's Terrorist Dilemma'). Despite high-profile errors, we know little about how trust in the police is affected by their response to the terrorist threat. Building on a conceptualisation of lay observers as intuitive signal detection theorists, a general population sample (N = 1153) was presented with scenarios manipulated in terms of suspect status (Armed/Unarmed), officer decision (Shoot/Not Shoot) and outcome severity (e.g. suspect armed with bomb/knife; police shoot suspect/suspect plus child bystander). Supporting predictions, people showed higher trust in officers who made correct decisions, reflecting good discrimination ability, and in officers who decided to shoot, reflecting an 'appropriate' response bias given the relative costs and benefits. This latter effect was moderated by (a) outcome severity, suggesting it did not simply reflect a preference for a particular type of action, and (b) preferences for a tough stance towards terrorism indexed by Right-Wing Authoritarianism (RWA). Despite loss of civilian life, failure to prevent minor terror attacks resulted in no loss of trust amongst people low in RWA, whereas among people high in RWA trust remained positive even when police erroneously shot an unarmed suspect. Relations to alternative definitions of trust and to procedural justice research are discussed. Copyright (C) 2007 John Wiley & Sons, Ltd.
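The 'intuitive signal detection theorist' framing maps the two effects onto the standard SDT quantities: discrimination ability (d') and response bias (criterion c). A minimal sketch in Python, with hypothetical hit and false-alarm rates that are not taken from the study:

# Hedged sketch of the signal detection quantities invoked in the
# abstract. Hit/false-alarm rates below are HYPOTHETICAL.
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) for given hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # discrimination ability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias; c < 0 is a
                                                   # liberal bias toward "shoot"
    return d_prime, criterion

# Example: an officer who shoots 90% of armed suspects (hits) and 30% of
# unarmed suspects (false alarms) discriminates well but leans toward
# shooting, the 'appropriate' bias the abstract describes.
d, c = sdt_measures(0.90, 0.30)
print(f"d' = {d:.2f}, c = {c:.2f}")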
Abstract:
The speedup provided by quantum algorithms with respect to their classical counterparts is at the origin of scientific interest in quantum computation. However, the fundamental reasons for this speedup are not yet completely understood and deserve further attention. In this context, the classical simulation of quantum algorithms is a useful tool that can help us gain insight. Starting from the study of general conditions for classical simulation, we highlight several important differences between two nonequivalent classes of quantum algorithms. We investigate their performance under realistic conditions by quantitatively studying their resilience to static noise, i.e. errors affecting the initial preparation of the register used to run an algorithm. We also compare the evolution of the entanglement involved in the different computational processes.
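A toy classical simulation illustrates what static noise on the register means: perturb the input state once, run the algorithm noiselessly, and compare success probabilities. A minimal sketch in Python using 3-qubit Grover search as a stand-in; the abstract does not name specific algorithms, and the noise model here is an assumption for illustration.

# Hedged sketch: one-off ('static') error in the initial preparation,
# after which the algorithm itself runs perfectly.
import numpy as np

rng = np.random.default_rng(0)
n, target = 3, 5                 # 3 qubits; marked item |101>
N = 2 ** n

def grover(state: np.ndarray, iterations: int) -> np.ndarray:
    """Apply `iterations` Grover steps (oracle + diffusion) to `state`."""
    for _ in range(iterations):
        state = state.copy()
        state[target] *= -1                   # oracle: flip marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about mean
    return state

ideal = np.full(N, 1 / np.sqrt(N))            # perfect uniform superposition
noisy = ideal + 0.2 * rng.standard_normal(N)  # static preparation error
noisy /= np.linalg.norm(noisy)                # keep the state normalised

k = 2                                         # ~optimal iteration count for N = 8
for label, psi in [("ideal", ideal), ("noisy", noisy)]:
    p_success = abs(grover(psi, k)[target]) ** 2
    print(f"{label}: P(measure target) = {p_success:.3f}")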
Abstract:
Using a stylized theoretical model, we argue that current economic analyses of climate policy tend to overestimate the degree of carbon leakage because they abstract from the effects of induced technological change. We analyse carbon leakage in a two-country model with directed technical change, where only one of the countries enforces an exogenous cap on emissions. Climate policy induces changes in relative prices that cause carbon leakage through a terms-of-trade effect. However, these changes in relative prices also affect the incentives to innovate in different sectors. This leads to a counterbalancing induced-technology effect, which always reduces carbon leakage. We therefore conclude that the leakage rates reported in the literature may be too high, as these estimates neglect the effect of price changes on the incentives to innovate.
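The argument can be stated compactly with the standard leakage-rate definition. A minimal sketch in LaTeX; the decomposition into terms-of-trade and induced-technology components paraphrases the abstract rather than quoting the paper's model.

% Leakage rate: foreign emission increase per unit of domestic abatement
% under the cap.
\[
  L = \frac{\Delta E^{\mathrm{foreign}}}{-\Delta E^{\mathrm{home}}}
\]
% Schematic decomposition implied by the abstract: a positive
% terms-of-trade term and a negative induced-technology term, so L is
% lower than in models that ignore induced innovation.
\[
  \Delta E^{\mathrm{foreign}}
  = \underbrace{\Delta E^{\mathrm{ToT}}}_{>0}
  + \underbrace{\Delta E^{\mathrm{tech}}}_{<0}
\]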