958 results for Boyd-Lawton theorem
Abstract:
Background: Cruciferous vegetable (CV) consumption is associated with a reduced risk of several cancers in epidemiologic studies. Objective: The aim of this study was to determine the effects of watercress (a CV) supplementation on biomarkers related to cancer risk in healthy adults. Design: A single-blind, randomized, crossover study was conducted in 30 men and 30 women (30 smokers and 30 nonsmokers) with a mean age of 33 y (range: 19-55 y). The subjects were fed 85 g raw watercress daily for 8 wk in addition to their habitual diet. The effect of supplementation was measured on a range of endpoints, including DNA damage in lymphocytes (with the comet assay), activity of detoxifying enzymes (glutathione peroxidase and superoxide dismutase) in erythrocytes, plasma antioxidants (retinol, ascorbic acid, α-tocopherol, lutein, and β-carotene), plasma total antioxidant status with the use of the ferric reducing ability of plasma assay, and plasma lipid profile. Results: Watercress supplementation (active compared with control phase) was associated with reductions in basal DNA damage (by 17%; P = 0.03), in basal plus oxidative purine DNA damage (by 23.9%; P = 0.002), and in basal DNA damage in response to ex vivo hydrogen peroxide challenge (by 9.4%; P = 0.07). Beneficial changes seen after watercress intervention were greater and more significant in smokers than in nonsmokers. Plasma lutein and β-carotene increased significantly by 100% and 33% (P < 0.001), respectively, after watercress supplementation. Conclusion: The results support the theory that consumption of watercress can be linked to a reduced risk of cancer via decreased damage to DNA and possible modulation of antioxidant status by increasing carotenoid concentrations.
Abstract:
Objectives: To identify the extent of dual task interference between cognitive and motor tasks (cognitive motor interference (CMI)) in sitting balance during recovery from stroke; to compare CMI in sitting balance between stroke and non-stroke groups; and to record any changes to CMI during sitting that correlate with functional recovery. Method: 36 patients from stroke rehabilitation settings in three NHS trusts. Healthy control group: 21 older volunteers. Measures of seated postural sway were taken in unsupported sitting positions, alone, or concurrently with either a repetitive utterance task or an oral word category generation task. Outcome measures were variability of sway area, path length of sway, and the number of valid words generated. Results: Stroke patients were generally less stable than controls during unsupported sitting tasks. They showed greater sway during repetitive speech compared with quiet sitting, but did not show increased postural instability between repetitive speech and word category generation. When compared with controls, stroke patients experienced greater dual task interference during repetitive utterance but not during word generation. Sway during repetitive speech was negatively correlated with concurrent function on the Barthel ADL index. Conclusions: The stroke patients showed postural instability and poor word generation skills. The results of this study show that the effort of verbal utterances alone was sufficient to disturb postural control early after stroke, and the extent of this instability correlated with concomitant Barthel ADL function.
Abstract:
This paper investigates whether and to what extent a wide range of actors in the UK are adapting to climate change, and whether this is evidence of a social transition. We document evidence of over 300 examples of early adopters of adaptation practice to climate change in the UK. These examples span a range of activities from small adjustments (or coping) to building adaptive capacity, implementing actions and creating deeper systemic change in public and private organisations in a range of sectors. We find that adaptation in the UK has been dominated by government initiatives and has principally occurred in the form of research into climate change impacts. These actions within government stimulate a further set of actions at other scales in public agencies, regulatory agencies and regional government (or in the devolved administrations), though with little real evidence of climate change adaptation initiatives trickling down to local government level. The water supply and flood defence sectors, requiring significant investment in large scale infrastructure such as reservoirs and coastal defences, have invested more heavily in identifying potential impacts and adaptations. Economic sectors that are not dependent on large scale infrastructure appear to be investing far less effort and resources in preparing for climate change. We conclude that while the government-driven top-down targeted adaptation approach has generated anticipatory action at low cost, it may also have created enough niche activities to allow for diffusion of new adaptation practices in response to real or perceived climate change. These results have significant implications for how climate policy can be developed to support autonomous adaptors in the UK and other countries.
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy.
Some newly obtained empirical closure equations for such parameters allow one to define such properties as an explicit function of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting to only well selected simulations, and by taking full advantage of ensemble methods. The specific case of globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
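For concreteness, the Lorenz 96 test bed discussed in this abstract can be sketched as follows. This is a minimal illustration only: the 40-variable size, forcing F = 8, and fourth-order Runge-Kutta step are common textbook choices for the chaotic regime, not necessarily the paper's settings.

```python
import numpy as np

def lorenz96_tendency(x, F):
    """Standard Lorenz 96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with periodic boundary conditions handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt):
    """One fourth-order Runge-Kutta step of the model."""
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative run: 40 sites, forcing F = 8 (chaotic), perturb the uniform state
n, F, dt = 40, 8.0, 0.01
x = F * np.ones(n)
x[0] += 0.01  # small perturbation to leave the unstable fixed point
for _ in range(1000):
    x = rk4_step(x, F, dt)
energy = 0.5 * np.mean(x ** 2)  # an example of a global observable
```

Global observables such as the mean energy above are the kind of quantities whose response to forcing perturbations the Ruelle theory describes.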
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes’s theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of the estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
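The role Bayes's theorem plays here can be illustrated with a toy one-dimensional retrieval. The lognormal prior, the saturating forward model, and the noise level below are illustrative assumptions, not the TMI algorithm's actual components; the sketch only shows how a full posterior arises and why a saturating observation weakly constrains high rain rates.

```python
import numpy as np

# Candidate rain rates (mm/h) on a grid, with an assumed lognormal prior
rain = np.linspace(0.01, 50.0, 2000)
prior = np.exp(-0.5 * (np.log(rain) - 1.0) ** 2) / rain
prior /= prior.sum()

def forward_model(r):
    """Toy brightness-temperature response that saturates at high rain rate,
    so the observation constrains large r only weakly."""
    return 280.0 - 100.0 * np.exp(-0.1 * r)

def posterior(obs_tb, noise_sigma=2.0):
    """Posterior over rain rate via Bayes's theorem on the grid."""
    likelihood = np.exp(-0.5 * ((obs_tb - forward_model(rain)) / noise_sigma) ** 2)
    post = likelihood * prior
    return post / post.sum()

post = posterior(obs_tb=260.0)
posterior_mean = np.sum(rain * post)  # one possible estimator
map_estimate = rain[np.argmax(post)]  # another; the choice of estimator matters
```

Having the whole distribution `post`, rather than a single number, is what lets one compare estimators (posterior mean vs. mode) and quantify retrieval uncertainty.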
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges into a graph or hypergraph, so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local-edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension to Mader’s classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such “good” split and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges we must add to any given hypergraph to ensure that in the resulting hypergraph we have λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called “local-edge-connectivity augmentation problem” for hypergraphs. We also provide an extension to a theorem of Szigeti, about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly we concern ourselves with an augmentation problem that includes a locational constraint.
The premise is that we are given a hypergraph H = (V,E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected, and has no new edge contained in some Pi. We consider the splitting technique and describe the obstacles that prevent us forming “good” splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required and a polynomial algorithm to provide an optimal augmentation.
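The splitting-off operation at the heart of these results can be sketched very simply: in a hypergraph on V + s, the two size-two edges su and sv are replaced by the single edge uv. The list-of-frozensets representation below is an illustrative choice, not the thesis's.

```python
def split_off(edges, s, u, v):
    """Replace the size-two edges {s,u} and {s,v} with the single edge {u,v}.
    Raises ValueError if either edge is absent."""
    e1, e2 = frozenset({s, u}), frozenset({s, v})
    edges = list(edges)  # work on a copy
    edges.remove(e1)
    edges.remove(e2)
    edges.append(frozenset({u, v}))
    return edges

# Usage: split off the pair (u, v) at s; the size-three hyperedge is untouched
E = [frozenset({"s", "u"}), frozenset({"s", "v"}), frozenset({"u", "w", "x"})]
E2 = split_off(E, "s", "u", "v")
```

A "good" split in the abstract's sense is one such replacement that additionally preserves the prescribed local-edge-connectivity within V; the structural theorems characterise when no such choice of u, v exists.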
Abstract:
A multivariable hyperstable robust adaptive decoupling control algorithm based on a neural network is presented for the control of nonlinear multivariable coupled systems with unknown parameters and structure. The Popov theorem is used in the design of the controller. The modelling errors, coupling action and other uncertainties of the system are identified on-line by a neural network. The identified results are taken as compensation signals such that the robust adaptive control of nonlinear systems is realised. Simulation results are given.
Abstract:
The molecular mechanisms underlying the initiation and control of the release of cytochrome c during mitochondrion-dependent apoptosis are thought to involve the phosphorylation of mitochondrial Bcl-2 and Bcl-x(L). Although the c-Jun N-terminal kinase (JNK) has been proposed to mediate the phosphorylation of Bcl-2/Bcl-x(L), the mechanisms linking the modification of these proteins and the release of cytochrome c remain to be elucidated. This study was aimed at establishing interdependency between JNK signalling and mitochondrial apoptosis. Using an experimental model consisting of isolated, bioenergetically competent rat brain mitochondria, these studies show that (i) JNK catalysed the phosphorylation of Bcl-2 and Bcl-x(L) as well as other mitochondrial proteins, as shown by two-dimensional isoelectric focusing/SDS/PAGE; (ii) JNK induced cytochrome c release, in a process independent of the permeability transition of the inner mitochondrial membrane (imPT) and insensitive to cyclosporin A; (iii) JNK mediated a partial collapse of the mitochondrial inner-membrane potential (Deltapsim) in an imPT- and cyclosporin A-independent manner; and (iv) JNK was unable to induce imPT/swelling and did not act as a co-inducer, but as an inhibitor of Ca2+-induced imPT. The results are discussed with regard to the functional link between the Deltapsim and factors influencing the permeability transition of the inner and outer mitochondrial membranes. Taken together, JNK-dependent phosphorylation of mitochondrial proteins including, but not limited to, Bcl-2/Bcl-x(L) may represent a potential mechanism for the modulation of mitochondrial function during apoptosis.
Abstract:
This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints. Thus, the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies Bayes’ theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove to be superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.
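The predict/Bayes-update cycle such a linearisation-free filter rests on can be sketched with a minimal grid-based recursive Bayesian filter. The random-walk dynamics and Gaussian measurement noise below are illustrative assumptions; they stand in for, and do not reproduce, the paper's maximum-entropy exponential-density construction.

```python
import numpy as np

# Represent the state density on a fixed grid instead of linearising (as the EKF does)
grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]
belief = np.exp(-0.5 * grid ** 2)  # initial density (standard normal shape)
belief /= belief.sum() * dx

def predict(belief, process_sigma=0.3):
    """Prediction: convolve the density with the process-noise kernel
    (random-walk dynamics assumed for illustration)."""
    kernel = np.exp(-0.5 * (grid / process_sigma) ** 2)
    kernel /= kernel.sum()
    pred = np.convolve(belief, kernel, mode="same")
    return pred / (pred.sum() * dx)

def update(belief, z, meas_sigma=0.5):
    """Update: apply Bayes' theorem with a Gaussian likelihood; no linearisation."""
    likelihood = np.exp(-0.5 * ((z - grid) / meas_sigma) ** 2)
    post = belief * likelihood
    return post / (post.sum() * dx)

for z in [0.4, 0.6, 0.5]:  # illustrative measurement sequence
    belief = update(predict(belief), z)
estimate = np.sum(grid * belief) * dx  # posterior-mean state estimate
```

The FAD idea replaces this brute-force grid with exponential-family densities parameterised by a finite number of coefficients, so the same predict/update recursion acts on parameters rather than on a discretised density.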
Abstract:
We introduce the perspex machine which unifies projective geometry and Turing computation and results in a supra-Turing machine. We show two ways in which the perspex machine unifies symbolic and non-symbolic AI. Firstly, we describe concrete geometrical models that map perspexes onto neural networks, some of which perform only symbolic operations. Secondly, we describe an abstract continuum of perspex logics that includes both symbolic logics and a new class of continuous logics. We argue that an axiom in symbolic logic can be the conclusion of a perspex theorem. That is, the atoms of symbolic logic can be the conclusions of sub-atomic theorems. We argue that perspex space can be mapped onto the spacetime of the universe we inhabit. This allows us to discuss how a robot might be conscious, feel, and have free will in a deterministic, or semi-deterministic, universe. We ground the reality of our universe in existence. On a theistic point, we argue that preordination and free will are compatible. On a theological point, we argue that it is not heretical for us to give robots free will. Finally, we give a pragmatic warning as to the double-edged risks of creating robots that do, or alternatively do not, have free will.