913 results for Averaging Principle
Abstract:
Pollen-mediated gene flow is one of the main concerns associated with the introduction of genetically modified (GM) crops. Should a premium for non-GM varieties emerge on the market, ‘contamination’ by GM pollen would generate a revenue loss for growers of non-GM varieties. This paper analyses the problem of pollen-mediated gene flow as a particular type of production externality. The model, although simple, provides useful insights into coexistence policies. Following on from this and taking GM herbicide-tolerant oilseed rape (Brassica napus) as a model crop, a Monte Carlo simulation is used to generate data and then estimate the effect of several important policy variables (including width of buffer zones and spatial aggregation) on the magnitude of the externality associated with pollen-mediated gene flow.
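As a rough illustration of the kind of Monte Carlo experiment this abstract describes, the sketch below draws random dispersal distances beyond a buffer zone and estimates the probability that GM pollen contamination of a non-GM sampling point exceeds a labelling threshold. The exponential dispersal kernel, the 0.9% threshold and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import math
import random

def p_exceeds_threshold(buffer_width_m, n_trials=20_000, seed=0):
    """Toy Monte Carlo: probability that GM pollen contamination at a
    non-GM sampling point exceeds a labelling threshold, given a buffer
    zone of the stated width. All parameters are illustrative."""
    rng = random.Random(seed)
    threshold = 0.009  # assumed 0.9% labelling threshold
    exceed = 0
    for _ in range(n_trials):
        # sampling point lies beyond the buffer; extra distance ~ Exp(mean 50 m)
        distance = buffer_width_m + rng.expovariate(1 / 50.0)
        # assumed dispersal kernel: contamination decays exponentially with distance
        contamination = 0.05 * math.exp(-distance / 30.0)
        # multiplicative lognormal noise for weather/field variability
        contamination *= rng.lognormvariate(0.0, 0.5)
        if contamination > threshold:
            exceed += 1
    return exceed / n_trials
```

Sweeping `buffer_width_m` then traces how the externality shrinks as buffers widen, mirroring the policy variables studied in the paper.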
Abstract:
The problem of adjusting the weights (learning) in multilayer feedforward neural networks (NN) is known to be of high importance when utilizing NN techniques in various practical applications. The learning procedure should be performed as fast as possible and in a computationally simple fashion, two requirements that are usually not satisfied in practice by the methods developed so far. Moreover, the presence of random inaccuracies is usually not taken into account. In view of these three issues, an alternative stochastic approximation approach discussed in the paper seems very promising.
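The flavour of stochastic approximation this abstract alludes to can be illustrated with a one-weight Robbins-Monro recursion; the quadratic loss, step-size schedule and noise level below are illustrative assumptions, not the paper's multilayer procedure.

```python
import random

def robbins_monro_sgd(n_steps=5000, seed=1):
    """Minimal Robbins-Monro stochastic approximation: recover the weight
    w* = 2 of a noisy linear map from sampled gradients, with step sizes
    a_k = a0 / k satisfying sum(a_k) = inf and sum(a_k^2) < inf."""
    rng = random.Random(seed)
    true_w, w = 2.0, 0.0
    for k in range(1, n_steps + 1):
        x = rng.gauss(0.0, 1.0)
        y = true_w * x + rng.gauss(0.0, 0.1)   # noisy training sample
        grad = 2.0 * (w * x - y) * x           # stochastic gradient of (wx - y)^2
        w -= (0.5 / k) * grad                  # decaying Robbins-Monro step
    return w
```

The decaying step sizes average out the random inaccuracies in the gradient samples, which is exactly the property that makes stochastic approximation attractive under noise.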
Abstract:
We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems.
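A minimal numerical sketch of the surrogate dynamics described in this abstract, for a single variable x of the system of interest: autonomous drift plus the three named contributions, a first-order deterministic correction, a second-order stochastic forcing, and a memory term convolving past states with a kernel. The drift, amplitudes and exponential kernel are illustrative assumptions, not quantities derived from Ruelle response theory.

```python
import math
import random

def surrogate_trajectory(n_steps=2000, dt=0.01, seed=2):
    """Euler-Maruyama integration of a surrogate model:
    dx = (autonomous drift + deterministic correction + memory) dt + noise."""
    rng = random.Random(seed)
    x, mem, xs = 0.0, 0.0, []
    for _ in range(n_steps):
        # running exponential-kernel convolution of the past trajectory
        mem = mem * math.exp(-dt / 0.5) + 0.05 * x * dt
        drift = -x + 0.1                      # autonomous dynamics + deterministic correction
        noise = 0.2 * math.sqrt(dt) * rng.gauss(0.0, 1.0)  # stochastic forcing
        x += (drift + mem) * dt + noise
        xs.append(x)
    return xs
```

The recursive update of `mem` implements the convolution with the second system's correlation function as a cheap running sum, which is how such memory terms are typically realised in practice.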
Abstract:
Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10^5 to 10^7 neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity, see figure 1. MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. But these approximations are insufficient in particular for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports the local variation of model parameters and freely chosen connectivity for many thousand triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model are then added to obtain accurate signal expression for a comparison to experimental data. To showcase the synergy of these computational developments, we simultaneously predict EEG and fMRI BOLD responses by adding an established model for neurovascular coupling and convolving with "Balloon-Windkessel" hemodynamics.
We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can be easily adapted according to future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favored by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM in this realistic simulation framework.
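The "Balloon-Windkessel" step mentioned in this abstract can be sketched as a standard hemodynamic forward model mapping neural activity to a BOLD signal. The parameter values below are common textbook choices, not those of the cited MFM [2], and the neural input is an arbitrary boxcar.

```python
def balloon_windkessel(z, dt=0.1):
    """Minimal Balloon-Windkessel sketch: integrate vasodilatory signal s,
    blood flow f, venous volume v and deoxyhemoglobin content q driven by
    a neural activity time series z, and return the BOLD signal."""
    kappa, gamma, tau, alpha, E0, V0 = 0.65, 0.41, 0.98, 0.32, 0.34, 0.02
    k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2
    s, f, v, q = 0.0, 1.0, 1.0, 1.0
    bold = []
    for zt in z:
        ds = zt - kappa * s - gamma * (f - 1.0)
        df = s
        dv = (f - v ** (1 / alpha)) / tau
        dq = (f * (1 - (1 - E0) ** (1 / f)) / E0 - v ** (1 / alpha) * q / v) / tau
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold.append(V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v)))
    return bold
```

Convolving a simulated neural mass output through this map is what allows the same MFM state to be compared against both EEG and fMRI data.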
Abstract:
Retributivism is often explicitly or implicitly assumed to be compatible with the harm principle, since the harm principle (in some guises) concerns the content of the criminal law, whilst retributivism concerns the punishment of those that break the law. In this essay I show that retributivism should not be endorsed alongside any version of the harm principle. For some versions of the harm principle, this is because retributivism is logically incompatible with it, or its grounds. For others, retributivists can only endorse the harm principle at the cost of endorsing implausible positions about the content of the criminal law.
Abstract:
We report numerical results from a study of balance dynamics using a simple model of atmospheric motion that is designed to help address the question of why balance dynamics is so stable. The non-autonomous Hamiltonian model has a chaotic slow degree of freedom (representing vortical modes) coupled to one or two linear fast oscillators (representing inertia-gravity waves). The system is said to be balanced when the fast and slow degrees of freedom are separated. We find adiabatic invariants that drift slowly in time. This drift is consistent with a random-walk behaviour at a speed which qualitatively scales, even for modest time scale separations, as the upper bound given by Neishtadt’s and Nekhoroshev’s theorems. Moreover, a similar type of scaling is observed for solutions obtained using a singular perturbation (‘slaving’) technique in resonant cases where Nekhoroshev’s theorem does not apply. We present evidence that the smaller Lyapunov exponents of the system scale exponentially as well. The results suggest that the observed stability of nearly-slow motion is a consequence of the approximate adiabatic invariance of the fast motion.
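The adiabatic invariance at the heart of this abstract can be demonstrated with the simplest slow-fast example: a fast oscillator whose frequency varies slowly. The energy changes by O(1) over a slow cycle, but the action I = E/w drifts only weakly. The specific frequency law w(eps*t) is an illustrative assumption, not the paper's model.

```python
import math

def adiabatic_action(t_final=150.0, dt=1e-3, eps=0.05):
    """Track the action I = E/w of a fast oscillator with slowly varying
    frequency w = 1 + 0.5*sin(eps*t), using a symplectic Euler step for
    H = p^2/2 + w^2 q^2 / 2."""
    q, p, t = 1.0, 0.0, 0.0
    actions = []
    while t < t_final:
        w = 1.0 + 0.5 * math.sin(eps * t)   # slowly varying frequency
        p -= dt * w * w * q                 # symplectic Euler kick
        q += dt * p                         # symplectic Euler drift
        E = 0.5 * p * p + 0.5 * w * w * q * q
        actions.append(E / w)
        t += dt
    return actions
```

Even though w (and hence E) sweeps between 0.5 and 1.5, the recorded action stays close to its initial value, the elementary analogue of the near-conservation underlying the stability of balance.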
Abstract:
This paper examines the determinacy implications of forecast-based monetary policy rules that set the interest rate in response to expected future inflation in a Neo-Wicksellian model that incorporates real balance effects. We show that the presence of such effects in closed economies restricts the ability of the Taylor principle to prevent indeterminacy of the rational expectations equilibrium. The problem is exacerbated in open economies, particularly if the policy rule reacts to consumer-price, rather than domestic-price, inflation. However, determinacy can be restored in both closed and open economies with the addition of monetary policy inertia.
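The determinacy logic can be illustrated with a textbook New Keynesian model under a forecast-based rule i_t = phi_pi * E_t[pi_{t+1}]. This sketch deliberately omits the real balance effects and open-economy features that drive the paper's results; the calibration is illustrative. Writing the model as z_t = B E_t[z_{t+1}] for the two jump variables z = (x, pi), the rational expectations equilibrium is determinate iff both eigenvalues of B lie strictly inside the unit circle.

```python
import cmath

def determinate(phi_pi, beta=0.99, kappa=0.1, sigma=1.0):
    """Determinacy check for the basic NK model under a forecast-based rule.
    B maps expected future (output gap, inflation) into current values;
    determinacy requires both eigenvalues of B inside the unit circle."""
    a, b = 1.0, -(phi_pi - 1.0) / sigma            # IS curve row
    c, d = kappa, beta - kappa * (phi_pi - 1.0) / sigma  # Phillips curve row
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return abs(lam1) < 1.0 and abs(lam2) < 1.0
```

Note that a sufficiently aggressive forecast-based response (very large phi_pi) also loses determinacy in this sketch, echoing the known fragility of expected-inflation rules relative to the simple Taylor principle.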
Abstract:
Several recent reports suggest that inflammatory signals play a decisive role in the self-renewal, migration and differentiation of multipotent neural stem cells (NSCs). NSCs are believed to be able to ameliorate the symptoms of several brain pathologies through proliferation, migration into the area of the lesion and either differentiation into the appropriate cell type or secretion of anti-inflammatory cytokines. Although NSCs have beneficial roles, current evidence indicates that brain tumours, such as astrogliomas or ependymomas are also caused by tumour-initiating cells with stem-like properties. However, little is known about the cellular and molecular processes potentially generating tumours from NSCs. Most pro-inflammatory conditions are considered to activate the transcription factor NF-kappaB in various cell types. Strong inductive effects of NF-kappaB on proliferation and migration of NSCs have been described. Moreover, NF-kappaB is constitutively active in most tumour cells described so far. Chronic inflammation is also known to initiate cancer. Thus, NF-kappaB might provide a novel mechanistic link between chronic inflammation, stem cells and cancer. This review discusses the apparently ambivalent role of NF-kappaB: physiological maintenance and repair of the brain via NSCs, and a potential role in tumour initiation. Furthermore, it reveals a possible mechanism of brain tumour formation based on inflammation and NF-kappaB activity in NSCs.
Abstract:
John Broome has argued that value incommensurability is vagueness, by appeal to a controversial ‘collapsing principle’ about comparative indeterminacy. I offer a new counterexample to the collapsing principle. That principle allows us to derive an outright contradiction from the claim that some object is a borderline case of some predicate. But if there are no borderline cases, then the principle is empty. The collapsing principle is either false or empty.
Abstract:
Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)) there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has the most impact on the metrics for NOx, as the net values for these quantities are residuals of sums of terms of opposing signs.
For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. This effect stems largely from the construction of the input ozone fields, which overestimates the true ensemble spread. Hence, while averages of multi-model fields are normally appropriate for calculating mean RF, GWP and GTP, they are not a reliable method for calculating the uncertainty in these fields, and in general overestimate that uncertainty.
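The averaging-order effect this abstract describes can be reproduced in miniature: pushing mean-plus/minus-standard-deviation input fields through a nonlinear forcing functional inflates the spread relative to computing the forcing per model and taking the spread afterwards, because the mean-field route treats all grid-point deviations as perfectly correlated. The quadratic "forcing" and the field statistics below are illustrative assumptions, not the HTAP ensemble diagnostics.

```python
import math
import random

def averaging_order_demo(n_models=10, n_grid=100, seed=3):
    """Compare the true ensemble spread of a nonlinear forcing functional
    with the spread inferred from mean +/- std input fields."""
    rng = random.Random(seed)

    def forcing(field):                      # assumed nonlinear RF functional
        return sum(x + 0.1 * x * x for x in field) / len(field)

    # each model: constant background + independent grid-point deviations
    models = [[1.0 + 0.2 * rng.gauss(0.0, 1.0) for _ in range(n_grid)]
              for _ in range(n_models)]
    per_model = [forcing(f) for f in models]
    mu = sum(per_model) / n_models
    true_sd = math.sqrt(sum((r - mu) ** 2 for r in per_model) / n_models)

    # ensemble-mean-field route: mean and std at each grid point, then forcing
    mean_f = [sum(m[g] for m in models) / n_models for g in range(n_grid)]
    sd_f = [math.sqrt(sum((m[g] - mean_f[g]) ** 2 for m in models) / n_models)
            for g in range(n_grid)]
    plus = forcing([m + s for m, s in zip(mean_f, sd_f)])
    minus = forcing([m - s for m, s in zip(mean_f, sd_f)])
    return true_sd, (plus - minus) / 2.0
```

Because independent grid-point deviations partly cancel when the forcing functional integrates over the field, the per-model spread is much smaller than the mean-field estimate, the same mechanism the paper identifies.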
Abstract:
The detection of anthropogenic climate change can be improved by recognising the seasonality in the climate change response. This is demonstrated for the North Atlantic jet (zonal wind at 850 hPa, U850) and European precipitation responses projected by the CMIP5 climate models. The U850 future response is characterised by a marked seasonality: an eastward extension of the North Atlantic jet into Europe in November-April, and a poleward shift in May-October. Under the RCP8.5 scenario, the multi-model mean response in U850 in these two extended seasonal means emerges by 2035-2040 for the lower-latitude features and by 2050-2070 for the higher-latitude features, relative to the 1960-1990 climate. This is 5-15 years earlier than when evaluated in the traditional meteorological seasons (December-February, June-August), and it results from an increase in the signal-to-noise ratio associated with the spatial coherence of the response within the extended seasons. The annual mean response lacks important information on the seasonality of the response without improving the signal-to-noise ratio. The same two extended seasons are demonstrated to capture the seasonality of the European precipitation response to climate change and to anticipate its emergence by 10-20 years. Furthermore, some of the regional responses, such as the Mediterranean precipitation decline and the U850 response in North Africa in the extended winter, are projected to emerge by 2020-2025, according to the models with a strong response. Therefore, observations might soon be useful to test aspects of the atmospheric circulation response predicted by some of the CMIP5 models.
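The earlier emergence reported in this abstract can be caricatured with a toy emergence-time calculation: if averaging over a longer, coherent extended season reduces the interannual noise of the seasonal-mean index, the year when the accumulated trend first reaches twice the noise level comes sooner. The square-root noise reduction and all numbers below are illustrative assumptions, not CMIP5 diagnostics.

```python
import math

def years_to_emergence(trend_per_year, interannual_sd, coherence_factor):
    """Toy time of emergence: the response 'emerges' when trend * t equals
    twice the noise of the seasonal-mean index; coherence_factor stands in
    for the effective number of independent samples averaged together."""
    noise = interannual_sd / math.sqrt(coherence_factor)
    return 2.0 * noise / trend_per_year
```

With the same trend and raw noise, a six-month coherent season emerges earlier than a three-month one, the basic signal-to-noise argument behind the paper's choice of extended seasons.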