842 results for Branching Processes with Immigration
Abstract:
The plate is bound in after pl.12, v.2.
Abstract:
Kept up to date by cumulative supplements
Abstract:
"September 28, 1982."
Type 1 nitrergic (ND1) cells of the rabbit retina: Comparison with other axon-bearing amacrine cells
Abstract:
NADPH diaphorase (NADPHd) histochemistry labels two types of nitrergic amacrine cells in the rabbit retina. Both the large ND1 cells and the small ND2 cells stratify in the middle of the inner plexiform layer, and their overlapping processes produce a dense plexus, which makes it difficult to trace the morphology of single cells. The complete morphology of the ND1 amacrine cells has been revealed by injecting Neurobiotin into large round somata in the inner nuclear layer, which resulted in the labelling of amacrine cells whose proximal morphology and stratification matched those of the ND1 cells stained by NADPHd histochemistry. The Neurobiotin-injected ND1 cells showed strong homologous tracer coupling to surrounding ND1 cells, and double-labelling experiments confirmed that these coupled cells showed NADPHd reactivity. The ND1 amacrine cells branch in stratum 3 of the inner plexiform layer, where they produce a sparsely branched dendritic tree of 400-600 μm diameter in ventral peripheral retina. In addition, each cell gives rise to several fine beaded processes, which arise either from a side branch of the dendritic tree or from the tapering of a distal dendrite. These axon-like processes branch successively within the vicinity of the dendritic field before extending, with little or no further branching, for 3-5 mm from the soma in ventral peripheral retina. Consequently, these cells may span one-third of the visual field of each eye, and their spatial extent appears to be greater than that of most other types of axon-bearing amacrine cells injected with Neurobiotin in this study. The morphology and tracer-coupling pattern of the ND1 cells are compared with those of confirmed type 1 catecholaminergic cells, a presumptive type 2 catecholaminergic cell, the type 1 polyaxonal cells, the long-range amacrine cells, a novel type of axon-bearing cell that also branches in stratum 3, and a type of displaced amacrine cell that may correspond to the type 2 polyaxonal cell. © 2004 Wiley-Liss, Inc.
Abstract:
The efficiency of inhibitory control processes has been proposed as a mechanism constraining working-memory capacity. In order to investigate genetic influences on processes that may reflect interference control, event-related potential (ERP) activity recorded at frontal sites, during distracting and nondistracting conditions of a working-memory task, in a sample of 509 twin pairs was examined. The ERP component of interest was the slow wave (SW). Considerable overlap in the source of genetic influence was found, with a common genetic factor accounting for 37-45% of SW variance irrespective of condition. However, 3-8% of SW variance in the distracting condition was influenced by an independent genetic source. These results suggest that neural responses to irrelevant and distracting information, which may disrupt working-memory performance, differ in a fundamental way from perceptual and memory-based processing in a working-memory task. Furthermore, the results are consistent with the view that cognition is a complex genetic trait influenced by numerous genes of small influence.
Abstract:
Load-induced extravascular fluid flow has been postulated to play a role in mechanotransduction of physiological loads at the cellular level. Furthermore, the displaced fluid serves as a carrier for metabolites, nutrients, mineral precursors and osteotropic agents important for cellular activity. We hypothesise that load-induced fluid flow enhances the transport of these key substances, thus helping to regulate cellular activity associated with processes of functional adaptation and remodelling. To test this hypothesis, molecular tracer methods developed previously by our group were applied in vivo to observe and quantify the effects of load-induced fluid flow under four-point-bending loads. Preterminal tracer transport studies were carried out on 24 skeletally mature Sprague Dawley rats. Mechanical loading enhanced the transport of both small- and larger-molecular-mass tracers within the bony tissue of the tibial mid-diaphysis. Mechanical loading showed a highly significant effect on the number of periosteocytic spaces exhibiting tracer within the cross section of each bone. For all loading rates studied, the concentration of Procion Red tracer was consistently higher in the tibia subjected to pure bending loads than in the unloaded, contralateral tibia. Furthermore, the enhancement of transport was highly site-specific. In bones subjected to pure bending loads, a greater number of periosteocytic spaces exhibited the presence of tracer in the tension band of the cross section than in the compression band; this may reflect the higher strains induced in the tension band compared with the compression band within the mid-diaphysis of the rat tibia. Regardless of loading mode, the mean difference between the loaded side and the unloaded contralateral control side decreased with increasing loading frequency. Whether this reflects the length of exposure to the tracer or specific frequency effects cannot be determined by this set of experiments. These in vivo experimental results corroborate those of previous ex vivo and in vitro studies. Strain-related differences in tracer distribution provide support for the hypothesis that load-induced fluid flow plays a regulatory role in processes associated with functional adaptation.
Abstract:
The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.
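As a concrete illustration of the exact predictive analysis via matrix operations mentioned in this abstract, the following is a minimal sketch of Gaussian process regression, assuming a squared-exponential covariance; the kernel choice, length scale and noise variance are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch of exact GP regression via matrix operations, assuming a
# squared-exponential covariance; the kernel, length scale and noise variance
# are illustrative choices, not details taken from the paper.
import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X, y, X_star, noise_var=0.1):
    """Exact GP predictive mean and variance using a Cholesky factorisation."""
    K = sq_exp_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)                                  # K = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))        # alpha = K^{-1} y
    K_s = sq_exp_kernel(X, X_star)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(sq_exp_kernel(X_star, X_star)) - (v ** 2).sum(0)
    return mean, var

# Example: noisy observations of a sine function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
X_star = np.linspace(-3, 3, 50)[:, None]
mu, var = gp_predict(X, y, X_star)
```

The predictive mean and variance follow directly from a single Cholesky factor of the training covariance matrix, which is the only linear algebra the exact treatment requires.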
Abstract:
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in Section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems.
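The change of viewpoint described in this abstract can be summarised in a few lines; the notation below (feature map φ, prior covariance Σ_p, noise variance σ_n²) is the standard one and serves only as a reminder of the argument, not as a quotation from the tutorial.

```latex
% Weight-space view: a linear model in features phi(x) with a Gaussian prior on w.
% Function-space view: the induced f is a Gaussian process with covariance k.
\begin{align}
  w \sim \mathcal{N}(0,\Sigma_p), \quad f(x) = \phi(x)^\top w
  \;\Longrightarrow\;
  \operatorname{cov}\big(f(x),f(x')\big) &= \phi(x)^\top \Sigma_p\,\phi(x') =: k(x,x'), \\
  \text{so } f \sim \mathcal{GP}(0,k); \text{ with } y = f(X)+\varepsilon,\;
  \varepsilon\sim\mathcal{N}(0,\sigma_n^2 I):\quad
  f_* \mid X,y,x_* &\sim \mathcal{N}\big(k_*^\top(K+\sigma_n^2 I)^{-1}y,\;
  k(x_*,x_*)-k_*^\top(K+\sigma_n^2 I)^{-1}k_*\big).
\end{align}
```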
Abstract:
We consider the problem of assigning an input vector x to one of m classes by predicting P(c|x) for c = 1, ..., m. For a two-class problem, the probability of class 1 given x is estimated by s(y(x)), where s(y) = 1/(1 + e^{-y}). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace's approximation. The method is generalized to multi-class problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.
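A compact sketch of the two-class case is given below: Newton iterations locate the mode of the posterior over the latent function y (written f in the code), a Gaussian (Laplace) approximation is formed at that mode, and the logistic function is averaged over the resulting predictive Gaussian. The kernel matrices are assumed to be precomputed (for example with the squared-exponential kernel from the regression sketch above), and the names and constants are illustrative rather than taken from the paper.

```python
# A minimal sketch of binary GP classification with the Laplace approximation,
# assuming a logistic likelihood and labels t in {0, 1}; kernel matrices are
# precomputed. Names and constants are illustrative, not taken from the paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_gp_classify(K, t, K_s, k_ss, n_iter=20):
    """Approximate P(class 1 | x*) at test points.

    K    : (n, n) covariance of the training inputs
    t    : (n,) labels in {0, 1}
    K_s  : (n, m) covariance between training and test inputs
    k_ss : (m,) prior variances at the test inputs
    """
    n = len(t)
    f = np.zeros(n)
    for _ in range(n_iter):
        # Newton step for the mode of the posterior over the latent function.
        pi = sigmoid(f)
        W = pi * (1.0 - pi)
        sqrt_W = np.sqrt(W)
        B = np.eye(n) + sqrt_W[:, None] * K * sqrt_W[None, :]
        L = np.linalg.cholesky(B)
        b = W * f + (t - pi)
        c = np.linalg.solve(L, sqrt_W * (K @ b))
        a = b - sqrt_W * np.linalg.solve(L.T, c)
        f = K @ a
    # Gaussian approximation at the mode, then the predictive latent Gaussian.
    pi = sigmoid(f)
    sqrt_W = np.sqrt(pi * (1.0 - pi))
    L = np.linalg.cholesky(np.eye(n) + sqrt_W[:, None] * K * sqrt_W[None, :])
    mean_star = K_s.T @ (t - pi)
    v = np.linalg.solve(L, sqrt_W[:, None] * K_s)
    var_star = np.maximum(k_ss - (v ** 2).sum(0), 0.0)
    # Average the logistic function over the predictive Gaussian (simple Monte Carlo).
    z = np.random.default_rng(0).standard_normal((2000, 1))
    return sigmoid(mean_star + np.sqrt(var_star) * z).mean(0)
```

The multi-class (m > 2) softmax case follows the same pattern with a structured W matrix; it is omitted here for brevity.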
Abstract:
Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian processes. The basic bounds are formulated for a fixed training set. Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel, yielding asymptotically tight results. The results are compared with numerical experiments.
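The paper's convexity-based bounds are not reproduced here; for orientation only, the Bayesian prediction error such bounds concern is, for a fixed training set X, the posterior variance at a test input, and the associated generalisation error averages it over test inputs and training sets:

```latex
% Pointwise Bayesian prediction error for GP regression (the posterior variance),
% and the generalisation error obtained by averaging over inputs and data sets.
\begin{align}
  \varepsilon(x_* \mid X) &= k(x_*,x_*) - k_*^\top (K + \sigma_n^2 I)^{-1} k_*, \\
  \varepsilon(n) &= \mathbb{E}_{X}\!\left[\int \varepsilon(x_* \mid X)\,p(x_*)\,dx_*\right].
\end{align}
```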
Abstract:
We discuss the application of TAP mean-field methods, known from the statistical mechanics of disordered systems, to Bayesian classification with Gaussian processes. In contrast to previous applications, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
Abstract:
We consider the problem of assigning an input vector to one of m classes by predicting P(c|x) for c = 1, ..., m. For a two-class problem, the probability of class one given x is estimated by s(y(x)), where s(y) = 1/(1 + e^{-y}). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace's approximation. The method is generalized to multiclass problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.