120 results for categorization IT PFC computational neuroscience model HMAX
Abstract:
It has been hypothesized that the brain categorizes stressors and utilizes neural response pathways that vary in accordance with the assigned category. If this is true, stressors should elicit patterns of neuronal activation within the brain that are category-specific. Data from previous immediate-early gene expression mapping studies have hinted that this is the case, but interstudy differences in methodology render conclusions tenuous. In the present study, immunolabelling for the expression of c-fos was used as a marker of neuronal activity elicited in the rat brain by haemorrhage, immune challenge, noise, restraint and forced swim. All stressors elicited c-fos expression in 25-30% of hypothalamic paraventricular nucleus corticotrophin-releasing-factor cells, suggesting that these stimuli were of comparable strength, at least with regard to their ability to activate the hypothalamic-pituitary-adrenal axis. In the amygdala, haemorrhage and immune challenge both elicited c-fos expression in a large number of neurons in the central nucleus of the amygdala, whereas noise, restraint and forced swim primarily elicited recruitment of cells within the medial nucleus of the amygdala. In the medulla, all stressors recruited similar numbers of noradrenergic (A1 and A2) and adrenergic (C1 and C2) cells. However, haemorrhage and immune challenge elicited c-fos expression in subpopulations of A1 and A2 noradrenergic cells that were significantly more rostral than those recruited by noise, restraint or forced swim. The present data support the suggestion that the brain recognizes at least two major categories of stressor, which we have referred to as 'physical' and 'psychological'. Moreover, the present data suggest that the neural activation footprint that is left in the brain by stressors can be used to determine the category to which they have been assigned by the brain.
Abstract:
1. The past 15 years have seen the emergence of a new field of neuroscience research based primarily on how the immune system and the central nervous system can interact. A notable example of this interaction occurs when peripheral inflammation, infection or tissue injury activates the hypothalamic-pituitary-adrenal (HPA) axis. 2. During such assaults, immune cells release the pro-inflammatory cytokines interleukin (IL)-1, IL-6 and tumour necrosis factor-alpha into the general circulation. 3. These cytokines are believed to act as mediators for HPA axis activation. However, physical limitations of cytokines impede their movement across the blood-brain barrier and, consequently, it has been unclear as to precisely how and where IL-1beta signals cross into the brain to trigger HPA axis activation. 4. Evidence from recent anatomical and functional studies suggests two neuronal networks may be involved in triggering HPA axis activity in response to circulating cytokines. These are catecholamine cells of the medulla oblongata and the circumventricular organs (CVO). 5. The present paper examines the role of CVO in generating HPA axis responses to pro-inflammatory cytokines and culminates with a proposed model based on cytokine signalling primarily involving the area postrema and catecholamine cells in the ventrolateral and dorsal medulla.
Abstract:
This paper develops a theory that firms seek out new country markets on the basis of expected commercial returns. These expectations depend on judgements about the attractiveness of the market and the firm's competitive position in it, which in turn are influenced by informants. It is the number and strengths of these informants that will underlie the probability of a country being identified and assessed as a new market by any firm.
Abstract:
Ligaments undergo finite strain displaying hyperelastic behaviour as the initially tangled fibrils present straighten out, combined with viscoelastic behaviour (strain rate sensitivity). In the present study the anterior cruciate ligament of the human knee joint is modelled in three dimensions to gain an understanding of the stress distribution over the ligament due to motion imposed on the ends, determined from experimental studies. A three-dimensional, finite strain material model of ligaments has recently been proposed by Pioletti in Ref. [2]. It is attractive as it separates out elastic stress from that due to the present strain rate and that due to the past history of deformation. However, it treats the ligament as isotropic and incompressible. While the second assumption is reasonable, the first is clearly untrue. In the present study an alternative model of the elastic behaviour due to Bonet and Burton (Ref. [4]) is generalized. Bonet and Burton consider finite strain with constant moduli for the fibres and for the matrix of a transversely isotropic composite. In the present work, the fibre modulus is first made to increase exponentially from zero with an invariant that provides a measure of the stretch in the fibre direction. At 12% strain in the fibre direction, a new reference state is then adopted, after which the material modulus is made constant, as in Bonet and Burton's model. The strain rate dependence can be added, either using Pioletti's isotropic approximation, or by making the effect depend on the strain rate in the fibre direction only. A solid model of a ligament is constructed, based on experimentally measured sections, and the deformation predicted using explicit integration in time. This approach simplifies the coding of the material model, but has a limitation due to the detrimental effect on stability of integration of the substantial damping implied by the nonlinear dependence of stress on strain rate.
At present, an artificially high density is being used to provide stability, while the dynamics are being removed from the solution using artificial viscosity. The result is a quasi-static solution incorporating the effect of strain rate. Alternative approaches to material modelling and integration are discussed that may result in a better model.
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data. Their accuracy is therefore questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
Abstract:
When the data consist of certain attributes measured on the same set of items in different situations, they would be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set where various attributes were measured on genotypes grown in several environments.
Abstract:
A generalised model for the prediction of single char particle gasification dynamics, accounting for multi-component mass transfer with chemical reaction, heat transfer, as well as structure evolution and peripheral fragmentation is developed in this paper. Maxwell-Stefan analysis is uniquely applied to both micro and macropores within the framework of the dusty-gas model to account for the bidisperse nature of the char, which differs significantly from the conventional models that are based on a single pore type. The peripheral fragmentation and random-pore correlation incorporated into the model enable prediction of structure/reactivity relationships. The occurrence of chemical reaction within the boundary layer reported by Biggs and Agarwal (Chem. Eng. Sci. 52 (1997) 941) has been confirmed through an analysis of CO/CO2 product ratio obtained from model simulations. However, it is also quantitatively observed that the significance of boundary layer reaction reduces notably with the reduction of oxygen concentration in the flue gas, operational pressure and film thickness. Computations have also shown that in the presence of diffusional gradients peripheral fragmentation occurs in the early stages on the surface, after which conversion quickens significantly due to small particle size. Results of the early commencement of peripheral fragmentation at relatively low overall conversion obtained from a large number of simulations agree well with experimental observations reported by Feng and Bhatia (Energy & Fuels 14 (2000) 297). Comprehensive analysis of simulation results is carried out based on well accepted physical principles to rationalise model prediction. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
Abstract:
Large (>1600 μm), ingestively masticated particles of bermuda grass (Cynodon dactylon L. Pers.) leaf and stem labelled with Yb-169 and Ce-144 respectively were inserted into the rumen digesta raft of heifers grazing bermuda grass. The concentration of markers in digesta sampled from the raft and ventral rumen were monitored at regular intervals over approximately 144 h. The data from the two sampling sites were simultaneously fitted to two pool (raft and ventral rumen-reticulum) models with either reversible or sequential flow between the two pools. The sequential flow model fitted the data equally as well as the reversible flow model but the reversible flow model was used because of its greater application. The reversible flow model, hereafter called the raft model, had the following features: a relatively slow age-dependent transfer rate from the raft (means for a gamma-2 distributed rate parameter for leaf 0.0740 v. stem 0.0478 h(-1)), a very slow first order reversible flow from the ventral rumen to the raft (mean for leaf and stem 0.010 h(-1)) and a very rapid first order exit from the ventral rumen (mean of leaf and stem 0.44 h(-1)). The raft was calculated to occupy approximately 0.82 of the total rumen DM in the raft and ventral rumen pools. Fitting a sequential two pool model or a single exponential model individually to values from each of the two sampling sites yielded similar parameter values for both sites and faster rate parameters for leaf as compared with stem, in agreement with the raft model. These results were interpreted as indicating that the raft forms a large relatively inert pool within the rumen. Particles generated within the raft have difficulty escaping but once into the ventral rumen pool they escape quickly with a low probability of return to the raft.
It was concluded that the raft model gave a good interpretation of the data and emphasized escape from and movement within the raft as important components of the residence time of leaf and stem particles within the rumen digesta of cattle.
Abstract:
This paper proposes an alternative framework for examining the international macroeconomic impact of domestic monetary and fiscal policies and focuses on the distinction between national spending and national production and the reactive behavior of foreign investors to changing external account balances. It demonstrates that under a floating exchange rate regime, monetary and fiscal policies can affect aggregate expenditure and output quite differently, with important implications for the behavior of the exchange rate, the current account balance, and national income in the short run, as well as the economy's price level in the long run. In particular, this paper predicts that expansionary monetary and fiscal policies tend to depreciate the currency and only temporarily raise gross domestic product and the current account surplus, although permanently raise the domestic price level. This is a revised version of a paper presented at the Forty-Ninth International Atlantic Economic Conference, March 14–21, 2000, Munich, Germany.
Abstract:
We have established the first example of an orthotopic xenograft model of human nonseminomatous germ cell tumour (NSGCT). This reproducible model exhibits many clinically relevant features including metastases to the retroperitoneal lymph nodes and lungs, making it an ideal tool for research into the development and progression of testicular germ cell tumours.
Abstract:
The purpose of this study was to develop a newborn piglet model of hypoxia/ischaemia which would better emulate the clinical situation in the asphyxiated human neonate and produce a consistent degree of histopathological injury following the insult. One-day-old piglets (n = 18) were anaesthetised with a mixture of propofol (10 mg/kg/h) and alfentanil (55.5 μg/kg/h) i.v. The piglets were intubated and ventilated. Physiological variables were monitored continuously. Hypoxia was induced by decreasing the inspired oxygen (FiO2) to 3-4% and adjusting FiO2 to maintain the cerebral function monitor peak amplitude at ≤5 μV. The duration of the mild insult was 20 min, while the severe insult was 30 min, which included 10 min where the blood pressure was allowed to fall below 70% of baseline. Control piglets (n = 4 of 18) were subjected to the same protocol except for the hypoxic/ischaemic insult. The piglets were allowed to recover from anaesthesia and were then euthanased 72 h after the insult. The brains were perfusion-fixed, removed and embedded in paraffin. Coronal sections were stained by haematoxylin/eosin. A blinded observer examined the frontal and parietal cortex, hippocampus, basal ganglia, thalamus and cerebellum for the degree of damage. The total mean histology score for the five areas of the brain for the severe insult was 15.6 ± 4.4 (mean ± S.D., n = 7), whereas no damage was seen in either the mild insult (n = 4) or control groups. This 'severe damage' model produces a consistent level of damage and will prove useful for examining potential neuroprotective therapies in the neonatal brain. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models which explicitly incorporate a component of sales due to replacement assume there is an age distribution for replacements of existing units which remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will result in changes in the mean replacement age of units. This paper develops a model for such time-varying replacement behaviour and empirically tests it in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time-varying model significantly outperformed previous models both in terms of fitting and forecasting the sales data. Copyright (C) 2001 John Wiley & Sons, Ltd.
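The accounting at the core of such a replacement-sales model can be sketched as a convolution of past new-unit sales with an age-at-replacement distribution whose mean is allowed to drift with real price and a linear temporal trend. The sketch below is a hypothetical illustration, not the paper's specification: the function names, the discretised-normal form of the age distribution and the linear drift in its mean are all our own assumptions.

```python
import math

def age_distribution(mean_age, sd, max_age):
    # Discretised normal over integer replacement ages 1..max_age,
    # renormalised so the weights sum to one.
    w = [math.exp(-0.5 * ((a - mean_age) / sd) ** 2) for a in range(1, max_age + 1)]
    s = sum(w)
    return [x / s for x in w]

def replacement_sales(past_sales, mean_age, sd=2.0):
    # past_sales[a-1] is the number of units sold a years ago; replacement
    # demand today is those cohorts weighted by their replacement probability.
    f = age_distribution(mean_age, sd, len(past_sales))
    return sum(fa * sa for fa, sa in zip(f, past_sales))

def mean_age_t(base_age, price_coef, real_price_change, trend_coef, t):
    # Hypothetical time-varying mean replacement age: rising real prices and
    # a linear trend both push the average replacement age upward.
    return base_age + price_coef * real_price_change + trend_coef * t
```

With a flat sales history the weights integrate out and replacement sales simply reproduce the cohort size, while with growing sales a higher mean replacement age pulls demand toward the older, smaller cohorts and lowers current replacement sales.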
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. Naive implementation of the procedure can lead to computationally inefficient results. To reduce the computational cost a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
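As a rough illustration of the univariate case that the multivariate method generalizes, the sketch below fits a two-component normal mixture to binned counts. The E-step is exact for binned data (every observation in a bin shares a responsibility computed from each component's probability mass over that bin), but the M-step uses a bin-midpoint approximation instead of the exact within-bin moments of McLachlan and Jones's procedure, so this is a simplified, hypothetical variant with invented function names.

```python
import math

def norm_cdf(x, mu, sigma):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def em_binned_mixture(edges, counts, n_iter=200):
    """Fit a two-component normal mixture to binned univariate counts."""
    mids = [(a + b) / 2.0 for a, b in zip(edges, edges[1:])]
    n = float(sum(counts))
    # Crude initialisation from the lower- and upper-quartile bins.
    pi = [0.5, 0.5]
    mu = [mids[len(mids) // 4], mids[3 * len(mids) // 4]]
    sigma = [(edges[-1] - edges[0]) / 4.0] * 2
    for _ in range(n_iter):
        # E-step: responsibility of each component for each bin, from the
        # component's probability mass over that bin.
        resp = []
        for a, b in zip(edges, edges[1:]):
            p = [max(pi[k] * (norm_cdf(b, mu[k], sigma[k])
                              - norm_cdf(a, mu[k], sigma[k])), 1e-300)
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: bin midpoints stand in for the within-bin integrals.
        for k in range(2):
            w = [c * r[k] for c, r in zip(counts, resp)]
            wk = sum(w)
            pi[k] = wk / n
            mu[k] = sum(wj * m for wj, m in zip(w, mids)) / wk
            var = sum(wj * (m - mu[k]) ** 2 for wj, m in zip(w, mids)) / wk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return pi, mu, sigma
```

Even in this one-dimensional toy, each iteration evaluates a probability mass per bin per component; in the multivariate extension those masses become multidimensional integrals over every bin, which is exactly the computational cost the paper's numerical techniques aim to reduce.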
Abstract:
Motivation: This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets.
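The gene-ranking step can be illustrated with a toy version: for each gene, compare the log likelihood of a single normal fit against a two-component normal mixture fitted by EM, and rank genes by the resulting likelihood ratio statistic. Note the substitution: EMMIX-GENE fits mixtures of t distributions, whereas plain normal components are used here to keep the sketch short, and all function names are our own.

```python
import math

def loglik_single_normal(x):
    # Maximised log likelihood of a single normal fit.
    n = len(x)
    mu = sum(x) / n
    var = max(sum((v - mu) ** 2 for v in x) / n, 1e-9)
    return -0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)

def loglik_two_component(x, n_iter=100):
    # Log likelihood of a two-component normal mixture fitted by EM,
    # initialised from the lower and upper quartiles of the data.
    xs = sorted(x)
    n = len(xs)
    pi = [0.5, 0.5]
    mu = [xs[n // 4], xs[3 * n // 4]]
    sd = [max((xs[-1] - xs[0]) / 4.0, 1e-3)] * 2
    ll = 0.0
    for _ in range(n_iter):
        ll = 0.0
        resp = []
        for v in xs:
            p = [pi[k] / (sd[k] * math.sqrt(2.0 * math.pi))
                 * math.exp(-0.5 * ((v - mu[k]) / sd[k]) ** 2) for k in range(2)]
            s = max(p[0] + p[1], 1e-300)
            ll += math.log(s)
            resp.append([p[0] / s, p[1] / s])
        for k in range(2):
            wk = sum(r[k] for r in resp)
            pi[k] = wk / n
            mu[k] = sum(r[k] * v for r, v in zip(resp, xs)) / wk
            var = sum(r[k] * (v - mu[k]) ** 2 for r, v in zip(resp, xs)) / wk
            sd[k] = math.sqrt(max(var, 1e-4))  # floor guards against collapse
    return ll

def lr_statistic(x):
    # -2 log Lambda for one versus two components; a larger value suggests
    # the gene separates the tissues into clusters.
    return 2.0 * (loglik_two_component(x) - loglik_single_normal(x))
```

A gene whose expression splits the tissues into two groups yields a much larger statistic than a gene with unimodal noise, which is the basis for thresholding and retaining a relevant subset of genes.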