981 results for Multirate signal model
Abstract:
Neuropeptide Y (NPY) is an abundant neurotransmitter in the brain and sympathetic nervous system (SNS). Hypothalamic NPY is known to be a key player in food intake and energy expenditure. NPY's role in cardiovascular regulation has also been demonstrated. In humans, a Leucine 7 to Proline 7 single nucleotide polymorphism (p.L7P) in the signal peptide of the NPY gene has been associated with traits of metabolic syndrome. The p.L7P subjects also show increased stress-related release of NPY, which suggests that more NPY is produced and released from the SNS. The main objective of this study was to create a novel mouse model with noradrenergic cell-targeted overexpression of NPY, and to characterize the metabolic and vascular phenotype of this model. The mouse model was named the OE-NPYDBH mouse. Overexpression of NPY in the SNS and brain noradrenergic neurons led to increased adiposity without significant weight gain or increased food intake. The mice showed lipid accumulation in the liver at a young age, which together with adiposity led to impaired glucose tolerance and hyperinsulinemia with age. The mice displayed stress-related increases in mean arterial blood pressure, increased plasma levels of catecholamines, and enhanced SNS activity as measured by GDP binding activity in brown adipose tissue mitochondria. Sexual dimorphism in the NPY secretion pattern in response to stress was also seen. In an experimental model of vascular injury, the OE-NPYDBH mice developed more pronounced neointima formation compared with wildtype controls. These results, together with the clinical data, indicate that NPY in noradrenergic cells plays an important role in the pathogenesis of metabolic syndrome and related diseases. Furthermore, new insights into the role of extrahypothalamic NPY in the process have been obtained. The OE-NPYDBH model provides an important tool for further stress and metabolic syndrome-related studies.
Abstract:
The objective of this study was to mathematically model and simulate the dynamic behavior of an auger-type fertilizer applicator (AFA) in order to enable variable-rate application (VRA) and reduce the coefficient of variation (CV) of the application, proposing an angular speed controller θ' for the motor drive shaft. The model input was θ' and the response was the fertilizer mass flow, as a function of the construction, fertilizer density, fill factor and end position of the auger. The model was used to simulate an open-loop control system with an electric drive for the AFA using an armature voltage (V_A) controller. By introducing a sinusoidal excitation signal in V_A, with optimized amplitude and phase delay, and varying θ' during an operation cycle, the CV was reduced from 29.8% (constant V_A) to 11.4%. The development of the mathematical model was a first step towards the introduction of electric drive systems and closed-loop control for the implementation of AFA with low CV in VRA.
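As a hedged illustration of the quantity being optimized (not the study's actual model or data), the sketch below computes the CV of a toy pulsating mass flow and shows how a counter-phase sinusoidal excitation of the drive can reduce it; the signal shape, frequency and amplitudes are invented for illustration.

```python
import math
import statistics

def coefficient_of_variation(samples):
    """CV (%) of the applied fertilizer mass flow samples."""
    return 100.0 * statistics.pstdev(samples) / statistics.fmean(samples)

# Toy pulsating mass flow over one operation cycle (one pulse per screw turn).
t = [i / 1000 for i in range(1000)]
base = [1.0 + 0.45 * math.sin(2 * math.pi * 5 * ti) for ti in t]  # constant V_A

# Counter-phase sinusoidal excitation; amplitude and phase delay would be
# optimized against the dynamic model (the 0.40 here is illustrative).
compensated = [f - 0.40 * math.sin(2 * math.pi * 5 * ti)
               for f, ti in zip(base, t)]

print(round(coefficient_of_variation(base), 1))         # high CV
print(round(coefficient_of_variation(compensated), 1))  # strongly reduced CV
```

The residual CV depends on how well the excitation matches the pulsation; imperfect amplitude or phase leaves the kind of residual variation (11.4% in the study) that motivates closed-loop control.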
Abstract:
One of the main problems related to the transport and manipulation of multiphase fluids concerns the existence of characteristic flow patterns and their strong influence on important operation parameters. A good example of this occurs in gas-liquid chemical reactors, in which maximum efficiencies can be achieved by maintaining a finely dispersed bubbly flow to maximize the total interfacial area. Thus, the ability to automatically detect flow patterns is of crucial importance, especially for the adequate operation of multiphase systems. This work describes the application of a neural model to process the signals delivered by a direct imaging probe and produce a diagnostic of the corresponding flow pattern. The neural model consists of six independent neural modules, each trained to detect one of the main horizontal flow patterns, and a final winner-take-all layer responsible for resolving cases in which two or more patterns are simultaneously detected. Experimental signals representing different bubbly, intermittent, annular and stratified flow patterns were used to validate the neural model.
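A minimal sketch of the decision stage described above; the module scores and the six pattern names are illustrative assumptions, not the paper's trained networks:

```python
def winner_take_all(module_scores):
    """Return the flow pattern whose detector module responds strongest.

    `module_scores` maps pattern name -> detector output in [0, 1]; the
    winner-take-all layer resolves cases where several modules fire
    simultaneously by keeping only the strongest response.
    """
    return max(module_scores, key=module_scores.get)

# Hypothetical outputs of the six per-pattern modules for one probe signal.
scores = {"dispersed bubbly": 0.12, "plug": 0.85, "slug": 0.78,
          "wavy stratified": 0.05, "smooth stratified": 0.02, "annular": 0.10}
print(winner_take_all(scores))  # -> plug
```

Here both intermittent-type modules ("plug" and "slug") fire at once, and the last layer keeps only the strongest, mirroring the conflict-resolution role described in the abstract.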
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field: digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
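The firing semantics described above can be sketched in a few lines; this is a generic toy interpretation of dataflow (plain Python, not RVC-CAL), with invented node functions and token rates:

```python
from collections import deque

class Node:
    """A dataflow actor: fires when enough tokens sit on its input queues."""
    def __init__(self, fn, inputs, output, rates):
        self.fn, self.inputs, self.output, self.rates = fn, inputs, output, rates

    def can_fire(self):
        return all(len(q) >= r for q, r in zip(self.inputs, self.rates))

    def fire(self):
        # Consume `rate` tokens from each input queue, produce one output token.
        args = [[q.popleft() for _ in range(r)]
                for q, r in zip(self.inputs, self.rates)]
        self.output.append(self.fn(*args))

# A two-node pipeline: a doubler feeding a node that sums pairs of tokens.
a, b, out = deque([1, 2, 3, 4]), deque(), deque()
double = Node(lambda xs: xs[0] * 2, [a], b, [1])
pair_sum = Node(lambda xs: xs[0] + xs[1], [b], out, [2])

while double.can_fire() or pair_sum.can_fire():
    for node in (double, pair_sum):
        if node.can_fire():
            node.fire()

print(list(out))  # [6, 14]  (2+4, 6+8)
```

Because the only communication is through the queues, each `fire` could run on any core as soon as its inputs are available, which is exactly the explicit parallelism the paragraph describes.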
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
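As a hedged toy of the underlying idea, a static schedule can be found by exhaustively exploring queue states, in the spirit of posing scheduling as a model-checking problem; this drastically simplified stand-in uses invented actors and a rate notation that is not RVC-CAL:

```python
# States are queue fill levels; a firing is enabled when it would not drive
# any queue negative; a static schedule is a non-empty firing sequence that
# returns the graph to its initial (empty) state.
def find_static_schedule(rates, max_depth=16):
    """rates: {actor: {queue: +tokens produced / -tokens consumed per firing}}."""
    def fire(state, actor):
        new = dict(state)
        for q, d in rates[actor].items():
            new[q] = new.get(q, 0) + d
            if new[q] < 0:
                return None  # firing rule not satisfied in this state
        return new

    stack = [({}, [])]  # (queue state, firing sequence so far)
    while stack:
        state, seq = stack.pop()
        if seq and all(v == 0 for v in state.values()):
            return seq  # back to the initial state: one complete iteration
        if len(seq) < max_depth:
            for actor in rates:
                nxt = fire(state, actor)
                if nxt is not None:
                    stack.append((nxt, seq + [actor]))
    return None

# Producer P emits 2 tokens per firing, consumer C takes 3: a balanced
# schedule needs 3 firings of P and 2 of C (3*2 == 2*3 tokens).
schedule = find_static_schedule({"P": {"q": +2}, "C": {"q": -3}})
print(schedule)  # e.g. ['P', 'P', 'C', 'P', 'C']
```

A real model checker adds guards, actor state, and avoids state-space explosion with exactly the kind of model minimization the thesis discusses; the bounded depth here is the toy's crude substitute.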
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
Abstract:
Increased heart rate variability (HRV) and high-frequency content of the terminal region of the ventricular activation of signal-averaged ECG (SAECG) have been reported in athletes. The present study investigates HRV and SAECG parameters as predictors of maximal aerobic power (VO2max) in athletes. HRV, SAECG and VO2max were determined in 18 high-performance long-distance runners (25 ± 6 years; 17 males) 24 h after a training session. Clinical visits, ECG and VO2max determination were scheduled for all athletes during the training period. A group of 18 untrained healthy volunteers matched for age, gender, and body surface area was included as controls. SAECG was acquired in the resting supine position for 15 min and processed to extract the average RR interval (Mean-RR) and the root mean square of successive differences (RMSSD) of consecutive normal RR intervals. SAECG variables analyzed in the vector magnitude with 40-250 Hz band-pass bi-directional filtering were: total and 40-µV terminal (LAS40) duration of ventricular activation, and RMS voltage of the total (RMST) and of the 40-ms terminal region of ventricular activation. Linear and multivariate stepwise logistic regressions oriented by inter-group comparisons were adjusted on significant variables in order to predict VO2max, with P < 0.05 considered significant. VO2max correlated significantly (P < 0.05) with RMST (r = 0.77), Mean-RR (r = 0.62), RMSSD (r = 0.47), and LAS40 (r = -0.39). RMST was the independent predictor of VO2max. In athletes, HRV and high-frequency components of the SAECG correlate with VO2max, and the high-frequency content of SAECG is an independent predictor of VO2max.
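For reference, Mean-RR and RMSSD are simple time-domain statistics; the sketch below computes them on an invented RR series, not the study's recordings:

```python
import math

def mean_rr(rr_ms):
    """Mean of the normal RR intervals (ms)."""
    return sum(rr_ms) / len(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    a time-domain HRV index reflecting beat-to-beat variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative resting RR series (ms); real values come from 15 min of SAECG.
rr = [1000, 1040, 980, 1060, 1000]
print(round(mean_rr(rr)))    # -> 1016
print(round(rmssd(rr), 1))   # -> 61.6
```

Longer Mean-RR (slower resting heart rate) and higher RMSSD are the vagally mediated markers that correlated with VO2max in the athletes.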
Abstract:
Animal models of intervertebral disc degeneration play an important role in clarifying the physiopathological mechanisms and testing novel therapeutic strategies. The objective of the present study is to describe a simple animal model of disc degeneration in Wistar rats to be used for research studies. Disc degeneration was confirmed and classified by radiography, magnetic resonance and histological evaluation. Adult male Wistar rats were anesthetized and submitted to percutaneous disc puncture with a 20-gauge needle at levels 6-7 and 8-9 of the coccygeal vertebrae. The needle was inserted into the discs guided by fluoroscopy and its tip was positioned crossing the nucleus pulposus up to the contralateral annulus fibrosus, rotated 360° twice, and held for 30 s. To grade the severity of intervertebral disc degeneration, we measured the intervertebral disc height on radiographic images 7 and 30 days after the injury, and the signal intensity on T2-weighted magnetic resonance imaging. Histological analysis was performed with hematoxylin-eosin staining, and collagen fiber orientation was assessed using picrosirius red staining and polarized light microscopy. Imaging and histological score analyses revealed significant disc degeneration both 7 and 30 days after the lesion, without deaths or systemic complications. Interobserver histological evaluation showed significant agreement. There was a significant positive correlation between histological score and intervertebral disc height 7 and 30 days after the lesion. We conclude that the tail disc puncture method using Wistar rats is a simple, cost-effective and reproducible model for inducing disc degeneration.
Abstract:
Sublethal ischemic preconditioning (IPC) is a powerful inducer of ischemic brain tolerance. However, its underlying mechanisms are still not well understood. In this study, we chose four different IPC paradigms, namely 5 min (5 min duration), 5×5 min (5 min duration, 2 episodes, 15-min interval), 5×5×5 min (5 min duration, 3 episodes, 15-min intervals), and 15 min (15 min duration), and demonstrated that three episodes of 5 min IPC activated autophagy to the greatest extent 24 h after IPC, as evidenced by Beclin expression and LC3-I/II conversion. Autophagic activation was mediated by the tuberous sclerosis type 1 (TSC1)-mTOR signaling pathway, as IPC increased TSC1 but decreased mTOR phosphorylation. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) and hematoxylin and eosin staining confirmed that IPC protected against cerebral ischemia/reperfusion (I/R) injury. Critically, 3-methyladenine, an inhibitor of autophagy, abolished the neuroprotection of IPC; by contrast, rapamycin, an autophagy inducer, potentiated it. Cleaved caspase-3 expression, neurological scores, and infarct volumes in the different groups further confirmed the protection of IPC against I/R injury. Taken together, our data indicate that autophagy activation might underlie the protection of IPC against ischemic injury by inhibiting apoptosis.
Abstract:
The Feedback-Related Negativity (FRN) is thought to reflect the dopaminergic prediction error signal from the subcortical areas to the ACC (i.e., a bottom-up signal). Two studies were conducted in order to test a new model of FRN generation, which includes direct modulating influences of medial PFC (i.e., top-down signals) on the ACC at the time of the FRN. Study 1 examined the effects of one's sense of control (top-down) and of informative cues (bottom-up) on the FRN measures. In Study 2, sense of control and instruction-based (top-down) and probability-based (bottom-up) expectations were manipulated to test the proposed model. The results suggest that any influences of medial PFC on the activity of the ACC that occur in the context of incentive tasks are not direct. The FRN was shown to be sensitive to salient stimulus characteristics. The results of this dissertation partially support the reinforcement learning theory, in that the FRN is a marker for the prediction error signal from subcortical areas. However, the pattern of results outlined here suggests that prediction errors are based on salient stimulus characteristics and are not reward specific. A second goal of this dissertation was to examine whether ACC activity, measured through the FRN, is altered in individuals at risk for problem-gambling behaviour (PG). Individuals in this group were more sensitive to the valence of the outcome in a gambling task compared with individuals not at risk, suggesting that gambling contexts increase the sensitivity of the reward system to the valence of the outcome in individuals at risk for PG. Furthermore, at-risk participants showed an increased sensitivity to reward characteristics and a decreased response to loss outcomes. This contrasts with those not at risk, whose FRNs were sensitive to losses.
As the results did not replicate previous research showing attenuated FRNs in pathological gamblers, it is likely that the size and timing of the FRN do not change gradually with increasing risk of maladaptive behaviour. Instead, changes in ACC activity reflected by the FRN can in general be observed only after behaviour becomes clinically maladaptive, or through comparison between different types of gain/loss outcomes.
Abstract:
Following the encounter with an antigen (Ag) presented at the surface of antigen-presenting cells (APCs), naïve T lymphocytes bearing a T-cell receptor (TCR) specific for the Ag proliferate and differentiate into effector T cells. After elimination of the Ag, the majority of effector T cells die by apoptosis, while the remainder differentiate into memory T cells that protect the organism over the long term. The mechanisms that drive the differentiation of effector T cells into memory T cells are still unknown. To understand how CD8+ memory T cells are generated from effector T cells, we hypothesized that the density of Ag presented by APCs can influence whether Ag-responding CD8+ T cells are selected to differentiate into memory T cells. Interestingly, our results show that immunization with dendritic cells (DCs) expressing a high level of MHC/peptide complexes at their surface allows the development of memory T cells. Conversely, memory T cell development is strongly reduced (10-20X) when mice are immunized with DCs expressing a low level of MHC/peptide complexes at their surface. Moreover, the quantity of Ag influences neither the expansion of CD8+ T cells nor the acquisition of their effector functions, but critically affects the generation of memory T cells. Our results suggest that the number of TCRs engaged during Ag recognition is important for memory T cell formation. To investigate this, we observed by video microscopy the interaction time between naïve T cells and DCs. Our results show that the duration and quality of the interaction depend on the density of Ag presented by the DCs. Indeed, we observe a decrease in the percentage of T cells making prolonged interactions with DCs when the Ag level is low. In addition, we observe variations in the expression of key transcription factors involved in memory T cell differentiation, such as Eomes, Bcl-6 and Blimp-1.
Furthermore, Ag density modulates the expression of Neuron-derived orphan nuclear receptor 1 (Nor-1). Nor-1 is involved in the conversion of Bcl-2 into a pro-apoptotic molecule and contributes to the apoptotic death of effector T cells during the contraction phase. Our model proposes that epitope density controls the generation of CD8+ memory T cells. A better understanding of the mechanisms involved in memory T cell generation will allow the development of better vaccination strategies. In a second part, we evaluated the role of TCR signaling in memory T cell homeostasis. To do so, we used a TCR-transgenic mouse model in which TCR expression can be modulated by tetracycline treatment. This system allowed us to abolish TCR expression at the surface of memory T cells. Interestingly, in the absence of an expressed TCR, CD8+ memory T cells can survive long-term in the organism and remain functional. Moreover, one subpopulation of CD4+ memory T cells is able to survive without an expressed TCR in a lymphopenic host, whereas the other subpopulation requires TCR expression.
Abstract:
Classical computer vision methods can only weakly emulate some of the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primate visual system, which enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system, compared to what machine vision systems have been able to achieve to date, motivates scientists and researchers to further explore this area in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for an efficient hierarchical object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes the visual data in a non-linear way, focusing only on the regions of interest and hence reducing the time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognizing objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
Abstract:
Observations show the oceans have warmed over the past 40 yr, with appreciable regional variation and more warming at the surface than at depth. Comparing the observations with results from two coupled ocean-atmosphere climate models [the Parallel Climate Model version 1 (PCM) and the Hadley Centre Coupled Climate Model version 3 (HadCM3)] that include anthropogenic forcing shows remarkable agreement between the observed and model-estimated warming. In this comparison the models were sampled at the same locations as gridded yearly observed data. In the top 100 m of the water column the warming is well separated from natural variability, including both variability arising from internal instabilities of the coupled ocean-atmosphere climate system and that arising from volcanism and solar fluctuations. Between 125 and 200 m the agreement is not significant, but it increases again below this level and remains significant down to 600 m. Analysis of PCM's heat budget indicates that the warming is driven by an increase in net surface heat flux that reaches 0.7 W m^-2 by the 1990s; the downward longwave flux increases by 3.7 W m^-2, which is not fully compensated by an increase in the upward longwave flux of 2.2 W m^-2. Latent and net solar heat fluxes each decrease by about 0.6 W m^-2. The changes in the individual longwave components are distinguishable from the preindustrial mean by the 1920s, but due to cancellation of components, changes in the net surface heat flux do not become well separated from zero until the 1960s. Changes in advection can also play an important role in local ocean warming due to anthropogenic forcing, depending on the location. The observed sampling of ocean temperature is highly variable in space and time, but sufficient to detect the anthropogenic warming signal in all basins, at least in the surface layers, by the 1980s.
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° x 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can be easily extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas.
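The two skill metrics quoted above, the correlation coefficient and the root mean square error as a percentage of mean yield, can be computed as in the sketch below; the yield series is invented for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def rmse_percent(obs, sim):
    """Root mean square error expressed as % of the observed mean yield."""
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return 100.0 * math.sqrt(mse) / (sum(obs) / len(obs))

# Illustrative all-India yields (kg/ha), one value per year.
observed  = [800, 900, 750, 1000, 950]
simulated = [820, 880, 800, 950, 980]
print(round(pearson_r(observed, simulated), 2))     # -> 0.94
print(round(rmse_percent(observed, simulated), 1))  # -> 4.2
```

Expressing RMSE as a percentage of the mean, as the study does (8.4%), makes the error comparable across regions with very different absolute yield levels.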
Abstract:
Mathematical modeling of bacterial chemotaxis systems has been influential and insightful in helping to understand experimental observations. We provide here a comprehensive overview of the range of mathematical approaches used for modeling, within a single bacterium, chemotactic processes caused by changes to external gradients in its environment. Specific areas of the bacterial system which have been studied and modeled are discussed in detail, including the modeling of adaptation in response to attractant gradients, the intracellular phosphorylation cascade, membrane receptor clustering, and spatial modeling of intracellular protein signal transduction. The importance of producing robust models that address adaptation, gain, and sensitivity is also discussed. This review highlights that while mathematical modeling has aided in understanding bacterial chemotaxis on the individual cell scale and guiding experimental design, no single model succeeds in robustly describing all of the basic elements of the cell. We conclude by discussing the importance of this and the future of modeling in this area.