982 results for Neural tube
Abstract:
We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
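As a minimal, hedged illustration of the OED idea (not the authors' implementation), the sketch below uses a Bayesian linear model on fixed basis functions as a stand-in for the network, and repeatedly queries the input where the model's predictive variance is largest; the basis, hyperparameters, and helper names such as `select_query` are assumptions.

```python
import numpy as np

# OED-style active query selection, assuming a Bayesian linear model on fixed
# basis functions as a stand-in for the network's local variance estimate.

def features(x):
    # Simple polynomial basis; purely illustrative.
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

def posterior(X, y, alpha=1e-2, noise_var=0.1):
    # Posterior covariance and mean for Bayesian ridge regression.
    A = alpha * np.eye(X.shape[1]) + X.T @ X / noise_var
    cov = np.linalg.inv(A)
    mean = cov @ X.T @ y / noise_var
    return mean, cov

def select_query(candidates, cov):
    # OED-style criterion: query where the predictive variance is largest,
    # i.e. where a label is expected to reduce uncertainty the most.
    Phi = features(candidates)
    pred_var = np.einsum('ij,jk,ik->i', Phi, cov, Phi)
    return candidates[np.argmax(pred_var)]

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(3 * x)
x_seen = rng.uniform(-1, 1, size=5)
y_seen = true_f(x_seen) + 0.1 * rng.standard_normal(5)

pool = np.linspace(-1, 1, 201)           # candidate queries
for _ in range(10):                      # active-learning loop
    _, cov = posterior(features(x_seen), y_seen)
    x_new = select_query(pool, cov)
    x_seen = np.append(x_seen, x_new)
    y_seen = np.append(y_seen, true_f(x_new) + 0.1 * rng.standard_normal())
print(sorted(np.round(x_seen, 2)))       # queries spread to cover the domain
```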
Abstract:
Most computational models of neurons assume that their electrical characteristics are of paramount importance. However, all long-term changes in synaptic efficacy, as well as many short-term effects, are mediated by chemical mechanisms. This technical report explores the interaction between electrical and chemical mechanisms in neural learning and development. Two neural systems that exemplify this interaction are described and modelled. The first is the mechanisms underlying habituation, sensitization, and associative learning in the gill withdrawal reflex circuit in Aplysia, a marine snail. The second is the formation of retinotopic projections in the early visual pathway during embryonic development.
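As a loose illustration only (the report's own equations are not given here), the following sketch models a single synaptic efficacy that depresses with repeated stimulation (habituation), is boosted by a modulatory input (sensitization), and slowly recovers; all parameters and names are hypothetical.

```python
import numpy as np

# Minimal sketch of habituation and sensitization of a single synapse, loosely
# in the spirit of Aplysia gill-withdrawal models (not the report's equations).
# Efficacy depresses with repeated use, recovers slowly, and is facilitated by
# a modulatory input.

def simulate(stimulus, modulator, depress=0.3, facilitate=0.5, recover=0.02):
    w, trace = 1.0, []                                   # synaptic efficacy, history
    for t in range(len(stimulus)):
        if stimulus[t]:
            w -= depress * w                             # use-dependent depression
        w += facilitate * modulator[t] * (2.0 - w)       # heterosynaptic facilitation
        w += recover * (1.0 - w)                         # slow recovery toward baseline
        trace.append(w)
    return np.array(trace)

stim = np.zeros(60); stim[5:30:5] = 1    # repeated touch -> habituation
mod = np.zeros(60); mod[40] = 1          # noxious modulatory event -> sensitization
print(np.round(simulate(stim, mod), 2))
```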
Abstract:
P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, resulting in an obstacle to chemotherapeutic treatment of cancer. In order to aid the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids binding to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set that was independent of the training set, which showed a standard error of prediction of 0.146 +/- 0.006 (data scaled from 0 to 1). Meanwhile, two other mathematical tools, a back-propagation neural network (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results for the test set than the BPNN, but the difference was not significant according to the F-statistic at p = 0.05. PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential to facilitate the prediction of P-gp flavonoid inhibitors and might be applied in further drug design.
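For a concrete picture of this kind of workflow, here is a hedged sketch of a regularized neural-network QSAR model evaluated on an independent test set; it uses L2 weight decay as a crude stand-in for Bayesian regularization, and the descriptor matrix and activities are random placeholders rather than the 57-flavonoid dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Sketch of a QSAR workflow on 0-1 scaled data; weight decay (alpha) is only a
# crude stand-in for Bayesian regularization. Descriptors and activities below
# are random placeholders, not the flavonoid dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(57, 6))                              # hypothetical molecular descriptors
y = X @ rng.normal(size=6) + 0.2 * rng.normal(size=57)    # hypothetical activity values

y = MinMaxScaler().fit_transform(y.reshape(-1, 1)).ravel()            # scale 0-1
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(4,), alpha=0.1,
                     max_iter=5000, random_state=0).fit(X_tr, y_tr)

resid = y_te - model.predict(X_te)
sep = np.sqrt(np.sum(resid**2) / (len(resid) - 1))        # standard error of prediction
print(f"SEP on held-out test set: {sep:.3f}")
```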
Abstract:
BACKGROUND: Despite numerous studies on endotracheal tube cuff pressure (CP) management, the literature has yet to establish a technique capable of adequately filling the cuff with an appropriate volume of air while generating low CP in a less subjective way. The purpose of this prospective study was to evaluate and compare the CP levels and air volume required to fill the endotracheal tube cuff using 2 different techniques (volume-time curve versus minimal occlusive volume) in the immediate postoperative period after coronary artery bypass grafting. METHODS: A total of 267 subjects were analyzed. After the surgery, the lungs were ventilated using pressure-controlled continuous mandatory ventilation, and the same ventilatory parameters were adjusted. Upon arrival in the ICU, the cuff was completely deflated and re-inflated, and at this point the volume of air to fill the cuff was adjusted using one of 2 randomly selected techniques: volume-time curve or minimal occlusive volume. We measured the volume of air injected into the cuff, the CP, and the expired tidal volume of the mechanical ventilation after the application of each technique. RESULTS: The volume-time curve technique demonstrated a significantly lower CP and a lower volume of air injected into the cuff compared to the minimal occlusive volume technique (P < .001). No significant difference was observed in the expired tidal volume between the 2 techniques (P = .052). However, when the subjects were submitted to the minimal occlusive volume technique, 17% (n = 47) experienced air leakage as observed on the volume-time graph. CONCLUSIONS: The volume-time curve technique was associated with a lower CP and a lower volume of air injected into the cuff, compared with the minimal occlusive volume technique, in the immediate postoperative period after coronary artery bypass grafting. Therefore, the volume-time curve may be a more reliable alternative for endotracheal tube cuff management.
Abstract:
Background: Mechanical ventilation is important in caring for patients with critical illness. Clinical complications, increased mortality, and high costs of health care are associated with prolonged ventilatory support or premature discontinuation of mechanical ventilation. Weaning refers to the process of gradually or abruptly withdrawing mechanical ventilation. The weaning process begins after partial or complete resolution of the underlying pathophysiology precipitating respiratory failure and ends with weaning success (successful extubation in intubated patients or permanent withdrawal of ventilatory support in tracheostomized patients). Objectives: To evaluate the effectiveness and safety of two strategies, a T-tube and pressure support ventilation, for weaning adult patients with respiratory failure that required invasive mechanical ventilation for at least 24 hours, measuring weaning success and other clinically important outcomes. Search methods: We searched the following electronic databases: Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 6); MEDLINE (via PubMed) (1966 to June 2012); EMBASE (January 1980 to June 2012); LILACS (1986 to June 2012); CINAHL (1982 to June 2012); SciELO (from 1997 to August 2012); the thesis repository of CAPES (Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior) (http://capesdw.capes.gov.br/capesdw/) (August 2012); and Current Controlled Trials (August 2012). We reran the search in December 2013 and will deal with any studies of interest when we update the review. Selection criteria: We included randomized controlled trials (RCTs) that compared a T-tube with pressure support (PS) for the conduct of spontaneous breathing trials and as methods of gradual weaning of adult patients with respiratory failure of various aetiologies who received invasive mechanical ventilation for at least 24 hours. Data collection and analysis: Two authors extracted data and assessed the methodological quality of the included studies. Meta-analyses using the random-effects model were conducted for nine outcomes. Relative risk (RR) and mean difference (MD) or standardized mean difference (SMD) were used to estimate the treatment effect, with 95% confidence intervals (CI). Main results: We included nine RCTs with 1208 patients; 622 patients were randomized to a PS spontaneous breathing trial (SBT) and 586 to a T-tube SBT. The studies were classified into three categories of weaning: simple, difficult, and prolonged. Four studies placed patients in two categories of weaning. Pressure support ventilation (PSV) and a T-tube were used directly as SBTs in four studies (844 patients, 69.9% of the sample). In 186 patients (15.4%) both interventions were used along with gradual weaning from mechanical ventilation; the PS was gradually decreased, twice a day, until it was minimal, and periods with a T-tube were gradually increased to two and eight hours for patients with difficult and prolonged weaning. In two studies (14.7% of patients) the PS was lowered to 2 to 4 cm H2O and 3 to 5 cm H2O based on ventilatory parameters until the minimal PS levels were reached; PS was then compared to the trial with the T-tube (TT). We identified 33 different reported outcomes in the included studies; we took 14 of them into consideration and performed meta-analyses on nine. With regard to the sequence of allocation generation, allocation concealment, selective reporting, and attrition bias, no study presented a high risk of bias.
We found no clear evidence of a difference between PS and TT for weaning success (RR 1.07, 95% CI 0.97 to 1.17, 9 studies, low quality of evidence), intensive care unit (ICU) mortality (RR 0.81, 95% CI 0.53 to 1.23, 5 studies, low quality of evidence), reintubation (RR 0.92, 95% CI 0.66 to 1.26, 7 studies, low quality of evidence), ICU and long-term weaning unit (LWU) length of stay (MD -7.08 days, 95% CI -16.26 to 2.1, 2 studies, low quality of evidence), and pneumonia (RR 0.67, 95% CI 0.08 to 5.85, 2 studies, low quality of evidence). PS was significantly superior to the TT for successful SBTs (RR 1.09, 95% CI 1.02 to 1.17, 4 studies, moderate quality of evidence). Four studies reported on weaning duration; however, we were unable to combine the study data because of differences in how the studies presented their data. One study was at high risk of other bias and four studies were at high risk of detection bias. Three studies reported that the weaning duration was shorter with PS, and in one study the duration was shorter in patients with a TT. Authors' conclusions: To date, we have found evidence of generally low quality from studies comparing pressure support ventilation (PSV) with a T-tube. The effects on weaning success, ICU mortality, reintubation, ICU and LWU length of stay, and pneumonia were imprecise. However, PSV was more effective than a T-tube for successful spontaneous breathing trials (SBTs) among patients with simple weaning. Based on the findings of single trials, three studies presented a shorter weaning duration in the group undergoing a PS SBT; however, a fourth study found a shorter weaning duration with a T-tube.
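The pooled relative risks reported above come from random-effects meta-analysis; as a hedged sketch of that kind of calculation (not the review's software or data), the following implements DerSimonian-Laird pooling of log relative risks from 2x2 trial counts, with placeholder numbers.

```python
import numpy as np

def pooled_rr(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooled relative risk with 95% CI."""
    a, n1, c, n2 = (np.asarray(x, dtype=float) for x in (events_t, n_t, events_c, n_c))
    y = np.log((a / n1) / (c / n2))                # per-study log RR
    v = 1 / a - 1 / n1 + 1 / c - 1 / n2            # variance of log RR
    w = 1 / v                                      # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                          # random-effects weights
    est = np.sum(w_re * y) / np.sum(w_re)
    se = 1 / np.sqrt(np.sum(w_re))
    return np.exp(est), np.exp(est - 1.96 * se), np.exp(est + 1.96 * se)

# Placeholder 2x2 counts (successes / totals in the PS and T-tube arms of 3 trials).
rr, lo, hi = pooled_rr([40, 55, 80], [60, 70, 100], [35, 50, 78], [58, 72, 98])
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```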
Abstract:
BACKGROUND: A previous investigation showed that the volume-time curve technique could be an alternative for endotracheal tube (ETT) cuff management. However, the clinical impact of applying the volume-time curve has not been documented. The purpose of this study was to compare the occurrence and intensity of sore throat, cough, thoracic pain, and pulmonary function between 2 techniques for ETT cuff management, the volume-time curve technique versus the minimal occlusive volume (MOV) technique, after coronary artery bypass grafting. METHODS: A total of 450 subjects were randomized into 2 groups for cuff management after intubation: MOV group (n = 222) and volume-time curve group (n = 228). We measured cuff pressure before extubation. We performed spirometry 24 h before and after surgery. We graded sore throat and cough according to a 4-point scale at 1, 24, 72, and 120 h after extubation, assessed thoracic pain at 24 h after extubation, and quantified the level of pain on a 10-point scale. RESULTS: The volume-time curve group presented significantly lower cuff pressure (30.9 +/- 2.8 vs 37.7 +/- 3.4 cm H2O), lower incidence and intensity of sore throat (1 h, 23.7 vs 51.4%; 24 h, 18.9 vs 40.5%, P < .001), cough (1 h, 19.3 vs 48.6%; 24 h, 18.4 vs 42.3%, P < .001), less thoracic pain (5.2 +/- 1.8 vs 7.1 +/- 1.7), and better preservation of FVC (49.5 +/- 9.9 vs 41.8 +/- 12.9%, P = .005) and FEV1 (46.6 +/- 1.8 vs 38.6 +/- 1.4%, P = .005) compared with the MOV group. CONCLUSIONS: Subjects who received the volume-time curve technique for ETT cuff management presented a significantly lower incidence and severity of sore throat and cough, less thoracic pain, and less impairment of pulmonary function than subjects who received the MOV technique during the first 24 h after coronary artery bypass grafting.
Abstract:
Sauze, C. and Neal, M. 'Endocrine Inspired Modulation of Artificial Neural Networks for Mobile Robotics', Dynamics of Learning Behavior and Neuromodulation Workshop, European Conference on Artificial Life 2007, Lisbon, Portugal, September 10th-14th 2007.
Abstract:
Martin Huelse: Generating complex connectivity structures for large-scale neural models. In: V. Kurkova, R. Neruda, and J. Koutnik (Eds.): ICANN 2008, Part II, LNCS 5164, pp. 849-858, 2008. Sponsorship: EPSRC
Abstract:
It is well documented that the presence of even a few air bubbles in water can significantly alter the propagation and scattering of sound. Air bubbles are both naturally and artificially generated in all marine environments, especially near the sea surface. The ability to measure the acoustic propagation parameters of bubbly liquids in situ has long been a goal of the underwater acoustics community. One promising solution is a submersible, thick-walled, liquid-filled impedance tube. Recent water-filled impedance tube work was successful at characterizing low void fraction bubbly liquids in the laboratory [1]. This work details the modifications made to the existing impedance tube design to allow for submersed deployment in a controlled environment, such as a large tank or a test pond. As well as being submersible, the usable frequency range of the device is increased from 5 - 9 kHz to 1 - 16 kHz and it does not require any form of calibration. The opening of the new impedance tube is fitted with a large stainless steel flange to better define the boundary condition on the plane of the tube opening. The new device was validated against the classic theoretical result for the complex reflection coefficient of a tube opening fitted with an infinite flange. The complex reflection coefficient was then measured with a bubbly liquid (order 250 micron radius and 0.1 - 0.5 % void fraction) outside the tube opening. Results from the bubbly liquid experiments were inconsistent with flanged tube theory using current bubbly liquid models. The results were more closely matched to unflanged tube theory, suggesting that the high attenuation and phase speeds in the bubbly liquid made the tube opening appear as if it were radiating into free space.
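The classic infinite-flange result used for validation can be sketched from the radiation impedance of a baffled circular opening; the code below is an illustrative calculation with an assumed tube radius and sound speed, not the authors' apparatus or analysis.

```python
import numpy as np
from scipy.special import j1, struve

# Classical reflection coefficient of a circular tube opening in an infinite
# flange: the opening radiates into a half-space with the impedance of a
# baffled piston. Parameters are illustrative for a water-filled tube, not
# the paper's apparatus dimensions.
def flanged_reflection(freq_hz, radius_m, c=1482.0):
    k = 2 * np.pi * freq_hz / c
    x = 2 * k * radius_m
    # Normalized radiation impedance of a baffled circular piston:
    # R1(x) = 1 - 2*J1(x)/x, X1(x) = 2*H1(x)/x (H1 = Struve function).
    z = (1 - 2 * j1(x) / x) + 1j * (2 * struve(1, x) / x)
    return (z - 1) / (z + 1)          # pressure reflection coefficient in the tube

freqs = np.linspace(1e3, 16e3, 4)     # the device's stated 1 - 16 kHz band
R = flanged_reflection(freqs, radius_m=0.05)
for f, r in zip(freqs, R):
    print(f"{f/1e3:5.1f} kHz  |R| = {abs(r):.3f}  phase = {np.angle(r):+.2f} rad")
```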
Abstract:
What brain mechanisms underlie autism and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the iSTART model, which proposes how cognitive, emotional, timing, and motor processes may interact together to create and perpetuate autistic symptoms. These model processes were originally developed to explain data concerning how the brain controls normal behaviors. The iSTART model shows how autistic behavioral symptoms may arise from prescribed breakdowns in these brain processes.
Abstract:
This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can hereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than the diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions.
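As a hedged point of comparison, the diffusive filling-in scheme that the model's faster mechanism is contrasted with can be sketched in a few lines: lightness signals spread between neighboring cells except where a boundary signal blocks the flow. The array sizes and rate constant are arbitrary, and this is not the article's own algorithm.

```python
import numpy as np

# Minimal boundary-gated diffusive filling-in on a 1-D "Mondrian": contrast
# signals spread between neighbouring cells except where a boundary signal
# blocks the flow. This illustrates the slower diffusion scheme the faster
# mechanism is contrasted with, not the article's algorithm.
def fill_in(contrast, boundary, rate=0.2, steps=2000):
    s = contrast.astype(float).copy()
    perm = 1.0 - boundary[:-1]                 # permeability between cells i and i+1
    for _ in range(steps):
        flow = rate * perm * (s[1:] - s[:-1])  # flux across each junction
        s[:-1] += flow                         # diffusion equalizes within regions
        s[1:] -= flow
    return s

contrast = np.zeros(12); contrast[2] = 1.0; contrast[9] = -0.4   # edge-driven signals
boundary = np.zeros(12); boundary[6] = 1.0                       # boundary blocks spread
print(np.round(fill_in(contrast, boundary), 2))   # two filled-in plateaus
```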
Abstract:
Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
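As a rough, hedged illustration of the bottom-up harmonic filter idea (not the AIRSTREAM circuitry), the sketch below scores candidate pitches by the spectral energy at their harmonics and then groups the components that match the winning pitch; the test signal, tolerances, and window length are arbitrary.

```python
import numpy as np

# Sketch of a "harmonic sieve": a candidate pitch is scored by the spectral
# energy at its harmonics, and components matching the winning pitch are
# grouped into one stream. Not the AIRSTREAM model's equations.
fs = 16000
t = np.arange(0, 0.25, 1 / fs)
voice = sum(np.sin(2 * np.pi * 200 * h * t) / h for h in range(1, 6))  # 200 Hz source
intruder = 0.8 * np.sin(2 * np.pi * 530 * t)                           # unrelated tone
spectrum = np.abs(np.fft.rfft(voice + intruder))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def harmonic_score(f0, n_harm=8, tol=10.0):
    score = 0.0
    for h in range(1, n_harm + 1):
        i = np.argmin(np.abs(freqs - h * f0))      # nearest spectral bin
        if abs(freqs[i] - h * f0) < tol:
            score += spectrum[i]
    return score

candidates = np.arange(80, 400, 1.0)
pitch = candidates[np.argmax([harmonic_score(f) for f in candidates])]

peaks = freqs[spectrum > 0.1 * spectrum.max()]
grouped = [f for f in peaks if min(abs(f - np.arange(1, 9) * pitch)) < 10]
print(f"estimated pitch ~{pitch:.0f} Hz; grouped components: {np.round(grouped)}")
```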
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a fine scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for the precise acts. Each context of action may require a much different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine system design to serve the lowest and the highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
Mapping novel terrain from sparse, complex data often requires the resolution of conflicting information from sensors working at different times, locations, and scales, and from experts with different goals and situations. Information fusion methods help resolve inconsistencies in order to distinguish correct from incorrect answers, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods developed here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, or man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples.
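As a hedged illustration of the underlying idea, rather than of the ARTMAP network itself, the sketch below infers subsumption relationships from reliable but differently specific labels: if every object that any source calls "car" is also called "vehicle", then "vehicle" is taken to subsume "car". The data are made up.

```python
from collections import defaultdict

# Sketch of inferring a label hierarchy from reliable-but-inconsistent sources:
# a more general label subsumes a more specific one if it appears on every
# object carrying the specific label. Illustrative only; not the ARTMAP
# fusion network, and the observations are invented.
observations = [                       # labels given to the same object by different experts
    {"car", "vehicle", "man-made"},
    {"truck", "vehicle", "man-made"},
    {"car", "vehicle", "man-made"},
    {"building", "man-made"},
]

objects_with = defaultdict(set)        # label -> set of object ids carrying it
for oid, labels in enumerate(observations):
    for lab in labels:
        objects_with[lab].add(oid)

subsumes = [
    (general, specific)
    for general in objects_with
    for specific in objects_with
    if general != specific and objects_with[specific] <= objects_with[general]
]
for general, specific in sorted(subsumes):
    print(f"{general} subsumes {specific}")
```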
Abstract:
Calligraphic writing presents a rich set of challenges to the human movement control system. These challenges include: initial learning, and recall from memory, of prescribed stroke sequences; critical timing of stroke onsets and durations; fine control of grip and contact forces; and letter-form invariance under voluntary size scaling, which entails fine control of stroke direction and amplitude during recruitment and derecruitment of musculoskeletal degrees of freedom. Experimental and computational studies in behavioral neuroscience have made rapid progress toward explaining the learning, planning, and control exercised in tasks that share features with calligraphic writing and drawing. This article summarizes computational neuroscience models and related neurobiological data that reveal critical operations spanning from parallel sequence representations to fine force control. Part one addresses stroke sequencing. It treats competitive queuing (CQ) models of sequence representation, performance, learning, and recall. Part two addresses letter size scaling and motor equivalence. It treats cursive handwriting models together with models in which sensory-motor transformations are performed by circuits that learn inverse differential kinematic mappings. Part three addresses fine-grained control of timing and transient forces, by treating circuit models that learn to solve inverse dynamics problems.
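Since competitive queuing is central to part one, a minimal hedged sketch of the CQ idea follows: a parallel primacy gradient of activations encodes the planned stroke order, and on each cycle the most active element is performed and then suppressed. The stroke names and gradient values are illustrative only.

```python
import numpy as np

# Minimal competitive-queuing (CQ) sketch: a parallel "primacy gradient" of
# activations encodes a planned stroke sequence; on each cycle the most
# active element wins the competition, is performed, and is then suppressed.
def competitive_queue(plan):
    strokes = list(plan)
    activation = np.array([plan[s] for s in strokes], dtype=float)
    order = []
    while np.any(activation > 0):
        winner = int(np.argmax(activation))    # choice by competition
        order.append(strokes[winner])          # perform the winning stroke
        activation[winner] = 0.0               # self-inhibition after performance
    return order

# Higher activation = earlier planned position (primacy gradient).
plan = {"down-stroke": 0.9, "loop": 0.7, "cross-bar": 0.5, "dot": 0.3}
print(competitive_queue(plan))   # ['down-stroke', 'loop', 'cross-bar', 'dot']
```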