Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 K samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering, and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with the external world while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing (DSP) system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
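As a rough illustration of the throughput the abstract implies, the sketch below works out the raw data rate from the quoted acquisition parameters and splits a 200 ms window across 8 processors; the channel-per-node assignment is a hypothetical scheme, not the MED64 interface.

```python
# Sketch: estimate the raw data rate for a 64-electrode array sampled at 20 kS/s,
# 16 bits (2 bytes) per sample, and split a 200 ms window across 8 processors.
# The per-processor assignment scheme here is illustrative, not the MED64 API.

N_ELECTRODES = 64
SAMPLE_RATE_HZ = 20_000
BYTES_PER_SAMPLE = 2
N_PROCESSORS = 8
WINDOW_S = 0.200

bytes_per_second = N_ELECTRODES * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE  # 2.56 MB/s
samples_per_window = int(SAMPLE_RATE_HZ * WINDOW_S)                  # 4000 per electrode
channels_per_proc = N_ELECTRODES // N_PROCESSORS                     # 8 channels each

print(f"Raw rate: {bytes_per_second / 1e6:.2f} MB/s")
print(f"Each 200 ms window: {samples_per_window} samples x {N_ELECTRODES} channels")
print(f"Each of {N_PROCESSORS} DSP nodes handles {channels_per_proc} channels")
```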
Abstract:
The laminar-to-turbulent transition process in boundary layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements. Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2, by mass, transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, which is the expected frequency of the Mack second-mode instability at these conditions, and are also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. Transition N factors for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities. Time- and spatially-resolved heat transfer traces are used to track the propagation of turbulent spots, and convection rates at 90%, 76%, and 63% of the boundary layer edge velocity are observed for the leading edge, centroid, and trailing edge of the spots, respectively. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset-to-completion distance. Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates and compared with both no-injection and chemically inert argon injection cases. While CO2 injection seems to delay transition, and argon injection seems to promote it, the experimental results are inconclusive, and matching computations do not predict a reduction in N factor for any CO2 injection condition computed.
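To make the spot propagation figures concrete, here is a small sketch of how the leading edge, centroid, and trailing edge of a single turbulent spot would move when convecting at the reported fractions of the edge velocity; the edge velocity and spot origin are placeholders, not T5 run conditions.

```python
# Sketch: spot growth implied by the measured propagation fractions. A spot born
# at x0 at time t0 has its leading edge, centroid, and trailing edge convecting
# at 0.90, 0.76, and 0.63 of the boundary layer edge velocity Ue. Ue and x0 are
# illustrative values, not reported experimental conditions.

UE = 4000.0                       # assumed edge velocity, m/s
C_LE, C_CENT, C_TE = 0.90, 0.76, 0.63
X0, T0 = 0.4, 0.0                 # assumed spot origin (m) and birth time (s)

def spot_extent(t: float):
    """Leading edge, centroid, and trailing edge positions at time t (m)."""
    dt = t - T0
    return (X0 + C_LE * UE * dt, X0 + C_CENT * UE * dt, X0 + C_TE * UE * dt)

for t_us in (0, 20, 50, 100):     # microseconds after spot formation
    le, cent, te = spot_extent(t_us * 1e-6)
    print(f"t = {t_us:3d} us: leading {le:.3f} m, centroid {cent:.3f} m, "
          f"trailing {te:.3f} m, spot length {le - te:.3f} m")
```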
Abstract:
We address the valuation of an operating wind farm and the finite-lived option to invest in it under different reward/support schemes: a constant feed-in tariff, a premium on top of the electricity market price (either a fixed premium or a variable subsidy such as a renewable obligation certificate or ROC), and a transitory subsidy, among others. Futures contracts on electricity with ever longer maturities enable market-based valuations to be undertaken. The model considers up to three sources of uncertainty: the electricity price, the level of wind generation, and the certificate (ROC) price where appropriate. When analytical solutions are lacking, we resort to a trinomial lattice combined with Monte Carlo simulation; we also use a two-dimensional binomial lattice when uncertainty in the ROC price is considered. Our data set refers to the UK. The numerical results show the impact of several factors involved in the decision to invest: the subsidy per MWh generated, the initial lump-sum subsidy, the maturity of the investment option, and electricity price volatility. Different combinations of variables can help bring forward investments in wind generation. One-off policies, e.g., a transitory initial subsidy, seem to have a stronger effect than a fixed premium per MWh produced.
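As a minimal sketch of the kind of market-based valuation described (a fixed premium on top of a stochastic electricity price), the following Monte Carlo example values the revenue stream of an operating wind farm under a geometric Brownian motion price. All parameter values are placeholders rather than the paper's UK calibration, and the investment option, ROC uncertainty, and wind-level uncertainty are not modelled.

```python
# Sketch: Monte Carlo valuation of an operating wind farm under a fixed premium
# per MWh on top of a GBM electricity price. All figures are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

S0, mu, sigma = 50.0, 0.01, 0.25      # spot price (GBP/MWh), drift, volatility
premium = 20.0                        # fixed premium per MWh generated
annual_mwh = 100_000                  # expected annual generation
r, T, n_steps, n_paths = 0.03, 20, 20, 10_000

dt = T / n_steps
# Simulate yearly electricity prices as GBM paths
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
prices = S0 * np.exp(log_paths)

# Discounted revenue: (price + premium) * generation each year
years = np.arange(1, n_steps + 1) * dt
discount = np.exp(-r * years)
revenues = (prices + premium) * annual_mwh
farm_value = np.mean(np.sum(revenues * discount, axis=1))
print(f"Estimated present value of revenues: {farm_value / 1e6:.1f} M GBP")
```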
Abstract:
This paper presents an adaptive Sequential Monte Carlo approach for real-time applications. The Sequential Monte Carlo method is employed to estimate the states of dynamic systems using weighted particles. The proposed approach reduces run-time computational complexity by adapting the size of the particle set. Multiple processing elements on FPGAs are dynamically allocated for improved energy efficiency without violating real-time constraints. A robot localisation application is developed based on the proposed approach. Compared to a non-adaptive implementation, the dynamic energy consumption is reduced by up to 70% without affecting the quality of solutions. © 2012 IEEE.
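A minimal sketch of a particle filter with an adaptive particle count follows; the adaptation rule (resizing around the effective sample size) is a simple stand-in for the paper's scheme, which the abstract does not specify, and the 1D localisation model is purely illustrative.

```python
# Sketch: a 1D particle filter whose particle count is adapted at run time.
# The resizing rule around the effective sample size is an assumed stand-in,
# not the adaptation scheme of the paper.
import numpy as np

rng = np.random.default_rng(1)

def step(particles, weights, motion, meas, meas_std, n_min=100, n_max=5000):
    # Predict: propagate particles through a noisy motion model
    particles = particles + motion + rng.normal(0.0, 0.1, size=particles.size)
    # Update: weight by the likelihood of the position measurement
    weights = weights * np.exp(-0.5 * ((meas - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Adapt particle count: fewer particles when the weights are concentrated
    ess = 1.0 / np.sum(weights ** 2)
    n_new = int(np.clip(2 * ess, n_min, n_max))
    # Resample down/up to the new size
    idx = rng.choice(particles.size, size=n_new, p=weights)
    return particles[idx], np.full(n_new, 1.0 / n_new)

particles = rng.uniform(0.0, 10.0, size=2000)
weights = np.full(particles.size, 1.0 / particles.size)
true_pos = 3.0
for t in range(20):
    true_pos += 0.2
    meas = true_pos + rng.normal(0.0, 0.3)
    particles, weights = step(particles, weights, 0.2, meas, 0.3)
print(f"Estimate: {np.average(particles, weights=weights):.2f}, "
      f"truth: {true_pos:.2f}, N = {particles.size}")
```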
Abstract:
This paper presents a heterogeneous reconfigurable system for real-time applications applying particle filters. The system consists of an FPGA and a multi-threaded CPU. We propose a method to adapt the number of particles dynamically and to utilise the run-time reconfigurability of the FPGA for reduced power and energy consumption. An application is developed which involves simultaneous mobile robot localisation and people tracking. The results show that the proposed adaptive particle filter can reduce computation time by up to 99%. Using run-time reconfiguration, we achieve a 34% reduction in idle power and save 26-34% of system energy. Our proposed system is up to 7.39 times faster and 3.65 times more energy efficient than the Intel Xeon X5650 CPU with 12 threads, and 1.3 times faster and 2.13 times more energy efficient than an NVIDIA Tesla C2070 GPU. © 2013 Springer-Verlag.
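One way to picture the power/real-time trade-off described above is to ask how many processing elements must stay active for a given particle count. The sketch below does this with hypothetical cycle and clock figures; these are assumptions for illustration, not measurements from the paper's FPGA design.

```python
# Sketch: choose how many FPGA processing elements (PEs) to keep active so that
# N particles are processed within the real-time deadline, powering down the
# rest. Cycle counts, clock rate, and deadline are hypothetical placeholders.

CYCLES_PER_PARTICLE = 200        # assumed per-particle cost on one PE
CLOCK_HZ = 100e6                 # assumed FPGA clock
DEADLINE_S = 0.01                # one filter iteration every 10 ms
MAX_PES = 8

def pes_required(n_particles: int) -> int:
    """Smallest PE count that processes all particles within the deadline."""
    cycles_available = CLOCK_HZ * DEADLINE_S
    for pes in range(1, MAX_PES + 1):
        per_pe = (n_particles + pes - 1) // pes          # particles per PE
        if per_pe * CYCLES_PER_PARTICLE <= cycles_available:
            return pes
    raise ValueError("deadline not achievable even with all PEs active")

for n in (500, 2000, 8000, 20000):
    print(n, "particles ->", pes_required(n), "PE(s) active")
```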
Abstract:
Superradiance (SR), or cooperative spontaneous emission, was predicted by R. Dicke before the invention of the laser. Recent years have seen a renaissance of both experimental and theoretical studies of the superradiant phase transition in a variety of media, ranging from quantum dots and Bose condensates through to black holes. Until recently, despite many years of research, SR had been considered a phenomenon of purely scientific interest without obvious potential applications. However, recent investigations of femtosecond SR emission generation from semiconductors have opened up some practical opportunities for the exploitation of this quantum optics phenomenon. Here we present a brief review of some features, advantages, and potential applications of SR generation from semiconductor laser structures.
Abstract:
Tributyltin (TBT) is widely used in antifouling paints, agricultural biocides, and plastic stabilizers around the world, resulting in serious pollution problems in aquatic environments. However, biomonitors for detecting TBT in freshwater have been lacking. We constructed a suppression subtractive hybridization library of Tetrahymena thermophila exposed to TBT and screened out 101 Expressed Sequence Tags whose expression was significantly up- or down-regulated by TBT treatment. From this, a series of genes related to TBT toxicity were discovered, such as the glutathione-S-transferase gene (down-regulated), the plasma membrane Ca2+ ATPase isoform 3 gene (up-regulated), and NgoA (up-regulated). Furthermore, their expression under different concentrations of TBT (0.5-40 ppb) was measured by real-time fluorescent quantitative PCR. The differentially expressed genes of T. thermophila in response to TBT were identified, which provide the basis for developing Tetrahymena into a sensitive, rapid, and convenient TBT biomonitor in freshwater based on an rDNA inducible expression system. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out after activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance value. When ρa is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
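The match-tracking mechanism described above can be sketched in a few lines. The simplified example below shows only the vigilance test and the minimal raising of ρa after a predictive error; complement coding, the choice function, and the learning laws of a full ARTMAP are omitted, and the helper names are illustrative.

```python
# Sketch: the core of ARTMAP's match-tracking rule in simplified form. Only the
# vigilance test and the "raise rho_a just above the current match" step after
# a predictive error are shown; this is not a full ARTMAP implementation.
import numpy as np

def match(a: np.ndarray, prototype: np.ndarray) -> float:
    """Degree of match |a AND w| / |a| used in the ARTa vigilance test."""
    return np.minimum(a, prototype).sum() / a.sum()

def arta_search(a, prototypes, predictions, correct_label, rho_baseline=0.5, eps=1e-3):
    """Pick an ARTa category that passes vigilance and predicts correctly,
    raising vigilance (match tracking) after each wrong prediction."""
    rho_a = rho_baseline
    # Order candidate categories by match value (stand-in for the choice function)
    order = sorted(range(len(prototypes)),
                   key=lambda j: match(a, prototypes[j]), reverse=True)
    for j in order:
        m = match(a, prototypes[j])
        if m < rho_a:
            continue                  # fails the vigilance test, keep searching
        if predictions[j] == correct_label:
            return j, rho_a           # accepted category
        rho_a = m + eps               # match tracking: raise vigilance just above m
    return None, rho_a                # no committed category fits; a new one would be created

prototypes = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
predictions = ["class_A", "class_B"]
print(arta_search(np.array([1.0, 0.0, 1.0]), prototypes, predictions, "class_B"))
```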
Abstract:
This paper demonstrates an optimal control solution to the scheduling of machine set-up changes based on dynamic programming average-cost-per-stage value iteration, as set forth by Caramanis et al. [2] for the 2D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of the state space, with neural networks employed to fit cost surfaces.
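For readers unfamiliar with average-cost-per-stage value iteration, here is a minimal sketch of relative value iteration on a toy discrete problem; the transition and cost matrices are random placeholders, not the paper's set-up scheduling model.

```python
# Sketch: relative (average-cost-per-stage) value iteration on a small discrete
# Markov decision problem. The random transition and cost structure is
# illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 4, 2

# P[a, s, s'] = probability of moving from s to s' under action a; c[s, a] = stage cost
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((n_states, n_actions))

h = np.zeros(n_states)            # relative value function
ref = 0                           # reference state used to pin down h
g = 0.0
for _ in range(1000):
    ev = np.tensordot(P, h, axes=([2], [0]))   # ev[a, s] = sum_s' P[a, s, s'] * h[s']
    Q = c + ev.T                               # Q[s, a]
    h_new = Q.min(axis=1)
    g = h_new[ref]                             # current estimate of the average cost per stage
    h_new = h_new - g                          # keep h bounded (relative value iteration)
    if np.max(np.abs(h_new - h)) < 1e-12:
        break
    h = h_new

policy = Q.argmin(axis=1)
print("estimated average cost per stage:", round(float(g), 4))
print("greedy action in each state:", policy)
```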
Abstract:
In some supply chains, materials are ordered periodically according to local information. This paper investigates how to improve the performance of such a supply chain. Specifically, we consider a serial inventory system in which each stage implements a local reorder interval policy; i.e., each stage orders up to a local base-stock level according to a fixed-interval schedule. A fixed cost is incurred for placing an order. Two improvement strategies are considered: (1) expanding the information flow by acquiring real-time demand information and (2) accelerating the material flow via flexible deliveries. The first strategy leads to a reorder interval policy with full information; the second strategy leads to a reorder point policy with local information. Both policies have been studied in the literature. Thus, to assess the benefit of these strategies, we analyze the local reorder interval policy. We develop a bottom-up recursion to evaluate the system cost and provide a method to obtain the optimal policy. A numerical study shows the following: increasing the flexibility of deliveries lowers costs more than does expanding the information flow; the fixed order costs and the system lead times are key drivers that determine the effectiveness of these improvement strategies. In addition, we find that using the optimal batch sizes of the reorder point policy and the demand rate to infer reorder intervals may lead to significant cost inefficiency. © 2010 INFORMS.
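To illustrate the kind of policy being evaluated, the sketch below simulates a single stage operating a local reorder interval policy (order up to a base-stock level every T periods, with a fixed cost per order) and estimates its average cost. The cost parameters and the single-stage setting are assumptions for illustration; the paper itself analyses a multi-stage serial system via a bottom-up recursion.

```python
# Sketch: simulate one stage under a reorder interval policy (order up to base
# stock S every T periods, fixed cost K per order) and measure the average
# holding/backorder/ordering cost. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(3)

def average_cost(S, T, K=50.0, h=1.0, b=9.0, mean_demand=10.0,
                 lead_time=2, horizon=20_000):
    inv = S                          # net inventory (on hand minus backorders)
    pipeline = []                    # (arrival_period, quantity) of outstanding orders
    total = 0.0
    for t in range(horizon):
        # Receive any orders due this period
        arrived = sum(q for (due, q) in pipeline if due == t)
        pipeline = [(due, q) for (due, q) in pipeline if due != t]
        inv += arrived
        # Place an order every T periods, bringing the inventory position up to S
        if t % T == 0:
            on_order = sum(q for (_, q) in pipeline)
            qty = S - inv - on_order
            if qty > 0:
                pipeline.append((t + lead_time, qty))
                total += K
        # Demand realisation and period cost
        inv -= rng.poisson(mean_demand)
        total += h * max(inv, 0) + b * max(-inv, 0)
    return total / horizon

print(f"Average cost per period: {average_cost(S=45, T=3):.2f}")
```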
Abstract:
AMSR-E satellite data and in-situ data were used to retrieve sea surface air temperature (Ta) over the Southern Ocean. The in-situ data were obtained from the 24th to 26th Chinese Antarctic Expeditions during 2008-2010. First, Ta was correlated with the brightness temperature (Tb) from the twelve channels of AMSR-E, and no strong correlation was found between Ta and Tb for any single channel; the highest correlation was 0.38 (with the 23.8 GHz channel). The dataset for the modeling was obtained by matching the in-situ data with Tb, and two methods were applied to build the retrieval model. In the multi-parameter regression method, the Tbs from all 12 channels were used in the model, and the region was divided into two parts at a latitude of 50°S. The retrieval results were compared with the in-situ data: the root mean square error (RMSE) and correlation for the high-latitude zone were 0.96°C and 0.93, respectively, and those for the low-latitude zone were 1.29°C and 0.96. An artificial neural network (ANN) method was also applied to retrieve Ta; its RMSE and correlation were 1.26°C and 0.98, respectively.
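As a sketch of the multi-parameter regression approach, the example below fits a linear model from 12 brightness-temperature channels to air temperature and reports RMSE and correlation; the data are synthetic stand-ins for the matched AMSR-E/ship observations, which are not reproduced here.

```python
# Sketch: multi-channel linear regression retrieval of air temperature (Ta)
# from brightness temperatures (Tb). The synthetic data below stand in for the
# matched satellite/in-situ dataset described in the abstract.
import numpy as np

rng = np.random.default_rng(4)

n_obs, n_channels = 500, 12
Tb = 150.0 + 100.0 * rng.random((n_obs, n_channels))       # synthetic brightness temps (K)
true_coef = rng.normal(0.0, 0.05, n_channels)
Ta = Tb @ true_coef - 10.0 + rng.normal(0.0, 1.0, n_obs)    # synthetic "in-situ" Ta (deg C)

# Least-squares fit Ta ~ X beta, with an intercept column
X = np.column_stack([np.ones(n_obs), Tb])
beta, *_ = np.linalg.lstsq(X, Ta, rcond=None)

predicted = X @ beta
rmse = np.sqrt(np.mean((Ta - predicted) ** 2))
corr = np.corrcoef(Ta, predicted)[0, 1]
print(f"RMSE = {rmse:.2f} deg C, correlation = {corr:.2f}")
```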
Abstract:
Structural and magnetic properties of thin Mn films on the Fe(001) surface have been investigated by a combination of photoelectron spectroscopy and computer simulation in the temperature range 300 K ≤ T ≤ 750 K. Room-temperature as-deposited Mn overlayers are found to be ferromagnetic up to 2.5-monolayer (ML) coverage, with a magnetic moment parallel to that of the iron substrate. The Mn atomic moment decreases with increasing coverage, and thicker samples (4-ML and 4.5-ML coverage) are antiferromagnetic. Photoemission measurements performed while the system temperature is rising at constant rate (dT/dt ≈ 0.5 K/s) detect the first signs of Mn-Fe interdiffusion at T = 450 K, and reveal a broad temperature range (610 K ≤ T ≤ 680 K) in which the interface appears to be stable. Interdiffusion resumes at T ≥ 680 K. Molecular dynamics and Monte Carlo simulations allow us to attribute the stability plateau at 610 K ≤ T ≤ 680 K to the formation of a single-layer MnFe surface alloy with a 2×2 unit cell and a checkerboard distribution of Mn and Fe atoms. X-ray-absorption spectroscopy and analysis of the dichroic signal show that the alloy has a ferromagnetic spin structure, collinear with that of the substrate. The magnetic moments of Mn and Fe atoms in the alloy are estimated to be 0.8 μB and 1.1 μB, respectively.
Abstract:
The loss of GABAergic neurotransmission has been closely linked with epileptogenesis. Modulation of synaptic activity occurs both via the removal of GABA from the synaptic cleft by GABA transporters (GATs) and via modulation of GABA receptors. The tremor rat (TRM; tm/tm) is the parent strain of the spontaneously epileptic rat (SER; zi/zi, tm/tm), which exhibits absence-like seizures after 8 weeks of age. However, there are no reports elucidating the effects of GATs and GABAA receptors (GABARs) in TRMs. The present study was conducted to detect GATs and the GABAR α1 subunit in the TRM hippocampus at the mRNA and protein levels. In this study, total synaptosomal GABA content, measured by high performance liquid chromatography (HPLC), was significantly decreased in the TRM hippocampus compared with control Wistar rats; mRNA and protein expression of GAT-1, GAT-3, and the GABAR α1 subunit were all significantly increased in the TRM hippocampus, as measured by real-time PCR and western blot, respectively; GAT-1 and GABAR α1 subunit proteins were localized widely in the hippocampus of TRM and control rats, including the CA1, CA3, and dentate gyrus (DG) regions, whereas a wide distribution of GAT-3 was observed only in the CA1 region by immunohistochemistry. These data demonstrate that excessive expression of GAT-1, GAT-3, and the GABAR α1 subunit in the TRM hippocampus may provide potential therapeutic targets for genetic epilepsy.
Abstract:
In spite of intensive research, computational modeling of injection stretch blow molding (ISBM) still cannot match the accuracy achieved for other polymer processes such as injection molding. There is a lack of understanding of the interdependence among the machine parameters set by the operators, the process parameters, the material behavior, and the resulting final thickness distribution and performance of the molded product. The work presented in this paper describes a set of instrumentation tools developed for investigation of the ISBM process in an industrial setting. Results are presented showing the pressure and air temperature evolution inside the mold, the stretch rod force and displacement history, and the moment of contact of the polymer with seven discrete locations on the mold.
Abstract:
The ectrodactyly-ectodermal dysplasia-clefting syndrome is a rare autosomal dominant disorder caused by heterozygous mutations in the p63 gene, a transcription factor belonging to the p53 family. The majority of cases of ectrodactyly-ectodermal dysplasia syndrome are caused by de novo mutations and are therefore sporadic, in approximately 60% of patients. The substitution of arginine by histidine (R279H), due to a c.836G>A mutation in exon 7 of the p63 gene, represents 55% of the identified mutations and is considered a mutational hot spot. A quantitative and sensitive real-time PCR assay was performed to quantify both wild-type and R279H alleles in DNA extracted from peripheral blood and in RNA from cultured epithelial cells. Standard curves were constructed for both wild-type and mutant probes. The sensitivity of the assay was determined by generating serial dilutions of DNA isolated from heterozygous patients (50% of alleles mutated) with wild-type DNA, thus obtaining decreasing percentages of the p63 R279H mutant allele (50%, 37.5%, 25%, 12.5%, 10%, 7.5%, 5%, 2.5%, and 0.0%). The assay detected as little as 1% of the mutant p63 allele. The high sensitivity of the assay is of particular relevance to prenatal diagnosis and counseling and to detecting the therapeutic effects of drug treatment or gene therapy aimed at reducing the amount of mutated p63. © 2012 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
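The mixing arithmetic behind the dilution series can be made explicit: heterozygous DNA carries the mutant allele at 50%, so the fraction of heterozygous DNA needed is twice the target mutant percentage. The sketch below derives these mixing fractions for illustration; they are not values reported in the abstract.

```python
# Sketch: dilution arithmetic for the serial dilution series above. Mixing
# heterozygous DNA (50% mutant allele) with wild-type DNA at fraction p gives a
# mutant-allele percentage of 0.5 * p. The mixing fractions are derived, not reported.

targets = [50.0, 37.5, 25.0, 12.5, 10.0, 7.5, 5.0, 2.5, 0.0]   # % mutant allele

for target in targets:
    het_fraction = target / 50.0          # fraction of heterozygous DNA in the mix
    wt_fraction = 1.0 - het_fraction
    print(f"{target:5.1f}% mutant allele -> {het_fraction:4.0%} heterozygous DNA "
          f"+ {wt_fraction:4.0%} wild-type DNA")
```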