957 results for realistic neural modeling
Abstract:
The design of present-generation uncooled Hg₁₋ₓCdₓTe infrared photon detectors relies on complex heterostructures with a basic unit cell of type n̲⁺/π/p̲⁺. We present a numerical analysis of a double-barrier n̲⁺/π/p̲⁺ mid-wave infrared (x = 0.3) HgCdTe detector for near-room-temperature operation. The present work proposes an accurate and generalized methodology, in terms of device design, material properties, and operating temperature, for studying the effects of the position dependence of carrier concentration, electrostatic potential, and generation-recombination (g-r) rates on detector performance. Position-dependent profiles of the electrostatic potential, carrier concentration, and g-r rates were simulated numerically. Detector performance was studied as a function of the doping concentrations of the absorber and contact layers, the widths of both layers, and the minority-carrier lifetime. A responsivity of ≈0.38 A W⁻¹, a noise current of ≈6 × 10⁻¹⁴ A/√Hz, and D* ≈ 3.1 × 10¹⁰ cm √Hz W⁻¹ at 0.1 V reverse bias were calculated using optimized values of the doping concentration, absorber width, and carrier lifetime. The suitability of the method is illustrated by demonstrating that optimum device performance can be achieved by carefully selecting the device design and other parameters. © 2010 American Institute of Physics. [doi:10.1063/1.3463379]
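As a quick plausibility check of the quoted figures of merit, the standard relation D* = R·√A / iₙ ties together the responsivity, the noise current density, and the specific detectivity. The sketch below evaluates it; the detector area is an assumed value chosen only to make the numbers consistent (the abstract does not state it):

```python
import math

# Standard detectivity relation for photon detectors:
#   D* = R * sqrt(A) / i_n,  with i_n the noise current density in A/sqrt(Hz).
# R and i_n are the optimized values quoted in the abstract; the optical
# area A is an assumption made for illustration (not given in the abstract).
R = 0.38       # responsivity, A/W
i_n = 6e-14    # noise current density, A/sqrt(Hz)
A = 2.4e-5     # assumed area, cm^2 (about a 49 um x 49 um pixel)

D_star = R * math.sqrt(A) / i_n
print(f"D* = {D_star:.2e} cm sqrt(Hz)/W")   # ~3.1e10, matching the abstract
```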
Abstract:
Effective use of image guidance, by incorporating refractive index (RI) variation into the computational modeling of light propagation in tissue, is investigated to assess its impact on optical-property estimation. Using realistic three-dimensional patient breast models, numerical simulations show that the variation in RI across different tissue regions influences the estimation of optical properties in image-guided diffuse optical tomography (IG-DOT). It is also shown that assuming an identical RI for all tissue regions leads to erroneous estimates of the optical properties. A priori knowledge of the RI of the segmented tissue regions in IG-DOT, which is difficult to obtain in vivo, leads to more accurate estimates of the optical properties. Even the inclusion of approximate RI values for the tissue regions, taken from the literature, resulted in better estimates of the optical properties, with values comparable to those obtained with correct knowledge of the RI of the different regions.
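For readers unfamiliar with where the RI enters a diffusion-based forward model, the sketch below evaluates one commonly used empirical fit for the effective internal reflection coefficient that scales the Robin boundary term; the tissue RI values are illustrative and are not taken from the paper:

```python
# Sketch of one way a region's refractive index n enters a diffusion-based
# DOT forward model: through the RI-mismatch boundary term. R_eff is the
# widely used empirical fit for the effective internal reflection
# coefficient; the tissue RI values below are illustrative only.
def boundary_coefficient(n_tissue, n_outside=1.0):
    n = n_tissue / n_outside
    R_eff = -1.440 / n**2 + 0.710 / n + 0.668 + 0.0636 * n
    return (1.0 + R_eff) / (1.0 - R_eff)   # scales the Robin boundary term

for n in (1.33, 1.37, 1.40, 1.44):          # plausible soft-tissue RI values
    print(n, round(boundary_coefficient(n), 3))
```

Assuming one RI for all regions fixes this coefficient everywhere, which is one mechanism by which the optical-property estimates become biased.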
Abstract:
In this work, a physically based analytical quantum threshold voltage model for the long-channel triple-gate metal oxide semiconductor field effect transistor is developed. The proposed model is based on the analytical solution of the two-dimensional Poisson and Schrödinger equations. The model is extended to short-channel devices by including a semi-empirical correction. The impact of effective-mass variation with film thickness is also discussed using the proposed model. All models are fully validated against a professional numerical device simulator for a wide range of device geometries. © 2010 Elsevier Ltd. All rights reserved.
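The film-thickness and effective-mass dependence discussed above can be illustrated with the textbook first-order quantum correction to the threshold voltage, i.e. the ground-state shift of an infinite square well. This is only an illustrative stand-in, not the paper's actual model:

```python
import math

hbar = 1.054571817e-34   # J s
q = 1.602176634e-19      # C
m0 = 9.1093837015e-31    # kg

# First-order confinement correction to the threshold voltage in a thin film:
#   E1 = hbar^2 * pi^2 / (2 m* t^2),   dVth ~ E1 / q.
# Illustrative only; the paper's model solves the 2-D Schrodinger equation.
def quantum_vth_shift(t_film_nm, m_eff_rel):
    t = t_film_nm * 1e-9
    E1 = hbar**2 * math.pi**2 / (2.0 * m_eff_rel * m0 * t**2)
    return E1 / q   # volts

for t_nm in (3, 5, 10):                       # assumed film thicknesses
    print(t_nm, "nm:", round(quantum_vth_shift(t_nm, 0.19), 3), "V")
```

The 1/t² scaling makes clear why both the film thickness and the effective mass matter so strongly for thin-body devices.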
Abstract:
Representation and quantification of uncertainty in climate change impact studies are a difficult task. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty, and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in some hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
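A minimal sketch of Dempster's rule of combination, the basic D-S evidence-combination step the framework builds on, is given below; the hypothesis sets and mass values are purely illustrative:

```python
from itertools import product

# Dempster's rule of combination. Masses are dicts mapping frozensets of
# hypotheses (e.g. drought classes) to basic probability assignments (bpa).
# The example masses below are purely illustrative.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict                   # normalization; fails at total conflict
    return {s: v / k for s, v in combined.items()}

# Two hypothetical "GCM" evidences over {drought, normal, wet}:
m_gcm1 = {frozenset({"drought"}): 0.6, frozenset({"drought", "normal"}): 0.4}
m_gcm2 = {frozenset({"drought"}): 0.5, frozenset({"normal", "wet"}): 0.5}
print(dempster_combine(m_gcm1, m_gcm2))
```

Note how mass can sit on the set {drought, normal} without being split between its members; this is what lets the D-S structure carry epistemic ignorance that a single probability distribution cannot.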
Abstract:
Homomorphic analysis and pole-zero modeling of electrocardiogram (ECG) signals are presented in this paper. Four typical ECG signals are considered and deconvolved into their minimum- and maximum-phase components through cepstral filtering, with a view to studying the possibility of more efficient feature selection from the component signals for diagnostic purposes. The complex cepstra of the signals are linearly filtered to extract the basic wavelet and the excitation function. The ECG signals are, in general, mixed phase; hence, exponential weighting is applied to aid deconvolution of the signals. The basic wavelet for a normal ECG approximates the action potential of the heart muscle fiber, and the excitation function corresponds to the excitation pattern of the heart muscles during a cardiac cycle. The ECG signals and their components are pole-zero modeled, and the pole-zero patterns of the models can give a clue for classifying normal and abnormal signals. Moreover, storing only the model parameters can yield a data reduction of more than 3:1 for normal signals sampled at a moderate 128 samples/s.
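A compact sketch of the described pipeline (exponential weighting, complex cepstrum, low-quefrency liftering) follows; the weighting factor and lifter cutoff are illustrative choices, and a random sequence stands in for an ECG cycle:

```python
import numpy as np

def complex_cepstrum(x):
    # Complex cepstrum via FFT with unwrapped phase (linear-phase removal
    # is omitted for brevity).
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
    return np.real(np.fft.ifft(log_X))

fs = 128                               # sampling rate cited in the abstract
x = np.random.randn(fs)                # stand-in for one ECG cycle
alpha = 0.99                           # illustrative exponential weighting factor
xw = x * alpha ** np.arange(fs)        # weighting pulls zeros inside the unit circle

c = complex_cepstrum(xw)
cut = 15                               # illustrative low-quefrency cutoff
c_low = c.copy()
c_low[cut:-cut] = 0.0                  # low quefrencies -> basic wavelet
c_high = c - c_low                     # high quefrencies -> excitation function
```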
Abstract:
In rapid parallel magnetic resonance imaging, image reconstruction is a challenging problem. Here, a novel neural-network-based image reconstruction technique for data acquired along any general trajectory, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt from acquisitions consisting of densely sampled low frequencies. Neural networks serve as the machine-learning tool that learns this transformation, which is then applied to actual acquisitions containing sparsely sampled low as well as high frequencies to obtain the desired alias-free image. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian-trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as on the sampling trajectory; higher acceleration factors are attainable with radial trajectories. Comparisons against existing techniques are presented, and CRAUNN is found to perform on par with the state of the art. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. © 2010 Elsevier Inc. All rights reserved.
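The core observation, that the aliasing operator for a fixed undersampling pattern is the same regardless of the image's frequency content, can be verified numerically; the one-dimensional sketch below checks it for 2× Cartesian undersampling on synthetic signals:

```python
import numpy as np

# For a fixed 2x Cartesian pattern, aliasing is the same linear overlap of
# FOV/2-shifted copies whether the signal is low-pass or full-spectrum.
# All signals here are synthetic 1-D stand-ins.
N = 64
k = np.fft.fftfreq(N)

def undersample_2x(img):
    K = np.fft.fft(img)
    K[1::2] = 0.0                       # drop every other k-space line
    return np.fft.ifft(K)

x_full = np.random.randn(N)
X_low = np.where(np.abs(k) < 0.15, np.fft.fft(x_full), 0)
x_low = np.fft.ifft(X_low)              # low-frequency-only version

for x in (x_full, x_low):
    aliased = undersample_2x(x)
    overlap = 0.5 * (x + np.roll(x, N // 2))  # predicted shifted-copy overlap
    print(np.allclose(aliased, overlap))      # True in both cases
```

This invariance is what justifies learning the unaliasing transformation from densely sampled low-frequency calibration data alone.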
Abstract:
Results of an investigation into the behavior of grid-connected induction generators (GCIGs) driven by typical prime movers such as mini-hydro/wind turbines are presented, and certain practical operational problems of such systems are identified. Analytical techniques are developed to study the behavior of these systems. The system consists of an induction generator (IG) feeding an 11 kV grid through a step-up transformer and a transmission line; terminal capacitors to compensate for the lagging VAr are included in the study. Computer simulation was carried out to predict the system performance at a given input power from the turbine. The effects of variations in grid voltage, frequency, input power, and terminal capacitance on the machine and system performance are studied, and an analysis of self-excitation conditions on disconnection from the supply is carried out. The behavior of a 220 kW hydel system and of 55/11 kW and 22 kW wind-driven systems corresponding to actual field conditions is discussed.
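The kind of steady-state analysis described can be reproduced with the per-phase equivalent circuit of an induction machine evaluated at negative slip; all circuit parameters and the assumed transformer ratio below are illustrative, not those of the field machines studied:

```python
import numpy as np

# Per-phase steady-state equivalent circuit of an induction machine at
# negative slip (generating mode). All values are illustrative assumptions,
# not the parameters of the 220 kW field machine.
V = 11000 / np.sqrt(3) / 20   # per-phase stator voltage after an assumed 20:1 step-down
R1, X1 = 0.08, 0.35           # stator resistance / leakage reactance (ohm)
R2, X2 = 0.07, 0.35           # rotor quantities referred to the stator (ohm)
Xm = 12.0                     # magnetizing reactance (ohm)
s = -0.02                     # negative slip -> induction generator

Z_rotor = R2 / s + 1j * X2
Z_total = R1 + 1j * X1 + (Z_rotor * 1j * Xm) / (Z_rotor + 1j * Xm)
I1 = V / Z_total
S = 3 * V * np.conj(I1)       # three-phase complex power at the terminals
print(f"P = {S.real/1e3:.1f} kW, Q = {S.imag/1e3:.1f} kVAr")
# P < 0: real power delivered to the grid; Q > 0: lagging VAr drawn,
# which is what the terminal capacitors in the study compensate.
```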
Abstract:
A neural network approach for solving the two-dimensional assignment problem is proposed. The design of the neural network is discussed and simulation results are presented. On the examples considered, the neural network obtains placements with 10-15% lower cost than the adjacent pairwise exchange method.
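For context, the baseline the network is compared against can be sketched directly: the snippet below runs an adjacent pairwise exchange heuristic on a random cost matrix and contrasts it with the optimal assignment from scipy's Hungarian-method solver (the cost matrix and sweep count are illustrative, and this is not the paper's network):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
C = rng.random((20, 20))                  # cost of placing element i at slot j

def pairwise_exchange(C, sweeps=50):
    # Baseline heuristic: swap elements in adjacent slots when it lowers cost.
    perm = np.arange(C.shape[0])          # perm[j] = element placed at slot j
    for _ in range(sweeps):
        for i in range(len(perm) - 1):
            a, b = perm[i], perm[i + 1]
            if C[a, i + 1] + C[b, i] < C[a, i] + C[b, i + 1]:
                perm[i], perm[i + 1] = b, a
    return C[perm, np.arange(len(perm))].sum()

rows, cols = linear_sum_assignment(C)     # optimal (Hungarian method)
print("pairwise exchange:", round(pairwise_exchange(C), 3))
print("optimal assignment:", round(C[rows, cols].sum(), 3))
```

The gap between the two totals is the kind of headroom a better optimizer, such as the proposed network, can close.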
Abstract:
Standard-cell design methodology is an important technique in semicustom VLSI design. It lends itself to easy automation of the crucial layout step, and many algorithms have been proposed in the recent literature for the efficient placement of standard cells. While many studies have identified the Kernighan-Lin bipartitioning method as superior to most others, it must be admitted that the method's behavior is erratic and strongly dependent on the initial partition. This paper proposes a novel algorithm for overcoming some of the deficiencies of the Kernighan-Lin method. The approach is based on an analogy between the placement problem and neural networks, and, by using some of the organizing principles of these nets, an attempt is made to improve the behavior of the bipartitioning scheme. The results have been encouraging, and the approach seems promising for other NP-complete problems in circuit layout.
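The initial-partition sensitivity the paper targets is easy to observe with a stock Kernighan-Lin implementation; the sketch below (using networkx on an arbitrary random graph, not a real netlist) runs the bisection from several random starts and reports the spread of cut sizes:

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Run KL bisection from different random initial partitions and compare the
# resulting cut sizes. The graph is an illustrative stand-in for a netlist.
G = nx.gnm_random_graph(60, 180, seed=42)

def cut_size(G, parts):
    a, _ = parts
    return sum(1 for u, v in G.edges if (u in a) != (v in a))

cuts = [cut_size(G, kernighan_lin_bisection(G, seed=s)) for s in range(8)]
print("cut sizes across initial partitions:", sorted(cuts))
```

The spread in cut sizes across seeds is precisely the erratic, initialization-dependent behavior the proposed neurally inspired scheme aims to damp.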
Abstract:
Molecular dynamics simulation studies of the polyene antifungal antibiotic amphotericin B, its head-to-tail dimeric structure, and the lipid-amphotericin B complex demonstrate interesting features of the flexibility within the molecule and define the optimal interactions for the formation of a stable dimeric structure and a stable complex with phospholipid.
Abstract:
The modes of binding of Gp(2',5')A, Gp(2',5')C, Gp(2',5')G and Gp(2',5')U to RNase T1 have been determined by computer modelling studies. All these dinucleoside phosphates assume extended conformations in the active site, leading to better interactions with the enzyme. The 5'-terminal guanine of all these ligands is placed in the primary base binding site of the enzyme in an orientation similar to that of 2'-GMP in the RNase T1-2'-GMP complex. The 2'-terminal purines are placed close to the hydrophobic pocket formed by the residues Gly71, Ser72, Pro73 and Gly74, which occur in a loop region. However, the orientation of the 2'-terminal pyrimidines is different from that of the 2'-terminal purines. This perhaps explains the higher binding affinity of the 2',5'-linked guanine dinucleoside phosphates with 2'-terminal purines compared with those having 2'-terminal pyrimidines. A comparison of the binding of the guanine dinucleoside phosphates with 2',5'- and 3',5'-linkages suggests significant differences in the ribose pucker and in the hydrogen bonding interactions between the catalytic residues and the bound nucleoside phosphate, implying that 2',5'-linked dinucleoside phosphates may not be the ideal ligands to probe the role of the catalytic amino acid residues. A change in the amino acid sequence in the surface loop region formed by the residues Gly71 to Gly74 drastically affects the conformation of the base binding subsite, and this may account for the inactivity of the enzyme with the altered sequence, i.e., with Pro, Gly and Ser at positions 71 to 73, respectively. These results thus suggest that, in addition to the recognition and catalytic sites, interactions at the loop regions which constitute the subsite for base binding are also crucial in determining the substrate specificity.
Abstract:
An associative memory with a parallel architecture is presented. The neurons are modeled by perceptrons having only binary, rather than continuous-valued, inputs. To store m elements, each having n features, m neurons each with n connections are needed. The n features are coded as an n-bit binary vector. The weights of the n connections that store the n features of an element take only two values, -1 and +1, corresponding to the absence or presence of a feature. This makes the learning very simple and straightforward. For an input corrupted by binary noise, the associative memory indicates the stored element closest (in terms of Hamming distance) to the noisy input. When the noisy input is equidistant from two or more stored vectors, the associative memory indicates two or more elements simultaneously. From some simple experiments performed on human memory and on the associative memory, it can be concluded that the associative memory presented in this paper is in some respects more akin to human memory than a Hopfield model.
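Since the abstract fully specifies the storage and recall rules, the memory can be sketched directly: the weights are the ±1 feature codes themselves, and recall returns every stored element at minimum Hamming distance, so ties report multiple elements. The stored patterns below are illustrative:

```python
import numpy as np

# Each stored element is one "neuron" whose n weights are its +-1 feature
# code; the dot product with a +-1 input equals n - 2 * Hamming distance,
# so the maximum score marks the nearest stored element(s).
patterns = np.array([[ 1, -1,  1,  1, -1,  1],
                     [-1, -1,  1, -1,  1,  1],
                     [ 1,  1, -1,  1,  1, -1]])   # m=3 elements, n=6 features

def recall(x):
    scores = patterns @ x
    return np.flatnonzero(scores == scores.max())  # all minimum-distance matches

noisy = np.array([-1, -1, 1, -1, 1, -1])           # element 1 with one feature flipped
print("closest stored element(s):", recall(noisy))
```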
Abstract:
We make an assessment of the impact of projected climate change on forest ecosystems in India. This assessment is based on climate projections of the Regional Climate Model of the Hadley Centre (HadRM3) and the dynamic global vegetation model IBIS for the A2 and B2 scenarios. According to the model projections, 39% of forest grids are likely to undergo vegetation type change under the A2 scenario and 34% under the B2 scenario by the end of this century. In many forest-dominated states such as Chhattisgarh, Karnataka and Andhra Pradesh, however, up to 73%, 67% and 62% of forested grids, respectively, are projected to undergo change. Net primary productivity (NPP) is projected to increase by 68.8% and 51.2% under the A2 and B2 scenarios, respectively, and soil organic carbon (SOC) by 37.5% for the A2 and 30.2% for the B2 scenario. Based on the dynamic global vegetation modeling, we present a forest vulnerability index for India built from observed datasets of forest density and forest biodiversity together with the model-predicted vegetation type shifts for forested grids. The vulnerability index suggests that the upper Himalayas, the northern and central Western Ghats, and parts of central India are most vulnerable to the projected impacts of climate change, while the Northeastern forests are more resilient. Our study thus points to the need for developing and implementing adaptation strategies to reduce the vulnerability of forests to projected climate change.
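As a loose illustration of how such an index can be assembled, the sketch below min-max-normalizes the indicators named in the abstract and averages them per forest grid; the equal weights, the direction assigned to each indicator, and the toy grid values are all assumptions for illustration, not the paper's formulation:

```python
import numpy as np

# Hypothetical grid-level vulnerability index combining the indicators the
# abstract names. Equal weights and the sign conventions (denser, more
# diverse forests taken as less vulnerable) are illustrative assumptions.
density      = np.array([0.2, 0.7, 0.5])   # observed forest density per grid
biodiversity = np.array([0.3, 0.8, 0.6])   # observed biodiversity per grid
shift        = np.array([1.0, 0.0, 1.0])   # projected vegetation-change flag

def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

vulnerability = (minmax(-density) + minmax(-biodiversity) + shift) / 3.0
print(np.round(vulnerability, 2))          # higher = more vulnerable grid
```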