968 results for neural modeling
Abstract:
Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of the hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) uncertainty and epistemic (subjective) uncertainty. This paper shows how D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measure of belief and plausibility in the results. The D-S approach is used in this work for information synthesis via various evidence combination rules that model conflict differently. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, and are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using the various evidence combination rules given by D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in the projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
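To make the evidence combination step concrete, the following is a minimal Python sketch of Dempster's rule of combination over a small frame of discernment. The drought/normal/wet frame, the example masses, and all function names are illustrative assumptions, not taken from the paper; the paper also considers alternative combination rules that handle conflict differently.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (bpas) with Dempster's rule.

    Each bpa maps focal elements (frozensets of hypotheses) to mass.
    Conflicting mass (empty intersections) is discarded and the rest
    renormalized, which is Dempster's classical conflict handling.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

def belief(m, hypothesis):
    """Bel(A): total mass of focal elements contained in A."""
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Hypothetical frame of discernment: classes derived from SSFI-4.
DROUGHT, NORMAL, WET = "drought", "normal", "wet"
theta = frozenset({DROUGHT, NORMAL, WET})

# Hypothetical bpas from two GCM/scenario combinations; mass on the
# whole frame (theta) expresses ignorance.
m_gcm1 = {frozenset({DROUGHT}): 0.6, theta: 0.4}
m_gcm2 = {frozenset({DROUGHT, NORMAL}): 0.5, frozenset({WET}): 0.2, theta: 0.3}

m = dempster_combine(m_gcm1, m_gcm2)
print(belief(m, frozenset({DROUGHT})), plausibility(m, frozenset({DROUGHT})))
```

The gap between belief and plausibility for the drought hypothesis is exactly the quantitative measure of remaining ignorance that the D-S framework provides and a single posterior probability cannot.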
Abstract:
Homomorphic analysis and pole-zero modeling of electrocardiogram (ECG) signals are presented in this paper. Four typical ECG signals are considered and deconvolved into their minimum- and maximum-phase components through cepstral filtering, with a view to studying the possibility of more efficient feature selection from the component signals for diagnostic purposes. The complex cepstra of the signals are linearly filtered to extract the basic wavelet and the excitation function. The ECG signals are, in general, mixed phase; hence, exponential weighting is applied to aid deconvolution of the signals. The basic wavelet for a normal ECG approximates the action potential of the muscle fiber of the heart, and the excitation function corresponds to the excitation pattern of the heart muscles during a cardiac cycle. The ECG signals and their components are pole-zero modeled, and the pole-zero pattern of the models can provide a basis for classifying normal and abnormal signals. Moreover, storing only the parameters of the model can result in a data reduction of more than 3:1 for normal signals sampled at a moderate 128 samples/s.
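As a sketch of the homomorphic pipeline described above, the following NumPy code applies exponential weighting, computes the complex cepstrum, and separates a low-quefrency "basic wavelet" from a high-quefrency "excitation" by linear liftering. The weighting factor, FFT length, and lifter cutoff are illustrative assumptions, not the paper's values.

```python
import numpy as np

def complex_cepstrum(x, n_fft=1024):
    """Complex cepstrum via FFT with phase unwrapping."""
    X = np.fft.fft(x, n_fft)
    log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

def homomorphic_deconvolve(ecg, alpha=0.98, cutoff=0.05):
    """Split an ECG segment into a basic wavelet (low quefrency) and an
    excitation component (high quefrency) by liftering the complex
    cepstrum. Exponential weighting by alpha**n pulls maximum-phase
    zeros inside the unit circle, aiding deconvolution of mixed-phase
    signals; alpha and cutoff here are illustrative values.
    """
    n = len(ecg)
    weighted = ecg * alpha ** np.arange(n)      # exponential weighting
    c = complex_cepstrum(weighted)
    n_cut = int(cutoff * len(c))
    lifter = np.zeros(len(c))
    lifter[:n_cut] = 1.0
    lifter[-n_cut:] = 1.0                        # keep both cepstrum ends
    c_wavelet = c * lifter                       # slowly varying: wavelet
    c_excitation = c * (1.0 - lifter)            # remainder: excitation

    def invert(ceps):
        # Undo the homomorphic transform, then the exponential weighting.
        x = np.fft.ifft(np.exp(np.fft.fft(ceps))).real[:n]
        return x / alpha ** np.arange(n)

    return invert(c_wavelet), invert(c_excitation)
```

The two returned components can then each be pole-zero modeled separately, which is where the feature-selection and data-reduction benefits mentioned in the abstract come in.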
Abstract:
The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and subsequently the physical mechanisms leading to its initiation and sustenance are explored. As a case study, the heavy precipitation event that led to 65 cm of rainfall accumulation in a span of around 6 h (1430 LT-2030 LT) over Santacruz (Mumbai, India), on 26 July, 2005, is selected. Three sets of numerical experiments have been conducted. The first set of experiments (EXP1) consisted of a four-member ensemble, and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble, with a four-level one-way nested integration and grid spacing of 54, 18, 6 and 1 km. The model was able to simulate a realistic spatial structure with the 54, 18, and 6 km grids; however, with the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set of experiments (EXP3) consisted of a five-member ensemble, with a four-level one-way nesting and grid spacing of 54, 18, 6, and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to observations, as compared to EXP2. Specifically, among all experiments, the timing of maximum rainfall, the abrupt increase in rainfall intensities, which was a major feature of this event, and the rainfall intensities simulated in EXP3 (at 6 km resolution) were closest to observations. Analysis of the physical mechanisms causing the initiation and sustenance of the event reveals some interesting aspects. Deep convection was found to be initiated by mid-tropospheric convergence that extended to lower levels during the later stage. In addition, there was a high negative vertical gradient of equivalent potential temperature suggesting strong atmospheric instability prior to and during the occurrence of the event. Finally, the presence of a conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
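The instability diagnostic invoked above can be made concrete. The equivalent potential temperature combines temperature and moisture into one conserved quantity; a commonly used approximate form (a standard textbook expression, not taken from the paper) is

$$ \theta_e \approx \theta \exp\!\left(\frac{L_v\, r}{c_p\, T}\right), \qquad \frac{\partial \theta_e}{\partial z} < 0 \;\Rightarrow\; \text{potential (convective) instability}, $$

where $\theta$ is the potential temperature, $r$ the water vapor mixing ratio, $L_v$ the latent heat of vaporization, and $c_p$ the specific heat of dry air at constant pressure. A layer in which $\theta_e$ decreases strongly with height, as reported for this event, is primed for deep convection once lifting is supplied.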
Abstract:
In rapid parallel magnetic resonance imaging, image reconstruction is a challenging problem. Here, a novel image reconstruction technique for data acquired along any general trajectory, set in a neural network framework and called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine learning tools to learn this transformation, in order to obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques are presented, and CRAUNN is found to perform on par with state-of-the-art techniques. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
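A minimal PyTorch sketch of the central idea follows: learn, from a densely sampled low-frequency calibration acquisition, a mapping from aliased coil-image values to the alias-free image, then apply it to the undersampled acquisition. The network architecture, training loop, and all names are assumptions for illustration; the paper's actual network and training setup may differ.

```python
import torch
import torch.nn as nn

class UnaliasNet(nn.Module):
    """Maps the stack of aliased coil-image values at a pixel to the
    alias-free image value at that pixel (illustrative architecture)."""
    def __init__(self, n_coils, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_coils, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),   # real and imaginary parts
        )

    def forward(self, x):
        return self.net(x)

def train_on_calibration(aliased_coils, target, epochs=200, lr=1e-3):
    """aliased_coils: (n_pixels, n_coils) complex tensor built from the
    densely sampled low-frequency acquisition; target: (n_pixels,)
    complex alias-free image from the same calibration data."""
    x = torch.view_as_real(aliased_coils).flatten(1)   # (n_pixels, 2*n_coils)
    y = torch.view_as_real(target)                     # (n_pixels, 2)
    model = UnaliasNet(aliased_coils.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model
```

Because the learning happens entirely in the image domain on aliased pixel values, no explicit coil sensitivity maps and no trajectory-specific reconstruction operator are required, which is consistent with the trajectory independence claimed above.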
Abstract:
A neural network approach for solving the two-dimensional assignment problem is proposed. The design of the neural network is discussed and simulation results are presented. On the examples considered, the neural network obtains placements with 10-15% lower cost than the adjacent pairwise exchange method.
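For context, a standard Hopfield-style formulation of the assignment problem (a general textbook form, not necessarily the network used in this paper) minimizes an energy that penalizes constraint violations alongside the placement cost:

$$ E = \frac{A}{2}\sum_i\Bigl(\sum_j v_{ij} - 1\Bigr)^{2} + \frac{A}{2}\sum_j\Bigl(\sum_i v_{ij} - 1\Bigr)^{2} + B\sum_{i,j} c_{ij}\, v_{ij}, $$

where $v_{ij} \in [0,1]$ is the activation assigning item $i$ to slot $j$, $c_{ij}$ is the cost of that assignment, and $A$, $B$ are penalty weights chosen so that valid permutations dominate at convergence.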
Abstract:
Standard-cell design methodology is an important technique in semicustom VLSI design. It lends itself to easy automation of the crucial layout step, and many algorithms have been proposed in the recent literature for the efficient placement of standard cells. While many studies have identified the Kernighan-Lin bipartitioning method as superior to most others, it must be admitted that the behavior of the method is erratic and strongly dependent on the initial partition. This paper proposes a novel algorithm for overcoming some of the deficiencies of the Kernighan-Lin method. The approach is based on an analogy between the placement problem and neural networks, and, by using some of the organizing principles of these nets, an attempt is made to improve the behavior of the bipartitioning scheme. The results have been encouraging, and the approach seems promising for other NP-complete problems in circuit layout.
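To make the bipartitioning step concrete, here is a compact sketch of one Kernighan-Lin improvement pass on a weighted graph. The dictionary-based graph representation and function names are illustrative assumptions; real cell placement works on nets rather than simple edges and enforces area constraints.

```python
def kl_pass(adj, part_a, part_b):
    """One Kernighan-Lin pass over a bipartition.

    adj: dict mapping node -> {neighbor: weight}.
    part_a, part_b: disjoint node sets of equal size.
    Returns an improved (part_a, part_b), or the originals if no
    prefix of tentative swaps reduces the cut.
    """
    a, b = set(part_a), set(part_b)

    def d(v, own, other):
        # External minus internal connection cost of v.
        return (sum(w for u, w in adj[v].items() if u in other)
                - sum(w for u, w in adj[v].items() if u in own))

    gains, swaps, locked = [], [], set()
    for _ in range(min(len(a), len(b))):
        best = None
        for x in a - locked:
            for y in b - locked:
                g = d(x, a, b) + d(y, b, a) - 2 * adj[x].get(y, 0)
                if best is None or g > best[0]:
                    best = (g, x, y)
        g, x, y = best
        gains.append(g)
        swaps.append((x, y))
        # Tentatively swap so later gain values see the updated partition.
        a.remove(x); b.add(x)
        b.remove(y); a.add(y)
        locked.update({x, y})
    # Keep only the prefix of swaps with the best cumulative gain.
    prefix = [sum(gains[:k + 1]) for k in range(len(gains))]
    best_k = max(range(len(prefix)), key=lambda k: prefix[k])
    if prefix[best_k] <= 0:
        return set(part_a), set(part_b)
    a, b = set(part_a), set(part_b)
    for x, y in swaps[:best_k + 1]:
        a.remove(x); b.add(x)
        b.remove(y); a.add(y)
    return a, b
```

The erratic behavior noted in the abstract stems from the greedy pair selection and the dependence of the whole pass on the initial partition, which is exactly what the paper's neural-network-inspired organizing principles aim to temper.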
Abstract:
Molecular dynamics simulation studies of the polyene antifungal antibiotic amphotericin B, its head-to-tail dimeric structure, and the lipid-amphotericin B complex demonstrate interesting features of the flexibility within the molecule and define the optimal interactions for the formation of a stable dimeric structure and of a complex with phospholipid.
Abstract:
The modes of binding of Gp(2',5')A, Gp(2',5')C, Gp(2',5')G and Gp(2',5')U to RNase T1 have been determined by computer modelling studies. All these dinucleoside phosphates assume extended conformations in the active site, leading to better interactions with the enzyme. The 5'-terminal guanine of all these ligands is placed in the primary base binding site of the enzyme in an orientation similar to that of 2'-GMP in the RNase T1-2'-GMP complex. The 2'-terminal purines are placed close to the hydrophobic pocket formed by the residues Gly71, Ser72, Pro73 and Gly74, which occur in a loop region. However, the orientation of the 2'-terminal pyrimidines is different from that of the 2'-terminal purines. This perhaps explains the higher binding affinity of the 2',5'-linked guanine dinucleoside phosphates with 2'-terminal purines compared with those with 2'-terminal pyrimidines. A comparison of the binding of the guanine dinucleoside phosphates with 2',5'- and 3',5'-linkages suggests significant differences in the ribose pucker and in the hydrogen bonding interactions between the catalytic residues and the bound nucleoside phosphate, implying that 2',5'-linked dinucleoside phosphates may not be ideal ligands for probing the role of the catalytic amino acid residues. A change in the amino acid sequence in the surface loop region formed by residues Gly71 to Gly74 drastically affects the conformation of the base binding subsite, and this may account for the inactivity of the enzyme with the altered sequence, i.e., with Pro, Gly and Ser at positions 71 to 73, respectively. These results thus suggest that, in addition to interactions at the recognition and catalytic sites, interactions at the loop regions which constitute the subsite for base binding are also crucial in determining substrate specificity.
Abstract:
An associative memory with a parallel architecture is presented. The neurons are modelled as perceptrons having only binary, rather than continuous-valued, inputs. To store m elements, each having n features, m neurons each with n connections are needed. The n features are coded as an n-bit binary vector. The weights of the n connections that store the n features of an element take only two values, -1 and 1, corresponding to the absence or presence of a feature. This makes the learning very simple and straightforward. For an input corrupted by binary noise, the associative memory indicates the stored element that is closest (in terms of Hamming distance) to the noisy input. In the case where the noisy input is equidistant from two or more stored vectors, the associative memory indicates two or more elements simultaneously. From some simple experiments performed on human memory and on the associative memory, it can be concluded that the associative memory presented in this paper is in some respects more akin to a human memory than the Hopfield model is.
Abstract:
We make an assessment of the impact of projected climate change on forest ecosystems in India. This assessment is based on climate projections of the Regional Climate Model of the Hadley Centre (HadRM3) and the dynamic global vegetation model IBIS for the A2 and B2 scenarios. According to the model projections, 39% of forest grids are likely to undergo vegetation type change under the A2 scenario and 34% under the B2 scenario by the end of this century. However, in many forest-dominated states such as Chattisgarh, Karnataka and Andhra Pradesh, up to 73%, 67% and 62% of forested grids, respectively, are projected to undergo change. Net Primary Productivity (NPP) is projected to increase by 68.8% and 51.2% under the A2 and B2 scenarios, respectively, and soil organic carbon (SOC) by 37.5% for the A2 and 30.2% for the B2 scenario. Based on the dynamic global vegetation modeling, we present a forest vulnerability index for India, built from observed datasets of forest density and forest biodiversity as well as model-predicted vegetation type shift estimates for forested grids. The vulnerability index suggests that the upper Himalayas, the northern and central parts of the Western Ghats and parts of central India are most vulnerable to the projected impacts of climate change, while the Northeastern forests are more resilient. Our study thus points to the need for developing and implementing adaptation strategies to reduce the vulnerability of forests to projected climate change.
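One simple way such a composite index can be assembled, shown purely as an illustrative assumption rather than the paper's actual formulation, is a weighted combination of normalized grid-level indicators:

$$ V_g = w_1\,(1 - D_g) + w_2\,(1 - B_g) + w_3\, S_g, \qquad w_1 + w_2 + w_3 = 1, $$

where, for forest grid $g$, $D_g$ is normalized forest density, $B_g$ is normalized biodiversity, and $S_g$ is 1 if the vegetation type is projected to shift and 0 otherwise. Low density, low biodiversity and a projected shift all raise the vulnerability $V_g$.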
Abstract:
This paper describes the field-oriented control of a salient pole wound field synchronous machine in stator flux coordinates. The procedure for deriving the flux linkage equations along any general rotating axes, including the stator flux axes, is given. The stator flux equations are used to identify the cross-coupling that occurs between the axes due to saliency in the machine. The coupling terms are canceled as feedforward terms in the generation of references for the current controllers, to achieve good decoupling during transients. The design of the current controller for stator-flux-oriented control is presented. This paper proposes extending the rotor flux closed-loop observer to sensorless control of the wound field synchronous machine. It also proposes a new sensorless control scheme using a stator flux closed-loop observer and estimation of the torque angle from the stator current components in stator flux coordinates. Detailed experimental results from a sensorless 15.8 hp salient pole wound field synchronous machine drive are presented to demonstrate the performance of the proposed control strategy from a low speed of 0.8 Hz up to 50 Hz.
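For reference, the standard flux linkage and torque relations for a salient pole machine in rotor dq coordinates, from which the transformation to stator flux coordinates proceeds, are (textbook forms, not reproduced from the paper):

$$ \psi_d = L_d\, i_d + L_{md}\, i_f, \qquad \psi_q = L_q\, i_q, \qquad T_e = \tfrac{3}{2}\, p\,(\psi_d\, i_q - \psi_q\, i_d), $$

with the torque angle $\delta$ between the stator flux vector and the rotor d-axis satisfying $\tan\delta = \psi_q/\psi_d$. Saliency ($L_d \neq L_q$) is precisely what introduces the cross-coupling between the stator flux axes that the feedforward terms are designed to cancel.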
Abstract:
The objective of the present work is to propose a constitutive model for ice that considers the influence of important parameters such as strain rate dependence and pressure sensitivity on the response of the material. The constitutive model proposed by Carney et al. (2006) is taken as a starting basis and subsequently modified to incorporate the effect of brittle cracking within a continuum damage mechanics framework. The damage is taken to occur in the form of distributed cracking within the material during impact, which is consistent with experimental observations. At the point of failure, the material is assumed to be fluid-like, with the deviatoric stress dropping to almost zero. The constitutive model is implemented in a general purpose finite element code using an explicit formulation. Several single-element tests under uniaxial tension and compression, as well as biaxial loading, are conducted in order to assess the performance of the model. A few large-scale simulations are also performed to examine the capability of the model to predict brittle damage evolution in un-notched and notched three-point bend specimens. The proposed model predicts lower strength under tensile loading than under compressive loading, which is in agreement with experimental observations. Furthermore, the model captures the strain rate dependence of the strength under both compressive and tensile loading, which also corroborates well with experimental results. (C) 2010 Elsevier Ltd. All rights reserved.
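To show how the named ingredients fit together in an explicit stress update, here is a generic sketch of a rate-dependent model with scalar damage degrading the deviatoric response. It is an assumption-laden illustration of the general technique, not the model of this paper or of Carney et al. (2006); all parameter names and evolution laws are invented for illustration.

```python
import numpy as np

def stress_update(eps, deps, D, dt, params):
    """One explicit step for a generic rate-dependent constitutive model
    with scalar continuum damage. eps, deps: symmetric 3x3 strain and
    strain increment tensors; D: current damage in [0, 1].
    """
    K, G, rate_coeff, eps_crit, D_rate = (params[k] for k in
        ("K", "G", "rate_coeff", "eps_crit", "D_rate"))
    eps_new = eps + deps
    vol = np.trace(eps_new)
    dev = eps_new - vol / 3.0 * np.eye(3)
    # Rate dependence: stiffen the deviatoric response at higher rates.
    rate = np.linalg.norm(deps / dt)
    G_eff = G * (1.0 + rate_coeff * np.log1p(rate))
    # Damage grows once the largest principal strain exceeds a tensile
    # threshold, so tension damages more readily than compression --
    # giving the lower tensile strength noted in the abstract.
    driving = max(np.max(np.linalg.eigvalsh(eps_new)) - eps_crit, 0.0)
    D_new = min(D + D_rate * driving * dt, 1.0)
    # Damaged material sheds deviatoric stress; at D = 1 it behaves
    # fluid-like, carrying pressure only.
    sigma = K * vol * np.eye(3) + (1.0 - D_new) * 2.0 * G_eff * dev
    return sigma, D_new
```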
Abstract:
A hybrid technique for modeling two-dimensional fracture problems, which makes use of the displacement discontinuity and direct boundary element methods, is presented. The direct boundary element method is used to model the finite domain of the body, while displacement discontinuity elements are used to represent the cracks; the advantages of the component methods are thus effectively combined. The method has been implemented in a computer program, and numerical results that demonstrate the accuracy of the present method are presented. The cases of bodies containing edge cracks as well as multiple cracks are considered. A direct method and an iterative technique are described. The present hybrid method is most suitable for modeling problems involving crack propagation.
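The direct-BEM half of such a hybrid rests on the boundary integral (Somigliana) identity, written here in standard notation (a textbook form, not specific to this paper):

$$ c_{ij}(\xi)\, u_j(\xi) + \int_\Gamma T_{ij}(\xi, x)\, u_j(x)\, d\Gamma(x) = \int_\Gamma U_{ij}(\xi, x)\, t_j(x)\, d\Gamma(x), $$

where $U_{ij}$ and $T_{ij}$ are the displacement and traction fundamental solutions, $u_j$ and $t_j$ are the boundary displacements and tractions, and $c_{ij}(\xi)$ depends on the local boundary geometry at the collocation point $\xi$. The displacement discontinuity elements then contribute the crack-opening unknowns $\Delta u_j$ across each crack segment, avoiding the degenerate coincident-surface integrals that a pure direct BEM would face on crack faces.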