138 results for CLASTIC INPUTS
Abstract:
Concern over changes in global climate has increased in recent years with improved understanding of atmospheric dynamics and growing evidence linking climate to long-term variability in hydrologic records. Climate impact studies rely on climate change information at fine spatial resolution. To this end, the past decade has witnessed significant progress in the development of downscaling models that cascade the climate information provided by General Circulation Models (GCMs) at coarse spatial resolution down to the scale relevant for hydrologic studies. While a plethora of downscaling models have been applied successfully to mid-latitude regions, only a few studies address tropical regions, where the atmosphere is known to exhibit more complex behavior. In this paper, a support vector machine (SVM) approach is proposed for statistical downscaling to interpret climate change signals provided by GCMs over tropical regions of India. Climate variables affecting the spatio-temporal variation of precipitation in each meteorological sub-division of India are identified. Cluster analysis is then applied to the climate data to identify the wet and dry seasons in each year. The data pertaining to the climate variables and precipitation of each meteorological sub-division are then used to develop an SVM-based downscaling model for each season. Subsequently, the downscaling model is applied to future climate projections from the second-generation Coupled Global Climate Model (CGCM2) to assess the impact of climate change on hydrological inputs to the meteorological sub-divisions, and the results are analyzed to assess the impact of climate change on precipitation over India.
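As an illustration of the statistical-downscaling step, the sketch below fits a support vector regression from coarse-scale predictor variables to a local precipitation-like target, using scikit-learn's SVR on synthetic data. All variable names and values are hypothetical placeholders, not the study's data or predictors:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# hypothetical coarse-scale GCM predictors (e.g. pressure, humidity, temperature anomalies)
X = rng.normal(size=(n, 3))
# synthetic "precipitation" target with a known dependence plus noise
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n)

# standardize predictors, then fit an RBF-kernel support vector regression
scaler = StandardScaler().fit(X)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(scaler.transform(X), y)
pred = model.predict(scaler.transform(X))
```

In practice one such model would be trained per season and per sub-division, and then driven with the GCM's future-scenario predictor fields.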
Abstract:
The near-infrared diffuse optical tomography (DOT) technique can provide good quantitative reconstruction of tissue absorption and scattering properties when given additional inputs such as input and output modulation depths and a correction for photon leakage. We calculate the two-dimensional (2D) input modulation depth from three-dimensional (3D) diffusion in order to model the 2D diffusion of photons. The photon leakage when light traverses from the phantom to the fiber tip is estimated using a solid-angle model. Experiments are carried out for single (5 and 6 mm) as well as multiple (6 and 8 mm) inhomogeneities with a higher absorption coefficient embedded in a homogeneous phantom. The diffusion equation for photon transport is solved using the finite element method, and the Jacobian is modeled to reconstruct the optical parameters. We study the development and performance of a DOT system using a modulated single light source and multiple detectors. Dual-source methods are reported to have better reconstruction capabilities for resolving and localizing single as well as multiple inhomogeneities because of their superior noise rejection. However, an experimental setup with dual sources is much more difficult to implement, since two out-of-phase, identical light probes must be adjusted symmetrically on either side of the detector during scanning. Our work shows that a relatively simpler system with a single source gives better results in terms of resolution and localization. The experiments are carried out with 5 and 6 mm inhomogeneities separately, and with 6 and 8 mm inhomogeneities together, with an absorption coefficient almost three times that of the background.
The results show that our experimental single-source system, with additional inputs such as the 2D input/output modulation depth and the air-fiber interface correction, is capable of detecting 5 and 6 mm inhomogeneities separately and can identify the size difference between multiple inhomogeneities such as 6 and 8 mm. The localization error is zero, and the recovered absorption coefficient is 93% of that of the inhomogeneity embedded in the experimental phantom.
Abstract:
Atomistic molecular dynamics simulations have been carried out to reveal the characteristic features of ethylenediamine (EDA)-cored protonated (corresponding to neutral pH) poly(amidoamine) (PAMAM) dendrimers of generations 3 (G3) and 4 (G4) functionalized with single-stranded DNAs (ssDNAs). The four ssDNA strands, attached via an alkylthiolate [-S(CH2)6-] linker to the free amine groups on the surface of the PAMAM dendrimers, are observed to undergo a rapid conformational change during the 25 ns simulation period. From the RMSD values of the ssDNAs, we find greater relative stability for purine-rich (more adenine and guanine) ssDNA strands than for pyrimidine-rich (thymine and cytosine) strands. The degree of wrapping of the ssDNA strands around the dendrimer is influenced by the charge ratio of the DNA and the dendrimer. As the G4 dendrimer carries relatively more positive charge than the G3 dendrimer, we observe more extensive wrapping of the ssDNAs around the G4 dendrimer than around the G3 dendrimer. This may indicate that the DNA-functionalized G3 dendrimer is more suitable for constructing higher-order nanostructures. The linker molecule also undergoes a drastic conformational change during the simulation: over the nanosecond-long trajectory, some portion of the linker lies nearly flat on the surface of the dendrimer. The ssDNA strands, along with the linkers, penetrate the surface of the dendrimer and approach its center, indicating the soft-sphere nature of the dendrimer molecule. The effective radius of the DNA-functionalized dendrimer nanoparticles is found to be independent of the base composition of the ssDNAs, at around 19.5 angstrom and 22.4 angstrom for G3 and G4 PAMAM dendrimer cores, respectively.
This observed effective radius indicates that significant shrinkage has taken place in the dendrimer, the linker and the DNA strands. As a whole, our results describe the characteristic features of DNA-functionalized dendrimer nanoparticles and can serve as useful inputs for effectively designing DNA-dendrimer nanoparticle self-assemblies for active biological applications.
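The RMSD used above to compare strand stability can be computed as in the minimal sketch below. This version is alignment-free for brevity; a real trajectory analysis would first superpose each frame onto the reference (e.g. with the Kabsch algorithm), and the coordinates here are synthetic:

```python
import numpy as np

def rmsd(ref, frame):
    """Root-mean-square deviation between two N x 3 coordinate sets (no superposition)."""
    diff = np.asarray(frame, float) - np.asarray(ref, float)
    return np.sqrt((diff ** 2).sum() / len(ref))

# toy example: every "atom" displaced by (1, 1, 1), i.e. by sqrt(3)
ref = np.zeros((10, 3))
frame = np.ones((10, 3))
value = rmsd(ref, frame)  # sqrt(3) ~ 1.732
```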
Abstract:
Third World hinterlands provide most of the settings in which the quality of human life has improved the least over the decade since Our Common Future was published. This low quality of life promotes a desire for large numbers of offspring, fuelling population growth and an exodus to the urban centres of the Third World. Enhancing the quality of life of these people in ways compatible with the health of their environments is therefore the most significant challenge from the perspective of sustainable development. Human quality of life may be viewed in terms of access to goods, services and a satisfying social role. The ongoing processes of globalization are enhancing flows of goods worldwide, but these hardly reach the poor of Third World countrysides. These same processes have, however, vastly improved everybody's access to information, and there are excellent opportunities to put this to good use to enhance the quality of life of the people of Third World countrysides through better access to education and health. More importantly, better access to information could promote a more satisfying social role by strengthening grass-roots involvement in development planning and the management of natural resources. I illustrate these possibilities with the help of a series of concrete experiences from the south Indian state of Kerala. Such an effort does not call for large-scale material inputs; rather, it calls for a culture of inform-and-share in place of the prevalent culture of control-and-command. It calls for openness and transparency in transactions involving government agencies, NGOs, and national and transnational business enterprises, and for the acceptance of accountability by such agencies.
Abstract:
Agroforestry has the potential to sequester as much carbon as forests, if not more. Massive benefits can be channelled to small farmers and landless labourers through the cultivation of tamarind and other fast-growing, fruit-yielding trees. This paper describes a project started by small farmers and landless labourers in a semiarid area of south India. The aim is to upgrade the dryland holdings of the member families through economically sound dryland horticulture, community woodlots, and the planting of fast-growing species along orchard and field boundaries. The small farmers invest massive labour inputs, and the project provides economic benefits that lead them to change their land-use practices and improve environmental quality. The paper describes the planning processes of the project, the hurdles in finding AIJ partners, current monitoring procedures, and the costs of carbon sequestration. It shows that the project is economically viable on its own, but initially needed, and continues to need, carbon-credit investment in order to spread rapidly across the geopolitical region covered by the organization. It argues that economic gains to small farmers and landless labourers are the most certain way of achieving massive biomass increase and soil carbon replenishment, and that multiple holistic benefits are achieved through this kind of project.
Abstract:
Artificial Neural Networks (ANNs) have recently been proposed as an alternative method for solving certain traditional problems in power systems where conventional techniques have not achieved the desired speed, accuracy or efficiency. This paper presents an application of ANNs whose aim is fast voltage stability margin assessment of a power network in an energy control centre (ECC) using a reduced number of appropriate inputs. The L-index is used for assessing the voltage stability margin. Investigations are carried out on the influence of the information encompassed in the input vector and target output vector on the learning time and test performance of a multi-layer perceptron (MLP) based ANN model. An LP-based algorithm for voltage stability improvement is used for generating meaningful training patterns in the normal operating range of the system. From the generated set, appropriate training patterns are selected based on a statistical correlation process, a sensitivity matrix approach, a contingency ranking approach and a concentric relaxation method. Simulation results on a 24-bus EHV system, a 30-bus modified IEEE system, and an 82-bus Indian power network are presented for illustration.
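The core idea of mapping a reduced set of operating-point features to a stability indicator with an MLP can be sketched as below. The six features and the linear stand-in target for the L-index are synthetic placeholders, not the paper's training patterns or network design:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n, d = 300, 6  # hypothetical: six selected input features per operating point
X = rng.uniform(0.9, 1.1, size=(n, d))  # e.g. scaled voltage magnitudes / loadings
w = np.array([0.4, 0.3, 0.2, 0.1, 0.2, 0.3])
L = X @ w  # stand-in target playing the role of the L-index

# small single-hidden-layer perceptron; lbfgs converges well on small data sets
mlp = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X, L)
pred = mlp.predict(X)
```

Once trained, such a model evaluates in microseconds, which is what makes it attractive for online assessment in an ECC.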
Abstract:
In this paper, a knowledge-based approach using Support Vector Machines (SVMs) is used for estimating the coordinated zonal settings of a distance relay. The approach depends on detailed simulation studies of the apparent impedance loci seen by the distance relay during disturbances, considering various operating conditions including fault resistance. The impedance loci at the relay location are obtained from extensive transient stability studies. SVMs are used as a pattern classifier for obtaining distance relay coordination. The scheme uses the apparent impedance values observed during a fault as inputs. The improved performance obtained with SVMs, which maintain the reach under different fault conditions as well as system power flow changes, is illustrated with an equivalent 265-bus system of the practical Indian Western Grid.
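The pattern-classification step can be sketched as an SVM trained to label apparent-impedance points (R, X) as inside or outside a relay zone. The zone rule, line impedance and scales below are invented for illustration, not the paper's relay settings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 400
R = rng.uniform(0, 40, n)   # apparent resistance (ohm), hypothetical scale
Xr = rng.uniform(0, 40, n)  # apparent reactance (ohm)

# hypothetical zone-1 rule: impedance magnitude below 80% of a 30-ohm line
label = (np.hypot(R, Xr) < 0.8 * 30).astype(int)

features = np.column_stack([R, Xr])
clf = SVC(kernel="rbf", C=100.0, gamma="scale").fit(features, label)
acc = clf.score(features, label)  # training accuracy on the simulated loci
```

In the paper's setting the labels would come from transient stability simulations rather than a geometric rule, and the classifier output would drive the zone coordination.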
Estimating the Hausdorff-Besicovitch dimension of boundary of basin of attraction in helicopter trim
Abstract:
Helicopter trim involves the solution of nonlinear force equilibrium equations. As in many nonlinear dynamic systems, the helicopter trim problem can show chaotic behavior. This chaotic behavior appears in the basin of attraction of the nonlinear trim equations that must be solved to determine the main rotor control inputs given by the pilot. This study focuses on the boundary of the basin of attraction obtained for a set of control inputs. We analyze the boundary by considering it at different magnification levels. The magnified views reveal intricate geometries, and the basin boundary exhibits statistical self-similarity, an essential property of fractal geometries. These results led the authors to investigate the fractal dimension of the basin boundary, which is found to be greater than the topological dimension. From all these observations, it is evident that the boundary of the basin of attraction for the helicopter trim problem is fractal in nature. (C) 2012 Elsevier Inc. All rights reserved.
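A fractal (box-counting) dimension of the kind investigated above is estimated by counting occupied boxes at several scales and fitting the slope of log N(s) against log(1/s). A minimal sketch, sanity-checked on a densely sampled filled unit square, whose dimension should come out near 2:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension of a point set.

    points: (n, d) array of coordinates; sizes: array of box edge lengths.
    """
    counts = []
    for s in sizes:
        # each point maps to the integer index of the box containing it
        boxes = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(boxes))
    # dimension = slope of log N(s) versus log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200_000, 2))  # filled unit square
d = box_counting_dimension(pts, np.array([0.2, 0.1, 0.05, 0.025]))
```

For a basin boundary one would instead feed in the boundary points extracted from the convergence map of the trim iteration, and expect a non-integer slope.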
Abstract:
Online remote visualization and steering of critical weather applications such as cyclone tracking are essential for effective and timely analysis by a geographically distributed climate science community. A steering framework for controlling high-performance simulations of critical weather events needs to take into account both the steering inputs of the scientists and the criticality needs of the application, including a minimum progress rate of the simulations and continuous visualization of significant events. In this work, we have developed INST, an integrated user-driven and automated steering framework for simulations, online remote visualization, and analysis of critical weather applications. INST gives the user control over various application parameters, including the region of interest, the resolution of the simulation, and the frequency of data for visualization. Unlike existing efforts, our framework considers both the steering inputs and the criticality of the application, namely the minimum progress rate needed, together with resource constraints including storage space and network bandwidth, to decide the best possible parameter values for simulation and visualization.
Abstract:
In the two-user Gaussian Strong Interference Channel (GSIC) with finite-constellation inputs, it is known that a relative rotation between the constellations of the two users enlarges the Constellation Constrained (CC) capacity region. In this paper, a metric for finding the approximate angle of rotation that maximally enlarges the CC capacity region is presented. It is shown that, for some portion of the Strong Interference (SI) regime with Gaussian input alphabets, the FDMA rate curve touches the capacity curve of the GSIC. Even though the Gaussian-alphabet FDMA rate curve touches the capacity curve of the GSIC, at high powers, with both users using the same finite constellation, we show that the CC FDMA rate curve lies strictly inside the CC capacity curve for the constellations BPSK, QPSK, 8-PSK, 16-QAM and 64-QAM. It is known that, with Gaussian input alphabets, the FDMA inner bound at the optimum sum-rate point is always better than the simultaneous-decoding inner bound throughout the Weak Interference (WI) regime. For a portion of the WI regime, it is shown that, with identical finite-constellation inputs for both users, the simultaneous-decoding inner bound enlarged by a relative rotation between the constellations can be strictly better than the FDMA inner bound.
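The basic building block behind such CC capacity computations is a Monte-Carlo estimate of the constellation-constrained mutual information over an AWGN channel, sketched below for a single user. The SNR and sample count are arbitrary; the two-user region would additionally require evaluating the (rotated) sum constellation:

```python
import numpy as np

def cc_mutual_info(const, snr_db, n=20_000, seed=0):
    """Monte-Carlo estimate of I(X;Y) in bits for a complex constellation over AWGN."""
    rng = np.random.default_rng(seed)
    const = np.asarray(const, complex)
    M = len(const)
    const = const / np.sqrt(np.mean(np.abs(const) ** 2))  # unit average symbol energy
    N0 = 10 ** (-snr_db / 10)                              # noise variance per symbol
    idx = rng.integers(0, M, n)
    noise = rng.normal(scale=np.sqrt(N0 / 2), size=(n, 2)) @ np.array([1, 1j])
    y = const[idx] + noise
    # squared distances from each received sample to every constellation point
    d2 = np.abs(y[:, None] - const[None, :]) ** 2
    num = np.exp(-d2 / N0).sum(axis=1)                  # sum_j p(y|x_j), up to a constant
    den = np.exp(-d2[np.arange(n), idx] / N0)           # p(y|x_sent), same constant
    return np.log2(M) - np.mean(np.log2(num / den))

bpsk = np.array([1.0 + 0j, -1.0 + 0j])
hi = cc_mutual_info(bpsk, 15.0)  # saturates near log2(2) = 1 bit at high SNR
```

Rotating one user's constellation changes the effective sum constellation seen by the other decoder, which is why the CC inner bounds above depend on the rotation angle.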
Abstract:
Control of flow in duct networks has a myriad of applications ranging from heating, ventilation, and air-conditioning to blood flow networks. The system considered here provides vent velocity inputs to a novel 3-D wind display device called the TreadPort Active Wind Tunnel. An error-based robust decentralized sliding-mode control method with nominal feedforward terms is developed for individual ducts while considering cross coupling between ducts and model uncertainty as external disturbances in the output. This approach is important due to limited measurements, geometric complexities, and turbulent flow conditions. Methods for resolving challenges such as turbulence, electrical noise, valve actuator design, and sensor placement are presented. The efficacy of the controller and the importance of feedforward terms are demonstrated with simulations based upon an experimentally validated lumped parameter model and experiments on the physical system. Results show significant improvement over traditional control methods and validate prior assertions regarding the importance of decentralized control in practice.
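A boundary-layer sliding-mode controller with a nominal feedforward term, of the general kind described above, can be sketched on a first-order duct-velocity model. All plant parameters, gains and the disturbance below are invented for illustration, not the TreadPort system's values:

```python
import numpy as np

# hypothetical first-order duct model: v' = -a*v + b*u + d(t)
a, b = 2.0, 5.0
dt, T = 1e-3, 3.0
v_ref = 4.0            # desired vent velocity (m/s)
K, phi = 3.0, 0.05     # switching gain and boundary-layer width

v, vs = 0.0, []
for k in range(int(T / dt)):
    t = k * dt
    d = 0.5 * np.sin(2 * np.pi * t)          # unmodelled cross-coupling disturbance
    e = v_ref - v                            # error-based sliding variable
    u_ff = (a * v_ref) / b                   # nominal feedforward term
    u = u_ff + K * np.clip(e / phi, -1, 1)   # saturated (chattering-free) switching term
    v += dt * (-a * v + b * u + d)           # forward-Euler plant update
    vs.append(v)

final_err = abs(v_ref - np.mean(vs[-500:]))  # residual tracking error
```

The saturation (boundary layer) trades a small steady-state error for the elimination of chattering, and the feedforward term carries the nominal load so the switching gain only has to dominate the disturbance.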
Abstract:
Ampcalculator (AMPC) is a Mathematica-based program that was made publicly available some time ago by Unterdorfer and Ecker. It enables the user to compute several processes at one loop (up to O(p^4)) in SU(3) chiral perturbation theory, including matrix elements and form factors for strong and non-leptonic weak processes with at most six external states. It was used to compute some novel processes and was tested against well-known results by the original authors. Here we present the results of several thorough checks of the package; the exhaustive checks performed by the original authors are not publicly available, hence the present effort. Some new results are obtained from the software, especially in the kaon odd-intrinsic-parity non-leptonic decay sector involving the coupling G_27. Another illustrative set of amplitudes we provide at tree level arises in the context of tau decays with several mesons, including quark-mass effects, of use to the BELLE experiment. All eight meson-meson scattering amplitudes have been checked. The kaon Compton amplitude has been checked, and a minor error in the published results has been pointed out. This exercise is tutorial-based, and several input and output notebooks are being made available as ancillary files on the arXiv. Some of the additional notebooks contain the explicit expressions we used for comparison with established results. The purpose is to encourage users to apply the software to their specific needs. An automatic amplitude generator of this type can provide error-free outputs that can be used as inputs for further simplification, and in varied scenarios such as applications of chiral perturbation theory at finite temperature, density and volume. It can also be used by students as a learning aid in low-energy hadron dynamics.
Abstract:
Narayanan R, Johnston D. Functional maps within a single neuron. J Neurophysiol 108: 2343-2351, 2012. First published August 29, 2012; doi:10.1152/jn.00530.2012.
The presence and plasticity of dendritic ion channels are well established. However, the literature is divided on what specific roles these dendritic ion channels play in neuronal information processing, and there is no consensus on why neuronal dendrites should express diverse ion channels with different expression profiles. In this review, we present a case for viewing dendritic information processing through the lens of the sensory map literature, where functional gradients within neurons are considered as maps on the neuronal topograph. Under such a framework, drawing analogies from the sensory map literature, we postulate that the formation of intraneuronal functional maps is driven by the twin objectives of efficiently encoding inputs that impinge along different dendritic locations and of retaining homeostasis in the face of changes that are required in the coding process. In arriving at this postulate, we relate intraneuronal map physiology to the vast literature on sensory maps and argue that such a metaphorical association provides a fresh conceptual framework for analyzing and understanding single-neuron information encoding. We also describe instances where the metaphor presents specific directions for research on intraneuronal maps, derived from analogous pursuits in the sensory map literature. We suggest that this perspective offers a thesis for why neurons should express and alter ion channels in their dendrites and provides a framework under which active dendrites could be related to neural coding, learning theory, and homeostasis.
Abstract:
Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study evaluates the performance of ANNs in simulating and predicting groundwater levels in the uplands of a tropical coastal riparian wetland. The study compares two network architectures, the Feed Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), each trained with five algorithms, namely the Levenberg-Marquardt algorithm, the Resilient Backpropagation algorithm, the BFGS Quasi-Newton algorithm, the Scaled Conjugate Gradient algorithm, and the Fletcher-Reeves Conjugate Gradient algorithm, by simulating the water levels in a well in the study area. Two cases are analyzed: one with four inputs to the networks and one with eight inputs. The two architectures and five algorithms are compared in both cases to determine the best-performing combination for simulating and predicting the process satisfactorily. An ad hoc (trial-and-error) method is followed to optimize the network structure in all cases. On the whole, the results show that the ANNs simulated and predicted the water levels in the well with fair accuracy, as is evident from the low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and the high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient (taken as the performance measures for calibrating the networks). On comparing the predicted groundwater levels with those at the observation well, the FFNN trained with the Fletcher-Reeves Conjugate Gradient algorithm using four inputs outperformed all other combinations.
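The performance measures named above can be computed as in the sketch below. Note that NRMSE and RRMSE have several common normalizations in the literature; the range-based and mean-based forms here are assumptions, and the data are invented:

```python
import numpy as np

def performance_measures(obs, sim):
    """Return NRMSE, RRMSE, Nash-Sutcliffe efficiency and correlation coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    nrmse = rmse / (obs.max() - obs.min())  # normalized by the observed range
    rrmse = rmse / obs.mean()               # relative to the observed mean
    nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return nrmse, rrmse, nse, r

# toy observed vs. simulated water levels (m)
obs = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
sim = np.array([10.1, 10.9, 12.2, 12.8, 14.1])
nrmse, rrmse, nse, r = performance_measures(obs, sim)
```

NSE close to 1 and low NRMSE/RRMSE together indicate that the network reproduces both the dynamics and the magnitude of the observed levels.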
Abstract:
Most existing WCET estimation methods directly estimate the execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Directly estimating ET may lead to a highly pessimistic estimate, since these methods may implicitly be combining the worst-case IC with the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC, CPI = f(IC). This is ascertained by computing the covariance matrix and studying scatter plots of CPI versus IC. The IC and CPI values are obtained by running benchmarks with a large number of inputs on the cycle-accurate architectural simulator SimpleScalar, on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI-versus-IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP), and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, the CPI-versus-IC relationship is either random or CPI remains constant while IC varies; in these cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method is observed to yield tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
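The estimate WCET = SWIC * f(SWIC) can be sketched with a least-squares fit of CPI against IC. The measurements and the SWIC bound below are invented to mimic a benchmark whose CPI falls linearly as IC grows; the paper's f(IC) would come from real SimpleScalar runs:

```python
import numpy as np

# hypothetical (IC, CPI) measurements from running a benchmark on many inputs
ic = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
cpi = np.array([2.0, 1.8, 1.6, 1.4, 1.2])  # inverse CPI-versus-IC relationship

# fit CPI = f(IC) as a least-squares line
slope, intercept = np.polyfit(ic, cpi, 1)
def f(n):
    return slope * n + intercept

swic = 6000.0            # worst-case instruction count from the ILP bound (assumed)
wcet = swic * f(swic)    # estimated WCET in cycles: SWIC * f(SWIC)
```

Because f extrapolates beyond the measured IC range, a sharply decreasing fit can underestimate CPI at SWIC, which is why the text falls back to measured maximum cycles in that regime.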