996 results for Wadeable Streams Assessment
Abstract:
Impacts of climate change on hydrology are assessed by downscaling large-scale general circulation model (GCM) outputs of climate variables to local-scale hydrologic variables. This modelling approach is characterized by uncertainties resulting from the use of different models, different scenarios, etc. Modelling uncertainty in climate change impact assessment involves assigning weights to GCMs and scenarios, based on their performance, and providing a weighted mean projection for the future. This projection is then used for water resources planning and adaptation to combat the adverse impacts of climate change. The present article summarizes the recent published work of the authors on uncertainty modelling and the development of adaptation strategies to climate change for the Mahanadi river in India.
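A minimal sketch of the weighting idea described above, with made-up skill scores and projected changes rather than the authors' data:

```python
import numpy as np

# Hypothetical skill scores for three GCM/scenario combinations
# (illustrative values only; the paper derives weights from model performance).
skill = np.array([0.8, 0.5, 0.7])
weights = skill / skill.sum()           # normalize so the weights sum to 1

# Hypothetical projected changes in streamflow (%) from each GCM/scenario
projections = np.array([-12.0, -5.0, -9.0])

weighted_mean = np.dot(weights, projections)
print(f"Weighted mean projection: {weighted_mean:.1f} %")
```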
Abstract:
Nanotechnology is a new technology that is generating a lot of interest among academics, practitioners and scientists. Critical research is being carried out in this area all over the world. Governments are creating policy initiatives to promote developments in nanoscale science and technology. Private investment is also seeing a rising trend. A large number of academic institutions and national laboratories have set up research centers working on the multiple applications of nanotechnology. A wide range of applications is claimed for nanotechnology, from materials, chemicals, textiles and semiconductors to drug delivery systems and diagnostics. Nanotechnology is considered to be the next big wave of technology after information technology and biotechnology. In fact, nanotechnology holds the promise of advances that exceed those achieved in recent decades in computers and biotechnology. Much of the interest in nanotechnology could also be due to the fact that enormous monetary benefits are expected from nanotechnology-based products. According to the NSF, revenues from nanotechnology could touch $1 trillion by 2015. However, much of these benefits are projections. Realizing the claimed benefits requires successful development of nanoscience and nanotechnology research efforts. That is, the journey from invention to innovation has to be completed. For this to happen, the technology has to flow from laboratory to market. Nanoscience and nanotechnology research efforts have to come out in the form of new products, new processes, and new platforms. India has also started its Nanoscience and Nanotechnology development program under its 10th Five Year Plan, and funds worth Rs. 1 billion have been allocated for Nanoscience and Nanotechnology Research and Development. The aim of this paper is to assess Nanoscience and Nanotechnology initiatives in India. We propose a conceptual model derived from the resource-based view of innovation. We have developed a structured questionnaire to measure the constructs in the conceptual model. Responses have been collected from 115 scientists and engineers working in the field of Nanoscience and Nanotechnology. The responses have been further analyzed using Principal Component Analysis, Cluster Analysis and Regression Analysis.
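A hedged sketch of such a PCA / cluster / regression pipeline on synthetic questionnaire data; the dimensions, outcome variable and parameters below are illustrative assumptions, not the study's instrument:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(115, 20))                 # 115 respondents, 20 survey items (synthetic)
y = X[:, :5].mean(axis=1) + rng.normal(scale=0.1, size=115)  # synthetic outcome variable

# Principal Component Analysis: reduce the items to a few underlying constructs
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=4).fit(X_std)
scores = pca.transform(X_std)

# Cluster Analysis: group respondents with similar response profiles
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Regression Analysis: relate the outcome to the component scores
reg = LinearRegression().fit(scores, y)
print(pca.explained_variance_ratio_, np.bincount(labels), reg.coef_)
```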
Abstract:
The indigenous cloud forests in the Taita Hills have suffered substantial degradation for several centuries due to agricultural expansion. Currently, only 1% of the original forested area remains preserved in this region. Furthermore, climate change poses an imminent threat to the local economy and environmental sustainability. In such circumstances, elaborating tools to reconcile socioeconomic growth with natural resources conservation is an enormous challenge. This dissertation tackles essential aspects for understanding the ongoing agricultural activities in the Taita Hills and their potential environmental consequences in the future. Initially, alternative methods were designed to improve our understanding of the ongoing agricultural activities. Namely, methods for agricultural survey planning and for estimating evapotranspiration were evaluated, taking into account a number of limitations regarding data and resource availability. Next, this dissertation evaluates how upcoming agricultural expansion, together with climate change, will affect the natural resources in the Taita Hills up to the year 2030. The driving forces of agricultural expansion in the region were identified in order to delineate future landscape scenarios and evaluate potential impacts from the soil and water conservation point of view. In order to investigate these issues and answer the research questions, this dissertation combined state-of-the-art modelling tools with well-established statistical methods. The results indicate that, if current trends persist, agricultural areas will occupy roughly 60% of the study area by 2030. Although the simulated land use changes will certainly increase soil erosion figures, new croplands are likely to emerge predominantly in the lowlands, which comprise areas with lower soil erosion potential. By 2030, rainfall erosivity is likely to increase during April and November due to climate change. Finally, this thesis addresses the potential impacts of agricultural expansion and climate change on Irrigation Water Requirements (IWR), which is considered another major issue in the context of the relations between land use and climate. Although the simulations indicate that climate change will likely increase annual volumes of rainfall during the following decades, IWR will continue to increase due to agricultural expansion. By 2030, new cropland areas may cause an increase of approximately 40% in the annual volume of water necessary for irrigation.
Abstract:
In this study, we derive a fast, novel time-domain algorithm to compute the nth-order moment of the power spectral density of the photoelectric current as measured in laser-Doppler flowmetry (LDF). It is well established in the LDF literature that these moments are closely related to fundamental physiological parameters, i.e. the concentration of moving erythrocytes and blood flow. In particular, we take advantage of the link between moments in the Fourier domain and fractional derivatives in the temporal domain. Using Parseval's theorem, we establish an exact analytical equivalence between the time-domain expression and the conventional frequency-domain counterpart. Moreover, we demonstrate the appropriateness of estimating the zeroth-, first- and second-order moments using Monte Carlo simulations. Finally, we briefly discuss the feasibility of implementing the proposed algorithm in hardware.
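As a hedged sketch of the underlying identity (constant factors depending on the Fourier convention are omitted, and the paper's actual algorithm is not reproduced here): writing $I(\omega)$ for the Fourier transform of the photocurrent $i(t)$, $P(\omega)=|I(\omega)|^2$ for its power spectral density, and $D^{\alpha}$ for the fractional derivative of order $\alpha$,
$$
M_n \;=\; \int_0^{\infty} \omega^{\,n}\, P(\omega)\, \mathrm{d}\omega,
\qquad
\mathcal{F}\{D^{\alpha} i\}(\omega) \;=\; (\mathrm{i}\omega)^{\alpha}\, I(\omega),
$$
so Parseval's theorem gives, up to a constant,
$$
\int_{-\infty}^{\infty} |\omega|^{\,n}\, |I(\omega)|^{2}\, \mathrm{d}\omega
\;\propto\;
\int_{-\infty}^{\infty} \bigl| D^{\,n/2} i(t) \bigr|^{2}\, \mathrm{d}t,
$$
i.e. the nth spectral moment can be evaluated entirely in the time domain as the energy of an order-$n/2$ fractional derivative of the measured photocurrent.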
Abstract:
Protein structure validation is an important step in computational modeling and structure determination. Stereochemical assessment of protein structures examines internal parameters such as bond lengths and Ramachandran (phi, psi) angles. Gross structure prediction methods, such as the inverse folding procedure, and structure determination, especially at low resolution, can sometimes give rise to models that are incorrect due to assignment of misfolds or mistracing of electron density maps. Such errors are not reflected as strain in internal parameters. HARMONY is a procedure that examines the compatibility between the sequence and the structure of a protein by assigning scores to individual residues and their amino acid exchange patterns after considering their local environments. Local environments are described by the backbone conformation, solvent accessibility and hydrogen bonding patterns. We are now providing HARMONY through a web server such that users can submit their protein structure files and, if required, the alignment of homologous sequences. Scores are mapped onto the structure for subsequent examination, which is also useful for recognizing regions of possible local errors in protein structures. The HARMONY server is located at http://caps.ncbs.res.in/harmony/
Abstract:
One of the foremost design considerations in microelectronics miniaturization is the use of embedded passives, which provide a practical solution. In a typical circuit, over 80 percent of the electronic components are passives such as resistors, inductors, and capacitors, and these can take up almost 50 percent of the entire printed circuit board area. By integrating passive components within the substrate instead of on the surface, embedded passives reduce the system real estate, eliminate the need for discrete components and their assembly, enhance electrical performance and reliability, and potentially reduce the overall cost. Moreover, the technology is lead-free. Even with these advantages, embedded passive technology is at a relatively immature stage, and more characterization and optimization are needed for practical applications leading to its commercialization. This paper presents an entire process from design and fabrication to electrical characterization and reliability testing of embedded passives on a multilayered microvia organic substrate. Two test vehicles focusing on resistors and capacitors have been designed and fabricated. Embedded capacitors in this study are made with a polymer/ceramic nanocomposite (BaTiO3) material to take advantage of the low processing temperature of polymers and the relatively high dielectric constant of ceramics; the values of these capacitors range from 50 pF to 1.5 nF, with a capacitance per area of approximately 1.5 nF/cm^2. Limited high-frequency measurement of these capacitors was performed. Furthermore, reliability assessments of thermal shock and temperature-humidity tests based on JEDEC standards were carried out. Resistors used in this work have been of three types: 1) carbon-ink-based polymer thick film (PTF), 2) resistor foils with known sheet resistivities, which are laminated to the printed wiring board (PWB) during a sequential build-up (SBU) process, and 3) thin-film resistors plated by an electroless method. Realization of embedded resistors on a conventional board-level high-loss epoxy (~0.015 at 1 GHz) and a proposed low-loss BCB dielectric (~0.0008 at > 40 GHz) has been explored in this study. Ni-P and Ni-W-P alloys were plated using conventional electroless plating, and NiCr and NiCrAlSi foils were used for the foil transfer process. For the first time, Benzocyclobutene (BCB) has been proposed as a board-level dielectric for an advanced System-on-Package (SOP) module, primarily due to its attractive low-loss (for RF applications) and thin-film (for high-density wiring) properties. Although embedded passives are more reliable because they eliminate solder joint interconnects, they also introduce other concerns such as cracks, delamination and component instability. More layers may be needed to accommodate the embedded passives, and various materials within the substrate may cause significant thermo-mechanical stress due to coefficient of thermal expansion (CTE) mismatch. In this work, numerical models of embedded capacitors have been developed to qualitatively examine the effects of process conditions and electrical performance due to thermo-mechanical deformations. Also, a prototype working product with a board-level design including embedded resistors and capacitors is underway. Preliminary results of these are presented.
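A quick back-of-the-envelope check of the reported capacitance density; only the ~1.5 nF/cm^2 figure is taken from the abstract, while the permittivity and film thickness below are illustrative assumptions:

```python
EPS0 = 8.854e-12            # F/m, vacuum permittivity

def capacitance_density(eps_r, thickness_m):
    """Parallel-plate capacitance per unit area, returned in nF/cm^2."""
    c_per_m2 = EPS0 * eps_r / thickness_m   # F/m^2
    return c_per_m2 * 1e9 / 1e4             # convert F/m^2 -> nF/cm^2

# Hypothetical nanocomposite film: eps_r ~ 25, thickness ~ 15 um
print(capacitance_density(25, 15e-6))       # ~1.5 nF/cm^2, consistent with the reported density

# Footprint implied for the largest capacitor in the study (1.5 nF) at 1.5 nF/cm^2
print(1.5 / 1.5, "cm^2")
```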
Abstract:
In this paper, we present self-assessment schemes (SAS) for multiple agents performing a search mission over an unknown terrain. The agents are subject to limited communication and sensor ranges. The agents communicate and coordinate with their neighbours to arrive at route decisions. The self-assessment schemes proposed here have very low communication and computational overhead. The SAS also have attractive features such as scalability to a large number of agents and fast decision-making capability. SAS can be used with partial or complete information-sharing schemes during the search mission. We validate the performance of SAS using simulations on a large search space with 100 agents, with different information structures and self-assessment schemes. We also compare the results obtained using SAS with those of a previously proposed negotiation scheme. The simulation results show that SAS is scalable to a large number of agents and can perform as well as the negotiation schemes with a reduced communication requirement (almost 20% of that required for negotiation).
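A purely illustrative sketch of the general idea of an agent assessing its own route choice from locally shared information; the scoring rule, penalties, and data structures below are assumptions, not the paper's SAS:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    ident: int
    position: tuple                      # current cell on the search grid
    visited: set = field(default_factory=set)

    def self_assess(self, candidate_cells, neighbour_targets):
        """Score candidate cells locally, preferring unsearched, unclaimed, nearby cells."""
        best, best_score = None, float("-inf")
        for cell in candidate_cells:
            score = 0.0
            if cell in self.visited:
                score -= 1.0             # already searched by this agent
            if cell in neighbour_targets:
                score -= 2.0             # a neighbour already claims this cell
            # Manhattan travel cost from the current position
            score -= abs(cell[0] - self.position[0]) + abs(cell[1] - self.position[1])
            if score > best_score:
                best, best_score = cell, score
        return best

# Usage: a neighbour already claims (0, 1), so the agent picks a different cell
a = Agent(ident=1, position=(0, 0))
print(a.self_assess([(0, 1), (1, 1), (2, 0)], neighbour_targets={(0, 1)}))
```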
Abstract:
Purpose: To assess the effect of ultrasound modulation of near-infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. Methods: A unique method to estimate the phase of the modulated NIR light, making use of only time-averaged intensity measurements with a charge-coupled device camera, is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to an estimate of the optical scattering coefficient. A Monte Carlo model-based numerical estimation of the phase under ultrasound modulation is performed to verify the experimental results. Results: The results indicate that the ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed enhancement of the effective scattering coefficient in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is less for the stiffer (larger storage modulus) tissue mimicking a tumor necrotic core, compared to normal tissue. Conclusions: The ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by the NIR light, increasing the measured phase and causing the enhancement of the effective scattering coefficient. The ultrasound modulation of NIR light could thus provide a better estimate of the scattering coefficient. The observed local enhancement of the effective scattering coefficient in the ultrasound focal region is validated using both experimental measurements and Monte Carlo simulations. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3456441]
Abstract:
Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with a linear reward and cost structure and with the choice of selecting the streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profit by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first one (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second one considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first, the constraints come from bounding the utilization of an infinite-capacity server; in the second, the constraints arise from bounding the mean queue length of the same queue; and in the third, the finite capacity of the buffer is reflected as a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean queue length constraints are linear programs if the service times are exponentially distributed. The problems modeling the finite-capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
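A hedged illustration of a PUT-style linear program with a utilization bound; the rates, rewards, service time and bound below are made-up, and the paper's actual constraints involve the full MMPP structure rather than this simplified stream-selection model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: three Poisson arrival streams the modulating chain can select
lam = np.array([2.0, 5.0, 8.0])       # arrival rates of the streams
reward = np.array([4.0, 3.0, 2.5])    # reward per accepted customer of each stream
ES = 0.1                               # mean service time E[S]
rho_max = 0.8                          # utilization bound

# Decision variables: stationary probabilities p_i of the modulating Markov chain.
# Profit per unit time is sum_i p_i * lam_i * reward_i (linear in p), so PUT with the
# utilization constraint sum_i p_i * lam_i * E[S] <= rho_max is a linear program.
c = -(lam * reward)                    # linprog minimizes, so negate the objective
A_ub = [lam * ES]
b_ub = [rho_max]
A_eq = [np.ones(3)]
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print(res.x, -res.fun)                 # optimal stationary probabilities and profit per unit time
```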