77 results for "methodologies"
Abstract:
In this paper we propose a framework for optimum steering input determination for all-wheel steer vehicles (AWSVs) on rough terrain. The framework computes the steering input that minimizes the tracking error for a given trajectory. Unlike previous methodologies for computing steering inputs of car-like vehicles, the proposed methodology depends explicitly on the vehicle dynamics and can be extended to vehicles having an arbitrary number of steering inputs. A fully generic framework has been used to derive the vehicle dynamics, and a non-linear programming based constrained optimization approach has been used to compute the steering input considering the instantaneous vehicle dynamics, no-slip, and contact constraints of the vehicle. All-wheel steer vehicles have a special parallel steering ability in which the instantaneous centre of rotation (ICR) is at infinity. The proposed framework automatically enables the vehicle to choose between parallel steer and normal operation depending on the error with respect to the desired trajectory. The efficacy of the proposed framework is demonstrated by extensive uneven-terrain simulations, for trajectories with continuous or discontinuous velocity profiles.
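The optimization step can be sketched in a few lines. This is a minimal kinematic stand-in, not the paper's full dynamics-based formulation: the wheel positions, speed, yaw rate, and steering bounds are invented for illustration, and SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical wheel positions (m) of a 4-wheel AWSV in the body frame.
wheels = np.array([[1.0, 0.6], [1.0, -0.6], [-1.0, 0.6], [-1.0, -0.6]])

def tracking_error(delta, v, yaw_rate):
    # No-slip: each wheel should point along the velocity it sees when
    # the body moves at speed v with the given yaw rate.
    vx = v - yaw_rate * wheels[:, 1]
    vy = yaw_rate * wheels[:, 0]
    desired = np.arctan2(vy, vx)
    return float(np.sum((delta - desired) ** 2))

# Solve for the four steering angles, bounded to +/- 0.6 rad.
res = minimize(tracking_error, x0=np.zeros(4), args=(2.0, 0.3),
               bounds=[(-0.6, 0.6)] * 4)
```

Note that setting the desired yaw rate to zero makes all four optimal angles equal, i.e. the parallel-steer mode (ICR at infinity) the abstract describes.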
Abstract:
The objective of this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution, based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at a 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA). This is particularly effective for understanding the inundation variability, interpolating in time, or filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data. The downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of the inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water and Ocean Topography (SWOT) mission to develop and test the instrument simulator as well as to select the calibration/validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS datasets and to extend the SWOT benefits back in time to 1993.
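The regression variant of the PCA-based downscaling can be illustrated on synthetic data. This is a sketch only: an invented single-mode "fine-scale" field stands in for the 500-m MODIS maps, and its spatial sum stands in for the coarse GIEMS observation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(48)                                # 48 synthetic monthly steps
s = np.sin(2 * np.pi * t / 12) + 0.1 * t / 48    # seasonal cycle + slight trend
pattern = rng.random(400)                        # one spatial mode, 20x20 grid
fine = np.outer(s, pattern)                      # stand-in for MODIS fields
coarse = fine.sum(axis=1)                        # stand-in for GIEMS totals

# PCA of the fine-scale training fields (mean removed).
mean = fine.mean(axis=0)
X = fine - mean
U, S, Vt = np.linalg.svd(X, full_matrices=False)
comp = Vt[0]                                     # leading spatial pattern
scores = X @ comp                                # its temporal scores

# Linear regression: predict the PCA scores from the coarse series,
# then rebuild the fine-scale fields from the regressed scores.
A = np.c_[coarse, np.ones_like(coarse)]
beta, *_ = np.linalg.lstsq(A, scores, rcond=None)
recon = np.outer(A @ beta, comp) + mean
```

On this idealized one-mode example the reconstruction is exact; with real data the regression captures only the variability that the coarse record and the leading PCA modes share.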
Abstract:
With the development of deep sequencing methodologies, it has become important to construct site saturation mutant (SSM) libraries in which every nucleotide/codon in a gene is individually randomized. We describe methodologies for the rapid, efficient, and economical construction of such libraries using inverse polymerase chain reaction (PCR). We show that if the degenerate codon is in the middle of the mutagenic primer, there is an inherent PCR bias due to the thermodynamic mismatch penalty, which decreases the proportion of unique mutants. Introducing a nucleotide bias in the primer can alleviate the problem. Alternatively, if the degenerate codon is placed at the 5' end, there is no PCR bias, which results in a higher proportion of unique mutants. This also facilitates detection of deletion mutants resulting from errors during primer synthesis. This method can be used to rapidly generate SSM libraries for any gene or nucleotide sequence, which can subsequently be screened and analyzed by deep sequencing. (C) 2013 Elsevier Inc. All rights reserved.
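The primer layout with the degenerate codon at the 5' end reduces to plain string manipulation. The toy gene, NNK degeneracy, and flank length below are illustrative choices; a real inverse-PCR template is circular, so the flanks of the first codon would wrap around the plasmid rather than truncate.

```python
# Sketch: one inverse-PCR primer pair per codon, with the degenerate
# NNK codon placed at the 5' end of the forward primer.
gene = "ATGGCTAGCAAAGGTGAA"   # hypothetical 6-codon ORF

def complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))

def ssm_primers(gene, flank=9):
    primers = []
    for i in range(0, len(gene), 3):
        fwd = "NNK" + gene[i + 3:i + 3 + flank]            # degenerate codon at 5' end
        rev = complement(gene[max(0, i - flank):i])[::-1]  # reverse complement of upstream flank
        primers.append((i // 3, fwd, rev))
    return primers

pairs = ssm_primers(gene)
```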
Abstract:
A variety of methods are available to estimate future solar radiation (SR) scenarios at spatial scales appropriate for local climate change impact assessment. However, no clear guidelines are available in the literature to decide which methodologies are most suitable for different applications. Three methodologies to guide the estimation of SR are discussed in this study, namely, Case 1: SR is measured; Case 2: SR is measured but sparse; and Case 3: SR is not measured. In Case 1, future SR scenarios are derived using several downscaling methodologies that transfer the simulated large-scale information of global climate models to a local scale (measurements). In Case 2, the SR was first estimated at the local scale for a longer time period using sparse measured records, and then future scenarios were derived using several downscaling methodologies. In Case 3, the SR was first estimated at a regional scale for a longer time period using complete or sparse measured records of SR, from which SR at the local scale was estimated. Finally, the future scenarios were derived using several downscaling methodologies. The lack of observed SR data, especially in developing countries, has hindered various climate change impact studies. Hence, this was further elaborated by applying the Case 3 methodology to the semi-arid Malaprabha reservoir catchment in southern India. A support vector machine was used in downscaling SR. Future monthly scenarios of SR were estimated from simulations of the third-generation Canadian General Circulation Model (CGCM3) for various SRES emission scenarios (A1B, A2, B1, and COMMIT). Results indicated a projected decrease of 0.4 to 12.2 W m⁻² yr⁻¹ in SR during the period 2001-2100 across the 4 scenarios. SR was calculated using the modified Hargreaves method. The decreasing trends for the future were in agreement with the simulations of SR obtained directly from the CGCM3 model for the 4 scenarios.
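A support-vector downscaling step of the kind described above can be sketched with scikit-learn (an assumed library choice; the study's predictors and data are not reproduced here), mapping synthetic large-scale predictors to a local SR series:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Synthetic stand-ins for large-scale GCM predictor fields and the
# local monthly solar radiation (W m^-2) they should downscale to.
X = rng.normal(size=(240, 3))
y = 200 + 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=2, size=240)

# Scale the predictors, fit the SVR on a calibration period, then
# predict the held-out (here: "future") period.
model = make_pipeline(StandardScaler(), SVR(C=100, epsilon=1.0))
model.fit(X[:180], y[:180])                 # calibration period
pred = model.predict(X[180:])               # validation period
rmse = float(np.sqrt(np.mean((pred - y[180:]) ** 2)))
```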
Abstract:
A controlled laboratory experiment was carried out on forty Indian male college students to evaluate the effect of the indoor thermal environment on occupants' responses and thermal comfort. During the experiment, indoor temperature varied from 21 °C to 33 °C, and variables such as relative humidity, airflow, air temperature, and radiant temperature were recorded along with the subjects' physiological parameters (skin (T_sk) and oral (T_c) temperatures) and subjective thermal sensation responses. From T_sk and T_c, body temperature (T_b) was evaluated. The subjective Thermal Sensation Vote (TSV) was recorded using the ASHRAE 7-point scale. In the PMV model, Fanger's T_sk equation was used to accommodate the adaptive response. Step-wise regression analysis showed that T_b was a better predictor of TSV than T_sk and T_c. Regional skin temperature responses, suppressed sweating without dripping, a lower sweating threshold temperature, and a higher cutaneous threshold for sweating were observed as thermal adaptive responses. These adaptive responses cannot be considered in the PMV model. To incorporate the subjective adaptive response, mean skin temperature (T_sk) is considered in the dry heat loss calculation. Along with this, the PMV model and two other methodologies are adopted to calculate PMV values, and the results are compared. However, the recent literature on measured sweat rates in Indians is limited, and the assumption of a constant E_rsw in the PMV model needs to be corrected. Using measured T_sk in the PMV model (Method 1), the thermal comfort zone corresponding to -0.5 <= PMV <= 0.5 was evaluated as 22.46-25.41 °C with a neutral temperature of 23.91 °C; using the TSV response, a wider comfort zone of 23.25-26.32 °C was estimated, with a neutral temperature of 24.83 °C, which widened further with the TSV-PPD_new relation. It was observed that the PMV model overestimated the actual thermal response.
Interestingly, these subjects were found to be less sensitive to heat but more sensitive to cold. A new TSV-PPD relation (PPD_new) was obtained from the population distribution of the TSV response, with an asymmetric distribution of hot-cold thermal sensation responses from Indians. Calculations of human thermal stress according to the steady-state energy balance underlying the PMV model seem inadequate to evaluate the thermal sensation of Indians. Relevance to industry: The purpose of this paper is to estimate the thermal comfort zone and optimum temperature for Indians. It also highlights that the PMV model seems inadequate to evaluate subjective thermal perception in Indians. These results can be used in feedback control of HVAC systems in residential and industrial buildings. (C) 2014 Elsevier B.V. All rights reserved.
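The comfort-zone arithmetic above can be illustrated with a toy calculation: regress TSV on indoor temperature and solve for the band -0.5 <= TSV <= 0.5 and the neutral point TSV = 0. The votes below are synthetic, not the study's measurements, and the slope is an invented value.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic votes on the ASHRAE 7-point scale at 21-33 degC.
temp = rng.uniform(21, 33, 200)
tsv = 0.33 * (temp - 24.8) + rng.normal(scale=0.3, size=200)

# Linear fit TSV = a*T + b; the comfort zone solves -0.5 <= TSV <= 0.5
# and the neutral temperature solves TSV = 0.
a, b = np.polyfit(temp, tsv, 1)
t_neutral = -b / a
t_low, t_high = (-0.5 - b) / a, (0.5 - b) / a
```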
Abstract:
Non-invasive 3D imaging in materials and medical research involves methodologies such as X-ray imaging, MRI, fluorescence and optical coherence tomography, and NIR absorption imaging, providing global morphological/density/absorption changes of the hidden components. However, molecular information on such buried materials has been elusive. In this article we demonstrate the observation of molecular structural information of materials hidden/buried at depth using Raman scattering. Typically, Raman spectroscopic observations are made at fixed collection angles, such as 90°, 135°, and 180°, except in spatially offset Raman scattering (SORS) (only backscattering-based collection of photons) and transmission techniques. Such specific collection angles restrict the observation of Raman signals to the surface of the materials or regions near it. Universal Multiple Angle Raman Spectroscopy (UMARS) presented here employs the principles of (a) penetration of photons to depth followed by diffuse propagation through non-absorbing media by multiple scattering, and (b) detection of signals from all observable angles.
Abstract:
The Computational Analysis of Novel Drug Opportunities (CANDO) platform (http://protinfo.org/cando) uses similarity of compound-proteome interaction signatures to infer homology of compound/drug behavior. We constructed interaction signatures for 3733 human-ingestible compounds covering 48,278 protein structures mapping to 2030 indications, based on basic science methodologies to predict and analyze protein structure, function, and interactions developed by us and others. Our signature comparison and ranking approach yielded benchmarking accuracies of 12-25% for 1439 indications with at least two approved compounds. We prospectively validated 49/82 'high value' predictions from nine studies covering seven indications, with comparable or better activity than existing drugs, which serve as novel repurposed therapeutics. Our approach may be generalized to compounds beyond those approved by the FDA, and can also consider mutations in protein structures to enable personalization. Our platform provides a holistic multiscale modeling framework of complex atomic, molecular, and physiological systems with broader applications in medicine and engineering.
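The signature comparison and ranking step reduces to ranking compounds by similarity of their interaction vectors. Below is a cosine-similarity sketch on random stand-in signatures; the platform's actual scoring and signature construction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical compound-proteome interaction signatures: one score per
# protein structure for each of 8 compounds over 50 structures.
signatures = rng.random((8, 50))
query = 0                                   # compound to find analogs for

# Cosine similarity of every compound's signature to the query's,
# then rank: behaviorally similar compounds come first.
norms = np.linalg.norm(signatures, axis=1)
cos = signatures @ signatures[query] / (norms * norms[query])
ranking = np.argsort(-cos)                  # most similar first
```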
Abstract:
Chiral auxiliaries are used for the NMR spectroscopic study of enantiomers. Often the presence of impurities, severe overlap of peaks, excessive line broadening, and complex multiplicity patterns restrict chiral analysis using the 1D ¹H NMR spectrum. There are a few approaches to resolve the overlapped peaks. One approach is to use a suitable chiral auxiliary, which induces a large chemical shift difference between the discriminated peaks (Δδ(R,S)) and minimizes the overlap. Another approach is to design appropriate NMR experiments to circumvent some of these problems, viz., enhancing spectral resolution, unravelling the superimposed spectra of enantiomers, and reducing spectral complexity. A large number of NMR techniques, such as two-dimensional selective F₁ decoupling, RES-TOCSY, multiple quantum detection, frequency-selective homodecoupling, band-selective homodecoupling, and broadband homodecoupling, have been reported for this purpose. Many of these techniques have aided chiral analysis of molecules of diverse functionality in the presence of chiral auxiliaries. The present review summarizes recently reported NMR experimental methodologies, with special emphasis on the work carried out in the authors' laboratory.
Abstract:
A brief account of the basic principles and methodologies of the MRI technique, from its beginning, is outlined. The final pulse sequence used for MRI using Fourier imaging (phase encoding), echo-planar imaging (EPI) for detection of a whole plane in a single excitation, and T₁ and T₂ contrast enhancement is explained. The various associated methods, such as MR spectroscopy, flow measurement (MR angiography), lung imaging using hyperpolarized ¹²⁹Xe and ³He, and functional imaging (fMRI), are described.
Abstract:
Ice volume estimates are crucial for assessing water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40,775 km². An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated against local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km³, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with measured local ice thicknesses. However, total volume estimates from area-related relations are larger than those from the other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of a careful and critical evaluation.
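An area-volume relation of the power-law form V = c·A^γ, the simplest of the method families compared above, is a one-liner per glacier. The coefficients and glacier areas below are illustrative assumptions, not the study's calibrated values.

```python
# Area-volume scaling V = c * A**gamma summed over a small inventory.
# Coefficients and areas are illustrative, not the study's values.
c, gamma = 0.034, 1.375        # V in km^3 for A in km^2

areas = [5.2, 18.0, 310.0]     # hypothetical glacier areas (km^2)
volumes = [c * a ** gamma for a in areas]
total = sum(volumes)
```

The superlinear exponent is why, as the abstract notes, a few large glaciers dominate the total and area-related relations are sensitive to the observed glacier area.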
Abstract:
Large-scale estimates of the area of terrestrial surface waters have greatly improved over time, in particular through the development of multi-satellite methodologies, but the generally coarse spatial resolution (tens of km) of global observations is still inadequate for many ecological applications. The goal of this study is to introduce a new, globally applicable downscaling method and to demonstrate its applicability by deriving fine-resolution results from coarse global inundation estimates. The downscaling procedure predicts the location of surface water cover with an inundation probability map that was generated by bagged decision trees using globally available topographic and hydrographic information from the SRTM-derived HydroSHEDS database and trained on the wetland extent of the GLC2000 global land cover map. We applied the downscaling technique to the Global Inundation Extent from Multi-Satellites (GIEMS) dataset to produce a new high-resolution inundation map at a pixel size of 15 arc-seconds, termed GIEMS-D15. GIEMS-D15 represents three states of land surface inundation extent: mean annual minimum (total area, 6.5 × 10⁶ km²), mean annual maximum (12.1 × 10⁶ km²), and long-term maximum (17.3 × 10⁶ km²); the latter depicts the largest surface water area of any global map to date. While the accuracy of GIEMS-D15 reflects distribution errors introduced by the downscaling process as well as errors from the original satellite estimates, overall accuracy is good yet spatially variable. A comparison against regional wetland cover maps generated from independent observations shows that the results adequately represent large floodplains and wetlands. GIEMS-D15 offers a higher-resolution delineation of inundated areas than previously available for the assessment of global freshwater resources and the study of large floodplain and wetland ecosystems.
The technique of applying inundation probabilities also allows for coupling with coarse-scale hydro-climatological model simulations. (C) 2014 Elsevier Inc. All rights reserved.
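The bagged-decision-tree probability mapping can be sketched with scikit-learn (an assumed library choice): synthetic per-pixel predictors and labels stand in for the HydroSHEDS terrain attributes and the GLC2000 wetland mask.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(4)
# Synthetic per-pixel predictors (e.g. relative elevation, slope) and a
# wetland / no-wetland training label standing in for GLC2000.
X = rng.random((1000, 2))
y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=1000) < 0.6).astype(int)

# Bagged decision trees (scikit-learn's default base estimator is a
# decision tree); predict_proba gives a per-pixel inundation probability.
model = BaggingClassifier(n_estimators=25, random_state=0).fit(X, y)
prob = model.predict_proba(X)[:, 1]
```

In the downscaling itself, these probabilities would then be thresholded within each coarse cell until the coarse inundated fraction is matched.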
Abstract:
Precise information on streamflows is of major importance for planning and monitoring of water resources schemes related to hydropower, water supply, irrigation, and flood control, and for maintaining ecosystems. Engineers encounter challenges when streamflow data are either unavailable or inadequate at target locations. To address these challenges, there have been efforts to develop methodologies that facilitate prediction of streamflow at ungauged sites. Conventionally, time-intensive and data-exhaustive rainfall-runoff models are used to arrive at streamflow at ungauged sites. Most recent studies show improved methods based on regionalization using Flow Duration Curves (FDCs). An FDC is a graphical representation of streamflow variability: a plot of streamflow values against their corresponding exceedance probabilities, which are determined using a plotting position formula. It provides information on the percentage of time any specified magnitude of streamflow is equaled or exceeded. The present study assesses the effectiveness of two methods to predict streamflow at ungauged sites by application to catchments in the Mahanadi river basin, India. The methods considered are (i) the regional flow duration curve method, and (ii) the area ratio method. The first method involves (a) the development of regression relationships between percentile flows and attributes of catchments in the study area, (b) use of the relationships to construct a regional FDC for the ungauged site, and (c) use of a spatial interpolation technique to decode the information in the FDC to construct a streamflow time series for the ungauged site. The area ratio method is conventionally used to transfer streamflow-related information from gauged to ungauged sites. Attributes considered for the analysis include variables representing hydrology, climatology, topography, land-use/land-cover, and soil properties corresponding to catchments in the study area.
Effectiveness of the presented methods is assessed using jackknife cross-validation. Conclusions based on the study are presented and discussed. (C) 2015 The Authors. Published by Elsevier B.V.
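Both ingredients, an FDC built from a plotting position formula and the area-ratio transfer, fit in a few lines. The flows and catchment areas below are invented, and the Weibull plotting position is one common choice, not necessarily the one used in the study.

```python
import numpy as np

# FDC via the Weibull plotting position p = m / (n + 1), then a simple
# area-ratio transfer to a hypothetical ungauged catchment.
flows = np.array([12.0, 3.5, 7.8, 20.1, 1.2, 9.4, 15.0, 5.6])  # gauged flows (m^3/s)
sorted_q = np.sort(flows)[::-1]               # descending order
n = len(sorted_q)
exceedance = np.arange(1, n + 1) / (n + 1)    # P(flow equaled or exceeded)

area_gauged, area_ungauged = 450.0, 120.0     # catchment areas (km^2)
fdc_ungauged = sorted_q * (area_ungauged / area_gauged)
```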
Abstract:
Climate change is most likely to introduce an additional stress to already stressed water systems in developing countries. Climate change is inherently linked with the hydrological cycle and is expected to cause significant alterations in regional water resources systems, necessitating measures for adaptation and mitigation. Increasing temperatures, for example, are likely to change precipitation patterns, resulting in alterations of regional water availability, the evapotranspirative water demand of crops and vegetation, extremes of floods and droughts, and water quality. A comprehensive assessment of regional hydrological impacts of climate change is thus necessary. Global climate model simulations provide future projections of the climate system taking into consideration changes in external forcings, such as atmospheric carbon dioxide and aerosols, especially those resulting from anthropogenic emissions. However, such simulations are typically run at a coarse scale and are not equipped to reproduce regional hydrological processes. This paper summarizes recent research on the assessment of climate change impacts on regional hydrology, addressing the scale and physical-process mismatch issues. Particular attention is given to changes in water availability, irrigation demands, and water quality. This paper also includes a description of the methodologies developed to address uncertainties in the projections resulting from incomplete knowledge about the future evolution of human-induced emissions and from using multiple climate models. Approaches for investigating possible causes of historically observed changes in regional hydrological variables are also discussed. Illustrations of all the above-mentioned methods are provided for Indian regions with a view to specifically aiding water management in India.
Abstract:
What are the scope and responsibilities of design? This work partially answers this question by employing a normative approach to the design of a biomass cook stove. The study debates the sufficiency of existing design methodologies in the light of the capability approach. A case study of the biomass cook stove Astra Ole has elaborated the theoretical constructs of the capability approach, which, in turn, has structured insights from the field to evaluate the product. The capability-approach-based methodology is also used prescriptively to design the mould for rapid dissemination of the Astra Ole.
Abstract:
We propose data acquisition from continuous-time signals belonging to the class of real-valued trigonometric polynomials using an event-triggered sampling paradigm. The sampling schemes proposed are: level crossing (LC), close-to-extrema LC, and extrema sampling. An analysis of the robustness of these schemes to jitter and bandpass additive Gaussian noise is presented. In general, these sampling schemes result in non-uniformly spaced sample instants. We address the issue of signal reconstruction from the acquired data set by imposing a sparsity structure on the signal model to circumvent the problem of gap and density constraints. The recovery performance is contrasted among the various schemes and with a random sampling scheme. In the proposed approach, both sampling and reconstruction are non-linear operations; in contrast to the random sampling methodologies proposed in compressive sensing, these techniques may be implemented in practice with low-power circuitry.
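Level-crossing sampling, the first of the three schemes, can be sketched directly: record a sample whenever the signal crosses one of a fixed set of amplitude levels. The trigonometric polynomial and the level set below are arbitrary choices for illustration; the resulting sample instants are non-uniform, as noted above.

```python
import numpy as np

t = np.linspace(0, 1, 10000)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)  # trig polynomial
levels = np.array([-0.75, -0.25, 0.25, 0.75])  # arbitrary LC levels

# A crossing of level L occurs wherever sign(x - L) changes between
# consecutive grid points; keep the earlier point as the sample instant.
idx = []
for L in levels:
    s = np.sign(x - L)
    idx.extend(np.flatnonzero(np.diff(s) != 0))
idx = np.sort(np.array(idx))
samples_t, samples_x = t[idx], x[idx]
```

Each recorded amplitude is (up to the grid spacing) one of the known levels, so in hardware only the crossing times need to be stored, which is what makes low-power implementations attractive.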