57 results for Improvement programs
Abstract:
Due to large-scale afforestation programs and forest conservation legislation, India's total forest area seems to have stabilized or even increased. In spite of such efforts, forest fragmentation and degradation continue, with forests subject to increasing anthropogenic pressure. This fragmentation and degradation is shifting forest cover from very dense to moderately dense and open forest: during 2005-2007, 253 km² of very dense forest was converted to moderately dense forest, open forest, scrub and non-forest. Similarly, 4,120 km² of moderately dense forest degraded to open forest, scrub and non-forest, resulting in a net loss of 936 km² of moderately dense forest, and a further 4,335 km² of open forest degraded to scrub and non-forest. Coupled with anthropogenic pressure, climate change is likely to be an added stress on forests. Forest sector programs and policies are major determinants of the status of forests and, potentially, of their resilience to the projected impacts of climate change. This study reviews forest policies and programs and their implications for the status of forests and for the vulnerability of forests to projected climate change. It concludes that forest conservation and development policies and programs need to be reoriented to incorporate climate change impacts, vulnerability and adaptation.
Abstract:
Improved performance of plasma in raw engine exhaust treatment is reported. A new type of reactor, referred to as a cross-flow dielectric barrier discharge (DBD) reactor, was used, in which the gas flow is perpendicular to the corona electrode. In a raw exhaust environment, the cross-flow (radial-flow) reactor exhibits superior NOx removal performance compared to a reactor with axial gas flow. Experiments were conducted at flow rates ranging from 2 L/min to 25 L/min. The plasma-assisted barrier discharge reactor has shown encouraging results in NOx removal at high flow rates.
Abstract:
The intracellular pathogen sensor NOD2 has been implicated in the regulation of a wide range of anti-inflammatory responses critical during the development of a diverse array of inflammatory diseases; however, the underlying molecular details are still imprecisely understood. In this study, we demonstrate that NOD2 programs macrophages to trigger Notch1 signaling. Signaling perturbations and genetic approaches suggest signaling integration through cross-talk between Notch1 and PI3K during the NOD2-triggered expression of a multitude of immunological parameters, including COX-2/PGE2 and IL-10. NOD2 stimulation enhanced active recruitment of CSL/RBP-Jk to the COX-2 promoter in vivo. Intriguingly, nitric oxide assumes critical importance in NOD2-mediated activation of Notch1 signaling, as iNOS(-/-) macrophages exhibited a compromised ability to execute NOD2-triggered Notch1 signaling responses. Correlative evidence demonstrates that this mechanism operates in vivo in brain and splenocytes derived from wild-type, but not iNOS(-/-), mice. Importantly, NOD2-driven activation of the Notch1-PI3K signaling axis contributes to its capacity to impart survival of macrophages against TNF-alpha- or IFN-gamma-mediated apoptosis and to the resolution of inflammation. The current investigation identifies Notch1 and PI3K as signaling cohorts involved in the NOD2-triggered expression of a battery of genes associated with anti-inflammatory functions. These findings serve as a paradigm for understanding the pathogenesis of NOD2-associated inflammatory diseases and pave the way toward the development of novel therapeutics.
Abstract:
The amount of reactive power margin available in a system determines its proximity to voltage instability under normal and emergency conditions: the greater the reactive power margin, the better the system's security, and vice versa. A hypothetical way of improving the reactive margin of a synchronous generator is to reduce its real power generation within its mega volt-ampere (MVA) rating. This reduction in real power generation will affect the power contract agreements the generator has entered in the electricity market, so the benefit it foregoes will have to be compensated by paying it a lost opportunity cost. The objective of this study is threefold. First, the reactive power margins of the generators are evaluated. Second, they are improved using a reactive power optimization technique and optimally placed unified power flow controllers. Third, the reactive power capacity exchanges along the tie-lines are evaluated under base-case and improved conditions. A detailed analysis of all the reactive power sources and sinks scattered throughout the network is carried out. Studies are performed on a real-life, three-zone, 72-bus equivalent of the Indian southern grid, considering normal and contingency conditions, with base-case and optimized results presented.
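To make the trade-off above concrete, here is a minimal Python sketch, assuming the generator's reactive limit is approximated by its MVA capability circle (Q_max = sqrt(S² − P²)); the numbers are illustrative, not from the study:

```python
import math

def reactive_margin(s_mva, p_mw, q_mvar):
    """Reactive margin of a generator whose limit is approximated by
    its MVA capability circle: Q_max = sqrt(S^2 - P^2)."""
    q_max = math.sqrt(max(s_mva**2 - p_mw**2, 0.0))
    return q_max - q_mvar

# Backing off real power from 90 to 80 MW enlarges the margin;
# this forgone generation is what the lost opportunity cost pays for.
print(reactive_margin(100.0, 90.0, 30.0))  # ~13.6 MVAr
print(reactive_margin(100.0, 80.0, 30.0))  # 30.0 MVAr
```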
Abstract:
This paper investigates the diversity-multiplexing gain tradeoff (DMT) of a time-division duplex (TDD) single-input multiple-output (SIMO) system with perfect channel state information (CSI) at the receiver (CSIR) and partial CSI at the transmitter (CSIT). The partial CSIT is acquired through a training sequence from the receiver to the transmitter. The training sequence is chosen in an intelligent manner based on the CSIR, to reduce the training length by a factor of r, the number of receive antennas. We show that, for the proposed training scheme and a given channel coherence time, the diversity order increases linearly with r for nonzero multiplexing gain. This is a significant improvement over conventional orthogonal training schemes.
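For context, here is a small sketch of the classical CSIR-only diversity-multiplexing tradeoff of Zheng and Tse, the textbook baseline that conventional training schemes are measured against; it is not the paper's partial-CSIT scheme:

```python
def dmt_point(m, n, g):
    """Classical CSIR-only MIMO DMT (Zheng-Tse): d(g) = (m-g)(n-g)
    at integer multiplexing gains g, piecewise linear in between."""
    if not 0 <= g <= min(m, n):
        raise ValueError("multiplexing gain out of range")
    k = int(g)
    d_k = (m - k) * (n - k)
    if g == k:
        return d_k
    d_k1 = (m - k - 1) * (n - k - 1)
    return d_k + (g - k) * (d_k1 - d_k)

# SIMO (1 x r): d(g) = r*(1 - g), so at fixed g the diversity
# order grows linearly with the number of receive antennas r.
for r in (2, 4):
    print([dmt_point(1, r, g / 4) for g in range(5)])
```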
Abstract:
The ability of Static Var Compensators (SVCs) to rapidly and continuously control reactive power in response to changing system conditions can improve system stability and also increase power transfer in the transmission system. This paper concerns the application of strategically located SVCs to enhance transient stability limits and the direct evaluation of their effect on transient stability using a Structure Preserving Energy Function (SPEF). The SVC control system is modelled from its steady-state control characteristic to accurately simulate its effect on transient stability. Treating the SVC as a voltage-dependent reactive power load leads to the derivation of a path-independent SPEF for the SVC. Case studies on a 10-machine test system with multiple SVCs illustrate the effects of SVCs on transient stability and the accurate prediction of those effects.
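As a rough illustration of the steady-state control characteristic mentioned above, here is a hedged per-unit sketch of an SVC as a voltage regulator with a droop slope that saturates to a fixed capacitor or inductor at its limits; all parameter values are assumptions, not the paper's:

```python
def svc_reactive_power(v, v_ref=1.0, slope=0.03, b_cap=1.0, b_ind=1.0):
    """Steady-state SVC characteristic (per unit): inside the control
    range the SVC regulates bus voltage toward v_ref along a droop
    slope; outside it, it behaves as a fixed capacitor or inductor.
    Returns reactive power injected into the bus (+ = capacitive)."""
    i_reg = (v_ref - v) / slope   # current the regulator asks for
    i_cap_max = b_cap * v         # fixed-capacitor limit
    i_ind_max = -b_ind * v        # fixed-inductor limit
    i = min(max(i_reg, i_ind_max), i_cap_max)
    return v * i

# depressed voltage: the SVC saturates as a fixed capacitor
print(svc_reactive_power(0.95))  # ~0.90 pu capacitive
```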
Abstract:
This paper is devoted to improving the range of operation (linearity range) of the chimney weir (consisting of a rectangular weir or vertical slot over an inward trapezium). A new and more elegant optimization procedure is developed to analyse the discharge-head relationship in the weir. It is shown that a rectangular weir placed over an inverted V-notch of depth 0.90d gives the maximum operating range, where d is the overall depth of the inward trapezoidal weir (from the crest to the vertex). For all flows in the rectangular portion, the discharge is proportional to the first power of the head h, measured above a reference plane located 0.292d below the weir crest, in the range 0.90d ≤ h ≤ 7.474d, within a maximum error of ±1.5% from the theoretical discharge. The optimum range of operation of the newly designed weir is 200% greater than that of the chimney weir designed by Keshava Murthy and Giridhar, and nearly 950% greater than that of the inverted V-notch. Experiments with two weirs having half crest widths of 0.10 and 0.12 m yield a constant average coefficient of discharge of 0.634 and confirm the theory. The application of the weir to the design of rectangular grit chamber outlets is emphasized, in that the datum for the linear discharge-head relationship is below the crest level of the weir.
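A minimal sketch of the linear discharge-head law reported above; the calibration constant k is hypothetical and would in practice come from the coefficient of discharge and the weir geometry:

```python
def weir_discharge(h, d, k):
    """Linear head-discharge law sketched from the abstract: within
    0.90d <= h <= 7.474d (h measured above the datum 0.292d below
    the crest), Q = k*h to within +/-1.5%. k is an illustrative
    calibration constant, not a value from the paper."""
    if not 0.90 * d <= h <= 7.474 * d:
        raise ValueError("head outside the linear operating range")
    return k * h
```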
Abstract:
With deregulation, the total transfer capability (TTC) calculation, which is the basis for evaluating available transfer capability (ATC), has become very significant. TTC is an important index in power markets with large volumes of inter-area power exchanges and wheeling transactions taking place on an hourly basis, and its computation helps achieve viable technical and commercial transmission operation. The aim of this paper is to evaluate TTC across the interconnections and to improve it using a reactive optimization technique and UPFC devices. Computations are carried out for the normal case and for contingency cases such as single-line, tie-line and generator outages. Base and optimized results are presented, and the results show how reactive optimization and the unified power flow controller help to improve system conditions. The repeated power flow method is used to calculate TTC because of its ease of implementation. A case study is carried out on a 205-bus equivalent system, part of the Indian southern grid. Parameters such as voltage magnitude, L-index, minimum singular value and MW losses are computed to analyze system performance.
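A hedged sketch of the repeated power flow idea used above for TTC: ramp the inter-area transfer until an operating limit is violated, then bisect. run_power_flow is a hypothetical callback, not an API from the paper:

```python
def ttc_repeated_power_flow(run_power_flow, base_case, step=10.0, tol=0.1):
    """Repeated power flow for TTC: increase the area-to-area transfer
    until a limit (voltage, thermal, stability) is hit, then bisect.
    run_power_flow(case, transfer_mw) -> True if all limits hold."""
    lo = 0.0
    # grow the transfer until the first violation
    while run_power_flow(base_case, lo + step):
        lo += step
    hi = lo + step
    # bisect between the last feasible and first infeasible transfer
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_power_flow(base_case, mid):
            lo = mid
        else:
            hi = mid
    return lo  # MW transfer level taken as the TTC
```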
Abstract:
Recently, composite reinforcements, in which materials and material forms such as strips, grids, and strips with anchors are combined depending on requirements, have proven to be effective in various ground improvement applications. The composite geogrids studied in this paper belong to this category of composite reinforcement and are useful for bearing capacity improvement. The paper presents an evaluation of the results of bearing capacity tests conducted on a composite geogrid made of a composite reinforcement consisting of steel and cement mortar. The study shows that the behavior of composite reinforcements follows the general trends observed for conventional geogrids with respect to the depth of the first layer below the footing, the number of layers of reinforcement, and the vertical spacing of the reinforcement. Results show that the performance is comparable to that of a conventional polymer geogrid.
Abstract:
Large-grain synchronous dataflow graphs, or multi-rate graphs, have the distinct feature that the nodes of the dataflow graph fire at different rates. Such multi-rate large-grain dataflow graphs have been widely regarded as a powerful programming model for DSP applications. In this paper, we propose a method to minimize the buffer storage requirement in constructing rate-optimal compile-time (MBRO) schedules for multi-rate dataflow graphs. We demonstrate that the constraints to minimize buffer storage while executing at the optimal computation rate (i.e. the maximum possible computation rate without storage constraints) can be formulated as a unified linear programming problem in our framework. A novel feature of our method is that, in constructing the rate-optimal schedule, it directly minimizes the memory requirement by choosing the schedule times of nodes appropriately. Lastly, a new circular-arc interval graph coloring algorithm is proposed to further reduce the memory requirement by allowing buffer sharing among the arcs of the multi-rate dataflow graph. We have constructed an experimental testbed which implements our MBRO scheduling algorithm as well as (i) the widely used periodic admissible parallel schedules (also known as block schedules) proposed by Lee and Messerschmitt (IEEE Transactions on Computers, vol. 36, no. 1, 1987, pp. 24-35), (ii) the optimal scheduling buffer allocation (OSBA) algorithm of Ning and Gao (Conference Record of the Twentieth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Charleston, SC, Jan. 10-13, 1993, pp. 29-42), and (iii) the multi-rate software pipelining (MRSP) algorithm (Govindarajan and Gao, in Proceedings of the 1993 International Conference on Application Specific Array Processors, Venice, Italy, Oct. 25-27, 1993, pp. 77-88). Schedules generated for a number of random dataflow graphs and for a set of DSP application programs using the different scheduling methods are compared. The experimental results demonstrate a significant improvement (10-20%) in buffer requirements for the MBRO schedules compared to the schedules generated by the other three methods, without sacrificing the computation rate. The MBRO method also gives a 20% average improvement in computation rate compared to Lee and Messerschmitt's block scheduling method.
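As background for the multi-rate firing behavior described above, here is a short sketch that solves the SDF balance equations of Lee and Messerschmitt for the smallest integer repetition vector; the buffer-minimizing linear program itself is beyond a few lines:

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, nodes):
    """Solve the SDF balance equations r[u]*p = r[v]*c for the
    smallest integer firing rates. edges is a list of
    (src, dst, produce, consume); assumes one connected graph."""
    rate = {nodes[0]: Fraction(1)}
    pending = list(edges)
    while pending:
        remaining = []
        for u, v, p, c in pending:
            if u in rate and v not in rate:
                rate[v] = rate[u] * p / c       # propagate forward
            elif v in rate and u not in rate:
                rate[u] = rate[v] * c / p       # propagate backward
            elif u not in rate and v not in rate:
                remaining.append((u, v, p, c))  # revisit later
            elif rate[u] * p != rate[v] * c:
                raise ValueError("inconsistent sample-rate graph")
        if len(remaining) == len(pending):
            raise ValueError("graph is not connected")
        pending = remaining
    scale = lcm(*(f.denominator for f in rate.values()))
    return {n: int(f * scale) for n, f in rate.items()}

# a 3-node multi-rate chain: A -2/3-> B -1/2-> C
print(repetition_vector([("A", "B", 2, 3), ("B", "C", 1, 2)],
                        ["A", "B", "C"]))  # {'A': 3, 'B': 2, 'C': 1}
```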
Abstract:
Analytical studies are carried out to minimize acquisition time in phase-lock loop (PLL) applications using aiding functions. A second-order aided PLL is realized with the help of the quasi-stationary approach to verify the acquisition behavior in the absence of noise. Acquisition time is measured both from the study of the LPF output transient and by employing a lock detecting and indicating circuit to cross-check experimental and analytical results. A closed-form solution is obtained for evaluating the acquisition time under different aiding functions. The aiding signal is simple and economical and can be used with state-of-the-art hardware.
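A toy baseband simulation of second-order PLL frequency acquisition, in the spirit of the measurement described above; the loop gains, sample rate and lock criterion are all illustrative assumptions, not the paper's circuit:

```python
import math

def pll_lock_time(df_hz=50.0, kp=200.0, ki=2.0e4, fs=1.0e5, tol=0.05):
    """Second-order PLL with a proportional+integral loop filter
    pulling in a frequency offset df_hz. Acquisition time is taken
    as the instant after which the phase error stays below tol.
    Returns seconds, or None if lock is never reached."""
    dt = 1.0 / fs
    theta_in = theta_out = integ = 0.0
    locked_since = None
    for n in range(int(2.0 * fs)):          # simulate up to 2 s
        theta_in += 2.0 * math.pi * df_hz * dt
        err = math.sin(theta_in - theta_out)  # phase detector
        integ += ki * err * dt                # integral branch
        theta_out += (kp * err + integ) * dt  # VCO phase update
        if abs(err) < tol:
            if locked_since is None:
                locked_since = n * dt
        else:
            locked_since = None               # lost it; reset timer
    return locked_since

print(pll_lock_time())
```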
Abstract:
The three-dimensional structure of a protein provides major insights into its function, and protein structure comparison has implications for functional and evolutionary studies. A structural alphabet (SA) is a library of local protein structure prototypes that can abstract every part of the protein main chain conformation. Protein Blocks (PBs) is a widely used SA, composed of 16 prototypes, each representing a pentapeptide backbone conformation defined in terms of dihedral angles. Through this description, 3D structural information can be translated into a 1D sequence of PBs. In a previous study, we used this approach to compare protein structures encoded in terms of PBs: a classical sequence alignment procedure based on dynamic programming was used, with a dedicated PB substitution matrix (SM). This PB-based pairwise structural alignment method gave excellent performance when compared to other established methods for mining. In this study, we have (i) refined the SMs and (ii) improved the Protein Block alignment methodology (named iPBA). The SM was normalized with regard to sequence and structural similarity. Alignment of protein structures often involves similar structural regions separated by dissimilar stretches; a dynamic programming algorithm that weighs these local similar stretches has been designed. Amino acid substitution scores were also coupled linearly with the PB substitutions. iPBA improves (i) the mining efficiency rate by 6.8%, and (ii) more than 82% of the alignments have a better quality. A higher efficiency in aligning multi-domain proteins was also demonstrated. The quality of alignment is better than DALI and MUSTANG in 81.3% of cases. Our study has thus resulted in an impressive improvement in the quality of protein structural alignment.
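For readers unfamiliar with the underlying machinery, here is a generic dynamic-programming alignment sketch over a toy alphabet; iPBA's actual substitution matrices, gap treatment and local-stretch weighting differ:

```python
def align_score(seq1, seq2, sub, gap=-2.0):
    """Global Needleman-Wunsch-style alignment score. 'sub' is any
    dict mapping letter pairs to substitution scores, standing in
    for a PB substitution matrix."""
    n, m = len(seq1), len(seq2)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i-1][j-1] + sub[(seq1[i-1], seq2[j-1])],
                           dp[i-1][j] + gap,   # gap in seq2
                           dp[i][j-1] + gap)   # gap in seq1
    return dp[n][m]

# toy 2-letter "alphabet" standing in for the 16 Protein Blocks
toy_sub = {(a, b): (3.0 if a == b else -1.0) for a in "ab" for b in "ab"}
print(align_score("aabba", "abba", toy_sub))  # 10.0: 4 matches, 1 gap
```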
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but also control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the two happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Our compiler thus automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
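A toy illustration of the kernel-mapping idea only (not MEGHA's actual heuristic or cost model): each kernel is greedily placed on the device that minimizes its own estimated run time plus transfer penalties from already-placed predecessors:

```python
def map_kernels(kernels, transfer_cost):
    """Greedy CPU/GPU placement sketch. kernels is a list of
    (name, cpu_time, gpu_time, predecessors) in topological order;
    a predecessor on the other device incurs one transfer_cost."""
    placement = {}
    for name, cpu_t, gpu_t, preds in kernels:
        def cost(device, own_time):
            moves = sum(1 for p in preds if placement[p] != device)
            return own_time + moves * transfer_cost
        placement[name] = ("cpu" if cost("cpu", cpu_t) <= cost("gpu", gpu_t)
                           else "gpu")
    return placement

kernels = [("k0", 5.0, 1.0, []), ("k1", 1.0, 4.0, ["k0"]),
           ("k2", 6.0, 1.5, ["k0"])]
print(map_kernels(kernels, transfer_cost=2.0))
# {'k0': 'gpu', 'k1': 'cpu', 'k2': 'gpu'}
```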
Abstract:
Advertisements (ads) are the main revenue earner for television (TV) broadcasters. As TV reaches a large audience, it is the best medium for advertising products and services. With the emergence of digital TV, it is important for broadcasters to provide an intelligent service along dimensions such as program features, ad features, viewers’ interest and sponsors’ preference. We present an automatic ad recommendation algorithm that selects a set of ads by considering these dimensions and semantically matches them with programs. Features of the ad video are captured in terms of annotations, which are grouped into a number of predefined semantic categories using a categorization technique. A fuzzy categorical-data clustering technique is then applied to the categorized data to select better-suited ads for a particular program. Since the same ad can be recommended for more than one program depending on multiple parameters, fuzzy clustering is well suited to ad recommendation. The relative fuzzy score, called the “degree of membership”, calculated for each ad indicates the membership of that ad in different program clusters. A subjective evaluation of the algorithm by 10 different people rated it with a high success score.
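The “degree of membership” can be illustrated with the standard fuzzy c-means membership formula; this is a generic stand-in, since the paper's categorical-data variant defines its own dissimilarity measure:

```python
def membership_degrees(dists, m=2.0):
    """Fuzzy membership of one ad in each program cluster, via the
    standard fuzzy c-means formula u_j = 1 / sum_k (d_j/d_k)^(2/(m-1)).
    dists[j] is the ad's dissimilarity to cluster j; degrees sum to 1."""
    exp = 2.0 / (m - 1.0)
    # an ad sitting exactly on a cluster centre belongs fully to it
    if any(d == 0.0 for d in dists):
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    return [1.0 / sum((dj / dk) ** exp for dk in dists) for dj in dists]

# one ad scored against three program clusters
print(membership_degrees([0.5, 1.0, 2.0]))  # ~[0.76, 0.19, 0.05]
```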