908 results for Optimized eco-productive paradigm
Abstract:
Productive coexistence and coexistence gain of populations were studied using nine years of data from field experiments on Taxodium ascendens-intercrop systems in Lixiahe, Jiangsu Province, China. A theoretical framework for productive coexistence in agroforestry was developed, and interaction patterns between trees and intercrops were described within it. A model framework was developed to describe the coexistence gain and interaction of populations in T. ascendens-intercrop systems. Facilitation and resource sharing were identified as the main contributions to the advantage of species combination in agroforestry. The population interaction model developed in this study was accepted as a description of population interactions in T. ascendens-intercrop systems because it explained a high proportion of the variance in the experimental data and fitted the observations well in most intercropping types. The model provides flexibility for describing different patterns of intra- and inter-specific interaction, and its coefficients were used to determine the ecological compatibility of species. Managed T. ascendens-intercrop systems were advantageous compared with a monoculture of trees or arable crops. In T. ascendens stands up to the age of three, arable crops contributed about 50-80% of the total biomass yield of the agroforestry system. The diameter and height growth of T. ascendens were not significantly influenced by the intercrops, indicating that intercropping under trees produced extra yields without depressing tree growth. When the trees were young (during the first three years), T. ascendens did not depress the crop yields, and a land equivalent ratio greater than unity was obtained together with a high yield of both components.
The diameter and height of the trees were similar in four spacing configurations with an equal number of trees per hectare up to the age of eight, but wider between-row open ranges were beneficial for the intercrops. The relationship between open ranges and species coexistence was also analysed, and the distribution of soil nutrients was studied.
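The land equivalent ratio (LER) invoked above has a standard definition: the sum, over component species, of each species' yield when intercropped divided by its yield in monoculture. A minimal sketch with hypothetical yield figures (the numbers are illustrative, not from the experiments):

```python
def land_equivalent_ratio(intercrop_yields, monoculture_yields):
    """LER = sum of (yield in intercrop / yield in monoculture) over species.
    LER > 1 means separate monocultures would need more land than the mixture."""
    return sum(yi / ym for yi, ym in zip(intercrop_yields, monoculture_yields))

# Hypothetical biomass yields (t/ha): [trees, arable crop]
ler = land_equivalent_ratio([4.0, 1.5], [5.0, 3.0])
print(ler)  # 0.8 + 0.5 = 1.3 > 1: the combination is advantageous
```

An LER of 1.3 says the monocultures would need 30% more land to match the intercrop's output, which is the sense in which "greater than unity" marks an advantage.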
Abstract:
High-performance video standards use prediction techniques to achieve high picture quality at low bit rates; the type of prediction determines the bit rate and the image quality. Intra prediction achieves high video quality with a significant reduction in bit rate. This paper presents an area-optimized architecture for intra prediction in H.264 decoding at HDTV resolution, with a target of 60 fps. The architecture was validated on a Virtex-5 FPGA-based platform and achieves a frame rate of 64 fps. It is based on a multi-level memory hierarchy to reduce latency and ensure optimum resource utilization, and it removes redundancy by reusing the same functional blocks across different modes. The proposed architecture uses only 13% of the total LUTs available on the Xilinx FPGA XC5VLX50T.
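For illustration, H.264's 4x4 DC intra mode (one of the modes a decoder architecture like this must implement, and the simplest to state) predicts every sample of a block from the rounded mean of its reconstructed neighbours. A software model of just that mode, not of the paper's hardware:

```python
def intra4x4_dc(top, left):
    """H.264 4x4 luma DC prediction with both neighbours available:
    every sample is (sum of 4 top + 4 left neighbour samples + 4) >> 3."""
    assert len(top) == 4 and len(left) == 4
    dc = (sum(top) + sum(left) + 4) >> 3
    return [[dc] * 4 for _ in range(4)]

pred = intra4x4_dc([100, 102, 104, 106], [98, 99, 101, 103])
print(pred[0])  # [102, 102, 102, 102]
```

The other 4x4 modes read the same neighbour buffers and differ in how they combine them, which is what makes the functional-block reuse mentioned above possible.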
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess Mean-Squared Error (MSE) distortion caused by channel errors. It is assumed that the source encoder is channel-agnostic, i.e., that a Vector Quantization (VQ) based compression scheme designed for a noiseless channel is employed. The index output by the source encoder is sent over a noisy memoryless discrete symmetric channel, and the possibly incorrect received index is decoded by the corresponding VQ decoder. The output of the VQ decoder is processed by a receive filter to obtain an estimate of the source instantiation. In the sequel, the optimum linear receive filter structure that minimizes the overall MSE is derived and shown to have a minimum mean-squared error (MMSE) receiver-type structure. Further, expressions are derived for the resulting high-rate MSE performance, which is compared with the MSE obtained using conventional VQ as well as channel-optimized VQ. The accuracy of the expressions is demonstrated through Monte Carlo simulations.
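The receiver structure can be illustrated on a toy instance: a scalar Gaussian source, a 1-bit quantizer, and a binary symmetric channel, with the receive "filter" reduced to a single Wiener gain w = E[xy]/E[y²] estimated from samples. All of this is a down-scaled sketch of the setting, not the paper's vector derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar source, 1-bit VQ with codebook {-c, +c}, BSC with crossover eps.
n, eps = 200_000, 0.2
x = rng.standard_normal(n)
c = np.sqrt(2 / np.pi)            # optimal 1-bit levels for a N(0, 1) source
idx = (x >= 0).astype(int)        # channel-agnostic encoder index
flip = rng.random(n) < eps        # channel errors on the index bit
y = np.where(idx ^ flip, c, -c)   # conventional VQ decoder output

# Linear MMSE receive filter, here a single scalar Wiener gain
w = np.mean(x * y) / np.mean(y * y)
mse_plain = np.mean((x - y) ** 2)         # conventional VQ alone
mse_filtered = np.mean((x - w * y) ** 2)  # VQ decoder + receive filter
print(mse_filtered < mse_plain)  # True: filtering reduces the overall MSE
```

In this scalar case the gain works out to w ≈ 1 − 2ε, i.e. the filter shrinks the decoder output toward the source mean as the channel worsens.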
Abstract:
This paper is concerned with off-line signature verification. Four different types of pattern representation schemes have been implemented, viz., geometric features, moment-based representations, envelope characteristics, and tree-structured wavelet features. The individual feature components in a representation are weighted by their pattern characterization capability using genetic algorithms. The conclusions of the four subsystems (each depending on one representation scheme) are combined to form a final decision on the validity of the signature. Threshold-based classifiers (including the traditional confidence-interval classifier), neighbourhood classifiers, and their combinations were studied, and the benefits of using forged signatures for training purposes were assessed. Experimental results show that combining the feature-based classifiers increases verification accuracy. (C) 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
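The traditional confidence-interval classifier named above can be sketched as follows: learn per-feature intervals from genuine signatures, then accept a questioned signature if the weighted fraction of its features falling inside their intervals clears a threshold. The feature values and weights below are invented for illustration; in the paper the weights come from genetic algorithms:

```python
import statistics

def train_confidence_intervals(genuine_features, k=2.0):
    """Per-feature interval [mean - k*std, mean + k*std] from genuine samples."""
    columns = list(zip(*genuine_features))
    return [(statistics.mean(c) - k * statistics.stdev(c),
             statistics.mean(c) + k * statistics.stdev(c)) for c in columns]

def verify(features, intervals, weights, threshold=0.5):
    """Weighted vote: fraction of feature weight landing inside the intervals."""
    score = sum(w for f, (lo, hi), w in zip(features, intervals, weights)
                if lo <= f <= hi) / sum(weights)
    return score >= threshold

genuine = [[1.0, 5.0], [1.1, 5.2], [0.9, 4.9], [1.05, 5.1]]  # hypothetical
iv = train_confidence_intervals(genuine)
print(verify([1.0, 5.05], iv, weights=[0.7, 0.3]))  # genuine-like -> True
print(verify([2.5, 9.0], iv, weights=[0.7, 0.3]))   # far outside -> False
```

Forged samples, when available, can then be used to tune k and the acceptance threshold, which is one way the assessed benefit of forgeries in training can enter.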
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess distortion caused by channel errors. The index output by the source encoder is sent over a fading discrete binary symmetric channel, and the possibly incorrect received index is mapped to the corresponding codeword by a Vector Quantization (VQ) decoder at the receiver. The output of the VQ decoder is then processed by a receive filter to obtain an estimate of the source instantiation. The distortion performance is analyzed under a weighted mean square error (WMSE) criterion, and the optimum receive filter that minimizes the expected distortion is derived for two different cases of fading. It is shown that the performance of the system with the receive filter is strictly better than that of a conventional VQ, and that the difference becomes more significant as the number of bits transmitted increases. Theoretical expressions for upper and lower bounds on the WMSE performance of the system with the receive filter over a Rayleigh flat-fading channel are derived. The design of a receive filter in the presence of channel mismatch is also studied, and it is shown that the minimax solution is obtained by designing the receive filter for the worst possible channel. Simulation results are presented to validate the theoretical expressions and illustrate the benefits of receive filtering.
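The minimax claim can be checked numerically on a toy scalar version of the problem: a 1-bit quantizer with levels ±√(2/π) for a N(0,1) source, a BSC(ε) channel, and a scalar receive gain w, for which the MSE has the closed form 1 − 2w(1−2ε)C + w²C with C = 2/π. This reduction and formula are my simplification, not the paper's WMSE setting:

```python
import numpy as np

C = 2 / np.pi  # squared quantizer level of the optimal 1-bit VQ on N(0, 1)

def mse(w, eps):
    """Analytical MSE of the scalar 1-bit VQ over a BSC(eps), receive gain w."""
    return 1 - 2 * w * (1 - 2 * eps) * C + w * w * C

eps_grid = np.linspace(0.0, 0.3, 31)       # channel uncertainty set
gains = 1 - 2 * eps_grid                   # optimal gain for each assumed eps
worst_case = [max(mse(w, e) for e in eps_grid) for w in gains]
best = int(np.argmin(worst_case))
print(float(eps_grid[best]))  # 0.3: design for the worst channel in the set
```

The worst-case MSE over the uncertainty set is minimized by the gain designed for ε = 0.3, matching the minimax conclusion.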
Abstract:
We consider a stochastic differential equation (SDE) model of slotted Aloha with the retransmission probability as the associated parameter. We formulate the problem in both (a) the finite-horizon and (b) the infinite-horizon average-cost settings. We apply the algorithm of [3] for the first setting, while for the second we adapt a related algorithm from [2] that was originally developed in the simulation optimization framework. In the first setting, we obtain an optimal parameter trajectory that prescribes the parameter to use at any given instant, while in the second setting we obtain an optimal time-invariant parameter. Our algorithms are seen to exhibit good performance.
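For context, the role the retransmission probability plays is visible already in the classical static analysis of slotted Aloha (not the paper's SDE model): with n backlogged nodes each transmitting with probability p, a slot succeeds iff exactly one transmits, and the success probability n·p·(1−p)^(n−1) peaks at p = 1/n:

```python
def success_probability(n, p):
    """Slotted Aloha: probability that exactly one of n backlogged
    nodes transmits in a slot (i.e., the slot carries a success)."""
    return n * p * (1 - p) ** (n - 1)

n = 10
grid = [k / 1000 for k in range(1, 1000)]
best_p = max(grid, key=lambda p: success_probability(n, p))
print(best_p)  # 0.1, i.e. 1/n
```

A time-invariant optimum like this corresponds to the infinite-horizon setting; the finite-horizon trajectory instead adapts the parameter over time as conditions change.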
Abstract:
Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A scatternet is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a polling policy that aims to achieve increased system throughput and reduced packet delays, while providing reasonably good fairness among all traffic flows in a Bluetooth scatternet. Experimental results from our proposed algorithm show performance improvements over a well-known existing algorithm.
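As a baseline against which such policies are measured, the simplest intra-piconet scheme is pure round robin: the master polls slaves cyclically, and a poll of an empty queue wastes the slot. A sketch of that baseline, not of the paper's proposed policy:

```python
from collections import deque

def poll_round_robin(queues, slots):
    """Master polls slaves cyclically for `slots` polls; each poll serves
    one packet if the polled slave has any queued, else the slot is wasted."""
    served, order = [], deque(range(len(queues)))
    for _ in range(slots):
        slave = order[0]
        order.rotate(-1)
        if queues[slave]:
            served.append((slave, queues[slave].pop(0)))
    return served

qs = [["a1", "a2"], [], ["c1"]]   # per-slave packet queues (hypothetical)
print(poll_round_robin(qs, 4))    # [(0, 'a1'), (2, 'c1'), (0, 'a2')]
```

The wasted poll of slave 1's empty queue in this trace is exactly the inefficiency that throughput- and fairness-aware polling policies try to remove.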
Abstract:
High-rate analysis of channel-optimized vector quantization
This paper considers the high-rate performance of channel-optimized source coding for noisy discrete symmetric channels with random index assignment. Specifically, with mean squared error (MSE) as the performance metric, an upper bound on the asymptotic (i.e., high-rate) distortion is derived by assuming a general structure on the codebook. This structure enables extension of the analysis of the channel-optimized source quantizer to one with a singular point density: for channels with small errors, the point density that minimizes the upper bound is continuous, while as the error rate increases, the point density becomes singular. The extent of the singularity is also characterized. The accuracy of the expressions obtained is verified through Monte Carlo simulations.
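The flavour of such high-rate Monte Carlo checks can be shown on the simplest noiseless case: a uniform scalar quantizer on a U(0,1) source, whose high-rate distortion is Δ²/12 with Δ = 2^(−R). This classical baseline is mine for illustration; the paper's bounds concern the noisy-channel, random-index-assignment setting:

```python
import random

def quantize_mse(rate, n=200_000, seed=1):
    """Monte Carlo MSE of a uniform scalar quantizer on U(0, 1) at `rate` bits.
    High-rate theory predicts distortion Delta^2 / 12 with Delta = 2**-rate."""
    random.seed(seed)
    levels = 2 ** rate
    err2 = 0.0
    for _ in range(n):
        x = random.random()
        q = (min(int(x * levels), levels - 1) + 0.5) / levels  # cell midpoint
        err2 += (x - q) ** 2
    return err2 / n

for r in (2, 4, 6):
    print(r, quantize_mse(r), (2.0 ** -r) ** 2 / 12)  # simulated vs predicted
```

The simulated MSE tracks Δ²/12 closely at every rate, and the agreement tightens as the rate grows, which is the high-rate regime the analysis addresses.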
Abstract:
It is being realized that the traditional closed-door, market-driven approaches to drug discovery may not be the best-suited model for diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for these patients, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained by limitations on collaboration and on confidential sharing of resources, hampers opportunities for bringing in expertise from diverse fields, and these limitations hinder the possibility of lowering the cost of drug discovery. The Open Source Drug Discovery project, initiated by the Council of Scientific and Industrial Research, India, has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches, and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drug discovery. Since the inventions are community-generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner by multiple participating companies, with majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. (C) 2011 Elsevier Ltd. All rights reserved.