967 results for "optimal fishing effort"

Relevance:

20.00%

Publisher:

Abstract:

Aquatic ecosystems perform numerous valuable environmental functions. They recycle nutrients, purify water, recharge ground water, augment and maintain stream flow, and provide habitat for a wide variety of flora and fauna as well as recreation for people. Rapid population growth accompanied by unplanned development has polluted surface waters with residential, agricultural, commercial and industrial wastes and effluents, and has reduced the number of water bodies. Increased demands for drainage of wetlands have been accommodated by channelisation, resulting in further loss of stream habitat; as a consequence, aquatic organisms are becoming extinct or imperiled in increasing numbers, and many beneficial uses of water, including drinking, swimming and fishing, have been impaired. Various anthropogenic activities have altered the physical, chemical and biological processes within aquatic ecosystems. An integrated and accelerated effort toward environmental restoration and preservation is needed to stop further degradation of these fragile ecosystems. Failure to restore them will result in sharply increased environmental costs later, in the extinction of species or ecosystem types, and in permanent ecological damage.


Stirred tank bioreactors, employed in the production of a variety of biologically active chemicals, are operated in batch, fed-batch, or continuous mode. The optimal design of a bioreactor depends on the kinetics of the biological process as well as on the performance criteria (yield, productivity, etc.) under consideration. In this paper, a general framework is proposed for addressing the two key issues in optimal bioreactor design: (i) the choice of the best operating mode and (ii) the corresponding flow rate trajectories. The optimal bioreactor design problem is formulated with the initial conditions and the inlet and outlet flow rate trajectories as decision variables, and with multiple performance criteria (yield, productivity, etc.) as objective functions. A computational methodology based on a genetic algorithm is developed to solve this challenging multiobjective optimization problem with multiple decision variables. The applicability of the algorithm is illustrated by solving two challenging problems from the bioreactor optimization literature.
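The genetic-algorithm search over flow rate trajectories described above can be sketched in a few lines. Everything below is an illustrative stand-in, not the paper's model: the `objectives` function is an invented toy kinetics, and the two criteria are combined by simple weighted-sum scalarization rather than a true multiobjective (Pareto) formulation.

```python
import random

random.seed(0)

# Hypothetical bioreactor "model": yield and productivity computed from a
# discretized feed-rate trajectory u (purely illustrative, not from the paper).
def objectives(u):
    total_feed = sum(u)
    yield_ = total_feed / (1.0 + 0.1 * total_feed)   # saturating yield
    productivity = yield_ / (1.0 + 0.05 * len(u))    # time-penalized
    return yield_, productivity

def fitness(u, w=0.5):
    y, p = objectives(u)
    return w * y + (1 - w) * p   # weighted-sum scalarization of two criteria

def evolve(pop_size=30, horizon=10, gens=50, u_max=2.0):
    # population of candidate feed-rate trajectories
    pop = [[random.uniform(0, u_max) for _ in range(horizon)]
           for _ in range(pop_size)]
    for _ in range(gens):
        # tournament selection
        parents = [max(random.sample(pop, 3), key=fitness)
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, horizon)
            c = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < 0.2:                # mutation
                i = random.randrange(horizon)
                c[i] = random.uniform(0, u_max)
            children.append(c)
        pop = parents[:pop_size - len(children)] + children
        pop.sort(key=fitness, reverse=True)
    return pop[0]

best = evolve()
```

In a faithful implementation, `objectives` would come from integrating the bioreactor's kinetic ODEs over the candidate trajectory, and a Pareto-ranking scheme (e.g. non-dominated sorting) would replace the weighted sum.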


In an effort to design an efficient platform for siRNA delivery, we combine all-atom classical and quantum simulations to study the binding of small interfering RNA (siRNA) by a pristine single-wall carbon nanotube (SWCNT). Our results show that siRNA binds strongly to the SWCNT surface by unzipping its base pairs, and that the propensity for unzipping increases with the diameter of the SWCNT. The unzipping and subsequent wrapping events are initiated and driven by van der Waals interactions between the aromatic rings of the siRNA nucleobases and the SWCNT surface. However, molecular dynamics (MD) simulations of double-strand DNA (dsDNA) of the same sequence show that the dsDNA undergoes much less unzipping and wrapping on the SWCNT over the 70 ns simulation time scale. This interesting difference is due to the smaller interaction energy of the thymidine of dsDNA with the SWCNT compared to that of the uridine of siRNA, as calculated by dispersion-corrected density functional theory (DFT) methods. Once siRNA is optimally bound to the SWCNT, the complex is very stable, which serves as one of the major mechanisms of siRNA delivery for biomedical applications. Since siRNA has to undergo an unwinding process mediated by the RNA-induced silencing complex, our proposed SWCNT delivery mechanism possesses potential advantages for achieving RNA interference. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.3682780]
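The van der Waals attraction driving nucleobase stacking on the nanotube surface is commonly modeled in classical MD by a 12-6 Lennard-Jones pair potential. The sketch below is only a generic illustration of that functional form; the epsilon and sigma values are invented, not the force-field parameters used in the paper.

```python
import math

# 12-6 Lennard-Jones pair energy (illustrative units: kcal/mol, angstrom).
# epsilon and sigma are toy values, not fitted nucleobase-carbon parameters.
def lj_energy(r, epsilon=0.3, sigma=3.4):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# the potential minimum sits at r = 2^(1/6) * sigma with depth -epsilon
r_min = 2 ** (1 / 6) * 3.4
```

In an actual simulation, the stacking energy is the sum of such pair terms over all atom pairs between a nucleobase and the nanotube, and the dispersion-corrected DFT calculations mentioned above serve to benchmark these classical estimates.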


The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and in ensuring adherence to the power constraint. We derive novel, insightful bounds and approximations that characterize the power control parameter and the throughput in closed form. The results are comprehensive, as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, to uncoded and coded modulation, and to single-node and multi-node systems with selection. The results are appealing because they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.
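The role of the power control parameter can be illustrated numerically. In the sketch below, a Lagrangian-style rule (one common form of such policies, used here as an assumption, not the paper's exact policy) picks the rate maximizing r_k - s * c_k / h on channel gain h; the rates and power coefficients c_k are invented, the fading is Rayleigh (unit-mean exponential gain), and bisection on s enforces the average power constraint, since average power is monotonically decreasing in s.

```python
import random

random.seed(1)

# Discrete rates r_k need power c_k / h on channel gain h to meet a target
# BER; RATES and COEFF are illustrative, rate 0 means "do not transmit".
RATES = [0.0, 1.0, 2.0, 3.0]
COEFF = [0.0, 1.0, 3.0, 7.0]

def tx_power(h, s):
    # Lagrangian rate selection: maximize r_k - s * (c_k / h)
    k = max(range(len(RATES)), key=lambda i: RATES[i] - s * COEFF[i] / h)
    return COEFF[k] / h

def avg_power(s, n=5000):
    # Monte Carlo average over Rayleigh fading (unit-mean exponential gain)
    return sum(tx_power(random.expovariate(1.0), s) for _ in range(n)) / n

def solve_s(p_target, lo=1e-3, hi=50.0, iters=40):
    # avg_power(s) decreases in s, so bisect for the power constraint
    for _ in range(iters):
        mid = (lo + hi) / 2
        if avg_power(mid) > p_target:
            lo = mid        # too much power: raise the "price" s
        else:
            hi = mid
    return (lo + hi) / 2

s_star = solve_s(1.0)
```

The closed-form bounds derived in the paper replace exactly this kind of numerical root-finding for the parameter.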


A numerically stable sequential primal-dual LP algorithm for reactive power optimisation (RPO) is presented in this article. The algorithm minimises the voltage stability index C2 [1] of all the load buses to improve the static voltage stability of the system. Real-time requirements, such as numerical stability and the identification of the most effective subset of controllers (to curtail the number of controllers and their movement), are handled effectively by the proposed algorithm. The algorithm has a natural characteristic of selecting the most effective subset of controllers, and hence curtailing insignificant ones, for improving the objective. Comparison with the transmission loss minimisation objective indicates that the most effective subset of controllers, and the solution they yield, under the static voltage stability improvement objective is not the same as under the transmission loss minimisation objective. The proposed algorithm is suitable for real-time application to the improvement of system static voltage stability.
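The index C2 cited above is defined in reference [1] and is not reproduced here; as a stand-in, the sketch below computes a classic L-index-style static voltage stability indicator for a two-bus system (the Kessel-Glavitsch form reduces to |1 - V_gen/V_load| for a single line), with illustrative per-unit impedance and loads. The indicator approaches 1 as the load bus nears voltage collapse.

```python
# Fixed-point load-flow solve for the load-bus voltage of a two-bus system:
# V = V_g - Z * conj(S / V). All values in per unit, purely illustrative.
def load_voltage(v_g, z, s_load, iters=50):
    v = complex(v_g)
    for _ in range(iters):
        v = v_g - z * (s_load / v).conjugate()
    return v

# L-index-style indicator: 0 at no load, 1 at voltage collapse
def l_index(v_g, v_l):
    return abs(1 - v_g / v_l)

v_g = 1.0 + 0j
z = 0.02 + 0.1j                              # line impedance (p.u.)
light = load_voltage(v_g, z, 0.5 + 0.2j)     # light load
heavy = load_voltage(v_g, z, 2.0 + 0.8j)     # heavy load
l_light = l_index(v_g, light)
l_heavy = l_index(v_g, heavy)
```

An RPO then adjusts controller setpoints (transformer taps, shunt compensation, generator voltages) so that the worst such index over all load buses decreases, which is the objective the LP in the paper linearises sequentially.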


There is growing pressure on developed and developing countries alike to produce low-emission power, and distributed generation (DG) is one of the most viable ways to achieve this. DG generally makes use of renewable energy sources such as wind, microturbines, and photovoltaics, which produce power with minimal greenhouse gas emissions. When installing a DG unit, it is important to determine its size and optimal location so that network expansion and line losses are minimised. In this paper, a methodology to locate the optimal site for a DG installation, with the objective of minimizing the net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach helps to identify new DG location(s) without the need to conduct repeated power flows. To validate the methodology, case studies are carried out on a 20-node, 66 kV system, a part of Karnataka Transco, and the results are presented.
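A common way to define relative electrical distance is via the matrix F = -inv(Y_LL) @ Y_LG that relates load-bus voltages to generator-bus voltages, with RED taken as 1 - |F|: a load bus is electrically "close" to a generator bus when the corresponding |F| entry is large. The sketch below builds this for a 4-bus toy network (2 generator buses, 2 load buses, invented admittances); whether this matches the paper's exact RED definition is an assumption.

```python
# 2x2 complex matrix inverse by the adjugate formula
def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# line admittances from illustrative R + jX impedances (per unit)
y_a = 1 / (0.01 + 0.10j)   # G1 - L1
y_b = 1 / (0.02 + 0.12j)   # G2 - L2
y_c = 1 / (0.02 + 0.20j)   # L1 - L2

# partitioned bus admittance matrix: load-load and load-generator blocks
Y_LL = [[y_a + y_c, -y_c], [-y_c, y_b + y_c]]
Y_LG = [[-y_a, 0], [0, -y_b]]

M = inv2(Y_LL)
F = [[-(M[i][0] * Y_LG[0][j] + M[i][1] * Y_LG[1][j]) for j in range(2)]
     for i in range(2)]
# relative electrical distance: smaller = electrically closer
RED = [[1 - abs(F[i][j]) for j in range(2)] for i in range(2)]
```

Each row of F sums to exactly 1 (no shunt elements), so the entries behave like proximity weights; ranking candidate DG sites by RED to the load pockets avoids the repeated power-flow runs mentioned above.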


Pricing is an effective tool for controlling congestion and achieving quality of service (QoS) provisioning with multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in a network of nodes with multiple queues and multiple grades of service. We present a closed-loop multi-layered pricing scheme and propose an algorithm for finding the optimal state-dependent price levels for the individual queues at each node. This differs from most adaptive pricing schemes in the literature, which do not obtain a closed-loop state-dependent pricing policy. The proposed method finds optimal price levels that are functions of the queue lengths at individual queues. Further, we also propose a variant of the above scheme that assigns prices to incoming packets at each node according to a weighted average queue length at that node. This is done to reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. In our numerical results, we observe a considerable improvement in both throughput and delay performance using both of our schemes over a recently proposed related scheme. In particular, our first scheme exhibits a throughput improvement in the range of 67-82% across all routes over the above scheme. (C) 2011 Elsevier B.V. All rights reserved.
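The RED-style variant can be sketched directly: the per-packet price tracks an exponentially weighted moving average (EWMA) of the queue length instead of the instantaneous length, which damps frequent price changes. The linear price schedule and the queue-length trace below are illustrative assumptions, not the paper's scheme.

```python
# illustrative linear price schedule in the (averaged) queue length
def price(q, slope=0.5):
    return slope * q

def ewma_prices(queue_lengths, w=0.1):
    # weighted average queue length, in the spirit of RED
    prices, q_bar = [], queue_lengths[0]
    for q in queue_lengths:
        q_bar = (1 - w) * q_bar + w * q
        prices.append(price(q_bar))
    return prices

# bursty queue-length trace (illustrative)
trace = [2, 8] * 50
smoothed = ewma_prices(trace)          # prices on the EWMA queue length
raw = [price(q) for q in trace]        # prices on the raw queue length
```

The smoothed sequence fluctuates far less than the raw one while tracking the same average congestion level, which is exactly the motivation given above for the weighted-average variant.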


We consider a small-extent sensor network for event detection, in which nodes periodically take samples and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center for processing the measurements. The Bayesian setting is assumed; that is, the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm is network-aware and processes measurements as they arrive, but in a time-causal order. In this case, the decision statistic depends on the network delays, whereas in the network-oblivious case it does not. This yields a Bayesian change-detection problem with a trade-off between the random network delay and the decision delay: a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change-detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analyzed as a partially observable Markov decision process, in which the states of the queues and the delays in the network need to be maintained. A sufficient decision statistic is the network state together with the posterior probability that the change has occurred, given the measurements received and the state of the network. The optimal regimes are studied using simulation.
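The posterior-probability statistic at the heart of Bayesian quickest change detection can be sketched with the classical Shiryaev recursion (used here as an illustration of the statistic, without the network-delay machinery of the paper): under a geometric prior on the change time, the posterior that the change has occurred is updated per measurement and compared against a threshold. The densities, prior parameter, and threshold below are all illustrative.

```python
import math, random

random.seed(2)

# unit-variance Gaussian density, pre-change mean 0 and post-change mean 1
def gauss_pdf(x, mu):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

# Shiryaev recursion: geometric(rho) prior on the change time
def shiryaev_update(p, x, rho=0.05, mu0=0.0, mu1=1.0):
    p_pred = p + (1 - p) * rho                 # prior drift of the change point
    num = p_pred * gauss_pdf(x, mu1)
    den = num + (1 - p_pred) * gauss_pdf(x, mu0)
    return num / den

# simulate: change at sample 50, stop when the posterior crosses 0.95
change, p, stop = 50, 0.0, None
for n in range(200):
    x = random.gauss(1.0 if n >= change else 0.0, 1.0)
    p = shiryaev_update(p, x)
    if p > 0.95:
        stop = n
        break
```

In the network-aware procedure described above, this scalar posterior is augmented with the network state (queues and delays) to form the sufficient statistic of the partially observable Markov decision process.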


We study optimal control of Markov processes with age-dependent transition rates. The control policy is chosen continuously over time based on the state of the process and its age. We study infinite horizon discounted cost and infinite horizon average cost problems. Our approach is via the construction of an equivalent semi-Markov decision process. We characterise the value function and optimal controls for both discounted and average cost cases.
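The equivalent-semi-Markov-decision-process approach lends itself to standard dynamic programming once age is included in the state. The sketch below runs discounted-cost value iteration on a uniformized, age-discretized toy process (two states, five age bins, a "wait" and a "reset" action, invented costs and rates); it illustrates the construction, not the paper's characterisation.

```python
# toy age-dependent controlled Markov process, uniformized to discrete time
STATES, AGES, ACTIONS = 2, 5, 2    # action 0 = wait, 1 = reset
BETA = 0.9                          # per-step discount factor

def rate(s, age):                   # age-dependent jump rate (state-independent here)
    return 0.1 + 0.15 * age

def step_cost(s, age, action):      # state 1 is costly; resetting adds a charge
    return (2.0 if s == 1 else 0.5) + (1.0 if action == 1 else 0.0)

V = {(s, a): 0.0 for s in range(STATES) for a in range(AGES)}
for _ in range(200):                # value iteration to (near) convergence
    newV = {}
    for s in range(STATES):
        for age in range(AGES):
            q = []
            for u in range(ACTIONS):
                if u == 1:                         # reset to state 0, age 0
                    nxt = V[(0, 0)]
                else:
                    p = min(rate(s, age), 1.0)     # uniformized jump probability
                    older = min(age + 1, AGES - 1)
                    # jump flips the state and resets its age, else the age grows
                    nxt = p * V[(1 - s, 0)] + (1 - p) * V[(s, older)]
                q.append(step_cost(s, age, u) + BETA * nxt)
            newV[(s, age)] = min(q)
    V = newV
```

The costly state ends up with the higher value, and the optimal action in each (state, age) pair is the minimizer of the same one-step expression, mirroring the discounted-cost characterisation described above.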


A minimum-weight design of laminated composite structures is carried out for different loading conditions and failure criteria using a genetic algorithm. The phenomenological maximum stress (MS) and Tsai-Wu (TW) criteria and the micromechanics-based Failure Mechanism Based (FMB) failure criterion are considered. A new failure envelope, called the Most Conservative Failure Envelope (MCFE), is proposed by combining the three failure envelopes on the basis of the lowest absolute values of the predicted strengths. The effect of shear loading on the MCFE is investigated. The interaction between the loading conditions, the failure criteria, and the strength-based optimal design is brought out.
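The MCFE construction itself is simple to state: for each loading direction, take the smallest strength predicted by any of the criteria. The sketch below does exactly that pointwise minimum; the three "criterion" functions are invented stand-ins (a rectangular, an elliptical, and a constant envelope), not the actual MS, Tsai-Wu, or FMB expressions.

```python
import math

# stand-in strength predictions (MPa) as functions of the loading angle
def strength_ms(theta):       # rectangular, max-stress-style envelope
    return 100.0 / max(abs(math.cos(theta)), abs(math.sin(theta)))

def strength_tw(theta):       # elliptical, Tsai-Wu-style envelope
    c, s = math.cos(theta), math.sin(theta)
    return 1.0 / math.sqrt((c / 120.0) ** 2 + (s / 90.0) ** 2)

def strength_fmb(theta):      # constant-strength envelope
    return 110.0

# Most Conservative Failure Envelope: pointwise minimum of the predictions
def mcfe(theta):
    return min(strength_ms(theta), strength_tw(theta), strength_fmb(theta))

envelope = [mcfe(2 * math.pi * k / 360) for k in range(360)]
```

A strength-based optimizer that designs against `mcfe` is guaranteed to satisfy every individual criterion, which is the point of combining them conservatively.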


We consider a dense ad hoc wireless network confined to a small region. The wireless network is operated as a single cell, i.e., only one successful transmission is supported at a time. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organize into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first motivate that, for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc wireless network (described above) as a single cell, we study the hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterization of the optimal operating point. Simulation results are provided comparing the performance of the optimal strategy derived here with some simple strategies for operating the network.
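The hop-length trade-off can be illustrated with a back-of-the-envelope model: short hops waste progress per transmission, long hops suffer path loss. The sketch below scores d * log2(1 + SNR(d)) on a grid, where SNR(d) follows a d^(-eta) path-loss law; the formula and all constants are illustrative assumptions, not the paper's contention-and-fading model.

```python
import math

# bit-meters per transmission for hop length d under a d**-eta path-loss law;
# p, eta, and noise are illustrative constants, not taken from the paper
def transport_capacity(d, p=1.0, eta=3.0, noise=0.01):
    snr = p * d ** (-eta) / noise
    return d * math.log2(1 + snr)

# grid search over candidate hop lengths (meters)
ds = [0.05 * k for k in range(1, 200)]
d_opt = max(ds, key=transport_capacity)
```

The score vanishes for both very short and very long hops, so an interior optimum d_opt exists; in the paper this optimum additionally scales with the available average power as P̄_t^(1/η).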


The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.106015]
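The MRM criterion itself is not reproduced here; the sketch below only shows the Tikhonov machinery whose parameter such a method selects, on a toy 2x2 nearly singular problem (invented operator and data). Larger lambda trades data fit for a smaller-norm, more stable solution, which is exactly the trade-off the parameter-selection method must resolve.

```python
# solve a 2x2 linear system by the adjugate formula
def solve2(A, b):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(a22 * b[0] - a12 * b[1]) / det,
            (a11 * b[1] - a21 * b[0]) / det]

J = [[1.0, 0.9], [0.9, 0.82]]   # nearly singular toy forward operator
y = [1.9, 1.72]                  # noiseless data for the true x = [1, 1]

# Tikhonov solution via the normal equations (J^T J + lambda I) x = J^T y
def tikhonov(lmbda):
    JtJ = [[sum(J[k][i] * J[k][j] for k in range(2)) + (lmbda if i == j else 0)
            for j in range(2)] for i in range(2)]
    Jty = [sum(J[k][i] * y[k] for k in range(2)) for i in range(2)]
    return solve2(JtJ, Jty)

def residual(x):
    return sum((sum(J[i][j] * x[j] for j in range(2)) - y[i]) ** 2
               for i in range(2)) ** 0.5

x_small = tikhonov(1e-6)   # barely regularized: tight data fit
x_large = tikhonov(1.0)    # heavily regularized: smaller-norm solution
```

Automated schemes such as MRM or generalized cross-validation pick lambda from curves like residual(lambda) without requiring the user's prior experience.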


We address the reconstruction problem in frequency-domain optical-coherence tomography (FDOCT) from under-sampled measurements within the framework of compressed sensing (CS). Specifically, we propose optimal sparsifying bases for accurate reconstruction by analyzing the backscattered signal model. Although one might expect Fourier bases to be optimal for the FDOCT reconstruction problem, it turns out that the optimal sparsifying bases are windowed cosine functions where the window is the magnitude spectrum of the laser source. Further, the windowed cosine bases can be phase locked, which allows one to obtain higher accuracy in reconstruction. We present experimental validations on real data. The findings reported in this Letter are useful for optimal dictionary design within the framework of CS-FDOCT. (C) 2012 Optical Society of America
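A windowed-cosine dictionary of the kind advocated above is easy to construct: each atom is a cosine whose frequency corresponds to a reflector depth, shaped by the magnitude spectrum of the source. In the sketch below, the Gaussian window, the sizes, and the depth grid are illustrative stand-ins for the actual laser spectrum and system parameters.

```python
import math

N, K = 64, 32   # samples per atom, number of depth atoms (illustrative)

# Gaussian stand-in for the laser source's magnitude spectrum
window = [math.exp(-((k - N / 2) ** 2) / (2 * (N / 6) ** 2)) for k in range(N)]

def atom(depth):
    # cosine of frequency proportional to reflector depth, shaped by the
    # window, then normalized to unit energy
    a = [window[k] * math.cos(2 * math.pi * depth * k / N) for k in range(N)]
    norm = math.sqrt(sum(v * v for v in a))
    return [v / norm for v in a]

dictionary = [atom(d) for d in range(1, K + 1)]
```

A CS reconstruction would then seek a sparse coefficient vector over this dictionary that explains the under-sampled spectral measurements; the phase-locking refinement mentioned above would add a per-atom phase to each cosine.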