61 results for LIMITATION


Relevance:

10.00%

Publisher:

Abstract:

A mathematical model describing the dynamics of mammalian cell growth in a hollow-fibre bioreactor operated in closed-shell mode is developed. Mammalian cells are assumed to grow as an expanding biofilm in the extra-capillary space surrounding the fibre. Diffusion is assumed to be the dominant process in the radial direction, while axial convection dominates in the lumen of the bioreactor. The transient simulation results show that steep gradients in cell number are possible under conditions of substrate limitation. The precise conditions that result in nonuniform growth of cells along the length of the bioreactor are delineated. The effect of various operating conditions, such as substrate feed rate, length of the bioreactor, and diffusivity of the substrate in different regions of the bioreactor, on bioreactor performance is evaluated in terms of the time required to attain steady state. The time of growth is introduced as a measure of the effectiveness factor for the bioreactor and is found to depend on two parameters, a modified Peclet number and a Thiele modulus. Diffusion-, reaction-, and/or convection-controlled regimes are identified based on these two parameters. The model is further extended to include dual-substrate growth limitation, and the relative growth-limiting characteristics of the two substrates are evaluated. (C) 1997 Elsevier Science Ltd.
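
As a point of reference for these two parameters (the paper defines its own modified forms, so the standard definitions below are an assumption on our part), the Peclet number compares axial convection in the lumen to diffusive transport, and the Thiele modulus compares the substrate consumption rate to its diffusive supply:

$Pe = \frac{uL}{D}, \qquad \phi = R\sqrt{\frac{k}{D}},$

where $u$ is the lumen velocity, $L$ the bioreactor length, $R$ a characteristic radial length of the biofilm region, $k$ a first-order substrate consumption rate constant, and $D$ the substrate diffusivity. Large $\phi$ with small $Pe$ signals a diffusion-controlled regime; the reverse signals convection control.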

Relevance:

10.00%

Publisher:

Abstract:

The steady-state throughput performance of distributed applications deployed in switched networks in the presence of end-system bottlenecks is studied in this paper. The effect of various limitations at an end-system is modelled as an equivalent transmission-capacity limitation. A class of distributed applications is characterised by a static traffic distribution matrix that determines the communication between the various components of the application. It is found that uniqueness of the steady-state throughputs depends only on the traffic distribution matrix, and that some applications (e.g., broadcast applications) can yield non-unique values for the steady-state component throughputs. For a given switch capacity, with traffic distributions that yield fair, unique throughputs, the trade-off between end-system capacity and the number of application components is brought out. With a proposed distributed rate control, it is shown that unique solutions can be obtained for certain traffic distributions for which they are otherwise impossible. Also, by proper selection of the rate-control parameters, various throughput performance objectives can be realised.
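
A minimal sketch of the kind of computation involved, under stated assumptions: the throttling rule, the send-plus-receive capacity model, and all names below are illustrative stand-ins, not the paper's formulation.

    import numpy as np

    # T[i, j]: fraction of component i's output destined for component j
    # (a static traffic distribution matrix); C: equivalent transmission
    # capacity of each end-system.  Both are illustrative assumptions.
    def steady_state_rates(T, C=1.0, iters=2000):
        r = np.full(T.shape[0], C)             # start at full sending rate
        for _ in range(iters):
            load = r + T.T @ r                 # traffic sent plus received
            r = r * np.minimum(1.0, C / load)  # throttle overloaded ends
        return r

    # Uniform mixing (broadcast-like) traffic among four components:
    T = np.full((4, 4), 0.25)
    print(steady_state_rates(T))

Because the fixed point reached depends on T, running this with different distribution matrices illustrates how some matrices pin down a unique rate vector while others do not.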

Relevance:

10.00%

Publisher:

Abstract:

Lifetime calculations for large, dense sensor networks with fixed energy resources and the remaining residual energy have shown that, for a constant energy resource in a sensor network, the fault rate at the cluster head is network-size invariant when using the network layer with no MAC losses. Even after increasing the battery capacities in the nodes, the total lifetime does not increase beyond a limit of about 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic, and channel-polling needs of sensor networks. There are many MAC protocols that control the channel polling of the new radios available to sensor nodes for communication. This further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at a single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities $\omega_1$, $\omega_2$ and expected error $P^*$, the error is bounded by a maximum rate of $P = 2P^*$ for the single-hop case. We study the effects of energy losses using cross-layer simulation of a large sensor-network MAC setup, and the error rates that affect finding node densities sufficient for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior error is close to or higher than the bound $2P^*$.
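
The quoted bound has the same form as the classical nearest-neighbour result of Cover and Hart: for two classes $\omega_1$, $\omega_2$ with Bayes error $P^*$, a rule that decides from a single nearest stored sample has asymptotic error $P$ satisfying

$P^* \leq P \leq 2P^*(1 - P^*) \leq 2P^*,$

so a cluster head classifying with limited prior information can at worst roughly double the Bayes-optimal error rate. (Connecting this bound to the paper's multi-hop setting is our gloss, not a claim taken from the text.)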

Relevance:

10.00%

Publisher:

Abstract:

The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed-point, i.e. a monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm, which get around this limitation. The first replaces selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms, and provide conceptually new schemes for error correction.
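
For the first variation, the arithmetic behind the $\epsilon^{2q+1}$ claim can be made explicit (this is the standard presentation of Grover's $\frac{\pi}{3}$ fixed-point construction; the recursion below is our restatement, not quoted from the abstract). If $U$ maps the start state to the target with failure probability $\epsilon$, the composite $U R_s U^{\dagger} R_t U$, with $R_s$ and $R_t$ selective $\frac{\pi}{3}$ phase shifts of the start and target states, fails with probability $\epsilon^3$. Recursing $m$ times gives failure probability $\epsilon^{3^m}$ with query count $q_m = 3q_{m-1} + 1$, i.e. $q_m = (3^m - 1)/2$, and indeed $\epsilon^{3^m} = \epsilon^{2q_m + 1}$.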

Relevance:

10.00%

Publisher:

Abstract:

Video streaming applications have hitherto been supported by single-server systems. A major drawback of such a solution is the load it places on the server: bandwidth limitations restrict the number of clients that can be supported simultaneously. The constraints of a single-server system can be overcome in video streaming by exploiting the abundant resources available in a distributed, networked system. We explore a P2P system for streaming video applications. In this paper we build a P2P streaming video (SVP2P) service in which multiple peers co-operate to serve video segments for new requests, thereby reducing the server load and the bandwidth used. Our simulations show that the playback latency using SVP2P is roughly one-fourth of the latency incurred when the server streams the video directly. Bandwidth consumed by control messages (overhead) is as low as 1.5% of the total data transferred. The most important observation is that the capacity of the SVP2P grows dynamically.

Relevance:

10.00%

Publisher:

Abstract:

The C:N ratio of lake sediments provides valuable information about the sources and proportions of terrestrial, phytogenic, and phycogenic carbon and nitrogen. This study was carried out in Varthur lake, which has been receiving sewage for many decades in addition to undergoing large-scale land-cover changes. The C:N profile of the surficial sediment layer, collected in the rainy and dry seasons, revealed higher C:N values (>43) due to the accumulation of autochthonous organic material, mostly in the deeper portions of the lake. This also highlights N limitation in the sludge, due either to uptake by micro- and macro-biota or to rapid volatilization, denitrification, and possible leaching into the water. Organic carbon was lower towards the inlets and higher near the deeper zones. This pattern of organic C deposition was aided by gusty winds and high-flow conditions, together with the impacts of land-use and land-cover changes in the watershed. The spatial variability of C:N in surficial sediments is significant compared to its seasonal variability. This communication provides insight into the pattern in which nutrients are distributed in the sludge/sediment and its variation across seasons and space, as shaped by biotic processes and hydrodynamic changes in the lake.

Relevance:

10.00%

Publisher:

Abstract:

As aircraft technology moves towards a more-electric architecture, the use of electric motors in aircraft is increasing. Axial-flux BLDC motors (brushless DC motors) are becoming popular in aero applications because of their ability to meet demands for light weight, high power density, high efficiency, and high reliability. Axial-flux BLDC motors in general, and ironless axial-flux BLDC motors in particular, come with very low inductance. Owing to this, they need special care to limit the magnitude of the ripple current in the motor winding. In most new more-electric aircraft applications, the BLDC motor needs to be driven from a 300 or 600 Vdc bus. In such cases, particularly for operation from a 600 Vdc bus, insulated-gate bipolar transistor (IGBT)-based inverters are used for the BLDC motor drive. IGBT-based inverters are limited in how far the switching frequency can be increased, and hence they are not well suited to driving BLDC motors with low winding inductance. In this study, a three-level neutral-point-clamped (NPC) inverter is proposed to drive axial-flux BLDC motors. The operation of a BLDC motor driven from a three-level NPC inverter is explained and experimental results are presented.
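
The low-inductance difficulty can be quantified with a textbook first-order ripple estimate (a standard approximation, not a result from this study): for a winding of inductance $L$ driven by PWM from a DC bus $V_{dc}$ at switching frequency $f_{sw}$ and duty ratio $D$, the peak-to-peak current ripple is roughly

$\Delta I \approx \frac{V_{dc}\, D(1-D)}{L f_{sw}}.$

Halving the voltage step applied to the winding, which is what a three-level NPC leg does relative to a two-level IGBT bridge, therefore roughly halves $\Delta I$ at the same $f_{sw}$; this is the motivation for the proposed topology.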

Relevance:

10.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, the computations that we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error-bounds. Since the exact error is never known under any circumstances or in any context, the term error is nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or near-inconsistency in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
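
To make the hypothesis concrete: if a measured quantity $x$ is reported as $\hat{x}$, the relative error-bound asserted above is

$\left| \frac{\hat{x} - x}{x} \right| \leq 0.005\% = 5 \times 10^{-5},$

so, for example, a length reported as 1 m is only guaranteed to lie within $\pm 50\ \mu$m of its true value (the numerical example is ours, for illustration).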

Relevance:

10.00%

Publisher:

Abstract:

Experimental conditions or the presence of interacting components can lead to variations in the structural models of macromolecules. However, the role of these factors in conformational selection is often omitted by in silico methods that extract dynamic information from protein structural models. Structures of small peptides, considered building blocks for larger macromolecular structural models, can differ substantially in the context of a larger protein. This limitation is more evident when modeling large multi-subunit macromolecular complexes using structures of the individual protein components. Here we report an analysis of variations in structural models of proteins with high sequence similarity. These models were analyzed for sequence features of the protein, the role of scaffolding segments (including interacting proteins or affinity tags), and the chemical components of the experimental conditions. Conformational features in these structural models could be rationalized by conformational selection events, perhaps induced by experimental conditions. This analysis was performed on a non-redundant dataset of protein structures from different SCOP classes. The sequence-conformation correlations noted here suggest additional features that could be incorporated by in silico methods to extract dynamic information from protein structural models.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the design and development of a novel low-cost, non-invasive sensor suitable for human breath sensing is reported. It can be used to detect respiratory disorders such as bronchial asthma by analyzing the recorded breathing pattern. Though devices such as the spirometer exist for diagnosing asthma, they are inconvenient for patients, who are made to exhale forcefully through the mouth. The sensor developed here overcomes this limitation and is helpful in the diagnosis of respiratory abnormalities. A polyvinylidene fluoride (PVDF) film in a cantilever configuration is used as the sensing element of the breath sensor. Two identical sensors are mounted on a spectacle frame such that the tidal flow of inhaled and exhaled air impinges on them, sensing the breathing patterns. These patterns are recorded, filtered, analyzed, and displayed using a CRO. The sensor is further calibrated using a U-tube water manometer. An added advantage of the piezoelectric sensing element is that it is self-powered, without the need for any external power source.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods, such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it the preferable technique. Conclusions: The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
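
A minimal sketch of the overall idea, under stated assumptions: the selection criterion, the starting point, and all names below are illustrative stand-ins, not the authors' implementation. SciPy's lsqr exposes Tikhonov-style damping directly, and a simplex (Nelder-Mead) search over the log of the regularization parameter mirrors the optimization procedure described.

    import numpy as np
    from scipy.sparse.linalg import lsqr
    from scipy.optimize import minimize

    # J: Jacobian of the linearised forward model; dy: data residual.
    def reconstruct(J, dy):
        def objective(x):
            lam = 10.0 ** x[0]             # search in log space
            dx = lsqr(J, dy, damp=lam)[0]  # damped LSQR solve
            # Assumed selection criterion: data misfit plus a weighted
            # solution-norm penalty (a placeholder, not the paper's choice).
            return np.linalg.norm(J @ dx - dy) + lam * np.linalg.norm(dx)
        res = minimize(objective, x0=[-3.0], method="Nelder-Mead")
        lam_opt = 10.0 ** res.x[0]
        return lsqr(J, dy, damp=lam_opt)[0], lam_opt

Each simplex step costs one damped LSQR solve, which is why this route is far cheaper than recomputing MRM-style residuals over a dense grid of candidate parameters.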

Relevance:

10.00%

Publisher:

Abstract:

Double-helical structures of DNA and RNA are largely determined by base-pair stacking interactions, which give them their base sequence-directed features, such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fibre-diffraction geometries or on ab initio optimized structures, lacking the variation in geometry needed to comment on the rather unusual large roll values observed for the AU/AU base-pair step in crystal structures of RNA double helices. We have generated a stacking-energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll and slide, which were chosen via statistical analysis as maximally sequence-dependent. The corresponding energy contours were constructed with several quantum chemical methods, including dispersion corrections. This analysis established the most suitable methods for stacked base-pair systems, despite the limitation that the number of atoms in a base-pair step places on employing very high levels of theory. All the methods predict a negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric-clash-based rule. Successive base pairs in RNA are always linked by a sugar-phosphate backbone with C3'-endo sugars, and this demands a C1'-C1' distance of about 5.4 Å along the chains. Adding an energy penalty term for the deviation of the C1'-C1' distance from this mean value to the recent DFT-D functionals, specifically ωB97X-D, appears to predict a reliable energy contour for the AU/AU step. Such a distance-based penalty improves the energy contours for the other purine-pyrimidine sequences as well. (c) 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014.
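
The backbone constraint can be folded into the stacking energy as a penalty of, for example, harmonic form (the quadratic form and the force constant $k_d$ are our assumption; the abstract specifies only a distance-based penalty):

$E = E_{\text{stack}} + k_d \left( d_{C1'-C1'} - 5.4\ \text{Å} \right)^2,$

which disfavours stacked geometries whose roll and slide would pull the C1'-C1' separation away from the value that the C3'-endo sugar-phosphate backbone can accommodate.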

Relevance:

10.00%

Publisher:

Abstract:

Most ecosystems have multiple predator species that not only compete for shared prey, but also pose direct threats to each other. These intraguild interactions are key drivers of carnivore community structure, with ecosystem-wide cascading effects. Yet, behavioral mechanisms for coexistence of multiple carnivore species remain poorly understood. The challenges of studying large, free-ranging carnivores have resulted in mainly coarse-scale examination of behavioral strategies without information about all interacting competitors. We overcame some of these challenges by examining the concurrent fine-scale movement decisions of almost all individuals of four large mammalian carnivore species in a closed terrestrial system. We found that the intensity of intraguild interactions did not follow a simple hierarchical allometric pattern, because spatial and behavioral tactics of subordinate species changed with threat and resource levels across seasons. Lions (Panthera leo) were generally unrestricted and anchored themselves in areas rich in not only their principal prey, but also, during periods of resource limitation (dry season), rich in the main prey for other carnivores. Because of this, the greatest cost (potential intraguild predation) for subordinate carnivores was spatially coupled with the highest potential benefit of resource acquisition (prey-rich areas), especially in the dry season. Leopard (P. pardus) and cheetah (Acinonyx jubatus) overlapped with the home range of lions but minimized their risk using fine-scaled avoidance behaviors and restricted resource acquisition tactics. The cost of intraguild competition was most apparent for cheetahs, especially during the wet season, as areas with energetically rewarding large prey (wildebeest) were avoided when they overlapped highly with the activity areas of lions. Contrary to expectation, the smallest species (African wild dog, Lycaon pictus) did not avoid only lions, but also used multiple tactics to minimize encountering all other competitors. Intraguild competition thus forced wild dogs into areas with the lowest resource availability year round. Coexistence of multiple carnivore species has typically been explained by dietary niche separation, but our multi-scaled movement results suggest that differences in resource acquisition may instead be a consequence of avoiding intraguild competition. We generate a more realistic representation of hierarchical behavioral interactions that may ultimately drive spatially explicit trophic structures of multi-predator communities.

Relevance:

10.00%

Publisher:

Abstract:

Since Brutsaert and Nieber (1977), recession curves have been widely used to analyse the subsurface systems of river basins by expressing $-dQ/dt$ as a function of $Q$, which typically takes a power-law form: $-dQ/dt = kQ^{\alpha}$, where $Q$ is the discharge at a basin outlet at time $t$. Traditionally, recession flows are modelled by single-reservoir models that assume a unique relationship between $-dQ/dt$ and $Q$ for a basin. However, recent observations indicate that the $-dQ/dt$-$Q$ relationship of a basin varies greatly across recession events, indicating the limitation of such models. In this study, the dynamic relationship between $-dQ/dt$ and $Q$ of a basin is investigated through the geomorphological recession flow model, which models recession flows by considering the temporal evolution of the basin's active drainage network (the part of the stream network draining water at time $t$). Two primary factors responsible for the dynamic relationship are identified: (i) the degree of aquifer recharge and (ii) the spatial variation of rainfall. The degree of aquifer recharge, which is likely controlled by (effective) rainfall patterns, influences the power-law coefficient, $k$. It is found that $k$ is correlated with past average streamflow, which confirms the notion that the dynamic $-dQ/dt$-$Q$ relationship is caused by the degree of aquifer recharge. The spatial variation of rainfall is found to control both the exponent, $\alpha$, and the power-law coefficient, $k$. It is noticed that even with the same $\alpha$ and $k$, recession curves can differ, possibly due to their different (recession) peak values. This may also happen due to the spatial variation of rainfall. Copyright (c) 2012 John Wiley & Sons, Ltd.
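
A minimal sketch of how $k$ and $\alpha$ are typically extracted from a single recession event (an assumed, standard log-log regression in the Brutsaert-Nieber spirit; the paper's geomorphological model itself is not reproduced here):

    import numpy as np

    # Estimate power-law parameters k and alpha for one recession event
    # from a daily discharge series Q by regressing log(-dQ/dt) on log(Q).
    def fit_recession(Q):
        Q = np.asarray(Q, dtype=float)
        dQdt = np.diff(Q)                   # per-day discharge change
        mask = dQdt < 0                     # keep strictly receding steps
        Qm = 0.5 * (Q[:-1] + Q[1:])         # midpoint discharge per step
        y = np.log(-dQdt[mask])
        X = np.vstack([np.ones(mask.sum()), np.log(Qm[mask])]).T
        (log_k, alpha), *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.exp(log_k), alpha

    # e.g. for a synthetic receding hydrograph:
    k, alpha = fit_recession([100.0, 81.0, 67.0, 56.5, 48.5, 42.0])

Fitting each event separately, rather than pooling all events, is what exposes the event-to-event variability in $k$ and $\alpha$ that the study analyses.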

Relevance:

10.00%

Publisher:

Abstract:

The structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face the difficulty of scattering, which lowers the signal-to-noise ratio and limits the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons of longer wavelength minimize scattering and allow access to deeper brain regions, at several hundred microns. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few key studies in the field of learning and memory that would not have been possible without the advent of MpM.
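
The edge referred to comes down to one scaling relation (standard two-photon physics, stated here for completeness): the two-photon excitation rate scales as the square of the instantaneous intensity, $S \propto I^2$, so fluorescence falls off so steeply away from the focus that excitation, and hence photobleaching and out-of-focus background, are confined to the focal volume even without a confocal pinhole.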