363 results for buffering


Relevance:

10.00%

Publisher:

Abstract:

The area of Östersundom (29.1 square kilometres) was annexed to Helsinki at the beginning of 2009. Östersundom was formed mostly from the municipality of Sipoo and partly from the city of Vantaa. Östersundom is still quite rural, but city planning has already started, and there are plans to develop it into a district with 45,000 inhabitants. In this study, the headwaters, streams and small lakes of Östersundom were studied to produce information as a basis for city planning. There are six main streams and five small lakes in Östersundom. The main methodology of this study was the examination of the physical and chemical quality of the water. The hygienic quality of the water was also studied, as was the question of whether the waters are in their natural state or have been modified by human activity. In addition, other factors affecting the waters were examined. Geographical information data were produced as a result of this work. Östersundom is the main study area, although some factors are examined at the scale of the catchment areas. Water samples were collected in three sampling periods: 31.8.-4.9.2009, 3.-4.2.2010 and 10.-14.4.2010. There were 20 sampling points in Östersundom (5 in small lakes, 15 in streams). In the winter sampling period, only six samples were collected, of which one was taken from a small lake. Field measurements associated with water sampling included water temperature, oxygen concentration, pH and electrical conductivity. Water samples were analyzed in the Laboratories of Physical Geography at the University of Helsinki for the following properties: total suspended solids (TSS), total dissolved substances (TDS), organic matter, alkalinity, colour, principal anions and cations, and trace elements. Metropolilab analyzed the amount of faecal coliform bacteria in the samples. The waters in Östersundom can be divided into three classes according to water quality and other characteristics: the upper courses of the streams, the lower courses of the streams, and the small lakes. The streams in their upper courses are generally acidic, and their acid neutralization capacity is low. The proportion of organic matter is high, and the concentrations of aluminium and iron also tend to be high. The streams in their lower courses are closer to neutral, and their buffering capacity is good. The amounts of TSS and TDS are high, and as a result the concentrations of many ions and trace elements are high as well. Bacteria were detected at times in the lower courses of the streams. Four of the five small lakes in Östersundom are humic and acidic. Their TSS and TDS concentrations tend to be low, but the proportion of organic matter is often high. No bacteria were detected in the small lakes. The fifth small lake (Landbonlampi) differs from the others in its water colour, which is very clear. This lake is very acidic, and its buffering capacity is extremely low. Compared with headwaters in Finland in general, the concentrations of many ions and trace elements are higher in Östersundom. On the other hand, the water characteristics differed between the three classes: upper-course streams, lower-course streams and small lakes. Generally, the best water quality was observed in the stream Gumbölenpuro and in the lakes Storträsk, Genaträsk, Hältingträsk and Landbonlampi. Several valuable waters in their natural state were discovered in the area. The most representative example is the stream Östersundominpuro in its lower course, where the stream flows through a broad-leaved forest area. The small lakes of Östersundom, and the biggest stream, Krapuoja, with its meandering channel, are also valuable in their natural state.

Relevance:

10.00%

Publisher:

Abstract:

The mechanism underlying homeostatic regulation of the plasma levels of free retinol-binding protein and free thyroxine, whose systemic distribution is of great importance, has been investigated. A simple method has been developed to determine the rate of dissociation of a ligand from its binding protein. Analysis of the dissociation of retinol-binding protein from prealbumin-2 reveals that the free retinol-binding protein pool undergoes massive flux and that prealbumin-2 participates in homeostatic regulation of the free retinol-binding protein pool. Studies on the dissociation of thyroxine from its plasma carrier proteins show that the various carrier proteins divide two roles between them. Thyroxine-binding globulin (the high-affinity binding protein) contributes only 27% of the free thyroxine in a rapid transition process, despite being the major binding protein, whereas prealbumin-2, which has lower affinity for thyroxine, participates mainly in a rapid flux of the free thyroxine pool. Thus thyroxine-binding globulin acts predominantly as a plasma reservoir of thyroxine, and probably also in the long-term 'buffering' of the plasma free thyroxine level, while prealbumin-2 participates mainly in maintaining the constancy of free thyroxine levels even in the short term. The existence of these two types of binding protein facilitates compensation for the metabolic flux of the free ligand and maintenance of the thyroxine pool within a very narrow range.

Relevance:

10.00%

Publisher:

Abstract:

Cooperative communication using rateless codes, in which the source keeps transmitting parity bits to the destination until an acknowledgment is received, has recently attracted considerable interest. It provides a natural and efficient mechanism for accumulating mutual information from multiple transmitting relays. We develop an analysis of queued cooperative relay systems that combines the communication-theoretic aspects of transmission using rateless codes over Rayleigh fading channels with the queuing-theoretic aspects associated with buffering messages at the relays. Relay cooperation combined with queuing reduces message transmission times and also helps distribute the traffic load in the network, which improves throughput significantly.
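
As a rough illustration of the mutual-information accumulation that rateless codes enable, the sketch below (an assumption-laden toy model, not the analysis in the paper) computes how long a receiver on a block Rayleigh fading channel needs before the information it has accumulated covers one packet.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_time(packet_bits, bandwidth_hz, snr_avg, slot_s=1e-3):
    """Slots needed until accumulated mutual information covers the packet.

    A minimal sketch: with rateless codes the receiver keeps accumulating
    mutual information from the fading channel and decodes as soon as the
    total reaches the packet size.  Block Rayleigh fading per slot is assumed.
    """
    accumulated_bits = 0.0
    slots = 0
    while accumulated_bits < packet_bits:
        h2 = rng.exponential(1.0)                           # |h|^2 under Rayleigh fading
        rate = bandwidth_hz * np.log2(1.0 + snr_avg * h2)   # achievable bits/s this slot
        accumulated_bits += rate * slot_s
        slots += 1
    return slots * slot_s

# Example: 10 kbit packet, 1 MHz bandwidth, 0 dB average SNR
print(decode_time(1e4, 1e6, 1.0))
```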

Relevance:

10.00%

Publisher:

Abstract:

Relay selection combined with buffering of packets at the relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases the end-to-end delays due to the additional queuing delays at the relay nodes. In this paper, we propose a novel method that exploits a unique property of rateless codes, namely that a receiver can decode a packet from non-contiguous and unordered portions of the received signal. In this method, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that the packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off against transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Although the system gives rise to analytically intractable G/GI/1 queues, we also gain insights about the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
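
A minimal sketch of the queue-aware reception rule described above is given below; the mapping from queue length to ignore probability is a hypothetical choice for illustration, since the abstract does not fix a specific form.

```python
import random

def ignore_probability(queue_len, scale=5.0):
    """Hypothetical mapping from queue length to the probability of ignoring
    an incoming packet's coded bits (an increasing, saturating function is
    assumed here purely for illustration)."""
    return queue_len / (queue_len + scale)

def relay_receives(relay_queue, scale=5.0):
    """Queue-aware reception: a heavily loaded relay opts out of decoding with
    higher probability, so lightly loaded relays tend to decode first."""
    if random.random() < ignore_probability(len(relay_queue), scale):
        return False   # relay ignores the coded bits for this packet
    return True        # relay listens and may decode and buffer the packet
```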

Relevance:

10.00%

Publisher:

Abstract:

Thanks to advances in sensor technology, today we have many applications (space-borne imaging, medical imaging, etc.) in which images of large sizes are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients also need to be stored in high precision, the buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zerotrees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coder that uses a single-pass algorithm; this is intended to handle high input data rates. (C) 2002 Elsevier Science B.V. All rights reserved.
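
The sketch below illustrates only the memory pattern behind a strip buffer (the names and the `encode_strip` callback are placeholders, not the paper's interface): coefficients are consumed a few rows at a time, so the whole transform never has to be held in memory.

```python
import numpy as np

def process_in_strips(coeff_rows, strip_height, encode_strip):
    """Minimal sketch of strip buffering: instead of holding the full wavelet
    transform in memory, keep only `strip_height` rows of coefficients at a
    time and hand each full strip to the encoder.  `coeff_rows` is any
    iterable yielding one row of coefficients; `encode_strip` stands in for
    the zerotree coding stage."""
    strip = []
    for row in coeff_rows:
        strip.append(row)
        if len(strip) == strip_height:
            encode_strip(np.vstack(strip))
            strip.clear()              # memory stays bounded by one strip
    if strip:                          # flush the final partial strip
        encode_strip(np.vstack(strip))
```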

Relevance:

10.00%

Publisher:

Abstract:

In a cooperative relay-assisted communication system that uses rateless codes, packets get transmitted from a source to a destination at a rate that depends on instantaneous channel states of the wireless links between nodes. When multiple relays are present, the relay with the highest channel gain to the source is the first to successfully decode a packet from the source and forward it to the destination. Thus, the unique properties of rateless codes ensure that both rate adaptation and relay selection occur without the transmitting source or relays acquiring instantaneous channel knowledge. In this paper, we show that in such cooperative systems, buffering packets at relays significantly increases throughput. We develop a novel analysis of these systems that combines the communication-theoretic aspects of cooperation over fading channels with the queuing-theoretic aspects associated with buffering. Closed-form expressions are derived for the throughput and end-to-end delay for the general case in which the channels between various nodes are not statistically identical. Corresponding results are also derived for benchmark systems that either do not exploit spatial diversity or do not buffer packets. Altogether, our results show that buffering - a capability that will be commonly available in practical deployments of relays - amplifies the benefits of cooperation.
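
To make the buffering claim concrete, here is a deliberately simplified slotted simulation (a toy model under many assumptions, not the closed-form analysis in the paper) that contrasts a relay that queues packets with one that discards anything it cannot forward immediately.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_throughput(n_slots=10_000, n_relays=3, snr=1.0, rate_bits=1.0, buffered=True):
    """Toy slotted model (illustration only): in each slot the source
    broadcasts one packet and the relay with the best source-relay gain
    stores it; each relay then forwards the head of its queue whenever its
    relay-destination channel supports `rate_bits`.  With buffered=False a
    packet that cannot be forwarded immediately is dropped, mimicking a
    bufferless relay."""
    queues = [0] * n_relays
    delivered = 0
    for _ in range(n_slots):
        # source phase: the relay with the best channel decodes first and keeps the packet
        best = int(np.argmax(rng.exponential(1.0, n_relays)))
        queues[best] += 1
        # relay phase: each relay tries to forward one queued packet
        for r in range(n_relays):
            if queues[r] == 0:
                continue
            ok = np.log2(1.0 + snr * rng.exponential(1.0)) >= rate_bits
            if ok:
                queues[r] -= 1
                delivered += 1
            elif not buffered:
                queues[r] = 0          # bufferless relay discards its backlog
    return delivered / n_slots

print(toy_throughput(buffered=True), toy_throughput(buffered=False))
```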

Relevance:

10.00%

Publisher:

Abstract:

It is a formidable challenge to arrange tin nanoparticles in a porous matrix in order to achieve an anode with high specific capacity and high-rate capability for lithium-ion batteries. This article discusses a simple and novel synthesis that arranges tin nanoparticles with carbon in a porous configuration for application as an anode in lithium-ion batteries. Direct carbonization of the synthesized three-dimensional Sn-based MOF [K2Sn2(1,4-bdc)3](H2O) (1) (bdc = benzenedicarboxylate) resulted in the stabilization of tin nanoparticles in a porous carbon matrix (abbreviated as Sn@C). Sn@C exhibited remarkably high electrochemical stability towards lithium (tested over 100 charge and discharge cycles) and high specific capacities over a wide range of operating currents (0.2-5 A g-1). The novel synthesis strategy for obtaining Sn@C from a single precursor, as discussed herein, provides an optimal combination of particle size and dispersion for buffering the severe volume changes due to the Li-Sn alloying reaction and provides fast pathways for lithium and electron transport.

Relevance:

10.00%

Publisher:

Abstract:

Regions in video streams that attract human interest contribute significantly to human understanding of the video. Being able to predict salient and informative Regions of Interest (ROIs) through a sequence of eye movements is a challenging problem. Applications such as content-aware retargeting of videos to different aspect ratios while preserving informative regions, and smart insertion of dialog (closed-caption text) into the video stream, can be improved significantly using the predicted ROIs. We propose an interactive human-in-the-loop framework to model eye movements and predict visual saliency in yet-unseen frames. Eye tracking and video content are used to model visual attention in a manner that accounts for important eye-gaze characteristics such as temporal discontinuities due to sudden eye movements, noise, and behavioral artifacts. A novel statistical and algorithmic method, gaze buffering, is proposed for eye-gaze analysis and its fusion with content-based features. Our robust saliency prediction is instantiated for two challenging and exciting applications. The first application alters video aspect ratios on-the-fly using content-aware video retargeting, thus making them suitable for a variety of display sizes. The second application dynamically localizes active speakers and places dialog captions on-the-fly in the video stream. Our method ensures that dialogs are faithful to active speaker locations and do not interfere with salient content in the video stream. Our framework naturally accommodates personalisation of the application to suit the biases and preferences of individual users.
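
Since the abstract does not spell out the gaze-buffering computation, the sketch below only illustrates the general idea it names: buffer the recent gaze samples, suppress abrupt saccade-like jumps and noise, and smooth what remains before fusing it with content features. All thresholds and the function name are hypothetical.

```python
import numpy as np

def buffer_gaze(gaze_xy, jump_thresh=50.0, window=5):
    """Hedged illustration of a gaze-buffering step (not the paper's exact
    formulation).  `gaze_xy` is an (N, 2) array of gaze points for
    consecutive frames."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    # flag samples that jump farther than `jump_thresh` pixels from the
    # previous sample as saccade/noise candidates
    jumps = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) > jump_thresh
    keep = np.r_[True, ~jumps]
    cleaned = gaze_xy[keep]
    # smooth the retained fixation samples with a short moving average
    kernel = np.ones(window) / window
    smoothed = np.column_stack([
        np.convolve(cleaned[:, 0], kernel, mode="same"),
        np.convolve(cleaned[:, 1], kernel, mode="same"),
    ])
    return smoothed
```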

Relevance:

10.00%

Publisher:

Abstract:

We present a method of rapidly producing computer-generated holograms that exhibit geometric occlusion in the reconstructed image. Conceptually, a bundle of rays is shot from every hologram sample into the object volume. We use z-buffering to find the nearest intersecting object point for every ray and add its complex field contribution to the corresponding hologram sample. Each hologram sample belongs to an independent operation, allowing us to exploit the parallel computing capability of modern programmable graphics processing units (GPUs). Unlike algorithms that use points or planar segments as the basis for constructing the hologram, our algorithm's complexity depends on fixed system parameters, such as the number of ray-casting operations, and can therefore handle complicated models more efficiently. The finite number of hologram pixels is, in effect, a windowing function, and by analyzing the Wigner distribution function of the windowed free-space transfer function we find an upper limit on the cone angle of the ray bundle. Experimentally, we found that an angular sampling distance of 0.01 degrees for a 2.66-degree cone angle produces acceptable reconstruction quality. © 2009 Optical Society of America.
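
The per-sample computation described above can be sketched as follows (a simplified illustration; `intersect` stands in for whatever scene-intersection routine is used and is not part of the paper):

```python
import numpy as np

def hologram_sample(sample_xy, ray_dirs, intersect, wavelength=633e-9):
    """Shoot a bundle of rays from one hologram sample, keep only the nearest
    hit per ray (the z-buffer test), and accumulate each visible point's
    complex spherical-wave contribution.  `intersect(origin, direction)` is
    assumed to return (distance, amplitude) or None."""
    k = 2.0 * np.pi / wavelength
    field = 0.0 + 0.0j
    for d in ray_dirs:
        nearest = intersect(sample_xy, d)   # z-buffering: closest hit only
        if nearest is None:
            continue
        r, amplitude = nearest
        field += amplitude * np.exp(1j * k * r) / r
    return field
```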

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of three separate studies of roles that black holes might play in our universe.

In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
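
For orientation only (this is a standard low-redshift expansion, not text from the thesis), the distance-redshift relation on which the method rests can be written as

\[ d_L \simeq \frac{c}{H_0}\left[ z + \tfrac{1}{2}(1 - q_0)\, z^{2} + \cdots \right], \qquad q_0 = \tfrac{1}{2}\Omega_m - \Omega_\Lambda , \]

so a luminosity distance read off the waveform amplitude, paired with a redshift estimate, constrains H_0 at low redshift and the density parameters through the higher-order terms.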

In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.

In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.

Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
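
As a pointer to what the estimation step looks like in practice, here is a minimal sketch using scikit-learn's graphical lasso (illustrative only: the circuit model, the data, and the dissertation's modification for ill-conditioned matrices are not reproduced).

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Rows of X are samples of nodal signals, columns are nodes.  The data here
# is a random placeholder standing in for measured node voltages.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))

model = GraphicalLassoCV().fit(X)
precision = model.precision_          # sparse inverse covariance estimate

# Nonzero off-diagonal entries are read as edges of the estimated topology.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-3, k=1))
print(edges)
```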

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
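
For readers unfamiliar with fluid models of congestion control, the toy sketch below shows a standard Kelly-style primal rate update coupled to a single bottleneck queue whose backlog acts as the price; it is only context for the terms above, not the modified model derived in the dissertation.

```python
# A hedged sketch of a classic primal congestion-control fluid model with a
# single bottleneck queue, to make "rates", "pricing", and "buffering"
# concrete.  All parameter values are arbitrary.
def simulate(w=10.0, capacity=5.0, kappa=0.1, steps=2000, dt=0.01):
    x, q = 1.0, 0.0                             # source rate, bottleneck backlog
    for _ in range(steps):
        price = q                               # queue-based price signal
        x += dt * kappa * (w - x * price)       # primal rate update (Kelly-style)
        q = max(q + dt * (x - capacity), 0.0)   # queue builds when x > capacity
        x = max(x, 1e-3)
    return x, q                                 # settles near x = capacity

print(simulate())
```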

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

10.00%

Publisher:

Abstract:

MicroRNAs are a class of small non-coding RNAs that negatively regulate gene expression. Several microRNAs have been implicated in altering hematopoietic cell fate decisions. Importantly, deregulation of many microRNAs can lead to deleterious consequences in the hematopoietic system, including the onset of cancer, autoimmunity, or a failure to respond effectively to infection. As such, microRNAs fine-tune the balance between normal hematopoietic output and pathologic consequences. In this work, we explore the role of two microRNAs, miR-132 and miR-125b, in regulating hematopoietic stem cell (HSC) function and B cell development. In particular, we uncover the role of miR-132 in maintaining the appropriate balance between self-renewal, differentiation, and survival in aging HSCs by buffering the expression of a critical transcription factor, FOXO3. By maintaining this balance, miR-132 may play a critical role in preventing aging-associated hematopoietic conditions such as autoimmune disease and cancer. We also find that miR-132 plays a critical role in B cell development by targeting a key transcription factor, Sox4, that is responsible for the differentiation of pro-B cells into pre-B cells. We find that miR-132 regulates B cell apoptosis, and by delivering miR-132 to mice that are predisposed to developing B cell cancers, we can inhibit the formation of these cancers and improve the survival of these mice. In addition to miR-132, we uncovered the role of another critical microRNA, miR-125b, that potentiates hematopoietic stem cell function. We found that enforced expression of miR-125b causes an aggressive myeloid leukemia by downregulation of its target Lin28a. Importantly, miR-125b also plays a critical role in inhibiting the formation of pro-B cells. Thus, we have discovered two microRNAs with important roles in regulating normal hematopoiesis, and whose deregulation can lead to deleterious consequences such as cancer in the aging hematopoietic system. Both miR-132 and miR-125b may therefore be targeted for therapeutics to inhibit age-related immune diseases associated with the loss of HSC function and cancer progression.

Relevance:

10.00%

Publisher:

Abstract:

This is the report 'Episodic variations in stream water chemistry associated with acid rainfall and run-off and the effect on aquatic ecosystems, with particular reference to fish populations in North West England', produced by the North West Water Authority in 1985. The report examines the biological, physical and chemical information collected over a five-year period from over 100 sites on upland streams in the North West Region which drained rocks of low buffering capacity. In both Lake District and South Pennine sites, striking differences were found between the composition of the invertebrate communities inhabiting acid-stressed and less acid-stressed streams. Electric fishing surveys showed that acidic streams (geometric mean pH < 5.5) generally had abnormally low densities of salmonids (< 0.2 m-2) and that 0+ fish were very few or absent, the latter indicating recruitment failure. Salmon were more sensitive than trout to low pH.

Relevance:

10.00%

Publisher:

Abstract:

Event sampling and scan sampling were used to collect data on male-infant-male triadic interactions, and on their effects on member spacing, respectively, in a group of Macaca thibetana at Mt. Emei in 1989. The group was partially provisioned by human visitors in seasons other than winter, and could be observed closely. In addition, a stable linear male hierarchy among five males had existed for two years, since the end of 1987, providing a good social setting for this topic. The triadic interactions were specific to the birth season and were recognized as three types lying on a functional continuum, changing from passive "agonistic buffering" (4.8%) to active spatial cohesion, which resulted in a significant decline in intermale distances. Positive correlations were documented between the triad initiation rate and the number of females in consort with the males in the mating season (MS), and between the triad reception rate and the number of infants in proximity to the males in the MS, when maternal care was significantly reduced. Thus the males' mating effort and kin/sexual selection may be deeply involved in the triad of this species. Considering that the two triad species, M. sylvanus and M. thibetana, had different levels of paternity but shared similar foraging conditions and showed similar intensities of male-infant caretaking, the triad was very likely a byproduct of male-infant caretaking, which was probably shaped to compensate for heavy maternal investment in young offspring in harsh conditions. Accordingly, the long-standing arguments about the triad in M. sylvanus can be united into a model in which the "male-infant caretaking" hypothesis operates ultimately and the "regulating social relations" hypothesis operates proximately.

Relevance:

10.00%

Publisher:

Abstract:

We examine the fluid mechanics of night purging in a two-storey naturally ventilated atrium building. We develop a mathematical model of a simplified atrium building and focus on the rate at which warm air purges from each storey and from the atrium by displacement ventilation into a still, cool night environment of constant temperature. To develop a first insight into how the geometry of the building influences the rate at which warm air purges from each storey via the atrium, we neglect heat exchange with the fabric (so there is no thermal buffering) and furthermore assume that the warm air layers in each storey and the atrium are of uniform temperature. The plumes of warm air that rise from the storeys into the atrium, causing the atrium to fill with warm air, have a very strong influence on the night purge. Modelling these as axisymmetric turbulent plumes, we identify three forms of purging behaviour. Each purge is characterised by five key times identified in the progression of the night purge, and a physical rationale for the differing behaviours is given. An interface velocity deficit and a volumetric purge deficit are introduced as measures of the efficiency of a night purge. © 2010 Elsevier Ltd.
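
For context (a standard result for axisymmetric turbulent plumes that models of this kind typically assume; it is not quoted from the paper), the volume flux carried by a plume of buoyancy flux B at height z above a point source is

\[ Q(z) = \frac{6\alpha}{5}\left(\frac{9\alpha}{10}\right)^{1/3} \pi^{2/3}\, B^{1/3}\, z^{5/3}, \]

where \( \alpha \approx 0.1 \) is the entrainment coefficient. In a displacement model, this plume flux sets the rate at which the warm layer in the atrium deepens and hence the purge times.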