897 results for Tail-approximation


Relevance: 20.00%

Abstract:

Transient receptor potential melastatin 8 (TRPM8) is the principal cold and menthol receptor channel. Characterized primarily for its cold-sensing role in sensory neurons, it is also expressed and functional in several non-neuronal tissues, including the vasculature. We previously demonstrated that menthol causes vasoconstriction and vasodilatation in isolated arteries, depending on vascular tone. Here we investigated the role of calcium in responses mediated by TRPM8 ligands in rat tail artery myocytes using patch-clamp electrophysiology and ratiometric Ca2+ recording. Isometric contraction studies examined the actions of TRPM8 ligands in the presence and absence of an L-type calcium channel blocker. Menthol (300 μM), a concentration typically used to induce TRPM8 currents, strongly inhibited the L-type voltage-dependent Ca2+ current (L-ICa) in myocytes, especially its sustained component, which is most relevant for depolarisation-induced vasoconstriction. In contraction studies, with nifedipine present (10 μM) to abolish the L-ICa contribution to phenylephrine (PE)-induced vasoconstrictions of vascular rings, a marked menthol-induced increase in tone was observed. These menthol-induced increases in PE-induced vasoconstrictions were mediated predominantly by Ca2+ release from the sarcoplasmic reticulum, since they were significantly inhibited by cyclopiazonic acid. Pre-incubation of vascular rings with a TRPM8 antagonist strongly inhibited the menthol-induced increases in PE-induced vasoconstrictions, confirming a specific role for TRPM8. Finally, two other common TRPM8 agonists, WS-12 and icilin, also inhibited L-ICa. Thus, TRPM8 channels are functionally active in rat tail artery myocytes and play a distinct, direct stimulatory role in the control of vascular tone. However, indirect effects of TRPM8 agonists, which are unrelated to TRPM8 and are mediated by inhibition of L-type Ca2+ channels, largely obscure TRPM8-mediated vasoconstriction.

Relevance: 20.00%

Abstract:

Using a different approach to that of Popa, we arrive at an alternative definition
of the positive approximation property for order complete Banach lattices.
Some results associated with this new approach may be of independent interest. We
also prove a Banach lattice analogue of an old characterization, due to Palmer, of
the metric approximation property in terms of the continuous bidual of the ideal of
approximable operators.

Relevance: 20.00%

Abstract:

Interactive applications do not require more bandwidth to go faster; instead, they require less latency. Unfortunately, the current design of transport protocols such as TCP limits the possible latency reductions. In this paper we evaluate and compare different loss recovery enhancements that combat tail loss latency. We consider two recently proposed mechanisms, "RTO Restart" (RTOR) and "Tail Loss Probe" (TLP), as well as a new mechanism (TLPR) that applies the RTOR logic to TLP timer management. The results show that the relative performance of RTOR and TLP when tail loss occurs is scenario-dependent, with TLP having the potentially larger gains. The TLPR mechanism reaps the benefits of both approaches and shows the best performance in most scenarios.
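As background, the timer-arming difference between these mechanisms can be sketched as follows. This is a minimal illustration: the class and method names are ours, not from any TCP implementation, and it assumes RTOR re-anchors the retransmission timer at the send time of the earliest outstanding segment while TLP arms a shorter probe timeout of roughly 2×SRTT.

```python
class TimerLogic:
    """Illustrative timer arming for tail-loss recovery. Class and
    method names are ours for exposition, not from any TCP stack."""

    def __init__(self, rto, srtt):
        self.rto = rto    # retransmission timeout, seconds
        self.srtt = srtt  # smoothed round-trip time, seconds

    def classic_rto(self, now, earliest_unacked_sent):
        # Standard TCP: the timer is rearmed for a full RTO on each ACK.
        return now + self.rto

    def rtor(self, now, earliest_unacked_sent):
        # RTO Restart: anchor the timer at the send time of the earliest
        # outstanding segment, so it fires RTO seconds after that segment
        # was sent rather than RTO seconds after the latest ACK.
        return earliest_unacked_sent + self.rto

    def tlp(self, now, earliest_unacked_sent):
        # Tail Loss Probe: arm a shorter probe timeout (roughly 2*SRTT
        # here) so a probe goes out well before the full RTO expires.
        return now + 2 * self.srtt

    def tlpr(self, now, earliest_unacked_sent):
        # TLPR: apply the RTOR anchoring logic to the TLP timer.
        return earliest_unacked_sent + 2 * self.srtt
```

For example, with rto=1.0 s, srtt=0.1 s, an ACK arriving at t=10.0 and the earliest outstanding segment sent at t=9.7, RTOR fires at 10.7 versus 11.0 for the classic timer, and TLPR fires at 9.9 versus 10.2 for plain TLP.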

Relevance: 20.00%

Abstract:

The industrial production of aluminium is an electrolysis process in which two superposed horizontal liquid layers are subjected to a mainly vertical electric current supplied by carbon electrodes. The lower layer consists of molten aluminium and lies on the cathode. The upper layer is the electrolyte and is covered by the anode. The interface between the two layers is often perturbed, leading to oscillations, or waves, similar to those on the surface of seas or lakes. The electric currents and the resulting magnetic field produce electromagnetic (Lorentz) forces within the fluid, which can amplify these oscillations and have an adverse influence on the process. The vertical-to-horizontal aspect ratio of the electrolytic bath is such that it is advantageous to model the interface motion with the shallow water equations. These are obtained by depth-averaging the Navier-Stokes equations, so that nonlinear and dispersion terms can be taken into account. Although these terms are essential to the prediction of wave dynamics, they are neglected in most of the literature on interface instabilities in aluminium reduction cells, where usually only the linear theory is considered. The unknown variables are the two horizontal components of the fluid velocity, the height of the interface and the electric potential. In this application, a finite volume solution of the double-layer shallow water equations including the electromagnetic sources has been developed, for incorporation into a generic three-dimensional computational fluid dynamics code that also deals with heat transfer within the cell.
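As a toy illustration of the finite-volume approach (deliberately much simpler than the paper's double-layer system: this is a single-layer 1-D model with the electromagnetic source terms omitted), one Lax-Friedrichs update for the shallow water equations might look like:

```python
import numpy as np

def shallow_water_step(h, hu, dx, dt, g=9.81):
    """One Lax-Friedrichs finite-volume step for the 1-D single-layer
    shallow water equations in conserved variables (h, hu)."""
    u = hu / h
    # Physical fluxes of mass and momentum.
    f_h = hu
    f_hu = hu * u + 0.5 * g * h**2

    def lf_flux(q, f):
        # Lax-Friedrichs numerical flux at each interior cell interface.
        return 0.5 * (f[:-1] + f[1:]) - 0.5 * (dx / dt) * (q[1:] - q[:-1])

    F_h = lf_flux(h, f_h)
    F_hu = lf_flux(hu, f_hu)

    h_new, hu_new = h.copy(), hu.copy()
    # Conservative update of interior cells; boundary cells are held
    # fixed, a crude boundary treatment that suffices for a sketch.
    h_new[1:-1] -= (dt / dx) * (F_h[1:] - F_h[:-1])
    hu_new[1:-1] -= (dt / dx) * (F_hu[1:] - F_hu[:-1])
    return h_new, hu_new
```

The conservative update guarantees that mass and momentum change only through interface fluxes, the property that makes finite-volume schemes attractive for wave problems like the cell-interface oscillations described above.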

Relevance: 20.00%

Abstract:

This article presents a general way to implement the computation of recursive functions via linear tail recursion, emphasizing the use of tail recursion to perform computations efficiently.
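A minimal sketch of the idea in Python (our own example, not taken from the article; note that Python itself does not optimise tail calls, which is why the equivalent loop form is shown alongside):

```python
def factorial_tail(n, acc=1):
    # Linear tail recursion: the recursive call is the last operation
    # performed, with the running result carried in an accumulator.
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)

def factorial_iter(n):
    # The mechanical translation of the tail call into a loop -- what a
    # tail-call-optimising compiler effectively produces.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```

Because the tail-recursive form keeps all state in its arguments, each call frame can be reused, turning the linear recursion into constant-space iteration.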

Relevance: 20.00%

Abstract:

We develop a framework for proving approximation limits of polynomial-size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP, as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^(1/2-ε))-approximations for CLIQUE require LPs of size 2^(n^Ω(ε)). This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.

Relevance: 20.00%

Abstract:

The approximation lemma is a simplification of the well-known take lemma, and is used to prove properties of programs that produce lists of values. We show how the approximation lemma, unlike the take lemma, can naturally be generalised from lists to a large class of datatypes, and present a generic approximation lemma that is parametric in the datatype to which it applies. As a useful by-product, we find that generalising the approximation lemma in this way also simplifies its proof.
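For intuition, the style of reasoning behind the take lemma (and its approx-based refinement) can be mimicked outside Haskell. The following Python fragment is our own illustration, not the paper's formal development: it compares two definitions of the same infinite list by comparing their finite prefixes.

```python
from itertools import count, islice

def take(n, xs):
    # First n elements of a (possibly infinite) iterable, as in the
    # take lemma's finite approximations of an infinite list.
    return list(islice(xs, n))

# Two different definitions of the infinite list of even numbers.
def evens_by_map():
    return (2 * i for i in count())

def evens_by_step():
    return count(0, 2)

def prefixes_agree(mk_xs, mk_ys, upto=100):
    # The take lemma says the two streams are equal iff their
    # n-prefixes agree for *every* n; here we can only sample a
    # finite range of n, so this is evidence, not a proof.
    return all(take(n, mk_xs()) == take(n, mk_ys()) for n in range(upto))
```

The real lemma turns this observation into a proof principle: establishing prefix agreement for all n (typically by induction) yields equality of the full lists, and the generic approximation lemma extends the same idea to other datatypes.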

Relevance: 20.00%

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes.
We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. 
Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each pair uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + ε)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.

Relevance: 20.00%

Abstract:

A validation study examined the accuracy of a purpose-built single photon absorptiometry (SPA) instrument for making on-farm in vivo measurements of bone mineral density (BMD) in the tail bones of cattle. In vivo measurements were made at the proximal end of the ninth coccygeal vertebra (Cy9) in steers of two age groups (each n = 10) in adequate or low phosphorus status. The tails of the steers were then resected and the BMD of the Cy9 bone was measured in the laboratory, first with SPA on the resected tails and then with established laboratory procedures on defleshed bone. Specific gravity and ash density were measured on the isolated Cy9 vertebrae and on 5-mm² dorso-ventral cores of bone cut from each defleshed Cy9. The BMD calculated by SPA required a measure of tail bone thickness, and this was estimated as a fraction of total tail thickness. Actual tail bone thickness was also measured on the isolated Cy9 vertebrae. The accuracy of the SPA measurement of BMD was evaluated by comparison with the ash density of the bone cores measured in the laboratory. In vivo SPA measurements of BMD were closely correlated with laboratory measurements of core ash density (r = 0.92). Ash density and specific gravity of cores, and all SPA measures of BMD, were affected by the phosphorus status of the steers, but the effect of steer age was only significant (P < 0.05) for steers in adequate phosphorus status. The accuracy of SPA in determining the BMD of tail bone may be improved by reducing the error associated with in vivo estimation of tail bone thickness, and by adjusting for the displacement of soft tissue by bone mineral. In conclusion, a purpose-built SPA instrument could be used to make on-farm, sequential, non-invasive in vivo measurements of the BMD of tail bone in cattle with an accuracy acceptable for many animal studies.

Relevance: 20.00%

Abstract:

In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, natural disaster insurance, quality control schemes, and related fields. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles of interest to the relevant authorities. Such information regarding extremes serves as essential guidance in decision-making processes. In this context, however, data are usually skewed in nature, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is needed most. Extreme Value Theory (EVT) is the branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
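For background, the classical peaks-over-threshold estimator from EVT (standard textbook material, not the dissertation's SP method) estimates a small exceedance probability by fitting a generalised Pareto distribution (GPD) to the excesses over a high threshold. A sketch using SciPy:

```python
import numpy as np
from scipy.stats import genpareto

def pot_exceedance_prob(sample, threshold, x):
    """Classical peaks-over-threshold point estimate of P(X > x) for
    x above `threshold`: fit a GPD to the excesses over the threshold
    and rescale its survival function by the empirical exceedance rate."""
    sample = np.asarray(sample)
    excesses = sample[sample > threshold] - threshold
    # Fit the GPD to the excesses, pinning the location at 0.
    shape, _, scale = genpareto.fit(excesses, floc=0)
    zeta = excesses.size / sample.size           # empirical P(X > threshold)
    return zeta * genpareto.sf(x - threshold, shape, loc=0, scale=scale)
```

This point estimator illustrates exactly the weakness the abstract describes: with few or no observations near x, its variance explodes, which is what motivates methods (like the proposed SP approach) that bring in additional, artificially generated samples.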

Relevance: 20.00%

Abstract:

Stress serves as an adaptive mechanism and helps organisms cope with life-threatening situations. However, individual vulnerability to stress and dysregulation of this system may precipitate stress-related disorders such as depression. The neurobiological circuitry in charge of dealing with stressors has been widely studied in animal models. Recently our group demonstrated a role for lysophosphatidic acid (LPA), acting through the LPA1 receptor, in vulnerability to stress; in particular, the lack of this receptor is related to a robust decrease of adult hippocampal neurogenesis and the induction of anxious and depressive states. Nevertheless, the specific abnormalities of the limbic circuit in reaction to stress remain unclear. The aim of this study was to examine the differences in the brain activation pattern in the presence or absence of the LPA1 receptor after acute stress. For this purpose, we studied the response of maLPA1-null male mice and normal wild-type mice to an intense stressor, the Tail Suspension Test. Behaviour-induced activation of brain regions involved in mood regulation was analysed by stereological quantification of c-Fos-immunoreactive positive cells. We also conducted multidimensional scaling analysis in order to unravel coactivation between structures. Our results revealed hyperactivity of stress-related structures such as the amygdala and the paraventricular nucleus of the hypothalamus in the knockout model, and different patterns of coactivation in the two genotypes in a multidimensional map. These data provide further evidence for the engagement of LPA1 receptors in stress regulation and shed light on the different neural pathways under normal and vulnerable conditions that can lead to mood disorders.

Relevance: 20.00%

Abstract:

The finite Dirichlet series of the title are defined by the condition that they vanish at as many initial zeros of the zeta function as possible. It turns out that such series can produce extremely good approximations to the values of Riemann's zeta function inside the critical strip. In addition, the coefficients of these series have remarkable number-theoretical properties, discovered in large-scale high-precision numerical experiments. So far, we have found no theoretical explanation for the observed phenomena.

Relevance: 20.00%

Abstract:

Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (this volume changes based on the computer used or available, but there is always a data set that is too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n²) memory, where n is the (predetermined) size of a data subset (or data chunk) at each time step, which makes the algorithm truly scalable (as n can be chosen based on the available memory). Furthermore, only 2n² elements of the full N × N (where N ≫ n) kernel matrix need to be calculated at each time step, reducing both the computation time in producing the kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as kernel fuzzy c-means run on the entire data set, while achieving a significant speedup.
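As context, the base (non-streaming) kernel fuzzy c-means iteration that stKFCM builds on can be sketched as follows. This is a generic, textbook-style implementation on a single data chunk, not the stKFCM algorithm itself, and the parameter names are ours:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_fcm(K, c, m=2.0, iters=50, seed=0):
    """Kernel fuzzy c-means on one chunk, given its kernel matrix K.
    Prototypes live implicitly in feature space as weighted combinations
    of the data, so only K is needed -- the property stKFCM exploits."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)         # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        W = W / W.sum(axis=0, keepdims=True)  # normalised cluster weights
        # Squared feature-space distance to each implicit prototype:
        # d2[i, k] = K_ii - 2 (K W)[i, k] + (W_k^T K W_k)
        d2 = (np.diag(K)[:, None]
              - 2.0 * K @ W
              + np.einsum('jk,jl,lk->k', W, K, W)[None, :])
        d2 = np.maximum(d2, 1e-12)            # guard against round-off
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U
```

The point of the kernel trick here is that the update touches only entries of K, which is what makes it possible for a streaming variant to work with a small n × n sub-block plus 2n² cross-kernel entries per step instead of the full N × N matrix.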

Relevance: 20.00%

Abstract:

A novel route to prepare highly active and stable N2O decomposition catalysts is presented, based on Fe-exchanged beta zeolite. The procedure consists of liquid-phase Fe(III) exchange at low pH. By systematically varying the pH from 3.5 to 0 with nitric acid during each Fe(III)-exchange procedure, the degree of dealumination was controlled, as verified by ICP and NMR. Dealumination changes the presence of octahedral Al sites neighbouring the Fe sites, improving the performance for this reaction. The catalysts so obtained exhibit a remarkable enhancement in activity, with an optimal pH of 1. Further optimization by increasing the Fe content is possible. The optimal formulation showed good conversion levels, comparable to a benchmark Fe-ferrierite catalyst. The catalyst stability under tail-gas conditions containing NO, O2 and H2O was excellent, without any appreciable activity decay during 70 h time on stream. Based on characterisation and data analysis from ICP, single pulse excitation NMR, MQ MAS NMR, N2 physisorption, TPR(H2) analysis and apparent activation energies, the improved catalytic performance is attributed to an increased concentration of active sites. Temperature-programmed reduction experiments reveal significant changes in the Fe(III) reducibility pattern, with the presence of two reduction peaks tentatively attributed to the interaction of the Fe-oxo species with electron-withdrawing extraframework AlO6 species, causing a delayed reduction. A low-temperature peak is attributed to Fe species exchanged on zeolitic AlO4 sites, which are partially charged by the presence of the neighbouring extraframework AlO6 sites. Improved mass transport due to acid leaching is ruled out. The increased activity is rationalized by an active-site model, in which the concentration of active sites increases by selectively washing out the distorted extraframework AlO6 species under (optimal) acidic conditions, liberating active Fe species.