873 results for Tail-approximation


Relevance: 20.00%

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: an approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible.

The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u, v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 vertices other than u and v (or edges) leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below.

We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications.

Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.

We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u, v, the goal is to find a low-cost network which, for each pair uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results.

In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + ε)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
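
To make the density notion above concrete: a minimal sketch (using networkx, with a hypothetical "cost" edge attribute and an example graph, neither taken from the thesis) of the cost-per-vertex measure that the pruning procedure tries to preserve.

```python
# Minimal sketch of the density measure used above: the density of a
# (sub)graph is the total cost of its edges divided by the number of
# vertices it contains. The "cost" attribute name and the example graph
# are illustrative assumptions, not from the thesis.
import networkx as nx

def density(graph: nx.Graph) -> float:
    """Total edge cost divided by the number of vertices."""
    total_cost = sum(data.get("cost", 1.0) for _, _, data in graph.edges(data=True))
    return total_cost / graph.number_of_nodes()

# Example: a 4-cycle (which is 2-vertex-connected) with unit edge costs.
G = nx.cycle_graph(4)
nx.set_edge_attributes(G, 1.0, "cost")
print(density(G))  # 4 edges of cost 1 over 4 vertices -> density 1.0
```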

Relevance: 20.00%

Abstract:

A validation study examined the accuracy of a purpose-built single photon absorptiometry (SPA) instrument for making on-farm in vivo measurements of bone mineral density (BMD) in tail bones of cattle. In vivo measurements were made at the proximal end of the ninth coccygeal vertebra (Cy9) in steers of two age groups (each n = 10) in adequate or low phosphorus status. The tails of the steers were then resected, and the BMD of the Cy9 bone was measured in the laboratory with SPA on the resected tails and then with established laboratory procedures on defleshed bone. Specific gravity and ash density were measured on the isolated Cy9 vertebrae and on 5-mm² dorso-ventral cores of bone cut from each defleshed Cy9. Calculation of BMD by SPA required a measure of tail bone thickness, and this was estimated as a fraction of total tail thickness. Actual tail bone thickness was also measured on the isolated Cy9 vertebrae. The accuracy of measurement of BMD by SPA was evaluated by comparison with the ash density of the bone cores measured in the laboratory. In vivo SPA measurements of BMD were closely correlated with laboratory measurements of core ash density (r = 0.92). Ash density and specific gravity of cores, and all SPA measures of BMD, were affected by the phosphorus status of the steers, but the effect of steer age was only significant (P < 0.05) for steers in adequate phosphorus status. The accuracy of SPA in determining BMD of tail bone may be improved by reducing the error associated with in vivo estimation of tail bone thickness, and also by adjusting for displacement of soft tissue by bone mineral. In conclusion, a purpose-built SPA instrument could be used to make sequential, non-invasive, on-farm in vivo measurements of the BMD of tail bone in cattle with accuracy acceptable for many animal studies.

Relevance: 20.00%

Abstract:

In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, actuarial work for natural disaster insurance, quality control schemes, etc. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles judged relevant by interested authorities. Such information regarding extremes serves as essential guidance to interested authorities in their decision-making processes. However, in this context, data are usually skewed in nature, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is a branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing methods in EVT for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion or integration of a given data sample with external, computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
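
The dissertation's semiparametric fusion method is not reproduced here, but as a baseline for the quantities being estimated, here is a minimal sketch of the standard EVT peaks-over-threshold estimator for a small exceedance probability and an extreme quantile; the sample, the threshold choice, and the query levels are illustrative assumptions.

```python
# Standard peaks-over-threshold (POT) tail estimation, shown only as a
# baseline EVT approach; the dissertation's semiparametric fusion method
# is not implemented here. Sample and threshold are illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)   # skewed sample

u = np.quantile(x, 0.95)          # threshold at the 95th percentile
excesses = x[x > u] - u
n, n_u = x.size, excesses.size

# Fit a generalized Pareto distribution to the excesses (location fixed at 0).
shape, _, scale = genpareto.fit(excesses, floc=0)

def tail_prob(level):
    """Estimate P(X > level) for a level above the threshold u."""
    return (n_u / n) * genpareto.sf(level - u, shape, loc=0, scale=scale)

def extreme_quantile(p):
    """Estimate the quantile exceeded with small probability p."""
    return u + genpareto.ppf(1 - p * n / n_u, shape, loc=0, scale=scale)

print(tail_prob(20.0), extreme_quantile(1e-4))
```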

Relevance: 20.00%

Abstract:

Stress serves as an adaptive mechanism and helps organisms cope with life-threatening situations. However, individual vulnerability to stress and dysregulation of this system may precipitate stress-related disorders such as depression. The neurobiological circuitry in charge of dealing with stressors has been widely studied in animal models. Recently, our group demonstrated a role for lysophosphatidic acid (LPA), acting through the LPA1 receptor, in vulnerability to stress; in particular, the lack of this receptor is related to a robust decrease in adult hippocampal neurogenesis and the induction of anxious and depressive states. Nevertheless, the specific abnormalities of the limbic circuit in reaction to stress remain unclear. The aim of this study is to examine the differences in the brain activation pattern in the presence or absence of the LPA1 receptor after acute stress. For this purpose, we studied the response of maLPA1-null male mice and normal wild-type mice to an intense stressor, the Tail Suspension Test. Behaviour-induced activation of brain regions involved in mood regulation was analysed by stereological quantification of c-Fos-immunoreactive positive cells. We also conducted a multidimensional scaling analysis in order to unravel coactivation between structures. Our results revealed hyperactivity of stress-related structures such as the amygdala and the paraventricular nucleus of the hypothalamus in the knockout model, and different patterns of coactivation in the two genotypes in the multidimensional map. These data provide further evidence for the engagement of LPA1 receptors in stress regulation and shed light on the different neural pathways under normal and vulnerable conditions that can lead to mood disorders.
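
As a rough illustration of the multidimensional scaling step described above (not the study's exact pipeline), a minimal sketch that embeds brain regions according to correlation-derived coactivation distances; the region names and c-Fos counts are hypothetical.

```python
# Sketch of a multidimensional-scaling coactivation map, assuming a
# per-animal matrix of c-Fos counts per brain region. Region names and
# the data are hypothetical; the study's exact pipeline may differ.
import numpy as np
from sklearn.manifold import MDS

regions = ["amygdala", "PVN", "dorsal_hippocampus", "prefrontal_cortex"]
rng = np.random.default_rng(1)
counts = rng.poisson(lam=50, size=(10, len(regions)))  # 10 animals x 4 regions

# Coactivation as inter-regional correlation; convert to a dissimilarity.
corr = np.corrcoef(counts, rowvar=False)
dissimilarity = 1.0 - corr

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

for name, (x, y) in zip(regions, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```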

Relevance: 20.00%

Abstract:

Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (this volume changes based on the computer used or available, but there is always a data set that is too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n²) memory, where n is the (predetermined) size of the data subset (or data chunk) processed at each time step, which makes the algorithm truly scalable (as n can be chosen based on the available memory). Furthermore, only 2n² elements of the full N × N kernel matrix (where N ≫ n) need to be calculated at each time step, reducing both the time spent producing kernel elements and the complexity of the FCM algorithm itself. Empirical results show that stKFCM, even with a relatively small n, can provide clustering performance as accurate as that of kernel fuzzy c-means run on the entire data set, while achieving a significant speedup.
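
For background on the machinery the abstract builds on, a minimal sketch of plain kernel fuzzy c-means run on a single data chunk; this is not the streaming stKFCM update itself, and the kernel choice, chunk size and parameters are illustrative assumptions.

```python
# Plain kernel fuzzy c-means on one data chunk, shown only as background;
# this is NOT the streaming stKFCM update. Kernel, sizes and parameters
# are illustrative assumptions.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_fcm(K, c=3, m=2.0, iters=50, seed=0):
    """Membership matrix U (n x c) for kernel matrix K via kernel FCM."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=n)          # rows sum to 1
    for _ in range(iters):
        W = U ** m                                  # fuzzified memberships
        s = W.sum(axis=0)                           # per-cluster total weight
        # Squared feature-space distance from each point to each implicit
        # cluster centre, expressed entirely through the kernel matrix.
        d2 = (np.diag(K)[:, None]
              - 2.0 * (K @ W) / s
              + (W.T @ K @ W).diagonal() / s ** 2)
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)    # standard FCM update
    return U

X = np.random.default_rng(2).normal(size=(200, 4))  # one chunk of n = 200 points
U = kernel_fcm(rbf_kernel(X))
print(U.shape, U.sum(axis=1)[:3])                   # (200, 3); rows sum to 1
```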

Relevance: 20.00%

Abstract:

A novel route to prepare highly active and stable N2O decomposition catalysts, based on Fe-exchanged beta zeolite, is presented. The procedure consists of liquid-phase Fe(III) exchange at low pH. By systematically varying the pH from 3.5 to 0 with nitric acid during each Fe(III)-exchange procedure, the degree of dealumination was controlled, as verified by ICP and NMR. Dealumination changes the presence of octahedral Al sites neighbouring the Fe sites, improving the performance for this reaction. The catalysts so obtained exhibit a remarkable enhancement in activity, with an optimal pH of 1. Further optimization by increasing the Fe content is possible. The optimal formulation showed good conversion levels, comparable to a benchmark Fe-ferrierite catalyst. The catalyst stability under tail gas conditions containing NO, O2 and H2O was excellent, without any appreciable activity decay during 70 h on stream. Based on characterisation and data analysis from ICP, single pulse excitation NMR, MQ MAS NMR, N2 physisorption, TPR(H2) analysis and apparent activation energies, the improved catalytic performance is attributed to an increased concentration of active sites. Temperature-programmed reduction experiments reveal significant changes in the Fe(III) reducibility pattern, with the presence of two reduction peaks, tentatively attributed to the interaction of the Fe-oxo species with electron-withdrawing extraframework AlO6 species, causing a delayed reduction. A low-temperature peak is attributed to Fe species exchanged on zeolitic AlO4 sites, which are partially charged by the presence of the neighbouring extraframework AlO6 sites. Improved mass transport due to acid leaching is ruled out. The increased activity is rationalized by an active-site model in which the concentration of active sites increases because the distorted extraframework AlO6 species are selectively washed out under (optimal) acidic conditions, liberating active Fe species.

Relevance: 20.00%

Abstract:

In this work we present an important improvement to our biped mechanism model that allows the system's feet to be raised in a stable manner during the execution of trajectories. This improvement allows simpler trajectory planning and also facilitates reducing the losses in the collision between the feet and the ground. In addition, we add to the design phase the study of the displacement of the Zero Moment Point, as well as of the variation of the normal component of the ground reaction force during the motion of the system. Considering these magnitudes in the design phase allows us to design the necessary support area of the system. These magnitudes are also used as a smoothness criterion for the ground contact, to facilitate the selection of robot parameters and trajectories.
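
For reference, the Zero Moment Point mentioned above can be approximated from link masses, positions and accelerations when the links' rotational inertia is neglected; a minimal sketch with hypothetical values follows.

```python
# Rough zero-moment-point (ZMP) estimate for a set of point masses,
# neglecting each link's rotational inertia. Masses, positions and
# accelerations below are hypothetical placeholders.
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def zmp_xy(m, pos, acc):
    """m: (n,) masses; pos, acc: (n, 3) positions/accelerations (x, y, z)."""
    denom = np.sum(m * (acc[:, 2] + g))
    x_zmp = np.sum(m * ((acc[:, 2] + g) * pos[:, 0] - acc[:, 0] * pos[:, 2])) / denom
    y_zmp = np.sum(m * ((acc[:, 2] + g) * pos[:, 1] - acc[:, 1] * pos[:, 2])) / denom
    return x_zmp, y_zmp

m = np.array([5.0, 3.0, 3.0])                      # torso and two legs (kg)
pos = np.array([[0.00, 0.0, 0.80],
                [0.05, 0.1, 0.40],
                [-0.05, -0.1, 0.40]])              # metres
acc = np.array([[0.2, 0.0, 0.0],
                [0.5, 0.0, -0.1],
                [-0.3, 0.0, 0.1]])                 # m/s^2
print(zmp_xy(m, pos, acc))  # the ZMP must stay inside the foot support area
```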

Relevance: 20.00%

Abstract:

Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that, even after the onset of formal education, approximate processing is carried out in this analogue magnitude code, regardless of whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research uses only nonsymbolic tasks to assess ANS acuity. Because of this tacit assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications for research domains of high practical relevance have so far remained unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to fostering performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provide evidence that symbolic approximation may be equally well, and in some cases even better, suited to generating predictions and fostering more formal math skills, independently of SES. In two longitudinal studies, we implemented exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure that consistently did not vary with children's SES, and that among the two approximate tasks it was the better predictor of math achievement at the end of first grade. In part, the strong connection seems to come about through mediation by ordinal skills. In two further experiments, we tested the suitability of both approximation formats for inducing an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in leading children to exploit the additive law of commutativity in a subsequent formal task. Nonsymbolic approximation, on the other hand, had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age; however, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, but that beyond this they have their own specific value with regard to didactic and educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot easily be disentangled in first graders' numerical understanding, but that children's SES also influences the existing interrelations between the different abilities tested here.

Relevance: 10.00%

Abstract:

Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an inverse Hilbert transform to obtain an analytic expression for the pressure. The method described in this paper is applicable to cases both with and without a flattened region.

Relevance: 10.00%

Abstract:

An unstructured mesh finite volume discretisation method for simulating diffusion in anisotropic media in two-dimensional space is discussed. This technique is considered as an extension of the fully implicit hybrid control-volume finite-element method and it retains the local continuity of the flux at the control volume faces. A least squares function reconstruction technique together with a new flux decomposition strategy is used to obtain an accurate flux approximation at the control volume face, ensuring that the overall accuracy of the spatial discretisation maintains second order. This paper highlights that the new technique coincides with the traditional shape function technique when the correction term is neglected and that it significantly increases the accuracy of the previous linear scheme on coarse meshes when applied to media that exhibit very strong to extreme anisotropy ratios. It is concluded that the method can be used on both regular and irregular meshes, and appears independent of the mesh quality.
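
As a generic illustration of the building blocks named above (not the paper's corrected scheme), a minimal sketch of least-squares gradient reconstruction at a control volume and the resulting anisotropic diffusive flux through a face; the geometry, values and diffusion tensor are hypothetical.

```python
# Generic least-squares gradient reconstruction at a control volume and the
# resulting anisotropic diffusive flux at a face. An illustration of the
# standard building blocks only, not the paper's corrected scheme; geometry,
# values and the diffusion tensor are hypothetical.
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares gradient of u at cell centre xc from neighbours (xn, un)."""
    d = xn - xc                 # displacement vectors to neighbouring centres
    rhs = un - uc               # corresponding value differences
    grad, *_ = np.linalg.lstsq(d, rhs, rcond=None)
    return grad

# Cell centre, its value, and four neighbouring centres/values (2-D).
xc, uc = np.array([0.0, 0.0]), 1.0
xn = np.array([[0.1, 0.0], [-0.1, 0.02], [0.0, 0.12], [0.03, -0.1]])
un = np.array([1.2, 0.85, 1.3, 0.7])

grad_u = ls_gradient(xc, uc, xn, un)

# Strongly anisotropic diffusion tensor and a face with unit normal n, area A.
K = np.array([[100.0, 0.0],
              [0.0, 1.0]])
n_face = np.array([1.0, 0.0])
area = 0.1
flux = -area * n_face @ (K @ grad_u)   # diffusive flux through the face
print(grad_u, flux)
```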

Relevance: 10.00%

Abstract:

The dynamic interaction between building systems and the external climate is extremely complex, involving a large number of difficult-to-predict variables. In order to study the impact of global warming on the built environment, the use of building simulation techniques together with forecast weather data is often necessary. Since all building simulation programs require hourly meteorological input data for thermal comfort and energy evaluation, the provision of suitable weather data becomes critical. Based on a review of existing weather data generation models, this paper presents an effective method to generate approximate future hourly weather data suitable for studying the impact of global warming. Depending on the level of information available for the prediction of future weather conditions, it is shown that either the method of retaining the current level, the constant offset method, or the diurnal modelling method may be used to generate the future hourly variation of an individual weather parameter. An example of the application of this method to different global warming scenarios in Australia is presented. Since there is no reliable projection of possible changes in air humidity, solar radiation or wind characteristics, as a first approximation these parameters have been assumed to remain at their current levels. A sensitivity test of their impact on building energy performance shows that there is generally a good linear relationship between building cooling load and changes in solar radiation, relative humidity or wind speed.
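
A minimal sketch of the constant offset idea described above, applied to hourly dry-bulb temperature; the +2 °C scenario offset and the synthetic "current" series are illustrative assumptions.

```python
# Constant offset method for generating approximate future hourly weather
# data: shift the current hourly series by a projected scenario change.
# The +2.0 degC offset and the synthetic "current" series are illustrative.
import numpy as np

hours = np.arange(8760)                              # one year of hourly data
# Synthetic current-year dry-bulb temperature: annual cycle + diurnal cycle.
current_temp = (20.0
                + 8.0 * np.sin(2 * np.pi * hours / 8760.0)
                + 5.0 * np.sin(2 * np.pi * (hours % 24) / 24.0))

scenario_offset = 2.0                                # projected warming (degC)
future_temp = current_temp + scenario_offset         # constant offset method

# Parameters with no reliable projection (humidity, solar, wind) would be
# retained at the current level, as in the paper's first approximation.
print(current_temp.mean(), future_temp.mean())
```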

Relevance: 10.00%

Abstract:

Characterization of indoor particle sources in 14 residential houses in Brisbane, Australia, was performed. Approximate PM2.5 concentrations and submicrometre particle number concentrations were measured simultaneously for more than 48 h in the kitchen of each house using a photometer (DustTrak) and a condensation particle counter (CPC), respectively. From the real-time indoor particle concentration data and a diary of indoor activities, the indoor particle sources were identified. The study found that, among the indoor activities recorded in this study, frying, grilling, stove use, toasting, cooking pizza, smoking, use of a candle to vaporize eucalyptus oil, and fan heater use could elevate indoor particle number concentration levels by more than five times. The approximate indoor PM2.5 concentrations could be close to 90 times, 30 times and three times higher than background levels during grilling, frying and smoking, respectively.

Relevance: 10.00%

Abstract:

We consider boundary layer flow of a micropolar fluid driven by a porous stretching sheet. A similarity solution is defined, and numerical solutions using Runge-Kutta and quasilinearisation schemes are obtained. A perturbation analysis is also used to derive analytic solutions to first order in the perturbing parameter. The resulting closed form solutions involve relatively complex expressions, and the analysis is made more tractable by a combination of offline and online work using a computational algebra system (CAS). For this combined numerical and analytic approach, the perturbation analysis yields a number of benefits with regard to the numerical work. The existence of a closed form solution helps to discriminate between acceptable and spurious numerical solutions. Also, the expressions obtained from the perturbation work can provide an accurate description of the solution for ranges of parameters where the numerical approaches considered here prove computationally more difficult.

Relevance: 10.00%

Abstract:

We revisit the classical Karman rotating disk problem. A series analysis is used to derive estimates of boundary conditions at the surface. Using these estimates, computed thermal and flow fields for large mass transfer through the disk are readily obtained using a shooting method. The relevance of the problem to practical flows is discussed briefly.
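
As a generic illustration of the shooting approach mentioned above, a minimal sketch that solves the classical von Kármán similarity equations, with the surface value W0 of the axial velocity function as the parameter for mass transfer through the disk (set to zero here for the classical case); the paper's series-derived boundary-condition estimates are not reproduced.

```python
# Shooting solution of the classical von Karman rotating-disk similarity
# equations: F'' = F^2 - G^2 + H F',  G'' = 2 F G + H G',  H' = -2 F,
# with F(0) = 0, G(0) = 1, H(0) = W0 and F, G -> 0 far from the disk.
# W0 models uniform mass transfer through the disk (W0 = 0 is the classical
# case; W0 < 0 corresponds to suction). This is a generic sketch only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

W0 = 0.0          # surface value of the axial velocity function
ETA_INF = 10.0    # truncation of the semi-infinite similarity domain

def rhs(eta, y):
    F, Fp, G, Gp, H = y
    return [Fp,
            F**2 - G**2 + H * Fp,   # radial momentum
            Gp,
            2.0 * F * G + H * Gp,   # azimuthal momentum
            -2.0 * F]               # continuity

def residual(guess):
    Fp0, Gp0 = guess              # unknown wall gradients to be shot for
    sol = solve_ivp(rhs, (0.0, ETA_INF), [0.0, Fp0, 1.0, Gp0, W0],
                    rtol=1e-8, atol=1e-10)
    return [sol.y[0, -1], sol.y[2, -1]]   # F and G must vanish far away

Fp0, Gp0 = fsolve(residual, x0=[0.5, -0.6])
print(f"F'(0) = {Fp0:.4f}, G'(0) = {Gp0:.4f}")  # approx. 0.5102 and -0.6159
```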