1000 results for Brachistochrone Problem
Abstract:
In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While the CIP is unsolvable in its most general form, there are special cases of practical importance in which it is solvable, including the case in which the request stream follows an Independent Reference Model (IRM) with a generalized power-law (GPL) demand distribution. To that end, we design two basic "litmus" tests that are able to detect LFU and LRU replacement policies and to infer the effective size of the cache and of the object universe, as well as the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold, and even when the underlying replacement policies are not "pure" LFU or LRU. We exemplify the value of our inference framework by considering example applications.
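For readers unfamiliar with the setting, here is a minimal Python sketch (not the paper's litmus tests) of the scenario the abstract describes: an LRU cache fed by an IRM request stream with generalized power-law popularity, whose miss stream is what an external observer would see. The universe size, cache size, and skew parameter are hypothetical.

```python
# Illustrative only: simulate an LRU cache driven by an IRM request stream
# with a generalized power-law (Zipf-like) popularity profile, and record
# the miss stream that a downstream agent would observe.
import random
from collections import OrderedDict

def gpl_weights(n, alpha):
    """Popularity of object i proportional to 1 / i**alpha (generalized power law)."""
    return [1.0 / (i ** alpha) for i in range(1, n + 1)]

def simulate_lru_miss_stream(num_requests, universe=1000, cache_size=100, alpha=0.8, seed=0):
    rng = random.Random(seed)
    weights = gpl_weights(universe, alpha)
    cache = OrderedDict()                      # keys kept in LRU order (oldest first)
    misses = []
    for _ in range(num_requests):
        obj = rng.choices(range(universe), weights=weights, k=1)[0]
        if obj in cache:
            cache.move_to_end(obj)             # hit: refresh recency
        else:
            misses.append(obj)                 # miss: forwarded downstream
            cache[obj] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)      # evict least recently used
    return misses

if __name__ == "__main__":
    miss_stream = simulate_lru_miss_stream(50_000)
    print(f"observed {len(miss_stream)} misses out of 50000 requests")
```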
Abstract:
The combinatorial Dirichlet problem is formulated, and an algorithm for solving it is presented. This provides an effective method for interpolating missing data on weighted graphs of arbitrary connectivity. Image processing examples are shown, and the relation to anisotropic diffusion is discussed.
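A minimal numpy sketch of the underlying linear-algebra step, assuming the usual combinatorial formulation (minimize the Dirichlet energy x^T L x subject to fixed boundary values); the 4-node path graph and its boundary values below are hypothetical, not an example from the paper.

```python
# Combinatorial Dirichlet sketch: interpolate unknown node values from fixed
# "boundary" values by solving L_UU x_U = -L_UB x_B, where L is the graph
# Laplacian of the weighted graph.
import numpy as np

def dirichlet_interpolate(W, boundary_idx, boundary_vals):
    """W: symmetric weight matrix; returns interpolated values at all nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                        # combinatorial Laplacian
    interior = [i for i in range(n) if i not in set(boundary_idx)]
    L_UU = L[np.ix_(interior, interior)]
    L_UB = L[np.ix_(interior, boundary_idx)]
    x_U = np.linalg.solve(L_UU, -L_UB @ np.asarray(boundary_vals, float))
    x = np.zeros(n)
    x[boundary_idx] = boundary_vals
    x[interior] = x_U
    return x

# Path graph 0-1-2-3 with unit weights; nodes 0 and 3 carry the known data.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(dirichlet_interpolate(W, [0, 3], [0.0, 1.0]))       # -> [0, 1/3, 2/3, 1]
```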
Abstract:
An incremental, nonparametric probability estimation procedure using the fuzzy ARTMAP neural network is introduced. In slow-learning mode, fuzzy ARTMAP searches for patterns of data on which to build ever more accurate estimates. In max-nodes mode, the network initially learns a fixed number of categories, and weights are then adjusted gradually.
Abstract:
A neural network model of 3-D visual perception and figure-ground separation by visual cortex is introduced. The theory provides a unified explanation of how a 2-D image may generate a 3-D percept; how figures pop out from cluttered backgrounds; how spatially sparse disparity cues can generate continuous surface representations at different perceived depths; how representations of occluded regions can be completed and recognized without usually being seen; how occluded regions can sometimes be seen during percepts of transparency; how high spatial frequency parts of an image may appear closer than low spatial frequency parts; how sharp targets are detected better against a figure and blurred targets are detected better against a background; how low spatial frequency parts of an image may be fused while high spatial frequency parts are rivalrous; how sparse blue cones can generate vivid blue surface percepts; how 3-D neon color spreading, visual phantoms, and tissue contrast percepts are generated; and how conjunctions of color and depth may rapidly pop out during visual search. These explanations are derived from an ecological analysis of how monocularly viewed parts of an image inherit the appropriate depth from contiguous binocularly viewed parts, as during DaVinci stereopsis. The model predicts the functional role and ordering of multiple interactions within and between the two parvocellular processing streams that join LGN to prestriate area V4. Interactions from cells representing larger scales and disparities to cells representing smaller scales and disparities are of particular importance.
Abstract:
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains. © Institute of Mathematical Statistics, 2010.
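For context, a sketch of one standard construction through which a fully Bayesian prior induces automatic multiplicity correction; the abstract does not spell out its priors, so this is an illustration of the general mechanism, not necessarily the exact prior analysed in the paper. With p candidate variables, inclusion indicators γ_i ~ Bernoulli(π) and π ~ Beta(a, b) give a marginal prior over models that depends only on the model size k_γ:

```latex
p(\gamma \mid \pi) = \pi^{k_\gamma}(1-\pi)^{\,p-k_\gamma},
\qquad
p(\gamma) = \int_0^1 p(\gamma \mid \pi)\,\mathrm{Beta}(\pi \mid a,b)\,d\pi
          = \frac{B(a + k_\gamma,\; b + p - k_\gamma)}{B(a,b)}.
```

With a = b = 1 this reduces to p(γ) = [(p+1) \binom{p}{k_γ}]^{-1}, so crowded model-size classes are automatically discounted as p grows, which is the sense in which the correction happens "automatically" in the fully Bayesian treatment.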
Abstract:
BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with survival (non-dropout) rates approximated by an exponential decay curve, e^(−λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates, useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
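A quick arithmetic check of the reported decay model, using only the figures quoted in the abstract:

```python
# With a weekly hazard of lambda = 0.0088, the survival (non-dropout)
# fraction at t = 52 weeks is exp(-0.0088 * 52) ~= 0.633, i.e. a dropout
# rate of roughly 37%, matching the 1-year figure quoted above.
import math

lam = 0.0088          # weekly dropout hazard from the meta-analysis
t = 52                # weeks in one year
survival = math.exp(-lam * t)
print(f"retained: {survival:.3f}, dropped out: {1 - survival:.1%}")
```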
Abstract:
Gemstone Team GREEN JUSTICE
Abstract:
This paper proposes that atherosclerosis is initiated by a signaling event that deposits calcium hydroxyapatite (Ca-HAP). This event is preceded by a loss of mechanical structure in the arterial wall. After Ca-HAP has been deposited, it is unlikely that it will be reabsorbed, because the solubility product constant (Ksp) is very small and the large stores of Ca²⁺ and PO₄³⁻ in the bones oppose any attempt to dissolve Ca-HAP by decreasing the common ions. The hydroxide ion (OH⁻) of Ca-HAP can be displaced in nature by fluoride (F⁻) and carbonate (CO₃²⁻) ions, and it is proposed that anions associated with cholesterol ester hydrolysis and, in very small quantities, the enolate of 7-ketocholesterol could also displace the OH⁻ of Ca-HAP, forming an ionic bond. The free energy of hydration of Ca-HAP at 310 K is most likely negative, and the ionic radii of the anions associated with the hydrolysis of cholesterol ester are compatible with the substitution. Furthermore, examination of the pathology of atherosclerotic lesions by Raman and NMR spectroscopy and confocal microscopy supports deposition of Ca-HAP associated with cholesterol. Investigating the affinity of intermediates of cholesterol hydrolysis for Ca-HAP, compared to lipoproteins such as HDL, LDL, and VLDL, using isothermal titration calorimetry could add proof of this concept and may lead to the development of a new class of medications targeted at the deposition of cholesterol within Ca-HAP. Treatment of acute ischemic events as a consequence of atherosclerosis with denitrogenation and oxygenation is discussed. © the author(s), publisher and licensee Libertas Academica Ltd.
Abstract:
In this paper we present a procedure for describing strategies in problems that can be solved using inductive reasoning. This procedure is based on certain aspects of the analysis of the specific subject matter, in particular the elements, the representation systems, and the transformations involved. We show an example of how we used this procedure for the tiles problem. Finally, we present some results and conclusions.
Abstract:
The concept of 'nested methods' is adopted to solve the location-routeing problem. Unlike the sequential and iterative approaches, in this method we treat the routeing element as a sub-problem within the larger problem of location. Efficient techniques that take into account the above concept and which use a neighbourhood structure inspired from computational geometry are presented. A simple version of tabu search is also embedded into our methods to improve the solutions further. Computational testing is carried out on five sets of problems of 400 customers with five levels of depot fixed costs, and the results obtained are encouraging.
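A generic Python skeleton of the kind of "simple version of tabu search" the abstract mentions; the neighbourhood, cost function, and tenure below are placeholders, not the paper's computational-geometry neighbourhood or its nested location-routeing structure.

```python
# Generic tabu-search loop: repeatedly move to the best non-tabu neighbour,
# keep a short memory of recent solutions, and remember the best seen so far.
def tabu_search(initial, neighbours, cost, iterations=1000, tenure=10):
    best = current = initial
    tabu = []                                   # recently visited solutions
    for _ in range(iterations):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)     # best admissible neighbour
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                         # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current                      # keep the global best
    return best

# Toy usage: minimise (x - 7)**2 over the integers starting from 0.
print(tabu_search(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2))
```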
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with a makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
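A small Python illustration of the standard two-machine open-shop makespan lower bound against which guarantees like the 5/4 ratio are typically measured; the job data are hypothetical, and this is a generic bound rather than the paper's heuristic.

```python
# Lower bound for two-machine open shop: the makespan can be no smaller than
# the larger total machine load, nor smaller than the longest single job
# (each job's two operations cannot overlap in time).
def open_shop_lower_bound(a, b):
    """a[j], b[j]: processing times of job j on machines A and B."""
    machine_load = max(sum(a), sum(b))
    longest_job = max(x + y for x, y in zip(a, b))
    return max(machine_load, longest_job)

a = [3, 5, 2, 4]   # hypothetical processing times on machine A
b = [4, 2, 6, 3]   # hypothetical processing times on machine B
print(open_shop_lower_bound(a, b))   # -> 15
```

The same quantity remains a valid lower bound when one machine is constrained to a given job sequence, since adding constraints can only increase the optimal makespan.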
Abstract:
In this paper the many-to-many location routing problem is introduced, and its relationship to various problems in distribution management is emphasised. Useful mathematical formulations, which can easily be extended to cater for other related problems, are presented. Techniques for tackling this complex distribution problem are also outlined.