895 results for Anchoring heuristic


Relevance:

10.00%

Publisher:

Abstract:

Commonly, research work in routing for delay tolerant networks (DTN) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled in such a way as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: Given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimizes message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours to optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload, and which selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area. Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance, in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
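The schedule slack that MCP exploits can be made concrete with a small feasibility test: a detour to a candidate encounter location is admissible only if the extra travel time still lets the node reach its next scheduled stop on time. The sketch below is only illustrative (Euclidean travel at constant speed is assumed, and the function names are hypothetical); it is not the DMD or DNE heuristic from the paper.

import math

def travel_time(a, b, speed=1.0):
    """Euclidean travel time between two (x, y) locations at a constant speed."""
    return math.dist(a, b) / speed

def has_slack_for_detour(current_loc, current_time, detour_loc,
                         next_stop, next_deadline, speed=1.0):
    """Return True if visiting detour_loc (e.g., to create a node encounter)
    still allows the node to reach its next scheduled stop by its deadline."""
    arrival_via_detour = (current_time
                          + travel_time(current_loc, detour_loc, speed)
                          + travel_time(detour_loc, next_stop, speed))
    return arrival_via_detour <= next_deadline

# Example: a node at (0, 0) at t=0 must be at (10, 0) by t=15; a detour through
# (5, 3) takes roughly 11.7 time units in total, so it fits within the slack.
print(has_slack_for_detour((0, 0), 0.0, (5, 3), (10, 0), 15.0))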

Relevance:

10.00%

Publisher:

Abstract:

This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response time. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied in two important problems arising naturally in large-scale graphs, namely social search and community detection.
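The embedding and query steps described above are easy to sketch for an unweighted graph: offline, a BFS from each landmark gives every node its vector of landmark distances; online, the triangle inequality yields the estimate d(s, t) ≈ min over landmarks l of d(s, l) + d(l, t). This is a minimal illustration of the general approach, not the thesis's exact implementation or selection strategies.

from collections import deque

def bfs_distances(graph, source):
    """Hop distances from source to all reachable nodes in an unweighted graph
    given as {node: [neighbours]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_embeddings(graph, landmarks):
    """Offline step: for each node, the vector of its distances to all landmarks."""
    per_landmark = [bfs_distances(graph, l) for l in landmarks]
    return {v: [d.get(v, float("inf")) for d in per_landmark] for v in graph}

def estimate_distance(embeddings, s, t):
    """Online step: upper-bound d(s, t) by routing through the best landmark."""
    return min(ds + dt for ds, dt in zip(embeddings[s], embeddings[t]))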

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder – a node that possesses the file – to a set of leechers – nodes that are interested in downloading the file. An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results by Kumar and Ross [1] to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations that show that our GroupTree heuristic outperforms other heuristics, that it scales well as the number of leechers increases, and that it closely approaches the optimal theoretical bounds.
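For reference, the baseline (angel-free) bound in the style of Kumar and Ross states that, for a file of size F, seeder upload capacity u_s, leecher upload capacities u_1..u_N, and minimum leecher download capacity d_min, the MDT is at least max{F/u_s, F/d_min, N·F/(u_s + Σ u_i)}. The sketch below computes only this commonly stated baseline, not the paper's new angel-aware bound.

def mdt_lower_bound(file_size, seeder_up, leecher_ups, leecher_downs):
    """Baseline lower bound on the Minimum Distribution Time without angels."""
    n = len(leecher_ups)
    return max(
        file_size / seeder_up,                            # seeder must upload the file at least once
        file_size / min(leecher_downs),                   # slowest leecher must receive the whole file
        n * file_size / (seeder_up + sum(leecher_ups)),   # total bytes delivered / total upload capacity
    )

# Example: a 1 GB (8e9 bit) file, a 10 Mb/s seeder, and four 5 Mb/s leechers.
print(mdt_lower_bound(8e9, 10e6, [5e6] * 4, [5e6] * 4), "seconds")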

Relevance:

10.00%

Publisher:

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
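The contrast between the random baseline and the centrality-based strategies mentioned above can be sketched as follows; degree is used here as a simple stand-in for the notions of centrality studied in the paper, and the function names are illustrative only.

import random

def select_landmarks_random(graph, k, seed=0):
    """Baseline from the literature: pick k landmarks uniformly at random."""
    rng = random.Random(seed)
    return rng.sample(list(graph), k)

def select_landmarks_central(graph, k):
    """A simple 'central nodes' strategy: pick the k highest-degree nodes.
    (The more elaborate strategies additionally force landmarks far apart.)"""
    return sorted(graph, key=lambda v: len(graph[v]), reverse=True)[:k]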

Relevance:

10.00%

Publisher:

Abstract:

Controlling the mobility pattern of mobile nodes (e.g., robots) to monitor a given field is a well-studied problem in sensor networks. In this setup, absolute control over the nodes’ mobility is assumed, and apart from physical constraints, no other constraints are imposed on planning the mobility of these nodes. In this paper, we address a more general version of the problem. Specifically, we consider a setting in which the mobility of each node is externally constrained by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged to achieve a specific coverage distribution of a field. Such a distribution defines the relative importance of different field locations. We define the Constrained Mobility Coordination problem for Preferential Coverage (CMC-PC) as follows: given a field with a desired monitoring distribution, and a number of nodes n, each with its own schedule, we need to coordinate the mobility of the nodes in order to achieve the following two goals: 1) satisfy the schedules of all nodes, and 2) attain the required coverage of the given field. We show that the CMC-PC problem is NP-complete (by reduction from the Hamiltonian Cycle problem). Then we propose TFM, a distributed heuristic to achieve field coverage that is as close as possible to the required coverage distribution. We verify the premise of TFM using extensive simulations, as well as taxi logs from a major metropolitan area. We compare TFM to the random mobility strategy, which provides a lower bound on performance. Our results show that TFM is very successful in matching the required field coverage distribution, and that it provides at least a two-fold query success ratio for queries that follow the target coverage distribution of the field.
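One simple way to score how well a mobility strategy matches the desired monitoring distribution is the total variation distance between the empirical coverage of field cells and the target distribution; this metric is an assumption for illustration, not necessarily the one used in the paper's evaluation.

def coverage_mismatch(visit_counts, target_dist):
    """Total variation distance between the empirical coverage distribution
    induced by node visits and the desired monitoring distribution.
    visit_counts maps field cells to visit counts; target_dist to probabilities."""
    total = sum(visit_counts.values())
    empirical = {cell: c / total for cell, c in visit_counts.items()}
    cells = set(empirical) | set(target_dist)
    return 0.5 * sum(abs(empirical.get(c, 0.0) - target_dist.get(c, 0.0)) for c in cells)

# Example: a two-cell field where cell 'A' should receive 70% of the coverage.
print(coverage_mismatch({"A": 6, "B": 4}, {"A": 0.7, "B": 0.3}))  # ~0.1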

Relevance:

10.00%

Publisher:

Abstract:

In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, broad accessibility has not been a priority in the design of formal verification tools that can provide these benefits. We propose a few design criteria to address these issues: a simple, familiar, and conventional concrete syntax that is independent of any environment, application, or verification strategy, and the possibility of reducing workload and entry costs by employing features selectively. We demonstrate the feasibility of satisfying such criteria by presenting our own formal representation and verification system. Our system’s concrete syntax overlaps with English, LaTeX, and MediaWiki markup wherever possible, and its verifier relies on heuristic search techniques that make the formal authoring process more manageable and consistent with prevailing practices. We employ techniques and algorithms that ensure a simple, uniform, and flexible definition and design for the system, so that it is easy to augment, extend, and improve.

Relevance:

10.00%

Publisher:

Abstract:

This thesis proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder – a node that possesses the file – to a set of leechers – nodes that are interested in downloading the file. An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results by Kumar and Ross (Kumar and Ross, 2006) to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations that show that our GroupTree heuristic outperforms other heuristics, that it scales well as the number of leechers increases, and that it closely approaches the optimal theoretical bounds.

Relevance:

10.00%

Publisher:

Abstract:

Most real-time scheduling problems are known to be NP-complete. To enable accurate comparison between the schedules of heuristic algorithms and the optimal schedule, we introduce an omniscient oracle. This oracle provides schedules for periodic task sets with harmonic periods and variable resource requirements. Three different job value functions are described and implemented. Each corresponds to a different system goal. The oracle is used to examine the performance of different on-line schedulers under varying loads, including overload. We have compared the oracle against Rate Monotonic Scheduling, Statistical Rate Monotonic Scheduling, and Slack Stealing Job Admission Control Scheduling. Consistently, the oracle provides an upper bound on performance for the metric under consideration.
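For context on one of the baselines above: Rate Monotonic Scheduling admits a periodic task set whenever its total utilization stays below the Liu and Layland bound n(2^(1/n) − 1), and for harmonic periods, as in the task sets considered here, the bound rises to 1. A minimal sketch of that classical admission test (this is the baseline scheduler's test, not the oracle):

def rms_schedulable(tasks, harmonic=False):
    """Liu & Layland sufficient schedulability test for Rate Monotonic Scheduling.
    tasks: list of (computation_time, period) pairs. With harmonic periods the
    utilization bound is 1.0; otherwise it is n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = 1.0 if harmonic else n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three harmonic tasks using 87.5% of the processor are admitted.
print(rms_schedulable([(1, 4), (2, 8), (6, 16)], harmonic=True))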

Relevance:

10.00%

Publisher:

Abstract:

To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and some heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem, and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (for Delay-Cost-Constrained Routing), is a variant of the k-shortest path algorithm. DCCR uses a new adaptive path weight function, together with an additional constraint imposed on the path cost, to restrict the search space. Thus, DCCR can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by using a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (for Search Space Reduction + DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
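A common textbook simplification of delay-constrained least-cost routing, in the same spirit as DCCR's adaptive path weight, is to run a shortest-path search on the combined edge weight cost + λ·delay and tune λ until the delay budget is met. The sketch below is that Lagrangian-style simplification, not the DCCR or SSR+DCCR algorithm itself.

import heapq

def combined_weight_path(graph, src, dst, lam):
    """Dijkstra on the combined edge weight cost + lam * delay.
    graph: {u: [(v, cost, delay), ...]}. Returns (path, total_cost, total_delay)
    or None if dst is unreachable. Larger lam penalizes delay more strongly."""
    frontier = [(0.0, src, [src], 0.0, 0.0)]
    settled = set()
    while frontier:
        weight, u, path, cost, delay = heapq.heappop(frontier)
        if u == dst:
            return path, cost, delay
        if u in settled:
            continue
        settled.add(u)
        for v, c, d in graph.get(u, []):
            if v not in settled:
                heapq.heappush(frontier,
                               (weight + c + lam * d, v, path + [v], cost + c, delay + d))
    return None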

Relevance:

10.00%

Publisher:

Abstract:

Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new" heuristic of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.

Relevance:

10.00%

Publisher:

Abstract:

This study develops a neuromorphic model of human lightness perception that is inspired by how the mammalian visual system is designed for this function. It is known that biological visual representations can adapt to a billion-fold change in luminance. How such a system determines absolute lightness under varying illumination conditions to generate a consistent interpretation of surface lightness remains an unsolved problem. Such a process, called "anchoring" of lightness, has properties including articulation, insulation, configuration, and area effects. The model quantitatively simulates such psychophysical lightness data, as well as other data such as discounting the illuminant, the double brilliant illusion, and lightness constancy and contrast effects. The model retina embodies gain control at retinal photoreceptors, and spatial contrast adaptation at the negative feedback circuit between mechanisms that model the inner segment of photoreceptors and interacting horizontal cells. The model can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. A new anchoring mechanism, called the Blurred-Highest-Luminance-As-White (BHLAW) rule, helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural color images under variable lighting conditions, and is compared with the popular RETINEX model.
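The spirit of the BHLAW rule can be caricatured in a few lines: blur the luminance map at a chosen spatial scale, take the blurred maximum as the local 'white' anchor, and express lightness relative to it. This is only an illustrative sketch assuming scipy's Gaussian filter, not the full neuromorphic model described above.

import numpy as np
from scipy.ndimage import gaussian_filter

def bhlaw_lightness(luminance, sigma):
    """Toy anchoring: blur the luminance map at spatial scale sigma, treat the
    blurred maximum as 'white', and report lightness relative to that anchor."""
    blurred = gaussian_filter(luminance.astype(float), sigma)
    anchor = blurred.max()  # blurred highest luminance acts as the white anchor
    return np.clip(luminance / anchor, 0.0, None)

# A dim scene and the same scene under 10x illumination yield the same lightness
# pattern, illustrating anchoring rather than absolute luminance coding.
scene = np.array([[0.2, 0.4], [0.6, 0.8]])
print(np.allclose(bhlaw_lightness(scene, 1.0), bhlaw_lightness(10 * scene, 1.0)))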

Relevance:

10.00%

Publisher:

Abstract:

Much work has been done on learning from failure in search to boost solving of combinatorial problems, such as clause learning and clause weighting in Boolean satisfiability (SAT), nogood and explanation-based learning, and constraint weighting in constraint satisfaction problems (CSPs). Many of the top solvers in SAT use clause learning to good effect. A similar approach (nogood learning) has not had as large an impact in CSPs. Constraint weighting is a less fine-grained approach where the information learnt gives an approximation as to which variables may be the sources of greatest contention. In this work we present two methods for learning from search using restarts, in order to identify these critical variables prior to solving. Both methods are based on the conflict-directed heuristic (weighted-degree heuristic) introduced by Boussemart et al. and are aimed at producing a better-informed version of the heuristic by gathering information through restarting and probing of the search space prior to solving, while minimizing the overhead of these restarts. We further examine the impact of different sampling strategies and different measurements of contention, and assess different restarting strategies for the heuristic. Finally, two applications for constraint weighting are considered in detail: dynamic constraint satisfaction problems and unary resource scheduling problems.
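The weighted-degree idea underlying both methods works roughly as follows: every constraint carries a weight that is incremented whenever propagating it causes a failure, and the next variable to branch on is the one minimizing domain size divided by the summed weight of its constraints (dom/wdeg); restart-based probing simply accumulates these weights across short runs before the final solve. The sketch below is a minimal illustration of that scoring rule, not the authors' exact procedure.

def pick_variable(domains, constraints, weights, assigned):
    """dom/wdeg: choose the unassigned variable with the smallest ratio of
    current domain size to the total weight of the constraints involving it.
    domains: {var: values}, constraints: {name: scope}, weights: {name: int}."""
    def wdeg(var):
        return sum(weights[c] for c, scope in constraints.items() if var in scope) or 1
    candidates = [v for v in domains if v not in assigned]
    return min(candidates, key=lambda v: len(domains[v]) / wdeg(v))

def record_failure(constraint, weights):
    """Called whenever propagating `constraint` wipes out a domain; probing
    restarts keep these weights so that later runs start better informed."""
    weights[constraint] += 1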

Relevance:

10.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate into this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power, and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach for multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is to obtain multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under a zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used it to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We have used UCS to search within the AIG network for a specific AIG node order for the application of the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach is also implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
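The role simulated annealing plays here, probabilistically accepting reordering moves that reduce power while respecting a delay budget, can be sketched generically; the cost, delay, and neighbour functions below are placeholders rather than the thesis's AIG-specific operators.

import math
import random

def anneal(initial, power_of, delay_of, neighbour, delay_budget,
           t_start=1.0, t_end=1e-3, cooling=0.95, moves_per_temp=50, seed=0):
    """Minimize switching power subject to a longest-path delay budget.
    Moves that violate the budget are rejected outright; worse-but-feasible
    moves are accepted with the usual Boltzmann probability exp(-delta/T)."""
    rng = random.Random(seed)
    current = best = initial
    temperature = t_start
    while temperature > t_end:
        for _ in range(moves_per_temp):
            candidate = neighbour(current, rng)
            if delay_of(candidate) > delay_budget:
                continue  # infeasible under the delay constraint
            delta = power_of(candidate) - power_of(current)
            if delta <= 0 or rng.random() < math.exp(-delta / temperature):
                current = candidate
                if power_of(current) < power_of(best):
                    best = current
        temperature *= cooling
    return best

The dual, delay-driven pass described above would simply swap the roles of power_of and delay_of and use a power budget instead.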

Relevance:

10.00%

Publisher:

Abstract:

The thesis examines cultural processes underpinning the emergence, institutionalisation and reproduction of class boundaries in Limerick city. The research aims to bring a new understanding to the contemporary context of the city’s urban regeneration programme. Acknowledging and recognising other contemporary studies of division and exclusion, the thesis creates a distinctive approach which focuses on uncovering the cultural roots of inequality, educational disadvantage, stigma and social exclusion and the dynamics of their social reproduction. Using Bateson’s concept of schismogenesis (1953), the thesis looks to the persistent but fragmented culture of community and develops a heuristic ‘symbolic order of the city’. This is defined as “…a cultural structure, the meaning-making aspect of hierarchy, the categorical structures of world understanding, the way Limerick people understand themselves, their local and larger world” (p. 37). This provides a very different departure point for exploring the basis for urban regeneration in Limerick (and everywhere). The central argument is that if we want to understand the present (multiple) crises in Limerick we need to understand the historical, anthropological and recursive processes underpinning ‘generalised patterns of rivalry and conflict’. In addition to exploring the historical roots of status and stigma in Limerick, the thesis explores the mythopoesis of persistent, recurrent narratives and labels that mark the boundaries of the city’s identities. The thesis examines the cultural and social function of ‘slagging’ as a vernacular and highly particularised ironic, ritualised and often ‘cruel’ medium of communication (and, often, of exclusion). This is combined with an etymology of the vocabulary of Limerick slang and its mythological base. By tracing the origins of many normalised patterns of Limerick speech and ‘sayings’, whose roots have long since been forgotten, the thesis demonstrates how they perform a significant contemporary function in maintaining and reinforcing symbolic mechanisms of inclusion/exclusion. The thesis combines historical and archival data with biographical interviews and ethnographic data, married to a deep historical hermeneutic analysis of this political community.

Relevance:

10.00%

Publisher:

Abstract:

Surface modification of silicon with organic monolayers tethered to the surface by different linkers is an important process in realizing future (opto-)electronic devices. Understanding the role played by the nature of the linking group and the chain length in the adsorption structures and electronic properties of these assemblies is vital to advance this technology. This Thesis is a study of such properties and contributes in particular to a microscopic understanding of induced changes in the work function of experimentally studied functionalized silicon surfaces. Using first-principles density functional theory (DFT), as a first step, we provide predictions for chemical trends in the work function of hydrogenated silicon (111) surfaces modified with various terminations. For nonpolar terminating atomic species such as F, Cl, Br, and I, the change in the work function is directly proportional to the amount of charge transferred from the surface, thus relating to the difference in electronegativity of the adsorbate and silicon atoms. The change is a monotonic function of coverage in this case, and the work function increases with increasing electronegativity. Polar species such as −TeH, −SeH, −SH, −OH, −NH2, −CH3, and −BH2 do not follow this trend due to the interaction of their dipole with the induced electric field at the surface. In this case, the magnitude and sign of the surface dipole moment need to be considered in addition to the bond dipole to generally describe the change in work function. Compared to hydrogenated surfaces, there is a slight increase in the work function of H:Si(111)-XH, where X = Te, Se, and S, whereas a reduction is observed for surfaces covered with −OH, −CH3, and −NH2. Next, we study the hydrogen-passivated Si(111) surface modified with alkyl chains of the general formula H:Si–(CH2)n–CH3 and H:Si–X–(CH2)n–CH3, where X = NH, O, S and n = (0, 1, 3, 5, 7, 9, 11), at half coverage. For (X)–Hexyl and (X)–Dodecyl functionalization, we also examined various coverages up to full monolayer grafting in order to validate the half-coverage results and the effect of the linker on coverage. We find that it is necessary to take into account the van der Waals interaction between the alkyl chains. The strongest binding is for the oxygen linker, followed by S, N, and C, irrespective of chain length. The results reveal that the stability ordering is independent of coverage; however, linkers other than carbon can shift the optimum coverage considerably and allow higher packing densities. For all linkers apart from sulfur, structural properties, in particular surface-linker-chain angles, saturate to a single value once n > 3. For sulfur, we identify three regimes, namely, n = 0–3, n = 5–7, and n = 9–11, each with its own characteristic adsorption structures. Where possible, our computational results are shown to be consistent with the available experimental data and show how the fundamental structural properties of modified Si surfaces can be controlled by the choice of linking group and chain length. Later we continue by examining the work function tuning of H:Si(111) over a range of 1.73 eV through adsorption of alkyl monolayers with general formula -[Xhead-group]-(CnH2n)-[Xtail-group], X = O(H), S(H), NH(2). For head-group functionalization, the work function is practically converged at 4 carbons (8 for oxygen).
For tail-group functionalization and with both head- and tail-groups, there is an odd-even effect in the behavior of the work function, with peak-to-peak amplitudes of up to 1.7 eV in the oscillations. This behavior is explained through the orientation of the terminal group's dipole. The shift in the work function is largest for NH2-linked and smallest for SH-linked chains and is rationalized in terms of interface dipoles. Our study reveals that the choice of the head- and/or tail-groups effectively changes the impact of the alkyl chain length on the work function tuning using self-assembled monolayers, and this is an important advance in utilizing hybrid functionalized Si surfaces. Bringing together the understanding gained from studying single-type functionalization of H:Si(111) with different alkyl chains, and bearing in mind how to utilize the head-group, the tail-group or both, as well as monolayer coverage, in the final part of this Thesis we study functionalized H:Si(111) with binary SAMs. Aiming at enhancing work function adjustment together with SAM stability and coverage, we choose a range of terminations and linker-chains denoted as –X–(Alkyl) with X = CH3, O(H), S(H), NH(2) and investigate the stability and work function of various binary components grafted onto the H:Si(111) surface. Using binary functionalization with -[NH(2)/O(H)/S(H)]-[Hexyl/Dodecyl] we show that the work function can be tuned within the interval of 3.65-4.94 eV while, furthermore, enhancing the SAM’s stability. Although direct Si-C grafted SAMs are less favourable compared to their counterparts with O, N or S linkage, regardless of the ratio, binary functionalized alkyl monolayers with X-alkyl (X = NH, O) are always more stable than single-type alkyl functionalization with the same coverage. Our results indicate that it is possible to go beyond the optimum coverage of pure alkyl functionalized SAMs (50%) with the correct choice of linker. This is very important since densely packed monolayers have fewer defects and deliver higher efficiency. Our results indicate that binary anchoring can modify the charge injection and therefore bond stability while preserving the interface electronic structure.
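The link between interface dipoles and work-function shifts that runs through this abstract is usually made quantitative with the Helmholtz relation; the thesis computes the shifts directly from DFT, so the textbook relation below is given only as orientation, not as its methodology:

\Delta\Phi = \frac{e\, N\, \mu_{\perp}}{\varepsilon_{0}\, \varepsilon_{r}}

where N is the areal density of adsorbed groups, \mu_{\perp} is the component of the molecular dipole normal to the surface, and \varepsilon_{r} is an effective dielectric constant of the monolayer. Odd-even effects of the kind reported here arise because the chain parity changes the orientation, and hence \mu_{\perp}, of the terminal group.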