925 results for Search-based algorithms


Relevance:

80.00%

Abstract:

The self-healing property of smart electric power distribution grids consists of finding a reconfiguration proposal for the distribution system in order to partially or fully restore the supply of energy to the grid's customers when a fault compromises that supply. The search for a satisfactory solution is a combinatorial problem whose complexity is tied to the size of the network, so exhaustive search becomes very time-consuming and is often computationally infeasible. To overcome this difficulty, one can rely on techniques for generating minimum spanning trees of the graph that represents the distribution network. However, most studies in this area are centralized implementations, in which the reconfiguration proposal is produced by a central supervisory system. This dissertation proposes a distributed implementation, in which each switch in the network collaborates in building the reconfiguration proposal. The decentralized solution seeks to reduce the network reconfiguration time under single or multiple faults, thereby increasing the intelligence of the grid. To this end, the distributed GHS algorithm is used as the basis for a self-healing solution to be embedded in the processing elements that make up the line switches of the smart distribution grid. The proposed solution is implemented using robots as processing units that communicate over a common network, constituting a distributed processing environment. The case studies tested show that, for smart distribution grids with a single feeder, the proposed solution successfully reconfigured the network regardless of the number of simultaneous faults. In the proposed implementation, the network reconfiguration time does not depend on the number of lines in the network. The implementation exhibited communication cost and time results within the theoretical bounds established by the GHS algorithm.
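To make the spanning-tree idea behind the reconfiguration concrete, here is a minimal centralized sketch using Kruskal's algorithm over a small, purely hypothetical distribution-network graph (the node labels, edge weights, and the `mst` helper are illustrative assumptions); the dissertation's distributed GHS algorithm reaches an equivalent tree, but with each switch acting autonomously.

```python
# Minimal centralized sketch: build a spanning tree of a small distribution
# network with Kruskal's algorithm. Node labels and weights are illustrative.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def mst(nodes, edges):
    """Return a minimum spanning tree as a list of (u, v, w) edges."""
    parent = {n: n for n in nodes}
    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:               # edge does not close a loop
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Feeder "F" plus load buses; weights stand in for line impedance or length.
nodes = ["F", "A", "B", "C", "D"]
edges = [("F", "A", 1.0), ("F", "B", 2.5), ("A", "B", 1.2),
         ("B", "C", 0.8), ("C", "D", 1.1), ("A", "D", 3.0)]
print(mst(nodes, edges))
```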

Relevance:

80.00%

Abstract:

It is essential to monitor deteriorated civil engineering structures carefully in order to detect symptoms of serious disruption. A wireless sensor network can be an effective system for monitoring civil engineering structures: sensors can be deployed quickly, especially in difficult-to-access areas, and the network can be extended without laying additional cable. Since our target is to monitor deterioration of civil engineering structures, such as cracks in tunnel linings, most sensor locations are known and the sensors are not required to move dynamically. We therefore focus on developing a deployment plan for a static network that reduces the value of a cost function such as the initial installation cost or the sum of the communication distances in the network. The key issue of the deployment is the location of the relays that forward sensing data from sensors to a data collection device called a gateway. In this paper, we propose a relay deployment-planning tool that can be used to design a wireless sensor network for monitoring civil engineering structures. For the planning tool, we formalize the model and implement a local-search-based algorithm to find a quasi-optimal solution. Our solution guarantees two routes from each sensor to a gateway, which provides higher reliability of the network. We also show the application of our experimental tool to an actual environment in the London Underground.
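A minimal sketch of the local-search idea, under stated assumptions: the sensor positions, candidate relay sites, and the toy cost function (total link length) below are hypothetical, and the paper's actual model, constraints, and two-route guarantee are not reproduced.

```python
import math, random

# Hypothetical layout: sensor positions, candidate relay sites, one gateway.
sensors    = [(0, 0), (10, 2), (4, 9)]
candidates = [(2, 2), (5, 5), (8, 3), (3, 7), (6, 8)]
gateway    = (12, 12)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(relays):
    """Toy cost: each sensor links to its nearest relay (or the gateway),
    and each relay links to the gateway; minimize total link length."""
    hops = list(relays) + [gateway]
    total = sum(min(dist(s, h) for h in hops) for s in sensors)
    return total + sum(dist(r, gateway) for r in relays)

def local_search(k=2, iters=500, seed=0):
    """Hill climbing over k-relay subsets: try swapping one relay at a time."""
    rng = random.Random(seed)
    current = rng.sample(candidates, k)
    best = cost(current)
    for _ in range(iters):
        neighbour = current.copy()
        neighbour[rng.randrange(k)] = rng.choice(
            [c for c in candidates if c not in current])
        if cost(neighbour) < best:
            current, best = neighbour, cost(neighbour)
    return current, best

print(local_search())
```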

Relevance:

80.00%

Abstract:

We propose a new formally syntax-based method for statistical machine translation. Transductions between parse trees are transformed into a sequence tagging problem, which is then tackled by a search-based structured prediction method. This allows us to automatically acquire translation knowledge from a parallel corpus without the need for complex linguistic parsing. The method achieves results comparable to a phrase-based method (such as Pharaoh) while using only about ten percent of the translation table entries. Experiments show that the structured prediction approach to SMT is promising, owing to its strong ability to combine words.

Relevance:

80.00%

Abstract:

With the continuing deepening of globalization and the development of the Internet, people routinely have to deal with large amounts of non-native-language information in their daily work and lives, and using computers to translate automatically between languages in order to overcome the language barrier has become a pressing need. Because statistical machine translation methods have weak language dependence, short system development cycles, and relatively good translation quality, they have become a hot research direction in the machine translation community. From the perspective of formal syntax, this thesis studies several open problems in current statistical machine translation methods: handling discontinuous phrases, the separation between training and search, and phrase reordering. The main contributions of the thesis are summarized as follows:
1. An improved phrase translation model. Commonly used phrase-based translation models do not handle discontinuous phrases. We propose a statistical translation model based on discontinuous phrases, which extends the basic translation unit from contiguous phrases to gapped, discontinuous phrases and exploits contextual lexical information to improve translation results. Because this method extracts fewer phrases, decoding efficiency is also improved. Experiments show that the improved discontinuous-phrase model achieves translation performance comparable to a hierarchical phrase model while being computationally more efficient.
2. A formally syntax-based model built on SEARN. In current machine learning methods, training and search are relatively independent, and the rich structural information used during training is often hard to preserve during search. We propose a transformation that applies a structured prediction method integrating training and search (Search-based Structured Prediction, SEARN) to the resulting sequence labeling problem, in order to model the mapping between bilingual syntax trees. Experiments show that this method achieves translation performance comparable to a phrase-based model while extracting only one tenth as many phrases.
3. A formally syntax-based model built on flattened bilingual syntax tree structures. Phrase order is one of the key issues in translation, and current methods usually base reordering decisions only on source-side information. We propose a formally syntax-based method built on flattened bilingual syntax tree structures. Its core is the extraction of phrase pairs annotated with direction attributes; these attributes guide the generation of the target sentence, improving its internal structure (phrase order) and thus translation quality. Experiments on the NIST MT08 machine translation evaluation data show that this method improves the BLEU score by 7% over a phrase-based system.
4. A formally syntax-based model based on sequence labeling. During translation, certain regions of a sentence tend to be translated as a whole, whereas current methods allow arbitrary reordering of all words in a sentence, producing many unreasonable reorderings. We propose a formally syntax-based model based on sequence labeling. Bilingual sentences are first represented with flattened bilingual syntax tree structures, tree nodes are then annotated with labels we define, and finally a conditional random field model learns these labels. The labels distinguish regions that can be translated as a whole from parts of the sentence that are hard to translate, and different translation methods can be applied to different regions, so that improving local translation quality improves the quality of the whole sentence. Compared with a hierarchical phrase-based model, the BLEU score improves by 5%.

Relevance:

80.00%

Abstract:

Motivated by the practical requirements of numerical environmental monitoring applications in wireless sensor networks (WSNs), a contour mapping algorithm based on Bezier curves (CMBC) is proposed for such scenarios. CMBC builds on Bezier curve theory, widely used in computer graphics, and selects a subset of nodes to report information to the gateway node, which then draws the contour lines. This approach effectively resolves the conflict between the accuracy required by the monitoring application and the heavy traffic load and network energy consumption caused by a large number of reporting nodes. Simulation results show that, compared with existing work, CMBC can draw high-precision contour lines with fewer reporting nodes, thereby saving node energy and prolonging network lifetime.
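A minimal sketch of the Bezier-curve machinery the abstract relies on: de Casteljau evaluation of a curve from a few control points (the control points and the `de_casteljau` helper below are illustrative assumptions); CMBC's node-selection and reporting protocol is not reproduced.

```python
# De Casteljau evaluation of a Bezier curve: the gateway could trace a smooth
# contour from a small set of control points reported by selected nodes.

def de_casteljau(control_points, t):
    """Evaluate the Bezier curve defined by control_points at t in [0, 1]."""
    pts = list(control_points)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.5), (4.0, 0.5)]  # hypothetical points
curve = [de_casteljau(control, i / 20) for i in range(21)]
print(curve[:3])
```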

Relevance:

80.00%

Abstract:

M. Galea and Q. Shen. Iterative vs Simultaneous Fuzzy Rule Induction. Proceedings of the 14th International Conference on Fuzzy Systems, pages 767-772.

Relevance:

80.00%

Abstract:

M. Galea, Q. Shen and V. Singh. Encouraging Complementary Fuzzy Rules within Iterative Rule Learning. Proceedings of the 2005 UK Workshop on Computational Intelligence, pages 15-22.

Relevance:

80.00%

Abstract:

Speculative Concurrency Control (SCC) [Best92a] is a new concurrency control approach especially suited for real-time database applications. It relies on the use of redundancy to ensure that serializable schedules are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions with strict timing constraints. In [Best92b], SCC-nS, a generic algorithm that characterizes a family of SCC-based algorithms, was described, and its correctness was established by showing that it only admits serializable histories. In this paper, we evaluate the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family, which is notable for its minimal use of redundancy. In particular, we show that SCC-2S (as a representative of SCC-based algorithms) provides significant performance gains over the widely used Optimistic Concurrency Control with Broadcast Commit (OCC-BC), under a variety of operating conditions and workloads.

Relevance:

80.00%

Abstract:

Various concurrency control algorithms differ in the time when conflicts are detected and in the way they are resolved. In that respect, the Pessimistic and Optimistic Concurrency Control (PCC and OCC) alternatives represent two extremes. PCC locking protocols detect conflicts as soon as they occur and resolve them using blocking. OCC protocols detect conflicts at transaction commit time and resolve them using rollbacks (restarts). For real-time databases, blockages and rollbacks are hazards that increase the likelihood of transactions missing their deadlines. We propose a Speculative Concurrency Control (SCC) technique that minimizes the impact of blockages and rollbacks. SCC relies on the use of added system resources to speculate on potential serialization orders and to ensure that if such serialization orders materialize, the hazards of blockages and rollbacks are minimized. We present a number of SCC-based algorithms that differ in the level of speculation they introduce and the amount of system resources (mainly memory) they require. We show the performance gains (in terms of number of satisfied timing constraints) to be expected when a representative SCC algorithm (SCC-2S) is adopted.

Relevance:

80.00%

Abstract:

The need to cluster unknown data to better understand its relationship to known data is prevalent throughout science. Besides a better understanding of the data itself or learning about a new unknown object, cluster analysis can help with processing data, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. The choice of algorithm depends on many factors, including the type of data and the reason for clustering; nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that needs no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov complexity of an object, its minimal description. While the Kolmogorov complexity of an object is not computable, in "Clustering by Compression" Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images. We test these techniques on images of letters of the alphabet.
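A minimal sketch of the compression-based distance the abstract builds on: the normalized compression distance (NCD) approximated with zlib, applied to byte strings (image data would first have to be serialized to bytes, e.g. row-major pixel values). This illustrates Cilibrasi and Vitanyi's general idea, not the paper's image-specific adaptation.

```python
import zlib

def clen(data: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small for similar objects, near 1 otherwise."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"aaaaabbbbbcccccaaaaabbbbb"
b = b"aaaaabbbbbcccccaaaaabbbbc"                   # nearly identical to a
c = b"the quick brown fox jumps over the lazy dog"  # unrelated
print(ncd(a, b), ncd(a, c))                         # the first distance should be smaller
```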

Relevance:

80.00%

Abstract:

In terms of a general time theory that addresses time-elements as typed point-based intervals, a formal characterization of time-series and state-sequences is introduced. Based on this framework, the subsequence matching problem is tackled by transferring it into a bipartite graph matching problem. A hybrid similarity model with high tolerance of inversion, crossover, and noise is then proposed for matching the corresponding bipartite graphs, involving both temporal and non-temporal measurements. Experimental results on reconstructed time-series data from the UCI KDD Archive demonstrate that this approach is more effective than traditional similarity-model-based algorithms, promising robust techniques for larger time-series databases and real-life applications such as Content-based Video Retrieval (CBVR).
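A minimal sketch of the bipartite-matching step, assuming a hypothetical cost matrix of pairwise distances between the states of two sequences; SciPy's Hungarian-algorithm solver stands in for whichever matching procedure and similarity model the paper actually uses.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical state sequences (one feature per state) from two time series.
seq_a = np.array([1.0, 3.0, 5.0, 7.0])
seq_b = np.array([1.2, 4.8, 3.1, 7.4])

# Cost matrix: distance between every state of seq_a and every state of seq_b.
cost = np.abs(seq_a[:, None] - seq_b[None, :])

# Minimum-cost matching on the bipartite graph (Hungarian algorithm).
rows, cols = linear_sum_assignment(cost)
similarity = -cost[rows, cols].sum()   # less negative means more similar
print(list(zip(rows.tolist(), cols.tolist())), similarity)
```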

Relevance:

80.00%

Abstract:

The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two-thermocouple sensors with differing time constants. In particular, a linear least squares (LS) lambda formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that time constants do not change during operation and are, therefore, appropriate for temperature measurement in homogenous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating the implementation of computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.
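To make the idea concrete, here is a minimal least-squares sketch under simplifying assumptions: first-order sensor dynamics, a known time-constant ratio k = tau2/tau1, noise-free synthetic data, and ordinary LS rather than the paper's generalized total LS. Both sensors see the same gas temperature, so T1 + tau1*dT1/dt = T2 + k*tau1*dT2/dt, which is linear in tau1. All names and numbers are illustrative.

```python
import numpy as np

# Two first-order sensors (tau2 = k * tau1) observing the same gas temperature
# Tg(t). With tau_i*dTi/dt + Ti = Tg, equating the two reconstructions of Tg
# gives tau1*(dT1/dt - k*dT2/dt) = T2 - T1, solvable for tau1 by least squares.

dt, k, tau1_true = 0.01, 2.0, 0.05
t = np.arange(0, 2, dt)
Tg = 20 + 5 * np.sin(2 * np.pi * 1.5 * t)          # hypothetical gas temperature

def simulate(tau):
    T = np.empty_like(t)
    T[0] = Tg[0]
    for i in range(1, len(t)):                     # forward Euler integration
        T[i] = T[i - 1] + dt * (Tg[i - 1] - T[i - 1]) / tau
    return T

T1, T2 = simulate(tau1_true), simulate(k * tau1_true)
dT1, dT2 = np.gradient(T1, dt), np.gradient(T2, dt)

A = (dT1 - k * dT2).reshape(-1, 1)                 # regressor
b = T2 - T1                                        # observation
tau1_est = np.linalg.lstsq(A, b, rcond=None)[0][0]
print(tau1_true, tau1_est)
```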

Relevance:

80.00%

Abstract:

Image segmentation plays an important role in the analysis of retinal images, as the extraction of the optic disc provides important cues for accurate diagnosis of various retinopathic diseases. In recent years, gradient vector flow (GVF) based algorithms have been used successfully to segment a variety of medical imagery. However, due to the compromise between internal and external energy forces within the resulting partial differential equations, these methods can lead to less accurate segmentation results in certain cases. In this paper, we propose a new mean shift-based GVF segmentation algorithm that drives the internal/external energies towards the correct direction. The proposed method incorporates a mean shift operation within the standard GVF cost function to arrive at a more accurate segmentation. Experimental results on a large dataset of retinal images demonstrate that the presented method optimally detects the border of the optic disc.

Relevance:

80.00%

Abstract:

We propose a dynamic verification approach for large-scale message passing programs to locate correctness bugs caused by unforeseen nondeterministic interactions. This approach hinges on an efficient protocol to track the causality between nondeterministic message receive operations and potentially matching send operations. We show that causality tracking protocols that rely solely on logical clocks fail to capture all nuances of MPI program behavior, including the variety of ways in which nonblocking calls can complete. Our approach formally defines the matches-before relation underlying the MPI standard and devises lazy-update logical-clock-based algorithms that can correctly discover all potential outcomes of nondeterministic receives in practice. We show that our lazy-update protocol, LLCP, can achieve the same coverage as a vector-clock-based algorithm while maintaining good scalability. LLCP allows us to analyze realistic MPI programs involving a thousand MPI processes, incurring only modest overheads in terms of communication bandwidth, latency, and memory consumption. © 2011 IEEE.
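A minimal sketch of Lamport-style logical-clock causality tracking, which the abstract contrasts with vector clocks; the class, events, and process count below are illustrative assumptions, and the paper's lazy-update rules for MPI nonblocking completion are not reproduced.

```python
# Minimal Lamport-style logical clock: each process keeps a scalar counter,
# increments it on local events and sends, and on receive takes
# max(local, piggybacked) + 1. This orders causally related events but, unlike
# a vector clock, cannot always distinguish concurrency from ordering, which
# is the gap the paper's lazy-update scheme addresses for MPI semantics.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):                      # local event or send
        self.time += 1
        return self.time

    def receive(self, msg_time):         # message carries the sender's clock
        self.time = max(self.time, msg_time) + 1
        return self.time

p0, p1 = LamportClock(), LamportClock()
send_ts = p0.tick()                      # P0 sends a message stamped send_ts
p1.tick()                                # independent local event on P1
recv_ts = p1.receive(send_ts)            # P1 receives: clock jumps past send_ts
print(send_ts, recv_ts)                  # recv_ts > send_ts: send happens-before receive
```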

Relevance:

80.00%

Abstract:

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for use of vision-based algorithms in the context of data extraction from web sites.