991 results for Exhaustive search


Relevance:

60.00%

Publisher:

Abstract:

Trivium is a stream cipher candidate of the eSTREAM project. It has successfully moved into phase three of the selection process under the hardware category. No attacks faster than exhaustive search have been reported on Trivium so far. Bivium-A and Bivium-B are simplified versions of Trivium, built on the same design principles but with two registers. The simplified design is useful for investigating Trivium-type ciphers at reduced complexity and provides insight into effective attacks that could be extended to Trivium. This paper focuses on an algebraic analysis that uses the Boolean satisfiability (SAT) problem in propositional logic. For reduced variants of the cipher, this analysis recovers the internal state from a minimal number of keystream observations.
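A minimal sketch of the exhaustive-search baseline that any such attack must beat, using a hypothetical 8-bit register with a Trivium-style nonlinear (AND) feedback term — not the actual Bivium specification:

```python
from itertools import product

def keystream(state, n):
    """Toy 8-bit nonlinear feedback shift register (hypothetical, not
    Bivium): output is the XOR of two taps; the feedback mixes in an
    AND term, the nonlinearity characteristic of Trivium-type designs."""
    s = list(state)
    out = []
    for _ in range(n):
        out.append(s[0] ^ s[4])
        s = s[1:] + [s[0] ^ s[3] ^ (s[1] & s[2])]
    return out

def recover_state(observed):
    """Exhaustive search over all 2**8 internal states; an algebraic
    (SAT-based) attack must cost less than this to count as an attack."""
    for cand in product([0, 1], repeat=8):
        if keystream(list(cand), len(observed)) == observed:
            return list(cand)
    return None

secret = [1, 0, 1, 1, 0, 0, 1, 0]
observed = keystream(secret, 16)
recovered = recover_state(observed)
```

For the full 288-bit Trivium state this search is utterly infeasible, which is why the SAT-based algebraic analysis targets reduced variants.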

Relevance:

60.00%

Publisher:

Abstract:

Several recently proposed ciphers, for example Rijndael and Serpent, are built from layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds N_r. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt '00, and show their inefficiency. Then we introduce a new method called XSL that uses the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimates it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in N_r, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always costs more than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new design criterion for S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.
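The "overdefined system" phenomenon is easy to reproduce at toy scale: evaluating all monomials of degree at most 2 in the input and output bits of an n-bit S-box on its 2^n input/output pairs gives a matrix whose null space is exactly the set of quadratic equations the S-box satisfies with probability 1. Since a 3-bit S-box yields only 8 rows against 22 monomials, at least 22 − 8 = 14 independent equations always exist. A sketch with a hypothetical 3-bit S-box (GF(2) rank computed by hand, no external libraries):

```python
from itertools import combinations

def quadratic_relations(sbox, n):
    """Dimension of the space of GF(2) equations of degree <= 2 that an
    n-bit S-box satisfies with probability 1 (the 'overdefined system')."""
    def bits(v):
        return [(v >> i) & 1 for i in range(n)]
    rows = []
    for x in range(2 ** n):
        v = bits(x) + bits(sbox[x])            # x0..x{n-1}, y0..y{n-1}
        rows.append([1] + v + [a & b for a, b in combinations(v, 2)])
    m = len(rows[0])                           # 22 monomials for n = 3
    rank = 0
    for col in range(m):                       # Gaussian elimination mod 2
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return m - rank  # every null-space vector is one valid equation

toy_sbox = [3, 6, 5, 1, 0, 7, 2, 4]  # an arbitrary 3-bit permutation
print(quadratic_relations(toy_sbox, 3))  # at least 22 - 8 = 14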

Relevance:

60.00%

Publisher:

Abstract:

Aims: To gain insight into the immunological processes behind cow’s milk allergy (CMA) and the development of oral tolerance, and to investigate the associations of HLA II and filaggrin genotypes with humoral responses to early oral antigens. Methods: The study population was from a cohort of 6209 healthy, full-term infants who in a double-blind randomized trial received supplementary feeding at maternity hospitals (mean duration 4 days): cow’s milk (CM) formula, extensively hydrolyzed whey formula or donor breast milk. Infants who developed CM-associated symptoms that subsided during an elimination diet (n=223) underwent an open oral CM challenge (at mean age 7 months). The challenge was negative in 112, and in 111 it confirmed CMA, which was IgE-mediated in 83. Patients with CMA were followed until recovery, and 94 of them participated in a follow-up study at age 8-9 years. We investigated serum samples at diagnosis (mean age 7 months, n=111), one year later (19 months, n=101) and at follow-up (8.6 years, n=85). At follow-up, 76 children randomly selected from the original cohort and without CM-associated symptoms were also included. We measured CM specific IgE levels with UniCAP (Phadia, Uppsala, Sweden), and β-lactoglobulin, α-casein and ovalbumin specific IgA, IgG1, IgG4 and IgG levels with enzyme-linked immunosorbent assay in sera. We applied a microarray-based immunoassay to measure the binding of IgE, IgG4 and IgA serum antibodies to sequential epitopes derived from five major CM proteins at the three time points in 11 patients with active IgE-mediated CMA at age 8-9 years and in 12 patients who had recovered from IgE-mediated CMA by age 3 years. We used bioinformatic methods to analyze the microarray data. We studied the T cell expression profile in peripheral blood mononuclear cell (PBMC) samples from 57 children aged 5-12 years (median 8.3): 16 with active CMA, 20 who had recovered from CMA by age 3 years, and 21 non-atopic control subjects.
Following in vitro β-lactoglobulin stimulation, we measured the mRNA expression in PBMCs of 12 T-cell markers (T-bet, GATA-3, IFN-γ, CTLA4, IL-10, IL-16, TGF-β, FOXP3, Nfat-C2, TIM3, TIM4, STIM-1) with quantitative real-time polymerase chain reaction, and the protein expression of CD4, CD25, CD127 and FoxP3 with flow cytometry. To optimally distinguish the three study groups, we trained artificial neural networks with an exhaustive search over all marker combinations. For genetic associations with specific humoral responses, we analyzed 14 HLA class II haplotypes, the PTPN22 1858 SNP (R620W allele) and 5 known filaggrin null mutations from blood samples of 87 patients with CMA and 76 control subjects (age 8.0-9.3 years). Results: High IgG and IgG4 levels to β-lactoglobulin and α-casein were associated with the HLA (DR15)-DQB1*0602 haplotype in patients with CMA, but not in control subjects. Conversely, (DR1/10)-DQB1*0501 was associated with lower IgG and IgG4 levels to these CM antigens, and to ovalbumin, most significantly among control subjects. Infants with IgE-mediated CMA had lower β-lactoglobulin and α-casein specific IgG1, IgG4 and IgG levels (p<0.05) at diagnosis than infants with non-IgE-mediated CMA or control subjects. When CMA persisted beyond age 8 years, CM specific IgE levels were higher at all three time points investigated and the IgE epitope binding pattern remained stable (p<0.001) compared with recovery from CMA by age 3 years. Patients with persisting CMA at 8-9 years had lower serum IgA levels to β-lactoglobulin at diagnosis (p=0.01), and lower IgG4 levels to β-lactoglobulin (p=0.04) and α-casein (p=0.05) at follow-up, compared with patients who recovered by age 3 years. In early recovery, the signal of IgG4 epitope binding increased while that of IgE decreased over time, and the binding patterns of IgE and IgG4 overlapped.
In the T cell expression profile in response to β-lactoglobulin, the combination of markers FoxP3, Nfat-C2, IL-16 and GATA-3 distinguished patients with persisting CMA most accurately from patients who had become tolerant and from non-atopic subjects. FoxP3 expression at both RNA and protein level was higher in children with CMA compared with non-atopic children. Conclusions: Genetic factors (the HLA II genotype) are associated with humoral responses to early food allergens. High CM specific IgE levels predict persistence of CMA. Development of tolerance is associated with higher specific IgA and IgG4 levels and lower specific IgE levels, with decreased CM epitope binding by IgE and a concurrent increase in corresponding epitope binding by IgG4. Both Th2 and Treg pathways are activated upon CM antigen stimulation in patients with CMA. In the clinical management of CMA, HLA II or filaggrin genotyping is not applicable, whereas the measurement of CM specific antibodies may assist in estimating the prognosis.
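The marker-selection step above is an exhaustive search over all subsets of candidate markers. A minimal sketch of that idea, with hypothetical expression values and a simple leave-one-out nearest-centroid classifier standing in for the study's artificial neural networks:

```python
from itertools import combinations

def loo_accuracy(data, labels, cols):
    """Leave-one-out nearest-centroid accuracy on the chosen marker
    columns -- a simple stand-in for the neural networks of the study."""
    correct = 0
    for i in range(len(data)):
        groups = {}
        for j in range(len(data)):
            if j != i:
                groups.setdefault(labels[j], []).append(
                    [data[j][c] for c in cols])
        centroids = {lab: [sum(col) / len(col) for col in zip(*pts)]
                     for lab, pts in groups.items()}
        point = [data[i][c] for c in cols]
        pred = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(point, centroids[lab])))
        correct += (pred == labels[i])
    return correct / len(data)

def best_marker_combination(data, labels, markers):
    """Exhaustive search over every non-empty marker subset."""
    best = max((cols for r in range(1, len(markers) + 1)
                for cols in combinations(range(len(markers)), r)),
               key=lambda cols: loo_accuracy(data, labels, cols))
    return [markers[c] for c in best]

# Hypothetical expression values, not the study's measurements.
markers = ["FoxP3", "GATA-3", "IL-16"]
data = [[5.0, 1.0, 1.0], [5.2, 1.2, 0.9],   # active CMA
        [1.0, 4.0, 1.1], [1.1, 4.2, 1.0],   # tolerant
        [1.0, 1.0, 1.0], [0.9, 1.1, 1.2]]   # non-atopic
labels = ["CMA", "CMA", "tol", "tol", "ctrl", "ctrl"]
best = best_marker_combination(data, labels, markers)
```

Here no single marker separates all three groups, so the search settles on a pair, mirroring how the study arrived at a four-marker combination.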

Relevance:

60.00%

Publisher:

Abstract:

Heat exchanger design is a complex task involving the selection of a large number of interdependent design parameters. There are no established general techniques for optimizing the design, though a few earlier attempts provide computer software based on gradient methods, case-study methods, etc. The authors felt it would be useful to determine the nature of the optimal and near-optimal feasible designs before devising an optimization technique. Therefore, in this article they have obtained a large number of feasible designs of shell-and-tube heat exchangers, intended to perform a given heat duty, by an exhaustive search method, and have studied how their capital and operating costs vary. The study reveals several interesting aspects of the dependence of capital and total costs on various design parameters. The authors considered a typical shell-and-tube heat exchanger used in an oil refinery; its heat duty, inlet temperature and other details are given.
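The exhaustive approach can be sketched as a grid enumeration of the design parameters, discarding designs that cannot meet the duty and ranking the rest by cost. The parameter ranges and cost model below are hypothetical stand-ins, not the refinery case of the article:

```python
from itertools import product
from math import pi

def search_designs(required_area_m2):
    """Exhaustive search over a toy shell-and-tube design space:
    every (tube diameter, tube length, tube count) combination."""
    feasible = []
    for d_mm, L_m, n_tubes in product([16, 20, 25],          # tube diameter, mm
                                      [2.4, 3.6, 4.8, 6.0],  # tube length, m
                                      range(50, 501, 50)):   # tube count
        area = pi * (d_mm / 1000.0) * L_m * n_tubes  # outside tube area, m^2
        if area < required_area_m2:
            continue  # cannot perform the heat duty: infeasible
        capital = 12000.0 * area ** 0.6              # economy of scale
        operating = 40.0 * L_m * n_tubes / d_mm      # crude pumping-cost proxy
        feasible.append((capital + operating, d_mm, L_m, n_tubes))
    return sorted(feasible)  # cheapest designs first

designs = search_designs(30.0)
```

Keeping the whole sorted list, rather than just the minimum, is what lets one study how cost varies across the near-optimal designs.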

Relevance:

60.00%

Publisher:

Abstract:

Today's feature-rich multimedia products require embedded-system solutions with complex Systems-on-Chip (SoCs) to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives such as area, power and performance. Hence the embedded-system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce short design cycles. In this paper we address the multi-level multi-objective memory architecture exploration problem through a combination of exhaustive-search-based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache based architectures at the inner level. The first step, data partitioning, partitions data between SPRAM and Cache; the second step is the cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach and also optimizes the off-chip memory address space. We evaluated our approach on three embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
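The "multiple optimal design points" of a multi-objective exploration are its Pareto-optimal configurations. A minimal sketch of filtering them out of an exhaustively enumerated design space (the area/energy/cycle numbers are hypothetical, not the paper's data):

```python
def pareto_front(points):
    """Return the non-dominated points; each point is a tuple of
    objectives (area, energy, cycles) where lower is always better.
    A point is dropped if some other point is at least as good in
    every objective. Assumes no duplicate points."""
    return [p for p in points
            if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                       for q in points)]

# Hypothetical (area mm^2, energy mJ, cycles) of explored configurations.
configs = [(2.0, 5.0, 100), (1.5, 6.0, 110), (2.5, 4.5, 120), (3.0, 7.0, 130)]
front = pareto_front(configs)
```

Each surviving point trades one objective against another, so each can address a different market segment.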

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we consider the problem of association of wireless stations (STAs) with an access network served by a wireless local area network (WLAN) and a 3G cellular network. There is a set of WLAN Access Points (APs), a set of 3G Base Stations (BSs) and a number of STAs, each of which needs to be associated with one of the APs or one of the BSs. We concentrate on downlink bulk elastic transfers. Each association provides each STA with a certain transfer rate. We evaluate an association on the basis of the sum log utility of the transfer rates and seek the utility-maximizing association. We also obtain the optimal time scheduling of service from a 3G BS to the associated STAs. We propose a fast iterative heuristic algorithm to compute an association. Numerical results show that our algorithm converges in a few steps, yielding an association that is within 1% (in objective value) of the optimal (obtained through exhaustive search); in most cases the algorithm yields an optimal solution.
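The exhaustive-search benchmark can be sketched for a tiny instance: enumerate every assignment of STAs to attachment points and score it by sum log rate. The rate numbers are hypothetical, and equal sharing of a point's rate is a simplification of the paper's model, which also covers 3G BS time scheduling:

```python
from itertools import product
from math import log

def optimal_association(rates):
    """Exhaustive search over all assignments of STAs to attachment
    points (APs or BSs), maximizing the sum of log transfer rates.
    `rates[s][a]` is the rate STA s would get alone on point a;
    stations sharing a point split its rate equally (toy assumption)."""
    n_sta, n_pts = len(rates), len(rates[0])
    best_util, best = float("-inf"), None
    for assoc in product(range(n_pts), repeat=n_sta):
        load = [assoc.count(a) for a in range(n_pts)]
        util = sum(log(rates[s][assoc[s]] / load[assoc[s]])
                   for s in range(n_sta))
        if util > best_util:
            best_util, best = util, assoc
    return best, best_util

# Hypothetical Mbit/s rates for 3 STAs over 2 attachment points.
assoc, util = optimal_association([[10.0, 4.0], [8.0, 6.0], [2.0, 9.0]])
```

The n_pts**n_sta enumeration explodes quickly, which is exactly why the paper's iterative heuristic matters.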

Relevance:

60.00%

Publisher:

Abstract:

In this paper we consider polynomial representability of functions defined over Z_{p^n}, where p is a prime and n is a positive integer. Our aim is to provide an algorithmic characterization that (i) answers the decision problem: determine whether a given function over Z_{p^n} is polynomially representable or not, and (ii) finds the polynomial if it is polynomially representable. The previous characterizations, given by Kempner (Trans. Am. Math. Soc. 22(2):240-266, 1921) and Carlitz (Acta Arith. 9(1):67-78, 1964), are existential in nature and only lead to an exhaustive search method, i.e. an algorithm with complexity exponential in the size of the input. Our characterization leads to an algorithm whose running time is linear in the size of the input. We also extend our result to the multivariate case.
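The exhaustive-search baseline implied by the existential characterizations can be sketched directly: try every coefficient vector modulo m up to a degree bound. For m = 4 a bound of 3 suffices, since x(x-1)(x-2)(x-3) is a product of four consecutive integers and hence vanishes mod 4, so higher powers reduce. This is only the exponential baseline, not the paper's linear-time characterization:

```python
from itertools import product

def find_polynomial(f, m, max_deg):
    """Exhaustive search: try every coefficient vector mod m of degree
    <= max_deg and test whether it induces the function f (given as a
    value table f[0..m-1]). Exponential in the input size."""
    for coeffs in product(range(m), repeat=max_deg + 1):
        if all(sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m == f[x]
               for x in range(m)):
            return coeffs  # coeffs[i] is the coefficient of x**i
    return None

# x -> x**2 over Z_4 is representable; the indicator function of {2}
# is not, since f(2) - f(0) = 2*c_1 mod 4 must be even for any polynomial.
print(find_polynomial([0, 1, 0, 1], 4, 3))  # → (0, 0, 1, 0)
print(find_polynomial([0, 0, 1, 0], 4, 3))  # → None
```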

Relevance:

60.00%

Publisher:

Abstract:

Displacement estimation is a key step in the evaluation of tissue elasticity by quasistatic strain imaging. An efficient approach may incorporate a tracking strategy whereby each estimate is initially obtained from its neighbours' displacements and then refined through a localized search. This increases the accuracy and reduces the computational expense compared with exhaustive search. However, simple tracking strategies fail when the target displacement map exhibits complex structure. For example, there may be discontinuities and regions of indeterminate displacement caused by decorrelation between the pre- and post-deformation radio frequency (RF) echo signals. This paper introduces a novel displacement tracking algorithm, with a search strategy guided by a data quality indicator. Comparisons with existing methods show that the proposed algorithm is more robust when the displacement distribution is challenging.
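The tracking idea — seed each window's search with a neighbour's displacement and refine locally instead of searching all offsets — can be sketched in a 1-D toy version (without the paper's data-quality indicator, which is what makes the real algorithm robust to decorrelation):

```python
def best_offset(pre, post, start, win, candidates):
    """Offset minimizing the sum of squared differences for one window."""
    return min(candidates,
               key=lambda off: sum((pre[start + i] - post[start + off + i]) ** 2
                                   for i in range(win)))

def track(pre, post, win, max_disp, local):
    """Guided tracking: each window's search is seeded by its left
    neighbour's estimate and refined over +/- `local` samples, instead
    of an exhaustive search over all 2*max_disp+1 offsets."""
    offsets, seed = [], 0
    for start in range(0, len(pre) - win, win):
        lo, hi = max(-max_disp, seed - local), min(max_disp, seed + local)
        cands = [o for o in range(lo, hi + 1)
                 if 0 <= start + o and start + o + win <= len(post)]
        seed = best_offset(pre, post, start, win, cands)
        offsets.append(seed)
    return offsets

# A synthetic pre-deformation signal and a copy displaced by 3 samples.
pre = [float((i * 37) % 11) for i in range(60)]
post = [0.0, 0.0, 0.0] + pre[:-3]
print(track(pre, post, win=8, max_disp=5, local=4))  # → [3, 3, 3, 3, 3, 3, 3]
```

A simple seed-from-the-left strategy like this fails across a displacement discontinuity, which is the failure mode the quality-indicator-guided search is designed to avoid.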

Relevance:

60.00%

Publisher:

Abstract:

The self-healing property of smart electricity distribution grids consists of finding a reconfiguration of the distribution system that partially or fully restores the supply of energy to the grid's customers after a failure that compromises the supply. The search for a satisfactory solution is a combinatorial problem whose complexity is tied to the size of the network: an exhaustive search method becomes very time-consuming and is often computationally infeasible. To overcome this difficulty, one can rely on techniques for generating minimum spanning trees of the graph representing the distribution network. However, most studies in this area are centralized implementations, in which the reconfiguration proposal is produced by a central supervision system. This dissertation proposes a distributed implementation in which every switch in the network collaborates in building the reconfiguration proposal. The decentralized solution seeks to reduce the network reconfiguration time under single or multiple failures, thereby increasing the intelligence of the grid. To this end, the distributed GHS algorithm is used as the basis of a self-healing solution to be embedded in the processing elements that make up the line switches of the smart distribution grid. The proposed solution is implemented using robots as processing units that communicate over a common network, thus constituting a distributed processing environment. The case studies tested show that, for smart distribution grids with a single feeder, the proposed solution successfully reconfigures the network regardless of the number of simultaneous failures. In the proposed implementation, the reconfiguration time does not depend on the number of lines in the network. The implementation showed communication-cost and time results within the theoretical bounds established by the GHS algorithm.
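The core of each reconfiguration step is a minimum spanning tree over the graph of the distribution network. A centralized sketch using Kruskal's algorithm — the sequential counterpart of the distributed, fragment-merging GHS algorithm — with hypothetical line weights:

```python
def kruskal(nodes, edges):
    """Minimum spanning tree via Kruskal with union-find; edges are
    (weight, u, v) tuples."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # joins two fragments, as GHS does in parallel
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def reconfigure(nodes, edges, failed_lines):
    """Self-healing sketch: discard the failed lines and rebuild the
    spanning tree over the remaining switchable lines."""
    alive = [(w, u, v) for w, u, v in edges
             if (u, v) not in failed_lines and (v, u) not in failed_lines]
    return kruskal(nodes, alive)

nodes = ["A", "B", "C", "D", "E"]
edges = [(1, "A", "B"), (2, "B", "C"), (3, "C", "D"),
         (4, "D", "E"), (5, "A", "C"), (6, "B", "E")]
new_tree = reconfigure(nodes, edges, failed_lines={("C", "D")})
```

In the dissertation this computation is not centralized: each switch runs its share of GHS, which is why the reconfiguration time becomes independent of the number of lines.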

Relevance:

60.00%

Publisher:

Abstract:

This dissertation presents the development of a decision-making system that proposes an intelligent methodology for producing the best possible allocation of a group of users to a group of resources in a geographic space. The methodology is based on fuzzy logic, and throughout the dissertation it is compared with other techniques, such as the Naive Algorithm and Exhaustive Search. The data set adopted as the scope of this work was the student enrollment records of the municipality of Nova Iguaçu.
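The Exhaustive Search baseline for such an allocation can be sketched as trying every assignment of users (students) to resources (schools) and keeping the feasible one with the lowest total distance. The distance matrix and capacities below are hypothetical, not the Nova Iguaçu enrollment data:

```python
from itertools import product

def exhaustive_allocation(dist, capacity):
    """Exhaustive search: enumerate every student-to-school assignment,
    discard those exceeding a school's capacity, and return the one
    with minimal total distance."""
    n_students, n_schools = len(dist), len(dist[0])
    best_cost, best = float("inf"), None
    for assign in product(range(n_schools), repeat=n_students):
        if any(assign.count(s) > capacity[s] for s in range(n_schools)):
            continue  # a school over capacity: infeasible
        cost = sum(dist[i][assign[i]] for i in range(n_students))
        if cost < best_cost:
            best_cost, best = cost, assign
    return best, best_cost

# dist[i][j]: distance (km) from student i to school j.
dist = [[2.0, 5.0], [1.0, 3.0], [4.0, 1.0]]
assignment, km = exhaustive_allocation(dist, capacity=[2, 2])
print(assignment, km)  # → (0, 0, 1) 4.0
```

The n_schools**n_students cost of this baseline is what motivates the fuzzy-logic heuristic at realistic municipal scale.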

Relevance:

60.00%

Publisher:

Abstract:

Engineering changes (ECs) are raised throughout the lifecycle of engineering products. A single change to one component produces knock-on effects on others, necessitating additional changes. This change propagation significantly affects development time and cost and determines the product's success. Predicting and managing such ECs is thus essential to companies. Some prediction tools model change propagation by algorithms, a subgroup of which are numerical. Current numerical change propagation algorithms either do not account for the exclusion of cyclic propagation paths or are based on exhaustive search methods. This paper presents a new matrix-calculation-based algorithm which can be applied directly to a numerical product model to analyze change propagation and support change prediction. The algorithm applies matrix multiplications to mutations of a given design structure matrix, accounting for the exclusion of self-dependences and cyclic propagation paths, and delivers the same results as the exhaustive-search-based Trail Counting algorithm. Despite its factorial time complexity, the algorithm proves advantageous because of its straightforward matrix-based calculations, which avoid exhaustive searching. Thereby, the algorithm can be implemented in established numerical programs such as Microsoft Excel, which promises wider application of the tools within and across companies along with better familiarity, usability, practicality, security, and robustness.
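The exhaustive-search view that the matrix algorithm reproduces is Trail Counting: enumerate every simple (cycle-free, self-dependence-free) propagation path between two components. A minimal recursive sketch over a hypothetical 4-component design structure matrix, assuming the convention that dsm[i][j] == 1 means a change to component i propagates to component j:

```python
def count_trails(dsm, src, dst, visited=None):
    """Number of simple change-propagation paths src -> dst, excluding
    cyclic paths by never revisiting a component."""
    visited = visited if visited is not None else {src}
    if src == dst:
        return 1
    return sum(count_trails(dsm, nxt, dst, visited | {nxt})
               for nxt, linked in enumerate(dsm[src])
               if linked and nxt not in visited)

dsm = [[0, 1, 1, 0],   # component 0 propagates to 1 and 2
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(count_trails(dsm, 0, 3))  # paths 0-1-3, 0-1-2-3, 0-2-3 → 3
```

The paper's contribution is obtaining these counts from matrix multiplications on mutations of the DSM rather than from this explicit enumeration.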

Relevance:

60.00%

Publisher:

Abstract:

Finding countermodels is an effective way of disproving false conjectures. In first-order predicate logic, model finding is an undecidable problem, but if a finite model exists, it can be found by exhaustive search. The finite model generation problem in first-order logic can also be translated to the satisfiability problem in propositional logic, but a direct translation may not be very efficient. This paper discusses how to take symmetries into account so as to make the resulting problem easier. A static method for adding constraints is presented, which can be thought of as an approximation of the least number heuristic (LNH). Also described is a dynamic method, which asks a model searcher such as SEM to generate a set of partial models and then gives each partial model to a propositional prover. The two methods are analyzed and compared with each other.
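A minimal sketch of countermodel finding by exhaustive search: disprove the false conjecture "every associative binary operation is commutative" by enumerating all operation tables over a small domain. No symmetry breaking is applied here; LNH-style constraints prune exactly this kind of search by ruling out isomorphic tables:

```python
from itertools import product

def find_countermodel(size):
    """Exhaustively enumerate all size**(size*size) operation tables
    over {0..size-1} and return the first associative but
    non-commutative one, as a row-major table."""
    dom = range(size)
    for table in product(dom, repeat=size * size):
        op = lambda a, b: table[a * size + b]
        associative = all(op(op(a, b), c) == op(a, op(b, c))
                          for a in dom for b in dom for c in dom)
        if associative and any(op(a, b) != op(b, a)
                               for a in dom for b in dom):
            return [[op(a, b) for b in dom] for a in dom]
    return None

print(find_countermodel(2))  # → [[0, 0], [1, 1]], i.e. x*y = x
```

Even at size 3 the raw search space is 3**9 tables, and it grows as n**(n*n); symmetry constraints and SAT translations exist to tame exactly this blow-up.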

Relevance:

60.00%

Publisher:

Abstract:

We introduce a method for recovering the spatial and temporal alignment between two or more views of objects moving over a ground plane. Existing approaches either assume that the streams are globally synchronized, so that only the spatial alignment needs to be solved, or that the temporal misalignment is small enough that exhaustive search can be performed. In contrast, our approach can recover both the spatial and the temporal alignment. We compute for each trajectory a number of interesting segments, and we use their description to form putative matches between trajectories. Each pair of corresponding interesting segments induces a temporal alignment and defines an interval of common support across two views of an object that is used to recover the spatial alignment. Interesting segments and their descriptors are defined using algebraic projective invariants measured along the trajectories. Similarity between interesting segments is computed taking into account the statistics of such invariants. Candidate alignment parameters are verified by checking the consistency, in terms of the symmetric transfer error, of all the putative pairs of corresponding interesting segments. Experiments are conducted with two different sets of data, one with two views of an outdoor scene featuring moving people and cars, and one with four views of a laboratory sequence featuring moving radio-controlled cars.
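The exhaustive-search baseline the paper avoids needing can be sketched for the purely temporal case: scan every integer frame shift and keep the one minimizing the mean point distance on the overlap. This toy assumes the spatial alignment is already the identity, which is precisely the assumption the paper's invariant-based matching removes:

```python
def temporal_offset(traj_a, traj_b, max_shift):
    """Exhaustive search over integer temporal shifts: return the
    offset minimizing the mean squared point distance on the overlap
    of two 2-D trajectories (lists of (x, y) per frame)."""
    def cost(k):
        pairs = [(traj_a[t], traj_b[t + k]) for t in range(len(traj_a))
                 if 0 <= t + k < len(traj_b)]
        return sum((ax - bx) ** 2 + (ay - by) ** 2
                   for (ax, ay), (bx, by) in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=cost)

# A synthetic trajectory and a copy delayed by 3 frames.
traj_a = [(t, 2.0 * t) for t in range(10)]
traj_b = [(t - 3, 2.0 * (t - 3)) for t in range(13)]
print(temporal_offset(traj_a, traj_b, max_shift=5))  # → 3
```

This scan only works when max_shift is small; for large unknown misalignments the paper instead lets each pair of matched interesting segments induce the temporal alignment directly.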

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a new method for complex power flow tracing that can be used for allocating transmission losses to loads or generators. Two algorithms, for upstream tracing (UST) and downstream tracing (DST) of the complex power, are introduced. The UST algorithm traces the complex power extracted by loads back to source nodes and assigns a fraction of the complex power flow through each line to each load. The DST algorithm traces the output of the generators down to the sink nodes, determining the contributions of each generator to the complex power flow and losses through each line. While doing so, active- and reactive-power flows as well as complex losses are considered simultaneously, not separately as in most available methods. Transmission losses are taken into consideration during power flow tracing. Unbundling of line losses is carried out using an equation which has a physical basis and considers the coupling between active- and reactive-power flows as well as the cross effects of active and reactive powers on active and reactive losses. The tracing algorithms introduced can be considered direct to a good extent, as there is no need for exhaustive search to determine the flow paths: these are determined in a systematic way during the course of tracing. Results of applying the proposed method are also presented.
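The systematic, search-free flavor of downstream tracing can be illustrated with the proportional-sharing principle on a lossless toy network: each node's outflows inherit its supply mix in proportion to the inflows. This sketch handles active power only; the paper's method additionally covers complex power and line losses:

```python
def supply_mix(order, gen, lines):
    """Downstream tracing sketch via proportional sharing on a lossless
    network. `lines[(u, v)]` is the MW flow on line u -> v, `gen[n]`
    the injection at node n, and `order` a topological order of the
    flow graph. Returns, per node, the fraction of its through-flow
    supplied by each generator."""
    mix = {}
    for n in order:
        inflow = gen.get(n, 0.0) + sum(f for (u, v), f in lines.items()
                                       if v == n)
        shares = {n: gen[n] / inflow} if gen.get(n, 0.0) else {}
        for (u, v), f in lines.items():
            if v == n:
                for src, frac in mix[u].items():
                    shares[src] = shares.get(src, 0.0) + frac * f / inflow
        mix[n] = shares
    return mix

# Two generators feeding a load bus through node A (hypothetical MW flows).
mix = supply_mix(order=["G1", "G2", "A", "L"],
                 gen={"G1": 60.0, "G2": 40.0},
                 lines={("G1", "A"): 60.0, ("G2", "A"): 40.0,
                        ("A", "L"): 100.0})
print(mix["L"])  # → {'G1': 0.6, 'G2': 0.4}
```

Because contributions propagate along the topological order, the flow paths fall out of the sweep itself, with no exhaustive path enumeration.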

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a new method for transmission loss allocation. The method is based on tracing the complex power flow through the network and determining the share of each load in the flow and losses through each line. Transmission losses are taken into consideration during power flow tracing. Unbundling of line losses is carried out using an equation which has a physical basis and considers the coupling between active- and reactive-power flows as well as the cross effects of active and reactive power on active and reactive losses. The tracing algorithm can be considered direct to a good extent, as there is no need for exhaustive search to determine the flow paths: these are determined in a systematic way during the course of tracing. Results of applying the proposed method are also presented.