941 results for Algebraic lattices


Relevance:

10.00%

Publisher:

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of a concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n · C(n−1, ≤d−1) / C(n, ≤d) < d, where C(m, ≤k) denotes the sum of the binomial coefficients C(m, 0) + ... + C(m, k). This bound positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d: they are d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling), namely that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
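
To make the role of graph density concrete, here is a small illustrative Python sketch (not code from the paper): it builds the one-inclusion graph of a finite concept class restricted to a sample, where vertices are the distinct labelings the class induces on the sample and edges join labelings differing in exactly one point, and then reports the density |E|/|V| that the bound above controls. The toy class and function names are our own.

from itertools import combinations

def one_inclusion_density(concepts, sample):
    """concepts: iterable of sets (points labeled 1); sample: list of points.
    Returns (|V|, |E|, density) of the one-inclusion graph on the sample."""
    # Vertices: distinct labelings of the sample induced by the class.
    vertices = {tuple(int(x in c) for x in sample) for c in concepts}
    # Edges: pairs of labelings that differ in exactly one coordinate.
    edges = sum(
        1
        for u, v in combinations(vertices, 2)
        if sum(a != b for a, b in zip(u, v)) == 1
    )
    return len(vertices), edges, edges / len(vertices)

if __name__ == "__main__":
    # Toy class: all subsets of {0,1,2,3} of size at most 2 (VC-dimension 2).
    points = [0, 1, 2, 3]
    concepts = [set(s) for k in range(3) for s in combinations(points, k)]
    print(one_inclusion_density(concepts, points))  # density is below 2 here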

Relevance:

10.00%

Publisher:

Abstract:

The problem of decision making in an uncertain environment arises in many diverse contexts: deciding whether to keep a hard drive spinning in a netbook; choosing which advertisement to post to a Web site visitor; choosing how many newspapers to order so as to maximize profits; or choosing a route to recommend to a driver given limited and possibly out-of-date information about traffic conditions. All are sequential decision problems, since earlier decisions affect subsequent performance; all require adaptive approaches, since they involve significant uncertainty. The key issue in effectively solving problems like these is known as the exploration/exploitation trade-off: if I am at a crossroads, when should I go in the most advantageous direction among those that I have already explored, and when should I strike out in a new direction, in the hope that I will discover something better?
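
As a concrete illustration of the exploration/exploitation trade-off (not drawn from the text itself), the following Python sketch implements the standard UCB1 rule for a multi-armed bandit: each arm's empirical mean reward is inflated by an exploration bonus that shrinks as the arm is pulled more often, so the agent alternates between exploiting the best-looking arm and exploring under-sampled ones.

import math
import random

def ucb1(pull, n_arms, horizon):
    """pull(arm) -> reward in [0, 1]; returns the empirical mean per arm."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:                      # play each arm once to start
            arm = t - 1
        else:                                # mean reward + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return [s / c for s, c in zip(sums, counts)]

if __name__ == "__main__":
    probs = [0.3, 0.5, 0.7]                  # hypothetical Bernoulli arms
    means = ucb1(lambda a: float(random.random() < probs[a]), 3, 5000)
    print(means)                             # arm 2 ends up pulled the most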

Relevance:

10.00%

Publisher:

Abstract:

We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of a concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this report is a density bound of n · C(n−1, ≤d−1) / C(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling) that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we investigate the heuristic construction of bijective s-boxes that satisfy a wide range of cryptographic criteria, including high algebraic complexity, high nonlinearity and low autocorrelation, and that have none of the known weaknesses, including linear structures, fixed points or linear redundancy. We demonstrate that power mappings can be evolved (by iterated mutation operators alone) to generate bijective s-boxes with the best known trade-offs among the considered criteria. The s-boxes found are suitable for use directly in modern encryption algorithms.
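
The following Python sketch is a minimal, assumption-laden illustration of the idea of evolving an s-box by iterated mutation alone; it is not the authors' procedure. It starts from a random bijection, applies swap mutations (which preserve bijectivity) and keeps a mutation only if the nonlinearity, computed from the Walsh spectrum, does not decrease. Real designs work on 8-bit s-boxes and score several criteria at once; the sketch uses a 5-bit s-box and nonlinearity alone to stay short.

import random

N = 5                       # 5-bit toy example; real s-boxes are typically 8-bit
SIZE = 1 << N

def fwht(a):
    """In-place fast Walsh-Hadamard transform of a list of length 2^N."""
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2

def nonlinearity(sbox):
    """NL(S) = 2^(N-1) - max|W|/2 over all nonzero output masks b."""
    worst = 0
    for b in range(1, SIZE):
        f = [1 - 2 * (bin(b & s).count("1") % 2) for s in sbox]  # (-1)^(b.S(x))
        fwht(f)
        worst = max(worst, max(abs(v) for v in f))
    return (SIZE >> 1) - worst // 2

random.seed(0)
sbox = list(range(SIZE))
random.shuffle(sbox)                      # random bijection as the starting point
nl = nonlinearity(sbox)
for _ in range(1000):                     # iterated mutation: swap two outputs
    i, j = random.sample(range(SIZE), 2)
    sbox[i], sbox[j] = sbox[j], sbox[i]   # a swap keeps the s-box bijective
    new_nl = nonlinearity(sbox)
    if new_nl >= nl:
        nl = new_nl                       # accept non-worsening mutations
    else:
        sbox[i], sbox[j] = sbox[j], sbox[i]   # otherwise revert the swap
print("final nonlinearity:", nl)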

Relevance:

10.00%

Publisher:

Abstract:

Inverse problems based on using experimental data to estimate unknown parameters of a system often arise in biological and chaotic systems. In this paper, we consider parameter estimation in systems biology involving linear and non-linear complex dynamical models, including the Michaelis–Menten enzyme kinetic system, a dynamical model of competence induction in Bacillus subtilis bacteria and a model of feedback bypass in B. subtilis bacteria. We propose some novel techniques for these inverse problems. Firstly, we establish an approximation of a non-linear differential algebraic equation that corresponds to the given biological systems. Secondly, we use the Picard contraction mapping, collage methods and numerical integration techniques to convert the parameter estimation into a minimization problem over the parameters. We propose two optimization techniques: a grid approximation method and a modified hybrid Nelder–Mead simplex search and particle swarm optimization (MH-NMSS-PSO) for non-linear parameter estimation. The two techniques are used for parameter estimation in a model of competence induction in B. subtilis bacteria with noisy data. The MH-NMSS-PSO scheme is applied to a dynamical model of competence induction in B. subtilis bacteria based on experimental data and to the model of feedback bypass. Numerical results demonstrate the effectiveness of our approach.
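
As a toy illustration of turning parameter estimation into a minimization problem (using a plain least-squares grid search rather than the collage or MH-NMSS-PSO machinery described above), the following Python sketch fits the Michaelis–Menten parameters Vmax and Km to synthetic noisy data; all numerical values are made up for the example.

import numpy as np
from scipy.integrate import solve_ivp

def substrate(vmax, km, t_eval, s0=10.0):
    rhs = lambda t, s: -vmax * s / (km + s)          # dS/dt of Michaelis-Menten
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [s0], t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 25)
true = (2.0, 1.5)                                     # hypothetical Vmax, Km
data = substrate(*true, t) + rng.normal(0.0, 0.05, t.size)   # noisy "measurements"

grid = np.linspace(0.5, 4.0, 40)                      # coarse grid approximation
best = min(
    ((vmax, km) for vmax in grid for km in grid),
    key=lambda p: np.sum((substrate(*p, t) - data) ** 2),
)
print("estimated (Vmax, Km):", best)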

Relevance:

10.00%

Publisher:

Abstract:

Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even for systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system; hence, they fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the controller design module; an algebraic controller design algorithm, pole placement, is used to design the relevant controller, which provides the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experimental outcomes are analysed. According to our experimental outcomes, the closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner cache-dependent thread instruction count to the co-runner-independent instruction count with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
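
As an illustration of the kind of recursive estimation involved (a standard recursive least squares update with a forgetting factor, not the thesis's exact QR-RLS formulation or its M-Sim/MATLAB implementation), the following Python sketch fits a simple AR(2) model to a synthetic cache-miss time series so that the next sample of the resource pattern can be predicted.

import numpy as np

def rls_ar2(series, lam=0.98):
    theta = np.zeros(2)                 # AR coefficients being estimated
    P = np.eye(2) * 1e3                 # large initial covariance
    preds = []
    for t in range(2, len(series)):
        phi = np.array([series[t - 1], series[t - 2]])
        preds.append(phi @ theta)       # one-step-ahead prediction
        err = series[t] - phi @ theta
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * err
        P = (P - np.outer(k, phi) @ P) / lam   # forgetting-factor covariance update
    return theta, np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic "cache-miss" signal generated by a stable AR(2) process.
    x = np.zeros(500)
    for t in range(2, 500):
        x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.normal(0, 1.0)
    theta, preds = rls_ar2(x)
    print("estimated AR coefficients:", theta)   # close to (1.2, -0.4)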

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
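
A minimal Python sketch of the neighbourhood structure described above (not the authors' implementation): two sites of a regular 3-D lattice are treated as neighbours only when they lie at the same depth, so each depth layer gets its own adjacency matrix and, if desired, its own CAR variance component. The proper-CAR precision form Q = tau (D − rho W) and all parameter values are used purely for illustration.

import numpy as np

def same_depth_adjacency(nx, ny, depths):
    """Return one symmetric horizontal adjacency matrix per depth layer."""
    sites = [(i, j) for i in range(nx) for j in range(ny)]
    index = {s: k for k, s in enumerate(sites)}
    W = np.zeros((len(sites), len(sites)))
    for (i, j), k in index.items():
        for di, dj in ((1, 0), (0, 1)):              # horizontal neighbours only
            if (i + di, j + dj) in index:
                m = index[(i + di, j + dj)]
                W[k, m] = W[m, k] = 1.0
    return {d: W.copy() for d in range(depths)}      # same layout, one block per depth

def car_precision(W, rho, tau):
    """Proper CAR precision for one depth layer: Q = tau * (D - rho * W)."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

if __name__ == "__main__":
    layers = same_depth_adjacency(nx=4, ny=3, depths=5)
    Q0 = car_precision(layers[0], rho=0.9, tau=2.0)   # illustrative values
    print(Q0.shape, np.allclose(Q0, Q0.T))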

Relevance:

10.00%

Publisher:

Abstract:

Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
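
To make the forward recursion concrete, the following Python sketch computes the exact normalising constant of an autologistic model on a small regular lattice by recursing row by row over row configurations. This exact recursion is the building block that the reduced dependence approximation applies to sublattices; the sketch is not the authors' C++ code and does not handle irregular boundaries or missing regions.

from itertools import product
from math import exp, log

def log_normalising_constant(n_rows, n_cols, alpha, beta):
    rows = list(product((0, 1), repeat=n_cols))     # all configurations of one row

    def within(r):                                  # field term + horizontal pairs
        return alpha * sum(r) + beta * sum(r[j] * r[j + 1] for j in range(n_cols - 1))

    def between(r, s):                              # vertical pairs between adjacent rows
        return beta * sum(a * b for a, b in zip(r, s))

    # Forward recursion: V[r] sums over all configurations of the rows processed so far,
    # with r as the configuration of the most recent row.
    V = {r: exp(within(r)) for r in rows}
    for _ in range(n_rows - 1):
        V = {s: exp(within(s)) * sum(V[r] * exp(between(r, s)) for r in rows)
             for s in rows}
    return log(sum(V.values()))

if __name__ == "__main__":
    print(log_normalising_constant(6, 5, alpha=-0.2, beta=0.4))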

Relevance:

10.00%

Publisher:

Abstract:

The assembly of retroviruses such as HIV-1 is driven by oligomerization of their major structural protein, Gag. Gag is a multidomain polyprotein including three conserved folded domains: MA (matrix), CA (capsid) and NC (nucleocapsid)(1). Assembly of an infectious virion proceeds in two stages(2). In the first stage, Gag oligomerization into a hexameric protein lattice leads to the formation of an incomplete, roughly spherical protein shell that buds through the plasma membrane of the infected cell to release an enveloped immature virus particle. In the second stage, cleavage of Gag by the viral protease leads to rearrangement of the particle interior, converting the non-infectious immature virus particle into a mature infectious virion. The immature Gag shell acts as the pivotal intermediate in assembly and is a potential target for anti-retroviral drugs both in inhibiting virus assembly and in disrupting virus maturation(3). However, detailed structural information on the immature Gag shell has not previously been available. For this reason it is unclear what protein conformations and interfaces mediate the interactions between domains and therefore the assembly of retrovirus particles, and what structural transitions are associated with retrovirus maturation. Here we solve the structure of the immature retroviral Gag shell from Mason-Pfizer monkey virus by combining cryo-electron microscopy and tomography. The 8-angstrom resolution structure permits the derivation of a pseudo-atomic model of CA in the immature retrovirus, which defines the protein interfaces mediating retrovirus assembly. We show that transition of an immature retrovirus into its mature infectious form involves marked rotations and translations of CA domains, that the roles of the amino-terminal and carboxy-terminal domains of CA in assembling the immature and mature hexameric lattices are exchanged, and that the CA interactions that stabilize the immature and mature viruses are almost completely distinct.

Relevance:

10.00%

Publisher:

Abstract:

The Pattern and Structure Mathematics Awareness Project (PASMAP) has investigated the development of patterning and early algebraic reasoning among 4- to 8-year-olds over a series of related studies. We assert that an awareness of mathematical pattern and structure enables mathematical thinking and simple forms of generalisation from an early age. The project aims to promote a strong foundation for mathematical development by focusing on critical, underlying features of mathematics learning. This paper provides an overview of key aspects of the assessment and intervention, and analyses of the impact of PASMAP on students’ representation, abstraction and generalisation of mathematical ideas. A purposive sample of four large primary schools, two in Sydney and two in Brisbane, representing 316 students from diverse socio-economic and cultural contexts, participated in the evaluation throughout the 2009 school year and in a follow-up assessment in 2010. Two different mathematics programs were implemented: in each school, two Kindergarten teachers implemented PASMAP and another two implemented their regular program. The study shows that both groups of students made substantial gains on the ‘I Can Do Maths’ assessment and a Pattern and Structure Assessment (PASA) interview, but highly significant differences were found on the latter, with PASMAP students outperforming the regular group on PASA scores. Qualitative analysis of students’ responses for structural development showed increased levels for the PASMAP students; those categorised as low ability developed improved structural responses over a relatively short period of time.

Relevance:

10.00%

Publisher:

Abstract:

The steady problem of free surface flow due to a submerged line source is revisited for the case in which the fluid depth is finite and there is a stagnation point on the free surface directly above the source. Both the strength of the source and the fluid speed in the far field are measured by a dimensionless parameter, the Froude number. By applying techniques in exponential asymptotics, it is shown that there is a train of periodic waves on the surface of the fluid with an amplitude which is exponentially small in the limit that the Froude number vanishes. This study clarifies that periodic waves do form for flows due to a source, contrary to a suggestion by Chapman & Vanden-Broeck (2006, J. Fluid Mech., 567, 299–326). The exponentially small nature of the waves means they appear beyond all orders of the original power series expansion; this result explains why attempts at describing these flows using a finite number of terms in an algebraic power series incorrectly predict a flat free surface in the far field.
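
Schematically (with placeholder constants rather than the values derived in the paper), the picture behind "beyond all orders" is that the free-surface elevation splits into an algebraic power series in the Froude number F plus a wave term that no truncation of the series can capture:

\[
  \eta(x) \;\sim\; \sum_{n=0}^{N} a_n F^{2n}
  \;+\; C\, F^{\gamma}\, e^{-\kappa/F^{2}} \cos\!\Big(\frac{\kappa_0 x}{F^{2}} + \phi\Big),
  \qquad F \to 0 .
\]

Truncating the series at any finite N therefore predicts a flat far-field surface, while the exponentially small cosine term carries the periodic wave train; here C, gamma, kappa, kappa_0 and phi are placeholders standing in for quantities that the exponential asymptotics determines.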

Relevance:

10.00%

Publisher:

Abstract:

Triangle-shaped nanohole, nanodot, and lattice antidot structures in hexagonal boron nitride (h-BN) monolayer sheets are characterized with density functional theory calculations utilizing the local spin density approximation. We find that such structures may exhibit very large magnetic moments and associated spin splitting. N-terminated nanodots and antidots show strong spin anisotropy around the Fermi level, that is, half-metallicity. While B-terminated nanodots are shown to lack magnetism due to edge reconstruction, B-terminated nanoholes can retain magnetic character due to the enhanced structural stability of the surrounding two-dimensional matrix. In spite of significant lattice contraction due to the presence of multiple holes, antidot superlattices are predicted to be stable, exhibiting amplified magnetism as well as greatly enhanced half-metallicity. Collectively, the results indicate new opportunities for designing h-BN-based nanoscale devices with potential applications in the areas of spintronics, light emission, and photocatalysis.

Relevance:

10.00%

Publisher:

Abstract:

The assembly of retroviruses is driven by oligomerization of the Gag polyprotein. We have used cryo-electron tomography together with subtomogram averaging to describe the three-dimensional structure of in vitro-assembled Gag particles from human immunodeficiency virus, Mason-Pfizer monkey virus, and Rous sarcoma virus. These represent three different retroviral genera: the lentiviruses, betaretroviruses and alpharetroviruses. Comparison of the three structures reveals the features of the supramolecular organization of Gag that are conserved between genera and therefore reflect general principles of Gag-Gag interactions and the features that are specific to certain genera. All three Gag proteins assemble to form approximately spherical hexameric lattices with irregular defects. In all three genera, the N-terminal domain of CA is arranged in hexameric rings around large holes. Where the rings meet, 2-fold densities, assigned to the C-terminal domain of CA, extend between adjacent rings, and link together at the 6-fold symmetry axis with a density, which extends toward the center of the particle into the nucleic acid layer. Although this general arrangement is conserved, differences can be seen throughout the CA and spacer peptide regions. These differences can be related to sequence differences among the genera. We conclude that the arrangement of the structural domains of CA is well conserved across genera, whereas the relationship between CA, the spacer peptide region, and the nucleic acid is more specific to each genus.

Relevance:

10.00%

Publisher:

Abstract:

Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process that provides confidentiality by encrypting the message and, in a separate pass, provides integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation in which the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation in which the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
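
As a toy illustration of a matrix-based, direct-injection accumulation model (our own simplified Python sketch, not SSS, NLSv2, SOBER-128, or the exact model in the thesis), the state register below is updated linearly over GF(2) while message bits are injected straight into it, and a tag is read out through a deliberately trivial nonlinear filter; the matrices A and B are random placeholders.

import numpy as np

STATE = 16                                            # toy register size in bits
rng = np.random.default_rng(42)
A = rng.integers(0, 2, (STATE, STATE), dtype=np.uint8)    # state-update matrix
B = rng.integers(0, 2, (STATE, 1), dtype=np.uint8)        # message-injection column

def accumulate(message_bits, key_state):
    """Direct injection: s_{t+1} = A s_t XOR B m_t, with arithmetic over GF(2)."""
    s = np.array(key_state, dtype=np.uint8).reshape(STATE, 1)
    for m in message_bits:
        s = (A @ s + B * m) % 2
    return s.ravel()

def tag(message_bits, key_state):
    s = accumulate(message_bits, key_state)
    nl = s ^ (np.roll(s, 1) & s)          # deliberately trivial nonlinear filter
    return "".join(str(int(b)) for b in nl)

if __name__ == "__main__":
    key = rng.integers(0, 2, STATE, dtype=np.uint8)   # secret initial state
    print(tag([1, 0, 1, 1, 0, 0, 1], key))
    print(tag([1, 0, 1, 1, 0, 0, 0], key))            # same key, last message bit flipped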

Relevance:

10.00%

Publisher:

Abstract:

The most powerful known primitive in public-key cryptography is undoubtedly elliptic curve pairings. Upon their introduction just over ten years ago, the computation of pairings was far too slow for them to be considered a practical option. This resulted in a vast amount of research from many mathematicians and computer scientists around the globe aiming to improve this computation speed. From the use of modern results in algebraic and arithmetic geometry to the application of foundational number theory that dates back to the days of Gauss and Euler, cryptographic pairings have since experienced a great deal of improvement. As a result, what was an extremely expensive computation that took several minutes is now a high-speed operation that takes less than a millisecond. This thesis presents a range of optimisations to the state of the art in cryptographic pairing computation. Both by extending prior techniques and by introducing several novel ideas of our own, our work has contributed to record-breaking pairing implementations.