909 results for "Rademacher complexity bound"
Abstract:
In this paper we present FLQ, a quadratic-complexity bound on the values of the positive roots of polynomials. This bound is an extension of FirstLambda, the corresponding linear-complexity bound, and, consequently, it is derived from Theorem 3 below. We have implemented FLQ in the Vincent-Akritas-Strzeboński Continued Fractions method (VAS-CF) for the isolation of real roots of polynomials and compared its behavior with that of the theoretically proven best bound, LMQ. Experimental results indicate that, whereas FLQ runs on average faster (often considerably faster) than LMQ, the quality of the bounds computed by both is about the same; moreover, when VAS-CF is run on our benchmark polynomials using FLQ, LMQ and min(FLQ, LMQ), all three versions perform equally well, and hence it is inconclusive which one should be used in the VAS-CF method.
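The abstract does not reproduce the FLQ formula itself. As a point of reference, the sketch below implements Cauchy's classical bound, a linear-complexity upper bound on the modulus of all roots (hence on the positive roots); it is the kind of estimate that bounds like FirstLambda and FLQ refine, and is offered as an illustration, not as the paper's method.

```python
# A minimal sketch (not the paper's FLQ): Cauchy's classical upper bound on
# the absolute value of every root of p(x) = a_n x^n + ... + a_0, a_n != 0,
# which in particular bounds all positive roots. Linear in the degree.
def cauchy_root_bound(coeffs):
    """coeffs[i] is the coefficient of x^i; coeffs[-1] is the leading one."""
    an = coeffs[-1]
    if an == 0:
        raise ValueError("leading coefficient must be nonzero")
    return 1 + max(abs(a / an) for a in coeffs[:-1])

# Example: p(x) = x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3) has positive roots
# 1 and 2; the bound returned is 1 + max(6, 7, 0) = 8.0.
print(cauchy_root_bound([6, -7, 0, 1]))
```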
Abstract:
Although the computational complexity of the logic underlying OWL 2, the standard Web Ontology Language, appears discouraging for real applications, several contributions have shown that reasoning with OWL ontologies is feasible in practice. It turns out that reasoning in practice is often far less complex than the established theoretical complexity bound suggests, since that bound reflects the worst-case scenario. State-of-the-art reasoners like FaCT++, HermiT, Pellet and RACER have demonstrated that, even with fairly expressive fragments of OWL 2, acceptable performance can be achieved. However, it is still not well understood why reasoning is feasible in practice, and it is rather unclear how to study this problem. In this paper, we suggest first steps that, in our opinion, could lead to a better understanding of practical complexity. We also provide and discuss some initial empirical results with HermiT on prominent ontologies.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. Considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames, first showing that scaling a frame is equivalent to an optimization problem with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. Linear objectives encourage sparse scalings, while barrier objective functions force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case; after a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, give some background on Electron Energy-Loss Spectroscopy (EELS), design a novel scheme for processing EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem, present an algorithm for its solution, and discuss the differences from RPCA that make theoretical guarantees difficult.
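The scalability question underlying these results, namely whether nonnegative weights $w_i$ exist with $\sum_i w_i f_i f_i^{\top} = I$, can indeed be posed as an optimization problem. A minimal sketch of that formulation via nonnegative least squares (my own illustration under these standard definitions, not the dissertation's algorithm):

```python
import numpy as np
from scipy.optimize import nnls

def scale_frame(F):
    """F is d x N with frame vectors f_i as columns. Look for w_i >= 0 with
    sum_i w_i f_i f_i^T = I by nonnegative least squares on the vectorized
    system; returns (weights, residual norm)."""
    d, N = F.shape
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(N)])
    b = np.eye(d).ravel()
    return nnls(A, b)

# Example: the Mercedes-Benz frame in R^2 is tight, hence scalable;
# equal weights 2/3 rescale it to a Parseval frame (residual ~ 0).
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
w, res = scale_frame(F)
print(w, res)
```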
Abstract:
Let $g$ be the genus of the Hermitian function field $H/\mathbb{F}_{q^2}$ and let $C_L(D, mQ_\infty)$ be a typical Hermitian code of length $n$. In [Des. Codes Cryptogr., to appear], we determined the dimension/length profile (DLP) lower bound on the state complexity of $C_L(D, mQ_\infty)$. Here we determine when this lower bound is tight and when it is not. For $m \le \frac{n-2}{2}$ or $m \ge \frac{n-2}{2} + 2g$, the DLP lower bounds reach Wolf's upper bound on state complexity and thus are trivially tight. We begin by showing that for about half of the remaining values of $m$ the DLP bounds cannot be tight. In these cases, we give a lower bound on the absolute state complexity of $C_L(D, mQ_\infty)$ which improves the DLP lower bound. Next we give a good coordinate order for $C_L(D, mQ_\infty)$. With this good order, the state complexity of $C_L(D, mQ_\infty)$ achieves its DLP bound (whenever this is possible). This coordinate order also provides an upper bound on the absolute state complexity of $C_L(D, mQ_\infty)$ (for those values of $m$ for which the DLP bounds cannot be tight). Our bounds on absolute state complexity do not meet for some of these values of $m$, and this leaves open the question whether our coordinate order is best possible in these cases. A straightforward application of these results is that if $C_L(D, mQ_\infty)$ is self-dual, then its state complexity (with respect to the lexicographic coordinate order) achieves its DLP bound of $n/2 - q^2/4$, and, in particular, so does its absolute state complexity.
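For reference, Wolf's upper bound invoked here is, for an $[n, k]$ linear code of state complexity $s$ (standard coding-theory background rather than a detail taken from this abstract):

```latex
s \le \min(k,\, n - k)
```

For a self-dual code $k = n/2$, so the Wolf bound is $n/2$, which places the DLP value $n/2 - q^2/4$ quoted above exactly $q^2/4$ below it.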
Abstract:
This paper addresses the single-machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Owing to its complexity, most previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. To contribute to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a comparative computational study on 280 problem instances involving different due date scenarios. In addition, the values of optimal solutions for small instances from a known benchmark are provided.
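The abstract does not state the objective function explicitly; in the usual formulation of this problem class, with a common due date $d$ and job completion times $C_j$ (an assumption here, not a detail taken from the paper), it reads:

```latex
\min \sum_{j=1}^{n} \left( \alpha_j E_j + \beta_j T_j \right),
\qquad E_j = \max(0,\, d - C_j), \quad T_j = \max(0,\, C_j - d)
```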
Abstract:
Overcommitment of development capacity and development resource deficiencies are important problems in new product development (NPD). Existing approaches to development resource planning have largely neglected the question of the magnitude of resources required for NPD. This research aims to fill that void by developing a simple, higher-level aggregate model based on an intuitive idea: the number of new product families that a firm can effectively undertake is bounded by the complexity of its products or systems and the total amount of resources allocated to NPD. This study examines three manufacturing companies to verify the proposed model. The empirical results confirm the study's initial hypothesis: the more complex the product family, the smaller the number of product families launched per unit of revenue. Several suggestions and implications for managing NPD resources are discussed, such as how this study's model can establish an upper limit on the capacity to develop and launch new product families.
Abstract:
We reinterpret the state space dimension equations for geometric Goppa codes. An easy consequence is that if $\deg G \le \frac{n-2}{2}$ or $\deg G \ge \frac{n-2}{2} + 2g$, then the state complexity of $C_L(D, G)$ is equal to the Wolf bound. For $\deg G \in [\frac{n-1}{2}, \frac{n-3}{2} + 2g]$, we use Clifford's theorem to give a simple lower bound on the state complexity of $C_L(D, G)$. We then derive two further lower bounds on the state space dimensions of $C_L(D, G)$ in terms of the gonality sequence of $F/\mathbb{F}_q$. (The gonality sequence is known for many of the function fields of interest for defining geometric Goppa codes.) One of the gonality bounds uses previous results on the generalised weight hierarchy of $C_L(D, G)$ and one follows in a straightforward way from first principles; often they are equal. For Hermitian codes both gonality bounds are equal to the DLP lower bound on state space dimensions. We conclude by using these results to calculate the DLP lower bound on state complexity for Hermitian codes.
Abstract:
This paper characterizes when a Delone set $X$ in $\mathbb{R}^n$ is an ideal crystal in terms of restrictions on the number of its local patches of a given size or on the heterogeneity of their distribution. For a Delone set $X$, let $N_X(T)$ count the number of translation-inequivalent patches of radius $T$ in $X$, and let $M_X(T)$ be the minimum radius such that every closed ball of radius $M_X(T)$ contains the center of a patch of every one of these kinds. We show that for each of these functions there is a gap in the spectrum of possible growth rates between being bounded and having linear growth, and that having sufficiently slow linear growth is equivalent to $X$ being an ideal crystal. Explicitly, for $N_X(T)$, if $R$ is the covering radius of $X$ then either $N_X(T)$ is bounded or $N_X(T) \ge T/2R$ for all $T > 0$. The constant $1/2R$ in this bound is best possible in all dimensions. For $M_X(T)$, either $M_X(T)$ is bounded or $M_X(T) \ge T/3$ for all $T > 0$. Examples show that the constant $1/3$ in this bound cannot be replaced by any number exceeding $1/2$. We also show that every aperiodic Delone set $X$ has $M_X(T) \ge c(n)T$ for all $T > 0$, for a certain constant $c(n)$ which depends on the dimension $n$ of $X$ and is $> 1/3$ when $n > 1$.
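The two dichotomies can be displayed compactly (this merely restates the abstract's bounds; $R$ is the covering radius of $X$):

```latex
\text{either } N_X(T) \text{ is bounded, or } N_X(T) \ge \frac{T}{2R} \ \text{ for all } T > 0;
\qquad
\text{either } M_X(T) \text{ is bounded, or } M_X(T) \ge \frac{T}{3} \ \text{ for all } T > 0.
```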
Abstract:
A novel unsymmetric dinucleating ligand (LN3N4), combining a tridentate and a tetradentate binding site linked through a m-xylyl spacer, was synthesized as a ligand scaffold for preparing homo- and heterodimetallic complexes in which the two metal ions are bound in two different coordination environments. Site-selective binding of different metal ions is demonstrated. LN3N4 is able to discriminate between CuI and a complementary metal (M′ = CuI, ZnII, FeII, CuII, or GaIII), so that pure heterodimetallic complexes with the general formula [CuIM′(LN3N4)]n+ are synthesized. Reaction of the dicopper(I) complex [CuI2(LN3N4)]2+ with O2 leads to the formation of two different copper-dioxygen (Cu2O2) intermolecular species (O and TP) between two copper atoms located in the same site of different complex molecules. Taking advantage of this feature, reaction of the heterodimetallic complexes [CuIM′(LN3N4)]n+ with O2 at low temperature is used as a tool to determine the final position of the CuI center in the system, because only one of the two Cu2O2 species is formed.
Abstract:
This doctoral dissertation analyzes two novels by the American novelist Robert Coover as examples of hypertextual writing on the book-bound page, as tokens of hyperfiction. The complexity displayed in the novels, John's Wife and The Adventures of Lucky Pierre, integrates the cultural elements that characterize the contemporary condition of capitalism and the technologized practices that have fostered a different subjectivity evidenced in hypertextual writing and reading: posthuman subjectivity. The models that account for the complexity of each novel are drawn from the concept of strange attractors in Chaos Theory and from the concept of the rhizome in Nomadology. The transformations the characters undergo in the degree of their corporeality set the plane on which to discuss turbulence and posthumanity. The notions of dynamic patterns and strange attractors, along with the concepts of the Body without Organs and the Rhizome, are interpreted, leading to a revision of narratology and to analytical categories appropriate to the study of the novels. The reading exercised throughout this dissertation enacts Daniel Punday's corporeal reading. The changes in the characters' degree of materiality are associated with the stages of order, turbulence and chaos in the story, bearing on the constitution of subjectivity within and along the reading process. Coover's inscription of planes of consistency to counter linearity and accommodate hypertextual features in paper-supported narratives describes the characters' trajectory as rhizomatic. The study led to the conclusion that narrative today stands more as a regime in rhizomatic relation with other regimes in cultural practice than as an exclusively literary form and genre. Besides this, posthuman subjectivity emerges as a class identity that holds hypertextual novels as its literary form of choice.
Abstract:
The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure.

In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar but complex objects belong in a sparser and larger-diameter cluster than is truly warranted.

We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
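The abstract does not give the measure's formula. Below is a minimal sketch of a complexity-invariant distance in the spirit described, using the length of the first-difference curve as the complexity estimate (an assumption for illustration, not necessarily the paper's exact estimator):

```python
import numpy as np

def complexity_estimate(t):
    """A simple complexity estimate: length of the first-difference curve."""
    return np.sqrt(np.sum(np.diff(t) ** 2))

def cid(q, c, eps=1e-12):
    """Complexity-invariant distance: Euclidean distance scaled by the ratio
    of the more complex series' estimate to the less complex one's, so that
    pairs of complex series are not penalized relative to simple pairs."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    ed = np.linalg.norm(q - c)
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    correction = max(ce_q, ce_c) / max(min(ce_q, ce_c), eps)
    return ed * correction
```

Since the correction factor is at least 1, the plain Euclidean distance lower-bounds this quantity, which is consistent with the indexing strategy the abstract mentions.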
Abstract:
Myc is a transcription factor that can activate transcription of several hundred genes by binding directly to their promoters at specific DNA sequences (E-boxes). However, recent studies have also shown that it can exert its biological role by repressing transcription. Such studies collectively support a model in which c-Myc-mediated repression occurs through interactions with transcription factors bound to promoter DNA regions rather than through direct recognition of typical E-box sequences. Here, we investigated whether N-Myc can also repress gene transcription, and how this is mechanistically achieved. We used human neuroblastoma cells as a model system, since N-MYC amplification/over-expression represents a key prognostic marker of this tumour. By means of transcription profile analyses we identified at least five genes (TRKA, p75NTR, ABCC3, TG2, p21) that are specifically repressed by N-Myc. Through a dual-step ChIP assay and genetic dissection of gene promoters, we found that N-Myc is physically associated with gene promoters in vivo, in proximity to the transcription start site. N-Myc association with promoters requires interaction with other proteins, such as the Sp1 and Miz1 transcription factors. Furthermore, we found that N-Myc may repress gene expression by interfering directly with Sp1 and/or Miz1 activity (TRKA, p75NTR, ABCC3, p21) or by recruiting Histone Deacetylase 1 (Hdac1) (TG2). In vitro analyses show that distinct N-Myc domains can interact with Sp1, Miz1 and Hdac1, supporting the idea that Myc may participate in distinct repression complexes by interacting specifically with diverse proteins. Finally, the results show that N-Myc, through the genes it represses, affects important cellular functions such as apoptosis, growth, differentiation and motility. Overall, our results support a model in which N-Myc, like c-Myc, can repress gene transcription by direct interaction with Sp1 and/or Miz1, and provide further evidence of the importance of transcriptional repression by Myc factors in tumour biology.
Abstract:
Justification Logic studies epistemic and provability phenomena by introducing justifications/proofs into the language in the form of justification terms. Pure justification logics serve as counterparts of traditional modal epistemic logics, and hybrid logics combine epistemic modalities with justification terms. The computational complexity of pure justification logics is typically lower than that of the corresponding modal logics. Moreover, the so-called reflected fragments, which still contain complete information about the respective justification logics, are known to be in NP for a wide range of justification logics, pure and hybrid alike. This paper shows that, under reasonable additional restrictions, these reflected fragments are NP-complete, thereby proving a matching lower bound. The proof method is then extended to provide a uniform proof that the corresponding full pure justification logics are $\Pi^p_2$-hard, reproving and generalizing an earlier result by Milnikel.
Abstract:
Myxobacteria are single-celled, but social, eubacterial predators. Upon starvation they build multicellular fruiting bodies using a developmental program that progressively changes the pattern of cell movement and the repertoire of genes expressed. Development terminates with spore differentiation and is coordinated by both diffusible and cell-bound signals. The growth and development of Myxococcus xanthus is regulated by the integration of multiple signals from outside the cells with physiological signals from within. A collection of M. xanthus cells behaves, in many respects, like a multicellular organism. For these reasons M. xanthus offers unparalleled access to a regulatory network that controls development and that organizes cell movement on surfaces. The genome of M. xanthus is large (9.14 Mb), considerably larger than the other sequenced delta-proteobacteria. We suggest that gene duplication and divergence were major contributors to genomic expansion from its progenitor. More than 1,500 duplications specific to the myxobacterial lineage were identified, representing >15% of the total genes. Genes were not duplicated at random; rather, genes for cell-cell signaling, small molecule sensing, and integrative transcription control were amplified selectively. Families of genes encoding the production of secondary metabolites are overrepresented in the genome but may have been received by horizontal gene transfer and are likely to be important for predation.
Abstract:
Transcription factors control eukaryotic polymerase II function by influencing the recruitment of multiprotein complexes to promoters and their subsequent integrated function. The complexity of the functional ‘transcriptosome’ has necessitated biochemical fractionation and subsequent protein sequencing on a grand scale to identify individual components. As a consequence, much is now known of the basal transcription complex. In contrast, less is known about the complexes formed at distal promoter elements. The c-fos SRE, for example, is known to bind Serum Response Factor (SRF) and ternary complex factors such as Elk-1. Their interaction with other factors at the SRE is implied but, to date, none have been identified. Here we describe the use of mass-spectrometric sequencing to identify six proteins (SRF, Elk-1 and four novel proteins) captured on SRE duplexes linked to magnetic beads. This approach is generally applicable to the characterisation of nucleic acid-bound protein complexes and the post-translational modification of their components.