972 results for One-inclusion mistake bounds
Abstract:
An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code, whose dual contains 4-designs, which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound that relates the number of queries r and the fraction of errors that can be tolerated, for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
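The 4-design construction itself is not given in the abstract; purely as an illustration of what r-query local correction means, the sketch below implements the textbook 2-query corrector for the Hadamard code in Python (all function names are ours, and this is a generic stand-in rather than the code studied in the article).

```python
import random

def hadamard_encode(msg_bits):
    """Encode a k-bit message as the Hadamard codeword of length 2**k:
    position a holds the inner product <msg, a> over GF(2)."""
    k = len(msg_bits)
    return [
        sum(m & ((a >> i) & 1) for i, m in enumerate(msg_bits)) % 2
        for a in range(2 ** k)
    ]

def locally_correct(received, a, k):
    """2-query local correction for the Hadamard code: to recover coordinate a,
    pick a random mask b and return received[b] XOR received[a ^ b].
    If at most a delta-fraction of symbols is corrupted, both queries land on
    clean positions with probability at least 1 - 2*delta."""
    b = random.randrange(2 ** k)
    return received[b] ^ received[a ^ b]

# Toy usage: corrupt one position of the codeword and recover a coordinate.
msg = [1, 0, 1]
codeword = hadamard_encode(msg)
corrupted = codeword[:]
corrupted[5] ^= 1                      # flip one symbol
a = 3
print(locally_correct(corrupted, a, len(msg)), codeword[a])
```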
Abstract:
In this paper, we revisit the combinatorial error model of Mazumdar et al. that models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings. The exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds from the prior literature.
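The abstract does not describe the specific hypergraphs involved; as background only, the standard linear-programming definition of a fractional covering, on which arguments of this kind rely, is the following (bounds of this flavor typically upper-bound code size by such a covering value via LP duality):

```latex
% Standard LP formulation of a fractional covering of a hypergraph H = (V, E);
% this is the generic notion the abstract refers to, not the paper's specific bound.
\[
\tau^*(H) \;=\; \min_{w \,:\, E \to \mathbb{R}_{\ge 0}} \;\sum_{e \in E} w(e)
\quad \text{subject to} \quad
\sum_{e \ni v} w(e) \;\ge\; 1 \quad \text{for every } v \in V .
\]
```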
Abstract:
Given a Boolean function f : F_2^n → {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least ε·2^n points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. The canonical tester for triangle-freeness is the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Ω(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough ε, there exists an integer n_0 such that for all n ≥ n_0 there exists a function on n variables, depending on all of them, which is ε-far from being triangle-free and for which the canonical tester requires a number of queries exceeding the trivial bound. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must make at least the square root of that many queries.
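Since the canonical tester is fully specified above, a short Python sketch makes it concrete (function and parameter names are ours; each loop iteration costs three queries, so the query complexity is three times the number of samples):

```python
import random

def canonical_triangle_test(f, n, num_samples):
    """Canonical tester for triangle-freeness of f : GF(2)^n -> {0, 1}.

    Repeatedly draws x and y uniformly and independently, queries
    f(x), f(y), f(x XOR y) (XOR of bit strings is addition over GF(2)^n),
    and rejects as soon as all three values equal 1, i.e. a triangle is found.
    """
    for _ in range(num_samples):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) == f(y) == f(x ^ y) == 1:
            return "reject"   # a triangle was found, so f is not triangle-free
    return "accept"           # no triangle seen; a one-sided tester accepts

# Toy usage: the all-ones function has many triangles and is rejected quickly.
print(canonical_triangle_test(lambda z: 1, n=10, num_samples=100))
```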
Abstract:
In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (up to December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that the low M_A (≲ 350 GeV) and high tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and flavor data. However, regions with Higgs mixing angle α ~ 0.1-0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. It is found that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while the regions with tan β > 20 are excluded by the direct search bounds from the LHC-8 data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, with special attention to the H → hh, H/A → tt̄ and H/A → τ⁺τ⁻ decay modes.
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area, we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that regard, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on the channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information, and from finite state channels, to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
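For reference (the abstract does not reproduce it), the Ingleton inequality mentioned above has the following standard form in terms of mutual information for four jointly distributed random variables; linear network codes always satisfy it, while general entropy vectors need not:

```latex
% Standard entropy form of the Ingleton inequality for four jointly
% distributed random variables A, B, C, D.
\[
I(A;B) \;\le\; I(A;B \mid C) \;+\; I(A;B \mid D) \;+\; I(C;D).
\]
```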
Abstract:
The behaviors of double proton transfer (DPT) occurring in a representative glycinamide-formamidine complex have been investigated at the B3LYP/6-311++G** level of theory. Computational results suggest that the participation of a formamidine molecule facilitates the proton transfer (PT) for glycinamide compared with the unassisted case. The DPT process proceeds via a concerted mechanism rather than a stepwise one, since no zwitterionic complexes have been located along the DPT pathway. The barrier heights are 14.4 and 3.9 kcal/mol for the forward and reverse directions, respectively. However, these are reduced by 3.1 and 2.9 kcal/mol, to 11.3 and 1.0 kcal/mol, with further inclusion of zero-point vibrational energy (ZPVE) corrections, where the lower reverse barrier height implies that the reverse reaction should proceed easily at any temperature of biological importance. Additionally, the one-electron oxidation process for the doubly H-bonded glycinamide-formamidine complex has also been investigated. The oxidized product is characterized as a distonic radical cation, since one-electron oxidation takes place on the glycinamide fragment and a proton is transferred spontaneously from the glycinamide to the formamidine fragment. As a result, the vertical and adiabatic ionization potentials for the neutral doubly H-bonded complex have been determined to be about 8.46 and 7.73 eV, respectively, both of which are reduced by about 0.79 and 0.87 eV relative to those of isolated glycinamide due to the formation of the intermolecular H-bond with formamidine. Finally, the differences between the model system and the adenine-thymine base pair are discussed briefly.
Abstract:
In this paper, we introduce the leaps-and-bounds regression method, which can be used to select variables quickly and obtain the best regression models. These models contain one variable, two variables, three variables, and so on. The results obtained using leaps-and-bounds regression were compared with those achieved using stepwise regression, leading to the conclusion that leaps-and-bounds regression is an effective method.
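The algorithm itself is not described in the abstract; as a rough, brute-force stand-in for the best-subset selection problem it addresses (the leaps-and-bounds procedure prunes this search rather than enumerating it exhaustively), here is a small Python sketch with all names ours:

```python
from itertools import combinations
import numpy as np

def best_subsets(X, y, max_size):
    """For each subset size up to max_size, exhaustively find the predictor
    subset with the lowest residual sum of squares. Leaps-and-bounds
    regression reaches the same winners while pruning most of this search."""
    n, p = X.shape
    best = {}
    for k in range(1, max_size + 1):
        for cols in combinations(range(p), k):
            A = np.column_stack([np.ones(n), X[:, cols]])   # intercept + chosen predictors
            coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(res[0]) if res.size else float(np.sum((y - A @ coef) ** 2))
            if k not in best or rss < best[k][1]:
                best[k] = (cols, rss)
    return best

# Toy usage with synthetic data: only predictors 0 and 2 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.1, size=100)
for k, (cols, rss) in best_subsets(X, y, 3).items():
    print(k, cols, round(rss, 3))
```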
Abstract:
Illicit trade carries the potential to magnify existing tobacco-related health care costs through increased availability of untaxed and inexpensive cigarettes. What is known with respect to the magnitude of illicit trade for Vietnam is produced primarily by the industry, and the methodologies are typically opaque. Independent assessment of the illicit cigarette trade in Vietnam is vital to tobacco control policy. This paper measures the magnitude of illicit cigarette trade for Vietnam between 1998 and 2010 using two methods: discrepancies between legitimate domestic cigarette sales and domestic tobacco consumption estimated from surveys, and trade discrepancies as recorded by Vietnam and its trade partners. The results indicate that Vietnam likely experienced net inward smuggling during the period studied. With the inclusion of adjustments for survey respondent under-reporting, inward illicit trade likely occurred in three of the four years for which surveys were available. Discrepancies in trade records indicate that the value of cigarettes smuggled into Vietnam ranged from $100 million to $300 million between 2000 and 2010 and that these cigarettes primarily originated in Singapore, Hong Kong, Macao, Malaysia, and Australia. Notable differences in trends over time exist between the two methods, but by comparison, the industry estimates consistently place the magnitude of illicit trade at the upper bounds of what this study shows. First, the unavailability of annual, survey-based estimates of consumption may obscure the true annual trend over time. Second, as surveys changed over time, estimates relying on them may be inconsistent with one another. Finally, these two methods measure different components of illicit trade, specifically consumption of illicit cigarettes regardless of origin and smuggling of cigarettes into a particular market. However, absent a gold standard, comparisons of different approaches to illicit trade measurement serve efforts to refine and improve measurement approaches and estimates.
Abstract:
One of the main pillars in the development of inclusive schools is initial teacher training. Before determining whether it is necessary to make changes (and of what type) in training programs or curriculum guides related to attention to diversity and inclusive education, the attitudes of future education professionals in this area should be analyzed. This includes the identification of the relevant predictors of inclusive attitudes. The research reported in this article pursued this objective, using a quantitative survey methodology based on cross-sectional structured data collection and statistical analyses related to the quality of the attitude questionnaire (factor analysis and Cronbach's alpha), descriptive statistics, correlations, hypothesis tests for differences of means, and regression analysis in order to predict attitudes towards inclusion in education. Firstly, the results show that the participants held very positive attitudes toward the inclusion of students with special educational needs. In particular, older respondents, those with longer training and, to a lesser extent, women and those who had been in contact with disabled people stood out in this respect. Secondly, the results show that self-transcendence values and, more weakly, contact function as robust predictors of the attitudes of future practitioners towards the inclusion of students with special needs. Some applications for the initial professionalization of educators are suggested in the discussion.
Abstract:
English law has long struggled to understand the effect of a fundamental common mistake in contract formation. Bell v Lever Brothers Ltd [1932] AC 161 recognises that a common mistake which totally undermines a contract renders it void. Solle v Butcher [1950] 1 KB 671 recognises a doctrine of 'mistake in equity' under which a serious common mistake in contract formation falling short of totally undermining the contract could give an adversely affected party the right to rescind the contract. This article accepts that the enormous difficulty in differentiating these two kinds of mistake justifies the insistence by the Court of Appeal in The Great Peace [2003] QB 679 that there can be only one doctrine of common mistake. However, the article proceeds to argue that where the risk of the commonly mistaken matter is not allocated by the contract itself a better doctrine would be that the contract is voidable.
Abstract:
S. C. Wright, A. Aron, T. McLaughlin-Volpe, and S. A. Ropp (1997) proposed that the benefits associated with cross-group friendship might also stem from vicarious experiences of friendship. Extended contact was proposed to reduce prejudice by reducing intergroup anxiety, by generating perceptions of positive ingroup and outgroup norms regarding the other group, and through inclusion of the outgroup in the self. This article documents the first test of Wright et al.'s model, which used structural equation modeling among two independent samples in the context of South Asian-White relations in the United Kingdom. Supporting the model, all four variables mediated the relationship between extended contact and outgroup attitude, controlling for the effect of direct contact. A number of alternative models were ruled out, indicating that the four mediators operate concurrently rather than predicting one another.
Abstract:
This article presents findings from a qualitative study of social dancing for successful aging amongst senior citizens in three locales: in Blackpool (GB), around Belfast (NI), and in Sacramento (US). Social dancers are found to navigate an intense space in society, one of wellbeing accompanied by a beneficial sense of youthfulness. Besides such renewal and self-actualisation, findings also attest to the perceived social, psychological and health benefits of social dancing amongst senior citizens. They also articulate three different social dancing practices: social dance as tea dance (Sacramento), social dance as practice dance (Blackpool), social dance as motility (Belfast and environs).
Abstract:
The ability to distribute quantum entanglement is a prerequisite for many fundamental tests of quantum theory and numerous quantum information protocols. Two distant parties can increase the amount of entanglement between them by means of quantum communication encoded in a carrier that is sent from one party to the other. Intriguingly, entanglement can be increased even when the exchanged carrier is not entangled with the parties. However, in light of the defining property of entanglement stating that it cannot increase under classical communication, the carrier must be quantum. Here we show that, in general, the increase of relative entropy of entanglement between two remote parties is bounded by the amount of nonclassical correlations of the carrier with the parties as quantified by the relative entropy of discord. We study implications of this bound, provide new examples of entanglement distribution via unentangled states, and put further limits on this phenomenon.
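Stated schematically (notation ours, paraphrasing the sentence above rather than quoting the paper's theorem), the bound says that the gain in relative entropy of entanglement E between the remote parties, achieved by sending the carrier C, cannot exceed the relative entropy of discord D of C with the parties:

```latex
% Schematic form of the bound described in the abstract (notation ours):
% the entanglement gain from transmitting the carrier C is limited by the
% nonclassical correlations (relative entropy of discord) of C with the parties.
\[
E_{\text{after}} - E_{\text{before}} \;\le\; D_{C \mid \text{parties}} .
\]
```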
Abstract:
This paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in O(1) rounds and, with high probability, uses only O(√n log^{3/2} n) messages to elect a unique leader. When considering the "explicit" variant of leader election, where eventually every node knows the identity of the leader, our algorithm yields the asymptotically optimal bounds of O(1) rounds and O(n) messages. This algorithm is then extended to one solving leader election on any connected non-bipartite n-node graph G in O(τ(G)) time and O(τ(G) √n log^{3/2} n) messages, where τ(G) is the mixing time of a random walk on G. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, we present an almost matching lower bound for randomized leader election, showing that Ω(√n) messages are needed for any leader election algorithm that succeeds with probability at least 1/e + ε, for any small constant ε > 0. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks.
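The abstract does not spell out the algorithm; the toy simulation below (Python, all names and constants ours) illustrates one standard referee-style way to elect a leader with a sublinear number of messages, and is meant only to convey the flavor of such schemes, not the algorithm analyzed in the paper.

```python
import math
import random

def elect_leader(n, c=2):
    """Toy, centralized simulation of a referee-based randomized election.

    Each node becomes a candidate with probability c*log(n)/n and draws a
    random rank; each candidate contacts about sqrt(n*log n) random 'referee'
    nodes; a referee endorses only the smallest rank it hears, and a candidate
    endorsed by all of its referees wins. With high probability exactly one
    candidate wins, using roughly sqrt(n)*polylog(n) messages in expectation.
    """
    p = min(1.0, c * math.log(n) / n)
    k = int(math.sqrt(n * math.log(n))) + 1
    candidates = [v for v in range(n) if random.random() < p]
    if not candidates:
        return [], 0                      # a real protocol would simply retry
    rank = {v: random.random() for v in candidates}
    referees = {v: [random.randrange(n) for _ in range(k)] for v in candidates}

    # Each referee endorses the smallest-rank candidate that contacted it.
    best_seen = {}
    for v in candidates:
        for ref in referees[v]:
            if ref not in best_seen or rank[v] < rank[best_seen[ref]]:
                best_seen[ref] = v

    leaders = [v for v in candidates
               if all(best_seen[ref] == v for ref in referees[v])]
    messages = sum(len(referees[v]) for v in candidates) * 2   # request + reply
    return leaders, messages

leaders, msgs = elect_leader(100_000)
print("leaders:", leaders, "messages:", msgs)
```

The intended intuition is that with only about log n candidates, each contacting roughly √(n log n) referees, any two candidates share a referee with high probability, so at most one candidate can collect all of its endorsements.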