975 results for Binary Cyclically Permutable Constant Weight Codes


Relevance: 20.00%

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], which obtained curve samplers with near-optimal randomness complexity.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^{O(1)} in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
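For context, the sampling guarantee in question is usually stated as follows (a standard textbook formulation with illustrative notation, not notation taken from the thesis): Samp maps an r-bit seed to a curve C in F_q^m, and for every test function f the sample average of f on C is ε-close to its true mean except with probability δ over the seed:

```latex
\Pr_{C \sim \mathrm{Samp}(U_r)}\!\left[\,\Bigl|\tfrac{1}{|C|}\textstyle\sum_{y\in C} f(y)
  \;-\; \mathbb{E}_{z\in\mathbb{F}_q^m}[f(z)]\Bigr| > \varepsilon\right] \;\le\; \delta,
\qquad f : \mathbb{F}_q^m \to [0,1].
```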

Relevance: 20.00%

Abstract:

This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation, in both the time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as the product of a constant system matrix and a time-dependent matrix; the latter can be evaluated explicitly for most envelopes in current engineering use. The stationary correlation matrix of the response may be found by taking the limit of the covariance response under a unit step envelope. Reliability analysis can then be performed based on the first two moments of the response.
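As a sketch of the underlying computation (standard random-vibration theory; the symbols here are chosen for illustration, not taken from the thesis): writing the system in state-space form ẏ = Ay + b e(t) w(t), with modulating envelope e(t) and white noise w(t) of intensity S_0, the response covariance Σ(t) obeys a Lyapunov-type differential equation whose solution separates the constant-system-matrix exponential from an envelope-dependent integrand:

```latex
\dot{\Sigma}(t) = A\,\Sigma(t) + \Sigma(t)\,A^{\mathsf T} + 2\pi S_0\, e^{2}(t)\, b\,b^{\mathsf T},
\qquad
\Sigma(t) = \int_0^{t} e^{A(t-\tau)}\, b\,b^{\mathsf T}\, e^{A^{\mathsf T}(t-\tau)}\, 2\pi S_0\, e^{2}(\tau)\, d\tau .
```

This factorization is the product structure referred to above; taking e(t) as a unit step and letting t → ∞ recovers the stationary covariance.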

The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.

A set of explicit solutions for the response of simple linear structures subjected to modulated white-noise earthquake models with four different envelopes is presented as illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering: nonstationary analysis of primary-secondary systems with classical or non-classical damping, soil-layer response and the related structural reliability analysis, and the effect of vertical ground-motion components on the seismic performance of structures. For all three cases, explicit solutions are obtained, the dynamic characteristics of the structures are investigated, and suggestions are given for aseismic design.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look first at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the most informative test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 objective is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, both theoretically and experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
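To make the edge-cutting idea concrete, here is a minimal noiseless sketch of EC2-style greedy test selection (the interface and names are hypothetical; the thesis's BROAD implementation additionally handles noisy responses and posterior updating):

```python
import itertools

def ec2_choose_test(prior, predictions, remaining_tests):
    """Pick the test expected to cut the most edge weight.

    prior           -- probability of each hypothesis, length H
    predictions     -- predictions[t][h]: outcome hypothesis h predicts for test t
    remaining_tests -- iterable of test indices not yet run
    """
    best_test, best_gain = None, -1.0
    for t in remaining_tests:
        gain = 0.0
        # An edge joins two hypotheses that predict different outcomes for
        # test t; running t "cuts" it, with weight prior[i] * prior[j].
        for i, j in itertools.combinations(range(len(prior)), 2):
            if predictions[t][i] != predictions[t][j]:
                gain += prior[i] * prior[j]
        if gain > best_gain:
            best_test, best_gain = t, gain
    return best_test

# Toy usage: three hypotheses, two candidate tests.
prior = [0.5, 0.3, 0.2]
predictions = {0: ["A", "A", "B"],   # test 0 separates h2 from h0, h1
               1: ["A", "B", "B"]}   # test 1 separates h0 from h1, h2
print(ec2_choose_test(prior, predictions, [0, 1]))  # -> 1 (cuts more weight)
```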

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we find no signatures of it in our data.
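For reference, the canonical functional forms being compared (standard in the literature; the parameter names are the conventional ones, not necessarily the thesis's) are the CRRA utility and the prospect-theory value function:

```latex
u_{\mathrm{CRRA}}(x) = \frac{x^{1-\rho}}{1-\rho} \quad (\rho \neq 1),
\qquad
v_{\mathrm{PT}}(x) =
\begin{cases}
x^{\alpha} & x \ge 0,\\
-\lambda\,(-x)^{\beta} & x < 0,
\end{cases}
\qquad \lambda > 1 \text{ (loss aversion)}.
```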

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
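For reference, these discount functions take the following standard forms (conventional notation; the quasi-hyperbolic model is usually parameterized (β, δ), corresponding to the (α, β) labels above):

```latex
D_{\exp}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\beta\delta}(t) = \begin{cases} 1 & t = 0,\\ \beta\,\delta^{t} & t > 0, \end{cases} \qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
```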

In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and time-inconsistent choice.
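One illustrative special case (an assumption for exposition, not necessarily the thesis's construction): if the agent discounts exponentially in subjective time τ, and τ is exponentially distributed with mean equal to calendar time t, the expected discount factor is exactly hyperbolic:

```latex
D(t) = \mathbb{E}\!\left[e^{-r\tau}\right]
     = \int_0^\infty e^{-r s}\,\frac{1}{t}\,e^{-s/t}\,ds
     = \frac{1}{1 + r t}.
```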

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways the standard rational model does not predict. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone explains; more importantly, when the item is no longer discounted, demand for its close substitutes will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreases with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
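One common way to embed loss aversion in a discrete choice model (a generic specification for illustration; the thesis's exact utility may differ) is to add gain-loss terms around a reference price r_j to a standard logit utility, with (x)_+ = max(x, 0):

```latex
U_{ij} = \beta^{\mathsf T} x_j - \alpha\, p_j
       + \eta\big[(r_j - p_j)_+ - \lambda\,(p_j - r_j)_+\big] + \varepsilon_{ij},
\qquad \lambda > 1,
```

so that prices above the reference (losses) are weighted more heavily than equal-sized discounts (gains), producing the asymmetric demand response described above.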

In future work, BROAD can be applied broadly to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

The LIGO and Virgo gravitational-wave observatories are complex and extremely sensitive strain detectors that can be used to search for a wide variety of gravitational waves from astrophysical and cosmological sources. In this thesis, I motivate the search for the gravitational wave signals from coalescing black hole binary systems with total mass between 25 and 100 solar masses. The mechanisms for formation of such systems are not well understood, and we do not have many observational constraints on the parameters that guide the formation scenarios. Detection of gravitational waves from such systems — or, in the absence of detection, the tightening of upper limits on the rate of such coalescences — will provide valuable information that can inform the astrophysics of the formation of these systems. I review the search for these systems and place upper limits on the rate of black hole binary coalescences with total mass between 25 and 100 solar masses. I then show how the sensitivity of this search can be improved by up to 40% by the application of a multivariate statistical classifier, known as a random forest of bagged decision trees, to discriminate more effectively between signal and non-Gaussian instrumental noise. I also discuss the use of this classifier in the search for the ringdown signal from the merger of two black holes with total mass between 50 and 450 solar masses, and present upper limits. I further apply multivariate statistical classifiers to the problem of quantifying the non-Gaussianity of LIGO data. Despite these improvements, no gravitational-wave signals have been detected in LIGO data so far. However, multivariate statistical classification can significantly improve the sensitivity of the Advanced LIGO detectors to such signals.
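A random forest of bagged decision trees is readily illustrated with scikit-learn (a toy sketch, not the thesis pipeline; the two features below are hypothetical stand-ins for search-trigger attributes such as SNR and a chi-squared statistic):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy triggers: rows are events, columns are two summary statistics.
X_noise = rng.normal(loc=[6.0, 2.0], scale=0.8, size=(500, 2))
X_signal = rng.normal(loc=[9.0, 1.0], scale=0.8, size=(500, 2))
X = np.vstack([X_noise, X_signal])
y = np.array([0] * 500 + [1] * 500)   # 0 = noise, 1 = simulated signal

# Bagged decision trees: each tree sees a bootstrap sample of the triggers.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)
# Rank a new trigger by its signal probability instead of a hard threshold.
print(forest.predict_proba([[8.0, 1.2]])[0, 1])
```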

Relevance: 20.00%

Abstract:

We present the first experimental evidence that the heat capacity of superfluid ⁴He, at temperatures very close to the lambda transition temperature T_λ, is enhanced by a constant heat flux Q. The heat capacity at constant Q, C_Q, is predicted to diverge at a temperature T_c(Q) < T_λ at which superflow becomes unstable. In agreement with previous measurements, we find that dissipation enters our cell at a temperature T_DAS(Q) below the theoretical value T_c(Q). Our measurements of C_Q were taken using the discrete-pulse method at fourteen different heat-flux values in the range 1 µW/cm² ≤ Q ≤ 4 µW/cm². The excess heat capacity ΔC_Q we measure has the predicted scaling behavior as a function of T and Q: ΔC_Q · t^α ∝ (Q/Q_c)², where Q_c(T) ~ t is the critical heat current obtained by inverting the equation for T_c(Q). We find that if the theoretical value of T_c(Q) is correct, then ΔC_Q is considerably larger than anticipated. On the other hand, if T_c(Q) ≈ T_DAS(Q), then ΔC_Q is of the same magnitude as the theoretically predicted enhancement.

Relevance: 20.00%

Abstract:

Reinforced concrete highway bridges are subjected to variable dynamic actions due to vehicle traffic on the deck. These dynamic actions can cause cracks to appear or to propagate in the structure. The proper consideration of these aspects motivated this study, whose objective is to evaluate the effects of heavy-vehicle traffic on the bridge deck. Techniques for counting stress cycles and the application of cumulative damage rules were analyzed using the S-N curves of several design codes. The highway bridge investigated consists of four longitudinal girders, three cross-beams, and a reinforced concrete deck. The computational model developed for the dynamic analysis of the bridge was based on the usual discretization techniques of the finite element method. The structural model of the bridge was built with three-dimensional solid finite elements. The vehicles are represented by mass-spring-damper systems, and their traffic is considered by simulating semi-infinite convoys moving at constant speed over the bridge deck. The conclusions of this work concern the service life of the structural elements of reinforced concrete highway bridges subjected to the dynamic actions of heavy-vehicle traffic on the deck.
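As a minimal illustration of the cycle-counting and cumulative-damage step (the Palmgren-Miner rule with a generic S-N curve N = C/S^m; the constants and counted cycles below are hypothetical, not values from the study):

```python
def miner_damage(stress_ranges, counts, C=2e12, m=3.0):
    """Sum n_i / N_i over the counted stress-range bins (Palmgren-Miner)."""
    damage = 0.0
    for S, n in zip(stress_ranges, counts):
        N_allow = C / S**m   # cycles to failure at range S from the S-N curve
        damage += n / N_allow
    return damage            # failure predicted when damage reaches 1.0

# Example: three rainflow-counted bins (stress range in MPa, cycle counts).
print(miner_damage([40.0, 60.0, 90.0], [1.2e6, 3.0e5, 2.0e4]))
```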

Relevance: 20.00%

Abstract:

The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be the joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). Its comparably sized error regions offer a close parallel to the Advanced LIGO problem, while Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to observe optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line Type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes, rather than with the standard mechanism of internal shocks within an ultra-relativistic jet. On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.

Relevance: 20.00%

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. In the first part we study so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. In the second part, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy cost is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite-state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design we study the pairwise error probabilities of the input sequences.
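For reference, the Ingleton inequality in its standard entropy (conditional mutual information) form, which every linear network code satisfies but some entropy vectors violate, reads:

```latex
I(X_1; X_2) \;\le\; I(X_1; X_2 \mid X_3) \;+\; I(X_1; X_2 \mid X_4) \;+\; I(X_3; X_4).
```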

Relevance: 20.00%

Abstract:

As is well known, copepods play an important role in the nutrition of fish. Accordingly, with a view to facilitating research on the quantitative side of feeding, a considerable number of papers have recently been devoted to developing methods for determining the wet weight of these crustaceans. To facilitate such research further, it would be of great interest to clarify whether there is some regularity in the growth of the crustaceans during metamorphosis and, if so, whether the length of the larvae at each stage can be determined not by measuring them but by formulae derived from these regularities. This article examines the growth curves of different species of freshwater Copepoda, obtained from experimental observations in cultures or from measurements of mass material at all stages of development in samples from water bodies. The authors study in particular the ratio of the mean diameter of the eggs to the mean length of the egg-bearing females.

Relevance: 20.00%

Abstract:

In this dissertation, a neodymium-based Ziegler-Natta catalytic system was used to evaluate the influence of the halogenating agent and of the halogen:Nd molar ratio on the catalytic activity, the propagation rate constant, the polymerization conversion, and the microstructure, molecular weight, and polydispersity of 1,4-cis polybutadiene. The system consisted of neodymium versatate (NdV), diisobutylaluminum hydride (DIBAH), and a halogenating agent. The halogenating agents studied were t-butyl chloride (t-BuCl), ethylaluminum sesquichloride (EASC), and diethylaluminum chloride (DEAC), at Cl:Nd molar ratios ranging from 0.5:1 to 5:1, and boron trifluoride diethyl etherate (BF3.Et2O) at an F:Nd molar ratio of 3:1. The polymers were characterized by infrared spectroscopy to determine the microstructure and by size exclusion chromatography to determine the molecular weights. The 1,4-cis content ranged from 90 to 98%, the number-average molecular weight (M̄n) remained between 0.2×10⁵ and 2×10⁵, and the weight-average molecular weight (M̄w) ranged from 1.4×10⁵ to 4×10⁵.
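For reference, the polydispersity quoted above is the standard dimensionless ratio of the two molecular weight averages:

```latex
\mathrm{PDI} \;=\; \frac{\bar{M}_w}{\bar{M}_n} \;\ge\; 1 .
```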

Relevance: 20.00%

Abstract:

Observations of individual weight, duration of development and production of different stages of Tropodiaptomus incognitus are presented. The study is based on data gathered from Lake Chad in 1968.

Relevance: 20.00%

Abstract:

There is no doubt that determination of the biomass of zooplankton (primarily of crustaceans) will be taken up in practical and limnological work, especially after the recent publication of fairly comprehensive weight tables for a whole range of species of freshwater copepods and cladocerans. The usefulness of applying formulae for determining the biomass of marine crustaceans to freshwater copepods is discussed.
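Such formulae are typically allometric length-weight regressions of the form below (a standard form given here for context, with coefficients fitted per species, not values from the paper):

```latex
W = q\,L^{b}, \qquad \ln W = \ln q + b \ln L,
```

where W is individual wet weight, L is body length, and q and b are species-specific constants (b ≈ 3 for isometric growth).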