992 results for Linear FIR hypothesis


Relevance: 20.00%

Abstract:

Electron microscopy was used to monitor the fate of reconstituted nucleosome cores during in vitro transcription of long linear and supercoiled multinucleosomic templates by the prokaryotic T7 RNA polymerase and the eukaryotic RNA polymerase II. Transcription by T7 RNA polymerase disrupted the nucleosomal configuration in the transcribed region, while nucleosomes were preserved upstream of the transcription initiation site and in front of the polymerase. Nucleosome disruption was independent of the topology of the template, linear or supercoiled, and of the presence or absence of nucleosome positioning sequences in the transcribed region. In contrast, the nucleosomal configuration was preserved during transcription from the vitellogenin B1 promoter with RNA polymerase II in a rat liver total nuclear extract. However, the persistence of nucleosomes on the template was not RNA polymerase II-specific, but was dependent on another activity present in the nuclear extract. This was demonstrated by addition of the extract to the T7 RNA polymerase transcription reaction, which resulted in retention of the nucleosomal configuration. This nuclear activity, also found in HeLa cell nuclei, is heat sensitive and could not be substituted by nucleoplasmin, chromatin assembly factor (CAF-I) or a combination thereof. Altogether, these results identify a novel nuclear activity, called herein transcription-dependent chromatin stabilizing activity I or TCSA-I, which may be involved in a nucleosome transfer mechanism during transcription.

Relevance: 20.00%

Abstract:

BACKGROUND: Smokers have a lower body weight than non-smokers, and smoking cessation is associated with weight gain in most cases. A hormonal mechanism of action might be implicated in weight variations related to smoking, and leptin might be implicated. We made secondary analyses of an RCT, with a hypothesis-free exploratory approach, to study the dynamic of leptin following smoking cessation. METHODS: We measured serum leptin levels among 271 sedentary smokers willing to quit who participated in a randomized controlled trial assessing a 9-week moderate-intensity physical activity intervention as an aid for smoking cessation. We adjusted leptin for body fat levels. We performed linear regressions to test for an association between leptin levels and the study group over time. RESULTS: One year after smoking cessation, the mean serum leptin change was +3.23 mg/l (SD 4.89) in the control group and +1.25 mg/l (SD 4.86) in the intervention group (p of the difference < 0.05). When adjusted for body fat levels, leptin was higher in the control group than in the intervention group (p of the difference < 0.01). The mean weight gain was +2.91 kg (SD 6.66) in the intervention group and +3.33 kg (SD 4.47) in the control group (p not significant). CONCLUSIONS: Serum leptin levels increased significantly after smoking cessation, in spite of substantial weight gain. The leptin dynamic might be different in chronic tobacco users who quit smoking, and physical activity might impact the dynamic of leptin in such a situation. CLINICAL TRIAL REGISTRATION NUMBER: NCT00521391.

Relevance: 20.00%

Abstract:

Small sample properties are of fundamental interest when only limited data are available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.

Relevance: 20.00%

Abstract:

This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
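The sphericity test discussed in this abstract can be sketched in code. This is a minimal illustration, assuming the statistic in question is John's U, which compares the sample covariance matrix to a scaled identity; the function name and the simulated data are illustrative, not from the paper.

```python
import numpy as np

def john_sphericity_statistic(X):
    """John's U statistic for testing H0: Sigma = sigma^2 * I.

    X is an (n, p) data matrix; values of U near zero are
    consistent with sphericity, large values are evidence against it.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)      # p x p sample covariance matrix
    avg_var = np.trace(S) / p        # estimate of sigma^2 under H0
    D = S / avg_var - np.eye(p)      # deviation from scaled identity
    return np.trace(D @ D) / p

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))            # spherical data: U is small
Y = X * np.linspace(1.0, 10.0, 50)            # unequal variances: U is large
print(john_sphericity_statistic(X), john_sphericity_statistic(Y))
```

Note that the statistic stays well-defined even when p exceeds n, since it involves only traces of powers of S, not its inverse; this is exactly why such quadratic-form tests survive in the high-dimensional regime the abstract studies.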

Relevance: 20.00%

Abstract:

The least-squares optimization problem is presented as an important class of unconstrained minimization problems. The importance of this class derives from its well-known applications to parameter estimation in regression analysis and to the solution of systems of nonlinear equations. A review of linear least-squares optimization methods and of some well-known linearization techniques is presented. The main gradient-based methods used for general nonlinear problems are studied: Newton's method and its modifications, including the most widely used quasi-Newton methods (DFP and BFGS). Gradient methods specific to least-squares problems are then introduced: Gauss-Newton and Levenberg-Marquardt. A variety of examples selected from the literature is presented to test the different methods using MATLAB routines. A comparative analysis of the algorithms, based on these computational experiments, exhibits the advantages and disadvantages of the different methods.
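The Gauss-Newton iteration named in this abstract can be sketched briefly. This is a minimal Python sketch (the dissertation itself uses MATLAB routines); the toy exponential-fitting problem and all names are illustrative assumptions, not taken from the work.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50):
    """Minimize 0.5 * ||r(x)||^2 by the Gauss-Newton iteration
    x <- x - (J^T J)^{-1} J^T r, solving each step via least squares."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # solves min ||J s - r||
        x = x - step
    return x

# Toy problem: fit y = a * exp(b * t) to noiseless data generated with a=2, b=-1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),           # d r / d a
                                 p[0] * t * np.exp(p[1] * t)])  # d r / d b
print(gauss_newton(res, jac, [1.0, 0.0]))   # converges toward [2, -1]
```

Levenberg-Marquardt, also covered in the abstract, differs only in damping the step: it solves (J^T J + lambda * I) s = J^T r, interpolating between this Gauss-Newton step and plain gradient descent.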

Relevance: 20.00%

Abstract:

Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type I error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
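Holm's (1979) stepdown method, the baseline this abstract sets out to improve upon, is simple enough to sketch. This is an illustration of the classical procedure only, not of the paper's resampling-based construction; the function name and example p-values are made up.

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm's (1979) stepdown procedure: controls the familywise error
    rate at level alpha with no assumption on the dependence structure.

    Returns a list of booleans, True where the hypothesis is rejected.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank):
        # Bonferroni's threshold for the hypotheses still in play.
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # stepdown: once one hypothesis survives, stop rejecting
    return reject

print(holm_stepdown([0.001, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```

The monotonicity of the thresholds alpha/m <= alpha/(m-1) <= ... <= alpha is exactly the kind of monotonicity requirement on critical values that the abstract identifies as the key component of valid stepdown procedures.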

Relevance: 20.00%

Abstract:

We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.

Relevance: 20.00%

Abstract:

This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power, and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and next these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
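The gap between the linear and power QALY models can be made concrete with a small worked example. The abstract does not spell out the functional form, so this sketch assumes the common power QALY specification V(q, T) = q * T**r, with the reported power coefficient r = 0.65; the function names and the example health state are illustrative.

```python
def qaly_linear(q, T):
    """Linear QALY model: utility q per year, duration weighted linearly."""
    return q * T

def qaly_power(q, T, r=0.65):
    """Assumed power QALY model: duration T is weighted as T**r,
    with r = 0.65 as the optimal coefficient reported in the abstract."""
    return q * T ** r

# Living 10 years in a chronic health state with TTO utility 0.8:
print(qaly_linear(0.8, 10))   # 8.0 QALYs under the linear model
print(qaly_power(0.8, 10))    # noticeably fewer under the power model
```

With r < 1 the marginal value of additional years diminishes, so linear TTO-based QALY calculations overweight long durations relative to the power model, which is the bias the abstract refers to.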

Relevance: 20.00%

Abstract:

This paper introduces the approach of using Total Unduplicated Reach and Frequency (TURF) analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for a solution compared to the algorithms that have been used to date. The results obtained through our exact algorithm are presented; the method proves to be extremely efficient both in obtaining optimal solutions and in computing time for very large instances of the problem at hand. Furthermore, the proposed technique enables the model to be improved in order to overcome the main drawbacks of TURF analysis in practice.
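The TURF objective that the paper's binary linear program optimizes can be illustrated on a tiny instance. This sketch solves it by brute-force enumeration rather than by the paper's exact LP algorithm, purely to show what "unduplicated reach" means; the reach matrix is hypothetical.

```python
from itertools import combinations

# Hypothetical data: rows are respondents, columns are candidate product
# variants; a 1 means that respondent would buy that variant.
reach = [
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
]

def unduplicated_reach(columns):
    """Number of respondents reached by at least one chosen variant
    (each respondent counts once, however many variants reach them)."""
    return sum(any(row[j] for j in columns) for row in reach)

def best_line(k):
    """Exhaustive TURF: the k-variant product line maximizing unduplicated
    reach. (The paper solves this as a binary LP; brute force only
    illustrates the objective and does not scale.)"""
    n = len(reach[0])
    return max(combinations(range(n), k), key=unduplicated_reach)

line = best_line(2)
print(line, unduplicated_reach(line))
```

In a binary LP formulation the same objective is expressed with 0/1 variables for chosen variants and for covered respondents, linked by linear covering constraints, which is what makes exact solution tractable for the very large instances the abstract mentions.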

Relevance: 20.00%

Abstract:

Body condition can affect coloration of traits used in sexual selection and parent-offspring communication by inducing rapid internal changes in pigment concentration or aggregation, thickness of collagen arrays, or blood flux. The recent "makeup hypothesis" proposes an alternative honesty-reinforcing mechanism, with behaviorally mediated deposition of substances on body surfaces ("cosmetics") generating covariation between body condition and coloration. In birds, the uropygial gland wax is actively spread on feathers using the bill and changes in its deposition rate may cause rapid changes in bill and plumage coloration. Using tawny owl nestlings, we tested 3 predictions of the makeup hypothesis, namely that 1) quantity of preen wax deposited accounts for variation in bill coloration, 2) an immune stimulation (induced by injection of a lipopolysaccharide [LPS]) impairs uropygial gland wax production, and 3) different intensities of immune stimulations (strong vs. weak stimulations induced by injections of either LPS or phytohemagglutinin [PHA], respectively) and high versus low food availabilities result in different bill colorations. We found that 1) preen wax reduced bill brightness, 2) a challenge with LPS impaired uropygial gland development, and 3) nestlings challenged with LPS had a brighter bill than PHA-injected nestlings, whereas diet manipulation had no significant effect. Altogether, these results suggest that a strong immune challenge may decrease preen wax deposition rate on the bill of nestling birds, at least by impairing gland wax production, which causes a change in bill coloration. Our study therefore highlights that cosmetic colors might signal short-term variation in immunological status.

Relevance: 20.00%

Abstract:

We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.

Relevance: 20.00%

Abstract:

Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from "as if" linear models. This paper illuminates the distinctions between these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies, estimating both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge, and thus "maps", of when and which heuristic to employ.