347 results for Randomization
Abstract:
A simple protein-DNA interaction analysis has been developed using both a high-affinity/high-specificity zinc finger protein and a low-specificity zinc finger protein with nonspecific DNA binding capability. The latter protein is designed to mimic background binding by proteins generated in randomized or shuffled gene libraries. In essence, DNA is immobilized onto the surface of microplate wells via streptavidin capture, and green fluorescent protein (GFP)-labeled protein is added in solution as part of a crude cell lysate or protein mixture. After incubation and washing, bound protein is detected in a standard microplate reader. The minimum sensitivity of the assay is approximately 0.4 nM protein. The assay format is ideally suited to investigate the interactions of DNA binding proteins from within crude cell extracts and/or mixtures of proteins that may be encountered in protein libraries generated by codon randomization or gene shuffling.
Abstract:
The two-way design has been variously described as a matched-sample F-test, a simple within-subjects ANOVA, a one-way within-groups ANOVA, a simple correlated-groups ANOVA, and a one-factor repeated measures design! This confusion of terminology is likely to lead to problems in correctly identifying this analysis within commercially available software. The essential feature of the design is that each treatment is allocated by randomization to one experimental unit within each group or block. The block may be a plot of land, a single occasion on which the experiment was performed, or a human subject. The ‘blocking’ is designed to remove an aspect of the error variation and so increase the ‘power’ of the experiment. If there is no significant source of variation associated with the ‘blocking’, however, the two-way design is at a disadvantage: the DF of the error term is reduced compared with a fully randomised design, thus reducing the ‘power’ of the analysis.
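The DF trade-off described above can be made concrete. With t treatments and b blocks (one observation per treatment-block cell), the error term has (t-1)(b-1) DF in the two-way analysis, versus t(b-1) for a completely randomized one-way analysis of the same t·b observations; blocking therefore costs b-1 error DF. A minimal sketch (the function name is illustrative):

```python
def error_df(n_treatments: int, n_blocks: int) -> dict:
    """Error degrees of freedom for the two designs discussed above,
    assuming one observation per treatment-block cell."""
    n = n_treatments * n_blocks               # total observations
    completely_randomized = n - n_treatments  # one-way ANOVA error DF
    randomized_block = (n_treatments - 1) * (n_blocks - 1)  # two-way error DF
    return {"completely_randomized": completely_randomized,
            "randomized_block": randomized_block}

print(error_df(4, 6))  # blocking spends (n_blocks - 1) = 5 error DF
```

The blocking is worthwhile only if the variation it removes from the error term outweighs this loss of DF.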
Abstract:
We introduce the concept of noncoherent optical pulse discrimination from a coherent (or partially coherent) signal of the same energy using the phenomenon of soliton generation. The impact of randomization of the optical signal content on the observable characteristics of soliton generation is examined and quantified for the particular example of a rectangular pulse.
Abstract:
PURPOSE: To compare the efficacy and safety of ranibizumab and bevacizumab intravitreal injections to treat neovascular age-related macular degeneration (nAMD). DESIGN: Multicenter, noninferiority factorial trial with equal allocation to groups. The noninferiority limit was 3.5 letters. This trial is registered (ISRCTN92166560). PARTICIPANTS: People >50 years of age with untreated nAMD in the study eye who read ≥25 letters on the Early Treatment Diabetic Retinopathy Study chart. METHODS: We randomized participants to 4 groups: ranibizumab or bevacizumab, given either every month (continuous) or as needed (discontinuous), with monthly review. MAIN OUTCOME MEASURES: The primary outcome is at 2 years; this paper reports a prespecified interim analysis at 1 year. The primary efficacy and safety outcome measures are distance visual acuity and arteriothrombotic events or heart failure. Other outcome measures are health-related quality of life, contrast sensitivity, near visual acuity, reading index, lesion morphology, serum vascular endothelial growth factor (VEGF) levels, and costs. RESULTS: Between March 27, 2008 and October 15, 2010, we randomized and treated 610 participants. One year after randomization, the comparison between bevacizumab and ranibizumab was inconclusive (bevacizumab minus ranibizumab -1.99 letters; 95% confidence interval [CI], -4.04 to 0.06). Discontinuous treatment was equivalent to continuous treatment (discontinuous minus continuous -0.35 letters; 95% CI, -2.40 to 1.70). Foveal total thickness did not differ by drug, but was 9% less with continuous treatment (geometric mean ratio [GMR], 0.91; 95% CI, 0.86 to 0.97; P = 0.005). Fewer participants receiving bevacizumab had an arteriothrombotic event or heart failure (odds ratio [OR], 0.23; 95% CI, 0.05 to 1.07; P = 0.03). There was no difference between drugs in the proportion experiencing a serious systemic adverse event (OR, 1.35; 95% CI, 0.80 to 2.27; P = 0.25). Serum VEGF was lower with bevacizumab (GMR, 0.47; 95% CI, 0.41 to 0.54; P
Abstract:
Background - To assess potentially elevated cardiovascular risk related to new antihyperglycemic drugs in patients with type 2 diabetes, regulatory agencies require a comprehensive evaluation of the cardiovascular safety profile of new antidiabetic therapies. We assessed cardiovascular outcomes with alogliptin, a new inhibitor of dipeptidyl peptidase 4 (DPP-4), as compared with placebo in patients with type 2 diabetes who had had a recent acute coronary syndrome. Methods - We randomly assigned patients with type 2 diabetes and either an acute myocardial infarction or unstable angina requiring hospitalization within the previous 15 to 90 days to receive alogliptin or placebo in addition to existing antihyperglycemic and cardiovascular drug therapy. The study design was a double-blind, noninferiority trial with a prespecified noninferiority margin of 1.3 for the hazard ratio for the primary end point of a composite of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke. Results - A total of 5380 patients underwent randomization and were followed for up to 40 months (median, 18 months). A primary end-point event occurred in 305 patients assigned to alogliptin (11.3%) and in 316 patients assigned to placebo (11.8%) (hazard ratio, 0.96; upper boundary of the one-sided repeated confidence interval, 1.16; P<0.001 for noninferiority). Glycated hemoglobin levels were significantly lower with alogliptin than with placebo (mean difference, -0.36 percentage points; P<0.001). Incidences of hypoglycemia, cancer, pancreatitis, and initiation of dialysis were similar with alogliptin and placebo. Conclusions - Among patients with type 2 diabetes who had had a recent acute coronary syndrome, the rates of major adverse cardiovascular events were not increased with the DPP-4 inhibitor alogliptin as compared with placebo. (Funded by Takeda Development Center Americas; EXAMINE ClinicalTrials.gov number, NCT00968708.)
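The noninferiority logic used in trials like the one above reduces to a one-line check: the new drug is declared noninferior when the upper boundary of the confidence interval for the hazard ratio stays below the prespecified margin. A minimal sketch (the function name is illustrative, not from the trial protocol):

```python
def noninferior(hr_upper_ci: float, margin: float = 1.3) -> bool:
    """Noninferiority holds when the CI upper bound for the hazard
    ratio stays below the prespecified noninferiority margin."""
    return hr_upper_ci < margin

# EXAMINE: upper boundary 1.16 of the one-sided repeated CI vs. margin 1.3
print(noninferior(1.16))  # True
```

Note that this design can establish "not worse than placebo by more than the margin" without establishing superiority.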
Abstract:
The problem of sequential two-block decomposition of a Boolean function is considered for the case when a good solution exists. The problem consists mainly in finding an appropriate weak partition on the set of arguments of the considered Boolean function such that the function is decomposable at that partition. A new fast heuristic combinatorial algorithm is offered for solving this task. First, a randomized search for traces of such a partition is performed. The recognized traces are represented by "triads": the simplest weak partitions corresponding to non-trivial decompositions. The whole sought-for partition is then restored from the discovered trace by building a track that is initialized by the trace and leads to the solution. The results of computer experiments testify to the high practical efficiency of the algorithm.
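For context on what "decomposable at a partition" means, the classic brute-force test (Ashenhurst's criterion, not the heuristic of the abstract above) checks the column multiplicity of the partition matrix: f(A, B) = F(g(A), B) exists for a single intermediate function g exactly when the bound set A induces at most 2 distinct columns. A minimal sketch, with illustrative names:

```python
from itertools import product

def column_multiplicity(truth, bound, free):
    """Number of distinct columns in the partition matrix of a Boolean
    function given as a dict {bit-tuple: 0/1}. A value <= 2 means
    f(bound, free) = F(g(bound), free) exists (Ashenhurst criterion
    for a simple disjoint decomposition)."""
    n = len(bound) + len(free)
    cols = set()
    for a in product((0, 1), repeat=len(bound)):
        col = []
        for b in product((0, 1), repeat=len(free)):
            x = [0] * n
            for v, bit in zip(bound, a):
                x[v] = bit
            for v, bit in zip(free, b):
                x[v] = bit
            col.append(truth[tuple(x)])
        cols.add(tuple(col))
    return len(cols)

# f(x0, x1, x2) = (x0 AND x1) XOR x2 decomposes with g = x0 AND x1
f = {x: (x[0] & x[1]) ^ x[2] for x in product((0, 1), repeat=3)}
print(column_multiplicity(f, bound=(0, 1), free=(2,)))  # 2 -> decomposable
```

The exhaustive test is exponential in the number of arguments, which is why heuristic randomized searches such as the one described above are of practical interest.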
Abstract:
A new method is suggested for solving some hard combinatorial optimization problems that admit a certain reformulation. For such a problem, several different but similar problems are prepared which have the same set of solutions. They are solved in parallel on a computer until one of them is solved, and that solution is accepted. Notwithstanding the evident overhead, the whole run time can be significantly reduced owing to the dispersion of combinatorial-search speeds in the cases considered. The efficiency of this approach is investigated on the concrete problem of finding short solutions of a non-deterministic system of linear logical equations.
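The portfolio idea above can be sketched in a few lines: launch several equivalent randomized copies of the problem, accept the first solution that arrives, and discard the rest. The toy random search below merely stands in for the reformulated problem copies; all names are illustrative.

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def randomized_solver(seed, target):
    """Toy stand-in for one reformulated copy of the problem: a random
    search whose run time varies strongly with the seed."""
    rng = random.Random(seed)
    tries = 0
    while True:
        tries += 1
        if rng.randrange(1000) == target:
            return seed, tries

def portfolio_solve(n_copies=8, target=7):
    """Run equivalent randomized copies in parallel; accept whichever
    finishes first."""
    with ThreadPoolExecutor(max_workers=n_copies) as pool:
        futures = [pool.submit(randomized_solver, s, target)
                   for s in range(n_copies)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

seed, tries = portfolio_solve()
print(f"first finisher: copy {seed} after {tries} tries")
```

The expected speed-up comes precisely from the dispersion the abstract mentions: the minimum of several heavy-tailed run times is typically far below their mean.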
Abstract:
AMS subject classification: 90C31, 90A09, 49K15, 49L20.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p small n data sets for instance require a different set of tools from the large n small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity in the sense of Ockham’s razor non-plurality principle of parsimony tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
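Two of the tools listed above, Projection and Randomization, combine in random projection: compressing large-p rows onto k random Gaussian directions so that squared distances are approximately preserved (the Johnson-Lindenstrauss idea). A minimal pure-Python sketch, with illustrative names:

```python
import math
import random

def random_projection(X, k, seed=0):
    """Project n x p rows onto k random Gaussian directions, scaled by
    1/sqrt(k) so that squared distances are preserved in expectation."""
    rng = random.Random(seed)
    p = len(X[0])
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(k)]
         for _ in range(p)]
    return [[sum(x[j] * R[j][i] for j in range(p)) for i in range(k)]
            for x in X]

# toy "large p" rows compressed from p = 50 to k = 5 coordinates
X = [[float(i + j) for j in range(50)] for i in range(4)]
Z = random_projection(X, k=5)
print(len(Z), len(Z[0]))  # 4 5
```

Consistent with the no-free-lunch caveat above, this is only appropriate for the large-p categories of bigness; for large-n, small-p data other tools in the list (e.g. Aggregation or Sequentialization) apply instead.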
Abstract:
Our sleep timing preference, or chronotype, is a manifestation of our internal biological clock. Variation in chronotype has been linked to sleep disorders, cognitive and physical performance, and chronic disease. Here we perform a genome-wide association study of self-reported chronotype within the UK Biobank cohort (n=100,420). We identify 12 new genetic loci that implicate known components of the circadian clock machinery and point to previously unstudied genetic variants and candidate genes that might modulate core circadian rhythms or light-sensing pathways. Pathway analyses highlight central nervous and ocular systems and fear-response-related processes. Genetic correlation analysis suggests chronotype shares underlying genetic pathways with schizophrenia, educational attainment and possibly BMI. Further, Mendelian randomization suggests that evening chronotype relates to higher educational attainment. These results not only expand our knowledge of the circadian system in humans but also expose the influence of circadian characteristics over human health and life-history variables such as educational attainment.
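For readers unfamiliar with the technique, the simplest single-instrument Mendelian randomization estimate is the Wald ratio: the variant-outcome association divided by the variant-exposure association. The numbers below are hypothetical, not taken from the study above.

```python
def wald_ratio(beta_outcome: float, beta_exposure: float) -> float:
    """Single-instrument Mendelian randomization estimate: ratio of the
    variant-outcome association to the variant-exposure association."""
    return beta_outcome / beta_exposure

# hypothetical per-allele association estimates (illustrative only)
print(round(wald_ratio(beta_outcome=0.03, beta_exposure=0.10), 3))  # 0.3
```

Because genotypes are fixed at conception, the approach uses genetic variants as natural randomization, sidestepping much of the confounding that afflicts observational estimates.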
Abstract:
A new correlation scheme (leading to a special equilibrium called “soft” correlated equilibrium) is introduced for finite games. After randomization over the outcome space, players have the choice either to follow the recommendation of an umpire blindly or freely choose some other action except the one suggested. This scheme can lead to Pareto-better outcomes than the simple extension introduced by [Moulin, H., Vial, J.-P., 1978. Strategically zero-sum games: the class of games whose completely mixed equilibria cannot be improved upon. International Journal of Game Theory 7, 201–221]. The informational and interpretational aspects of soft correlated equilibria are also discussed in detail. The power of the generalization is illustrated in the prisoner's dilemma and a congestion game.
Abstract:
A correlation scheme (leading to a special equilibrium called “soft” correlated equilibrium) is applied for two-person finite games in extensive form with perfect information. Randomization by an umpire takes place over the leaves of the game tree. At every decision point players have the choice either to follow the recommendation of the umpire blindly or freely choose any other action except the one suggested. This scheme can lead to Pareto-improved outcomes of other correlated equilibria. Computational issues of maximizing a linear function over the set of soft correlated equilibria are considered and a linear-time algorithm in terms of the number of edges in the game tree is given for a special procedure called “subgame perfect optimization”.
Abstract:
Although calorie information at the point-of-purchase at fast food restaurants has been proposed as a method to decrease calorie choices and combat obesity, research results have been mixed. Much of the supportive research has weak methodology and is limited. There is a demonstrated need to develop better techniques to assist consumers in making lower calorie food choices. Eating at fast food restaurants has been positively associated with weight gain. The current study explored the possibility of adding exercise equivalents (EE; the physical activity required to burn off the calories in the food) alongside calorie information as a possible way to facilitate lower calorie choices at the point-of-choice in fast food restaurants. This three-group experimental study, in 18- to 34-year-old overweight and obese women, examines whether presenting caloric information in the form of EE at the point-of-choice at fast food restaurants leads to lower calorie food choices compared with presenting simple caloric information or no information at all. Methods. A randomized repeated measures experiment was conducted. Participants ordered a fast food meal from Burger King with menus that contained only the names of the food choices (Lunch 1). One week later (Lunch 2), participants were given one of three menus: no information, calorie information, or calorie information plus EE. Study participants included 62 college-aged students. Additionally, the study controlled for dietary restraint by blocking participants to the three groups before randomization. Results. A repeated measures analysis of variance was conducted. The study was not sufficiently powered: it was designed to detect large effect sizes, but only a small effect size of .026 was observed. No significant differences were found in the foods ordered among the menu conditions. Conclusion. Menu labeling alone might not be enough to reduce calories chosen at the point-of-choice at restaurants. Additional research is necessary to determine whether calorie information plus EE at the point-of-choice would lead to fewer calories chosen at a meal. Studies should also examine long-term, repeated exposure to determine the effectiveness of calorie and/or EE information at the point-of-choice at fast food restaurants.
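The blocked randomization described in the Methods (stratifying by dietary restraint before assignment) can be sketched as follows; the block labels, arm names, and group sizes below are illustrative, not the study's actual data.

```python
import random

def block_randomize(participants, blocks, arms, seed=0):
    """Assign participants to arms separately within each block (here,
    dietary-restraint strata), keeping arm sizes balanced per block."""
    rng = random.Random(seed)
    assignment = {}
    for block in set(blocks.values()):
        members = [p for p in participants if blocks[p] == block]
        rng.shuffle(members)                 # random order within the block
        for i, p in enumerate(members):
            assignment[p] = arms[i % len(arms)]  # round-robin over arms
    return assignment

arms = ["no information", "calories", "calories + EE"]
participants = list(range(12))
blocks = {p: ("high restraint" if p < 6 else "low restraint")
          for p in participants}
print(block_randomize(participants, blocks, arms))
```

Blocking before randomization guarantees that each level of the blocking variable contributes equally to every arm, so the nuisance variable cannot confound the treatment comparison.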
Abstract:
In this study, the formation of stripe domains in permalloy (Ni80Fe20) thin films was investigated, mainly using magnetic force microscopy. Stripe domains are a known phenomenon that reduces the "softness" of a magnetic material and introduces a significant source of noise when the material is used in perpendicular magnetic media. For the particular setup described in this report, the critical thickness for stripe domain initiation depended on the sputtering rate, the substrate temperature, and the film thickness. Beyond stripe domain formation, the periodicity of the highly ordered stripe domains increased with increasing film thickness. Above a particular thickness, the stripe domain periodicity decreased along with magnetic domain randomization. The results led to the inference that the perpendicular anisotropy responsible for the formation of stripe domains originated mainly from magnetostriction.