993 results for Generalization Problem
Abstract:
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains. © Institute of Mathematical Statistics, 2010.
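To make the mechanism concrete, here is a minimal numerical sketch (not code from the paper) of the automatic multiplicity correction in the standard setup: variable-inclusion indicators that are i.i.d. Bernoulli(p), with a Uniform(0, 1) prior on p. The marginal prior probability of a specific model with k of m variables is then 1/((m+1)·C(m,k)), so the prior odds of the null model against any k-variable model grow combinatorially with the number of candidate variables, whereas a fixed p = 1/2 yields no such penalty:

```python
# Minimal sketch: prior odds of the null model vs. one specific k-variable
# model under (a) p ~ Uniform(0, 1) and (b) fixed p = 1/2. The growing odds
# in case (a) are the automatic multiplicity correction.
from math import comb

def prior_prob_uniform_p(k, m):
    """P(specific model with k of m variables) when p ~ Uniform(0, 1)."""
    return 1.0 / ((m + 1) * comb(m, k))

def prior_prob_fixed_p(k, m, p=0.5):
    """Same model probability when p is held fixed."""
    return p ** k * (1 - p) ** (m - k)

k = 5
for m in (10, 100, 1000):
    odds_u = prior_prob_uniform_p(0, m) / prior_prob_uniform_p(k, m)
    odds_f = prior_prob_fixed_p(0, m) / prior_prob_fixed_p(k, m)
    print(f"m={m:4d}: odds(null : {k}-var model) uniform-p={odds_u:.2e}, fixed-p={odds_f:.1f}")
```

As m grows, the same five variables need ever-stronger data support to enter the model; with fixed p the prior is indifferent, which is why the correction must come from treating p as unknown.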
Abstract:
BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, dropout rates, study duration, and statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, dropout rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated dropout rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
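As a quick worked check (a sketch using only the estimates quoted above, and assuming a 52-week year), the reported λ reproduces the stated one-year dropout rate:

```python
# Worked check of the reported exponential-decay dropout model:
# survival (non-dropout) rate S(t) = exp(-lambda * t), t in weeks.
from math import exp

lam = 0.0088                 # estimated rate per week (95% CI: 0.0076-0.0100)
t = 52                       # one year, in weeks
dropout = 1 - exp(-lam * t)
print(f"Estimated 1-year dropout: {dropout:.0%}")   # ~37%, matching the abstract
```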
Abstract:
Axisymmetric radiating and scattering structures whose rotational invariance is broken by non-axisymmetric excitations present an important class of problems in electromagnetics. For such problems, a cylindrical wave decomposition formalism can be used to efficiently obtain numerical solutions to the full-wave frequency-domain problem. Often, the far-field, or Fraunhofer, region is of particular interest in scattering cross-section and radiation pattern calculations; yet it is usually impractical to compute full-wave solutions for this region. Here, we propose a generalization of the Stratton-Chu far-field integral adapted to the 2.5D formalism. The integration over a closed, axially symmetric surface is analytically reduced to a line integral on a meridional plane. We benchmark this computational technique by comparing it with analytical Mie solutions for a plasmonic nanoparticle, and apply it to the design of a three-dimensional polarization-insensitive cloak.
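For context, the cylindrical wave decomposition behind such 2.5D solvers is the standard azimuthal expansion (a textbook identity, not reproduced from the paper): every field component is written as a sum of harmonics in φ, so each mode m reduces to an independent two-dimensional problem on the meridional (ρ, z) plane, which is what makes the reduction of the far-field surface integral to a line integral natural.

```latex
% Azimuthal (cylindrical-wave) decomposition underlying 2.5D formulations:
% each harmonic m decouples into a 2D problem in (\rho, z).
\mathbf{E}(\rho,\varphi,z) \;=\; \sum_{m=-\infty}^{\infty} \mathbf{E}_m(\rho,z)\, e^{i m \varphi}
```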
Abstract:
This paper proposes that atherosclerosis is initiated by a signaling event that deposits calcium hydroxyapatite (Ca-HAP). This event is preceded by a loss of mechanical structure in the arterial wall. After Ca-HAP has been deposited, it is unlikely to be reabsorbed, because its solubility product constant (Ksp) is very small and the large stores of Ca²⁺ and PO₄³⁻ in the bones oppose any attempt to dissolve Ca-HAP by decreasing the common ions. The hydroxide ion (OH⁻) of Ca-HAP can be displaced in nature by fluoride (F⁻) and carbonate (CO₃²⁻) ions, and it is proposed that anions associated with cholesterol ester hydrolysis and, in very small quantities, the enolate of 7-ketocholesterol could also displace the OH⁻ of Ca-HAP, forming an ionic bond. The free energy of hydration of Ca-HAP at 310 K is most likely negative, and the ionic radii of the anions associated with the hydrolysis of cholesterol ester are compatible with the substitution. Furthermore, examination of the pathology of atherosclerotic lesions by Raman and NMR spectroscopy and confocal microscopy supports deposition of Ca-HAP associated with cholesterol. Investigating the affinity of intermediates of cholesterol hydrolysis for Ca-HAP, compared with lipoproteins such as HDL, LDL, and VLDL, using isothermal titration calorimetry could add proof of this concept and may lead to the development of a new class of medications targeted at the deposition of cholesterol within Ca-HAP. The treatment of acute ischemic events arising from atherosclerosis with denitrogenation and oxygenation is also discussed. © the author(s), publisher and licensee Libertas Academica Ltd.
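To illustrate the common-ion argument invoked above, here is a minimal sketch for a generic sparingly soluble 1:1 salt MX with solubility product Ksp = [M][X]; all numbers are hypothetical, and hydroxyapatite's actual dissolution equilibrium is considerably more complex:

```python
# Common-ion effect sketch for a generic 1:1 salt MX, Ksp = [M][X].
# Raising the background concentration of M suppresses the solubility s,
# which solves (s + background) * s = Ksp. Numbers are hypothetical.
from math import sqrt

def solubility(ksp, background=0.0):
    """Equilibrium solubility s of MX given a background [M]."""
    return (-background + sqrt(background ** 2 + 4 * ksp)) / 2

ksp = 1e-10                              # hypothetical solubility product
for c in (0.0, 1e-4, 1e-2):
    print(f"[common ion] = {c:.0e} M -> solubility = {solubility(ksp, c):.2e} M")
```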
Abstract:
Fear conditioning is an established model for investigating posttraumatic stress disorder (PTSD). However, symptom triggers may only vaguely resemble the initial traumatic event, differing along a variety of sensory and affective dimensions. We extended the fear-conditioning model to assess the effects of conditioned-fear generalization on fear-processing neurocircuitry in PTSD. Military veterans (n=67) consisting of PTSD (n=32) and trauma-exposed comparison (n=35) groups underwent functional magnetic resonance imaging during fear conditioning to a low fear-expressing face while a neutral face was explicitly unreinforced. Stimuli that varied along a neutral-to-fearful continuum were presented before conditioning to assess baseline responses, and after conditioning to assess experience-dependent changes in neural activity. Compared with trauma-exposed controls, PTSD patients exhibited greater post-study memory distortion of the fear-conditioned stimulus toward the stimulus expressing the highest fear intensity. PTSD patients exhibited biased neural activation toward high-intensity stimuli in the fusiform gyrus (P<0.02), insula (P<0.001), primary visual cortex (P<0.05), locus coeruleus (P<0.04), thalamus (P<0.01), and, at the trend level, the inferior frontal gyrus (P=0.07). All regions except the fusiform gyrus were moderated by childhood trauma. Amygdala-calcarine (P=0.01) and amygdala-thalamus (P=0.06) functional connectivity selectively increased in PTSD patients for high-intensity stimuli after conditioning. In contrast, amygdala-ventromedial prefrontal cortex (P=0.04) connectivity selectively increased in trauma-exposed controls compared with PTSD patients for low-intensity stimuli after conditioning, representing safety learning. In summary, fear generalization in PTSD is biased toward stimuli of higher emotional intensity than the original conditioned-fear stimulus. Functional brain differences provide a putative neurobiological model for fear generalization whereby PTSD symptoms are triggered by threat cues that merely resemble the index trauma.
Abstract:
In this paper we present a procedure for describing strategies in problems that can be solved using inductive reasoning. This procedure is based on aspects of the analysis of the specific subject matter, specifically on the elements, the representation systems and the transformations involved. We show an example of how we used this procedure for the tiles problem. Finally, we present some results and conclusions.
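As an illustration of the kind of inductive generalization such strategies address (a hypothetical variant, since the paper's exact tiles problem is not reproduced here), consider counting the tiles that border an n×n square patio: small cases suggest the rule 4n + 4, which direct counting confirms.

```python
# Hypothetical tiles problem: how many unit tiles surround an n x n square?
# Inductive route: count small cases, conjecture a rule, then verify it.

def border_tiles_by_counting(n):
    """Count the border cells of the (n + 2) x (n + 2) grid around n x n."""
    return sum(1 for i in range(n + 2) for j in range(n + 2)
               if i in (0, n + 1) or j in (0, n + 1))

def border_tiles_rule(n):
    """Conjectured generalization: four sides of length n plus four corners."""
    return 4 * n + 4

for n in range(1, 6):
    assert border_tiles_by_counting(n) == border_tiles_rule(n)
    print(n, border_tiles_rule(n))       # 8, 12, 16, 20, 24
```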
Abstract:
Pattern generalization is considered one of the prominent routes for introducing students to algebra. However, not all generalizations are algebraic. In the use of pattern generalization as a route to algebra, we (teachers and educators) thus have to remain vigilant in order not to confound algebraic generalizations with other forms of dealing with the general. But how do we distinguish between algebraic and non-algebraic generalizations? On epistemological and semiotic grounds, in this article I suggest a characterization of algebraic generalizations. This characterization helps to bring about a typology of algebraic and arithmetic generalizations. The typology is illustrated with classroom examples.
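To make the distinction concrete (an illustrative sketch, not an example taken from the article), consider the sequence 3, 5, 7, 9, ...: extending it term by term ("add 2 each time") stays arithmetic, while naming the general term directly (f(n) = 2n + 1) is the algebraic move.

```python
# Illustrative contrast for the sequence 3, 5, 7, 9, ...
# Arithmetic generalization: extend the pattern step by step.
# Algebraic generalization: state a rule for the n-th term itself.

def term_by_term(n):
    """Arithmetic route: start at 3 and repeatedly add 2."""
    t = 3
    for _ in range(n - 1):
        t += 2
    return t

def closed_form(n):
    """Algebraic route: the n-th term as a direct expression."""
    return 2 * n + 1

assert all(term_by_term(n) == closed_form(n) for n in range(1, 20))
```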
Abstract:
In this paper we present different ways used by secondary students to generalize when they try to solve problems involving sequences. A total of 359 Spanish students solved generalization problems in a written test. These problems were posed through particular terms expressed in different representations. We present examples that illustrate different ways of achieving various types of generalization and how students express generalization. We identify the graphical representation of generalization as a useful tool for eliciting other ways of expressing generalization, and we analyze its connection with those other ways of expressing it.
Abstract:
In many practical situations, batching of similar jobs to avoid setups is performed while constructing a schedule. This paper addresses the problem of non-preemptively scheduling independent jobs in a two-machine flow shop with the objective of minimizing the makespan. Jobs are grouped into batches. A sequence-independent batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. Besides its practical interest, this problem is a direct generalization of the classical two-machine flow shop problem with no grouping of jobs, which can be solved optimally by Johnson's well-known algorithm. The problem under investigation is known to be NP-hard. We propose two O(n log n) time heuristic algorithms. The first heuristic, which creates a schedule with minimum total setup time by forcing all jobs in the same batch to be sequenced in adjacent positions, has a worst-case performance ratio of 3/2. By allowing each batch to be split into at most two sub-batches, a second heuristic is developed which has an improved worst-case performance ratio of 4/3. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
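Since the batching problem generalizes the classical two-machine flow shop, Johnson's rule (the well-known optimal algorithm mentioned above for the no-batching case) is worth recalling; a minimal sketch:

```python
# Johnson's rule for the classical two-machine flow shop (no batching):
# jobs with p1 <= p2 come first in increasing p1; the rest come last in
# decreasing p2. The resulting permutation minimizes the makespan.

def johnson(jobs):
    """jobs: list of (p1, p2) processing-time pairs. Returns an optimal order."""
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    last = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + last

def makespan(order):
    """Makespan of a permutation schedule on two machines."""
    c1 = c2 = 0
    for p1, p2 in order:
        c1 += p1                  # machine 1 finishes this job
        c2 = max(c2, c1) + p2     # machine 2 starts when both are ready
    return c2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]
seq = johnson(jobs)
print(seq, makespan(seq))         # optimal order and its makespan (24)
```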
Abstract:
The concept of 'nested methods' is adopted to solve the location-routeing problem. Unlike the sequential and iterative approaches, this method treats the routeing element as a sub-problem within the larger problem of location. Efficient techniques that embody this concept and use a neighbourhood structure inspired by computational geometry are presented. A simple version of tabu search is also embedded into our methods to improve the solutions further. Computational testing is carried out on five sets of problems with 400 customers and five levels of depot fixed costs, and the results obtained are encouraging.
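A schematic of the nested idea (an illustrative skeleton only; the paper's actual neighbourhood structures and tabu rules are not reproduced here): every candidate depot configuration is costed by solving the routing sub-problem it induces, so routing sits inside the location search rather than after it.

```python
# Skeleton of a nested location-routeing search: routing is evaluated as a
# sub-problem inside each location decision. route_cost() is a placeholder
# for any routing heuristic; the 1-D geometry is purely illustrative.

def route_cost(depots, customers):
    """Placeholder routing heuristic: serve each customer from its nearest depot."""
    return 2 * sum(min(abs(c - d) for d in depots) for c in customers)

def total_cost(depots, customers, fixed_cost=100):
    return fixed_cost * len(depots) + route_cost(depots, customers)

def nested_search(candidate_sites, customers):
    """Greedy sketch: open each additional site only if it lowers total cost."""
    depots = [candidate_sites[0]]
    cost = total_cost(depots, customers)
    for site in candidate_sites[1:]:
        trial = total_cost(depots + [site], customers)
        if trial < cost:
            depots, cost = depots + [site], trial
    return depots, cost

print(nested_search([10, 50, 90], list(range(0, 100, 7))))
```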
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with a makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705-731, 1998.
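For reference (a standard bound, not taken from the paper's proofs), the optimal open shop makespan on two machines is bounded below by each machine's total load and by the longest job, which is the usual yardstick against which ratios such as the 5/4 above are measured:

```latex
% Standard lower bound on the optimal two-machine open shop makespan,
% with p_{ij} the processing time of job j on machine i:
C_{\max}^{*} \;\ge\; \max\Big\{ \sum_{j} p_{1j},\; \sum_{j} p_{2j},\; \max_{j}\,\big(p_{1j}+p_{2j}\big) \Big\}
```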
Abstract:
In this paper the many-to-many location-routing problem is introduced, and its relationship to various problems in distribution management is emphasised. Useful mathematical formulations, which can easily be extended to cater for other related problems, are produced. Techniques for tackling this complex distribution problem are also outlined.