11 results for best estimate method
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
When we refer to exceptional students (from here on, this work will refer to gifted or talented students), we ask ourselves, among other questions: How is an exceptional student identified? How should they be diagnosed? Which educational method is best? To be able to answer these questions, we must first be clear about what an exceptional (gifted or talented) student is. With this study I would like to help broaden the information available to teachers and provide them with the motivating elements to improve educational practice, together with the organizational and methodological changes needed to attend to gifted or talented students within the context of diversity.
Abstract:
To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise-corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimate of the kinetic and thermodynamic parameters is -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational-energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher angular momentum polarization functions, f and g, for correctly describing metal d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis set superposition errors and shows faster convergence with basis-set size, but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.
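For reference, the counterpoise correction mentioned above follows the standard Boys-Bernardi scheme (this equation is supplied for context and is not quoted from the abstract): the corrected interaction energy between fragments A and B is

$$
\Delta E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB},
$$

where subscripts denote the system evaluated and superscripts the basis used, so each fragment energy is computed in the full dimer basis and the basis set superposition error (BSSE) cancels.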
Abstract:
A change in paradigm is needed in the prevention of toxic effects on the nervous system, moving from its present reliance solely on data from animal testing to a prediction model based mostly on in vitro toxicity testing and in silico modeling. According to the report published by the National Research Council (NRC) of the US National Academies of Science, high-throughput in vitro tests will provide evidence for alterations in "toxicity pathways" as the best possible method of large-scale toxicity prediction. The challenges of implementing this proposal are enormous and leave much room for debate. While many efforts address the technical aspects of implementing the vision, many questions around it also need to be addressed. Is the overall strategy the only one to be pursued? How can we move from the current to the future paradigm? Will we ever be able to reliably model chronic and developmental neurotoxicity in vitro? This paper summarizes four presentations from a symposium at the International Neurotoxicology Conference held in Xi'an, China, in June 2011. A. Li reviewed the current guidelines for neurotoxicity and developmental neurotoxicity testing, and discussed the major challenges to realizing the NRC vision for toxicity testing. J. Llorens reviewed the biology of mammalian toxic avoidance in view of present knowledge on the physiology and molecular biology of the chemical senses, taste and smell. This background information supports the hypothesis that relating in vivo toxicity to chemical epitope descriptors that mimic the chemical encoding performed by the olfactory system may provide a path toward the long-term goal of complete in silico toxicity prediction. S. Ceccatelli reviewed the implementation of rodent and human neural stem cells (NSCs) as models for in vitro toxicity testing that measures parameters such as cell proliferation, differentiation and migration. These appear to be sensitive endpoints that can identify substances with developmental neurotoxic potential. C. Suñol reviewed the use of primary neuronal cultures in testing for neurotoxicity of environmental pollutants, including the study of the effects of persistent exposures and/or exposures in differentiating cells, which allow the recording of effects that can be extrapolated to human developmental neurotoxicity.
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that, for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel, plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
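As a rough illustration of the kind of data-driven selection discussed above, the Python sketch below scores candidate bandwidths for a Gaussian kernel estimate by an L1 distance computed over a bin partition of a held-out half-sample. It is only a crude surrogate for the paper's combinatorial selection rule, and all function names and the binning scheme here are ours:

import numpy as np

def kde(x_grid, data, h):
    # Gaussian kernel density estimate evaluated on a grid.
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def select_bandwidth(data, bandwidths, n_bins=30, seed=0):
    # Split the sample: build each candidate estimate on one half and score
    # it against the held-out half's histogram in L1 over the bins.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    train, test = data[idx[: len(data) // 2]], data[idx[len(data) // 2 :]]
    edges = np.linspace(data.min(), data.max(), n_bins + 1)
    test_freq = np.histogram(test, bins=edges)[0] / len(test)
    grid = np.linspace(data.min(), data.max(), 2000)
    dx = grid[1] - grid[0]
    best_h, best_score = None, np.inf
    for h in bandwidths:
        f = kde(grid, train, h)
        # probability mass the estimate assigns to each bin
        mass = np.array([f[(grid >= a) & (grid < b)].sum() * dx
                         for a, b in zip(edges[:-1], edges[1:])])
        score = np.abs(mass - test_freq).sum()   # L1 distance over the bins
        if score < best_score:
            best_h, best_score = h, score
    return best_h

sample = np.random.default_rng(1).normal(size=500)
print(select_bandwidth(sample, bandwidths=np.logspace(-1.5, 0.5, 20)))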
Abstract:
The aim of this paper is to suggest a method to find endogenously the points that group the individuals of a given distribution in k clusters, where k is endogenously determined. These points are the cut-points. Thus, we need to determine a partition of the N individuals into a number k of groups, in such a way that individuals in the same group are as alike as possible, but as distinct as possible from individuals in other groups. This method can be applied to endogenously identify k groups in income distributions: possible applications can be poverty
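The abstract does not spell out the algorithm, but one classical, closely related way to compute optimal cut-points in one dimension is Fisher-style dynamic programming, sketched below in Python. Note that, unlike the paper's method, k is supplied here rather than determined endogenously:

import numpy as np

def optimal_cutpoints(values, k):
    # Fisher-style dynamic programming: partition the sorted values into k
    # contiguous groups minimizing the within-group sum of squared deviations.
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    csum, csq = np.cumsum(x), np.cumsum(x**2)

    def sse(i, j):  # within-group SSE of x[i:j+1]
        s = csum[j] - (csum[i - 1] if i > 0 else 0.0)
        q = csq[j] - (csq[i - 1] if i > 0 else 0.0)
        return q - s * s / (j - i + 1)

    cost = np.full((k + 1, n), np.inf)
    back = np.zeros((k + 1, n), dtype=int)
    for j in range(n):
        cost[1, j] = sse(0, j)
    for g in range(2, k + 1):
        for j in range(g - 1, n):
            for i in range(g - 1, j + 1):
                c = cost[g - 1, i - 1] + sse(i, j)
                if c < cost[g, j]:
                    cost[g, j], back[g, j] = c, i
    # Recover the cut-points: the first value of each group after the first.
    cuts, j = [], n - 1
    for g in range(k, 1, -1):
        i = back[g, j]
        cuts.append(x[i])
        j = i - 1
    return sorted(cuts)

incomes = np.random.default_rng(2).lognormal(mean=9, sigma=0.6, size=300)
print(optimal_cutpoints(incomes, k=3))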
Abstract:
The two main alternative methods used to identify key sectors within the input-output approach, the Classical Multiplier method (CMM) and the Hypothetical Extraction method (HEM), are formally and empirically compared in this paper. Our findings indicate that the main distinction between the two approaches stems from the role of the internal effects. These internal effects are quantified under the CMM, while under the HEM only external impacts are considered. In our comparison we find, however, that CMM backward measures are more influenced by within-block effects than the forward indices proposed under this approach. The conclusions of this comparison allow us to develop a hybrid proposal that combines the two existing approaches. This hybrid model has the advantage of making it possible to distinguish and disaggregate external effects from those that are purely internal. The proposal is also of interest in terms of policy implications. Indeed, the hybrid approach may provide useful information for the design of "second best" stimulus policies that aim at a more balanced perspective between overall economy-wide impacts and their sectoral distribution.
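To make the contrast concrete, here is a small numpy sketch computing CMM backward multipliers (column sums of the Leontief inverse, internal effects included) and one common variant of the HEM output loss (the extracted sector's transactions removed). The 3-sector table is invented for illustration, and the HEM has several variants in the literature:

import numpy as np

# Hypothetical 3-sector technical-coefficient matrix and final demand.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.25],
              [0.05, 0.10, 0.10]])
f = np.array([100.0, 150.0, 80.0])
I = np.eye(3)
L = np.linalg.inv(I - A)                  # Leontief inverse
x = L @ f                                 # gross output

# Classical Multiplier method: column sums of the Leontief inverse give
# backward-linkage multipliers (internal, within-sector effects included).
backward = L.sum(axis=0)

# Hypothetical Extraction method: zero out sector j's sales and purchases
# and measure the economy-wide output loss (only external impacts remain).
extraction_loss = []
for j in range(3):
    A_ext = A.copy()
    A_ext[j, :] = 0.0                     # sector j sells nothing
    A_ext[:, j] = 0.0                     # sector j buys nothing
    x_ext = np.linalg.inv(I - A_ext) @ f
    extraction_loss.append(x.sum() - x_ext.sum())

print("CMM backward multipliers:", backward.round(3))
print("HEM output losses:       ", np.round(extraction_loss, 3))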
Abstract:
The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998).

Tirant lo Blanc, a chivalry book, is the main work in Catalan literature and was hailed as "the best book of its kind in the world" by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of this book was written between 1460 and 1465, but it was not printed until 1490.

There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one fourth of the book is by Galba (?-1490), written after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990).

Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383.

Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations.

Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of the estimation of a sudden change-point in those sequences. In the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures like the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
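As a toy illustration of the chi-square-distance idea used in Section 5, the following Python sketch computes each row's chi-square distance to the average profile and picks the split maximizing the separation between the two regimes. The paper instead fits gamma (and other) models, so this is only a simplified stand-in:

import numpy as np

def changepoint_by_chisq(counts):
    # counts: (n_chapters, n_categories) contingency table of ordered rows.
    # Compute each row's chi-square distance to the average profile, then
    # pick the split that best separates the distance sequence in two.
    profiles = counts / counts.sum(axis=1, keepdims=True)
    avg = counts.sum(axis=0) / counts.sum()
    d = ((profiles - avg) ** 2 / avg).sum(axis=1)   # chi-square distances
    n = len(d)
    best_t, best_gap = None, -np.inf
    for t in range(1, n):
        gap = abs(d[:t].mean() - d[t:].mean())
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t  # rows 0..best_t-1 vs rows best_t..n-1

# Synthetic example: the category profile shifts after row 60.
rng = np.random.default_rng(3)
seq = np.vstack([rng.multinomial(200, [0.50, 0.30, 0.20], size=60),
                 rng.multinomial(200, [0.35, 0.40, 0.25], size=40)])
print(changepoint_by_chisq(seq))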
Abstract:
We address the problem of scheduling a multi-station multiclass queueing network (MQNET) with server changeover times to minimize steady-state mean job holding costs. We present new lower bounds on the best achievable cost that emerge as the values of mathematical programming problems (linear, semidefinite, and convex) over relaxed formulations of the system's achievable performance region. The constraints on achievable performance defining these formulations are obtained by formulating the system's equilibrium relations. Our contributions include: (1) a flow conservation interpretation and closed formulae for the constraints previously derived by the potential function method; (2) new work decomposition laws for MQNETs; (3) new constraints (linear, convex, and semidefinite) on the performance region of first and second moments of queue lengths for MQNETs; (4) a fast bound for a MQNET with N customer classes computed in N steps; (5) two heuristic scheduling policies: a priority-index policy, and a policy extracted from the solution of a linear programming relaxation.
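The paper's priority-index policy is derived from its relaxations; as a generic stand-in that shows what an index policy looks like operationally, here is the classic c-mu rule in Python (serve the non-empty class with the largest holding cost times service rate), which is not the specific index proposed in the paper:

# Classic c-mu priority-index rule: at each decision epoch, serve the
# non-empty job class with the largest index c_k * mu_k.
def cmu_priority(queues, holding_costs, service_rates):
    # queues: dict mapping class -> number of waiting jobs
    candidates = [(holding_costs[k] * service_rates[k], k)
                  for k, n in queues.items() if n > 0]
    return max(candidates)[1] if candidates else None

queues = {"A": 3, "B": 0, "C": 5}
print(cmu_priority(queues, {"A": 2.0, "B": 5.0, "C": 1.0},
                   {"A": 1.0, "B": 0.8, "C": 1.5}))   # serves class "A"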
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
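Once a skeleton is in hand, the estimate itself is plain empirical risk minimization over that finite set; the Python sketch below shows this step. Building the data-dependent covering, the substantive part of the paper, is not reproduced here, and the toy skeleton is ours:

import numpy as np

def skeleton_erm(X, y, skeleton):
    # Minimize the empirical 0-1 risk over a finite "skeleton" of candidate
    # regression functions; classify by thresholding the posterior at 1/2.
    best_f, best_risk = None, np.inf
    for f in skeleton:
        pred = (f(X) >= 0.5).astype(int)
        risk = np.mean(pred != y)          # empirical 0-1 risk
        if risk < best_risk:
            best_f, best_risk = f, risk
    return best_f, best_risk

# Toy skeleton: a grid of 1-D logistic posterior models with shifted thresholds.
skeleton = [lambda x, t=t: 1.0 / (1.0 + np.exp(-(x - t)))
            for t in np.linspace(-2, 2, 21)]
rng = np.random.default_rng(4)
X = rng.normal(size=200)
y = (X + rng.normal(scale=0.5, size=200) > 0.3).astype(int)
f, r = skeleton_erm(X, y, skeleton)
print("empirical risk of selected skeleton element:", r)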
Abstract:
The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated, and consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time constant spectrum while preserving a convenient convolution form. This method is applied to the thermal response of a microstructure obtained by finite element analysis, as well as to the measured thermal response of a transistor array integrated circuit (IC) in an SMD package.
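In the spirit of the iterative deconvolution described above (though with a discretization, kernel handling, and stopping rule of our own choosing, not the paper's), the following Python sketch recovers a two-peak time constant spectrum from a synthetic transient in logarithmic time, using a multiplicative Richardson-Lucy-type update that preserves non-negativity:

import numpy as np

# Recover a time constant spectrum R(z), z = ln(t), from a synthetic signal
# built by convolving R with the fixed log-time kernel w(z) = exp(z - exp(z)).
z = np.linspace(-6.0, 6.0, 400)
dz = z[1] - z[0]
w = np.exp(z - np.exp(z))                  # kernel; integrates to 1 over z

# Synthetic "true" spectrum: two narrow peaks at ln(tau) = -1 and 2.
true_R = (np.exp(-0.5 * ((z + 1) / 0.1) ** 2)
          + 0.7 * np.exp(-0.5 * ((z - 2) / 0.1) ** 2))
signal = np.convolve(true_R, w, mode="same") * dz

R = np.ones_like(z)                        # flat non-negative initial guess
w_flip = w[::-1]
for _ in range(200):                       # fixed iteration count, for brevity
    est = np.convolve(R, w, mode="same") * dz
    ratio = signal / np.maximum(est, 1e-12)
    R *= np.convolve(ratio, w_flip, mode="same") * dz   # multiplicative update

print("dominant peak recovered near ln(tau) =", z[np.argmax(R)])  # expect ~ -1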
Abstract:
Egesta of a cave-dwelling mysid (Hemimysis speluncola Ledoyer, 1963) was studied in a submarine cave of the Medes Islands (NW Mediterranean) by in situ fecal pellet collection. Fecal pellet production and gut fullness of mysids during incubation experiments are used to estimate mysid egestion rates. Intrinsic factors related to the natural history of this species, such as population structure, density of mysids, daily rhythms and pellet decomposition rates, are tested for their influence on the egestion rate. The effects of methodological artifacts, such as the stress induced by both incubation and preservation procedures, are also studied. An average mysid egests about 2.5 pellets per day into the cave. The time of day is the main factor affecting egestion. The highest deposition rate occurs between 2 and 4 hours after sunrise, when about 38% of the total daily pellet production is egested. Fecal pellet morphology changes with mysid demographic class: immature mysids produce both slender and thick pellets, whereas mature mysids produce only thick pellets. Immature classes show higher percentages of full guts than mature ones. Mysid density in the incubators does not affect the results on gut fullness, but it causes a decrease in the number of pellets collected after incubation. Coprorhexia seems to be the only plausible process to explain this paradox. The incubation procedure does not increase the deposition rate significantly. Incubation time is critical because the half-life of fecal pellets is about 2.5 hours. Fixation with liquid nitrogen decreases gut fullness and also deposition rates. Higher values are obtained with 70% ethanol and 5% formalin solutions, which give very similar results for both gut fullness and pellet deposition rates. Nevertheless, ethanol is not suitable as a fixative because it increases the opacity of the body. Several suggestions are given in order to improve the reliability of further in situ experiments evaluating the egesta of Hemimysis speluncola in submarine caves.