952 results for Non-response model approach
Abstract:
The main subject of this thesis is the estimation of the variance of a statistic based on imputed survey data via the bootstrap (or the Cyrano method). Applying a bootstrap method designed for complete survey data (i.e., with no nonresponse) in the presence of imputed values, treating them as if they were true observations, can lead to an underestimation of the variance. In this context, Shao and Sitter (1996) introduced a bootstrap procedure in which the study variable and the response indicator are resampled together and the bootstrap nonrespondents are imputed in the same way as the original sample. The resulting bootstrap variance estimator is valid when the sampling fraction is small. In Chapter 1, we begin with a review of the existing bootstrap methods for (complete and imputed) survey data and present them in a unified framework for the first time in the literature. In Chapter 2, we introduce a new bootstrap procedure for variance estimation under the nonresponse model approach when a uniform nonresponse mechanism is assumed. Unlike the method of Shao and Sitter (1996), which requires the individual response indicator, it uses only information on the response rate: a bootstrap response indicator is generated for each bootstrap sample, leading to a bootstrap variance estimator that is valid even for non-negligible sampling fractions. In Chapter 3, we study pseudo-population bootstrap approaches and consider a more general class of nonresponse mechanisms. We develop two pseudo-population bootstrap procedures for estimating the variance of an imputed estimator under the nonresponse model approach and under the imputation model approach. These procedures are also valid even for non-negligible sampling fractions.
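The resample-and-re-impute idea is concrete enough to sketch. Below is a minimal illustration of the Shao-Sitter-style bootstrap under simplifying assumptions not taken from the thesis: simple random sampling, with-replacement resampling, uniform nonresponse, mean imputation, and the sample mean as the point estimator; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_impute(y, r):
    """Impute nonrespondents (r == 0) with the respondent mean."""
    y = y.copy()
    y[r == 0] = y[r == 1].mean()
    return y

def shao_sitter_variance(y, r, B=1000):
    """Bootstrap variance of the imputed mean: resample (y, r) pairs
    together, then re-impute each bootstrap sample the same way the
    original sample was treated (the Shao & Sitter, 1996, idea)."""
    n = len(y)
    estimates = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)   # with-replacement resample
        yb, rb = y[idx], r[idx]
        estimates[b] = mean_impute(yb, rb).mean()
    return estimates.var(ddof=1)

# toy data: 20% uniform nonresponse
y = rng.normal(10.0, 2.0, size=200)
r = rng.binomial(1, 0.8, size=200)
print(shao_sitter_variance(y, r))
```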
Abstract:
This paper proposes a novel demand response model using a fuzzy subtractive cluster approach. The model development provides support to domestic consumer decisions on controllable loads management, considering consumers' consumption needs and the appropriate load shape or rescheduling in order to achieve possible economic benefits. The model, based on the fuzzy subtractive clustering method, considers clusters of domestic consumption covering an adequate consumption range. An analysis of different scenarios is presented, considering available electric power and electric energy prices. Simulation results are presented and conclusions of the proposed demand response model are discussed.
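Subtractive clustering (Chiu's formulation) is the computational core named here, so a minimal sketch may help; the radii, the stopping ratio, and the toy data are assumptions for illustration, not values from the paper.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=0.75, eps=0.15):
    """Chiu-style subtractive clustering: each point's potential is a sum
    of Gaussian contributions from all points; centres are picked greedily
    and the potential around each chosen centre is subtracted."""
    alpha = 4.0 / ra**2
    beta = 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    P = np.exp(-alpha * d2).sum(axis=1)
    centers = []
    p_first = P.max()
    while True:
        i = P.argmax()
        if P[i] < eps * p_first:          # remaining potential too small: stop
            break
        centers.append(X[i])
        P = P - P[i] * np.exp(-beta * d2[i])  # suppress potentials near the centre
    return np.array(centers)

# toy demo: consumption profiles rescaled to [0, 1]
rng = np.random.default_rng(1)
X = rng.random((100, 2))
print(subtractive_clustering(X).shape)
```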
Abstract:
Master's degree in Forestry and Natural Resources Engineering - Instituto Superior de Agronomia - UL
Abstract:
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, where it shows the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific to each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data reveal.
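The paper's thresholds come from tolerance intervals built on mixed models; the mixed-model machinery is beyond a short example, but the tolerance-interval thresholding step can be sketched in simplified form. The normal approximation (Howe's formula), the coverage/confidence levels, and the toy log-ratios below are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

def tolerance_limits(x, coverage=0.95, conf=0.95):
    """Approximate two-sided normal tolerance interval: covers `coverage`
    of the population with confidence `conf` (Howe's approximation)."""
    n = len(x)
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - conf, n - 1)        # lower chi-square quantile
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
    return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

# toy sample: normalized probe log-ratios, one probe carrying a deletion
rng = np.random.default_rng(2)
ratios = rng.normal(0.0, 0.05, size=40)
ratios[7] = -0.45                                  # illustrative altered probe
lo, hi = tolerance_limits(ratios)
flagged = np.where((ratios < lo) | (ratios > hi))[0]
print(flagged)                                     # probe 7 falls outside the limits
```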
Abstract:
BACKGROUND: So far, none of the existing methods on Murray's law deal with the non-Newtonian behavior of blood flow, although the non-Newtonian approach to blood flow modelling looks more accurate. MODELING: In the present paper, Murray's law, which is applicable to an arterial bifurcation, is generalized to a non-Newtonian blood flow model (power-law model). When the vessel size reaches the capillary limitation, blood can be modeled using a non-Newtonian constitutive equation. Two different constraints are assumed in addition to the pumping power: the volume constraint or the surface constraint (related to the internal surface of the vessel). For the sake of generality, the relationships are given for an arbitrary number of daughter vessels. It is shown that for a cost function including the volume constraint, classical Murray's law remains valid (i.e., ΣR^c = const with c = 3 holds and is independent of n, the dimensionless index in the viscosity equation; R being the radius of the vessel). On the contrary, for a cost function including the surface constraint, different values of c may be calculated depending on the value of n. RESULTS: We find that c varies for blood from 2.42 to 3 depending on the constraint and the fluid properties. For the Newtonian model, the surface constraint leads to c = 2.5. The cost function (based on the surface constraint) can be related to entropy generation by dividing it by the temperature. CONCLUSION: It is demonstrated that the entropy generated in all the daughter vessels is greater than the entropy generated in the parent vessel. Furthermore, it is shown that the difference in entropy generation between the parent and daughter vessels is smaller for a non-Newtonian fluid than for a Newtonian fluid.
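Written out, the relation the abstract states in inline notation is the following; the subscript convention (parent radius R_0, daughters R_1 to R_N) is ours, not the paper's.

```latex
% Generalized Murray's law at a junction with N daughter vessels:
\[
  R_0^{\,c} \;=\; \sum_{k=1}^{N} R_k^{\,c}
\]
% c = 3 under the pumping-power + volume constraint, for any power-law
% index n; under the surface constraint the abstract reports c between
% 2.42 and 3 for blood, and c = 2.5 for a Newtonian fluid.
```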
Abstract:
In this article, a two-dimensional transient boundary element formulation based on the mass matrix approach is discussed. The implicit formulation of the method to deal with elastoplastic analysis is considered, as well as the way to deal with viscous damping effects. The time integration processes are based on the Newmark ρ and Houbolt methods, while the domain integrals for mass, elastoplastic and damping effects are carried out by the well-known cell approximation technique. The boundary element algebraic relations are also coupled with finite element frame relations to solve stiffened domains. Some examples that illustrate the accuracy and efficiency of the proposed formulation are also presented.
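The Newmark time step named here is standard enough to sketch for the linear case; the BEM/FEM coupling, elastoplasticity, and cell integration are omitted, and the parameter choice (average acceleration, beta = 1/4, gamma = 1/2) is an assumption.

```python
import numpy as np

def newmark(M, C, K, F, u0, v0, dt, beta=0.25, gamma=0.5):
    """Newmark stepping for M a + C v + K u = F(t), linear case.
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration rule."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, F[0] - C @ v - K @ u)          # consistent initial accel.
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    hist = [u.copy()]
    for i in range(1, F.shape[0]):
        rhs = (F[i]
               + M @ (u / (beta * dt**2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
                - (1 / (2 * beta) - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u.copy())
    return np.array(hist)

# single-DOF demo: lightly damped free vibration of a unit mass-spring system
M = np.eye(1); C = 0.1 * np.eye(1); K = 4 * np.eye(1)
F = np.zeros((200, 1))
print(newmark(M, C, K, F, np.array([1.0]), np.array([0.0]), dt=0.05)[-1])
```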
Abstract:
Response Surface Methodology (RSM) was applied to evaluate the chromatic features and sensory acceptance of emulsions that combine Soy Protein (SP) and red Guava Juice (GJ). The parameters analyzed were: instrumental color based on the coordinates a* (redness), b* (yellowness), L* (lightness), C* (chromaticity), h* (hue angle), visual color, acceptance, and appearance. Analysis of the results showed that GJ was responsible for the high measured values of red color, hue angle, chromaticity, acceptance, and visual color, whereas SP was the variable that increased the yellowness intensity of the assays. The redness (R²adj = 74.86%, p < 0.01) and hue angle (R²adj = 80.96%, p < 0.01) were related to the independent variables by linear models, while the sensory data (color and acceptance) could not be modeled due to high variability. The models of yellowness, lightness, and chromaticity did not present lack of fit but had adjusted determination coefficients below 70%. Nevertheless, the linear correlations between sensory and instrumental data were not significant (p > 0.05) and low Pearson coefficients were obtained. The results showed that RSM is a useful tool to develop soy-based emulsions and to model some of their chromatic features.
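The linear models relating redness and hue angle to the SP and GJ levels are ordinary first-order response surfaces; a sketch of fitting one by least squares follows. The coded design points and response values are fabricated for illustration and are not the paper's data.

```python
import numpy as np

# illustrative 2^2 factorial design with centre points (coded units)
SP = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
GJ = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 0.0])
a_star = np.array([3.1, 8.4, 2.7, 7.9, 5.5, 5.3, 5.6])  # made-up redness values

# first-order response surface: a* = b0 + b1*SP + b2*GJ
X = np.column_stack([np.ones(len(SP)), SP, GJ])
coef, *_ = np.linalg.lstsq(X, a_star, rcond=None)
print(dict(zip(["b0", "bSP", "bGJ"], coef.round(2))))
```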
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which method is most successful depends on the region considered, the GCM data used and the prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different from the regions identified as potentially predictable from variance-explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far North Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
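Of the methods named, the Constructed Analogue is the easiest to sketch: the current anomaly field is expressed as a least-squares combination of past states, and the same weights are applied to those states' evolved fields to form the forecast. The ridge regularization and the synthetic fields below are assumptions for illustration.

```python
import numpy as np

def constructed_analogue_forecast(library, library_future, state, ridge=1e-3):
    """Constructed analogue: regress the current anomaly field onto a library
    of past states, then apply the same weights to those states' futures.
    `library` is (n_grid, n_past); `library_future` holds the same columns
    shifted forward by the forecast lead."""
    G = library.T @ library + ridge * np.eye(library.shape[1])
    alpha = np.linalg.solve(G, library.T @ state)   # ridge-regularized weights
    return library_future @ alpha

# toy demo with synthetic fields
rng = np.random.default_rng(3)
lib = rng.normal(size=(500, 40))        # 500 grid points, 40 past states
lib_fut = np.roll(lib, 1, axis=1)       # stand-in for "same states, one lead later"
x_now = lib[:, 5] + 0.1 * rng.normal(size=500)
print(constructed_analogue_forecast(lib, lib_fut, x_now)[:3])
```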
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary parts of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
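The central spectral identity stated in prose can be written compactly; the notation (Φ for the observable, ε² for the noise variance, χ_Φ for the susceptibility) is ours, not the paper's.

```latex
% Excess power spectrum under white-noise forcing of variance \varepsilon^2
% with the same spatial pattern as the deterministic forcing:
\[
  \mathbb{E}\!\left[S^{\varepsilon}_{\Phi}(\omega)\right]
  - S^{0}_{\Phi}(\omega)
  \;=\; \varepsilon^{2}\,\bigl|\chi_{\Phi}(\omega)\bigr|^{2} \;\ge\; 0 ,
\]
% where \chi_{\Phi}(\omega) is the linear susceptibility of the unperturbed
% system; non-negativity gives the "spectrum is larger at all frequencies" claim.
```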
Abstract:
Practical Bayesian inference depends upon detailed examination of the posterior distribution. When the prior and likelihood are conjugate, this is easily carried out; however, in general, one must resort to numerical approximation. In this paper, our aim is to apply the Bayesian paradigm, using MAPLE, to a very special data collection procedure known as the randomized-response technique. This allows researchers to obtain sensitive information while guaranteeing privacy to respondents, and is intended to reduce false responses to sensitive questions. Exact methods and approximations are compared in terms of accuracy as well as computational effort.
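The paper's computations are done in MAPLE; as a language-agnostic illustration, here is a grid approximation of the posterior under Warner's randomized-response design with a uniform prior. The design probability p and the response counts are illustrative assumptions.

```python
import numpy as np

def warner_posterior(n_yes, n, p=0.7, grid=2001):
    """Posterior over the sensitive proportion pi under Warner's
    randomized-response design with a uniform prior, on a grid.
    P(yes) = p*pi + (1-p)*(1-pi); the likelihood is binomial in that rate."""
    pi = np.linspace(0.0, 1.0, grid)
    rate = p * pi + (1 - p) * (1 - pi)          # bounded away from 0 and 1
    log_post = n_yes * np.log(rate) + (n - n_yes) * np.log(1 - rate)
    post = np.exp(log_post - log_post.max())
    post /= post.sum() * (pi[1] - pi[0])        # normalize on the grid
    return pi, post

pi, post = warner_posterior(n_yes=62, n=150)
print(pi[np.argmax(post)])   # posterior mode of the sensitive proportion
```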
Abstract:
BACKGROUND: The interaction of sevoflurane and opioids can be described by response surface modeling using the hierarchical model. We expanded this for the combined administration of sevoflurane, opioids, and 66 vol.% nitrous oxide (N2O), using historical data on the motor and hemodynamic responsiveness to incision, the minimum alveolar concentration, and the minimum alveolar concentration to block autonomic reflexes to nociceptive stimuli, respectively. METHODS: Four potential actions of 66 vol.% N2O were postulated: (1) N2O is equivalent to A ng/ml of fentanyl (additive); (2) N2O reduces the C50 of fentanyl by factor B; (3) N2O is equivalent to X vol.% of sevoflurane (additive); (4) N2O reduces the C50 of sevoflurane by factor Y. These four actions, and all combinations, were fitted on the data using NONMEM (version VI, Icon Development Solutions, Ellicott City, MD), assuming identical interaction parameters (A, B, X, Y) for movement and sympathetic responses. RESULTS: Sixty-six volume percent nitrous oxide evokes an additive effect corresponding to 0.27 ng/ml fentanyl (A) together with an additive effect corresponding to 0.54 vol.% sevoflurane (X). Parameters B and Y did not improve the fit. CONCLUSION: The effect of nitrous oxide can be incorporated into the hierarchical interaction model with a simple extension. The model can be used to predict the probability of movement and sympathetic responses during sevoflurane anesthesia, taking into account interactions with opioids and 66 vol.% N2O.
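The fitted extension amounts to two additive concentration shifts, which is easy to show; the hierarchical sigmoid applied afterwards is only gestured at with made-up C50 and slope values, so treat the second function as a shape illustration, not the published model.

```python
def effective_concentrations(fentanyl_ngml, sevo_volpct, n2o_66pct=True):
    """Additive N2O extension reported in the abstract: 66 vol.% N2O acts like
    0.27 ng/ml fentanyl (A) plus 0.54 vol.% sevoflurane (X)."""
    if n2o_66pct:
        fentanyl_ngml += 0.27
        sevo_volpct += 0.54
    return fentanyl_ngml, sevo_volpct

def prob_no_movement(fent, sevo, c50_fent=1.0, c50_sevo=2.0, gamma=4.0):
    """Illustrative hierarchical response surface: the opioid attenuates the
    stimulus, lowering the sevoflurane requirement (parameter values made up)."""
    postopioid = 1.0 / (1.0 + fent / c50_fent)   # opioid pre-processing stage
    ce50 = c50_sevo * postopioid                 # reduced sevo requirement
    return sevo**gamma / (ce50**gamma + sevo**gamma)

f, s = effective_concentrations(1.0, 1.2)
print(round(prob_no_movement(f, s), 3))
```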
Abstract:
Nitrogen sputtering yields as high as 10^4 atoms/ion are obtained by irradiating N-rich Cu3N films (N concentration: 33 ± 2 at.%) with Cu ions at energies in the range 10–42 MeV. The kinetics of N sputtering as a function of ion fluence is determined at several energies (stopping powers) for films deposited on both glass and silicon substrates. The kinetic curves show that the amount of nitrogen released strongly increases with rising irradiation fluence until reaching a saturation level at a low remaining nitrogen fraction (5–10%), beyond which no further nitrogen reduction is observed. The sputtering rate for nitrogen depletion is found to be independent of the substrate and to increase linearly with electronic stopping power (Se). A stopping power threshold (Sth) of ≈3.5 keV/nm for nitrogen depletion has been estimated by extrapolation of the data. The experimental kinetic data have been analyzed within a bulk molecular recombination model. The microscopic mechanisms of the nitrogen depletion process are discussed in terms of a non-radiative exciton decay model. In particular, the estimated threshold is related to the minimum exciton density required to achieve efficient sputtering rates.
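Saturating depletion kinetics of this kind are often summarized by a first-order decay toward a residual fraction; the sketch below fits such a law to synthetic data. The functional form, the cross-section parameter, and all numbers are illustrative assumptions, not the paper's recombination model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def depletion(phi, n_sat, sigma):
    """First-order depletion toward a saturation level: the remaining nitrogen
    fraction decays exponentially with ion fluence phi (cross-section sigma)."""
    return n_sat + (1.0 - n_sat) * np.exp(-sigma * phi)

# synthetic kinetic curve (fluence in arbitrary units; values are illustrative)
phi = np.linspace(0, 10, 25)
rng = np.random.default_rng(4)
frac = depletion(phi, 0.07, 0.6) + rng.normal(0, 0.01, phi.size)

(n_sat, sigma), _ = curve_fit(depletion, phi, frac, p0=(0.1, 0.5))
print(f"saturation fraction ~ {n_sat:.2f}, effective cross-section ~ {sigma:.2f}")
```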