943 results for: Approximate Bayesian computation, Posterior distribution, Quantile distribution, Response time data


Relevance: 100.00%

Abstract:

The combination of magnetic nanoparticles (NPs) with temperature-sensitive polymers leads to new composite materials with interesting properties that can be exploited in many ways. Possible fields of application include magnetic separation, selective drug release, and the construction of sensors and actuators. Hydrogels, for example, can serve as the polymer component. Since swelling is a diffusion-controlled process, the speed with which the degree of swelling responds to external stimuli can be increased by reducing the hydrogel volume. In this work, a hydrogel crosslinkable by ultraviolet light was prepared from N-isopropylacrylamide, methacrylic acid, and the crosslinker 4-benzoylphenyl methacrylate (PNIPAAm hydrogel) and combined with magnetic nanoparticles of magnetite (Fe3O4). The temperature and pH dependence of the degree of swelling was investigated with a view to application in nanomechanical cantilever sensors (NCS). The system was further characterized by surface plasmon resonance and optical waveguide mode spectroscopy (SPR/OWS). The values obtained for the pKa and the lower critical solution temperature (LCST) agreed with known literature values. It was shown that stronger crosslinking leads to a lower LCST. The NCS results also indicated a skin effect during heating of more highly crosslinked polymers. The magnetite nanoparticles were synthesized from iron(II) acetylacetonate via a high-temperature reaction. By varying the reaction temperature, the size of the nanoparticles could be tuned between 3.5 and 20 nm with a size distribution of 0.5-2.5 nm. Suitable surface functionalization stabilized the particles in water, following two strategies: functionalization with a silica shell, and the use of citric acid as a surfactant. Water stability is desirable above all for biological applications. The magnetic particles were characterized by transmission electron microscopy (TEM) and superconducting quantum interference device (SQUID) measurements; a size dependence of the magnetic properties and superparamagnetic behavior were observed. In addition, the heat generation of the magnetic nanoparticles in an AC magnetic field was investigated. The two components were combined into a ferrogel by mixing benzophenone-functionalized magnetic nanoparticles with the polymer. Thin films were produced by spin coating and investigated with respect to their behavior in a magnetic field; a slight plastic behavior was observed. The experimental results were then compared with theoretically calculated expectations and related to the differing values reported for three-dimensional ferrogels.

Relevance: 100.00%

Abstract:

The ability to make scientific findings reproducible is increasingly important in areas where substantive results are the product of complex statistical computations. Reproducibility can allow others to verify the published findings and conduct alternate analyses of the same data. A question that arises naturally is: how can one conduct and distribute reproducible research? This question is relevant from the point of view of both authors who want to make their research reproducible and readers who want to reproduce relevant findings reported in the scientific literature. We present a framework in which reproducible research can be conducted and distributed via cached computations and describe specific tools for both authors and readers. As a prototype implementation we introduce three software packages written in the R language. The cacheSweave and stashR packages together provide tools for caching computational results in a key-value style database which can be published to a public repository for readers to download. The SRPM package provides tools for generating and interacting with "shared reproducibility packages" (SRPs), which can facilitate the distribution of the data and code. As a case study we demonstrate the use of the toolkit on a national study of air pollution exposure and mortality.
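
The caching idea translates naturally into other languages. Below is a minimal Python sketch of the key-value caching scheme, purely illustrative and not the actual cacheSweave/stashR API: each result is stored under a hash of the code that produced it, so an author can publish the database and a reader can replay results without recomputing.

```python
import hashlib
import shelve

# Minimal sketch of cached computation for reproducibility (illustrative,
# not the cacheSweave/stashR API): results are keyed by a hash of the
# source expression that produced them.
def cached_eval(expr_src, env=None, db_path="analysis-cache"):
    """Evaluate expr_src, reusing a cached result when one exists."""
    key = hashlib.sha256(expr_src.encode()).hexdigest()
    with shelve.open(db_path) as db:
        if key not in db:                 # author path: compute and cache
            db[key] = eval(expr_src, env or {})
        return db[key]                    # reader path: fetch cached value

# The same call either computes the result or replays the cached copy.
print(cached_eval("sum(x ** 2 for x in range(10))"))
```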

Relevance: 100.00%

Abstract:

We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
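
As a concrete illustration of the probabilistic bisection policy the paper generalizes, here is a minimal Python sketch on a discrete grid; the grid size, noise level, and number of questions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Probabilistic bisection with noisy answers: ask whether the target lies
# below the posterior median, then apply a Bayes update with crossover
# probability eps. Grid size, eps, and target are illustrative.
rng = np.random.default_rng(0)
n, eps, target = 1000, 0.2, 637
post = np.full(n, 1.0 / n)                 # prior on the target's location

for _ in range(60):
    cdf = np.cumsum(post)
    m = int(np.searchsorted(cdf, 0.5))     # posterior median splits the mass
    truth = target <= m                    # oracle: is the target in {0,...,m}?
    ans = truth if rng.random() > eps else not truth   # answer flipped w.p. eps
    lik = np.where(np.arange(n) <= m,
                   1 - eps if ans else eps,
                   eps if ans else 1 - eps)
    post = post * lik
    post /= post.sum()                     # Bayes update of the posterior

print("MAP estimate:", int(post.argmax()))
```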

Relevance: 100.00%

Abstract:

This cross-sectional study used a qualitative and quantitative research design to review health policy decisions, their practice, and their implications during the 2009 H1N1 influenza pandemic in the United States and globally. The "Future Pandemic Influenza Control (FPIC) related Strategic Management Plan" was developed by incorporating the U.S. Homeland Security Council's "National Strategy for Pandemic Influenza (2005)" and the Canadian Pandemic Influenza Committee's "The Canadian Pandemic Influenza Plan for the Health Sector (2006)", for use by public health agencies in the United States and globally. A "global influenza experts' survey" was designed and administered via email, through the Survey Monkey system, to experts on the 2009 H1N1 influenza pandemic as the study respondents. The effectiveness of the plan was confirmed, and the questionnaire approach was validated as convenient; the quality of the questions gave respondents an efficient opportunity to evaluate the effectiveness of predefined strategies/interventions for future pandemic influenza control.

The quantitative analysis covered responses to the Likert-scale questions about predefined strategies/interventions addressing five strategic issues in pandemic influenza control. For the first strategic issue, influenza prevention and pre-pandemic planning, the confirmed effectiveness (agreement) was 87.5% for strategy 1a, 91.7% for strategy 1b, and 83.3% for strategy 1c; the assessed priority order was 1b (high) > 1a (medium) > 1c (low), based on the available resources of developing and developed countries. For the second strategic issue, preparedness and communication, agreement was 95.6% for strategy 2a, 82.6% for 2b, 91.3% for 2c, and 87.0% for 2d; the priority order was 2a (highest) > 2c (high) > 2d (medium) > 2b (low). For the third strategic issue, surveillance and detection, agreement was 90.9% for strategy 3a and 77.3% for 3b; the priority order was 3a (high) > 3b (medium/low). For the fourth strategic issue, response and containment, agreement was 63.6% for strategy 4a, 81.8% for 4b, 86.3% for 4c, and 86.4% for 4d; the priority order was 4d (highest) > 4c (high) > 4b (medium) > 4a (low). For the fifth strategic issue, recovery and post-pandemic planning, agreement was 68.2% for strategy 5a, 36.3% for 5b, and 40.9% for 5c; the priority order was 5a (high) > 5c (medium) > 5b (low).

The qualitative analysis of responses to the open-ended questions was performed by means of thematic content analysis. The following recurrent themes were identified for the future implementation of the predefined strategies addressing the five strategic issues of the FPIC-related Strategic Management Plan: (1) pre-pandemic influenza prevention, (2) seasonal influenza control, (3) cost effectiveness of non-pharmaceutical interventions (NPI), (4) raising global public awareness, (5) global influenza vaccination campaigns, (6) priority for high-risk populations, (7) prompt accessibility and distribution of influenza vaccines and antiviral drugs, (8) the vital role of the private sector, (9) school-based influenza containment, (10) efficient global risk communication, (11) global research collaboration, (12) the critical role of global public health organizations, (13) global syndromic surveillance and surge capacity, and (14) post-pandemic recovery and lessons learned. Future implementation of the strategies with confirmed effectiveness aims primarily to reduce the overall response time across early detection, strategy (intervention) formulation, and implementation, and ultimately to ensure the following health outcomes: (a) reduced influenza transmission, (b) prompt and effective influenza treatment and control, and (c) reduced influenza-related morbidity and mortality.

Relevance: 100.00%

Abstract:

Electronic spray controllers aim to minimize the variation of the input application rates in the field. They are part of a control system and compensate for variations in the sprayer's travel speed during operation. Several types of electronic spray controllers are available on the market, and one way to select the most efficient one under the same conditions, that is, within the same control system, is to quantify the system's response time for each specific controller. The objective of this work was to estimate the response times to speed changes of an electronic spraying system via nonlinear regression models obtained as sums of linear regressions weighted by cumulative distribution functions. The data were obtained at the Laboratório de Tecnologia de Aplicação of the Departamento de Engenharia de Biossistemas, Escola Superior de Agricultura "Luiz de Queiroz", Universidade de São Paulo, in Piracicaba, São Paulo, Brazil. The models used were the logistic and Gompertz models, which result from a weighted sum of two constant linear regressions with weights given by the logistic and Gumbel cumulative distribution functions, respectively. Reparameterizations were proposed to include the control system's response time in the models, with the aim of improving their interpretation and statistical inference. A biphasic nonlinear regression model was also proposed, resulting from a weighted sum of constant linear regressions with weights given by the exponential hyperbolic-sine Cauchy cumulative distribution function. A simulation study using the Monte Carlo methodology was carried out to evaluate the maximum likelihood estimates of the model parameters.
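
To make the model family concrete, the following Python sketch fits the logistic variant (two constant regressions blended by a logistic CDF) to simulated data; the parameter names and the 1%-to-99% response-time summary are illustrative choices, not the thesis's reparameterization.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import logistic

# Two constant regressions (flow before and after a speed change) blended
# by a logistic CDF; theta1, theta2, mu, sigma are illustrative names.
def switch_model(t, theta1, theta2, mu, sigma):
    w = logistic.cdf((t - mu) / sigma)      # weight rises from 0 to 1
    return (1 - w) * theta1 + w * theta2

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
y = switch_model(t, 2.0, 3.5, 4.0, 0.3) + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(switch_model, t, y, p0=[1.0, 1.0, 5.0, 1.0])
theta1, theta2, mu, sigma = popt
# One possible response-time summary: time for the transition to go from
# 1% to 99% complete under the fitted logistic weight.
resp_time = sigma * (logistic.ppf(0.99) - logistic.ppf(0.01))
print(f"transition centre {mu:.2f}, response time {resp_time:.2f}")
```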

Relevance: 100.00%

Abstract:

The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
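
The imputation step can be illustrated with a deliberately stripped-down model: a single categorical response with a Dirichlet prior and no covariates, so that each Gibbs iteration alternates between imputing the missing responses and redrawing the category probabilities. The Python sketch below is an assumption-laden toy, not the paper's hierarchical multinomial model with wave and subject effects.

```python
import numpy as np

# Toy Gibbs sampler with missing-data imputation: a categorical response
# (e.g. employment status in {0,1,2}) with some entries missing. Each
# iteration (i) draws the missing responses from the current category
# probabilities and (ii) redraws the probabilities from their Dirichlet
# posterior. Sample size, prior, and missingness rate are illustrative.
rng = np.random.default_rng(2)
y = rng.choice(3, size=200, p=[0.5, 0.3, 0.2]).astype(float)
miss = rng.random(200) < 0.15
y[miss] = np.nan                            # mark missing responses

alpha = np.ones(3)                          # Dirichlet(1,1,1) prior
p = np.full(3, 1 / 3)
draws = []
for it in range(2000):
    y[miss] = rng.choice(3, size=miss.sum(), p=p)   # impute missing values
    counts = np.bincount(y.astype(int), minlength=3)
    p = rng.dirichlet(alpha + counts)               # update category probs
    if it >= 500:                                   # discard burn-in
        draws.append(p)
print("posterior mean probabilities:", np.mean(draws, axis=0).round(3))
```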

Relevance: 100.00%

Abstract:

In many online applications, we need to maintain quantile statistics for a sliding window on a data stream. In its natural form, a sliding window covers the most recent N data items. In this paper, we study the problem of estimating quantiles over other types of sliding windows. We present a uniform framework to process quantile queries for time-constrained and filter-based sliding windows. Our algorithm makes one pass over the data stream and maintains an ε-approximate summary. It uses O((1/ε²) log²(εN)) space, where N is the number of data items in the window. We extend this framework to process generalized constrained sliding-window queries and prove that our technique is applicable to flexible window settings. Our performance study indicates that the space required in practice is much less than the theoretical bound and that the algorithm supports high-speed data streams.
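
For contrast with the paper's space-efficient summary, a naive time-constrained window quantile tracker is easy to sketch: it stores every in-window item, which is exactly the cost the ε-approximate summary avoids. The class below is illustrative Python, not the paper's algorithm.

```python
import bisect
import time
from collections import deque

# Naive time-constrained sliding-window quantile tracker: stores all
# items in the window, whereas the paper's one-pass summary answers
# epsilon-approximate queries in O((1/eps^2) log^2(eps N)) space.
class WindowQuantile:
    def __init__(self, horizon_s):
        self.horizon = horizon_s
        self.items = deque()                # (timestamp, value), arrival order
        self.sorted_vals = []               # same values, kept sorted

    def insert(self, value, now=None):
        now = time.time() if now is None else now
        self.items.append((now, value))
        bisect.insort(self.sorted_vals, value)
        while self.items and self.items[0][0] < now - self.horizon:
            _, old = self.items.popleft()   # expire items older than window
            self.sorted_vals.pop(bisect.bisect_left(self.sorted_vals, old))

    def quantile(self, phi):
        idx = min(int(phi * len(self.sorted_vals)), len(self.sorted_vals) - 1)
        return self.sorted_vals[idx]

w = WindowQuantile(horizon_s=60.0)
for i, x in enumerate([5, 1, 9, 3, 7]):
    w.insert(x, now=i)                      # synthetic timestamps
print("median over window:", w.quantile(0.5))
```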

Relevance: 100.00%

Abstract:

Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments such as phase noise are key bottlenecks in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches rely on well-designed sum-product algorithms, which iteratively pass probabilistic messages between the channel statistics and the decoder. The channel statistics generally entail a high computational complexity because their probabilistic model may involve continuous random variables. The detailed knowledge of the channel statistics that these algorithms require makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the considered modulation/coding scheme, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of a slight performance degradation. To improve the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information iteratively to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. It relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI) followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where the M-PE corresponds to the polarization of interest. The operating principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: the M-PE block tracks the co-polar phase, and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
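
The core decision-directed idea can be sketched compactly: derotate the received samples with the current phase estimate, form symbol decisions, and re-estimate the phase from the correlation of the samples with those decisions. The Python toy below uses hard QPSK decisions and a constant phase offset; the receivers in the thesis instead use soft APPs from the decoder and track a time-varying phase blockwise.

```python
import numpy as np

# Minimal decision-directed phase estimation: QPSK through a constant
# phase offset plus noise. The offset is kept below pi/4 because
# decision-directed QPSK estimation has an inherent pi/2 ambiguity.
rng = np.random.default_rng(3)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
tx = qpsk[rng.integers(0, 4, 500)]
phase = 0.3                                 # unknown offset to estimate
noise = 0.1 * (rng.normal(size=500) + 1j * rng.normal(size=500))
rx = tx * np.exp(1j * phase) + noise

est = 0.0
for _ in range(5):                          # iterate decisions and re-estimation
    derot = rx * np.exp(-1j * est)
    dec = qpsk[np.argmin(np.abs(derot[:, None] - qpsk[None, :]), axis=1)]
    est = np.angle(np.sum(rx * np.conj(dec)))   # ML phase given the decisions
print(f"estimated phase: {est:.3f} (true {phase})")
```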

Relevance: 100.00%

Abstract:

Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that well-approximates the true variance.
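
To show where the input-dependent noise enters the computation, here is a minimal Python sketch of GP regression with a heteroscedastic noise term on the diagonal of the kernel matrix; the log noise variance is a fixed assumed function here, whereas the paper places a second GP prior on it and samples it with MCMC.

```python
import numpy as np

# GP regression with input-dependent noise: the per-point noise variance
# sits on the diagonal of the training kernel matrix. Kernel, noise
# function, and data are illustrative.
def rbf(a, b, ell=1.0, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(4)
x = np.linspace(0, 5, 40)
noise_var = np.exp(-2 + 0.6 * x)            # noise grows with the input
y = np.sin(x) + rng.normal(0, np.sqrt(noise_var))

xs = np.linspace(0, 5, 100)
K = rbf(x, x) + np.diag(noise_var)           # heteroscedastic diagonal
Ks = rbf(xs, x)
mean = Ks @ np.linalg.solve(K, y)            # GP posterior mean at test inputs
var = rbf(xs, xs).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
print("posterior mean, var at x≈2.5:", mean[50].round(3), var[50].round(3))
```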

Relevance: 100.00%

Abstract:

A novel approach, based on statistical mechanics, to analyzing the typical performance of optimum code-division multiple-access (CDMA) multiuser detectors is reviewed. A 'black-box' view of the basic CDMA channel is introduced, based on which the CDMA multiuser detection problem is regarded as a 'learning-from-examples' problem of the 'binary linear perceptron' in the neural network literature. Adopting the Bayes framework, analysis of the performance of the optimum CDMA multiuser detectors is reduced to evaluation of the average of the cumulant generating function of a relevant posterior distribution. The evaluation of the average cumulant generating function is done, based on formal analogy with a similar calculation appearing in spin glass theory in statistical mechanics, by making use of the replica method developed in that theory.

Relevance: 100.00%

Abstract:

Amongst all the objectives in the study of time series, uncovering the dynamic law of its generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden space models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference or parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach for time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application for estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Probabilistic inference is unfortunately computationally intractable and we show how to make use of variational techniques for approximating the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models for segmenting a time series into regimes and providing predictive distributions.
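
The forward recursion is the basic inference computation underlying the hidden-state models discussed here. The Python sketch below evaluates the likelihood of a toy two-state Gaussian-emission HMM, with scaling for numerical stability; the model parameters are illustrative.

```python
import numpy as np

# Scaled forward recursion for an HMM: propagate filtered state
# probabilities one step at a time and accumulate log normalizers,
# yielding log p(observations).
def forward_loglik(obs, pi, A, means, stds):
    emis = np.exp(-0.5 * ((obs[:, None] - means) / stds) ** 2) \
           / (stds * np.sqrt(2 * np.pi))   # Gaussian emission densities
    alpha = pi * emis[0]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ A * emis[t]  # predict, then weight by emission
    return ll + np.log(alpha.sum())

pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.10, 0.90]])  # sticky two-state chain
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])
rng = np.random.default_rng(5)
obs = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print("log-likelihood:", forward_loglik(obs, pi, A, means, stds).round(2))
```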

Relevance: 100.00%

Abstract:

A major and growing problem faced by modern society is the high production of waste and its related effects, such as environmental degradation and pollution of various ecosystems, with direct effects on quality of life. Thermal treatment technologies have been widely used to treat such waste, and thermal plasma processing is gaining prominence among them. This work focuses on developing an optimized supervision and control system for a plant that processes petrochemical waste and effluents using thermal plasma. The system basically comprises an inductive plasma torch, reactors, a gas washing/exhaust system, and the RF power supply used to generate the plasma. Supervision and control of the plant is of paramount importance to the final goal. For this reason, several supporting tools were created in the search for greater process efficiency: event generation, graphing, distribution and storage of data for each subsystem of the plant, process execution and control, and 3D visualization of each subsystem, among others. A communication platform between the virtual 3D plant architecture and a real control structure (hardware) was created. The goal is to use mixed-reality concepts and develop strategies for different types of controllers that allow the 3D plant to be manipulated without restrictions of place or schedule, optimizing the actual process. Studies have shown that one of the best ways to implement control of inductively coupled plasma generation is intelligent control, both for the efficiency of its results and for its low implementation cost, since it does not require a specific model. A control strategy using fuzzy logic (fuzzy-PI) was developed and implemented, and the results showed satisfactory response time and viability.
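
As a rough illustration of the fuzzy-PI idea (not the plant's actual controller), the Python sketch below fuzzifies the error and its change with triangular sets, applies a 3x3 rule table, and uses the defuzzified output as an increment to the control signal, driving a toy first-order plant. The membership ranges, rule table, and gains are all illustrative assumptions.

```python
import numpy as np

# Minimal fuzzy-PI sketch (Sugeno style): the error and its change are
# fuzzified with triangular sets, and a 3x3 rule table yields the
# increment applied to the control output (an incremental, PI-like law).
def tri(x, a, b, c):
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def memberships(x):                          # negative / zero / positive
    return np.array([tri(x, -2, -1, 0), tri(x, -1, 0, 1), tri(x, 0, 1, 2)])

RULES = np.array([[-1.0, -0.5, 0.0],         # rows: error N/Z/P
                  [-0.5,  0.0, 0.5],         # cols: delta-error N/Z/P
                  [ 0.0,  0.5, 1.0]])

def fuzzy_pi_step(err, derr, u, ku=0.1):
    w = np.outer(memberships(err), memberships(derr))
    du = (w * RULES).sum() / (w.sum() + 1e-12)   # weighted-average defuzzification
    return u + ku * du                           # incremental update

# Toy first-order plant y' = -y + u, driven toward a setpoint of 1.0.
y, u, prev_err = 0.0, 0.0, 0.0
for _ in range(200):
    err = 1.0 - y
    u = fuzzy_pi_step(err, err - prev_err, u)
    prev_err = err
    y += 0.05 * (-y + u)                         # Euler step of the plant
print(f"output after 200 steps: {y:.3f}")
```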

Relevance: 100.00%

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step, using Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques, as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
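
The conventional baseline the thesis compares against is easy to state in code: with a Gamma prior on a Poisson failure rate, the posterior after each observation window is again a Gamma with updated parameters. The Python sketch below uses illustrative numbers; PEWMA differs by exponentially discounting older data so the rate may drift over time.

```python
# Conjugate Poisson-Gamma updating of a failure rate: with a Gamma(a, b)
# prior and k failures observed over exposure time t, the posterior is
# Gamma(a + k, b + t). Prior and observation windows are illustrative.
a, b = 1.0, 10.0                  # prior mean a/b = 0.1 failures per year
observations = [(2, 5.0), (0, 5.0), (4, 5.0)]   # (failures, years) per window

for k, t in observations:
    a, b = a + k, b + t           # conjugate Bayesian update
    print(f"posterior mean rate: {a / b:.3f} per year")
```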

Relevance: 100.00%

Abstract:

Ultraviolet (UV) nonionizing continuum and mid-infrared (IR) emission constitute the basis of two widely used star formation (SF) indicators at intermediate and high redshifts. We study 2430 galaxies with z < 1.4 in the Extended Groth Strip with deep MIPS 24 μm observations from FIDEL, spectroscopy from DEEP2, and UV, optical, and near-IR photometry from the AEGIS. The data are coupled with dust-reddened stellar population models and Bayesian spectral energy distribution (SED) fitting to estimate dust-corrected star formation rates (SFRs). In order to probe the dust heating from stellar populations of various ages, the derived SFRs were averaged over various timescales—from 100 Myr for "current" SFR (corresponding to young stars) to 1-3 Gyr for long-timescale SFRs (corresponding to the light-weighted age of the dominant stellar populations). These SED-based UV/optical SFRs are compared to total IR luminosities extrapolated from 24 μm observations, corresponding to 10-18 μm rest frame. The total IR luminosities are in the range of normal star-forming galaxies and luminous IR galaxies (10^10-10^12 L_☉). We show that the IR luminosity can be estimated from the UV and optical photometry to within a factor of 2, implying that most z < 1.4 galaxies are not optically thick. We find that for the blue, actively star-forming galaxies the correlation between the IR luminosity and the UV/optical SFR shows a decrease in scatter when going from shorter to longer SFR-averaging timescales. We interpret this as the greater role of intermediate age stellar populations in heating the dust than what is typically assumed. Equivalently, we observe that the IR luminosity is better correlated with dust-corrected optical luminosity than with dust-corrected UV light. We find that this holds over the entire redshift range. Many so-called green valley galaxies are simply dust-obscured actively star-forming galaxies. However, there exist 24 μm detected galaxies, some with L_IR>10^11 L_☉, yet with little current SF. For them a reasonable amount of dust absorption of stellar light (but presumably higher than in nearby early-type galaxies) is sufficient to produce the observed levels of IR, which includes a large contribution from intermediate and old stellar populations. In our sample, which contains very few ultraluminous IR galaxies, optical and X-ray active galactic nuclei do not contribute on average more than ~50% to the mid-IR luminosity, and we see no evidence for a large population of "IR excess" galaxies.