990 results for pseudo-marginal methods


Relevance:

40.00%

Publisher:

Abstract:

The research presented in this paper is part of an ongoing investigation into how best to support meaningful lab-based usability evaluations of mobile technologies. In particular, we report on a comparative study of (a) a standard paper prototype of a mobile application used to perform an early-phase seated (static) usability evaluation, and (b) a pseudo-paper prototype created from the paper prototype used to perform an early-phase, contextually-relevant, mobile usability evaluation. We draw some initial conclusions regarding whether it is worth the added effort of conducting a usability evaluation of a pseudo-paper prototype in a contextually-relevant setting during early-phase user interface development.

Relevance:

30.00%

Publisher:

Abstract:

Application of 'advanced analysis' methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A concentrated plasticity method suitable for practical advanced analysis of steel frame structures comprising non-compact sections is presented in this paper. The pseudo plastic zone method implicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the method for the analysis of steel frames comprising non-compact sections are established by comparison with a comprehensive range of analytical benchmark frame solutions. The pseudo plastic zone method is shown to be more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations.

Relevance:

30.00%

Publisher:

Abstract:

We investigate the utility to computational Bayesian analyses of a particular family of recursive marginal likelihood estimators characterized by the (equivalent) algorithms known as "biased sampling" or "reverse logistic regression" in the statistics literature and "the density of states" in physics. Through a pair of numerical examples (including mixture modeling of the well-known galaxy dataset) we highlight the remarkable diversity of sampling schemes amenable to such recursive normalization, as well as the notable efficiency of the resulting pseudo-mixture distributions for gauging prior-sensitivity in the Bayesian model selection context. Our key theoretical contributions are to introduce a novel heuristic ("thermodynamic integration via importance sampling") for qualifying the role of the bridging sequence in this procedure, and to reveal various connections between these recursive estimators and the nested sampling technique.
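The recursive normalization idea summarized above can be illustrated in a much-reduced form: a stepping-stone, thermodynamic-integration-style estimate of the marginal likelihood for a toy conjugate-normal model, where each bridging ratio is estimated by importance sampling from a power posterior. Everything here (the model, temperature ladder, and sample sizes) is a hypothetical sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.5  # single observation; model y ~ N(theta, 1), prior theta ~ N(0, 1)

def loglik(theta):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

# temperatures bridging the prior (beta = 0) to the posterior (beta = 1)
betas = np.linspace(0.0, 1.0, 11)
n = 20000
logZ = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    # the power posterior at b0 is conjugate: N(b0*y/(1+b0), 1/(1+b0))
    theta = rng.normal(b0 * y / (1 + b0), np.sqrt(1 / (1 + b0)), size=n)
    # importance-sampling estimate of the ratio Z_{b1}/Z_{b0} = E_{p_b0}[L^{b1-b0}]
    logw = (b1 - b0) * loglik(theta)
    logZ += np.log(np.mean(np.exp(logw)))

# closed-form check: marginally, y ~ N(0, 2)
exact = -0.5 * np.log(2 * np.pi * 2) - y ** 2 / 4
print(logZ, exact)
```

Because the model is conjugate, each power posterior can be sampled exactly and the recursive estimate can be checked against the closed-form marginal likelihood.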

Relevance:

30.00%

Publisher:

Abstract:

Several analytical methods for Dynamic System Optimum (DSO) assignment have been proposed, but they fall broadly into two kinds. This chapter attempts to establish DSO by equilibrating the path dynamic marginal time (DMT). The authors analyze the path DMT for a single path with tandem bottlenecks and show that the path DMT is not the simple summation of the DMT associated with each bottleneck along the path. Next, the authors examine the DMT of several paths passing through a common bottleneck. It is shown that the externality at the bottleneck is shared by the paths in proportion to their demand from the current time until the queue vanishes. This sharing of the externality is caused by the departure-rate shift under first-in-first-out (FIFO), and the externality propagates to the downstream bottlenecks. However, the externalities propagated downstream are cancelled out if downstream bottlenecks exist. Therefore, the authors conclude that the path DMT can be evaluated without considering the propagation of externalities, just as in the evaluation of the path DMT for a single path passing through a series of bottlenecks between the origin and destination. Based on the DMT analysis, the authors finally propose a heuristic solution algorithm and verify it by comparing the numerical solution with the analytical one.

Relevance:

30.00%

Publisher:

Abstract:

This thesis proposes three novel models which extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and untested statistic to enable statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case-study is the first of its kind containing interesting results using so-called unit information prior distributions.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To estimate the relative inpatient costs of hospital-acquired conditions. Methods: Patient level costs were estimated using computerized costing systems that log individual utilization of inpatient services and apply sophisticated cost estimates from the hospital's general ledger. Occurrence of hospital-acquired conditions was identified using an Australian 'condition-onset' flag for diagnoses not present on admission. These were grouped to yield a comprehensive set of 144 categories of hospital-acquired conditions to summarize data coded with ICD-10. Standard linear regression techniques were used to identify the independent contribution of hospital-acquired conditions to costs, taking into account the case-mix of a sample of acute inpatients (n = 1,699,997) treated in Australian public hospitals in Victoria (2005/06) and Queensland (2006/07). Results: The most costly types of complications were post-procedure endocrine/metabolic disorders, adding AU$21,827 to the cost of an episode, followed by MRSA (AU$19,881) and enterocolitis due to Clostridium difficile (AU$19,743). Aggregate costs to the system, however, were highest for septicaemia (AU$41.4 million), complications of cardiac and vascular implants other than septicaemia (AU$28.7 million), acute lower respiratory infections, including influenza and pneumonia (AU$27.8 million) and UTI (AU$24.7 million). Hospital-acquired complications are estimated to add 17.3% to treatment costs in this sample. Conclusions: Patient safety efforts frequently focus on dramatic but rare complications with very serious patient harm. Previous studies of the costs of adverse events have provided information on 'indicators' of safety problems rather than the full range of hospital-acquired conditions. Adding a cost dimension to priority-setting could result in changes to the focus of patient safety programmes and research.
Financial information should be combined with information on patient outcomes to allow for cost-utility evaluation of future interventions.
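The regression step described above, isolating the independent cost contribution of a condition flag while controlling for case-mix, can be sketched on synthetic data. All coefficients, variable names, and sample sizes below are hypothetical, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
severity = rng.normal(size=n)               # stand-in case-mix severity score
hac = rng.binomial(1, 0.1, size=n)          # hospital-acquired condition flag
# synthetic episode cost: baseline + case-mix effect + condition effect + noise
cost = 5000 + 3000 * severity + 2000 * hac + rng.normal(0.0, 500.0, size=n)

# OLS: the coefficient on the flag is the condition's independent cost contribution
X = np.column_stack([np.ones(n), severity, hac])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(beta)
```

With the flag entered alongside case-mix covariates, its fitted coefficient recovers the incremental cost attributable to the condition rather than the raw cost difference between flagged and unflagged episodes.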

Relevance:

30.00%

Publisher:

Abstract:

Eucalyptus argophloia Blakely (Western white gum) has shown potential as a commercial forestry timber species in marginal environments of north-eastern Australia. We measured early pollination success in Eucalyptus argophloia to compare pollination methods, determine the timing of stigma receptivity and compare fresh and stored pollen. Early pollination success was measured by counting pollen tubes in the style of E. argophloia 12 days after pollination. We compared the early pollination success of 1) Artificially Induced Protogyny (AIP), one-stop and three-stop methods of pollination; 2) flowers pollinated at 2 day intervals between 2 days before and 6 days after anthesis and 3) fresh pollen and pollen that had been stored for 9 months. Our results show significantly more pollen tubes from unpollinated AIP and AIP treatments than either the one-stop pollination or three-stop pollination treatments. This indicates that self-pollination occurs in the unpollinated AIP treatment. There was very little pollen tube growth in the one-stop method indicating that the three-stop method is the most suitable for this species. Stigma receptivity in E. argophloia commenced six days after anthesis and no pollen tube growth was observed prior to this. Fresh pollen resulted in pollen tube growth in the style whereas the stored pollen resulted in a total absence of pollen tube growth. We recommend that breeding programs incorporating E. argophloia as a female parent use the three-stop pollination method, and controlled pollination be carried out at least six days after anthesis using fresh pollen.

Relevance:

30.00%

Publisher:

Abstract:

We propose a self-regularized pseudo-time marching strategy for ill-posed, nonlinear inverse problems involving recovery of system parameters given partial and noisy measurements of system response. While various regularized Newton methods are popularly employed to solve these problems, resulting solutions are known to depend sensitively upon the noise intensity in the data and on regularization parameters, an optimal choice for which remains a tricky issue. Through limited numerical experiments on a couple of parameter reconstruction problems, one involving the identification of a truss bridge and the other related to imaging soft-tissue organs for early detection of cancer, we demonstrate the superior features of the pseudo-time marching schemes.
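The pseudo-time idea can be shown schematically (not on the paper's truss or tissue problems): integrate the regularized Gauss-Newton update in small pseudo-time steps for a toy two-parameter exponential-decay model. The forward model, step size, and damping below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
true = np.array([2.0, 1.5])             # hypothetical (decay a, amplitude b)

def G(p):                               # forward model: b * exp(-a * x)
    return p[1] * np.exp(-p[0] * x)

def jac(p):                             # Jacobian of G w.r.t. (a, b)
    a, b = p
    return np.column_stack([-b * x * np.exp(-a * x), np.exp(-a * x)])

d = G(true) + 0.01 * rng.normal(size=x.size)   # noisy synthetic data

# pseudo-time marching: p' = (J^T J + eps I)^{-1} J^T r, integrated in small steps
p = np.array([0.5, 0.5])
dt, eps = 0.1, 1e-6
for _ in range(400):
    r = d - G(p)
    J = jac(p)
    p = p + dt * np.linalg.solve(J.T @ J + eps * np.eye(2), J.T @ r)
print(p)
```

The small step dt plays the role of the regularizer: rather than taking the full Gauss-Newton step, the iterate flows toward the least-squares solution, which is what makes the scheme comparatively insensitive to the explicit regularization parameter.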

Relevance:

30.00%

Publisher:

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1–4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive for producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is particularly important, especially if the inner meso-scale model domain is small.

Relevance:

30.00%

Publisher:

Abstract:

We explore the application of pseudo-time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems could provide useful local stiffness variations and do not have to confront modeling uncertainties in damping, an important, yet inadequately understood, aspect in dynamic system identification problems. The usual method of least-squares solution is through a regularized Gauss-Newton method (GNM), whose results are known to be sensitively dependent on the regularization parameter and data noise intensity. Finite-time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. Therefore, we also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we expedite the speed of the PD-EnKF by proposing an inner iteration within every time step. Results using the pseudo-dynamic strategy obtained through PD-EnKF and recursive integration are compared with those from the conventional GNM, which prove that the PD-EnKF is the best performer, showing little sensitivity to process noise covariance and yielding reconstructions with fewer artifacts even when the ensemble size is small.
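The filtering route can be sketched on a toy static problem: an ensemble Kalman filter, driven by weak random-walk pseudo-dynamics, recovering the two stiffnesses of a spring chain from its static displacements. The model, noise levels, ensemble size, and iteration count are hypothetical stand-ins for the large structural systems treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(k):
    """Static displacements of a two-spring chain under a unit tip load."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.linalg.solve(K, np.array([0.0, 1.0]))

k_true = np.array([3.0, 2.0])              # hypothetical "true" stiffnesses
d = forward(k_true)                        # noise-free measurement for the sketch

N = 200
ens = rng.uniform(1.0, 5.0, size=(N, 2))   # initial stiffness ensemble
R = 1e-6 * np.eye(2)                       # small measurement-noise covariance
for _ in range(50):                        # pseudo-time recursion
    ens += 0.01 * rng.normal(size=ens.shape)   # weak random-walk process noise
    ens = np.clip(ens, 0.2, None)              # keep stiffnesses physical
    H = np.array([forward(k) for k in ens])    # predicted measurements
    dk, dh = ens - ens.mean(0), H - H.mean(0)
    C_kh = dk.T @ dh / (N - 1)
    C_hh = dh.T @ dh / (N - 1) + R
    gain = C_kh @ np.linalg.inv(C_hh)          # ensemble Kalman gain
    perturbed = d + rng.multivariate_normal(np.zeros(2), R, size=N)
    ens += (perturbed - H) @ gain.T
print(ens.mean(0))
```

No Jacobian of the forward model is ever formed: the gain is built from ensemble cross-covariances, which is the derivative-free property that motivates the filtering alternative to Gauss-Newton iterations.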

Relevance:

30.00%

Publisher:

Abstract:

Hybrid frictional-kinetic equations are used to predict the velocity, grain temperature, and stress fields in hoppers. A suitable choice of dimensionless variables permits the pseudo-thermal energy balance to be decoupled from the momentum balance. These balances contain a small parameter, which is analogous to a reciprocal Reynolds number. Hence an approximate semi-analytical solution is constructed using perturbation methods. The energy balance is solved using the method of matched asymptotic expansions. The effect of heat conduction is confined to a very thin boundary layer near the exit, where it causes a marginal change in the temperature. Outside this layer, the temperature T increases rapidly as the radial coordinate r decreases. In particular, the conduction-free energy balance yields an asymptotic solution, valid for small values of r, of the form T ∝ r^-4. There is a corresponding increase in the kinetic stresses, which attain their maximum values at the hopper exit. The momentum balance is solved by a regular perturbation method. The contribution of the kinetic stresses is important only in a small region near the exit, where the frictional stresses tend to zero. Therefore, the discharge rate is only about 2.3% lower than the frictional value, for typical parameter values. As in the frictional case, the discharge rate for deep hoppers is found to be independent of the head of material.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT) while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on the circular harmonics (CH) (Fourier basis functions) and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder with background absorption and (reduced) scattering coefficients chosen as μ_a^b = 0.01 mm^-1 and μ_s'^b = 1.0 mm^-1, respectively. We also assume μ_a = 0.02 mm^-1 within the inhomogeneity (for the single-inhomogeneity case) and μ_a = 0.02 and 0.03 mm^-1 (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those through a deterministic and explicitly regularized Gauss-Newton algorithm.
We have also estimated the unknown μ_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also proven to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
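The shape-based parameterization can be illustrated on its own: a closed inclusion boundary written as a truncated circular-harmonics (Fourier) expansion of the radius, which is exactly the kind of low-dimensional state the filter recovers. The coefficient values below are arbitrary illustrations, not phantom geometries from the paper.

```python
import numpy as np

def boundary(coeffs, phi):
    """Closed curve r(phi) from circular-harmonic (Fourier) coefficients.
    coeffs = [a0, a1, b1, a2, b2, ...] for r(phi) = a0 + sum_k a_k cos(k phi) + b_k sin(k phi)."""
    r = np.full_like(phi, coeffs[0])
    for k in range(1, (len(coeffs) + 1) // 2):
        r = r + coeffs[2 * k - 1] * np.cos(k * phi) + coeffs[2 * k] * np.sin(k * phi)
    return r

phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
inclusion = boundary(np.array([1.0, 0.3, 0.0]), phi)   # a perturbed circle
x, y = inclusion * np.cos(phi), inclusion * np.sin(phi)
print(x.shape)
```

Truncating the expansion at a few harmonics keeps the filter state parsimonious (a handful of coefficients plus the interior parameter value) while still representing smooth non-elliptical shapes such as the dumbbell-like inclusions mentioned above.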

Relevance:

30.00%

Publisher:

Abstract:

Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
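The basic r-adaptive ingredient, redistributing a fixed number of mesh points so that a monitor function is equidistributed, can be sketched in isolation. The monitor, point count, and single de Boor-style redistribution step below are generic illustrations, not the thesis's variational or multisymplectic schemes.

```python
import numpy as np

def equidistribute(x, w):
    """Redistribute mesh points x so the monitor density w(x) is equidistributed:
    equal 'monitor mass' between neighbouring points, endpoints fixed."""
    wx = w(x)
    wm = 0.5 * (wx[1:] + wx[:-1]) * np.diff(x)        # trapezoidal cell masses
    cum = np.concatenate([[0.0], np.cumsum(wm)])      # cumulative mass along the mesh
    levels = np.linspace(0.0, cum[-1], x.size)        # equal-mass targets
    return np.interp(levels, cum, x)                  # invert the cumulative mass

w = lambda x: 1.0 + 20.0 * np.exp(-100.0 * x ** 2)    # feature concentrated near x = 0
x0 = np.linspace(-1.0, 1.0, 41)                       # uniform initial mesh
x1 = equidistribute(x0, w)
print(x1[:3])
```

The total number of points is unchanged; they simply crowd into the region where the monitor is large, which is the defining behaviour of the moving-mesh methods discussed above.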

In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.