966 results for Two-point boundary value problems
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n now typical of many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is therefore of little relevance outside certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
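As background for how a latent structure model induces a low nonnegative-rank probability tensor, the standard latent class (PARAFAC-type) factorization can be written as below. This is the textbook form; the collapsed Tucker class of Chapter 2 generalizes it, so the notation here is illustrative rather than the thesis's own.

```latex
% Latent class (PARAFAC-type) factorization of the joint pmf of p categorical variables.
% y_j takes values in {1, ..., d_j}; h indexes k latent classes.
\begin{equation*}
  \Pr(y_1 = c_1, \dots, y_p = c_p)
    \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h, c_j},
  \qquad
  \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1, \quad \sum_{c=1}^{d_j} \lambda^{(j)}_{h,c} = 1,
\end{equation*}
```

so the joint pmf, viewed as a d_1 × ... × d_p tensor, has nonnegative rank at most k.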
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
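For orientation, if optimality is measured by minimizing the Kullback-Leibler divergence KL(π ∥ q) of a Gaussian q from the exact posterior π (one common convention; Chapter 4's precise criterion may differ), the minimizer is the moment-matched Gaussian:

```latex
\begin{equation*}
  \operatorname*{arg\,min}_{q = \mathcal{N}(m,\,\Sigma)}
    \mathrm{KL}\bigl(\pi \,\|\, q\bigr)
  \;=\; \mathcal{N}\bigl(\mathbb{E}_{\pi}[\theta],\, \operatorname{Cov}_{\pi}(\theta)\bigr),
\end{equation*}
```

since for fixed π the divergence equals the cross-entropy of π against q up to a constant, and the Gaussian cross-entropy is minimized by matching the first and second moments.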
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
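As a concrete illustration of the kind of approximate transition kernel covered by such a framework, the sketch below replaces the full-data log-likelihood in a random-walk Metropolis step with a rescaled random-subset estimate. It is a generic construction under assumed interfaces (loglik_fn and logprior_fn are placeholder callables), not the specific algorithm analyzed in Chapter 6.

```python
import numpy as np

def approx_mh_step(theta, loglik_fn, logprior_fn, data, subsample_frac=0.1,
                   step=0.1, rng=None):
    """One random-walk Metropolis step using a subsampled-data log-likelihood.

    Illustrative sketch only: loglik_fn(theta, x) and logprior_fn(theta) are
    assumed interfaces, and the rescaled subsample estimate makes the
    transition kernel approximate rather than exact.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    m = max(1, int(subsample_frac * n))
    idx = rng.choice(n, size=m, replace=False)
    scale = n / m  # rescale the subsample log-likelihood to the full-data scale

    def approx_logpost(th):
        return logprior_fn(th) + scale * sum(loglik_fn(th, data[i]) for i in idx)

    proposal = theta + step * rng.standard_normal(np.shape(theta))
    log_alpha = approx_logpost(proposal) - approx_logpost(theta)
    return proposal if np.log(rng.uniform()) < log_alpha else theta
```

The approximation error of this kernel shrinks as subsample_frac grows, at increasing cost per step, which is exactly the accuracy-versus-budget trade-off the framework is designed to quantify.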
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
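For reference, a minimal sketch of a truncated-normal (Albert-Chib style) data augmentation Gibbs sampler for probit regression is given below, with a Gaussian prior beta ~ N(0, prior_var * I); the function name and prior choice are illustrative assumptions rather than the thesis's code. This is the kind of sampler whose mixing degrades in the rare-events regime discussed above.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=1000, prior_var=100.0, seed=0):
    """Sketch of a truncated-normal data augmentation Gibbs sampler for probit regression."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    prior_prec = np.eye(p) / prior_var
    V_post = np.linalg.inv(X.T @ X + prior_prec)   # covariance of beta | z
    samples = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: draw latent utilities z_i ~ N(x_i' beta, 1), truncated to
        # (0, inf) when y_i = 1 and (-inf, 0] when y_i = 0.
        mu = X @ beta
        lower = np.where(y == 1, -mu, -np.inf)     # standardized truncation bounds
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, random_state=rng)
        # Step 2: draw beta from its Gaussian full conditional given z.
        m_post = V_post @ (X.T @ z)
        beta = rng.multivariate_normal(m_post, V_post)
        samples[t] = beta
    return samples
```

With many observations but very few y_i = 1, the latent draws carry little information per sweep and consecutive beta samples are highly autocorrelated, consistent with the slow mixing described in Chapter 7.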
Abstract:
Aims: Glycated hemoglobin (HbA1c) is an important indicator of glucose control over time. Point-of-care (POC) devices allow rapid and convenient measurement of HbA1c, greatly facilitating diabetes care. We assessed two POC analyzers in the Peruvian Amazon, where laboratory-based HbA1c testing is not available.
Methods: Venous blood samples were collected from 203 individuals from six different Amazonian communities, covering a wide range of HbA1c values, 4.4-9.0% (25-75 mmol/mol). The results of the Afinion AS100 and the DCA Vantage POC analyzers were compared with those of a central laboratory using the Premier Hb9210 high-performance liquid chromatography (HPLC) method. Imprecision was assessed by performing 14 successive tests on a single blood sample.
Results: The correlation coefficient r between POC and HPLC results was 0.92 for the Afinion and 0.93 for the DCA Vantage. The Afinion generated higher HbA1c results than the HPLC (mean difference = +0.56% [+6 mmol/mol]; p < 0.001), as did the DCA Vantage (mean difference = +0.32% [+4 mmol/mol]). The bias between POC and HPLC did not vary by HbA1c level for the DCA Vantage (p = 0.190), but it did for the Afinion (p < 0.001). Imprecision was CV = 1.75% for the Afinion and CV = 4.01% for the DCA Vantage. Sensitivity was 100% for both devices; specificity was 48.3% for the Afinion and 85.1% for the DCA Vantage; positive predictive value (PPV) was 14.4% for the Afinion and 34.9% for the DCA Vantage; and negative predictive value (NPV) was 100% for both devices. The area under the receiver operating characteristic (ROC) curve was 0.966 for the Afinion and 0.982 for the DCA Vantage. Agreement between HPLC and POC in classifying diabetes and prediabetes status was slight for the Afinion (kappa = 0.12), with classifications differing significantly (McNemar's statistic = 89; p < 0.001), and moderate for the DCA Vantage (kappa = 0.45), again with a significant difference (McNemar's statistic = 28; p < 0.001).
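The screening metrics reported above follow directly from a 2x2 confusion matrix of POC classifications against the HPLC reference. A minimal helper is sketched below with hypothetical counts, purely to make the definitions explicit; these are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among reference positives
        "specificity": tn / (tn + fp),  # true negatives among reference negatives
        "ppv": tp / (tp + fp),          # reference positives among test positives
        "npv": tn / (tn + fn),          # reference negatives among test negatives
    }

# Hypothetical counts, for illustration only:
print(diagnostic_metrics(tp=8, fp=15, fn=0, tn=50))
```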
Conclusions: Despite significant differences between HbA1c results from the Afinion and DCA Vantage analyzers and those from HPLC, we conclude that both analyzers can be considered for use in health clinics in the Peruvian Amazon for therapeutic adjustments, provided healthcare workers are aware of the differences relative to testing in a clinical laboratory. However, imprecision and bias were not low enough to recommend either device for screening purposes, and the local prevalence of anemia and malaria may interfere with diagnostic determinations for a substantial portion of the population.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave/boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device, micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs can alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high-speed fluid, which improves boundary layer health when the layer is subjected to a flow disturbance. Due to their small size, µVGs are embedded in the boundary layer, which reduces drag compared to traditional vortex generators, while they are cost-effective, physically robust, and require no power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat-plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use large eddy simulation without a subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and of the primary vortices generated by the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly resolved RANS computations. The experiments and the LES results indicate that the micro-ramps, whose height is h ≈ 0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness, indicating the significant three-dimensionality of the flow field. Compared to the baseline size, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated, including micro-ramps and micro-vanes. The results showed that vortices generated by µVGs can partially eliminate shock-induced flow separation and continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in a thinner downstream displacement thickness than the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation, especially after the shock interaction. In addition, the close spanwise distance between the vortices for the ramp geometry causes the vortex cores to move away from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects.
Two hybrid concepts, named "thick-vane" and "split-ramp", were also studied; the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. These geometries behaved similarly to the micro-vanes in terms of streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the effect of Mach number on flow past the micro-ramps (h ≈ 0.5δ) was examined in a supersonic boundary layer at M = 1.4, 2.2, and 3.0, with no shock waves present. The LES results indicate that micro-ramps have a greater impact at lower Mach number near the device, but their influence decays faster than in the higher Mach number cases. This may be due to additional dissipation of the primary vortices, whose smaller effective diameter at the lower Mach number means that their coherence is more easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall showed similar growth across cases, indicating a weak correlation with Mach number; however, the spanwise distance between the two counter-rotating cores increases further at lower Mach number. Finally, various µVGs, including the micro-ramp, the split-ramp, and a new hybrid concept, the "ramped-vane", are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device, and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, and a larger ramped-vane with an increased trailing-edge gap yielded fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices. Additional benefits include negligible device drag, together with reductions in displacement thickness and shape factor compared to the other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also gave decreased amplitudes of the pressure fluctuations downstream of the shock.
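For readers outside the boundary-layer community, the "boundary layer health" metrics quoted above are the standard integral thicknesses and the incompressible shape factor. The definitions below are textbook relations, not quantities re-derived in this study.

```latex
% Displacement thickness, momentum thickness, and incompressible shape factor
\begin{align*}
  \delta^{*} &= \int_{0}^{\delta} \left(1 - \frac{u}{U_e}\right) \mathrm{d}y, &
  \theta &= \int_{0}^{\delta} \frac{u}{U_e}\left(1 - \frac{u}{U_e}\right) \mathrm{d}y, &
  H &= \frac{\delta^{*}}{\theta},
\end{align*}
```

where a lower H indicates a fuller, healthier velocity profile that is more resistant to separation.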
Abstract:
This manuscript presents three approaches, analytical, experimental and numerical, to studying the behaviour of a flexible membrane tidal energy converter. This technology, developed by the EEL Energy company, is based on periodic deformations of a pre-stressed flexible structure. Energy converters, located on each side of the device, are set into motion by the membrane's wave-like motion. In the analytical model, the membrane is represented by a one-dimensional linear beam model and the flow by a three-dimensional potential flow; the fluid forces are evaluated using elongated-body theory, and energy is dissipated along the whole length of the membrane. A 1/20th-scale experimental prototype was designed with micro-dampers to simulate the power take-off, and trials validated the undulating-membrane energy converter concept. A numerical model has also been developed in which each element of the device is represented and energy dissipation is modelled by damper elements with a damping law linear in damper velocity. Comparison of the three approaches validates their ability to represent the membrane behaviour without damping. The energy dissipation applied in the analytical model is clearly different from that in the other two models because of where the energy is dissipated and the damping law used. The other two models show similar behaviour and the same order of power take-off distribution, but the power take-off values are underestimated by the numerical model. These three approaches have made it possible to identify the key parameters governing the behaviour of the membrane, and the parametric study highlights the complementarity and the advantage of developing three approaches in parallel to answer industrial optimization problems. To make the link between flume-tank trials and sea trials, a 1/6th-scale prototype was built, for which the change of scale was studied. The behaviour of the two prototypes is compared, and differences can be explained by differences in boundary conditions and confinement effects. To evaluate the membrane's long-term behaviour at sea, temperature-accelerated ageing and fatigue tests were carried out on samples of the prototype materials submerged in sea water.
Abstract:
The aim of this study was to evaluate the viability of using spent laying hens' meat in the manufacture of mortadella-type sausages with a healthy appeal, by using vegetable oil instead of animal fat. A total of 120 Hy-Line® layer hens were distributed in a completely randomized design into two treatments of six replicates with ten birds each. The treatments were birds from the light Hy-Line® W36 and semi-heavy Hy-Line® Brown lines. Cold carcass, wing, breast, and leg fillet yields were determined. Dry matter, protein, and lipid contents were determined in breast and leg fillets. The breast and leg fillets of three replicates per treatment were used to manufacture mortadella. After processing, sausages were evaluated for proximate composition, objective color, microbiological parameters, fatty acid profile, and sensory acceptance. The meat of light and semi-heavy spent hens presented good yield and composition, allowing it to be used as raw material for the manufacture of processed products. The mortadellas were safe from a microbiological point of view, and those made with semi-heavy hens' fillets were redder and better accepted by consumers. All sensory attributes scored above 5 (neither liked nor disliked). Both products presented high polyunsaturated fatty acid contents and a good polyunsaturated-to-saturated fatty acid ratio. The excellent potential for the use of meat from spent layer hens of both varieties in the manufacture of a healthier mortadella-type sausage was demonstrated.
Abstract:
In vitro culture of the mutualistic fungus of leaf-cutting ants is troublesome due to its low growth rate, which leads to storage problems and contaminant accumulation. This paper compares the radial growth rate of the mutualistic fungus of Atta sexdens rubropilosa Forel in two different culture media (Pagnocca B and MEA LP). Although total radial growth on MEA LP was greater throughout the bioassay, no significant difference was detected between the growth efficiencies of the two media. Previous evidence of a low growth rate for this fungus was confirmed. Since these data do not indicate greater efficiency of one culture medium over the other, the MEA LP medium is recommended for in vitro studies with this mutualistic fungus owing to its simpler composition and translucent color, which make analysis easier.
Abstract:
cDNAs coding for two digestive lysozymes (MdL1 and MdL2) of the housefly Musca domestica were cloned and sequenced. MdL2 is a novel minor lysozyme, whereas MdL1 is the major lysozyme thus far purified from the M. domestica midgut. MdL1 and MdL2 were expressed as recombinant proteins in Pichia pastoris, purified, and characterized. The lytic activities of MdL1 and MdL2 towards Micrococcus lysodeikticus have an acidic pH optimum (4.8) at low ionic strength (μ = 0.02), which shifts towards an even more acidic value, pH 3.8, at high ionic strength (μ = 0.2). However, the pH optimum of their activities towards 4-methylumbelliferyl N-acetylchitotrioside (4.9) is not affected by ionic strength. These results suggest that the acidic pH optimum is an intrinsic property of MdL1 and MdL2, whereas the pH optimum shifts are an effect of ionic strength on the negatively charged bacterial wall. The affinity of MdL2 for the bacterial cell wall is lower than that of MdL1. Differences in isoelectric point (pI) indicate that MdL2 (pI = 6.7) is less positively charged than MdL1 (pI = 7.7) at their pH optima, which suggests that electrostatic interactions might be involved in substrate binding. In agreement with this, MdL1 and MdL2 affinities for the bacterial cell wall decrease as ionic strength increases.
Abstract:
The dynamics of a dissipative vibro-impact system called the impact-pair is investigated. This system is similar to the Fermi-Ulam accelerator model and consists of an oscillating one-dimensional box containing a point mass moving freely between successive inelastic collisions with the rigid walls of the box. In our numerical simulations, we observed multistable regimes, for which the corresponding basins of attraction present a quite complicated structure with smooth boundaries. In addition, we characterize the system in a two-dimensional parameter space by using the largest Lyapunov exponents, identifying self-similar periodic sets.
Abstract:
We consider a nonlinear system and show the unexpected and surprising result that, even with high dissipation, the mean energy of a particle can attain higher values than when there is no dissipation in the system. We revisit the time-dependent annular billiard in the presence of inelastic collisions with the boundaries. For some magnitudes of dissipation, we observe the phenomenon of boundary crisis, which drives the particles to an asymptotically attracting fixed point located at an energy higher than the mean energy of the nondissipative case, and thus much higher than the mean energy just before the crisis. We emphasize that the unexpected results presented here reveal the importance of nonlinear dynamics analysis in explaining the paradoxical strategy of introducing dissipation into a system in order to gain energy.
Abstract:
The dynamics and mechanism of migration of a vacancy point defect in a two-dimensional (2D) colloidal crystal are studied using numerical simulations. We find that the migration of a vacancy is always realized by topology switching between its different configurations. From the temperature dependence of the topology switch frequencies, we obtain the activation energies for possible topology transitions associated with the vacancy diffusion in the 2D crystal. [doi:10.1063/1.3615287]
Abstract:
In this work we analyze the dynamical Casimir effect for a massless scalar field confined between two concentric spherical shells with mixed boundary conditions. We thereby generalize a previous result in the literature [Phys. Rev. A 78, 032521 (2008)], where the same problem is approached for the field constrained to Dirichlet-Dirichlet boundary conditions. A general expression for the average number of created particles is deduced for an arbitrary law of radial motion of the spherical shells. This expression is then applied to harmonic oscillations of the shells, and the particle production is analyzed and compared with the results previously obtained under Dirichlet-Dirichlet boundary conditions.
Abstract:
In this work, a new boundary element formulation for the analysis of plate-beam interaction is presented. The formulation uses three-nodal-value boundary elements, and each beam element is replaced by its actions on the plate, i.e., a distributed load and end-of-element forces. From the solution of the differential equation of a beam with a linearly distributed load, the plate-beam interaction tractions can be written as functions of the nodal values of the beam. With this transformation, a final system of equations in the nodal displacement values of the plate boundary and the beam nodes is obtained, from which all unknowns of the plate-beam system are determined. Many examples are analyzed, and the results show excellent agreement with the analytical solution and other numerical methods.
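The "differential equation of a beam with a linearly distributed load" invoked here is, in the usual Euler-Bernoulli setting, the fourth-order relation below; the sign convention and the absence of additional terms (e.g., torsion) are assumptions of this sketch rather than details taken from the paper.

```latex
% Euler-Bernoulli beam under a load varying linearly from q_1 to q_2 over length L
\begin{equation*}
  EI \,\frac{\mathrm{d}^{4} w}{\mathrm{d}x^{4}} \;=\; q(x) \;=\; q_1 + (q_2 - q_1)\,\frac{x}{L},
  \qquad 0 \le x \le L,
\end{equation*}
```

whose solution is a fifth-degree polynomial in x, which is why the interaction tractions can be expressed exactly in terms of the beam's nodal values.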
Abstract:
In this paper a new boundary element method formulation for the elastoplastic analysis of plates with geometric nonlinearities is presented. The von Mises criterion with linear isotropic hardening is used to evaluate the plastic zone. Large deflections are assumed, but within the context of small strains. To derive the boundary integral equations, von Kármán's hypothesis is taken into account. An initial stress field is applied to correct the true stresses according to the adopted criterion. Isoparametric linear elements are used to approximate the boundary unknowns, while triangular internal cells with linear shape functions are adopted to evaluate the domain influences. The nonlinear system of equations is solved using an implicit scheme together with the consistent tangent operator derived in the paper. Numerical examples are presented to demonstrate the accuracy and validity of the proposed formulation.
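For completeness, the yield condition referred to here takes the standard form below for von Mises plasticity with linear isotropic hardening; the symbols (sigma_y for the initial yield stress, K for the hardening modulus, \bar{\varepsilon}^p for the accumulated plastic strain) are generic textbook notation, not necessarily the paper's.

```latex
% von Mises yield function with linear isotropic hardening
\begin{equation*}
  f(\boldsymbol{\sigma}, \bar{\varepsilon}^{p})
    = \sqrt{3\, J_2(\boldsymbol{\sigma})} - \bigl(\sigma_y + K\,\bar{\varepsilon}^{p}\bigr) \le 0,
  \qquad
  J_2 = \tfrac{1}{2}\, \mathbf{s} : \mathbf{s},
\end{equation*}
```

where s is the deviatoric stress tensor; plastic flow occurs only when f = 0.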
Abstract:
This work deals with the analysis of cracked structures using the boundary element method (BEM). Two formulations for analysing the crack growth process in quasi-brittle materials are discussed. They are based on the dual formulation of the BEM, in which two different integral equations are employed along opposite sides of the crack surface. The first formulation uses the concept of a constant operator, in which the corrections for the nonlinear process are made only by applying appropriate tractions along the crack surfaces. The second is an implicit technique based on the use of a consistent tangent operator. This formulation is accurate and stable, and it always requires far fewer iterations to reach equilibrium within a given load increment than the classical approach. Comparative examples of classical crack growth problems are shown to illustrate the performance of the two formulations.