999 results for "uniform linear hypothesis"


Relevance:

30.00%

Publisher:

Abstract:

In this work, the plate bending formulation of the boundary element method (BEM) based on Reissner's hypothesis is extended to the analysis of zoned plates in order to model a building floor structure. In the proposed formulation each sub-region defines a beam or a slab, and depending on how the sub-regions are represented, two different types of analysis are possible. In the simple bending problem, all sub-regions are defined by their middle surface. On the other hand, for the coupled stretching-bending problem, all sub-regions are referred to a chosen reference surface, so that eccentricity effects are taken into account. Equilibrium and compatibility conditions are automatically imposed by the integral equations, which treat this composed structure as a single body. The bending and stretching values defined on the interfaces are approximated along the beam width, thereby reducing the number of degrees of freedom. In the proposed model, the set of equations is therefore written in terms of the problem values on the beam axes and on the beam-free external boundary. Finally, some numerical examples are presented to show the accuracy of the proposed model.

Relevance:

30.00%

Publisher:

Abstract:

In this work, a numerical model to perform non-linear analysis of building floor structures is proposed. The model is derived from Kirchhoff's plate bending formulation of the boundary element method (BEM) for zoned domains, in which the plate stiffness is modified by the presence of membrane effects. In this model, no approximation of the generalized forces along the interface is required, and the compatibility and equilibrium conditions along interfaces are imposed at the integral equation level. In order to reduce the number of degrees of freedom, the Navier-Bernoulli hypothesis is assumed to simplify the strain field for the thin sub-regions (rectangular beams). The non-linear formulation is obtained from the linear formulation by incorporating initial internal force fields, which are approximated using the well-known cell sub-division. The non-linear system of algebraic equations is then solved using the concept of the consistent tangent operator. The von Mises criterion is adopted to govern the elasto-plastic material behaviour, which is checked at points along the plate thickness and along the rectangular beam element axes. The numerical representations are obtained accurately by either computing the element integrals analytically or performing the numerical integration with an appropriate sub-elementation scheme. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Mathematics - IBILCE

Relevance:

30.00%

Publisher:

Abstract:

Background: Changes in heart rate (HR) during the rest-exercise transition can be characterized by applying mathematical calculations, such as deltas over 0-10 and 0-30 seconds to draw inferences about the parasympathetic nervous system, and linear regression and the delta applied to data from 60 to 240 seconds to draw inferences about the sympathetic nervous system. The objective of this study was to test the hypothesis that young and middle-aged subjects have different heart rate responses to moderate- and high-intensity exercise, as assessed by these different mathematical calculations. Methods: Seven middle-aged men and ten young men, all apparently healthy, underwent constant-load tests (intense and moderate) on a cycle ergometer. The heart rate data were submitted to delta analysis (0-10, 0-30 and 60-240 seconds) and simple linear regression (60-240 seconds). The parameters obtained from the simple linear regression analysis were the intercept and the slope. The Shapiro-Wilk test was used to check the distribution of the data, and the unpaired t-test for comparisons between groups. The level of statistical significance was 5%. Results: The intercept and the 0-10 second delta were lower in the middle-aged group at both loads tested, and the slope was lower in the middle-aged group during moderate exercise. Conclusion: The young subjects presented a greater magnitude of vagal withdrawal in the initial stage of the HR response during constant-load exercise and a faster adjustment of the sympathetic response in moderate exercise.
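A minimal sketch of the calculations described above, assuming the heart rate signal is stored as a one-sample-per-second array starting at exercise onset (the array name, sampling rate and layout are assumptions, not details from the study):

```python
import numpy as np
from scipy.stats import linregress

def hr_transition_indices(hr):
    """Delta and regression indices for the rest-exercise HR transition.

    hr: 1-D array of heart rate (bpm), one sample per second,
        with hr[0] at the onset of constant-load exercise.
    """
    delta_0_10 = hr[10] - hr[0]       # fast (vagal) phase, 0-10 s
    delta_0_30 = hr[30] - hr[0]       # 0-30 s
    delta_60_240 = hr[240] - hr[60]   # slow (sympathetic) phase, 60-240 s

    # Simple linear regression of HR on time over 60-240 s.
    t = np.arange(60, 241)
    fit = linregress(t, hr[60:241])
    return {
        "delta_0_10": delta_0_10,
        "delta_0_30": delta_0_30,
        "delta_60_240": delta_60_240,
        "intercept": fit.intercept,
        "slope": fit.slope,
    }
```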

Relevance:

30.00%

Publisher:

Abstract:

The asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^{-1/2}, n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property. The power performance of all four criteria is examined in the one-parameter exponential family.
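For context, the quantities involved can be sketched as follows (my notation, not necessarily the paper's): with score function U(theta), restricted MLE under the composite null, and unrestricted MLE, the gradient statistic and the Pitman alternatives take the form

```latex
% Gradient (Terrell) statistic for H_0: \psi = \psi_0 with nuisance
% parameter \lambda, \theta = (\psi, \lambda), restricted MLE
% \tilde{\theta} = (\psi_0, \tilde{\lambda}) and unrestricted MLE \hat{\theta}:
S_T = U(\tilde{\theta})^{\top} \, (\hat{\theta} - \tilde{\theta}),
% assessed under local (Pitman) alternatives shrinking at rate n^{-1/2}:
H_{1n}:\; \psi = \psi_0 + n^{-1/2}\,\delta .
```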

Relevance:

30.00%

Publisher:

Abstract:

The assessment of the RAMS (Reliability, Availability, Maintainability and Safety) performance of a system generally includes evaluating the "importance" of its components and/or of the basic parameters of the model through the use of importance measures. The analytical equations proposed in this study allow the estimation of the first-order Differential Importance Measure on the basis of the Birnbaum measures of the components, under the hypothesis of uniform percentage changes of the parameters. Aging phenomena are introduced into the model by assuming exponential-linear or Weibull distributions for the failure probabilities. An algorithm based on a combination of Monte Carlo simulation and cellular automata is applied in order to evaluate the performance of a networked system made up of source nodes, user nodes and directed edges subject to failure and repair. Importance sampling techniques are used for the estimation of the first- and total-order Differential Importance Measures through a single simulation of the system's "operational life". All the output variables are computed simultaneously on the basis of the same sequence of involved components, event types (failure or repair) and transition times. The failure/repair probabilities are forced to be the same for all components; the transition times are either sampled from the unbiased probability distributions or forced, for instance by ensuring the occurrence of at least one failure within the system's operational life. The algorithm allows different types of maintenance actions to be considered: corrective maintenance, performed either immediately upon component failure or, for hidden failures that are not detected until an inspection, upon finding that the component has failed; and preventive maintenance, performed at a fixed interval. A restoration factor can be used to determine the age of the component after a repair or any other maintenance action.
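As a rough illustration of the first-order measure under the stated hypothesis (a sketch based on the standard result that, for uniform percentage changes of the parameters, the Differential Importance Measure reduces to a normalized Birnbaum-times-parameter ratio; the variable names are mine):

```python
import numpy as np

def dim_first_order(birnbaum, x):
    """First-order Differential Importance Measure (DIM) per parameter.

    Under uniform percentage changes (dx_i / x_i equal for all i):
        DIM_i = B_i * x_i / sum_j (B_j * x_j),
    where B_i is the Birnbaum measure (partial derivative of the risk
    metric with respect to parameter x_i).
    """
    birnbaum = np.asarray(birnbaum, dtype=float)
    x = np.asarray(x, dtype=float)
    contrib = birnbaum * x
    return contrib / contrib.sum()

# Toy usage with illustrative numbers only:
print(dim_first_order([0.4, 0.3, 0.1], [0.01, 0.05, 0.02]))
```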

Relevance:

30.00%

Publisher:

Abstract:

The collapse of linear polyelectrolyte chains in a poor solvent: when does a collapsing polyelectrolyte collect its counterions? The collapse of polyions in a poor solvent is a complex system and an active research subject in the theoretical polyelectrolyte community. The complexity is due to the subtle interplay between hydrophobic effects, electrostatic interactions, entropy elasticity, intrinsic excluded volume, and specific counterion and co-ion properties. Long-range Coulomb forces can obscure single-molecule properties. The approach presented here is to use just a small amount of screening salt in combination with very high sample dilution, in order to screen intermolecular interactions while preserving intramolecular interactions as far as possible (polyelectrolyte concentration cp ≤ 12 mg/L, salt concentration Cs = 10^-5 mol/L). To our knowledge, this approach has not previously been described in the literature. During collapse, the polyion undergoes a drastic change in size along with a strong reduction of free counterions in solution. Therefore, light scattering was utilized to obtain the size of the polyion, while a conductivity setup was developed to monitor the progress of counterion collection by the polyion. Partially quaternized PVPs below and above the Manning limit were investigated and compared with the collapse of their uncharged precursor. The collapses were induced by an isorefractive solvent/non-solvent mixture consisting of 1-propanol and 2-pentanone, with a nearly constant dielectric constant. The solvent quality for the uncharged polyion could be quantified, which, for the first time, allowed the experimental investigation of the effect of electrostatic interaction prior to and during polyion collapse. Given that the Manning parameter M for QPVP4.3 is as low as lB/c = 0.6 (lB the Bjerrum length and c the mean contour distance between two charges), no counterion binding should occur. However, the Walden product decreases upon the first addition of non-solvent, and the decrease accelerates when the structural collapse sets in. Since the dielectric constant of the solvent remains virtually constant during the chain collapse, the counterion binding is caused entirely by the reduction in the polyion chain dimensions. The collapse is shifted to lower non-solvent weight fractions (wNS) with higher degrees of quaternization, as the samples QPVP20 and QPVP35 show (M = 2.8 and 4.9, respectively). The combination of light scattering and conductivity measurements revealed for the first time that polyion chains already collect their counterions well above the theta-dimension, as soon as the dimensions start to shrink. Because only small amounts of screening salt are present, strong electrostatic interactions bias dynamic as well as static light scattering measurements. An extended Zimm formula was derived to account for this interaction and to obtain the real chain dimensions. The effective degree of dissociation g could be obtained semi-quantitatively using these extrapolated static data in combination with conductivity measurements. One can conclude that the expansion factor a and the effective degree of ionization of the polyion are mutually dependent. In the good solvent regime, g for QPVP4.3, QPVP20 and QPVP35 decreased in the order 1 > g4.3 > g20 > g35. The low values of g for QPVP20 and QPVP35 are assumed to be responsible for the earlier collapse of the more highly quaternized samples. Collapse theory predicts dipole-dipole attraction to increase accordingly, and even predicts a collapse in the good solvent regime; exactly this was observed for the QPVP35 sample.
The experimental results were compared with a newly developed theory of uniform spherical collapse induced by concomitant counterion binding, developed by M. Muthukumar and A. Kundagrami. The theory agrees qualitatively with the location of the phase boundary as well as with the trend of an increasing expansion with increasing degree of quaternization. However, the experimentally determined g for the samples QPVP4.3, QPVP20 and QPVP35 decreases linearly with the degree of quaternization, whereas the theory predicts an almost constant value.
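For reference, the Manning parameter quoted above follows the standard definition (standard textbook form, consistent with the lB/c values given in the abstract):

```latex
M \;=\; \frac{l_B}{c}, \qquad
l_B \;=\; \frac{e^2}{4\pi \varepsilon_0 \varepsilon_r k_B T},
```

with counterion condensation expected for M > 1 (the Manning limit), so that QPVP4.3 (M = 0.6) lies below the limit while QPVP20 (M = 2.8) and QPVP35 (M = 4.9) lie above it.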

Relevance:

30.00%

Publisher:

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.
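A minimal sketch of the data-driven step described above, using scikit-learn's FastICA on a (time x voxel) matrix; the data, dimensions and component count are illustrative assumptions, and a real group-fMRI ICA pipeline involves far more preprocessing:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in data: 200 time points x 5000 voxels of noise in place of
# preprocessed, group-concatenated fMRI data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))

# Spatial ICA: estimate independent spatial maps and their time courses.
ica = FastICA(n_components=20, random_state=0)
time_courses = ica.fit_transform(X)   # shape (200, 20)
spatial_maps = ica.components_        # shape (20, 5000); label as A, V, AV

def meets_max_criterion(beta_a, beta_v, beta_av):
    """Max-criterion for multisensory integration (A < AV > V):
    the AV response must exceed the stronger unisensory response."""
    return beta_av > max(beta_a, beta_v)
```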

Relevance:

30.00%

Publisher:

Abstract:

Recent optimizations of NMR spectroscopy have focused on innovations in hardware, such as novel probes and higher field strengths. Only recently has the potential to enhance the sensitivity of NMR through data acquisition strategies been investigated. This thesis focuses on enhancing the signal-to-noise ratio (SNR) of NMR using non-uniform sampling (NUS). After first establishing the concept and exact theory of compounding sensitivity enhancements in multiple non-uniformly sampled indirect dimensions, a new result was derived: NUS enhances both SNR and resolution at any given signal evolution time. In contrast, uniform sampling alternately optimizes SNR (t < 1.26T2) or resolution (t ~ 3T2), each at the expense of the other. Experiments were designed and conducted on a plant natural product to explore this behavior of NUS, in which the SNR and resolution continue to improve as acquisition time increases. Absolute sensitivity improvements of 1.5 and 1.9 are possible in each indirect dimension for matched and 2x-biased exponentially decaying sampling densities, respectively, at an acquisition time of 3T2. Recommendations for breaking into the linear regime of maximum entropy (MaxEnt) reconstruction are proposed. Furthermore, examination of a novel sinusoidal sampling density resulted in improved line shapes in MaxEnt reconstructions of NUS data and enhancement comparable to a matched exponential sampling density. The Absolute Sample Sensitivity derived and demonstrated here for NUS holds great promise in expanding the adoption of non-uniform sampling.
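As an illustration of an exponentially decaying sampling density (a sketch: "matched" is taken to mean the density decays with the same time constant T2 as the signal, and all parameter values below are invented):

```python
import numpy as np

def exponential_nus_schedule(n_total, n_sampled, t2, dwell, bias=1.0, seed=0):
    """Choose which of n_total evolution increments to acquire.

    The sampling density decays as exp(-bias * t / T2): bias=1 gives a
    density matched to the signal decay, bias=2 the "2x biased" case.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_total) * dwell
    p = np.exp(-bias * t / t2)
    p /= p.sum()
    # Sample distinct increment indices according to the density.
    idx = rng.choice(n_total, size=n_sampled, replace=False, p=p)
    return np.sort(idx)

# e.g. keep 64 of 256 increments; T2 = 50 ms, dwell time = 0.5 ms
print(exponential_nus_schedule(256, 64, t2=0.050, dwell=0.0005))
```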

Relevance:

30.00%

Publisher:

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
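A minimal sketch of the rotation step (assuming the marginal mean and variance have already been estimated; all names are illustrative): if V = L L^T is the Cholesky factorization of the estimated marginal variance, multiplying the marginal residuals by L^-1, a Cholesky factor of V^-1, yields residuals with approximately identity covariance under a correctly specified model.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def rotated_residuals(y, mu_hat, v_hat):
    """Rotate marginal residuals toward i.i.d. N(0, 1) scale.

    y      : observed outcome vector
    mu_hat : estimated marginal mean (e.g. X @ beta_hat)
    v_hat  : estimated marginal variance matrix of y
    """
    r = y - mu_hat                    # marginal residuals
    L = cholesky(v_hat, lower=True)   # V = L @ L.T
    # Solving L z = r gives z = L^{-1} r, so Cov(z) is approximately I.
    return solve_triangular(L, r, lower=True)

def ecdf(z):
    """Empirical CDF evaluated at the sorted rotated residuals."""
    z = np.sort(z)
    return z, np.arange(1, len(z) + 1) / len(z)
```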

Relevance:

30.00%

Publisher:

Abstract:

An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
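One way to read the "empirical alternative" idea is as a Neyman-Pearson-style ranking against an estimated alternative. The following is a conceptual sketch only; kernel density estimation stands in for the paper's estimator (which is not specified here), and the null is taken to be standard normal:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def empirical_alternative_ranking(t_stats):
    """Rank hypotheses by an estimated likelihood ratio f1_hat / f0.

    f0 is the theoretical null density of the statistics (standard
    normal here); f1_hat is estimated from the observed statistics,
    which are a mixture of null and non-null cases.
    """
    f1_hat = gaussian_kde(t_stats)
    lr = f1_hat(t_stats) / norm.pdf(t_stats)
    return np.argsort(-lr)  # most promising hypotheses first

# Toy usage: 1000 null statistics plus 50 shifted (non-null) ones.
rng = np.random.default_rng(1)
t = np.concatenate([rng.standard_normal(1000), rng.normal(3.0, 1.0, 50)])
print(empirical_alternative_ranking(t)[:10])
```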

Relevance:

30.00%

Publisher:

Abstract:

We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
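The statistic has the familiar Hausman form; in sketch notation (mine, not necessarily the paper's), with the marginal and conditional ML estimates of the chosen subset of q fixed effects:

```latex
d \;=\; \hat{\beta}_{M} - \hat{\beta}_{C}, \qquad
T \;=\; d^{\top}\, \widehat{\mathrm{Var}}(d)^{-1}\, d
\;\xrightarrow{d}\; \chi^{2}_{q}
\quad \text{under } H_0: \text{ the mixing distribution is correctly specified.}
```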

Relevance:

30.00%

Publisher:

Abstract:

Aims: Floral traits are frequently used in traditional plant systematics because of their assumed constancy. One potential reason for the apparent constancy of flower size is that effective pollen transfer between flowers depends on the accuracy of the physical fit between the flower and pollinator. Therefore, flowers are likely to be under stronger stabilizing selection for uniform size than vegetative plant parts. Moreover, as predicted by the pollinator-mediated stabilizing selection (PMSS) hypothesis, an accurate fit between flowers and their pollinators is likely to be more important for specialized pollination systems, as found in many species with bilaterally symmetric (zygomorphic) flowers, than for species with radially symmetric (actinomorphic) flowers. Methods: In a comparative study of 15 zygomorphic and 13 actinomorphic species in Switzerland, we tested whether variation in flower size, among and within individuals, is smaller than variation in leaf size, and whether variation in flower size is smaller in zygomorphic than in actinomorphic species. Important findings: Indeed, variation in leaf length was significantly larger than variation in flower length and width. Within-individual variation in flower and leaf sizes did not differ significantly between zygomorphic and actinomorphic species. In line with the predictions of the PMSS, among-individual variation in flower length and flower width was significantly smaller for zygomorphic species than for actinomorphic species, while the two groups did not differ in leaf length variation. This suggests that plants with zygomorphic flowers have undergone stronger selection for uniform flowers than plants with actinomorphic flowers. It supports the view that the uniformity of flowers compared to vegetative structures within species, as long observed in traditional plant systematics, is, at least in part, a consequence of the requirement for effective pollination.
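A sketch of the kind of variation comparison described, computing a per-species among-individual coefficient of variation (CV) and comparing symmetry groups with an unpaired t-test; the data below are invented stand-ins:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

def coefficient_of_variation(x):
    """Among-individual CV of a trait measured once per individual."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

# Toy data: flower lengths (mm) for 30 individuals per species;
# zygomorphic species simulated with a tighter spread.
zygo = [rng.normal(20.0, 0.8, 30) for _ in range(15)]
actino = [rng.normal(20.0, 1.6, 30) for _ in range(13)]

zygo_cv = [coefficient_of_variation(s) for s in zygo]
actino_cv = [coefficient_of_variation(s) for s in actino]

# Unpaired t-test on per-species CVs between the two symmetry groups.
t_stat, p_value = ttest_ind(zygo_cv, actino_cv)
print(t_stat, p_value)
```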

Relevance:

30.00%

Publisher:

Abstract:

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods are needed to provide accurate estimates of test characteristics for diagnostic tests, to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the "Bayesian inference Using Gibbs Sampling" (BUGS) implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
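In sketch form, the bivariate binomial model favored in the conclusion is the standard hierarchical formulation (notation mine): for study i, with n_{i1} diseased and n_{i0} non-diseased subjects,

```latex
y_{i1} \sim \mathrm{Binomial}(n_{i1},\, Se_i), \qquad
y_{i0} \sim \mathrm{Binomial}(n_{i0},\, Sp_i),
\\[6pt]
\begin{pmatrix} \mathrm{logit}\, Se_i \\ \mathrm{logit}\, Sp_i \end{pmatrix}
\sim \mathcal{N}_2\!\left(
\begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},
\begin{pmatrix} \sigma_{Se}^{2} & \rho\,\sigma_{Se}\sigma_{Sp} \\
\rho\,\sigma_{Se}\sigma_{Sp} & \sigma_{Sp}^{2} \end{pmatrix}
\right).
```

Study-level covariates, such as the imaging technique, can enter through the means, which is what permits the direct comparison between tests mentioned in strength (1).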