975 results for anisotropic finite-size scaling
Abstract:
Indentation of ceramic materials with smooth indenters such as parabolae of revolution and spheres can be conducted in the elastic regime to relatively high loads. Ceramic single crystals thus provide excellent calibration media for load- and depth-sensing indentation testing; however, they are generally anisotropic and a complete elastic analysis is cumbersome. This study presents a simplified procedure for the determination of the stiffness of contact for the indentation of an anisotropic half-space by a rigid frictionless parabola of revolution which, to first order, approximates spherical indentation. Using a similar approach, a new procedure is developed for analysing conical indentation of anisotropic elastic media. For both indenter shapes, the contact is found to be elliptical, and equations are determined for the size, shape and orientation of the ellipse and the indentation modulus.
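In the isotropic limit, this simplified procedure reduces to the classical Hertz relations for a parabolic indenter, which connect contact radius, load and stiffness of contact. A minimal sketch of those relations, with illustrative SI values that are not taken from the study:

```python
import math

def hertz_contact(R, h, E_r):
    """Classical Hertz relations for a rigid parabolic (spherical, to first
    order) indenter on an isotropic elastic half-space.

    R   : indenter radius (m)
    h   : elastic penetration depth (m)
    E_r : reduced (indentation) modulus (Pa)
    Returns (contact radius a, load P, contact stiffness S = dP/dh).
    """
    a = math.sqrt(R * h)                              # contact radius
    P = (4.0 / 3.0) * E_r * math.sqrt(R) * h ** 1.5   # Hertzian load
    S = 2.0 * E_r * a                                 # stiffness of contact
    return a, P, S

# Illustrative numbers only: 10 um indenter radius, 100 nm depth, 300 GPa modulus
R, h, E_r = 10e-6, 100e-9, 300e9
a, P, S = hertz_contact(R, h, E_r)
```

For the anisotropic half-space treated in the study, the circular contact becomes elliptical and E_r is replaced by a direction-dependent indentation modulus.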
Abstract:
Since the introduction of fiber reinforced polymers (FRP) for the repair and retrofit of concrete structures in the 1980's, considerable research has been devoted to the feasibility of their application and predictive modeling of their performance. However, the effects of flaws present in the constitutive components and the practices in substrate preparation and treatment have not yet been thoroughly studied. This research aims at investigating the effect of surface preparation and treatment for the pre-cured FRP systems and the groove size tolerance for near surface mounted (NSM) FRP systems, and at setting thresholds for guaranteed system performance. This study was conducted as part of the National Cooperative Highway Research Program (NCHRP) Project 10-59B to develop construction specifications and a process control manual for repair and retrofit of concrete structures using bonded FRP systems. The research included both analytical and experimental components. The experimental program for the pre-cured FRP systems consisted of a total of twenty-four (24) reinforced concrete (RC) T-beams with various surface preparation parameters and surface flaws, including roughness, flatness, voids and cracks (cuts). For the NSM FRP systems, a total of twelve (12) additional RC T-beams were tested with different groove sizes for FRP bars and strips. The analytical program included developing an elaborate nonlinear finite element model using the general purpose software ANSYS. The bond interface between FRP and concrete was modeled by a series of nonlinear springs. The model was validated against test data from the present study as well as those available from the literature. The model was subsequently used to extend the experimental range of parameters for surface flatness in pre-cured FRP systems and for groove size study in the NSM FRP systems.
Test results, confirmed by further analyses, indicated that contrary to the general belief in the industry, the impact of surface roughness on the global performance of pre-cured FRP systems was negligible. The study also verified that threshold limits set for wet lay-up FRP systems can be extended to pre-cured systems. The study showed that larger surface voids and cracks (cuts) can adversely impact both the strength and ductility of pre-cured FRP systems. On the other hand, frequency (or spacing) of surface cracks (cuts) may only affect system ductility rather than its strength. Finally, within the range studied, groove size tolerance of ±1/8 in. does not appear to have an adverse effect on the performance of NSM FRP systems.
Abstract:
Antenna design is an iterative process in which structures are analyzed and changed to comply with certain required performance parameters. The classic approach starts with analyzing a "known" structure, obtaining the value of its performance parameter and changing this structure until the "target" value is achieved. This process relies on having an initial structure which follows some known or "intuitive" patterns already familiar to the designer. The purpose of this research was to develop a method of designing UWB antennas. What is new in this proposal is that the design process is reversed: the designer starts with the target performance parameter and obtains a structure as the result of the design process. This method provides a new way to replicate and optimize existing performance parameters. The basis of the method is the use of a Genetic Algorithm (GA) adapted to the format of the chromosome that is evaluated by the electromagnetic (EM) solver. For the electromagnetic study we used the XFDTD™ program, based on the Finite-Difference Time-Domain technique. The programming portion of the method was created in the MATLAB environment, which serves as the interface for converting chromosomes and file formats and for transferring data between XFDTD™ and the GA. A high level of customization had to be written into the code to work with the specific files generated by the XFDTD™ program. Two types of cost functions were evaluated: the first seeking broadband performance within the UWB band, and the second searching for curve replication of a reference geometry. The performance of the method was evaluated considering the speed provided by the computer resources used. A balance between accuracy, data file size and speed of execution was achieved by defining parameters in the GA code as well as by changing the internal parameters of the XFDTD™ projects.
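The GA loop described above (evaluate chromosomes, select, cross over, mutate, iterate toward a target cost) can be sketched with a toy stand-in for the EM solver. The bit-matching cost function below is purely illustrative and replaces the XFDTD™ evaluation of the actual work:

```python
import random

random.seed(1)

def cost(chrom):
    # Toy stand-in for the EM-solver evaluation (XFDTD in the original work):
    # reward chromosomes matching a fixed target bit pattern.
    target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
    return sum(g == t for g, t in zip(chrom, target))

def evolve(pop_size=30, n_genes=16, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost, reverse=True)
        elite = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=cost)

best = evolve()
```

In the actual method each chromosome encodes an antenna geometry and the cost is computed from the solver's simulated response rather than from a known target pattern.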
The results showed that the GA produced geometries that were analyzed by the XFDTD™ program and changed following the search criteria until reaching the target value of the cost function. Results also showed how the parameters can change the search criteria and influence the running of the code to provide a variety of geometries.
Abstract:
Physiological processes and local-scale structural dynamics of mangroves are relatively well studied. Regional-scale processes, however, are not as well understood. Here we provide long-term data on trends in structure and forest turnover at a large scale, following hurricane damage in mangrove ecosystems of South Florida, U.S.A. Twelve mangrove vegetation plots were monitored at periodic intervals, between October 1992 and March 2005. Mangrove forests of this region are defined by a −1.5 scaling relationship between mean stem diameter and stem density, mirroring self-thinning theory for mono-specific stands. This relationship is reflected in tree size frequency scaling exponents which, through time, have exhibited trends toward a community average that is indicative of full spatial resource utilization. These trends, together with an asymptotic standing biomass accumulation, indicate that coastal mangrove ecosystems do adhere to size-structured organizing principles as described for upland tree communities. Regenerative dynamics differ between areas inside and outside of the primary wind-path of Hurricane Andrew, which occurred in 1992. Forest dynamic turnover rates, however, are steady through time. This suggests that ecological, more so than structural, factors control forest productivity. In agreement, the relative mean rate of biomass growth exhibits an inverse relationship with the seasonal range of porewater salinities. The ecosystem average in forest scaling relationships may provide a useful investigative tool of mangrove community biomass relationships, as well as offer a robust indicator of general ecosystem health for use in mangrove forest ecosystem management and restoration.
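A scaling exponent like the −1.5 diameter-density relation is typically recovered as the slope of an ordinary least-squares fit in log-log space. A minimal sketch on synthetic plot data generated with that exponent (not the study's measurements):

```python
import math
import random

random.seed(0)

def fit_loglog_slope(x, y):
    """Slope of the OLS fit of log10(y) against log10(x)."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic plots obeying D ~ N^(-1.5) with small lognormal scatter;
# units and the 100.0 prefactor are arbitrary.
density = [10 ** random.uniform(2.0, 4.0) for _ in range(50)]
diameter = [100.0 * n ** -1.5 * 10 ** random.gauss(0.0, 0.02) for n in density]
slope = fit_loglog_slope(density, diameter)   # close to -1.5
```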
Abstract:
The relationship between granular (poison) gland size and density was examined in an ontogenetic series of the strawberry dart-poison frog, Dendrobates pumilio. Specimens used in this study were collected from the La Selva Biological Station in northeastern Costa Rica. Patches of skin from the dorsal surface of seven frogs, ranging in size from 11 to 23 mm snout-vent length (SVL), were fixed and embedded in paraffin for histological sectioning. Poison gland size and density were quantified microscopically in these sections. Poison glands are uniformly distributed across the skin, and mean poison gland diameter increases at a rate faster than snout-vent length, from 42.5 µm at SVL 11 mm to 120.0 µm at SVL 23 mm. Conversely, gland density decreases with body size from 71.9 glands/mm² to 33.2 glands/mm². Due to the positive allometric growth of the poison glands, the percentage of skin surface occupied by poison glands increases from 10.1-22.1% in small frogs (SVL < 18 mm) to 50.0-65.2% in large frogs (SVL > 19 mm), resulting in more toxin per mm² in the larger animals. The largest increase in toxicity is correlated temporally with the onset of sexual maturity rather than with changes in aposematic coloring.
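The coverage percentages follow from density and mean diameter if glands are treated as circles in the plane of the skin. A quick check with the abstract's mean values reproduces the roughly 10% coverage at the small end; the 50.0-65.2% range reported for large frogs reflects per-specimen measurements rather than the overall means used here:

```python
import math

def gland_coverage(density_per_mm2, mean_diameter_um):
    """Fraction of skin surface occupied by circular poison glands."""
    r_mm = (mean_diameter_um / 1000.0) / 2.0   # radius in mm
    return density_per_mm2 * math.pi * r_mm ** 2

small = gland_coverage(71.9, 42.5)    # ~0.10, matching the 10.1% lower bound
large = gland_coverage(33.2, 120.0)   # higher coverage despite lower density
```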
Abstract:
Since the introduction of fiber reinforced polymers (FRP) for the repair and retrofit of concrete structures in the 1980's, considerable research has been devoted to the feasibility of their application and predictive modeling of their performance. However, the effects of flaws present in the constitutive components and the practices in substrate preparation and treatment have not yet been thoroughly studied. This research aims at investigating the effect of surface preparation and treatment for the pre-cured FRP systems and the groove size tolerance for near surface mounted (NSM) FRP systems, and at setting thresholds for guaranteed system performance. The research included both analytical and experimental components. The experimental program for the pre-cured FRP systems consisted of a total of twenty-four (24) reinforced concrete (RC) T-beams with various surface preparation parameters and surface flaws, including roughness, flatness, voids and cracks (cuts). For the NSM FRP systems, a total of twelve (12) additional RC T-beams were tested with different groove sizes for FRP bars and strips. The analytical program included developing an elaborate nonlinear finite element model using the general purpose software ANSYS. The model was subsequently used to extend the experimental range of parameters for surface flatness in pre-cured FRP systems, and for groove size study in the NSM FRP systems. Test results, confirmed by further analyses, indicated that, contrary to the general belief in the industry, the impact of surface roughness on the global performance of pre-cured FRP systems was negligible. The study also verified that threshold limits set for wet lay-up FRP systems can be extended to pre-cured systems. The study showed that larger surface voids and cracks (cuts) can adversely impact both the strength and ductility of pre-cured FRP systems.
On the other hand, frequency (or spacing) of surface cracks (cuts) may only affect system ductility rather than its strength. Finally, within the range studied, groove size tolerance of ±1/8 in. does not appear to have an adverse effect on the performance of NSM FRP systems.
Abstract:
Although the Standard Cosmological Model is generally accepted by the scientific community, a number of issues remain unresolved. From the observable characteristics of the structures in the Universe, it should be possible to impose constraints on the cosmological parameters. Cosmic voids (CV) are a major component of the large-scale structure (LSS) and have been shown to possess great potential for constraining dark energy (DE) and testing theories of gravity, but a gap between CV observations and theory still persists. A theoretical model for the statistical distribution of voids as a function of size exists (SvdW); however, the SvdW model has been unsuccessful in reproducing the results obtained from cosmological simulations. This undermines the possibility of using voids as cosmological probes. The goal of this thesis work is to close the gap between theoretical predictions and measured distributions of cosmic voids. We develop an algorithm to identify voids in simulations, consistently with theory, and inspect the possibilities offered by a recently proposed refinement of the SvdW model (the Vdn model; Jennings et al., 2013). Comparing void catalogues to theory, we validate the Vdn model, finding that it is reliable over a large range of radii, at all the redshifts considered and for all the cosmological models inspected. We have then searched for a size function model for voids identified in a distribution of biased tracers. We find that naively applying the same procedure used for the unbiased tracers to a halo mock distribution does not provide successful results, suggesting that the Vdn model needs to be reconsidered when dealing with biased samples. We therefore test two alternative extensions of the model and find that two scaling relations exist: both the dark-matter void radii and the underlying dark-matter density contrast scale with the halo-defined void radii. We use these findings to develop a semi-analytical model which gives promising results.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
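The idea of replacing an intractable posterior with a Gaussian centred at a point of high posterior mass can be illustrated with a generic Laplace approximation. This is a standard textbook sketch applied to a Beta density as a stand-in posterior, not the optimal-Gaussian derivation for Diaconis--Ylvisaker priors developed in Chapter 4:

```python
import math

def laplace_approx_beta(a, b):
    """Gaussian (Laplace) approximation to a Beta(a, b) density at its mode.

    Requires a, b > 1 so the mode is interior. Returns (mean, sd) of the
    approximating Gaussian: the mode, and the inverse square root of the
    negative second derivative of the log density there.
    """
    mode = (a - 1.0) / (a + b - 2.0)
    precision = (a - 1.0) / mode ** 2 + (b - 1.0) / (1.0 - mode) ** 2
    return mode, math.sqrt(1.0 / precision)

# Illustrative values only: a Beta(40, 60) posterior for a proportion
mu, sigma = laplace_approx_beta(40.0, 60.0)
```

For well-concentrated posteriors the approximation is typically accurate near the mode, which is the regime the finite-sample bounds in Chapter 4 quantify for the log-linear case.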
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
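The slow mixing described above is easy to reproduce with the Albert-Chib truncated-normal sampler on an intercept-only probit model. The rare-event data below are synthetic and the sampler is a minimal sketch of the standard algorithm, not the chapter's experiments:

```python
import random
from statistics import NormalDist

random.seed(0)
nd = NormalDist()

def sample_trunc_normal(mu, lower=None, upper=None):
    """Inverse-CDF sampling from N(mu, 1) truncated to (lower, upper)."""
    lo = nd.cdf(lower - mu) if lower is not None else 0.0
    hi = nd.cdf(upper - mu) if upper is not None else 1.0
    u = min(max(random.uniform(lo, hi), 1e-12), 1.0 - 1e-12)
    return mu + nd.inv_cdf(u)

def probit_da_sampler(y, iters):
    """Albert-Chib data augmentation for an intercept-only probit model."""
    n = len(y)
    beta = 0.0
    chain = []
    for _ in range(iters):
        # latent z_i | beta, y_i: truncated normal with sign fixed by y_i
        z = [sample_trunc_normal(beta, lower=0.0) if yi == 1
             else sample_trunc_normal(beta, upper=0.0) for yi in y]
        # beta | z: N(mean(z), 1/n) under a flat prior
        beta = random.gauss(sum(z) / n, (1.0 / n) ** 0.5)
        chain.append(beta)
    return chain

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# Rare events: 5 successes in 1000 trials; the chain for beta is
# expected to show very high autocorrelation in this regime.
y = [1] * 5 + [0] * 995
chain = probit_da_sampler(y, iters=300)
```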
Abstract:
At the jamming transition, amorphous packings are known to display anomalous vibrational modes with a density of states (DOS) that remains constant at low frequency. The scaling of the DOS at higher packing fractions remains, however, unclear. One might expect to find a simple Debye scaling, but recent results from effective medium theory and the exact solution of mean-field models both predict an anomalous, non-Debye scaling. Being mean-field in nature, however, these solutions are only strictly valid in the limit of infinite spatial dimension, and it is unclear what value they have for finite-dimensional systems. Here, we study packings of soft spheres in dimensions 3 through 7 and find, away from jamming, a universal non-Debye scaling of the DOS that is consistent with the mean-field predictions. We also consider how the soft mode participation ratio evolves as dimension increases.
Abstract:
Semiconductor chip packaging has evolved from single chip packaging to 3D heterogeneous system integration using multichip stacking in a single module. One of the key challenges in 3D integration is the high density interconnects that need to be formed between the chips with through-silicon-vias (TSVs) and inter-chip interconnects. Anisotropic Conductive Film (ACF) technology is one of the low-temperature, fine-pitch interconnect methods that has been considered as a potential replacement for solder interconnects, in line with the continuous scaling of interconnects in the IC industry. However, conventional ACF materials are facing challenges in accommodating the reduced pad and pitch sizes due to their micro-size particles and the particle agglomeration issue. A new interconnect material - Nanowire Anisotropic Conductive Film (NW-ACF), composed of high-density copper nanowires of ~200 nm diameter and 10-30 µm length that are vertically distributed in a polymeric template - is developed in this work to tackle the constraints of conventional ACFs and to serve as an inter-chip interconnect solution for potential three-dimensional (3D) applications.
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the model performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance in general and specific to the HPC environment.
We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
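Parallel efficiency in strong-scaling tests like these is the speed-up divided by the core-count ratio relative to a reference run. A minimal sketch with hypothetical timings (the 70 % figure above comes from the study's own measurements, not from these numbers):

```python
def parallel_efficiency(t_ref, p_ref, t_p, p):
    """Strong-scaling parallel efficiency relative to a reference run.

    t_ref : wall time of the reference run on p_ref cores
    t_p   : wall time of the scaled run on p cores
    Perfect scaling gives 1.0; values below 1.0 indicate overhead.
    """
    speedup = t_ref / t_p
    return speedup / (p / p_ref)

# Hypothetical timings: doubling the core count cuts the wall time
# from 100 to 60 time units, i.e. ~83 % efficiency.
eff = parallel_efficiency(t_ref=100.0, p_ref=65536, t_p=60.0, p=131072)
```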
Abstract:
The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology.
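One way to picture habitat-mediated reductions in capture rate is a Holling type II functional response whose attack rate depends on the interaction context. The attack-rate values below are hypothetical illustrations, not parameters fitted to the feeding trials:

```python
def holling_type2(N, a, h):
    """Holling type II functional response: per-predator feeding rate.

    N : prey density
    a : attack (capture) rate, here assumed to depend on habitat context
    h : handling time per prey item
    """
    return a * N / (1.0 + a * h * N)

# Hypothetical context-dependent attack rates, mimicking the reported
# successive reductions from 3D water column to complex benthic substrate.
contexts = {"3D water column": 0.9, "simple substrate": 0.5, "complex substrate": 0.2}
rates = {name: holling_type2(N=50.0, a=a, h=0.1) for name, a in contexts.items()}
```

Lower attack rates flatten the response near its rise, which is one mechanism by which substrate complexity can stabilise prey populations in simulations.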
Abstract:
The ultrasonic non-destructive testing of components may encounter considerable difficulties in the interpretation of some inspection results, mainly in anisotropic crystalline structures. A numerical method for the simulation of elastic wave propagation in homogeneous, elastically anisotropic media, based on the general finite element approach, is used to help this interpretation. The successful modeling of the elastic field associated with NDE is based on the generation of a realistic pulsed ultrasonic wave, which is launched from a piezoelectric transducer into the material under inspection. The values of the elastic constants are information of great interest, since they enable the application of analytical models to problems of small and medium complexity, as well as of numerical analysis programs such as finite elements and/or boundary elements. The aim of this work is the comparison between the results of the numerical solution of an ultrasonic wave, obtained from a transient excitation pulse that can be specified by either a force or a displacement variation across the aperture of the transducer, and the results obtained from an experiment carried out on an aluminum block in the IEN Ultrasonic Laboratory. The wave propagation can be simulated using all the characteristics of the material used in the experimental evaluation, together with the boundary conditions, and from these results the comparison can be made.
Abstract:
We develop the a-posteriori error analysis of hp-version interior-penalty discontinuous Galerkin finite element methods for a class of second-order quasilinear elliptic partial differential equations. Computable upper and lower bounds on the error are derived in terms of a natural (mesh-dependent) energy norm. The bounds are explicit in the local mesh size and the local degree of the approximating polynomial. The performance of the proposed estimators within an automatic hp-adaptive refinement procedure is studied through numerical experiments.
Abstract:
We consider the a priori error analysis of hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form under weak assumptions on the mesh design and the local finite element spaces employed. In particular, we prove a priori hp-error bounds for linear target functionals of the solution, on (possibly) anisotropic computational meshes with anisotropic tensor-product polynomial basis functions. The theoretical results are illustrated by a numerical experiment.