899 results for Isotropic and Anisotropic models
Abstract:
This paper presents an overview of depth-averaged modelling of fast catastrophic landslides in which coupling of the solid skeleton and pore fluids (air and water) is important. The first goal is to show how Biot-Zienkiewicz models can be applied to develop depth-integrated, coupled models. The second objective of the paper is to consider the link that can be established between rheological and constitutive models. Perzyna's viscoplasticity can be considered a general framework within which rheological models such as Bingham and cohesive frictional fluids can be derived. Among the several alternative numerical models, we focus here on SPH, which has not been widely applied by engineers to model landslide propagation. We propose an improvement, based on combining finite difference meshes associated with SPH nodes, to describe pore pressure evolution inside the landslide mass. We devote a section to analyzing the performance of the models, considering three sets of tests and examples that allow us to assess model performance and limitations: (i) problems having an analytical solution, (ii) small-scale laboratory tests, and (iii) real cases for which we have had access to reliable information.
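As an illustration of the rheological link described above, the following minimal sketch (a simple-shear setting with illustrative parameter values and a hypothetical function name, not the paper's formulation) shows how a Bingham fluid arises as a special case of Perzyna-type viscoplasticity, where flow occurs only when the shear stress exceeds the yield stress:

```python
import numpy as np

def perzyna_shear_rate(tau, tau_y, eta, n=1.0):
    """Perzyna-type viscoplastic shear rate for simple shear.

    gamma_dot = <tau/tau_y - 1>**n / eta, where <.> are Macaulay
    brackets (zero below yield).  With n = 1 this reduces to the
    Bingham law gamma_dot = (tau - tau_y) / (eta * tau_y).
    """
    overstress = np.maximum(tau / tau_y - 1.0, 0.0)  # Macaulay bracket
    return overstress**n / eta

# Illustrative values only (not from the paper): yield stress 100 Pa,
# fluidity parameter eta = 0.1 s.
tau = np.linspace(0.0, 500.0, 6)   # applied shear stresses [Pa]
print(perzyna_shear_rate(tau, tau_y=100.0, eta=0.1))
```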
Abstract:
Author: Kerry W. Holton Title: SCHLEIERMACHER’S DOCTRINE OF BIBLICAL AUTHORITY: AN ALTERNATIVE TO CONTENT-BASED/SUPERNATURALIST AND FUNCTION-BASED/RATIONALIST MODELS Advisor: Theodore M. Vial, Jr. Degree Date: August 2015 This dissertation examines Friedrich Schleiermacher’s understanding of biblical authority and argues that, as an alternative to strictly supernaturalistic and rationalistic models, his understanding allows the New Testament to speak authoritatively in Christian religion in an age of critical, historical awareness. After classifying Schleiermacher’s position in a typology of the doctrine of biblical authority, this dissertation explores his conception of divine revelation and inspiration vis-à-vis scripture. It demonstrates that although he did not believe there is warrant for the claim of a direct connection between divine revelation and scripture, or that scripture is the foundation of faith, he nonetheless asserted that the New Testament is authoritative. He asserted the normative authority of the New Testament on the basis that it is the first presentation of Christian faith. This dissertation examines Schleiermacher’s “canon within the canon,” as well as his denial that the Old Testament shares the same normative worth and inspiration as the New. Although this dissertation finds difficulty with some of Schleiermacher’s views regarding the Old Testament, it names two significant strengths of what is identified as his evangelical, content-based, and rationalist approach to biblical authority. First, it recognizes and values the co-presence and co-activity of the supernatural and the natural in the production of the New Testament canon. This allows both scripture and the church to share religious authority. Second, it allows Christian faith and the historical method to coexist, as it does not require people to contradict what they know to be the case about science, history, and philosophy. Thus, this dissertation asserts that Schleiermacher’s understanding of biblical authority is a robust one, since, for him, the authority of scripture does not lie in some property of the texts themselves that historians or unbelievers can take away.
Abstract:
We investigate whether the relative contributions of genetic and shared environmental factors are associated with an increased risk of melanoma. Data from the Queensland Familial Melanoma Project, comprising 15,907 subjects from 1,912 families, were analyzed to estimate the additive genetic, common environmental and unique environmental contributions to variation in the age at onset of melanoma. Two complementary approaches for analyzing correlated time-to-onset family data were considered: the generalized estimating equations (GEE) method, in which one can estimate relationship-specific dependence simultaneously with regression coefficients that describe the average population response to changing covariates; and a subject-specific Bayesian mixed model in which heterogeneity in regression parameters is explicitly modeled and the different components of variation may be estimated directly. The proportional hazards and Weibull models were utilized, as both provide natural frameworks for estimating relative risks while adjusting for simultaneous effects of other covariates. A simple Markov chain Monte Carlo method was used for covariate imputation of missing data, and the Bayesian model was implemented with Gibbs sampling using the freeware package BUGS. In addition, we used a Bayesian model to investigate the relative contribution of genetic and environmental effects to the expression of naevi and freckles, which are known risk factors for melanoma.
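As a hedged sketch of the kind of parametric survival model described (a Weibull proportional hazards likelihood for right-censored age-at-onset data, fitted by maximum likelihood; variable names and data are illustrative, not from the study):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_ph_negloglik(params, t, delta, X):
    """Negative log-likelihood of a Weibull proportional hazards model.

    Hazard: h(t|x) = k * t**(k-1) * exp(b0 + x.b)
    Cumulative hazard: H(t|x) = t**k * exp(b0 + x.b)
    delta is 1 for observed onsets, 0 for right-censored subjects.
    """
    log_k, b0 = params[0], params[1]
    beta = params[2:]
    k = np.exp(log_k)                 # keep the shape parameter positive
    eta = b0 + X @ beta               # linear predictor
    log_h = np.log(k) + (k - 1.0) * np.log(t) + eta
    H = t**k * np.exp(eta)
    return -(np.sum(delta * log_h) - np.sum(H))

# Simulated toy data (illustrative only).
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))          # two covariates
t = rng.weibull(1.5, size=n) * np.exp(-0.5 * X[:, 0])
cutoff = np.quantile(t, 0.8)
delta = (t < cutoff).astype(float)   # ~20% right-censored
t = np.minimum(t, cutoff)

fit = minimize(weibull_ph_negloglik, x0=np.zeros(2 + X.shape[1]),
               args=(t, delta, X), method="Nelder-Mead")
print(fit.x)  # [log k, intercept, beta_1, beta_2]
```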
Abstract:
Adsorption of ethylene and ethane on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers is studied in detail to investigate the packing efficiency, the two-dimensional critical temperature, and the variation of the isosteric heat of adsorption with loading and temperature. Here we used Monte Carlo simulation in the grand canonical ensemble. A number of two-center Lennard-Jones (2C-LJ) potential models are investigated to study the impact of the choice of potential model on the description of adsorption behavior. We chose two 2C-LJ potential models for our investigation: (i) the UA-TraPPE-LJ model of Martin and Siepmann (J. Phys. Chem. B 1998, 102, 2569-2577) for ethane and of Wick et al. (J. Phys. Chem. B 2000, 104, 8008-8016) for ethylene, and (ii) the AUA4-LJ model of Ungerer et al. (J. Chem. Phys. 2000, 112, 5499-5510) for ethane and of Bourasseau et al. (J. Chem. Phys. 2003, 118, 3020-3034) for ethylene. These models are used to study the adsorption of ethane and ethylene on graphitized thermal carbon black. It is found that the solid-fluid binary interaction parameter is a function of adsorbate and temperature, and that the adsorption isotherms and heat of adsorption are well described by both the UA-TraPPE and AUA models, although the UA-TraPPE model performs slightly better. However, the local distributions predicted by the two models differ slightly. The two models are also used to explore two-dimensional condensation on graphitized thermal carbon black; the two-dimensional critical temperatures obtained are 110 K for ethylene and 120 K for ethane.
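To make the potential model concrete, here is a minimal sketch of the site-site energy between two rigid two-center LJ molecules; the geometry handling is generic, and the parameter values shown are illustrative united-atom ethane-like values, not necessarily those of the cited models:

```python
import numpy as np

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def two_center_lj_energy(com1, com2, axis1, axis2, bond, eps, sigma):
    """Total site-site energy between two rigid 2C-LJ molecules.

    Each molecule has two identical LJ sites placed at +/- bond/2
    along its unit axis from the centre of mass.
    """
    sites1 = [com1 + s * (bond / 2.0) * axis1 for s in (-1.0, 1.0)]
    sites2 = [com2 + s * (bond / 2.0) * axis2 for s in (-1.0, 1.0)]
    return sum(lj(np.linalg.norm(a - b), eps, sigma)
               for a in sites1 for b in sites2)

# Illustrative united-atom parameters (roughly ethane-like):
# eps/kB ~ 98 K, sigma ~ 3.75 A, C-C bond ~ 1.54 A.
e = two_center_lj_energy(np.array([0.0, 0.0, 0.0]),
                         np.array([4.5, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]),
                         np.array([0.0, 0.0, 1.0]),
                         bond=1.54, eps=98.0, sigma=3.75)
print(e, "K")  # energy in units of kB
```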
Abstract:
The similarity among the Peleg, Pilosof–Boquet–Bartholomai and Singh–Kulshrestha models was investigated using the hydration behaviours of whey protein concentrate, wheat starch and whey protein isolate at 30 °C and 100% relative humidity. The three models were shown to be mathematically the same within experimental variation, and they yielded parameters that are related. The models, in their linear and original forms, were suitable (r² > 0.98) for describing the sorption behaviours of the samples, and are sensitive to the length of the sorption segment used in the computation. The whey proteins absorbed more moisture than the wheat starch, and the isolate exhibited a higher sorptive ability than the concentrate.
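For reference, the Peleg model has the form M(t) = M0 + t/(k1 + k2·t), where 1/k1 is the initial sorption rate and M0 + 1/k2 is the equilibrium moisture content. A minimal sketch of fitting it with scipy (the sorption data below are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def peleg(t, m0, k1, k2):
    """Peleg model: M(t) = M0 + t / (k1 + k2 * t)."""
    return m0 + t / (k1 + k2 * t)

# Illustrative sorption data: time [h], moisture [g/100 g solids].
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 24.0])
m = np.array([6.0, 10.1, 12.8, 16.2, 19.5, 21.8, 22.6])

popt, _ = curve_fit(peleg, t, m, p0=[m[0], 0.3, 0.05])
m0, k1, k2 = popt
print(f"M0={m0:.2f}, k1={k1:.3f}, k2={k2:.4f}, M_eq={m0 + 1/k2:.2f}")
```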
Abstract:
Linear models reach their limitations in applications with nonlinearities in the data. In this paper, new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior Divisia measure of the monetary services flow, relative to the traditional Simple Sum measure, could be attributed to some extent to the evaluation of these indices within a linear framework. The results suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
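As a hedged sketch of the linear-versus-nonlinear comparison (not the paper's actual specification): fit a linear autoregression and a small neural network to the same lagged series and compare out-of-sample errors. The data and lag order here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def make_lags(y, p):
    """Build a lagged design matrix X[t] = (y[t-1], ..., y[t-p])."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    return X, y[p:]

rng = np.random.default_rng(1)
# Illustrative nonlinear AR series standing in for inflation data.
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.6 * y[t - 1] + 0.3 * np.tanh(2.0 * y[t - 1]) + rng.normal(0, 0.1)

X, target = make_lags(y, p=4)
split = 300
linear = LinearRegression().fit(X[:split], target[:split])
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                  random_state=0).fit(X[:split], target[:split])

for name, model in [("linear AR", linear), ("NN", nn)]:
    rmse = np.sqrt(np.mean((model.predict(X[split:]) - target[split:])**2))
    print(f"{name}: out-of-sample RMSE = {rmse:.4f}")
```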
Abstract:
This thesis is a study of three techniques to improve the performance of some standard forecasting models, applied to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model is proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. The thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method extends earlier work for models that are linear in parameters to the non-linear multilayer perceptron, and therefore broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve their performance. We combined them with some standard forecasting models: multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided good improvements in prediction performance.
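As a minimal sketch of the Student-t noise idea for a model that is linear in parameters (the thesis's extension would replace the linear predictor with a multilayer perceptron output; all names and data here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def t_negloglik(params, X, y):
    """Negative log-likelihood of y = X.w + e, e ~ Student-t(df, scale)."""
    w = params[:X.shape[1]]
    log_df, log_scale = params[-2], params[-1]  # logs keep both positive
    resid = y - X @ w
    return -np.sum(student_t.logpdf(resid, df=np.exp(log_df),
                                    scale=np.exp(log_scale)))

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + student_t.rvs(df=4, scale=0.5,
                                             size=300, random_state=3)

x0 = np.zeros(X.shape[1] + 2)
fit = minimize(t_negloglik, x0, args=(X, y), method="Nelder-Mead")
w_hat = fit.x[:2]
df_hat, scale_hat = np.exp(fit.x[2]), np.exp(fit.x[3])
print(w_hat, df_hat, scale_hat)
```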
Abstract:
This paper compares the UK/US exchange rate forecasting performance of linear and nonlinear models based on monetary fundamentals with that of a random walk (RW) model. Structural breaks are identified and taken into account. The exchange rate forecasting framework is also used to assess the relative merits of the official Simple Sum and the weighted Divisia measures of money. Overall, there are four main findings. First, the majority of the models with fundamentals are able to beat the RW model in forecasting the UK/US exchange rate. Second, the most accurate forecasts of the UK/US exchange rate are obtained with a nonlinear model. Third, taking structural breaks into account reveals that the Divisia aggregate performs better than its Simple Sum counterpart. Finally, Divisia-based models provide more accurate forecasts than Simple Sum-based models, provided they are constructed within a nonlinear framework.
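The RW benchmark here is simply "tomorrow's forecast is today's value"; a model beats it when the ratio of its RMSE to the RW's RMSE (Theil's U) falls below one. A minimal sketch with made-up series:

```python
import numpy as np

def theils_u(actual, forecast):
    """Ratio of a model's RMSE to the random-walk (no-change) RMSE.

    U < 1 means the model beats the random walk benchmark.
    """
    model_rmse = np.sqrt(np.mean((forecast - actual[1:])**2))
    rw_rmse = np.sqrt(np.mean((actual[:-1] - actual[1:])**2))
    return model_rmse / rw_rmse

# Illustrative exchange-rate series and a stand-in model forecast.
rng = np.random.default_rng(4)
rate = np.cumsum(rng.normal(0, 0.01, size=200)) + 1.5
forecast = rate[:-1] + rng.normal(0, 0.005, size=199)
print(theils_u(rate, forecast))
```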
Abstract:
In this paper, we empirically examine how professional service firms are adapting their promotion and career models to new market and institutional pressures without losing the benefits of the traditional up-or-out tournament. Based on an in-depth qualitative study of 10 large UK-based law firms, we find that most of these firms do not have a formal up-or-out policy but that the up-or-out rule operates in practice. We also find that most firms have introduced alternative roles and a novel career policy that offers a holistic learning and development deal to associates, without any expectation that unsuccessful candidates for promotion to partner should quit the firm. While this policy and the new roles formally contradict the principle of up-or-out by creating permanent non-partner positions, in practice they coexist. We conclude that the motivational power of the up-or-out tournament remains intact, notwithstanding the changes to the internal labour market structure of these professional service firms.
Abstract:
Distributed representations (DR) of cortical channels are pervasive in models of spatio-temporal vision. A central idea that underpins current innovations in DR stems from the extension of 1-D phase into 2-D images. Neurophysiological evidence, however, provides only tenuous support for a quadrature representation in the visual cortex, since even-phase visual units are associated with broader orientation tuning than odd-phase visual units (J. Neurophysiol., 88, 455–463, 2002). We demonstrate that applying the steering theorems to a 2-D definition of phase afforded by the Riesz Transform (IEEE Trans. Sig. Proc., 49, 3136–3144), extended to include a Scale Transform, allows one to smoothly interpolate across 2-D phase and pass from circularly symmetric to orientation-tuned visual units, and from more narrowly tuned odd-symmetric units to even ones. Steering across 2-D phase and scale can be orthogonalized via a linearizing transformation. Using the tilt after-effect as an example, we argue that the effects of visual adaptation can be better explained via an orthogonal rather than a channel-specific representation of visual units, because the orthogonal representation explicitly accounts for isotropic and cross-orientation adaptation effects, and both direct and indirect tilt after-effects can be explained from it.
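A minimal sketch of the 2-D phase construction via the Riesz transform (computed in the frequency domain with numpy's FFT; the image is a synthetic grating, and details such as windowing and band-pass filtering are omitted):

```python
import numpy as np

def riesz_transform(image):
    """Return the two Riesz components (monogenic signal) of an image.

    Frequency responses -i*u/|w| and -i*v/|w| are applied to the image
    spectrum; together with the image they give 2-D amplitude, phase
    and orientation.
    """
    rows, cols = image.shape
    u = np.fft.fftfreq(cols)[np.newaxis, :]
    v = np.fft.fftfreq(rows)[:, np.newaxis]
    mag = np.sqrt(u**2 + v**2)
    mag[0, 0] = 1.0                 # avoid division by zero at DC
    spectrum = np.fft.fft2(image)
    r1 = np.real(np.fft.ifft2(-1j * u / mag * spectrum))
    r2 = np.real(np.fft.ifft2(-1j * v / mag * spectrum))
    return r1, r2

# Synthetic oriented grating; local amplitude and 2-D phase follow
# from the monogenic triple (image, r1, r2).
yy, xx = np.mgrid[0:128, 0:128]
grating = np.cos(0.3 * xx + 0.1 * yy)
r1, r2 = riesz_transform(grating)
amplitude = np.sqrt(grating**2 + r1**2 + r2**2)
phase = np.arctan2(np.sqrt(r1**2 + r2**2), grating)
print(amplitude.mean(), phase.mean())
```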
Abstract:
In this study, discrete time one-factor models of the term structure of interest rates and their application to the pricing of interest rate contingent claims are examined theoretically and empirically. The first chapter provides a discussion of the issues involved in the pricing of interest rate contingent claims and a description of the Ho and Lee (1986), Maloney and Byrne (1989), and Black, Derman, and Toy (1990) discrete time models. In the second chapter, a general discrete time model of the term structure is presented, from which the Ho and Lee, Maloney and Byrne, and Black, Derman, and Toy models can all be obtained. The general model also provides for the specification of an additional model, the ExtendedMB model. The third chapter illustrates the application of the discrete time models to the pricing of a variety of interest rate contingent claims. In the final chapter, the performance of the Ho and Lee, Black, Derman, and Toy, and ExtendedMB models in the pricing of Eurodollar futures options is investigated empirically. The results indicate that the Black, Derman, and Toy and ExtendedMB models outperform the Ho and Lee model. Little difference in the performance of the Black, Derman, and Toy and ExtendedMB models is detected.
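The common computational core of these discrete time models is backward induction on a recombining short-rate lattice. A minimal, model-agnostic sketch (the rates and probabilities below are illustrative, not a calibrated Ho and Lee or Black, Derman, and Toy tree):

```python
import numpy as np

def lattice_price(short_rates, payoff, q=0.5):
    """Price a claim by backward induction on a recombining lattice.

    short_rates[i][j]: one-period rate at step i, state j (j = 0..i).
    payoff: terminal payoffs, one per state at the final step.
    q: risk-neutral probability of an up move.
    """
    values = np.asarray(payoff, dtype=float)
    for i in range(len(short_rates) - 1, -1, -1):
        r = np.asarray(short_rates[i])
        # discount the expectation over the two successor states
        values = np.exp(-r) * (q * values[1:] + (1 - q) * values[:-1])
    return values[0]

# Illustrative 3-step lattice of one-period rates (continuously
# compounded, per period), e.g. an additive Ho and Lee-style tree.
rates = [[0.05],
         [0.045, 0.055],
         [0.04, 0.05, 0.06]]
zero_bond = lattice_price(rates, payoff=[1.0, 1.0, 1.0, 1.0])
print(f"3-period zero-coupon bond price: {zero_bond:.4f}")
```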
Abstract:
A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using Lattice Boltzmann methods (LBMs). These methods contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains. The LBM solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM’s flexibility as a solute transport solver. The LBM is applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. A LBM-based macroscopic flow solver (Darcy’s law-based) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches. The analogy between Fick’s second law (diffusion equation) and the transient ground water flow equation is used to solve the transient head distribution. An altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and the storage coefficient are linked with LB parameters. These capabilities complete the LBM’s effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
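For readers unfamiliar with the method, a minimal D2Q9 BGK lattice Boltzmann step (periodic domain, no boundaries or conduits; the standard textbook weights and velocities, not the dissertation's solvers) looks like this:

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distributions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One collision + streaming step on a periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for k in range(9):                                    # streaming
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=1), c[k, 1], axis=0)
    return f

# Initialise a 64x64 periodic domain at rest with a density bump.
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
f[:, 32, 32] *= 1.1
for _ in range(100):
    f = lbm_step(f)
print(f.sum())  # total mass is conserved
```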
Abstract:
The problem of social diffusion has animated sociological thinking on topics ranging from the spread of an idea, an innovation or a disease, to the foundations of collective behavior and political polarization. While network diffusion has been a productive metaphor, the reality of diffusion processes is often muddier. Ideas and innovations diffuse differently from diseases, but, with a few exceptions, the diffusion of ideas and innovations has been modeled under the same assumptions as the diffusion of disease. In this dissertation, I develop two new diffusion models for "socially meaningful" contagions that address two of the most significant problems with current diffusion models: (1) that contagions can only spread along observed ties, and (2) that contagions do not change as they spread between people. I augment insights from these statistical and simulation models with an analysis of an empirical case of diffusion: the use of enterprise collaboration software in a large technology company. I focus the empirical study on when people abandon innovations, a crucial and understudied aspect of the diffusion of innovations. Using timestamped posts, I analyze when people abandon software to a high degree of detail.
To address the first problem, I suggest a latent space diffusion model. Rather than treating ties as stable conduits for information, the latent space diffusion model treats ties as random draws from an underlying social space, and simulates diffusion over the social space. To address the second problem, I suggest a diffusion model with schemas. Rather than treating information as though it spreads unchanged, the schema diffusion model allows people to modify information they receive to fit an underlying mental model of the information before they pass the information to others. Theoretically, the social space model integrates both actor ties and attributes simultaneously in a single social plane, while incorporating schemas into diffusion processes gives an explicit form to the reciprocal influences that cognition and social environment have on each other. Practically, the latent space diffusion model produces statistically consistent diffusion estimates where using the network alone does not, and the diffusion-with-schemas model shows that introducing some cognitive processing into diffusion processes changes the rate and ultimate distribution of the spreading information. Combining the latent space models with a schema notion for actors improves our models for social diffusion both theoretically and practically.
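A toy version of the latent space idea (all parameters and functional forms here are illustrative, not the dissertation's specification): place actors in a 2-D social space, treat observed ties as distance-dependent random draws, and let the contagion spread as a function of latent distance rather than only along the drawn ties.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
positions = rng.normal(size=(n, 2))     # latent social space

def tie_prob(d, scale=0.8):
    """Probability of an observed tie decays with latent distance."""
    return np.exp(-d / scale)

# Pairwise latent distances and one random draw of the observed network.
d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
raw = rng.random((n, n)) < tie_prob(d)
ties = np.triu(raw, 1)
ties = ties | ties.T                    # symmetric observed network

# Diffusion over the latent space: per-step transmission pressure
# depends on latent distance, not on whether a tie happened to be drawn.
adopted = np.zeros(n, dtype=bool)
adopted[0] = True                       # seed adopter
for _ in range(20):
    exposure = tie_prob(d[:, adopted]).sum(axis=1)
    adopted |= rng.random(n) < 1 - np.exp(-0.05 * exposure)
print(f"adopters after 20 steps: {adopted.sum()} of {n}; "
      f"observed ties drawn: {ties.sum() // 2}")
```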
The empirical case study focuses on how the changing value of an innovation, introduced by the innovation's network externalities, influences when people abandon the innovation. In it, I find that people are least likely to abandon an innovation when other people in their neighborhood currently use the software as well. The effect is particularly pronounced for supervisors' current use and for the number of supervisory team members who currently use the software. This case study not only points to an important process in the diffusion of innovation, but also suggests a new approach to collecting and analyzing data on organizational processes: computerized collaboration systems.
Abstract:
The dominant model of atmospheric circulation posits that hot air rises, creating horizontal winds. A second major driver has recently been proposed by Makarieva and Gorshkov in their biotic pump theory (BPT), which suggests that evapotranspiration from natural closed-canopy forests causes intense condensation, and hence winds from ocean to land. Critics of the BPT argue that air movement to fill the partial vacuum caused by condensation is always isotropic, and therefore causes no net air movement (Bunyard, 2015, hdl:11232/397). This paper explores the physics of water condensation under mild atmospheric conditions within a purpose-designed, square-section, 4.8 m-tall closed-system structure. Two enclosed vertical columns are connected at top and bottom by two horizontal tunnels, around which 19.5 m³ of atmospheric air can circulate freely, allowing rotary airflows in either direction. This air can be cooled and/or warmed by refrigeration pipes and a heating mat, and changes in airflow, temperature, humidity and barometric pressure are measured in real time. The study investigates whether the "hot-air-rises" model or an implosive condensation model better explains the results of more than 100 experiments. The data show a highly significant correlation (R² > 0.96, p < 0.001) between observed airflows and the partial pressure changes from condensation. While the kinetic energy of the refrigerated air falls short of that required to bring about the observed airflows by a factor of at least 30, less than a tenth of the potential kinetic energy from condensation is shown to be sufficient. The assumption that condensation of water vapour is always isotropic is therefore incorrect. Condensation can be anisotropic, and in the laboratory it does cause sustained airflow.
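The scale of the partial pressure effect can be checked with ideal gas arithmetic: condensing a mass Δm of water vapour in a closed volume V removes Δn = Δm/M_w moles from the gas phase and lowers the partial pressure by Δp = ΔnRT/V. A sketch with illustrative numbers for the 19.5 m³ structure (the mass condensed is an assumption, not a figure from the paper):

```python
R = 8.314        # J/(mol K), universal gas constant
M_W = 0.018015   # kg/mol, molar mass of water

def condensation_pressure_drop(mass_kg, volume_m3, temp_k):
    """Partial-pressure drop Dp = (m/M_w) * R * T / V from condensing
    mass_kg of water vapour in a closed volume (ideal gas)."""
    moles = mass_kg / M_W
    return moles * R * temp_k / volume_m3

# Illustrative: 1 g of water condensing in the 19.5 m^3 apparatus at 10 C.
dp = condensation_pressure_drop(0.001, 19.5, 283.15)
print(f"pressure drop: {dp:.2f} Pa")   # ~6.7 Pa
```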