873 results for Multi-scale modeling
Abstract:
A novel modeling approach is applied to karst hydrology: long-standing problems in karst hydrology and solute transport are addressed using lattice Boltzmann methods (LBMs), which contrast with the modeling approaches previously applied to karst systems. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe, and to test the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains, and the solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is then applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. An LBM-based macroscopic flow solver (based on Darcy's law) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve the inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches.
The analogy between Fick's second law (the diffusion equation) and the transient ground water flow equation is used to solve for the transient head distribution. An altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and storage coefficient are linked with LB parameters. These capabilities complete the LBM's effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
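The diffusion-equation analogy at the heart of this approach can be illustrated with a minimal one-dimensional lattice Boltzmann sketch (a toy example, not the dissertation's code; grid size, relaxation time, and weights are arbitrary illustrative choices). With equilibrium distributions f_i = w_i C, a D1Q3 BGK scheme recovers diffusion with D = c_s^2 (tau - 1/2) in lattice units, so a point pulse spreads with variance 2Dt:

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann solver for 1D diffusion (illustrative sketch).
# Equilibrium f_i^eq = w_i * C gives diffusivity D = cs2 * (tau - 0.5)
# in lattice units, mirroring the diffusion / transient-flow analogy.
N, T, tau = 200, 100, 1.0
w = np.array([2 / 3, 1 / 6, 1 / 6])    # weights: rest, +1, -1 directions
cs2 = 1 / 3                             # lattice "speed of sound" squared
D = cs2 * (tau - 0.5)                   # lattice diffusivity

C = np.zeros(N)
C[N // 2] = 1.0                         # point pulse carrying unit mass
f = w[:, None] * C                      # initialize populations at equilibrium

for _ in range(T):
    C = f.sum(axis=0)                   # concentration = zeroth moment
    feq = w[:, None] * C
    f += (feq - f) / tau                # BGK collision
    f[1] = np.roll(f[1], 1)             # stream in +1 direction
    f[2] = np.roll(f[2], -1)            # stream in -1 direction

C = f.sum(axis=0)
x = np.arange(N)
mean = (x * C).sum() / C.sum()
var = ((x - mean) ** 2 * C).sum() / C.sum()
print(var, 2 * D * T)                   # variance should approach 2*D*T
```

Mass is conserved by the collision step, and the measured variance of the pulse matches the analytical diffusion result, which is the kind of verification against analytical solutions the dissertation describes.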
Abstract:
A pilot-scale multi-media filtration system was used to evaluate the effectiveness of filtration in removing petroleum hydrocarbons from a source water contaminated with diesel fuel. Source water was artificially prepared by mixing bentonite clay and tap water to produce a turbidity range of 10-15 NTU. Diesel fuel concentrations of 150 ppm or 750 ppm were used to contaminate the source water. The coagulants used included Cat Floc K-10 and Cat Floc T-2. The experimental phase was conducted under direct filtration conditions at constant head and a constant filtration rate of 8.0 gpm. Filtration experiments were run until the filter reached its clogging point, defined as a measured peak pressure loss of 10 psi. The experimental variables included type of coagulant, oil concentration, and source water. Filtration results were evaluated based on turbidity removal and petroleum hydrocarbon (PHC) removal efficiency as measured by gas chromatography. Experiments indicated that clogging was controlled by the clay loading on the filter and that inadequate destabilization of the contaminated water by the coagulant limited PHC removal.
Abstract:
Research has identified a number of putative risk factors that place adolescents at incrementally higher risk for involvement in alcohol and other drug (AOD) use and sexual risk behaviors (SRBs). Such factors include personality characteristics such as sensation-seeking, cognitive factors such as positive expectancies and inhibition conflict, as well as peer norm processes. The current study was guided by a conceptual perspective supporting the notion that an integrative framework including multi-level factors has significant explanatory value for understanding processes associated with the co-occurrence of AOD use and sexual risk behavior outcomes. This study simultaneously evaluated the mediating role of AOD-sex related expectancies and inhibition conflict on antecedents of AOD use and SRBs, including sexual sensation-seeking and peer norms for condom use. The sample was drawn from the Enhancing My Personal Options While Evaluating Risk (EMPOWER; Jonathan Tubman, PI) data set (N = 396; ages 12-18 years). Measures used in the study included the Sexual Sensation-Seeking Scale, the Inhibition Conflict for Condom Use scale, and the Risky Sex Scale. All relevant measures had well-documented psychometric properties. A global assessment of alcohol, drug use, and sexual risk behaviors was used. Results demonstrated that AOD-sex related expectancies mediated the influence of sexual sensation-seeking on the co-occurrence of alcohol and other drug use and sexual risk behaviors. The evaluation of the integrative model also revealed that sexual sensation-seeking was positively associated with peer norms for condom use. Also, peer norms predicted inhibition conflict among this sample of multi-problem youth. This dissertation research identified mechanisms of risk and protection associated with the co-occurrence of AOD use and SRBs among a multi-problem sample of adolescents receiving treatment for alcohol or drug use and related problems.
This study is informative for adolescent-serving programs that address the individual and contextual characteristics that enhance treatment efficacy and effectiveness among adolescents receiving services for substance use and related problems.
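The product-of-coefficients logic behind such a mediation analysis can be illustrated with simulated data (a generic sketch with invented effect sizes; this is not the EMPOWER data or the study's actual modeling procedure):

```python
import numpy as np

# Toy mediation sketch: sensation-seeking (x) -> expectancies (m) -> risk (y).
# Effect sizes 0.5, 0.4, 0.1 are invented for illustration.
rng = np.random.default_rng(0)
n = 396                                      # same size as the EMPOWER sample
x = rng.normal(size=n)                       # predictor (standardized)
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome

def slope(pred, resp, covar=None):
    """OLS slope of resp on pred, optionally partialling out covar."""
    cols = [np.ones_like(pred), pred] + ([covar] if covar is not None else [])
    X = np.column_stack(cols)
    beta = np.linalg.lstsq(X, resp, rcond=None)[0]
    return beta[1]

a = slope(x, m)               # path a: X -> M
b = slope(m, y, covar=x)      # path b: M -> Y, controlling for X
indirect = a * b              # product-of-coefficients mediated effect
print(round(indirect, 3))     # should be near 0.5 * 0.4 = 0.2
```

The mediated (indirect) effect is the product of the two path coefficients; in practice its uncertainty would be assessed with bootstrap or Sobel-type intervals.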
Abstract:
With the flow of the Mara River becoming increasingly erratic, especially in the upper reaches, attention has been directed to land use change as the major cause of this problem. The semi-distributed hydrological model Soil and Water Assessment Tool (SWAT) and Landsat imagery were utilized in the upper Mara River Basin in order to: 1) map existing field-scale land use practices and determine their impact; 2) determine the impacts of land use change on water flux; and 3) determine the impacts of rainfall (0%, ±10% and ±20%) and air temperature variations (0% and +5%), based on the Intergovernmental Panel on Climate Change projections, on the water flux of the upper Mara River. This study found that the different scenarios impacted the water balance components differently. Land use changes resulted in a slightly more erratic discharge, while rainfall and air temperature changes had a more predictable impact on the discharge and water balance components. The model results show that the flow was more sensitive to rainfall changes than to land use changes. It was also shown that land use changes can reduce dry season flow, which is the most important problem in the basin. The model also shows that deforestation in the Mau Forest increased the peak flows, which can lead to high sediment loading in the Mara River. Assessing the effect of the land use and climate change scenarios on the sediment load and water quality of the river requires a thorough understanding of the sediment transport processes, in addition to observed sediment and water quality data for validation of modeling results.
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the application's dependency on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or of price increases for service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or due to the failure of some service. In a multi-cloud scenario, it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt a multi-cloud perspective, it is necessary to create mechanisms that can select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in the development of such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user-defined requirements in terms of functionality and quality; (ii) continually monitoring dynamic information (such as response time, availability, and price) related to cloud services, in addition to handling the wide variety of services; and (iii) adapting the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. This work therefore proposes a strategy composed of two phases.
The first phase consists of modeling the application, exploiting the capacity for representing commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work, we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed phases, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process offers significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development effort/modularity and performance.
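The selection step of such an autonomic loop can be sketched as a constrain-then-optimize rule over monitored QoS data (provider names, metrics, and thresholds below are invented for illustration; the thesis's actual selection is an optimization over the extended feature model):

```python
# Illustrative sketch of the "Plan" step of a MAPE-K loop: per required
# service, pick the cheapest provider whose monitored QoS meets the
# user-defined non-functional requirements. All names/values are invented.
providers = {
    "storage": [
        {"name": "cloudA", "response_ms": 120, "availability": 0.999, "price": 0.023},
        {"name": "cloudB", "response_ms": 80,  "availability": 0.995, "price": 0.020},
    ],
    "queue": [
        {"name": "cloudA", "response_ms": 40, "availability": 0.999, "price": 0.010},
        {"name": "cloudC", "response_ms": 25, "availability": 0.990, "price": 0.008},
    ],
}
requirements = {"response_ms": 100, "availability": 0.995}  # user-defined NFRs

def feasible(p):
    """True if a provider's monitored QoS satisfies the requirements."""
    return (p["response_ms"] <= requirements["response_ms"]
            and p["availability"] >= requirements["availability"])

def select_configuration(providers):
    """Return {service: provider} minimizing price among feasible candidates;
    raise if some service has no feasible provider (triggering adaptation)."""
    config = {}
    for service, candidates in providers.items():
        ok = [p for p in candidates if feasible(p)]
        if not ok:
            raise RuntimeError(f"no feasible provider for {service}")
        config[service] = min(ok, key=lambda p: p["price"])["name"]
    return config

print(select_configuration(providers))
```

Re-running the selection whenever monitoring reports a QoS violation yields a new multi-cloud configuration, which the adaptation phase then enacts.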
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration, and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where additional uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design of experiment aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to investigate the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. In the case study, improved calibration results were achieved, along with an advanced sensitivity analysis of significant parameters and their interactions. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: the sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parameter solution (ParaSol) methods. The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results, and the proposed evaluation scheme proved capable of selecting the most suitable uncertainty analysis method for a given case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality.
The results showed that the SMC-CUA method was able to provide better uncertainty analysis results with higher computational efficiency than the SUFI-2 and GLUE methods, and to control parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response based statistical evaluation method (RESEM) was proposed for estimating the effects of propagated uncertainty and providing long-term predictions of hydrological responses under changing climatic conditions. Using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total uncertainty propagated from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, the uncertainty from hydrological modeling, and the total uncertainty from the two uncertainty sources can be compared and quantified. The feasibility of all the methods has been tested using hypothetical and real-world case studies. The proposed methods can also be integrated as a hydrological modeling system to better support hydrological studies under changing climatic conditions. The results from the proposed integrated hydrological modeling system can serve as scientific references for decision makers seeking to reduce the potential risk of damage caused by extreme events in long-term water resource management and planning.
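As one example of the uncertainty analysis methods compared here, the GLUE procedure can be sketched on a deliberately simple model (illustrative only: the toy model, likelihood measure, threshold, and parameter ranges are not those of the thesis): sample parameter sets, score each simulation with a likelihood such as the Nash-Sutcliffe efficiency, keep "behavioral" runs above a threshold, and form likelihood-weighted prediction bounds.

```python
import numpy as np

# Toy GLUE (generalized likelihood uncertainty estimation) sketch on the
# simple "model" y = a * exp(-b * t). Synthetic observations, not real data.
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 50)
obs = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

a = rng.uniform(0.5, 4.0, 5000)            # Monte Carlo parameter samples
b = rng.uniform(0.1, 2.0, 5000)
sims = a[:, None] * np.exp(-b[:, None] * t)
scores = np.array([nse(s, obs) for s in sims])

behavioral = scores > 0.9                   # behavioral threshold on the likelihood
w = scores[behavioral] - 0.9                # rescaled likelihood weights
w /= w.sum()

lo, hi = [], []                             # weighted 5%/95% bounds per time step
for j in range(t.size):
    vals = sims[behavioral, j]
    idx = np.argsort(vals)
    cdf = np.cumsum(w[idx])
    lo.append(vals[idx][np.searchsorted(cdf, 0.05)])
    hi.append(vals[idx][np.searchsorted(cdf, 0.95)])

coverage = np.mean((obs >= np.array(lo)) & (obs <= np.array(hi)))
print(f"{behavioral.sum()} behavioral runs, coverage {coverage:.2f}")
```

The width of the bounds and the spread of behavioral parameter sets are exactly where the equifinality phenomenon mentioned above becomes visible: many distinct parameter combinations reproduce the observations almost equally well.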
Abstract:
We present new δ13C measurements of atmospheric CO2 covering the last glacial/interglacial cycle, complementing previous records covering Terminations I and II. Most prominent in the new record is a significant depletion in δ13Catm of 0.5 per mil occurring during marine isotope stage (MIS) 4, followed by an enrichment of the same magnitude at the beginning of MIS 3. Such a significant excursion in the record is otherwise only observed at glacial terminations, suggesting that similar processes were at play, such as changing sea surface temperatures, changes in marine biological export in the Southern Ocean (SO) due to variations in aeolian iron fluxes, changes in the Atlantic meridional overturning circulation, upwelling of deep water in the SO, and long-term trends in terrestrial carbon storage. Based on previous modeling studies, we propose constraints on some of these processes during specific time intervals. The decrease in δ13Catm at the end of MIS 4 starting approximately 64 kyr B.P. was accompanied by increasing [CO2]. This period is also marked by a decrease in aeolian iron flux to the SO, followed by an increase in SO upwelling during Heinrich event 6, indicating that it is likely that a large amount of δ13C-depleted carbon was transferred to the deep oceans previously, i.e., at the onset of MIS 4. Apart from the upwelling event at the end of MIS 4 (and potentially smaller events during Heinrich events in MIS 3), upwelling of deep water in the SO remained reduced until the last glacial termination, whereupon a second pulse of isotopically light carbon was released into the atmosphere.
Abstract:
This paper presents a theoretical model for the vibration analysis of micro-scale fluid-loaded rectangular isotropic plates, based on Lamb's assumption of fluid-structure interaction and the Rayleigh-Ritz energy method. An analytical solution for this model is proposed, which can be applied to most cases of boundary conditions. Dynamic experimental data for a series of microfabricated silicon plates are obtained using a base-excitation dynamic testing facility. The natural frequencies and mode shapes in the experimental results are in good agreement with the theoretical simulations for the lower order modes. The presented theoretical and experimental investigations of the vibration characteristics of micro-scale plates are of particular interest in the design of microplate-based biosensing devices. Copyright © 2009 by ASME.
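Lamb's treatment of fluid loading is commonly summarized as an added-virtual-mass correction to the in-vacuum natural frequencies. A frequently quoted general form (an illustration of the idea, not necessarily the exact expression derived in this paper) is

```latex
% Lamb-type fluid-loading correction (illustrative general form):
% the surrounding fluid adds virtual mass, lowering each natural frequency.
f_{\mathrm{fluid}} = \frac{f_{\mathrm{vac}}}{\sqrt{1+\beta}},
\qquad
\beta = \Gamma \,\frac{\rho_f \, a}{\rho_p \, h},
```

where Γ is a nondimensional added-virtual-mass factor depending on mode shape and boundary conditions, ρ_f and ρ_p are the fluid and plate densities, a is a characteristic plate dimension, and h is the plate thickness; the ratio β grows as plates are miniaturized, which is why fluid loading matters for micro-scale biosensing plates.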
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
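The latent structure factorization referred to above can be made concrete with a small sketch (class counts, dimensions, and probabilities below are arbitrary illustrations, not the chapter's models): a latent class model expresses the probability tensor of p categorical variables as a nonnegative PARAFAC-type sum over k classes.

```python
import numpy as np

# Sketch: a latent class model induces a nonnegative rank-k factorization of
# the probability tensor of p categorical variables:
#   P(x1,...,xp) = sum_h nu[h] * prod_j psi[j, h, x_j]   (PARAFAC form).
rng = np.random.default_rng(2)
k, p, d = 3, 4, 2                             # latent classes, variables, categories
nu = rng.dirichlet(np.ones(k))                # class weights, sum to 1
psi = rng.dirichlet(np.ones(d), size=(p, k))  # psi[j, h] = P(x_j = . | class h)

# Build the full d^p probability tensor by enumeration over all cells.
P = np.zeros((d,) * p)
for idx in np.ndindex(*P.shape):
    P[idx] = sum(nu[h] * np.prod([psi[j, h, idx[j]] for j in range(p)])
                 for h in range(k))

print(P.sum())   # a valid pmf: nonnegative entries summing to 1
```

By construction the tensor has nonnegative rank at most k, which is the quantity the chapter relates to the support of a log-linear model.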
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
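The chapter's optimal Gaussian approximation is specific to Diaconis--Ylvisaker priors; the generic idea of approximating a posterior by a Gaussian centered at its mode can be illustrated with a one-parameter Laplace-style sketch (model, data, and prior variance below are invented for the example):

```python
import math

# Generic illustration (not the chapter's derivation): a Gaussian
# approximation at the posterior mode for a one-parameter Poisson
# log-linear model y_i ~ Poisson(exp(theta)) with a N(0, s2) prior.
y = [3, 5, 4, 6, 2, 5, 4, 4]       # invented counts
n, s2 = len(y), 10.0

def neg_log_post_grad_hess(theta):
    """Gradient and Hessian of -log posterior (up to a constant)."""
    lam = math.exp(theta)
    grad = n * lam - sum(y) + theta / s2
    hess = n * lam + 1 / s2
    return grad, hess

theta = 0.0
for _ in range(50):                 # Newton's method converges to the mode
    g, h = neg_log_post_grad_hess(theta)
    theta -= g / h

mean, var = theta, 1 / neg_log_post_grad_hess(theta)[1]
print(mean, var)   # Gaussian approximation N(mean, var) to the posterior
```

The approximating Gaussian uses the inverse Hessian at the mode as its variance; the chapter's contribution is to identify the *optimal* Gaussian in Kullback-Leibler divergence for the log-linear/Diaconis--Ylvisaker case and to bound its error.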
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
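The practical meaning of slow mixing can be shown with a surrogate chain (an AR(1) process standing in for a sampler trace; this is an illustration of the diagnostic, not one of the actual data augmentation samplers studied): high autocorrelation collapses a long chain into a small effective sample size.

```python
import numpy as np

# Surrogate for a slowly mixing sampler: an AR(1) chain with lag-1
# autocorrelation rho has effective sample size roughly
#   ESS = N * (1 - rho) / (1 + rho),
# so a chain of 100,000 draws can carry only a few hundred draws' worth
# of information.
rng = np.random.default_rng(3)
N, rho = 100_000, 0.99
x = np.empty(N)
x[0] = 0.0
noise = rng.normal(size=N) * np.sqrt(1 - rho ** 2)   # stationary variance 1
for t in range(1, N):
    x[t] = rho * x[t - 1] + noise[t]

lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]    # empirical lag-1 autocorrelation
ess = N * (1 - lag1) / (1 + lag1)          # AR(1) effective sample size
print(f"lag-1 acf {lag1:.3f}, ESS {ess:.0f} of {N} draws")
```

A spectral gap shrinking with sample size, as shown in Chapter 7 for the rare-event regime, pushes the chain's autocorrelation toward one and the effective sample size toward the behavior sketched here.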
Abstract:
Magnetic resonance imaging is a research and clinical tool that has been applied in a wide variety of sciences. One area of magnetic resonance imaging that has exhibited terrific promise and growth in the past decade is magnetic susceptibility imaging. Imaging tissue susceptibility provides insight into the microstructural organization and chemical properties of biological tissues, but this image contrast is not well understood. The purpose of this work is to develop effective approaches to image, assess, and model the mechanisms that generate both isotropic and anisotropic magnetic susceptibility contrast in biological tissues, including myocardium and central nervous system white matter.
This document contains the first report of MRI-measured susceptibility anisotropy in myocardium. Intact mouse heart specimens were scanned using MRI at 9.4 T to ascertain both the magnetic susceptibility and the myofiber orientation of the tissue. The susceptibility anisotropy of myocardium was observed and measured by relating the apparent tissue susceptibility to the myofiber angle with respect to the applied magnetic field. A multi-filament model of myocardial tissue revealed that the diamagnetically anisotropic α-helix peptide bonds in myofilament proteins are capable of producing bulk susceptibility anisotropy on a scale measurable by MRI, and are potentially the chief sources of the experimentally observed anisotropy.
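For an axially symmetric susceptibility tensor with principal values χ∥ (along the fiber axis) and χ⊥ (transverse), the apparent susceptibility measured along the main field when the fiber makes angle θ with B0 takes the standard projection form (a common model for such angle-dependence fits; the dissertation's exact fitting function may differ):

```latex
% Apparent susceptibility vs. fiber angle for an axially symmetric tensor:
\chi_{\mathrm{app}}(\theta)
  = \chi_{\parallel}\cos^2\theta + \chi_{\perp}\sin^2\theta
  = \chi_{\perp} + \Delta\chi\,\cos^2\theta,
\qquad
\Delta\chi = \chi_{\parallel} - \chi_{\perp},
```

so fitting the measured χ_app against cos²θ across myofiber orientations yields the anisotropy Δχ directly.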
The growing use of paramagnetic contrast agents in magnetic susceptibility imaging motivated a series of investigations regarding the effect of these exogenous agents on susceptibility imaging in the brain, heart, and kidney. In each of these organs, gadolinium increases susceptibility contrast and anisotropy, though the enhancements depend on the tissue type, compartmentalization of contrast agent, and complex multi-pool relaxation. In the brain, the introduction of paramagnetic contrast agents actually makes white matter tissue regions appear more diamagnetic relative to the reference susceptibility. Gadolinium-enhanced MRI yields tensor-valued susceptibility images with eigenvectors that more accurately reflect the underlying tissue orientation.
Despite the boost gadolinium provides, tensor-valued susceptibility image reconstruction is prone to image artifacts. A novel algorithm was developed to mitigate these artifacts by incorporating orientation-dependent tissue relaxation information into susceptibility tensor estimation. The technique was verified using a numerical phantom simulation, and it improves susceptibility-based tractography in the brain, kidney, and heart. This work represents the first successful application of susceptibility-based tractography to a whole, intact heart.
The knowledge and tools developed throughout the course of this research were then applied to studying mouse models of Alzheimer’s disease in vivo, and studying hypertrophic human myocardium specimens ex vivo. Though a preliminary study using contrast-enhanced quantitative susceptibility mapping has revealed diamagnetic amyloid plaques associated with Alzheimer’s disease in the mouse brain ex vivo, non-contrast susceptibility imaging was unable to precisely identify these plaques in vivo. Susceptibility tensor imaging of human myocardium specimens at 9.4 T shows that susceptibility anisotropy is larger and mean susceptibility is more diamagnetic in hypertrophic tissue than in normal tissue. These findings support the hypothesis that myofilament proteins are a source of susceptibility contrast and anisotropy in myocardium. This collection of preclinical studies provides new tools and context for analyzing tissue structure, chemistry, and health in a variety of organs throughout the body.
Abstract:
The purpose of this dissertation is to contribute to a better understanding of how global seafood trade interacts with the governance of small-scale fisheries (SSFs). As global seafood trade expands, SSFs have the potential to experience significant economic, social, and political benefits from participation in export markets. At the same time, market connections that place increasing pressures on resources pose risks to both the ecological and social integrity of SSFs. This dissertation seeks to explore the factors that mediate between the potential benefits and risks of global seafood markets for SSFs, with the goal of developing hypotheses regarding these relationships.
The empirical investigation consists of a series of case studies from the Yucatan Peninsula, Mexico. This is a particularly rich context in which to study global market connections with SSFs because the SSFs in this region engage in a variety of market-oriented harvests, most notably for octopus, groupers and snappers, lobster, and sea cucumber. Variation in market forms and the institutional diversity of local-level governance arrangements allows the dissertation to explore a number of examples.
The analysis is guided primarily by common-pool resource (CPR) theory because of the insights it provides regarding the conditions that facilitate collective action and the factors that promote long-lasting resource governance arrangements. Theory from institutional economics and political ecology contributes to the elaboration of a multi-faceted conceptualization of markets for CPR theory, with the aim of facilitating the identification of mechanisms through which markets and CPR governance actually interact. This dissertation conceptualizes markets as sets of institutions that structure the exchange of property rights over fisheries resources, affect the material incentives to harvest resources, and transmit ideas and values about fisheries resources and governance.
The case studies explore four different mechanisms through which markets potentially influence resource governance: 1) markets can contribute to costly resource governance activities by offsetting costs through profits; 2) markets can undermine resource governance by generating incentives for noncompliance and leading to the overharvesting of resources; 3) markets can increase the costs of resource governance, for example by augmenting monitoring and enforcement burdens; and 4) markets can alter the values and norms underpinning resource governance by transmitting ideas between local resource users and a variety of market actors.
Data collected through participant observation, surveys, and informal and structured interviews contributed to the elaboration of the following hypotheses about interactions between global seafood trade and SSF governance. 1) Roll-back neoliberalization of fisheries policies has undermined cooperatives’ ability to achieve financial success through engagement with markets and thus their potential role as key actors in resource governance (chapter two). 2) Different relations of production influence whether local governance institutions will erode or strengthen when faced with market pressures. In particular, relations of production in which fishers own their own means of production and share the collective costs of governance are more likely to strengthen resource governance, while relations of production in which a single entrepreneur controls capital and access to the fishery are more likely to contribute to the erosion of resource governance institutions in the face of market pressures (chapter three). 3) By serving as a new discursive framework within which to conceive of and talk about fisheries resources, markets can influence the norms and values that shape and constitute governance arrangements.
In sum, the dissertation demonstrates that global seafood trade manifests in a diversity of local forms and effects. Whether SSFs moderate risks and take advantage of benefits depends on a variety of factors, and resource users themselves have the potential to influence the outcomes of seafood market connections through local forms of collective action.
Abstract:
A class of multi-process models is developed for collections of time indexed count data. Autocorrelation in counts is achieved with dynamic models for the natural parameter of the binomial distribution. In addition to modeling binomial time series, the framework includes dynamic models for multinomial and Poisson time series. Markov chain Monte Carlo (MCMC) and Pólya-Gamma data augmentation (Polson et al., 2013) are critical for fitting multi-process models of counts. To facilitate computation when the counts are high, a Gaussian approximation to the Pólya-Gamma random variable is developed.
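As a concrete illustration, a Pólya-Gamma draw and its high-count Gaussian approximation can be sketched as follows. This is a minimal sketch, not the dissertation's implementation: it uses the truncated sum-of-gammas representation of PG(b, c) from Polson et al. (2013), and the truncation level K and function names are arbitrary choices.

```python
import math
import random

def pg_sample(b, c, K=200, rng=random):
    # Truncated sum-of-gammas representation of PG(b, c):
    # omega = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2 / (4 pi^2)),
    # with g_k ~ Gamma(b, 1). K truncates the infinite sum.
    s = 0.0
    for k in range(1, K + 1):
        d = (k - 0.5) ** 2 + c ** 2 / (4.0 * math.pi ** 2)
        s += rng.gammavariate(b, 1.0) / d
    return s / (2.0 * math.pi ** 2)

def pg_mean(b, c):
    # E[omega] = b / (2c) * tanh(c / 2), with limit b/4 as c -> 0
    return b / 4.0 if abs(c) < 1e-8 else b / (2.0 * c) * math.tanh(c / 2.0)

def pg_var(b, c):
    # Var[omega] = b (sinh(c) - c) / (4 c^3 cosh^2(c / 2)), limit b/24 as c -> 0
    if abs(c) < 1e-4:
        return b / 24.0
    return b * (math.sinh(c) - c) / (4.0 * c ** 3 * math.cosh(c / 2.0) ** 2)

def pg_gaussian_approx(b, c, rng=random):
    # When b (the count total) is large, PG(b, c) is approximately normal,
    # so a moment-matched Gaussian draw replaces the exact sampler.
    return rng.gauss(pg_mean(b, c), math.sqrt(pg_var(b, c)))
```

In a binomial-logit Gibbs step, the augmentation draws one such variable per observation given the current linear predictor, turning the conditional update for the state-space parameters into a conjugate Gaussian update.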
Three applied analyses are presented to explore the utility and versatility of the framework. The first analysis develops a model for complex dynamic behavior of themes in collections of text documents. Documents are modeled as a “bag of words”, and the multinomial distribution is used to characterize uncertainty in the vocabulary terms appearing in each document. State-space models for the natural parameters of the multinomial distribution induce autocorrelation in themes and their proportional representation in the corpus over time.
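The dynamic multinomial construction described above can be sketched as a Gaussian random walk on the natural parameters with a softmax link; the step size, document length, and vocabulary below are invented for illustration and are not the dissertation's settings.

```python
import math
import random

def softmax(psi):
    # Map natural parameters to multinomial probabilities (numerically stable)
    m = max(psi)
    ex = [math.exp(p - m) for p in psi]
    s = sum(ex)
    return [e / s for e in ex]

def simulate_corpus(vocab, T, words_per_doc=50, step_sd=0.3, rng=random):
    """Random-walk state-space model for multinomial counts: the natural
    parameters psi_t evolve as a Gaussian random walk, which induces
    autocorrelation in the term proportions softmax(psi_t) over time."""
    psi = [0.0] * len(vocab)
    docs = []
    for _ in range(T):
        psi = [p + rng.gauss(0.0, step_sd) for p in psi]  # state evolution
        theta = softmax(psi)
        counts = {w: 0 for w in vocab}
        for _ in range(words_per_doc):
            # draw one word from the multinomial via inverse-CDF sampling
            u, acc = rng.random(), 0.0
            for w, p in zip(vocab, theta):
                acc += p
                if u <= acc:
                    counts[w] += 1
                    break
            else:
                counts[vocab[-1]] += 1
        docs.append(counts)
    return docs
```

Because psi_t depends on psi_{t-1}, term proportions drift smoothly rather than resampling independently at each time step, which is the autocorrelation mechanism the abstract describes.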
The second analysis develops a dynamic mixed membership model for Poisson counts. The model is applied to a collection of time series that record neuron-level firing patterns in rhesus monkeys. Each monkey is exposed to two sounds simultaneously, and Gaussian processes are used to smoothly model the time-varying rate at which a neuron’s firing pattern fluctuates between features associated with each sound in isolation.
The third analysis presents a switching dynamic generalized linear model for the time-varying home run totals of professional baseball players. The model endows each player with an age specific latent natural ability class and a performance enhancing drug (PED) use indicator. As players age, they randomly transition through a sequence of ability classes in a manner consistent with traditional aging patterns. When the performance of the player significantly deviates from the expected aging pattern, he is identified as a player whose performance is consistent with PED use.
All three models provide a mechanism for sharing information across related series locally in time. The models are fit with variations on the Pólya-Gamma Gibbs sampler, MCMC convergence diagnostics are developed, and reproducible inference is emphasized throughout the dissertation.
Abstract:
Soil erosion by water is a major driving force of land degradation. Laboratory experiments, on-site field studies, and suspended sediment measurements have been the fundamental approaches for studying the mechanisms of soil water erosion and for quantifying erosive losses during rain events. Such experimental research faces the challenge of extending its results to wider spatial scales. Soil water erosion modeling offers a solution to these scaling problems and is of principal importance to better understanding the processes that govern water erosion. However, soil water erosion models have been considered of limited practical value, and uncertainty in hydrological simulation is among the factors hindering their development. Hydrological models have improved substantially in recent years, and several water erosion models have taken advantage of these improvements. It is therefore crucial to know how changes in the modeling of hydrological processes affect soil erosion simulation.
This dissertation work first created an erosion modeling tool (GEOtopSed) that takes advantage of the comprehensive hydrological model GEOtop. The newly created tool was then tested and evaluated at an experimental watershed, where it demonstrated its ability to estimate multi-year soil erosion rates under varied hydrological conditions. To investigate the impact of different hydrological representations on soil erosion simulation, an 11-year simulation experiment was conducted for six model configurations. The results were compared at varied temporal and spatial scales to highlight the role of hydrological feedbacks in erosion. Models with simplified hydrological representations agreed with the GEOtopSed model at long temporal scales (annual and longer). This result motivated an investigation of erosion simulation under different rainfall regimes, to check whether models with different hydrological representations agree on the response of soil water erosion to a changing climate. Multi-year ensemble simulations with different extreme precipitation scenarios were conducted for seven climate regions. The differences in the simulated erosion reveal the influence of hydrological feedbacks that a purely rainfall-erosivity-based method cannot capture.
Abstract:
Light rainfall is the baseline input to the annual water budget in mountainous landscapes throughout the tropics and at mid-latitudes. In the Southern Appalachians, the contribution from light rainfall ranges from 50-60% during wet years to 80-90% during dry years, with convective activity and tropical cyclone input providing most of the interannual variability. The Southern Appalachians is a region of rich biodiversity that is vulnerable to land use/land cover changes due to its proximity to a rapidly growing population. The persistent near-surface moisture and associated microclimates observed in this region have been well documented since the colonization of the area in terms of species health, fire frequency, and overall biodiversity. The overarching objective of this research is to elucidate the microphysics of light rainfall and the dynamics of low-level moisture in the inner region of the Southern Appalachians during the warm season, with a focus on orographically mediated processes. The overarching research hypothesis is that the physical processes leading to and governing the life cycle of orographic fog, low-level clouds, and precipitation, and their interactions, are strongly tied to landform, land cover, and the diurnal cycles of flow patterns, radiative forcing, and surface fluxes at the ridge-valley scale. The following science questions are addressed: 1) How do orographic clouds and fog affect the hydrometeorological regime from event to annual scales and as a function of terrain characteristics and land cover? 2) What are the source areas, governing processes, and relevant time scales of near-surface moisture convergence patterns in the region? 3) What are the four-dimensional microphysical and dynamical characteristics, including variability and controlling factors and processes, of fog and light rainfall?
The research was conducted with two major components: 1) ground-based high-quality observations using multi-sensor platforms and 2) interpretive numerical modeling guided by analysis of the in situ data. The findings reveal a high level of spatial (down to the ridge scale) and temporal (from event to annual scale) heterogeneity in the observations, and a significant impact on the hydrological regime from seeder-feeder interactions among fog, low-level clouds, and stratiform rainfall that enhance coalescence efficiency and lead to significantly higher rainfall rates at the land surface. Specifically, results show that concurrent fog presence can enhance an event's short-term rainfall accumulation by up to one order of magnitude. Results also show that events are strongly modulated by terrain characteristics including elevation, slope, geometry, and land cover, which produce interactions between highly localized flows and gradients of temperature and moisture with larger-scale circulations. The resulting observations of drop size distributions (DSDs) and rainfall patterns, stratified by region and altitude, exhibit clear diurnal and seasonal cycles.
Abstract:
Effective conservation and management of top predators requires a comprehensive understanding of their distributions and of the underlying biological and physical processes that affect these distributions. The Mid-Atlantic Bight shelf break system is a dynamic and productive region where at least 32 species of cetaceans have been recorded through various systematic and opportunistic marine mammal surveys from the 1970s through 2012. My dissertation characterizes the spatial distribution and habitat of cetaceans in the Mid-Atlantic Bight shelf break system by utilizing marine mammal line-transect survey data, synoptic multi-frequency active acoustic data, and fine-scale hydrographic data collected during the 2011 summer Atlantic Marine Assessment Program for Protected Species (AMAPPS) survey. Although studies describing cetacean habitat and distributions have been previously conducted in the Mid-Atlantic Bight, my research specifically focuses on the shelf break region to elucidate both the physical and biological processes that influence cetacean distribution patterns within this cetacean hotspot.
In Chapter One I review biologically important areas for cetaceans in the Atlantic waters of the United States. I describe the study area, the shelf break region of the Mid-Atlantic Bight, in terms of the general oceanography, productivity and biodiversity. According to recent habitat-based cetacean density models, the shelf break region is an area of high cetacean abundance and density, yet little research is directed at understanding the mechanisms that establish this region as a cetacean hotspot.
In Chapter Two I present the basic physical principles of sound in water and describe the methodology used to categorize opportunistically collected multi-frequency active acoustic data using frequency-response techniques. Frequency-response classification methods are usually employed in conjunction with net-tow data, but the logistics of the 2011 AMAPPS survey did not allow for appropriate net-tow data to be collected. Biologically meaningful information can nonetheless be extracted from acoustic scattering regions by comparing their frequency response curves to theoretical curves of known scattering models. Using the five frequencies on the EK60 system (18, 38, 70, 120, and 200 kHz), three categories of scatterers were defined: fish-like (with swim bladder), nekton-like (e.g., euphausiids), and plankton-like (e.g., copepods). I also employed a multi-frequency acoustic categorization method using three frequencies (18, 38, and 120 kHz) that has been used in the Gulf of Maine and on Georges Bank and is based on the presence or absence of volume backscatter above a threshold. This method is more objective than the comparison of frequency response curves because it uses an established backscatter value for the threshold. By removing all data below the threshold, only strong scattering information is retained.
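The presence/absence threshold logic can be illustrated with a toy sketch. The -66 dB threshold and the specific decision rules below are hypothetical placeholders for illustration, not the values or rules used in the survey analysis or in Jech and Michaels (2006).

```python
THRESHOLD_DB = -66.0  # hypothetical Sv threshold; the actual value differs

def categorize_cell(sv):
    """Categorize one acoustic cell from volume backscatter (Sv, dB)
    at 18, 38, and 120 kHz. Cells below threshold at every frequency are
    discarded; otherwise the presence/absence pattern selects a category.
    The rules are illustrative stand-ins only."""
    present = {f: sv[f] >= THRESHOLD_DB for f in (18, 38, 120)}
    if not any(present.values()):
        return None  # below threshold everywhere: weak scattering, dropped
    if present[18] and present[38] and not present[120]:
        return "fish-like"      # strong low-frequency response (swim bladder)
    if present[120] and not present[18]:
        return "plankton-like"  # response rises with frequency (small scatterers)
    return "nekton-like"        # intermediate pattern (e.g., euphausiids)
```

For example, a cell scattering strongly at 18 and 38 kHz but not 120 kHz would be labeled fish-like under these toy rules; the objectivity of the method comes from the fixed threshold rather than a subjective curve comparison.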
In Chapter Three I analyze the distribution of the categorized acoustic regions of interest during the daytime cross-shelf transects. Over all transects, plankton-like acoustic regions of interest were detected most frequently, followed by fish-like and then nekton-like regions. Plankton-like detections per kilometer were the only ones that differed significantly, although nekton-like detections fell just short of significance. The threshold categorization method of Jech and Michaels (2006) provides a more conservative and discrete detection of acoustic scatterers and allows me to retrieve backscatter values along transects in areas that have been categorized. This yields continuous data values that can be integrated at discrete spatial increments for wavelet analysis. Wavelet analysis indicates that the significant spatial scales of interest for fish-like and nekton-like acoustic backscatter range from one to four kilometers and vary among transects.
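The scale-detection step can be sketched with a Ricker (Mexican-hat) continuous wavelet transform applied to backscatter integrated at fixed along-track increments. This is a pure-Python sketch under my own function names; a real analysis would also test wavelet power for significance against a noise background, which is omitted here.

```python
import math

def ricker(points, a):
    # Ricker (Mexican-hat) wavelet sampled at `points` positions, width a
    A = 2.0 / (math.sqrt(3.0 * a) * math.pi ** 0.25)
    return [A * (1.0 - (t / a) ** 2) * math.exp(-t * t / (2.0 * a * a))
            for t in (i - (points - 1) / 2.0 for i in range(points))]

def cwt(signal, widths):
    # Continuous wavelet transform: correlate the signal with a wavelet
    # of each width; rows index widths, columns index along-track position
    n = len(signal)
    out = []
    for a in widths:
        w = ricker(min(10 * int(a), n), a)
        m = len(w)
        row = []
        for i in range(n):
            acc = 0.0
            for j, wv in enumerate(w):
                k = i + j - m // 2
                if 0 <= k < n:
                    acc += signal[k] * wv
            row.append(acc)
        out.append(row)
    return out

def dominant_scale(signal, widths):
    # Width whose wavelet power, summed along the transect, is largest
    power = [sum(v * v for v in row) for row in cwt(signal, widths)]
    return widths[power.index(max(power))]
```

Running `dominant_scale` on a sinusoidal test signal recovers the width corresponding to the oscillation's wavelength, which is the sense in which wavelet power localizes "spatial scales of interest" in the backscatter series.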
In Chapter Four I analyze the fine-scale distribution of cetaceans in the shelf break system of the Mid-Atlantic Bight using corrected sightings per trackline region, classification trees, multidimensional scaling, and random forest analysis. I describe habitat for common dolphins, Risso’s dolphins, and sperm whales. From the distribution of cetacean sightings, patterns of habitat start to emerge: within the shelf break region of the Mid-Atlantic Bight, common dolphins were sighted more prevalently over the shelf, sperm whales were more frequently found in the deep waters offshore, and Risso’s dolphins were most prevalent at the shelf break. Multidimensional scaling shows clear environmental separation among common dolphins, Risso’s dolphins, and sperm whales. The sperm whale random forest habitat model had the lowest misclassification error (0.30) and the Risso’s dolphin random forest habitat model had the greatest misclassification error (0.37). Shallow water depth (less than 148 meters) was the primary variable selected in the classification model for common dolphin habitat. Distance to surface density fronts and distance to surface temperature fronts were the primary variables selected in the classification models describing Risso’s dolphin habitat and sperm whale habitat, respectively. When mapped back into geographic space, these three cetacean species occupy different fine-scale habitats within the dynamic Mid-Atlantic Bight shelf break system.
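The kind of partition a classification tree recovers can be illustrated with a toy rule. The 148 m depth split is reported above, but the 20 km front-distance thresholds and the function itself are invented placeholders, not the fitted models.

```python
def predict_species(depth_m, dist_density_front_km, dist_temp_front_km,
                    density_split_km=20.0, temp_split_km=20.0):
    """Toy decision-tree-style habitat rule for three species. The 148 m
    depth split comes from the dissertation; the front-distance splits
    (20 km defaults) are hypothetical values for illustration."""
    if depth_m < 148.0:
        return "common dolphin"     # shallow shelf waters
    if dist_density_front_km < density_split_km:
        return "Risso's dolphin"    # near surface density fronts (shelf break)
    if dist_temp_front_km < temp_split_km:
        return "sperm whale"        # near surface temperature fronts, offshore
    return "no predicted hotspot"
```

Each branch corresponds to one of the primary splitting variables named above, which is how a fitted tree maps environmental covariates back into the distinct fine-scale habitats the chapter describes.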
In Chapter Five I summarize the previous chapters and outline potential analytical steps to address ecological questions pertaining to the dynamic shelf break region. Taken together, the results of my dissertation demonstrate the use of opportunistically collected data in ecosystem studies; emphasize the need to incorporate middle-trophic-level data and oceanographic features into cetacean habitat models; and underscore the importance of developing a more mechanistic understanding of dynamic ecosystems.