Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
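To make the modeling idea concrete, the following is a minimal sketch, not the thesis code, of training a Support Vector Machine regressor that maps resource allocations to application performance; the feature names, synthetic data, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch: SVM regression from resource allocations to performance.
# Data and feature names are synthetic, for illustration only.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical resource-control parameters: CPU cap (%), memory (GB), I/O share.
X = np.column_stack([rng.uniform(10, 100, n),
                     rng.uniform(1, 16, n),
                     rng.uniform(0.1, 1.0, n)])

# Synthetic "response time": degrades as any single resource becomes scarce.
y = 50 / X[:, 0] + 8 / X[:, 1] + 0.5 / X[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scaling matters for SVMs; an RBF kernel captures the nonlinear response.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```

A model of this form can then be inverted numerically for VM sizing: search the allocation space for the cheapest configuration whose predicted performance meets the SLA target.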
Abstract:
This study examines the influence of acculturative stress on substance use and HIV risk behaviors among recent Latino immigrants. The central hypothesis is that specific religious coping mechanisms influence the relationship between acculturative stress and the substance use and HIV risk behaviors of recent Latino immigrants. Within Latino culture, religiosity is a pervasive force, guiding attitudes, behaviors, and even social interactions. When controlling for education and socioeconomic status, Latinos have been found to use religious coping mechanisms more frequently than their non-Latino White counterparts. In addition, less acculturated Latinos use religious coping strategies more frequently than those with higher levels of acculturation. Given its prominent role in Latino culture, this mechanism may well prove influential during difficult life transitions, such as those experienced during the immigration process. This study examines the moderating influence of specific religious coping mechanisms on the relationship between acculturative stress and the substance use/HIV risk behaviors of recent Latino immigrants. Analyses were conducted with wave 2 data from an ongoing longitudinal study investigating associations between pre-immigration factors and health behavior trajectories of recent Latino immigrants. Structural equation and zero-inflated Poisson modeling were implemented to test the specified models and examine the nature of the relationships among the variables. Moderating effects were found for negative religious coping: higher levels of negative religious coping strengthened an inverse relationship between acculturative stress and substance use. Results also indicated direct relationships between religious coping mechanisms and substance use. External and positive religious coping were inversely related to substance use, whereas negative religious coping was positively related to substance use. This study aims to contribute knowledge of how religious coping influences the adaptation process of recent Latino immigrants. Expanding scientific understanding of the function and effect of these coping mechanisms could lead to enhanced, culturally relevant approaches to service delivery among Latino populations. Furthermore, this knowledge could inform research about specific cognitions and behaviors that need to be targeted in prevention and treatment programs with this population.
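For readers unfamiliar with the technique, here is a minimal sketch, on synthetic data with illustrative variable names, of testing a moderation effect with a zero-inflated Poisson model of the kind named above; the interaction term carries the moderation hypothesis.

```python
# Minimal sketch: moderation test in a zero-inflated Poisson model.
# All variables are simulated stand-ins, not the study's data.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 800
stress = rng.normal(0, 1, n)            # acculturative stress (standardized)
neg_coping = rng.normal(0, 1, n)        # negative religious coping
interaction = stress * neg_coping       # the moderation term

# Simulate counts with many structural zeros (e.g., days of substance use).
lin = 0.2 - 0.3 * stress + 0.25 * neg_coping - 0.2 * interaction
counts = rng.poisson(np.exp(lin)) * rng.binomial(1, 0.6, n)

X = sm.add_constant(np.column_stack([stress, neg_coping, interaction]))
zip_model = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)))
res = zip_model.fit(method="bfgs", maxiter=200, disp=False)
print(res.summary())   # the interaction coefficient tests moderation
```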
Abstract:
Thermal analysis of electronic devices is one of the most important steps in the design of modern devices. Precise thermal analysis is essential for designing an effective thermal management system for modern electronic devices such as batteries, LEDs, microelectronics, ICs, circuit boards, semiconductors, and heat spreaders. For a precise thermal analysis, the temperature profile and thermal spreading resistance of the device should be calculated by considering the geometry, material properties, and boundary conditions. Thermal spreading resistance occurs when heat enters through a portion of a surface and flows by conduction. It is the primary source of thermal resistance when heat flows from a tiny heat source into a thin, wide heat spreader. In this thesis, analytical models for the temperature behavior and thermal resistance of some common geometries of microelectronic devices, such as heat channels and heat tubes, are investigated. Different boundary conditions for the system are considered. Along the source plane, combinations of discretely specified heat flux, specified temperatures, and adiabatic conditions are studied. Along the walls of the system, adiabatic or convective cooling boundary conditions are assumed. Along the sink plane, convective cooling with a constant or variable heat transfer coefficient is considered. The effect of orthotropic properties is also discussed. This thesis contains nine chapters. Chapter one introduces the concept of thermal spreading resistance and establishes the originality and importance of the work. Chapter two reviews the literature on thermal spreading resistance over the past fifty years, with a focus on recent advances. In chapters three and four, the thermal resistance of a two-dimensional flux channel with a non-uniform convection coefficient in the heat sink plane is studied. The non-uniform convection is modeled using two functions that can simulate a wide variety of heat sink configurations. In chapter five, a non-symmetric flux channel with different heat transfer coefficients along the right and left edges and the sink plane is analytically modeled. Due to the edge cooling and non-symmetry, the eigenvalues of the system are defined using the heat transfer coefficients on both edges, and a normalized function is calculated to satisfy the orthogonality condition. In chapter six, the thermal behavior of a two-dimensional rectangular flux channel with arbitrary boundary conditions on the source plane is presented. The boundary condition along the source plane can be any combination of the first kind (Dirichlet, or prescribed temperature) and the second kind (Neumann, or prescribed heat flux). The proposed solution can be used to model flux channels with numerous different source-plane boundary conditions, without any limitation on the number and position of heat sources. In chapter seven, the temperature profile of a circular flux tube with discretely specified boundary conditions along the source plane is presented, and the effect of orthotropic properties is discussed. In chapter eight, a three-dimensional rectangular flux channel with non-uniform heat convection along the heat sink plane is analytically modeled. In chapter nine, a summary of the achievements is presented and some systems are proposed for future study. All the models and case studies in the thesis are compared with Finite Element Method (FEM) results.
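As a worked illustration of the eigenfunction-series approach, here is a minimal sketch (my own derivation for the simplest textbook case, not the thesis solution) of the spreading resistance of a centered strip source of half-width a on a two-dimensional flux channel of half-width c and thickness t, with adiabatic sides and a convective sink plane; all symbols and example values are assumptions.

```python
# Minimal sketch: 2D flux-channel spreading resistance by eigenfunction series.
# Symmetric half-domain: source on [0, a], adiabatic walls at x = 0 and x = c,
# convection h at z = t. Result is per unit depth (K*m/W).
import numpy as np

def spreading_resistance_2d(a, c, t, k, h, n_terms=500):
    n = np.arange(1, n_terms + 1)
    lam = n * np.pi / c                 # eigenvalues for adiabatic side walls
    th = np.tanh(lam * t)
    # phi_n blends the conductive (k) and convective (h) sink-plane condition:
    # h -> infinity gives tanh(lam*t) (isothermal base), t -> infinity gives 1.
    phi = (k * lam + h * th) / (k * lam * th + h)
    terms = 2.0 * np.sin(lam * a) ** 2 / (k * c * a**2 * lam**3) * phi
    return terms.sum()

# Example: 1 mm wide source on a 10 mm wide, 2 mm thick copper spreader.
R = spreading_resistance_2d(a=0.5e-3, c=5e-3, t=2e-3, k=400.0, h=1e4)
print(f"spreading resistance: {R:.4f} K*m/W")
```

The series converges quickly (terms decay as 1/lambda^3), which is why a few hundred terms suffice; the chapters summarized above generalize the phi function to variable convection coefficients, edge cooling, and three dimensions.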
Abstract:
Marine spatial planning and ecological research call for high-resolution species distribution data. However, such data are still not available for most large marine vertebrates. The dynamic nature of oceanographic processes and the wide-ranging behavior of many marine vertebrates create further difficulties, as distribution data must incorporate both the spatial and the temporal dimensions. Cetaceans play an essential role in structuring and maintaining marine ecosystems and face increasing threats from human activities. The Azores hold a high diversity of cetaceans, but information about the spatial and temporal distribution patterns of this marine megafauna group in the region is still very limited. To tackle this issue, we created monthly predictive cetacean distribution maps for the spring and summer months, using data collected by the Azores Fisheries Observer Programme between 2004 and 2009. We then combined the individual predictive maps to obtain species richness maps for the same period. Our results reflect great heterogeneity in distribution among species and, within species, among months, reflecting the contrasting influence of oceanographic processes on the distribution of different cetacean species. Nevertheless, some persistent areas of increased species richness could be identified. We argue that policies aimed at effectively protecting cetaceans and their habitats must couple the principle of dynamic ocean management with other area-based management approaches such as marine spatial planning.
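The map-combination step is simple to illustrate. Below is a minimal sketch, with random stand-in rasters rather than the paper's model outputs, of stacking per-species predicted-probability surfaces into a species richness map by thresholding and summing.

```python
# Minimal sketch: species richness from stacked habitat-suitability rasters.
# The probability grids are random stand-ins for model predictions.
import numpy as np

rng = np.random.default_rng(2)
n_species, ny, nx = 5, 100, 120

# One predicted occurrence-probability grid per species for a given month.
prob_maps = rng.random((n_species, ny, nx))

threshold = 0.5                      # the cutoff choice affects richness
presence = prob_maps >= threshold    # boolean presence/absence layers
richness = presence.sum(axis=0)      # predicted species richness per cell

print("max predicted richness:", richness.max())
```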
Abstract:
Effective conservation and management of top predators requires a comprehensive understanding of their distributions and of the underlying biological and physical processes that affect these distributions. The Mid-Atlantic Bight shelf break system is a dynamic and productive region where at least 32 species of cetaceans have been recorded through various systematic and opportunistic marine mammal surveys from the 1970s through 2012. My dissertation characterizes the spatial distribution and habitat of cetaceans in the Mid-Atlantic Bight shelf break system by utilizing marine mammal line-transect survey data, synoptic multi-frequency active acoustic data, and fine-scale hydrographic data collected during the 2011 summer Atlantic Marine Assessment Program for Protected Species (AMAPPS) survey. Although studies describing cetacean habitat and distributions have been previously conducted in the Mid-Atlantic Bight, my research specifically focuses on the shelf break region to elucidate both the physical and biological processes that influence cetacean distribution patterns within this cetacean hotspot.
In Chapter One I review biologically important areas for cetaceans in the Atlantic waters of the United States. I describe the study area, the shelf break region of the Mid-Atlantic Bight, in terms of the general oceanography, productivity and biodiversity. According to recent habitat-based cetacean density models, the shelf break region is an area of high cetacean abundance and density, yet little research is directed at understanding the mechanisms that establish this region as a cetacean hotspot.
In Chapter Two I present the basic physical principles of sound in water and describe the methodology used to categorize opportunistically collected multi-frequency active acoustic data using frequency-response techniques. Frequency-response classification methods are usually employed in conjunction with net-tow data, but the logistics of the 2011 AMAPPS survey did not allow for appropriate net-tow data to be collected. Biologically meaningful information can nonetheless be extracted from acoustic scattering regions by comparing their frequency response curves to theoretical curves of known scattering models. Using the five frequencies on the EK60 system (18, 38, 70, 120, and 200 kHz), three categories of scatterers were defined: fish-like (with swim bladder), nekton-like (e.g., euphausiids), and plankton-like (e.g., copepods). I also employed a multi-frequency acoustic categorization method using three frequencies (18, 38, and 120 kHz), previously used in the Gulf of Maine and on Georges Bank, which is based on the presence or absence of volume backscatter above a threshold. This method is more objective than the comparison of frequency response curves because it uses an established backscatter value for the threshold; by removing all data below the threshold, only strong scattering information is retained.
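To show the shape of such a presence/absence scheme, here is a minimal sketch on random stand-in echograms; the threshold value and the decision rules are hypothetical placeholders in the spirit of the method, not the dissertation's actual criteria.

```python
# Minimal sketch: threshold-based multifrequency categorization of volume
# backscatter (Sv, dB) grids at 18, 38, and 120 kHz. Values are synthetic.
import numpy as np

rng = np.random.default_rng(3)
sv = {f: rng.uniform(-90, -50, (200, 300)) for f in (18, 38, 120)}

THRESHOLD = -66.0   # dB re 1 m^-1; an assumed value for illustration
above = {f: sv[f] > THRESHOLD for f in sv}

# Hypothetical decision rules: swim-bladdered fish scatter strongly at all
# three frequencies; euphausiid-like nekton only at the higher two; small
# Rayleigh scatterers (copepod-like plankton) only at the highest.
fish_like = above[18] & above[38] & above[120]
nekton_like = ~above[18] & above[38] & above[120]
plankton_like = ~above[18] & ~above[38] & above[120]

for name, mask in [("fish-like", fish_like), ("nekton-like", nekton_like),
                   ("plankton-like", plankton_like)]:
    print(name, "cells:", int(mask.sum()))
```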
In Chapter Three I analyze the distribution of the categorized acoustic regions of interest along the daytime cross-shelf transects. Over all transects, plankton-like acoustic regions of interest were detected most frequently, followed by fish-like and then nekton-like acoustic regions. Plankton-like detections per kilometer were the only ones to differ significantly, although nekton-like detections fell just short of significance. Using the threshold categorization method of Jech and Michaels (2006) provides a more conservative and discrete detection of acoustic scatterers and allows me to retrieve backscatter values along transects in areas that have been categorized. This yields continuous data values that can be integrated at discrete spatial increments for wavelet analysis. Wavelet analysis indicates that the significant spatial scales of interest for fish-like and nekton-like acoustic backscatter range from one to four kilometers and vary among transects.
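As an illustration of how a wavelet transform exposes dominant spatial scales, here is a minimal sketch on a synthetic transect; the spatial increment, wavelet choice, and signal are all assumptions, not the dissertation's settings.

```python
# Minimal sketch: continuous wavelet transform of along-track backscatter
# to find dominant spatial scales. The transect signal is synthetic.
import numpy as np
import pywt

dx = 0.1                                  # spatial increment, km (assumed)
x = np.arange(0, 50, dx)                  # a 50 km transect
# Synthetic backscatter with ~2 km patchiness plus noise.
signal = (np.sin(2 * np.pi * x / 2.0)
          + 0.5 * np.random.default_rng(4).normal(size=x.size))

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=dx)
power = (np.abs(coefs) ** 2).mean(axis=1)         # scale-averaged power

dominant_km = 1.0 / freqs[np.argmax(power)]       # wavelength of peak power
print(f"dominant spatial scale: {dominant_km:.2f} km")
```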
In Chapter Four I analyze the fine-scale distribution of cetaceans in the shelf break system of the Mid-Atlantic Bight using corrected sightings per trackline region, classification trees, multidimensional scaling, and random forest analysis. I describe habitat for common dolphins, Risso's dolphins, and sperm whales. From the distribution of cetacean sightings, patterns of habitat emerge: within the shelf break region of the Mid-Atlantic Bight, common dolphins were sighted most often over the shelf, sperm whales were more frequently found in the deep waters offshore, and Risso's dolphins were most prevalent at the shelf break. Multidimensional scaling shows clear environmental separation among common dolphins, Risso's dolphins, and sperm whales. The sperm whale random forest habitat model had the lowest misclassification error (0.30) and the Risso's dolphin model the greatest (0.37). Shallow water depth (less than 148 meters) was the primary variable selected in the classification model for common dolphin habitat. Distance to surface density fronts and distance to surface temperature fronts were the primary variables selected to describe Risso's dolphin and sperm whale habitat, respectively. When mapped back into geographic space, these three cetacean species occupy different fine-scale habitats within the dynamic Mid-Atlantic Bight shelf break system.
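For readers unfamiliar with random forest habitat models, the following is a minimal sketch on synthetic data; the covariates echo those named above, but the generating rule, sample sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: a random forest habitat model classifying presence vs.
# absence from depth, distance to fronts, and surface temperature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
depth = rng.uniform(20, 3000, n)           # water depth, m
dist_front = rng.uniform(0, 100, n)        # km to nearest surface front
sst = rng.uniform(12, 26, n)               # sea surface temperature, deg C

# Synthetic presence rule: shallow water raises presence odds (the 148 m
# value echoes the common dolphin depth split reported above).
p = 1 / (1 + np.exp((depth - 148) / 80))
y = rng.binomial(1, 0.8 * p + 0.1)

X = np.column_stack([depth, dist_front, sst])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("misclassification error:", 1 - rf.score(X_te, y_te))
print("variable importances:",
      dict(zip(["depth", "dist_front", "sst"],
               rf.feature_importances_.round(2))))
```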
In Chapter Five I summarize the previous chapters and present potential analytical steps to address ecological questions pertaining to the dynamic shelf break region. Taken together, the results of my dissertation demonstrate the use of opportunistically collected data in ecosystem studies; emphasize the need to incorporate middle-trophic-level data and oceanographic features into cetacean habitat models; and underscore the importance of developing a more mechanistic understanding of dynamic ecosystems.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges: in particular, one needs to account for missing data, complex sampling designs, and measurement error. Conceptually, a survey organization could spend substantial resources obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze; however, this scenario is often unrealistic. To address these practical issues, survey organizations can leverage information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or from the survey design to improve inferences. And they can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
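The augmentation step itself is easy to picture. Below is a minimal sketch, with made-up variable names and benchmark values, of appending synthetic records whose margin matches a prior belief while all other variables are left missing.

```python
# Minimal sketch: encode a prior on a margin by appending synthetic records.
# Column names and the 51% benchmark are illustrative, not from the thesis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

# Original categorical survey data (coded as small integers).
data = pd.DataFrame({
    "sex": rng.binomial(1, 0.40, 1000),    # sample margin: ~40% female
    "educ": rng.integers(0, 4, 1000),
    "employ": rng.binomial(1, 0.6, 1000),
    "marital": rng.integers(0, 3, 1000),
})

def augment_margin(df, column, prior_probs, n_aug):
    """Append n_aug synthetic rows whose `column` follows prior_probs and
    whose remaining variables are missing; more rows = stronger prior."""
    synth = pd.DataFrame(np.nan, index=range(n_aug), columns=df.columns)
    synth[column] = rng.choice(list(prior_probs), size=n_aug,
                               p=list(prior_probs.values()))
    return pd.concat([df, synth], ignore_index=True)

# Suppose a census benchmark says the population is 51% female.
augmented = augment_margin(data, "sex", {0: 0.49, 1: 0.51}, n_aug=500)
print(augmented["sex"].value_counts(normalize=True, dropna=False))
```

The MCMC sampler for the latent class model then treats the blank cells in the appended rows as ordinary missing data.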
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
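As a toy version of the third method, here is a minimal sketch, with synthetic data and an invented 8% over-reporting rate, of estimating a misreporting matrix from a gold standard survey and imputing error-corrected values by Bayes' rule; the thesis's actual reporting error models are richer than this.

```python
# Minimal sketch: learn P(reported | true) from a gold standard survey and
# impute corrected education levels in a main survey. Data are synthetic.
import numpy as np

rng = np.random.default_rng(8)

# Gold standard: both reported and verified education are known
# (0 = no degree, 1 = bachelor's, 2 = graduate).
true_gs = rng.integers(0, 3, 5000)
over = (true_gs < 2) & (rng.random(5000) < 0.08)     # 8% inflate by one level
reported_gs = np.where(over, true_gs + 1, true_gs)

# Estimate the misreporting matrix P(reported | true).
error_matrix = np.zeros((3, 3))
for t in range(3):
    for r in range(3):
        error_matrix[t, r] = np.mean(reported_gs[true_gs == t] == r)
print("estimated P(reported | true):\n", error_matrix.round(3))

# Impute error-corrected values in the main survey via Bayes' rule, using
# the gold standard distribution as the prior over true education.
prior = np.bincount(true_gs, minlength=3) / true_gs.size
posterior = prior[:, None] * error_matrix            # P(true) * P(rep | true)
posterior /= posterior.sum(axis=0, keepdims=True)    # columns: P(true | rep)

reported_main = rng.integers(0, 3, 10)               # stand-in main survey
imputed = np.array([rng.choice(3, p=posterior[:, r]) for r in reported_main])
print("reported:", reported_main, "\nimputed: ", imputed)
```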
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
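The second-order fabric tensor is simply the average outer product of the unit contact normals; its eigenvalues summarize the anisotropy of the contact network. Here is a minimal sketch on synthetic normals (the compression bias is my own illustration, not tomography data).

```python
# Minimal sketch: second-order fabric tensor F = (1/N) * sum_c n_c (x) n_c
# over inter-particle contact normals; trace(F) = 1 by construction.
import numpy as np

rng = np.random.default_rng(9)

# Stand-in contact normals, biased toward the z-axis to mimic the contact
# anisotropy induced by compression during calendering.
normals = rng.normal([0, 0, 0], [1, 1, 2], size=(5000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

F = np.einsum("ci,cj->ij", normals, normals) / normals.shape[0]

print("fabric tensor:\n", F.round(3))
print("eigenvalues (isotropic packing would give ~1/3 each):",
      np.linalg.eigvalsh(F).round(3))
```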
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is fragmentation, wherein brittle materials subjected to uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Abstract:
The work presented in this dissertation applies engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in U.S. Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. Chapters 1 and 2 present general background information relevant to the development of probabilistic models for predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
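To make the survival-model framework concrete, here is a minimal sketch, with a toy one-compartment hazard and synthetic dive profiles rather than any of the seventeen hazard variants, of the standard probabilistic setup: P(DCS) = 1 - exp(-integral of the instantaneous risk), with parameters fit by maximum likelihood.

```python
# Minimal sketch: maximum-likelihood fit of a probabilistic DCS model with a
# one-compartment, linear-supersaturation hazard. Data are synthetic.
import numpy as np
from scipy.optimize import minimize

dt = 1.0                                     # time step, minutes

def p_dcs(theta, pamb):
    """P(DCS) for one ambient-pressure profile: hazard r(t) is the positive
    part of compartment supersaturation, scaled by a gain g."""
    g, tau = np.exp(theta)                   # positivity via log-parameters
    ptis = np.empty_like(pamb)
    ptis[0] = pamb[0]
    for i in range(1, pamb.size):            # exponential gas kinetics
        ptis[i] = ptis[i - 1] + (pamb[i] - ptis[i - 1]) * dt / tau
    hazard = g * np.maximum(ptis - pamb, 0.0) / pamb
    return 1.0 - np.exp(-hazard.sum() * dt)

def neg_log_likelihood(theta, profiles, outcomes):
    eps = 1e-12
    ll = 0.0
    for pamb, dcs in zip(profiles, outcomes):
        p = np.clip(p_dcs(theta, pamb), eps, 1 - eps)
        ll += np.log(p) if dcs else np.log(1 - p)
    return -ll

# Toy data: square-wave dives (pressure in atm), a few marked as DCS cases.
rng = np.random.default_rng(10)
profiles = [np.concatenate([np.full(60, 1 + d), np.full(120, 1.0)])
            for d in rng.uniform(1.0, 5.0, 40)]
outcomes = [rng.random() < 0.1 + 0.05 * p.max() for p in profiles]

fit = minimize(neg_log_likelihood, x0=[np.log(0.01), np.log(20.0)],
               args=(profiles, outcomes), method="Nelder-Mead")
print("fitted gain and time constant:", np.exp(fit.x))
```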
We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve correlation between model predictions and observed data. Model selection techniques identified two delay models as having the best overall performance, but comparison with the best-performing no-delay model, together with model selection starting from our best no-delay pharmacokinetic model, indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
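The delay mechanic itself is simple to illustrate: the compartment is driven by the ambient pressure from some fixed lag earlier. Below is a minimal sketch with invented parameter values, not one of the dissertation's ten models.

```python
# Minimal sketch: one-compartment gas kinetics with a discrete inflow delay.
# tau and the 5-minute lag are illustrative values.
import numpy as np

def tissue_tension(pamb, tau, delay, dt=1.0):
    """Exponential compartment kinetics driven by a delayed ambient input."""
    lag = int(round(delay / dt))
    ptis = np.empty_like(pamb)
    ptis[0] = pamb[0]
    for i in range(1, pamb.size):
        driver = pamb[max(i - lag, 0)]       # delayed ambient pressure
        ptis[i] = ptis[i - 1] + (driver - ptis[i - 1]) * dt / tau
    return ptis

# Square-wave dive: 60 min at 3 atm, then surface; compare delay vs. none.
pamb = np.concatenate([np.full(60, 3.0), np.full(120, 1.0)])
no_delay = tissue_tension(pamb, tau=20.0, delay=0.0)
with_delay = tissue_tension(pamb, tau=20.0, delay=5.0)
print("peak supersaturation, no delay: ", (no_delay - pamb).max().round(3))
print("peak supersaturation, 5 min lag:", (with_delay - pamb).max().round(3))
```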
Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure cannot occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of those regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
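As a toy illustration of root bounding for a failure-region boundary, here is a minimal sketch: for a known-DCS exposure, find the threshold parameter beyond which the model accrues no risk at all (and hence would fail). The hazard model and values are my own stand-ins.

```python
# Minimal sketch: bracket the boundary of the model-failure region with a
# root finder. Toy hazard: risk accrues only where supersaturation exceeds
# a threshold parameter; failure = zero total risk on a known-DCS dive.
import numpy as np
from scipy.optimize import brentq

pamb = np.concatenate([np.full(60, 3.0), np.full(120, 1.0)])   # atm
ptis = np.empty_like(pamb)
ptis[0] = pamb[0]
for i in range(1, pamb.size):                 # tau = 20 min gas kinetics
    ptis[i] = ptis[i - 1] + (pamb[i] - ptis[i - 1]) / 20.0

def total_risk(threshold):
    return np.maximum(ptis - pamb - threshold, 0.0).sum()

# The failure boundary is where accumulated risk first vanishes; bracket it
# between a clearly safe value (0) and a clearly failing one (5 atm).
eps = 1e-9
boundary = brentq(lambda th: total_risk(th) - eps, 0.0, 5.0)
print(f"thresholds above ~{boundary:.3f} atm put the model in failure")
```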