3 results for "Large modeling projects"
in the Illinois Digital Environment for Access to Learning and Scholarship (IDEALS) repository
Abstract:
Mesoscale Gravity Waves (MGWs) are large pressure perturbations that form in the presence of a stable layer at the surface, either behind Mesoscale Convective Systems (MCSs) in summer or over warm frontal surfaces behind elevated convection in winter. MGWs are associated with damaging winds, moderate to heavy precipitation, and occasional heat bursts at the surface. The forcing mechanism for MGWs in this study is hypothesized to be evaporative cooling occurring behind a convective line. This evaporatively cooled air generates a downdraft that depresses the surface-based stable layer, causing pressure decreases, strong winds, and MGW genesis. Using version 3.0 of the Weather Research and Forecasting (WRF) model, evaporative cooling is simulated with an imposed cold thermal. Sensitivity studies examine the response of MGW structure to different thermal and shear profiles in which the strength and depth of the inversion, as well as the amount of wind shear, are varied. MGWs are characterized in terms of response variables such as wind speed perturbations (U'), temperature perturbations (T'), pressure perturbations (P'), potential temperature perturbations (Θ'), and the correlation coefficient (R) between U' and P'. Regime diagrams portray the response of MGWs to these variables in order to better understand the formation, causes, and intensity of MGWs. The results of this study indicate that shallow, weak surface layers coupled with deep, neutral layers above favor the formation of waves of elevation. Conversely, deep, strong surface layers coupled with deep, neutral layers above favor the formation of waves of depression; this is also the atmospheric setup that tends to produce substantial heating at the surface.
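The correlation coefficient R between the wind-speed and pressure perturbations is a standard diagnostic for classifying gravity waves. As a rough illustration (not taken from the thesis), R can be computed from station time series; the series below are entirely hypothetical values standing in for observations during an MGW passage:

```python
import numpy as np

def perturbation(series):
    """Deviation of each sample from the series mean (e.g., U' = U - mean(U))."""
    series = np.asarray(series, dtype=float)
    return series - series.mean()

def wave_correlation(u, p):
    """Correlation coefficient R between wind-speed (U') and pressure (P') perturbations."""
    u_p, p_p = perturbation(u), perturbation(p)
    return float(np.corrcoef(u_p, p_p)[0, 1])

# Hypothetical surface time series (wind in m/s, pressure in hPa) at one station:
u = [3.0, 5.0, 9.0, 12.0, 8.0, 4.0]
p = [1002.1, 1001.4, 1000.2, 999.5, 1000.6, 1001.8]
R = wave_correlation(u, p)  # strongly negative here: wind peaks as pressure falls
```

The sign and magnitude of R (here close to -1, since the synthetic wind maximum coincides with the pressure minimum) are what the regime diagrams in the abstract summarize across the sensitivity experiments.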
Abstract:
The protein lysate array is an emerging technology for quantifying protein concentration ratios across multiple biological samples. It is gaining popularity and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. Chapter 1 introduces protein lysate array quantification, followed by the motivations and goals of this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors via a nonlinear mixed effects model, and we consider a method to approximate the maximum likelihood estimator of all the parameters. Simulation studies on various error structures show that, for data with non-i.i.d. errors, the proposed method yields more accurate estimates and better confidence intervals than the existing single-step least squares method.
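The abstract does not give the sigmoidal model's functional form; as one concrete instance (an assumption, not the thesis's specification), a four-parameter logistic curve is commonly used for dilution-series responses, where a horizontal shift of the curve between two samples corresponds to their log concentration ratio. All parameter values below are hypothetical:

```python
import math

def sigmoid(x, a, b, c, d):
    """Four-parameter logistic response: baseline a, range b, midpoint c, slope scale d."""
    return a + b / (1.0 + math.exp(-(x - c) / d))

# Hypothetical parameters of a shared response curve:
a, b, c, d = 0.1, 2.0, 3.0, 0.8
shift = 1.5  # assumed log concentration difference between two samples

x = 2.0                              # one dilution step
y1 = sigmoid(x, a, b, c, d)          # observed intensity, sample 1
y2 = sigmoid(x + shift, a, b, c, d)  # sample 2: same curve, shifted by the log ratio
```

Under this kind of model, quantification amounts to estimating the shift (and the curve parameters) from noisy intensities, which is where the dimension of the parameter space grows with the number of samples.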
Abstract:
Reliability and dependability modeling can be employed during many stages of the analysis of a computing system to gain insight into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the sizes of the models. Simulation of the models, on the other hand, requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible to explore multiple paths at the same time, significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation of the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow the approach to analyze very large models using a very small amount of storage. Path-based techniques must often compute many paths to obtain tight bounds, and many suffer from having to evaluate numerous unimportant paths. In addition to the basic path-based approach, we therefore present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on path composition, whereby precomputed subpaths are composed to build whole paths efficiently. Another is based on selecting the important paths (among a set of many) for evaluation; evaluating only the important ones helps compute tight bounds efficiently and quickly.
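The uniformization method that the dissertation's path-based bounds build upon can be sketched on a toy model. This is the classical algorithm, not the dissertation's extension: the CTMC is converted to a discrete-time chain at a uniform rate, and the transient distribution is a Poisson-weighted sum over jump counts (which is exactly the sum over paths that path-based methods decompose). The two-state repair model below is a made-up example:

```python
import numpy as np

def transient_distribution(Q, p0, t, tol=1e-10):
    """Transient state distribution of a CTMC at time t via uniformization."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    lam = max(-Q.diagonal())         # uniformization rate: at least every exit rate
    P = np.eye(n) + Q / lam          # one-step matrix of the uniformized DTMC
    v = np.asarray(p0, dtype=float)  # distribution conditioned on k jumps
    w = np.exp(-lam * t)             # Poisson(lam*t) weight for k = 0 jumps
    acc = w * v
    total = w
    k = 0
    while total < 1.0 - tol:         # truncate once the Poisson mass is covered
        k += 1
        v = v @ P
        w *= lam * t / k
        acc = acc + w * v
        total += w
    return acc

# Hypothetical two-state availability model: failure rate 0.5, repair rate 2.0.
Q = [[-0.5, 0.5],
     [2.0, -2.0]]
p = transient_distribution(Q, [1.0, 0.0], t=10.0)  # ~steady state [0.8, 0.2] by t=10
```

Each term of the truncated sum involves one more application of P; the dissertation's contribution is, in effect, to bound this computation for state spaces far too large to form P explicitly.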