990 results for Context modeling
Abstract:
We present a geospatial model to predict the radiofrequency electromagnetic field from fixed site transmitters for use in epidemiological exposure assessment. The proposed model extends an existing model toward the prediction of indoor exposure, that is, at the homes of potential study participants. The model is based on accurate operation parameters of all stationary transmitters of mobile communication base stations, and radio broadcast and television transmitters for an extended urban and suburban region in the Basel area (Switzerland). The model was evaluated by calculating Spearman rank correlations and weighted Cohen's kappa (kappa) statistics between the model predictions and measurements obtained at street level, in the homes of volunteers, and in front of the windows of these homes. The correlation coefficients of the numerical predictions with street level measurements were 0.64, with indoor measurements 0.66, and with window measurements 0.67. The kappa coefficients were 0.48 (95%-confidence interval: 0.35-0.61) for street level measurements, 0.44 (95%-CI: 0.32-0.57) for indoor measurements, and 0.53 (95%-CI: 0.42-0.65) for window measurements. Although the modeling of shielding effects by walls and roofs requires considerable simplifications of a complex environment, we found a comparable accuracy of the model for indoor and outdoor points.
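A minimal sketch of the two evaluation statistics used above, assuming the model predictions and measurements are available as paired arrays and are binned into ordinal exposure categories before computing the weighted kappa (the array values and bin edges below are illustrative, not the study's data):

import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical model predictions and measurements (V/m) at the same points.
predicted = np.array([0.05, 0.12, 0.30, 0.08, 0.22, 0.45, 0.10])
measured = np.array([0.06, 0.10, 0.25, 0.09, 0.30, 0.40, 0.07])

# Spearman rank correlation on the raw paired values.
rho, p_value = spearmanr(predicted, measured)

# Bin both series into ordinal exposure categories (illustrative cut points),
# then compute a weighted Cohen's kappa on the categories.
bins = [0.1, 0.2, 0.3]
pred_cat = np.digitize(predicted, bins)
meas_cat = np.digitize(measured, bins)
kappa = cohen_kappa_score(pred_cat, meas_cat, weights="linear")

print(f"Spearman rho = {rho:.2f}, weighted kappa = {kappa:.2f}")

Cohen's kappa requires categorical ratings, which is why the continuous field strengths are binned first; the Spearman coefficient operates directly on the raw paired values.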
Abstract:
Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling in combination with registration algorithms is employed. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and patient image is established using nonrigid registration. The method offers opportunities in atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
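The multi-scale, multi-physics growth model summarized above is far richer than anything that fits here; the toy sketch below only illustrates the first step conceptually, seeding a tumor in a voxel grid and growing it with a reaction-diffusion (Fisher-KPP) update. The grid, seed location, and parameters are hypothetical, and the biomechanical coupling is omitted:

import numpy as np

# Hypothetical 3-D atlas voxel grid; c holds normalized tumor cell density.
shape = (64, 64, 64)
c = np.zeros(shape)
c[32, 32, 32] = 1.0          # tumor seed placed at an illustrative atlas location

D, rho, dt = 0.1, 0.05, 1.0  # illustrative diffusion, proliferation, time step

def laplacian(u):
    # 6-neighbour discrete Laplacian with periodic wrapping (toy boundary handling).
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) +
            np.roll(u, 1, 2) + np.roll(u, -1, 2) - 6.0 * u)

for _ in range(100):
    # Fisher-KPP step: diffusion of tumor cells plus logistic proliferation.
    c += dt * (D * laplacian(c) + rho * c * (1.0 - c))
    np.clip(c, 0.0, 1.0, out=c)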
Abstract:
The hydraulic fracturing of the Marcellus Formation creates a byproduct known as frac water. Five frac water samples were collected in Bradford County, PA. Inorganic chemical analysis, field parameters analysis, alkalinity titrations, total dissolved solids (TDS), total suspended solids (TSS), biological oxygen demand (BOD), and chemical oxygen demand (COD) were conducted on each sample to characterize frac water. A database of frac water chemistry results from across the state of Pennsylvania from multiple sources was compiled in order to provide the public and research community with an accurate characterization of frac water. Four geochemical models were created to model the reactions between frac water and the Marcellus Formation, Purcell Limestone, and the oil field brines presumed present in the formations. The average concentrations of chloride and TDS in the five frac water samples were 1.1 ± 0.5 × 10^5 mg/L (5.5X average seawater) and 140,000 mg/L (4X average seawater). BOD values for frac water immediately upon flowback were over 10X greater than the BOD of typical wastewater, but decreased into the range of typical wastewater after a short period of time. The COD of frac water decreases dramatically with increasing elapsed time from flowback, but remains considerably higher than that of typical wastewater. Different alkalinity calculation methods produced a range of alkalinity values for frac water; this result is most likely due to high concentrations of aliphatic acid anions present in the samples. Laboratory analyses indicate that frac water composition is quite variable depending on the companies from which the water was collected, the geology of the local area, and the number of fracturing jobs in which the frac water was used, but that frac water will require more treatment than typical wastewater regardless of the precise composition of each sample. The geochemical models suggest that the presence of organic complexes in an oil field brine and the Marcellus Formation aids in the dissolution of ions such as barium and strontium into solution. Although equilibration reactions between the Marcellus Formation and the slickwater account for some of the final frac water composition, the predominant control on frac water composition appears to be the mixing ratio between the oil field brine and slickwater. The high concentration of barium in the frac water is likely due to the abundance of barite nodules in the Purcell Limestone, and the lack of sulfate in the frac water samples is due to the reducing, anoxic conditions in the earth's subsurface that allow for the degassing of H2S(g).
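A minimal sketch of the two-endmember mixing control identified above, using purely illustrative endmember chloride concentrations (not values from the compiled database):

import numpy as np

# Illustrative endmember chloride concentrations (mg/L); not measured values.
cl_brine = 180_000.0      # hypothetical oil-field brine
cl_slickwater = 300.0     # hypothetical injected slickwater

# Predicted frac-water chloride over a range of brine mixing fractions,
# assuming chloride behaves conservatively during mixing.
f_brine = np.linspace(0.0, 1.0, 11)
cl_mix = f_brine * cl_brine + (1.0 - f_brine) * cl_slickwater

for f, cl in zip(f_brine, cl_mix):
    print(f"brine fraction {f:.1f}: Cl ~ {cl:,.0f} mg/L")

Under these made-up endmembers, a brine fraction of roughly 0.6 would reproduce a chloride concentration on the order of the 1.1 × 10^5 mg/L average reported above; this is only an arithmetic illustration, not an inferred mixing fraction.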
Abstract:
This study investigates the possibility of custom fitting a widely accepted approximate yield surface equation (Ziemian, 2000) to the theoretical yield surfaces of five different structural shapes, which include wide-flange, solid and hollow rectangular, and solid and hollow circular shapes. To achieve this goal, a theoretically “exact” but overly complex representation of the cross section’s yield surface was initially obtained by using fundamental principles of solid mechanics. A weighted regression analysis was performed with the “exact” yield surface data to obtain the specific coefficients of three terms in the approximate yield surface equation. These coefficients were calculated to determine the “best” yield surface equation for a given cross-section geometry. Given that the exact yield surface must have zero percent concavity, this investigation evaluated the resulting coefficient of determination (R^2).
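The three-term equation itself is given in Ziemian (2000) and is not reproduced here; the sketch below only illustrates the weighted-regression step, fitting hypothetical coefficients c1, c2, c3 of an assumed surface c1*p^2 + c2*mz^2 + c3*my^4 = 1 to sampled "exact" yield-surface points:

import numpy as np

# Hypothetical sampled "exact" yield-surface points (p, mz, my): normalized
# axial force and major/minor-axis moments that together cause full yielding.
pts = np.array([
    [0.9, 0.3, 0.1],
    [0.5, 0.8, 0.2],
    [0.2, 0.6, 0.7],
    [0.0, 1.0, 0.0],
    [0.7, 0.0, 0.6],
    [0.4, 0.5, 0.5],
])
w = np.array([1.0, 1.0, 2.0, 2.0, 1.0, 1.5])   # illustrative regression weights

p, mz, my = pts.T
# Design matrix for the three illustrative terms; the target is 1 on the surface.
X = np.column_stack([p**2, mz**2, my**4])
y = np.ones(len(pts))

# Weighted least squares: scale rows by sqrt(w) and solve the least-squares problem.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print("fitted coefficients:", coef)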
Abstract:
Fuel cells are a topic of high interest in the scientific community because of their ability to efficiently convert chemical energy into electrical energy. This thesis is focused on solid oxide fuel cells (SOFCs) because of their fuel flexibility, and is specifically concerned with the anode properties of SOFCs. The anodes are composed of a ceramic material (yttrium stabilized zirconia, or YSZ) and a conducting material. Recent research has shown that an infiltrated anode may offer better performance at a lower cost. This thesis focuses on the creation of a model of an infiltrated anode that mimics the underlying physics of the production process. Using the model, several key parameters for anode performance are considered. These are the initial volume fraction of YSZ in the slurry before sintering, the final porosity of the composite anode after sintering, and the size of the YSZ and conducting particles in the composite. The performance measures of the anode, namely percolation threshold and effective conductivity, are analyzed as a function of these input parameters. Simple two- and three-dimensional percolation models are used to determine the conditions under which the full infiltrated-anode model should be investigated. These simpler models showed that the aspect ratio of the anode has no effect on the percolation threshold or effective conductivity, and that cell sizes of 30^3 are needed to obtain accurate conductivity values. The full model of the infiltrated anode is able to predict the performance of SOFC anodes: increasing the size of the YSZ particles decreases the percolation threshold and increases the effective conductivity at low conductor loadings. Similar trends are seen for a decrease in final porosity and a decrease in the initial volume fraction of YSZ.
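A minimal sketch of the kind of lattice percolation check the simpler models perform, assuming a cubic cell of randomly assigned conductor sites and testing whether a conducting cluster spans the cell; the cell size matches the 30^3 value mentioned above, while the loading is illustrative:

import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
n = 30                      # cell size, i.e. a 30^3 lattice
loading = 0.35              # illustrative conductor volume fraction

# Randomly assign sites to the conducting phase.
conductor = rng.random((n, n, n)) < loading

# Label connected conducting clusters (6-connectivity by default).
labels, _ = label(conductor)

# The phase percolates if some cluster touches both opposite faces of the cell.
spanning = np.intersect1d(labels[0][labels[0] > 0], labels[-1][labels[-1] > 0])
print("percolates across the cell:", spanning.size > 0)

Repeating this check over many random realizations and a sweep of loading values gives an estimate of the percolation threshold as the loading at which the spanning probability rises sharply.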
Abstract:
As lightweight and slender structural elements are more frequently used in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental studies. It is expected that these crowd models yield a more accurate estimation of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might be inaccurate in representing the impact of the occupants on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. SAP2000 was used to create the finite element models and to run the analyses; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the impact on the structure of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
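A minimal sketch of the "occupant as an added degree of freedom" idea, coupling a single-degree-of-freedom structure to a single-degree-of-freedom passive crowd; all masses, stiffnesses, and damping values are illustrative, not the JWG parameters:

import numpy as np

# Illustrative properties: empty structure (1) and passive crowd (2).
m1, k1, c1 = 5000.0, 2.0e6, 4.0e3      # structure mass (kg), stiffness (N/m), damping (N s/m)
m2, k2, c2 = 800.0, 3.0e5, 2.5e3       # crowd modelled as an added degree of freedom

M = np.array([[m1, 0.0], [0.0, m2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])

# State-space form x' = A x for the damped system; its eigenvalues give the
# modal frequencies and damping ratios of the occupied structure.
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
freq_hz = np.abs(lam.imag) / (2.0 * np.pi)
zeta = -lam.real / np.abs(lam)
print("damped modal frequencies (Hz):", np.unique(np.round(freq_hz, 2)))
print("modal damping ratios:", np.unique(np.round(zeta, 3)))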
Abstract:
This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. The model was developed by studying a small set of variables that together determine a system's throughput. The importance of this model lies in assisting system designers in deciding whether or not to commit to designing a reconfigurable distributed system, based on the estimated performance and hardware costs. Because custom hardware design and distributed system design are both time consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Based on experimental data, the model presented in this thesis shows a close fit, with less than 10% experimental error on average. The model is limited to a certain range of problems, but it can still be used within those limitations and also provides a foundation for further work on modeling reconfigurable distributed systems.
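The abstract does not enumerate the model's variables, so the following is only a hedged illustration of the kind of back-of-the-envelope throughput estimate such a model enables, assuming per-job communication and FPGA compute costs and several boards working in parallel (all names and numbers are hypothetical):

# Hypothetical per-job costs for a reconfigurable distributed system.
bytes_per_job = 4.0e6          # data shipped to/from one FPGA board per job
link_bandwidth = 100.0e6 / 8   # bytes/s over an illustrative 100 Mbit/s link
fpga_time_per_job = 0.015      # seconds of FPGA processing per job
num_boards = 8                 # boards working in parallel

t_comm = bytes_per_job / link_bandwidth   # serialized communication time per job
t_compute = fpga_time_per_job             # compute time, overlapped across boards

# If communication is the shared bottleneck, jobs cannot complete faster than the
# link allows; otherwise the boards' aggregate compute rate limits throughput.
throughput = min(1.0 / t_comm, num_boards / t_compute)
print(f"estimated throughput: {throughput:.1f} jobs/s")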
Abstract:
Drug release from a fluid-contacting biomaterial is simulated using a microfluidic device with channels defined by solute-loaded hydrogel. In order to mimic a drug delivery device, a solution of poly(ethylene glycol) diacrylate (PEG-DA), solute, and photoinitiator is cured inside a microfluidic device with a channel through the center of the hydrogel. As water is pumped through the channel, solute diffuses out of the hydrogel and into the water. Channel sizes within the devices range from 300 µm to 1000 µm to simulate vessels within the body. The properties of the PEG hydrogel were characterized by the extent of crosslinking, the swelling ratio, and the mesh size of the gel. The structure of the hydrogel was related to the UV exposure dosage and the initial water and solute content in the PEG-DA solution.
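A minimal sketch of the diffusive release the device mimics, treating the solute-loaded gel as a one-dimensional slab with a perfect-sink boundary where water flows past it; the diffusivity, thickness, and discretization are illustrative, not values fitted to the PEG-DA system:

import numpy as np

D = 1.0e-10        # illustrative solute diffusivity in the gel, m^2/s
L = 1.0e-3         # illustrative gel thickness, m
nx, dt = 100, 0.05 # grid points and time step (s)
dx = L / (nx - 1)

c = np.ones(nx)    # initial solute concentration, normalized to 1 in the gel
c0_total = c.sum()

for _ in range(200_000):
    c[0] = 0.0                         # perfect sink where water flows past the gel
    c[-1] = c[-2]                      # no-flux condition at the far gel boundary
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

released = 1.0 - c.sum() / c0_total
print(f"fraction of solute released: {released:.2f}")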
Abstract:
The long-term performance of infrastructure depends on reliable and sustainable designs. Many of Pennsylvania’s streams experience sediment transport problems that increase maintenance costs and lower the structural integrity of bridge crossings. A stream restoration project is one common mitigation measure used to correct such problems at bridge crossings. Specifically, in an attempt to alleviate aggradation problems at the Old Route 15 Bridge crossing on White Deer Creek, in White Deer, PA, two in-stream structures (rock cross vanes) and several bank stabilization features were installed along with a complete channel redevelopment. The objectives of this research were to characterize the hydraulic and sediment transport processes occurring at the White Deer Creek site and to investigate, through physical and mathematical modeling, the use of in-stream restoration structures. The goal is to use the results of this study to prevent aggradation and other sediment-related problems in the vicinity of bridges through improved design considerations. Monitoring and modeling indicate that the study site on White Deer Creek is currently unstable, experiencing general channel down-cutting, bank erosion, and several local areas of increased aggradation and degradation of the channel bed. An in-stream structure installed upstream of the Old Route 15 Bridge failed by sediment burial, caused by the high sediment load that White Deer Creek transports as well as the backwater effects caused by the bridge crossing. The in-stream structure installed downstream of the Old Route 15 Bridge is beginning to fail because of the alignment of the structure with the approach direction of flow from upstream of the restoration structure.
Abstract:
We investigated how well structural features such as note density or the relative number of changes in the melodic contour could predict success in implicit and explicit memory for unfamiliar melodies. We also analyzed which features are more likely to elicit increasingly confident judgments of "old" in a recognition memory task. An automated analysis program computed structural aspects of the melodies, both independent of any context and with reference to the other melodies in the test set and the parent corpus of pop music. A few features predicted success in both memory tasks, which points to a shared memory component. However, motivic complexity relative to a large corpus of pop music had different effects on explicit and implicit memory. We also found that just a few features are associated with different rates of "old" judgments, whether the items were old or new. Rarer motives relative to the test set predicted hits, and rarer motives relative to the corpus predicted false alarms. This data-driven analysis provides further support for both shared and separable mechanisms in implicit and explicit memory retrieval, as well as for the role of distinctiveness in true and false judgments of familiarity.
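A minimal sketch of two of the structural features named above, computed from a toy melody given as (onset time, MIDI pitch) pairs; the automated analysis program used in the study computes a much larger, corpus-referenced feature set:

# Toy melody: (onset_seconds, midi_pitch). Values are illustrative only.
melody = [(0.0, 60), (0.5, 62), (1.0, 64), (1.5, 62), (2.0, 65), (2.5, 64), (3.0, 67)]

onsets = [t for t, _ in melody]
pitches = [p for _, p in melody]

# Note density: notes per second over the melody's span.
note_density = len(melody) / (onsets[-1] - onsets[0])

# Relative number of changes in melodic contour: count sign flips between
# successive pitch intervals, normalized by the number of intervals.
intervals = [b - a for a, b in zip(pitches, pitches[1:])]
signs = [(i > 0) - (i < 0) for i in intervals]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b and a != 0 and b != 0)
contour_change_rate = changes / len(intervals)

print(f"note density: {note_density:.2f} notes/s")
print(f"contour change rate: {contour_change_rate:.2f}")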
Abstract:
Genomic alterations have been linked to the development and progression of cancer. The technique of Comparative Genomic Hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about the number of copies in DNA. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about the genomic alterations from array-CGH data. As increasing amounts of array CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify gains and losses in the number of copies based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Since the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme and breast cancer are analyzed, and comparisons are made with some widely-used algorithms to illustrate the reliability and success of the technique.
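A much-simplified sketch of the hidden Markov model idea: three hidden states (loss, neutral, gain) with fixed Gaussian emissions for the log2 intensity ratios, and posterior state probabilities from the forward-backward recursions. The Bayesian treatment described above additionally samples the model parameters with a Metropolis-within-Gibbs sampler, which is omitted here; all numbers are illustrative:

import numpy as np
from scipy.stats import norm

# Illustrative log2 ratios along one chromosome (a short gained segment at the end).
ratios = np.array([0.02, -0.05, 0.01, -0.40, -0.45, 0.03, 0.05, 0.55, 0.60, 0.58])

means, sd = np.array([-0.5, 0.0, 0.5]), 0.15          # loss / neutral / gain emissions
pi = np.array([0.1, 0.8, 0.1])                        # initial state probabilities
A = np.array([[0.95, 0.05, 0.00],                     # sticky transitions keep
              [0.025, 0.95, 0.025],                   # segments spatially coherent
              [0.00, 0.05, 0.95]])

lik = norm.pdf(ratios[:, None], loc=means, scale=sd)  # emission likelihoods, T x 3
T = len(ratios)

# Forward-backward recursions with per-step normalization.
alpha = np.zeros((T, 3))
beta = np.ones((T, 3))
alpha[0] = pi * lik[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * lik[t]
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):
    beta[t] = A @ (lik[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
print("P(gain) per probe:", np.round(posterior[:, 2], 2))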
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iteratively weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status, and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
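A minimal sketch of the collapsibility property exploited above for Poisson models: with categorical covariates, summing counts within each covariate pattern and adding a log-cell-size offset reproduces the record-level fit from far fewer rows. The simulated data and the use of statsmodels are illustrative, not the paper's implementation:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Record-level data: one row per person with categorical age group and SES.
n = 50_000
df = pd.DataFrame({"age": rng.integers(0, 4, n).astype(str),
                   "ses": rng.integers(0, 3, n).astype(str)})
X = sm.add_constant(pd.get_dummies(df, drop_first=True).astype(float))
true_beta = np.linspace(-1.0, 0.5, X.shape[1])
df["y"] = rng.poisson(np.exp(X.values @ true_beta))

# Record-level Poisson GLM.
full = sm.GLM(df["y"], X, family=sm.families.Poisson()).fit()

# Collapsed fit: sum counts per covariate pattern, offset by log cell size.
g = df.groupby(["age", "ses"])["y"].agg(["sum", "size"]).reset_index()
Xg = sm.add_constant(pd.get_dummies(g[["age", "ses"]], drop_first=True).astype(float))
collapsed = sm.GLM(g["sum"], Xg, family=sm.families.Poisson(),
                   offset=np.log(g["size"].to_numpy())).fit()

print(np.round(full.params.to_numpy(), 3))
print(np.round(collapsed.params.to_numpy(), 3))  # same estimates from 12 rows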
Abstract:
Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as from 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
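A minimal sketch of a penalised spline smoother of the kind the formulation builds on, using a truncated-line basis with a ridge penalty on the knot coefficients and reporting the effective degrees of freedom of the smoother; the paper's model is a fully Bayesian, spatially varying latent-variable version, and the basis, knots, and penalty below are illustrative:

import numpy as np

rng = np.random.default_rng(2)

# Illustrative noisy observations of a smooth signal.
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Truncated-line basis: [1, x, (x - k1)_+, ..., (x - kK)_+].
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots])

# Penalize only the knot coefficients (ridge penalty), leaving 1 and x unpenalized.
lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * len(knots))

# Penalized least squares: (B'B + lam * D) beta = B'y.
beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
fitted = B @ beta

# Effective degrees of freedom of the smoother: trace of the hat matrix.
edf = np.trace(B @ np.linalg.solve(B.T @ B + lam * D, B.T))
print(f"effective degrees of freedom: {edf:.1f}")

Varying lam trades smoothness against fit, which is the quantity the Bayesian framework above controls through the degrees of freedom of the smoother.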
Abstract:
The last two decades have seen intense scientific and regulatory interest in the health effects of particulate matter (PM). Influential epidemiological studies that characterize chronic exposure of individuals rely on monitoring data that are sparse in space and time, so they often assign the same exposure to participants in large geographic areas and across time. We estimate monthly PM during 1988-2002 in a large spatial domain for use in studying health effects in the Nurses' Health Study. We develop a conceptually simple spatio-temporal model that uses a rich set of covariates. The model is used to estimate concentrations of PM10 for the full time period and PM2.5 for a subset of the period. For the earlier part of the period, 1988-1998, few PM2.5 monitors were operating, so we develop a simple extension to the model that represents PM2.5 conditionally on PM10 model predictions. In the epidemiological analysis, model predictions of PM10 are more strongly associated with health effects than estimates from simpler exposure approaches. Our modeling approach supports the application by estimating both fine-scale and large-scale spatial heterogeneity and by capturing space-time interaction through the use of monthly-varying spatial surfaces. At the same time, the model is computationally feasible, implementable with standard software, and readily understandable to the scientific audience. Despite simplifying assumptions, the model has good predictive performance and uncertainty characterization.
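A minimal sketch of the conditional step described above: where PM2.5 monitoring is sparse, regress observed PM2.5 on the PM10 model predictions over the overlapping records, then apply that relation wherever only PM10 predictions exist. The simulated arrays and the single-predictor form are illustrative, not the paper's specification:

import numpy as np

rng = np.random.default_rng(3)

# Illustrative monthly PM10 model predictions and a sparse subset with PM2.5 monitors.
pm10_pred = rng.uniform(15, 60, 500)                 # ug/m^3, all location-months
has_pm25 = rng.random(500) < 0.2                     # sparse PM2.5 monitoring
pm25_obs = 0.55 * pm10_pred[has_pm25] + rng.normal(0, 2.0, has_pm25.sum())

# Stage 1: fit PM2.5 given the PM10 prediction on the overlapping records.
X = np.column_stack([np.ones(has_pm25.sum()), pm10_pred[has_pm25]])
coef, *_ = np.linalg.lstsq(X, pm25_obs, rcond=None)

# Stage 2: predict PM2.5 everywhere from the PM10 predictions.
pm25_pred = coef[0] + coef[1] * pm10_pred
print("fitted intercept/slope:", np.round(coef, 2))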