903 results for context analysis


Relevance: 30.00%

Abstract:

The Collingwood Member is a middle-to-late Ordovician self-sourced reservoir deposited across the northern Michigan Basin and parts of Ontario, Canada. Although it has been studied previously in Canada, relatively little data have been available from the Michigan subsurface. Recent commercial interest in the Collingwood has resulted in the drilling and production of several wells in the state of Michigan. An analysis of core samples, measured laboratory data, and petrophysical logs has yielded both a quantitative and a qualitative understanding of the formation in the Michigan Basin. The Collingwood is a low-permeability, low-porosity carbonate package with very high organic content. It is composed primarily of a uniformly fine-grained carbonate matrix with lesser amounts of kerogen, silica, and clays. The kerogen is finely dispersed in the clay and carbonate mineral phases. Geochemical and production data show that both oil and gas phases are present, depending on regional thermal maturity. The deposit is richest in the north-central part of the basin, where it is thickest and has the highest organic content. Because the Collingwood is a fairly thin deposit, vertical fractures may easily extend into the surrounding formations. Completion and treatment techniques should be designed around these parameters to enhance production.

Relevance: 30.00%

Abstract:

Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause large losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that their influence on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes large economic losses, and threatens life safety, yet little work has investigated the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center, and it is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. The first assesses the loss of the building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The second applies the framework to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework is shown to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
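
To make the Filtered Poisson Process idea concrete, the sketch below simulates a ground snow load as Poisson-arriving snowfall events filtered by a linear melt between events; the arrival rate, snowfall magnitude distribution, and melt rate are illustrative assumptions, not the calibrated values from the study.

    # Minimal sketch of a filtered Poisson process (FPP) for ground snow load.
    # Arrival rate, snowfall magnitudes, and melt rate are placeholder
    # assumptions, not the values calibrated in the study.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_snow_load(days=150, rate=0.2, mean_fall=5.0, melt=0.8):
        """Snow events arrive as a Poisson process (rate per day); each event adds
        an exponentially distributed load; the accumulated load decays linearly
        (melting) between events."""
        load = np.zeros(days)
        for t in range(1, days):
            n_events = rng.poisson(rate)                  # storms on day t
            fresh = rng.exponential(mean_fall, n_events).sum()
            load[t] = max(load[t - 1] - melt, 0.0) + fresh
        return load

    season = simulate_snow_load()
    print(f"peak load {season.max():.1f}, days with snow {np.count_nonzero(season)}")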

Relevance: 30.00%

Abstract:

The electric utility business is an inherently dangerous field, with employees exposed to many potential hazards daily. One such hazard is an arc flash: a rapid release of energy, referred to as incident energy, caused by an electric arc. Because of the random nature and occurrence of an arc flash, one can only prepare for such a violent event and minimize the harm to oneself and other employees and the damage to equipment. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that companies whose employees work on or near energized equipment perform an arc-flash assessment to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) existing short-circuit and relay coordination software package, ASPEN OneLiner™, one of the first packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. The package is also benchmarked against the equations provided in IEEE Std. 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards; analysis methods, both software-based and empirically derived equations; issues of concern with the calculation methods; and the work conducted at MP. This work also produced two offline software tools for conducting and verifying an arc-flash hazard analysis.
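
For reference, a sketch of the IEEE Std. 1584-2002 empirical incident-energy calculation that the software was benchmarked against is given below; the K constants, distance exponent, and unit handling are quoted from memory for an open-air, grounded, above-1 kV case and should be verified against the tables in the standard before any real use.

    # Illustrative sketch of the IEEE 1584-2002 empirical incident-energy
    # calculation. Constants (k1, k2, distance exponent x, conversion factors)
    # are from memory for an open-air, grounded, >1 kV case and must be
    # checked against the standard.
    import math

    def incident_energy_1584(i_bf_ka, gap_mm, t_s, d_mm,
                             k1=-0.792, k2=-0.113, x=2.0, cf=1.0):
        """Incident energy (J/cm^2) for bolted fault current i_bf_ka (kA),
        conductor gap (mm), arc duration t_s (s), and working distance d_mm (mm)."""
        # Arcing current for systems above 1 kV (empirical fit).
        ia = 10 ** (0.00402 + 0.983 * math.log10(i_bf_ka))
        # Normalized incident energy (0.2 s arc, 610 mm working distance).
        en = 10 ** (k1 + k2 + 1.081 * math.log10(ia) + 0.0011 * gap_mm)
        # Scale to the actual duration and working distance.
        return 4.184 * cf * en * (t_s / 0.2) * (610 ** x / d_mm ** x)

    print(f"{incident_energy_1584(20.0, gap_mm=153, t_s=0.3, d_mm=914):.2f} J/cm^2")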

Relevance: 30.00%

Abstract:

As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion of the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through the use of embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions; it is calculated here from material quantities and prices using national economic input/output-based models from China, the United States, and Mali, and serves as a measure of the aggregate environmental impact of an intervention. Human energy quantifies the caloric expenditure associated with the installation and operation of an intervention and is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs); it serves as a measure of the aggregate social impact of an intervention. A total of four household treatment interventions – biosand filtration, chlorination, ceramic filtration and boiling – and four source-level interventions – an improved well, a rope pump, a hand pump and a solar pump – are evaluated in the context of Mali, West Africa. Source-level interventions slightly outperform household-level interventions in having less total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant for all eight interventions, contributing over half of the total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women: when human energy is disaggregated by gender, women are seen to perform over 99% of the work associated with seven of the eight interventions. This has profound implications for gender equality in the context of water quality interventions and may justify investment in interventions that reduce human energy burdens.
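
A minimal sketch of the human-energy calculation follows, assuming the usual factorial form in which the caloric cost of a task is the product of its physical activity ratio, the basal metabolic rate, and the time spent; the BMR, PAR values, and task durations shown are illustrative, not the survey data used in the study.

    # Sketch of human energy as PAR x BMR x duration. All numbers below are
    # illustrative assumptions, not the study's survey data.
    KCAL_TO_MJ = 4.184e-3

    def human_energy_mj(bmr_kcal_per_day, tasks):
        """tasks: list of (physical activity ratio, hours spent per year)."""
        per_hour = bmr_kcal_per_day / 24.0
        kcal = sum(par * per_hour * hours for par, hours in tasks)
        return kcal * KCAL_TO_MJ

    # Example: a year of operating a rope pump (pumping plus carrying water).
    tasks = [(4.2, 365 * 0.5),   # pumping, roughly 30 min/day
             (3.2, 365 * 0.3)]   # carrying water, roughly 20 min/day
    print(f"{human_energy_mj(1400, tasks):.1f} MJ/year")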

Relevance: 30.00%

Abstract:

An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes, and pressurizes the material by the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels, but because of the complexity of die geometry and of polymer material properties, designing complex dies by analytical methods is difficult; for complex dies, iterative methods must be used, and an automated iterative method is desired. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part, which is then used in commercial meshing software; skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for automated optimization of die geometries. For the trust region method, a discrete derivative and a BFGS Hessian approximation were used; to handle the noise in the objective function, the method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, uniformity of velocity at the exit of the extrusion die can be improved by increasing resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs that exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
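
The sketch below illustrates one plausible form of such a penalized objective – outlet-velocity non-uniformity plus an exponential pressure penalty – with both the one-sided and two-sided variants; the function names, penalty scale, and the coefficient-of-variation measure of uniformity are assumptions for illustration, not the exact formulation used in this work.

    # Hedged sketch of an objective with an exponential pressure penalty.
    # Names, the penalty scale, and the uniformity measure are illustrative.
    import numpy as np

    def objective(outlet_velocity, pressure, p_limit, scale=5.0, two_sided=False):
        """outlet_velocity: velocities sampled across the die exit;
        pressure: predicted die pressure drop; p_limit: extruder capability."""
        v = np.asarray(outlet_velocity, dtype=float)
        nonuniformity = np.std(v) / np.mean(v)               # coefficient of variation
        excess = (pressure - p_limit) / p_limit
        if two_sided:
            penalty = np.exp(scale * abs(excess)) - 1.0       # penalize both sides
        else:
            penalty = np.exp(scale * max(excess, 0.0)) - 1.0  # only above the limit
        return nonuniformity + penalty

    print(objective([1.0, 1.1, 0.9], pressure=32.0, p_limit=30.0))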

Relevance: 30.00%

Abstract:

Demand for bio-fuels is expected to increase due to rising prices of fossil fuels and concerns over greenhouse gas emissions and energy security. The overall cost of biomass energy generation is primarily related to biomass harvesting activity, transportation, and storage. With a commercial-scale cellulosic ethanol processing facility about to be built in Kinross Township of Chippewa County, Michigan, two models – a simulation model and an optimization model – have been developed to provide decision support for the facility. Both models track cost, emissions, and energy consumption. While the optimization model provides guidance for a long-term strategic plan, the simulation model presents detailed output for specified operational scenarios over an annual period. Most importantly, the simulation model considers the uncertainty of spring break-up timing, i.e., seasonal road restrictions. Spring break-up timing is important because it affects the feasibility of harvesting activity and the duration of transportation restrictions, which significantly changes the availability of feedstock for the processing facility. This thesis focuses on the statistical model of spring break-up used in the simulation model. Spring break-up timing depends on various factors, including temperature, road conditions, and soil type, as well as individual decision-making processes at the county level. The spring break-up model, based on historical spring break-up data from 27 counties over the period 2002-2010, starts by specifying the probability distribution of a particular county's spring break-up start day and end day, and then relates the spring break-up timing of the other counties in the harvesting zone to that first county. To estimate the dependence relationship between counties, regression analyses, including standard linear regression and reduced major axis regression, are conducted. Using realizations (scenarios) of spring break-up generated by the statistical break-up model, the simulation model is able to probabilistically evaluate different harvesting and transportation plans to help the bio-fuel facility select the most effective strategy. For an early spring break-up, which usually indicates a longer-than-average break-up period, more log storage is required, total cost increases, and the probability of plant closure increases. The risk of plant closure may be partially offset through increased use of rail transportation, which is not subject to spring break-up restrictions; however, rail availability and rail yard storage may then become limiting factors in the supply chain. Rail use affects total cost, energy consumption, system-wide CO2 emissions, and the reliability of providing feedstock to the bio-fuel processing facility.
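
As an illustration of the two regression forms, the sketch below computes ordinary least squares and reduced major axis fits relating one county's break-up start day to the reference county's; the day-of-year values are made up for the example, not the 2002-2010 county records.

    # OLS versus reduced major axis (RMA) regression for relating counties'
    # spring break-up start days. The day-of-year data below are illustrative.
    import numpy as np

    def ols(x, y):
        slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        return slope, np.mean(y) - slope * np.mean(x)

    def rma(x, y):
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * np.std(y) / np.std(x)       # RMA slope: sign(r) * sy/sx
        return slope, np.mean(y) - slope * np.mean(x)

    ref_county = np.array([68, 74, 80, 71, 77, 83, 65, 79, 70])   # day of year
    other_county = np.array([72, 79, 86, 75, 80, 90, 70, 84, 74])
    print("OLS:", ols(ref_county, other_county))
    print("RMA:", rma(ref_county, other_county))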

Relevance: 30.00%

Abstract:

Fuel cells are a promising alternative energy technology, but one of their biggest problems is water management. A better understanding of wettability characteristics in fuel cells is needed to alleviate this problem. Contact angle data on the gas diffusion layers (GDLs) of fuel cells can be used to characterize GDL wettability. A contact angle measurement program has been developed to measure the contact angle of sessile drops from drop images. Digitization of the drop images introduces pixel errors into the contact angle measurement process, and the resulting uncertainty in the measured contact angle has been analyzed. An experimental apparatus has also been developed for contact angle measurements at different temperatures, with the capability to measure advancing and receding contact angles on the gas diffusion layers of fuel cells.
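
A minimal sketch of a sessile-drop contact-angle estimate and the pixel-error propagation is shown below, assuming a spherical-cap profile for which tan(theta/2) equals the drop height divided by the contact radius; the pixel values and one-pixel uncertainty are illustrative assumptions, not the algorithm of the measurement program developed here.

    # Sessile-drop contact angle under a spherical-cap assumption, with a
    # simple bound on the error from one-pixel digitization uncertainty.
    import math

    def contact_angle_deg(height_px, base_radius_px):
        return math.degrees(2.0 * math.atan2(height_px, base_radius_px))

    def pixel_uncertainty_deg(height_px, base_radius_px, dpx=1.0):
        """Largest angle change when height and radius are each off by dpx pixels."""
        nominal = contact_angle_deg(height_px, base_radius_px)
        corners = [contact_angle_deg(height_px + sh * dpx, base_radius_px + sr * dpx)
                   for sh in (-1, 1) for sr in (-1, 1)]
        return max(abs(c - nominal) for c in corners)

    h, r = 180.0, 220.0   # drop apex height and contact radius, in pixels
    print(f"theta = {contact_angle_deg(h, r):.1f} deg "
          f"+/- {pixel_uncertainty_deg(h, r):.2f} deg")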

Relevance: 30.00%

Abstract:

A non-hierarchical K-means algorithm is used to cluster 47 years (1960–2006) of 10-day HYSPLIT backward trajectories to the Pico Mountain (PM) observatory on a seasonal basis. The resulting cluster centers identify the major transport pathways and collectively comprise a long-term climatology of transport to the observatory. This transport climatology improves our ability to interpret the observations made there and our understanding of the pollution source regions for the station and the central North Atlantic region. I determine which pathways dominate transport to the observatory and examine the impacts of these transport patterns on the O3, NOy, NOx, and CO measurements made there during 2001–2006. Transport from the U.S., Canada, and the Atlantic most frequently reaches the station, but Europe, east Africa, and the Pacific can also contribute significantly depending on the season. Transport from Canada was correlated with the North Atlantic Oscillation (NAO) in spring and winter, whereas transport from the Pacific was uncorrelated with the NAO. The highest CO and O3 are observed during spring; summer is also characterized by high CO and O3 and by the highest NOy and NOx of any season. Previous studies at the station attributed the summertime high CO and O3 to transport of boreal wildfire emissions (for 2002–2004), and boreal fires continued to affect the station during 2005 and 2006. The particle dispersion model FLEXPART was used to calculate anthropogenic and biomass-burning CO tracer values at the station in an attempt to identify the regions responsible for the high CO and O3 observations during spring and the biomass-burning impacts in summer.
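
The clustering step can be sketched as follows: each backward trajectory is flattened into a vector of latitude/longitude points and grouped with non-hierarchical K-means, whose cluster centers are the mean transport pathways; the synthetic trajectories and the choice of five clusters below are illustrative stand-ins for the HYSPLIT output and the seasonal cluster counts actually used.

    # K-means clustering of backward trajectories, each flattened to a vector
    # of (lat, lon) points. Synthetic trajectories stand in for HYSPLIT output.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    n_traj, n_steps = 200, 40                      # e.g. 10 days at 6-h resolution
    start = np.array([38.5, -28.4])                # Pico Mountain observatory
    steps = rng.normal(0.0, 1.5, size=(n_traj, n_steps, 2)).cumsum(axis=1)
    trajectories = start + steps                   # shape (n_traj, n_steps, 2)

    features = trajectories.reshape(n_traj, -1)    # flatten the lat/lon sequence
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
    centers = km.cluster_centers_.reshape(5, n_steps, 2)   # mean transport pathways
    print(np.bincount(km.labels_))                 # trajectories per pathway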

Relevance: 30.00%

Abstract:

Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
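
For clarity, a small software reference implementation of the reuse-distance definition used above is sketched below – the kind of exact profile that a hardware-sampled approximation would be compared against; the address trace is an illustrative stand-in.

    # Exact reuse-distance profile for a short address trace: distance is the
    # number of distinct addresses touched between two accesses to the same
    # address; first-time accesses are recorded as cold misses.
    from collections import Counter

    def reuse_distance_profile(trace):
        last_index = {}
        profile = Counter()
        for i, addr in enumerate(trace):
            if addr in last_index:
                distinct = len(set(trace[last_index[addr] + 1:i]))
                profile[distinct] += 1
            else:
                profile["cold"] += 1
            last_index[addr] = i
        return profile

    trace = ["a", "b", "c", "a", "b", "b", "d", "a"]
    print(reuse_distance_profile(trace))   # e.g. distance 2 for the first reuse of 'a'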

Relevance: 30.00%

Abstract:

The Pacaya volcanic complex is part of the Central American volcanic arc, which is associated with the subduction of the Cocos tectonic plate beneath the Caribbean plate. Located 30 km south of Guatemala City, Pacaya is situated on the southern rim of the Amatitlan Caldera. It is the largest post-caldera volcano and has been one of Central America's most active volcanoes over the last 500 years. Between 400 and 2000 years B.P., the Pacaya volcano experienced a major collapse, which resulted in the formation of a horseshoe-shaped scarp that is still visible. In recent years, several smaller collapses (in 1961 and 2010) have been associated with the activity of the volcano, affecting its northwestern flank; these were likely induced by local and regional stress changes. The similar orientation of dry and volcanic fissures and the distribution of new vents would likely explain the reactivation of the pre-existing stress configuration responsible for the old collapse. This paper presents the first stability analysis of the Pacaya volcanic flank. The inputs for the geological and geotechnical models were defined based on stratigraphical, lithological, and structural data and on material properties obtained from a field survey and laboratory tests. According to their mechanical characteristics, three lithotechnical units were defined: Lava, Lava-Breccia and Breccia-Lava. The Hoek-Brown failure criterion was applied to each lithotechnical unit, and the rock mass friction angle, apparent cohesion, and strength and deformation characteristics were computed over a specified stress range. The stability of the volcano was then evaluated by two-dimensional analyses performed with the Limit Equilibrium Method (LEM, ROCSCIENCE) and the Finite Element Method (FEM, PHASE2 7.0). The stability analysis focused mainly on the modern Pacaya volcano built inside the collapse amphitheatre of “Old Pacaya”. The volcanic instability was assessed based on the variability of the safety factor using deterministic, sensitivity, and probabilistic analyses, considering gravitational instability and the effects of external forces such as magma pressure and seismicity as potential triggering mechanisms of lateral collapse. The preliminary results provide two insights: first, the least stable sector is the south-western flank of the volcano; second, the lowest safety factor value suggests that the edifice is stable under gravity alone and that an external triggering mechanism would be a likely destabilizing factor.
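
The criterion applied to each unit can be sketched as the generalized Hoek-Brown relation sigma1 = sigma3 + sigma_ci (mb sigma3/sigma_ci + s)^a, with mb, s, and a derived from GSI, mi, and a disturbance factor; the property values below are illustrative, not the laboratory-derived values for the Lava, Lava-Breccia, and Breccia-Lava units.

    # Generalized Hoek-Brown failure criterion. GSI, mi, sigma_ci, and the
    # disturbance factor below are illustrative, not the study's lab values.
    import math

    def hoek_brown_params(gsi, mi, disturbance=0.0):
        mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * disturbance))
        s = math.exp((gsi - 100.0) / (9.0 - 3.0 * disturbance))
        a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
        return mb, s, a

    def sigma1_at_failure(sigma3_mpa, sigma_ci_mpa, gsi, mi, disturbance=0.0):
        mb, s, a = hoek_brown_params(gsi, mi, disturbance)
        return sigma3_mpa + sigma_ci_mpa * (mb * sigma3_mpa / sigma_ci_mpa + s) ** a

    # Example: a breccia-like unit at 0.5 MPa confinement.
    print(f"{sigma1_at_failure(0.5, sigma_ci_mpa=40.0, gsi=45, mi=19):.2f} MPa")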

Relevance: 30.00%

Abstract:

As the demand for miniature products and components continues to increase, the need for manufacturing processes to produce them has also increased. To meet this need, successful macroscale processes are being scaled down and applied at the microscale. Unfortunately, many challenges have been encountered when directly scaling down macro processes. Initially, frictional effects were believed to be the largest challenge; however, recent studies have found that the greatest challenge is size effects. Size effect is a broad term that largely refers to the thickness of the material being formed and how this thickness directly affects the product dimensions and manufacturability. At the microscale, the thickness becomes critical because of the reduced number of grains. When surface contact between the forming tools and the material blank occurs at the macroscale, there is enough material (hundreds of grains across the blank thickness) to compensate for material flow and the effect of grain orientation. At the microscale, there may be fewer than 10 grains across the blank thickness. With so few grains across the thickness, the influence of grain size, shape, and orientation is significant, and any material defects (either naturally occurring or introduced during material preparation) play a significant role in altering the forming potential. To date, various micro metal forming and micro materials testing equipment setups have been constructed in the Michigan Tech lab. Initially, the research focus was to create a micro deep drawing setup to potentially build micro sensor encapsulation housings. The focus then shifted to micro materials testing setups, including the construction and testing of a micro mechanical bulge test, a micro sheet tension test (testing micro tensile bars), a micro strain analysis (using optical lithography and chemical etching), and a micro sheet hydroforming bulge test. Recently, the focus has shifted to a micro tube hydroforming process, targeting fuel cell, medical, and sensor encapsulation applications. While the tube hydroforming process is widely understood at the macroscale, the microscale process offers significant challenges in terms of size effects. Current work applies direct current to enhance micro tube hydroforming formability; adding direct current to various metal forming operations has shown remarkable results, and the focus of current research is to determine the validity of this process.

Relevance: 30.00%

Abstract:

When a single brushless DC motor is fed by an inverter with a sensorless algorithm embedded in the switching controller, the system exhibits a linear and stable output in terms of speed and torque. However, with two motors modulated by the same inverter, the system is unstable and rendered useless for steady operation unless provided with some resistive damping on the supply lines. This project discusses and analyzes the stability of such a system through simulations and hardware demonstrations and also presents a method to derive the values of this damping resistance.

Relevance: 30.00%

Abstract:

The developmental processes and functions of an organism are controlled by its genes and the proteins derived from those genes. The identification of key genes and the reconstruction of gene networks can provide a model to help us understand the regulatory mechanisms for the initiation and progression of biological processes or functional abnormalities (e.g., diseases) in living organisms. In this dissertation, I have developed statistical methods to identify the genes and transcription factors (TFs) involved in biological processes, constructed their regulatory networks, and evaluated existing association methods to find robust methods for coexpression analyses. Two kinds of data sets were used for this work: genotype data and gene expression microarray data. On the basis of these data sets, this dissertation has two major parts, together forming six chapters. The first part deals with developing association methods for rare variants using genotype data (chapters 4 and 5). The second part deals with developing and evaluating statistical methods to identify genes and TFs involved in biological processes and with constructing their regulatory networks using gene expression data (chapters 2, 3, and 6). For the first part, I have developed two methods to find the groupwise association of rare variants with given diseases or traits. The first method is based on kernel machine learning and can be applied to both quantitative and qualitative traits; simulation results showed that the proposed method has improved power over the existing weighted sum (WS) method in most settings. The second method uses multiple phenotypes to select a few top significant genes and then finds the association of each gene with each phenotype while controlling for population stratification by adjusting the data for ancestry using principal components. This method was applied to GAW 17 data and was able to find several disease risk genes. For the second part, I have worked on three problems. The first problem involved the evaluation of eight gene association methods; a comprehensive comparison of these methods, with further analysis, clearly demonstrates their distinct and common performance characteristics. For the second problem, an algorithm named the bottom-up graphical Gaussian model was developed to identify the TFs that regulate pathway genes and to reconstruct their hierarchical regulatory networks. This algorithm has produced very significant results, and it is the first report to produce such hierarchical networks for these pathways. The third problem dealt with developing another algorithm, called the top-down graphical Gaussian model, that identifies the network governed by a specific TF; the network produced by this algorithm is shown to be of very high accuracy.
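
The graphical Gaussian model step underlying both the bottom-up and top-down algorithms can be sketched as estimating partial correlations from the inverse covariance (precision) matrix and keeping edges that exceed a threshold; the simulated expression matrix and the 0.3 cutoff below are illustrative assumptions, not the dissertation's actual estimation procedure.

    # Partial correlations from the precision matrix, the core quantity of a
    # graphical Gaussian model. Data and the edge threshold are illustrative.
    import numpy as np

    def partial_correlations(expr):
        """expr: samples x genes matrix. Returns rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
        precision = np.linalg.pinv(np.cov(expr, rowvar=False))
        d = np.sqrt(np.diag(precision))
        pcor = -precision / np.outer(d, d)
        np.fill_diagonal(pcor, 1.0)
        return pcor

    rng = np.random.default_rng(2)
    expr = rng.normal(size=(60, 8))                        # 60 samples, 8 genes (synthetic)
    expr[:, 1] = expr[:, 0] + 0.3 * rng.normal(size=60)    # gene 1 driven by gene 0
    pcor = partial_correlations(expr)
    edges = np.argwhere(np.triu(np.abs(pcor) > 0.3, k=1))  # candidate network edges
    print(edges)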

Relevance: 30.00%

Abstract:

The number of record-breaking events expected to occur in a strictly stationary time-series depends only on the number of values in the time-series, regardless of distribution. This holds whether the events are record-breaking highs or lows and whether we count from past to present or present to past. However, these symmetries are broken in distinct ways by trends in the mean and variance. We define indices that capture this information and use them to detect weak trends from multiple time-series. Here, we use these methods to answer the following questions: (1) Is there a variability trend among globally distributed surface temperature time-series? We find a significant decreasing variability over the past century for the Global Historical Climatology Network (GHCN). This corresponds to about a 10% change in the standard deviation of inter-annual monthly mean temperature distributions. (2) How are record-breaking high and low surface temperatures in the United States affected by time period? We investigate the United States Historical Climatology Network (USHCN) and find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. We also compare the ratios for USHCN and GHCN (minus USHCN stations). We find the ratios grow monotonically in the GHCN data set, but not in the USHCN data set. (3) Do we detect either mean or variance trends in annual precipitation within the United States? We find that the total annual and monthly precipitation in the United States (USHCN) has increased over the past century. Evidence for a trend in variance is inconclusive.
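
To illustrate the record-counting statistics, the sketch below counts record highs and lows in a series and compares their ratio with the trend-free expectation, which is the harmonic number sum of 1/k; the synthetic series with a weak mean trend is purely illustrative.

    # Record highs and lows in a time series. For a stationary, trend-free
    # series the expected number of records is the harmonic number sum(1/k),
    # the same for highs and lows; a trend breaks that symmetry.
    import numpy as np

    def count_records(series, kind="high"):
        best = series[0]
        count = 1
        for x in series[1:]:
            if (kind == "high" and x > best) or (kind == "low" and x < best):
                best = x
                count += 1
        return count

    rng = np.random.default_rng(3)
    n = 100
    expected_iid = sum(1.0 / k for k in range(1, n + 1))   # about 5.19 for n = 100
    series = rng.normal(size=n) + 0.01 * np.arange(n)      # weak trend in the mean
    ratio = count_records(series, "high") / count_records(series, "low")
    print(f"expected records (no trend): {expected_iid:.2f}, high/low ratio: {ratio:.2f}")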