Abstract:
In 1970, Clark Benson published a theorem in the Journal of Algebra stating a congruence for generalized quadrangles. Since then, this theorem has been extended to other specific geometries. In Chapter 2 of this thesis, the theorem for partial geometries is extended to develop new divisibility conditions for the existence of a partial geometry. In Chapter 3, the theorem is applied to higher-dimensional arcs, resulting in parameter restrictions on geometries derived from these structures. In Chapter 4, we extend previous work on partial geometries with α = 2 to uncover potential partial geometries with higher values of α. In Chapter 5, the theorem is extended to strongly regular graphs; in addition, we obtain expressions for the multiplicities of the eigenvalues of matrices related to the adjacency matrices of these graphs. Finally, a four-lesson, high-school-level enrichment unit is included to give students at this level an introduction to partial geometries and strongly regular graphs, and an opportunity to develop proof skills in this new context.
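For orientation only (this is the standard statement from the literature, not a quotation from the thesis), Benson's 1970 congruence for a generalized quadrangle of order (s, t) admitting an automorphism θ is usually written as

\[
(1+t)\,f + g \equiv (1+s)(1+t) \pmod{s+t},
\]

where f is the number of points fixed by θ and g is the number of points x with x ≠ x^θ and x collinear with x^θ. The divisibility conditions developed in Chapter 2 generalize congruences of this form to partial geometries.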
Abstract:
The objective of this thesis is to outline a Performance-Based Engineering (PBE) framework to address the multiple hazards of Earthquake (EQ) and subsequent Fire Following Earthquake (FFE). Currently, fire codes in the United States are largely empirical and prescriptive in nature. The reliance on prescriptive requirements makes quantifying sustained damage due to fire difficult. Additionally, the empirical standards have resulted from individual-member or individual-assembly furnace testing, which has been shown to differ greatly from full structural system behavior. The very nature of fire behavior (ignition, growth, suppression, and spread) is fundamentally difficult to quantify due to the inherent randomness present in each stage of fire development. The study of interactions between earthquake damage and fire behavior is also in its infancy, with essentially no available empirical testing results. This thesis presents a literature review, a discussion and critique of the state of the art, and a summary of software currently being used to estimate loss due to EQ and FFE. A generalized PBE framework for EQ and subsequent FFE is presented, along with a combined hazard probability to performance objective matrix and a table of the variables necessary to fully implement the proposed framework. Future research requirements and a summary are also provided, with discussion of the difficulties inherent in adequately describing the multiple hazards of EQ and FFE.
Abstract:
The acoustic emission (AE) technique, a non-intrusive and nondestructive evaluation technique, acquires and analyzes the signals emitted by deformation or fracture of materials and structures under service loading. The AE technique has been successfully applied to damage detection in various materials such as metals, alloys, concrete, polymers, and other composite materials. In this study, the AE technique was used for detecting crack behavior within concrete specimens under mechanical and environmental frost loadings. The AE system used in this study, purchased from Mistras Group, consists of a low-frequency AE sensor, a computer-based data acquisition device, and a preamplifier linking the AE sensor and the data acquisition device. The AE technique was applied to detect damage in the following laboratory tests: the pencil lead test, the mechanical three-point single-edge notched beam bending (SEB) test, and the freeze-thaw damage test. First, the pencil lead test was conducted to verify the attenuation of AE signals through concrete materials, and the value of attenuation was quantified. The obtained signals also indicated that the AE system was properly set up to detect damage in concrete. Second, the SEB test on a lab-prepared concrete beam was conducted by employing a Mechanical Testing System (MTS) and the AE system. The cumulative AE events and the measured loading curves, both using the crack-tip opening displacement (CTOD) as the horizontal coordinate, were plotted. It was found that the detected AE events were qualitatively correlated with the global force-displacement behavior of the specimen. The Weibull distribution was proposed to quantitatively describe the rupture probability density function. A linear regression analysis was conducted to calibrate the Weibull distribution parameters with the detected AE signals and to predict the rupture probability as a function of CTOD for the specimen. Finally, controlled concrete freeze-thaw cyclic tests were designed, and the AE technique was planned to investigate the internal frost damage process of concrete specimens.
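As an illustration of the calibration step described above (the CTOD values and event fractions below are invented, not the measured AE data), a two-parameter Weibull distribution can be fit by linear regression on its linearized CDF:

import numpy as np

# Hypothetical data: crack-tip opening displacement (CTOD, mm) and the
# cumulative fraction of detected AE events at each CTOD value.
ctod = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12])
cum_fraction = np.array([0.05, 0.18, 0.42, 0.68, 0.86, 0.95])

# Linearize the two-parameter Weibull CDF  F(x) = 1 - exp(-(x/lam)**k):
#   ln(-ln(1 - F)) = k*ln(x) - k*ln(lam)
x = np.log(ctod)
y = np.log(-np.log(1.0 - cum_fraction))

# Ordinary least-squares fit of the linearized form gives shape k and scale lam.
k, intercept = np.polyfit(x, y, 1)
lam = np.exp(-intercept / k)

def rupture_probability(ctod_value):
    # Rupture probability as a function of CTOD from the calibrated distribution.
    return 1.0 - np.exp(-(ctod_value / lam) ** k)

print(f"shape k = {k:.2f}, scale lambda = {lam:.3f} mm")
print(f"P(rupture) at CTOD = 0.09 mm: {rupture_probability(0.09):.2f}")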
DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS
Abstract:
Principal Component Analysis (PCA) is a popular method for dimension reduction that can be used in many fields, including data compression, image processing, and exploratory data analysis. However, the traditional PCA method has several drawbacks: it is not efficient for dealing with high-dimensional data and cannot compute sufficiently accurate principal components when handling a relatively large portion of missing data. In this report, we propose to use the EM-PCA method for dimension reduction of power system measurement data with missing entries, and we provide a comparative study of the traditional PCA and EM-PCA methods. Our extensive experimental results show that the EM-PCA method is more effective and more accurate for dimension reduction of power system measurement data than the traditional PCA method when dealing with a large portion of missing data.
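The general EM-style scheme behind EM-PCA can be sketched as follows; this is a minimal illustration assuming missing readings are marked as NaN, not the implementation used in the report:

import numpy as np

def em_pca(X, n_components, n_iter=200, tol=1e-8):
    """Iteratively impute missing entries (NaN) of X with a low-rank PCA
    reconstruction: the E-step fills missing values from the current model,
    the M-step refits the principal components on the completed matrix."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    X_filled = np.where(missing, np.nanmean(X, axis=0), X)  # start from column means
    components = None
    for _ in range(n_iter):
        mean = X_filled.mean(axis=0)
        Xc = X_filled - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # M-step
        components = Vt[:n_components]
        X_hat = (Xc @ components.T) @ components + mean       # E-step: low-rank reconstruction
        change = np.linalg.norm(X_hat[missing] - X_filled[missing])
        X_filled[missing] = X_hat[missing]
        if change < tol:
            break
    return X_filled, components

# Toy measurement matrix (rows: snapshots, columns: sensors) with 30% missing readings.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 20))   # rank-2 data
X[rng.random(X.shape) < 0.3] = np.nan
X_completed, pcs = em_pca(X, n_components=2)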
Abstract:
The Koyukuk Mining District was one of several northern, turn-of-the-century gold rush regions. Miners focused their efforts in this region on the Middle Fork of the Koyukuk River and on several of its tributaries. Mining in the Koyukuk began in the 1880s, and the first rush occurred in 1898. Continued mining throughout the early decades of the 1900s has resulted in a historic mining landscape consisting of structures, equipment, mining shafts, waste rock, trash scatters, and prospect pits. Modern work continues in the region alongside these historic resources. An archaeological survey was completed in 2012 as part of an Abandoned Mine Lands survey undertaken with the Bureau of Land Management, Michigan Technological University, and the University of Alaska Anchorage. This thesis examines the discrepancy between the size of mining operations and their respective successes in the region, while also providing a historical background on the region and reporting on the historical resources present.
Abstract:
This report summarizes the work done for the Vehicle Powertrain Modeling and Design Problem Proposal portion of the EcoCAR3 proposal, as specified in the Request for Proposal from Argonne National Laboratory. The results of the modeling exercises presented in the proposal showed the following. An average conventional vehicle powered by a combustion engine could not meet the energy consumption target when the engine was sized to meet the acceleration target, due to the relatively low thermal efficiency of the spark ignition engine. A battery electric vehicle could not meet the required range target of 320 km while keeping the vehicle weight below the gross vehicle weight rating of 2000 kg; this was due to the low energy density of the batteries, which necessitated a large and heavy battery pack to provide enough energy to meet the range target. A series hybrid electric vehicle has the potential to meet the acceleration and energy consumption parameters when the components are optimally sized. A parallel hybrid electric vehicle has lower energy conversion losses than a series hybrid electric vehicle, which results in greater overall efficiency, lower energy consumption, and lower emissions. For EcoCAR3, Michigan Tech proposes to develop a plug-in parallel hybrid electric vehicle (PPHEV) powered by a small diesel engine operating on B20 biodiesel fuel. This architecture was chosen over other options due to its compact design, lower cost, and its ability to provide performance levels and energy efficiency that meet or exceed the design targets. While this powertrain configuration requires a more complex control system and strategy than others, the student engineering team at Michigan Tech has significant recent experience with this architecture and is confident that it will perform well in the events planned for the EcoCAR3 competition.
Abstract:
Roads and highways present a unique challenge to wildlife, as they exert substantial impacts on the surrounding ecosystem through the interruption of a number of ecological processes. With new roads added to the national highway system every year, an understanding of these impacts is required for effective mitigation of potential environmental effects. A major contributor to these negative effects is the deposition of chemicals used in winter deicing activities into nearby surface waters. These chemicals often vary in composition and may affect freshwater species differently. The negative impacts of widespread deposition of sodium chloride (NaCl) have prompted a search for an 'environmentally friendly' alternative; however, little research has investigated the potential environmental effects of widespread use of these alternatives. Herein, I detail the results of laboratory tests and field surveys designed to determine the impacts of road salt (NaCl) and other chemical deicers on amphibian communities in Michigan's Upper Peninsula. Using larval amphibians, I demonstrate the lethal impacts of a suite of chemical deicers on these sensitive freshwater species. Larval wood frogs (Lithobates sylvatica) were tolerant of short-term (96-hour) exposure to urea (CH4N2O), sodium chloride (NaCl), and magnesium chloride (MgCl2); however, these larvae were very sensitive to acetate products (C8H12CaMgO8, CH3COOK) and calcium chloride (CaCl2). These differences in tolerance suggest that certain deicers may be more harmful to amphibians than others. Secondly, I expanded this analysis to include an experiment designed to determine the sublethal effects of chronic exposure to environmentally realistic concentrations of NaCl on two amphibian species, L. sylvatica and green frogs (L. clamitans). L. sylvatica tend to breed in small, ephemeral wetlands and metamorphose within a single season, whereas L. clamitans breed primarily in more permanent wetlands and often remain as tadpoles for one year or more. These species employ different life history strategies in this region, which may influence their response to chronic NaCl exposure. Both species showed potentially harmful effects on individual fitness: L. sylvatica larvae had a high incidence of edema, suggesting that the NaCl exposure was a significant physiologic stressor, and L. clamitans larvae showed reduced tail length during exposure, which may affect the adult fitness of these individuals. To determine the risk local amphibians face when using roadside pools, I conducted a survey of the spatial distribution of chloride in the three northernmost counties of Michigan. This area receives a relatively low amount of NaCl, which is confined to state and federal highways. The chloride concentrations in this region were much lower than those in urban systems; however, amphibians breeding in the local area may encounter harmful chloride levels arising from temporal variations in hydroperiods. Spatial variation in chloride levels suggests the road-effect zone for amphibians may extend as far as 1000 m from a salt-treated highway. Lastly, I performed an analysis of the use of specific conductance to predict chloride concentrations in natural surface water bodies. A number of studies have used this regression to predict chloride concentrations from measurements of specific conductance, a method often chosen in place of ion chromatography due to budget and time constraints. However, using a regression method to characterize this relationship does not result in accurate chloride ion concentration estimates.
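For context, the regression approach evaluated in that analysis amounts to an ordinary least-squares fit of chloride concentration against specific conductance; the numbers below are made up for illustration and are not the field data.

import numpy as np

# Hypothetical paired measurements: specific conductance (uS/cm) and
# chloride concentration (mg/L) from ion chromatography.
conductance = np.array([120., 340., 560., 910., 1500., 2200.])
chloride    = np.array([ 12.,  55., 110., 190.,  330.,  480.])

# Ordinary least-squares line Cl = a * SC + b, the kind of calibration many
# studies use in place of measuring chloride directly.
a, b = np.polyfit(conductance, chloride, 1)

def predict_chloride(sc):
    return a * sc + b

print(f"Cl ~ {a:.3f} * SC + {b:.1f}")
print(f"Predicted chloride at SC = 800 uS/cm: {predict_chloride(800.):.0f} mg/L")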
Abstract:
Ethylene has myriad roles as a plant hormone, ranging from senescence and defense against pathogen attack to fruit ripening and interactions with other hormones. It has been shown to increase cambial activity in poplar, but its effect on wood formation in the Arabidopsis hypocotyl has not previously been studied. The Auxin-Regulated Gene involved in Organ Size (ARGOS), which increases organ size by lengthening the time for cell division, was found to be upregulated by ethylene. We tested the effect of ethylene treatment at 10 and 100 µM ACC on three genotypes of Arabidopsis: Col0 (wild type), an ARGOS-deficient mutant (argos), and ein3-1, an ethylene-insensitive mutant. ARGOS expression analysis with qPCR indicated that ACC does induce ARGOS and ARGOS-LIKE (ARL) in the hypocotyl. As seen in poplar, ethylene also decreases stem elongation. Histochemical staining showed that ethylene changes the way secondary xylem lignifies, causing gaps in lignification around the outer edge of the secondary xylem. Our results also implied that ethylene treatment changes the proportion of secondary to total xylem, resulting in less secondary xylem, whereas in poplar ethylene treatment caused an increase.
Abstract:
Smallholders in eastern Paraguay plant small stands of Eucalyptus grandis W. Hill ex Maiden intended for sale on the local market. Smallholders have been encouraged to plant E. grandis by local forestry extension agents who offer both forestry education and incentive programs. Smallholders who practice recommended forestry techniques geared towards growing large-diameter trees of good form are financially rewarded by the local markets, which desire saw-log-quality trees. The question was posed: are smallholders engaging in recommended silvicultural practices and producing reasonable volume yields? It was hypothesized that smallholders, having received forestry education and having financial incentives from the local market, would engage in silvicultural practices resulting in trees of good form and volume yields that were reasonable for the local climate and soil characteristics. Volume yield results from this study support this hypothesis. Mean volume yield was estimated at 70 cubic meters per hectare at age four and 225 cubic meters per hectare at age eight. These volume yields compare favorably to those from other studies of E. grandis grown in similar climates, with similar stocking levels and site qualities.
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has increased exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g., offshore wind farms). In this regard, safe operation requires not only first-hand information regarding actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indeterminacy. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. For this aim, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. As a consequence, a modified data-driven SI metric based on the so-called Subspace Realization Theory is proposed, adapted here for stochastic, non-stationary, and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, also known as an Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
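For reference, the Modal Assurance Criterion used in such model-updating engines is conventionally defined, for an analytical mode shape φ_i and an experimental (identified) mode shape φ_j, as

\[
\mathrm{MAC}(\phi_i,\phi_j)=\frac{\lvert \phi_i^{H}\phi_j \rvert^{2}}{\left(\phi_i^{H}\phi_i\right)\left(\phi_j^{H}\phi_j\right)},
\]

with values near 1 indicating consistent mode shapes and values near 0 indicating uncorrelated ones; the ASA search then seeks model parameters that drive the diagonal MAC values toward 1.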
Abstract:
Volcán Pacaya is one of three currently active volcanoes in Guatemala. Volcanic activity originates from the local tectonic subduction of the Cocos plate beneath the Caribbean plate along the Pacific Guatemalan coast. Pacaya is characterized by generally strombolian-type activity with occasional larger vulcanian-type eruptions approximately every ten years. One particularly large eruption occurred on May 27, 2010. Using GPS data collected for approximately 8 years before this eruption and data from an additional three years of collection afterwards, surface movement covering the period of the eruption can be measured and used as a tool to help understand activity at the volcano. Initial positions were obtained from raw data using the Automatic Precise Positioning Service provided by the NASA Jet Propulsion Laboratory. Forward modeling of observed 3-D displacements for three time periods (before, covering, and after the May 2010 eruption) revealed that a plausible source of deformation is a vertical dike or planar surface trending NNW-SSE through the cone. For the three distinct time periods, the best-fitting models describe the deformation of the volcano as follows: 0.45 m right-lateral movement and 0.55 m tensile opening along the dike from October 2001 through January 2009 (pre-eruption); 0.55 m left-lateral slip along the dike from January 2009 through January 2011 (covering the eruption); and -0.025 m dip slip along the dike from January 2011 through March 2013 (post-eruption). In all best-fit models the dike is oriented with a 75° westward dip. These models have respective RMS misfit values of 5.49 cm, 12.38 cm, and 6.90 cm for each modeled period. During the time period that includes the eruption, the volcano most likely experienced a combination of slip and inflation below the edifice, which created a large scar at the surface down the northern flank of the volcano. All models suggest that the dipping dike may be experiencing a combination of inflation and oblique slip below the edifice, which augments the possibility of a westward collapse in the future.
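As a small illustration of how model fit is typically scored in this kind of study (the station values below are invented, not the Pacaya data), the RMS misfit is the root-mean-square difference between observed and modeled displacement components over all GPS sites:

import numpy as np

# Hypothetical observed and modeled 3-D displacements (east, north, up) in cm
# at a few GPS stations.
observed = np.array([[ 1.2, -0.8,  2.1],
                     [ 0.4,  0.9, -1.5],
                     [-0.7,  1.1,  0.3]])
modeled  = np.array([[ 1.0, -0.5,  1.8],
                     [ 0.6,  0.7, -1.2],
                     [-0.9,  1.4,  0.1]])

# Scalar misfit used to rank candidate dislocation (dike) models.
rms_misfit = np.sqrt(np.mean((observed - modeled) ** 2))
print(f"RMS misfit = {rms_misfit:.2f} cm")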
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, different organizations choose and use certain security schemes in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies, which focus on one or more of the security principles: confidentiality, integrity, and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms, and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE), and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. The access control decisions are not based on one entity, such as the account in traditional discretionary access control or the domain name in DTE; in the file system firewall, access decisions are made based on situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object, and the system, and the file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, and the fact that it is a familiar model to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux, and the file system firewall confirmed this: beginner users found the firewall model easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
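To illustrate the idea of situation-based rules (the rule names, attribute keys, and Python form below are hypothetical illustrations; the actual prototype is a kernel-level mechanism on SUSE Linux), an access decision can be expressed as predicates over subject, object, and system attributes such as time of day:

from datetime import datetime, time

def business_hours(subject, obj, system):
    # System attribute: current time of day.
    return time(8, 0) <= system["now"].time() <= time(18, 0)

def payroll_rule(subject, obj, system):
    # Situation: deny writes to payroll files outside business hours,
    # regardless of which account owns the file.
    return not (obj["path"].startswith("/srv/payroll")
                and subject["op"] == "write"
                and not business_hours(subject, obj, system))

RULES = [payroll_rule]

def allow(subject, obj, system):
    # Every access request is checked against all situation rules.
    return all(rule(subject, obj, system) for rule in RULES)

request = ({"user": "alice", "op": "write"},
           {"path": "/srv/payroll/q3.csv"},
           {"now": datetime(2014, 7, 1, 23, 30)})
print(allow(*request))   # False: the write is attempted at 23:30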
Abstract:
This study investigated the effect that the video game Portal 2 had on students' understanding of Newton's Laws and their attitudes towards learning science during a two-week afterschool program at a science museum. Using a pre/posttest and survey design, along with instructor observations, the results showed a statistically significant increase in understanding of Newton's Laws (p = .02 < .05) but did not show a significant change in attitude scores. The data and observations suggest that future research should pay attention to the non-educational aspects of video games, be careful about the amount of time students spend in the game, and encourage positive relationships with game developers.
Abstract:
FEAST is a recently developed eigenvalue algorithm which computes selected interior eigenvalues of real symmetric matrices. It uses projections based on contour integrals of the resolvent. A weakness is that the existing algorithm relies on accurate, reasoned estimates of the number of eigenvalues within the contour. Examining the singular values of the projections on moderately sized, randomly generated test problems motivates orthogonalization-based improvements to the algorithm. The singular value distributions provide experimentally robust estimates of the number of eigenvalues within the contour. The algorithm is modified to handle both Hermitian and general complex matrices. The original algorithm (based on circular contours and Gauss-Legendre quadrature) is extended to contours and quadrature schemes that are recursively subdividable. A general complex recursive algorithm is implemented on rectangular and diamond contours. The accuracy of different quadrature schemes for various contours is investigated.
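To make the contour-integral projection concrete, the sketch below approximates the spectral projector for eigenvalues inside a circular contour with Gauss-Legendre quadrature of the resolvent and performs one Rayleigh-Ritz step; it is a simplified illustration of the basic FEAST idea (including the singular-value-based count estimate), not the modified recursive algorithm developed in this work.

import numpy as np

def feast_projection(A, center, radius, m0, n_quad=8):
    """Approximate the spectral projector for eigenvalues of A inside the
    circle |z - center| < radius by Gauss-Legendre quadrature of the
    resolvent, applied to a random block of m0 vectors (one FEAST step)."""
    n = A.shape[0]
    Y = np.random.randn(n, m0)
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    Q = np.zeros((n, m0), dtype=complex)
    for x, w in zip(nodes, weights):
        theta = np.pi * (x + 1.0)                        # map [-1, 1] to [0, 2*pi]
        z = center + radius * np.exp(1j * theta)
        dz = 1j * np.pi * radius * np.exp(1j * theta)    # dz/dx for this mapping
        Q += w * dz * np.linalg.solve(z * np.eye(n) - A, Y)
    Q /= 2j * np.pi
    # Singular values of Q near 1 vs. near 0 indicate how many eigenvalues
    # actually lie inside the contour.
    svals = np.linalg.svd(Q, compute_uv=False)
    # Rayleigh-Ritz on the projected subspace recovers interior eigenpairs;
    # the full algorithm iterates this refinement.
    Qo, _ = np.linalg.qr(Q)
    evals, evecs = np.linalg.eig(Qo.conj().T @ A @ Qo)
    return evals, Qo @ evecs, svals

A = np.diag(np.arange(1.0, 11.0))              # toy symmetric matrix, eigenvalues 1..10
evals, evecs, svals = feast_projection(A, center=5.0, radius=1.5, m0=4)
print(np.sort(evals.real))                     # three Ritz values should approximate 4, 5, 6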
Abstract:
A distinguishing feature of the discipline of archaeology is its reliance upon sensory-dependent investigation. As perceived by all of the senses, the felt environment is a unique area of archaeological knowledge. It is generally accepted that the emergence of industrial processes in the recent past has been accompanied by unprecedented sonic extremes. The work of environmental historians has provided ample evidence that the introduction of much of this unwanted sound, or "noise," was an area of contestation. More recent research in the history of sound has called for more nuanced distinctions than the noisy/quiet dichotomy. Acoustic archaeology tends to focus upon the reconstruction of sound-producing instruments and spaces, with a primary goal of ascertaining intentionality. Most archaeoacoustic research is focused on learning more about the sonic world of people within prehistoric timeframes, while some research has been done on historic sites. In this thesis, by way of a meditation on industrial sound and the physical remains of the Quincy Mining Company blacksmith shop (Hancock, MI) in particular, I argue for an acceptance and inclusion of sound as artifact in and of itself. I introduce the concept of an individual sound-form, or sonifact, as a reproducible, repeatable, representable physical entity, created by tangible, perhaps even visible, host-artifacts. A sonifact is a sound that endures through time, with negligible variability. Through the piecing together of historical and archaeological evidence, I present in this thesis a plausible sonifactual assemblage at the blacksmith shop in April 1916, as it may have been experienced by an individual traversing the vicinity on foot: an 'historic soundwalk.' The sensory apprehension of abandoned industrial sites is multi-faceted. In this thesis I hope to make the case for an acceptance of sound as a primary heritage value when thinking about the industrial past, and also for an increased awareness and acceptance of sound and listening as a primary mode of perception.