926 results for digital spiral analysis
Abstract:
Northern hardwood management was assessed throughout the state of Michigan using data collected on recently harvested stands in 2010 and 2011. Methods of forensic estimation of diameter at breast height were compared, and an ideal, localized equation form was selected for use in reconstructing pre-harvest stand structures. Comparisons showed differences in predictive ability among available equation forms, which led to substantial financial differences when the equations were used to estimate the value of removed timber. Management on all stands was then compared among state, private, and corporate landowners. Comparing harvest intensities against a liberal interpretation of a well-established management guideline showed that approximately one third of harvests were conducted in a manner consistent with the guideline; one third removed more than recommended, and one third were less intensive than recommended. Multiple management guidelines and postulated objectives were then synthesized into a novel system of harvest taxonomy, against which all harvests were compared. This further comparison showed approximately the same proportions of harvests while distinguishing sanitation cuts and the future productive potential of harvests cut more intensively than the guidelines suggest. Stand structures are commonly represented using diameter distributions. Parametric and nonparametric techniques for describing diameter distributions were applied to pre-harvest and post-harvest data. A common polynomial regression procedure was found to be highly sensitive to the method of histogram construction that provides the data points for the regression, and the discriminative ability of kernel density estimation differed substantially from that of the polynomial regression technique.
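A minimal sketch of the sensitivity contrast described above, on synthetic diameter-at-breast-height (DBH) data; the gamma sample, bin counts, and polynomial degree are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
dbh = rng.gamma(shape=3.0, scale=8.0, size=400)  # synthetic DBH sample [cm]

# Polynomial regression on histogram bar heights: the fitted curve depends on
# the binning choice that produced those bars.
for bins in (10, 20, 40):
    counts, edges = np.histogram(dbh, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    coeffs = np.polyfit(centers, counts, deg=5)
    print(f"{bins:3d} bins: fitted density at 25 cm = {np.polyval(coeffs, 25.0):.4f}")

# Kernel density estimation involves no binning, only a bandwidth (automatic here).
kde = gaussian_kde(dbh)
print(f"      KDE: density at 25 cm = {kde(25.0)[0]:.4f}")
```

Changing only the bin count shifts the polynomial answer, while the KDE estimate depends on a single bandwidth choice rather than on histogram construction.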
Abstract:
This study focuses on a specific engine: a dual-spool, separate-flow turbofan engine with an Interstage Turbine Burner (ITB). The conventional turbofan engine has been modified to include a secondary isobaric burner, the ITB, in a transition duct between the high-pressure and low-pressure turbines. The preliminary design phase for this modified engine starts with aerothermodynamic cycle analysis, consisting of parametric (on-design) and performance (off-design) cycle analyses. In the parametric analysis, the modified engine's performance parameters are evaluated and compared with those of the baseline engine in terms of design limitations (maximum turbine inlet temperature), flight conditions (such as flight Mach number and ambient temperature and pressure), and design choices (such as compressor pressure ratio, fan pressure ratio, and fan bypass ratio). A turbine cooling model is also included to account for the effect of cooling air on engine performance. The results of the on-design analysis confirm the advantages of the ITB: higher specific thrust with only a small increase in thrust-specific fuel consumption, less cooling air, and less NOx production, provided that the main-burner and ITB exit temperatures are properly specified. It is also important to identify the critical ITB temperature, beyond which the ITB is turned off and offers no advantage. Building on the encouraging parametric results, a detailed performance cycle analysis of the same engine was conducted for steady-state engine performance prediction. The off-design results show that the ITB engine at full throttle has enhanced performance over the baseline engine; furthermore, the ITB engine operating at partial throttle settings exhibits higher thrust at lower specific fuel consumption and improved thermal efficiency over the baseline engine. A mission analysis is also presented to predict fuel consumption in specific mission phases. Excel macro code written in Visual Basic for Applications and Excel worksheet cells are combined to enable Excel to perform these cycle analyses. These user-friendly programs compute and plot the data sequentially without requiring users to open separate post-processing programs.
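The flavor of an on-design (parametric) sweep can be illustrated with a much-simplified stand-in: an ideal single-stream core following standard cycle-analysis texts, not the dual-spool ITB turbofan model of the study. Every input value below is an illustrative assumption:

```python
import math

gamma, R = 1.4, 287.0          # calorically perfect gas assumptions
T0, M0 = 216.7, 0.8            # ambient temperature [K], flight Mach number
Tt4 = 1800.0                   # assumed main-burner exit temperature [K]

a0 = math.sqrt(gamma * R * T0)           # ambient speed of sound
tau_r = 1 + 0.5 * (gamma - 1) * M0 ** 2  # ram temperature ratio

for pi_c in (10, 20, 30, 40):            # design choice: compressor pressure ratio
    tau_c = pi_c ** ((gamma - 1) / gamma)
    tau_lam = Tt4 / T0
    tau_t = 1 - tau_r * (tau_c - 1) / tau_lam     # turbine work drives compressor
    v9_a0 = math.sqrt(2 / (gamma - 1) * tau_lam / (tau_r * tau_c)
                      * (tau_r * tau_c * tau_t - 1))
    F_mdot = a0 * (v9_a0 - M0)                    # specific thrust [N*s/kg]
    print(f"pi_c = {pi_c:2d}: specific thrust = {F_mdot:6.1f} N*s/kg")
```

The study's analysis layers a fan stream, an ITB energy addition between the turbines, and a turbine cooling model on top of this kind of sweep.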
Abstract:
Bidirectional promoters regulate adjacent genes organized in a divergent (head-to-head) orientation. Several reports describe genome-scale analyses of bidirectional promoters in mammals. This work provides the theoretical and experimental background needed to carry out a genome-scale analysis of bidirectional promoters in plants. A computational study was performed to identify putative bidirectional promoters and their over-represented cis-regulatory motifs in three sequenced plant genomes, rice (Oryza sativa), Arabidopsis thaliana, and Populus trichocarpa, using the Plant Cis-acting Regulatory DNA Elements (PLACE) and PlantCARE databases. Over-represented motifs and their possible functions were described with the help of a few conserved, representative putative bidirectional promoters from the three model plants, laying a foundation for the experimental evaluation of bidirectional promoters in plants. A novel Agrobacterium tumefaciens-mediated transient expression assay (AmTEA) was developed for young plants of different cereal species and the model dicot Arabidopsis thaliana. AmTEA was evaluated using five promoters (six constructs) and two reporter genes, gus and egfp. The efficacy and stability of AmTEA were compared with stable transgenics using the Arabidopsis DEAD-box RNA helicase family gene promoter. AmTEA was developed primarily to overcome the many problems associated with developing transgenics and conducting expression studies in plants. Finally, a possible mechanism for the activity of bidirectional promoters was highlighted. Deletion analysis using promoter-reporter gene constructs identified three rice promoters as bidirectional. Regulatory elements located in the 5'-untranslated region (UTR) of one gene of the divergent pair were found to be responsible for the bidirectional activity.
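A sketch of the first computational step, flagging head-to-head gene pairs whose shared intergenic region is a candidate bidirectional promoter; the gene records, coordinates, and the 1 kb spacer cutoff are hypothetical, and real work would parse a genome annotation (e.g., GFF) for rice, Arabidopsis, or Populus:

```python
from dataclasses import dataclass

@dataclass
class Gene:
    name: str
    start: int   # chromosomal start (< end)
    end: int
    strand: str  # '+' or '-'

MAX_SPACER = 1000  # assumed cutoff for a shared promoter region [bp]

def divergent_pairs(genes):
    """Yield (minus_gene, plus_gene, spacer_len) for head-to-head neighbors."""
    genes = sorted(genes, key=lambda g: g.start)
    for left, right in zip(genes, genes[1:]):
        spacer = right.start - left.end
        if left.strand == '-' and right.strand == '+' and 0 < spacer <= MAX_SPACER:
            yield left, right, spacer

demo = [Gene("Os01g0100", 1000, 4000, '-'),
        Gene("Os01g0200", 4500, 8000, '+'),   # divergent pair, 500 bp spacer
        Gene("Os01g0300", 9000, 12000, '+')]
for a, b, d in divergent_pairs(demo):
    print(f"{a.name} <-{d} bp-> {b.name}: candidate bidirectional promoter")
```

The candidate spacers would then be scanned against PLACE and PlantCARE motif libraries for over-representation.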
Abstract:
The goal of this research is to provide a framework for the vibro-acoustic analysis and design of multiple-layer constrained-damping structures. Existing research on viscoelastic damping mechanisms falls into four mainstream approaches: modeling of damping treatments and materials; control through the electromechanical effect of a piezoelectric layer; optimization of structural parameters to meet design requirements; and identification of damping material properties from the structural response. This research proposes a systematic design methodology for multiple-layer constrained-damping beams that takes vibro-acoustics into consideration. A modeling technique for the vibro-acoustics of multiple-layer viscoelastic laminated beams using the Biot damping model is presented as a hybrid numerical model: the boundary element method (BEM) models the acoustic cavity, while the finite element method (FEM) is the basis for the vibration analysis of the multiple-layer beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature, and different damping materials in individual layers. A curve-fitting procedure for obtaining the Biot constants of each damping material at each temperature is explained. The structural vibration results for selected beams agree with published closed-form results, and the radiated noise for a sample beam structure obtained with commercial BEM software is compared with the acoustic results for the same beam using the Biot damping model. The Biot damping model is then extended to the multiple-degree-of-freedom (MDOF) dynamics equations of a discrete system in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency; applying multiple-layer treatments increases the damping of the structure significantly and thus helps attenuate vibration and noise over a broad range of frequencies and temperatures. The main contributions of this dissertation are three tasks: 1) studying the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the FEM model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; and 3) extending the vibration problem to a BEM-based acoustic problem and comparing the results with commercial simulation software.
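The curve-fitting step can be sketched by fitting the Biot-form complex shear modulus G*(w) = G_inf [1 + sum_k a_k * iw / (iw + b_k)] to measured storage and loss moduli at one temperature; the data arrays, the two-term model order, and G_inf below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measurements: frequency [rad/s], storage and loss moduli [Pa]
omega = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
G_store = np.array([1.10e6, 1.35e6, 1.55e6, 2.10e6, 2.40e6])
G_loss = np.array([0.20e6, 0.45e6, 0.60e6, 0.90e6, 1.00e6])

G_inf = 1.0e6  # assumed relaxed (long-term) shear modulus [Pa]

def biot_modulus(params, w, n_terms=2):
    """G*(w) = G_inf * (1 + sum_k a_k * iw / (iw + b_k)) -- Biot form."""
    a, b = params[:n_terms], params[n_terms:]
    s = 1j * w[:, None]
    return G_inf * (1 + (a * s / (s + b)).sum(axis=1))

def residuals(params):
    G = biot_modulus(params, omega)
    # match real (storage) and imaginary (loss) parts simultaneously
    return np.concatenate([G.real - G_store, G.imag - G_loss]) / 1e6

x0 = np.array([0.5, 1.0, 50.0, 800.0])   # initial guesses for a_1, a_2, b_1, b_2
fit = least_squares(residuals, x0, bounds=(1e-6, np.inf))
print("a_k:", fit.x[:2], " b_k:", fit.x[2:])
```

Repeating the fit at each test temperature yields the temperature-dependent Biot constants the FEM model consumes.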
Abstract:
In this dissertation, the National Survey of Student Engagement (NSSE) serves as a nodal point through which to examine the power relations shaping the direction and practices of higher education in the twenty-first century. Theoretically, my analysis is informed by Foucault’s concept of governmentality, briefly defined as a technology of power that influences or shapes behavior from a distance. This form of governance operates through apparatuses of security, which include higher education. Foucault identified three essential characteristics of an apparatus—the market, the milieu, and the processes of normalization—through which administrative mechanisms and practices operate and govern populations. In this project, my primary focus is on the governance of faculty and administrators, as a population, at residential colleges and universities. I argue that the existing milieu of accountability is one dominated by the neoliberal assumption that all activity—including higher education—works best when governed by market forces alone, reducing higher education to a market-mediated private good. Under these conditions, what many in the academy believe is an essential purpose of higher education—to educate students broadly, to contribute knowledge for the public good, and to serve as society’s critic and social conscience (Washburn 227)—is being eroded. Although NSSE emerged as a form of resistance to commercial college rankings, it did not challenge the forces that empowered the rankings in the first place. Indeed, NSSE data are now being used to make institutions even more responsive to market forces. Furthermore, NSSE’s use has a normalizing effect that tends to homogenize classroom practices and erode the autonomy of faculty in the educational process. It also positions students as part of the system of surveillance. In the end, if aspects of higher education that are essential to maintaining a civil society are left to be defined solely in market terms, the result may be a less vibrant and, ultimately, a less just society.
Abstract:
What motivates students to perform and pursue engineering design tasks? This study examines this question through three Learning Through Service (LTS) programs: 1) an ongoing longitudinal study examining the impacts of service on engineering students, 2) an ongoing analysis of an international senior design capstone program, and 3) an ongoing evaluation of an international graduate-level research program. The evaluation of these programs incorporates both qualitative and quantitative methods, using surveys, questionnaires, and interviews to provide insight into what motivates students to do engineering design work. The quantitative methods were used to analyze several instruments: a readiness assessment inventory, the Intercultural Development Inventory, the Sustainable Engineering through Service Learning survey, and the Impacts of Service on Engineering Students survey. The results of these instruments provide much-needed insight into how prepared students are to participate in engineering programs. The qualitative methods include word clouds, motivational narratives, and interview analysis, and aim to collect more in-depth information than the quantitative instruments allow. This thesis focuses on how these instruments help determine what motivates engineering students to pursue engineering design tasks. Preliminary results from the 120 interviews analyzed suggest that interest/enjoyment, application of knowledge and skills, and gaining knowledge are key motivating factors regardless of gender or academic level. Together, these findings begin to shed light on what motivates students to perform engineering design tasks, which can be applied to improve recruitment and retention in university programs.
Abstract:
Typical internal combustion engines lose about 75% of the fuel energy through the engine coolant, exhaust, and surface radiation. This heat is a byproduct of converting the chemical energy in the fuel to mechanical energy, and the resulting thermal energy generally goes unutilized and is thus wasted. This report describes the analysis of a novel waste heat recovery (WHR) system that operates on a Rankine cycle. The system adds a second piston within the existing piston to reduce losses associated with the compression and exhaust strokes in a four-stroke engine. The thermal energy recovered from the coolant and exhaust systems generates a high-temperature, high-pressure working fluid that powers the modified piston assembly. Cycle simulation shows that a large, stationary natural gas spark-ignition engine produces enough waste heat to operate the novel WHR system. With this system, the engine running at 900 RPM and full load had a net power increase of 177.03 kW (240.7 HP), which improved the brake fuel conversion efficiency by 4.53%.
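How the quoted figures relate can be checked directly from the definition of brake fuel conversion efficiency, eta_b = P_brake / (mdot_fuel * LHV). The fuel-energy input rate below is back-calculated from the report's own numbers and assumes the 4.53% is an absolute gain in efficiency points:

```python
delta_P_kW = 177.03   # net brake power gain from the WHR system [kW]
delta_eta = 0.0453    # quoted efficiency improvement (assumed absolute points)

# Implied fuel energy input rate: delta_eta = delta_P / (mdot_fuel * LHV)
fuel_input_kW = delta_P_kW / delta_eta
print(f"Implied fuel energy input ~ {fuel_input_kW:.0f} kW")
print(f"Check: {delta_P_kW} kW / {fuel_input_kW:.0f} kW = "
      f"{delta_P_kW / fuel_input_kW:.2%} efficiency gain")
```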
Abstract:
Compiler optimizations help code run faster. When compilation happens before the program is run, compilation time is a minor concern, but how do on-the-fly compilation and optimization affect overall runtime? If the compiler must compete with the running application for resources, the application takes longer to complete. This paper investigates the impact of specific compiler optimizations on the overall runtime of an application. A foldover Plackett-Burman design is used to choose compiler optimizations that appear to contribute to shorter overall runtimes. The selected optimizations are compared with the default optimization levels in the Jikes RVM, and the method selects optimizations that result in a shorter overall runtime than the default O0, O1, and O2 levels. This shows that careful selection of compiler optimizations can have a significant, positive impact on overall runtime.
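A sketch of the screening approach, assuming a standard 12-run Plackett-Burman design folded over to 24 runs; the 11 "flags" and the synthetic runtime response are stand-ins for real Jikes RVM optimizations and measured timings:

```python
import numpy as np

gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])  # PB12 generator row
rows = [np.roll(gen, k) for k in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])    # 12-run PB design
design = np.vstack([design, -design])                   # foldover: 24 runs

rng = np.random.default_rng(1)
def measure_runtime(run):
    # Hypothetical benchmark: flag 3 shortens runtime, flag 7 lengthens it.
    return 100 - 4 * run[3] + 2 * run[7] + rng.normal(0, 0.5)

y = np.array([measure_runtime(run) for run in design])

# Main effect of each flag: mean runtime with flag on minus flag off.
effects = np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                    for j in range(11)])
for j in np.argsort(effects):   # most runtime-reducing flags first
    print(f"flag {j:2d}: effect on runtime = {effects[j]:+6.2f}")
```

The foldover (appending the negated design) cancels confounding of main effects with two-factor interactions, which is why it suits noisy runtime screening.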
Abstract:
The Collingwood Member is a middle-to-late-Ordovician self-sourced reservoir deposited across the northern Michigan Basin and parts of Ontario, Canada. Although it has been studied in Canada, relatively little data have been available from the Michigan subsurface. Recent commercial interest in the Collingwood has resulted in the drilling and production of several wells in the state of Michigan. Analysis of core samples, measured laboratory data, and petrophysical logs has yielded both a quantitative and a qualitative understanding of the formation in the Michigan Basin. The Collingwood is a low-permeability, low-porosity carbonate package that is very high in organic content. It is composed primarily of a uniformly fine-grained carbonate matrix with lesser amounts of kerogen, silica, and clays; the kerogen is finely dispersed in the clay and carbonate mineral phases. Geochemical and production data show that both oil and gas phases are present, depending on regional thermal maturity. The deposit is richest in the north-central part of the basin, where deposition is thickest and organic content highest. Because the Collingwood is fairly thin, vertical fractures may easily extend into the surrounding formations; completion and treatment techniques should be designed around these parameters to enhance production.
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the incremental dynamic analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated, and various sources of uncertainty are considered so that their influence on the collapse risk of light-frame wood construction can be assessed. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causing huge economic losses and threatening life safety, yet little work has investigated snow hazard combined with seismic hazard. A filtered Poisson process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center, and it is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risk expressed as economic loss is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. The first assesses the loss of the building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The second applies the framework to a wood building in the state of Washington to assess the loss under combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
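A minimal sketch of a filtered Poisson process snow-load model of the kind described: snowfall events arrive as a Poisson process, each deposits a random load, and an exponential "filter" represents gradual melting. All rates and magnitudes are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(2)
season_days = 150   # assumed length of one snow season [days]
rate = 0.2          # snowfall events per day (Poisson arrival rate)
mean_load = 0.3     # mean load deposited per event [kPa], exponential magnitudes
melt_tau = 20.0     # e-folding melt time [days]

def simulate_season():
    # Conditional on the Poisson count, arrival times are uniform over the season.
    n = rng.poisson(rate * season_days)
    t_events = rng.uniform(0, season_days, n)
    loads = rng.exponential(mean_load, n)
    t = np.arange(season_days)
    # FPP: S(t) = sum_i X_i * exp(-(t - t_i)/tau) for t >= t_i
    decay = np.exp(-(t[:, None] - t_events) / melt_tau)
    decay[t[:, None] < t_events] = 0.0
    return (loads * decay).sum(axis=1)

peaks = [simulate_season().max() for _ in range(5000)]
print(f"Median seasonal peak load ~ {np.median(peaks):.2f} kPa")
print(f"98th-percentile seasonal peak ~ {np.quantile(peaks, 0.98):.2f} kPa")
```

Unlike a Bernoulli on/off snow model, this process carries load history through time, so a ground snow load can coincide with a simulated earthquake arrival.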
Abstract:
The electric utility business is an inherently dangerous field, with employees exposed to many potential hazards daily. One such hazard is an arc flash: a rapid release of energy, referred to as incident energy, caused by an electric arc. Because an arc flash is random in nature and occurrence, one can only prepare for such a violent event and minimize the harm to oneself and other employees and the damage to equipment. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that companies whose employees work on or near energized equipment perform an arc-flash assessment to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) short circuit and relay coordination software package, ASPEN OneLiner™, one of the first packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. The package is also benchmarked against the equations provided in IEEE Std 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards; analysis methods, both software-based and empirically derived equations; issues of concern with the calculation methods; and the work conducted at MP. This work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
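At transmission voltages, incident energy is often estimated with the theoretically derived Lee method cited in IEEE Std 1584-2002. The sketch below applies that equation to hypothetical inputs; it is not a reproduction of the ASPEN OneLiner workflow or of MP's system data:

```python
def lee_incident_energy(v_kv, i_bf_ka, t_s, d_mm):
    """Lee method: E = 2.142e6 * V * Ibf * t / D^2 [J/cm^2],
    with V in kV, Ibf (bolted fault current) in kA, t in s, D in mm.
    Returned in cal/cm^2 (1 cal = 4.184 J)."""
    e_j_cm2 = 2.142e6 * v_kv * i_bf_ka * t_s / d_mm ** 2
    return e_j_cm2 / 4.184

# Hypothetical 115 kV fault: 10 kA bolted fault, 5-cycle (0.083 s) clearing,
# worker at 1.5 m working distance.
e = lee_incident_energy(v_kv=115.0, i_bf_ka=10.0, t_s=0.083, d_mm=1500.0)
print(f"Incident energy ~ {e:.1f} cal/cm^2")
```

The result, compared against PPE category thresholds, drives the protective clothing requirement for the task.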
Abstract:
As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion of the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions; here it is calculated from material quantities and prices using national economic input/output-based models from China, the United States, and Mali, and serves as a measure of the aggregate environmental impact of an intervention. Human energy quantifies the caloric expenditure associated with installing and operating an intervention; it is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs), and serves as a measure of the aggregate social impact of an intervention. Four household treatment interventions (biosand filtration, chlorination, ceramic filtration, and boiling) and four source-level interventions (an improved well, a rope pump, a hand pump, and a solar pump) are evaluated in the context of Mali, West Africa. Source-level interventions slightly outperform household-level interventions in terms of total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant in all eight interventions, contributing over half of the total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women; when human energy is disaggregated by gender, women are seen to perform over 99% of the work associated with seven of the eight interventions. This has profound implications for gender equality in the context of water quality interventions and may justify investment in interventions that reduce human energy burdens.
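The human-energy side of the accounting can be sketched directly from its definition, energy = PAR x BMR x task duration; the BMR, PAR values, and task times below are illustrative assumptions, not the study's Mali field data:

```python
BMR_MJ_PER_DAY = 5.5   # assumed adult BMR; in practice depends on sex, age, mass
BMR_MJ_PER_HR = BMR_MJ_PER_DAY / 24

# (task, assumed PAR, assumed hours per year) -- e.g., operating a rope pump
tasks = [("drawing water", 4.0, 365 * 0.5),
         ("carrying water", 3.5, 365 * 0.4),
         ("pump maintenance", 2.5, 10.0)]

# Energy expenditure of a task = PAR (multiple of BMR) * BMR rate * time on task
total_MJ = sum(par * BMR_MJ_PER_HR * hours for _, par, hours in tasks)
for name, par, hours in tasks:
    print(f"{name:18s}: {par * BMR_MJ_PER_HR * hours:6.1f} MJ/yr")
print(f"{'total human energy':18s}: {total_MJ:6.1f} MJ/yr")
```

Summing such task inventories per intervention, and attributing each task by who performs it, yields the gender-disaggregated totals reported above.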
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes, and pressurizes the material through the rotation of either a single or double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is cut to the desired length. Generally, the primary target of a well-designed die is a uniform outlet velocity achieved without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels, but the complexity of die geometries and polymer material properties makes analytical design of complex dies difficult; complex dies must instead be optimized iteratively, and an automated iterative method is desired. Automating the design and optimization of an extrusion die raises two issues. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part, which is then used in commercial meshing software; skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work a simplex method and a modified trust region method were employed for automated optimization of die geometries. The trust region method used a discrete derivative and a BFGS Hessian approximation, and was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in the noise and function contour. Outlet velocity uniformity can generally be improved by increasing resistance across the die, but this is limited by the pressure capabilities of the extruder, so in the optimization a penalty factor that increases exponentially from the pressure limit is applied. This penalty can be applied in two different ways: only to designs that exceed the pressure limit, or to designs both above and below the pressure limit. Both methods were tested and compared in this work.
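The exponential pressure penalty and the two ways of applying it can be sketched with a toy objective minimized by a simplex (Nelder-Mead) method; the die model, pressure limit, and penalty scale below are stand-ins for the simulation-based evaluation used in the work:

```python
import numpy as np
from scipy.optimize import minimize

P_LIMIT = 30.0  # assumed extruder pressure capability [MPa]

def die_model(x):
    """Toy stand-in for a die simulation: (outlet velocity variation, pressure)."""
    velocity_var = (x[0] - 1.2) ** 2 + 0.5 * (x[1] - 0.8) ** 2 + 0.01
    pressure = 18.0 + 6.0 * x[0] + 4.0 * x[1]  # rises with flow resistance
    return velocity_var, pressure

def objective(x, two_sided=False):
    vel_var, p = die_model(x)
    penalty = np.exp((p - P_LIMIT) / 2.0)  # grows exponentially from the limit
    if two_sided:
        vel_var += penalty        # variant 2: applied above and below the limit
    elif p > P_LIMIT:
        vel_var += penalty        # variant 1: only over-pressure designs
    return vel_var

for two_sided in (False, True):
    res = minimize(objective, x0=np.array([0.5, 0.5]), args=(two_sided,),
                   method="Nelder-Mead")
    _, p = die_model(res.x)
    print(f"two_sided={two_sided}: x = {res.x.round(3)}, pressure = {p:.1f} MPa")
```

The two-sided variant keeps some pressure margin even for feasible designs, while the one-sided variant only reacts once the limit is exceeded, which is the trade-off the work compares.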