Abstract:
In-cylinder pressure transducers have been used for decades to record combustion pressure inside a running engine. However, due to the extreme operating environment, transducer design and installation must be considered carefully to minimize measurement error. One such error is caused by thermal shock, where the pressure transducer experiences a high heat flux that can distort the transducer diaphragm and also change the crystal sensitivity. This research investigated the effects of thermal shock on in-cylinder pressure transducer data quality using a 2.0 L, four-cylinder, spark-ignited, direct-injected, turbocharged GM engine. Cylinder four was modified with five ports to accommodate pressure transducers from different manufacturers: an AVL GH14D, an AVL GH15D, a Kistler 6125C, and a Kistler 6054AR. The GH14D, GH15D, and 6054AR were M5-size transducers; the 6125C was a larger, 6.2 mm transducer. Both AVL pressure transducers utilized a PH03 flame arrestor. Sweeps of ignition timing (spark sweep), engine speed, and engine load were performed to study the effects of thermal shock on each pressure transducer. The project consisted of two distinct phases: experimental engine testing and simulation using a commercially available software package. A comparison was performed to characterize data quality between the actual cylinder pressure and the simulated results; this comparison was valuable because the simulation results did not include thermal shock effects. All three sets of tests showed that the peak cylinder pressure was essentially unaffected by thermal shock, and the experimental data correlated very well with the simulated results. The spark sweep, performed at 1300 RPM and 3.3 bar NMEP, showed that the differences between the simulated results (no thermal shock) and the experimental data for the indicated mean effective pressure (IMEP) and the pumping mean effective pressure (PMEP) were significantly less than the published accuracies: all transducers had an IMEP percent difference below 0.038% and a PMEP percent difference below 0.32%, whereas Kistler and AVL publish that the accuracy of their pressure transducers is within plus or minus 1% for IMEP (AVL 2011; Kistler 2011). In addition, the difference in average exhaust absolute pressure between the simulated results and the experimental data was greatest for the two Kistler pressure transducers; their location and lack of a flame arrestor are believed to be the cause of the increased error. For the engine speed sweep, the torque output was held constant at 203 Nm (150 ft-lbf) from 1500 to 4000 RPM. The difference in IMEP was less than 0.01%, and the difference in PMEP was less than 1%, except for the AVL GH14D at 5% and the AVL GH15D at 2.25%. A noticeable error in PMEP appeared as the load increased during the engine speed sweeps, as expected. The load sweep was conducted at 2000 RPM over a range of NMEP from 1.1 to 14 bar. The differences in IMEP were less than 0.08%, while the PMEP differences were below 1%, except for the AVL GH14D at 1.8% and the AVL GH15D at 1.25%. In-cylinder pressure transducer data quality was effectively analyzed using a combination of experimental data and simulation results. Several criteria can be used to investigate the impact of thermal shock on data quality and to determine the best location and thermal protection for various transducers.
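As an illustration of the comparison criteria above, the following is a minimal sketch (in Python, with hypothetical function and variable names not taken from the thesis) of how net IMEP can be computed from sampled pressure-volume data and compared against a thermal-shock-free simulated baseline:

```python
import numpy as np

def imep(pressure_pa, volume_m3, displacement_m3):
    """Net indicated mean effective pressure for one engine cycle.

    pressure_pa, volume_m3: in-cylinder pressure and volume sampled over
    a full 720-degree cycle, with the first point repeated at the end so
    the integration path is closed.
    """
    # Closed-path trapezoidal integration of p dV gives indicated work.
    work_j = 0.5 * np.sum((pressure_pa[1:] + pressure_pa[:-1])
                          * np.diff(volume_m3))
    return work_j / displacement_m3  # result in Pa

def percent_difference(measured, simulated):
    # The error criterion used to judge each transducer against the
    # simulation, which contains no thermal shock effects.
    return 100.0 * abs(measured - simulated) / abs(simulated)
```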
Abstract:
The Zagros oak forests in western Iran are critically important to the sustainability of the region, yet they have undergone dramatic declines in recent decades. We evaluated the utility of the non-parametric Random Forest classification algorithm for land cover classification of Zagros landscapes, and selected the best spatial and spectral predictive variables. The algorithm yielded high overall classification accuracies (>85%) and equivalent accuracies for the datasets from the three different sensors. We evaluated the associations between trends in forest area and structure and trends in socioeconomic and climatic conditions, to identify the most likely driving forces behind deforestation and landscape structure change. We used available socioeconomic (urban and rural population, and rural income) and climatic (mean annual rainfall and mean annual temperature) data for two provinces in northern Zagros. Urban population was the driving force most strongly correlated with forest area loss, with climatic variables correlated to a lesser extent; landscape structure changes were more closely associated with rural population. We also examined the effects of scale changes on the results of spatial pattern analysis, assessing the impacts of eight years of protection in a protected area in northern Zagros at two different scales (both grain and extent); the effects of protection on the amount and structure of forests were scale dependent. Finally, we evaluated the nature and magnitude of changes in forest area and structure over the entire Zagros region from 1972 to 2009. We divided the Zagros region into 167 landscape units and developed two measures, Deforestation Sensitivity (DS) and Connectivity Sensitivity (CS), computed for each landscape unit as the percent of time steps in which forest area or equivalent connected area (ECA), respectively, decreased by more than 10%. A considerable loss in forest area and connectivity was detected, but no sudden (nonlinear) changes were found at the spatial and temporal scale of the study. Connectivity loss occurred more rapidly than forest loss because of the loss of connecting patches, and more connectivity was lost in southern Zagros due to climatic differences and different forms of traditional land use.
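The DS and CS measures defined above lend themselves to a direct computation; a minimal Python sketch follows, using a hypothetical forest-area series (the same function applied to an ECA series gives CS):

```python
import numpy as np

def sensitivity(series, threshold=0.10):
    """Percent of time steps with a relative decrease greater than the
    threshold; applied to forest area it gives DS, applied to ECA, CS."""
    values = np.asarray(series, dtype=float)
    drops = (values[:-1] - values[1:]) / values[:-1]  # relative decrease
    return 100.0 * np.mean(drops > threshold)

# Hypothetical forest-area trajectory (km^2) for one landscape unit:
ds = sensitivity([120.0, 118.0, 95.0, 94.0, 70.0])  # -> 50.0
```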
Abstract:
Nitrogen and water are essential for plant growth and development. In this study, we designed experiments to produce gene expression data from poplar roots under nitrogen starvation and water deprivation. We found that a low concentration of nitrogen led first to increased root elongation, followed by lateral root proliferation, and eventually to increased root biomass. To identify genes regulating root growth and development under nitrogen starvation and water deprivation, we designed a series of data analysis procedures through which we successfully identified biologically important genes. Differentially Expressed Gene (DEG) analysis identified the genes that are differentially expressed under nitrogen starvation or drought. Protein domain enrichment analysis identified enriched themes (genes sharing the same domains) that are highly interactive during the treatment. Gene Ontology (GO) enrichment analysis allowed us to identify biological processes that changed during nitrogen starvation. Based on these analyses, we examined the local Gene Regulatory Network (GRN) and identified a number of transcription factors; after testing, one of them, a transcription factor ranked high in the hierarchy, was confirmed to affect root growth under nitrogen starvation. Because analyzing gene expression data manually is tedious and time-consuming, we automated the analysis as a computational pipeline that can now identify DEGs and perform protein domain analysis in a single run. It is implemented in Perl and R scripts.
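The pipeline itself is written in Perl and R; as a language-neutral illustration, here is a minimal Python sketch of one standard DEG criterion (fold change plus t-test), which may differ from the exact statistics the pipeline applies:

```python
import numpy as np
from scipy import stats

def find_degs(control, treated, fc_cut=2.0, p_cut=0.05):
    """Flag differentially expressed genes by fold change and t-test.

    control, treated: 2-D arrays (genes x replicates) of normalized
    expression values from, e.g., the nitrogen-starvation experiment.
    """
    log2fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))
    _, pvals = stats.ttest_ind(treated, control, axis=1)
    return (np.abs(log2fc) >= np.log2(fc_cut)) & (pvals <= p_cut)
```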
Abstract:
The work described in this thesis had two objectives. The first was to develop a physically based computational model that could predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. The second was to determine how the secondary phase inclusions observed in Pb1-xSnxTe alloys made by consolidating mechanically alloyed elemental powders impact the ability of the material to harvest waste heat and generate electricity in that temperature range. The motivation for this work was that, though the promise of this alloy as an unusually efficient thermoelectric power generator material in the 400 K to 700 K range had been demonstrated in the literature, methods to reproducibly control and subsequently optimize the material's thermoelectric figure of merit remain elusive. Mechanical alloying, though not typically used to fabricate these alloys, is a potential method for cost-effectively engineering these properties. Given that there are deviations from crystalline perfection in mechanically alloyed material, such as secondary phase inclusions, the question arises as to whether these defects are detrimental to thermoelectric function or, alternatively, whether they enhance it. The hypothesis formed at the outset of this work was that the small secondary phase SnO2 inclusions observed in the mechanically alloyed Pb1-xSnxTe would increase the thermoelectric figure of merit of the material over the temperature range of interest. It was proposed that the increase would arise because the inclusions would not reduce the electrical conductivity to as great an extent as the thermal conductivity. If this were true, then the experimentally measured electronic conductivity in mechanically alloyed Pb1-xSnxTe alloys with these inclusions would not be less than that expected in alloys without them, while the portion of the thermal conductivity not due to charge carriers (the lattice thermal conductivity) would be less than what would be expected from alloys without inclusions. Furthermore, it would be possible to approximate the observed changes in the electrical and thermal transport properties using existing physical models for the scattering of electrons and phonons by small inclusions. The approach taken to investigate this hypothesis was, first, to experimentally characterize the mobile carrier concentration at room temperature, along with the extent and type of secondary phase inclusions present, in a series of three mechanically alloyed Pb1-xSnxTe alloys with different Sn content. Second, the physically based computational model was developed and used to determine what the electronic conductivity, Seebeck coefficient, total thermal conductivity, and the portion of the thermal conductivity not due to mobile charge carriers would be in these particular Pb1-xSnxTe alloys if there were no secondary phase inclusions. Third, the electronic conductivity, Seebeck coefficient, and total thermal conductivity were experimentally measured for these three alloys, with inclusions present, at elevated temperatures. The model predictions for electrical conductivity and Seebeck coefficient were directly compared to the experimental elevated-temperature electrical transport measurements.
The computational model was then used to extract the lattice thermal conductivity from the experimentally measured total thermal conductivity. This lattice thermal conductivity was then compared to what would be expected from the alloys in the absence of secondary phase inclusions. Secondary phase inclusions were determined by X-ray diffraction analysis to be present in all three alloys to varying extents. The inclusions were found not to significantly degrade electrical conductivity at temperatures above ~400 K in these alloys, though they do dramatically impact electronic mobility at room temperature. It is shown that, at temperatures above ~400 K, electrons are scattered predominantly by optical and acoustical phonons rather than by an alloy scattering mechanism or by the inclusions. The experimental electrical conductivity and Seebeck coefficient data at elevated temperatures were found to be within ~10% of what would be expected for material without inclusions. The inclusions were also not found to reduce the lattice thermal conductivity at elevated temperatures: the experimentally measured thermal conductivity data were consistent with the lattice thermal conductivity that would arise from two scattering processes, phonon-phonon (Umklapp) scattering and the scattering of phonons by the disorder induced by the formation of a PbTe-SnTe solid solution (alloy scattering). In contrast to the case of electrical transport, the alloy scattering mechanism in thermal transport is shown to be a significant contributor to the total thermal resistance. An estimate of the extent to which the mean free time between phonon scattering events would be reduced by the presence of the inclusions is consistent with the above analysis of the experimental data. The first important result of this work was the development of an experimentally validated, physically based computational model that can predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. This model will be critical in future work as a tool to determine, first, the highest thermoelectric figure of merit one can expect from this alloy system at a given temperature and, second, the optimum Sn content and doping level to achieve this figure of merit. The second important result is the determination that the secondary phase inclusions observed in the Pb1-xSnxTe made by mechanical alloying do not keep the material from having the same electrical and thermal transport that would be expected from "perfect" single crystal material at elevated temperatures. The analytical approach described in this work will be critical in future investigations to predict how changing the size, type, and volume fraction of secondary phase inclusions can be used to impact thermal and electrical transport in this materials system.
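The transport quantities discussed above combine into the dimensionless thermoelectric figure of merit zT = S²σT/κ, and a common first approximation for extracting the lattice contribution is the Wiedemann-Franz law; the thesis's physically based model is more detailed, so the Python sketch below (with a fixed degenerate-limit Lorenz number) is only illustrative:

```python
def lattice_thermal_conductivity(kappa_total, sigma, temp_k, lorenz=2.44e-8):
    """Subtract the electronic thermal conductivity via Wiedemann-Franz.

    kappa_total: measured total thermal conductivity (W/m-K);
    sigma: electrical conductivity (S/m); temp_k: temperature (K);
    lorenz: Lorenz number (W*Ohm/K^2), here the degenerate-limit value,
    though it varies with doping level in real Pb1-xSnxTe.
    """
    kappa_electronic = lorenz * sigma * temp_k
    return kappa_total - kappa_electronic

def figure_of_merit(seebeck, sigma, kappa_total, temp_k):
    # Dimensionless zT = S^2 * sigma * T / kappa.
    return seebeck**2 * sigma * temp_k / kappa_total
```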
Abstract:
Utilizing remote sensing methods to assess landscape-scale ecological change is rapidly becoming a dominant approach in the natural sciences, and powerful, robust non-parametric statistical methods are actively being developed to complement the unique characteristics of remotely sensed data. The focus of this research is to apply these remote sensing and statistical approaches to shed light on woody plant encroachment into native grasslands, a troubling ecological phenomenon occurring throughout the world. Specifically, this research investigates western juniper encroachment within the sage-steppe ecosystem of the western USA. Western juniper trees are native to the intermountain west and are ecologically important, providing structural diversity and habitat for many species. However, after nearly 150 years of post-European-settlement changes to this threatened ecosystem, natural ecological processes such as fire regimes no longer limit the range of western juniper to rocky refugia and other areas protected from the short fire return intervals historically common to the region. Consequently, sage-steppe communities with high juniper densities exhibit negative impacts such as reduced structural diversity, degraded wildlife habitat, and ultimately the loss of biodiversity. Much of today's sage-steppe ecosystem is transitioning to juniper woodlands, and the majority of western juniper woodlands have not yet reached their full potential in either range or density. The first section of this research investigates the biophysical drivers responsible for the juniper expansion patterns observed in the sage-steppe ecosystem. The second section is a comprehensive accuracy assessment of classification methods used to identify juniper tree cover from multispectral 1 m spatial resolution aerial imagery.
Abstract:
Accurate simulation of the aerodynamics and structural properties of the blades is crucial in wind-turbine technology, so the models used must be precise and highly detailed. With the variety of blade designs being developed, the models should also be versatile enough to adapt to the changes required by each design. We implement a combination of numerical models covering the structural and aerodynamic parts of the simulation, using the computational power of a parallel HPC cluster. The structural part models the heterogeneous internal structure of the beam based on a novel implementation of the generalized Timoshenko beam model technique. Using this technique, the 3-D structure of the blade is reduced to an asymptotically equivalent 1-D beam, which lowers the computational cost of the model without compromising its accuracy. This structural model interacts with the flow model, a modified version of Blade Element Momentum (BEM) theory. The modified BEM accounts for large deflections of the blade and also considers the pre-defined structure of the blade: blade coning and sweep, nacelle tilt, and the twist of the sections along the blade length are all computed by the model, none of which are considered in classical BEM theory. Each of these two models provides feedback to the other, and the interactive computations lead to more accurate outputs. We successfully implemented the computational models to analyze and simulate the structural and aerodynamic behavior of the blades; the interactive nature of these models and their ability to recompute data using feedback from each other makes this code more efficient than the available commercial codes. In this thesis we begin by verifying these models against the well-known benchmark blade of the NREL 5-MW Reference Wind Turbine, an alternative fixed-speed stall-controlled blade design proposed by Delft University, and a novel alternative design that we propose for a variable-speed stall-controlled turbine, which offers the potential for more uniform power control and improved annual energy production. To optimize the power output of the stall-controlled blade, we modify the existing designs and study their behavior using the aforementioned aeroelastic model.
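For reference, classical BEM reduces to a fixed-point iteration for the axial and tangential induction factors at each blade element; the minimal Python sketch below shows that baseline iteration only (the thesis's modified BEM additionally handles large deflections, coning, sweep, and nacelle tilt, and a real code would re-evaluate the airfoil coefficients from the inflow angle on each pass):

```python
import numpy as np

def bem_element(lambda_r, sigma_p, cl, cd, tol=1e-6, max_iter=200):
    """Classical BEM iteration for one blade element.

    lambda_r: local speed ratio; sigma_p: local solidity;
    cl, cd: lift and drag coefficients (held fixed here for brevity).
    Returns the axial (a) and tangential (ap) induction factors.
    No high-induction (Glauert) correction is applied.
    """
    a, ap = 0.0, 0.0
    for _ in range(max_iter):
        phi = np.arctan2(1.0 - a, lambda_r * (1.0 + ap))  # inflow angle
        cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal force coeff.
        ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential coeff.
        a_new = 1.0 / (4.0 * np.sin(phi)**2 / (sigma_p * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma_p * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            return a_new, ap_new
        a, ap = a_new, ap_new
    return a, ap
```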
Abstract:
Reflection seismic data from the F3 block in the Dutch North Sea exhibit many large-amplitude reflections at shallow horizons, typically categorized as "brightspots" (Schroot and Schuttenhelm, 2003), mainly because of their bright appearance. In most cases, these bright reflections show a significant "flatness" contrasting with local structural trends. While flatspots are often easily identified in thick reservoirs, we have occasionally observed apparent flatspot tuning effects at fluid contacts near reservoir edges and in thin reservoir beds, without fully understanding them. We conclude that many of the shallow large-amplitude reflections in block F3 are dominated by flatspots, and we investigate the thin-bed tuning effects that such flatspots cause as they interact with the reflection from the reservoir's upper boundary. Two effects must be considered: (1) the "wedge-model" tuning of the flatspot with overlying brightspots, dimspots, or polarity reversals; and (2) the stacking effects that result from the possible inclusion of post-critical flatspot reflections in these shallow sands. We modeled both phenomena for the particular stratigraphic sequence in block F3. Our results suggest that stacking of post-critical flatspot reflections can produce similarly large-amplitude but flat reflections, in some cases even causing an interface expected to produce a dimspot to appear as a brightspot. Analysis of NMO stretch and muting shows that critical-offset data are likely excluded from the stacked output; if post-critical reflections are included in stacking, unusual results will be observed. In the North Sea case, we conclude that the tuning effect was the primary cause of the brightness and flatness of these reflections. Nevertheless, care should be taken when muting reflections with a wide range of incidence angles, as the inclusion of critical-offset data may cause spurious features in the stacked section.
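Whether post-critical energy can enter the stack depends on the critical angle at the reservoir interface, and the thin-bed interaction scales with the quarter-wavelength tuning thickness; a minimal Python sketch with hypothetical layer velocities and frequency:

```python
import numpy as np

def critical_angle_deg(v1, v2):
    """Critical incidence angle from Snell's law; post-critical
    reflections exist only when the lower-layer velocity v2 exceeds v1."""
    if v2 <= v1:
        return None  # no critical angle; all offsets remain pre-critical
    return np.degrees(np.arcsin(v1 / v2))

def tuning_thickness(v, f_dominant):
    # Quarter-wavelength thickness at which thin-bed tuning peaks.
    return v / (4.0 * f_dominant)

theta_c = critical_angle_deg(1800.0, 2400.0)  # ~48.6 degrees
b_tune = tuning_thickness(1800.0, 40.0)       # 11.25 m at 40 Hz
```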
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for subspace decomposition and polynomial rooting, tasks traditionally implemented with sequential algorithms. The proposed algorithms address the emerging need for real-time localization across a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms grows, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional ones, especially for systems with a large number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are the two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), all of which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, bringing performance closer to real time. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented; the FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation. The system was developed with the objective of achieving high throughput, and various modern cores available in FPGAs were used to maximize performance; these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique is parallel and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time the complex dynamics of the root-MUSIC polynomial have been analyzed to derive such an algorithm. In all, the thesis addresses two major bottlenecks of a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
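The rooting step can be illustrated compactly; the Python sketch below applies textbook Newton iteration to a complex polynomial, with each starting point independent of the others, which is the property that maps naturally onto parallel FPGA/SIMD hardware (the thesis's method additionally exploits the specific complex dynamics of root-MUSIC polynomials):

```python
import numpy as np

def newton_root(coeffs, z0, tol=1e-12, max_iter=50):
    """Newton's method for one root of a complex polynomial.

    coeffs: highest-degree-first coefficients, as used by numpy.polyval.
    z0: starting point; for root-MUSIC one would seed points near the
    unit circle, since the signal roots lie close to |z| = 1.
    """
    dcoeffs = np.polyder(coeffs)
    z = complex(z0)
    for _ in range(max_iter):
        df = np.polyval(dcoeffs, z)
        if df == 0:
            break  # degenerate starting point; reseed in practice
        step = np.polyval(coeffs, z) / df
        z -= step
        if abs(step) < tol:
            break
    return z
```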
Abstract:
Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamic systems that govern various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while helping the user observe and understand the flow field clearly. My research focuses on the analysis and visualization of flow fields using various techniques, e.g., information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, selecting good streamlines to capture flow patterns and picking good viewpoints to observe flow fields both become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using a dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a view-dependent streamline selection that guarantees coherent streamline updates as the view changes gradually. When 3D streamlines are projected to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we designed FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration, enabling observation and exploration of the relationships among field line clusters, spatiotemporal regions, and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside the flow field, which cannot convey a clear view when the flow field is cluttered near the boundary. We therefore propose a new way to explore flow fields: selecting several internal viewpoints around the flow features inside the flow field and generating a B-spline curve path traversing these viewpoints, to provide users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work has also been extended to unsteady flow fields. Beyond flow field visualization, other visualization topics also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus: since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it, so we developed a set of visualization tools that give users an intuitive way to learn and understand these algorithms.
Abstract:
A fundamental combustion model for spark-ignition engines is studied in this report. The model is implemented in SIMULINK to simulate engine outputs (mass fraction burned and in-cylinder pressure) under various engine operating conditions. The combustion model includes turbulent propagation and eddy burning processes based on the literature [1]. These processes are simulated with a zero-dimensional method, and the flame is assumed to be spherical. To predict pressure, temperature, and other in-cylinder variables, a two-zone thermodynamic model is used. The predictions of this model match the engine test data well over a range of engine speeds, loads, spark ignition timings, and air-fuel mass ratios. The developed model is used to study cyclic variation and combustion stability at lean (or diluted) combustion conditions. Several variation sources are introduced into the combustion model to reproduce the engine performance observed in experimental data, and the relations between combustion stability and the introduced amount of variation are analyzed at various levels of lean combustion.
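The turbulent entrainment and eddy burn-up structure described above is commonly written as a pair of rate equations; the Python sketch below shows one explicit time step under that standard formulation (variable names and the Euler update are illustrative, not taken from the report):

```python
def entrainment_step(m_e, m_b, dt, rho_u, area_flame, s_lam, u_turb,
                     lam_taylor):
    """One explicit step of a zero-dimensional entrainment combustion model.

    dm_e/dt = rho_u * A_f * (S_L + u')                (flame-front entrainment)
    dm_b/dt = (m_e - m_b) / tau, tau = lambda_T / S_L (eddy burn-up)

    area_flame is the area of the assumed spherical flame; rho_u the
    unburned-gas density; u_turb the turbulence intensity; lam_taylor
    the Taylor microscale.
    """
    tau = lam_taylor / s_lam
    m_e_next = m_e + rho_u * area_flame * (s_lam + u_turb) * dt
    m_b_next = m_b + (m_e - m_b) / tau * dt
    return m_e_next, m_b_next
```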
Abstract:
In this thesis, we consider Bayesian inference on the detection of variance change points in models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and thick-tailed and includes the Gaussian, Student-t, contaminated normal, and slash distributions as special cases. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters of the variance change-point models with SMN distributions. Due to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies, and a real application to closing-price data from the U.S. stock market is presented for illustrative purposes.
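For parameters whose full conditionals have no closed form under the SMN likelihood, a sampler of this kind falls back on Metropolis-Hastings updates inside the Gibbs sweep; a minimal Python sketch of one such update (random-walk proposal; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(theta, log_conditional, scale=0.1):
    """One random-walk Metropolis-Hastings update within a Gibbs sweep.

    log_conditional: log of the full conditional density of theta
    (up to an additive constant), given all other parameters.
    """
    proposal = theta + scale * rng.standard_normal()
    log_ratio = log_conditional(proposal) - log_conditional(theta)
    return proposal if np.log(rng.uniform()) < log_ratio else theta

# A full Gibbs sweep alternates such updates (or exact conditional
# draws) over the change-point location, the variances, and the SMN
# mixing variables.
```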
Abstract:
The climate change narrative has shifted from mitigation to adaptation. Governments around the world have created climate change frameworks that address how their countries can better cope with the expected and unexpected changes due to global climate change. The federal governments of Canada and the United States, as well as some provinces and states within these countries, have created detailed documents outlining what steps must be taken to adapt to these changes. However, little is said about how these steps will be translated into policy, and how that policy will eventually be implemented. To examine the ability of governments to acknowledge and incorporate the plethora of scientific information into policy, policy capacity must be considered. This report focuses on three sectors (water supply and demand; drought and flood planning; and forest and grassland ecosystems) and on the word 'capacity' as it relates to nine different forms of policy capacity acknowledged in these frameworks. Qualitative content analysis using NVivo was carried out on fifty-four frameworks; the results show greater consideration for managerial capacity than for analytical or political capacity. The data also indicate that although more Canadian frameworks referred to policy capacity, the frameworks from the United States considered policy capacity to a greater degree.
Abstract:
This afternoon you will be working on descriptive statistics, such as the total number of discharges in the state of Montana for a given Diagnosis Related Group (DRG), the average payment for a given DRG, and the range of payments for a given DRG. We will also formulate and solve a statistical question, such as whether there is a relationship between the size of a hospital and the average payment for a given DRG.
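A sketch of how these statistics might be computed in Python with pandas (the file name, column names, and DRG code below are hypothetical):

```python
import pandas as pd

# Hypothetical columns: 'drg', 'hospital', 'beds', 'discharges', 'payment'
df = pd.read_csv("montana_inpatient.csv")

total_discharges = df.groupby("drg")["discharges"].sum()

payments = df.groupby("drg")["payment"].agg(["mean", "min", "max"])
payments["range"] = payments["max"] - payments["min"]

# Relationship between hospital size and average payment for one DRG:
one_drg = df[df["drg"] == 470]
print(one_drg["beds"].corr(one_drg["payment"]))
```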
Abstract:
This morning Dr. Risser will introduce you to the basic ideas of social network analysis. You will learn some of the history behind the study of social networks. Dr. Risser will introduce mathematical measures of social networks, including centrality measures and measures of spread and cohesion. You will also learn how to use a computer program to analyze social network data.
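The session does not name a specific program; as one common choice, the Python package networkx computes the measures mentioned above in a few lines (the friendship network here is hypothetical):

```python
import networkx as nx

# A small hypothetical friendship network.
G = nx.Graph([("Ana", "Ben"), ("Ben", "Cal"), ("Cal", "Dee"),
              ("Ana", "Cal"), ("Dee", "Eli")])

print(nx.degree_centrality(G))       # how connected each person is
print(nx.betweenness_centrality(G))  # who bridges otherwise-distant groups
print(nx.density(G))                 # one simple measure of cohesion
```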
Abstract:
There is practically only one method of gas analysis. It was worked out many years ago by Bunsen, Hempel, and Winkler, and consists of the successive absorption, by different chemicals, of the various constituents of the gas. The only improvement to this method has been the oxidation and combustion of different components of a mixture, followed by absorption.