7 results for Two Approaches

in Digital Commons - Michigan Tech


Relevance:

60.00%

Publisher:

Abstract:

The demand for power generation from non-renewable resources, and the associated costs, are increasing at an alarming rate. Solar energy is one renewable resource with the potential to curb this increase. To date, utilization of solar energy has concentrated mainly on heating applications. Using solar energy for cooling systems in buildings would contribute greatly to the goal of minimizing non-renewable energy use. The solar heating system research conducted by institutions such as the University of Wisconsin-Madison, together with the building heat flow modeling research conducted by Oklahoma State University, can be used to develop and optimize a solar cooling building system. This research combines these two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source driving the cycle. The software was then put through a series of validation tests on building cooling system data sets from similar applications around the world; the output of the software matched the established experimental results from those data sets. Software developed by other research efforts caters to advanced users; the software developed in this research is not only reliable in its code integrity but, through its integrated approach, also accessible to new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to a conventional vapor compression system.
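The central quantity such an absorption-cooling model computes, the coefficient of performance of the cycle, follows from a simple energy balance. The sketch below is a minimal illustration, not the thesis' model; the function names and all numerical inputs are assumptions.

```python
# Hypothetical energy-balance sketch for a single-effect absorption chiller,
# the kind of component an integrated solar cooling building model simulates.
# All values are illustrative assumptions, not results from the dissertation.

def absorption_cop(q_evaporator_kw, q_generator_kw, pump_work_kw=0.0):
    """COP: cooling delivered per unit of driving (generator) heat."""
    return q_evaporator_kw / (q_generator_kw + pump_work_kw)

def solar_fraction(q_solar_kw, q_generator_kw):
    """Share of the generator heat input supplied by the solar collectors."""
    return min(1.0, q_solar_kw / q_generator_kw)

# Example: 10 kW of cooling driven by 14 kW of generator heat, 11 kW of
# which the solar collector field supplies; the rest comes from a backup.
cop = absorption_cop(10.0, 14.0)       # ~0.71, a typical single-effect value
fraction = solar_fraction(11.0, 14.0)  # ~0.79
```

An optimizer wrapped around calculations like these would sweep collector area and storage size to trade solar fraction against cost.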

Relevance:

60.00%

Publisher:

Abstract:

Infrared thermography is a well-recognized non-destructive testing technique for evaluating concrete bridge elements such as bridge decks and piers. However, some obstacles and limitations must be overcome before this invaluable technique can be added to the bridge inspector's toolbox. Infrared thermography is based on collecting radiant temperature and presenting the results as a thermal infrared image. Infrared thermography tests can be conducted in two modes, passive and active; the source of heat is the main difference between the two. Solar energy and ambient temperature change are the main heat sources in a passive infrared thermography test, while active infrared thermography involves generating a temperature gradient using an external heat source other than the sun. Passive infrared thermography testing was conducted on three concrete bridge decks in Michigan. Ground truth information was gathered by coring several locations on each bridge deck to validate the results obtained from the passive infrared thermography test. Challenges associated with data collection and processing using passive infrared thermography are discussed, and the results provide additional evidence that passive infrared thermography is a promising remote sensing tool for bridge inspections. To extend the capabilities of the technique to the undersides of bridge decks and to bridge girders, an active infrared thermography technique using a surface heating method was developed in the laboratory on five concrete slabs with simulated delaminations. Results from this study demonstrated that active infrared thermography not only eliminates some limitations of passive infrared thermography but also provides information about the depth of the delaminations.
Active infrared thermography was conducted on a segment of an out-of-service prestressed box beam and cores were extracted from several locations on the beam to validate the results. This study confirms the feasibility of the application of active infrared thermography on concrete bridges and of estimating the size and depth of delaminations. From the results gathered in this dissertation, it was established that applying both passive and active thermography can provide transportation agencies with qualitative and quantitative measures for efficient maintenance and repair decision-making.
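The passive-survey analysis can be caricatured in a few lines: delaminations heated by the sun appear as hot spots against the sound deck in the thermal image. The thresholding rule and the 0.5 °C contrast below are assumptions for illustration, not the processing used in these studies.

```python
import numpy as np

# Toy passive-thermography screen: flag pixels warmer than the scene median
# by a fixed contrast. The 0.5 C threshold is an assumed illustrative value.

def flag_delaminations(thermal_image, contrast_c=0.5):
    """Boolean mask of pixels hotter than the median by at least contrast_c."""
    baseline = np.median(thermal_image)
    return thermal_image > baseline + contrast_c

# Synthetic 5x5 radiant-temperature map (deg C) with one warm patch standing
# in for a shallow delamination heated by the sun.
deck = np.full((5, 5), 20.0)
deck[1:3, 1:3] = 21.2
mask = flag_delaminations(deck)
n_flagged = int(mask.sum())  # the 2x2 warm patch
```

Active thermography adds a controlled heat pulse, and the time at which the surface contrast peaks is what carries the depth information mentioned above.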

Relevance:

60.00%

Publisher:

Abstract:

In a statistical inference scenario, a target signal or its parameters are estimated by processing data from informative measurements. Estimation performance can be enhanced if the measurements are chosen according to criteria that direct the sensing resources so that the measurements are more informative about the parameter to be estimated. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data at each measurement step. This approach fits naturally into the Bayesian inference framework, which produces successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. Adaptive sensing of both static and dynamic system models is performed by online selection of the proper measurement matrix over time. In the dynamic system model, the target is assumed to move according to some distribution, and the prior distribution is updated at each time step; information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We attempt to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
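For a linear-Gaussian model, the online measurement-selection loop described above has a compact closed form. The sketch below is illustrative (scalar measurements, two candidate measurement vectors, assumed noise levels), not the formulation used in the thesis.

```python
import numpy as np

# Adaptive sensing sketch: at each step, pick the measurement vector with the
# largest predicted uncertainty a^T Sigma a, observe y = a @ x + noise, then
# apply the conjugate Gaussian (Kalman-style) posterior update.

def posterior_update(mu, Sigma, a, y, noise_var):
    """Gaussian posterior after a scalar observation y = a @ x + N(0, noise_var)."""
    s = a @ Sigma @ a + noise_var
    gain = Sigma @ a / s
    mu_new = mu + gain * (y - a @ mu)
    Sigma_new = Sigma - np.outer(gain, a @ Sigma)
    return mu_new, Sigma_new

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])              # unknown parameter (assumed)
mu, Sigma = np.zeros(2), 10.0 * np.eye(2)   # prior belief
candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
noise_var = 0.01

for _ in range(20):
    a = max(candidates, key=lambda c: c @ Sigma @ c)  # most informative choice
    y = a @ x_true + rng.normal(scale=noise_var ** 0.5)
    mu, Sigma = posterior_update(mu, Sigma, a, y, noise_var)
```

The selection rule alternates between the two coordinates as each measurement shrinks the corresponding posterior variance, which is the essence of directing sensing resources online.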

Relevance:

60.00%

Publisher:

Abstract:

Fuzzy community detection identifies fuzzy communities in a network: groups of vertices such that the membership of a vertex in a community lies in [0,1] and the sum of a vertex's memberships across all communities equals 1. Fuzzy communities are pervasive in social networks, but little work has been done on fuzzy community detection. Recently, a one-step extension of Newman's modularity, the most popular quality function for disjoint community detection, resulted in the Generalized Modularity (GM), which demonstrates good performance in finding well-known fuzzy communities. Thus, GM is chosen as the quality function in our research. We first propose a generalized fuzzy t-norm modularity to investigate the effect of different fuzzy intersection operators on fuzzy community detection, since GM makes the introduction of a fuzzy intersection operation feasible. The experimental results show that the Yager operator, with a proper parameter value, reveals community structure better than the product operator. We then focus on finding optimal fuzzy communities in a network by directly maximizing GM, which we call the Fuzzy Modularity Maximization (FMM) problem. Work on the FMM problem yields the major contribution of this thesis: an efficient and effective GM-based fuzzy community detection method that automatically discovers a fuzzy partition of a network when appropriate (much better than the fuzzy partitions found by existing fuzzy community detection methods) and a crisp partition when appropriate (competitive with partitions produced by the best disjoint community detection methods to date). We address the FMM problem by iteratively solving a sub-problem called One-Step Modularity Maximization (OSMM).
We present two approaches for this iterative procedure: a tree-based global optimizer called Find Best Leaf Node (FBLN) and a heuristic-based local optimizer. OSMM reduces to a simplified quadratic knapsack problem, so each OSMM instance can be solved in linear time. Since OSMM is called recursively within FBLN and the structure of the search tree is non-deterministic, the FMM/FBLN algorithm has a time complexity of at least O(n^2). We therefore also propose several highly efficient and effective heuristic algorithms, the FMM/H algorithms. We compared the proposed FMM/H algorithms with two state-of-the-art community detection methods, the modified MULTICUT Spectral Fuzzy c-Means (MSFCM) and the Genetic Algorithm with a Local Search strategy (GALS), on 10 real-world data sets. The experimental results suggest that the H2 variant of FMM/H performs best: it is very competitive with GALS in producing maximum-modularity partitions, performs much better than MSFCM, and on all 10 data sets is 2-3 orders of magnitude faster than GALS. Furthermore, by adopting a slightly modified version of the H2 algorithm as a mutation operator, we designed a genetic algorithm for fuzzy community detection, GAFCD, which applies elite selection and early termination. Its crossover operator is designed to make GAFCD converge quickly and to enhance its ability to escape local optima. Experimental results on all the data sets show that GAFCD uncovers better community structure than GALS.
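A generalized modularity with the product t-norm can be written in a few lines, which also shows why a crisp partition wins when the structure is truly disjoint. This is an illustrative rendering of fuzzy modularity, not necessarily the exact GM definition used in the thesis.

```python
import numpy as np

# Fuzzy modularity with the product t-norm: Q = (1/2m) * tr(U^T B U), where
# B is Newman's modularity matrix and U is the n x c membership matrix whose
# rows sum to 1. With a crisp 0/1 U this reduces to ordinary modularity.

def fuzzy_modularity(A, U):
    k = A.sum(axis=1)               # vertex degrees
    two_m = A.sum()                 # 2m for an undirected graph
    B = A - np.outer(k, k) / two_m  # modularity matrix
    return float(np.trace(U.T @ B @ U) / two_m)

# Two disjoint triangles: the crisp 2-community partition is clearly right.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
U_crisp = np.repeat(np.eye(2), 3, axis=0)  # vertices 0-2 vs. 3-5
U_fuzzy = np.full((6, 2), 0.5)             # maximally fuzzy memberships
q_crisp = fuzzy_modularity(A, U_crisp)     # 0.5
q_fuzzy = fuzzy_modularity(A, U_fuzzy)     # 0.0
```

Maximizing this objective over row-stochastic U is the FMM problem; on this toy graph the maximizer is the crisp partition, illustrating how a GM-based method can recover crisp structure when appropriate.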

Relevance:

60.00%

Publisher:

Abstract:

Analyzing “nuggety” gold samples commonly produces erratic fire assay results, due to the random inclusion or exclusion of coarse gold in analytical samples. Preconcentrating gold samples might allow the nuggets to be concentrated and fire assayed separately. In this investigation, synthetic gold samples were made using silica and tungsten powder (of density similar to gold), and were preconcentrated using two approaches: an air jig and an air classifier. The current analytical gold sampling method is time- and labor-intensive, and our aim was to design a setup for rapid testing. The preliminary air classifier design showed more promise than the air jig in terms of control over mineral recovery and preconcentration of bulk ore sub-samples. Hence, the air classifier was modified with the goal of producing 10-30 gram samples capturing all of the high-density metallic particles, tungsten in this case. The effects of air velocity and feed rate on the recovery of tungsten from synthetic tungsten-silica mixtures were studied. The air classifier achieved an optimal high-density metal recovery of 97.7% at an air velocity of 0.72 m/s and a feed rate of 160 g/min. The effect of density on classification was investigated by using iron as the dense metal instead of tungsten; recovery dropped from 96.13% to 20.82%. These preliminary investigations suggest that preconcentration of gold samples is feasible using the laboratory-designed air classifier.
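The density effect reported above is consistent with a simple settling-velocity argument: an air stream separates two particle populations only if their terminal velocities straddle the air velocity. The sketch below assumes Stokes drag, a 50-micron particle size, and nominal property values; treat it as an order-of-magnitude check, not the study's analysis.

```python
# Stokes-law terminal velocity of a small sphere in air. Property values are
# assumed nominal figures; real classifier behavior also depends on particle
# size distribution and flow regime.

G = 9.81          # m/s^2, gravitational acceleration
MU_AIR = 1.8e-5   # Pa*s, dynamic viscosity of air
RHO_AIR = 1.2     # kg/m^3, density of air

def stokes_terminal_velocity(particle_density, diameter_m):
    """Terminal settling velocity (m/s) in the Stokes (laminar) regime."""
    return (particle_density - RHO_AIR) * G * diameter_m ** 2 / (18 * MU_AIR)

d = 50e-6  # assumed 50-micron particles
v_silica = stokes_terminal_velocity(2650.0, d)     # ~0.20 m/s
v_iron = stokes_terminal_velocity(7870.0, d)       # ~0.60 m/s
v_tungsten = stokes_terminal_velocity(19300.0, d)  # ~1.46 m/s
```

At the reported 0.72 m/s air velocity, tungsten of this size settles against the stream while silica is carried off; iron's terminal velocity falls below 0.72 m/s, which is consistent with the recovery dropping from 96.13% to 20.82% when iron replaced tungsten.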

Relevance:

60.00%

Publisher:

Abstract:

Determining combustion metrics for a diesel engine has the potential to provide feedback for closed-loop combustion phasing control to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), peak apparent heat release rate amplitude (PAA), and mean absolute pressure error (MAPE). In-cylinder pressure has long been used in the laboratory as the primary means of characterizing combustion rates, and more recently it has been used in series production vehicles for feedback control. However, the intrusive in-cylinder pressure measurement is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers for estimating combustion metrics in a 9L I6 diesel engine. This requires modeling the transfer path between the accelerometer signal and the in-cylinder pressure signal: given the transfer path, the in-cylinder pressure signal, and hence the combustion metrics, can be accurately estimated (recovered) from accelerometer signals. The method for determining the transfer path, and its applicability, are critical to using accelerometers for feedback. The single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness across varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model.
Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Both approaches improved FRF robustness. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation, on a high pressure common rail (HPCR) 1.9L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research, a transfer path was developed between the two signals and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three transfer path modeling methods were applied; the method based on the cepstral smoothing technique gave the most accurate results, with an averaged estimation error of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as inputs.
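One of the combustion metrics listed above, CA50, is simple to compute once a cumulative apparent heat release curve is available. The sketch below uses a textbook Wiebe-function burn profile as a stand-in for measured data; the burn parameters are assumptions, not engine data from the thesis.

```python
import numpy as np

# CA50: the crank angle at which 50% of the cumulative apparent heat release
# has occurred, found by interpolating the cumulative burn curve.

def ca50(crank_deg, cum_heat_release):
    """Crank angle (deg) where cumulative heat release crosses half its total."""
    target = 0.5 * cum_heat_release[-1]
    return float(np.interp(target, cum_heat_release, crank_deg))

# Synthetic burn profile: Wiebe function with assumed parameters
# (start of combustion -5 deg ATDC, 40 deg burn duration, a = 5, m = 2).
theta = np.linspace(-5.0, 35.0, 401)         # crank angle, deg ATDC
x = (theta + 5.0) / 40.0                     # normalized burn progress in [0, 1]
burn_fraction = 1.0 - np.exp(-5.0 * x ** 3)  # monotonically increasing
ca50_deg = ca50(theta, burn_fraction)        # ~15.6 deg ATDC
```

In the accelerometer-based scheme described above, the heat release curve itself would first be estimated through the identified transfer path before a metric like this is extracted.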

Relevance:

60.00%

Publisher:

Abstract:

Bulk electronic waste plastics were recycled and reduced in size into plastic chips before pulverization or cryogenic grinding into powders. Two major types of electronic waste plastics were used in this investigation: acrylonitrile butadiene styrene (ABS) and high impact polystyrene (HIPS). This investigation employed two approaches for incorporating electronic waste plastics into asphalt pavement materials: the first was blending recycled and processed electronic waste powders directly into asphalt mixtures and binders; the second was chemically treating the powders with hydroperoxide before blending them into asphalt mixtures and binders. The chemical treatment of electronic waste (e-waste) powders was intended to strengthen the molecular bonding between e-waste plastics and asphalt binders for improved low- and high-temperature performance. Superpave asphalt binder and mixture testing techniques were used to determine the rheological and mechanical performance of the e-waste modified asphalt binders and mixtures. The investigation included a limited emissions-and-performance assessment, comparing the emissions of e-waste modified asphalt pavement mixtures using SimaPro and their performance using MEPDG software. Carbon dioxide emissions of e-waste modified pavement mixtures were compared with those of conventional asphalt pavement mixtures using SimaPro, and MEPDG analysis was used to determine the rutting potential of the various e-waste modified pavement mixtures relative to the control asphalt mixture. The results showed that treating the electronic waste plastics delayed the onset of tertiary flow for electronic waste mixtures, that electronic waste mixtures showed some improvement in dynamic modulus at low temperatures versus the control mixture, and that tensile strength ratio values for treated e-waste asphalt mixtures were improved versus the control mixture.
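The tensile strength ratio cited in the results is the standard moisture-susceptibility metric: conditioned over unconditioned indirect tensile strength. The strength values below are illustrative numbers, not measurements from this study.

```python
# Tensile strength ratio (TSR): indirect tensile strength of the
# moisture-conditioned specimen set divided by that of the dry set.
# Inputs are hypothetical; a commonly used acceptance threshold is TSR >= 0.80.

def tensile_strength_ratio(conditioned_kpa, unconditioned_kpa):
    return conditioned_kpa / unconditioned_kpa

tsr = tensile_strength_ratio(720.0, 850.0)  # ~0.85, passes the 0.80 threshold
```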