9 results for Networked Digital Environment
in Digital Commons - Michigan Tech
Abstract:
This dissertation explores the viability of invitational rhetoric as a mode of advocacy for sustainable energy use in the residential built environment. The theoretical foundations for this study join ecofeminist concepts and commitments with the conditions and resources of invitational rhetoric, developing in particular the rhetorical potency of the concepts of re-sourcement and enfoldment. The methodological approach is autoethnography using narrative reflection and journaling, both adapted to and developed within the autoethnographic project. Through narrative reflection, the author explores her lived experiences in advocating for energy-responsible residential construction in the Keweenaw Peninsula of Michigan. The analysis reveals the opportunities for cooperative, collaborative advocacy and the struggle against traditional conventions of persuasive advocacy, particularly the centrality of the rhetor. The author also conducted two field trips to India, primarily the state of Kerala. Drawing on autoethnographic journaling, the analysis highlights the importance of sensory relations in lived advocacy and the resonance of everyday Indian culture with invitational principles. Based on field research, the dissertation proposes autoethnography as a critical development in encouraging invitational rhetoric as an alternative mode of effecting change. The invitational force of autoethnography is evidenced in portraying the material advocacy of the built environment itself, specifically the sensual experience of material arrangements and ambience, as well as revealing the corporeality of advocacy, that is, the body as the site of invitational engagement, emotional encounter, and sensory experience. This study concludes that the vulnerability of self in autoethnographic work and the vulnerability of rhetoric as invitational constitute the basis for transformation.
The dissertation confirms the potential of an ecofeminist invitational advocacy, conveyed autoethnographically, for transforming perceptions and use of energy in a smaller-scale residential environment appropriate for culture and climate, and ultimately part of the challenge of sustaining life on this planet.
Abstract:
Since product take-back is mandated in Europe, with effects for producers worldwide including the U.S., designing efficient forward and reverse supply chain networks is becoming essential for business viability. Centralizing production facilities may reduce costs but perhaps not environmental impacts. Decentralizing a supply chain may reduce transportation environmental impacts but increase capital costs. Facility location strategies of centralization or decentralization are tested for companies with supply chains that both take back and manufacture products. Decentralized and centralized production systems have different effects on the environment, industry, and the economy. Decentralized production systems cluster suppliers within the geographical market region that the system serves. Centralized production systems have many widely dispersed suppliers that meet all market demand. The point of this research is to help company decision-makers better understand the cost and environmental impacts of choosing a decentralized or centralized supply chain organizational strategy. This research explores what degree of centralization for a supply chain makes the most financial and environmental sense for siting facilities, and which factories are in the best location to handle the financial and environmental impacts of particular processing steps needed for product manufacture. Two examples of facility location for supply chains with product take-back were considered: a theoretical case involving shoe resoling, and a real-world case study of the location of operations for a company that reclaims multiple products for use as material inputs. For the theoretical example a centralized facility location strategy was optimal, whereas for the case study a decentralized strategy was best.
In conclusion, it is not possible to say that either a centralized or a decentralized facility location strategy is in general best for a company that takes back products. Each company's specific concerns, needs, and supply chain details will determine which degree of centralization creates the optimal strategy for siting its facilities.
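The trade-off behind this conclusion can be sketched with a toy cost model (all costs, distances, and demand figures below are hypothetical, not drawn from the case studies): centralization pays fewer fixed facility costs but longer forward and reverse hauls, while decentralization pays more fixed costs over shorter hauls.

```python
# Toy comparison of centralized vs. decentralized facility location
# for a take-back supply chain (all parameter values are hypothetical).

def total_cost(n_facilities, fixed_cost, demand, avg_distance_km, cost_per_km_unit):
    """Fixed facility costs plus forward-and-reverse transport cost."""
    transport = 2 * demand * avg_distance_km * cost_per_km_unit  # out and take-back
    return n_facilities * fixed_cost + transport

demand = 100_000  # units per year across the whole market

# One plant serving everyone over long hauls vs. four regional plants.
centralized = total_cost(1, 500_000, demand, avg_distance_km=400, cost_per_km_unit=0.02)
decentralized = total_cost(4, 500_000, demand, avg_distance_km=80, cost_per_km_unit=0.02)

print(centralized, decentralized)  # 2_100_000.0 vs. 2_320_000.0
```

With these particular numbers the centralized network is cheaper; doubling demand or the haul distance tips the balance toward decentralization, echoing the abstract's conclusion that neither strategy is universally best.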
Abstract:
The Great Lakes watershed is home to over 40 million people, and the health of the Great Lakes ecosystem is vital to the overall economic, societal, and environmental health of the U.S. and Canada. However, environmental issues related to the lakes are sometimes overlooked. Policymakers and the public face the challenge of balancing economic benefits with the need to conserve and/or replenish regional natural resources to ensure long-term prosperity. From the literature review, nine critical stressors of ecological services were delineated: pollution and contamination, agricultural erosion, non-native species, degraded recreational resources, loss of wetlands habitat, climate change, risk of clean water shortage, vanishing sand dunes, and population overcrowding. This list was validated through a series of stakeholder discussions and focus groups conducted in Grand Rapids, which examined the awareness of, concern with, and willingness to expend resources on these stressors. Stressors that respondents have direct contact with tended to be rated most important. The focus group results show that concern about pollution and contamination is much higher than for any of the other stressors, while low concern about climate change motivates recommendations for outreach programs.
Abstract:
In 1906, two American industrialists, John Munroe Longyear and Frederick Ayer, formed the Arctic Coal Company to make the first large-scale attempt at mining in the high-Arctic location of Spitsbergen, north of the Norwegian mainland. In doing so, they encountered numerous obstacles and built an organization that attempted to overcome them. The Americans sold out in 1916 but others followed, eventually culminating in the transformation of a largely underdeveloped landscape into a mining region. This work uses John Law's network approach within the Actor Network Theory (ANT) framework to explain how the Arctic Coal Company built a mining network in this environmentally difficult region and why it made the choices it did. It does so by identifying and analyzing the problems the company encountered and the strategies it used to overcome them, focusing on three major components of the operations: the company's four land claims, its technical system, and its main settlement, Longyear City. Extensive comparison of aspects of Longyear City and the company's choices of technology with other American examples places the analysis of the company in a wider context and helps isolate unique aspects of mining in the high Arctic. American examples dominate the comparative sections because Americans dominated the ownership and upper management of the company.
Abstract:
Understanding clouds and their role in climate depends in part on our ability to understand how individual cloud particles respond to environmental conditions. Keeping this objective in mind, a quadrupole trap with thermodynamic control has been designed and constructed in order to create an environment conducive to studying clouds in the laboratory. The quadrupole trap allows a single cloud particle to be suspended for long times. The temperature and water vapor saturation ratio near the trapped particle are controlled by the flow of saturated air through a tube with a discontinuous wall temperature. The design has the unique aspect that the quadrupole electrodes are submerged in heat transfer fluid, completely isolated from the cylindrical levitation volume. This fluid is used in the thermodynamic system to cool the chamber to realistic cloud temperatures, and a heated section of the tube provides the temperature discontinuity. Thus far, charged water droplets ranging from about 30 to 70 microns in diameter have been levitated. In addition, the thermodynamic system has been shown to create the thermal conditions necessary to produce supersaturation in subsequent experiments. These advances will help lead to the next generation of ice nucleation experiments, moving from hemispherical droplets on a substrate to a spherical droplet that is not in contact with any surface.
Abstract:
This study examines passenger air bag (PAB) performance in a fixed vehicle environment using Partial Low Risk Deployment (PLRD) as a strategy. This development follows test methods against actual baseline vehicle data and Federal Motor Vehicle Safety Standard 208 (FMVSS 208). FMVSS 208 states that PAB compliance in vehicle crash testing can be met using one of three deployment methods. The primary method suppresses PAB deployment, with the use of a seat weight sensor or occupant classification sensor (OCS), for three-year-old and six-year-old occupants, including in the presence of a child seat. A second method, PLRD, allows deployment for occupants of all sizes, suppressing only for the presence of a child seat. A third method, Low Risk Deployment (LRD), allows PAB deployment in all conditions and for all statures, including any and all child seats. This study outlines a PLRD development solution for achieving FMVSS 208 performance. The results of this study should provide an option for system implementation, including opportunities for system efficiency and other considerations. The objective is to achieve performance levels similar to or incrementally better than the baseline vehicle's New Car Assessment Program (NCAP) star rating, and to define systemic flexibility whereby restraint features can be added or removed while keeping occupant performance consistent with the baseline. A certified vehicle's air bag system will typically remain in production until the vehicle platform is redesigned. The strategy to test the PLRD hypothesis is to first match the baseline out-of-position (OOP) occupant performance for the three- and six-year-old requirements; second, improve the 35 mph belted 5th-percentile-female NCAP star rating over the baseline vehicle; and third, establish an equivalent FMVSS 208 certification for the 25 mph unbelted 50th-percentile male.
The FMVSS 208 high-speed requirement defines the federal minimum crash performance required for frontal vehicle crash-test compliance. The intent of the NCAP 5-star rating is to provide the consumer with information about crash protection beyond what is required by federal law. In this study, vehicles from two segments were tested and compared against their baseline vehicles' performance: Case Study 1 (CS1) used a crossover vehicle platform and Case Study 2 (CS2) used a small vehicle segment platform as their baselines. In each case study, the restraint systems came from a different restraint supplier, and each case contained that supplier's approach to PLRD. CS1 incorporated a downsized twin-shaped bag, a carryover inflator, standard vents, and a strategically positioned bag diffuser to help disperse the flow of gas and improve OOP performance. The twin-shaped bag, with two segregated sections (lobes), enabled high-speed baseline performance correlation on the HYGE sled. CS2 used an asymmetric (square-shaped) PAB with standard-size vents, including a passive vent, to obtain OOP performance similar to the baseline. The asymmetric bag shape also helped enable high-speed baseline performance improvements in HYGE sled testing. The anticipated CS1 baseline vehicle-pulse-index (VPI) target was in the range of 65-67; however, actual dynamic vehicle (barrier) testing produced the highest crash pulse of the previously tested vehicles, with a VPI of 71. The result of the 35 mph NCAP barrier test was a solid 4-star (4.7-star) rating. In CS2, the HYGE sled development VPI range from the baseline was 61-62. The actual NCAP test produced a chest deflection result of 26 mm versus the anticipated baseline target of 12 mm. The initial assessment attributed this to the vehicle's significant VPI increase to 67, but a subsequent root cause investigation confirmed a data integrity issue due to the instrumentation.
In an effort to establish a true vehicle test data point, a second NCAP test was performed but faced similar instrumentation issues. This time the chest deflection hit the target at 12.1 mm; however, a femur load spike, similar to the baseline's, now skewed the results. Given the noted improvement in chest deflection, the NCAP performance was assessed as directionally capable of 5 stars: the actual rating was 3 stars due to instrumentation, but extrapolating the data raised the rating to 5 stars. In both cases, no structural changes were made to the surrogate vehicle, and the results in each case matched their respective baseline vehicle platforms. These results showed that PLRD is viable for further development and production implementation.
Abstract:
Direction-of-arrival (DOA) estimation is susceptible to errors introduced by the presence of real ground and resonant-size scatterers in the vicinity of the antenna array. To compensate for these errors, pre-calibration and auto-calibration techniques are presented. The effects of real-ground constituent parameters on the mutual coupling (MC) of wire-type antenna arrays for DOA estimation are investigated. This is accomplished by pre-calibration of the antenna array over the real ground using the finite element method (FEM). The mutual impedance matrix is pre-estimated and used to remove the perturbations in the received terminal voltage, and the unperturbed terminal voltage is incorporated in the MUSIC algorithm to estimate DOAs. First, the MC of quarter-wave monopole antenna arrays is investigated. This work augments an existing MC compensation technique for ground-based antennas and finds a reduction in MC for antennas over finite ground as compared to perfect ground: a factor of 4 decrease in both the real and imaginary parts of the MC is observed when considering a poor ground versus a perfectly conducting one for quarter-wave monopoles in the receiving mode. A simulated result showing the compensation of errors in DOA estimation with an actual realization of the environment is also presented. Second, the effects of placement near real earth on the received MC of λ/2 dipole arrays are investigated. As a rule of thumb, estimation of mutual coupling can be divided into two regions of antenna height, that is, very near ground 0
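The compensation idea can be illustrated with a simple numerical sketch (not the dissertation's FEM-derived impedance matrix): a hypothetical banded coupling matrix C perturbs the terminal voltages of an ideal uniform linear array, the perturbation is removed by solving against C, and MUSIC is run on the compensated voltages. The array geometry, coupling values, and source angle below are all assumed for illustration.

```python
import numpy as np

# Illustrative MUSIC DOA estimation with mutual-coupling compensation.
# All values (8-element half-wavelength array, coupling coefficients,
# 20-degree source) are hypothetical, not from the dissertation.
rng = np.random.default_rng(1)
M, d, true_doa = 8, 0.5, 20.0        # elements, spacing (wavelengths), degrees

def steering(theta_deg):
    phase = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase * np.arange(M))

# Assumed symmetric nearest-neighbor mutual-coupling matrix
C = np.eye(M, dtype=complex)
for i in range(M - 1):
    C[i, i + 1] = C[i + 1, i] = 0.3 + 0.1j

T = 200                                           # snapshots
s = rng.normal(size=T) + 1j * rng.normal(size=T)  # source signal
noise = 0.01 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
V = C @ np.outer(steering(true_doa), s) + noise   # perturbed terminal voltages

V_comp = np.linalg.solve(C, V)                    # remove coupling perturbation
R = V_comp @ V_comp.conj().T / T                  # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues ascending
En = eigvecs[:, :-1]                              # noise subspace (one source)

grid = np.arange(-90.0, 90.0, 0.25)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
doa_hat = grid[int(np.argmax(spectrum))]
print(doa_hat)                                    # peak near 20.0 degrees
```

Skipping the `np.linalg.solve(C, V)` step leaves the coupling perturbation in the covariance matrix and degrades the MUSIC spectrum, which is the error the pre-calibration is designed to remove.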
Abstract:
Cloud edge mixing plays an important role in the life cycle and development of clouds. Entrainment of subsaturated air affects the cloud at the microscale, altering the number density and size distribution of its droplets. The resulting effect is determined by two timescales: the time required for the mixing event to complete, and the time required for the droplets to adjust to their new environment. If mixing is rapid, evaporation of droplets is uniform and said to be homogeneous in nature. In contrast, slow mixing (compared to the adjustment timescale) results in the droplets adjusting to the transient state of the mixture, producing an inhomogeneous result. Studying this process in real clouds involves the use of airborne optical instruments capable of measuring clouds at the 'single particle' level. Single particle resolution allows for direct measurement of the droplet size distribution. This is in contrast to other 'bulk' methods (e.g., hot-wire probes, lidar, radar), which measure a higher-order moment of the distribution and require assumptions about the distribution shape to compute a size distribution. The sampling strategy of current optical instruments requires them to integrate over a path tens to hundreds of meters long to form a single size distribution. This is much larger than typical mixing scales (which can extend down to the order of centimeters), resulting in difficulties resolving mixing signatures. The Holodec is an optical particle instrument that uses digital holography to record discrete, local volumes of droplets. This method allows statistically significant size distributions to be calculated for centimeter-scale volumes, allowing for full resolution at the scales important to the mixing process. The hologram also records the three-dimensional position of all particles within the volume, allowing the spatial structure of the cloud volume to be studied. Both of these features represent a new and unique view into the mixing problem.
In this dissertation, holographic data recorded during two different field projects is analyzed to study the mixing structure of cumulus clouds. Using Holodec data, it is shown that mixing at cloud top can produce regions of clear but humid air that can subside down along the edge of the cloud as a narrow shell, or advect down shear as a 'humid halo'. This air is then entrained into the cloud at lower levels, producing mixing that appears to be very inhomogeneous. This inhomogeneous-like mixing is shown to be well correlated with regions containing elevated concentrations of large droplets. This is used to argue in favor of the hypothesis that dilution can lead to enhanced droplet growth rates. I also make observations on the microscale spatial structure of observed cloud volumes recorded by the Holodec.
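The two timescales that distinguish homogeneous from inhomogeneous mixing are commonly compared through a Damköhler number, Da = τ_mix/τ_phase, with Da ≫ 1 indicating inhomogeneous and Da ≪ 1 homogeneous mixing. A minimal estimate with assumed, illustrative cloud values (not measurements from the dissertation's field projects):

```python
import math

# Illustrative mixing Damkohler number estimate; every input is an
# assumed textbook-scale value, not data from the Holodec campaigns.
L = 10.0     # mixing length scale, m
eps = 1e-3   # turbulent kinetic energy dissipation rate, m^2 s^-3
D = 2.5e-5   # water vapor diffusivity in air, m^2 s^-1
N = 100e6    # droplet number density, m^-3
r = 10e-6    # mean droplet radius, m

tau_mix = (L**2 / eps) ** (1 / 3)            # turbulent mixing timescale
tau_phase = 1 / (4 * math.pi * D * N * r)    # droplet phase-relaxation timescale

Da = tau_mix / tau_phase
print(tau_mix, tau_phase, Da)                # Da > 1: inhomogeneous regime
```

With these numbers the mixing timescale (tens of seconds) exceeds the droplet adjustment timescale (a few seconds), so droplets track the transient mixture state and the event looks inhomogeneous, consistent with the cloud-edge observations described above.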
Abstract:
Mobile sensor networks have unique advantages compared with wireless sensor networks: mobility enables mobile sensors to flexibly reconfigure themselves to meet sensing requirements. In this dissertation, an adaptive sampling method for mobile sensor networks is presented. In consideration of sensing resource constraints, computing abilities, and onboard energy limitations, the adaptive sampling method follows a down-sampling scheme, which reduces the total number of measurements and lowers sampling cost. Compressive sensing is a recently developed down-sampling method that uses a small number of randomly distributed measurements for signal reconstruction. However, as the Shannon sampling theorem makes clear, original signals cannot be reconstructed from such condensed measurements by conventional means: the measurements have to be processed in a sparse domain, and convex optimization methods applied to reconstruct the original signals. The restricted isometry property guarantees that signals can be recovered with little information loss. While compressive sensing can effectively lower sampling cost, signal reconstruction remains a great research challenge. Compressive sensing collects random measurements, whose information content cannot be determined a priori; if each measurement is instead optimized to be the most informative one, reconstruction can perform much better. Based on these considerations, this dissertation focuses on an adaptive sampling approach that finds the most informative measurements in unknown environments and reconstructs the original signals. With mobile sensors, measurements are collected sequentially, giving the chance to optimize each of them individually. When a mobile sensor is about to collect a new measurement from the surrounding environment, existing information is shared among the networked sensors so that each sensor has a global view of the entire environment.
Shared information is analyzed in the Haar wavelet domain, in which most natural signals appear sparse, to infer a model of the environment. The most informative measurements can then be determined by optimizing the model parameters. As a result, all the measurements collected by the mobile sensor network are the most informative measurements given existing information, and a perfect reconstruction would be expected. To present the adaptive sampling method, a series of research issues is addressed, including measurement evaluation and collection, mobile network establishment, data fusion, sensor motion, and signal reconstruction. A two-dimensional scalar field is reconstructed using the proposed method; both single mobile sensors and mobile sensor networks are deployed in the environment, and the reconstruction performance of the two is compared. In addition, a particular mobile sensor, a quadrotor UAV, is developed so that the adaptive sampling method can be used in three-dimensional scenarios.
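The reconstruction step that compressive sensing relies on can be sketched minimally as follows. For brevity this sketch uses a canonical sparse basis rather than the Haar wavelet domain, and greedy orthogonal matching pursuit in place of convex optimization; the dimensions and sparsity level are arbitrary illustration values.

```python
import numpy as np

# Hypothetical sketch: recover a k-sparse signal from m < n random
# measurements via orthogonal matching pursuit (OMP).
rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                       # signal length, measurements, sparsity
x = np.zeros(n)
support_true = rng.choice(n, k, replace=False)
x[support_true] = rng.uniform(2, 4, k) * rng.choice([-1.0, 1.0], k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                  # condensed measurements

def omp(A, y, k):
    """Greedy sparse recovery: pick the best-correlated atom k times."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # small recovery error
```

The adaptive sampling idea in the dissertation replaces the random rows of A with sequentially chosen, maximally informative measurements, but the recovery machinery is of this same sparse-reconstruction kind.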