995 results for Theses and Dissertation Repositories


Abstract:

The time course of lake recovery after a reduction in external nutrient loading is often controlled by conditions in the sediment. Remediation of eutrophication is hindered by the presence of legacy organic carbon deposits that exert a demand on the terminal electron acceptors of the lake and contribute to problems such as internal nutrient recycling, absence of sediment macrofauna, and flux of toxic metal species into the water column. Quantifying the timing of a lake's response requires determining the magnitude and lability, i.e., the susceptibility to biodegradation, of the organic carbon within the legacy deposit. This characterization is problematic for organic carbon in sediments because of the presence of different carbon fractions, which vary from highly labile to refractory. The lability of carbon under varied conditions was tested with a bioassay approach. It was found that the majority of the organic material in the sediments is conditionally labile, meaning that its mineralization potential depends on prevailing conditions. High labilities were noted under oxygenated conditions and a favorable temperature of 30 °C. Lability decreased when oxygen was removed, and was further reduced when the temperature was dropped to the hypolimnetic average of 8 °C. These results indicate that reversible preservation mechanisms exist in the sediment and are able to protect otherwise labile material from being mineralized under in situ conditions. The concept of an active sediment layer, a region of the sediments in which diagenetic reactions occur (with none occurring below it), was examined through three lines of evidence. First, porewater profiles of oxygen, nitrate, sulfate/total sulfide, ETSA (electron transport system activity, i.e., the activity of oxygen, nitrate, iron/manganese, and sulfate), and methane were considered. Examination of the porewater profiles indicated that the edge of diagenesis occurs at around 15-20 cm. Second, historical and contemporary TOC profiles were compared to find the point at which the profiles were coincident, indicating the depth below which no change had occurred over the 13-year interval between core collections. This analysis suggested that no diagenesis has occurred in Onondaga Lake sediment below a depth of 15 cm. Finally, the time to 99% mineralization, the t99, was estimated using a literature value of the kinetic rate constant for diagenesis. A t99 of 34 years, or approximately 30 cm of sediment depth, resulted for the slowly decaying carbon fraction. Based on these three lines of evidence, an active sediment layer of 15-20 cm is proposed for Onondaga Lake, corresponding to a time since deposition of 15-20 years. While a large legacy deposit of conditionally labile organic material remains in the sediments of Onondaga Lake, preservation mechanisms, which act to shield labile organic carbon from degradation, protect this material from being mineralized and exerting a demand on the terminal electron acceptors of the lake. This has major implications for management of the lake, as it defines the time course of lake recovery following a reduction in nutrient loading.
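
For reference, a t99 value of this kind follows directly from a first-order decay assumption for the slowly decaying carbon fraction (a standard diagenesis formulation assumed here for illustration; only the resulting value is quoted in the abstract):

    G(t) = G_0 \, e^{-kt}, \qquad t_{99} = \frac{\ln 100}{k} \approx \frac{4.605}{k}

so the reported t99 of 34 years corresponds to a rate constant of roughly k ≈ 0.135 yr⁻¹.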

Abstract:

The delivery of oxygen and nutrients and the removal of waste are essential for cellular survival. Culture systems for 3D bone tissue engineering have addressed this issue by utilizing perfusion flow bioreactors that stimulate osteogenic activity through the delivery of oxygen and nutrients by low-shear fluid flow. It is also well established that bone responds to mechanical stimulation but may desensitize under continuous loading. While perfusion flow and mechanical stimulation are used to increase cellular survival in vitro, 3D tissue-engineered constructs face additional limitations upon in vivo implantation. Because vascular infiltration by the host requires a significant amount of time, implants are subject to an increased risk of necrosis. One solution is to introduce tissue-engineered bone that has been pre-vascularized through the co-culture of osteoblasts and endothelial cells on 3D constructs. It is unclear from previous studies: 1) how 3D bone tissue constructs will respond to partitioned mechanical stimulation, 2) how gene expression compares in 2D and in 3D, 3) how co-cultures will affect osteoblast activity, and 4) how perfusion flow will affect co-cultures of osteoblasts and endothelial cells. We have used an integrated approach to address these questions by utilizing mechanical stimulation, perfusion flow, and a co-culture technique to increase the success of 3D bone tissue engineering. We measured gene expression of several osteogenic and angiogenic genes in both 2D and 3D (static culture and mechanical stimulation), as well as in 3D cultures subjected to perfusion flow, mechanical stimulation, and partitioned mechanical stimulation. Finally, we co-cultured osteoblasts and endothelial cells on 3D scaffolds and subjected them to long-term incubation in either static culture or under perfusion flow to determine changes in gene expression as well as histological measures of osteogenic and angiogenic activity. We discovered that 2D and 3D osteoblast cultures react differently to shear stress, and that partitioning mechanical stimulation does not affect gene expression in our model. Furthermore, our results suggest that perfusion flow may rescue 3D tissue-engineered constructs from hypoxia-like conditions by reducing hypoxia-specific gene expression and increasing histological indices of both osteogenic and angiogenic activity. Future research to elucidate the mechanisms behind these results may contribute to a more mature bone-like structure that integrates more quickly into host tissue, increasing the potential of bone tissue engineering.

Abstract:

Volcanoes are the surficial expressions of complex pathways that vent magma and gases generated deep in the Earth. Geophysical data record at least a partial history of magma and gas movement in the conduit and venting to the atmosphere. This work focuses on developing a more comprehensive understanding of explosive degassing at Fuego volcano, Guatemala, through observations and analysis of geophysical data collected in 2005-2009. A pattern of eruptive activity was observed during 2005-2007 and quantified with seismic and infrasound records, satellite thermal and gas measurements, and lava flow lengths. Eruptive styles are related to variable magma flux and accumulation of gas. Explosive degassing was recorded on broadband seismic and infrasound sensors in 2008 and 2009. Explosion energy partitioning between the ground and the atmosphere shows an increase in acoustic energy from 2008 to 2009, indicating a shift toward increased gas pressure in the conduit. Very-long-period (VLP) seismic signals are associated with the strongest explosions recorded in 2009, and waveform modeling in the 10-30 s band produces a best-fit source location 300 m west of and 300 m below the summit crater. The calculated moment tensor indicates a volumetric source, which is modeled as a dike feeding a SW-dipping (35°) sill. The sill is the dominant component, and its projection to the surface nearly intersects the summit crater. The deformation history of the sill is interpreted as: 1) an initial inflation due to pressurization, followed by 2) a rapid deflation as overpressure is explosively released, and finally 3) a reinflation as fresh magma flows into the sill and degasses. Tilt signals are derived from the horizontal components of the seismometer and show repetitive inflation-deflation cycles with a 20-minute period coincident with strong explosions. These cycles represent the pressurization of the shallow conduit and the explosive venting of overpressure that develops beneath a partially crystallized plug of magma. The energy released during the strong explosions has allowed imaging of Fuego's shallow conduit, which appears to have migrated west of the summit crater. In summary, Fuego is becoming more gas charged and its summit-centered vent is shifting to the west, with serious hazard consequences likely.
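
A brief note on the tilt derivation (the relation below is the standard long-period tilt coupling of a horizontal seismometer, stated here as background rather than taken from the dissertation): at periods much longer than those of translational ground motion, ground tilt θ appears on the horizontal components as an apparent acceleration

    a_h \approx g\,\theta, \qquad \theta(t) \approx a_h(t)/g

which is why repetitive inflation-deflation cycles can be tracked from the very-long-period horizontal signal of a broadband seismometer.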

Abstract:

For countless communities around the world, acquiring access to safe drinking water is a daily challenge which many organizations endeavor to meet. The villages in the interior of Suriname have been the focus of many improved drinking water projects, as most communities are without year-round access. Unfortunately, as many as 75% of the systems in Suriname fail within several years of implementation. These communities, scattered along the rivers and throughout the jungle, lack many of the resources required to sustain a centralized water treatment system. However, the centralized system in the village of Bendekonde on the Upper Suriname River has been operational for over 10 years and is often touted by other communities. The Bendekonde system is praised even though its technology does not differ significantly from that of other, failed systems. Many of the water systems in the interior fail because the community lacks the resources needed to maintain them, and as a system becomes more complex, so does its demand for additional resources. Alternatives to centralized systems include technologies such as point-of-use water filters, which can greatly reduce the need for outside resources. In particular, ceramic point-of-use water filters offer a technology that can be reasonably managed in a low-resource setting such as the interior of Suriname. This report investigates the appropriateness and effectiveness of ceramic filters constructed with local Suriname clay and compares their treatment effectiveness to that of the Bendekonde system. Results of this study showed that functional filters could be produced from Surinamese clay and that they were more effective, in a controlled laboratory setting, than the field performance of the Bendekonde system for removing total coliform. However, the Bendekonde system was more successful at removing E. coli. In a life-cycle assessment, ceramic water filters manufactured in Suriname and used in homes for a lifespan of 2 years were shown to have a lower cumulative energy demand, as well as a lower global warming potential, than a centralized system similar to that used in Bendekonde.

Abstract:

Due to their high thermal efficiency, diesel engines have excellent fuel economy and have been widely used as a power source for many vehicles. Diesel engines emit less greenhouse gas (carbon dioxide) than gasoline engines. However, diesel engines emit large amounts of particulate matter (PM), which can imperil human health. The best way to reduce the particulate matter is the Diesel Particulate Filter (DPF) system, which consists of a wall-flow monolith that traps particulates; the DPF can be periodically regenerated to remove the collected particulates. Estimating the PM mass accumulated in the DPF and the total pressure drop across the filter is very important for determining when to carry out active regeneration of the DPF. In this project, a filtration model and a pressure drop model were developed to estimate the PM mass and the total pressure drop; these two models were then linked with a previously developed regeneration model to predict when to regenerate the filter. The results of this project were:
1. Reproduce a filtration model and simulate the filtration process. By studying deep-bed filtration and cake filtration, the stages and the quantity of mass accumulated in the DPF can be estimated. It was found that the filtration efficiency increases faster during deep-bed filtration than during cake filtration. A "unit collector" theory was used in the filtration model, which explains the mechanism of filtration well.
2. Perform a parametric study on the pressure drop model for changes in engine exhaust flow rate, deposit layer thickness, and inlet temperature. It was found that five primary variables impact the pressure drop in the DPF: the temperature gradient along the channel, deposit layer thickness, deposit layer permeability, wall thickness, and wall permeability.
3. Link the filtration model and the pressure drop model with the regeneration model to determine when to carry out regeneration of the DPF. It was found that regeneration should be initiated when the cake layer is at a certain thickness, since a cake layer with either too much or too little particulate matter will need more thermal energy to reach a higher regeneration efficiency.
4. Formulate diesel particulate trap regeneration strategies for real-world driving conditions to find the most desirable conditions for DPF regeneration. It was found that regeneration should be initiated when the vehicle's speed is high and the vehicle is not expected to stop during the regeneration. Moreover, the regeneration duration is about 120 seconds and the inlet temperature for the regeneration is 710 K.
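
As a rough illustration of the kind of calculation a DPF pressure drop model performs, the sketch below estimates only the Darcy contributions of the soot cake and the porous wall (it omits channel friction and inertial losses, and every parameter value is an illustrative placeholder rather than a calibrated value from this project):

# Simplified DPF pressure-drop estimate: Darcy flow through the soot cake and
# the porous wall. All parameter values are illustrative placeholders.

def dpf_pressure_drop(Q, A_filter, w_cake, k_cake, w_wall, k_wall, T):
    """Return the flow-through pressure drop (Pa) for exhaust volumetric flow Q (m^3/s)."""
    mu = 1.8e-5 * (T / 300.0) ** 0.7       # crude viscosity-temperature scaling (Pa*s)
    u_wall = Q / A_filter                  # superficial wall velocity (m/s)
    dp_cake = mu * u_wall * w_cake / k_cake
    dp_wall = mu * u_wall * w_wall / k_wall
    return dp_cake + dp_wall

if __name__ == "__main__":
    dp = dpf_pressure_drop(
        Q=0.05,            # exhaust flow, m^3/s
        A_filter=2.0,      # total filtration area, m^2
        w_cake=50e-6,      # soot cake thickness, m
        k_cake=5e-15,      # soot cake permeability, m^2
        w_wall=400e-6,     # substrate wall thickness, m
        k_wall=2.5e-13,    # wall permeability, m^2
        T=600.0,           # exhaust temperature, K
    )
    print(f"Estimated pressure drop: {dp / 1000:.1f} kPa")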

Abstract:

Onondaga Lake has received the municipal effluent and industrial waste of the city of Syracuse for more than a century. Historically, 75 metric tons of mercury were discharged to the lake by chlor-alkali facilities. These legacy deposits of mercury now reside primarily in the lake sediments. Under anoxic conditions, methylmercury is produced in the sediments and can be released to the overlying water. Natural sedimentation processes are continuously burying the mercury deeper into the sediments, and eventually it will be buried to a depth at which it no longer affects the overlying water. In the interim, electron acceptor amendment systems can be installed to retard these chemical releases while the lake naturally recovers. Electron acceptor amendment systems are designed to meet the sediment oxygen demand and maintain manageable hypolimnion oxygen concentrations. Historically, these systems have been underdesigned, resulting in failure. This stems from a mischaracterization of the sediment oxygen demand. Turbulence at the sediment-water interface has been shown to affect sediment oxygen demand, and the turbulence introduced by the amendment system itself can therefore increase the demand, resulting in system failure if turbulence is not factored into the design. Sediment cores were collected and operated to steady state under several well-characterized turbulence conditions. The relationship between sediment oxygen/nitrate demand and turbulence was then quantified and plotted. A maximum demand was exhibited at or above a fluid velocity of 2.0 mm·s⁻¹. Below this velocity, demand decreased rapidly as the velocity approached zero. Similar relationships were displayed by both the oxygen and the nitrate cores.
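
One compact way to express the reported shape of this relationship (the saturating functional form below is an assumption chosen purely for illustration; the study itself reports only the measured curve) is

    SOD(v) = SOD_{max} \cdot \frac{v}{K_v + v}

with SOD approaching its maximum at or above v ≈ 2.0 mm·s⁻¹ and falling toward zero as v → 0.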

Abstract:

Molecules are the smallest possible elements for electronic devices, with active elements for such devices typically a few angstroms in footprint area. Owing to the possibility of producing ultrahigh-density devices, tremendous effort has been invested in producing electronic junctions using various types of molecules. The major issues for molecular electronics include (1) developing an effective scheme to connect molecules with present micro- and nano-technology, (2) increasing the lifetime and stability of the devices, and (3) increasing their performance in comparison to state-of-the-art devices. In this work, we attempt to use carbon nanotubes (CNTs) as the interconnecting nanoelectrodes between molecules and microelectrodes. The ultimate goal is to use two individual CNTs to sandwich molecules in a cross-bar configuration while having these CNTs connected with microelectrodes such that the junction displays the electronic character of the chosen molecule. We have successfully developed an effective scheme to connect molecules with CNTs, which is scalable to arrays of molecular electronic devices. To realize this far-reaching goal, the following technical topics were investigated.
1. Synthesis of multi-walled carbon nanotubes (MWCNTs) by thermal chemical vapor deposition (T-CVD) and plasma-enhanced chemical vapor deposition (PE-CVD) techniques (Chapter 3). We evaluated the potential use of tubular and bamboo-like MWCNTs grown by T-CVD and PE-CVD in terms of their structural properties.
2. Horizontal dispersion of MWCNTs with and without surfactants, and the integration of MWCNTs with microelectrodes using deposition by dielectrophoresis (DEP) (Chapter 4). We systematically studied the use of surfactant molecules to disperse and horizontally align MWCNTs on substrates. In addition, DEP is shown to produce impurity-free placement of MWCNTs, forming connections between microelectrodes. We demonstrate that the deposition density is tunable by both AC field strength and AC field frequency.
3. Etching of MWCNTs for impurity-free nanoelectrodes (Chapter 5). We show that the residual Ni catalyst on MWCNTs can be removed by acid etching; the tip removal and collapsing of tubes into pyramids enhance the stability of field emission from the tube arrays. The acid-etching process can be used to functionalize the MWCNTs, which was used to make our initial CNT-nanoelectrode glucose sensors. Finally, lessons learned in attempting spectroscopic analysis of the functionalized MWCNTs were vital for designing our final devices.
4. Molecular junction design and electrochemical synthesis of biphenyl molecules on carbon microelectrodes for all-carbon molecular devices (Chapter 6). Drawing on the experience gained in the work above, our final device design is described. We demonstrate the capability of preparing patterned glassy carbon films to serve as the bottom electrode in the new geometry. However, the molecular switching behavior of biphenyl was not observed by scanning tunneling microscopy (STM), mercury drop, or fabricated glassy carbon/biphenyl/MWCNT junctions. Either the density of these molecules is not optimal for effective integration of devices using MWCNTs as the nanoelectrodes, or an electroactive contaminant was reduced instead of the ionic biphenyl species.
5. Self-assembly of octadecanethiol (ODT) molecules on gold microelectrodes for functional molecular devices (Chapter 7). We have realized an effective scheme to produce Au/ODT/MWCNT junctions by spanning MWCNTs across ODT-functionalized microelectrodes. A percentage of the resulting junctions retain the expected character of an ODT monolayer. While the process is not yet optimized, our successful junctions show that molecular electronic devices can be fabricated using simple processes such as photolithography, self-assembled monolayers, and dielectrophoresis.
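
For background on why both field strength and frequency tune the DEP deposition density (the expression below is the textbook time-averaged force on a small spherical particle, quoted as context rather than as a result of this work; elongated objects such as nanotubes follow the same frequency dependence but with a different geometric prefactor):

    F_{DEP} = 2\pi \varepsilon_m r^3 \, \mathrm{Re}[K(\omega)] \, \nabla |E_{rms}|^2, \qquad K(\omega) = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*}

where ε_m and ε_p* are the (complex) permittivities of the medium and the particle: the frequency dependence enters through the Clausius-Mossotti factor K(ω), and the field-strength dependence through ∇|E_rms|².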

Abstract:

Phosphomolybdic acid (H3PMo12O40) and its niobium-, pyridine-, and niobium/pyridine-exchanged forms were prepared. Ammonia adsorption microcalorimetry and methanol oxidation studies were carried out to investigate the acid site strength and the acid/base/redox properties of each catalyst. The addition of niobium, pyridine, or both increased the ammonia heat of adsorption and the total uptake. The catalyst with both niobium and pyridine exhibited the largest number of strong sites. For the parent H3PMo12O40 catalyst, methanol oxidation favors the redox product. Incorporation of niobium results in similar selectivity to redox products but with no catalyst deactivation. Incorporation of pyridine instead shifts the selectivity to favor the acidic product. Finally, the inclusion of both niobium and pyridine results in strong selectivity to the acidic product while also showing no catalyst deactivation. Thus, pyridine appears to enhance the acid properties of the catalyst while niobium appears to stabilize the active site.

Abstract:

Autonomous system applications are typically limited by the power supply's operational lifetime when battery replacement is difficult or costly. A trade-off between battery size and battery life is usually calculated to determine the device's capability and lifespan. As a result, energy harvesting research has gained importance as society searches for alternative energy sources for power generation. For instance, energy harvesting has been a proven alternative for powering solar-based calculators and self-winding wristwatches. Thus, energy harvesting technology can make it possible to assist or replace batteries in portable, wearable, or surgically implantable autonomous systems. Applications such as cardiac pacemakers or electrical stimulation can benefit from this approach, since the number of surgeries for battery replacement can be reduced or eliminated. Research on energy scavenging from body motion has been conducted to evaluate the feasibility of powering wearable or implantable systems. Energy from walking has previously been extracted using generators placed on shoes, backpacks, and knee braces, producing power levels ranging from milliwatts to watts. The research presented here examines the available power from walking and running at several body locations. The ankle, knee, hip, chest, wrist, elbow, upper arm, side of the head, and back of the head were the chosen target locations. Joints were preferred since they experience the most drastic acceleration changes. A motor-driven treadmill test was performed on 11 healthy individuals at several walking (1-4 mph) and running (2-5 mph) speeds, providing the acceleration magnitudes at the listed body locations. Power can be estimated from the treadmill evaluation since it is proportional to the acceleration and the frequency of occurrence. Available power from walking was determined to be greater than 1 mW/cm³ for most body locations, and over 10 mW/cm³ at the foot and ankle locations. Available power from running was found to be almost 10 times higher than that from walking. Most energy harvester topologies use linear generator approaches that are well suited to fixed-frequency vibrations with sub-millimeter amplitude oscillations. In contrast, body motion is characterized by a wide frequency spectrum and larger amplitudes. A generator prototype based on self-winding wristwatches is deemed appropriate for harvesting body motion since it is not limited to operation at fixed frequencies or restricted displacements. Electromagnetic generation is typically favored because of its slightly higher power output per unit volume. Accordingly, a nonharmonic oscillating rotational energy scavenger prototype is proposed to harness body motion. The electromagnetic generator follows the approach of small wind turbine designs that overcome the lack of a gearbox by using a larger number of coil and magnet arrangements. The device presented here is composed of a rotor with multiple-pole permanent magnets and an eccentric weight, and a stator composed of stacked planar coils. The rotor oscillations induce a voltage in the planar coils due to the eccentric mass imbalance produced by body motion. A meso-scale prototype device was then built and evaluated for energy generation. The meso-scale casing and rotor were machined from PMMA using a CNC mill. Commercially available discrete magnets were encased in a 25 mm rotor.
Commercial copper-coated polyimide film was employed to manufacture the planar coils using MEMS fabrication processes. Jewel bearings were used to finalize the arrangement. The prototypes were also tested at the listed body locations. A meso-scale generator with a 2-layer coil was capable to extract up to 234 µW of power at the ankle while walking at 3mph with a 2cm³ prototype for a power density of 117 µW/cm³. This dissertation presents the analysis of available power from walking and running at different speeds and the development of an unobtrusive miniature energy harvesting generator for body motion. Power generation indicates the possibility of powering devices by extracting energy from body motion.
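
As a rough illustration of how available power can be ranked from acceleration records in the way described above, the sketch below computes the RMS acceleration and dominant frequency of a synthetic ankle trace and forms a simple proportional figure of merit; the signal and the proportionality are illustrative assumptions, and only the 234 µW / 2 cm³ prototype figures come from the abstract:

# Figure-of-merit sketch: available power is taken to scale with acceleration
# magnitude and frequency of occurrence. The synthetic signal is an assumption,
# not a measurement from the treadmill study.
import numpy as np

fs = 100.0                                   # sample rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)           # 10-second record
accel = 2.5 * np.sin(2 * np.pi * 1.8 * t)    # synthetic ankle trace: ~1.8 Hz stride, 2.5 m/s^2 peak

a_rms = np.sqrt(np.mean(accel ** 2))         # RMS acceleration
spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
f_dom = freqs[np.argmax(spectrum[1:]) + 1]   # dominant frequency, ignoring DC

power_index = a_rms * f_dom                  # relative figure of merit (arbitrary units)
print(f"a_rms = {a_rms:.2f} m/s^2, dominant frequency = {f_dom:.2f} Hz, index = {power_index:.2f}")

# Power density reported for the meso-scale prototype (values from the abstract):
print(f"Prototype power density: {234e-6 / 2.0 * 1e6:.0f} uW/cm^3")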

Abstract:

Metals price risk management is a key issue in metal markets because of the uncertainty created by commodity price fluctuations, exchange rate and interest rate changes, and the large price risk borne by both metals producers and consumers. It is therefore taken into account by all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the option pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical option values with their observed market values. Predicting future trends of copper prices is important and essential for managing market price risk successfully. The third part is therefore a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims to show how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. The simultaneous econometric model built for the copper industry is

    Q_t^D = e^{-5.0485} \cdot P_{t-1}^{-0.1868} \cdot GDP_t^{1.7151} \cdot e^{0.0158 \cdot IP_t}
    Q_t^S = e^{-3.0785} \cdot P_{t-1}^{0.5960} \cdot T_t^{0.1408} \cdot P_{OIL(t)}^{-0.1559} \cdot USDI_t^{1.2432} \cdot LIBOR_{t-6}^{-0.0561}
    Q_t^D = Q_t^S

with the corresponding price equation

    P_{t-1}^{CU} = e^{-2.5165} \cdot GDP_t^{2.1910} \cdot e^{0.0202 \cdot IP_t} \cdot T_t^{-0.1799} \cdot P_{OIL(t)}^{0.1991} \cdot USDI_t^{-1.5881} \cdot LIBOR_{t-6}^{0.0717}

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production is considered, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, which is a useful proxy for technological change. A proxy for the cost of energy in producing copper is the price of oil at time t, denoted P_{OIL(t)}. USDI_t is the U.S. dollar index at time t, which is an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 1-year London Interbank Offered Rate, lagged six months. Although the model could be applied to other base metal industries, omitted exogenous variables, such as the price of substitutes or a combined variable related to substitute prices, were not considered in this study. Based on this econometric model and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will exceed specific option strike prices are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps, and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed based on the available data and the price prediction model. Applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
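
A minimal sketch of the two simulation-based calculations described above, assuming lognormal (geometric Brownian motion) price dynamics with placeholder values for the starting price, drift, volatility, and strike; none of these are the dissertation's fitted values, and the dissertation drives its simulation with the econometric model rather than GBM:

# Monte-Carlo sketch: (1) probability that the average monthly copper price exceeds
# a strike, and (2) value of an arithmetic-average (Asian) call. All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

s0 = 4500.0              # starting copper price, USD/tonne (hypothetical)
mu, sigma = 0.05, 0.25   # annual drift and volatility (hypothetical; risk-neutral pricing would set mu = r)
r = 0.05                 # discount rate
strike = 5000.0          # option strike, USD/tonne (hypothetical)
months, n_paths = 12, 100_000
dt = 1.0 / 12.0

# Simulate month-end prices under geometric Brownian motion
z = rng.standard_normal((n_paths, months))
log_steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_steps, axis=1))
avg_price = prices.mean(axis=1)          # arithmetic average over the year

prob_above = (avg_price > strike).mean()
asian_call = np.exp(-r * 1.0) * np.maximum(avg_price - strike, 0.0).mean()

print(f"P(average price > {strike:.0f}) = {prob_above:.3f}")
print(f"Asian call value ~ {asian_call:.1f} USD/tonne")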

Abstract:

Reducing noise and vibration has long been a goal in major industries: automotive, aerospace, and marine, to name a few. Products must be tested and must pass certain levels of federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: Laser Vibrometry, which directly measures the surface velocity of the aluminum plate, and Nearfield Acoustic Holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequency, damping, and mode shapes. Frequency and mode shapes are also compared to an FEA model. Laser Vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.
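
As context for how NAH recovers surface velocity from nearfield pressure, the sketch below back-propagates a hologram-plane pressure field through the angular spectrum (the planar, FFT-based formulation) and converts it to normal velocity with Euler's equation; the grid, frequency, standoff distance, and k-space cutoff are illustrative assumptions, not parameters from this study:

# Minimal planar NAH sketch: back-propagate hologram-plane pressure to the source
# plane and convert to normal surface velocity. Placeholder data and parameters.
import numpy as np

c, rho = 343.0, 1.21          # speed of sound (m/s), air density (kg/m^3)
f = 500.0                     # analysis frequency, Hz
k = 2 * np.pi * f / c
d = 0.03                      # hologram standoff distance, m
nx, ny, dx = 64, 64, 0.02     # measurement grid

# p_holo: complex pressure on the hologram plane (placeholder array; in practice
# this comes from the microphone-array measurement).
p_holo = np.zeros((ny, nx), dtype=complex)

kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
KX, KY = np.meshgrid(kx, ky)
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))   # imaginary for evanescent waves

P = np.fft.fft2(p_holo)
P_src = P * np.exp(-1j * kz * d)                       # inverse propagator (amplifies evanescent part)
P_src[(KX**2 + KY**2) > (1.5 * k)**2] = 0.0            # crude k-space filter to limit noise blow-up

V_src = P_src * kz / (rho * c * k)                     # Euler's equation in k-space: v_n = kz/(rho*c*k) * p
v_surface = np.fft.ifft2(V_src)                        # reconstructed normal velocity on the source plane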

Abstract:

An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that were given with the original algorithm. Because of these errors, the performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an InfiniBand cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
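
A serial sketch of the successor-table, level-set idea referred to above (a simplified reconstruction for illustration with a conservative dominance-pruning rule; it is not the published parallel algorithm, its contested pruning rules, or the UPC implementation):

# LCS length via successor tables: grow level sets of minimal match points one
# character at a time, pruning dominated pairs at each level.

def successor_table(seq, alphabet="ACGT"):
    """succ[c][i] = smallest index j >= i with seq[j] == c, or None."""
    n = len(seq)
    succ = {c: [None] * (n + 1) for c in alphabet}
    for c in alphabet:
        nxt = None
        for i in range(n - 1, -1, -1):
            if seq[i] == c:
                nxt = i
            succ[c][i] = nxt
    return succ

def lcs_length(x, y, alphabet="ACGT"):
    sx, sy = successor_table(x, alphabet), successor_table(y, alphabet)
    frontier = {(-1, -1)}          # (last matched index in x, last matched index in y)
    length = 0
    while True:
        candidates = set()
        for i, j in frontier:
            for c in alphabet:
                i2, j2 = sx[c][i + 1], sy[c][j + 1]
                if i2 is not None and j2 is not None:
                    candidates.add((i2, j2))
        # Pruning: keep only Pareto-minimal pairs (no other pair is <= in both coordinates).
        frontier = {p for p in candidates
                    if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in candidates)}
        if not frontier:
            return length
        length += 1

if __name__ == "__main__":
    print(lcs_length("ACCGGTCGAGTG", "GTCGTTCGGAATGCCG"))  # prints the LCS length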

Abstract:

The study of advanced materials aimed at improving human life has been performed since time immemorial. Such studies have created everlasting and greatly revered monuments and have helped revolutionize transportation by ushering in the age of lighter-than-air flying machines. A study of the mechanical behavior of advanced materials can therefore pave the way for their use for mankind's benefit. In this spirit, this dissertation comprises two investigations. First, an efficient modeling approach is established to predict the elastic response of cellular materials with distributions of cell geometries. Cellular materials find important applications in structural engineering. The approach does not require the complex and time-consuming computational techniques usually associated with modeling such materials. Unlike most current analytical techniques, the modeling approach directly accounts for the cellular material's microstructure. The approach combines micropolar elasticity theory and elastic mixture theory to predict the elastic response of cellular materials and is applied to two-dimensional balsa wood. Predicted properties are in good agreement with experimentally determined properties, which emphasizes the model's potential to predict the elastic response of other cellular solids, such as open-cell and closed-cell foams. The second topic concerns intraneural ganglion cysts, a set of medical conditions that result in denervation of the muscles innervated by the cystic nerve, leading to pain and loss of function. Current treatment approaches only temporarily alleviate pain and denervation and do not prevent cyst recurrence. Hence, a mechanistic understanding of the pathogenesis of intraneural ganglion cysts can help clinicians understand them better and therefore devise more effective treatment options. In this study, an analysis methodology using finite element analysis is established to investigate the pathogenesis of intraneural ganglion cysts. Using this methodology, the propagation of these cysts is analyzed in their most common site of occurrence in the human body, i.e., the common peroneal nerve. Results obtained using finite element analysis show good correlation with clinical imaging patterns, thereby validating the promise of the method for studying cyst pathogenesis.

Abstract:

Northern hardwood management was assessed throughout the state of Michigan using data collected on recently harvested stands in 2010 and 2011. Methods of forensic estimation of diameter at breast height were compared, and an ideal, localized equation form was selected for use in reconstructing pre-harvest stand structures. Comparisons showed differences in predictive ability among the available equation forms, which led to substantial financial differences when the forms were used to estimate the value of removed timber. Management on all stands was then compared among state, private, and corporate landowners. Comparisons of harvest intensities against a liberal interpretation of a well-established management guideline showed that approximately one third of harvests were conducted in a manner that may imply the guideline was followed; one third showed higher levels of removal than recommended, and one third were less intensive than recommended. Multiple management guidelines and postulated objectives were then synthesized into a novel system of harvest taxonomy, against which all harvests were compared. This further comparison showed approximately the same proportions of harvests while distinguishing sanitation cuts and the future productive potential of harvests cut more intensively than the guidelines suggest. Stand structures are commonly represented using diameter distributions. Parametric and nonparametric techniques for describing diameter distributions were applied to the pre-harvest and post-harvest data. A common polynomial regression procedure was found to be highly sensitive to the method of histogram construction that provides the data points for the regression. The discriminative ability of kernel density estimation was substantially different from that of the polynomial regression technique.
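
A small sketch of the kind of sensitivity described above, comparing a polynomial fit through histogram bin midpoints against a Gaussian kernel density estimate under two different bin counts; the simulated diameter data, bin counts, and polynomial order are illustrative assumptions, not values from the Michigan data:

# Compare a histogram-based polynomial density fit with a kernel density estimate
# for a simulated diameter-at-breast-height distribution.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
dbh = rng.gamma(shape=4.0, scale=4.0, size=500)     # simulated diameters, cm

grid = np.linspace(dbh.min(), dbh.max(), 200)
kde = gaussian_kde(dbh)(grid)                       # nonparametric estimate

def poly_density(data, bins, order=4):
    """Polynomial regression through histogram bin midpoints (density scale)."""
    counts, edges = np.histogram(data, bins=bins, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    coeffs = np.polyfit(mids, counts, order)
    return np.polyval(coeffs, grid)

for bins in (8, 20):                                # two histogram constructions
    fit = poly_density(dbh, bins)
    print(f"{bins:>2} bins: max |polyfit - KDE| = {np.max(np.abs(fit - kde)):.4f}")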