948 results for Biochemical and Biomolecular Engineering
Abstract:
Due to the inherent limitations of DXA, assessment of the biomechanical properties of vertebral bodies relies increasingly on CT-based finite element (FE) models, but these often use simplistic material behaviour and/or single loading cases. In this study, we applied a novel constitutive law for bone elasticity, plasticity and damage to FE models created from coarsened pQCT images of human vertebrae, and compared vertebral stiffness, strength and damage accumulation for axial compression, anterior flexion and a combination of these two cases. FE axial stiffness and strength correlated with experiments and were linearly related to flexion properties. In all loading modes, damage localised preferentially in the trabecular compartment. Damage for the combined loading was higher than cumulated damage produced by individual compression and flexion. In conclusion, this FE method predicts stiffness and strength of vertebral bodies from CT images with clinical resolution and provides insight into damage accumulation in various loading modes.
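The constitutive law itself is not spelled out in the abstract, but the step shared by CT-based FE pipelines of this kind is mapping image density to element stiffness. A minimal sketch, assuming a generic density-modulus power law; the coefficients `a` and `b` below are illustrative placeholders, not the thesis's constitutive law (which additionally covers plasticity and damage):

```python
# Hedged sketch: generic density-to-modulus power law, E = a * rho**b.
# Coefficients are illustrative assumptions, not the thesis's values.

def modulus_from_density(rho_gcm3, a=8920.0, b=1.83):
    """Map apparent bone density (g/cm^3) to Young's modulus (MPa)."""
    return a * rho_gcm3 ** b

# Coarse pQCT voxel densities (trabecular-to-cortical range) -> element moduli
densities = [0.15, 0.30, 0.60]
moduli = [modulus_from_density(r) for r in densities]
print([round(e) for e in moduli])
```

Each voxel's density thus sets the stiffness of its element, which is how the coarsened pQCT image drives the model's elastic response.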
Abstract:
It is well established that local muscle tissue hypoxia is an important consequence and possibly a relevant adaptive signal of endurance exercise training in humans. It has been reasoned that it might be advantageous to increase this exercise stimulus by working in hypoxia. However, as long-term exposure to severe hypoxia has been shown to be detrimental to muscle tissue, experimental protocols were developed that expose subjects to hypoxia only for the duration of the exercise session and allow recovery in normoxia (live low-train high or hypoxic training). This overview reports data from 27 controlled studies using some implementation of hypoxic training paradigms. Hypoxia exposure varied between 2300 and 5700 m and training duration ranged from 10 days to 8 weeks. A similar number of studies was carried out on untrained and on trained subjects. Muscle structural, biochemical and molecular findings point to a specific role of hypoxia in endurance training. However, based on the available data on global estimates of performance capacity such as maximal oxygen uptake (VO2max) and maximal power output (Pmax), hypoxia as a supplement to training is not consistently found to be of advantage for performance at sea level. There is some evidence mainly from studies on untrained subjects for an advantage of hypoxic training for performance at altitude. Live low-train high may be considered when altitude acclimatization is not an option.
Abstract:
The single electron transistor (SET) is a Coulomb blockade device whose operation is based on the controlled manipulation of individual electrons. Single electron transistors show immense potential for use in future ultra-low-power devices, high-density memory and high-precision electrometry. Most SET devices operate at cryogenic temperatures, because the charging energy is much smaller than the thermal fluctuations. Room-temperature operation of these devices is possible with sub-10 nm nano-islands due to the inverse dependence of the charging energy on the radius of the conducting nano-island. The fabrication of sub-10 nm features with existing lithographic techniques is a technological challenge. Here we present results for the first room-temperature-operating SET device fabricated using Focused Ion Beam deposition technology. The SET device incorporates an array of tungsten nano-islands with an average diameter of 8 nm. The device shows clear Coulomb blockade for different gate voltages at room temperature. The charging energy of the device was calculated to be 160.0 meV, the capacitance per junction was found to be 0.94 aF, and the tunnel resistance per junction was calculated to be 1.26 GΩ. The tunnel resistance is five orders of magnitude larger than the quantum of resistance (26 kΩ) and allows for the localization of electrons on the tungsten nano-island. The low capacitance of the device, combined with the high tunnel resistance, allows for the Coulomb blockade effects observed at room temperature. Different device configurations minimizing the total capacitance of the device have been explored. The effect of the geometry of the nano-electrodes on the device characteristics is presented, and simulated device characteristics based on the soliton model are discussed. The first application of a SET device as a gas sensor is also demonstrated.
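The inverse dependence of charging energy on island radius can be made concrete with a back-of-the-envelope model. A minimal sketch, assuming the nano-island is an isolated conducting sphere (a simplification; the fabricated device's junction capacitances differ):

```python
import math

# Hedged sketch: why sub-10 nm islands enable room-temperature operation.
# Models the island as an isolated conducting sphere, C = 4*pi*eps0*r.

E_CHARGE = 1.602176634e-19   # elementary charge (C)
EPS0 = 8.8541878128e-12      # vacuum permittivity (F/m)
K_B = 1.380649e-23           # Boltzmann constant (J/K)

def charging_energy_meV(radius_m):
    """Charging energy E_C = e^2 / (2C) for a spherical island, in meV."""
    c = 4.0 * math.pi * EPS0 * radius_m
    return E_CHARGE ** 2 / (2.0 * c) / E_CHARGE * 1e3  # J -> meV

e_c = charging_energy_meV(4e-9)           # 8 nm diameter island
kT_300 = K_B * 300.0 / E_CHARGE * 1e3     # thermal energy at 300 K, in meV
print(round(e_c, 1), round(kT_300, 1))
# E_C >> k_B*T is the Coulomb-blockade condition; for r = 4 nm the simple
# sphere model already gives E_C on the order of the reported 160 meV.
```

The sphere model reproduces the scale of the measured charging energy and makes clear why shrinking the island below 10 nm pushes E_C well above the ~26 meV thermal energy at room temperature.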
Abstract:
This dissertation investigates high performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimations in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Here, two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and are capable of estimating TOA (range) and DOA (angle). TNs are equipped with omni-directional antennas and communicate with BNs to allow BNs to localize TNs; thus, the proposed localization is maintained by BN and TN cooperation. First, a LOS localization method is proposed, based on semi-distributed multi-node TOA-DOA fusion. The proposed technique is applicable to mobile ad-hoc networks (MANETs). We assume LOS is available between BNs and TNs. One BN is selected as the reference BN, and other nodes are localized in the coordinates of the reference BN. Each BN can independently localize TNs located in its coverage area. In addition, a TN might be localized by multiple BNs. High performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed multi-node TOA-DOA fusion is low because the total computational load is distributed across all BNs. To evaluate the localization accuracy of the proposed method, we compare it with global positioning system (GPS) aided TOA (DOA) fusion, which is applicable to MANETs. The comparison criterion is the localization circular error probability (CEP). The results confirm that the proposed method is suitable for moderate scale MANETs, while GPS-aided TOA fusion is suitable for large scale MANETs. Usually, the TOA and DOA of TNs are periodically estimated by BNs. Thus, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve its performance.
The integration of KF and multi-node TOA-DOA fusion is compared with an extended KF (EKF) applied to multiple TOA-DOA estimations made by multiple BNs. The comparison shows that the proposed integration is stable (no divergence takes place) and its accuracy is slightly lower than that of the EKF, when the EKF converges. However, the EKF may diverge while the integration of KF and multi-node TOA-DOA fusion does not; thus, the reliability of the proposed method is higher. In addition, the computational complexity of the integration of KF and multi-node TOA-DOA fusion is much lower than that of the EKF. In wireless environments, LOS might be obstructed, which degrades localization reliability. Antenna arrays installed at each BN are incorporated to allow each BN to identify NLOS scenarios independently. Here, a single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system, and maps it onto the wireless channel's K-factor. The larger K is, the more likely the channel is LOS. The K-factor is then used to identify NLOS scenarios. The performance of this system is characterized in terms of the probability of LOS and NLOS identification, and the latency of the method is small. Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. In this case, multiple BNs engage in the process of NLOS identification, shared-reflector determination and localization, and NLOS TN localization. In NLOS scenarios, when there are three or more shared reflectors, those reflectors are localized via DOA fusion, and then a TN is localized via TOA fusion based on the localization of the shared reflectors.
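The per-BN range/angle fix and its multi-node combination can be sketched as follows. The inverse-variance fusion rule here is an assumed illustration of the fusion idea, not necessarily the dissertation's exact estimator:

```python
import math

# Hedged sketch of the geometry behind TOA-DOA localization: each base node
# (BN) estimates a range r (from TOA) and a bearing theta (from DOA) to the
# target node (TN), giving one position fix per BN; fixes from multiple BNs
# are then fused, here by an assumed inverse-variance weighted average.

def toa_doa_fix(bn_xy, r, theta):
    """Position of the target from one BN's range/angle estimate."""
    return (bn_xy[0] + r * math.cos(theta), bn_xy[1] + r * math.sin(theta))

def fuse(fixes, variances):
    """Inverse-variance weighted fusion of per-BN position fixes."""
    w = [1.0 / v for v in variances]
    s = sum(w)
    x = sum(wi * f[0] for wi, f in zip(w, fixes)) / s
    y = sum(wi * f[1] for wi, f in zip(w, fixes)) / s
    return (x, y)

# Two BNs observe a TN that is truly at (10, 10)
f1 = toa_doa_fix((0.0, 0.0), math.hypot(10, 10), math.atan2(10, 10))
f2 = toa_doa_fix((20.0, 0.0), math.hypot(-10, 10), math.atan2(10, -10))
est = fuse([f1, f2], [1.0, 4.0])   # BN1's estimate is trusted more
print(round(est[0], 3), round(est[1], 3))
```

With noisy measurements the weights would come from each BN's TOA/DOA error statistics, which is what the KF integration then tracks over time.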
Abstract:
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. 
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
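The screening for physically discordant sites described above can be sketched in the spirit of the Hosking-Wallis discordancy measure, applied to physical basin characteristics rather than L-moment ratios. This is an illustrative analogue, not the thesis's exact metric, and the site data are invented:

```python
import numpy as np

# Hedged sketch of a discordancy-style screen over physical characteristics:
#   D_i = (n/3) * (u_i - ubar)^T A^{-1} (u_i - ubar),
# with A the sample cross-product matrix of the deviations. Illustrative
# analogue only; not the dissertation's exact statistic.

def discordancy(u):
    """u: (n_sites, n_features) site characteristic vectors -> D_i per site."""
    u = np.asarray(u, dtype=float)
    d = u - u.mean(axis=0)
    a_inv = np.linalg.inv(d.T @ d)
    n = u.shape[0]
    return np.array([(n / 3.0) * di @ a_inv @ di for di in d])

# Invented site data: [basin slope, elevation (m), drainage class]
sites = [[0.10, 200.0, 3.0],
         [0.14, 230.0, 3.5],
         [0.11, 210.0, 4.0],
         [0.50, 900.0, 1.0],   # physically unusual site
         [0.09, 195.0, 3.2]]
d_vals = discordancy(sites)
print(d_vals.round(2))
# The D_i values sum to n*p/3; the outlying site attains the largest D_i.
```

Sites whose D_i exceeds a chosen critical value would be flagged for review before the region is accepted as homogeneous.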
Abstract:
It is an important and difficult challenge to protect modern interconnected power systems from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on modeling of power systems and various control systems in the Alternative Transients Program (ATP). ATP is time-domain power system modeling software in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. The Transient Analysis of Control Systems (TACS) feature is used to model the excitation control system, the power system stabilizer and the turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using reduced dynamic equivalencing. The original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. The advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics.
The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behaviors. Other aspects such as relaying can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
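The machine dynamics that such time-domain benchmarks resolve reduce, in the classical approximation, to the swing equation. A minimal sketch for a single-machine-infinite-bus case; all parameters are illustrative, not from the benchmarked PSS/E cases:

```python
import math

# Hedged sketch: classical swing equation for one machine against an
# infinite bus,
#   M * delta'' = Pm - Pmax * sin(delta) - D * delta',
# integrated step by step in the time domain (semi-implicit Euler).

def simulate_swing(delta0, pm=0.8, pmax=2.0, m=0.1, d=0.2,
                   dt=1e-3, t_end=5.0):
    delta, omega = delta0, 0.0          # rotor angle (rad), speed deviation
    for _ in range(int(t_end / dt)):
        acc = (pm - pmax * math.sin(delta) - d * omega) / m
        omega += acc * dt
        delta += omega * dt
    return delta

eq = math.asin(0.8 / 2.0)               # stable equilibrium angle
final = simulate_swing(delta0=eq + 0.3) # perturb, then let it ring down
print(round(eq, 3), round(final, 3))
```

A disturbed but stable machine oscillates around and settles back to the equilibrium angle; multi-machine versions of the same equations produce the inter-area and intra-area oscillations mentioned above.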
Abstract:
Studies suggest that hurricane hazard patterns (e.g. intensity and frequency) may change as a consequence of the changing global climate. As hurricane patterns change, it can be expected that hurricane damage risks and costs may change as a result. This indicates the necessity of developing hurricane risk assessment models that are capable of accounting for changing hurricane hazard patterns, and of developing hurricane mitigation and climatic adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment framework and mitigation strategies that account for a changing global climate and that can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and accounts for the time-dependent properties of these parameters as a result of climate change. The research then incorporates median insured house values, discount rates, housing inventory and related data to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through the use of a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included.
For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of the region with time-dependent parameters of hurricanes (i.e. hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e. climate change may have an impact on the social vulnerability of hurricane-prone regions).
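The LCC comparison behind the cost-effectiveness judgment can be sketched as discounting expected annual losses over a structure's life; all costs, rates and horizons below are invented for illustration:

```python
# Hedged sketch of a Life-Cycle Cost comparison: a mitigation strategy pays
# off when its upfront cost plus the discounted stream of (reduced) expected
# annual hurricane losses falls below the discounted losses of the
# unmitigated structure. All numbers are invented placeholders.

def lcc(upfront_cost, annual_loss, discount_rate, years):
    """Present value of upfront cost plus expected annual losses."""
    pv_losses = sum(annual_loss / (1.0 + discount_rate) ** t
                    for t in range(1, years + 1))
    return upfront_cost + pv_losses

baseline = lcc(0.0, 2000.0, 0.05, 50)      # unmitigated house
retrofit = lcc(15000.0, 600.0, 0.05, 50)   # hypothetical retrofit package
print(retrofit < baseline)
```

Changing hurricane hazard patterns would enter through a time-varying `annual_loss`, which is what makes the climate-adjusted risk models above necessary.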
Abstract:
A mass-balance model for Lake Superior was applied to polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and mercury to determine the major routes of entry and the major mechanisms of loss from this ecosystem, as well as the time required for each contaminant class to approach steady state. A two-box model (water column, surface sediments) incorporating seasonally adjusted environmental parameters was used. Both numerical (forward Euler) and analytical solutions were employed and compared. For validation, the model was compared with current and historical concentrations and fluxes in the lake and sediments. Results for PCBs were similar to prior work showing that air-water exchange is the most rapid input and loss process. The model indicates that mercury behaves similarly to a moderately chlorinated PCB, with air-water exchange being a relatively rapid input and loss process. Modeled accumulation fluxes of PBDEs in sediments agreed with measured values reported in the literature. Wet deposition rates were about three times greater than dry particulate deposition rates for PBDEs. Gas deposition was an important process for tri- and tetra-BDEs (BDEs 28 and 47), but not for higher-brominated BDEs. Sediment burial was the dominant loss mechanism for most of the PBDE congeners, while volatilization was still significant for tri- and tetra-BDEs. Because volatilization is a relatively rapid loss process for both mercury and the most abundant PCBs (tri- through penta-), the model predicts that similar times (from 2 to 10 yr) are required for these compounds to approach steady state in the lake. The model predicts that if inputs of Hg(II) to the lake decrease in the future, then concentrations of mercury in the lake will decrease at a rate similar to the historical decline in PCB concentrations following the ban on production and most uses in the U.S.
In contrast, PBDEs are likely to respond more slowly if atmospheric concentrations are reduced in the future because loss by volatilization is a much slower process for PBDEs, leading to lesser overall loss rates for PBDEs in comparison to PCBs and mercury. Uncertainties in the chemical degradation rates and partitioning constants of PBDEs are the largest source of uncertainty in the modeled times to steady‐state for this class of chemicals. The modeled organic PBT loading rates are sensitive to uncertainties in scavenging efficiencies by rain and snow, dry deposition velocity, watershed runoff concentrations, and uncertainties in air‐water exchange such as the effect of atmospheric stability.
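A two-box mass balance with forward Euler integration, as named above, can be sketched as follows; the first-order rate constants (per year) are invented placeholders, not the Lake Superior parameter values:

```python
# Hedged sketch of a two-box (water column / surface sediment) contaminant
# mass balance integrated with forward Euler. Rate constants are invented.

def simulate(load, k_loss_w, k_settle, k_resuspend, k_burial,
             dt=0.01, years=300.0):
    """Masses in water (mw) and sediment (ms) under first-order transfers."""
    mw, ms = 0.0, 0.0
    for _ in range(int(years / dt)):
        d_mw = load - (k_loss_w + k_settle) * mw + k_resuspend * ms
        d_ms = k_settle * mw - (k_resuspend + k_burial) * ms
        mw += d_mw * dt
        ms += d_ms * dt
    return mw, ms

# Run long enough to approach steady state; a fast air-water loss term
# (k_loss_w) keeps the water-column burden low relative to the sediments.
mw_ss, ms_ss = simulate(load=100.0, k_loss_w=0.5, k_settle=0.1,
                        k_resuspend=0.01, k_burial=0.02)
print(round(mw_ss, 1), round(ms_ss, 1))
```

The time-to-steady-state contrasts described above fall out of the rate constants: a compound whose dominant loss term (volatilization vs. burial) is slow approaches steady state more slowly.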
Abstract:
The lack of access to sufficient water and sanitation facilities is one of the largest hindrances to the sustainable development of the poorest 2.2 billion people in the world. Rural Uganda is one of the areas where such inaccessibility seriously hampers development efforts. Many rural Ugandans must travel several kilometers to fetch adequate water, and many still do not have adequate sanitation facilities. Such poor access to clean water forces Ugandans to spend an inordinate amount of time and energy collecting water - time and energy that could be used for more useful endeavors. Furthermore, the difficulty in getting water means that people use less water than they need for optimal health and well-being. Access to other sanitation facilities can also have a large impact, particularly on the health of young children and the elderly, whose immune systems are less than optimal. Hand-washing, presence of a sanitary latrine, general household cleanliness, maintenance of the safe water chain and the household's knowledge about and adherence to sound sanitation practices may be as important as access to clean water sources. This report investigates these problems using the results from two different studies. The first looks into how access to water affects people's use of it; in particular, it investigates how much water households use as a function of the perceived effort to fetch it. Operationally, this was accomplished by surveying nearly 1,500 residents in three different districts around Uganda about their water usage and the time and distance they must travel to fetch it. The study found that there is no statistically significant correlation between a family's water usage and the perceived effort they must put forth to fetch it. On average, people use around 15 liters per person per day. Rural Ugandan residents apparently require a certain amount of water and will travel as far or as long as necessary to collect it.
Secondly, a study entitled “What Works Best in Diarrheal Disease Prevention?” was carried out to study the effectiveness of five different water and sanitation facilities in reducing diarrheal disease incidences amongst children under five. It did this by surveying five different communities before and after the implementation of improvements to find changes in diarrheal disease incidences amongst children under five years of age. It found that household water treatment devices provide the best means of preventing diarrheal diseases. This is likely because water often becomes contaminated before it is consumed even if it was collected from a protected source.
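The "no statistically significant correlation" finding rests on a standard correlation test, which can be sketched as follows; the data below are invented for illustration, not the ~1,500-household survey results:

```python
import math

# Hedged sketch of the kind of test behind "no statistically significant
# correlation": Pearson's r between fetch effort and per-capita use, with a
# t-statistic on n-2 degrees of freedom. All data are invented.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_stat(r, n):
    return r * math.sqrt((n - 2) / (1.0 - r * r))

effort = [10, 25, 40, 5, 60, 30, 20, 50]   # minutes per trip (invented)
usage = [16, 14, 15, 15, 16, 14, 15, 15]   # L/person/day (invented)
r = pearson_r(effort, usage)
print(round(r, 2), round(t_stat(r, len(effort)), 2))
# |t| well below ~2.45 (two-sided 5% critical value, 6 d.o.f.): no evidence
# that usage depends on effort, consistent with the flat ~15 L/person/day.
```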
Abstract:
With energy demands and costs growing every day, the need for improving energy efficiency in electrical devices has become very important. Research into various methods of improving efficiency for all electrical components will be key to meeting future energy needs. This report documents the design, construction, and testing of a research-quality electric machine dynamometer and test bed. This test cell system can be used for research in several areas including electric drive systems, electric vehicle propulsion systems, power electronic converters, and load/source elements in an AC microgrid, as well as many others. The test cell design criteria and decisions will be discussed in reference to user functionality and flexibility. The individual power components will be discussed in detail as they relate to the project, highlighting any features used in operation of the test cell. A project timeline will be discussed, clearly stating the work done by the different individuals involved in the project. In addition, the system will be parameterized and benchmark data will be used to demonstrate the functional operation of the system.
Abstract:
Riparian ecology plays an important part in the filtration of sediments from upland agricultural lands. This work makes use of multispectral high spatial resolution remote sensing imagery (Quickbird by Digital Globe) and geographic information systems (GIS) to characterize significant riparian attributes in the USDA's experimental watershed, Goodwin Creek, located in northern Mississippi. Significant riparian filter characteristics include the width of the strip, vegetation properties, soil properties, topography, and upland land use practices. The land use and vegetation classes are extracted from the remotely sensed image with a supervised maximum likelihood classification algorithm. Accuracy assessment resulted in an acceptable overall accuracy of 84 percent. In addition to sensing riparian vegetation characteristics, this work addresses the issue of concentrated flow bypassing a riparian filter. Results indicate that Quickbird multispectral remote sensing and GIS data are capable of determining riparian impact on filtering sediment. Quickbird imagery is a practical solution for land managers to monitor the effectiveness of riparian filtration in an agricultural watershed.
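The 84 percent overall accuracy figure comes from a confusion-matrix accuracy assessment, which can be sketched as follows; the matrix and class names below are invented (chosen to yield 0.84), not the Goodwin Creek assessment:

```python
# Hedged sketch of an overall-accuracy computation: the trace of the
# confusion matrix divided by the total number of reference samples.
# The matrix and land-cover classes are invented for illustration.

def overall_accuracy(confusion):
    """confusion[i][j]: samples of reference class i mapped to class j."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# rows/cols: forest, pasture, cropland (hypothetical classes)
cm = [[42, 2, 2],
      [4, 30, 4],
      [2, 2, 12]]
print(round(overall_accuracy(cm), 2))
```

Per-class producer's and user's accuracies follow from the same matrix by dividing each diagonal entry by its row or column sum.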
Abstract:
As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion on the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through the use of embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions. This model uses material quantities and prices to calculate embodied energy using national economic input/output-based models from China, the United States and Mali. Embodied energy is a measure of the aggregate environmental impacts of the interventions. Human energy quantifies the caloric expenditure associated with the installation and operation of an intervention, and is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs). Human energy is a measure of the aggregate social impacts of an intervention. A total of four household treatment interventions – biosand filtration, chlorination, ceramic filtration and boiling – and four water source-level interventions – an improved well, a rope pump, a hand pump and a solar pump – are evaluated in the context of Mali, West Africa. Source-level interventions slightly outperform household-level interventions in terms of having less total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant for all eight interventions, contributing over half of the total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women. When human energy is disaggregated by gender, it is seen that women perform over 99% of the work associated with seven of the eight interventions.
This has profound implications for gender equality in the context of water quality interventions, and may justify investment in interventions that reduce human energy burdens.
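The human-energy accounting (PAR scaled by BMR over the task's duration) can be sketched as follows; the PAR, BMR and schedule values are illustrative, not the study's Mali-specific figures:

```python
# Hedged sketch of human-energy accounting: energy for a task is the basal
# metabolic rate (per hour) scaled by a physical activity ratio (PAR) for
# the task's duration, summed over the operation schedule. Values invented.

def task_energy_kcal(bmr_kcal_per_day, par, hours):
    """Energy spent on a task: hourly BMR * PAR * duration."""
    return bmr_kcal_per_day / 24.0 * par * hours

# Daily water hauling for one household (hypothetical values)
bmr = 1400.0                                        # kcal/day, adult woman
daily = task_energy_kcal(bmr, par=4.5, hours=1.5)   # carrying loads
annual = daily * 365.0
print(round(daily, 1), round(annual))
```

Converting such annual caloric totals to the same units as material embodied energy is what allows the human share to be compared against, and found comparable to, the material share.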
Abstract:
Soil erosion is a natural geological phenomenon resulting from the removal and transportation of soil particles by water, wind, ice and gravity, though it may be affected by cultural factors as well. The physical and social phenomena of soil erosion were researched in six communities in the upper part of the Rio Grijalva Basin in the vicinity of Motozintla de Mendoza, Chiapas, Mexico. For this study, the USDA RUSLE model was applied to estimate soil erosion rates in the six communities based on the available data. The RUSLE model is based on soil properties, topography, and land cover and management factors. The estimated soil erosion rates ranged from a high of 2,050 metric ton ha-1 yr-1 to a low of 100 metric ton ha-1 yr-1. A survey concerning knowledge, attitudes and practices (KAP) related to soil erosion was also conducted in all 236 households in the six communities. The main findings of the KAP survey were: 69% of respondents did not know what soil erosion was; over 40% of the population perceived hurricanes as the biggest cause of soil erosion; and about 20% of interviewees said that landslides are a consequence of soil erosion. People in the communities did not perceive cultural factors as important in conservation efforts to reduce vulnerability to erosion; the results are therefore suggested to be useful for informing efforts to educate stakeholders.
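The RUSLE estimate named above is the product of five factors, A = R K LS C P. A minimal sketch with invented factor values (not the Motozintla inputs):

```python
# Hedged sketch of the RUSLE computation: annual soil loss A from rainfall
# erosivity R, soil erodibility K, slope length-steepness LS,
# cover-management C, and support practice P. Factor values are invented.

def rusle(r, k, ls, c, p):
    """Average annual soil loss, in the units implied by R and K."""
    return r * k * ls * c * p

# Steep cultivated hillside vs. the same slope under good cover/practice
bare = rusle(r=6000.0, k=0.03, ls=8.0, c=0.5, p=1.0)
covered = rusle(r=6000.0, k=0.03, ls=8.0, c=0.05, p=0.7)
print(round(bare), round(covered))
```

Because the factors multiply, management changes that reduce C and P cut the estimated loss proportionally, which is why cover and practice interventions dominate conservation planning.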
Abstract:
Renewable energy is growing in demand, and thus the manufacture of solar cells and photovoltaic arrays has advanced dramatically in recent years. This is evidenced by the fact that photovoltaic production has doubled every two years, increasing by an average of 48% each year since 2002. After covering a general overview of solar cell operation and modeling, this thesis starts with the three generations of photovoltaic solar cell technology, then moves to the motivation for dedicating research to nanostructured solar cells. For current-generation solar cells, among several factors - photon capture, photon reflection, carrier generation by photons, carrier transport and collection - the efficiency also depends on the absorption of photons. The absorption coefficient, α, and its dependence on the wavelength, λ, are of major concern for improving efficiency. Nano-silicon structures (quantum wells and quantum dots) have a unique advantage over bulk and thin-film crystalline silicon: multiple direct and indirect band gaps can be realized by appropriate size control of the quantum wells. This enables multiple wavelength photons of the solar spectrum to be absorbed efficiently. There is limited research on the calculation of the absorption coefficient in silicon nanostructures. We present a theoretical approach to calculate the absorption coefficient using quantum mechanical calculations of the interaction of photons with the electrons of the valence band. One model is that the oscillator strength of the direct optical transitions is enhanced by the quantum-confinement effect in Si nanocrystallites. These kinds of quantum wells can be realized in practice in porous silicon. The absorption coefficient shows a peak of 64,638.2 cm-1 at 343 nm, at a photon energy of ξ = 3.49 eV (λ = 355.532 nm). I have shown that a large value of the absorption coefficient α, comparable to that of bulk silicon, is possible in silicon QDs because of carrier confinement.
Our results show that the absorption coefficient can be enhanced by an order of magnitude, while at the same time yielding a nearly constant absorption coefficient curve over the visible spectrum. The validity of the plots is verified by correlation with experimental photoluminescence plots. A very generic comparison of the efficiency of a p-i-n junction solar cell is given for a cell incorporating QDs and one without QDs. The design and fabrication technique is discussed in brief. I have shown that by using QDs in the intrinsic region of a cell, the efficiency can be improved by a factor of 1.865. Thus for a first-generation solar cell with an efficiency of 26%, the efficiency can be improved to nearly 48.5% using QDs.
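The wavelength/photon-energy conversions used throughout rest on λ = hc/E. A quick check of the quoted 3.49 eV peak energy (the small difference from the quoted 355.532 nm presumably reflects the constants used in the original calculation):

```python
# Hedged sketch: photon energy <-> wavelength conversion, lambda = h*c / E,
# with h*c ~ 1239.84 eV*nm.

HC_EV_NM = 1239.841984  # h*c in eV*nm

def wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

def energy_ev(wavelength):
    return HC_EV_NM / wavelength

lam = wavelength_nm(3.49)
print(round(lam, 1))   # ~355.3 nm for a 3.49 eV photon
```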
Abstract:
Traditionally, asphalt mixtures have been produced at high temperatures (between 150°C and 180°C) and are therefore often referred to as Hot Mix Asphalt (HMA). Recently, a new technology named Warm Mix Asphalt (WMA) was developed in Europe that allows asphalt mixtures to be produced at lower temperatures. Over years of research effort, a number of WMA technologies have been introduced, including foaming methods using Aspha-min® and Advera® WMA; organic additives such as Sasobit® and Asphaltan B®; and chemical packages such as Evotherm® and Cecabase RT®. Producing asphalt mixtures at lower temperatures yields benefits, especially in environmental impact and energy savings. Even though WMA has shown promising results in energy savings and emission reduction, only limited studies and laboratory tests have been conducted to date. The objectives of this project are to 1) develop a mix design framework for WMA by evaluating its mechanical properties; 2) evaluate the performance of WMA containing high percentages of recycled asphalt material; and 3) evaluate moisture sensitivity in WMA. The test results show that most of the WMA mixtures had higher fatigue life and TSR, indicating better fatigue cracking and moisture damage resistance; however, the rutting potential of most of the WMA mixtures tested was higher than that of the control HMA. A WMA mix design framework was also developed and is presented in this study to help contractors and government agencies successfully design WMA. Mixtures containing high RAP and RAS contents were studied as well, and the overall results show that WMA technology allows mixtures containing high RAP content and RAS to be produced at lower temperatures (up to 35°C lower) without significantly affecting performance in terms of rutting, fatigue and moisture susceptibility.
Lastly, the study also found that when hydrated lime was introduced into the WMA, all lime-modified mixtures passed the minimum TSR requirement of 0.80. This indicates that the moisture susceptibility of WMA can be improved by adding hydrated lime.
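The 0.80 criterion is a tensile strength ratio (TSR) check: the indirect tensile strength of moisture-conditioned specimens over that of dry specimens. A minimal sketch with invented strengths, not the study's measurements:

```python
# Hedged sketch of the TSR moisture-susceptibility check: TSR is the ratio
# of conditioned to dry indirect tensile strength, compared against a
# minimum of 0.80. Strength values are invented.

def tsr(conditioned_psi, dry_psi):
    return conditioned_psi / dry_psi

def passes_moisture_criterion(conditioned_psi, dry_psi, minimum=0.80):
    return tsr(conditioned_psi, dry_psi) >= minimum

print(round(tsr(88.0, 104.0), 2), passes_moisture_criterion(88.0, 104.0))
# A hypothetical lime-modified mix with conditioned strength 88 psi and dry
# strength 104 psi has TSR ~0.85, above the 0.80 minimum.
```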