895 results for System failures (Engineering) -- Location


Relevance:

30.00%

Publisher:

Abstract:

A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from lignocellulosic biomass such as wood, forest residues, and agricultural residues have the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. The GIS-based methodology was used as a precursor to simulation and optimization modeling, preselecting candidate biofuel facility locations for production from forest biomass. Candidate locations were selected using a set of evaluation criteria, including: county boundaries, the railroad transportation network, the state/federal road transportation network, the dispersion of water bodies (rivers, lakes, etc.), the dispersion of cities and villages, population census data, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs to the simulation and optimization models, which were built around key supply activities including biomass harvesting/forwarding, transportation, and storage. On-site storage was included to cover the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (delivered feedstock cost plus inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms for consistency with cost. Compared with the optimization model, the simulation model provides a more dynamic view of a 20-year operation by considering the impacts of building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked year-round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously, with each potential facility sized between a lower bound of 30 MGY and an upper bound of 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
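A minimal sketch of the kind of capacitated facility-location formulation this type of optimization model addresses, written in Python with PuLP and using entirely hypothetical county supplies, costs, and capacity bounds; the thesis itself used a static MPL-based model with GIS-derived inputs, so this is only an illustration of the general structure:

```python
# Hypothetical facility-location sketch: choose which candidate sites to build
# and how much biomass to ship, minimizing transport plus fixed costs.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

supply = {"county_A": 900, "county_B": 700, "county_C": 500}     # available biomass, dry tons/day (made up)
site_fixed_cost = {"site_1": 120.0, "site_2": 150.0}             # annualized fixed cost units (made up)
cap_lo, cap_hi = 600, 1000                                       # intake bounds (tons/day) if a site is built
transport_cost = {(i, j): 10.0 + 2.0 * n                         # $/ton delivered, made-up values
                  for n, (i, j) in enumerate((i, j) for i in supply for j in site_fixed_cost)}
total_demand = 1200                                              # tons/day required across all open sites

prob = LpProblem("biofuel_facility_location", LpMinimize)
build = LpVariable.dicts("build", site_fixed_cost, cat=LpBinary)
ship = LpVariable.dicts("ship", transport_cost, lowBound=0)

# objective: delivered feedstock cost plus fixed facility cost
prob += lpSum(transport_cost[k] * ship[k] for k in transport_cost) + \
        lpSum(site_fixed_cost[j] * build[j] for j in site_fixed_cost)

for i in supply:                                   # cannot ship more than a county can supply
    prob += lpSum(ship[i, j] for j in site_fixed_cost) <= supply[i]
for j in site_fixed_cost:                          # intake allowed (and bounded) only if the site is built
    prob += lpSum(ship[i, j] for i in supply) <= cap_hi * build[j]
    prob += lpSum(ship[i, j] for i in supply) >= cap_lo * build[j]
prob += lpSum(ship[k] for k in transport_cost) >= total_demand   # meet total feedstock demand

prob.solve()
print({j: build[j].value() for j in site_fixed_cost})
```

In the actual study, the candidate sites, distances, and biomass availability come from the GIS preselection step, and demand and availability are varied to study their effect on the optimal locations and sizes.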

Relevance:

30.00%

Publisher:

Abstract:

Neuromorphic computing has become an emerging field with a wide range of applications. Its challenge lies in developing a brain-inspired architecture that can emulate the human brain and work in real-time applications. In this report, a flexible neural architecture is presented which consists of a 128 x 128 SRAM crossbar memory and 128 spiking neurons. A digital integrate-and-fire model is used for the neurons. All components are designed in a 45 nm technology node. The core can be configured for certain neuron parameters, axon types, and synapse states, and is fully digitally implemented. Learning for this architecture is done offline. To train this circuit, the well-known Restricted Boltzmann Machine (RBM) algorithm is used, and linear classifiers are trained at the output of the RBM. Finally, the circuit was tested on a handwritten digit recognition application. Future prospects for this architecture are also discussed.
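A minimal software sketch of a digital integrate-and-fire update over a 128 x 128 binary crossbar, using hypothetical threshold, leak, and weight values; the actual design is a 45 nm digital circuit, so this Python fragment only illustrates the neuron model named above:

```python
import numpy as np

# One-time-step sketch: accumulate weighted crossbar input, apply leak, fire, reset.
N_AXONS, N_NEURONS = 128, 128
rng = np.random.default_rng(0)
crossbar = rng.integers(0, 2, size=(N_AXONS, N_NEURONS))   # SRAM crossbar connectivity (0/1), made up
weight = rng.integers(-2, 3, size=N_NEURONS)               # signed per-neuron synaptic weight, made up
threshold, leak = 20, 1                                    # hypothetical neuron parameters
v = np.zeros(N_NEURONS, dtype=int)                         # membrane potentials

def tick(axon_spikes, v):
    """One time step of the digital integrate-and-fire update."""
    active = axon_spikes.astype(bool)
    v = v + crossbar[active].sum(axis=0) * weight - leak    # integrate weighted input, leak
    fired = v >= threshold
    v = np.where(fired, 0, v)                               # reset neurons that fired
    return fired, v

axon_spikes = rng.integers(0, 2, size=N_AXONS)
fired, v = tick(axon_spikes, v)
print(int(fired.sum()), "of", N_NEURONS, "neurons fired")
```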

Relevance:

30.00%

Publisher:

Abstract:

PDZ-binding motifs are found in the C-terminal tails of numerous integral membrane proteins, where they mediate specific protein-protein interactions by binding to PDZ-containing proteins. Conventional yeast two-hybrid screens have been used to probe protein-protein interactions of these soluble C termini. However, to date no in vivo technology has been available to study interactions between the full-length integral membrane proteins and their cognate PDZ-interacting partners. We previously developed a split-ubiquitin membrane yeast two-hybrid (MYTH) system to test interactions between such integral membrane proteins by using a transcriptional output based on cleavage of a transcription factor from the C terminus of membrane-inserted baits. Here we modified MYTH to permit detection of C-terminal PDZ domain interactions by redirecting the transcription factor moiety from the C to the N terminus of a given integral membrane protein, thus liberating its native C terminus. We successfully applied this "MYTH 2.0" system to five different mammalian full-length renal transporters and identified novel PDZ domain-containing partners of the phosphate (NaPi-IIa) and sulfate (NaS1) transporters that would otherwise not have been detectable. Furthermore, this assay was applied to locate the PDZ-binding domain on the NaS1 protein. We showed that the PDZ-binding domain for PDZK1 on NaS1 is upstream of its C terminus, whereas the two interacting proteins, NHERF-1 and NHERF-2, bind at a location closer to the N terminus of NaS1. Moreover, NHERF-1 and NHERF-2 increased functional sulfate uptake in Xenopus oocytes when co-expressed with NaS1. Finally, we used MYTH 2.0 to demonstrate that the NaPi-IIa transporter homodimerizes via protein-protein interactions within the lipid bilayer. In summary, our study establishes the MYTH 2.0 system as a novel tool for interactive proteomics studies of membrane protein complexes.

Relevance:

30.00%

Publisher:

Abstract:

Microbial fuel cell (MFC) research has focused mostly on producing electricity using soluble organic and inorganic substrates. This study focused on converting solid organic waste into electricity using a two-stage MFC process. In the first stage, a hydrolysis reactor produced soluble organic substrates from solid organic waste. The soluble substrates from the hydrolysis reactor were pumped to the second-stage reactor: a continuous-flow, air-cathode MFC. Maximum power output (Pmax) of the MFC was 296 mW/m³ at a current density of 25.4 mA/m² while being fed only leachate from the first-stage reactor. Addition of phosphate buffer increased Pmax to 1,470 mW/m³ (89.4 mA/m²), although this result could not be duplicated with repeated polarization testing. The minimum internal resistance achieved was 77 Ω with leachate feed and 17 Ω with phosphate buffer. The low coulombic efficiency (
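For reference, coulombic efficiency in a continuous-flow MFC is conventionally defined in the MFC literature as the ratio of the charge actually recovered as current to the charge theoretically available from the COD removed; the standard form (not a formula quoted from this study) is:

```latex
\[
  CE \;=\; \frac{M_{\mathrm{O_2}}\, I}{F \, b \, q \, \Delta\mathrm{COD}}
\]
```

Here I is the current, F is Faraday's constant, b = 4 is the number of electrons exchanged per mole of oxygen, M_O2 = 32 g/mol, q is the influent volumetric flow rate, and ΔCOD is the chemical oxygen demand removed.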

Relevance:

30.00%

Publisher:

Abstract:

A novel solution to the long-standing issue of chip entanglement and breakage in metal cutting is presented in this dissertation. Through this work, an attempt is made to achieve universal chip control in machining by using chip guidance and subsequent breakage by backward bending (tensile loading of the chip's rough top surface) to break long continuous chips into small segments. One big limitation of chip breaker geometries in disposable carbide inserts is that their application range is limited to a narrow band of cutting conditions. Even within a recommended operating range, chip breakers do not function as effectively as designed due to the inherent variations of the cutting process. Moreover, for a particular process, matching the chip breaker geometry with the right cutting conditions to achieve effective chip control is a highly iterative exercise. The existence of a large variety of proprietary chip breaker designs further exacerbates the problem of easily implementing a robust and comprehensive chip control technique. To address the need for a robust and universal chip control technique, a new method is proposed in this work. By using a single tool top-form geometry coupled with a tooling system for inducing chip breaking by backward bending, the proposed method achieves comprehensive chip control over a wide range of cutting conditions. A geometry-based model is developed to predict a variable edge inclination angle that guides the chip flow to a predetermined target location. Chip kinematics for the new tool geometry is examined via photographic evidence from experimental cutting trials. Both qualitative and quantitative methods are used to characterize the chip kinematics. Results from the chip characterization studies indicate that the chip flow and final form show a remarkable consistency across multiple levels of workpiece and tool configurations as well as cutting conditions. A new tooling system is then designed to comprehensively break the chip by backward bending. Test results with the new tooling system show that, by utilizing the chip guidance and backward bending mechanism, long continuous chips can be more consistently broken into smaller segments that are generally deemed acceptable or good chips. It is found that the proposed tool can be applied effectively over a wider range of cutting conditions than present chip breakers, thus possibly taking the first step toward achieving universal chip control in machining.

Relevance:

30.00%

Publisher:

Abstract:

The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important, so it is critical to have accurate temperature and heat transfer models as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. In order to understand how these technologies will impact engine performance and each other, this research sought to analyze the engine from both a first-law energy balance perspective and a second-law exergy perspective. This research also provides insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as data for the validation of other models. It was found that engine load was the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in exhaust energy. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation produced a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
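For context, the waste-heat recovery potential of an exhaust stream in a second-law analysis is typically evaluated with the standard flow-exergy expression below, with the dead state at T_0 and p_0; this is the generic textbook form, not the thesis's specific formulation:

```latex
\[
  \dot{X}_{\mathrm{exh}} \;=\; \dot{m}\,\big[(h - h_0) - T_0\,(s - s_0)\big]
\]
```

Because exhaust temperatures sit far above the dead-state temperature while coolant temperatures are much closer to it, the bracketed term, and hence the recoverable exergy per unit of heat, is much larger for the exhaust stream than for the cooling circuit, which is consistent with the conclusion stated above.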

Relevance:

30.00%

Publisher:

Abstract:

Space-based (satellite, scientific probe, space station, etc.) and millimeter-to-microscale (such as those used in high-power electronics cooling, weapons cooling in aircraft, etc.) condensers and boilers are shear/pressure driven. They are of increasing interest to system engineers for thermal management because flow boilers and flow condensers offer both high fluid flow-rate-specific heat transfer capacity and very low thermal resistance between the fluid and the heat exchange surface, so large amounts of heat may be removed using reasonably sized devices without the need for excessive temperature differences. However, flow stability issues and degradation of performance of shear/pressure driven condensers and boilers due to undesirable flow morphology over large portions of their lengths have mostly prevented their use in these applications. This research is part of an ongoing investigation seeking to close the gap between science and engineering by analyzing two key innovations that could help address these problems. First, it is recommended that the condenser and boiler be operated in an innovative flow configuration that provides a non-participating core vapor stream to stabilize the annular flow regime throughout the device length, accomplished in an energy-efficient manner by means of ducted vapor re-circulation. This is demonstrated experimentally. Second, suitable pulsations applied to the vapor entering the condenser or boiler (from the re-circulating vapor stream) greatly reduce the thermal resistance of the already effective annular flow regime. For the experiments reported here, application of pulsations increased time-averaged heat flux by up to 900% at a location within the flow condenser and up to 200% at a location within the flow boiler, measured at the heat-exchange surface. Traditional fully condensing flows, reported here for comparison purposes, show similar heat-flux enhancements due to imposed pulsations over a range of frequencies. Shear/pressure driven condensing and boiling flow experiments are carried out in horizontal mm-scale channels with heat exchange through the bottom surface. The sides and top of the flow channel are insulated. The working fluid is FC-72 from 3M Corporation.

Relevance:

30.00%

Publisher:

Abstract:

Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology to be used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, a few physical limitations can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, and the wireless band can become very cluttered when multiple sensors try to transmit at the same time. Furthermore, individual sensors have a limited communication range, so the network may not have a 1-hop communication topology, and routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application in sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking under different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards, scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect a falling human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of the elderly occupants.
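One way to picture the two-tier strategy described above (binary detections for a rough global estimate, then a small dynamic cluster around that estimate for detailed processing) is the sketch below; the sensor layout, detection range, cluster size, and the simple inverse-distance refinement are all illustrative assumptions, not the thesis's actual inference algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
sensor_pos = rng.uniform(0, 20, size=(30, 2))      # 30 sensors scattered over a 20 m x 20 m area (made up)
target = np.array([12.0, 7.5])                     # target location, used only to simulate detections
detect_range = 5.0                                 # hypothetical binary detection range, m

# Tier 1: rough global estimate from binary detections only
triggered = np.linalg.norm(sensor_pos - target, axis=1) < detect_range
rough = sensor_pos[triggered].mean(axis=0) if triggered.any() else sensor_pos.mean(axis=0)

# Tier 2: dynamically form a small cluster of the k sensors nearest the rough estimate
k = 5
cluster = np.argsort(np.linalg.norm(sensor_pos - rough, axis=1))[:k]

# Refined estimate: inverse-distance weighting within the cluster, standing in for
# the detailed cluster-local computation
d = np.linalg.norm(sensor_pos[cluster] - rough, axis=1) + 1e-6
refined = (sensor_pos[cluster] / d[:, None]).sum(axis=0) / (1.0 / d).sum()
print("rough:", rough.round(2), "refined:", refined.round(2))
```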

Relevance:

30.00%

Publisher:

Abstract:

The object of this work has been to devise a method by which the different phases in the chalcocite-stibnite-galena ternary system may be identified. As mineralogists have no precise methods for the identification of these phases, a hydrochloric acid-chromium trioxide staining solution was employed.

Relevance:

30.00%

Publisher:

Abstract:

To what extent is “software engineering” really “engineering” as this term is commonly understood? A hallmark of the products of the traditional engineering disciplines is trustworthiness based on dependability. But in his keynote presentation at ICSE 2006, Barry Boehm pointed out that the dependency of individuals, systems, and peoples on software is becoming increasingly critical, yet dependability is generally not the top priority for software-intensive system producers. Continuing in an uncharacteristically pessimistic vein, Professor Boehm said that this situation will likely continue until a major software-induced system catastrophe, similar in impact to the 9/11 World Trade Center catastrophe, stimulates action toward establishing accountability for software dependability. He predicted that such a software-induced catastrophe is highly likely to occur between now and 2025. It is widely understood that software, i.e., computer programs, is intrinsically different from traditionally engineered products, but in one respect they are identical: the extent to which the well-being of individuals, organizations, and society in general increasingly depends on them. As wardens of the future through our mentoring of the next generation of software developers, we believe that it is our responsibility to at least address Professor Boehm’s predicted catastrophe. Traditional engineering has addressed, and continually addresses, its social responsibility through the evolution of the education, practice, and professional certification/licensing of professional engineers. To be included in the fraternity of professional engineers, software engineering must do the same. To get a rough idea of where software engineering currently stands on some of these issues, we conducted two surveys. Our main survey was sent to software engineering academics in the U.S., Canada, and Australia; among other items, it sought detailed information on their software engineering programs. Our auxiliary survey was sent to U.S. engineering institutions to get some idea of how software engineering programs compare with those in the established engineering disciplines of Civil, Electrical, and Mechanical Engineering. Summaries of our findings can be found in the last two sections of our paper.

Relevance:

30.00%

Publisher:

Abstract:

Powder metallurgy is a branch of metallurgy that produces metallic compacts in their final forms from powders by means of pressure and heat treatment. The products of powder metallurgy appear in our daily lives quite often, from the tungsten wires in electric light bulbs to the silver-tin fillings in our teeth.

Relevance:

30.00%

Publisher:

Abstract:

Photovoltaic power has become one of the most popular research areas in the new energy field. In this report, the case of a household solar power system is presented. Based on the Matlab environment, the simulation is built using Simulink and SimPowerSystems. There are four parts in a household solar system: the solar cells, the MPPT system, the battery, and the power consumer. The solar cell and MPPT system are studied and analyzed individually. The system with MPPT generates 30% more energy than the system without MPPT. After simulating the household system, it can be seen that the energy generated by the system is 40.392 kWh per sunny day. By combining the energy generated by the system with the price of electric power, it is found that 8.42 years are needed for the system to reach the break-even point between income and expenditure when weather conditions are considered.
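A small illustration of how a break-even period follows from the simulated daily output; the system cost, electricity price, and weather-adjusted number of sunny days below are made-up placeholders rather than values from the report, so the resulting figure differs from the reported 8.42 years:

```python
# Break-even (payback) sketch from daily energy output, assumed price, and assumed cost.
daily_energy_kwh = 40.392          # simulated output per sunny day (from the report)
sunny_days_per_year = 200          # assumed weather-adjusted number of sunny days
price_per_kwh = 0.12               # assumed electricity price, $/kWh
system_cost = 12000.0              # assumed installed system cost, $

annual_savings = daily_energy_kwh * sunny_days_per_year * price_per_kwh
payback_years = system_cost / annual_savings
print(f"payback: {payback_years:.2f} years")
```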

Relevance:

30.00%

Publisher:

Abstract:

More than eighteen percent of the world’s population lives without reliable access to clean water, forced to walk long distances to get small amounts of contaminated surface water. Carrying heavy loads of water long distances and ingesting contaminated water can lead to long-term health problems and even death. These problems affect the most vulnerable populations, namely women, children, and the elderly, more than anyone else. Water access is one of the most pressing issues in development today. Boajibu, a small village in Sierra Leone where the author served in the Peace Corps for two years, lacks access to clean water. Construction of a water distribution system was halted when a civil war broke out in 1992 and has not been continued since. The community currently relies on hand-dug and borehole wells that can become dirty during the dry season, which forces people to drink contaminated water or to travel a long distance to collect clean water. This report is intended to provide a design for the system as it was meant to be built. The water system design was completed based on the taps present, interviews with local community leaders, local surveying, and points taken with a GPS. The design is a gravity-fed, branched water system supplied by a natural spring on a hill adjacent to Boajibu; the flow rate of the spring, however, is unknown. There must be enough flow from the spring over a 24-hour period to meet the users’ demands on a daily basis, which is called providing continuous flow. If the spring has less than this amount of flow, the system must provide intermittent flow, that is, flow restricted to a few hours a day. A minimum flow rate of 2.1 liters per second was found to be necessary to provide continuous flow to the users of Boajibu; if this flow is not met, intermittent flow can be provided. In order to aid the construction of a distribution system in the absence of someone with formal engineering training, a table was created detailing water storage tank sizing based on possible source flow rates. A builder can interpolate within the table, using the measured source flow rate, to obtain the tank size. However, any flow rate below 2.1 liters per second cannot be used with the table; in that case, the builder should size the tank to hold the water supplied overnight, since the tank will be fully drained during the day because daytime demand exceeds what the spring can supply. In the developing world, there is often a problem collecting enough money to fund large infrastructure projects such as a water distribution system, and often there is only enough money to add one or two loops to the system, so it is helpful to know where those loops can be placed most effectively. Various possible loops were designated for the Boajibu water distribution system, and the Adaptive Greedy Heuristic Loop Addition Selection Algorithm (AGHLASA) was used to rank the effectiveness of the candidate loops. Loop 1, which was furthest upstream, was selected because it benefited the most people for the least cost, while loops further downstream were found to be less effective because they would benefit fewer people. Further studies should be conducted on the water use habits of the people of Boajibu to more accurately predict the demands that will be placed on the system. Further population surveying should also be conducted to predict population change over time, so that appropriate capacity can be built into the system to accommodate future growth. The flow at the spring should be measured using a V-notch weir and the system adjusted accordingly. Future studies could also adjust the loop-ranking method so that two users who use the water system for different lengths of time are not counted the same, and so that vulnerable users are weighted more heavily than more robust users.
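A short sketch of the demand and storage reasoning described above; the population and per-capita demand are hypothetical values chosen only to illustrate how a minimum continuous flow (about 2 L/s here) and an overnight storage volume would be worked out, and the overnight duration is likewise an assumption:

```python
# Minimum continuous flow and overnight storage sketch with hypothetical inputs.
population = 3600
demand_lpcd = 50.0                       # assumed liters per person per day
daily_demand = population * demand_lpcd  # total demand, L/day

min_continuous_flow = daily_demand / 86400.0   # L/s needed for 24-hour (continuous) supply
print(f"minimum continuous flow: {min_continuous_flow:.2f} L/s")

measured_spring_flow = 1.5               # example measured spring flow, L/s
if measured_spring_flow >= min_continuous_flow:
    print("continuous flow possible; size tank from the interpolation table")
else:
    # intermittent flow: store what the spring delivers overnight (assumed 12 h here)
    overnight_hours = 12
    tank_volume = measured_spring_flow * 3600 * overnight_hours   # liters
    print(f"intermittent flow; overnight storage tank of about {tank_volume:,.0f} L")
```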

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is to outline a Performance-Based Engineering (PBE) framework to address the multiple hazards of Earthquake (EQ) and subsequent Fire Following Earthquake (FFE). Currently, fire codes in the United States are largely empirical and prescriptive in nature. The reliance on prescriptive requirements makes quantifying the damage sustained due to fire difficult. Additionally, the empirical standards have resulted from furnace testing of individual members or individual assemblies, which has been shown to differ greatly from full structural-system behavior. The very nature of fire behavior (ignition, growth, suppression, and spread) is fundamentally difficult to quantify due to the inherent randomness present in each stage of fire development. The study of interactions between earthquake damage and fire behavior is also in its infancy, with essentially no empirical testing results available. This thesis presents a literature review, a discussion and critique of the state of the art, and a summary of software currently being used to estimate losses due to EQ and FFE. A generalized PBE framework for EQ and subsequent FFE is presented, along with a combined hazard probability to performance objective matrix and a table of the variables necessary to fully implement the proposed framework. Future research requirements and a summary are also provided, with discussion of the difficulties inherent in adequately describing the multiple hazards of EQ and FFE.

Relevance:

30.00%

Publisher:

Abstract:

Tissue engineering and regenerative medicine have emerged in an effort to generate replacement tissues capable of restoring native tissue structure and function, but because of the complexity of biological systems, this has proven to be much harder than originally anticipated. Silica-based bioactive glasses are popular as biomaterials because of their ability to enhance osteogenesis and angiogenesis. Sol-gel processing methods are popular for generating these materials because they offer: 1) mild processing conditions; 2) easily controlled structure and composition; 3) the ability to incorporate biological molecules; and 4) inherent biocompatibility. The goal of this work was to develop a bioactive vaporization system for the deposition of silica sol-gel particles as a means to modify the material properties of a substrate at the nano- and micro-level to better mimic the instructive conditions of native bone tissue, promoting appropriate osteoblast attachment, proliferation, and differentiation in support of bone tissue regeneration. The size distribution, morphology, and degradation behavior of the vapor-deposited sol-gel particles developed here were found to depend on the formulation (H2O:TMOS ratio, pH, Ca/P incorporation) and the manufacturing conditions (substrate surface character, deposition time). Additionally, deposition of these particles onto substrates can be used to modify overall substrate properties, including hydrophobicity, roughness, and topography. Deposition of Ca/P sol particles induced apatite-like mineral formation on both two- and three-dimensional materials when exposed to body fluids. Gene expression analysis suggests that Ca/P sol particles induce upregulation of osteoblast gene expression (Runx2, OPN, OCN) in preosteoblasts at early culture time points. Upon further modification, specifically increasing particle stability, these Ca/P sol particles have the potential to serve as a simple and unique means of modifying biomaterial surface properties in order to direct osteoblast differentiation.