908 results for goal-oriented requirements engineering


Relevance:

30.00%

Publisher:

Abstract:

Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, where small size, light weight, and fluid networks with a high surface-area-to-volume ratio are necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that pack a large amount of heat transfer surface area into a very small volume, but do so at the cost of increased pumping power requirements.

To offset this cost, the use of a branching fluid network for the distribution of coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant in the heat sink, while maintaining a compact, high surface-area-to-volume ratio.

The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs that minimize entropy generation. Two heat sink designs at different scales were built and tested experimentally and analytically. The first uses this new derivation of Murray's Law; the second uses a combination of Murray's Law and Constructal Theory. The results of the experiments were used to verify the analytical and numerical models. These models were then used to compare the performance of the heat sinks with other compact, high-performance heat sink designs. The results showed that the techniques used to design branching fluid networks significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations that optimize the heat-transfer-to-pumping-power ratio of a single cooling channel element. The elements can be connected using a set of derived geometric guidelines that govern branch diameters and angles. The methodology can be used to design branching fluid networks that can fit any geometry.
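
As background for the branching geometry, the classical form of Murray's Law relates a parent channel diameter to its daughters. The sketch below illustrates only the classical law for a symmetric bifurcation, with an assumed inlet diameter; it is not the entropy-generation extension derived in the work.

```python
def murray_daughter_diameter(parent_diameter: float, n_daughters: int = 2) -> float:
    """Classical Murray's Law: d_parent^3 = sum(d_daughter^3).
    For a symmetric split into n equal daughters, each daughter diameter
    is d_parent / n**(1/3)."""
    return parent_diameter / n_daughters ** (1.0 / 3.0)

# Example: diameters along three generations of symmetric bifurcations,
# starting from a hypothetical 2 mm inlet channel (illustrative value only).
d = 2.0  # mm
for generation in range(1, 4):
    d = murray_daughter_diameter(d)
    print(f"generation {generation}: daughter diameter = {d:.3f} mm")
```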

Relevance:

30.00%

Publisher:

Abstract:

We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Unlike traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs (a profit TUF and a penalty TUF) with each task, to model real-time services that not only reward early completions but also penalize abortions or deadline misses. The scheduling heuristics we propose in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that the proposed algorithms can significantly outperform traditional scheduling algorithms such as Earliest Deadline First (EDF), traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
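
To make the two-TUF idea concrete, here is a minimal sketch of an admission test: a task contributes its profit TUF value if it is expected to finish by its deadline and incurs its penalty TUF value otherwise. The TUF shapes and the acceptance rule are illustrative assumptions, not the specific heuristics evaluated in the paper.

```python
from typing import Callable

def expected_utility(profit_tuf: Callable[[float], float],
                     penalty_tuf: Callable[[float], float],
                     expected_finish: float,
                     deadline: float) -> float:
    """Accrued utility of one task: a reward if it finishes in time,
    a (negative) penalty if it would miss its deadline or be aborted."""
    if expected_finish <= deadline:
        return profit_tuf(expected_finish)
    return -penalty_tuf(expected_finish)

# Illustrative TUFs: profit decays linearly until the deadline,
# penalty is a flat charge for a miss/abort (assumed shapes).
deadline = 10.0
profit = lambda t: max(0.0, 100.0 * (1.0 - t / deadline))
penalty = lambda t: 40.0

print(expected_utility(profit, penalty, expected_finish=6.0, deadline=deadline))   # 40.0, accept
print(expected_utility(profit, penalty, expected_finish=12.0, deadline=deadline))  # -40.0, reject/abort
```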

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a heterogeneous network composed of femtocells deployed within a macrocell network is considered, and a quality-of-service (QoS)-oriented fairness metric that captures important characteristics of tiered network architectures is proposed. Using homogeneous Poisson processes, the sum capacities in such networks are expressed in closed form for co-channel, dedicated-channel, and hybrid resource allocation methods. Then, a resource splitting strategy that simultaneously considers capacity maximization, fairness constraints, and QoS constraints is proposed. Detailed computer simulations utilizing 3GPP simulation assumptions show that a hybrid allocation strategy with a well-designed resource split ratio enjoys the best cell-edge user performance, with minimal degradation in the sum throughput of macrocell users compared with co-channel operation.
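
As a rough illustration of how a resource split affects the two tiers, the sketch below computes Shannon sum capacities for a simple dedicated-channel split in which the macro tier receives a fraction of the band and femtocells the remainder. The split ratio, bandwidths, and SINR values are assumed placeholders; the paper's closed-form expressions and QoS-oriented fairness metric are not reproduced here.

```python
import math

def sum_capacity(bandwidth_hz: float, sinrs: list[float]) -> float:
    """Shannon sum capacity (bit/s) of users sharing a band equally."""
    per_user_bw = bandwidth_hz / max(len(sinrs), 1)
    return sum(per_user_bw * math.log2(1.0 + s) for s in sinrs)

total_bw = 10e6          # 10 MHz system bandwidth (assumed)
split = 0.7              # fraction of the band dedicated to the macro tier (assumed)
macro_sinrs = [8.0, 3.0, 1.5]   # illustrative linear SINR values
femto_sinrs = [50.0, 30.0]

macro_cap = sum_capacity(split * total_bw, macro_sinrs)
femto_cap = sum_capacity((1.0 - split) * total_bw, femto_sinrs)
print(f"macro: {macro_cap/1e6:.1f} Mbit/s, femto: {femto_cap/1e6:.1f} Mbit/s")
```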

Relevance:

30.00%

Publisher:

Abstract:

This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; thus, the development of the framework made use of systems theory and IDEF methodologies. The study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation in order to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps to apply engineering and systems tools to this emerging field, captures the relationships among various enterprise aspects, and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic: it consists of a hierarchy of engineering activities presented in an IDEF0 model, with each activity defined by its inputs, outputs, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
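
Since the classification scheme is essentially a cross-product of elements, facets, and process phases, a minimal sketch of it as a data structure might look like this; the names are taken directly from the abstract, while the enumeration itself is only illustrative.

```python
from itertools import product

ELEMENTS = ("work", "resources", "decision", "information")
FACETS = ("strategy", "competency", "capacity", "structure")
PROCESS = ("specification", "analysis", "design", "implementation")
PERFORMANCE = ("cost", "time", "quality", "benefit")

# Each element-facet cell goes through every process phase: 4 x 4 x 4 = 64 activities,
# each judged against the four performance measures.
activities = list(product(ELEMENTS, FACETS, PROCESS))
print(len(activities), "activities, each judged on", PERFORMANCE)
print(activities[0])   # ('work', 'strategy', 'specification')
```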

Relevance:

30.00%

Publisher:

Abstract:

To achieve the goal of sustainable development, the building energy system was evaluated from the points of view of both the first and second laws of thermodynamics. The relationship between exergy destruction and sustainable development was discussed first, followed by descriptions of the resource abundance model, the life cycle analysis model, and the economic investment effectiveness model. By combining the foregoing models, a new sustainability index was proposed. Several green building case studies in the U.S. and China were presented. The influences of building function, geographic location, climate pattern, the regional energy structure, and the future technology improvement potential of renewable energy were discussed. Life cycle analyses of the building envelope, HVAC system, and on-site renewable energy system were compared from energy, exergy, environmental, and economic perspectives. It was found that climate pattern had a dramatic influence on the life cycle investment effectiveness of the building envelope. The energy performance of the building HVAC system was much better than its exergy performance; to further increase the exergy efficiency, renewable energy rather than fossil fuel should be used as the primary energy source. A regression model of building life cycle cost and exergy consumption was set up. The optimal building insulation level could be determined by either a cost minimization or an exergy consumption minimization approach, with the exergy approach leading to a higher insulation level than the cost approach. The influence of energy price on the system selection strategy was discussed. Two photovoltaic (PV) systems, stand-alone and grid-tied, were compared by the life cycle assessment method, and the superiority of the latter was clear. The analysis also showed that over its life span PV technology was less attractive economically because electricity prices in the U.S. and China did not fully reflect the associated environmental burden. However, if future energy price increases and PV system cost reductions are considered, the technology could be very promising for sustainable buildings.
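
For readers unfamiliar with the second-law viewpoint, a minimal sketch of the standard exergy (Carnot-factor) calculation for a heat flow is given below; the temperatures and heat demand are placeholder values, and the dissertation's composite sustainability index is not reproduced.

```python
def heat_exergy(q_joule: float, t_supply_k: float, t_ambient_k: float) -> float:
    """Exergy content of heat Q delivered at temperature T with ambient T0:
    Ex = Q * (1 - T0 / T)."""
    return q_joule * (1.0 - t_ambient_k / t_supply_k)

# Space heating example (illustrative numbers): delivering 1 MJ of heat at 40 C
# with a 0 C outdoor environment carries only about 128 kJ of exergy, which is
# why burning high-exergy fossil fuel for low-temperature heat is wasteful.
q = 1.0e6  # J of heat demand
ex = heat_exergy(q, t_supply_k=313.15, t_ambient_k=273.15)
print(f"exergy of delivered heat: {ex/1e3:.0f} kJ")
```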

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long suffered from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. In the meantime, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy a user's quality of service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures.

In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus our research on the development of scheduling methods for delay-sensitive cloud services on a single server with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve the profits of service providers. We next study the problem of multi-tier service scheduling. By carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
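
One simple way to decompose an end-to-end deadline into per-tier sub-deadlines is proportionally to each tier's expected service time; the sketch below illustrates that idea only and is not the specific assignment policy developed in the dissertation.

```python
def assign_sub_deadlines(end_to_end_deadline: float,
                         expected_tier_times: list[float]) -> list[float]:
    """Split an end-to-end deadline into cumulative per-tier sub-deadlines,
    proportional to each tier's expected service time."""
    total = sum(expected_tier_times)
    sub_deadlines, elapsed = [], 0.0
    for t in expected_tier_times:
        elapsed += end_to_end_deadline * (t / total)
        sub_deadlines.append(elapsed)
    return sub_deadlines

# Example: web, application, and database tiers with expected times 5, 20, and
# 15 ms and a 100 ms end-to-end deadline (illustrative numbers).
print(assign_sub_deadlines(100.0, [5.0, 20.0, 15.0]))  # [12.5, 62.5, 100.0]
```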

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to present a methodology for developing cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat while delivering maximally uniform temperature distributions. The topological and geometrical characteristics of multiple-story, three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier-Stokes solver and against analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm2 can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower than those of currently used high-performance cooling technologies.
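
For reference, the core update step of a standard particle swarm optimizer is sketched below; the coefficients are common textbook values, and the modifications used in the work, as well as the coupling to the thermo-fluid solvers, are not reproduced.

```python
import random

def pso_step(positions, velocities, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    """One standard PSO update: inertia plus pulls toward personal and global bests."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (personal_best[i][d] - positions[i][d])
                                + c2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities

# Two particles in a 2D design space (e.g., a channel diameter and a branching angle).
pos = [[1.0, 30.0], [2.0, 45.0]]
vel = [[0.0, 0.0], [0.0, 0.0]]
pso_step(pos, vel, personal_best=[[1.0, 30.0], [2.0, 45.0]], global_best=[1.5, 40.0])
print(pos)
```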

Relevance:

30.00%

Publisher:

Abstract:

Wireless Sensor and Actuator Networks (WSANs) are a key component of ubiquitous computing systems and have many applications in different knowledge domains. Programming for such networks is very hard and requires developers to know the specificities of the available sensor platforms, which steepens the learning curve for developing WSAN applications. In this work, an MDA (Model-Driven Architecture) approach for WSAN application development, called ArchWiSeN, is proposed. The goal of this approach is to facilitate the development task by providing: (i) a WSAN domain-specific language; (ii) a methodology for WSAN application development; and (iii) an MDA infrastructure composed of several software artifacts (PIM, PSMs, and transformations). ArchWiSeN allows the direct contribution of domain experts to WSAN application development without the need for specialized knowledge of WSAN platforms and, at the same time, allows network experts to manage the application requirements without the need for specific knowledge of the application domain. Furthermore, the approach also aims to enable developers to express and validate functional and non-functional requirements of the application, to incorporate services offered by WSAN middleware platforms, and to promote reuse of the developed software artifacts. In this sense, this thesis proposes an approach that covers all WSAN development stages for current and emerging scenarios through the proposed MDA infrastructure. The proposal was evaluated by: (i) a proof of concept encompassing three different scenarios, in which the MDA infrastructure was used to describe the WSAN development process following the application engineering process; (ii) a controlled experiment assessing the use of the proposed approach compared to the traditional method of WSAN application development; (iii) an analysis of ArchWiSeN's support for middleware services, to ensure that WSAN applications using such services can achieve their requirements; and (iv) a systematic analysis of ArchWiSeN in terms of the desired characteristics of an MDA tool, compared with other existing MDA tools for WSAN.
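
To give a flavor of the platform-independent-to-platform-specific step in an MDA pipeline, here is a purely hypothetical sketch; the model attributes, the target platform name, and the mapping are illustrative inventions and not the actual ArchWiSeN metamodel or transformations.

```python
from dataclasses import dataclass

@dataclass
class SensingTask:
    """Hypothetical platform-independent model (PIM) element."""
    phenomenon: str
    period_s: int
    destination: str

def to_platform_specific(task: SensingTask, platform: str) -> dict:
    """Toy model-to-model transformation: map a PIM sensing task onto
    platform-specific configuration values (hypothetical mapping)."""
    return {
        "platform": platform,
        "component": f"{task.phenomenon.capitalize()}SensorC",
        "timer_ms": task.period_s * 1000,
        "route_to": task.destination,
    }

pim = SensingTask(phenomenon="temperature", period_s=60, destination="sink")
print(to_platform_specific(pim, platform="telosb"))
```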

Relevance:

30.00%

Publisher:

Abstract:

Many buildings constructed during the middle of the 20th century were designed to criteria that fall short of current requirements. Although shortcomings are possible in all aspects of the design, inadequacies in seismic design pose a more pressing risk to human life. This risk has been seen in various earthquakes that have struck Italy recently, and the codes have subsequently been altered to account for this underestimated danger. Structures built before these changes remain at risk and must be retrofitted depending on their use. This report centers on the Giovanni Michelucci Institute of Mathematics at the University of Bologna and the work required to modify the building so that it can withstand 60% of the current design requirements. The goal of this report is to verify the previous reports, written in Italian, and to present an accurate analysis along with intervention suggestions for this particular building. The work began with an investigation into the previous sources and analyses to understand how the structure had been interpreted. After understanding the building, corrections were made where required, and the failing elements were organized graphically to show more easily where the building needed the most work. Once the critical zones were mapped, remediation techniques were tested on the top floor, and the modeling techniques and effects of the interventions were presented to assist further work on the structure.

Relevance:

30.00%

Publisher:

Abstract:

The goal of the research is to provide an overview of the factors that play a major role in structural failures and to focus on the importance of bracing in construction accidents. A temporary bracing system is important to construction safety, yet it is often neglected. Structural collapses often occur due to insufficient support of the loads applied at the time of failure. The structural load is usually analyzed by conceiving of the whole structure as a completed entity, and there is frequently a lack of design, or of proper implementation, of systems that can provide stability during construction. Often, the specific provisions and requirements of temporary bracing systems are left to workers on the job site who may not have the qualifications or expertise for proper execution. To assess whether bracing design should get more attention in codes and standards, failures that could have been avoided by the presence and/or correct design of a bracing system were identified and selected from a variety of cases in the engineering literature. Eleven major cases were found, spanning a time frame of almost 70 years and clearly showing that the topic deserves more attention. The case studies are presented in chronological order and in a systematic way: the failed structure is described in terms of its design components, the sequence of failure is reconstructed, and the causes and failure mechanism are then presented. Advice on how to prevent similar failures and hypothetical solutions that could have averted the collapses are identified. The findings show that insufficient or nonexistent bracing mainly results from human negligence or errors in load analysis, and that the time has come to fully acknowledge that temporary structures should be accounted for in design rather than left to contractors' means and methods of construction.

Relevance:

30.00%

Publisher:

Abstract:

Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since become a coveted, albeit elusive, goal. Recent studies have shown that the so-far inconclusive results regarding a quantum enhancement may have been partly due to the unsuitability of the benchmark problems used. In particular, these problems had an inherently too simple structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen for harder benchmarks that would possess the discriminative power to separate classical scaling of performance with problem size from quantum scaling. We introduce here a practical technique for engineering extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to benchmark experimental quantum annealers, as well as any other optimization algorithm.
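
The core idea, treating instance generation itself as an optimization over the couplings, can be sketched as follows. The Ising energy function is standard, but the hardness proxy and the hill-climbing loop here are placeholder assumptions rather than the heuristic actually proposed in the work.

```python
import random

def ising_energy(spins, couplings):
    """Energy of an Ising configuration: E = -sum_{i<j} J_ij * s_i * s_j."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def hardness_proxy(couplings, n, trials=20, steps=100):
    """Placeholder hardness measure: the best (lowest) energy reached by short
    greedy descents; a higher residual energy suggests a harder landscape."""
    best = float("inf")
    for _ in range(trials):
        s = [random.choice((-1, 1)) for _ in range(n)]
        for _ in range(steps):
            i = random.randrange(n)
            before = ising_energy(s, couplings)
            s[i] = -s[i]
            if ising_energy(s, couplings) > before:
                s[i] = -s[i]  # reject flips that raise the energy
        best = min(best, ising_energy(s, couplings))
    return best

n = 16
couplings = {(i, j): random.choice((-1.0, 1.0))
             for i in range(n) for j in range(i + 1, n) if random.random() < 0.3}

# Treat instance generation as optimization: keep coupling perturbations
# that increase the (placeholder) hardness proxy.
score = hardness_proxy(couplings, n)
for _ in range(20):
    edge = random.choice(list(couplings))
    old_J = couplings[edge]
    couplings[edge] = -old_J
    new_score = hardness_proxy(couplings, n)
    if new_score >= score:
        score = new_score
    else:
        couplings[edge] = old_J
print("final hardness proxy:", score)
```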

Relevance:

30.00%

Publisher:

Abstract:

The service supply chain (SSC) has attracted more and more attention from academia and industry. Although extensive product-based supply chain management models and methods exist, they are not applicable to the SSC because of the differences between services and products. In addition, the existing supply chain management models and methods share some common deficiencies. For these reasons, this paper develops a novel value-oriented model for the management of SSCs using the modeling methods of E3-value and Use Case Maps (UCMs). This model not only resolves the problems of applicability and effectiveness of the existing supply chain management models and methods, but also answers the questions of why the management model takes this form and how the potential profitability of the supply chain can be quantified. Meanwhile, the service business processes of the SSC system can be established by following its logical procedure. In addition, the model can determine the value and benefit distribution of the entire service value chain and optimize the operations management performance of the service supply chain.
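
To illustrate how potential profitability might be quantified in an E3-value-style model, the sketch below sums value inflows and outflows per actor; the actors, amounts, and exchanges are invented placeholders, not structures or results from the paper.

```python
from collections import defaultdict

# Each exchange: (from_actor, to_actor, value_object, monetary_value)
exchanges = [
    ("customer", "service_integrator", "service fee", 100.0),
    ("service_integrator", "service_provider", "outsourcing fee", 60.0),
    ("service_provider", "service_integrator", "delivered service", 0.0),
    ("service_integrator", "customer", "delivered service", 0.0),
]

net_value = defaultdict(float)
for src, dst, _obj, amount in exchanges:
    net_value[src] -= amount
    net_value[dst] += amount

for actor, value in net_value.items():
    print(f"{actor}: net value flow = {value:+.1f}")
# customer: -100.0, service_integrator: +40.0, service_provider: +60.0
```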

Relevance:

30.00%

Publisher:

Abstract:

Burn injuries in the United States account for over one million hospital admissions per year, with treatment costs estimated at four billion dollars. Of severe burn patients, 30-90% will develop hypertrophic scars (HSc). Current burn therapies rely upon the use of bioengineered skin equivalents (BSEs), which assist in wound healing but do not prevent HSc. HSc contraction occurs over 6-18 months and results in the formation of a fixed, inelastic skin deformity, with 60% of cases occurring across a joint. HSc contraction is characterized by an abnormally high presence of contractile myofibroblasts, which normally apoptose at the completion of the proliferative phase of wound healing. Additionally, clinical observation suggests that the likelihood of HSc is increased in injuries with a prolonged immune response. Given the pathogenesis of HSc, we hypothesize that BSEs should be designed with two key anti-scarring characteristics: (1) a 3D architecture and surface chemistry that mitigate the inflammatory microenvironment and decrease myofibroblast transition; and (2) materials that persist in the wound bed throughout the remodeling phase of repair. We employed electrospinning and 3D printing to generate scaffolds with well-controlled degradation rates, surface coatings, and 3D architectures to explore our hypothesis through four aims.

In the first aim, we evaluate the impact of an elastomeric, randomly oriented, biostable polyurethane (PU) scaffold on HSc-related outcomes. In unwounded skin, native collagen is arranged randomly, elastin fibers are abundant, and myofibroblasts are absent. Conversely, in scar contractures, collagen is arranged in linear arrays and elastin fibers are few, while myofibroblast density is high. Randomly oriented collagen fibers native to the uninjured dermis encourage random cell alignment through contact guidance and do not transmit as much force as aligned collagen fibers. However, the linear ECM serves as a system for mechanotransduction between cells in a feed-forward mechanism, which perpetuates ECM remodeling and myofibroblast contraction. The electrospinning process allowed us to create scaffolds with randomly oriented fibers that promote random collagen deposition and decrease myofibroblast formation. Compared to an in vitro HSc contraction model, fibroblast-seeded PU scaffolds significantly decreased matrix and myofibroblast formation. In a murine HSc model, collagen-coated PU (ccPU) scaffolds significantly reduced HSc contraction as compared to untreated control wounds and wounds treated with the clinical standard of care. The data from this study suggest that electrospun ccPU scaffolds meet the requirements to mitigate HSc contraction, including reduction of in vitro HSc-related outcomes, diminished scar stiffness, and reduced scar contraction. While clinical dogma suggests treating severe burn patients with rapidly biodegrading skin equivalents, these data suggest that a longer-lasting scaffold may have merit in reducing HSc.

In the second aim, we further investigate the impact of scaffold longevity on HSc contraction by studying a degradable, elastomeric, randomly oriented, electrospun microfibrous scaffold fabricated from the copolymer poly(l-lactide-co-ε-caprolactone) (PLCL). PLCL scaffolds displayed appropriate elastomeric and tensile characteristics for implantation beneath a human skin graft. In vitro analysis using normal human dermal fibroblasts (NHDF) demonstrated that PLCL scaffolds decreased myofibroblast formation as compared to an in vitro HSc contraction model. Using our murine HSc contraction model, we found that HSc contraction was significantly greater in animals treated with the standard of care, Integra, than in those treated with collagen-coated PLCL (ccPLCL) scaffolds at d 56 following implantation. Finally, wounds treated with ccPLCL were significantly less stiff than control wounds at d 56 in vivo. Together, these data further support our hypothesis that scaffolds which persist throughout the remodeling phase of repair represent a clinically translatable method to prevent HSc contraction.

In the third aim, we attempt to optimize cell-scaffold interactions by employing an anti-inflammatory coating on electrospun PLCL scaffolds. The anti-inflammatory sub-epidermal glycosaminoglycan hyaluronic acid (HA) was used as a coating material for PLCL scaffolds to encourage a regenerative healing phenotype. To minimize local inflammation, an anti-TNFα monoclonal antibody (mAb) was conjugated to the HA backbone prior to PLCL coating. ELISA analysis confirmed mAb activity following conjugation to HA (HA+mAb) and following adsorption of HA+mAb to the PLCL backbone [(HA+mAb)PLCL]. Alcian blue staining demonstrated thorough HA coating of PLCL scaffolds using pressure-driven adsorption. In vitro studies demonstrated that treatment with (HA+mAb)PLCL prevented downstream inflammatory events in mouse macrophages treated with soluble TNFα. In vivo studies using our murine HSc contraction model suggested a positive impact of the HA coating, which was partially impeded by the inclusion of the anti-TNFα mAb. Further characterization of the inflammatory microenvironment of our murine model is required before conclusions can be drawn regarding the potential of anti-TNFα therapeutics for HSc. Together, our data demonstrate the development of a complex anti-inflammatory coating for PLCL scaffolds and the potential impact of altering the ECM coating material on HSc contraction.

In the fourth aim, we investigate how scaffold design, specifically pore dimensions, can influence myofibroblast interactions and the subsequent formation of OB-cadherin-positive adherens junctions in vitro. We collaborated with Wake Forest University to produce 3D printed (3DP) scaffolds with well-controlled pore sizes. We hypothesized that decreasing pore size would mitigate intercellular communication via OB-cadherin-positive adherens junctions. PU was 3D printed via pressure extrusion in a basket-weave design with a feature diameter of ~70 µm and pore sizes of 50, 100, or 150 µm. Tensile elastic moduli of the 3DP scaffolds were similar to Integra; however, their flexural moduli were significantly greater than Integra's. The 3DP scaffolds demonstrated ~50% porosity. Western blot data at 24 h and 5 d demonstrated significant increases in OB-cadherin expression in 100 µm pores relative to 50 µm pores, suggesting that pore size may play a role in regulating cell-cell communication. To analyze the impact of pore size on scarring in vivo, scaffolds were implanted beneath skin grafts in a murine HSc model. While the flexural stiffness resulted in graft necrosis by d 14, cellular and blood vessel integration into the scaffolds was evident, suggesting potential for this design if employed in a less stiff material. In this study, we demonstrate for the first time that pore size alone impacts OB-cadherin protein expression in vitro, suggesting that pore size may play a role in the adherens junction formation affiliated with the fibroblast-to-myofibroblast transition. Overall, this work introduces a new bioengineered scaffold design both to study the mechanism behind HSc and to prevent the clinical burden of this contractile disease.

Together, these studies inform the field about critical design parameters for scaffolds intended to prevent HSc contraction. We propose that scaffold 3D architecture, surface chemistry, and longevity can be employed as key design parameters during the development of next-generation, low-cost scaffolds to mitigate post-burn hypertrophic scar contraction. Lessening post-burn scarring and scar contraction would improve clinical practice by reducing medical expenditures, increasing patient survival, and dramatically improving quality of life for millions of patients worldwide.

Relevance:

30.00%

Publisher:

Abstract:

Underground hardrock mining can be very energy intensive, and in large part this can be attributed to the power consumption of underground ventilation systems. In general, the power consumed by a mine's ventilation system, and its overall scale, are closely related to the amount of diesel power in operation. This is because diesel exhaust is a major source of underground air pollution, including diesel particulate matter (DPM), NO2, and heat, and because regulations tie air volumes to diesel engines. Furthermore, assuming the size of the airways remains constant, the power consumption of the main system increases with roughly the cube of the volume of air supplied to the mine. Therefore large diesel fleets lead to increased energy consumption and can also necessitate large capital expenditures on ventilation infrastructure in order to manage power requirements. Meeting ventilation requirements for equipment in a heading can result in a similar scenario, with the largest pieces of equipment leading to higher energy consumption and potentially necessitating larger ventilation tubing and taller drifts. Depending on the climate where the mine is located, large volumes of air can have a third impact on ventilation costs if heating or cooling the air is necessary. Annual heating and cooling costs, as well as the cost of the associated infrastructure, are directly related to the volume of air sent underground.

This thesis considers electric mining equipment as a means of reducing the intensity and cost of energy consumption at underground hardrock mines. Potentially, electric equipment could greatly reduce the volume of air needed to ventilate an entire mine, as well as individual headings, because it does not emit many of the contaminants found in diesel exhaust and because regulations do not connect air volumes to electric motors. Because of the cubic relationship between power consumption and air volume, this could greatly reduce the amount of power required for mine ventilation as well as the capital cost of ventilation infrastructure. As heating and cooling costs are also directly linked to air volumes, the cost and energy intensity of heating and cooling the air would also be significantly reduced. A further incentive is that powering equipment from the grid is substantially cheaper than fuelling it with diesel and can also produce far fewer GHGs. Therefore, by eliminating diesel from the underground environment, workers will enjoy safer working conditions, and operators and society at large will benefit from a smaller environmental impact.

Despite this significant potential, a credible economic assessment of electric mining equipment requires that its impact on underground systems be understood and considered in the evaluation. Accordingly, a good deal of this thesis reviews technical considerations related to the use of electric mining equipment, especially those that affect the economics of its implementation. The goal of this thesis is therefore to present the economic potential of implementing the equipment, to outline the key inputs necessary to support an evaluation, and to provide a model and an approach that can be used by others when the relevant information is available and acceptable assumptions can be made.
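
For context on why air volume dominates ventilation power, the standard mine ventilation relationships (Atkinson's equation for pressure loss and the resulting air power) are sketched below; the resistance, airflow, and fan efficiency values are illustrative assumptions, not figures from the thesis.

```python
def fan_shaft_power_kw(resistance_ns2_per_m8: float, airflow_m3s: float,
                       fan_efficiency: float = 0.75) -> float:
    """Mine ventilation power: pressure p = R*Q^2 (Atkinson's equation),
    air power = p*Q, so shaft power scales with Q^3 for a fixed-resistance
    airway network."""
    pressure_pa = resistance_ns2_per_m8 * airflow_m3s ** 2
    return pressure_pa * airflow_m3s / fan_efficiency / 1000.0

R = 0.01  # N*s^2/m^8, assumed lumped mine resistance
for q in (600.0, 300.0):  # m^3/s of air supplied
    print(f"Q = {q:.0f} m3/s -> fan shaft power ~ {fan_shaft_power_kw(R, q):,.0f} kW")
# Halving the airflow cuts the required power by roughly a factor of eight.
```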