4 results for Calumet and Hecla Mining Company.
in Dalarna University College Electronic Archive
Abstract:
Modular product architectures have generated numerous benefits for companies in terms of cost, lead-time and quality. The defined interfaces and the modules' properties decrease the effort to develop new product variants, and provide an opportunity to perform parallel tasks in design, manufacturing and assembly. The background of this thesis is that companies perform verifications (tests, inspections and controls) of products late, when most of the parts have been assembled. This extends the lead-time to delivery and erodes the benefits of a modular product architecture, especially when the verifications are extensive and the frequency of detected defects is high. Due to the number of product variants obtained from the modular product architecture, verifications must handle a wide range of equipment, instructions and goal values to ensure that high-quality products can be delivered. As a result, the total benefits of a modular product architecture are difficult to achieve. This thesis describes a method for planning and performing verifications within a modular product architecture. The method supports companies by utilizing the defined modules for verification already at module level, so-called MPV (Module Property Verification). With MPV, defects are detected earlier than with verification of a complete product, and the number of verifications is decreased. The MPV method is built up of three phases. In Phase A, candidate modules are evaluated on the basis of the costs and lead-time of the verifications and the repair of defects. An MPV-index is obtained which quantifies the module and indicates whether the module should be verified at product level or by MPV. In Phase B, the interface interaction between the modules is evaluated, as well as the distribution of properties among the modules. The purpose is to evaluate the extent to which supplementary verifications at product level are needed. Phase C supports the selection of the final verification strategy. The cost and lead-time of the supplementary verifications are considered together with the results from Phases A and B. The MPV method is based on a set of qualitative and quantitative measures and tools which provide an overview and support the achievement of cost- and time-efficient, company-specific verifications. A practical application in industry shows how the MPV method can be used, and the benefits that follow.
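The abstract does not give the MPV-index formula, so the following is an illustrative reading only: a Phase A index could weigh the repair cost avoided by catching defects at module level against the cost and lead-time added by the extra module verification. A minimal Python sketch; the class, field names and the TIME_VALUE rate are all assumptions, not the thesis's actual definitions:

    from dataclasses import dataclass

    @dataclass
    class ModuleEstimate:
        """Hypothetical per-module estimates (invented field names)."""
        verify_cost: float          # cost of one module-level verification
        verify_time: float          # lead-time (hours) of that verification
        repair_cost_module: float   # repairing a defect caught at module level
        repair_cost_product: float  # repairing the same defect after assembly
        defect_rate: float          # expected defects per module

    TIME_VALUE = 100.0  # assumed monetary value of one hour of lead-time

    def mpv_index(m: ModuleEstimate) -> float:
        """Illustrative MPV-index: expected repair cost avoided by early
        detection, relative to the effort of verifying the module itself.
        An index above 1.0 favours module-level verification (MPV)."""
        saving = m.defect_rate * (m.repair_cost_product - m.repair_cost_module)
        effort = m.verify_cost + TIME_VALUE * m.verify_time
        return saving / effort

    # A module whose defects are far cheaper to fix before final assembly.
    candidate = ModuleEstimate(verify_cost=50.0, verify_time=0.5,
                               repair_cost_module=20.0,
                               repair_cost_product=1500.0,
                               defect_rate=0.1)
    print(f"MPV-index: {mpv_index(candidate):.2f}")  # 1.48 -> flag for MPV

Under this reading, Phase A would flag such a module for MPV, while Phases B and C would still decide which properties that only emerge from module interactions need supplementary product-level verification.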
Abstract:
The purpose of this essay is to examine and explain how the Swedish mining court of Stora Kopparberget (the Great Copper Mountain) implemented its judicial legislation between 1641 and 1682. Questions are asked about which counts of indictment the court tried, which sentences it handed out, in what quantities, and how these results compare with those of other contemporary courts. The index cards of the court's judicial protocols are the primary source of information. The methods are those of quantitative and comparative analysis. The results show that theft of copper ore was the most common crime tried by the court. Other common crimes were (in order): sin of omission, transgression of work directions, fights, slander and disdain, trade in stolen ore, failure to appear in court, etc. Fines were by far the most common sentence, followed by shorter imprisonments, running the gauntlet, loss of the right to mine possession, twig beating, loss of work, penal servitude, banishment, "wooden horse riding" and finally military conscription. Even though previous research in the field of Swedish specialized courts is almost nonexistent, the evidence confirms great similarities between the Stora Kopparberget mining court and the Sala mining court. This essay will, hopefully, enrich our knowledge of specialized courts and of the 17th-century mining industry and society, and let us reach a broader understanding of the working conditions of the mountain.
Abstract:
Company X develops a laboratory information system (LIS) called System Y. The information system has a two-tier database architecture consisting of a production database and a historical database. A database constitutes the backbone of an information system, which makes its design very important: a poorly designed database can cause major problems within an organization. The two databases in System Y are poorly modeled, particularly the historical database. The cause of the poor modeling was unclear concepts, which have remained in the database and in the company organization and caused a general confusion of concepts. The split database architecture itself has evolved into a bottleneck and is the cause of many problems during the development of System Y. Company X is investigating the possibility of integrating the historical database with the production database. The goal of our thesis is to conduct a consequence analysis of such an integration and its effects on System Y, and to create a new design for the integrated database. We will also examine and describe the practical effects of confusion of concepts on a database's conceptual design. To achieve the goal of the thesis, five different method steps have been performed: a preliminary study of the organization, a change analysis, a consequence analysis and an investigation of the conceptual design of the database. These method steps have helped identify the changes necessary for the organization, a new design proposal for an integrated database, the impact of the proposed design and a number of effects of the confusion of concepts on the database.
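The thesis names neither the DBMS nor the schema, so the following is only a sketch of the general integration pattern: the historical rows move into the production schema, and their state becomes a column instead of a separate database. All file, table and column names here are invented, and sqlite3 merely stands in for whatever System Y actually uses:

    import sqlite3

    con = sqlite3.connect("system_y.db")  # hypothetical integrated database
    cur = con.cursor()

    # One table replaces the production/historical split; an is_archived
    # flag records what the two-tier architecture expressed as location.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS lab_result (
            id          INTEGER PRIMARY KEY,
            sample_id   TEXT NOT NULL,
            value       REAL,
            is_archived INTEGER NOT NULL DEFAULT 0  -- 0 = current, 1 = historical
        )
    """)

    # One-time copy from the old historical database (invented file and table).
    cur.execute("ATTACH DATABASE 'history.db' AS hist")
    cur.execute("""
        INSERT INTO lab_result (sample_id, value, is_archived)
        SELECT sample_id, value, 1 FROM hist.old_result
    """)
    con.commit()

A single schema removes the consistency bottleneck between the two databases; the price is that queries must now filter on the archival flag, which is usually mitigated by indexing or partitioning on that column.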
Abstract:
Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Because Wikipedia is free and gives everyone open access to edit articles, the quality of articles may be affected. Since not all people have the same level of knowledge, and different people have different opinions about a topic, the contributions made by different authors may differ. To handle this, it is important to classify the articles so that articles of good quality can be separated from poor-quality articles, which can then be removed from the database. The aim of this study is to classify Wikipedia articles into two classes, class 0 (poor quality) and class 1 (good quality), using the Adaptive Neuro-Fuzzy Inference System (ANFIS) and data mining techniques. Two ANFIS models are built using the Fuzzy Logic Toolbox [1] available in Matlab. The first ANFIS is based on the rules obtained from the J48 classifier in WEKA, while the other was built using expert knowledge. The data used for this research contains records of 226 articles taken from the German version of Wikipedia. The dataset consists of 19 inputs and one output. The data was preprocessed to remove any similar attributes. The input variables relate to the editors, contributors, length of articles and the lifecycle of articles. Finally, the different methods implemented in this research are analyzed to compare the performance of each classification method used.
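The abstract cites Matlab's Fuzzy Logic Toolbox and WEKA's J48 but shows no code. As a hedged sketch of the rule-extraction stage only, the following uses scikit-learn's CART decision tree as a stand-in for J48 (a C4.5 implementation; similar, not identical) on a feature table shaped like the one described. The file name, column names and split parameters are assumptions:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical layout: 226 rows, 19 feature columns, one 0/1 label.
    df = pd.read_csv("wikipedia_articles.csv")
    X = df.drop(columns=["quality"])  # editor, contributor, length, lifecycle
    y = df["quality"]                 # 0 = poor quality, 1 = good quality

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X_train, y_train)
    print(f"held-out accuracy: {tree.score(X_test, y_test):.2f}")

    # Human-readable if-then rules, the kind of output that seeded the
    # first ANFIS in the study (the second encoded expert knowledge).
    print(export_text(tree, feature_names=list(X.columns)))

The printed rules over editor, contributor, length and lifecycle features are roughly the form that can be translated into fuzzy if-then rules for an ANFIS; the expert-built variant skips this step and writes the rule base directly.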