984 results for "Tool inventory"
Abstract:
Intermolecular forces are a useful concept that can explain the attraction between particulate matter as well as numerous phenomena in our lives, such as viscosity, solubility, drug interactions, and the dyeing of fibers. However, studies show that students have difficulty understanding this important concept, which led us to develop free educational software in English and Portuguese. The software can be used interactively by teachers and students, thus facilitating better understanding. Professors and students, both graduate and undergraduate, were asked about the software's quality, intuitiveness of use, ease of navigation, and pedagogical applicability using a Likert scale. The results led to the conclusion that the developed application can serve as an auxiliary tool to assist teachers in their lectures and students in learning about intermolecular forces.
Abstract:
Holding enough inventory to cover every possible problem would increase the inventory level indefinitely in proportion to demand variability (the standard deviation). Materials kept in stock just to be on the safe side, and never used, are waste. The main objective of this study was to find out how much inventory is required to cover the requirements without causing delivery problems towards the end customers, and how the inventory could be controlled efficiently. Several improvements were made to the control principles; the inventory level was quickly decreased by more than 30% and then kept at the new level. The suitability of kanban control was investigated, and it was eventually adopted for part of the items. The new procedures proved very advantageous in securing supply. The requests for quotations were diversified and their faulty basis was corrected, so that inventory surplus would be avoided in the future, while a great deal of valuable time was freed from daily routines for further improvement projects.
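The trade-off described above is commonly captured by the textbook safety-stock formula and, for kanban sizing, by the classic card-count rule. The abstract gives no formulas, so the following Python sketch is purely illustrative; all parameter names and figures are assumptions, not values from the study.

```python
from math import ceil, sqrt

def safety_stock(z: float, sigma_demand: float, lead_time: float) -> float:
    """Textbook safety-stock formula: z * sigma * sqrt(lead time).

    z            -- service-level factor (about 1.65 for a 95% cycle service level)
    sigma_demand -- standard deviation of demand per period
    lead_time    -- replenishment lead time, in the same period units
    """
    return z * sigma_demand * sqrt(lead_time)

def kanban_count(demand_per_period: float, lead_time: float,
                 container_size: float, safety_factor: float = 0.1) -> int:
    """Classic kanban sizing: N = D * L * (1 + safety factor) / container size."""
    return ceil(demand_per_period * lead_time * (1 + safety_factor) / container_size)

# Hypothetical figures, for illustration only
print(safety_stock(z=1.65, sigma_demand=40.0, lead_time=2.0))                 # ~93 units
print(kanban_count(demand_per_period=500, lead_time=0.5, container_size=50))  # 6 cards
```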
Abstract:
A spectrophotometric technique to measure the concentration of Penicillium allii conidia in damp chamber experiments was developed. A negative linear correlation (R² = 0.56) was observed between transmittance at 340 nm and the concentration of P. allii conidia in 0.05% water agar. The equation relating concentration (y, conidia mL⁻¹) to transmittance (T) is y = 9.3 × 10⁶ − 86497 T. The method was assayed by inoculating 43 P. allii strains in two garlic cultivars. It proved faster than the traditional hemocytometer count and more accurate: the CV of the number of conidia per hemocytometer reticule was 35.04%, while the transmittance CV was 2.73%. The extreme values chosen for T were 40 and 80, because the sensitivity of the method decreased when conidia concentrations fell outside this range.
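For illustration, the calibration above can be applied directly in code. The sketch below (the function name and range check are my own additions) converts a transmittance reading into an estimated conidia concentration and rejects readings outside the 40-80 transmittance window in which the method was reported to be sensitive.

```python
def conidia_concentration(transmittance: float) -> float:
    """Estimate P. allii conidia concentration (conidia per mL) from
    transmittance (T) at 340 nm using the reported calibration:
        y = 9.3e6 - 86497 * T
    Valid only for T between 40 and 80, the range used in the study.
    """
    if not 40 <= transmittance <= 80:
        raise ValueError("transmittance outside the calibrated range 40-80")
    return 9.3e6 - 86497 * transmittance

print(conidia_concentration(60))  # about 4.1e6 conidia per mL
```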
Abstract:
The aim of this research was to investigate the current status of procurement at Tikkurila Oyj's Vantaa factories, to develop the reporting of purchases and purchase warehouses, and to measure the activities during the implementation of a new purchasing tool. The implemented purchasing tool was based on ABC analysis. Based on its reports, the importance of performance measurement for the company's operations and the goal of making the company's supply chain transparent from the purchasing perspective were examined. Successful purchasing and material operations call for accurate knowledge and professional skills. The research showed that a separate purchasing tool, which analyses the existing information in the company's production management system, can add value for the needs of the whole supply chain. The analyses and reports of the purchasing tool enable a more harmonized purchasing process at the operative level and create a basis for internal targets and their follow-up. At the same time, the analyses give management an up-to-date view of the current status and development trends of procurement. Better exploitation of information technology enables full transparency into the case company's purchasing department.
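The abstract does not detail the ABC analysis itself; the sketch below shows one conventional way to classify purchased items by their cumulative share of annual purchase value. The class thresholds and item data are illustrative assumptions, not figures from the thesis.

```python
def abc_classify(items: dict[str, float],
                 a_share: float = 0.8, b_share: float = 0.95) -> dict[str, str]:
    """Classify items into A/B/C by cumulative share of annual purchase value.

    items -- mapping of item code to annual purchase value
    Returns a mapping of item code to class 'A', 'B' or 'C'.
    """
    total = sum(items.values())
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for code, value in ranked:
        cumulative += value
        share = cumulative / total
        classes[code] = 'A' if share <= a_share else 'B' if share <= b_share else 'C'
    return classes

# Hypothetical purchase values, for illustration only
print(abc_classify({"RAW-1": 120_000, "RAW-2": 40_000, "PKG-1": 15_000, "MISC-9": 2_000}))
```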
Abstract:
The environmental challenges facing the plastic packaging industry have grown remarkably along with the climate change debate. Interest in studying the carbon footprints of packaging has increased in the packaging industry in order to find out the real climate change impacts of packaging. This thesis examines the greenhouse gas emissions of plastic packaging over its life cycle. The carbon footprint is calculated for a food package manufactured from plastic laminate. The laminate consists of low-density polyethylene (PE-LD) and oriented polypropylene (OPP) joined together with a laminating adhesive. The purpose is to find out whether a carbon footprint calculation tool for plastic packaging can be created and how usable it would be in a plastic packaging manufacturing company. The PAS 2050 standard was used as the carbon footprint calculation method. The calculations consider direct and indirect greenhouse gas emissions as well as avoided emissions, which arise for example when packaging waste is utilized as energy. The results of the calculations were used to create a simple calculation tool for similar laminate structures. However, the use of the tool is limited to one manufacturing plant, because the primary activity data depend on geographical location and, for example, on the emissions of the energy used at the plant. The results give an approximation of the climate change potential caused by the laminate. It should be noted, however, that the calculations do not cover all environmental impacts of the plastic packaging's life cycle.
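As a minimal illustration of the kind of balance such a tool computes, the sketch below sums direct and indirect emissions over the laminate components and subtracts avoided emissions from energy recovery. The emission factors and masses are placeholders, not values from the thesis or from PAS 2050.

```python
def carbon_footprint(components: list[dict], avoided_kg_co2e: float = 0.0) -> float:
    """Sum direct and indirect emissions over laminate components (kg CO2e per
    functional unit) and subtract avoided emissions, e.g. from energy recovery
    of packaging waste."""
    total = 0.0
    for c in components:
        total += c["mass_kg"] * (c["direct_factor"] + c["indirect_factor"])
    return total - avoided_kg_co2e

# Placeholder data: PE-LD film, OPP film and laminating adhesive
laminate = [
    {"name": "PE-LD",    "mass_kg": 0.012, "direct_factor": 1.9, "indirect_factor": 0.3},
    {"name": "OPP",      "mass_kg": 0.008, "direct_factor": 2.0, "indirect_factor": 0.3},
    {"name": "adhesive", "mass_kg": 0.001, "direct_factor": 3.0, "indirect_factor": 0.5},
]
print(carbon_footprint(laminate, avoided_kg_co2e=0.01), "kg CO2e per package")
```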
Abstract:
Differential scanning calorimetry (DSC) was used to study the thermal behavior of hair samples and to verify whether an individual could be identified from DSC curves stored in a data bank. Hair samples from students and staff of the Instituto de Química de Araraquara, UNESP were collected to build the data bank. An individual who participated in the data bank anonymously was then identified using DSC curves.
Abstract:
The development of software tools began as the first computers were built. The current generation of development environments offers a common interface for accessing multiple software tools and often also provides the possibility of building custom tools as extensions to the existing environment. Eclipse is an open source development environment that offers a good starting point for developing custom extensions. This thesis presents a software tool to aid the development of context-aware applications on the Multi-User Publishing Environment (MUPE) platform. The tool is implemented as an Eclipse plug-in. It allows developers to include external server-side contexts in their MUPE applications, and additional context sources can be added through Eclipse's extension point mechanism. The thesis describes how the tool was designed and implemented. The implementation consists of a tool core component and an additional context source extension. The core component is responsible for the actual context addition and also provides the needed user interface elements to the Eclipse workbench. The context source component provides the necessary context source related information to the core component. As part of the work, an update site feature was also implemented for distributing the tool through the Eclipse update mechanism.
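The core/extension split described above relies on Eclipse's extension point mechanism, which is Java-based and not shown here. Purely as an illustration of the pattern, the toy Python registry below shows a core component discovering contributed context sources instead of hard-coding them; all names are hypothetical.

```python
class ContextSourceRegistry:
    """Toy analogue of an extension point: context-source providers register
    themselves, and the core component queries the registry instead of
    depending on concrete implementations."""

    def __init__(self):
        self._sources = {}

    def register(self, name: str, provider) -> None:
        self._sources[name] = provider

    def contexts(self) -> dict[str, str]:
        return {name: provider() for name, provider in self._sources.items()}

registry = ContextSourceRegistry()
registry.register("location", lambda: "server-side location context")
registry.register("weather", lambda: "server-side weather context")

# The "core component" adds whatever contexts have been contributed
for name, context in registry.contexts().items():
    print(f"adding {name}: {context}")
```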
Abstract:
Forest inventories are used to estimate forest characteristics and condition for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods rely strongly on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand, and all data handling and parameter tuning should be as objective and automated as possible. The methods also need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of a "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of a model that may be based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. Field work must be performed to connect the auxiliary data to the inventory parameters being estimated. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The definition of the mathematical model parameters is automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
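As an example of the kind of automated variable selection called for here, a sparse regression method such as the lasso shrinks the coefficients of redundant, collinear predictors to zero. The sketch below runs cross-validated lasso on synthetic data with scikit-learn; it is an illustrative stand-in, not the method developed in the thesis.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_plots, n_features = 200, 50          # synthetic field plots and remote sensing features
X = rng.normal(size=(n_plots, n_features))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n_plots)                       # collinear pair
y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.5, size=n_plots)   # e.g. stem volume

# Cross-validated lasso: regularization strength chosen automatically
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected feature indices:", selected)
```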
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first and second year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
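As a small, language-level illustration of the underlying idea (the actual Socos workflow uses invariant diagrams, PVS and Yices, none of which appear here), the Python sketch below keeps a loop invariant explicit as assertions that are re-checked each time the loop body is extended.

```python
def sum_to(n: int) -> int:
    """Compute 0 + 1 + ... + n while keeping the loop invariant explicit.

    Invariant: total == i * (i - 1) // 2, i.e. total holds the sum of 0..i-1.
    """
    total, i = 0, 0
    while i <= n:
        assert total == i * (i - 1) // 2, "invariant violated before extending"
        total += i
        i += 1
        assert total == i * (i - 1) // 2, "invariant violated after extending"
    # On exit the invariant and i == n + 1 give total == n * (n + 1) // 2
    return total

assert sum_to(10) == 55
```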
Abstract:
This Master's thesis was carried out for a printing house. The company's order-delivery chain had already been developed before the thesis. The changes targeted operating models, quality, throughput and the layout of the production and warehouse facilities; in addition, a large number of machines were renewed and repaired. In connection with the changes, the personnel were adjusted to match the new operating model, and management responsibilities were redistributed at the same time. The thesis consists of three major parts, and its purpose is to describe the internal processes and to develop solutions to selected problems. The first part deals with lean and its theory, and defines the lean tools used in the work. The second part focuses on the practical part of the work, in which the current state of the process is examined. A large part of this thesis consists of studying the work and describing it with simple process descriptions. In the description of the current state, problem areas were also identified, some of which are addressed in the development projects of this work. The list of problem areas can be used to carry out continuous improvement projects in the future. The third part of the work concerns the implementation of the selected development projects. First, the working methods of the continuous improvement projects are described and the reasons for selecting the projects are justified. After the selection, the implementation of the chosen projects is described, and finally the results are reviewed. In addition, the summary presents proposals for further actions arising from the work. Based on the results of the development projects, it was possible to assess future development targets and the tools to be used for them. Since this was a pilot project, a great deal of learning took place, and the results may look more positive as the development work continues. As results, process descriptions were produced for almost every work phase; in addition, the improvement proposals generated alongside the process descriptions can be regarded as results. The first development project was the improvement of the work order, which was made more informative; further improvement was left outside the scope of the thesis. The second development project was the improvement of the management of stocked products, in which forecasting of customer orders was started and inventory values were brought under better control; more results will be achieved if the activity is expanded. The last development project was the improvement of workstation operations, focusing on rationalizing the floor plan, standardizing working methods and implementing the lean tool 5S. The efficiency of the workstation was increased somewhat, although the focus was on learning the new way of working.
Abstract:
Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in the prevention and provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the function of the test there was special interest in analyzing the explanatory power of certain contextual factors related to the home and school. Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high, good, average and low performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test halfway through grade two were also used for validation. The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized textbooks for reading in grade one. Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good. The items also discriminated well. Principal component analysis revealed one powerful latent dimension (initial literacy factor), accounting for 93% of the variance. This implies that it could be possible to use any set of the subtests of the group test for screening and prediction. The K-Means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two were already identified in the group test in grade one. The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the studied school factors, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having been in nursery school was also of importance.
Based on the findings of the study, a short version of the group test was created. It is suggested for use in grade-one screening processes aimed at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research, as well as for actions to improve the literacy skills of Tanzanian children, are presented.
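The principal component and K-Means analyses reported above are standard procedures; the sketch below shows how such an analysis might be run on subtest scores with scikit-learn. The data are synthetic and the code is not taken from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
scores = rng.normal(size=(337, 7))     # synthetic stand-in for 337 children x 7 subtests

# One dominant latent dimension would correspond to the "initial literacy factor"
pca = PCA(n_components=2).fit(scores)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Four clusters, e.g. at-risk children, strugglers, readers, good readers
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print("children per cluster:", np.bincount(labels))
```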