53 results for L71 - Mining, Extraction, and Refining:
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The consumption of manganese is increasing, but huge amounts of manganese still end up in waste in hydrometallurgical processes. The recovery of manganese from multi-metal solutions at low concentrations may not be economical. In addition, poor iron control typically prevents the production of high-purity manganese. Separation of iron from manganese can be done with chemical precipitation or solvent extraction methods. Combined carbonate precipitation with air oxidation is a feasible method to separate iron and manganese due to its fast kinetics, good controllability and economical reagents. In addition, the leaching of manganese carbonate is easier and less acid-consuming than that of hydroxide or sulfide precipitates. Selective iron removal with great efficiency from MnSO4 solution is achieved by combined oxygen or air oxidation and CaCO3 precipitation at pH > 5.8 and at a redox potential of > 200 mV. In order to avoid gypsum formation, soda ash should be used instead of limestone. In such a case, however, extra attention needs to be paid to the reagent mole ratios in order to avoid manganese coprecipitation. After iron removal, pure MnSO4 solution was obtained by solvent extraction using the organophosphorus reagents di-(2-ethylhexyl)phosphoric acid (D2EHPA) and bis(2,4,4-trimethylpentyl)phosphinic acid (CYANEX 272). The Mn/Ca and Mn/Mg selectivities can be increased by decreasing the temperature from the commonly used temperatures (40–60 °C) to 5 °C. The extraction order of D2EHPA (Ca before Mn) at low temperature remains unchanged, but the lowering of temperature causes an increase in viscosity and slower phase separation. Of these reagents, CYANEX 272 is selective for Mn over Ca and would therefore be the better choice if Ca is present in solution. A three-stage Mn extraction followed by two-stage scrubbing and two-stage sulfuric acid stripping is an effective method of producing a very pure MnSO4 intermediate solution for further processing. From the intermediate MnSO4, special Mn products for ion exchange applications were synthesized and studied. Three types of octahedrally coordinated manganese oxide (OMS) materials were chosen for synthesis as an alternative final product for manganese: layer-structured Na-birnessite, tunnel-structured Mg-todorokite and K-kryptomelane. As an alternative to the pure MnSO4 intermediate as a source, kryptomelane was also synthesized using synthetic hydrometallurgical tailings. The results show that the studied OMS materials selectively adsorb Cu, Ni, Cd and K in the presence of Ca and Mg. It was also found that the exchange rates were reasonably high due to the small particle dimensions. The materials are stable under the studied conditions and their maximum Cu uptake capacity was 1.3 mmol/g. Competitive uptake of metals and acid was studied using equilibrium, batch kinetic and fixed-bed measurements. The experimental data were correlated with a dynamic model, which also accounts for the dissolution of the framework manganese. Manganese oxide micro-crystals were also bound onto silica to prepare a composite material having a particle size large enough to be used in column separation experiments. The MnOx/SiO2 ratio was found to affect the properties of the composite significantly: the higher the ratio, the lower the specific surface area, pore volume and pore size. On the other hand, a higher amount of silica binder gives the composites better mechanical properties.
Birnessite and todorokite can be aggregated successfully with colloidal silica at pH 4 and with a MnO2/SiO2 weight ratio of 0.7. The best gelation and drying temperature was 110 °C, and sufficiently strong composites were obtained by an additional heat treatment at 250 °C for 2 h. The results show that silica-supported MnO2 materials can be utilized to separate copper from nickel and cadmium. The behavior of the composites can be explained reasonably well with the presented model and the parameters estimated from the data of the unsupported oxides. The metal uptake capacities of the prepared materials were quite small; for example, the final copper loading was 0.14 mmol/g MnO2. According to the results, the special MnO2 materials have potential for a specific environmental application, the uptake of harmful metal ions.
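As a rough illustration of the reported uptake capacity (1.3 mmol/g for Cu), the sketch below computes a simple batch mass balance for a hypothetical contact between an OMS sorbent and a copper solution; the solution volume, sorbent dose and feed concentration are illustrative assumptions, not values from the study.

    # Minimal batch-uptake mass balance sketch; only the 1.3 mmol/g capacity
    # comes from the abstract, all other numbers are illustrative assumptions.
    CAPACITY_MMOL_PER_G = 1.3      # reported maximum Cu uptake capacity

    def batch_uptake(c0_mmol_l, volume_l, sorbent_g, capacity=CAPACITY_MMOL_PER_G):
        """Return (loading mmol/g, final concentration mmol/L), assuming the
        sorbent either takes up all Cu or saturates at its capacity."""
        available = c0_mmol_l * volume_l          # mmol of Cu in solution
        max_uptake = capacity * sorbent_g         # mmol the sorbent can hold
        taken = min(available, max_uptake)
        loading = taken / sorbent_g
        c_final = (available - taken) / volume_l
        return loading, c_final

    # Example: 1 L of 10 mmol/L Cu contacted with 5 g of OMS material
    print(batch_uptake(10.0, 1.0, 5.0))           # -> (1.3, 3.5)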
Abstract:
This master's thesis studies techniques for embedding a watermark into a spectral image and methods for identifying and detecting watermarks in spectral images. The spectral dimensionality of the original images was reduced using the PCA (Principal Component Analysis) algorithm. The watermark was embedded into the spectral image in the transform domain. According to the proposed model, one transform-domain component was replaced by a linear combination of the watermark and another transform-domain component. The set of parameters used in the embedding was studied. The quality of the watermarked images was measured and analyzed, and recommendations for watermark embedding were given. Several methods were used for watermark identification, and the identification results were analyzed. The robustness of the watermarks against various attacks was verified. A set of detection experiments was carried out, taking into account the parameters used in watermark embedding. ICA (Independent Component Analysis) is considered one possible alternative for watermark detection.
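The embedding model described above, replacing one transform-domain component with a linear combination of the watermark and another component, can be sketched with NumPy roughly as follows; the component indices and mixing weight alpha are illustrative assumptions, since the abstract does not give the actual parameter values.

    import numpy as np

    def embed_watermark(cube, watermark, replaced=2, mixed=0, alpha=0.1):
        """Sketch of PCA-domain watermark embedding for a spectral image.
        cube: (H, W, B) spectral image; watermark: (H, W) pattern.
        Component `replaced` is overwritten with a linear combination of the
        watermark and component `mixed` (indices and alpha are assumptions)."""
        h, w, b = cube.shape
        x = cube.reshape(-1, b).astype(float)
        mean = x.mean(axis=0)
        xc = x - mean
        # PCA basis from the singular value decomposition of the centred data
        _, _, vt = np.linalg.svd(xc, full_matrices=False)
        scores = xc @ vt.T                       # transform-domain components
        wm = watermark.reshape(-1).astype(float)
        scores[:, replaced] = alpha * wm + (1.0 - alpha) * scores[:, mixed]
        marked = scores @ vt + mean              # back to the spectral domain
        return marked.reshape(h, w, b)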
Abstract:
Separation of carboxylic acids from aqueous streams is an important part of their manufacturing process. The aqueous solutions are usually dilute, containing less than 10 % acids. Separation by distillation is difficult because the boiling points of the acids are only marginally higher than that of water; distillation is therefore not only difficult but also expensive due to the evaporation of large amounts of water. Carboxylic acids have traditionally been precipitated as calcium salts. The yields of these processes are usually relatively low and the chemical costs high. In particular, the decomposition of the calcium salts with sulfuric acid produces large amounts of calcium sulfate sludge. Solvent extraction has been studied as an alternative method for the recovery of carboxylic acids. Solvent extraction is based on mixing two immiscible liquids and the transfer of the desired components from one liquid to the other due to an equilibrium difference. In the case of carboxylic acids, the acids are transferred from the aqueous phase to the organic solvent due to physical and chemical interactions. The acids and the extractant form complexes which are soluble in the organic phase. The extraction efficiency is affected by many factors, for instance the initial acid concentration, the type and concentration of the extractant, pH, temperature and extraction time. In this paper, the effects of initial acid concentration, type of extractant and temperature on extraction efficiency were studied. As carboxylic acids are usually the products of the processes, their recovery is desired. Hence the acids have to be removed from the organic phase after the extraction. The removal of the acids from the organic phase also regenerates the extractant, which can then be recycled in the process. The regeneration of the extractant was studied by back-extracting, i.e. stripping, the acids from the organic solution into dilute sodium hydroxide solution. In the solvent regeneration, the regenerability of different extractants and the effects of initial acid concentration and temperature were studied.
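As a worked illustration of how the equilibrium distribution and the phase ratio translate into extraction efficiency, the sketch below uses the standard single-stage relation E = D / (D + Vaq/Vorg); the distribution coefficient and phase volumes are illustrative assumptions, not measured values from the work.

    def single_stage_efficiency(d, v_aq, v_org):
        """Fraction of acid transferred to the organic phase in one ideal stage.
        d: distribution coefficient (c_org / c_aq at equilibrium)."""
        return d / (d + v_aq / v_org)

    # Example with assumed values: D = 2, equal phase volumes
    e = single_stage_efficiency(2.0, 1.0, 1.0)
    print(f"extraction efficiency: {e:.0%}")      # ~67 %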
Abstract:
The aim of the thesis was to study the reuse and refining of wooden packaging waste generated in a forestry machine factory environment, and to find alternative ways of utilizing wooden packaging waste in order to create a new operating model that would decrease the overall amount of waste produced. As environmental and waste legislation has become more rigid and the requirements and control of companies' own environmental management systems have increased, companies have had to consider their environmental aspects more carefully. Companies have to take into account alternative ways of reducing waste through increased reuse and recycling. Part of this waste comes from different forms of packaging. In the metal industry the most heavily used packaging material is wood, as the packed material is heavy and the packaging has to be able to bear heavy stress. The theoretical part of the thesis studies the requirements of packaging and packaging waste legislation, as well as the environmental management systems governing companies' processing of their packaging waste. The theoretical part also includes a process study of the systems that direct packaging waste and the refining of wooden packaging waste, and introduces methods related to the continuous improvement of these processes. The thesis concentrates on designing and creating a new operating model for wooden packaging waste processing. The main target was to find an efficient model in order to decrease the total amount of wooden packaging waste and to increase refining. The empirical part introduces approaches to the re-utilization of wooden packaging waste, as well as a description of the new operating model and its impact.
Abstract:
Latin America's share of the world economy is small compared to its geographical size, population and natural resources. The region is nevertheless considered one of the significant growth markets of the future. Many Latin American countries have industries that exploit natural resources and produce raw materials for both domestic and foreign markets. Typical industries of this kind in Latin America are the mining and forest industries as well as oil and natural gas production. There is hardly any manufacturing of production equipment and machinery for these industries in Latin America; such equipment is usually imported from North America and Europe. This master's thesis studies the market potential of electric motors and frequency converters in Latin America. The study examines the state of the national economies of the Latin American countries and estimates the size of the electric motor and frequency converter markets with the help of customs statistics. The Chilean mining industry is estimated to have particular potential. The thesis also examines the course of the purchasing process in the Chilean mining industry, the role of different customer types in it, and the most important decision criteria in supplier and technology selection.
Abstract:
Optimization is a common procedure, for example after a process has been modified or renewed. The aim of optimization is to find the best way to run the process, or various parts of it, with respect to, for example, certain quality properties. The purpose of this work was to optimize, after an investment, four variables (the refining and amount of one pulp fed to the middle ply, wet pressing, and the amount of spray starch) with respect to three quality properties: ply bond strength, geometric bending stiffness and smoothness. Five mill-scale trial runs were carried out for the work. In the first trial run the intention was to add water or spray starch to one of the ply interfaces of a three-ply board; in the second trial run the refining and the refiner combinations of the aforementioned middle-ply pulp were changed. The first trial run examined the development of ply bond strength, the second that of the other strength properties. The third trial run examined the effect of the refining and amount of the middle-ply pulp, and of a change in the shoe press linear load, on ply bond strength, geometric bending stiffness and smoothness. In the fourth trial run an attempt was made to repeat the best point of the previous trial run and, by slightly changing the parameters, to achieve even better quality properties; this trial also examined the effect of the variables on ply bond strength, geometric bending stiffness and smoothness. The purpose of the last trial was to study the effect of reducing the amount of the same middle-ply pulp on ply bond strength. Due to various setbacks, the results obtained from the trial runs remained rather meager. The trials did show, however, that the strength properties did not improve even though refining was continued. Refining that was unnecessary for the development of the strength properties could thus be omitted, saving energy and avoiding other problems possibly caused by far-advanced refining. With less refining, the specific edge load could also be kept below the level desired at the mill. The missing strength properties must be achieved by other means.
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence "Protein A causes protein B to bind protein C" can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system has good performance, reaching first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as showing competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only has good performance but is also generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that led to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
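The nested event from the example sentence, CAUSE(A, BIND(B, C)), can be written down in a simple graph-like form as below. This is only an illustrative sketch of the idea of trigger nodes with typed, directed arguments that may point to other events, not the actual TEES or BioNLP Shared Task data format.

    # Illustrative sketch of an event graph: nodes are entities or triggers,
    # edges are typed, directed arguments. Not the actual TEES/BioNLP format.
    entities = {"T1": "Protein A", "T2": "Protein B", "T3": "Protein C"}
    triggers = {"E1": "CAUSE", "E2": "BIND"}
    arguments = [
        ("E2", "Theme", "T2"),     # BIND(B, C)
        ("E2", "Theme2", "T3"),
        ("E1", "Cause", "T1"),     # CAUSE(A, ...)
        ("E1", "Theme", "E2"),     # ... nesting the BIND event
    ]

    def describe(event_id):
        """Recursively unfold a nested event into a readable string."""
        args = [a for a in arguments if a[0] == event_id]
        parts = [entities[t] if t in entities else describe(t) for _, _, t in args]
        return f"{triggers[event_id]}({', '.join(parts)})"

    print(describe("E1"))   # CAUSE(Protein A, BIND(Protein B, Protein C))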
Abstract:
This master's thesis investigates the significant macroeconomic and firm-level determinants of CAPEX in the Russian oil and mining sectors. It also studies the Russian oil and mining sectors themselves: their development, characteristics and current situation. Panel data methodology was implemented to identify the determinants of CAPEX in the Russian oil and mining sectors and to test the derived hypotheses. The core sample consists of annual financial data of 45 publicly listed Russian oil and mining sector companies. The timeframe of the research is the six-year period from 2007 to 2013. The findings of the thesis show that Gross Sales, Return On Assets, Free Cash Flow and Long Term Debt are the firm-level performance variables, and Russian GDP, Export, Urals and the Reserve Fund the macroeconomic variables, that determine the magnitude of new capital expenditures reported by publicly listed Russian oil and mining sector companies. These results do not contradict previous research papers; indeed, they confirm them. Furthermore, the findings from emerging countries such as Malaysia, India and Portugal are analogous to Russia. The empirical research is edifying and novel. The findings of this master's thesis are highly valuable for the scientific community, especially for researchers who investigate the determinants of CAPEX in developing countries. Moreover, the results can be utilized as a cogent argument when companies and investors are making strategic decisions concerning the Russian oil and mining sectors.
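A minimal sketch of the kind of panel regression used to test such determinants is shown below, here as a least-squares regression with firm and year dummies fitted with the statsmodels formula API; the column names, the data file and the exact specification are hypothetical placeholders, not the thesis' actual data set or model.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per firm-year with CAPEX and its candidate
    # determinants (all column names are placeholders, not the thesis' data).
    df = pd.read_csv("capex_panel.csv")

    # Firm-level determinants plus macroeconomic variables, with firm and year
    # fixed effects approximated by dummy variables.
    model = smf.ols(
        "capex ~ gross_sales + roa + free_cash_flow + long_term_debt"
        " + gdp_growth + exports + urals_price + reserve_fund"
        " + C(firm) + C(year)",
        data=df,
    )
    # Standard errors clustered by firm, a common choice for panel data.
    result = model.fit(
        cov_type="cluster",
        cov_kwds={"groups": df["firm"].astype("category").cat.codes},
    )
    print(result.summary())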
Abstract:
Solvent extraction of calcium and magnesium impurities from a lithium-rich brine (Ca ~ 2,000 ppm, Mg ~ 50 ppm, Li ~ 30,000 ppm) was investigated using a continuous counter-current solvent extraction mixer-settler set-up. The literature review includes a general review of the resources, demand and production methods of Li, followed by the basics of solvent extraction. The experimental section includes batch experiments for the investigation of the pH isotherms of three extractants: D2EHPA, Versatic 10 and LIX 984, at concentrations of 0.52, 0.53 and 0.50 M in kerosene, respectively. Based on the pH isotherms, LIX 984 showed no affinity for the solvent extraction of Mg and Ca at pH ≤ 8, while D2EHPA and Versatic 10 were effective in the extraction of Ca and Mg. Based on the constructed pH isotherms, loading isotherms of D2EHPA (at pH 3.5 and 3.9) and Versatic 10 (at pH 7 and 8) were further investigated. Furthermore, based on the McCabe-Thiele method, two extraction stages and one stripping stage (using HCl with a concentration of 2 M for Versatic 10 and 3 M for D2EHPA) were used in the continuous runs. The merits of Versatic 10 in comparison to D2EHPA are higher selectivity for Ca and Mg, faster phase disengagement, no detrimental change in viscosity due to the sheer amount of metal extracted, and lower acidity in stripping. On the other hand, D2EHPA has lower aqueous solubility and is capable of removing Mg and Ca simultaneously even at higher Ca loading (A/O in continuous runs > 1). In general, a shorter residence time (~2 min), lower temperature (~23 °C), lower pH values (6.5-7.0 for Versatic 10 and 3.5-3.7 for D2EHPA) and a moderately low A/O value (< 1:1) would give removal of 100% of the Ca and nearly 100% of the Mg while keeping the Li loss below 4%, much lower than in conventional precipitation, in which 20% of the Li is lost.
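To illustrate how a small number of counter-current stages can remove nearly all of the Ca and Mg while limiting Li loss, the sketch below applies the Kremser relation for N ideal counter-current stages; the distribution coefficients are illustrative assumptions, and only the stage count and the O/A > 1 condition follow the description above.

    def fraction_removed(d, o_to_a, stages):
        """Kremser relation: fraction of a metal extracted into the organic
        phase after `stages` ideal counter-current stages.
        d: distribution coefficient, o_to_a: organic-to-aqueous flow ratio."""
        e = d * o_to_a                      # extraction factor
        if abs(e - 1.0) < 1e-9:
            remaining = 1.0 / (stages + 1)
        else:
            remaining = (e - 1.0) / (e ** (stages + 1) - 1.0)
        return 1.0 - remaining

    # Two extraction stages at A/O < 1 (here O/A = 1.25); D values are assumed.
    for metal, d in {"Ca": 50.0, "Mg": 20.0, "Li": 0.02}.items():
        print(metal, f"{fraction_removed(d, 1.25, 2):.1%} extracted")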
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines only are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues his/her query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from a database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that surveys of the deep Web existing so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
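A minimal sketch of the kind of automated form querying discussed above is shown below, using the requests and BeautifulSoup libraries; the URL, form field names and result-page markup are hypothetical placeholders, and this is not the thesis' form query language or the I-Crawler implementation.

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical search interface of a web database (URL and field names are
    # placeholders). A query is issued by posting form values, as a user would.
    FORM_URL = "http://example.org/search"
    form_values = {"title": "solvent extraction", "year_from": "2000"}

    response = requests.post(FORM_URL, data=form_values, timeout=30)
    response.raise_for_status()

    # Extract structured data from the dynamically generated result page.
    soup = BeautifulSoup(response.text, "html.parser")
    for row in soup.select("table.results tr"):
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        if cells:
            print(cells)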
Abstract:
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken is one using full parsing (syntactic analysis of the entire structure of sentences) and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance, and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time. To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships.
The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6000 entities, 2500 relationships and 28,000 syntactic dependencies in 1100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships. Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
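The idea of converting different parser outputs into a shared dependency-based representation can be sketched as below; the label mapping and the input triples are invented for illustration and are not the actual conversion rules developed in the work.

    # Illustrative sketch: map parser-specific dependency labels onto a shared
    # scheme so analyses from different parsers become directly comparable.
    LABEL_MAP = {"NMOD": "nmod", "SBJ": "nsubj", "OBJ": "dobj", "VMOD": "advmod"}

    def to_unified(parse):
        """parse: list of (head_index, dependent_index, label) triples from one
        parser; returns the same triples with labels in the shared scheme."""
        unified = []
        for head, dep, label in parse:
            unified.append((head, dep, LABEL_MAP.get(label, label.lower())))
        return unified

    example = [(2, 1, "SBJ"), (0, 2, "ROOT"), (2, 4, "OBJ"), (4, 3, "NMOD")]
    print(to_unified(example))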
Abstract:
Liquid-liquid extraction is a mass transfer process for recovering desired components from liquid streams by contacting them with an immiscible liquid solvent. The literature part of this thesis deals with the theory of liquid-liquid extraction and the main steps of extraction process design. The experimental part investigates the extraction of organic acids from aqueous solution. The aim was to find the optimal solvent for recovering the organic acids from aqueous solutions. The other objective was to test the selected solvent at pilot scale in a packed column and to compare the effectiveness of structured and random packing, the effect of dispersed-phase selection, and the effect of the wettability properties of the packing material. The experiments showed that the selected solvent works well with dilute organic acid solutions. The random packing proved to be more efficient than the structured packing due to the higher hold-up of the dispersed phase. Dispersing the phase that is present in the larger volume proved to be more efficient. With the random packing, the material that was wetted by the dispersed phase was more efficient due to the higher hold-up of the dispersed phase. According to the literature, the behavior is usually the opposite.
Abstract:
Virtually every cell and organ in the human body is dependent on a proper oxygen supply. This is taken care of by the cardiovascular system, which supplies tissues with oxygen precisely according to their metabolic needs. Physical exercise is one of the most demanding challenges the human circulatory system can face. During exercise, skeletal muscle blood flow can easily increase some 20-fold, and its proper distribution to and within muscles is important for optimal oxygen delivery. The local regulation of skeletal muscle blood flow during exercise remains little understood, but adenosine and nitric oxide may take part in this process. In addition to acute exercise, long-term vigorous physical conditioning also induces changes in the cardiovasculature, which lead to improved maximal physical performance. The changes are largely central, such as structural and functional changes in the heart. The function and reserve of the heart's own vasculature can be studied by adenosine infusion, which according to animal studies evokes vasodilation via its A2A receptors. This has, however, never been addressed in humans in vivo, and studies in endurance athletes have shown inconsistent results regarding the effects of sports training on myocardial blood flow. This study was performed on healthy young adults and endurance athletes, and local skeletal and cardiac muscle blood flow was measured by positron emission tomography. In the heart, myocardial blood flow reserve and adenosine A2A receptor density were measured, and in skeletal muscle, oxygen extraction and consumption were also measured. The role of adenosine in the control of skeletal muscle blood flow during exercise, and its vasodilator effects, were addressed by infusing competitive inhibitors and adenosine into the femoral artery. The formation of nitric oxide in skeletal muscle was also inhibited by a drug, with and without prostanoid blockade. As a result and conclusion, it can be said that skeletal muscle blood flow heterogeneity decreases with increasing exercise intensity, most likely due to increased vascular unit recruitment, but exercise hyperemia is a very complex phenomenon that cannot be mimicked by pharmacological infusions, and no single regulatory factor (e.g. adenosine or nitric oxide) accounts for a significant part of exercise-induced muscle hyperemia. However, in the present study it was observed for the first time in humans that nitric oxide is an important regulator not only of the basal level of muscle blood flow but also of oxygen consumption, and that together with prostanoids it affects muscle blood flow and oxygen consumption during exercise. Finally, even vigorous endurance training does not seem to lead to a supranormal myocardial blood flow reserve, and receptors other than A2A also mediate the vasodilator effects of adenosine. With respect to cardiac work, the athlete's heart seems to be luxuriously perfused at rest, which may result from reduced oxygen extraction or impaired efficiency due to the pronouncedly enhanced myocardial mass developed to excel in strenuous exercise.
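The relation between blood flow, oxygen extraction and oxygen consumption used in such measurements follows the Fick principle; the sketch below computes muscle oxygen consumption from hypothetical example values, not from the study's data.

    def muscle_vo2(blood_flow_ml_min, o2_extraction_fraction, arterial_o2_ml_per_ml=0.2):
        """Fick principle: VO2 = blood flow x (arterial - venous) O2 content,
        written here as flow x extraction fraction x arterial O2 content.
        All input values in the example below are illustrative."""
        return blood_flow_ml_min * o2_extraction_fraction * arterial_o2_ml_per_ml

    # Example: 500 ml/min flow, 60 % extraction, 0.2 ml O2 per ml blood
    print(muscle_vo2(500.0, 0.6))      # -> 60.0 ml O2 / min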
Abstract:
Corporate Social Responsibility refers to a company's interest in, and actions towards, its environment and society, taken of the company's own free will in order to give back to the community and the environment. Corporate Social Responsibility is a current topic, as companies are challenged to take responsibility for their actions due to constantly tightening environmental legislation and rising pressure for transparency from the public. The objective of this Master's Thesis research is to study whether Corporate Social Responsibility affects suppliers' brand image and mining companies' buying decisions within the global mining industry. The research method is qualitative, and the research is conducted with secondary and primary research methods. The research aims to find out the implications of the results for the case company, Larox: how the case company should start to develop a Corporate Social Responsibility (CSR) program of its own, how it could benefit from CSR as a competitive advantage, and what actions could be taken in the company's marketing. Conclusions are drawn based on both the secondary and primary research results. Both imply that CSR is well present in the global mining industry, and that a supplier's CSR policy has a positive effect on company image, which positively affects the company's brand; brand, in turn, has a positive effect on mining companies' buying decisions. It can be concluded that CSR indirectly affects buying decisions, and the case company should consider developing a CSR program of its own.
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
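The connection between bipartite ranking and AUC mentioned above can be made concrete with a small sketch: AUC equals the fraction of (positive, negative) pairs that the scoring function orders correctly. The scores and labels below are illustrative, and this is not the RankRLS implementation itself.

    def pairwise_auc(scores, labels):
        """AUC as a pairwise ranking statistic: the fraction of (positive,
        negative) pairs ordered correctly, ties counted as half correct."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        correct = sum(
            1.0 if p > n else 0.5 if p == n else 0.0
            for p in pos for n in neg
        )
        return correct / (len(pos) * len(neg))

    # Illustrative scores for four examples with labels 1, 1, 0, 0
    print(pairwise_auc([0.9, 0.4, 0.6, 0.1], [1, 1, 0, 0]))   # -> 0.75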