Abstract:
Coal, natural gas, and petroleum-based liquid fuels are still the most widely used energy sources in modern society. The current scenario contrasts with the petroleum shortage foreseen at the beginning of the 21st century, when the concept of "energy security" emerged as an urgent agenda to ensure a good balance between energy supply and demand. Far beyond protecting refineries and oil pipelines from terrorist attacks, these issues soon developed into a portfolio of measures related to process sustainability, involving at least three fundamental dimensions: (a) the need for technological breakthroughs to improve energy production worldwide; (b) the improvement of energy efficiency in all sectors of modern society; and (c) the increase of the social perception that education is key to a better use of our energy resources. Together with these technological, economic, and social issues, "energy security" is also strongly influenced by environmental issues involving greenhouse gas emissions, loss of biodiversity in environmentally sensitive areas, pollution, and poor solid waste management. For these and other reasons, the implementation of more sustainable practices in currently available industrial facilities and the search for alternative energy sources that could partly replace fossil fuels have become a major priority throughout the world. Regarding fossil fuels, the main technological bottlenecks are related to the exploitation of less accessible petroleum resources such as those in the pre-salt layer, ranging from the proper characterization of these deep-water oil reservoirs, the development of lighter and more efficient equipment for both exploration and exploitation, and the optimization of drilling techniques to the achievement of further improvements in production yields and the establishment of specialized training programs for the technical staff. The production of natural gas from shale is also emerging in several countries, but its large-scale production faces several problems, ranging from the unavoidable environmental impact of shale mining to the bad consequences of its large-scale exploitation in the past. The large-scale use of coal has similar environmental problems, which are aggravated by difficulties in its proper characterization. Also, the mitigation of harmful gases and particulate matter released during combustion still depends on the development of new gas-cleaning technologies, including more efficient catalysts to improve the emission profile. On the other hand, biofuels are still struggling to fulfill their role in reducing our high dependence on fossil fuels. Fatty acid alkyl esters (biodiesel) from vegetable oils and ethanol from cane sucrose and corn starch are mature technologies whose market share is partially limited by the availability of their raw materials. For this reason, there has been a great effort to develop "second-generation" technologies to produce methanol, ethanol, butanol, biodiesel, biogas (methane), bio-oils, syngas, and synthetic fuels from lower-grade renewable feedstocks such as lignocellulosic materials, whose consumption would not interfere with the rather sensitive issues of food security. Advanced fermentation processes are envisaged as "third-generation" technologies; these are primarily linked to the use of algal feedstocks, as well as other organisms that could produce biofuels or simply provide microbial biomass for the processes listed above.
Due to the complexity and cost of their production chain, "third-generation" technologies usually aim at high-value-added biofuels such as biojet fuel, biohydrogen, and hydrocarbons with fuel performance similar to diesel or gasoline, cases in which the use of genetically modified organisms is usually required. In general, the main challenges in this field can be summarized as follows: (a) the need to prospect alternative sources of biomass that are not linked to the food chain; (b) the intensive use of green chemistry principles in current industrial activities; (c) the development of mature technologies for the production of second- and third-generation biofuels; (d) the development of safe bioprocesses based on environmentally benign microorganisms; (e) the scale-up of potential technologies to a suitable demonstration scale; and (f) a full understanding of the technological and environmental implications of the food vs. fuel debate. On this basis, the main objective of this article is to stimulate discussion and support decision making on "energy security" issues and their challenges for modern society, so as to encourage the participation of the Brazilian chemistry community in the design of a road map for a safer, sustainable, and more prosperous future for our nation.
Abstract:
A new, convenient method for the preparation of 2-substituted benzimidazoles and bis-benzimidazoles is presented. In this method, o-phenylenediamines were condensed with the bisulfite adducts of various aldehydes and dialdehydes under neat conditions by microwave heating. The results were also compared with those of synthesis by conventional heating under reflux. The structures of the products were confirmed by infrared, ¹H-, and ¹³C-NMR spectroscopy. Short reaction times, good yields, easy purification of products, and mild reaction conditions are the main advantages of this method.
Abstract:
Materials based on tungstophosphoric acid (TPA) immobilized on NH4ZSM5 zeolite were prepared by wet impregnation of the zeolite matrix with TPA aqueous solutions, whose concentration was varied in order to obtain TPA contents of 5%, 10%, 20%, and 30% w/w in the solid. The materials were characterized by N2 adsorption-desorption isotherms, XRD, FT-IR, ³¹P MAS-NMR, TGA-DSC, and DRS-UV-Vis, and their acidic behavior was studied by potentiometric titration with n-butylamine. The BET surface area (SBET) decreased when the TPA content was raised, as a result of zeolite pore blocking. The X-ray diffraction patterns of the solids modified with TPA presented only the characteristic peaks of the NH4ZSM5 zeolite and an additional set of peaks assigned to the presence of (NH4)3PW12O40. According to the Fourier transform infrared and ³¹P magic-angle-spinning nuclear magnetic resonance spectra, the main species present in the samples was the [PW12O40]3- anion, which was partially transformed into the [P2W21O71]6- anion during the synthesis and drying steps. The thermal stability of the NH4ZSM5TPA materials was similar to that of their parent zeolites. Moreover, the samples with the highest TPA content exhibited band gap energy values similar to those reported for TiO2. The immobilization of TPA on NH4ZSM5 zeolite yielded catalysts with high photocatalytic activity in the degradation of methyl orange dye (MO) in water at 25 °C. These catalysts can be reused at least three times without any significant decrease in the degree of degradation.
Abstract:
The objective of this Master's thesis was to examine the current state of the information systems applied to the target company's enterprise resource planning (ERP) and to identify areas requiring development. Based on this assessment of the current state, the thesis was to produce a comparison of alternative system solutions suitable for managing the business. Through a thorough investigation and a comparison of the alternatives, the aim was to produce material that facilitates decision making and supports the target company in selecting an ERP system solution. The main methods used in the study were C-CEI, a customer-centered ERP implementation method, and the analytic hierarchy process (AHP). The study showed that a successful selection of an information system solution requires a thorough analysis of the current state of the business. It is also important for companies to be aware that several different decision-support methods exist for such multi-criteria and difficult decision-making situations; these methods can clarify the decision problem and thus make it easier to find a solution.
Abstract:
The changing approaches within feminist development economics bring with them new ways of speaking about women, men, and development. By analyzing texts written in the field of feminist economics from the 1960s to the beginning of the 2000s, the present study documents how the language of text producers in development economics constitutes, and depends on, these writers' attitudes toward development issues and toward women and men. The analysis focuses on how activation and passivation processes are used in the representation of the two main participants, women and men, how the concept of gender is introduced, and how development issues change across approaches, over time, and between genres. The theoretical framework spans several disciplines: systemic functional grammar and critical discourse analysis, but also organizational discourse analysis and development studies. The texts selected for the analysis come from three different sources: plans from the world conferences on women organized by the United Nations, resolutions on women and development adopted by the United Nations General Assembly, and plans of action for women in development written by the Food and Agriculture Organization of the United Nations (FAO). The linguistic method of analysis builds on the system of roles and ways of representing participants developed by Halliday and Van Leeuwen. For each decade and each genre, the study examines the changes in process types and participant roles, as well as the shift in focus on women-related issues and the conceptualization of gender. The quantitative analysis is complemented and reinforced by a detailed analysis of text fragments from different periods and approaches. The results of the study are grammatical and lexical in nature and relate to gender, genre, and time. The study shows that activation processes are considerably more numerous than passivation processes in the representation of women. A better understanding of participant representation is achieved, however, by regrouping the grammatical processes into identifying, activating, and directed processes. The shift from a focus on women to a focus on gender is not so much a change in the processes that represent the participants as a change in the rhetoric of the approaches and their focus: from the integration of women to women's empowerment, from women's situation to gender relations, from urgent additions to social conflict and cooperation.
Abstract:
This Master's thesis explores how a global industrial corporation's after-sales service department should arrange its installed base management practices in order to maintain and utilize installed base information effectively. The case company has product-related records, such as product lifecycle information, service history, and information about product performance. This information is often collected and organized case by case; therefore, the systematic and effective use of installed base information is difficult, and an overview of the installed base is missing. The goal of the thesis was to find out how the case company can improve its installed base maintenance and management practices and improve the availability and reliability of installed base information. Installed base information management practices were first examined through the literature. The empirical research was conducted through interviews and a questionnaire survey targeted at the case company's service department. The purpose of the research was to find out the challenges related to the service department's information management practices. The study also identified the installed base information needs and the improvement potential in the availability of information. Based on the empirical findings, recommendations for improving installed base management practices and information availability were created. On the basis of these recommendations, the following actions are proposed for the case company: developing the service report, improving the change management process, ensuring the quality of product documentation in the early stages of the product life cycle, and committing to improving installed base management practices.
Abstract:
The nearly 200-year-old scientific discipline of organic synthetic chemistry has contributed strongly to the welfare of modern societies. One of the flagships of organic synthetic chemistry is the development and production of new pharmaceuticals and, in particular, the active substances therein. It is therefore important to develop new synthetic methods that can be applied to the preparation of pharmaceutically relevant target structures. In this context, the ultimate goal is not merely a successful synthesis of the target molecule; it is increasingly important to develop synthetic routes that fulfill the criteria of sustainable development. One of the most central tools available to an organic chemist in this context is catalysis, or more specifically the possibility of applying various catalytic reactions in the preparation of complex target structures. The corresponding industrial processes are characterized by high efficiency and minimized waste production, which naturally benefits the chemical industry while considerably reducing the negative environmental effects. In this doctoral thesis, new synthetic routes for the production of fine chemicals of pharmaceutical relevance have been developed by combining relatively simple transformations into new reaction sequences. All reaction sequences discussed in this thesis began with a metal-mediated allylation of selected aldehydes or aldimines. The obtained products, containing a carbon-carbon double bond with an adjacent hydroxyl or amino group, were then further modified by applying well-known catalytic reactions. All synthesized molecules presented in this thesis are characterized as fine chemicals with high potential for pharmaceutical applications. In addition, a wide variety of catalytic reactions were successfully applied in the synthesis of these molecules, which in turn reinforces the importance of catalytic tools in the organic chemist's toolbox.
Abstract:
In a modern, dynamic environment, organizations face new requirements for success and competitive advantage. This also sets new requirements for leaders. The term ambidexterity is used for organizations that are able to manage short-term efficiency and long-term innovation simultaneously. Ambidextrous leaders have the same capability at an individual level: they are able to balance efficiency and flexibility. This study examined the confrontation of these two competing concepts from the leadership perspective. The aim of the study was to understand this recently arisen concept and its antecedents and to examine what is currently known about ambidextrous leadership. This was a case study with data collected through theme interviews in a results-oriented customer centre organization that is undergoing a cultural change in leadership and empowerment. The organization wants to be efficient and flexible at the same time (i.e., ambidextrous), which requires a new type of leadership. The aim was to describe the capabilities of and criteria for an ambidextrous leader and to examine the leadership roles related to ambidextrous leadership at different hierarchical levels. The case organization had also created systematic means to support this cultural change, and the effects of this process on leadership were studied. This study showed that the area is still widely unexplored and that contradictory views are presented. The study contributes to the scarce research on ambidexterity in leadership and individuals. It presents a description of ambidextrous leadership and describes the capabilities of an ambidextrous leader. Ambidextrous leaders are able to make cognitive decisions about their leadership style according to the situation, which requires either leadership related to efficiency, such as transactional leadership, or leadership related to flexibility, such as transformational leadership. Their leadership style supports both short-term and long-term goals. This study also shows that the role of top management is vital and that operational leaders rely on their example.
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations, and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. In this thesis, we therefore study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, thereby providing higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A honeycomb NoC architecture with turn-model-based deadlock-free routing algorithms is proposed in this thesis. Our analysis and simulation-based evaluation show that honeycomb NoCs outperform their mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
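To make the agent idea concrete, the sketch below shows how an on-chip agent might map a monitored utilization value to a voltage/frequency operating point, with power gating for idle regions. It is a minimal sketch under stated assumptions: the interface, operating points, and thresholds are illustrative, not the thesis's implementation.

```python
# Minimal sketch of an agent-based DVFS controller (hypothetical interface).
# Each agent watches the utilization of one NoC region and picks the lowest
# voltage/frequency pair whose frequency still covers the observed load.

# Assumed (voltage V, frequency GHz) operating points, highest first.
OPERATING_POINTS = [(1.1, 2.0), (1.0, 1.6), (0.9, 1.2), (0.8, 0.8)]

class DvfsAgent:
    def __init__(self, region_id, gate_threshold=0.05):
        self.region_id = region_id
        self.gate_threshold = gate_threshold  # below this, power-gate the region

    def decide(self, utilization, max_freq=2.0):
        """Map observed utilization (0..1) to a (voltage, frequency) point,
        or None to signal power gating of an idle region."""
        if utilization < self.gate_threshold:
            return None  # idle: cut leakage energy via power gating
        required = utilization * max_freq
        # Choose the slowest point that still meets the required throughput.
        for volt, freq in reversed(OPERATING_POINTS):
            if freq >= required:
                return volt, freq
        return OPERATING_POINTS[0]  # fall back to the fastest point

agent = DvfsAgent(region_id=3)
for load in (0.02, 0.35, 0.9):
    print(load, "->", agent.decide(load))
```

Raising the gate threshold trades responsiveness for leakage savings; the granularity question studied in the thesis corresponds to how large a region each agent controls.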
Abstract:
Pumping processes requiring a wide range of flow are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable-speed control allows the required output for the process to be delivered with a varying number of operated pump units and selected rotational speed references. However, the optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information available for system modelling in typical parallel pumping applications, such as wastewater treatment and various cooling and water delivery tasks, can be limited, and the lack of real-time operating point monitoring often limits accurate energy efficiency optimization. Hence, easily implementable control strategies that can be adopted with minimal system data are necessary. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in systems in which each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the suitable operating conditions for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining a speed reference for each parallel pump unit that results in energy-efficient operation of the pumping system.
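As an illustration of characteristic-curve-based operation point estimation, the sketch below scales a nominal power-flow curve with the pump affinity laws and inverts it to estimate flow from quantities a frequency converter already knows (shaft power and rotational speed). The curve data and the QP-curve approach are assumptions for illustration, not necessarily the estimator used in the thesis.

```python
# Sketch of characteristic-curve-based operation point estimation
# (illustrative only; coefficients and the QP-curve method are assumptions).
import numpy as np

N_NOM = 1450.0  # nominal speed, rpm

# Nominal flow/power samples from a hypothetical pump datasheet.
q_nom = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # flow, l/s
p_nom = np.array([5.0, 9.0, 13.5, 18.5, 24.0])    # shaft power, kW

def estimate_flow(p_meas, n_meas):
    """Estimate flow from shaft power and speed, both available in a
    frequency converter, by scaling the nominal P(Q) curve with the
    affinity laws (Q ~ n, P ~ n^3) and inverting it numerically."""
    scale = n_meas / N_NOM
    q_scaled = q_nom * scale          # affinity: flow scales linearly
    p_scaled = p_nom * scale ** 3     # affinity: power scales cubically
    # P(Q) is monotonic here, so simple interpolation inverts the curve.
    return float(np.interp(p_meas, p_scaled, q_scaled))

print(estimate_flow(p_meas=10.0, n_meas=1300.0))  # estimated flow in l/s
```

With a flow estimate per unit, a supervisory controller can compare speed-reference candidates and select the combination with the lowest total estimated power for the required total flow.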
Abstract:
In recent years, chief information officers (CIOs) around the world have identified Business Intelligence (BI) as their top priority and as the best way to enhance their enterprises' competitiveness. Yet, many enterprises are struggling to realize the business value that BI promises. This discrepancy raises important questions, for example: what are the critical success factors of Business Intelligence and, more importantly, how can it be ensured that a Business Intelligence program enhances an enterprise's competitiveness? The main objective of the study is to find out how it can be ensured that a BI program meets its goals of providing competitive advantage to an enterprise. The objective is approached with a literature review and a qualitative case study. For the literature review, the main objective gives rise to three research questions (RQs). RQ1: What is Business Intelligence and why is it important for modern enterprises? RQ2: What are the critical success factors of Business Intelligence programs? RQ3: How can it be ensured that the CSFs are met? The qualitative case study covers the BI program of a Finnish global manufacturing company. The research questions for the case study are as follows. RQ4: What is the current state of the case company's BI program and what are the key areas for improvement? RQ5: In what ways could the case company's Business Intelligence program be improved? The case company's BI program is researched using the following methods: action research, semi-structured interviews, maturity assessment, and benchmarking. The literature review shows that Business Intelligence is a technology-based information process that contains a series of systematic activities driven by the specific information needs of decision-makers. The objective of BI is to provide accurate, timely, fact-based information that enables taking actions that lead to competitive advantage. There are many reasons for the importance of Business Intelligence, two of the most important being: 1) it helps to bridge the gap between an enterprise's current and desired performance, and 2) it helps enterprises align with key performance indicators, meaning it helps an enterprise align towards its key objectives. The literature review also shows that there are known critical success factors (CSFs) for Business Intelligence programs which have to be met if the above-mentioned value is to be achieved, for example: committed management support and sponsorship, a business-driven development approach, and sustainable data quality. The literature review shows that the most common challenges are related to these CSFs and, more importantly, that overcoming these challenges requires a more comprehensive form of BI, called Enterprise Performance Management (EPM). EPM links measurement to strategy by focusing on what is measured and why. The case study shows that many of the challenges faced in the case company's BI program are related to the above-mentioned CSFs. The main challenges are: lack of support and sponsorship from the business, lack of visibility into overall business performance, lack of a rigid BI development process, lack of a clear purpose for the BI program, and poor data quality.
To overcome these challenges, the case company should define and design an enterprise metrics framework, ensure that BI development requirements are gathered and prioritized by the business, focus on data quality and ownership, and finally define clear goals for the BI program and then support and sponsor these goals.
Abstract:
Advancements in IC processing technology have led to the innovation and growth happening in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes in the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, making use of noise-variation-tolerant, leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time are essential. In this regard, the spatial temperature profile of the global Cu nanowire for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die that is closer to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented. Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC is presented; the proposed mapping algorithm has been shown to reduce the effective area reeling under high temperatures compared to the state of the art.
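As a rough illustration of thermal-aware application mapping, the sketch below greedily assigns the most power-hungry tasks to the tiles with the best heat-removal paths. The thermal-resistance weights and the greedy rule are assumptions for illustration, not the dissertation's algorithm.

```python
# Sketch of a greedy thermal-aware task-to-tile mapping for a 2D NoC
# (illustrative; the thermal weights and greedy rule are assumptions).

def thermal_aware_map(task_power, tile_resistance):
    """Assign the most power-hungry tasks to the tiles with the lowest
    thermal resistance (i.e., the best heat-removal paths), spreading
    hotspots instead of clustering them.

    task_power: {task_id: average power in W}
    tile_resistance: {(x, y): thermal resistance in K/W}
    """
    hot_tasks = sorted(task_power, key=task_power.get, reverse=True)
    cool_tiles = sorted(tile_resistance, key=tile_resistance.get)
    return dict(zip(hot_tasks, cool_tiles))

tasks = {"dec": 1.8, "dct": 1.2, "vlc": 0.6, "ctrl": 0.2}
tiles = {(0, 0): 2.1, (0, 1): 1.7, (1, 0): 1.7, (1, 1): 1.3}
print(thermal_aware_map(tasks, tiles))  # e.g. {'dec': (1, 1), ...}
```

A practical mapper would also weigh communication distance between tasks; this sketch isolates only the thermal term of that trade-off.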
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one in this field: digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains; in principle, any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
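To make the dataflow model and the idea of quasi-static scheduling concrete, here is a minimal sketch of actors that communicate only through FIFO queues, with a pre-calculated firing sequence and a single run-time decision. It illustrates the paradigm only and is not RVC-CAL or the thesis's toolchain; the actors and schedule are hypothetical.

```python
# Minimal sketch of dataflow actors with FIFO queues and firing rules
# (illustrative of the model, not of RVC-CAL or the thesis toolchain).
from collections import deque

class Actor:
    def __init__(self, fn, inputs, outputs, need=1):
        self.fn, self.inputs, self.outputs, self.need = fn, inputs, outputs, need

    def can_fire(self):  # firing rule: enough tokens on every input queue
        return all(len(q) >= self.need for q in self.inputs)

    def fire(self):  # consume tokens, compute, produce tokens
        args = [q.popleft() for q in self.inputs for _ in range(self.need)]
        token = self.fn(*args)
        for q in self.outputs:
            q.append(token)

src_q, mid_q, out_q = deque([1, 2, 3, 4]), deque(), deque()
scale = Actor(lambda x: 2 * x, [src_q], [mid_q])
accum = Actor(lambda a, b: a + b, [mid_q], [out_q], need=2)

# A quasi-static schedule: the fixed sequence (scale, scale, accum) is
# pre-calculated; only the outer "data left?" test is decided at run time.
while src_q or accum.can_fire():
    for actor in (scale, scale, accum):
        if actor.can_fire():
            actor.fire()
print(list(out_q))  # [6, 14] under these assumptions
```

The fixed inner sequence plays the role of one static schedule; a model checker, as described above, would search the actors' state space to find such sequences and the few run-time guards that select between them.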