867 results for Graph-based method
Abstract:
This Master's thesis surveys the competitiveness of small combined heat and power (CHP) plants fired with solid biofuel in six European countries. The countries judged to have the most potential (the Netherlands, Great Britain, Italy, Austria, Poland and Denmark) were selected on the basis of earlier market studies. The thesis includes compiled overviews of the energy markets and energy policy guidelines of these countries, as well as of the support and steering instruments each country uses to promote bioenergy. One aim of the work was to determine how much a solid-biofuel CHP plant of the 3.5 MWth / 1 MWe size class may cost the customer, given the boundary conditions set by the energy market of the country in question and by the customer's energy demand. The break-even level of the investment was examined with a simple payback-time comparison model, in which the customer's alternative to purchasing the power plant was the purchase of a wood-chip or natural gas heating plant combined with electricity bought from the grid. Based on the study, the most favorable markets for the examined power plant appear to be Austria, Italy and Denmark. In these countries, owing among other things to expensive market electricity, the maximum profitable investment cost of a small solid-biofuel CHP plant was at a level that can already be reached with current power plant construction costs. In addition, these countries encouraged the use of solid biofuel and cogeneration through regulatory measures, and their energy policies had set significant increases in such production as a future target.
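The simple payback comparison described above can be sketched as a small calculation: the yearly cash-flow advantage of the CHP option over the reference option (heating plant plus grid electricity) times the target payback period gives the maximum profitable investment. All cost figures below are illustrative placeholders, not values from the thesis.

```python
# Simple payback comparison: biofuel CHP plant versus a heating plant
# plus electricity bought from the grid. Numbers are placeholders.

def annual_saving(chp_fuel_cost, ref_heat_cost, ref_electricity_cost,
                  chp_om_cost=0, ref_om_cost=0):
    """Yearly cash-flow advantage of the CHP option over the reference."""
    reference_total = ref_heat_cost + ref_electricity_cost + ref_om_cost
    chp_total = chp_fuel_cost + chp_om_cost
    return reference_total - chp_total

def max_profitable_investment(saving_per_year, target_payback_years):
    """Break-even investment under a simple (non-discounted) payback
    rule: payback = investment / annual saving <= target."""
    return saving_per_year * target_payback_years

saving = annual_saving(chp_fuel_cost=150_000,
                       ref_heat_cost=110_000,
                       ref_electricity_cost=130_000)
ceiling = max_profitable_investment(saving, target_payback_years=10)
print(ceiling)  # 900000 EUR with these placeholder figures
```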
Abstract:
The aim of this Master's thesis was to survey the energy use of Finnsementti Oy's Lappeenranta cement plant and to identify potential energy-saving targets. The thesis is connected to the framework agreement on improving the efficiency of industrial energy use between the Ministry of Trade and Industry and the Confederation of Finnish Industry and Employers, which Finnsementti Oy has joined. The theoretical part of the thesis reviews the chemistry of cement from raw materials to finished products. It also surveys the energy-saving possibilities of the cement industry in terms of equipment purchases and process optimization. The equipment used in cement manufacturing is introduced, and its technical development and energy-saving solutions are examined. The experimental part begins with a detailed description of the manufacturing process of the Lappeenranta cement plant. It then examines the plant's energy consumption in the form of electricity and fuel, and also covers compressed air and water. For electricity, the experimental part contains, in addition to a history of electricity consumption and a survey of the electricity distribution network, efficiency analyses of the classifiers in use at the plant from the viewpoint of electricity consumption. For fuel, the thesis contains measurements of the energy balances of the cement kilns and the calculation of the results, as well as the development of an Excel-based method for calculating the energy balances in the future. For compressed air and water, their consumption at the plant is surveyed, and a theoretical specific electricity consumption is calculated for compressed air. The classifier efficiency analyses showed that the separation sharpness of the raw-mill classifier is poor compared with modern high-efficiency classifiers. No significant problems were observed in the operation of the cement-mill classifiers. The energy balance results showed that the cement kilns of the Lappeenranta plant represent outdated cement manufacturing technology. The amounts of leakage air in the flue gas ducts of both kilns were found to be large. The energy economy could be improved, for example, with new burners, which would reduce the share of unheated primary air and make fuel combustion more efficient. For compressed air, the pressure reduction of the hot air was found to cause unnecessary energy losses.
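The theoretical specific electricity consumption of compressed air mentioned above can be estimated from the ideal compression work. The sketch below assumes single-stage isentropic compression of air and a typical 8 bar network pressure; these are illustrative assumptions, not the plant's actual figures.

```python
def isentropic_specific_energy(p_in, p_out, gamma=1.4):
    """Ideal single-stage isentropic compression work per cubic metre
    of intake air, in joules:
    w = gamma/(gamma-1) * p_in * ((p_out/p_in)**((gamma-1)/gamma) - 1)."""
    exponent = (gamma - 1.0) / gamma
    return gamma / (gamma - 1.0) * p_in * ((p_out / p_in) ** exponent - 1.0)

# 1 bar intake compressed to 8 bar absolute (typical plant air network)
w = isentropic_specific_energy(p_in=1e5, p_out=8e5)
print(w / 3.6e6)  # ≈ 0.079 kWh per intake m^3
```

Real compressors consume more than this theoretical minimum; the ratio of the two is one way to express compressor efficiency.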
Abstract:
Despite the development of novel typing methods based on whole genome sequencing, most laboratories still rely on classical molecular methods for outbreak investigation or surveillance. Reference methods for Clostridium difficile include ribotyping and pulsed-field gel electrophoresis, which are band-comparing methods that are often difficult to establish and require reference strain collections. Here, we present the double locus sequence typing (DLST) scheme as a tool to analyse C. difficile isolates. Using a collection of clinical C. difficile isolates recovered during a 1-year period, we evaluated the performance of DLST and compared the results to multilocus sequence typing (MLST), a sequence-based method that has been used to study the structure of bacterial populations and highlight major clones. DLST had a higher discriminatory power than MLST (Simpson's index of diversity of 0.979 versus 0.965) and successfully identified all isolates of the study (100% typeability). Previous studies showed that the discriminatory power of ribotyping was comparable to that of MLST; thus, DLST might be more discriminatory than ribotyping. DLST is easy to establish and provides several advantages, including no DNA extraction (polymerase chain reaction (PCR) is performed directly on colonies), no specific instrumentation, low cost and an unambiguous definition of types. Moreover, the implementation of a DLST typing scheme on an Internet database, such as that previously done for Staphylococcus aureus and Pseudomonas aeruginosa (http://www.dlst.org), will allow users to easily obtain the DLST type by directly submitting sequencing files and will avoid problems associated with multiple databases.
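The Simpson's index of diversity quoted above (0.979 versus 0.965) can be computed directly from the number of isolates per type. A minimal sketch with made-up type counts:

```python
def simpsons_diversity(counts):
    """Simpson's index of diversity for a typing method:
    D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)),
    where n_j is the number of isolates assigned to type j."""
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical distribution of 20 isolates over 6 types
print(simpsons_diversity([6, 5, 4, 2, 2, 1]))  # ≈ 0.826
```

D is the probability that two isolates drawn at random belong to different types, so a fully discriminatory scheme (every isolate its own type) gives D = 1.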
Abstract:
X-ray medical imaging is increasingly becoming three-dimensional (3-D). The dose to the population and its management are of special concern in computed tomography (CT). Task-based methods with model observers to assess the dose-image quality trade-off are promising tools, but they still need to be validated for real volumetric images. The purpose of the present work is to evaluate anthropomorphic model observers in 3-D detection tasks for low-contrast CT images. We scanned a low-contrast phantom containing four types of signals at three dose levels and used two reconstruction algorithms. We implemented a multislice model observer based on the channelized Hotelling observer (msCHO) with anthropomorphic channels and investigated different internal noise methods. We found a good correlation for all tested model observers. These results suggest that the msCHO can be used as a relevant task-based method to evaluate low-contrast detection for CT and optimize scan protocols to lower dose in an efficient way.
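The core of a channelized Hotelling observer can be sketched in a few lines. The version below is a simplified single-slice observer operating on synthetic Gaussian channel outputs rather than the anthropomorphic channels and real CT volumes of the study; it is an illustration of the principle, not the msCHO implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Channel outputs for signal-absent and signal-present images
# (n_images x n_channels); synthetic stand-ins for channelized CT slices.
n, c = 200, 5
absent = rng.normal(0.0, 1.0, (n, c))
present = rng.normal(0.4, 1.0, (n, c))  # small mean shift acts as the signal

# Hotelling template: w = S^-1 (mean_present - mean_absent),
# with S the average within-class covariance of the channel outputs.
s = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
w = np.linalg.solve(s, present.mean(0) - absent.mean(0))

# Decision variables and detectability index d'
t_a, t_p = absent @ w, present @ w
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_a.var() + t_p.var()))
print(d_prime)
```

Dose-image quality trade-offs are then assessed by comparing d' across dose levels and reconstruction algorithms.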
Abstract:
Viruses are among the most important pathogens present in water contaminated with feces or urine and represent a serious risk to human health. Four procedures for concentrating viruses from sewage have been compared in this work, three of which were developed in the present study. Viruses were quantified using PCR techniques. According to statistical analysis and the sensitivity to detect human adenoviruses (HAdV), JC polyomaviruses (JCPyV) and noroviruses genogroup II (NoV GGII): (i) a new procedure (elution and skimmed-milk flocculation procedure (ESMP)) based on the elution of the viruses with glycine-alkaline buffer followed by organic flocculation with skimmed-milk was found to be the most efficient method when compared to (ii) ultrafiltration and glycine-alkaline elution, (iii) a lyophilization-based method and (iv) ultracentrifugation and glycine-alkaline elution. Through the analysis of replicate sewage samples, ESMP showed reproducible results with a coefficient of variation (CV) of 16% for HAdV, 12% for JCPyV and 17% for NoV GGII. Using spiked samples, the viral recoveries were estimated at 30-95% for HAdV, 55-90% for JCPyV and 45-50% for NoV GGII. ESMP was validated in a field study using twelve 24-h composite sewage samples collected in an urban sewage treatment plant in the north of Spain; 100% of the samples were positive, with mean values of HAdV, JCPyV and NoV GGII similar to those observed in other studies. Although all of the methods compared in this work yield consistently high values of virus detection and recovery in urban sewage, some require expensive laboratory equipment. ESMP is an effective low-cost procedure which allows a large number of samples to be processed simultaneously and is easily standardized for routine use in laboratories performing water monitoring. Moreover, in the present study, the CV was applied and proposed as a parameter to evaluate and compare methods for detecting viruses in sewage samples.
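The coefficient of variation used above to compare methods is simply the relative standard deviation of replicate results. A minimal sketch with made-up replicate quantifications:

```python
from statistics import mean, stdev

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate virus quantifications:
    100 * sample standard deviation / mean."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical genome copies/mL from four replicate sewage samples
print(round(cv_percent([2.0e5, 2.4e5, 1.9e5, 2.2e5]), 1))  # 10.4
```

A lower CV across replicates indicates a more reproducible concentration method, which is the comparison criterion proposed in the study.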
Abstract:
This dissertation considers the segmental durations of speech from the viewpoint of speech technology, especially speech synthesis. The idea is that better models of segmental durations lead to higher naturalness and better intelligibility, which are the key factors in the usability and generality of synthesized speech. Even though the studies are based on Finnish corpora, the approaches apply to other languages as well, largely because most of the studies included in this dissertation concern universal effects that take place at utterance boundaries; the methods developed and used here are likewise suitable for studies of other languages. The study is based on two corpora: news-reading speech and sentences read aloud. One corpus is read by a 39-year-old male, while the other consists of several speakers in various situations. The use of two corpora serves two purposes: it allows a comparison of the corpora and gives a broader view of the matters of interest. The dissertation begins with an overview of the phonemes and the quantity system of the Finnish language. In particular, we cover the intrinsic durations of phonemes and phoneme categories, as well as the durational difference between short and long phonemes. The phoneme categories are introduced to manage the variability of speech segments. We then cover boundary-adjacent effects on segmental durations. In the initial positions of utterances we find that there seems to be initial shortening in Finnish, but the result depends on the level of detail and on the individual phoneme. On the phoneme level, the shortening or lengthening only affects the very first phonemes at the beginning of an utterance; on the word level, however, the effect on average shortens the whole first word. We establish the effect of final lengthening in Finnish. Its existence in Finnish has long been an open question, and Finnish has been the last missing piece for final lengthening to qualify as a universal phenomenon. Final lengthening is studied from various angles, and it is shown that it is not a mere effect of prominence or an artifact of a speech corpus with high inter- and intra-speaker variation. The effect of final lengthening seems to extend from the final word to the penultimate word, and on the phoneme level it reaches a much wider area than the initial effect. We also present a normalization method suitable for corpus studies of segmental durations. The method uses utterance-level normalization to capture the pattern of segmental durations within each utterance, which prevents various problematic sources of variation within the corpora from affecting the results. The normalization is used in a study of final lengthening to show that the observed effect is not caused by variation in the material. The dissertation also demonstrates an implementation of speech synthesis on a mobile platform and assesses its performance. We find that the rule-based synthesis method itself runs in real time as software, but the signal generation process slows the system down beyond real time. Future prospects of speech synthesis on limited platforms are discussed. Finally, the dissertation considers ethical issues in the development of speech technology. The main focus is on the development of speech synthesis with high naturalness, but the problems and solutions are applicable to other speech technology approaches as well.
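The utterance-level normalization described above can be sketched as a per-utterance standardization of segment durations. The z-score form and the example durations below are assumptions for illustration, not the exact formulation used in the dissertation.

```python
from statistics import mean, stdev

def normalize_utterance(durations):
    """Z-score segment durations within one utterance so that
    corpus-level variation (speaker, tempo) does not mask
    positional effects such as final lengthening."""
    m, s = mean(durations), stdev(durations)
    return [(d - m) / s for d in durations]

# Durations (ms) of segments in one hypothetical utterance; the
# lengthened final segment stands out after normalization.
z = normalize_utterance([80, 95, 70, 90, 85, 140])
print([round(v, 2) for v in z])
```

Because each utterance is normalized against its own mean and spread, patterns such as boundary lengthening can be pooled across utterances, speakers and recording conditions.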
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. Especially, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine are crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine. The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences in economic performance between two configurations can be seen with only minor modifications to the simulation models used in network interaction and control studies. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
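The economic comparison of drive-train configurations can be illustrated by weighting simulated losses over a site's wind-speed distribution, since the turbine rarely operates at nominal power. The loss curves, Weibull parameters and electricity price below are illustrative assumptions, not results from the dissertation.

```python
import math

def weibull_pdf(v, k=2.0, c=8.0):
    """Weibull wind-speed density (k: shape, c: scale in m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def annual_loss_cost(loss_at, price_eur_per_kwh=0.05, dv=0.1):
    """Expected yearly cost of drive-train losses: integrate the
    simulated loss curve (kW versus wind speed) over the site's wind
    distribution, then multiply by 8760 h and the electricity price."""
    expected_kw = sum(loss_at(v) * weibull_pdf(v) * dv
                      for v in (i * dv for i in range(1, 250)))
    return expected_kw * 8760 * price_eur_per_kwh

# Two hypothetical configurations: losses grow with wind speed up to rated
config_a = lambda v: 5 + 1.2 * min(v, 12)   # kW, low fixed / high variable
config_b = lambda v: 8 + 0.8 * min(v, 12)   # kW, high fixed / low variable
print(annual_loss_cost(config_a), annual_loss_cost(config_b))
```

In the dissertation the loss curves come from full time-domain simulation of the electric drive train, which also captures interactions between subsystems that this independent-blocks sketch ignores.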
Abstract:
Larox Corporation is a full-service provider of filtration for solid-liquid separation; it develops, designs, manufactures and supplies industrial filters. In line with Larox's principle of continuous development, a project for more efficient production was started, and at the same time production planning was taken under review. The aim of this Master's thesis was to find software designed for production planning to replace the old Microsoft Excel based method. In this thesis the current way of production planning was thoroughly analyzed, improvement targets were specified, and requirements for the new software were defined. The primary requirements for the new software were support for production scheduling, planning and follow-up, as well as for long-term capacity planning and tracking. A further demand was that the new software should have a data link to Larox's current ERP system. The outcome of this thesis was to start using Larox's ERP system for production planning purposes as well. The new mode of operation fulfils all the requirements placed on the new system. With the new method of production planning, production planners get easier and more reliable data than from the previous system.
Abstract:
Cardiovascular mortality is 15 to 30 times higher in patients with chronic kidney disease than in the age-adjusted general population. Even minor renal dysfunction predicts cardiovascular events and death in the general population. In patients with atherosclerotic renovascular disease the annual cardiovascular event and death rate is even higher. The abnormalities in coronary and peripheral artery function in the different stages of chronic kidney disease and in renovascular disease are still poorly understood, nor have the cardiac effects of renal artery revascularization been well characterized, although they are considered to be beneficial. This study was conducted to characterize myocardial perfusion and peripheral endothelial function in patients with chronic kidney disease and in patients with atherosclerotic renovascular disease. Myocardial perfusion was measured with positron emission tomography (PET) and peripheral endothelial function with brachial artery flow-mediated dilatation. It has been suggested that the poor renal outcomes after renal artery revascularization could be due to damage in the stenotic kidney parenchyma, especially a reduction in microvascular density; such changes are mainly evident at the cortical level, which controls almost 80% of the total renal blood flow. This study was also performed to measure the effect of renal artery stenosis revascularization on renal perfusion in patients with renovascular disease. To this end, a PET-based method for the quantification of renal perfusion was developed. The coronary flow reserve of patients with chronic kidney disease was similar to that of healthy controls. In renovascular disease, however, the coronary flow reserve was markedly reduced. Flow-mediated dilatation of the brachial artery was decreased in patients with chronic kidney disease compared to healthy controls, and even more so in patients with renovascular disease. After renal artery stenosis revascularization, coronary vascular function and renal perfusion did not improve in patients with atherosclerotic renovascular disease, but in patients with bilateral renal artery stenosis, flow-mediated dilatation improved. Chronic kidney disease does not significantly affect coronary vascular function. On the contrary, coronary vascular function was severely deteriorated in patients with atherosclerotic renovascular disease, possibly because of diffuse coronary artery disease and/or diffuse microvascular disease. Peripheral endothelial function was disturbed in patients with chronic kidney disease and even more so in patients with atherosclerotic renovascular disease. Renal artery stenosis dilatation does not seem to offer any benefit over medical treatment in patients with renovascular disease, since revascularization improves neither coronary vascular function nor renal perfusion.
Abstract:
Antimicrobial Resistance in Campylobacter jejuni and Campylobacter coli
Campylobacters are a common cause of bacterial gastroenteritis worldwide, with Campylobacter jejuni and C. coli being the most common species isolated from human infections. If antimicrobial treatment is required, the drugs of choice are currently the macrolides and fluoroquinolones. In this thesis, the in vitro resistance profiles of C. jejuni and C. coli strains were evaluated with emphasis on multidrug resistance. A further aim was to evaluate the different resistance mechanisms against the macrolides. In addition, the disk diffusion method, which has been widely used for the susceptibility testing of campylobacters, was compared to the agar dilution method, and its repeatability was evaluated. The results of the present study showed that resistance to the fluoroquinolones is common in strains isolated from Finnish patients, but resistance to the macrolides is still rare. Multidrug resistance was associated with resistance to both ciprofloxacin and erythromycin. Among the available per oral drugs, the least resistance was observed to co-amoxiclav. There was no resistance to the carbapenems. Sitafloxacin and tigecycline were highly effective in vitro against Campylobacter species. A point mutation, A2059G of the 23S rRNA gene, was the main mechanism behind macrolide resistance, whereas the efflux pumps did not seem to play an important role when a strain had the A2059G mutation. A previously undescribed five-amino-acid insertion in the ribosomal protein L22 was also detected in one highly resistant C. jejuni strain without a mutation in the 23S rRNA gene. Concerning the disk diffusion method, there was variation in its repeatability. In conclusion, macrolides still appear to be the first-choice alternative for suspected Campylobacter enteritis. The in vitro susceptibilities found suggest that co-amoxiclav might be a candidate for clinical trials on campylobacteriosis, but in life-threatening situations a carbapenem may be the drug of choice. More studies are needed on whether the disk diffusion test method could be improved or whether all susceptibility testing of campylobacters should be done using a MIC-based method.
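MIC-based susceptibility testing, mentioned above as the alternative to disk diffusion, amounts to comparing a measured minimum inhibitory concentration against a breakpoint. The breakpoint values below are invented for illustration only; real work would use the current EUCAST or CLSI breakpoints for Campylobacter.

```python
# Hypothetical MIC breakpoints (mg/L); NOT real EUCAST/CLSI values.
BREAKPOINTS = {"erythromycin": 4.0, "ciprofloxacin": 0.5}

def interpret_mic(drug, mic):
    """Classify an isolate as susceptible (S) or resistant (R) from
    its minimum inhibitory concentration."""
    return "S" if mic <= BREAKPOINTS[drug] else "R"

def is_multidrug_resistant(mics, threshold=2):
    """Resistant to at least `threshold` of the tested drugs."""
    calls = [interpret_mic(d, m) for d, m in mics.items()]
    return calls.count("R") >= threshold

print(interpret_mic("erythromycin", 8.0))              # R
print(is_multidrug_resistant({"erythromycin": 8.0,
                              "ciprofloxacin": 2.0}))  # True
```

Unlike a zone diameter from disk diffusion, the MIC is a quantitative endpoint, which is why the thesis raises MIC-based testing as the more reliable option.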
Abstract:
The aim of this thesis was to create a tool for analyzing the fatigue of permanent magnet machine rotors. The tool was implemented so that load data measured from a real machine, together with the necessary material data, can be fed into it. The tool converts the load data into a stress history using a scaling factor computed with the finite element method. For fatigue life calculation, the analysis tool uses a stress-based method together with the rainflow method and the Palmgren-Miner cumulative damage rule. In addition, the tool produces a Smith fatigue strength diagram for the case under study. Besides the methods mentioned above, the theory part of the thesis also presented the local strain-based method and fracture mechanics as fatigue analysis methods; because of their complexity, these were not implemented in the tool. Fatigue lives were calculated for two example cases with the fatigue analysis tool. In both cases the result was infinite fatigue life, but the dynamic safety factor of the axial-flux machine rotor was small. Although the results appear reasonable, they should still be verified, for example with commercial software, to gain full confidence.
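The stress-based life calculation described above combines rainflow-counted cycles with the Palmgren-Miner rule. The sketch below assumes the rainflow counting has already produced (amplitude, cycle count) pairs and uses an illustrative Basquin-type S-N curve, not material data from the thesis.

```python
def cycles_to_failure(stress_amplitude, sn_c=1e12, sn_m=3.0):
    """Basquin-type S-N curve: N = C * S**(-m) (illustrative constants)."""
    return sn_c * stress_amplitude ** (-sn_m)

def miner_damage(rainflow_cycles):
    """Palmgren-Miner cumulative damage D = sum(n_i / N_i) over
    (stress amplitude in MPa, counted cycles n_i) pairs; D >= 1 means
    the predicted fatigue life has been consumed."""
    return sum(n / cycles_to_failure(s) for s, n in rainflow_cycles)

# (amplitude MPa, cycles) pairs as produced by rainflow counting
history = [(120.0, 2.0e4), (80.0, 1.0e5), (40.0, 5.0e5)]
d = miner_damage(history)
print(d)        # ≈ 0.118 damage per load block
print(1.0 / d)  # load blocks to predicted failure
```

An "infinite life" result, as obtained for the example rotors, corresponds to all counted amplitudes falling below the material's endurance limit so that the damage sum stays effectively zero.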
Abstract:
Percarboxylic acids are commonly used as disinfection and bleaching agents in the textile, paper, and fine chemical industries. All of these applications are based on the oxidative potential of these compounds. In spite of the high interest in these chemicals, they are unstable and explosive, which increases the risk of their synthesis processes and transportation. Therefore, safety criteria in the production process should be considered. Microreactors represent a technology that efficiently utilizes the safety advantages resulting from small scale, and microreactor technology was therefore used in the synthesis of peracetic acid and performic acid. These percarboxylic acids were produced at different temperatures, residence times and catalyst (i.e., sulfuric acid) concentrations. Both synthesis reactions seemed to be rather fast: with performic acid, equilibrium was reached in 4 min at 313 K, and with peracetic acid in 10 min at 343 K. In addition, the experimental results were used to study the kinetics of the formation of performic acid and peracetic acid. The advantages of the microreactors in this study were efficient temperature control even in a very exothermic reaction and good mixing due to the short diffusion distances; therefore, reaction rates could be determined with high accuracy. Three different models were considered in order to estimate kinetic parameters such as reaction rate constants and activation energies. Of these three models, the laminar flow model with a radial velocity distribution gave the most precise parameters. However, sulfuric acid creates many drawbacks in this synthesis process. Therefore, a "greener" way to use a heterogeneous catalyst in the synthesis of performic acid in a microreactor was studied. The cation exchange resin Dowex 50 Wx8 showed very high activity and a long lifetime in this reaction. In the presence of this catalyst, equilibrium was reached in 120 seconds at 313 K, which indicates a rather fast reaction.
In addition, the safety advantages of microreactors were investigated in this study. Four different conventional methods were used. Production of peracetic acid was used as a test case, and the safety of one conventional batch process was compared with an on-site continuous microprocess. It was found that the conventional methods for the analysis of process safety might not be reliable and adequate for radically novel technology, such as microreactors. This is understandable because the conventional methods are partly based on experience, which is very limited in connection with totally novel technology. Therefore, one checklist-based method was developed to study the safety of intensified and novel processes at the early stage of process development. The checklist was formulated using the concept of layers of protection for a chemical process. The traditional and three intensified processes of hydrogen peroxide synthesis were selected as test cases. With these real cases, it was shown that several positive and negative effects on safety can be detected in process intensification. The general claim that safety is always improved by process intensification was questioned.
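The fast approach to equilibrium reported earlier in this abstract (about 4 min for performic acid at 313 K) can be illustrated with a reversible second-order rate law integrated numerically. The rate constants and initial concentrations below are illustrative assumptions, not the fitted parameters of the study.

```python
def simulate(k1=0.05, k2=0.02, a0=2.0, b0=2.0, dt=0.1, t_end=600.0):
    """Explicit Euler integration of A + B <-> P with rate
    r = k1*[A][B] - k2*[P] (water in excess, folded into k2).
    Concentrations in mol/L, time in seconds."""
    a, b, p = a0, b0, 0.0
    t = 0.0
    while t < t_end:
        r = k1 * a * b - k2 * p
        a -= r * dt
        b -= r * dt
        p += r * dt
        t += dt
    return p

p_eq = simulate()
print(round(p_eq, 3))  # ≈ 1.283, the equilibrium product concentration
```

At equilibrium the forward and reverse rates balance (k1*[A][B] = k2*[P]), which is the condition the integration converges to; fitting k1 and k2 against measured concentration profiles is the essence of the kinetic analysis.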
Abstract:
Cyber security is one of the main topics discussed around the world today. The threat is real, and it is unlikely to diminish. People, businesses, governments, and even armed forces are networked in one way or another; thus, the cyber threat also faces military networking. On the other hand, the concept of Network Centric Warfare sets high requirements for military tactical data communications and security. A challenging networking environment and cyber threats force us to consider new approaches to building security into military communication systems. The purpose of this thesis is to develop a cyber security architecture for military networks and to evaluate the designed architecture. The architecture is described as technical functionality. As a new approach, the thesis introduces Cognitive Networks (CN), a theoretical concept for building more intelligent, dynamic and even secure communication networks. Cognitive networks are capable of observing the networking environment, making decisions for optimal performance, and adapting their system parameters according to those decisions. As a result, the thesis presents a five-layer cyber security architecture that consists of security elements controlled by a cognitive process. The proposed architecture includes the infrastructure, services and application layers, which are managed and controlled by the cognitive and management layers. The architecture defines the tasks of the security elements at a functional level without introducing any new protocols or algorithms. Two separate methods were used for evaluation. The first is based on the SABSA framework, which uses a layered approach to analyze the overall security of an organization. The second was a scenario-based method in which a risk severity level is calculated. The evaluation results show that the proposed architecture fulfills the security requirements at least at a high level. However, the evaluation of the proposed architecture proved to be very challenging, and the evaluation results must therefore be considered critically. The thesis shows that cognitive networks are a promising approach and that they provide many benefits when designing a cyber security architecture for tactical military networks. However, many implementation problems exist, and several details must be considered and studied in future work.
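The scenario-based evaluation mentioned above computes a risk severity level. One common way to do this is a likelihood-impact product mapped onto a risk matrix, sketched below with invented scales and scenarios; the thesis's actual scoring scheme may differ.

```python
def risk_severity(likelihood, impact):
    """Risk severity as likelihood x impact on 1-5 scales, mapped to a
    qualitative level (common risk-matrix convention; illustrative)."""
    score = likelihood * impact
    if score >= 15:
        return score, "high"
    if score >= 6:
        return score, "medium"
    return score, "low"

# Hypothetical threat scenarios for a tactical military network
scenarios = {
    "jamming of a tactical radio link": (4, 3),
    "malware in the logistics system": (2, 5),
    "physical capture of a network node": (1, 5),
}
for name, (likelihood, impact) in scenarios.items():
    print(name, risk_severity(likelihood, impact))
```

Comparing severity levels with and without the proposed security elements then gives a rough, repeatable measure of how much the architecture reduces each scenario's risk.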