63 results for non-additivity of Faradaic current
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined. It is shown how different current measurement error types affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model to analyse the effect of the frequency converter non-idealities on the performance of electric drives is created. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model is capable of coupling with separate simulation software of complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is also applicable to speed-sensorless drives. The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
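The trial-and-select nature of such a compensation scheme can be illustrated with a short sketch. The Python code below is an illustration only, not the thesis implementation: the callback measure_cycle, the harmonic frequency f_h and the candidate corrections are hypothetical placeholders. It estimates the amplitude of one selected speed harmonic with a single-bin DFT and keeps whichever candidate current-error correction minimizes that amplitude.

```python
import numpy as np

def harmonic_amplitude(speed: np.ndarray, dt: float, f_h: float) -> float:
    """Amplitude of one selected harmonic of the (measured or estimated)
    speed signal, estimated with a single-bin discrete Fourier transform."""
    n = np.arange(len(speed))
    phasor = np.exp(-2j * np.pi * f_h * n * dt)
    return 2.0 * abs(np.dot(speed - speed.mean(), phasor)) / len(speed)

def select_compensation(measure_cycle, dt: float, f_h: float, candidates):
    """Apply each candidate current-error correction in turn (via the
    hypothetical callback measure_cycle, which returns a sampled speed
    record), score it by the resulting harmonic amplitude, and keep the
    correction that gives the smallest ripple."""
    scores = {c: harmonic_amplitude(measure_cycle(c), dt, f_h) for c in candidates}
    best = min(scores, key=scores.get)
    return best, scores
```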
Abstract:
This paper introduces an important source of torque ripple in PMSMs with tooth-coil windings (TC-PMSMs). It is theoretically proven that saturation and cross-saturation phenomena caused by the non-synchronous harmonics of the stator current linkage cause a synchronous inductance variation with a particular periodicity. This, in turn, determines the magnitude of the torque ripple and can also deteriorate the performance of signal-injection-based rotor position estimation algorithms. An improved dq-inductance model is proposed. It can be used in torque ripple reduction control schemes and can enhance the self-sensing capabilities of TC-PMSMs.
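To see why a position-dependent inductance shows up as torque ripple, recall the generic dq-frame torque expression of a PMSM. The harmonic order k and the notation below are a generic illustration, not the paper's model.

```latex
% Generic dq-frame torque of a PMSM with p pole pairs and PM flux linkage \psi_{PM}:
T_e = \tfrac{3}{2}\, p \left[ \psi_{PM}\, i_q + \left( L_d - L_q \right) i_d\, i_q \right]
% If saturation makes an inductance vary periodically with electrical angle \theta,
% e.g. L_q(\theta) = L_{q0} + \hat{L}_q \cos(k\theta), the reluctance term inherits
% the same periodicity and contributes a torque ripple component:
\Delta T_e = -\tfrac{3}{2}\, p\, \hat{L}_q \cos(k\theta)\, i_d\, i_q
```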
Abstract:
The thesis explains the working principle of eddy current position sensors and the main precautions that must be taken into account during the sensor design process. A method for automated measurement of the electrical characteristics of eddy current position sensors is suggested. A prototype of the eddy current position sensor and its electrical characteristics are investigated. The results obtained by means of the automated measuring system are explained.
Abstract:
In this work eddy current sensors are described and evaluated. The theoretical part covers the physical basics of eddy currents and an overview of available commercial products and technologies. The operation of industrial sensors was assessed in several working modes. In addition, a model was created in Matlab Simulink with the Xilinx Blockset and then translated into a Xilinx ISE Design Suite compatible project. The performance of the resulting implementation was compared to the existing implementation on the Xilinx Spartan 3 FPGA board with the custom-made sensor. Additionally, an introduction to FPGAs and VHDL is presented.
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favourable temperature and composition distributions which cannot be achieved in any steady-state regime can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself. A normal mode of operation therefore usually requires a supply of supplementary heat, increasing the overall process operation cost. The main advantage of forced unsteady-state operation with exothermic reactions is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed. The catalytic bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behaviour are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, has the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the main disadvantage of the RFR is the 'wash out' phenomenon, i.e. emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR which is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behaviour with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behaviour of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behaviour characterized by spatio-temporal patterns, chaotic changes in concentration and travelling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
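For reference, the standard stoichiometry of the NH3-SCR reaction discussed above is given below, together with the textbook expression for the adiabatic temperature rise that explains why the reaction is only weakly exothermic for dilute NOx streams. The symbols are generic and are not reproduced from the thesis.

```latex
% Standard NH3-SCR stoichiometry:
4\,\mathrm{NO} + 4\,\mathrm{NH_3} + \mathrm{O_2} \;\longrightarrow\; 4\,\mathrm{N_2} + 6\,\mathrm{H_2O}
% Adiabatic temperature rise for an inlet NO concentration c_{NO,in} (mol/m^3),
% reaction enthalpy \Delta H_r and volumetric gas heat capacity \rho_g c_{p,g}:
\Delta T_{ad} = \frac{(-\Delta H_r)\; c_{NO,in}}{\rho_g\, c_{p,g}}
```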
Paying attention to the above-mentioned aspects is important when higher activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operating concerns. The prediction of the reactor pseudo-steady or steady-state performance (regarding conversion, selectivity and thermal behaviour) and the dynamic reactor response during exploitation are also important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behaviour is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort. Usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices which are capable of maintaining auto-thermal behaviour in the case of low-exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problem related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modelling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects in a fast and easy way, a case-based reasoning (CBR) approach has been used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, dropping the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operation regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, considering the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behaviour in the case of the low-exothermic SCR of NOx with ammonia and low-temperature gas feeding. Beside the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, the inlet flow rate and the initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter, and an appropriate choice of this parameter enables the process constraints to be fulfilled. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation established the RN as the more suitable device for the SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics by taking into account perspectives that have not been analysed yet. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
Abstract:
Building and sustaining competitive advantage through the creation of market imperfections is challenging in a constantly changing business environment, particularly since the sources of such advantages are increasingly knowledge-based. Facilitated by improved networks and communication, knowledge spills over to competitors more easily than before, thus creating an appropriability problem: the inability of an innovating firm to utilize its innovations commercially. Consequently, as the importance of intellectual assets increases, their protection also calls for new approaches. Companies have various means of protection at their disposal, and by taking advantage of them they can make intangibles more non-transferable and prevent, or at least delay, imitation of their most crucial intellectual assets. However, creating barriers against imitation has another side to it, and the transfer of knowledge in situations requiring knowledge sharing may be unintentionally obstructed. The aim of this thesis is to increase understanding of how firms can balance knowledge protection and sharing so as to benefit most from their knowledge assets. Thus, knowledge protection is approached through an examination of the appropriability regime of a firm, i.e., the combination of available and effective means of protecting innovations, their profitability, and the increased rents due to R&D. A further aim is to provide a broader understanding of the formation and structure of the appropriability regime. The study consists of two parts. The first part introduces the research topic and the overall results of the study, and the second part consists of six complementary research publications covering various appropriability issues. The thesis contributes to the existing literature in several ways. Although there is a wide range of prior research on appropriability issues, much of it is restricted either to the study of individual appropriability mechanisms or to comparing certain features of them. These approaches are combined, and the relevant theoretical concepts are clarified and developed. In addition, the thesis provides empirical evidence of the formation of the appropriability regime, which is consequently presented as an adaptive process. Thus, a framework is provided that better corresponds to the complex reality of the current business environment.
Abstract:
Neuropeptide Y (NPY) is a widely expressed neurotransmitter in the central and peripheral nervous systems. A thymidine 1128 to cytosine substitution in the signal sequence of preproNPY results in a single amino acid change in which leucine is changed to proline. This L7P change leads to a conformational change of the signal sequence which can have an effect on the intracellular processing of NPY. The L7P polymorphism was originally associated with higher total and LDL cholesterol levels in obese subjects. It has also been associated with several other physiological and pathophysiological responses such as atherosclerosis and type 2 diabetes. However, the changes at the cellular level due to the preproNPY signal sequence L7P polymorphism were not known. The aims of the current thesis were to study the effects of the [p.L7]+[p.L7] and the [p.L7]+[p.P7] genotypes in primary cultured and genotyped human umbilical vein endothelial cells (HUVEC), in neuroblastoma (SK-N-BE(2)) cells and in fibroblast (CHO-K1) cells. The putative effects of the L7P polymorphism on proliferation, apoptosis and LDL and nitric oxide metabolism were also investigated. In the course of the studies a fragment of NPY targeted to mitochondria was found. With the putative mitochondrial NPY fragment the aim was to study the translational preferences and the mobility of the protein. The intracellular distribution of NPY was found to differ between the [p.L7]+[p.L7] and the [p.L7]+[p.P7] genotypes. NPY immunoreactivity was prominent in the [p.L7]+[p.P7] cells while proNPY immunoreactivity was prominent in the [p.L7]+[p.L7] genotype cells. In the proliferation experiments there was a difference in the [p.L7]+[p.L7] genotype cells between early passage and late passage (aged) cells; proliferation was increased in the aged cells. NPY increased the growth of the cells with the [p.L7]+[p.P7] genotype. Apoptosis did not seem to differ between the genotypes, but in the aged cells with the [p.L7]+[p.L7] genotype LDL uptake was found to be elevated. Furthermore, the genotype seemed to have a strong effect on nitric oxide metabolism. The results indicated that the mobility of the NPY protein inside the cells was increased with the P7-containing constructs. The existence of the mitochondria-targeted NPY fragment was verified, and the translational preferences were shown to depend on the origin of the cells. Cells of neuronal origin preferred the translation of mature NPY (NPY1-36), whereas non-neuronal cells translated both NPY and the mitochondrial fragment of NPY. The mobility of the mitochondrial fragment was found to be minimal. The functionality of the mitochondrial NPY fragment remains to be investigated. The L7P polymorphism in preproNPY causes a series of intracellular changes. These changes may contribute to cellular senescence and vascular tone and lead to endothelial dysfunction and even to increased susceptibility to diseases such as atherosclerosis and type 2 diabetes.
Abstract:
A model to solve heat and mass balances during off-design load calculations was created. These equations are complex and nonlinear. The main new ideas used in the created off-design model of a kraft recovery boiler are the use of heat flows as torn iteration variables instead of the current practice of using mass flows, vectorizing the equation solving and thus speeding up the process, using non-dimensional variables for solving the multiple heat transfer surface problem, and using a new procedure for calculating pressure losses. The recovery boiler heat and mass balances are reduced to vector form. It is shown that these vectorized equations can be solved virtually without iteration. The iteration speed is enhanced by the derived method of calculating multiple heat transfer surfaces simultaneously. To achieve this quick convergence, the heat flows were used as the torn iteration parameters. A new method to handle pressure loss calculations with linearization is presented. This method reduces the time spent calculating pressure losses. The derived vector representation of the steam generator was used to calculate off-design operation parameters for a 3000 tds/d example recovery boiler. The model was used to study recovery boiler part-load operation and the effect of a black liquor dry solids increase on recovery boiler dimensioning. Heat flows to surface elements for part-load calculations can be closely approximated with a previously defined exponent function. The exponential method can be used for the prediction of fouling in kraft recovery boilers. For similar furnaces, firing 80 % dry solids liquor produces a lower hearth heat release rate than 65 % dry solids liquor when firing at constant steam flow. The furnace outlet temperatures show that a capacity increase achieved by increasing the firing rate produces higher loadings than a capacity increase achieved by increasing the dry solids. The economizers, boiler banks and furnaces can be dimensioned smaller if the black liquor dry solids content is increased. The main problem with increased black liquor dry solids content is the decrease in the heat available for superheating. Whenever possible, the furnace exit temperature should be increased by decreasing the furnace height. Increasing the furnace exit temperature is usually opposed because of fear of increased corrosion.
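To make the tearing idea concrete, the NumPy sketch below treats the vector of surface heat flows Q as the torn iteration variable for a chain of counter-current heat transfer surfaces: given Q, the stream temperatures follow directly from the energy balances, and Q is then refreshed from UA times the log-mean temperature difference of each surface until the vector stops changing. This is an illustration only, not the thesis model; all inputs (UA, m_cp_hot, m_cp_cold, inlet temperatures) are assumed placeholders.

```python
import numpy as np

def solve_heat_flows(UA, m_cp_hot, m_cp_cold, T_hot_in, T_cold_in,
                     tol=1e-6, max_iter=200):
    """Torn iteration on the heat-flow vector Q [W] for n surfaces laid out
    along the hot-gas path, with the cold stream in overall counter-current."""
    UA = np.asarray(UA, dtype=float)
    n = len(UA)
    Q = np.full(n, 1.0e6)                              # initial guess
    for _ in range(max_iter):
        # Energy balances: hot stream cools surface by surface, cold stream heats up.
        T_hot = T_hot_in - np.cumsum(Q) / m_cp_hot
        T_cold = T_cold_in + np.cumsum(Q[::-1])[::-1] / m_cp_cold
        # Terminal temperature differences of each surface (counter-current).
        dT1 = np.concatenate(([T_hot_in], T_hot[:-1])) - T_cold
        dT2 = T_hot - np.concatenate((T_cold[1:], [T_cold_in]))
        with np.errstate(divide="ignore", invalid="ignore"):
            lmtd = np.where(np.isclose(dT1, dT2), dT1,
                            (dT1 - dT2) / np.log(dT1 / dT2))
        Q_new = UA * lmtd                              # heat transfer equations
        if np.max(np.abs(Q_new - Q)) < tol * np.max(np.abs(Q_new)):
            return Q_new, T_hot, T_cold
        Q = 0.5 * Q + 0.5 * Q_new                      # damped update
    return Q, T_hot, T_cold
```

Because every temperature is an explicit function of Q, each pass over all surfaces reduces to a handful of vector operations, which is the kind of vectorized, simultaneous treatment of multiple heat transfer surfaces the abstract refers to.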
Abstract:
The main objective of this thesis was to compare the efficiency of counter-current and co-current filter cake washing techniques. Filter cake washing is a common unit operation used in the chemical process industry for improving the recovery of the liquid phase or for purifying the solid phase of the filter cake. Counter-current displacement washing is more difficult to arrange and requires additional process equipment, but its advantage is that the consumption of wash water required to achieve a certain filter cake purity may be considerably lower than with the co-current washing method. This is true especially for materials that are difficult to wash. The literature part of this thesis consists of a review of filter cake washing in general, including the basic principles of the co-current and counter-current techniques, and a description of the structure and operation of a horizontal vacuum belt filter, which is the equipment considered in the experimental part of this thesis. The most common cake washing models are also introduced. The experiments were performed by washing apatite filter cakes in a laboratory-scale vacuum filter using both co-current and counter-current washing methods. The main results of these tests were washing curves that relate the purity of the filter cake to the amount of wash liquid used. Comparison of the obtained washing curves showed that both washing methods could be applied efficiently to achieve good washing results. The differences between the wash liquid consumptions of the co-current and counter-current washing methods were found to be surprisingly small, but this is most probably explained by the relatively good washing characteristics of the apatite cakes. The washing models introduced in the literature part were compared with the results obtained from the experiments, and it was found that the studied cake washing processes could be described
Abstract:
Increased maritime traffic in the Gulf of Finland has raised concerns about the level of maritime safety, and the growth of Russian oil exports in particular has increased the probability of an oil accident in the Gulf of Finland. Various international, regional and national policy instruments aim to reduce the risk of maritime accidents and other adverse effects of maritime traffic. This report deals with societal policy instruments for maritime safety: policy instruments in general, the key regulators of maritime safety, the policy instruments of maritime safety and the future outlook of maritime safety policy, the effectiveness of policy instruments, and the weaknesses of the current maritime safety governance system. The report is a literature review of the structure and state of the societal regulation of maritime safety, particularly from the perspective of maritime traffic in the Gulf of Finland. The report is part of the research project "SAFGOF - Growth prospects of maritime traffic in the Gulf of Finland 2007 - 2015 and the effects of the growth on the environment and on the operation of transport chains" and its work package 6, "Key risks and societal means of influence". Societal policy instruments can be grouped into administrative, economic and information-based instruments. All of these are used in promoting maritime safety, but administrative instruments play the most important role. Owing to the international nature of shipping, maritime safety is regulated mainly at the international level by the UN and in particular by the International Maritime Organization (IMO). In addition, the European Union has its own maritime safety regulation, and there are also other regional bodies involved in promoting maritime safety, such as HELCOM. Some areas of maritime safety are also regulated at the national level. Administrative maritime safety instruments include instruments related to ship structures and equipment, monitoring of the condition of ships, seafarers and maritime labour, and navigation. Economic instruments include, for example, fairway and port dues, marine insurance, P&I clubs, liability and compensation issues, and economic incentives. The use of economic instruments to promote maritime safety is rather limited compared to the use of administrative instruments, but they could certainly be used more. A problem with economic instruments is that they largely fall within the scope of national regulation, so promoting regional or international interests with them can be difficult. Information-based steering relies on the voluntary participation of the actors; in addition to general communication, it includes, for example, voluntary training, certification, and awards aimed at promoting maritime safety. At the political level, the safety risks posed by maritime traffic in the Gulf of Finland have been taken seriously, and much work is being done by various parties to minimize the risks. New regulation can be expected especially with regard to the environmental effects of maritime traffic and to traffic management, such as electronic vessel traffic monitoring systems. Increasing attention has also been paid to the role of the human factor in improving maritime safety, but developing effective policy instruments for the human factor appears to be challenging; the most commonly proposed remedy is the development of training.
According to criteria presented in the literature, effective policy instruments should meet the following requirements: 1) appropriateness: the instrument must be suitable for achieving the stated objective; 2) economic efficiency: the benefits and costs of the instrument should be in balance; 3) acceptability: the instrument must be acceptable from the perspective of the parties involved and of wider society; 4) enforceability: implementation of the instrument must be feasible and compliance with it must be possible to monitor; 5) lateral effects: a good instrument has positive secondary effects beyond the achievement of its primary objectives; 6) incentives and innovation: a good instrument encourages experimentation with new solutions and the development of operations. There is a large body of maritime safety regulation, and in general the number of maritime accidents has been declining over recent decades. A large part of the regulation has been effective and has improved the level of safety on the world's seas. Nevertheless, maritime accidents and other dangerous incidents still occur. The current regulatory system can be criticized on many counts. Establishing international regulation is not easy: the process is usually slow, and the result may be a compromise of compromises. International regulation is usually reactive, meaning that problems are addressed only after an accident has happened, rather than proactive, seeking to address problems before anything happens. The work of the IMO is based on the participation of nation states, and regulation is enforced by the flag states. In the IMO, nation states mainly pursue their own interests, and there are large differences between flag states in the enforcement of regulation. The IMO's inability to address identified problems quickly and to take local conditions into account in its regulation has led, for example, to the European Union regulating maritime safety itself and to regional special arrangements such as the PSSA (particularly sensitive sea area). Many kinds of companies operate in the shipping industry: on the one hand, companies that strive to operate safely and to raise safety to an even higher level, and on the other hand, companies that operate as cheaply as possible, do not care about safety issues, often have complex and obscure ownership arrangements, and are difficult to hold liable when damage occurs. The problem is that in the international shipping industry all companies have to operate in the same market. The operation of irresponsible companies is made possible by shippers and other actors in the industry who are willing to cooperate with them. The indifferent attitude towards safety is also partly due to the conservative safety culture of shipping. When the maritime safety regulatory system as a whole is compared with the criteria for effective policy instruments, it can be concluded that with respect to many of the criteria the current system can be considered effective and successful. The greatest problems probably lie in the enforcement of the regulation and in the cost-effectiveness of the instruments. The system based on flag state enforcement does not work as intended, of which the existence of flags of convenience is the clearest sign.
The cost-effectiveness of policy instruments, both of individual instruments and when comparing different instruments with each other, is often difficult to assess. As a consequence, reliable information on the cost-effectiveness of the instruments is not available, and the result may be that an instrument in practice eliminates a small risk at a high cost. Alternatives to the current system have also been proposed as approaches to international maritime safety (and shipping) policy, for example multi-level or polycentric governance. Multi-level governance refers to a system in which central government is decentralized both vertically to regional levels and horizontally to non-state actors. Polycentric governance goes one step further: it is a mode of governance in which actors of all types, both private and public, can participate in governance, for example governments, interest organizations, commercial companies and so on. International legislation defines the general levels, but concrete measures can be decided at the local level in cooperation between the different actors. Governance systems of this type would take the real operating environment of the shipping industry, international yet at the same time local, better into account than a system based on regulation drawn up by nation states in cooperation with each other. Such a change in maritime safety governance would, however, require a major change of direction in principle, and its realization cannot be considered very likely, at least in the short term.
Abstract:
Alzheimer's disease (AD) is characterised neuropathologically by the presence of extracellular amyloid plaques, intraneuronal neurofibrillary tangles, and cerebral neuronal loss. The pathological changes in AD are believed to start even decades before clinical symptoms are detectable. AD gradually affects episodic memory, cognition, behaviour and the ability to perform everyday activities. Mild cognitive impairment (MCI) represents a transitional state between normal aging and dementia disorders, especially AD. The current and commonly used MCI criteria divide this disorder into amnestic (aMCI) and non-amnestic (naMCI) MCI. Many individuals with aMCI tend to convert to AD; however, many MCI individuals will remain stable and some may even recover. At present, the principal drugs for the treatment of AD provide only symptomatic and palliative benefits. Safe and effective mechanism-based therapies are needed for this devastating neurodegenerative disease of later life. In conjunction with the development of new therapeutic drugs, tools for early detection of AD would be important. One of the future challenges will be to detect at an early stage those MCI individuals who will convert to AD. Methods which can predict which MCI subjects will convert to AD will become even more important if the new drug candidates prove to have disease-arresting or even disease-slowing effects. These types of drugs are likely to have the best efficacy if administered in the early or even presymptomatic phase of the disease, when the synaptic and neuronal loss has not become too widespread. There is no clinical method to determine with certainty which MCI individuals will progress to AD. However, several methods have been suggested as predictors of conversion to AD, e.g. increased [11C] PIB uptake, hippocampal atrophy in MRI, a low CSF A beta 42 level, a high CSF tau-protein level, the apolipoprotein E (APOE) ε4 allele, and impairment in episodic memory and executive functions. In the present study, subjects with MCI appeared to have significantly higher [11C] PIB uptake than healthy elderly subjects in several brain areas, including the frontal cortex, the posterior cingulate, the parietal and lateral temporal cortices, the putamen and the caudate. The results from this PET study also indicate that, over time, MCI subjects who display increased [11C] PIB uptake appear to be significantly more likely to convert to AD than MCI subjects with negative [11C] PIB retention. Hippocampal atrophy also seems to increase clearly in MCI individuals during the conversion to AD. In this study, [11C] PIB uptake increased early and changed relatively little during the AD process, whereas hippocampal atrophy progressed during the disease. In addition to increased [11C] PIB retention and hippocampal atrophy, the status of the APOE ε4 allele might contribute to the conversion from MCI to AD.
Abstract:
The factors affecting outcome after arthroscopic rotator cuff repair are unclear, and there is still insufficient evidence of the efficacy of any treatment modality for rotator cuff tears. The purpose of the current study was to determine, in a prospective randomized multicenter trial, whether there is a difference in clinical outcome between three different treatment modalities for degenerative, atraumatic supraspinatus tendon tear in elderly patients. 180 shoulders were randomized into three treatment groups: 1) physiotherapy, 2) arthroscopic acromioplasty and physiotherapy, and 3) arthroscopic rotator cuff reconstruction, acromioplasty and physiotherapy. A further objective of this study was to evaluate retrospectively the effect of trauma, the size of the rotator cuff tear, smoking habits and glenohumeral osteoarthritis on the clinical treatment outcome after arthroscopic rotator cuff repair in a consecutively and prospectively collected series of patients. The patient data were gathered into an electronic database. The Constant score was used as the primary outcome measure. The follow-up time was one year. The main finding was that operative treatment did not provide benefit over a conservative regimen in elderly patients with an atraumatic supraspinatus tear. Trauma did not affect the clinical outcome, nor was there a difference in the age of patients with traumatic vs. non-traumatic rotator cuff tears. The size of the rotator cuff tear correlated significantly with the clinical results. The outcome was significantly poorer in tears with infraspinatus involvement compared to anterosuperior tears. Operatively treated rotator cuff tear patients who smoked were significantly younger than non-smokers, and smoking was associated with a poorer clinical outcome. Concomitant osteoarthritis of the glenohumeral joint was found to be a relatively common finding in supraspinatus tear patients. Osteoarthritis of the glenohumeral joint in operatively treated supraspinatus tear patients predicted poorer clinical results compared to patients without osteoarthritis.
Abstract:
Context: Web services have been gaining popularity due to the success of service-oriented architecture and cloud computing. Web services offer a tremendous opportunity for service developers to publish their services and applications beyond the boundaries of the organization or company. However, to fully exploit these opportunities it is necessary to find an efficient discovery mechanism. Web service discovery mechanisms have therefore attracted considerable attention in Semantic Web research; however, there have been no literature surveys that systematically map the present research results, so the overall impact of these research efforts and the level of maturity of their results are still unclear. This thesis aims to provide an overview of the current state of research into Web service discovery mechanisms using systematic mapping. The work is based on papers published from 2004 to 2013 and attempts to elaborate various aspects of the analyzed literature, including classifying the papers in terms of the architectures, frameworks and methods used for Web service discovery. Objective: The objective of this work is to summarize the current knowledge available on Web service discovery mechanisms and to systematically identify and analyze the currently published research in order to identify the different approaches presented. Method: A systematic mapping study was employed to assess the various Web service discovery approaches presented in the literature. Systematic mapping studies are useful for categorizing and summarizing the level of maturity of a research area. Results: The results indicate that there are numerous approaches that are consistently being researched and published in this field. In terms of where this research is published, conferences are the major publishing arena, as 48% of the selected papers were published in conferences, illustrating the level of maturity of the research topic. Additionally, the 52 selected papers are categorized into two broad segments, namely functional and non-functional based approaches, taking into consideration architectural aspects and information retrieval approaches, semantic matching, syntactic matching, behavior-based matching, as well as QoS and other constraints.
Abstract:
The information technology (IT) industry has recently witnessed the proliferation of cloud services, which have allowed IT service providers to deliver on-demand resources to customers over the Internet. This frees both service providers and consumers from traditional IT-related burdens such as capital and operating expenses and allows them to respond rapidly to new opportunities in the market. Due to the popularity and growth of cloud services, numerous researchers have conducted studies on various aspects of cloud services, both positive and negative. However, none of those studies have connected all the relevant information to provide a holistic picture of the current state of cloud service research. This study aims to investigate that current state and propose the most promising future directions. In order to achieve these goals, a systematic literature review was conducted on studies with a primary focus on cloud services. Based on carefully crafted inclusion criteria, 52 articles from highly credible online sources were selected for the review. To define the main focus of the review and facilitate the analysis of the literature, a conceptual framework with five main factors was proposed. The selected articles were organized under the factors of the proposed framework and then synthesized using a narrative technique. The results of this systematic review indicate that the impacts of cloud services on enterprises were the factor best covered by contemporary research. Researchers were able to present valuable findings about how cloud services affect various aspects of enterprises such as governance, performance, and security. By contrast, the role of service provider sub-contractors in the cloud service market remains largely uninvestigated, as do cloud-based enterprise software and cloud-based office systems for consumers. Moreover, the results also show that researchers should pay more attention to the integration of cloud services into legacy IT systems in order to facilitate the adoption of cloud services by enterprise users. After the literature synthesis, the present study proposes several promising directions for cloud service research by outlining research questions for the underexplored areas of cloud services, in order to facilitate the development of cloud service markets in the future.
Abstract:
This bachelor's thesis examines the benefits of applying a current distortion detection device in customer-premises low-voltage networks. The purpose of this study was to find out whether there are benefits to measuring current distortion in low-voltage residential networks, and to conclude who can benefit from measuring power quality. The research focuses on benefits based on standardization in Europe and the United States of America. The research also gives examples of appliances in which a current distortion detection device could be used, along with a possible illustration of a user interface for the device. The research was conducted as an analysis of the benefits of a current distortion detection device in residential low-voltage networks and was based on a literature review. The study was divided into three sections. The first explains the reasons for benefitting from the use of the device, and the second portrays, in theory, a low-cost device that could detect one-phase current distortion. The last section discusses the benefits of using a current distortion detection device while focusing on the beneficiaries. Based on the results of this research, there are benefits to using a current distortion detection device. The main benefitting party was found to be manufacturers, as they are held responsible for limiting current distortion on behalf of consumers. Manufacturers could adjust equipment to respond better to distortion by having access to the ongoing current distortion in the network. The other benefitting party is system operators, who could better locate distortion issues in the low-voltage residential network and start preventing long-term problems caused by current distortion early on.
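To indicate the kind of quantity such a one-phase detection device would report, the Python sketch below computes the total harmonic distortion (THD) of a sampled current from its FFT. It is a minimal illustration under the stated sampling assumptions, not the thesis design; the function and parameter names are placeholders.

```python
import numpy as np

def current_thd(samples: np.ndarray, fs: float, f1: float = 50.0,
                max_harmonic: int = 40) -> float:
    """Total harmonic distortion of a sampled one-phase current,
    THD = sqrt(I_2^2 + ... + I_H^2) / I_1, taken from an FFT of a window
    assumed to contain an integer number of fundamental cycles."""
    spectrum = np.abs(np.fft.rfft(samples))
    bins_per_harmonic = f1 * len(samples) / fs        # fundamental bin index
    idx = (np.arange(1, max_harmonic + 1) * bins_per_harmonic).round().astype(int)
    idx = idx[idx < len(spectrum)]                    # drop harmonics above Nyquist
    mags = spectrum[idx]
    fundamental, harmonics = mags[0], mags[1:]
    return float(np.sqrt(np.sum(harmonics ** 2)) / fundamental)
```

For example, for a 50 Hz network sampled at 10 kHz over exactly ten cycles (2000 samples), current_thd(i_samples, fs=10_000.0) returns the ratio of the combined harmonic content (orders 2-40) to the fundamental, which is the figure the hypothetical device would display or log.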