Abstract:

Since its introduction, fuzzy set theory has become a useful tool in the mathematical modelling of problems in Operations Research and many other fields. The number of applications is growing continuously. In this thesis we investigate a special type of fuzzy set, namely fuzzy numbers. Fuzzy numbers (which will be considered in the thesis as possibility distributions) have been widely used in quantitative analysis in recent decades. In this work two measures of interactivity are defined for fuzzy numbers: the possibilistic correlation and the correlation ratio. We focus on both the theoretical and the practical applications of these new indices. The approach is based on the level-sets of the fuzzy numbers and on the concept of the joint distribution of marginal possibility distributions. The measures possess properties similar to those of the corresponding probabilistic correlation and correlation ratio. The connections to real-life decision-making problems are emphasized, focusing on financial applications. We extend the definitions of possibilistic mean value, variance, covariance and correlation to quasi fuzzy numbers and prove necessary and sufficient conditions for the finiteness of the possibilistic mean value and variance. The connection between the concepts of probabilistic and possibilistic correlation is investigated using an exponential distribution. The use of fuzzy numbers in practical applications is demonstrated by the Fuzzy Pay-Off method. This model for real option valuation is based on findings from earlier real option valuation models. We illustrate the use of a number of different types of fuzzy numbers and mean value concepts with the method and provide a real-life application.
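As a point of reference, the level-set-based possibilistic mean value and variance that this line of work commonly builds on (the definitions of Carlsson and Fullér; the thesis may use a weighted or otherwise generalised variant) can be written, for a fuzzy number $A$ with $\gamma$-level sets $[A]^\gamma = [a_1(\gamma), a_2(\gamma)]$, as

\[ E(A) = \int_0^1 \gamma \bigl(a_1(\gamma) + a_2(\gamma)\bigr)\, d\gamma, \qquad \mathrm{Var}(A) = \frac{1}{2}\int_0^1 \gamma \bigl(a_2(\gamma) - a_1(\gamma)\bigr)^2\, d\gamma . \]

The finiteness conditions mentioned above then amount to requiring that these integrals converge when the support of a quasi fuzzy number is unbounded.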

Abstract:

Introduction of second-generation biofuels is an essential factor for meeting the EU’s 2020 targets for renewable energy in the transport sector and for enabling the more ambitious targets for 2030. Finland’s forest industry is strongly involved in the development and commercialisation of second-generation biofuel production technologies. The goal of this paper is to provide a quantified insight into Finnish prospects for reaching the 2020 national renewable energy targets and concurrently becoming a large-scale producer of forest-biomass-based second-generation biofuels feeding the increasing demand in European markets. The focus of the paper is on assessing the potential for utilising forest biomass for liquid biofuels up to 2020. In addition, technological issues related to the production of second-generation biofuels were reviewed. Finland has good opportunities to realise a scenario that meets the 2020 renewable energy targets and allows large-scale production of wood-based biofuels. In 2020, biofuel production from domestic forest biomass in Finland may reach nearly a million tonnes (40 PJ). With the existing biofuel production capacity (20 PJ/yr) and the national biofuel consumption target (25 PJ) taken into account, the potential net export of biofuels from Finland in 2020 would be 35 PJ, corresponding to 2–3% of European demand. Commercialisation of second-generation biofuel production technologies, high utilisation of the sustainable harvesting potential of Finnish forest biomass, and allocation of a significant proportion of the pulpwood harvesting potential to energy purposes are prerequisites for this scenario. Large-scale import of raw biomass would enable remarkably greater biofuel production than is described in this paper.
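Read together, the figures above combine as a simple energy balance; the assumption here is that the exportable volume equals new production plus existing capacity minus domestic consumption, and the paper's own accounting may differ in detail:

\[ 40\ \text{PJ} + 20\ \text{PJ} - 25\ \text{PJ} = 35\ \text{PJ}. \]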

Abstract:

The thesis explores global and national-level issues related to the development of markets for biomass for energy. The thesis consists of five separate papers and provides insights on selected issues. The aim of Paper I was to identify methodological and statistical challenges in assessing international solid and liquid biofuels trade and to provide an overview of the Finnish situation with respect to the status of international solid and liquid biofuels trade. We found that, for the Finnish case, it is possible to quantify direct and indirect trade volumes of biofuels. The study showed that indirect trade of biofuels has a highly significant role in Finland and may be a significant sector also in global biofuels trade. The purpose of Paper II was to provide a quantified insight into Finnish prospects for meeting the national 2020 renewable energy targets and concurrently becoming a large-scale producer of forest-biomass-based second-generation biofuels for feeding increasing demand in European markets. We found that Finland has good opportunities to realise a scenario to meet the 2020 renewable energy targets and for large-scale production of wood-based biofuels. The potential net export of transport biofuels from Finland in 2020 would correspond to 2–3% of European demand. Paper III summarises the global status of international solid and liquid biofuels trade as illuminated by several separate sources. International trade of biofuels was estimated at nearly 1 EJ for 2006. Indirect trade of biofuels through trading of industrial roundwood and material by-products comprises the largest proportion of the trading, with a share of about two thirds. The purpose of Paper IV was to outline a comprehensive picture of the coverage of various certification schemes and sustainability principles relating to the entire value-added chain of biomass and bioenergy. Regardless of the intensive work that has been done in the field of sustainability schemes and principles concerning the use of biomass for energy, weaknesses still exist. The objective of Paper V was to clarify alternative scenarios for the international biomass market up to 2020 and to identify the underlying steps needed toward a well-functioning and sustainable market for biomass for energy purposes. An overall conclusion drawn from this analysis concerns the enormous opportunities related to the utilisation of biomass for energy in the coming decades.

Abstract:

Original sludge from wastewater treatment plants (WWTPs) usually has poor dewaterability. Conventionally, mechanical dewatering methods are used to increase the dry solids (DS) content of the sludge. Sludge dewatering is an important economic factor in the operation of WWTPs, because a high water content in the final sludge cake is commonly related to increased transport and disposal costs. Electro-dewatering could be a potential technique to reduce the water content of the final sludge cake, but the parameters affecting the performance of electro-dewatering and the quality of the resulting sludge cake, as well as of the removed water, are not sufficiently well known. In this research, non-pressure and pressure-driven experiments were set up to investigate the effect of various parameters and experimental strategies on electro-dewatering. The migration behaviour of organic compounds and metals was also studied. Application of an electric field significantly improved the dewatering performance in comparison to experiments without an electric field. Electro-dewatering increased the DS content of the sludge from 15% to 40% in non-pressure applications and from 8% to 41% in pressure-driven applications. These DS contents are significantly higher than those typically obtained with mechanical dewatering techniques in wastewater treatment plants. The better performance of the pressure-driven dewatering was associated with a higher current density at the beginning and a higher electric field strength later on in the experiments. The applied voltage was one of the major parameters affecting the dewatering time, the water removal rate and the DS content of the sludge cake. By decreasing the sludge loading rate, a higher electric field strength was established between the electrodes, which had a positive effect on the DS content of the final sludge cake. However, interrupted voltage application had a negative impact on dewatering in this study, probably because the off-times were too long. Other factors affecting dewatering performance were associated with the original sludge characteristics and sludge conditioning. Anaerobic digestion of the sludge with high pH buffering capacity, polymer addition and freeze/thaw conditioning had a positive impact on dewatering. The impact of pH on electro-dewatering was related to the surface charge of the particles, measured as zeta potential. One of the differences between electro-dewatering and mechanical dewatering technologies is that electro-dewatering actively removes ionic compounds from the sludge. In this study, the dissolution and migration of organic compounds (such as short-chain fatty acids), macro metals (Na, K, Ca, Mg, Fe) and trace metals (Ni, Mn, Zn, Cr) were investigated. The migration of the metals depended on their fractionation and on the electric field strength. These compounds may have both negative and positive impacts on the reuse and recycling of the sludge and the removed water. Based on the experimental results of this study, the electro-dewatering process can be optimized in terms of dewatering time, desired DS content, power consumption and chemical usage.
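To put the DS figures into perspective, a simple mass-balance sketch (with the assumption that the dry solids mass is conserved during dewatering; the numbers are the pressure-driven case quoted above) shows how much water has to leave per kilogram of dry solids:

    # Illustrative mass balance per 1 kg of dry solids (DS), assuming the DS mass is conserved
    def water_per_kg_ds(ds_fraction):
        """Water mass (kg) accompanying 1 kg of dry solids at a given DS fraction."""
        return (1.0 - ds_fraction) / ds_fraction

    before = water_per_kg_ds(0.08)   # ~11.5 kg water per kg DS in the feed sludge
    after = water_per_kg_ds(0.41)    # ~1.4 kg water per kg DS in the final cake
    removed = before - after         # ~10.1 kg water removed per kg DS (~87% of the water)
    print(round(before, 1), round(after, 1), round(removed, 1))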

Abstract:

The perspective of innovations generated by the public sector itself, and of public-sector innovation activity, is a relatively recent subject in innovation research. An even newer approach is represented by research on user-driven and user-involving service innovation in the public sector. There is interest in implementing the user-driven and user-involving approach to service innovation, but knowledge about the approach based on scientific research is still fairly scarce. The main objective of this doctoral study, which focuses on user-driven service innovation, is to identify and group, at the micro level, the challenges of implementing the user-involving approach to service innovation in the public sector. A secondary objective is to use the knowledge gained from the research to formulate a list of questions that supports the adoption and implementation of the approach in public-sector service organisations and networks. In addition to the public service sector, the results of the dissertation can, where applicable, also be utilised by service organisations and networks in the private and third sectors, as well as by parties involved in planning and implementing user-driven innovation policy. The challenges are approached through the characteristics of the user-driven and user-involving approach to service innovation, and they are examined from the perspective of the developer-officials (group level). The municipal sector was chosen to represent the public sector in the study. The characteristics of the approach are defined in the study as the guiding role of the user perspective at the strategic level of the organisation's innovation activity and at the level of service renewal processes, openness (especially at the user interface) and interpretativeness in the early phase of innovation processes, and a broad conception of the sources of innovation when the user perspective is being formed. The study focuses on the early phase of welfare service innovation processes, in which a central role is played by the acquisition of new ideas and of new knowledge and understanding to be exploited in the subsequent phases of the innovation processes. The study is limited to the form of user-driven service innovation in which users are intentionally and concretely involved in service innovation processes led by developer-officials. Users are understood in the study as the end users of services, as their "external utilisers", and the personnel of cross-sectoral service processes, as their "internal utilisers". The welfare services represented in the study are social and health services and the services provided by service centres for older people. The field of innovation activity in the municipal sector is understood in the study as a network-like whole that crosses the administrative boundaries of individual municipalities. The methodology of the dissertation, which was carried out as an article-based dissertation, is based on a multiple case study design and a qualitative research approach. The empirical part of the work consists of five sub-studies published as articles. The sub-studies employ different variations of the case study, and the research data were collected from three different basic research environments. The cases of the sub-studies were selected from different points of the continuum of the service users' voice (the voice of the customer). The user's voice is used in the study as a methodological solution and as a metaphor.
In addition, the user's voice is understood in the study as a collective factor that speaks of broader service development perspectives rather than as a metaphor for the needs and wishes of individual service users. The study identifies five challenges for implementing the user-involving approach to service innovation in the public sector. In brief, the challenges are: (1) user orientation in service renewal that is based on the service users' status as subjects; (2) recognising service users as a resource for innovation activity and finding the courage to involve them; (3) committing to collaboration in service renewal processes that cross the user interface and other interfaces, and keeping the work innovation-oriented; (4) perceiving development perspectives that are broader than service wish lists and customer feedback; and (5) creating a virtuous circle, based on trust, between service users and developers. The challenges identified as research results are located on the continuum of the user's voice with differing emphases. In addition, three key conclusions are drawn from the research results. First, structural holes containing innovation potential can be identified between the service developers on the one hand and the end users and internal utilisers of services on the other. Second, the readiness and willingness of the developer-officials to expand their knowledge formation in service renewal towards communal knowledge formation with service users is deficient. Third, the service developers have not sufficiently internalised the basic methodological ideas of the user-involving approach to service innovation. The five challenges identified in the study show that adopting the user-involving approach to service innovation as the innovation approach of a welfare service organisation or network is not a mechanical operation. The list of questions supporting the adoption of the approach is based on the challenges identified in the study. The list has been drawn up so that the questions relate broadly to the innovation culture of public service organisations and networks. The questions in the first part of the two-part list deal with the mental models that guide innovation. The first part includes, for example, the following question: "What kind of conception of service users (municipal residents), and of the relationship between users and developers (officials), do we express in service innovation; is the service user (municipal resident) a target for whom services are developed, or is he or she even an indispensable development partner?" The questions in the second part of the list relate to innovation practices and capabilities. An example is the following two-part question: "Do our innovation practices support innovation processes that cross the user interface, and do we commit ourselves with an open mind to working with service users, potential users or non-users? What benefits do we expect collaboration to bring to us and to the users, and to the quality attributes of the innovation?" As for the first part of the study's title, "kuulla vai kuunnella" ("to hear or to listen"), the answer is that the main emphasis is on "kuulla", to hear. The discussion chapter also raises the need, or at least the need for critical examination, to define the concept, nature and objectives of user-driven and user-involving service innovation on the basis of the specific characteristics of the public sector, as a counterweight to definitions originating from the business context of the private sector.

Abstract:

Current industrial atomic layer deposition (ALD) processes are almost wholly confined to glass or silicon substrates. For many industrial applications, deposition on polymer substrates will be necessary. Current deposition processes are also typically carried out at temperatures which are too high for polymers. If deposition temperatures in ALD can be reduced to a level applicable to polymers, it will open new interesting areas and applications for polymeric materials. The properties of polymers can be improved, for example, by coatings with functional and protective properties. Although ALD has shown its capability to operate at low temperatures suitable for polymer substrates, there are other issues related to process efficiency and to the characteristics of different polymers where new knowledge will assist in developing industrially conceivable ALD processes. A lower deposition temperature in ALD generally means longer process times to facilitate the self-limiting film growth mode characteristic of ALD. To improve process efficiency, more reactive precursors are introduced into the process. For example, in ALD oxide processes these can be more reactive oxidizers, such as ozone and oxygen radicals, to substitute for the more conventionally used water. Although replacing water in low-temperature ALD with ozone or plasma-generated oxygen radicals enables the process times to be shortened, these oxidizers may have unwanted effects both on the film growth and structure and, in some cases, can create detrimental process conditions for the polymer substrate. Plasma assistance is a very promising approach to improving process efficiency. The actual design and placement of the plasma source will have an effect on the film growth characteristics and the film structure, which may retard the development of process efficiency. Because the lifetime of the radicals is limited, the plasma source must be placed near the film growth region. Conversely, this subjects the substrate to exposure by other plasma species and electromagnetic radiation, which sets requirements for the optimization of plasma conditions. In this thesis, ALD has been used to modify, activate and functionalize polymer surfaces for further improvement of polymer performance, subject to the application. The issues in ALD on polymers, in both thermal and plasma-assisted ALD, are discussed further.

Abstract:

The objective of the thesis is to enhance understanding of the evolution of convergence. Previous research has shown that the technological interfaces between distinct industries are one of the major sources of new radical cross-industry innovations. Despite the fact that convergence in industry evolution has attracted substantial managerial interest, conceptual confusion persists within the field of convergence. Firstly, this study clarifies the convergence phenomenon and its impact on industry evolution. Secondly, the study creates novel patent analysis methods to analyze technological convergence and provides tools for anticipating the early stages of convergence. Overall, the study combines the industry evolution perspective and the convergence view of industrial evolution. The theoretical background for the study consists of the industry life cycle theories, technology evolution, and technological trajectories. The study links several important concepts in analyzing industry evolution: technological discontinuities, path dependency, technological interfaces as a source of industry transformation, and the evolutionary stages of convergence. Based on a review of the literature, a generic understanding of industry transformation and industrial dynamics was generated. In the convergence studies, the theoretical basis is in the discussion of different convergence types and their impacts on industry evolution, and in anticipating and monitoring the stages of convergence. The study is divided into two parts. The first part gives a general overview, and the second part comprises eight research publications. Our case study uses two historically very distinct industries, paper and electronics companies, as a test environment to evaluate the importance of emerging business sectors and technological convergence as a source of industry transformation. Both qualitative and quantitative research methods are utilized. The results of this study reveal that technological convergence and complementary innovations from different fields have a significant effect on the formation of emerging new business sectors. The patent-based indicators developed for the analysis of technological convergence can be utilized in analyzing technology competition, capability and competence development, knowledge accumulation, knowledge spill-overs, and technology-based industry transformation. The patent-based indicators can also provide insights into the future competitive environment. The results and conclusions from the empirical part do not appear to be in conflict with real observations in the industry.
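As an illustration of the kind of patent-based indicator such analyses rely on, the sketch below counts patents co-classified in technology classes from both fields over time; this is a generic co-classification measure with hypothetical records and class codes, not necessarily the specific indicators developed in the thesis:

    from collections import Counter

    # Hypothetical patent records: (grant year, set of technology classes assigned to the patent)
    patents = [
        (2001, {"D21H"}),            # papermaking class only
        (2003, {"H01L"}),            # electronics class only
        (2005, {"D21H", "H01L"}),    # co-classified: a possible signal of convergence
        (2007, {"D21H", "G06K"}),
        (2007, {"D21H", "H01L"}),
    ]

    paper_classes = {"D21H"}
    electronics_classes = {"H01L", "G06K"}

    # Count, per year, patents classified in both fields; a rising share over time is
    # read as an early sign of technological convergence between the two industries.
    co_classified = Counter(
        year for year, classes in patents
        if classes & paper_classes and classes & electronics_classes
    )
    print(sorted(co_classified.items()))  # [(2005, 1), (2007, 2)]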

Abstract:

More and more innovations currently being commercialized exhibit network effects, in other words, the value of using the product increases as more and more people use the same or compatible products. Although this phenomenon has been the subject of much theoretical debate in economics, marketing researchers have been slow to respond to the growing importance of network effects in new product success. Despite an increase in interest in recent years, there is no comprehensive view on the phenomenon and, therefore, there is currently incomplete understanding of the dimensions it incorporates. Furthermore, there is wide dispersion in operationalization, in other words, the measurement of network effects, and currently available approaches have various shortcomings that limit their applicability, especially in marketing research. Consequently, little is known today about how these products fare on the marketplace and how they should be introduced in order to maximize their chances of success. Hence, the motivation for this study was driven by the need to increase our knowledge and understanding of the nature of network effects as a phenomenon, and of their role in the commercial success of new products. This thesis consists of two parts. The first part comprises a theoretical overview of the relevant literature, and presents the conclusions of the entire study. The second part comprises five complementary, empirical research publications. Quantitative research methods and two sets of quantitative data are utilized. The results of the study suggest that there is a need to update both the conceptualization and the operationalization of the phenomenon of network effects. Furthermore, there is a need for an augmented view on customers’ perceived value in the context of network effects, given that the nature of value composition has major implications for the viability of such products in the marketplace. The role of network effects in new product performance is not as straightforward as suggested in the existing theoretical literature. The overwhelming result of this study is that network effects do not directly influence product success, but rather enhance or suppress the influence of product introduction strategies. The major contribution of this study is in conceptualizing the phenomenon of network effects more comprehensively than has been attempted thus far. The study gives an augmented view of the nature of customer value in network markets, which helps in explaining why some products thrive on these markets whereas others never catch on. Second, the study discusses shortcomings in prior literature in the way it has operationalized network effects, suggesting that these limitations can be overcome in the research design. Third, the study provides some much-needed empirical evidence on how network effects, product introduction strategies, and new product performance are associated. In general terms, this thesis adds to our knowledge of how firms can successfully leverage network effects in product commercialization in order to improve market performance.
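To make the opening definition concrete, a textbook-style formalisation (added here as an illustration, not a model taken from the thesis) writes a user's perceived value as a stand-alone component plus a network component that grows with the installed base:

\[ V_i(n) = v_i + \beta\, f(n), \qquad f'(n) > 0, \]

where $v_i$ is the stand-alone value of the product to user $i$, $n$ is the installed base of the same or compatible products, and $\beta \ge 0$ scales the strength of the network effect; network effects are present whenever $\beta > 0$, and whether $n$ counts users of the product itself or the availability of complementary products corresponds to the usual distinction between direct and indirect network effects.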

Abstract:

The value and benefits of user experience (UX) are widely recognized in the modern world, and UX is seen as an integral part of many fields. This dissertation integrates UX and the understanding of end users with the early phases of software development. The concept of UX is still unclear, as witnessed by more than twenty-five definitions and ongoing argument about its different aspects and attributes. This missing consensus creates a problem in linking UX and software development: how can the UX of end users be taken into account when it is unclear to software developers what UX means to the end users? Furthermore, currently known methods to estimate, evaluate and analyse UX during software development are biased in favor of the phases where something concrete and tangible already exists. It would be beneficial to further elaborate on UX in the early phases of software development. Theoretical knowledge from the fields of UX and software development is presented and linked with surveyed and analysed UX attribute information from end users and UX professionals. Composing the surveys around the 21 identified UX attributes is described, and the results are analysed in conjunction with end user demographics. Finally, the utilization of the results gained is explained with a proof-of-concept utility, the Wizard of UX, which demonstrates how UX can be integrated into the early phases of software development. The process of designing, prototyping and testing this utility is an integral part of this dissertation. The analyses show statistically significant dependencies between appreciation of UX attributes and the surveyed end user demographics. In addition, tests conducted by software developers and an industrial UX designer both indicate the benefits and necessity of the prototyped Wizard of UX utility. According to the conducted tests, this utility meets the requirements set for it: it provides a way for software developers to raise their know-how of UX and a possibility to consider the UX of end users with statistical user profiles during the early phases of software development. This dissertation produces new and relevant information for the UX and software development communities by demonstrating that it is possible to integrate UX into the early phases of software development.
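A minimal sketch of the kind of dependency analysis reported above, with purely illustrative counts and a hypothetical attribute name; the dissertation's actual survey items and statistical procedures are not reproduced here:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows = age groups, columns = counts of respondents
    # reporting "low", "medium" and "high" appreciation of an assumed UX attribute
    # such as "playfulness".
    table = np.array([
        [30, 45, 25],   # 18-29 year olds
        [40, 40, 20],   # 30-49 year olds
        [55, 30, 15],   # 50+ year olds
    ])

    chi2, p_value, dof, expected = chi2_contingency(table)
    # A small p-value suggests that appreciation of the attribute depends on the age group,
    # i.e. the kind of statistically significant dependency the study reports.
    print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")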

Abstract:

Longitudinal studies are quite rare in the area of Operations Management. One reason might be the time needed to conduct such studies, and consequently the lack of experience and of real-life examples and results. The aim of the thesis is to examine longitudinal studies in the area of OM and the possible advantages, challenges and pitfalls of such studies. A longitudinal benchmarking study, Made in Finland, was analyzed in terms of the study methodology and its outcomes. The timeline of this longitudinal study is interesting: the first study was made in 1993, the second in 2004 and the third in 2010. Between these studies some major changes occurred in the Finnish business environment. Between the first and second studies, Finland joined the EEA and the EU, and globalization started with the rise of the Internet era, while between the second and third studies the financial turmoil started in 2007. The sample originally consisted of 23 manufacturing sites in Finland, which were interviewed in 1993, 2004 and 2010. One important and interesting aspect is that all the original sites participated in 2004, and 19 sites were still able to participate in 2010. Four sites had been closed and/or moved abroad. All of this gave a good opportunity to study the changes that occurred in the Finnish manufacturing sites and their environment, how the sites reacted to these changes, and the effects on their performance. It is very seldom, if ever, that the same manufacturing sites have been studied in a longitudinal setting using three data points. The results of this study are thus unique, and the experience gained is valuable for practitioners.

Abstract:

The large and growing number of digital images is making manual image search laborious. Only a fraction of the images contain metadata that can be used to search for a particular type of image. Thus, the main research question of this thesis is whether it is possible to learn visual object categories directly from images. Computers process images as long lists of pixels that do not have a clear connection to the high-level semantics which could be used in image search. Various methods have been introduced in the literature to extract low-level image features, as well as approaches to connect these low-level features with high-level semantics. One of these approaches, studied in this thesis, is called Bag-of-Features. In the Bag-of-Features approach, the images are described using a visual codebook. The codebook is built from the descriptions of image patches using clustering. The images are then described by matching the descriptions of image patches with the visual codebook and computing the number of matches for each code. In this thesis, unsupervised visual object categorisation using the Bag-of-Features approach is studied. The goal is to find groups of similar images, e.g., images that contain an object from the same category. The standard Bag-of-Features approach is improved by using spatial information and visual saliency. It was found that the performance of visual object categorisation can be improved by using the spatial information of local features to verify the matches. However, this process is computationally heavy, and thus the number of images in the spatial matching must be limited, for example by using the Bag-of-Features method as in this study. Different approaches for saliency detection are studied, and a new method based on the Hessian-Affine local feature detector is proposed. The new method achieves results comparable to the current state of the art. The visual object categorisation performance was improved by using foreground segmentation based on saliency information, especially when the background could be considered as clutter.
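A minimal sketch of the Bag-of-Features pipeline just described, using generic off-the-shelf building blocks (OpenCV's ORB detector and scikit-learn's k-means; the thesis's own choice of detectors, descriptors and codebook sizes is not specified here):

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def local_descriptors(image_paths):
        """Extract local patch descriptors from each image (ORB is used here for simplicity)."""
        orb = cv2.ORB_create()
        per_image = []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = orb.detectAndCompute(img, None)
            per_image.append(desc if desc is not None else np.empty((0, 32)))
        return per_image

    def bag_of_features(image_paths, codebook_size=100):
        """Describe each image as a normalized histogram over a clustered visual codebook."""
        per_image = local_descriptors(image_paths)
        all_desc = np.vstack(per_image).astype(np.float32)
        # Build the visual codebook by clustering all patch descriptors.
        codebook = KMeans(n_clusters=codebook_size, n_init=10).fit(all_desc)
        histograms = []
        for desc in per_image:
            hist = np.zeros(codebook_size)
            if len(desc) > 0:
                # Match each patch description to its nearest code and count the matches.
                codes = codebook.predict(desc.astype(np.float32))
                hist = np.bincount(codes, minlength=codebook_size).astype(float)
            histograms.append(hist / max(hist.sum(), 1.0))
        return np.array(histograms)

    # The resulting histograms can then be clustered (e.g. again with k-means) to find
    # groups of similar images, i.e. unsupervised visual object categorisation.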

Abstract:

The direct-driven permanent magnet synchronous generator is one of the most promising topologies for megawatt-range wind power applications. The rotational speed of the direct-driven generator is very low compared with that of traditional electrical machines, and the low rotational speed requires high torque to produce megawatt-range power. The special features of direct-driven generators caused by the low speed and high torque are discussed in this doctoral thesis. Low speed and high torque set high demands on the torque quality: the cogging torque and the load torque ripple must be as low as possible to prevent mechanical failures. In this doctoral thesis, various methods to improve the torque quality are compared with each other. Rotor surface shaping, magnet skew, magnet shaping, and the asymmetrical placement of magnets and stator slots are studied not only in terms of torque quality; their effects on the electromagnetic performance and manufacturability of the machine are also discussed. The heat transfer of the direct-driven generator must be designed to handle the copper losses of the stator winding carrying a high current density and to keep the temperature of the magnets low enough. The cooling system of the direct-driven generator, applying doubly radial air cooling with numerous radial cooling ducts, was modeled with a lumped-parameter-based thermal network. The performance of the cooling system was discussed in the steady and transient states, and the effect of the number and width of the radial cooling ducts was explored. A large number of radial cooling ducts drastically increases the impact of the stack end area effects, because the stator stack consists of numerous substacks. The effects of the radial cooling ducts on the effective axial length of the machine were studied by analyzing the cross-section of the machine in the axial direction. A method to compensate for the magnet end area leakage was considered. The effects of the cooling ducts and the stack end area effects on the no-load voltages and inductances of the machine were explored by using numerical analysis tools based on the three-dimensional finite element method. The electrical efficiency of the permanent magnet machine with different control methods was estimated analytically over the whole speed and torque range, and the electrical efficiencies achieved with the most common control methods were compared with each other. The stator voltage increase caused by the armature reaction was analyzed. The effect of inductance saturation as a function of the load current was incorporated into the analytical efficiency calculation.
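The low-speed, high-torque statement can be made concrete with the basic power relation; the speed and power figures below are illustrative assumptions rather than values taken from the thesis:

\[ T = \frac{P}{\Omega}, \qquad \Omega = \frac{2\pi n}{60}, \]

so that, for example, $P = 3\ \text{MW}$ at $n = 15\ \text{rpm}$ gives $\Omega \approx 1.57\ \text{rad/s}$ and $T \approx 1.9\ \text{MNm}$, roughly a hundred times the torque of a conventional 1500 rpm machine of the same rating (about 19 kNm).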

Abstract:

Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
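A minimal linear sketch of the pairwise regularized least-squares idea behind RankRLS (the thesis works with kernels and with matrix-algebra shortcuts that avoid materialising the pairs; this toy version simply regresses score differences over preference pairs):

    import numpy as np

    def fit_pairwise_rls(X, y, pairs, reg=1.0):
        """Learn linear scores w so that x_i scores higher than x_j for each preferred pair (i, j)."""
        # Design matrix of feature differences, one row per preference pair.
        D = np.array([X[i] - X[j] for i, j in pairs])
        t = np.array([y[i] - y[j] for i, j in pairs])   # target score differences
        n_features = X.shape[1]
        # Regularized least squares on the pairwise differences (closed-form solution).
        w = np.linalg.solve(D.T @ D + reg * np.eye(n_features), D.T @ t)
        return w

    # Toy usage: three items with two features and graded relevance labels.
    X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
    y = np.array([2.0, 1.0, 0.0])
    pairs = [(0, 1), (0, 2), (1, 2)]                    # (preferred, less preferred)
    w = fit_pairwise_rls(X, y, pairs)
    print(X @ w)   # learned scores preserve the ordering item 0 > item 1 > item 2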

Abstract:

This thesis is devoted to the growth and investigation of Mn-doped InSb and II-IV-As2 semiconductors, including Cd1-xZnxGeAs2:Mn and ZnSiAs2:Mn bulk crystals and ZnSiAs2:Mn/Si heterostructures. Bulk crystals were grown by direct melting of the starting components followed by fast cooling. Mn-doped ZnSiAs2/Si heterostructures were grown by vacuum-thermal deposition of ZnAs2 and Mn layers on Si substrates followed by annealing. The compositional and structural properties of the samples were investigated by different methods. The samples contain micro- and nano-sized clusters of additional ferromagnetic Mn-X phases (X = Sb or As). The influence of these magnetic precipitates on the magnetic and electrical properties of the investigated materials was examined. At relatively high Mn concentrations, the main contribution to the magnetization of the samples comes from the MnSb or MnAs clusters. These clusters are responsible for the high-temperature behavior of the magnetization and for the relatively high Curie temperatures: up to 350 K for Mn-doped II-IV-As2 and about 600 K for InMnSb. The low-field magnetic properties of the Mn-doped II-IV-As2 semiconductors and the ZnSiAs2:Mn/Si heterostructures are connected to the nanosized MnAs particles. An influence of nanosized MnSb clusters on the low-field magnetic properties of InMnSb was also observed. The contribution of the paramagnetic phase to the magnetization rises at low temperatures or in samples with low Mn concentration. The source of this contribution is not only isolated Mn ions but also small complexes, mainly dimers and trimers, formed by Mn ions substituting at cation positions in the crystal lattice. The resistivity, magnetoresistance and Hall resistivity of bulk Mn-doped II-IV-As2 and InSb crystals were analyzed. The interaction between delocalized holes and the 3d shells of the Mn ions, together with giant Zeeman splitting near the cluster interfaces, is responsible for the negative magnetoresistance. In addition to the high-temperature critical point, a low-temperature ferromagnetic transition was observed. An anomalous Hall effect was observed in the Mn-doped samples and analyzed for InMnSb. It was found that the Mn-X clusters significantly influence the magnetic scattering of carriers.

Abstract:

The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initialization and localization of atherosclerotic plaque in the human thoracic aorta. Blood flow dynamics were numerically simulated in three idealized and two realistic thoracic aorta models. The idealized models of the thoracic aorta were reconstructed from measurements available in the literature, and the realistic models were constructed by processing Computed Tomography (CT) images. The CT images were made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta models compatible with the numerical-method-based computer codes. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity boundary conditions were deployed as inlet boundary conditions. The blood flow was assumed to be homogeneous and incompressible, and the blood was assumed to be a Newtonian fluid. The simulations with the idealized models of the thoracic aorta were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure and Wall Shear Stress (WSS) observed during the fourth cardiac cycle were extensively analyzed. The aim of carrying out the simulations with the idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model. The motive behind the choice of three aorta models with distinct features was to understand the dependence of the flow dynamics on the aorta anatomy. A highly disturbed and non-uniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. The WSS profiles at the roots of the branches, on the other hand, show significant differences as the geometry of the aorta and its branches varies. The comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under the effects of hypertension and hypotension: one of the idealized aorta models was modified, along with the boundary conditions, to mimic the thoracic aorta under these conditions. The results of the simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than those in the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the aortic arch. Unlike in the idealized models, the distribution of flow was non-planar and heavily guided by the artery anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; this could not be properly observed in the model whose imaging contained only a shorter length of the aortic branches.
Flow recirculation was also observed near the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of the flow at the inlet, and the flow profiles were weakly turbulent during the flow reversal. The complex flow patterns caused a non-uniform distribution of WSS. High WSS was found at the junctions of the branches and the aortic arch; low WSS was found at the proximal part of each junction, while intermediate WSS occurred in the distal part of the junction. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and at the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry region and at the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aim of this part of the study was first to study the effect of stenosis on the flow and WSS distributions, then to understand the effect of the shape of the atherosclerotic plaque on the flow and WSS distributions, and finally to investigate the effect of the severity of the lumen blockage on the flow and WSS distributions. The results revealed that the distribution of WSS is significantly affected by a plaque with a mere 50% stenosis. An asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque does. The flow dynamics within the thoracic aorta models have been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics have been investigated. The distribution of complex flow and WSS is correlated with the localization of atherosclerosis. With the available results we can conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis, and that the flow dynamics and the arterial anatomy play a role in this localization. Patient-specific image-based models can be used to diagnose the locations in the aorta that are vulnerable to the development of arterial diseases such as atherosclerosis.
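For reference, the wall shear stress analysed throughout the study is, for the Newtonian blood model assumed above, the tangential viscous stress exerted by the flow on the vessel wall:

\[ \tau_w = \mu \left.\frac{\partial u_t}{\partial n}\right|_{\text{wall}}, \]

where $\mu$ is the dynamic viscosity of the blood, $u_t$ is the velocity component tangential to the wall, and $n$ is the wall-normal coordinate. Low and oscillating values of $\tau_w$ are the quantities commonly associated with sites prone to plaque formation, which is why the WSS distributions above are examined so closely.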