38 results for Multiple input and multiple output autonomous flight systems
Abstract:
Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use in recent years, and nowadays driverless transport systems are already state-of-the-art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast the operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data has been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting and the identification of critical success factors, due to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques.
As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it could be argued that activity-based life cycle costing is the method with which to conduct cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs this way, it was argued that the activity-based LCC model is able to facilitate learning from and continuous improvement of the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
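The combination of activity-based costing and Monte Carlo simulation described above can be sketched in a few lines: each activity's annual cost is drawn from a distribution, and repeated sampling yields a cost forecast with uncertainty bounds. The activities, cost figures and triangular distributions below are purely illustrative assumptions, not values from the study.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical activity cost drivers for an autonomous vessel, EUR/year,
# each given as a triangular distribution (min, most likely, max).
activities = {
    "remote_operation_centre": (200_000, 250_000, 320_000),
    "maintenance":             (150_000, 180_000, 260_000),
    "insurance":               ( 80_000, 100_000, 140_000),
}

def simulate_annual_cost(activities, n_runs=10_000):
    """Monte Carlo estimate of total annual operational cost."""
    totals = []
    for _ in range(n_runs):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in activities.values()))
    totals.sort()
    return {
        "mean": sum(totals) / n_runs,
        "p5":  totals[int(0.05 * n_runs)],   # optimistic bound
        "p95": totals[int(0.95 * n_runs)],   # pessimistic bound
    }

result = simulate_annual_cost(activities)
print(result)
```

Because each activity is sampled separately, the same run also shows which activity dominates the spread, which is how an activity-based model can point at critical success factors.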
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide the synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first presented routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level.
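The congestion-aware output selection described above can be illustrated with a small sketch: among the minimal (distance-reducing) output ports of a 2D mesh router, pick the one whose neighbour reports the least congestion. The port names, coordinate convention and congestion metric below are illustrative assumptions, not the thesis's actual protocol.

```python
def minimal_output_ports(cur, dst):
    """Return the output directions that move the packet closer to dst
    in a 2D mesh (minimal/adaptive routing candidates)."""
    (cx, cy), (dx, dy) = cur, dst
    ports = []
    if dx > cx: ports.append("E")
    if dx < cx: ports.append("W")
    if dy > cy: ports.append("N")
    if dy < cy: ports.append("S")
    return ports

def select_output(cur, dst, congestion):
    """Among minimal ports, pick the neighbour with the lowest congestion
    (e.g. reported buffer occupancy); None means the packet has arrived."""
    candidates = minimal_output_ports(cur, dst)
    if not candidates:
        return None
    return min(candidates, key=lambda p: congestion[p])

# Example: destination lies north-east, but the east neighbour is heavily
# congested, so the adaptive router sends the packet north first.
congestion = {"N": 2, "S": 0, "E": 9, "W": 1}
print(select_output((1, 1), (3, 3), congestion))  # -> N
```

Restricting the choice to minimal ports keeps routes short, while the congestion-based tie-break steers traffic around hot spots; the input-selection half of the router would apply the same idea when arbitrating among input ports.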
Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented to use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve the memory utilization and reduce both memory and network latencies. Three Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density as compared to traditional 2D ICs. In addition, combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve the performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
Abstract:
This thesis is research about the recent complex spatial changes in Namibia and Tanzania and local communities' capacity to cope with, adapt to and transform the unpredictability associated with these processes. I scrutinise the concept of resilience and its potential application to explaining the development of local communities in Southern Africa when facing various social, economic and environmental changes. My research is based on three distinct but overlapping research questions: What are the main spatial changes and their impact on the study areas in Namibia and Tanzania? What are the adaptation, transformation and resilience processes of the studied local communities in Namibia and Tanzania? How are innovation systems developed, and what is their impact on the resilience of the studied local communities in Namibia and Tanzania? I use four ethnographic case studies concerning environmental change, global tourism and innovation system development in Namibia and Tanzania, as well as mixed-methodological approaches, to study these issues. The results of my empirical investigation demonstrate that the spatial changes in the localities within Namibia and Tanzania are unique, loose assemblages, a result of the complex, multisided, relational and evolutional development of human and non-human elements that do not necessarily have linear causalities. Several changes co-exist and are interconnected, though uncertain and unstructured, and, together with the multiple stressors related to poverty, have made communities more vulnerable to different changes. The communities' adaptation and transformation measures have been mostly reactive, based on contingency and post hoc learning. Despite various anticipation techniques, coping measures, adaptive learning and self-organisation processes occurring in the localities, the local communities are constrained by their uneven power relationships within the larger assemblages.
Thus, communities' own opportunities to increase their resilience are limited without changing the relations in these multiform entities. Therefore, larger cooperation models, such as an innovation system based on the interactions of different actors, are needed to foster cooperation; these require collaboration among and input from a diverse set of stakeholders to combine different sources of knowledge, innovation and learning. Accordingly, both Namibia and Tanzania are developing an innovation system as their key policy to foster transformation towards knowledge-based societies. Finally, the development of an innovation system needs novel bottom-up approaches to increase the resilience of local communities and to embed the system in those communities. Therefore, innovation policies in Namibia have emphasised the role of indigenous knowledge, and Tanzania has established the Living Lab network.
Abstract:
Even though research on innovation in services has expanded remarkably, especially during the past two decades, there is still a need to increase understanding of the special characteristics of service innovation. In addition to studying innovation in service companies and industries, research has also recently focused more on services in innovation, as the significance of so-called knowledge-intensive business services (KIBS) for the competitive edge of their clients, other companies, regions and even nations has been proved in several previous studies. This study focuses on technology-based KIBS firms, and the technology and engineering consulting (TEC) sector in particular. These firms have multiple roles in innovation systems, and thus there is also a need for in-depth studies that increase knowledge about the types and dimensions of service innovations as well as the underlying mechanisms and procedures which make the innovations successful. The main aim of this study is to generate new knowledge in the fragmented research field of service innovation management by recognizing the different types of innovations in TEC services and some of the enablers of and barriers to innovation capacity in the field, especially from the knowledge management perspective. The study also aims to shed light on some of the existing routines and new constructions needed for enhancing service innovation and knowledge processing activities in KIBS companies of the TEC sector. The main sources of data in this research include literature reviews and public data sources, and a qualitative research approach with exploratory case studies conducted with the help of interviews at technology consulting companies in Singapore in 2006. These complement the qualitative interview data gathered previously in Finland during a larger research project in the years 2004-2005. The data is also supplemented by a survey conducted in Singapore.
The respondents to the survey by Tan (2007) were technology consulting companies operating in the Singapore region. The purpose of the quantitative part of the study was to validate and further examine specific aspects, such as the influence of knowledge management activities on innovativeness and the different types of service innovations in which the technology consultancies are involved. Singapore is known as a South-East Asian knowledge hub and is thus a significant research area where several multinational knowledge-intensive service firms operate. Typically, the service innovations identified in the studied TEC firms were formed by several dimensions of innovation. In addition to technological aspects, innovations were, for instance, related to new client interfaces and service delivery processes. The main enablers of and barriers to innovation seem to be partly similar in Singaporean firms as compared to the earlier study of Finnish TEC firms. The empirical studies also brought forth the significance of various sources of knowledge and knowledge processing activities as the main driving forces of service innovation in technology-related KIBS firms. A framework was also developed to study the effect of knowledge processing capabilities, as well as some moderators, on the innovativeness of TEC firms. Especially efficient knowledge acquisition and environmental dynamism seem to influence the innovativeness of TEC firms positively. The results of the study also contribute to the present service innovation literature by focusing more on 'innovation within KIBS' rather than 'innovation through KIBS', which has been the typical viewpoint stressed in the previous literature. Additionally, the study provides several possibilities for further research.
Abstract:
Systems suppliers are focal actors in mechanical engineering supply chains, positioned between general contractors and component suppliers. This research concentrates on the systems suppliers' competitive flexibility, a competitive advantage that the systems supplier gains from independence from the competitive forces of the market. The aim is to study the roles that power, dependence relations, social capital, and interorganizational learning have in competitive flexibility. Research on this particular theme is scarce thus far. The research method applied here is the inductive multiple case study. Interviews from four case companies were used as the main source of the qualitative data. The literature review presents previous literature on subcontracting, supply chain flexibility, supply chain relationships, social capital and interorganizational learning. The results of this study are seven propositions and, consequently, a model of the effects that the dominance of sales by a few customers, the power of competitors, the significance of the manufactured system in the end product, professionalism in procurement and the significance of brand products in the business have on competitive flexibility. These relationships are moderated by either social capital or interorganizational learning. The main results obtained from this study revolve around social capital and interorganizational learning, which have beneficial effects on systems suppliers' competitive flexibility by moderating the effects of the other constructs of the model. Further research on this topic should include quantitative research to establish the extent to which the results can be reliably generalized. Each construct of the model also offers a possible focus for more thorough research.
Abstract:
Especially in global enterprises, key data is fragmented across multiple Enterprise Resource Planning (ERP) systems. Thus the data is inconsistent, fragmented and redundant across the various systems. Master Data Management (MDM) is a concept which creates cross-references between customers, suppliers and business units, and enables corporate hierarchies and structures. The overall goal of MDM is the ability to create an enterprise-wide consistent data model, which enables analyzing and reporting customer and supplier data. The goal of the study was to define the properties and success factors of a master data system. The theoretical background was based on literature, and the case consisted of enterprise-specific needs and demands. The theoretical part presents the concept, background and principles of MDM, and then the phases of a system planning and implementation project. The case consists of the background, a definition of the as-is situation, a definition of the project and its evaluation criteria, and concludes with the key results of the thesis. The concluding chapter combines common principles with the results of the case. The case part ended up dividing the important factors of the system into success factors, technical requirements and business benefits. To justify the project and find funding for it, the business benefits have to be defined and their realization has to be monitored. The thesis identified six success factors for the MDM system: a well-defined business case; data management and monitoring; data models and structures defined and maintained; customer and supplier data governance, delivery and quality; commitment; and continuous communication with the business. Technical requirements emerged several times during the thesis and therefore cannot be ignored in the project. The Conclusions chapter goes through these factors on a general level. The success factors and technical requirements are related to the essentials of MDM: governance, action and quality.
This chapter could be used as guidance in a master data management project.
Abstract:
Laser additive manufacturing (LAM), also known as 3D printing, is a powder bed fusion (PBF) type of additive manufacturing (AM) technology used to manufacture metal parts layer by layer with the assistance of a laser beam. The development of the technology from building just prototype parts to functional parts is due to design flexibility, as well as the possibility to manufacture tailored and optimised components in terms of performance and the strength-to-weight ratio of final parts. The study of energy and raw material consumption in LAM is essential, as it might facilitate the adoption and usage of the technique in manufacturing industries. The objective of this thesis was to find the impact of LAM on environmental and economic aspects and to conduct a life cycle inventory of CNC machining and LAM in terms of energy and raw material consumption in the production phase. The literature overview in this thesis includes sustainability issues in manufacturing industries, with a focus on environmental and economic aspects. Life cycle assessment and its applicability in the manufacturing industry were also studied. The UPLCI-CO2PE! Initiative was identified as the most widely applied existing methodology for conducting LCI analysis in a discrete manufacturing process like LAM. Much of the reviewed literature had focused on PBF of polymeric materials, and only a few studies had considered metallic materials. The studies that included metallic materials had only measured the input and output energy or materials of the process and compared different AM systems, without comparing to any competing process. Nor did any include the effect of process variation when building metallic parts with LAM. Experimental tests were carried out in this thesis to make dissimilar samples with CNC machining and LAM. The test samples were designed to include part complexity and weight reductions. A PUMA 2500Y lathe machine was used for the CNC machining, whereas a modified research machine representing the EOSINT M-series was used for the LAM.
The raw materials used for making the test pieces were stainless steel 316L bar (CNC-machined parts) and stainless steel 316L powder (LAM-built parts). An analysis of the power, time, and energy consumed in each of the manufacturing processes in the production phase showed that LAM uses more energy than CNC machining. The high energy consumption was a result of the long duration of production. Energy consumption profiles in CNC machining showed fluctuations between high and low power ranges, whereas LAM energy usage within a specific mode (standby, heating, process, sawing) remained relatively constant throughout production. CNC machining was limited in terms of manufacturing freedom, as it was not possible to manufacture all the designed samples by machining, and the one that was possible required a large amount of material to be removed as waste. The planning phase in LAM was shorter than in CNC machining, as the latter required many preparation steps. Specific energy consumption (SEC) was estimated for LAM based on the practical results and an assumed platform utilisation. The estimated platform utilisation showed that SEC could be reduced by placing more parts in one build than in the empirical results of this thesis (six parts).
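The effect of platform utilisation on specific energy consumption can be sketched as follows: the fixed per-build overhead (heating, standby, sawing) is spread over more parts when the platform is fuller, so energy per kilogram of built material drops. All numbers below are made-up assumptions for illustration, not measurements from the thesis.

```python
def sec_kwh_per_kg(fixed_energy_kwh, energy_per_part_kwh, part_mass_kg, n_parts):
    """SEC = total build energy / total deposited mass.

    fixed_energy_kwh: per-build overhead (heating, standby, sawing)
    energy_per_part_kwh: marginal energy per part (laser exposure)
    """
    total_energy = fixed_energy_kwh + n_parts * energy_per_part_kwh
    total_mass = n_parts * part_mass_kg
    return total_energy / total_mass

# Six parts per build, as in the thesis's empirical setup, versus a
# hypothetical fuller platform with 24 parts:
print(round(sec_kwh_per_kg(20.0, 5.0, 0.5, 6), 2))   # -> 16.67
print(round(sec_kwh_per_kg(20.0, 5.0, 0.5, 24), 2))  # -> 11.67
```

The marginal energy per part is unchanged; only the amortisation of the fixed overhead improves, which is why the SEC estimate falls with higher platform utilisation.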
Abstract:
The human striatum is a heterogeneous structure representing a major part of the dopamine (DA) system's basal ganglia input and output. Positron emission tomography (PET) is a powerful tool for imaging DA neurotransmission. However, PET measurements suffer from bias caused by low spatial resolution, especially when imaging small, D2/3-rich structures such as the ventral striatum (VST). The brain-dedicated high-resolution PET scanner ECAT HRRT (Siemens Medical Solutions, Knoxville, TN, USA) has superior resolution capabilities compared to its predecessors. In the quantification of striatal D2/3 binding, the highly selective in vivo D2/3 antagonist [11C]raclopride is recognized as a well-validated tracer. The aim of this thesis was to use a traditional test-retest setting to evaluate the feasibility of utilizing the HRRT scanner for exploring not only small brain regions such as the VST but also low-density D2/3 areas such as the cortex. It was demonstrated that the measurement of striatal D2/3 binding was very reliable, even when studying small brain structures or prolonging the scanning interval. Furthermore, the cortical test-retest parameters displayed good to moderate reproducibility. For the first time in vivo, it was revealed that there are significant divergent rostrocaudal gradients of [11C]raclopride binding in striatal subregions. These results indicate that high-resolution [11C]raclopride PET is very reliable, and its improved sensitivity means that it should be possible to detect the often very subtle changes occurring in DA transmission. Another major advantage is the possibility of measuring striatal and cortical areas simultaneously. The divergent gradients of D2/3 binding may have functional significance, and the average binding distribution could serve as the basis for a future database. Key words: dopamine, PET, HRRT, [11C]raclopride, striatum, VST, gradients, test-retest.
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme, in which an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the changes needed to the model code are minimal: only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km2 and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of organic matter was computationally eliminated to obtain the TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time sparsity problem, and with DA this will help, for instance, in better understanding environmental hazard variables. We found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, together with the ensemble size limit for performance, leads to the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
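The file-based, non-intrusive coupling idea can be sketched as a control loop that alternates a forecast step and an analysis step, exchanging state only through files. The toy "model" and "analysis" below are illustrative stand-ins; the real setup runs separate executables (e.g. COHERENS and a VEnKF procedure) and only the file exchange pattern is the point.

```python
import json, os, tempfile

def run_model(state, dt):
    """Stand-in forecast step; in practice this is a separate executable."""
    return [x + dt for x in state]

def analysis(forecast, observation):
    """Stand-in DA update: nudge the forecast halfway to the observation."""
    return [(f + o) / 2 for f, o in zip(forecast, observation)]

def assimilation_cycle(state, observations, dt=1.0, workdir=None):
    """Alternate forecast and analysis, communicating only through a file,
    so neither side needs to know the other's internals or language."""
    workdir = workdir or tempfile.mkdtemp()
    path = os.path.join(workdir, "exchange.json")
    for obs in observations:
        # 1) model writes its forecast to the exchange file ...
        with open(path, "w") as f:
            json.dump(run_model(state, dt), f)
        # 2) ... the DA procedure reads it back and produces the analysis
        with open(path) as f:
            forecast = json.load(f)
        state = analysis(forecast, obs)
    return state

print(assimilation_cycle([0.0, 0.0], [[2.0, 2.0], [4.0, 4.0]]))  # -> [3.25, 3.25]
```

The per-cycle file round-trip is exactly the overhead mentioned above: both sides are re-initialized at every assimilation cycle, which is the price paid for keeping the model code untouched.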
Abstract:
Healthcare today makes use of the possibilities of information technology (IT) to improve the quality of care, reduce care-related costs, and simplify and clarify physicians' workflows. Information systems, which form the core of every IT solution, must be developed to meet numerous requirements, one of which is the ability to integrate seamlessly with other information systems. System integration nevertheless remains a challenging task, even though several standards have been developed for it. This thesis describes the interfacing solution of a newly developed medical information system. The requirements set for such an application are discussed, and the way in which these requirements are fulfilled is presented. The interfacing solution is divided into two parts: the information system interface and the interfacing engine. The former comprises the basic functionality needed to receive data from and send data to other systems, while the latter provides support for the standards used in the production environment. The design of both parts is presented thoroughly in this thesis. The problem was solved by means of a modular and generic design. This approach is shown to be a robust and flexible solution that can address a wide range of requirements set for an interfacing solution. Furthermore, it is shown how, thanks to its flexibility, the solution can easily be adapted to requirements that have not been identified in advance, thereby also establishing a foundation for future needs.
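The two-part split described above, a generic system interface plus an interfacing engine that plugs in support for concrete standards, can be sketched as follows. The class names, the registration mechanism and the toy pipe-delimited format are illustrative assumptions, not the thesis's actual design.

```python
class MessageHandler:
    """Base class for support of one message standard."""
    def parse(self, raw: str) -> dict:
        raise NotImplementedError

class PipeDelimitedHandler(MessageHandler):
    """Toy handler for a pipe-delimited message format."""
    def parse(self, raw: str) -> dict:
        kind, _, payload = raw.partition("|")
        return {"type": kind, "payload": payload}

class InterfacingEngine:
    """Generic receiving side: routes each incoming message to the
    handler registered for its format, so new standards are added by
    registering a new handler rather than changing the core."""
    def __init__(self):
        self._handlers = {}

    def register(self, fmt: str, handler: MessageHandler):
        self._handlers[fmt] = handler

    def receive(self, fmt: str, raw: str) -> dict:
        return self._handlers[fmt].parse(raw)

engine = InterfacingEngine()
engine.register("pipe", PipeDelimitedHandler())
print(engine.receive("pipe", "ADT|patient-admitted"))
```

This is the flexibility argument in miniature: a requirement that appears later (a new standard) is met by one new handler class, with the generic core left untouched.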
Abstract:
This thesis defines a simulation model of a backup system, i.e., a backup model. The operation of the backup system is optimized with the help of this backup model. The goal of the optimization is to improve the efficiency of the backup system. Improvement is sought through maximal utilization of the existing resources of the backup system. The backup model is optimized by means of an evolutionary algorithm. The optimization has several mutually conflicting objectives. The multi-objective optimization problem is converted into a single-objective optimization problem by forming an objective function with the weighted sum method. In parallel with this method, Pareto optimization is also used: the search for points on the Pareto-optimal front is steered close to the optimum point of the weighted sum method. The implementation of the evolutionary algorithm exploits problem-specific knowledge of backup systems. The result of the work is a simulation and optimization tool for backup systems. The simulation tool is used to assess the functioning of the current backup system, and optimization is used to make its operation more efficient. The tool can also be used in the design of new backup systems and in the expansion of existing ones.
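The weighted sum scalarisation and the Pareto dominance test used alongside it can be sketched in a few lines. The objectives (backup duration, storage used), candidate values and weights below are illustrative assumptions, not figures from the thesis.

```python
def weighted_sum(objectives, weights):
    """Scalarise conflicting objectives f_i with weights w_i >= 0,
    turning a multi-objective problem into a single-objective one."""
    assert len(objectives) == len(weights)
    return sum(w * f for w, f in zip(weights, objectives))

def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Candidate backup schedules scored on (duration in hours, storage in TB):
candidates = [(4.0, 10.0), (6.0, 6.0), (5.0, 9.0)]
weights = (0.7, 0.3)

best = min(candidates, key=lambda f: weighted_sum(f, weights))
print(best)                                # -> (4.0, 10.0)
print(dominates((4.0, 9.0), (5.0, 9.0)))  # -> True
```

An evolutionary algorithm would use the scalar value as (part of) its fitness, while the dominance test identifies the non-dominated front; steering the front search towards the weighted-sum optimum combines the two, as the abstract describes.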
Abstract:
The aim of this work was to develop the sizing of the components of a ground source heat pump system. The work was done for a company called Alufer Oy, which has already spent three years on the product development of a ground source heat pump system. The system will be designed to be as efficient and flexible as possible. The starting point of the design is that the heating system is a so-called low-temperature system, which in practice is often implemented as underfloor heating. First, the work explains what ground source heat is and describes the most common installation methods for the heat collection piping of a ground source heat pump system. Currently, the heat collection piping is installed either horizontally (in the ground or in water) or vertically (in a borehole). Next, the work surveys the ground source heat pump market in Finland and examines the products and technology of the three largest manufacturers, Geopro Systems, Suomen Lämpöpumpputekniikka and Ekowell. The markets of a few European countries are also examined. In the sizing system, the analysis starts from the heating power demand of a new building and the power required for heating domestic hot water. The components of the heat pump system were determined on the basis of the required heating energy. The ground source heat pump system consists of the following main components: evaporator, compressor, condenser and expansion valve. The sizing of the evaporator's power takes into account the material properties of the fluid circulating in the heat collection piping, the mass flow, and the temperature difference between the fluid outlet and inlet of the evaporator. The compressor's power is determined from the log p-h diagram of the selected refrigerant (R407C) or determined theoretically with the compressor manufacturers' own selection programs. The condenser's power is defined as the sum of the evaporator and compressor powers, which at the same time determines the heating power demand of the new building. Finally, the work discusses development possibilities for the ground source heat pump system. The options considered include a superheater, a subcooler and a storage tank, which can considerably improve the coefficient of performance of the heat pump.
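The sizing logic described above reduces to two small formulas: evaporator power from the brine mass flow, specific heat and temperature difference, and condenser (heating) power as the sum of evaporator and compressor powers. The numeric values below are made-up example figures, not from the thesis.

```python
def evaporator_power_kw(mass_flow_kg_s, cp_kj_per_kg_k, delta_t_k):
    """Q_evap = m_dot * c_p * dT on the brine (heat collection) side."""
    return mass_flow_kg_s * cp_kj_per_kg_k * delta_t_k

def condenser_power_kw(q_evap_kw, compressor_kw):
    """Heating power delivered to the building: Q_cond = Q_evap + P_comp."""
    return q_evap_kw + compressor_kw

# Example: 0.5 kg/s brine flow, c_p = 3.8 kJ/(kg*K), 3 K temperature drop
# across the evaporator, and 2 kW of compressor input power.
q_evap = evaporator_power_kw(0.5, 3.8, 3.0)
q_cond = condenser_power_kw(q_evap, 2.0)
print(round(q_evap, 2), round(q_cond, 2))  # -> 5.7 7.7 (kW)
print(round(q_cond / 2.0, 2))              # -> 3.85 (coefficient of performance)
```

The last line shows why a superheater, subcooler or storage tank matters: anything that raises the heat delivered per unit of compressor work raises this coefficient of performance.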
Abstract:
The learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves the prediction of an ordering of the data points rather than the prediction of a single numerical value, as in the case of regression, or of a class label, as in the case of classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain.
Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training, but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
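The core idea of casting preference learning as a regularized least-squares problem can be illustrated with a minimal sketch. This is not the thesis's actual algorithm, only an assumed linear, pairwise variant: each preferred pair of items is turned into a feature-difference example, and a ridge-regularized least-squares fit produces a scoring function whose scores induce the ranking. The function names and the toy data are illustrative inventions.

```python
import numpy as np

def fit_pairwise_rls(X, y, lam=1.0):
    """Fit a linear scoring function by regularized least squares on
    pairwise differences: for each pair (i, j) with y[i] > y[j], we ask
    that w @ (X[i] - X[j]) approximate y[i] - y[j]."""
    n, d = X.shape
    diffs, targets = [], []
    for i in range(n):
        for j in range(n):
            if y[i] > y[j]:
                diffs.append(X[i] - X[j])
                targets.append(y[i] - y[j])
    D = np.asarray(diffs)        # one row per preference pair
    t = np.asarray(targets)
    # Ridge-regularized normal equations: (D^T D + lam * I) w = D^T t
    w = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ t)
    return w

def rank(X, w):
    """Return item indices ordered by decreasing predicted score."""
    return np.argsort(-(X @ w))

# Toy data: relevance is carried entirely by the first feature.
X = np.array([[1.0, 0.0], [3.0, 0.0], [2.0, 0.0]])
y = X[:, 0]
w = fit_pairwise_rls(X, y, lam=1.0)
print(rank(X, w))  # most relevant item (index 1) comes first
```

Enumerating all pairs is quadratic in the number of items, which is exactly the bottleneck that motivates the linear-time and sparse variants described above.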
Abstract:
An optimization tool has been developed to help companies optimize their production cycles and thus improve their overall supply chain management processes. The application combines the functionality of traditional APS (Advanced Planning System) and ARP (Automatic Replenishment Program) systems into one optimization run. A qualitative study was organized to investigate opportunities to expand the product's market base. Twelve personal interviews were conducted and the results were collected in industry-specific production planning analyses. Five process industries were analyzed to identify the product's suitability to each industry sector and the most important product development areas. Based on the research, the paper and plastic film industries are currently the most promising industry sectors. Success in other industry sectors would require some product enhancements, including the capability to optimize multiple sequential and parallel production cycles, to handle the sequencing of complex finishing operations, and to include master planning capabilities that support overall supply chain optimization. In product sales and marketing, the key to success is finding and reaching the people directly involved with the problems that the optimization tool can help solve.
Abstract:
The dissertation seeks to explore how to improve users' adoption of mobile learning in current education systems. Considering the differences between basic and tertiary education in China, the research consists of two separate but interrelated parts, which focus on the use of mobile learning in basic and tertiary education contexts, respectively. In the dissertation, two adoption frameworks are developed based on previous studies and then evaluated using different methodologies. Concerning mobile learning use in basic education settings, case study methodology is utilized: Noah Ltd., a leading provider of mobile learning services and products in China, is investigated, and multiple sources of evidence are collected to test the framework. Regarding mobile learning adoption in tertiary education contexts, survey research methodology is utilized. Based on 209 usable responses, the framework is evaluated using structural equation modelling. Four proposed determinants of intention to use are evaluated: perceived ease of use, perceived near-term usefulness, perceived long-term usefulness, and personal innovativeness. The dissertation provides a number of new insights for both researchers and practitioners. In particular, it specifies a practical solution for dealing with the disruptive effects of mobile learning in basic education, effects that have kept mobile learning out of schools in regions such as Europe. A list of new and innovative mobile learning technologies is systematically introduced as well. Further, the research identifies several key factors driving mobile learning adoption in tertiary education settings. In theory, the dissertation suggests that because the technology acceptance model originated in studies of work-oriented innovations with employees, it is not necessarily the best model for studying educational innovations.
The results also suggest that perceived long-term usefulness should be as important for educational systems as perceived usefulness is for utilitarian systems and perceived enjoyment is for hedonic systems. A classification based on the nature of a system's purpose (utilitarian, hedonic, or educational) would contribute to a better understanding of the essence of IT innovation adoption.