55 results for Adaptive learning, Sticky information, Inflation dynamics, Nonlinearities


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to analyse activity-based costing (ABC) and possible modified versions of it in the engineering design context. Design engineers need cost information at their decision-making level, and the cost information should also have a strong future orientation. These demands are high because traditional management accounting has concentrated on the direct actual costs of products. However, cost accounting has progressed: ABC was introduced in the late 1980s and adopted widely by companies in the 1990s. ABC has been a success, but it has also attracted criticism. In some cases ambitious ABC systems have become too complex to build, use and update. This study can be called an action-oriented case study with some normative features. In this thesis theoretical concepts are assessed and allowed to unfold gradually through interaction with data from three cases. The theoretical starting points are ABC and the theory of the engineering design process (chapter 2). Concepts and research results from these theoretical approaches are summarized in two hypotheses (chapter 2.3). The hypotheses are analysed with two cases (chapter 3). After the two case analyses, the ABC part is extended to cover other modern cost accounting methods as well, e.g. process costing and feature costing (chapter 4.1). The ideas from this second theoretical part are operationalized with the third case (chapter 4.2). The knowledge from the theory and the three cases is summarized in the created framework (chapter 4.3). With this framework it is possible to analyse ABC and its modifications in the engineering design context. The framework collects the factors that guide the choice of the costing method to be used in engineering design, and it also illuminates the contents of various ABC-related costing methods. However, the framework needs further testing.
On the basis of the three cases it can be said that ABC should be used cautiously when formulating cost information for engineering design. It is suitable when manufacturing can be considered simple, when the design engineers are not cost conscious, and in the beginning of the design process when doing adaptive or variant design. If the design engineers need cost information for embodiment or detailed design, if manufacturing can be considered complex, or when the design engineers are cost conscious, ABC always has to be evaluated critically.
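The ABC mechanics that the thesis analyses can be illustrated with a small calculation: each activity's rate is its cost pool divided by total driver volume, and a product's allocated overhead is the sum of rates times the driver quantities it consumes. The activity pools, driver volumes and products below are hypothetical, not taken from the case companies.

```python
# Minimal activity-based costing sketch (all figures hypothetical).
# activity rate = cost pool / total driver volume
# product overhead = sum over activities of (rate * driver quantity consumed)

activity_pools = {            # annual cost of each activity (EUR)
    "machine setups": 20000.0,
    "design changes": 15000.0,
    "inspections":    10000.0,
}
total_driver_volume = {       # total driver units per activity per year
    "machine setups": 100,    # number of setups
    "design changes": 50,     # number of change orders
    "inspections":    200,    # number of inspections
}
products = {                  # driver units consumed by each product
    "valve A": {"machine setups": 10, "design changes": 5,  "inspections": 20},
    "valve B": {"machine setups": 2,  "design changes": 20, "inspections": 10},
}

def abc_overhead(product: str) -> float:
    """Overhead allocated to one product under ABC."""
    return sum(
        activity_pools[act] / total_driver_volume[act] * qty
        for act, qty in products[product].items()
    )

for name in products:
    print(f"{name}: {abc_overhead(name):.2f} EUR")   # rates: 200, 300, 50 EUR/unit
```

The design-relevant point of the calculation is that a design decision that reduces a driver quantity (e.g. fewer change orders for valve B) directly reduces the allocated cost, which is the kind of forward-looking cost information the thesis argues design engineers need.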

Relevance:

30.00%

Publisher:

Abstract:

This thesis supplements the systematic approach to competitive intelligence and competitor analysis by introducing an information-processing perspective on management of the competitive environment and the competitors therein. The cognitive questions connected to the intelligence process, as well as the means that organizational actors use in sharing information, are discussed. The ultimate aim has been to deepen knowledge of the different intraorganizational processes that are used in a corporate organization to manage and exploit the vast amount of competitor information that is received from the environment. Competitor information and competitive knowledge management is examined as a process in which organizational actors identify and perceive the competitive environment by using cognitive simplification, make interpretations resulting in learning, and finally utilize competitor information and competitive knowledge in their work processes. The sharing of competitive information and competitive knowledge is facilitated by intraorganizational networks that evolve as a means of developing a shared, organizational-level knowledge structure and of ensuring that the right information is in the right place at the right time. This thesis approaches competitor information and competitive knowledge management both theoretically and empirically. Based on the conceptual framework developed by theoretical elaboration, further understanding of the studied phenomena is sought through an empirical study. The empirical research was carried out in a multinationally operating forest industry company. This thesis makes some preliminary suggestions for improving the competitive intelligence process.
It is concluded that managing competitor information and competitive knowledge is not simply a question of managing information flow or improving the sophistication of competitor analysis; the crucial question to be solved is rather how to improve the cognitive capabilities connected to identifying and interpreting the competitive environment, and how to increase learning. It is claimed that competitive intelligence cannot be treated like an organizational function or assigned solely to a specialized intelligence unit.

Relevance:

30.00%

Publisher:

Abstract:

Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, the determination of the corrosion chemistry and the lifetime estimation are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system utilizes the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used for minimizing corrosion risks in the design of fluidized bed boilers.
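The neural-network half of the hybrid predictor can be sketched as a one-hidden-layer regressor trained by gradient descent. This is only an illustrative stand-in for the thesis's model: the features (chlorine, alkali, steel temperature), the synthetic corrosion-rate target and the network size are all invented, and the fuzzy-logic preprocessing is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fuel and process" features, scaled to [0, 1] (hypothetical):
# columns: chlorine content, alkali content, steel surface temperature.
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Synthetic corrosion rate: rises with Cl, alkali and temperature, with a
# nonlinear interaction term -- a stand-in for measured probe data.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 0] * X[:, 2])[:, None]

# One-hidden-layer network with tanh units, trained by plain gradient descent.
n_hidden = 8
W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

losses, lr = [], 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss.
    g_out = 2 * err / len(X)
    gW2 = H.T @ g_out;  gb2 = g_out.sum(0)
    g_hid = (g_out @ W2.T) * (1 - H ** 2)  # tanh derivative
    gW1 = X.T @ g_hid;  gb1 = g_hid.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"training MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The point made in the abstract, that such a model "improves its performance as the amount of data increases", corresponds here to the training set growing, whereas the run above only shows the fitting step itself.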

Relevance:

30.00%

Publisher:

Abstract:

The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential to be used to protect products and processes from imitation, to limit competitors' freedom-to-operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how it is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits.
Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investments as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries such as ICT that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with the focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments concerning software and business-method patents are investigated and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e. patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change that the patent system is facing, and of how these challenges are reflected in standard setting.

Relevance:

30.00%

Publisher:

Abstract:

Building and sustaining competitive advantage through the creation of market imperfections is challenging in a constantly changing business environment, particularly since the sources of such advantages are increasingly knowledge-based. Facilitated by improved networks and communication, knowledge spills over to competitors more easily than before, thus creating an appropriability problem: the inability of an innovating firm to utilize its innovations commercially. Consequently, as the importance of intellectual assets increases, their protection also calls for new approaches. Companies have various means of protection at their disposal, and by taking advantage of them they can make intangibles more non-transferable and prevent, or at least delay, imitation of their most crucial intellectual assets. However, creating barriers against imitation has another side to it, and the transfer of knowledge in situations requiring knowledge sharing may be unintentionally obstructed. The aim of this thesis is to increase understanding of how firms can balance knowledge protection and sharing so as to benefit most from their knowledge assets. Thus, knowledge protection is approached through an examination of the appropriability regime of a firm, i.e., the combination of available and effective means of protecting innovations, their profitability, and the increased rents due to R&D. A further aim is to provide a broader understanding of the formation and structure of the appropriability regime. The study consists of two parts. The first part introduces the research topic and the overall results of the study, and the second part consists of six complementary research publications covering various appropriability issues. The thesis contributes to the existing literature in several ways.
Although there is a wide range of prior research on appropriability issues, much of it is restricted either to the study of individual appropriability mechanisms or to comparing certain features of them. Here these approaches are combined, and the relevant theoretical concepts are clarified and developed. In addition, the thesis provides empirical evidence of the formation of the appropriability regime, which is consequently presented as an adaptive process. Thus, a framework is provided that better corresponds to the complex reality of the current business environment.

Relevance:

30.00%

Publisher:

Abstract:

The main subject of this master's thesis was predicting the diffusion of innovations. The prediction was done in a special case: a product has been available in some countries, and based on its diffusion in those countries a prediction is made for other countries. The prediction was based on finding similar countries with a Self-Organizing Map (SOM), using country parameters. The parameters included various economic and social key figures. The SOM was optimised for different products using two different methods: (a) by adding diffusion information of the products to the country parameters, and (b) by weighting the country parameters based on their importance for the diffusion of different products. A novel method using Differential Evolution (DE) was developed to solve the latter, highly non-linear optimisation problem. The results were fairly good, and the prediction method seems to rest on a solid theoretical foundation. The results based on country data were good. By contrast, optimisation for different products did not generally offer a clear benefit, although in some cases the improvement was clearly noticeable. The weights found for the country parameters with the developed SOM optimisation method were interesting, and most of them could be explained by properties of the products.
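Method (b) above can be sketched in miniature: search, with DE, for per-parameter weights that minimise the error of predicting a country's diffusion from its most similar country. This is only an illustration of the idea, not the thesis's method: the SOM is replaced by a plain weighted nearest-neighbour search, the DE variant is a minimal DE/rand/1/bin loop, and the six "countries", their two parameters and their diffusion levels are invented (column 0 drives diffusion, column 1 is noise).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical country parameters: column 0 is informative, column 1 is noise.
params = np.array([[0., 5.], [1., 0.], [2., 4.], [3., 1.], [4., 3.], [5., 2.]])
diffusion = params[:, 0].copy()          # "known" diffusion level per country

def loo_error(w):
    """Leave-one-out error of weighted 1-NN prediction of diffusion."""
    err = 0.0
    for i in range(len(params)):
        d = np.sqrt((((params - params[i]) * w) ** 2).sum(axis=1))
        d[i] = np.inf                    # exclude the country itself
        err += abs(diffusion[d.argmin()] - diffusion[i])
    return err / len(params)

def differential_evolution(f, dim, pop=20, gens=60, F=0.7, CR=0.9):
    """Minimal DE/rand/1/bin over the box [0, 1]^dim."""
    P = rng.uniform(0, 1, (pop, dim))
    fit = np.array([f(x) for x in P])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = P[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0, 1)
            trial = np.where(rng.uniform(size=dim) < CR, mutant, P[i])
            ft = f(trial)
            if ft <= fit[i]:             # greedy one-to-one selection
                P[i], fit[i] = trial, ft
    return P[fit.argmin()], float(fit.min())

baseline = loo_error(np.ones(2))         # unweighted country parameters
weights, best = differential_evolution(loo_error, dim=2)
print(f"unweighted error {baseline:.2f} -> weighted error {best:.2f}")
```

On this toy data the optimiser should drive the weight of the noise column towards zero, mirroring the thesis's finding that the learned weights reflect which country parameters matter for a given product.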

Relevance:

30.00%

Publisher:

Abstract:

In this study we used market settlement prices of European call options on stock index futures to extract the implied probability distribution function (PDF). The method used produces a PDF of the returns of an underlying asset at the expiration date from the implied volatility smile. With this method, the assumption of a lognormal distribution (the Black-Scholes model) is tested. The market view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (the S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A deviant subjective view of the PDF can be used to form a strategy, as discussed in the last section.
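The Breeden-Litzenberger (1978) result used above states that the risk-neutral density is the discounted second derivative of the call price with respect to the strike, q(K) = e^(rT) d2C/dK2. A minimal numerical sketch follows, with Black-Scholes prices standing in for the smoothed market smile (so the recovered density should be lognormal); the spot, rate, volatility and maturity values are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

S, r, sigma, T = 100.0, 0.02, 0.2, 0.5       # illustrative market parameters
h = 0.25                                     # strike grid spacing
strikes = [40.0 + h * i for i in range(int(160 / h) + 1)]   # strikes 40 .. 200

# Breeden-Litzenberger: q(K) = exp(rT) * d2C/dK2, here by central differences.
pdf = {}
for K in strikes[1:-1]:
    c_lo, c_mid, c_hi = (bs_call(S, K + d, r, sigma, T) for d in (-h, 0.0, h))
    pdf[K] = math.exp(r * T) * (c_lo - 2.0 * c_mid + c_hi) / h ** 2

mass = sum(pdf.values()) * h                 # should integrate to ~1
mode = max(pdf, key=pdf.get)                 # peak of the implied density
print(f"total probability mass ~ {mass:.4f}, density peaks near K = {mode}")
```

With market prices instead of model prices, the same second difference applied to the smoothed smile yields the empirical implied PDF, and its deviation from the lognormal shape is exactly what the study tests.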

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, the incorporation of prior knowledge is often done by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in the algorithms efficiently.
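The fast cross-validation idea can be made concrete for plain regularized least-squares (RLS) regression: with the hat matrix H = K(K + lambda*I)^(-1), the exact leave-one-out residual at point i is (y_i - (Hy)_i) / (1 - H_ii), so all n held-out errors follow from a single matrix inversion instead of n refits. The sketch below uses a Gaussian kernel on toy data (not the thesis's NLP kernels) and checks the shortcut against naive leave-one-out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-dimensional regression data.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

def gaussian_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 0.1
K = gaussian_kernel(X, X)
H = K @ np.linalg.inv(K + lam * np.eye(len(X)))   # hat matrix of RLS

# Fast exact leave-one-out residuals from the single fit above.
fast_loo = (y - H @ y) / (1.0 - np.diag(H))

# Naive leave-one-out for comparison: refit n times without point i.
naive_loo = np.empty(len(X))
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    alpha = np.linalg.solve(K[np.ix_(keep, keep)] + lam * np.eye(keep.sum()),
                            y[keep])
    naive_loo[i] = y[i] - K[i, keep] @ alpha

print("max |fast - naive| =", float(np.abs(fast_loo - naive_loo).max()))
```

The two residual vectors agree to numerical precision, which is why the shortcut turns O(n) refits into essentially one; this is the kind of speed-up the fast cross-validation algorithm mentioned above provides.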

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to investigate the effects of information and communication technology (ICT) on school from teachers’ and students’ perspectives. The focus was on three main subject matters: on ICT use and competence, on the teacher and school community, and on the learning environment and teaching practices. The study is closely connected to the national educational policy, which has aimed strongly at supporting the implementation of ICT in pedagogical practices at all institutional levels. The phenomena were investigated using a mixed-methods approach. The qualitative data from three case studies and the quantitative data from three statistical studies were combined. In this study, mixed methods were used to investigate the complex phenomena from various stakeholders’ points of view, and to support validation by combining different perspectives in order to give a fuller and more complete picture of the phenomena. The data were used in a complementary manner. The results indicate that the technical resources for using ICT both at school and at home are very good. In general, students are capable and motivated users of new technology; these skills and attitudes are mainly based on home resources and leisure-time use. Students have the skills to use new kinds of applications and new forms of technology, and their ICT skills are wide, although not necessarily adequate; their working habits might be ineffective and even wrong. Some students have a special kind of ICT-related adaptive expertise, which develops in a beneficial interaction between school guidance and challenges and individual interest and activity. Teachers’ skills are more heterogeneous. The large majority of teachers have sufficient skills for everyday and routine working practices, but many of them still have difficulties in finding a meaningful pedagogical use for technology.
The intensive case study indicated that for the majority of teachers the intensive ICT projects offer a possibility for learning new skills and competences intertwined with their work, often also supported by external experts and a collaborative teacher community; a possibility that “ordinary” teachers usually do not have. Further, teachers’ good ICT competence helps them to adopt new pedagogical practices and to integrate ICT in a meaningful way. The genders differ in their use of and skills in ICT: males show better skills, especially in purely technical issues, also in schools and classrooms, whereas female students and younger female teachers use ICT in their ordinary practices quite naturally. With time, the technology has become less technical, and its communication and creation affordances have become stronger, easier to use, more popular and more motivating, all of which has increased female interest in the technology. There is a generation gap in ICT use and competence between teachers and students. This is apparent especially in the ICT-related pedagogical practices in the majority of schools. The new digital affordances not only replace some previous practices; the new functionalities change many of our existing conceptions, values, attitudes and practices. The very different conceptions that generations have about technology lead, in the worst case, to a digital gap in education: the technology used in school is boring and ineffective compared to ICT use outside school, and it does not provide the competence needed for using advanced technology in learning. The results indicate that in schools which have special ICT projects (“ICT pilot schools”) for improving pedagogy, these projects have led to true changes in teaching practices. Many teachers adopted student-centred and collaborative, inquiry-oriented teaching practices, as well as practices that supported students' authentic activities, independent work, knowledge building, and students' responsibility.
This is, indeed, strongly dependent on the ICT-related pedagogical competence of the teacher. However, the daily practices of some teachers still reflected a rather traditional teacher-centred approach. As a matter of fact, very few teachers ever represented solely, for example, the knowledge-building approach; teachers used various approaches or mixed them, based on the situation, the teaching and learning goals, and their pedagogical and technical competence. In general, changes towards pedagogical improvements, even in well-organised developmental projects, are slow. As a result, there are two kinds of ICT stories: successful “ICT pilot schools” with pedagogical innovations related to ICT and with school-community-level agreement about the visions and aims, and “ordinary schools”, which have no particular interest in or external support for using ICT for improvement, and in which ICT is used in a more routine way, and as a tool for individual teachers, not for the school community.

Relevance:

30.00%

Publisher:

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves the prediction of an ordering of the data points rather than the prediction of a single numerical value, as in regression, or of a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain.
Training kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
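The reduction of preference learning to a least-squares problem can be sketched in a few lines: stack the difference vectors x_i - x_j of the preferred pairs and solve a regularized least-squares problem that pushes each score difference towards 1. This is only a linear, unkernelized illustration of the idea behind regularized least-squares ranking, with invented toy data and an arbitrary regularization constant, not the thesis's algorithms.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy items with two features; true relevance = first feature minus second.
X = rng.normal(size=(30, 2))
score_true = X[:, 0] - X[:, 1]

# Preference pairs (i, j): item i preferred to item j with a clear margin.
pairs = [(i, j) for i in range(30) for j in range(30)
         if score_true[i] > score_true[j] + 0.5]

# Regularized least squares on pairwise differences: minimize
#   sum_{(i,j)} (w . (x_i - x_j) - 1)^2 + lam * ||w||^2
D = np.array([X[i] - X[j] for i, j in pairs])
lam = 1.0
w = np.linalg.solve(D.T @ D + lam * np.eye(2), D.T @ np.ones(len(pairs)))

scores = X @ w                                  # learned scoring function
violations = int(sum(scores[i] <= scores[j] for i, j in pairs))
print(f"{len(pairs)} training pairs, {violations} violated by the learned ranking")
```

Because the objective is quadratic, the pairwise formulation keeps a closed-form solution; the scalability issue raised above arises because the number of pairs grows quadratically with the number of items, which is what the linear-complexity algorithm avoids.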

Relevance:

30.00%

Publisher:

Abstract:

The aim of this bachelor's thesis is to present an overview of theories of consumer learning and, in addition, to describe practical applications relating to consumer behaviour and advertising. There are two main schools of thought concerning theories of learning. Proponents of the first see learning as purely behavioural, i.e. as the result of repetition, and thus view the individual as a "black box" whose input is a stimulus and whose output is a particular behaviour. According to proponents of the second school, learning is a cognitive process: starting from the very simplest cases, the individual processes information in order to solve his or her own problems. In practice, both theories are needed to explain learning as a phenomenon, because learning is a combination of repetition and cognitive processes. Our work shows how marketers exploit these two theories in practice in their advertising, with the aim of positioning their brand and products in the market relative to their competitors.

Relevance:

30.00%

Publisher:

Abstract:

Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health state estimation, carbon balance estimation, land-cover and land-use analysis in order to avoid forest degradation, etc. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to define estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scannings are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjustable to the local conditions of the study area at hand. All the data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there are generally no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters that are estimated, the mathematical estimation models are of a "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters that are estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from formerly measured sites, i.e. databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory.
The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
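The variable-selection step described above can be sketched with plain greedy forward selection against a validation set: starting from an empty model, repeatedly add the candidate variable that most reduces validation error, and stop when no candidate improves it enough. The synthetic "remote sensing features" below (two informative columns, three collinear combinations of them, five noise columns) and the 5% improvement threshold are illustrative assumptions, not the thesis's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic auxiliary data: informative, collinear and pure-noise variables.
n = 120
informative = rng.normal(size=(n, 2))                 # two useful predictors
collinear = (informative @ rng.normal(size=(2, 3))
             + 0.01 * rng.normal(size=(n, 3)))        # near-copies of the above
noise = rng.normal(size=(n, 5))
X = np.hstack([informative, collinear, noise])        # 10 candidate variables
y = 2.0 * informative[:, 0] - informative[:, 1] + 0.1 * rng.normal(size=n)

train, valid = np.arange(0, 80), np.arange(80, n)     # fixed split

def val_error(cols):
    """Validation MSE of a least-squares model on the given columns."""
    if not cols:
        return float(np.mean((y[valid] - y[train].mean()) ** 2))
    A = X[np.ix_(train, cols)]
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    resid = y[valid] - X[np.ix_(valid, cols)] @ coef
    return float(np.mean(resid ** 2))

selected = []
best = val_error(selected)
while True:
    candidates = [c for c in range(X.shape[1]) if c not in selected]
    errs = {c: val_error(selected + [c]) for c in candidates}
    c_best = min(errs, key=errs.get)
    if errs[c_best] >= best * 0.95:    # stop unless error drops by at least 5%
        break
    selected.append(c_best)
    best = errs[c_best]

print("selected columns:", selected, "validation MSE:", round(best, 4))
```

The improvement threshold is what keeps the model small despite the collinear candidates: once the informative subspace is covered, further variables cannot clear the bar, which is the over-fitting control the abstract calls for.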

Relevance:

30.00%

Publisher:

Abstract:

The dominant role of English as an international language, together with other globalisation trends, also affects Swedish-speaking Finland. These trends in turn affect the conditions for learning and teaching English as a foreign language: the teaching objectives, the expected pupil and teacher roles, the appropriateness of the materials, and teachers' and pupils' initial experiences of English and English-speaking countries. This study examines the conditions for learning and professional development in the Swedish-language beginner classroom of English as a foreign language. The starting situation of 351 beginners in English as a foreign language and 19 of their teachers is described and analysed. The results suggest that, for many young pupils, English is becoming a second language rather than a traditional foreign language. These pupils also have good opportunities to learn English outside school. This was not, however, the situation for all pupils, which indicates considerable heterogeneity and even regional variation in the Finland-Swedish classroom of English as a foreign language. The teacher results suggest that some teachers have managed to tackle the conditions they face in a constructive way. Other teachers express frustration with their work situation, the curriculum, the teaching materials and other actors of importance for the school environment. The study shows that the conditions for learning and teaching English as a foreign language vary within Swedish-speaking Finland. To support pupils' and teachers' development, it is proposed that the dialogue between actors at different levels of society should be improved and systematised.

Relevance:

30.00%

Publisher:

Abstract:

The flow of information within modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual’s abilities to handle text or speech input. For the majority of us it presents no problems, but there are some individuals who would benefit from other means of conveying information, e.g. signed information flow. During the last decades the new results from various disciplines have all suggested towards the common background and processing for sign and speech and this was one of the key issues that I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research and that is why I wanted to design analogous test batteries for widely used speech perception tests for signers – to find out whether the results for signers would be the same as in speakers’ perception tests. One of the key findings within biology – and more precisely its effects on speech and communication research – is the mirror neuron system. That finding has enabled us to form new theories about evolution of communication, and it all seems to converge on the hypothesis that all communication has a common core within humans. In this thesis speech and sign are discussed as equal and analogical counterparts of communication and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries. Furthermore, both production and perception of speech and sign are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself from very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the apparatus has been changed. 
Normal production is studied in both speech and sign, and the effects of changed articulation are studied for speech. Both of these studies use carrier sentences. In addition, sign production is studied by giving the informants the possibility of spontaneous signing. The production data from the signing informants also serves as the basis for the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied through the informants' forced-choice answers in identification and discrimination tasks; these answers were then compared across language modalities. Three informant groups participated in the sign perception tests: native signers, sign language interpreters, and Finnish adults with no knowledge of any signed language. This made it possible to investigate which characteristics in the results were due to the language per se and which were due to the change of modality itself. As the analogous test batteries yielded similar results across the different informant groups, some common threads could be observed. From very early on in the acquisition of speech and sign, the results were highly individual; however, the results were consistent within one individual when the same test was repeated. This individuality followed the same patterns across language modalities and, on some occasions, across language groups. Since both modalities yield similar answers to analogous study questions, this work provides methods for basic input to sign language applications, i.e. signing avatars. It also answers questions about the precision of the animation and its intelligibility for users: which parameters govern the intelligibility of synthesised speech or sign, and how precise the animation or synthetic speech must be in order to remain intelligible. 
The results also lend additional support to the well-known fact that intelligibility is not the same as naturalness. In some cases, as shown in the design of the sign perception test battery, naturalness actually decreases intelligibility; this, too, has to be taken into consideration when designing applications. All in all, the results of each of the test batteries, whether for signers or speakers, yield strikingly similar patterns, which offers yet further support for a common core of all human communication. We can thus modify and deepen phonetic framework models of human communication based on the knowledge obtained from the test battery results in this thesis.
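The forced-choice identification tasks described above boil down to scoring each informant group's answers against the intended stimulus category. As a minimal sketch of that scoring step – with entirely hypothetical response data and group labels, not the thesis's actual dataset – one could tally per-group identification accuracy like this:

```python
from collections import defaultdict

# Hypothetical forced-choice records: (informant group, stimulus id,
# chosen answer, intended answer). Real data would come from the tests.
responses = [
    ("native signer", "s1", "A", "A"),
    ("native signer", "s2", "B", "A"),
    ("interpreter",   "s1", "A", "A"),
    ("non-signer",    "s1", "B", "A"),
]

def identification_accuracy(records):
    """Proportion of correct forced-choice identifications per informant group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, _stimulus, answer, intended in records:
        totals[group] += 1
        if answer == intended:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

print(identification_accuracy(responses))
```

Comparing such per-group accuracies across the speech and sign test batteries is one straightforward way to see whether the modalities pattern alike.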

Relevância:

30.00%

Publicador:

Resumo:

Electronic learning has become crucial in higher education, with learning management systems increasingly used as a key means of integration in distance learning. The objective of this study is to understand what influences university teachers to adopt and use web-based learning management systems. Blackboard, one of the systems used internationally by various universities, serves as the case. Semi-structured interviews were conducted with professors and lecturers who use Blackboard at Lappeenranta University of Technology (LUT). The collected data were categorised under constructs adapted from the Unified Theory of Acceptance and Use of Technology (UTAUT), and the interpretation and discussion were based on the reviewed literature. The findings suggest that the adoption of learning management systems by LUT teachers is strongly influenced by perceived usefulness, facilitating conditions, and gained experience. They also suggest that ease of use and social influence have a moderate influence on adoption for teachers at LUT.
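The analysis step described above – coding interview statements under UTAUT constructs and seeing which constructs dominate – can be sketched as a simple tally. The four construct names below are the standard UTAUT core constructs; the interview excerpts and their assigned codes are hypothetical illustrations, not data from the study:

```python
from collections import Counter

# Standard UTAUT core constructs used as coding categories.
UTAUT_CONSTRUCTS = {
    "performance_expectancy",   # e.g. perceived usefulness
    "effort_expectancy",        # e.g. ease of use
    "social_influence",
    "facilitating_conditions",
}

# Hypothetical coded interview excerpts: (statement, assigned construct).
coded_excerpts = [
    ("Blackboard saves me time distributing materials", "performance_expectancy"),
    ("IT support helped me set up my course",           "facilitating_conditions"),
    ("Colleagues expected me to use the system",        "social_influence"),
    ("After a few courses it became routine",           "facilitating_conditions"),
]

def tally_constructs(excerpts):
    """Count how often each UTAUT construct appears in the coded data."""
    counts = Counter(code for _statement, code in excerpts)
    unknown = [code for code in counts if code not in UTAUT_CONSTRUCTS]
    if unknown:
        raise ValueError(f"uncategorised codes: {unknown}")
    return counts

print(tally_constructs(coded_excerpts))
```

A tally like this only shows which constructs recur in the coded material; interpreting the strength of each influence, as the study does, still requires reading the tallies against the reviewed literature.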