864 results for Information search – models
Abstract:
The purpose of this thesis was to study the present state of Business Intelligence in the company unit, that is, how efficiently the unit uses the possibilities of modern information management systems. The aim was to determine how the operative information management of the unit's tender process could be improved with modern information technology applications, making the tender process faster and more efficient. At the beginning it was essential to become acquainted with the written literature on Business Intelligence. Based on Business Intelligence theory, it was relatively easy yet challenging to investigate how the tender business could be improved with the methods of Business Intelligence. The empirical phase of this study was carried out as qualitative research and included theme and informal interviews at the company. Problems and challenges of the tender process were clarified as part of the empirical phase, and a set of challenges was identified when studying the information management of the company unit. Based on the theory and the interviews, a set of improvements was listed that the company could implement in the future when developing its operative processes.
Abstract:
The aim of the study was to develop the cost accounting of subcontracted projects at Larox Oyj. The company had observed that the accumulation of costs in long-term projects needs to be forecast more accurately. In line with the constructive research approach, the study first built a preliminary understanding of the target company's current situation and reviewed the relevant legislation and earlier research. The description of the current situation was created through interviews with representatives of the target company and the subcontractor, by studying the company's operating instructions, and by collecting data on the accumulation of project costs. Based on the collected data, constructions, that is, solution proposals for developing the operations, were created. The study developed a reporting model for reporting the progress of subcontracted projects. The aim of the model is to harmonise the subcontractors' reporting practices and to produce the information that Larox needs for revenue recognition. The second concrete solution is a cost-accumulation forecasting tool for subcontracted projects, which can be used to anticipate the development of a project's degree of completion during the project. The model was built for forecasting projects of one machine type, but it can easily be adapted into forecasting models for other project types as well. More accurate forecasts make it possible to develop management reporting and to improve the predictability of cash flows.
Abstract:
Peering into the field of Alzheimer's disease (AD), the outsider realizes that many of the therapeutic strategies tested (in animal models) have been successful. One may also notice that there is a deficit in translational research, i.e., in taking a drug that is successful in mice and translating it to the patient. Efforts are still focused on novel projects to expand the therapeutic arsenal to 'cure mice.' The scientific reasons behind so many successful strategies are not obvious. This article aims to review the current approaches to combat AD and to open a debate on common mechanisms of cognitive enhancement and neuroprotection. In short, either the rodent models are not good and should be discontinued, or we should extract the most useful information from those models. An example of a question that may be debated for the advancement of AD therapy is: in addition to reducing amyloid and tau pathologies, would it be necessary to boost synaptic strength and cognition? The debate could provide clues to turn around the current negative output in generating effective drugs for patients. Furthermore, the discovery of biomarkers in human body fluids, and a clear distinction between cognitive enhancers and disease-modifying strategies, should be instrumental in advancing anti-AD drug discovery.
Abstract:
In the present paper we characterize the optimal use of Poisson signals to establish incentives in the "bad" and "good" news models of Abreu et al. [1]. In the former, for small time intervals the signals' quality is high and we observe a "selective" use of information; otherwise there is a "mass" use. In the latter, for small time intervals the signals' quality is low and we observe a "fine" use of information; otherwise there is a "non-selective" use. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Public Monitoring, Information Characteristics.
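As a point of reference for how the length of the monitoring interval enters the signal structure, the following LaTeX sketch states the standard Poisson arrival probability and the resulting likelihood ratio between two monitored actions; the rates \(\lambda_C\) and \(\lambda_D\) are illustrative symbols introduced here, not notation taken from Abreu et al. [1].

```latex
% Probability of observing k Poisson signal arrivals in a period of length \Delta
% at rate \lambda (standard Poisson law; \lambda_C and \lambda_D are illustrative
% rates under the two monitored actions, not values from [1]).
\[
  \Pr[N_\Delta = k] \;=\; e^{-\lambda \Delta}\,\frac{(\lambda \Delta)^k}{k!},
  \qquad k = 0, 1, 2, \dots
\]
% The informativeness of observing k arrivals is governed by the likelihood ratio
% between the two actions:
\[
  \ell(k) \;=\; \frac{\Pr[N_\Delta = k \mid \lambda_D]}{\Pr[N_\Delta = k \mid \lambda_C]}
  \;=\; e^{-(\lambda_D - \lambda_C)\Delta}\left(\frac{\lambda_D}{\lambda_C}\right)^{k},
\]
% so for small \Delta the ratio \lambda_D / \lambda_C dominates, which is one way
% to see how signal quality depends on the length of the monitoring interval.
```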
Abstract:
The flow of information within the modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual's ability to handle text or speech input. For the majority of us this presents no problems, but there are some individuals who would benefit from other means of conveying information, e.g. signed information flow. During the last decades, new results from various disciplines have pointed towards a common background and common processing for sign and speech, and this was one of the key issues that I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, and that is why I wanted to design test batteries for signers analogous to widely used speech perception tests – to find out whether the results for signers would be the same as in speakers' perception tests. One of the key findings within biology – and more precisely its effects on speech and communication research – is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and it all seems to converge on the hypothesis that all human communication has a common core. In this thesis, speech and sign are discussed as equal and analogous counterparts of communication, and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries. Furthermore, both production and perception of speech and sign are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself from very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the apparatus has been changed. Normal production is studied both in speech and sign, and the effects of changed articulation are studied with regard to speech. Both these studies are done using carrier sentences. Furthermore, sign production is studied by giving the informants the possibility of spontaneous production. The production data from the signing informants is also used as the basis for input in the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied using the informants' answers to questions using forced choice in identification and discrimination tasks. These answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters, and Finnish adults with no knowledge of any signed language. This gave a chance to investigate which of the characteristics found in the results were due to the language per se and which were due to the change in modality itself. As the analogous test batteries yielded similar results over different informant groups, some common threads of results could be observed. Starting from very early on in acquiring speech and sign, the results were highly individual. However, the results were the same within one individual when the same test was repeated. This individuality of results manifested along the same patterns across different language modalities and, on some occasions, across language groups.
As both modalities yield similar answers to analogous study questions, this has led us to provide methods for basic input for sign language applications, i.e. signing avatars. It has also given us answers to questions on the precision of the animation and its intelligibility for the users – what are the parameters that govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order for it to be intelligible. The results also give additional support to the well-known fact that intelligibility is in fact not the same as naturalness. In some cases, as shown within the sign perception test battery design, naturalness decreases intelligibility. This also has to be taken into consideration when designing applications. All in all, the results from each of the test batteries, be they for signers or speakers, yield strikingly similar patterns, which provides yet further support for a common core for all human communication. Thus, we can modify and deepen the phonetic framework models for human communication based on the knowledge obtained from the results of the test batteries within this thesis.
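As a concrete illustration of how forced-choice identification responses can be tallied and compared across informant groups, here is a minimal Python sketch; the group labels, stimulus identifiers and data layout are illustrative assumptions and do not reflect the actual test material or scoring used in the thesis.

```python
# Minimal sketch: tallying forced-choice identification responses per informant
# group (native signers, interpreters, non-signers). The group labels, stimulus
# identifiers and response format are illustrative assumptions, not the actual
# data layout used in the thesis.
from collections import defaultdict

def identification_accuracy(responses):
    """responses: iterable of (group, stimulus, chosen, target) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, _stimulus, chosen, target in responses:
        total[group] += 1
        if chosen == target:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Example with made-up responses
trials = [
    ("native signer", "sign_A", "A", "A"),
    ("native signer", "sign_B", "B", "B"),
    ("interpreter",   "sign_A", "A", "A"),
    ("non-signer",    "sign_A", "B", "A"),
]
print(identification_accuracy(trials))
```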
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions for the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods of modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop approaches zero, since the first derivative of the flow with respect to the pressure drop then approaches infinity. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed that uses a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this trade-off is investigated for the two-regime orifice flow model. In addition, very small volumes exist inside many types of valves, as well as between them.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a serious weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODE) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine applied. The strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
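To make the smoothed orifice description concrete, the following is a minimal Python sketch of a two-regime orifice model in which a cubic polynomial replaces the square-root law below a transition pressure drop, with the value and first derivative matched at the boundary; the cubic form, the parameter values and the function names are illustrative assumptions rather than the exact spline formulation of the thesis.

```python
# Minimal sketch of a two-regime orifice flow model: pure turbulent flow above a
# transition pressure drop p_tr, and a smooth cubic below it whose value and
# first derivative match the turbulent branch at p_tr. All numerical values are
# illustrative placeholders, not parameters from the thesis.
import numpy as np

def orifice_flow(dp, Cq=0.6, A=1e-5, rho=870.0, p_tr=2e4):
    """Volume flow through an orifice as a function of pressure drop dp [Pa]."""
    K = Cq * A * np.sqrt(2.0 / rho)          # turbulent law: Q = K * sqrt(dp)
    sign = np.sign(dp)
    p = np.abs(dp)
    # Coefficients of s(p) = a*p + b*p**3 chosen so that s(p_tr) = K*sqrt(p_tr)
    # and s'(p_tr) = K / (2*sqrt(p_tr)); this keeps dQ/d(dp) finite at dp = 0.
    a = 5.0 * K / (4.0 * np.sqrt(p_tr))
    b = -K / (4.0 * p_tr ** 2.5)
    turbulent = K * np.sqrt(p)
    smooth = a * p + b * p ** 3
    return sign * np.where(p >= p_tr, turbulent, smooth)

# The derivative stays bounded near dp = 0, avoiding the infinitely small
# integration steps caused by the purely turbulent description.
print(orifice_flow(np.array([0.0, 1e4, 5e5])))
```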
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike "traditional" biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as of complex systems in general. The full range of developed and applied modelling techniques, as well as the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potential and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
Abstract:
This study was conducted in order to learn how companies' revenue models are transformed by the digitalisation of their products and processes. Because there is still only a limited number of studies focusing solely on revenue models, and particularly on revenue model change caused by changes in the business environment, the topic was initially approached through the business model concept, which organises the different value-creating operations and resources of a company in order to create profitable revenue streams. This was used as the basis for constructing the theoretical framework of this study, which was used to collect and analyse the information. The empirical section is based on a qualitative study approach and a multiple-case analysis of companies operating in the learning materials publishing industry. Their operations are compared with companies operating in other industries which have undergone a comparable transformation, in order to recognise either similarities or contrasts between the cases. The sources of evidence are a literature review, conducted to identify the essential dimensions researched earlier, and interviews of 29 managers and executives at 17 organisations representing six industries. Based on the earlier literature and the empirical findings of this study, the change of the revenue model is linked with the change of the other dimensions of the business model: when one dimension is altered, the others should be adjusted accordingly. At the case companies the transformation is observed as the utilisation of several revenue models simultaneously and as the revenue creation processes becoming more complex.
Abstract:
The purpose of the thesis was to study the role of communication in competence development. The aim was to examine how communication promotes the development of nutritional competence in a hospital's meal process. The study sought answers to the questions of what the objectives of nutritional competence development and of communication are, on which workplace communication forums the changes required by the new nutrition care recommendation and the nutrition care strategy are handled, and what kinds of workplace learning processes can be identified on these forums. From an empirical perspective, the study can be described as a case study; the case is a hospital meal process. The new nutrition care recommendation (Nuutinen ym. 2010) was used as preparatory material for the study and was supplemented with interview data. The perspectives of nursing, food service and nutrition care expertise are represented from the hospital as well as from a vocational and adult education institute. Theme interviews were used as the research method. The interviews were recorded and transcribed, and the material was analysed by means of a theme card index and thematic analysis. The results of the study show that the goal of developing nutritional competence is to implement the changes required by the new nutrition care recommendation and the nutrition care strategy in the hospital's nutrition care processes and products. In this context, the goal of developing nutritional competence is the development of the meal process and of the food service products, that is, the diets. The purpose of developing nutrition care is to promote clients' recovery, quality of life and well-being, and to save healthcare costs. Communication plays an important role in developing nutritional competence: it promotes individual and shared, i.e. team, learning through interaction. Dialogue between food service and nursing staff and nutrition care experts is seen as important in developing nutritional competence. Dialogue strengthens the knowledge base and the shared concepts related to nutrition care. The goal is to create a common language and operating model for developing nutrition care. The changes required by the nutrition care recommendation and the nutrition care strategy are handled in external and internal networks, for example in the meetings of the nutrition contact person network, in multiprofessional working groups, in personnel and apprenticeship training, and on the work forum, i.e. in the physical workspace, also utilising communication technology. Experts and teachers in nursing, food service and nutrition care have an important role in guiding the workplace learning related to developing nutritional competence. Social, reflective, cognitive and operational workplace learning processes can be identified in the development of nutritional competence. Social processes include the exchange of work experiences, and reflective processes their evaluation. The purpose of the cognitive processes is the acquisition and processing of information, combining experiential knowledge with new nutritional science knowledge. The goal is to create a common language and operating model, which is then tried out in practice. Operational processes are learning by experimenting, doing and applying in the physical workspace, where a new operating model, for example malnutrition screening, meal ordering or recipes, is tried out in practice. In conclusion, it can be stated that the hospital has adopted principles of a learning organisation in developing nutritional competence.
The development of nutritional competence is connected to change, to strategy, and to the development of processes and products. Communication promotes the implementation of the changes required by the nutrition care recommendation and the nutrition care strategy in the hospital's meal process and diets. The aim of the dialogue between nursing and food service staff and nutrition care experts is to create a common language and operating model for developing nutrition care. The study serves the development of nutritional competence in the hospital meal process, and its results can be used as benchmarking material in healthcare organisations and networks.
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn from a set of past observations a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
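To illustrate the kind of objective the pairwise approach minimizes, here is a minimal Python sketch of a regularized pairwise least-squares ranking loss for a linear model; the naive enumeration of all pairs and the variable names are illustrative only and do not reflect the matrix-algebra shortcuts that RankRLS actually relies on.

```python
# Minimal sketch of a regularized pairwise least-squares ranking objective for a
# linear scoring model. The explicit double loop over pairs is for illustration;
# in practice the quadratic number of pairs is exactly what efficient training
# methods avoid via computational shortcuts.
import numpy as np

def pairwise_ls_objective(w, X, y, lam=1.0):
    """Sum over pairs of ((y_i - y_j) - (f(x_i) - f(x_j)))^2 plus an L2 penalty."""
    scores = X @ w
    loss = 0.0
    n = len(y)
    for i in range(n):
        for j in range(n):
            if i != j:
                loss += ((y[i] - y[j]) - (scores[i] - scores[j])) ** 2
    return 0.5 * loss + lam * np.dot(w, w)

# Toy example with random data
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)
print(pairwise_ls_objective(np.zeros(5), X, y))
```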
Abstract:
The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order to fulfill their safety function. The phenomena arising from such a high direct contact condensation (DCC) rate are very challenging to analyse either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was the first to be simulated. The POOLEX experiment STB-31 was operated near the conditions between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very high calculation grid resolution was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data on bubble sizes and the temporal data on bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which invoked self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. The comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
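As an illustration of the kind of comparative spectral analysis mentioned above, the following is a minimal Python sketch that extracts the dominant frequencies of a sampled pressure signal with an FFT; the sampling rate and the synthetic signal are placeholders, not data from the POOLEX experiments or from the pattern recognition procedure.

```python
# Minimal sketch of a spectral (FFT) analysis of a sampled pool-bottom pressure
# signal, of the kind used to separate chugging, bubbling and structural
# oscillation frequencies. The sampling rate and the synthetic signal are
# placeholders, not data from the POOLEX experiments.
import numpy as np

def dominant_frequencies(signal, fs, n_peaks=2):
    """Return the n_peaks strongest frequency components of a real signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    order = np.argsort(spectrum)[::-1][:n_peaks]
    return sorted(freqs[order])

fs = 1000.0                                   # Hz, assumed sampling rate
t = np.arange(0, 10, 1.0 / fs)
# Synthetic signal: a slow chugging-like component plus a faster structural mode
pressure = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 42.0 * t)
print(dominant_frequencies(pressure, fs))     # expected: [1.5, 42.0]
```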
Abstract:
The aim of this study was to compare hydrographically conditioned digital elevation models (HCDEMs) generated from data from the VNIR (Visible and Near Infrared) sensor of ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), from SRTM (Shuttle Radar Topography Mission) and from IBGE topographical maps at a scale of 1:50,000, processed in a Geographical Information System (GIS), aiming at the morphometric characterization of watersheds. The São Bartolomeu River sub-basin was taken as the basis, and its morphometric characteristics were obtained from the HCDEMs. The Root Mean Square Error (RMSE) and cross-validation were the statistical indices used to evaluate the quality of the HCDEMs. The percentage differences in the morphometric parameters obtained from these three different data sets were less than 10%, except for the mean slope (21%). In general, a good agreement was observed between the HCDEMs generated from remote sensing data and the IBGE maps. The result of the ASTER HCDEM was slightly better than that of the SRTM HCDEM. The ASTER HCDEM was more accurate than the SRTM HCDEM in basins with high altitudes and rugged terrain, presenting an altimetric frequency distribution closest to that of the IBGE HCDEM, considered the standard in this study.
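For reference, the RMSE criterion used to evaluate the HCDEMs can be computed as in the following minimal Python sketch; the two small elevation grids are synthetic placeholders, not the ASTER, SRTM or IBGE data of the study.

```python
# Minimal sketch of the RMSE comparison between two co-registered elevation
# models, represented as NumPy arrays on the same grid; the array contents are
# synthetic placeholders, not data from the study.
import numpy as np

def rmse(dem, reference):
    """Root mean square error between a DEM and a reference DEM (same grid)."""
    diff = np.asarray(dem, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

reference = np.array([[100.0, 102.0], [105.0, 110.0]])
tested = np.array([[101.5, 101.0], [104.0, 112.0]])
print(rmse(tested, reference))   # elevation error in the same units (metres)
```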
Abstract:
This study is dedicated to search engine marketing (SEM). It aims to develop a business model of SEM firms and to provide explicit research on the trustworthy practices of virtual marketing companies. Optimization is a general term that covers a variety of techniques and methods for promoting web pages. The research addresses optimization as a business activity and explains its role in online marketing. Additionally, it highlights the use of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature review combines in one place both technical and economic scientific findings in order to highlight the technological and business attributes incorporated in SEM activities. Empirical data regarding search marketers was collected via e-mail questionnaires. Four representatives of SEM companies were engaged in this study to accomplish the business model design. Additionally, a fifth respondent was a representative of a search engine portal, who provided insight into the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases the demand for promotional programs. SEM is the largest part of online marketing, and it is a prerogative of search engine portals. However, skilled users, or marketers, are able to implement long-term marketing programs by utilizing web page optimization techniques, keyword consultancy or content optimization to increase web site visibility to search engines and, therefore, users' attention to the customer's pages. SEM firms belong to the category of small knowledge-intensive businesses. On the basis of the data analysis, the business model was constructed. The SEM model consists of generalized constructs, although these represent a wider range of operational aspects. The building blocks of the model cover the fundamental parts of SEM commercial activity: the value creation, customer, infrastructure and financial segments. Approaches were also provided for evaluating a company's differentiation and competitive advantages. It is assumed that search marketers should make further attempts to differentiate their business from the large number of similar service-providing companies. The findings indicate that SEM companies are interested in increasing their trustworthiness and in building their reputation. The future of search marketing depends directly on the development of search engines.
Abstract:
Alumni are considered a precious resource of institutions; thus improving alumni administration is critical. In the information era, alumni administration is assisted by widespread information technology, such as social network sites. This paper aims to discover whether a self-built information system would enhance alumni connection in the IMMIT context, and what kinds of attributes would be helpful when applied to this specific context. The current online alumni services at other universities and at the IMMIT host university are analyzed, and then social media is introduced. After illustrating the social capital existing in IMMIT, the type of the self-built information system is suggested, followed by an interpretation of the prototype. Two research models are utilized in this article: the Technology Acceptance Model (TAM) and the intentional social action model. The second model is adjusted with proposed parameters. Afterwards, a survey and an interview protocol are designed under the guidance of the models. The results are analyzed in several groups, and the proposed parameters are tested. A conclusion is drawn to indicate how to improve alumni's intention to use the system and how to achieve a better-accepted design.
Abstract:
This dissertation explores the use of internal and external sources of knowledge in modern innovation processes. It builds on a framework that combines theories such as the behavioural theory of the firm, the evolutionary theory of economic change, and modern approaches to strategic management. It follows the recent increase in innovation research focusing on firm-level examination of innovative activities instead of traditional industry-level determinants. The innovation process is seen as a problem- and slack-driven search process, which can take several directions in terms of organizational boundaries in the pursuit of new knowledge and other resources. It thus draws on recent models of technological change, according to which firms nowadays should build their innovative activities on both internal and external sources of innovation rather than relying solely on internal resources. Four different research questions are addressed, all of which are empirically investigated via a rich dataset covering Finnish innovators collected by Statistics Finland. Firstly, the study examines how the nature of problems shapes the direction of the search for new knowledge. In general, it demonstrates that the nature of the problem does affect the direction of the search, although under resource constraints firms tend to use external rather than internal sources of knowledge. At the same time, it shows that firms that are constrained in terms of finance seem to search both internally and externally. Secondly, the dissertation investigates the relationships between different kinds of internal and external sources of knowledge in an attempt to find out where firms should direct their search in order to exploit the potential of a distributed innovation process. The concept of complementarities is applied in this context. The third research question concerns how the use of external knowledge sources – openness to external knowledge – influences the financial performance of firms. Given the many advantages of openness presented in the current literature, the focus is on how it shapes profitability. The results reveal a curvilinear relationship between profitability and openness (taking an inverted U-shape), the implication being that it pays to be open up to a certain point, but being too open to external sources may be detrimental to financial performance. Finally, the dissertation addresses some challenges in CIS-based innovation research that have received relatively little attention in prior studies. The general aim is to underline the fact that a comprehensive understanding of the complex process of technological change requires the constant development of methodological approaches (in terms of data and measures, for example). All the empirical analyses included in the dissertation are based on the Finnish CIS (Finnish Innovation Survey 1998-2000).
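As an illustration of how an inverted U-shaped (curvilinear) relationship of the kind reported above is commonly tested, here is a minimal Python sketch fitting a quadratic of profitability on openness; the synthetic data, variable names and coefficients are placeholders and are not drawn from the Finnish CIS data used in the dissertation.

```python
# Minimal sketch of testing for an inverted U-shaped relationship by fitting a
# quadratic of profitability on openness; a negative coefficient on the squared
# term is consistent with an inverted U. The data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
openness = rng.uniform(0, 10, size=200)
profitability = 2.0 * openness - 0.15 * openness ** 2 + rng.normal(0, 1.5, size=200)

# polyfit returns coefficients from the highest degree down: [b2, b1, b0]
b2, b1, b0 = np.polyfit(openness, profitability, deg=2)
turning_point = -b1 / (2.0 * b2)
print(f"quadratic term: {b2:.3f}, turning point at openness = {turning_point:.2f}")
```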