Abstract:
The goal of this research is to study how knowledge-intensive business services (KIBS) can be productized using the service blueprinting tool. As services provide the majority of jobs, GDP, and productivity growth in Europe, their continuous development is needed for Europe to retain its global competitiveness; as services become more complex, their development becomes more difficult. The theoretical part of this study researches productization in the context of knowledge-intensive business services. The empirical part is carried out as a case study in a KIBS company and utilizes qualitative interviews and case materials. The final outcome of this study is an updated productization framework designed for KIBS companies, together with recommendations for the case company. As the results of this study indicate, productization expanded with service blueprinting can be a useful tool for KIBS companies in developing their services. The updated productization framework is provided for future reference.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The subject of this Master's thesis is the liquidity management of Finnish grocery trade companies in 2009–2013. The thesis examines how the working capital management of Finnish grocery trade companies changed over this period, and also how the profitability, liquidity, and solvency of the selected companies changed in 2009–2013. The study further examines how the Finnish grocery trade developed during the review period. The study is limited to the four largest Finnish grocery trade groups, excluding Lidl Suomi Ky due to missing financial data, selected on the basis of 2013 grocery sales and market shares. Based on these criteria, the following groups were selected: S-ryhmä, K-ryhmä, Suomen Lähikauppa Oy, and Stockmann Oyj Abp. The theoretical basis of the study draws on earlier literature and published academic research on supply chains and their management, as well as on working capital and its management. The financial statement data of the selected companies were collected from the Virre database, and the industry data with Statistics Finland's PC-Axis 2008 software. The study found small connections between the working capital percentage and the quick ratio: when the working capital percentage decreases, the quick ratio improves. Changes in the working capital percentage correlated negatively with return on assets and with the operating profit margin. Despite the tight economic situation, the companies studied were able to keep their working capital percentage very stable through various efficiency measures.
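To make the two ratios concrete, below is a minimal Python sketch assuming the standard textbook definitions of the quick ratio and the working capital percentage (käyttöpääomaprosentti); the figures are hypothetical illustrations, not values from the study.

```python
# Minimal sketch of the two key ratios discussed above, using standard
# textbook definitions (assumed; not taken from the thesis itself).

def quick_ratio(current_assets: float, inventory: float,
                current_liabilities: float) -> float:
    """Quick ratio = (current assets - inventory) / current liabilities."""
    return (current_assets - inventory) / current_liabilities

def working_capital_pct(inventories: float, receivables: float,
                        payables: float, revenue: float) -> float:
    """Working capital % = (inventories + receivables - payables) / revenue * 100."""
    return (inventories + receivables - payables) / revenue * 100

# Hypothetical example figures (EUR million), for illustration only:
print(quick_ratio(current_assets=950.0, inventory=620.0,
                  current_liabilities=780.0))               # ~0.42
print(working_capital_pct(inventories=620.0, receivables=210.0,
                          payables=700.0, revenue=7100.0))  # ~1.8 %
```

Under these definitions the inverse relationship observed in the study is plausible: freeing cash from inventories or stretching payables lowers the working capital percentage, while the quick ratio, whose numerator excludes inventories, stays intact or improves.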
Abstract:
Today’s healthcare organizations are under constant pressure to change: hospitals should be able to offer their patients the best possible medical care with limited resources while retaining a steady efficiency level in their operations. This is challenging, especially in trauma hospitals, where variation in patient cases and volumes is relatively high. Furthermore, trauma patient care requires plenty of resources, as most of the patients have to be treated as single cases. Occasionally, sudden increases in demand cause congestion in the operations of the hospital, which in Töölö hospital appears as an increase in surgery waiting times for yellow-urgency-class patients. An increase in surgery waiting times may cause the patient's condition to deteriorate, which also raises the surgical risks, and the congestion itself overloads the hospital's capacity and staff. The aim of this master’s thesis is to introduce the factors contributing to the trauma process and to examine the correlation between different variables and lengthened surgery waiting times. The results of this study are based on three years of patient data and several quantitative analyses. Based on the analysis, a daily usable indicator was created to support decision making in operations management. By using the selected indicator, the effects of congestion can be recognized and corrective action taken more proactively.
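The abstract does not specify the indicator itself; purely as a hypothetical sketch of the kind of daily indicator described, one could flag days on which the share of yellow-urgency patients exceeding a target waiting time passes a threshold. The column names, 48-hour target, and 20 % alert share below are illustrative assumptions, not values from the thesis.

```python
import pandas as pd

# Hypothetical daily congestion indicator; the 48-hour target and
# 20 % alert share are illustrative assumptions only.
TARGET_WAIT_H = 48
ALERT_SHARE = 0.20

def daily_congestion_flag(patients: pd.DataFrame) -> pd.Series:
    """patients: one row per yellow-urgency patient, with columns
    'arrival' (datetime64) and 'wait_hours' (float)."""
    late = patients["wait_hours"] > TARGET_WAIT_H
    # Share of patients per arrival day whose wait exceeded the target.
    share_late = late.groupby(patients["arrival"].dt.date).mean()
    return share_late > ALERT_SHARE  # True on days suggesting congestion
```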
Abstract:
The subject of the thesis is automatic compression of Finnish sentences with machine learning, such that the compressed sentences remain grammatical and retain their essential meaning. Compression of natural language sentences has multiple possible uses; in this thesis the focus is the generation of television program subtitles, which are often compressed versions of the program's original script. The main part of the thesis consists of machine learning experiments for automatic sentence compression using different approaches to the problem. The machine learning methods used are linear-chain conditional random fields (CRFs) and support vector machines. We also examine which automatic text analysis methods provide useful features for the task. The data, supplied by Lingsoft Inc., consists of subtitles in both compressed and uncompressed form. The models are compared to a baseline system taken from the literature, and comparisons are made both automatically and with human evaluation, because of the potentially subjective nature of the output. The best result is achieved with linear-chain CRF sequence classification using a rich feature set. All of the tested text analysis methods help classification, and the most useful is morphological analysis.
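To make the sequence-labeling formulation concrete, here is a minimal sketch using the sklearn-crfsuite library, which implements linear-chain CRFs. Framing compression as per-token KEEP/DEL labeling and the toy feature set are illustrative assumptions, not the thesis's actual setup, which used a richer feature set including morphological analysis.

```python
import sklearn_crfsuite  # linear-chain CRF; pip install sklearn-crfsuite

def token_features(tokens, i):
    # Toy feature set; the thesis's rich set also drew on text analysis
    # tools, of which morphological analysis helped the most.
    return {
        "word": tokens[i].lower(),
        "suffix3": tokens[i][-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<S>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</S>",
    }

def sent2features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Hypothetical training pair: one KEEP/DEL label per token.
sent = ["se", "oli", "todella", "hyvin", "tehty"]
X_train = [sent2features(sent)]
y_train = [["KEEP", "KEEP", "DEL", "DEL", "KEEP"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)

# Compression = keep only tokens the model labels KEEP.
labels = crf.predict(X_train)[0]
compressed = " ".join(t for t, lab in zip(sent, labels) if lab == "KEEP")
```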
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests, where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Robust algorithms for extracting accurate DTMs in low-ground-point-density situations are therefore needed in order to realize the full potential of airborne laser scanner data in forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and low numbers of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest is whether local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The DTM extraction algorithms presented in this thesis address the challenges of low-ground-point situations and complex terrain, while the CBH estimation algorithm addresses variations in the distribution of points in the LiDAR point cloud caused by factors such as tree species and the season of data acquisition. These algorithms are adaptive with respect to point cloud characteristics and exhibit a high degree of tolerance to variations in the density and distribution of points. Comparisons with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of tree heights estimated from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot preserve small terrain details (e.g., bumps, small hills, and depressions). The DTMs they generate are therefore suitable only for forestry applications whose primary objective is to estimate tree heights from normalized airborne laser scanner data. The CBH estimation algorithm proposed in this thesis is based on a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing methods based on height percentiles of the airborne laser scanner data. Being based on a moving voxel, however, the algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
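The normalization step described at the start of the abstract is mechanical once ground returns are available. Below is a minimal Python sketch, assuming the point cloud is held in NumPy arrays and using SciPy's griddata as a stand-in DTM interpolator; the thesis's contribution lies in extracting reliable ground points and DTMs, not in this step.

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points_xyz: np.ndarray,
                          ground_xyz: np.ndarray) -> np.ndarray:
    """Convert elevations to heights above ground.

    points_xyz : (N, 3) array of all returns (x, y, elevation).
    ground_xyz : (M, 3) array of classified ground returns, e.g., as
                 produced by a DTM extraction algorithm.
    """
    # Interpolate the ground elevation beneath every point; linear
    # interpolation over the ground returns acts as a simple DTM.
    dtm_z = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                     points_xyz[:, :2], method="linear")
    normalized = points_xyz.copy()
    normalized[:, 2] = points_xyz[:, 2] - dtm_z  # height above ground
    return normalized
```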