796 results for Empirical Algorithm Analysis
Abstract:
The purpose of the study is to examine market opportunities in the Finnish insulation wool market. The study consists of both a theoretical and an empirical part. The theoretical part lays the foundation for the empirical research by examining the content of the different components of a market opportunity analysis. In the empirical part, the largest customers in the market, who account for the majority of the market's purchase volume, are interviewed. On the basis of these interviews, the study seeks to identify the opportunities that the insulation wool market offers. The study shows that the different components of a market opportunity analysis are closely interrelated. A comprehensive study makes it possible to point out various opportunities in the market. The empirical part shows that the Finnish insulation wool market holds opportunities for a new supplier. The study also presents some recommendations for exploiting the identified opportunities.
Abstract:
The objective of the study is to examine whether family ownership, i.e. private ownership, is a more profitable form of ownership than institutional ownership, and whether firm age and size affect the performance of family firms. Drawing on earlier research, the study first reviews the characteristics generally associated with family ownership and the performance of family firms compared with non-family firms. The empirical analysis of the effects of family ownership on firm profitability, and of the effects of firm age and size on the performance of family firms, is carried out with two samples consisting of unlisted Norwegian small and medium-sized enterprises (SMEs). A random sample and a main-industry sample, in which unlisted SMEs were selected randomly from the most important Norwegian industries, are thus analysed separately. The analysis is carried out using linear regression analysis. Although the random sample does not suggest that family firms are more profitable than non-family firms, the main-industry sample shows that in unlisted SMEs family, i.e. private, ownership is a significantly more profitable ownership form than institutional ownership. Young and small firms in particular account for the better profitability of family firms.
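A minimal sketch of the kind of linear regression comparison the abstract describes, using statsmodels; the variable names (roa, a family-ownership dummy, age and size controls) and the synthetic data are illustrative assumptions, not the thesis's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative synthetic data standing in for a sample of unlisted SMEs.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "family":   rng.integers(0, 2, n),            # 1 = family/private ownership
    "log_age":  np.log(rng.integers(2, 60, n)),   # firm age in years
    "log_size": np.log(rng.uniform(1, 100, n)),   # firm size, e.g. total assets
})
df["roa"] = 0.05 + 0.02 * df["family"] - 0.01 * df["log_size"] + rng.normal(0, 0.05, n)

# OLS of profitability on the ownership dummy with age and size controls,
# run separately for each sample in the setting the abstract describes.
model = smf.ols("roa ~ family + log_age + log_size", data=df).fit()
print(model.summary())
```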
Abstract:
The objective of the study is to analyse the financial performance of 74 pulp and paper companies with ratios describing profitability, liquidity, solvency and value creation. The theoretical part of the study introduces the tools of business analysis, after which the financial ratios are presented. The empirical part reviews the 2005 ratios at the company level. To examine changes in the ratios over the long term, the companies are grouped by geographical location and by business orientation. The study is descriptive. The ratios reveal the ongoing structural change in the pulp and paper industry. South American companies, which benefit from new and cost-efficient raw material, have moved closer to value creation, whereas most North American companies, which used to be the industry's leading value creators, are now value destroyers. The industry also suffers from low profitability, which affects North American companies the most. At the same time, South American companies have improved their profitability, which further underlines the ongoing change.
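As a rough illustration of the four ratio dimensions named in the abstract, typical textbook definitions are given below; the thesis's exact ratio set is not specified here, so these are assumptions.

```latex
% Typical ratios for the four dimensions named in the abstract
% (illustrative, not necessarily the exact set used in the thesis).
\begin{align*}
\text{ROA} &= \frac{\text{Net income}}{\text{Total assets}} && \text{(profitability)}\\[2pt]
\text{Current ratio} &= \frac{\text{Current assets}}{\text{Current liabilities}} && \text{(liquidity)}\\[2pt]
\text{Equity ratio} &= \frac{\text{Shareholders' equity}}{\text{Total assets}} && \text{(solvency)}\\[2pt]
\text{EVA} &= \text{NOPAT} - \text{WACC}\times\text{Invested capital} && \text{(value creation)}
\end{align*}
```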
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transforming technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in the dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, and for motivation and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and the relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - that precede the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of the essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable also to the development of measurement systems and the selection of measures in R&D activities. However, the measurement of R&D performance has special aspects that make the development of new approaches, especially for R&D performance measure selection, necessary: First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, and the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges underline the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and to choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, by clarifying the main factors related to the selection of R&D measures, and by providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in R&D decision-making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches.
Gathering data and conducting case studies in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to form a comprehensive picture of the main challenges of R&D performance analysis in different organizations. This is essential, as the recognition of the most important problem areas is a crucial element of the constructive research approach utilized in this study. The constructed approaches presented in this dissertation offer multiple practical benefits regarding the defined problem areas: 1) the selection of R&D measures became more systematic than in the empirical baseline, as the studied organizations commonly had no systematic approaches in use earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained by organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel approaches, methods and tools used in the selection processes of R&D measures applied in real-world organizations. Throughout the research, the handling of the versatility and challenges of R&D performance analysis, as well as of the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined, and it is shown how different types of current measurement error affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model is created to analyse the effect of the frequency converter non-idealities on the performance of electric drives. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial-flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of the speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is also applicable to speed-sensorless drives. The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
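A minimal sketch of the kind of compensation logic described: estimate the amplitude of a selected speed-ripple harmonic over time and keep the current-offset correction that shrinks it. The single-bin DFT estimator, the candidate-search logic and the `measure_speed` callback are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def harmonic_amplitude(speed, fs, f_target):
    """Amplitude of the speed component at f_target [Hz] via a single-bin DFT."""
    n = len(speed)
    t = np.arange(n) / fs
    phasor = np.exp(-2j * np.pi * f_target * t)
    return 2.0 * abs(np.dot(speed - np.mean(speed), phasor)) / n

def choose_offset_correction(measure_speed, fs, f_ripple, candidates):
    """Try candidate current-offset corrections and keep the one that minimizes
    the selected speed-ripple harmonic (illustrative logic only).
    measure_speed(correction) is assumed to apply the correction to the drive
    and return a window of measured or estimated speed samples."""
    amps = {c: harmonic_amplitude(measure_speed(c), fs, f_ripple) for c in candidates}
    return min(amps, key=amps.get)
```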
Abstract:
The complexity of the connections within an economic system can only be reliably reflected in academic research if powerful methods are used. Researchers have used Structural Path Analysis (SPA) to capture not only the linkages within the production system but also the propagation of effects into different channels of impacts. However, the SPA literature has restricted itself to showing the relations among sectors of production, while the connections between these sectors and final consumption have attracted little attention. In order to consider the complete set of channels involved, in this paper we propose a structural path method that endogenously incorporates not only the sectors of production but also the final consumption of the economy. The empirical application deals with water uses and analyses the dissemination of exogenous impacts into various channels of water consumption. The results show that the responsibility for water stress is imputed to different sectors depending on the hypothesis used for the role played by final consumption in the model. This highlights the importance of consumers' decisions in the determination of ecological impacts. Keywords: Input-Output Analysis, Structural Path Analysis, Final Consumption, Water uses.
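For context, the decomposition underlying SPA can be summarized in standard input-output notation, with A the matrix of technical coefficients; the paper's contribution is to apply the same logic to an augmented matrix in which final consumption is endogenized, so this sketch only shows the conventional production-side expansion.

```latex
% Building block of SPA: the Leontief inverse expanded as a series of paths.
\[
  (I - A)^{-1} \;=\; I + A + A^{2} + A^{3} + \cdots
\]
% The direct influence transmitted along an elementary path
% j -> i_1 -> i_2 -> ... -> i by an exogenous injection \Delta f_j is the
% product of the corresponding technical coefficients:
\[
  D_{p} \;=\; a_{i\,i_{k}} \cdots a_{i_{2}\,i_{1}}\, a_{i_{1}\,j}\,\Delta f_{j}.
\]
```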
Abstract:
The objective of the research was to analyse the potential of Normalized Difference Vegetation Index (NDVI) maps derived from satellite images, yield maps and grapevine fertility and load variables to delineate zones with different wine grape properties for selective harvesting. Two vineyard blocks located in NE Spain (Cabernet Sauvignon and Syrah) were analysed. The NDVI was computed from a Quickbird-2 multi-spectral image at veraison (July 2005). Yield data were acquired by means of a yield monitor during September 2005. Other variables, such as the number of buds, number of shoots, number of wine grape clusters and the weight of 100 berries, were sampled in a 10 rows × 5 vines pattern and used as input variables, in combination with the NDVI, to define the clusters as an alternative to yield maps. Two days prior to harvesting, grape samples were taken. The analysed variables were probable alcoholic degree, pH of the juice, total acidity, total phenolics, colour, anthocyanins and tannins. The input variables, alone or in combination, were clustered (2 and 3 clusters) using the ISODATA algorithm, and an analysis of variance and a multiple range test were performed. The results show that the zones derived from the NDVI maps are more effective in differentiating grape maturity and quality variables than the zones derived from the yield maps. The inclusion of other grapevine fertility and load variables did not improve the results.
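A minimal sketch of the zoning workflow described: compute NDVI from the red and near-infrared bands and cluster per-pixel features into 2 or 3 zones. KMeans is used here as a stand-in for the ISODATA algorithm, and the array names and band layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    return (nir - red) / (nir + red + eps)

def delineate_zones(features, n_zones=2):
    """Cluster feature vectors (e.g. NDVI, yield, buds per vine) into zones.
    KMeans stands in for ISODATA, which additionally merges/splits clusters."""
    X = np.column_stack(features)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize before clustering
    return KMeans(n_clusters=n_zones, n_init=10, random_state=0).fit_predict(X)

# Example: zones from NDVI alone on a (rows, cols) image.
# nir, red = ...  # multi-spectral bands as float arrays
# zones = delineate_zones([ndvi(nir, red).ravel()], n_zones=2).reshape(nir.shape)
```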
Abstract:
In an attempt to answer the question of how much debt a company should carry, in the present research we set forth a new, simple Trade-Off model that allows us to calculate how much debt and, by extension, how much equity a company should have, using easily available information and calculating the cost of debt dynamically on the basis of the effect that the company's capital structure has on the risk of bankruptcy. The proposed model has been applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007, using consolidated financial data from 1996 to 2006 published by Bloomberg. We use the simplex optimization method to find the debt level that maximizes firm value, and then compare the estimated debt with the real debt of the companies using the nonparametric Mann-Whitney test. The results indicate that 63% of the companies do not show a statistically significant difference between the real and the estimated debt.
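A minimal sketch of the two computational steps mentioned: a simplex-type numerical search for the value-maximizing debt level (here scipy's Nelder-Mead, as one reading of "simplex optimization") and the nonparametric Mann-Whitney comparison of estimated versus observed debt. The value function below is a placeholder, not the paper's model, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import mannwhitneyu

def firm_value(debt, ebit=100.0, tax=0.35, k_e=0.10):
    """Placeholder trade-off value function: tax shield minus a convex expected
    bankruptcy cost that grows with leverage (not the paper's actual model)."""
    unlevered_value = ebit * (1 - tax) / k_e
    return unlevered_value + tax * debt - 0.0005 * debt ** 2

# Downhill-simplex (Nelder-Mead) search for the value-maximizing debt level.
res = minimize(lambda d: -firm_value(d[0]), x0=[50.0], method="Nelder-Mead")
optimal_debt = res.x[0]

# Compare estimated optimal debt with observed debt across firms (synthetic data).
rng = np.random.default_rng(1)
estimated = optimal_debt * rng.uniform(0.8, 1.2, 10)
observed = rng.uniform(40.0, 400.0, 10)
stat, p_value = mannwhitneyu(estimated, observed, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```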
Abstract:
A variation of task analysis was used to build an empirical model of how therapists may facilitate the client assimilation process, as described in the Assimilation of Problematic Experiences Scale. A rational model was specified and considered in the light of an analysis of therapist in-session performances (N = 117) drawn from six inpatient therapies for depression. The therapist interventions were measured with the Comprehensive Psychotherapeutic Interventions Rating Scale. Consistent with the rational model, confronting interventions were particularly useful in helping clients elaborate insight. However, rather than there being a small number of progress-related interventions at the lower levels of assimilation, therapists' use of interventions was broader than hypothesized and drew from a wide range of therapeutic approaches. Concerning the higher levels of assimilation, there were insufficient data to allow an analysis of the therapists' progress-related interventions.
Abstract:
Changes in human lives are studied in psychology, sociology, and adjacent fields as outcomes of developmental processes, institutional regulations and policies, culturally and normatively structured life courses, or empirical accounts. However, such studies have used a wide range of complementary, but often divergent, concepts. This review has two aims. First, we report on the structure that has emerged from scientific life course research by focusing on abstracts from longitudinal and life course studies from the year 2000 onwards. Second, we provide a sense of the disciplinary diversity of the field and assess the value of the concept of 'vulnerability' as a heuristic tool for studying human lives. Applying correspondence analysis to 10,632 scientific abstracts, we find a disciplinary divide between psychology and sociology, and observe indications of both similarities of, and differences between, studies, driven at least partly by the data and methods employed. We also find that vulnerability takes a central position in this scientific field, which leads us to suggest several reasons to see value in pursuing theory development for longitudinal and life course studies in this direction.
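A minimal sketch of classical correspondence analysis as it might be applied to an abstracts-by-terms count matrix; the SVD formulation below is the standard one, while the input matrix and its construction are assumptions about the study's pipeline.

```python
import numpy as np

def correspondence_analysis(counts, n_dims=2):
    """Classical correspondence analysis of a nonnegative contingency matrix
    (rows: documents/abstracts, columns: terms or discipline labels)."""
    P = counts / counts.sum()                                 # correspondence matrix
    r = P.sum(axis=1)                                         # row masses
    c = P.sum(axis=0)                                         # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))        # standardized residuals
    U, sing, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U[:, :n_dims] * sing[:n_dims]) / np.sqrt(r)[:, None]
    col_coords = (Vt.T[:, :n_dims] * sing[:n_dims]) / np.sqrt(c)[:, None]
    inertia = sing ** 2                                       # principal inertias
    return row_coords, col_coords, inertia
```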
Abstract:
This Master's thesis considers new methods for independent component analysis (ICA); the methods are based on colligation and on cross-moments. The colligation method is based on the colligation of weights: two types of probability distributions are used instead of one, building on a general criterion of independence. The colligation approach is applied with two asymptotic expansions; the Gram-Charlier and Edgeworth expansions are used to approximate the probability densities in these methods. The thesis also uses a cross-moment method based on the fourth-order cross-moment, which is very similar to the FastICA algorithm. Both methods are examined on a linear mixture of two independent sources. The source signals and the mixing matrices are unknown, except for the number of signal sources. The colligation method and its modifications are compared with FastICA and JADE. A comparative analysis of separation performance and CPU time is also carried out between the cross-moment-based methods, FastICA and JADE for several mixed pairs.
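A minimal sketch of the benchmark setting described: a linear mixture of two independent sources separated with FastICA (scikit-learn). The colligation and cross-moment estimators themselves are not reproduced here, and the source signals and mixing matrix are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent sources and an unknown 2x2 mixing matrix.
s1 = np.sign(np.sin(3 * t))              # square wave
s2 = rng.laplace(size=t.size)            # super-Gaussian noise
S = np.column_stack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix (unknown to the algorithm)
X = S @ A.T                              # observed mixtures

# Estimate the independent components from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)             # recovered sources (up to scale/permutation)
W = ica.components_                      # estimated unmixing matrix
```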
Abstract:
This Master's Thesis has been written for the Stora Enso Flexible Packaging Papers business unit. In its North American mills, the business unit has developed a range of new flexible packaging paper grades. The thesis investigates opportunities for sales of these new flexible packaging papers in selected Western European markets. The study consists of a theoretical and an empirical part. The theoretical part covers the supply chain of flexible packaging, the discovery of customer requirements, the concept of an offering, general market analysis, customer analysis and the basis for sales planning. The empirical part includes a preliminary market analysis based on secondary sources, the results of lead-user interviews, and conclusions and recommendations. Potential customers' technical and commercial requirements were identified and compared with the current Stora Enso Flexible Packaging Papers offering. A list of potential new customers was also compiled, and sales actions were suggested in order to gain new accounts.
Abstract:
This Master's thesis examines spectral images from the viewpoint of a statistical image model. The first part of the thesis studies the effect of the distributions of the statistical parameters on colours and highlights under different illumination conditions. It was found that the relations between the statistical parameters do not depend on the illumination conditions but do depend on the noisiness of the image. It also turned out that high kurtosis may be caused by colour saturation. In addition, a texture merging algorithm based on the statistical spectral model was developed; it achieved good results when the dependences between the statistical parameters held. In the second part, various spectral images were studied using independent component analysis (ICA). The following ICA algorithms were examined: JADE, fixed-point ICA and moment-based ICA. The studies emphasized the quality of the separation. The best separation was achieved with the JADE algorithm, although the differences between the algorithms were not significant. The algorithm divided the image into two independent components, either a highlighted and a non-highlighted component or a chromatic and an achromatic one. Finally, the relation of kurtosis to image properties such as highlights and colour saturation is discussed. The last part of the thesis proposes possible topics for further research.
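A minimal sketch of the per-band statistical parameters such a model typically rests on (mean, variance, skewness, kurtosis); the (rows, cols, bands) layout of the spectral image cube is an assumption.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def band_statistics(cube):
    """Per-band statistics of a spectral image cube of shape (rows, cols, bands).
    High kurtosis in a band can flag highlights or saturation, as discussed above."""
    flat = cube.reshape(-1, cube.shape[-1])       # pixels x bands
    return {
        "mean":     flat.mean(axis=0),
        "variance": flat.var(axis=0),
        "skewness": skew(flat, axis=0),
        "kurtosis": kurtosis(flat, axis=0),       # excess kurtosis (0 for Gaussian)
    }
```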
Abstract:
The study examined how scenario analysis can be used in the study of a new technology. It was found that the suitability of scenario analysis is most affected by the level of technological change and the nature of the available information. The scenario method is well suited to the study of new technologies, particularly in the case of radical innovations. The reason is the great uncertainty, complexity and shift in the prevailing paradigm associated with them, which render many other futures research methods unusable in such a situation. The empirical part of the thesis studied the future of grid computing technology by means of scenario analysis. Grids were seen as a potentially disruptive technology, a radical innovation that may shift computing from today's product-based purchasing of computing capacity towards a service-based model. This would have a major impact on the whole current ICT industry, especially through the exploitation of utility computing. The study examined developments up to 2010. Based on theory and existing knowledge, and relying strongly on expert knowledge, four possible environmental scenarios for grid computing were constructed. The scenarios showed that the commercial success of the technology still faces many challenges. In particular, trust and the creation of added value emerged as the most important factors driving the future of grid computing.
Abstract:
This paper proposes a very simple method for increasing the algorithm speed when separating sources from post-nonlinear (PNL) mixtures or inverting Wiener systems. The method is based on a pertinent initialization of the inverse system, whose computational cost is very low. The nonlinear part is roughly approximated by pushing the observations to be Gaussian; this provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The linear part is initialized so that the outputs are decorrelated. Experiments show the impressive speed improvement.
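A minimal sketch of the initialization idea described: roughly invert the unknown componentwise nonlinearity by mapping each observed channel to a Gaussian through its empirical ranks, then initialize the linear part so that the outputs are decorrelated (whitening). Function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    """Map a 1-D signal to an approximately Gaussian one via its empirical ranks,
    i.e. a rough inverse of the unknown componentwise nonlinearity."""
    ranks = np.argsort(np.argsort(x)) + 1
    u = ranks / (len(x) + 1.0)          # uniform scores in (0, 1)
    return norm.ppf(u)                  # standard normal quantiles

def decorrelating_init(Z):
    """Whitening matrix from the covariance of the gaussianized observations,
    used as a cheap initialization of the linear unmixing stage."""
    C = np.cov(Z, rowvar=False)
    d, E = np.linalg.eigh(C)
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T

# X_obs: (samples, channels) PNL mixtures
# Z = np.column_stack([gaussianize(X_obs[:, i]) for i in range(X_obs.shape[1])])
# W0 = decorrelating_init(Z)            # initial linear separator: Y0 = Z @ W0.T
```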