27 results for alternative methods
Abstract:
The goals of the study were to describe patients’ perceptions of their care and their quality of life after experiencing seclusion/restraint, and to identify methodological challenges in studies conducted from the perspective of coerced patients. The study was carried out in three phases between September 2008 and April 2012. In the first phase, the instrument Secluded/Restrained Patients’ Perception of their Treatment (SR-PPT) was developed and validated in Japan in cooperation with a Finnish research group (n = 56). Additional data were then collected over one year from secluded/restrained patients using the instrument (n = 90). In the second phase, data were collected during the discharge process (n = 264). In the third phase, data were collected from electronic databases, and methodological and ethical issues were reviewed (n = 32) using a systematic review method. Patients perceived that co-operation with the staff was poor: their opinions were not taken into account, and treatment targets and treatment methods were seen differently by patients and staff. Patients also felt that their concerns were not understood well enough, although they did feel that they received the nurses’ time. Seclusion/restraint in particular was considered unnecessary. Patients felt that they benefited from the seclusion in treating their problems somewhat more than they needed it, even if the benefit was seen as minor. Patients treated on forensic wards rated their treatment and care significantly lower than those on general units. During hospitalization, however, secluded/restrained patients rated their quality of life better than non-secluded/restrained patients did. No conclusion is drawn that the better quality-of-life rating is attributable to the seclusion/restraint, because the treatment period after the seclusion was long and because of many other factors, such as rehabilitation, medication, diagnostic differences, and adaptation. According to the systematic mixed-studies review, variation between study designs was found to be a methodological challenge, which makes comparison of results more difficult. A research-ethical weakness was identified in the descriptions of the ethical review process (44 %) and of informed consent (32 %). It can be concluded that giving patients in psychiatric hospital care a voice as equal experts requires special attention in clinical nursing, decision-making and service planning. Patients and their family members should be consulted when planning preventive and alternative methods for seclusion and restraint. The study supports the view that in ethical decision-making situations, account should be taken not only of medical indications but also of the patients’ preferences, the effect of treatment on quality of life, and other contextual factors. The connection between treatment decisions and a patient’s quality of life should be evaluated more systematically in practice. Changing the treatment culture towards patient involvement will support daily nursing practice and service planning, taking into account improvements in patients’ quality of life.
Abstract:
The aim of the study was to harmonize the differing regional collection practices in the South Karelia (Etelä-Karjala) region. Regional collection refers to the collection of waste at points to which households not covered by property-specific collection can bring their source-separated dry waste, i.e. mixed waste. A further aim was to obtain information on the climate-change and cost impacts of different dry-waste management alternatives, and to determine how environmental aspects can be taken into account in transport tendering. Data were gathered from the internet, from theses and scientific articles, and from company representatives. Greenhouse gas emissions were calculated with the GaBi 6.0 life cycle assessment software. Based on the study, regional collection points should be located along routes that residents use at least once a week and that are also optimal from the transport contractor’s point of view. Placing regional collection points in densely built-up areas was not considered advisable. Deep collection containers were regarded as the recommended container type for regional collection points, since they can be emptied with the same collection equipment as property-specific waste bins when the vehicle is equipped with a boom crane. It was also considered advisable to reduce the winter emptying frequency of the containers where the emptying frequency is currently constant throughout the year, since most users of regional collection points are holiday residents; less frequent emptying would make cost savings possible. The study calculated the life cycle greenhouse gas emissions of dry waste from collection to final disposal and to energy recovery. The waste incineration plants in Riihimäki, Kotka and Leppävirta (under planning) were chosen as the energy recovery destinations. Based on the results, energy recovery of dry waste was clearly a better option than final disposal. The contribution of collection and transport emissions was small, so the transport distance to the incineration plant does not play a decisive role in the total greenhouse gas emissions. The composition of the dry waste, the annual efficiencies of the incineration plants and the fuels they replace have a greater influence than distance. For the future, it is recommended to investigate alternative treatment options for the mixed plastic fraction contained in dry waste, since its incineration causes a significant share (about 74 %) of the greenhouse gas emissions of dry waste incineration. For the upcoming transport tenders, collection and transport emissions were examined in more detail. Based on the results, collection and transport emissions can be reduced considerably (by 46–74 %) by switching from diesel to biofuels; the results depend significantly, however, on the raw materials from which the biofuels are produced. Dry waste collection emissions can also be reduced by updating the network of regional collection points. The study also examined costs from the renewal or repair of the collection point containers through to the final disposal or energy recovery of the dry waste. The most significant costs arose from the final disposal, energy recovery and collection of the dry waste; from the cost perspective, collection thus played a greater role. At the end of the work, guidance was given to help waste management companies procure waste transport services with environmental aspects taken into account. Often the clearest way to take environmental aspects into account in transport tendering is to set sufficiently strict mandatory requirements, after which the lowest-priced offer can be selected.
When procuring transport services, at least energy consumption and carbon dioxide, nitrogen oxide, hydrocarbon and particulate emissions should be taken into account. Legislation does not prescribe minimum levels; instead, the market situation should be surveyed when preparing the procurement so that the requirements can be set at the right level. It is also worthwhile to inform the market about future needs and plans. It is recommended that large procurement packages be divided into smaller lots so that small and medium-sized companies can also take part in the tendering. Encouragement to take innovations into account in procurement has increased in the waste management sector as well. Based on the greenhouse gas emissions investigated, it was notable how large an effect the choice of incineration plant had on the emissions. It is therefore essential to consider environmental aspects also when selecting the energy recovery destination.
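A rough illustrative calculation (assumed placeholder fuel use and emission factors, not the GaBi 6.0 results of the study) of why switching collection vehicles from diesel to biofuel lowers collection and transport emissions by an amount that depends strongly on the biofuel feedstock:

```python
# Illustrative sketch only: collection/transport GHG from fuel consumption and
# well-to-wheel (WTW) emission factors. All numbers below are assumed
# placeholder values, not figures from the study.

FUEL_L_PER_100KM = 45.0          # assumed fuel use of a collection vehicle
ROUTE_KM_PER_YEAR = 30_000.0     # assumed annual driving on collection routes

WTW_FACTORS_KG_CO2E_PER_L = {    # assumed emission factors per litre of fuel
    "diesel": 3.2,
    "biofuel_waste_based": 0.9,  # e.g. produced from waste/residue feedstocks
    "biofuel_crop_based": 1.8,   # crop-based feedstocks typically score worse
}

def annual_emissions(fuel: str) -> float:
    """Annual collection/transport GHG emissions in kg CO2e for one vehicle."""
    litres = FUEL_L_PER_100KM / 100.0 * ROUTE_KM_PER_YEAR
    return litres * WTW_FACTORS_KG_CO2E_PER_L[fuel]

baseline = annual_emissions("diesel")
for fuel in ("biofuel_waste_based", "biofuel_crop_based"):
    reduction = 1.0 - annual_emissions(fuel) / baseline
    print(f"{fuel}: {reduction:.0%} lower than diesel")
```

With such placeholder factors the reduction spans a wide range depending on the feedstock, which is consistent with the 46–74 % interval reported above.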
Abstract:
Food production accounts for a significant share of global environmental impacts, including global warming, freshwater use, land use and the consumption of some non-renewable resources such as phosphorus fertilizers. Because of unsustainable food production, the world is heading towards several crises: food and freshwater crises, as well as shortages of land area and phosphorus fertilizer, are among the many challenges to overcome in the near future. This thesis presents the production volumes, environmental impacts and uses of the major protein sources. It then introduces ways of producing biomass for food use that are more sustainable than conventional production: a photobioreactor process and a syngas-based bioreactor process. The energy consumption and major inputs of these processes are reviewed, and their environmental impacts are estimated. These estimates are then compared to the impacts of conventional protein production. The outcome of the research is that the alternative methods can be more sustainable solutions for food production than conventional production; however, more research is needed to verify the exact impacts. The photobioreactor is a more sustainable process than the syngas-based bioreactor process, but it is more location dependent and uses more land area than the syngas-based process. In addition, the technology behind the syngas-based application is still developing and may become more efficient in the future.
Abstract:
The drug discovery process is facing new challenges in the evaluation of lead compounds as the number of newly synthesized compounds increases. The potency of test compounds is most frequently assayed through the binding of the test compound to the target molecule or receptor, or by measuring functional secondary effects caused by the test compound in target model cells, tissues or organisms. Modern homogeneous high-throughput screening (HTS) assays for purified estrogen receptors (ER) utilize various luminescence-based detection methods. Fluorescence polarization (FP) is a standard method for the ER ligand binding assay. It was used to demonstrate the performance of two-photon excitation of fluorescence (TPFE) versus the conventional one-photon excitation method. As a result, the TPFE method showed improved dynamics, was found to be comparable with the conventional method, and also held potential for efficient miniaturization. Other luminescence-based ER assays utilize energy transfer from a long-lifetime luminescent label, e.g. lanthanide chelates (Eu, Tb), to a prompt luminescent label, the signal being read in a time-resolved mode. As an alternative to this method, a new single-label (Eu) time-resolved detection method was developed, based on quenching of the label by a soluble quencher molecule when the label is displaced from the receptor into the solution phase by an unlabeled competing ligand. The new method was compared with the standard FP method; it was shown to yield comparable results and to have a significantly higher signal-to-background ratio than FP. Cell-based functional assays that determine the extent of cell surface adhesion molecule (CAM) expression, combined with microscopy analysis of the target molecules, would provide improved information content compared to an expression-level assay alone. In this work, an immune response was simulated by exposing endothelial cells to cytokine stimulation, and the resulting increase in adhesion molecule expression was analyzed on fixed cells by immunocytochemistry, utilizing specific long-lifetime luminophore-labeled antibodies against the chosen adhesion molecules. The results showed that the method could be used in a multi-parametric assay of the protein expression levels of several CAMs simultaneously, combined with analysis of the cellular localization of the chosen adhesion molecules through time-resolved luminescence microscopy.
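For reference, fluorescence polarization in such ligand-binding assays is conventionally quantified with the textbook definitions below (standard formulas, not specific to this thesis), where I_parallel and I_perpendicular are the emission intensities measured parallel and perpendicular to the polarized excitation; a small free ligand tumbles quickly and depolarizes the emission (low P), whereas a receptor-bound ligand tumbles slowly, so P increases:

```latex
P = \frac{I_{\parallel} - I_{\perp}}{I_{\parallel} + I_{\perp}},
\qquad
r = \frac{I_{\parallel} - I_{\perp}}{I_{\parallel} + 2\,I_{\perp}}
```

Here P is the polarization and r the corresponding anisotropy; both increase with the fraction of labeled ligand bound to the receptor.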
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
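A minimal sketch of the pairwise least-squares idea behind RankRLS, restricted to the linear case and synthetic data; the regularization value, the data, and the test protocol are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Linear pairwise least-squares ranking in the spirit of RankRLS: minimize
# sum_{i<j} ((s_i - s_j) - (y_i - y_j))^2 + lam * ||w||^2, where s = X @ w.
# The pairwise sum of squared differences can be written with the matrix
# L = n*I - 1*1^T, giving a closed-form regularized least-squares solve.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)   # graded relevance scores (synthetic)

lam = 1.0                                   # assumed regularization parameter
L = n * np.eye(n) - np.ones((n, n))         # s^T L s = sum_{i<j} (s_i - s_j)^2
w = np.linalg.solve(X.T @ L @ X + lam * np.eye(d), X.T @ L @ y)

# Fraction of correctly ordered test pairs (pairwise ranking accuracy)
X_test = rng.normal(size=(100, d))
s_true, s_pred = X_test @ w_true, X_test @ w
i, j = np.triu_indices(len(s_true), k=1)
acc = np.mean(np.sign(s_true[i] - s_true[j]) == np.sign(s_pred[i] - s_pred[j]))
print(f"pairwise accuracy: {acc:.3f}")
```

The kernelized version and the computational shortcuts for cross-validation and model selection developed in the thesis go well beyond this toy example.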
Abstract:
Frequency converters are widely used in industry to enable better controllability and efficiency of variable-speed AC motor drives. Despite these advantages, certain challenges concerning the inverter and motor interfacing have been present for decades. As insulated gate bipolar transistors entered the market, the inverter output voltage transition rate increased significantly compared with their predecessors. Inverters operate based on pulse width modulation of the output voltage, and the steep voltage edge fed by the inverter produces a motor terminal overvoltage. The overvoltage causes extra stress to the motor insulation, which may lead to a premature motor failure. The overvoltage is not generated by the inverter alone, but by the combined effect of the motor cable length and the impedance mismatch between the cable and the motor. Many solutions have been shown to limit the overvoltage, and the mainstream products focus on passive filters. This doctoral thesis studies an alternative methodology for motor overvoltage reduction. The focus is on minimizing the passive filter dimensions, both physical and electrical, or better yet, on operating without any filter at all. This is achieved by additional inverter control and modulation. The studied methods are implemented on different inverter topologies, varying in nominal voltage and current. For two-level inverters, the studied method is termed active du/dt. It consists of a small output LC filter, which is controlled by an independent modulator; the overvoltage is limited by a reduced voltage transition rate. For multilevel inverters, an overvoltage mitigation method operating without a passive filter, called edge modulation, is implemented. The method uses the capability of the inverter to produce two switching operations in the same direction to cancel the oscillating voltages of opposite phases. For parallel inverters, two methods are studied. Both are intended for two-level inverters, but the first uses individual motor cables from each inverter while the other topology applies output inductors. The overvoltage is reduced by interleaving the switching operations to produce a similar oscillation accumulation as with the edge modulation. The implementation of these methods is discussed in detail, and the necessary modifications to the control system of the inverter are presented. Each method is experimentally verified by operating industrial frequency converters with the modified control. All the methods are found feasible, and they provide sufficient overvoltage protection. The limitations and challenges brought about by the methods are discussed.
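A back-of-the-envelope sketch, using assumed textbook transmission-line values rather than figures from the thesis, of how the cable/motor impedance mismatch and the edge rise time determine the worst-case terminal overvoltage and the critical cable length above which it fully develops:

```python
# Illustrative sketch only: all values below are assumed, not from the thesis.

V_DC = 560.0        # DC-link voltage of a 400 V drive, assumed [V]
Z_CABLE = 80.0      # cable surge impedance, assumed [ohm]
Z_MOTOR = 1500.0    # motor surge impedance, assumed [ohm]
T_RISE = 0.2e-6     # inverter output voltage rise time, assumed [s]
V_PROP = 150e6      # pulse propagation velocity in the cable, assumed [m/s]

gamma = (Z_MOTOR - Z_CABLE) / (Z_MOTOR + Z_CABLE)   # reflection coefficient
v_peak = (1.0 + gamma) * V_DC                       # worst-case terminal voltage
l_crit = V_PROP * T_RISE / 2.0                      # cable length above which the
                                                    # full reflection can develop

print(f"reflection coefficient: {gamma:.2f}")
print(f"worst-case terminal voltage: {v_peak:.0f} V ({v_peak / V_DC:.2f} x Vdc)")
print(f"critical cable length: {l_crit:.0f} m")
```

Because the motor surge impedance is typically much higher than the cable surge impedance, the reflection coefficient approaches one and the terminal voltage approaches twice the DC-link voltage; slowing the edge (as in active du/dt) or shaping the switching instants (as in edge modulation) attacks exactly this mechanism.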
Abstract:
Knowledge of the behaviour of cellulose, hemicelluloses, and lignin during wood and pulp processing is essential for understanding and controlling the processes. Determination of the monosaccharide composition gives information about the structural polysaccharide composition of wood material and helps when determining the quality of fibrous products. In addition, monitoring of the acidic degradation products gives information on the extent of degradation of lignin and polysaccharides. This work describes two capillary electrophoretic methods, developed for the analysis of monosaccharides and for the determination of aliphatic carboxylic acids in alkaline oxidation solutions of lignin and wood. Capillary electrophoresis (CE), in its many variants, is an alternative separation technique to chromatographic methods. In capillary zone electrophoresis (CZE) the fused silica capillary is filled with an electrolyte solution, and an applied voltage generates a field across the capillary. The movement of the ions under the electric field depends on the charge and hydrodynamic radius of the ions. Carbohydrates contain hydroxyl groups that are ionised only under strongly alkaline conditions. After ionisation, the structures are suitable for electrophoretic analysis and identification through either indirect UV detection or electrochemical detection. The current work presents a new capillary zone electrophoretic method, relying on an in-capillary reaction and direct UV detection at a wavelength of 270 nm. The method has been used for the simultaneous separation of neutral carbohydrates, including mono- and disaccharides and sugar alcohols. The in-capillary reaction produces negatively charged and UV-absorbing compounds. The optimised method was applied to real samples; the methodology is fast since no sample preparation other than dilution is required. A new method for aliphatic carboxylic acids in highly alkaline process liquids was also developed. The goal was to develop a method for the simultaneous analysis of the dicarboxylic acids, hydroxy acids and volatile acids that are oxidation and degradation products of lignin and wood polysaccharides. The CZE method was applied to three process cases. First, the fate of lignin under alkaline oxidation conditions was monitored by determining the level of carboxylic acids in process solutions. In the second application, the degradation of spruce wood by alkaline and catalysed alkaline oxidation was compared by determining carboxylic acids in the process solutions. Finally, the effectiveness of membrane filtration and preparative liquid chromatography in the enrichment of hydroxy acids from black liquor was evaluated by analysing the effluents with capillary electrophoresis.
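As background for the statement that ion movement depends on charge and hydrodynamic radius, the standard textbook expressions for electrophoretic mobility and the observed migration velocity in CZE are (general relations, not derived in this work):

```latex
\mu_{ep} = \frac{q}{6 \pi \eta r},
\qquad
v = \left(\mu_{ep} + \mu_{eof}\right) E
```

where q is the ion charge, \eta the electrolyte viscosity, r the hydrodynamic radius of the ion, \mu_{eof} the electroosmotic mobility, and E the applied field strength; small, highly charged ions therefore migrate fastest.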
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC), or one of its variations, which uses cost drivers to allocate the costs of activities to cost objects. In order to allocate the costs accurately and reliably, the selection of appropriate cost drivers is essential for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as a part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while at the same time increasing measurement costs, complexity and the effort of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
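A minimal sketch of the kind of regression-based driver validation described above: how well do candidate cost drivers explain period transportation costs? The period data below are synthetic placeholders, not the case company's figures:

```python
import numpy as np

rng = np.random.default_rng(1)
periods = 36
drops = rng.integers(800, 1500, size=periods).astype(float)   # delivery drops
weight = rng.uniform(50_000, 120_000, size=periods)            # delivery weight, kg
costs = 12.0 * drops + 0.05 * weight + rng.normal(0, 2000, periods)  # EUR, synthetic

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Ordinary least squares fit with an intercept; returns R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

print("drops only:     R^2 =", round(r_squared(drops[:, None], costs), 3))
print("weight only:    R^2 =", round(r_squared(weight[:, None], costs), 3))
print("drops + weight: R^2 =",
      round(r_squared(np.column_stack([drops, weight]), costs), 3))
```

Comparing the explanatory power of single drivers against driver combinations in this way mirrors the simple and multiple regression analyses used in the study, which are then weighed against measurement cost and practicality.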
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as the optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with the assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. Applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works the better the higher the column efficiency and the lower the purity constraints are.
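For reference, the competitive Langmuir adsorption isotherm mentioned above is commonly written for a binary mixture as (standard form; the coefficients a_i and b_i are the usual isotherm parameters, not values from the thesis):

```latex
q_i = \frac{a_i\, c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2
```

where c_i and q_i are the fluid-phase and adsorbed-phase concentrations of component i; the shared denominator expresses the competition of the two components for the adsorption sites, which is what makes the analytical design equations for binary systems non-trivial.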
Abstract:
SUMMARY: Organizational creativity – hegemonic and alternative discourses. Over the course of recent developments in the societal and business environment, the concept of creativity has been brought into new arenas. The rise of ‘creative industries’ and the idea of creativity as a form of capital have attracted the interest of business and management professionals as well as academics. As the notion of creativity has been adopted in the organization studies literature, the concept of organizational creativity has been introduced to refer to creativity that takes place in an organizational context. This doctoral thesis focuses on organizational creativity, and its purpose is to explore and problematize the hegemonic organizational creativity discourse and to provide alternative viewpoints for theorizing about creativity in organizations. Taking a discourse theory approach, this thesis first provides an outline of the currently predominant, i.e. hegemonic, discourse on organizational creativity, which is explored with regard to themes, perspectives, methods and paradigms. Second, this thesis consists of five studies that act as illustrations of certain alternative viewpoints. Through these exemplary studies, this thesis sheds light on the limitations and taken-for-granted aspects of the hegemonic discourse and discusses what these alternative viewpoints could offer for understanding and theorizing organizational creativity. This study leans on the assumption that the development of organizational creativity knowledge and the related discourse is not inevitable or progressive but rather contingent. The organizational creativity discourse has developed in a certain direction, meaning that some themes, perspectives, and methods, as well as assumptions, values, and objectives, have gained a hegemonic position over others, and are therefore often taken for granted and considered valid and relevant. The hegemonization of certain aspects, however, contributes to the marginalization of others. The thesis concludes that the hegemonic discourse on organizational creativity is based on an extensive coverage of certain themes and perspectives, such as those focusing on individual cognitive processes, motivation, or organizational climate and their relation to creativity, to name a few. The limited focus on some themes and the confinement to certain prevalent perspectives, however, result in the marginalization of other themes and perspectives. The negative, often unintended, consequences, implications, and side effects of creativity, the factors that might hinder or prevent creativity, and a deeper inquiry into the ontology and epistemology of creativity have attracted relatively marginal interest. The material embeddedness of organizational creativity, in other words the physical organizational environment as well as the human body and its non-cognitive resources, has largely been overlooked in the hegemonic discourse, although there are studies in this area that give reason to believe they might prove relevant for the understanding of creativity. The hegemonic discourse is based on an individual-centered understanding of creativity, which overattributes creativity to the individual and his/her cognitive capabilities, while neglecting how, for instance, the physical environment, artifacts, social dynamics and interactions condition organizational creativity.
Due to historical reasons, quantitative as well as qualitative yet functionally oriented studies have predominated in the organizational creativity discourse, although studies falling into the interpretationist paradigm have gradually become more popular. The two radical paradigms, as well as the methodological and analytical approaches typical of radical research, can be considered to hold a marginal position in the field of organizational creativity. The hegemonic organizational creativity discourse has provided extensive findings related to many aspects of organizational creativity, although the conceptualizations and understandings of organizational creativity in the hegemonic discourse are also in many respects limited and one-sided. The hegemonic discourse is based on the assumption that creativity is desirable, good, necessary, or even obligatory, and should be encouraged and nourished. The conceptualizations of creativity favor the kind of creativity that is useful, valuable and can be harnessed for productivity. The current conceptualization is limited to the type of creativity that is acceptable and fits the managerial ideology, and washes out any risky, seemingly useless, or negative aspects of creativity. It also limits the possible meanings and representations that ‘creativity’ has in the respective discourse, excluding many meanings of creativity encountered in other discourses. The excessive focus on creativity that is good, positive, productive and fits the managerial agenda, while ignoring other forms and aspects of creativity, contributes to the dilution of the notion. Practices aimed at encouraging this kind of creativity may actually entail a risk of fostering moderate alterations rather than more radical novelty, as well as management and organizational practices that limit creative endeavors rather than increase their likelihood. The thesis concludes that, although not often given the space and attention they deserve, there are alternative conceptualizations and understandings of organizational creativity that embrace a broader notion of creativity. The inability to accommodate the ‘other’ understandings and viewpoints within the organizational creativity discourse runs a risk of misrepresenting the complex and many-sided phenomenon of creativity in the organizational context. Keywords: organizational creativity, creativity, organization studies, discourse theory, hegemony
Abstract:
Pairs trading is an algorithmic trading strategy based on the historical co-movement of two separate assets, with trades executed on the basis of the degree of relative mispricing. The purpose of this study is to explore a new, alternative copula-based method for pairs trading. The objective is to find out whether the copula method generates more trading opportunities and higher profits than the more traditional distance and cointegration methods applied extensively in previous empirical studies. The methods are compared by selecting the top five pairs from stocks of large and medium-sized companies in the Finnish stock market. The research period covers the years 2006–2015. All the methods are shown to be profitable, and the Finnish stock market is found suitable for pairs trading. However, the copula method does not generate more trading opportunities or higher profits than the other methods. It seems that the limitations of the more traditional methods are not too restrictive for this particular sample data.
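A minimal sketch of the traditional distance method used as a baseline above: pairs are ranked by the sum of squared differences (SSD) between normalized prices over a formation period, and positions are opened when the spread deviates by two formation-period standard deviations. The price data and the two-sigma rule are common illustrative conventions, not results of the study:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_days, n_stocks = 500, 8
prices = 100 * np.exp(np.cumsum(0.01 * rng.normal(size=(n_days, n_stocks)), axis=0))
formation, trading = prices[:250], prices[250:]          # formation / trading split

norm = formation / formation[0]                          # normalize to first price
ssd = {(i, j): np.sum((norm[:, i] - norm[:, j]) ** 2)
       for i, j in combinations(range(n_stocks), 2)}
top_pairs = sorted(ssd, key=ssd.get)[:5]                 # five closest pairs

i, j = top_pairs[0]
spread = formation[:, i] / formation[0, i] - formation[:, j] / formation[0, j]
mu, sigma = spread.mean(), spread.std()

live = trading[:, i] / formation[0, i] - trading[:, j] / formation[0, j]
signal = np.where(live > mu + 2 * sigma, -1,             # short i / long j
         np.where(live < mu - 2 * sigma, +1, 0))         # long i / short j
print("top pairs by SSD:", top_pairs)
print("days with an open position:", int(np.count_nonzero(signal)))
```

The cointegration and copula methods replace the SSD ranking and the two-sigma spread rule with, respectively, cointegration tests on price series and conditional probabilities derived from a fitted copula.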
Abstract:
The future of paying in the age of digitalization is a topic that encompasses varied visions. This master’s thesis explores images of the future of paying in the Single Euro Payments Area (SEPA) up to 2020 and 2025 through the views of experts specialized in payments. The study was commissioned by a credit management company in order to obtain more detailed information about the future of paying. Specifically, this thesis investigates what could be the most used payment methods in the future, what items could work as a medium of exchange in 2020 and how they will evolve towards the year 2025. Changing consumer behavior, trends connected to payment methods, and the security and privacy issues of new cashless payment methods were also part of this study. In the empirical part of the study, the experts’ ideas about probable and preferable future images of paying were investigated through a two-round Disaggregative Delphi method. The questionnaire included numeric statements and open questions. Three alternative future images were created with the help of cluster analysis: “Unsurprising Future”, “Technology Driven Future” and “The Age of the Customer”. The plausible images had similarities and differences, which were reflected against previous studies in the literature review. The study’s findings were based on the similarities of the future images and on the answers to the open questions received from the questionnaire. The main conclusion of the study was that the development of technology will both unify and diversify SEPA; the trend in 2020 seems to be towards more cashless payment methods, but their usage depends on countries’ financial possibilities and customer preferences. Mobile payments, cards and cash will be the main payment methods, but the banks will face competitors from outside the financial sector. Wearable payment methods and NFC technology are seen as widely growing trends, but subcutaneous payment devices will likely keep their niche position until 2025. In the meantime, security and privacy issues are expected to increase because of identity thefts and various frauds. Simultaneously, privacy will lose its meaning to younger consumers, who are used to sharing their transaction and personal data with third parties in order to get access to attractive services. Easier access to consumers’ transaction data will probably open the door for hackers and cause new risks in payment processes. There exist many roads to the future, and this study was not an attempt to give any complete answers about it, even if some plausible assumptions about the future’s course were provided.
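A minimal sketch of how cluster analysis can group Delphi questionnaire responses into alternative future images; the response matrix is synthetic and the choice of k-means is an illustrative assumption (the abstract does not specify the clustering algorithm used):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic experts x numeric-statement matrix on a 1-7 scale; three clusters
# mirror the three future images, but nothing else here is from the study.
rng = np.random.default_rng(3)
n_experts, n_statements = 40, 12
responses = rng.integers(1, 8, size=(n_experts, n_statements)).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
image_of = kmeans.fit_predict(responses)

for k in range(3):
    members = responses[image_of == k]
    print(f"future image {k}: {len(members)} experts, "
          f"mean ratings per statement: {np.round(members.mean(axis=0), 1)}")
```

Each cluster's mean ratings can then be read as the numeric profile of one future image, to be interpreted together with the open-question answers.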