46 results for Reading strategies and techniques
Abstract:
Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in the prevention and provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the function of the test there was special interest in analyzing the explanatory power of certain contextual factors related to the home and school. Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high-, good-, average- and low-performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test halfway through grade two were also used for validation. The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized textbooks for reading in grade one. Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good. The items also discriminated well. Principal component analysis revealed one powerful latent dimension (an initial literacy factor), accounting for 93% of the variance. This implies that it could be possible to use any set of the subtests of the group test for screening and prediction. The K-Means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two had already been identified by the group test in grade one. The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the school factors studied, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having been in nursery school was also of importance.
Based on the findings of the study, a short version of the group test was created. It is suggested for use in grade-one screening processes aimed at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research, as well as for actions to improve the literacy skills of Tanzanian children, are presented.
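The analysis pipeline described above (principal component analysis over the subtest scores followed by K-Means clustering into four groups) can be illustrated with a minimal sketch. This is not the thesis's own analysis code: the subtest column names, the simulated data and the library choices (scikit-learn, pandas) are assumptions for illustration only.

```python
# Hedged sketch: PCA plus K-Means over group-test subtest scores.
# Column names and data are illustrative assumptions, not the study's data.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
subtests = ["phonological_awareness", "letter_knowledge", "word_knowledge", "spelling"]
scores = pd.DataFrame(rng.normal(size=(337, len(subtests))), columns=subtests)

X = StandardScaler().fit_transform(scores)

# One dominant latent dimension ("initial literacy factor") would appear as a
# first principal component with a very large share of explained variance.
pca = PCA().fit(X)
print(pca.explained_variance_ratio_)

# Four clusters, as in the K-Means solution reported in the abstract.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
scores["cluster"] = labels
print(scores.groupby("cluster").mean())
```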
Abstract:
We have investigated Russian children's reading acquisition during an intermediate period in their development: after literacy onset, but before they have acquired well-developed decoding skills. The results of our study suggest that Russian first graders rely primarily on phonemes and syllables as reading grain-size units. Phonemic awareness seems to have reached the metalinguistic level more rapidly than syllabic awareness after the onset of reading instruction, a reversal typical of the initial stages of formal reading instruction, which creates an external demand for phonemic awareness. Another reason might be the inherent instability of syllabic boundaries in Russian. We have shown that body-coda is a more natural representation of subsyllabic structure in Russian than onset-rime. We also found that Russian children displayed variability in syllable onset and offset decisions, which can be attributed to the lack of congruence between syllabic and morphemic word division in Russian. We suggest that the fuzziness of syllable boundary decisions is a sign of the transitional nature of this stage in reading development and that it indicates progress towards an awareness of morphologically determined closed syllables. Our study also showed that orthographic complexity exerts an influence on reading in Russian from the very start of reading acquisition. In addition, we found that Russian first graders experience fluency difficulties in reading orthographically simple words and nonwords of two or more syllables. The transition from monosyllabic to bisyllabic lexical items constitutes a certain threshold, for which the syllabic structure seemed to make no difference. When we compared the outcomes of the Russian children with those produced by speakers of other languages, we discovered that in tasks which could be performed with the help of alphabetic recoding, Russian children's accuracy was comparable to that of children learning to read in relatively shallow orthographies. In tasks where this approach works only partially, Russian children demonstrated accuracy results similar to those in deeper orthographies. This pattern of moderate results in accuracy and excellent performance in terms of reaction times indicates that children apply phonological recoding as their dominant strategy across various reading tasks and are only beginning to develop suitable multiple strategies for dealing with orthographically complex material. The development of these strategies is not completed during Grade 1, and the shift towards diversification of strategies apparently continues in Grade 2.
Abstract:
The general aim of the thesis was to study university students' learning from the perspective of regulation of learning and text processing. The data were collected from the two academic disciplines of medical and teacher education, which share the features of highly scheduled study, a multidisciplinary character, a complex relationship between theory and practice and a professional nature. Contemporary information society poses new challenges for learning, as it is not possible to learn all the information needed in a profession during a study programme. Therefore, it is increasingly important to learn how to think and learn independently, how to recognise gaps in and update one's knowledge and how to deal with the huge amount of constantly changing information. In other words, it is critical to regulate one's learning and to process text effectively. The thesis comprises five sub-studies that employed cross-sectional, longitudinal and experimental designs and multiple methods, from surveys to eye tracking. Study I examined the connections between students' study orientations and the ways they regulate their learning. In total, 410 second-, fourth- and sixth-year medical students from two Finnish medical schools participated in the study by completing a questionnaire measuring both general study orientations and regulation strategies. The students were generally deeply oriented towards their studies. However, they regulated their studying externally. Several interesting and theoretically reasonable connections between the variables were found. For instance, self-regulation was positively correlated with deep orientation and achievement orientation and was negatively correlated with non-commitment. However, external regulation was likewise positively correlated with deep orientation and achievement orientation, but also with surface orientation and systematic orientation. It is argued that external regulation might function as an effective coping strategy in the cognitively loaded medical curriculum. Study II focused on medical students' regulation of learning and their conceptions of the learning environment in an innovative medical course where traditional lectures were combined with problem-based learning (PBL) group work. First-year medical and dental students (N = 153) completed a questionnaire assessing their regulation strategies of learning and views about the PBL group work. The results indicated that external regulation and self-regulation of the learning content were the most typical regulation strategies among the participants. In line with previous studies, self-regulation was connected with study success. Strictly organised PBL sessions were not considered as useful as lectures, although the students' views of the teacher/tutor and the group were mainly positive. Therefore, developers of teaching methods are challenged to think of new solutions that facilitate reflection on one's learning and that improve the development of self-regulation. In Study III, a person-centred approach to studying regulation strategies was employed, in contrast to the traditional variable-centred approach used in Study I and Study II. The aim of Study III was to identify different regulation strategy profiles among medical students (N = 162) across time and to examine to what extent these profiles predict study success in preclinical studies. Four regulation strategy profiles were identified, and connections with study success were found.
Students with the lowest self-regulation and with an increasing lack of regulation performed worse than the other groups. As the person-centred approach enables us to identify students with diverse regulation patterns, it could be used in supporting student learning and in facilitating the early diagnosis of learning difficulties. In Study IV, 91 student teachers participated in a pre-test/post-test design in which they answered open-ended questions about a complex science concept both before and after reading either a traditional, expository science text or a refutational text that prompted the reader to change his or her beliefs in line with scientific beliefs about the phenomenon. The student teachers completed a questionnaire concerning their regulation and processing strategies. The results showed that the students' understanding improved after the text reading intervention and that the refutational text promoted understanding better than the traditional text. Additionally, regulation and processing strategies were found to be connected with understanding the science phenomenon. A weak trend indicated that weaker learners would benefit more from the refutational text. It seems that learners with effective learning strategies are able to pick out the relevant content regardless of the text type, whereas weaker learners might benefit from refutational parts that contrast the most typical misconceptions with scientific views. The purpose of Study V was to use eye tracking to determine how third-year medical students (n = 39) and internal medicine residents (n = 13) read and solve patient case texts. The results revealed differences between medical students and residents in processing patient case texts; compared to the students, the residents were more accurate in their diagnoses and processed the texts significantly faster and with a lower number of fixations. Different reading patterns were also found. The observed differences between medical students and residents in processing patient case texts could be used in medical education to model expert reasoning and to teach how a good medical text should be constructed. The main findings of the thesis indicate that even among very selected student populations, such as high-achieving medical students or student teachers, there seems to be a lot of variation in regulation strategies of learning and text processing. As these learning strategies are related to successful studying, students enter educational programmes with rather different chances of managing and achieving success. Further, the ways of engaging in learning seldom centre on a single strategy or approach; rather, students seem to combine several strategies to a certain degree. Sometimes it can be a matter of perspective which way of learning can be considered best; therefore, the reality of studying in higher education is often more complicated than the simplistic view of self-regulation as a good quality and external regulation as a harmful quality. The beginning of university studies may be stressful for many, as the gap between high school and university studies is huge and the strategies that were adequate during high school might not work as well in higher education. Therefore, it is important to map students' learning strategies and to encourage them to engage in using high-quality learning strategies from the beginning. Instead of separate courses on learning skills, the integration of these skills into course contents should be considered.
Furthermore, learning complex scientific phenomena could be facilitated by paying attention to high-quality learning materials and texts, and to other support from the learning environment, also at university level. Eye tracking seems to have great potential in evaluating performance and growing diagnostic expertise in text processing, although more research using texts as stimuli is needed. Both medical and teacher education programmes, and the professions themselves, are challenging in terms of their multidisciplinary nature and the increasing amount of information, and therefore require good lifelong learning skills during the study period and later in working life.
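As a rough illustration of the kind of group comparison reported in Study V (residents reading faster and with fewer fixations than students), the following sketch compares two reader groups on simulated eye-tracking measures. The group sizes follow the abstract; all values, variable names and the Welch t-test choice are illustrative assumptions, not the study's data or analysis.

```python
# Hedged sketch: comparing two reader groups on total reading time and
# number of fixations. Data are simulated, not the study's measurements.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
students = pd.DataFrame({
    "reading_time_s": rng.normal(180, 30, 39),
    "fixations": rng.normal(600, 80, 39),
})
residents = pd.DataFrame({
    "reading_time_s": rng.normal(120, 25, 13),
    "fixations": rng.normal(420, 70, 13),
})

for measure in ["reading_time_s", "fixations"]:
    t, p = stats.ttest_ind(students[measure], residents[measure], equal_var=False)
    print(f"{measure}: students {students[measure].mean():.0f} "
          f"vs residents {residents[measure].mean():.0f} (Welch t={t:.2f}, p={p:.3f})")
```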
Abstract:
The driving forces of technology and globalization continuously transform the business landscape in ways that undermine the existing strategies and innovations of organizations. The challenge for organizations is to establish conditions in which they are able to create new knowledge for innovative business ideas in interaction with other organizations and individuals. Innovation processes continuously need new external stimulation and seek new ideas, information and knowledge located more and more outside traditional organizational boundaries. In several studies, the early phases of the innovation process have been considered the most critical ones. During these phases, the innovation process can emerge or come to an end. External knowledge acquisition and utilization have been noted to be important at this stage of the innovation process, providing information about the development of future markets and the needs for new innovative business ideas. To make this possible, new methods and approaches to manage proactive knowledge creation and sharing activities are needed. In this study, knowledge creation and sharing in the early phases of the innovation process have been studied, and the understanding of knowledge management in the innovation process in an open and collaborative context advanced. Furthermore, the innovation management methods in this study are combined in a novel way to establish an open innovation process and tested in real-life cases. For these purposes, two complementary and sequentially applied group work methods - the heuristic scenario method and the idea generation process - are examined, focusing the research on supporting the open knowledge creation and sharing process. The research objective of this thesis concerns two doctrines: innovation management, including knowledge management, and futures research concerning the scenario paradigm. This thesis also applies a group decision support system (GDSS) in the idea generation process to utilize the knowledge converged during the scenario process.
Abstract:
The literature on cash management is largely normative or consists of case studies examining individual organizations and selected areas of their cash management. By contrast, few studies have examined cash management across a large set of organizations from the perspective of strategy and system choices. This exploratory study of the Finnish municipal field describes the structure, strategy and system choices that municipalities emphasized in their cash management in 2000-2002. The configurative systems model based on a contingency approach, used as the methodological framework of the study, enabled quantitative analysis of the differences in strategy and system practices across a large set of study objects. Cluster analysis of the research data yielded four groups of municipalities that differed in their strategy and system emphases, and the results showed that municipal cash management practices are very similar to the corresponding private-sector practices; cost efficiency is emphasized in public-sector cash management as well. Alongside the cost-efficiency strategy, the responding municipalities emphasized investment, debt-service and risk-management strategies as well as the structure and system choices supporting the implementation of these strategies. Smaller municipalities were also found to rely on the same strategy and system emphases as larger ones, although differences due to municipal size may appear, for example, in the practical information-management solutions of the systems. In addition, the flexibility strategy carried considerable weight as part of municipal cash management strategies. This is consistent, since unanticipated changes in cash positions require rapid decision-making. Cost-efficiency thinking, an understanding of cash management as a whole, and the selective use of new cash management techniques and financial instruments make it possible to influence the net costs of municipal financial management.
Establishing intercompany relationships: Motives and methods for successful collaborative engagement
Abstract:
This study explores the early phases of intercompany relationship building, which is a very important topic for purchasing and business development practitioners as well as for companies' upper management. There is a lot of evidence that proper engagement with markets increases a company's potential for achieving business success. Taking full advantage of the market possibilities requires, however, a holistic view of managing the related decision-making chain. Most of the literature, as well as companies' business processes, lacks this holism. Typically they observe the process from the perspective of individual stages and thus lead to discontinuity and sub-optimization. This study contains a comprehensive introduction to and evaluation of the literature related to the various steps of the decision-making process. It is studied from the holistic perspective of determining a company's vertical integration position within its demand/supply network context; translating the vertical integration objectives into feasible strategies and objectives; and operationalizing the decisions made through engagement in collaborative intercompany relationships. The empirical part of the research has been conducted in two sections. First, the phenomenon of intercompany engagement is studied using two complementary case studies. Secondly, a survey has been conducted among the purchasing and business development managers of several electronics manufacturing companies to analyze the processes, decision-making criteria and success factors of engagement for collaboration. The aim has been to identify the reasons why companies and their management act the way they do. As a combination of theoretical and empirical research, an analysis has been produced of what would be an ideal way of engaging with markets. Based on the respective findings, the study concludes by proposing a holistic framework for successful engagement. The evidence presented throughout the study demonstrates clear gaps, discontinuities and limitations in both current research and practical purchasing decision-making chains. The most significant discontinuity is the identified disconnection between the supplier selection process and related criteria and the relationship success factors.
Abstract:
The objective of this Master's thesis is to study and develop a method for predicting the schedule of a product development project as the product moves from product development to mass production. The importance of schedule prediction grows as the start of mass production (ramp-up) of a new product approaches, because strategic decisions concerning, among other things, new production lines, the ordering of materials and components, and confirmation of the start of customer deliveries must be made much earlier. The work begins with a review of concurrent engineering and performance measurement, from whose mindsets, tools and techniques the preconditions for successful schedule predictability were identified. These were the quality of the designed product and of the product development process, as well as the competencies of resources and teams. On the other hand, schedule predictability is also affected by the projects' dependencies on external suppliers and their schedules. The model for proactive measurement of product development created by Bradford L. Goldense is used as the theoretical framework, and W. Edward Deming's continuous improvement loop is applied. The thesis develops a Ramp-up Predictability concept consisting of medium-term and long-term prediction. Deployment and follow-up of the model were not included in the work. As a recommendation for further action, additional research is proposed on the mutual correlations of the metrics and their reliability, as well as on the opportunities the models offer to other business units.
Abstract:
The main objective of the thesis was to examine how project portfolio management can support an organization's strategic steering and business. In addition, key objectives were to describe the current state of project portfolio management in the case company, to reveal specific development needs, and finally to define a target state for the case company's project portfolio management. The literature review discussed the role and objectives of project portfolio management, the process used in managing the portfolio, and the methods and techniques with which the portfolio is managed. The empirical part of the work examined in depth the special characteristics of project portfolio management in the case company. Careful analysis of the results showed that the earlier literature does not sufficiently acknowledge the need for a holistic, integrated approach or the importance of communication in project portfolio management. As conclusions of the study, a new integrated project portfolio management model was created, and a target state for project portfolio management, as well as the development steps the company should take in the near future, was defined for the case company.
Abstract:
Requirements specification is an important part of software engineering. Requirements traceability is part of the requirements management process. Traceability information facilitates managing requirements throughout the entire product development project. Very often, however, requirements traceability is not implemented in software development projects. The goal of this work was to examine the importance of requirements traceability in software engineering and how traceability could be implemented in software development projects. Requirements traceability and different techniques for implementing it were studied through the literature. The current state of requirements traceability in the company was investigated by examining the existing process model and actual product development projects. As a result, justifications were presented for why traceability information should be included in software development projects, together with methods for implementing traceability information cost-effectively in projects. The work presents a strategy option and methods for implementing traceability. With small adjustments, traceability can be implemented at a lightweight level. The biggest improvement proposal for the process model is the creation of traceability matrices. With the matrices, traceability can be implemented in projects both forwards and backwards. A requirements management tool would facilitate maintaining the traceability information.
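The forward and backward traceability that the proposed matrices enable can be sketched in a few lines. The requirement, design and test-case identifiers below are purely hypothetical; this is only an illustration of the idea, not the process model proposed in the thesis.

```python
# Hedged sketch of forward and backward traceability; all IDs are hypothetical.
forward = {                      # requirement -> design/test artefacts
    "REQ-1": ["DES-4", "TC-11"],
    "REQ-2": ["DES-5", "TC-12", "TC-13"],
}

# Backward traceability is the inverted mapping: artefact -> requirements.
backward = {}
for req, artefacts in forward.items():
    for artefact in artefacts:
        backward.setdefault(artefact, []).append(req)

print(backward["TC-12"])   # -> ['REQ-2']: which requirement a test case covers
```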
Abstract:
We expose the ubiquitous interaction between an information screen and its viewers' mobile devices, highlight the communication vulnerabilities, suggest mitigation strategies and finally implement these strategies to secure the communication. The screen transparently infers the information preferences of viewers within its vicinity from their mobile devices over Bluetooth. Backend processing then retrieves up-to-date versions of the preferred information from content providers. Retrieved content, such as sporting news, weather forecasts, advertisements, stock markets and aviation schedules, is systematically displayed on the screen. To maximise users' benefit, experience and acceptance, the service is provided with no user interaction at the screen while securely upholding preference privacy and viewer anonymity. Motivated by the personal nature of mobile devices, the privacy of their contents, the confidentiality of preferences, and the vulnerabilities introduced by the screen, the service's security is fortified. Fortification relies predominantly on efficient cryptographic algorithms inspired by elliptic curve cryptosystems, together with access control and anonymity mechanisms. These mechanisms are demonstrated to attain the set objectives within reasonable performance.
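As one hedged illustration of the kind of elliptic-curve mechanism the abstract alludes to, the sketch below shows an ephemeral ECDH key agreement that a viewer's device and the screen could use to derive a shared session key over an untrusted Bluetooth link. It uses the Python cryptography package; the curve, the key-derivation parameters and the overall framing are assumptions, not the protocol actually implemented in the thesis.

```python
# Hedged sketch: ECDH key agreement between a viewer's device and the screen.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates an ephemeral key pair on an elliptic curve.
device_priv = ec.generate_private_key(ec.SECP256R1())
screen_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged over the (untrusted) Bluetooth channel.
device_pub = device_priv.public_key()
screen_pub = screen_priv.public_key()

# Both sides derive the same shared secret without transmitting it.
device_shared = device_priv.exchange(ec.ECDH(), screen_pub)
screen_shared = screen_priv.exchange(ec.ECDH(), device_pub)
assert device_shared == screen_shared

# A symmetric session key is derived for encrypting the preference exchange.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"screen-preference-channel",   # illustrative context label
).derive(device_shared)
```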
Abstract:
Speaker diarization is the process of sorting speech according to the speaker. Diarization helps to search and retrieve what a certain speaker uttered in a meeting. Applications of diarization systems extend to domains other than meetings, for example lectures, telephone, television and radio. In addition, diarization enhances the performance of several speech technologies such as speaker recognition, automatic transcription and speaker tracking. Methodologies previously used in developing diarization systems are discussed. Prior results and techniques are studied and compared. Methods such as Hidden Markov Models and Gaussian Mixture Models, which are used in speaker recognition and other speech technologies, are also used in speaker diarization. The objective of this thesis is to develop a speaker diarization system in the meeting domain. The experimental part of this work indicates that the zero-crossing rate can be used effectively to break the audio stream into segments, and that adaptive Gaussian models fit short audio segments adequately. Results show that 35 Gaussian models and an average segment length of one second are optimal values for building a diarization system for the tested data. Segments uttered by the same speaker are united through bottom-up clustering using a new approach of categorizing the mixture weights.
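A minimal sketch of the pipeline the abstract outlines (zero-crossing-rate segmentation into roughly one-second chunks, a 35-component Gaussian mixture per segment, and bottom-up clustering of the mixture weights) is given below. It is not the thesis's implementation: the frame length, the ZCR threshold, the MFCC front end (python_speech_features) and the agglomerative clustering step are assumptions made for illustration.

```python
# Hedged sketch of a ZCR-segmentation + per-segment GMM + bottom-up clustering
# pipeline; thresholds and the feature extractor are illustrative assumptions.
import numpy as np
from python_speech_features import mfcc        # assumed feature extractor
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering

def zero_crossing_rate(frame):
    """Fraction of consecutive samples that change sign."""
    return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

def segment_by_zcr(signal, sr, frame_len=0.025, target_seg=1.0, zcr_jump=0.15):
    """Cut the stream where frame-level ZCR changes sharply,
    aiming at roughly one-second segments."""
    hop = int(frame_len * sr)
    zcr = np.array([zero_crossing_rate(signal[i:i + hop])
                    for i in range(0, len(signal) - hop, hop)])
    boundaries = [0]
    for i in range(1, len(zcr)):
        long_enough = (i - boundaries[-1]) * frame_len >= target_seg
        if long_enough and abs(zcr[i] - zcr[i - 1]) > zcr_jump:
            boundaries.append(i)
    boundaries.append(len(zcr))
    return [(b * hop, e * hop) for b, e in zip(boundaries[:-1], boundaries[1:])]

def diarize(signal, sr, n_components=35, n_speakers=4):
    """Fit a GMM per segment and cluster segments by their mixture weights."""
    segments = segment_by_zcr(signal, sr)
    weights = []
    for start, end in segments:
        feats = mfcc(signal[start:end], samplerate=sr)
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag").fit(feats)
        weights.append(np.sort(gmm.weights_))   # order-invariant signature
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(weights)
    return list(zip(segments, labels))
```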
Abstract:
The dominant role of English as an international language and other globalization trends also affect Swedish-speaking Finland (Svenskfinland). These trends in turn affect the conditions for learning and teaching English as a foreign language, that is, the teaching objectives, the expected pupil and teacher roles, the appropriateness of the materials, and teachers' and pupils' initial experiences of English and English-speaking countries. This study examines the conditions for learning and professional development in the Swedish-language beginner classroom of English as a foreign language. The starting point of 351 beginners in English as a foreign language and 19 of their teachers is described and analysed. The results suggest that English is becoming a second language rather than a traditional foreign language for many young pupils. These pupils also have good opportunities to learn English outside school. This was not the situation for all pupils, however, which indicates considerable heterogeneity and even regional variation in the Finland-Swedish classroom of English as a foreign language. The teacher results suggest that some teachers have managed to tackle the conditions they face in a constructive way. Other teachers express frustration over their work situation, the curriculum, the teaching materials and other actors of importance for the school environment. The study shows that the conditions for learning and teaching English as a foreign language vary in Swedish-speaking Finland. To support the development of pupils and teachers, it is proposed that the dialogue between actors at different levels of society should be improved and systematized.
Abstract:
This study focuses on the integration of eco-innovation principles into strategy and policy at the regional level. The importance of regions as a level for integrating eco-innovative programs and activities served as the point of interest for this study. Eco-innovative activities and technologies are seen as means to meet the sustainable development objective of improving regions' quality of life. This study was conducted to gain an in-depth understanding of eco-innovation at the regional level and to identify the basic concepts that are important in integrating eco-innovation principles into regional policy. Other specific objectives of this study are to examine how eco-innovation is developed and practiced in the regions of the EU, and to analyze the main characteristic features of an eco-innovation model specifically developed in the Päijät-Häme Region in Finland. The Päijät-Häme Region is noted for its successful eco-innovation strategies and programs and is therefore taken as the case study here. Both primary data (interviews) and secondary data (publicly available documents) are utilized in this study. The study shows that eco-innovation plays an important role in regional strategy, as reviewed based on the experience of other regions in the EU. This is because of its localized nature, which makes it easier to facilitate in a regional setting. Since regional authorities and policy-makers normally focus on solving their localized environmental problems, eco-innovation principles can easily be integrated into regional strategy. The case study highlights the Päijät-Häme Region's eco-innovation strategies and projects, which are characterized by strong connections between knowledge-producing institutions. Policy instruments supporting eco-innovation (e.g. environmental technologies) are very much focused on clean technologies, hence justifying the formation of cleantech clusters and business parks in the Päijät-Häme Region. A newly conceptualized SAMPO model of eco-innovation has been developed in the Päijät-Häme Region to better capture the region's characteristics and to eventually replace the current model employed by the Päijät-Häme Regional Authority. The SAMPO model is still under construction; however, a review of its principles points to its three important spearheads: practice-based innovation, design (eco-design) and clean technology or environmental technology (environment).
Abstract:
Novel biomaterials are needed to fill the demand for tailored bone substitutes required by an ever-expanding array of surgical procedures and techniques. Wood, a natural fiber composite, modified with heat treatment to alter its composition, may provide a novel approach to the further development of hierarchically structured biomaterials. The suitability of wood as a model biomaterial, as well as the effects of heat treatment on the osteoconductivity of wood, was studied by placing untreated and heat-treated (at 220 °C, 200 °C and 140 °C for 2 h) birch implants (size 4 x 7 mm) into drill cavities in the distal femur of rabbits. The follow-up period was 4, 8 and 20 weeks in all in vivo experiments. The flexural properties of wood, as well as dimensional changes and hydroxyapatite formation on the surface of wood (untreated, 140 °C and 200 °C heat-treated wood), were tested using 3-point bending and compression tests and immersion in simulated body fluid. The effect of pre-measurement grinding and the effect of heat treatment on the surface roughness and contour of wood were tested with contact stylus and non-contact profilometry. The effects of heat treatment of wood on its interactions with biological fluids were assessed using two different test media and real human blood in liquid penetration tests. The results of the in vivo experiments showed implanted wood to be well tolerated, with no implants rejected due to foreign body reactions. Heat treatment had significant effects on the biocompatibility of wood, allowing host bone to grow into tight contact with the implant, with occasional bone ingrowth into the channels of the wood implant. The results of the liquid immersion experiments showed hydroxyapatite formation only in the most extensively heat-treated wood specimens, which supported the results of the in vivo experiments. Parallel conclusions could be drawn from the results of the liquid penetration test, where human blood had the most favorable interaction with the most extensively heat-treated wood of the compared materials (untreated, 140 °C and 200 °C heat-treated wood). The increasing biocompatibility was inferred to result mainly from changes in the chemical composition of wood induced by the heat treatment, namely the altered arrangement and concentrations of functional chemical groups. However, the influence of microscopic changes in the cell walls, surface roughness and contour cannot be totally excluded. The heat treatment was hypothesized to produce a functional change in the liquid distribution within wood, which could have biological relevance. It was concluded that the highly evolved hierarchical anatomy of wood could yield information for the future development of bulk bone substitutes according to the ideology of bioinspiration. Furthermore, the results of the biomechanical tests established that heat treatment alters various biologically relevant mechanical properties of wood, thus expanding the possibilities of wood as a model material, which could include e.g. scaffold applications, bulk bone applications and serving as a tool both for mechanical testing and for the further development of synthetic fiber-reinforced composites.
Abstract:
More and more innovations currently being commercialized exhibit network effects; in other words, the value of using the product increases as more and more people use the same or compatible products. Although this phenomenon has been the subject of much theoretical debate in economics, marketing researchers have been slow to respond to the growing importance of network effects in new product success. Despite an increase in interest in recent years, there is no comprehensive view of the phenomenon and, therefore, there is currently incomplete understanding of the dimensions it incorporates. Furthermore, there is wide dispersion in operationalization, in other words, the measurement of network effects, and currently available approaches have various shortcomings that limit their applicability, especially in marketing research. Consequently, little is known today about how these products fare in the marketplace and how they should be introduced in order to maximize their chances of success. Hence, the motivation for this study was driven by the need to increase our knowledge and understanding of the nature of network effects as a phenomenon, and of their role in the commercial success of new products. This thesis consists of two parts. The first part comprises a theoretical overview of the relevant literature and presents the conclusions of the entire study. The second part comprises five complementary, empirical research publications. Quantitative research methods and two sets of quantitative data are utilized. The results of the study suggest that there is a need to update both the conceptualization and the operationalization of the phenomenon of network effects. Furthermore, there is a need for an augmented view of customers' perceived value in the context of network effects, given that the nature of value composition has major implications for the viability of such products in the marketplace. The role of network effects in new product performance is not as straightforward as suggested in the existing theoretical literature. The overwhelming result of this study is that network effects do not directly influence product success, but rather enhance or suppress the influence of product introduction strategies. The major contribution of this study is in conceptualizing the phenomenon of network effects more comprehensively than has been attempted thus far. First, the study gives an augmented view of the nature of customer value in network markets, which helps explain why some products thrive in these markets whereas others never catch on. Second, the study discusses shortcomings in the prior literature in the way it has operationalized network effects, suggesting that these limitations can be overcome in the research design. Third, the study provides some much-needed empirical evidence on how network effects, product introduction strategies and new product performance are associated. In general terms, this thesis adds to our knowledge of how firms can successfully leverage network effects in product commercialization in order to improve market performance.
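The abstract's central empirical claim, that network effects moderate rather than directly drive the effect of product introduction strategies on performance, corresponds to an interaction (moderation) term in a regression model. The sketch below illustrates this on simulated data with statsmodels; the variable names, coefficients and data are assumptions, not the thesis's measures or results.

```python
# Hedged illustration of a moderation regression: network-effect strength has no
# direct effect on performance but amplifies or suppresses the strategy effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
strategy = rng.normal(size=n)          # e.g. intensity of an introduction strategy
network = rng.normal(size=n)           # perceived strength of network effects

# Simulated outcome: no main effect of network effects, only an interaction.
performance = 0.5 * strategy + 0.8 * strategy * network + rng.normal(size=n)

df = pd.DataFrame({"performance": performance,
                   "strategy": strategy,
                   "network": network})

# The interaction term captures the moderating role of network effects.
model = smf.ols("performance ~ strategy * network", data=df).fit()
print(model.summary())
```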