904 results for Context data


Relevance: 30.00%

Abstract:

The aim of this Master’s thesis was to study the antecedents of customer satisfaction and behavioral intentions, and their interrelationships, in the sports sponsorship context. The antecedents under investigation are service value and service quality. As the academic background in the sports sponsorship literature is still rather modest, further empirical testing was needed. The theoretical part of the research builds on the existing services marketing literature, with the sports sponsorship and business-to-business contexts in mind. The empirical study focused on the case company, Liiga-SaiPa Oy. The data for the empirical analysis were collected via a quantitative online survey. The total sample consisted of 357 of the case company’s business customers, and 80 usable responses were collected. The data were analyzed with the statistical analysis software SPSS. According to the results of the empirical analysis, the most important antecedent of behavioral intentions in this context is customer satisfaction. Service value was also found to have a direct, positive relationship with behavioral intentions. No indirect relationships through satisfaction were found between service quality or service value and behavioral intentions. However, both service value and service quality were found to have a direct, positive effect on customer satisfaction. Service quality was also found to be a direct antecedent of service value, together with other service value benefits. In contradiction with the current literature, however, service value sacrifices were not found to have a significant relationship with overall service value perceptions.
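
As a minimal sketch of how such direct and mediated effects can be examined, the regressions below use simulated data with placeholder variable names; this illustrates the general approach only, not the thesis’s SPSS analysis.

    # Illustrative two-step regression on simulated data (placeholder variables,
    # not the thesis data or its exact SPSS procedure).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 80                                     # matches the study's usable responses
    quality = rng.normal(5.0, 1.0, n)          # simulated survey-scale scores
    value = 0.6 * quality + rng.normal(0, 1, n)
    satisfaction = 0.4 * quality + 0.4 * value + rng.normal(0, 1, n)
    intentions = 0.5 * satisfaction + 0.2 * value + rng.normal(0, 1, n)

    # Step 1: antecedents of satisfaction (the mediator).
    X1 = sm.add_constant(np.column_stack([quality, value]))
    m1 = sm.OLS(satisfaction, X1).fit()
    # Step 2: direct effects on intentions, controlling for satisfaction.
    X2 = sm.add_constant(np.column_stack([quality, value, satisfaction]))
    m2 = sm.OLS(intentions, X2).fit()
    print(m1.params, m2.params, sep="\n")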

Relevance: 30.00%

Abstract:

Previous studies have shown that saccadic eye responses, but not manual responses, are sensitive to the kind of warning signal used, with visual onsets producing longer saccadic latencies than visual offsets. The aim of the present study was to determine the effects of distinct warning signals on manual latencies and to test the premise that onset interference does not, in fact, occur for manual responses. A second objective was to determine whether the magnitude of the warning effects could be modulated by contextual procedures. Three experimental conditions based on the kind of warning signal used (visual onset, visual offset and auditory warning) were run in two different contexts (blocked and non-blocked). Eighteen participants were asked to respond to an imperative stimulus that occurred 0, 250, 500 or 750 ms after the warning signal. The experiment consisted of three experimental sessions of 240 trials, in which all the variables were counterbalanced. The data showed that visual onsets produced longer manual latencies than visual offsets in the non-blocked context (275 vs 261 ms; P < 0.001). This interference was obtained, however, only for short intervals between the warning and the stimulus, and was abolished when the blocked context was used (256 vs 255 ms; P = 0.789). These results are discussed in terms of bottom-up and top-down interactions, mainly those related to the role of attentional processing in canceling out competitive interactions and suppressive influences of a distractor on the relevant stimulus.
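
As a minimal illustration of the kind of latency comparison reported above, the sketch below runs a paired t-test on simulated per-participant mean latencies for the onset and offset warnings; the values are invented, not the study’s data.

    # Illustrative paired comparison of mean manual latencies (simulated values,
    # not the study's data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_participants = 18
    onset = rng.normal(275, 20, n_participants)         # per-participant means, onset
    offset = onset - rng.normal(14, 5, n_participants)  # offsets ~14 ms faster on average

    t, p = stats.ttest_rel(onset, offset)
    print(f"onset {onset.mean():.0f} ms vs offset {offset.mean():.0f} ms, "
          f"t = {t:.2f}, p = {p:.4f}")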

Relevance: 30.00%

Abstract:

The research in this Master’s thesis project concerns Big Data transfer over parallel data links; my main objective was to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and in applying Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications in order to determine which achieves the highest transfer speed under which conditions, and to explain the reasons. In the context of this thesis, five utilities were compared: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed to create random binary data (incompressible, to allow a fair comparison between the utilities), execute the utilities with specified parameters, create log files with the results and system parameters, and plot graphs comparing the results. Transferring such enormous volumes of data can take a long time, and hence there is a need to reduce energy consumption and make the transfers greener. As part of the Green IT approach, our team used the cloud computing infrastructure OpenStack: it is more efficient to allocate a specific amount of hardware resources for testing different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure ensures that the virtual channel carries no other traffic, so that the highest possible throughput can be achieved. With the final results, we can identify which utilities produce the fastest data transfers in different scenarios with specific TCP parameters, and use them on real network data links.
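
A minimal sketch of the benchmarking approach described above follows: generate incompressible random test data, run a transfer command, and report throughput. The file size and paths are placeholders, and the cp command merely stands in for the utilities under test.

    # Illustrative benchmark driver; "cp" stands in for FDT/BBCP/BBFTP/GridFTP/FTS3,
    # and the size and paths are placeholders.
    import os
    import subprocess
    import time

    SIZE = 256 * 1024**2                      # 256 MiB of random bytes
    SRC, DST = "/tmp/testfile.bin", "/tmp/testfile.copy"

    # os.urandom output is incompressible, so every utility moves the same entropy.
    with open(SRC, "wb") as f:
        for _ in range(SIZE // (64 * 1024)):
            f.write(os.urandom(64 * 1024))

    start = time.monotonic()
    subprocess.run(["cp", SRC, DST], check=True)  # replace with the utility under test
    elapsed = time.monotonic() - start
    print(f"{SIZE / elapsed / 1e6:.1f} MB/s in {elapsed:.2f} s")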

Relevance: 30.00%

Abstract:

In today’s complex and volatile business environment, companies that are able to turn the operational data they generate into data warehouses can achieve a significant competitive advantage. Using predictive analytics to anticipate future trends enables companies to identify the key factors that allow them to stand out from their competitors, and using predictive analytics as part of the decision-making process enables more agile, real-time decision-making. The purpose of this Master’s thesis is to assemble a theoretical framework for analytics modeling from the perspective of a business end user and to apply this modeling process to the thesis’s case company. The theoretical model was used to model customer relationships and to identify leading indicators for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with business operations in Finland, Russia, and the Baltic countries. This study is a quantitative case study in which the case company’s transaction data served as the main source of data; the data for the work was obtained from the company’s ERP system.
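
As a minimal illustration of the predictive step described above, the sketch below fits a linear trend to simulated monthly sales totals and extrapolates it; the figures are invented, not the case company’s ERP data.

    # Illustrative sales-trend forecast (simulated monthly totals, not the ERP data).
    import numpy as np

    rng = np.random.default_rng(2)
    months = np.arange(24)                           # two years of monthly sales totals
    sales = 100 + 2.5 * months + rng.normal(0, 8, 24)

    slope, intercept = np.polyfit(months, sales, 1)  # least-squares linear trend
    forecast = intercept + slope * np.arange(24, 30) # extrapolate six months ahead
    print("next 6 months:", np.round(forecast, 1))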

Relevance: 30.00%

Abstract:

This doctoral study conducts an empirical analysis of the impact of Word-of-Mouth (WOM) on marketing-relevant outcomes, such as attitudes and consumer choice, during a high-involvement and complex service decision. Due to its importance to decision-making, WOM has attracted interest from academia and practitioners for decades. Consumers are known to discuss products and services with one another. These discussions help consumers form an evaluative opinion, as WOM reduces perceived risk, simplifies complexity, and increases the confidence of consumers in decision-making. These discussions are also highly impactful, as WOM is a trustworthy source of information that is independent of the company or brand. In responding to calls for more research on what happens after WOM information is received, and on how it affects marketing-relevant outcomes, this dissertation extends the prior WOM literature by investigating how consumers process information in a high-involvement service domain, in particular higher education. Further, the dissertation studies how the form of WOM influences consumer choice. The research contributes to the WOM and services marketing literature by developing and empirically testing a framework for information processing and by studying the long-term effects of WOM. The results of the dissertation are presented in five research publications, which are based on longitudinal data. The research leads to a proposed theoretical framework for the processing of WOM, based on theories from social psychology. The framework is specifically focused on service decisions, as it takes into account evaluation difficulty through the complex nature of the choice criteria associated with service purchase decisions. Further, other gaps in the current WOM literature are addressed by, for example, examining how the source of WOM and service values affect the processing mechanism. The research also provides implications for managers aiming to trigger favorable WOM through marketing efforts, such as advertising and testimonials. The results provide suggestions on how to design these marketing efforts by taking into account the mechanism through which information is processed, or the form of social influence.

Relevance: 30.00%

Abstract:

The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second one examines cell lineage specification in mouse embryonic stem cells.
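
As a minimal illustration of one common step in such gene expression workflows, the sketch below runs a per-gene two-group test on a simulated expression matrix; the data and the choice of Welch’s t-test are illustrative assumptions, not the thesis’s methods.

    # Illustrative per-gene two-group test on a simulated expression matrix.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_genes = 1000
    group_a = rng.normal(8, 1, (n_genes, 5))   # log2 expression, 5 samples per group
    group_b = rng.normal(8, 1, (n_genes, 5))
    group_b[:50] += 2                          # 50 genes simulated as up-regulated

    t, p = stats.ttest_ind(group_a, group_b, axis=1, equal_var=False)
    print("genes with p < 0.001:", int((p < 0.001).sum()))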

Relevance: 30.00%

Abstract:

Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and low numbers of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as tree species and the season of data acquisition. These algorithms are adaptive with respect to point cloud characteristics and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparisons with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of tree heights estimated from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small artifacts in the terrain (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles of the airborne laser scanner data. Being based on the idea of a moving voxel, however, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which provide more realistic information on the risk of crown fires than CBH alone.
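
As a minimal illustration of the normalization step described above, the sketch below interpolates a ground surface from synthetic ground returns and subtracts it from each point’s elevation; the linear interpolation is an illustrative stand-in, not one of the DTM extraction algorithms proposed in the thesis.

    # Illustrative point-cloud normalization against an interpolated ground surface.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(4)
    ground = rng.uniform(0, 100, (200, 3))                 # sparse ground returns: x, y, z
    ground[:, 2] = 0.05 * ground[:, 0] + rng.normal(0, 0.2, 200)   # gently sloping terrain

    points = rng.uniform(0, 100, (5000, 3))                # all returns, z = elevation
    points[:, 2] = 0.05 * points[:, 0] + rng.uniform(0, 30, 5000)  # canopy above terrain

    dtm_z = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="linear")
    height = points[:, 2] - dtm_z                          # height above ground per point
    print("max canopy height ~", np.nanmax(height).round(1), "m")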

Relevance: 30.00%

Abstract:

Marketing has changed because of digitalization. Marketing is moving towards digital channels, and more companies are transitioning from “pushing” advertising messages to “pull” marketing, which attracts an audience with content that interests and benefits them. This kind of marketing is called content marketing or “inbound” marketing. This study focuses on how marketing communications agencies utilize digital content marketing and what the best practices are for the selected digital content marketing channels. In this study, those channels include blogs, Facebook, Twitter, and LinkedIn. A qualitative research method was utilized in order to examine the phenomenon of digital content marketing in depth. The chosen data collection method was semi-structured interviewing. A total of seven marketing communications agencies that currently utilize digital content marketing were selected as case companies and interviewed. All the case companies are from the marketing communications industry, because that industry can be assumed to be well adapted to digital content marketing techniques. There is a research gap concerning digital content marketing in the B2B context, which increases the novelty value of this research. The study examines what digital content marketing is, why B2B companies use digital content marketing, and how digital content marketing should be conducted through blogs and social media. The informants perceived digital marketing to be a fundamental part of all their marketing. They conduct digital content marketing for the following reasons: to increase sales, to improve their brand image and to demonstrate their own skills. Concrete results of digital content marketing for the case companies include sales leads, new clients, a better brand image, and easier recruiting. The most important success factors with blogs and social media are the following: 1) Audience-centric thinking. All content planning should start from figuring out which themes interest the target audience. Social media channel choices should be based on where the target audience can be reached. 2) Companies should not talk only about themselves. Instead, content is made about themes that interest the target audience. On social media channels, only a fraction of all shared content is about the company itself; most of the shared content is industry-specific content that helps the potential client.

Relevance: 30.00%

Abstract:

In 2002, the Ontario Federation of School Athletic Associations (OFSAA) identified that, in providing extracurricular sport programs, schools are faced with the 'new realities' of the education system. Although research has been conducted exploring the pressures impacting the provision of extracurricular school sport (Donnelly, Mcloy, Petherick, & Safai, 2000), few studies within the field have focused on understanding extracurricular school sport at an organizational level. The focus of this study was to examine the organizational design (structure, systems, and values) of the extracurricular sport department within three Ontario high schools, as well as to understand the context within which the departments exist. A qualitative multiple case study design was adopted, and three public high schools were selected from one district school board in Ontario to represent the cases under investigation. Interviews, observations and documents were used to analyze the extracurricular sport department design of each case and to better understand the context within which the departments exist. From the analysis of the structure, systems and values of each case, two designs emerged: Design KT1 and Design KT2. Differences between the characteristics of design archetypes KT1 and KT2 centered on the design dimension of values, and this study therefore identified that contrasting organizational values reflect differences in design types. The characteristics of the Kitchen Table archetype were found to be transferable to the sub-sector of extracurricular school sport, and this research therefore provides a springboard for further research on organizational design within the education sector of extracurricular high school sport. Interconnections were found between the data associated with the external and internal contexts within which the extracurricular sport departments exist. The analysis of the internal context indicated the important role played by organizational members in shaping the context within which the departments exist. The analysis of the external context highlighted the institutional pressures present within the education environment. Both political and cultural expectations related to the role of extracurricular sport within schools were visible and were subsequently used by the high schools to create legitimacy and prestige, and to access resources.

Relevance: 30.00%

Abstract:

The purpose of this study was to examine the processes and interactions that characterized positive developmental experiences in sport. A highly competitive and reputable U-17 girls' soccer team was chosen for the study through purposeful sampling, providing an information-rich case from which data could be derived (Patton, 2002). Seventeen players and three coaches participated in this study. Based on an ethnographic methodology, data were collected via observations and both informal and formal semi-structured interviews. The data were coded according to the three procedures outlined by Seidel and Kelle (1995): a) noticing relevant phenomena, b) collecting examples of those phenomena, and c) analyzing those phenomena in order to find commonalities, differences, patterns and structures. Significant events and underlying themes were recounted chronologically through a collection of vignettes, aimed at providing a contextual lens for the reader. Results revolved around two prominent themes: teamwork and leadership. These were closely related concepts that required players to demonstrate a wide range of developmental skills for the team to move collectively towards their end goal. Furthermore, teamwork and leadership experiences took both desirable and undesirable forms. For example, at the beginning of the season competition existed amongst the players at the expense of teamwork and leadership. As the season progressed, the pursuit of a shared goal allowed the players to view each other as collaborators, and teamwork and leadership skills became increasingly evident. At times, however, success on the field was prioritized above maintaining relationships off the field, requiring the coaches to intervene and re-establish equilibrium.

Relevance: 30.00%

Abstract:

The research presented is a qualitative case study of educators' experiences in integrating living skills in the context of health and physical education (HPE). Using semi-structured interviews, the study investigated HPE educators' experiences and revealed their insights relative to three major themes: professional practice, challenges, and support systems. Professional practice experiences detailed the use of progressive lesson planning, reflective and engaging activities, and explicit student-centered pedagogy, as well as holistic teaching philosophies. Furthermore, limited knowledge and awareness of living skills, conflicting teaching philosophies, competitive environments between subject areas, and a lack of time and accessibility were the four major challenges that emerged from the data. Major supportive roles for HPE educators in the integration process included other educators, consultants, school administration, public health, parents, community programs and professional organizations. The study provides valuable discussion and suggestions for the improvement of pedagogical practices in teaching living skills in the HPE setting.

Relevance: 30.00%

Abstract:

The attached file was created with Scientific WorkPlace (LaTeX).

Relevance: 30.00%

Abstract:

The full version of this thesis is available only for individual consultation at the Bibliothèque de musique of the Université de Montréal (http://www.bib.umontreal.ca/MU).

Relevance: 30.00%

Abstract:

In this thesis, we develop bootstrap methods for high-frequency financial data. The first two essays focus on bootstrap methods that are applied to the pre-averaging approach and are robust to the presence of microstructure noise; pre-averaging reduces the influence of microstructure effects before realized volatility is computed. Building on this approach to estimating integrated volatility in the presence of microstructure noise, we develop several bootstrap methods that preserve the dependence structure and the heterogeneity in the mean of the original data. The third essay develops a bootstrap method under the assumption of local Gaussianity of high-frequency financial data.

The first chapter is entitled "Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns". In this chapter, we propose bootstrap methods that are robust to the presence of microstructure noise. In particular, we focus on realized volatility based on the pre-averaged returns proposed by Podolskij and Vetter (2009), where the pre-averaged returns are constructed over non-overlapping blocks of consecutive high-frequency returns. Because the blocks do not overlap, the pre-averaged returns are asymptotically independent but possibly heteroskedastic, which motivates the application of the wild bootstrap in this context. We show the theoretical validity of the bootstrap for constructing percentile and percentile-t intervals. Monte Carlo simulations show that the bootstrap can improve the finite-sample properties of the integrated volatility estimator relative to the asymptotic results, provided the external variable is chosen appropriately. We illustrate these methods using real financial data.

The second chapter is entitled "Bootstrapping pre-averaged realized volatility under market microstructure noise". In this chapter, we develop a block bootstrap method based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaged returns are constructed over overlapping blocks of consecutive high-frequency returns. The overlap of the blocks induces strong dependence in the structure of the pre-averaged returns: they are m-dependent, with m growing more slowly than the sample size n, which motivates the application of a specific block bootstrap. We show that the block bootstrap suggested by Bühlmann and Künsch (1995) is valid only when volatility is constant; this is due to the heterogeneity in the mean of the squared pre-averaged returns when volatility is stochastic. We therefore propose a new bootstrap procedure that combines the wild bootstrap and the block bootstrap, so that the serial dependence of the pre-averaged returns is preserved within blocks and the homogeneity condition required for the validity of the bootstrap is satisfied. Under conditions on the block size, we show that this method is consistent. Monte Carlo simulations show that the bootstrap improves the finite-sample properties of the integrated volatility estimator relative to the asymptotic results. We illustrate this method using real financial data.

The third chapter is entitled "Bootstrapping realized covolatility measures under local Gaussianity assumption". In this chapter, we show how, and to what extent, the distributions of estimators of covolatility measures can be approximated under the assumption of local Gaussianity of returns, and we propose a new bootstrap method under these assumptions. We focus on realized volatility and realized beta. We show that the new bootstrap method replicates the cumulants up to second order when applied to realized beta, while it provides a third-order improvement when applied to realized volatility. These results improve on existing results in this literature, notably those of Gonçalves and Meddahi (2009) and Dovonon, Gonçalves and Meddahi (2013). Monte Carlo simulations show that the bootstrap improves the finite-sample properties of the integrated volatility estimator relative to the asymptotic results and to existing bootstrap results. We illustrate this method using real financial data.
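
As a minimal illustration of the wild bootstrap idea described above, in the simplest setting without pre-averaging or microstructure noise, the sketch below builds a percentile interval for realized volatility from simulated returns; the choice of external variable is one common option, not necessarily the one studied in the thesis.

    # Illustrative wild bootstrap percentile interval for realized volatility.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 390                                      # one trading day of 1-minute returns
    r = rng.normal(0, 0.001, n)                  # simulated high-frequency returns
    rv = np.sum(r**2)                            # realized volatility estimate

    B = 999
    rv_star = np.empty(B)
    for b in range(B):
        eta = rng.standard_normal(n)             # external variable (one common choice)
        rv_star[b] = np.sum((r * eta) ** 2)      # wild bootstrap replicate

    lo, hi = np.percentile(rv_star, [2.5, 97.5]) # 95% percentile interval
    print(f"RV = {rv:.3e}, bootstrap 95% CI = [{lo:.3e}, {hi:.3e}]")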

Relevance: 30.00%

Abstract:

This thesis draws on the principles of grounded theory (Strauss & Corbin, 1998) to address the lack of documentation concerning the strategies adopted by "intermediary agents" to promote the use of research-based knowledge among education practitioners. The term "intermediary agent" refers to people positioned at the interface between the producers and the users of scientific knowledge, who encourage and support school practitioners in applying scientific knowledge in their practice. The study is part of a project of the ministère de l'Éducation, du Loisir et du Sport du Québec aimed at improving the academic success of secondary school students from disadvantaged backgrounds. Intermediary agents from different levels of the education system, mandated to transfer research-based knowledge to school practitioners in the schools targeted by the project, were invited to participate in the study. A snowball sampling strategy (Biernacki & Waldorf, 1981; Patton, 1990) was employed to identify people recognized by their peers for the quality of the support they offer school practitioners in using research in their practice. Sixteen semi-structured interviews were conducted. The data analysis leads to a proposed knowledge transfer intervention model composed of 32 influence strategies grouped into 6 intervention components: relational, cognitive, political, facilitative, evaluative, and continuous support and follow-up. The results suggest that the relational, cognitive and political strategies are interdependent and help establish a favorable climate in which the agents can exert greater influence on school practitioners' appropriation of the knowledge use process. They further show that the continuous support and follow-up component is important for sustaining changes in school practitioners' use of research in practice. The theoretical implications arising from the model, as well as the explanations of the mechanisms involved in the different components, are put into perspective with the scientific literature on knowledge transfer in the health and education sectors, as well as with work from related disciplines (notably psychology). Finally, avenues of action for practice are proposed.