900 results for Multi-level Analysis
Abstract:
The aim of this study was a holistic examination of competence management over the life cycle of an SME's international growth. The competence management model comprises competences, knowledge, alliances and networks of varying degrees and hierarchical levels, and what the internet offers, alongside organizational learning. The birth of an SME, its growth in the domestic market and its international growth can all be based on internationality. Holistic internationalization can begin from any function or process and extend, to varying degrees, to other functions and/or processes. Internationalization and international operations can cease entirely or function by function and start again. The case companies were analysed with the study's holistic model using a chronology of its sub-areas, and the results were used to refine the model. The refined model emphasizes that the growth and/or internationalization stimulus/impulse arises from both domestic and foreign demand and/or supply markets, from both inside and outside the company, or lies latent in the owner-entrepreneur. The holistic view emphasizes, in all parts of the model, the role of the pre-start phase and a never-ending renewal process. Competence management emphasizes proactivity. Every renewal process requires strengthening existing competences and exploiting competence and knowledge capital from outside the organization. Competence management comprises the vision- and strategy-based management of individuals, groups and units in both the core and the extended organization. This requires commitment, motivation and the exploitation of self-direction as well as of the organization's internal and external cultural distance. From the competence management perspective, the individual's focus is commitment to the task and self-fulfilment. The foci are proactive and vary dynamically over the life cycles of functions and processes. The study contributes to highlighting the role of the owner-entrepreneur and to a broad understanding of the holistic nature of the growth and internationalization impulse and of the pre-start phase. Over an SME's life cycle there are (1+n) growth and internationalization impulses/stimuli and the pre-start phases/sub-areas that follow from them (n: 0 to infinity). The model contributes to understanding intangible competence and knowledge options and highlights transnational features as reinforcing international growth.
Abstract:
Organizational creativity is increasingly important for organizations aiming to survive and thrive in complex and unexpectedly changing environments. It is a precondition of innovation and a driver of an organization's performance success. Whereas innovation research increasingly promotes high-involvement and participatory innovation, the models of organizational creativity are still mainly based on an individual-creativity view. Likewise, the definitions of organizational creativity and innovation partly overlap: on the one hand they are used as interchangeable constructs, while on the other hand they are seen as different constructs. Creativity is seen as the generation of novel and useful ideas, whereas innovation is seen as the implementation of these ideas. The research streams of innovation and organizational creativity seem to be advancing somewhat separately, although together they could provide many synergy advantages. Thereby, this study addresses three main research gaps. First, as knowledge and knowing are becoming increasingly expertized and distributed in organizations, the conceptualization of organizational creativity needs to face that perspective rather than relying on the individual-creativity view. Thus, the conceptualization of organizational creativity needs clarification, especially as an organizational-level phenomenon (i.e., creativity by an organization). Second, approaches to consciously building organizational creativity in order to increase the capacity of an organization to demonstrate novelty in its knowledgeable actions are rare. The current creativity techniques are mainly based on individual-creativity views, and they mainly focus on occasional problem-solving cases among a limited number of individuals, whereas the development of collective creativity and creativity by the organization lacks approaches. Third, in terms of organizational creativity as a collective phenomenon, the engagement, contributions, and participation of organizational members in activities of common meaning creation are more important than individual-creativity skills. Therefore, development approaches that foster creativity as social, emerging, embodied, and collective creativity are needed to complement the current creativity techniques. To address these gaps, the study takes a multiparadigm perspective on the following three objectives. The first objective of this study is to clarify and extend the conceptualization of organizational creativity. The second is to study the development of organizational creativity. The third is to explore how an improvisational-theater-based approach fosters organizational creativity. The study consists of two parts comprising the introductory part (part I) and six publications (part II). Each publication addresses the research questions of the thesis through detailed subquestions. The study makes three main contributions to the research on organizational creativity. First, it contributes to the conceptualization of organizational creativity by extending the current view of organizational creativity. This study views organizational creativity as a multilevel construct comprising both individual and collective (group and organizational) creativity. In contrast to current views of organizational creativity, this study builds on organizational (collective) knowledge that is based on and demonstrated through the knowledgeable actions of an organization as a whole. The study defines organizational creativity as the overall ability of an organization to demonstrate novelty in its knowledgeable actions (through what it does and how it does what it does). Second, this study contributes to the development of organizational creativity as a multi-level phenomenon, introducing developmental approaches that address two or more of these levels simultaneously. More specifically, the study presents cross-level approaches to building organizational creativity, using an approach based in improvisational theater and considering the assessment of organizational renewal capability. Third, the study contributes to the development of organizational creativity using an improvisational-theater-based approach in a twofold way. First, the approach fosters individual and collective creativity simultaneously and builds space for creativity to occur. Second, it models collective and distributed creativity processes, thereby contributing to the conceptualization of organizational creativity.
Abstract:
Context: BL Lacs are the most numerous extragalactic objects detected in the Very High Energy (VHE) gamma-ray band. They are a subclass of blazars. Large-amplitude flux variability, sometimes on very short time scales, is a common characteristic of these objects. Significant optical polarization is another main characteristic of BL Lacs. BL Lacs have a continuous and featureless Spectral Energy Distribution (SED) with two peaks. Among the 1442 BL Lacs in the Roma-BZB catalogue, only 51 are detected in the VHE gamma-ray band. BL Lacs are the most numerous objects (more than 50% of 514 objects) among the sources detected above 10 GeV by FERMI-LAT. Therefore, many BL Lacs are expected to be discovered in the VHE gamma-ray band. However, due to the limitations of current and near-future Imaging Air Cherenkov Telescope technology, astronomers are forced to predict whether an object emits VHE gamma-rays or not. Some VHE gamma-ray prediction methods have already been introduced but have not yet been confirmed. Cross-band correlations are the building blocks for introducing a VHE gamma-ray prediction method. Aims: We investigate cross-band correlations between the flux energy density, luminosity and spectral index of the sample. We also check whether the recently discovered MAGIC J2001+435 is a typical BL Lac. Methods: We select a sample of 42 TeV BL Lacs and collect 20 of their properties within five energy bands from the literature and the Tuorla blazar monitoring program database. All of the data are synchronized to be comparable to each other. Finally, we choose 55 pairs of datasets for the cross-band correlation search and investigate whether there is any correlation within each pair. For MAGIC J2001+435 we analyze the publicly available SWIFT-XRT data and use the still unpublished VHE gamma-ray data from the MAGIC collaboration. The results are compared to the other sources of the sample. Results: The low-state luminosity of sources detected multiple times in VHE gamma-rays is strongly correlated with the luminosities in all other bands. However, the high state does not show such strong correlations. Sources detected only once in VHE gamma-rays behave similarly to the low state of the multiply detected ones. Finally, MAGIC J2001+435 is a typical TeV BL Lac, although for some of the properties this source is located at the edge of the whole sample (e.g. in terms of X-ray flux). Keywords: BL Lac(s), Population study, Correlations finding, Multi wavelengths analysis, VHE gamma-rays, gamma-rays, X-rays, Optical, Radio
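The pairwise cross-band correlation search described in the Methods can be illustrated with a short sketch. This is only an illustration under assumptions: the property names, the values and the choice of Spearman's rank correlation are placeholders, not the actual sample of 42 TeV BL Lacs or the thesis' pipeline.

    # Hedged sketch of a pairwise cross-band correlation search using
    # Spearman's rank correlation; all column names and values are invented.
    import itertools
    import pandas as pd
    from scipy.stats import spearmanr

    bl_lacs = pd.DataFrame({
        "log_L_radio":   [40.1, 41.3, 40.8, 42.0, 41.1, 40.5],
        "log_L_optical": [44.6, 45.2, 44.9, 45.8, 45.0, 44.7],
        "log_L_xray":    [44.9, 45.6, 45.1, 46.2, 45.3, 44.8],
        "log_L_vhe_low": [43.2, 44.0, 43.5, 44.6, 43.8, 43.1],
    })

    results = []
    for prop_a, prop_b in itertools.combinations(bl_lacs.columns, 2):
        pair = bl_lacs[[prop_a, prop_b]].dropna()        # keep complete pairs only
        rho, p_value = spearmanr(pair[prop_a], pair[prop_b])
        results.append({"pair": f"{prop_a} vs {prop_b}",
                        "rho": round(rho, 2), "p": round(p_value, 3), "n": len(pair)})

    print(pd.DataFrame(results).sort_values("rho", ascending=False).to_string(index=False))

In the study itself such a search would be run over the 55 chosen pairs, separately for low-state and high-state data, and the strength of each correlation compared across bands.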
Abstract:
This thesis studies and examines a transition experiment in the nutrient economy. The transition experiment is an action research project carried out bottom-up in accordance with systemic change thinking. The nutrient economy is defined in more detail in the thesis and analysed from the multi-level perspective. As a term, the nutrient economy is still rather unknown and needs more recognition among a wider audience. The development of a transition arena and of transition visions is at the centre of the thesis, because these are the most important steps in the early phase of a transition. A group of stakeholder actors participates in the transition arena and in the further development of the visions. The vision(s) are created primarily with the backcasting method, complemented with conventional forecasting. The backcasting method is partly participatory and uses the planetary boundaries for nutrients as its main quantitative principles, as a result of which the visions are also partly quantitative. Assembling and facilitating the transition arena raises difficult questions that require further research. A bottom-up organized transition arena attracts niche actors but fails to engage public-sector actors. The power relations of the operating model, policy and the consolidation of the transition should be the next steps in both research and practice.
Abstract:
The purpose of this study was to find out how a software company can successfully expand its business to the Danish software market through a distribution channel. The study was commissioned by a Finnish software company and was conducted using a qualitative research method, by analyzing the external and internal business environment and by interviewing Danish ICT organizations and M-Files personnel. The interviews were semi-structured and designed to collect comprehensive information on the existing ICT and software market in Denmark. The research used three external and internal analysis frameworks: PEST analysis (market level), Porter's Five Forces analysis (industry-level competition) and SWOT analysis (company level). Distribution channel theory provided a basis for understanding why and what kinds of distribution channels the case company uses, and what kinds of channels companies in the target market use. Channel strategy and design were integrated into the industry-level analysis. The empirical findings revealed that Denmark has a very business-friendly ICT environment. Several organizations have ranked Denmark's information and communication technology as the best in the world. Denmark's ICT and software market is relatively small compared to many other countries in Europe. The Danish software market is centralized: the largest software clusters are in the largest cities, Copenhagen, Aarhus, Odense and Aalborg. From these clusters, software companies can most likely find suitable resellers. The following growing trends are clearly seen in the software market: mobile and wireless applications, outsourcing, security solutions, cloud computing, social business solutions and e-business solutions. When expanding a software business to the Danish market, it is important to take these trends into account. In Denmark, distribution channels vary depending on the product or service. For many, a natural distribution channel is a local partner or the internet. In the public sector, solutions are purchased through a public procurement process. In the private sector, the buying process is more straightforward. Danish companies buy software from reliable suppliers, which means that they usually buy software directly from big software vendors or local partners. Some customers prefer to use professional consulting companies. These consulting companies can strongly influence the selection of the supplier and products, and in this light they can be important partners for software companies. Even though the competition in ECM and DMS solutions is fierce, the Danish market offers opportunities for foreign companies. Penetrating the Danish market through a reseller channel requires advanced solutions and objective selection criteria for channel partners. Based on the findings, Danish companies are interested in advanced and efficient software solutions. Interest towards M-Files solutions was clearly seen, and the company has an excellent opportunity to expand its business to the Danish market through a reseller channel. Since the research explored the Danish ICT and software market, the results of the study may also offer valuable information to other software companies that are expanding their business to the Danish market.
Abstract:
Traditionally, metacognition has been theorised, methodologically studied and empirically tested mainly from the standpoint of individuals and their learning contexts. In this dissertation the emergence of metacognition is analysed more broadly. The aim of the dissertation was to explore socially shared metacognitive regulation (SSMR) as part of collaborative learning processes taking place in student dyads and small learning groups. The specific aims were to extend the concept of individual metacognition to SSMR, to develop methods to capture and analyse SSMR, and to validate the usefulness of the concept of SSMR in two different learning contexts: in face-to-face student dyads solving mathematical word problems and in small groups taking part in inquiry-based science learning in an asynchronous computer-supported collaborative learning (CSCL) environment. This dissertation is comprised of four studies. In Study I, the main aim was to explore if and how metacognition emerges during problem solving in student dyads and then to develop a method for analysing the social level of awareness, monitoring, and regulatory processes emerging during the problem solving. Two dyads comprised of 10-year-old students who were high-achieving, especially in mathematical word problem solving and reading comprehension, were involved in the study. An in-depth case analysis was conducted. Data consisted of over 16 videotaped and transcribed face-to-face sessions (30–45 minutes each). The dyads solved altogether 151 mathematical word problems of different difficulty levels in a game-format learning environment. The interaction flowchart was used in the analysis to uncover socially shared metacognition. Interviews (including stimulated recall interviews) were conducted in order to obtain further information about socially shared metacognition. The findings showed the emergence of metacognition in a collaborative learning context in a way that cannot be explained solely by individual conceptions. The concept of socially shared metacognition (SSMR) was proposed. The results highlighted the emergence of socially shared metacognition specifically in problems where dyads encountered challenges. Small verbal and nonverbal signals between students also triggered the emergence of socially shared metacognition. Additionally, one dyad implemented a system whereby they shared metacognitive regulation based on their strengths in learning. Overall, the findings suggested that in order to discover patterns of socially shared metacognition, it is important to investigate metacognition over time. However, it was concluded that more research on socially shared metacognition, with larger data sets, is needed. These findings formed the basis of the second study. In Study II, the specific aim was to investigate whether socially shared metacognition can be reliably identified from a large dataset of collaborative face-to-face mathematical word problem solving sessions by student dyads. We specifically examined different difficulty levels of tasks as well as the function and focus of socially shared metacognition. Furthermore, the presence of observable metacognitive experiences at the beginning of socially shared metacognition was explored. Four dyads participated in the study. Each dyad was comprised of high-achieving 10-year-old students, ranked in the top 11% of their fourth grade peers (n=393). The dyads were from the same data set as in Study I. The dyads worked face-to-face in a computer-supported, game-format learning environment.
Problem-solving processes for 251 tasks at three difficulty levels, taking place during 56 lessons (30–45 minutes each), were video-taped and analysed. Baseline data for this study were 14 675 turns of transcribed verbal and nonverbal behaviours observed in the four study dyads. The micro-level analysis illustrated how participants moved between different channels of communication (individual and interpersonal). The unit of analysis was a set of turns, referred to as an 'episode'. The results indicated that socially shared metacognition and its function and focus, as well as the appearance of metacognitive experiences, can be defined in a reliable way from a larger data set by independent coders. A comparison of the different difficulty levels of the problems suggested that in order to trigger socially shared metacognition in small groups, the problems should be difficult, as opposed to moderately difficult or easy. Although socially shared metacognition was found in collaborative face-to-face problem solving among high-achieving student dyads, more research is needed in different contexts. This consideration created the basis of the research on socially shared metacognition in Studies III and IV. In Study III, the aim was to expand the research on SSMR from face-to-face mathematical problem solving in student dyads to inquiry-based science learning among small groups in an asynchronous computer-supported collaborative learning (CSCL) environment. The specific aims were to investigate SSMR's evolvement and functions in a CSCL environment and to explore how SSMR emerges at different phases of the inquiry process. Finally, individual student participation in SSMR during the process was studied. An in-depth explanatory case study of one small group of four girls aged 12 years was carried out. The girls attended a class that has an entrance examination and follows a language-enriched curriculum. The small group solved complex science problems in an asynchronous CSCL environment, participating in research-like processes of inquiry during 22 lessons (45 minutes each). Students' network discussions were recorded as written notes (N=640), which were used as study data. A set of notes, referred to here as a 'thread', was used as the unit of analysis. The inter-coder agreement was regarded as substantial. The results indicated that SSMR emerges in a small group's asynchronous CSCL inquiry process in the science domain. Hence, the results of Study III were in line with the previous Study I and Study II and revealed that metacognition cannot be reduced to the individual level alone. The findings also confirm that SSMR should be examined as a process, since SSMR can evolve during different phases and different SSMR threads overlapped and intertwined. Although the classification of SSMR's functions was applicable in the context of CSCL in a small group, the dominant function was different in the asynchronous CSCL inquiry in the small group in a science activity than in mathematical word problem solving among student dyads (Study II). Further, the use of different analytical methods provided complementary findings about students' participation in SSMR. The findings suggest that it is not enough to code just a single written note or simply to examine who has the largest number of notes in the SSMR thread; the connections between the notes must also be examined.
As the findings of the present study are based on an in-depth analysis of a single small group, further cases were examined in Study IV, which also looked at SSMR's focus, as studied previously in the face-to-face context. In Study IV, the general aim was to investigate the emergence of SSMR with a larger data set from an asynchronous CSCL inquiry process in small student groups carrying out science activities. The specific aims were to study the emergence of SSMR in the different phases of the process, students' participation in SSMR, and the relation of SSMR's focus to the quality of outcomes, which was not explored in previous studies. The participants were 12-year-old students from the same class as in Study III. Five small groups of four students and one group of five students (N=25) were involved in the study. The small groups solved ill-defined science problems in an asynchronous CSCL environment, participating in research-like processes of inquiry over a total period of 22 hours. Written notes (N=4088) detailed the network discussions of the small groups, and these constituted the study data. With these notes, SSMR threads were explored. As in Study III, the thread was used as the unit of analysis. In total, 332 notes were classified as forming 41 SSMR threads. Inter-coder agreement was assessed by three coders in the different phases of the analysis and found to be reliable (an illustrative agreement computation is sketched after this abstract). Multiple methods of analysis were used. Results showed that SSMR emerged in all the asynchronous CSCL inquiry processes in the small groups. However, the findings did not reveal any significantly changing trend in the emergence of SSMR during the process. As a main trend, the number of notes included in SSMR threads differed significantly in different phases of the process, and the small groups differed from each other. Although student participation was highly dispersed between the students, there were differences between students and small groups. Furthermore, the findings indicated that the amount of SSMR during the process or the participation structure did not explain the differences in the quality of outcomes for the groups. Rather, when SSMRs were focused on understanding and procedural matters, this was associated with achieving high-quality learning outcomes. In turn, when SSMRs were focused on incidental and procedural matters, this was associated with low-level learning outcomes. Hence, the findings imply that the focus of any emerging SSMR is crucial to the quality of the learning outcomes. Moreover, the findings encourage the use of multiple research methods for studying SSMR. In total, the four studies convincingly indicate that the phenomenon of socially shared metacognitive regulation exists. This means that it was possible to define the concept of SSMR theoretically, to investigate it methodologically and to validate it empirically in two different learning contexts across dyads and small groups. In-depth micro-level case analysis in Studies I and III showed the possibility to capture and analyse SSMR in detail during the collaborative process, while in Studies II and IV the analysis validated the emergence of SSMR in larger data sets. Hence, validation was tested both between two environments and within the same environments with further cases. As part of this dissertation, SSMR's detailed functions and foci were revealed. Moreover, the findings showed the important role of observable metacognitive experiences as the starting point of SSMRs.
It was apparent that the problems dealt with by the groups should be rather difficult if SSMR is to be made clearly visible. Further, individual students' participation was found to differ between students and groups. The multiple research methods employed revealed supplementary findings regarding SSMR. Finally, when SSMR was focused on understanding and procedural matters, this was seen to lead to higher-quality learning outcomes. Socially shared metacognitive regulation should therefore be taken into consideration in students' collaborative learning at school, in the same way that an individual's metacognition is taken into account in individual learning.
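The inter-coder agreement reported above is typically quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below is only an illustration under assumptions: the episode codes and coder labels are invented, and pairwise kappa is one common choice rather than the dissertation's specific reliability procedure.

    # Hedged illustration: pairwise Cohen's kappa between three coders who
    # labelled the same ten episodes. All labels below are made up.
    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    coders = {
        "coder_1": ["SSMR", "none", "SSMR", "SSMR", "none", "SSMR", "none", "SSMR", "none", "SSMR"],
        "coder_2": ["SSMR", "none", "SSMR", "none", "none", "SSMR", "none", "SSMR", "none", "SSMR"],
        "coder_3": ["SSMR", "SSMR", "SSMR", "SSMR", "none", "SSMR", "none", "none", "none", "SSMR"],
    }

    for (name_a, codes_a), (name_b, codes_b) in combinations(coders.items(), 2):
        kappa = cohen_kappa_score(codes_a, codes_b)   # chance-corrected agreement
        print(f"{name_a} vs {name_b}: kappa = {kappa:.2f}")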
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods, which are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. how easily the method can be applied correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are involved in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria (a sketch of this weighting step follows the abstract). In addition, a field study was arranged to get firsthand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most of the current lie detection studies are built around a scenario where roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
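The Analytic Hierarchy Process step mentioned in the abstract derives criterion weights from a reciprocal pairwise comparison matrix via its principal eigenvector and then checks the consistency of the judgements. The sketch below is a minimal illustration under assumptions: the judgement values are placeholders on Saaty's 1-9 scale, not the study's actual comparisons.

    # Hedged AHP sketch: criterion weights from a pairwise comparison matrix,
    # plus a consistency check. The matrix entries are illustrative only.
    import numpy as np

    criteria = ["Accuracy", "Ease of Use", "Time Required",
                "No Special Equipment", "Unobtrusiveness"]

    A = np.array([                      # hypothetical reciprocal judgements
        [1,   3,   5,   5,   7],
        [1/3, 1,   3,   3,   5],
        [1/5, 1/3, 1,   1,   3],
        [1/5, 1/3, 1,   1,   3],
        [1/7, 1/5, 1/3, 1/3, 1],
    ], dtype=float)

    eigenvalues, eigenvectors = np.linalg.eig(A)
    k = np.argmax(eigenvalues.real)                 # principal eigenvalue index
    weights = np.abs(eigenvectors[:, k].real)
    weights /= weights.sum()                        # normalized priority weights

    n = A.shape[0]
    ci = (eigenvalues.real[k] - n) / (n - 1)        # consistency index
    cr = ci / 1.12                                  # random index for n = 5 (Saaty)

    for name, w in zip(criteria, weights):
        print(f"{name}: {w:.3f}")
    print(f"Consistency ratio: {cr:.3f} (commonly accepted if below 0.10)")

Each candidate veracity assessment method is then scored against these weighted criteria to obtain the overall applicability ranking described in the abstract.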
Abstract:
The increasing amount of electricity production based on renewable energy sources has set high load-control requirements for power grid balancing markets. The essential grid balance between electricity consumption and generation is currently hard to achieve economically with new-generation solutions. Therefore, conventional combustion power generation is examined in this thesis as a solution to the foregoing issue. Circulating fluidized bed (CFB) technology is known to have sufficient scale to act as a large grid-balancing unit. Although the load change rate of a CFB unit is known to be moderately high, a supplementary repowering solution is evaluated in this thesis for load change maximization. The repowering heat duty is delivered to the CFB feedwater preheating section by a smaller gas turbine (GT) unit. Consequently, steam extraction preheating may be decreased and a large amount of the gas turbine exhaust heat may be utilized in the CFB process to reach maximum plant electrical efficiency. Earlier studies of repowering have focused on efficiency improvements and retrofitting to maximize plant electrical output. This study, however, presents the CFB load change improvement possibilities achieved with supplementary GT heat. The repowering study is prefaced with a literature and theory review of both processes to maximize the accuracy of the research. Both dynamic and steady-state simulations performed with the APROS simulation tool are used to evaluate the effects of repowering on CFB unit operation. Finally, a conceptual-level analysis is completed to compare the repowered plant performance to state-of-the-art CFB performance. Based on the performed simulations, considerable improvements to the CFB process parameters are achieved with repowering. Consequently, the results show that higher ramp rates can be achieved with repowered CFB technology, which makes the plant better suited to the grid balancing markets.
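The overall energy-balance idea behind GT repowering (GT exhaust heat replaces steam-extraction feedwater preheating, so extra electrical output is obtained from the combined fuel input) can be sketched as below. Every number is an illustrative assumption, not a result of the thesis or of the APROS simulations.

    # Hedged energy-balance sketch for GT repowering of a CFB plant.
    # All values are invented placeholders in MW.
    cfb_fuel_input_mw = 500.0        # CFB fuel heat input
    cfb_power_mw = 210.0             # CFB net electrical output
    gt_fuel_input_mw = 90.0          # gas turbine fuel heat input
    gt_power_mw = 32.0               # gas turbine electrical output
    extra_steam_power_mw = 8.0       # extra steam-cycle output when extraction
                                     # preheating is replaced by GT exhaust heat

    eta_cfb_alone = cfb_power_mw / cfb_fuel_input_mw
    eta_repowered = (cfb_power_mw + gt_power_mw + extra_steam_power_mw) / \
                    (cfb_fuel_input_mw + gt_fuel_input_mw)

    print(f"Stand-alone CFB efficiency: {eta_cfb_alone:.1%}")
    print(f"Repowered plant efficiency: {eta_repowered:.1%}")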
Abstract:
Objective: Overuse injuries in violinists are a problem that has been analyzed primarily through the use of questionnaires. Simultaneous 3D motion analysis and EMG measurement of muscle activity has been suggested as a quantitative technique to explore this problem by identifying movement patterns and muscular demands which may predispose violinists to overuse injuries. This multi-disciplinary analysis technique has, so far, had limited use in the music world. The purpose of this study was to use it to characterize the demands of a violin bowing task. Subjects: Twelve injury-free violinists volunteered for the study. The subjects were assigned to a novice or expert group based on playing experience, as determined by questionnaire. Design and Settings: Muscle activity and movement patterns were assessed while violinists played five bowing cycles (one bowing cycle = one down-bow + one up-bow) on each string (G, D, A, E), at a pulse of 4 beats per bow and 100 beats per minute. Measurements: An upper extremity model created using coordinate data from markers placed on the right acromion process, lateral epicondyle of the humerus and ulnar styloid was used to determine minimum and maximum joint angles, ranges of motion (ROM) and angular velocities at the shoulder and elbow of the bowing arm. Muscle activity in the right anterior deltoid, biceps brachii and triceps brachii was assessed during maximal voluntary contractions (MVC) and during the playing task. Data were analysed for significant differences across the strings and between experience groups. Results: Elbow flexion/extension ROM was similar across strings for both groups. Shoulder flexion/extension ROM changed across strings and was larger for the experts. Angular velocity changes mirrored changes in ROM. Deltoid was the most active of the muscles assessed (20% MVC) and displayed a pattern of constant activation to maintain shoulder abduction. Biceps and triceps were less active (4–12% MVC) and showed a more periodic 'on and off' pattern. Novices' muscle activity was higher in all cases. Experts' muscle activity showed a consistent pattern across strings, whereas the novices' was more irregular. The agonist-antagonist roles of biceps and triceps during the bowing motion were clearly defined in the expert group, but not as apparent in the novice group. Conclusions: Bowing movement appears to be controlled by the shoulder rather than the elbow, as shoulder ROM changed across strings while elbow ROM remained the same. Shoulder injuries are probably due to repetition, as the muscle activity required for the movement is small. Experts require a smaller amount of muscle activity to perform the movement, possibly due to more efficient muscle activation patterns as a result of practice. This quantitative multidisciplinary approach to analysing violinists' movements can contribute to a fuller understanding of both playing demands and injury mechanisms.
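Two of the quantities used above, joint range of motion (ROM) from an angle time series and EMG amplitude normalized to %MVC, can be illustrated with a short sketch. The arrays, sampling rate and MVC value are made-up examples under assumptions, not the study's data or processing pipeline.

    # Hedged sketch: ROM and angular velocity from a joint-angle series, and
    # EMG normalized to %MVC. All values below are invented placeholders.
    import numpy as np

    fs = 100.0                                     # assumed sampling rate in Hz
    shoulder_angle_deg = np.array([32.0, 34.5, 38.0, 42.5, 46.0, 44.0, 39.5, 35.0])

    rom_deg = shoulder_angle_deg.max() - shoulder_angle_deg.min()
    angular_velocity = np.gradient(shoulder_angle_deg) * fs    # deg/s

    deltoid_emg_rms_mv = 0.11                      # RMS EMG during the bowing task
    deltoid_mvc_rms_mv = 0.55                      # RMS EMG during the MVC trial
    activation_pct_mvc = 100.0 * deltoid_emg_rms_mv / deltoid_mvc_rms_mv

    print(f"Shoulder ROM: {rom_deg:.1f} deg")
    print(f"Peak angular velocity: {np.max(np.abs(angular_velocity)):.1f} deg/s")
    print(f"Deltoid activation: {activation_pct_mvc:.0f}% MVC")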
Abstract:
The purpose of my research was to develop and refine pedagogic approaches and establish fitness baselines in order to adapt fitness and conditioning programs for Moderate-functioning ASD individuals. I conducted a seven-week study with two teens and two trainers. The trainers implemented individualized fitness and conditioning programs that I developed. I established pre- and post-study fitness baselines for each teen, conducted pre- and post-study interviews with the trainers, and recorded semi-structured observations during each session. I used multi-level, within-case and across-case analyses, working inductively and deductively. My findings indicated that fundamental movement concepts can be used to establish fitness baselines and develop individualized fitness programs. I tracked and evaluated progressions and improvements using conventional measurements applied to unconventional movements. This process contributed to understanding and making relevant modifications to activities as effective pedagogic strategies for my trainers. Further research should investigate fitness and conditioning programs with lower-functioning ASD individuals.
Abstract:
Alan Garcia, the current president of Peru, is one of the most controversial politicians in Peruvian history. The success of his career as a candidate stands in stark contrast to the catastrophic results of his first presidential administration. In popular culture, Garcia's discursive skills, together with the contrast between his success and his poor performance as president, have elevated him to the status of myth. This research presents a pragmatic-linguistic analysis of the discursive strategies used by President Garcia in his second term (2001-2006). The analysis centres on the relationship established by Steven Pinker (2007) between positive politeness and communal solidarity. The work of Brown and Levinson (1978, 1987) and of Alan Fiske (1991) forms our theoretical basis. The social exclusion of part of the Peruvian electorate is, from the viewpoint of Vergara (2007), the key element for understanding the success of Garcia's discursive strategy. Vergara presents a multi-variable diachronic analysis of the Peruvian political situation to explain the rationality of the Peruvian electorate. Starting from this theoretical framework, we proceed to a lexicometric analysis that allows us to identify the discursive strategies used in the corpus of Garcia's speeches chosen for analysis. Following Pinker's scheme, the data obtained are classified according to Brown and Levinson's definition of positive politeness. Finally, we evaluate the relationship between the classified results and Fiske's model of communal solidarity. The objective is to show that Garcia's discursive style is structured around a rationality whose aim is to close the social gap between the politician and the electorate.
Abstract:
Objective: To determine the reliability and accuracy of a prototype non-invasive device for measuring glucose in interstitial tissue, the PGS (Photonic Glucose Sensor), using multi-step glucose clamps. Methods: The PGS was evaluated in 13 subjects with type 1 diabetes. Two PGS units were tested per subject, one on each triceps, to assess sensitivity, specificity, reproducibility and accuracy compared with the reference technique (the Beckman®). Each subject underwent an 8-hour multi-step glucose clamp at concentrations of 3, 5, 8 and 12 mmol/L, with 2 hours at each level. Results: The correlation between the PGS and the Beckman® was 0.70. For the detection of hypoglycemia, sensitivity was 63.4%, specificity 91.6%, positive predictive value (PPV) 71.8% and negative predictive value (NPV) 88.2%. For the detection of hyperglycemia, sensitivity was 64.7%, specificity 92%, PPV 70.8% and NPV 89.7%. The ROC (Receiver Operating Characteristic) curve showed an accuracy of 0.86 for hypoglycemia and 0.87 for hyperglycemia. Reproducibility according to the Clark Error Grid was 88% (A+B). Conclusion: The performance of the PGS was comparable to, if not better than, other devices on the market (Freestyle® Navigator, Medtronic Guardian® RT, Dexcom® STS-7), with the advantage that no needle is required. It is therefore a device with considerable potential as a tool to facilitate glucose monitoring during intensive diabetes treatment. Keywords: Diabetes, type 1 diabetes, PGS (Photonic Glucose Sensor), continuous glucose monitoring, ROC curve, Clark Error Grid.
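The diagnostic metrics reported above (sensitivity, specificity, PPV, NPV) all come from a confusion matrix of sensor readings against the reference method. The sketch below illustrates the arithmetic; the counts are invented placeholders, not the PGS study data.

    # Hedged illustration of the diagnostic metrics from confusion counts.
    # tp/fp/fn/tn values below are made up for demonstration only.
    def diagnostic_metrics(tp, fp, fn, tn):
        """Return sensitivity, specificity, PPV and NPV from confusion counts."""
        sensitivity = tp / (tp + fn)   # detected events among true events
        specificity = tn / (tn + fp)   # correctly ruled out among non-events
        ppv = tp / (tp + fp)           # alarms that were true events
        npv = tn / (tn + fn)           # reassurances that were correct
        return sensitivity, specificity, ppv, npv

    # Hypothetical paired PGS / reference readings classified as hypo vs. not hypo.
    sens, spec, ppv, npv = diagnostic_metrics(tp=45, fp=15, fn=25, tn=200)
    print(f"Sensitivity {sens:.1%}, specificity {spec:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")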
Abstract:
The thesis consists of three essays in applied microeconomics. Using models of learning and of network externalities, it studies the behaviour of economic agents in different situations. The first essay addresses the use of natural resources under uncertainty and learning. Several authors have dealt with the subject, but here we study a learning model in which the agents consuming the resource do not hold the same prior beliefs. The second essay addresses the generic problem faced, for example, by a research fund wishing to choose the best among several researchers of different generations and different levels of experience. The third essay studies a particular model of business organization known as multi-level marketing. The first chapter is entitled "Renewable Resource Consumption in a Learning Environment with Heterogeneous Beliefs". In it we use a learning model with heterogeneous beliefs to study the exploitation of a natural resource under uncertainty. Two types of learning must be distinguished here: adaptive learning and learning proper, two terms borrowed from Koulovatianos et al. (2009). We show that, compared with adaptive learning, learning has a negative impact on total consumption by all users of the resource, but that individually some users may consume more of the resource under learning than under adaptive learning. Indeed, under learning, consumers face two types of incentive not to consume the resource (and thus to invest): the own incentive, which always has a negative effect on resource consumption, and the heterogeneous incentive, whose effect can be positive or negative. The overall effect of learning on individual consumption therefore depends on the sign and magnitude of the heterogeneous incentive. Moreover, using the absolute and relative variations in consumption following a change in beliefs, it appears that users tend to converge towards a common decision. The second chapter is entitled "A Perpetual Search for Talent across Overlapping Generations". Using a dynamic overlapping-generations model, we study how a research fund should proceed to select the best researchers to finance. The researchers do not all have the same seniority in research activity. For an optimal decision, the research fund must rely on both the seniority and the past work of the researchers who have applied for a research grant, and it must be more lenient towards young researchers with respect to the requirements to be met in order to be funded. This work is also a contribution to the analysis of bandit problems: here, instead of trying to compute an index, we propose to rank and progressively eliminate researchers by comparing them two at a time. The third chapter is entitled "Paradox about the Multi-Level Marketing (MLM)". For several decades, a particular form of firm has become increasingly common, in which the product is marketed through distributors. Each distributor can sell the product and/or recruit other distributors for the firm; he earns profits on his own sales and also receives commissions on the sales of the distributors he has recruited. This is multi-level marketing (MLM). The structure of this type of firm is often described by some critics as a pyramid scheme, a fraud, and therefore unsustainable. But the promoters of multi-level marketing reject these allegations, arguing that the purpose of MLMs is to sell, not to recruit: the payoffs and the rules of the game are such that distributors have a greater incentive to sell the product than to recruit. However, if this argument of the MLM promoters is valid, a paradox appears. Why would a distributor who genuinely wants to sell the product and make a profit recruit other individuals who will come to operate in the same market as he does? How can an agent recruit people who could become his competitors, when it is well established that every entrepreneur avoids and even fights competition? This is the type of question this chapter addresses. To explain the paradox, we use the intrinsic structure of MLM organizations. In reality, to be able to sell well, the distributor must recruit: the commissions earned through recruitment confer selling power, in the sense that they allow the recruiter to offer a competitive price for the product he wants to sell. Moreover, MLMs have a structure similar to that of multi-sided markets in the sense of Rochet and Tirole (2003, 2006) and Weyl (2010). Recruitment has an external effect on sales and sales have an external effect on recruitment, and all of this is managed by the promoter of the organization. Thus, if the promoter does not take these externalities into account when setting the various commissions, agents may lean more or less towards recruitment.
Abstract:
This research deals with social change during the postsocialist period in Cluj-Napoca, a Transylvanian city in Romania. Mobilizing an approach framed in terms of social relations to space, the study explores the principles of differentiation both spatially and socially. The concepts of "public space" and "place" allow a multifaceted analysis conducted along four axes: the materiality and visibility of spaces, the public-political sphere, public social life, and individual investments and appropriations. The thesis thus examines the activities that take place in the central public squares, spatial investments, daily rituals and protest demonstrations, and the inhabitants' multiple ethnic and religious attachments. The ethnography of the central public squares of Cluj-Napoca brings to light a "weak classification" of the city's central spaces, reflected in a high degree of social diversity. The ethnicizing markers scattered across Cluj-Napoca refer to ethnic groups, but also to other stakes arising from the restructuring of the political field under postsocialism. In the same vein, ethnic strategies are mobilized to designate new criteria of social differentiation and to redefine old social categories. Omissions, silences and demands for aestheticization reflect the inhabitants' implicit demands to redefine the frameworks of politics. Finally, the thesis shows how public space in Cluj-Napoca during the postsocialist period stems from a continual process of social diversification and of inventing Others through incessant distancing. Public space is not the search for what living together might be, but the quest for what threatens us and must be kept at a distance.
Abstract:
Objectives: This thesis examines the association between the socio-environmental characteristics of neighbourhoods (local areas) and the prevalence of activity limitations (or disability) in the Quebec population. It has three main objectives: (1) to clarify the conceptual and methodological issues involved in studying the socio-environmental determinants of activity limitations; (2) to describe the respective contributions of the socioeconomic composition of neighbourhoods and of contextual factors to the local variability in the prevalence of activity limitations; (3) to assess the presence of interactions between individuals' functional health (impairment) and neighbourhood characteristics in relation to the prevalence of activity limitations. Methods: A review of the scientific literature was carried out for the first objective of the thesis. For the second objective, Quebec data from the 2001 Canadian census (a 20% population sample) were used to estimate the association between the prevalence of activity limitations and neighbourhood characteristics: urban-rural classification, socioeconomic composition (material and social deprivation) and contextual factors (housing quality, residential stability, and use of active and public transportation). For the third objective, data on Quebec's urban population from the Canadian Community Health Survey (2003, 2005 and 2007/2008) were used to test for interactions between individuals' functional health and neighbourhood characteristics (material and social deprivation, housing quality, residential stability and density of services). For the analyses associated with the last two objectives, the correlates of the prevalence of activity limitations were analysed using multilevel logistic regressions (a generic specification of such a model is sketched after this abstract). Results: Various conceptual and operational issues limit the possibility of synthesizing the epidemiological analyses of socio-environmental influences on activity limitations. The results of the empirical analyses suggest that: (1) the geographic variation in the prevalence of activity limitations is largely explained by the socioeconomic composition of neighbourhoods; (2) contextual factors are associated with this geographic variation; (3) relative measures of inequality underestimate the contextual disparities in the distribution of the absolute numbers of people with an activity limitation; and (4) the association between the prevalence of activity limitations and social deprivation may vary with individuals' functional health. Conclusions: Various socio-environmental characteristics are potentially associated with the geographic variations in activity limitations in Quebec. Developing socio-environmental indicators would provide more precise knowledge of the influence of these characteristics on activity limitations and of the mechanisms through which this influence operates. The establishment of a national surveillance system for land-use planning is proposed to support research and decision-making. Local indicators of accessibility to transportation, public spaces and local services should be prioritized. These aspects of land-use planning touch on several public health issues, and they have the further advantage of being included in various Quebec policy orientations targeting healthy aging and the reduction of activity limitations.
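To make the multilevel logistic regression mentioned in the Methods concrete, a generic two-level (individuals within neighbourhoods) random-intercept specification of the kind typically used for such analyses can be written as follows; the notation is an illustrative assumption, not the exact model estimated in the thesis.

\[
\operatorname{logit}\big(\Pr(y_{ij}=1)\big) \;=\; \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + \boldsymbol{\gamma}^{\top}\mathbf{z}_{j} + u_j,
\qquad u_j \sim \mathcal{N}(0,\sigma_u^2),
\]

where \(y_{ij}\) indicates an activity limitation for individual \(i\) in neighbourhood \(j\), \(\mathbf{x}_{ij}\) are individual-level covariates, \(\mathbf{z}_j\) are neighbourhood characteristics such as material and social deprivation, and the random intercept \(u_j\) captures residual between-neighbourhood variation. A cross-level interaction of the kind examined under the third objective would add a term such as \(\delta\,(x_{ij} \times z_j)\) to the linear predictor.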