874 results for strategic performance measurement


Relevance:

30.00%

Publisher:

Abstract:

In recent years, technological advancements in microelectronics and sensor technologies have revolutionized the field of electrical engineering. New manufacturing techniques have enabled a higher level of integration that has combined sensors and electronics into compact and inexpensive systems. Previously, the challenge in measurements was to understand the operation of the electronics and sensors, but this has now changed: the challenge in measurement instrumentation lies in mastering the whole system, not just the electronics. To address this issue, this doctoral dissertation studies whether it is beneficial to consider a measurement system as a whole, from the physical phenomenon to the digital recording device, where each piece of the measurement system affects the system performance, rather than as a system consisting of small independent parts, such as a sensor or an amplifier, that could be designed separately. The objective of the dissertation is to describe in depth the development of a measurement system, taking into account the challenges posed by the electrical and mechanical requirements and the measurement environment. The work is done as an empirical case study of two example applications, both intended for scientific studies: a light-sensitive biological sensor used in imaging, and a gas electron multiplier detector for particle physics. The study showed that in these two cases a number of different parts of the measurement system interacted with each other. Without considering these interactions, the reliability of the measurement may be compromised, which may lead to wrong conclusions about the measurement. For this reason, it is beneficial to conceptualize the measurement system as a whole, from the physical phenomenon to the digital recording device, where each piece of the measurement system affects the system performance. The results serve as examples of how a measurement system can be successfully constructed to support a study of sensors and electronics.

Relevance:

30.00%

Publisher:

Abstract:

Corporate social responsibility, or CSR, is today a widely recognized concept that is gaining popularity extremely rapidly, especially in the business world. The pressure on companies to carry out their business practices in an ethical manner that promotes the wellbeing of the environment and society is coming from all directions and all stakeholders. Alstom, a French multinational conglomerate operating in the rail transport and energy industries, is no exception to this norm. The company, used as the case example in this thesis, is under growing pressure to engage in CSR practices and to conduct its business with high ethics. That CSR is being put on a pedestal is surely not a negative phenomenon – quite the opposite. Instead of practicing CSR only to meet stakeholder requirements through window dressing, many corporations actually strive to benefit from the practice of corporate social business. In addition to benefiting external stakeholders, a corporation such as Alstom can itself benefit from being involved in CSR. The purpose of this thesis is to evaluate the current strategic values and future perspectives of CSR at Alstom, and moreover the added value that the practice of CSR could bring Alstom as a business. A set of perspectives from a futures studies viewpoint is examined, with critical attention to the company’s current corporate practices as well as the CSR-related studies and theories written for corporations. On this basis, solutions and practices are suggested to Alstom so that it can fully utilize the potential of corporate social business and the value it can bring in the most probable futures the company is expected to face. By utilizing the Soft Systems Methodology (SSM), a method mainly used in organizations to solve problematic issues in management and policy contexts, a process is developed to identify improvements that could help Alstom involve CSR in its business practices even more than it currently does. Alstom is already deeply involved in practicing CSR, and its vision places a strong emphasis on this popular concept. In order to stay in the game and to use CSR as a competitive advantage, Alstom ought to embed corporate social practices even deeper in its organizational culture, using them as a tool to reduce risk and costs, increase employee commitment and customer loyalty, and attract socially responsible investors, to name a few. CSR as a concept is seen to have great potential in the future, an opportunity Alstom will not miss.

Relevance:

30.00%

Publisher:

Abstract:

An exchange traded fund (ETF) is a financial instrument that tracks some predetermined index. Since their introduction in 1993, ETFs have grown in importance in the field of passive investing. The main reason for the growth of the ETF industry is that ETFs combine the benefits of stock investing and mutual fund investing. Although ETFs resemble mutual funds in many ways, there are also many differences. In addition, ETFs differ not only from mutual funds but also among each other. ETFs can be divided into two categories, market capitalisation ETFs and fundamental (or strategic) ETFs, and further into subcategories depending on their fundament basis. ETFs are a useful tool for diversification, especially for a long-term investor. Although the economic importance of ETFs has risen drastically during the past 25 years, the differences and risk-return characteristics of fundamental ETFs have remained a rather understudied area. In effect, no previous research comparing market capitalisation and fundamental ETFs was found during the research process. For its part, this thesis seeks to fill this research gap. The studied data consist of 50 market capitalisation ETFs and 50 fundamental ETFs. The fundaments on which the indices tracked by the fundamental ETFs are based were neither limited nor segregated into subsections; the two types of ETFs were studied at an aggregate level as two different research groups. The dataset ranges from June 2006 to December 2014, with 103 monthly observations, and was gathered using Bloomberg Terminal. The analysis was conducted as an econometric performance analysis. In addition to other econometric measures, the performance analysis employed modified Value-at-Risk, the modified Sharpe ratio and the Treynor ratio. The results supported the hypothesis that passive market capitalisation ETFs outperform active fundamental ETFs in terms of risk-adjusted returns, though the difference is rather small. Nevertheless, when taking into account the higher overall trading costs of the fundamental ETFs, the underperformance gap widens. According to the results, market capitalisation ETFs are a recommendable diversification instrument for a long-term investor. In addition to better risk-adjusted returns, passive ETFs are more transparent and the bases of their underlying indices are simpler than those of fundamental ETFs. ETFs are still a young financial innovation and hence data is scarcely available. In future research, it would be valuable to examine the differences in risk-adjusted returns also between the subsections of fundamental ETFs.
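As a hedged illustration of two of the measures named above, the sketch below computes a Cornish-Fisher modified Value-at-Risk and the corresponding modified Sharpe ratio for a single return series. The simulated monthly returns, the 95% level and the risk-free rate are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch: Cornish-Fisher modified VaR and modified Sharpe ratio.
# All inputs are simulated; only the measures themselves come from the abstract.
import numpy as np
from scipy import stats

def modified_var(returns, level=0.95):
    """Cornish-Fisher modified Value-at-Risk of a return series."""
    z = stats.norm.ppf(1 - level)                # normal quantile, e.g. -1.645
    s = stats.skew(returns)                      # sample skewness
    k = stats.kurtosis(returns)                  # excess kurtosis
    z_cf = (z + (z**2 - 1) * s / 6
              + (z**3 - 3 * z) * k / 24
              - (2 * z**3 - 5 * z) * s**2 / 36)  # Cornish-Fisher expansion
    return -(returns.mean() + z_cf * returns.std(ddof=1))

def modified_sharpe(returns, rf=0.0, level=0.95):
    """Mean excess return scaled by modified VaR instead of volatility."""
    return (returns - rf).mean() / modified_var(returns, level)

rng = np.random.default_rng(0)
etf = rng.normal(0.006, 0.04, 103)               # 103 monthly observations, as in the data
print(modified_sharpe(etf, rf=0.002))
```

Because ETF returns are rarely normally distributed, the Cornish-Fisher term adjusts the tail quantile for skewness and excess kurtosis before it enters the denominator of the Sharpe-type ratio.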

Relevance:

30.00%

Publisher:

Abstract:

Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model’s qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data is employed in an empirical study that tests whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation towards riskier assets as the market turns bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
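To make the B–L mechanics concrete, here is a minimal sketch of the standard Black–Litterman posterior expected returns, into which VAR-model forecasts could enter as the views Q. The asset classes, numbers and the value of tau are illustrative assumptions, not the thesis calibration.

```python
# Minimal sketch of the standard Black-Litterman posterior mean.
import numpy as np

def black_litterman(pi, sigma, P, Q, omega, tau=0.05):
    """Blend equilibrium returns pi with views (P, Q) of uncertainty omega."""
    ts_inv = np.linalg.inv(tau * sigma)            # (tau * Sigma)^-1
    om_inv = np.linalg.inv(omega)                  # inverse view uncertainty
    post_prec = ts_inv + P.T @ om_inv @ P          # posterior precision
    return np.linalg.solve(post_prec, ts_inv @ pi + P.T @ om_inv @ Q)

# Toy example: three asset classes, one relative view
pi = np.array([0.02, 0.03, 0.04])                  # equilibrium returns
sigma = np.diag([0.02, 0.03, 0.05])                # toy covariance matrix
P = np.array([[1.0, -1.0, 0.0]])                   # view: asset 1 beats asset 2
Q = np.array([0.005])                              # ...by 0.5%
omega = np.array([[0.001]])                        # confidence in the view
print(black_litterman(pi, sigma, P, Q, omega))
```

The controlled weight adjustment mentioned in the abstract comes from this blending: the posterior shrinks towards the equilibrium returns when the views are uncertain (large omega) and towards the views when they are precise.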

Relevance:

30.00%

Publisher:

Abstract:

Unexpected changes in cash flow have started to occur more frequently after the financial crisis. The capital structures of companies have also changed, and financial flexibility as well as flexible asset management have therefore become more important. This thesis presents financial working capital management, a part of flexible asset management, as a way to gain financial flexibility and survive these changes. The thesis operates at the interface of corporate finance, strategic management and management accounting, and it has two main objectives: to examine financial working capital management and to develop measures of financial working capital. The research has been conducted using archival research and design science. Qualitative comparative analysis and model building are used to formulate tools and strategies for financial working capital management. The tools are tested with simulations, case studies and statistical analysis. The empirical data is collected from companies listed on the Helsinki Stock Exchange. The results indicate that there are several possible financial working capital management strategies. The FOCAL matrix is created in the thesis to assist in the selection of a strategy. The results also imply that profitability can be improved by reducing financial working capital, which creates a need to change the financial working capital management strategy. The financial flow cycle, and a modification of it, is developed in this thesis to measure financial working capital. Financial working capital as a concept is presented with an orientation towards the management view. New dimensions are also added to financial management and working capital management, while providing a holistic approach to financial flexibility. Financial working capital management strategies are presented to managers, and practical tools are provided for decision-making.
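The abstract introduces the financial flow cycle but does not define it, so the sketch below shows only the classic cash conversion cycle that working-capital cycle measures of this kind typically modify; the figures are invented and the formula is standard background, not the thesis's measure.

```python
# Background sketch: classic cash conversion cycle (CCC), a standard
# working-capital cycle measure; the thesis's financial flow cycle is
# a related but distinct construct not specified in the abstract.
def cash_conversion_cycle(inventory, receivables, payables,
                          cogs, sales, days=365):
    """CCC = days inventory + days receivables - days payables."""
    dio = days * inventory / cogs        # days inventory outstanding
    dso = days * receivables / sales     # days sales outstanding
    dpo = days * payables / cogs         # days payables outstanding
    return dio + dso - dpo

# Toy example, all amounts in the same currency unit
print(cash_conversion_cycle(inventory=120, receivables=80,
                            payables=90, cogs=600, sales=900))
```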

Relevance:

30.00%

Publisher:

Abstract:

Researchers have widely recognised and accepted that firm performance is increasingly related to knowledge-based issues. Two separately developed literature streams, intellectual capital (IC) and knowledge management (KM), have become established as the key discussions related to the knowledge-based competitive advantage of the firm. Intellectual capital research has provided evidence on the key strategic intangible resources of the firm, which could be deployed to create competitive advantage. Knowledge management, in turn, has focused on the managerial processes and practices which can be used to leverage IC to create competitive advantage. Despite extensive literature on both issues, some notable research gaps remain to be closed. In effect, one major gap within knowledge management research is the lack of understanding of its influence on firm performance, while IC researchers have articulated a need for more fine-grained conceptual models to better understand the key strategic value-creating resources of the firm. In this dissertation, IC is regarded as the entire intellectual capacity, knowledge and competences of the firm that can be leveraged to achieve sustained competitive advantage. KM practices are defined as organisational and managerial activities that enable the firm to leverage its IC to create value. The objective of this dissertation is to answer the research question: “What is the relationship between intellectual capital, knowledge management practices and firm performance?” Five publications have addressed the research question using different approaches. The first two publications were systematic literature reviews of the extant empirical IC and KM research, which established the current state of understanding of the relationship between IC, KM practices and firm performance. Publications III and IV were empirical research articles that assessed the developed conceptual model relating IC, KM practices and firm performance. Finally, Publication V was among the first research papers to merge the IC and KM disciplines in order to find out which configurations could yield organisational benefits in terms of innovation and market performance outcomes.

Relevance:

30.00%

Publisher:

Abstract:

The ability to monitor and evaluate the consequences of ongoing behaviors and coordinate behavioral adjustments seems to rely on networks including the anterior cingulate cortex (ACC) and phasic changes in dopamine activity. Activity (and presumably functional maturation) of the ACC may be indirectly measured using the error-related negativity (ERN), an event-related potential (ERP) component that is hypothesized to reflect activity of the automatic response monitoring system. To date, no studies have examined the measurement reliability of the ERN as a trait-like measure of response monitoring, its development in mid- and late-adolescence, or its relation to risk-taking and empathic ability, two traits linked to dopaminergic and ACC activity. Utilizing a large sample of 15- and 18-year-old males, the present study examined the test-retest reliability of the ERN, age-related changes in the ERN and other components of the ERP associated with error monitoring (the Pe and CRN), and the relations of the error-related ERP components to the personality traits of risk propensity and empathy. Results indicated good test-retest reliability of the ERN, providing important validation of the ERN as a stable and possibly trait-like electrophysiological correlate of performance monitoring. Of the three components, only the ERN was of greater amplitude for the older adolescents, suggesting that its ACC network is functionally late to mature, due to either structural or neurochemical changes with age. Finally, the ERN was smaller for those with high risk propensity and low empathy, while other components associated with error monitoring were not, which suggests that poor ACC function may be associated with the desire to engage in risky behaviors and that the ERN may be influenced by the extent of individuals' concern with the outcome of events.
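As a minimal sketch of the test-retest analysis described above, the snippet below correlates simulated ERN amplitudes from two sessions. The sample size, the amplitudes and the use of a Pearson correlation (rather than, say, an intraclass correlation) are assumptions, not details reported in the abstract.

```python
# Minimal sketch: test-retest reliability as the correlation between
# ERN amplitudes measured at two sessions (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ern_t1 = rng.normal(-6.0, 2.0, 60)             # session 1 amplitudes (microvolts)
ern_t2 = ern_t1 + rng.normal(0.0, 1.0, 60)     # session 2, correlated with session 1

r, p = stats.pearsonr(ern_t1, ern_t2)          # test-retest correlation
print(f"test-retest r = {r:.2f} (p = {p:.3g})")
```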

Relevance:

30.00%

Publisher:

Abstract:

The implementation of imagery and video feedback programs has become an important tool for aiding athletes in achieving peak performance (Halliwell, 1990). The purpose of the study was to determine the effect of strategic imagery training and video feedback on immediate performance. Participants were two university goaltenders. An alternating treatment design (ATD; Barlow & Hayes, 1979; Tawney & Gast, 1984) was employed. The strategies were investigated using three plays originating from the right side by a right-handed shooting defenceman from the blueline. The baseline condition consisted of six practices and was used to establish a stable and "ideal" measure of performance. The intervention conditions included alternating the use of strategic imagery (Cognitive general; Paivio, 1985) and video feedback. Both participants demonstrated an increase in the frequency of Cognitive general use. Specific and global performance measures were assessed to determine the relative effectiveness of the interventions. Poor inter-rater reliability resulted in the elimination of specific performance measures. Consequently, only the global measure (i.e., save percentage) was used in subsequent analyses. Visual inspection of participant save percentage was conducted to determine the benefits of the intervention. Strategic imagery training resulted in performance improvements for both participants. Video feedback facilitated performance for Participant 2, but not Participant 1. Results are discussed with respect to imagery and video interventions and the challenges associated with applied research. KEYWORDS: imagery, video, goaltenders, alternating treatment design.

Relevance:

30.00%

Publisher:

Abstract:

The present study investigates the usefulness of a multi-method approach to the measurement of reading motivation and achievement. A sample of 127 elementary and middle-school children aged 10 to 14 responded to measures of motivation, attributions, and achievement both longitudinally and in a challenging reading context. Novel measures of motivation and attributions were constructed, validated, and utilized to examine the relationship between motivation, attributions, and achievement over a one-year period (Study I). The impact of classroom contexts and instructional practices was also explored through a study of the influence of topic interest and challenge on motivation, attributions, and persistence (Study II), as well as through interviews with children regarding motivation and reading in the classroom (Study III). Creation and validation of the novel measures supported the use of a self-report measure of motivation in situation-specific contexts, and confirmed a three-factor structure of attributions for reading performance in both hypothetical and situation-specific contexts. A one-year follow-up study of children's motivation and reading achievement demonstrated declines in all components of motivation from age 10 through 12, and particularly strong decreases in motivation with the transition to middle school. Past perceived competence for reading predicted current achievement after controlling for past achievement, and showed the strongest relationships with reading-related skills in both elementary and middle school. Motivation and attributions were strongly related, and children with higher motivation displayed more adaptive attributions for reading success and failure. In the context of a developmentally inappropriate challenging reading task, children's motivation for reading, especially in terms of perceived competence, was threatened. However, interest in the story buffered some of the negative impacts of challenge, sustaining children's motivation, adaptive attributions, and reading persistence. Finally, children's responses during interviews outlined several emotions, perceptions, and aspects of reading tasks and contexts that influence reading motivation and achievement. Findings revealed that children with comparable motivation and achievement profiles respond in a similar way to particular reading situations, such as excessive challenge, but also that motivation is dynamic and individualistic and can change over time and across contexts. Overall, the present study outlines the importance of motivation and adaptive attributions for reading success, and the necessity of integrating various methodologies to study the dynamic construct of achievement motivation.

Relevance:

30.00%

Publisher:

Abstract:

The influence of vine water status was studied in commercial vineyard blocks of Vitis vinifera L. cv. Cabernet Franc in the Niagara Peninsula, Ontario from 2005 to 2007. Vine performance, fruit composition and vine size of non-irrigated grapevines were compared within ten vineyard blocks containing different soil and vine water status. Results showed that within each vineyard block, water status zones could be identified on GIS-generated maps using leaf water potential and soil moisture measurements. Some yield and fruit composition variables correlated with the intensity of vine water status. Chemical and descriptive sensory analysis was performed on nine (2005) and eight (2006) pairs of experimental wines to illustrate differences between wines made from high and low water status winegrapes at each vineyard block. Twelve trained judges evaluated six aroma and flavor attributes (red fruit, black cherry, black currant, black pepper, bell pepper, and green bean), three mouthfeel attributes (astringency, bitterness and acidity), and color intensity. Each pair of high and low water status wines was compared using t-tests. In 2005, low water status (LWS) wines from Buis, Harbour Estate, Henry of Pelham (HOP), and Vieni had higher color intensity; those from Chateau des Charmes (CDC) had high black cherry flavor; those from Reif Estates were high in red fruit flavor; and those from the George site were high in red fruit aroma. In 2006, LWS wines from the George, Cave Spring and Morrison sites were high in color intensity; LWS wines from CDC, George and Morrison were more intense in black cherry aroma; and LWS wines from the Hernder site were high in red fruit aroma and flavor. No significant differences were found from one year to the next between the wines produced from the same vineyard, indicating that the attributes of these wines remained almost constant despite the markedly different conditions of the 2005 and 2006 vintages. Partial Least Squares (PLS) analysis showed that leaf water potential was associated with red fruit aroma and flavor, berry and wine color intensity, total phenols, Brix and anthocyanins, while soil moisture was associated with acidity, green bean aroma and flavor, and bell pepper aroma and flavor. In another study, chemical and descriptive sensory analysis was conducted on nine (2005) and eight (2006) medium water status (MWS) experimental wines to illustrate differences that might support the sub-appellation system in Niagara. The judges evaluated the same aroma, flavor, and mouthfeel sensory attributes as well as color intensity. Data were analyzed using analysis of variance (ANOVA), principal component analysis (PCA) and discriminant analysis (DA). ANOVA of the sensory data showed regional differences for all sensory attributes. In 2005, wines from the CDC, HOP, and Hernder sites showed the highest red fruit aroma and flavor. Wines from the Lakeshore and Niagara River sites (Harbour, Reif, George, and Buis) showed higher bell pepper and green bean aroma and flavor, due to proximity to the large bodies of water and lower heat unit accumulation. In 2006, all sensory attributes except black pepper aroma differed. PCA revealed that wines from the HOP and CDC sites were higher in red fruit, black currant and black cherry aroma and flavor as well as black pepper flavor, while wines from the Hernder, Morrison and George sites were high in green bean aroma and flavor. ANOVA of the chemical data in 2005 indicated that hue, color intensity, and titratable acidity (TA) differed across the sites, while in 2006, hue, color intensity and ethanol differed across the sites. These data indicate the likelihood of substantial chemical and sensory differences between clusters of sub-appellations within the Niagara Peninsula.
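A minimal sketch of the pairwise comparison described above: the twelve judges' ratings of one sensory attribute for the high and low water status wines of a single block, compared with a paired t-test. The ratings are simulated, and treating the test as paired (the same judges rated both wines) is an assumption.

```python
# Minimal sketch: comparing one sensory attribute between the high and
# low water status wines of one vineyard block (simulated ratings).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
high_ws = rng.normal(5.0, 1.0, 12)       # 12 trained judges, high water status wine
low_ws = rng.normal(6.0, 1.0, 12)        # the same judges, low water status wine

t, p = stats.ttest_rel(high_ws, low_ws)  # paired t-test across judges
print(f"t = {t:.2f}, p = {p:.3f}")
```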

Relevance:

30.00%

Publisher:

Abstract:

Left ventricular hypertrophy (LVH) is an adaptive and compensatory process that develops in response to arterial hypertension to counter the chronic elevation of blood pressure. LVH is characterized by hypertrophy of the cardiomyocytes following increased DNA synthesis, proliferation of fibroblasts, increased collagen deposition and alteration of the extracellular matrix (ECM). These changes impair relaxation and lead to diastolic dysfunction, which reduces cardiac performance. Overactivity of the sympathetic nervous system (SNS) plays an essential role in the development of arterial hypertension and LVH because of the excessive release of catecholamines and their effects on the secretion of pro-inflammatory cytokines and on various hypertrophic and proliferative signalling pathways. Antihypertensive treatment with moxonidine, a centrally acting sympatholytic compound, produces regression of LVH through a sustained reduction in DNA synthesis and a transient stimulation of DNA fragmentation that occurs at the beginning of treatment. Given the interaction between LVH, inflammatory cytokines, the SNS and their effects on hypertrophic signalling proteins, the objective of this study was to detect, in an animal model of arterial hypertension and LVH, the signalling pathways associated with the regression of LVH and with cardiac performance. Spontaneously hypertensive rats (SHR, 12 weeks old) received moxonidine at 0, 100 and 400 µg/kg/h for periods of 1 and 4 weeks via subcutaneously implanted osmotic mini-pumps. After 4 weeks of treatment, cardiac performance was measured by echo-Doppler. The rats were then euthanized, blood was collected to measure plasma cytokine concentrations, and the hearts were harvested for histological determination of collagen deposition and of the expression of signalling proteins in the left ventricle. The 4-week treatment had no effect on systolic parameters but improved diastolic parameters as well as overall cardiac performance. Compared with vehicle, moxonidine (400 µg/kg/h) transiently increased the plasma concentration of IL-1β after one week and reduced left ventricular mass. Likewise, a decrease in collagen deposition and in the plasma concentrations of the cytokines IL-6 and TNF-α was observed, as well as decreased phosphorylation of p38 and Akt in the left ventricle after 1 and 4 weeks of treatment, together with a reduction in blood pressure and heart rate. Interestingly, the anti-hypertrophic, anti-fibrotic and anti-inflammatory effects of moxonidine could be observed at the sub-hypotensive dose (100 µg/kg/h). These results suggest beneficial cardiovascular effects of moxonidine associated with improved cardiac performance and with regulation of inflammation through lower plasma levels of pro-inflammatory cytokines and inhibition of p38 MAPK and Akt, and they allow us to suggest that, beyond SNS inhibition, moxonidine may act on peripheral sites.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is the development of a logic model for measuring values maintenance, and its operationalization in order to undertake the evaluation of health system performance. Values maintenance is one of the four functions of T. Parsons' theory of social action used to analyse action systems; the other functions are adaptation, production and goal attainment. This theory is the basis of the EGIPSS model (global and integrated evaluation of health system performance), within which this thesis is situated. The function studied corresponds, in Parsons' work, to the cultural subsystem. It refers to the intangible, that is, to the symbolic universe through which action takes on meaning and the functions of the system are articulated. The logic model for measuring values maintenance is structured around two main concepts: individual and organizational values, and quality of work life. Through individual and organizational values, we measure the hierarchy and intensity of values, as well as the level of inter-individual concordance and the degree of congruence between individual and organizational values. Quality of work life comprises several concepts for analysing and evaluating the work environment, organizational climate, job satisfaction, behavioural reactions and employee health. The measurement of these different aspects led to the design of three questionnaires and thirty indicators. This thesis therefore presents each of the selected concepts and their articulations, as well as the measurement tools constructed to evaluate the values-maintenance dimension. Finally, we present an example of the operationalization of this measurement model applied to two hospitals in the state of Mato Grosso do Sul, Brazil. The thesis concludes with a reflection on the use of evaluation as a management tool supporting performance improvement and accountability. The project involved a twofold challenge: first, conceptualizing the values-maintenance dimension from an abundant literature lacking theoretical integration, and then creating measurement tools capable of capturing both the objective and subjective aspects of values and of quality of work life. Indeed, the literature of many disciplines and theoretical currents, such as industrial and organizational psychology, sociology, nursing science, organizational behaviour theories and organization theory, has produced models to analyse and understand human perceptions, attitudes and behaviours in organizations. The scientific interest of this project thus stems from the creation of a dynamic and integrative model offering a synthesis of the different theoretical fields addressing the interaction between individual and collective perceptions at work, the objective conditions of work, and their influence on attitudes and behaviours at work. The project also has an operational interest, since it aims to provide health system decision-makers with knowledge and data concerning an aspect of performance strongly neglected by most international performance evaluation models.

Relevance:

30.00%

Publisher:

Abstract:

In operation since 2008, the ATLAS experiment is the largest of all the experiments at the LHC. The ATLAS-MPX (MPX) detectors installed in ATLAS are based on the Medipix2 silicon pixel detector, developed by the Medipix collaboration at CERN for real-time imaging. MPX detectors can be used to measure luminosity. They were installed at sixteen different locations in the ATLAS experimental and technical areas in 2008. The MPX network successfully collected data independently of the ATLAS data-recording chain from 2008 to 2013. Each MPX detector provides measurements of the integrated luminosity of the LHC. This thesis describes the calibration method for the absolute luminosity measured with the MPX detectors and the performance of the MPX detectors on the 2012 luminosity data. A luminosity calibration constant was determined. The calibration is based on the van der Meer (vdM) technique, which allows the measurement of the size of the two overlapping beams in the vertical and horizontal planes at the ATLAS interaction point (IP1). Determining the absolute luminosity requires precise knowledge of the beam intensities and of the number of bunch trains. The three calibration scans were analysed, and the results obtained with the MPX detectors were compared with those of the other ATLAS detectors dedicated specifically to luminosity measurement. The luminosity obtained from the vdM scans was compared with the luminosity of proton-proton collisions before and after the vdM scans. The MPX detector network provides reliable information for the luminosity determination of the ATLAS experiment over a wide range (from 5 × 10^29 cm⁻² s⁻¹ to 7 × 10^33 cm⁻² s⁻¹).
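For reference, a sketch of the standard van der Meer relation behind such calibrations (the thesis's exact formulation is not quoted in the abstract): the absolute luminosity follows from the bunch intensities N_1 and N_2, the number of colliding bunch pairs n_b, the revolution frequency f_r, and the convolved beam widths Σ_x and Σ_y extracted from the horizontal and vertical scans.

```latex
% Standard vdM-scan luminosity relation (background, not quoted in the abstract)
\mathcal{L} \;=\; \frac{n_b \, f_r \, N_1 N_2}{2\pi \, \Sigma_x \Sigma_y}
```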

Relevance:

30.00%

Publisher:

Abstract:

Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of budget use on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly to one another, and considerably worse than disattenuated regression.
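A minimal sketch of classical disattenuation (Spearman's correction), the core of the method the article revisits; the extension to interaction terms is developed in the article itself, and the reliabilities below are invented for illustration.

```python
# Minimal sketch: disattenuating a correlation and a simple-regression
# slope for measurement error, given reliability estimates.
import numpy as np

def disattenuate_corr(r_xy, rel_x, rel_y):
    """Correct a correlation for measurement error in both variables."""
    return r_xy / np.sqrt(rel_x * rel_y)

def disattenuate_slope(b_obs, rel_x):
    """Correct a simple-regression slope for error in the predictor."""
    return b_obs / rel_x

print(disattenuate_corr(0.30, rel_x=0.80, rel_y=0.75))  # approx. 0.387
print(disattenuate_slope(0.25, rel_x=0.80))             # 0.3125
```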

Relevance:

30.00%

Publisher:

Abstract:

Although the concept of wisdom has been widely studied by experts in fields such as philosophy, religion and psychology, it still faces limitations regarding its definition and assessment. The aim of this work is therefore to formulate a definition of the concept of wisdom that allows a proposal for assessing the concept as a competency in managers. To this end, a qualitative documentary analysis was carried out. Various texts on the history, definitions and methodologies for assessing both wisdom and competencies were analysed, differentiating wisdom from other constructs and examining the difference between general and managerial competencies, in order subsequently to define wisdom as a managerial competency. As a result of this analysis, a test prototype called SAPIENS-O was produced, through which the aim is to assess wisdom as a managerial competency. The scope of the instrument includes the possibility of measuring wisdom as a competency in managers, of offering a new perspective on the theoretical and empirical difficulties surrounding wisdom, and of facilitating the study of wisdom in real settings, more specifically in organizational settings.