876 results for Achievable Benchmarks
Abstract:
The aim of the study was to examine the extent to which the benefits sought in the merger have been achieved at Kymenlaakson Osuuspankki. The work examines the successes achieved as well as possible challenges and failures, emphasizing the perspectives of the board of directors, the CEO, and the chair of the supervisory board. The research method was personal semi-structured interviews. The interviews provided an overall picture not only of the successes and challenges but also of the underlying factors behind them, which should always be taken into account in a merger process. The results showed that the objectives have for the most part been met well, considering the timing of the merger and the economic operating environment. However, it was also clear that much remains achievable in many areas. This is explained by the still rather short operating history since the merger. The challenges encountered clearly include challenges and setbacks also observed in earlier studies. These are difficult to take into account comprehensively, as a merger is a complex process involving many different parties and implementers. The results nevertheless show that even when it was not possible to fully prepare for the challenges in advance, they were met with very rapid responses when encountered, and this has partly reinforced the results achieved.
Abstract:
According to the International Atomic Energy Agency (IAEA), a relatively large number of radiological accidents have occurred in recent years, mainly in practices classified as potentially high-risk activities, such as radiotherapy, large irradiators and industrial radiography, especially gammagraphy assays. In some instances, severe injuries have occurred in exposed persons due to high radiation doses. In industrial radiography, 80 cases involving a total of 120 radiation workers and 110 members of the public, including 12 deaths, have been recorded up to 2014. Radiological accidents in industrial practices in Brazil have mainly resulted in the development of cutaneous radiation syndrome (CRS) in hands and fingers. Brazilian data include 5 serious cases related to industrial gammagraphy, affecting 7 radiation workers and 19 members of the public; none of them were fatal. Several methods of reconstructive dosimetry have been used to estimate the radiation dose in order to assist in prescribing medical treatment. The type and progression of cutaneous manifestations in the exposed areas of a person provide the first achievable gross estimate of the dose. This review article presents state-of-the-art reconstructive dosimetry methods enabling estimation of local radiation doses and provides guidelines for the medical handling of exposed individuals. The review also presents Chilean and Brazilian radiological accident cases to highlight the importance of reconstructive dosimetry.
Abstract:
Sales and operations research publications have increased significantly in the last decades. The concept of sales and operations planning (S&OP) has gained increased recognition and has been put forward as a key area within Supply Chain Management (SCM). The development of S&OP is based on the need to determine future actions for both sales and operations, since off-shoring, outsourcing, complex supply chains and extended lead times make it challenging to respond to changes in the marketplace as they occur. The case company's order intake has grown rapidly in recent years. Along with this growth, new challenges in data management and information flow have arisen from the increasing number of customer orders. To manage these challenges, the case company has implemented an S&OP process; however, the process is still at an early stage and therefore does not handle the increased customer orders adequately. The objective of the thesis is to explore the content of the case company's S&OP process extensively and to give further recommendations. The objectives are categorized into six groups to clarify the purpose of the thesis. The qualitative research methods used are active participant observation, qualitative interviews, an enquiry, education, and a workshop. Notably, demand planning was felt to be cumbersome; it is typically the biggest challenge in an S&OP process. The more proactive the sales forecasting, the longer the time horizon of operational planning can be. The S&OP process is 60 percent change management, 30 percent process development, and 10 percent technology. Change management and continuous improvement can sometimes be arduous and treated as secondary. It is important that different people are involved in improving the process and that the process is evaluated constantly. Likewise, process governance plays a central role and has to be managed consciously. Overall, the S&OP process was seen as important and all stakeholders were committed to it. Particular sections were experienced as more important than others, depending on the stakeholders' points of view. Recommendations for the objective groups are evaluated by achievable benefit and resource requirement. Urgent and easily implemented improvement recommendations should be executed first. The next steps are to develop a more coherent process structure and refine cost awareness. After that, demand planning, supply planning, and reporting should be developed further. Finally, an information technology system should be implemented to support the process phases.
Abstract:
The construction of offshore structures, equipment and devices requires a high level of mechanical reliability in terms of strength, toughness and ductility. One major site of mechanical failure, the weld joint region, needs particularly careful examination, and weld joint quality has become a major focus of research in recent times. Underwater welding carried out offshore faces specific challenges affecting the mechanical reliability of constructions completed underwater. The focus of this thesis is on improving the weld quality of underwater welding using control theory. The research identifies ways of optimizing the welding process parameters of flux cored arc welding (FCAW) during underwater welding so as to achieve the desired weld bead geometry when welding in a water environment. The weld bead geometry has no known linear relationship with the welding process parameters, which makes it difficult to attain a satisfactory weld quality. However, good weld bead geometry is achievable by controlling the welding process parameters. The doctoral dissertation comprises two sections. The first part introduces the topic of the research, discusses the mechanisms of underwater welding and examines the effect of the water environment on the weld quality of wet welding. The second part comprises four research papers examining different aspects of underwater wet welding and its control and optimization. Issues considered include the effects of welding process parameters on weld bead geometry, optimization of FCAW process parameters, and the design of a control system for achieving a desired bead geometry that can ensure a high level of mechanical reliability in welded joints of offshore structures. Artificial neural network systems and a fuzzy logic controller, which are incorporated in the control system design, and a hybrid of fuzzy and PID controllers are the major control dynamics used. This study contributes to knowledge of possible solutions for achieving weld quality in underwater wet welding similar to that obtained when welding in air. The study shows that carefully selected steels with a very low carbon equivalent and proper control of the welding process parameters are essential for achieving good weld quality. The study provides a platform for further research in underwater welding. It promotes increased awareness of the need to improve the quality of underwater welding for offshore industries and thus minimize the risk of structural defects resulting from poor weld quality.
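As an illustration of the control-theoretic idea summarized above, the sketch below shows a plain discrete PID loop nudging a single FCAW parameter toward a target bead width. It is only a minimal sketch under assumed values: the process model `predict_bead_width`, the gains and the targets are hypothetical placeholders, not the dissertation's neural network models or its hybrid fuzzy-PID controller.

```python
# Minimal sketch (not the dissertation's controller): a discrete PID loop that
# nudges one FCAW process parameter (e.g. wire feed speed) so that a predicted
# bead width approaches a target value. The process model and all gains/targets
# below are hypothetical placeholders.

def predict_bead_width(wire_feed_speed: float) -> float:
    """Stand-in process model; in the dissertation such a model is learned by
    artificial neural networks."""
    return 0.8 * wire_feed_speed ** 0.5  # arbitrary illustrative relationship

def pid_tune(target_width: float, wfs: float, kp=0.6, ki=0.1, kd=0.05,
             dt=1.0, steps=50) -> float:
    integral, prev_error = 0.0, 0.0
    for _ in range(steps):
        error = target_width - predict_bead_width(wfs)
        integral += error * dt
        derivative = (error - prev_error) / dt
        # Apply the control action; clamp to keep the parameter physically meaningful.
        wfs = max(0.0, wfs + kp * error + ki * integral + kd * derivative)
        prev_error = error
    return wfs

if __name__ == "__main__":
    print(f"suggested wire feed speed: {pid_tune(target_width=6.0, wfs=4.0):.2f}")
```

In the dissertation the plant behaviour is modelled with artificial neural networks and the controller combines fuzzy and PID logic, but the loop structure is the same: measure the deviation of the predicted bead geometry from its target and correct the process parameters accordingly.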
Abstract:
Optical microscopy is living its renaissance. The diffraction limit, although still physically true, plays a minor role in the achievable resolution in far-field fluorescence microscopy. Super-resolution techniques enable fluorescence microscopy at nearly molecular resolution. Modern (super-resolution) microscopy methods rely strongly on software. Software tools are needed all the way from data acquisition, data storage, image reconstruction, restoration and alignment, to quantitative image analysis and image visualization. These tools play a key role in all aspects of microscopy today, and their importance in the coming years is certain to increase as microscopy transitions little by little from single cells to more complex and even living model systems. In this thesis, a series of bioimage informatics software tools are introduced for STED super-resolution microscopy. Tomographic reconstruction software, coupled with a novel image acquisition method (STED<), is shown to enable axial (3D) super-resolution imaging in a standard 2D-STED microscope. Software tools are introduced for STED super-resolution correlative imaging with transmission electron microscopes or atomic force microscopes. A novel method for automatically ranking image quality within microscope image datasets is introduced and is utilized, for example, to select the best images in a STED microscope image dataset.
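The abstract does not specify the metric behind the automatic image-quality ranking; as a rough illustration of the idea only, the hypothetical sketch below ranks images by a simple sharpness proxy (variance of the Laplacian). It is not the thesis's method.

```python
# Minimal sketch, not the thesis's actual ranking method: order a set of images
# by a simple sharpness proxy (variance of the Laplacian). Real quality ranking
# for STED data would combine several statistics; this is only illustrative.
import numpy as np
from scipy import ndimage

def sharpness_score(image: np.ndarray) -> float:
    """Higher variance of the Laplacian loosely indicates more fine detail."""
    return float(np.var(ndimage.laplace(image.astype(np.float64))))

def rank_images(images: list[np.ndarray]) -> list[int]:
    """Return image indices sorted from highest to lowest score."""
    scores = [sharpness_score(img) for img in images]
    return sorted(range(len(images)), key=lambda i: scores[i], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blurry = ndimage.gaussian_filter(rng.random((64, 64)), sigma=3)
    sharp = rng.random((64, 64))
    print(rank_images([blurry, sharp]))  # expected: [1, 0]
```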
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but they also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground but still need to be studied carefully. This thesis provides a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed income data are employed in an empirical study that examines whether a B–L model based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a vector autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) are divided into two parts: the in-sample data are used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L model based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting the allocation to riskier assets as the market turns bullish, without overweighting high-beta investments. Based on the findings of the thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
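For reference, the textbook Black–Litterman posterior mean that such an application builds on can be sketched in a few lines. The numbers below are purely illustrative, and in the thesis the views fed into the model would come from the VAR forecasts rather than being written by hand.

```python
# Minimal sketch of the standard Black-Litterman posterior mean (not the thesis's
# exact calibration): equilibrium returns `pi` are blended with view returns `q`
# expressed through a pick matrix `P`. All numbers below are illustrative only.
import numpy as np

def black_litterman_posterior(pi, Sigma, P, q, Omega, tau=0.05):
    """E[R] = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]."""
    ts_inv = np.linalg.inv(tau * Sigma)
    Om_inv = np.linalg.inv(Omega)
    A = ts_inv + P.T @ Om_inv @ P
    b = ts_inv @ pi + P.T @ Om_inv @ q
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    pi = np.array([0.02, 0.03, 0.04])                 # equilibrium expected returns
    Sigma = np.array([[0.0016, 0.0006, 0.0004],
                      [0.0006, 0.0025, 0.0010],
                      [0.0004, 0.0010, 0.0036]])      # return covariance matrix
    P = np.array([[1.0, -1.0, 0.0]])                  # one relative view
    q = np.array([0.01])                              # asset 1 beats asset 2 by 1%
    Omega = np.array([[0.0001]])                      # uncertainty of the view
    print(black_litterman_posterior(pi, Sigma, P, q, Omega))
```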
Abstract:
Is the pace of the fall in inequality in Brazil acceptable? Evidence from the historical and international context. The following study uses two approaches to answer the question of whether inequality in Brazil is falling fast enough. The first is to compare the variation of the Gini coefficient in Brazil with what was observed in several countries that today belong to the OECD (United Kingdom, United States, Netherlands, Sweden, France, Norway, and Spain) while these same countries built their social welfare systems during the last century. The second is to calculate for how long Brazil must keep up the fall in the Gini coefficient to attain the inequality levels of three OECD countries that can be used as references: Mexico, the United States, and Canada. The data indicate that the Gini coefficient in Brazil is falling by 0.7 point per year, which is faster than the pace of all the OECD countries analyzed while they built their welfare systems except Spain, whose Gini fell by 0.9 point per year during the 1950s. The time needed to attain the various inequality benchmarks is 6 years for Mexico's level, 12 for the United States', and 24 for Canada's. The general conclusion is that the speed with which inequality is falling is adequate, but the challenge will be to keep inequality falling at the same rate for another two or three decades.
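The years-to-benchmark figures follow from a simple division of the Gini gap by the annual fall of 0.7 point. A back-of-the-envelope sketch of that arithmetic is shown below; the Gini levels are hypothetical placeholders chosen only to reproduce the abstract's 6-, 12- and 24-year figures, not the study's data.

```python
# Back-of-the-envelope sketch of the calculation behind the abstract: years to
# reach a benchmark = (Brazil's Gini - benchmark Gini) / annual fall. The rate
# of 0.7 Gini point per year is from the abstract; the Gini levels themselves
# are hypothetical placeholders, not the study's data.
ANNUAL_FALL = 0.7  # Gini points per year (from the abstract)

gini_brazil = 55.0  # hypothetical, on a 0-100 scale
benchmarks = {"Mexico": 50.8, "United States": 46.6, "Canada": 38.2}  # hypothetical

for country, gini in benchmarks.items():
    years = (gini_brazil - gini) / ANNUAL_FALL
    print(f"{country}: about {years:.0f} years")
```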
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of the structures, and hence valuable information about the structural relationships and the geometrical and morphological aspects of the specimen. The lateral and axial resolutions achievable by confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction theorem. Any aberrations and imperfections present during imaging result in broadening of the calculated theoretical resolution, blurring and geometrical distortions in the acquired images that interfere with the analysis of the structures, and reduced fluorescence collected from the specimen. The aberrations may have different causes and can be classified by their sources, such as specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement. The goal of the thesis was approached from two directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to define the proper setting of the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that different numerical apertures must be used depending on the purpose of the imaging task. The deformed beam cross section of the single-photon excitation source was corrected, and the resulting enhancement of resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of image enhancement with deconvolution techniques. Although deconvolution algorithms are widely used to improve image quality, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system supplied to the algorithm and on its accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained with each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
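Since the second part of the thesis hinges on deconvolution with an accurate PSF, a minimal sketch of a standard Richardson–Lucy deconvolution with scikit-image is given below. It assumes a synthetic Gaussian PSF as a stand-in for a measured or in situ PSF and is not the thesis's actual pipeline.

```python
# Minimal sketch (not the thesis's pipeline): Richardson-Lucy deconvolution of a
# blurred image with a known PSF, using scikit-image. The key point echoed from
# the thesis is that the quality of the PSF estimate (ideally an in situ PSF)
# largely determines how well such an algorithm restores the image.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(1)
truth = np.zeros((64, 64))
truth[16, 16] = truth[40, 45] = 1.0           # two point-like emitters

# Build a simple Gaussian PSF as a stand-in for a measured/in situ PSF.
psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2.0)
psf /= psf.sum()

blurred = gaussian_filter(truth, sigma=2.0) + 0.001 * rng.random(truth.shape)
restored = richardson_lucy(blurred, psf, 30)  # 30 iterations
print(restored.shape)
```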
Abstract:
This study investigates instructors' perceptions of reading instruction and difficulties among Language Instruction for Newcomers to Canada (LINC) Level 1-3 learners. Statistics Canada reports that 60% of immigrants possess inadequate literacy skills. Newcomers are placed in classes using the Canadian Language Benchmarks, but large, mixed-level classes create little opportunity for individualized instruction, leading some clients to demonstrate little change in their reading benchmarks. Data were collected (via demographic questionnaires, semi-structured interviews, teaching plans, and field study notes) to create a case study of five LINC instructors' perceptions of why some clients do not progress through the LINC reading levels as expected and how their previous experiences relate to those within the LINC program. Qualitative analyses of the data revealed three primary themes, each containing a number of subthemes: client/instructor background and classroom needs; reading strategies, methods and challenges; and assessment expectations and progress. A comparison between the themes and the literature highlighted six areas for discussion: (a) some clients, specifically refugees, require more time to progress to higher benchmarks; (b) clients' level of prior education can be indicative of their literacy skills; (c) clients with literacy needs should be separated and placed into literacy-specific classes; (d) evidence-based approaches to reading instruction were not always evident in participants' responses, demonstrating a lack of knowledge about these approaches; (e) first language literacy influences second language reading acquisition through a transfer of skills; and (f) collaboration in the classroom supports learning by extending clients' capabilities. These points form the basis of recommendations about how reading instruction might be improved for such clients.
Abstract:
This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
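As background for readers unfamiliar with the setup, the sketch below shows only the basic ingredients the note builds on: realized variance computed from intraday returns as a feasible but noisy benchmark, and a mean-squared-error loss comparing a forecast against it. The note's actual model-free adjustment for the measurement error in the proxy is not reproduced here.

```python
# Sketch of the basic ingredients only (not the note's adjustment procedure):
# realized variance from intraday returns serves as a feasible, noisy benchmark
# for the latent variance, and a loss such as MSE compares a forecast against it.
import numpy as np

def realized_variance(intraday_returns: np.ndarray) -> float:
    """Sum of squared intraday returns over one day."""
    return float(np.sum(intraday_returns ** 2))

def mse_loss(forecasts: np.ndarray, realized: np.ndarray) -> float:
    return float(np.mean((forecasts - realized) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true_daily_var = 1e-4
    # 78 five-minute returns per day, 250 days (illustrative numbers only)
    returns = rng.normal(0.0, np.sqrt(true_daily_var / 78), size=(250, 78))
    rv = np.apply_along_axis(realized_variance, 1, returns)
    naive_forecast = np.full(250, true_daily_var)
    print(f"MSE against noisy RV proxy: {mse_loss(naive_forecast, rv):.3e}")
```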
Abstract:
Lung cancer has one of the highest incidence and fatality rates of all cancers diagnosed in Canada. Given the severity of the prognosis and symptoms of the disease, access to treatment as quickly as possible is essential. Despite the commitment of the federal and provincial governments to reduce wait times, benchmarks for cancer treatment wait times have still not been established. Moreover, the reporting of wait-time indicators is not uniform across the provinces. One of the solutions proposed for reducing cancer treatment wait times is interdisciplinary teams. I completed an audit of the interdisciplinary lung cancer program at the Hôpital général juif (HGJ) from 2004 to 2007. The primary objectives of the study were: (1) to audit the performance of the interdisciplinary team at the HGJ with respect to wait times for critical intervals and patient subgroups; (2) to compare wait times along the clinical pathway of patients treated at the HGJ with existing benchmarks; and (3) to determine the factors associated with longer delays in this population. A secondary objective of the study was to suggest measures to reduce wait times. The clinical service at the HGJ was evaluated against the benchmarks proposed by the British Thoracic Society and Cancer Care Ontario, and against the pan-Canadian benchmark for radiotherapy. HGJ patients experienced a median delay of 9 days for the "Ready to treat to first treatment" interval and a median delay of 30 days for the interval between first contact with the hospital and first treatment. Patients over 65 years of age, patients with reduced physical capacity, and patients with limited-stage tumours were more at risk of missing the wait-time benchmarks.
Abstract:
Over the last decade, technological developments in radiotherapy have considerably transformed treatment techniques. New nonstandard beams improve dose conformity to target volumes but also make dosimetry procedures more complex. Since recent studies have demonstrated that current protocols are not valid for nonstandard beams, a new protocol applicable to the reference dosimetry of these beams is being prepared by the IAEA-AAPM. The first aim of this study is to characterize the factors responsible for non-unity corrections in nonstandard beam dosimetry and thereby provide conceptual solutions to minimize the magnitude of the corrections proposed in the new IAEA-AAPM formalism. The second aim is to construct methods for estimating uncertainties accurately in nonstandard dosimetry and to evaluate the realistic uncertainty levels that can be obtained in clinical situations. The results of the study show that reporting the dose to the sensitive volume of the water-filled chamber reduces the correction by about half under high dose gradients. A theoretical relation between the correction factor of ideal nonstandard fields and the gradient factor of the reference field is obtained. In radiochromic film dosimetry, uncertainty levels of the order of 0.3% are obtained through the application of a strict procedure, which demonstrates potential interest for nonstandard beam measurements. The results also suggest that the experimental uncertainties of nonstandard beams must be taken seriously, whether during daily verification procedures or during calibration procedures. Moreover, these uncertainties could be a limiting factor in the new generation of protocols.
Abstract:
This master's thesis addresses the contribution that distance education by radio (EADR) could make to reducing the failure rate in the first-part baccalauréat examinations in Haiti. Specifically, we sought to develop a distance-education-by-radio (EADR) program to help all baccalauréat candidates in the Rhéto classes prepare for the first-part bac examinations. In Haiti, distance education (EAD) is practically absent. Many countries that have faced situations similar to Haiti's have set up open and distance learning (FOAD) programs to complement classroom-based, face-to-face teaching. The complexity of the situation requires considering various avenues to lift the Haitian education system out of this lethargy. While distance education cannot be considered the only way forward, it is nevertheless relevant and desirable to consider it as a significant alternative (Lubérisse, 2003). Drawing on the main conditions for the effectiveness of FOAD defined by Karsenti (2003), R&D-type theoretical principles (Van der Maren, 2003?), and the theoretical framework of the needs-assessment methodology of Chagnon and Paquette (Institut universitaire des Centres Jeunesses de Montréal, 2005) and of Roegiers, Wouters & Gerard (1992), we developed the broad outlines of this EADR program and submitted them to twelve (12) key actors of the Haitian education system (4 students, 3 parents, 2 teachers and 3 specialists in radio or EADR). This draft program mainly comprised the objectives of the EADR program, the teaching method, the content and the implementation conditions. From the data collected through individual interviews, it emerges that implementing an EADR program could benefit the Haitian education system, particularly the results of the official first-part baccalauréat examinations. The twelve participants in our research believe that the objective pursued by the EADR program is entirely achievable and important for the target audience. While noting the positive aspects of the proposed EADR program, the participants called for certain changes to the teaching method, the content and the implementation conditions. Thus, the recommendations made by the twelve key actors of the education system, combined with our literature review, allowed us to develop a new, revised EADR program that can serve as a basis for a broader evaluation.
Émiliano Renaud (1875-1932): Quebec's first piano virtuoso: performer, pedagogue, composer
Abstract:
The full version of this thesis is available for individual consultation only at the Music Library of the Université de Montréal (http://www.bib.umontreal.ca/MU).
Abstract:
Analyzing code makes it possible to verify its functionality, detect bugs, or improve its performance. Code analysis can be static or dynamic. Approaches combining the two analyses are better suited to industrial-scale applications, where using either approach alone cannot provide the desired results. Combined approaches apply dynamic analysis to determine the problematic portions of the code and then perform a static analysis focused on the identified parts. However, existing dynamic analysis tools generate imprecise or incomplete data, or cause an unacceptable slowdown in execution time. In this work, we focus on the generation of complete dynamic call graphs as well as other information needed to detect problematic portions of the code. To this end, we use dynamic Java bytecode instrumentation to extract information about call sites and object creation sites and to build the program's dynamic call graph. We demonstrate that it is possible to dynamically profile a complete execution of an application with non-trivial running time and to extract all of this information at a reasonable cost. Performance measurements of our profiler on three benchmark suites with diverse workloads showed that the average profiling overhead ranges between 2.01 and 6.42. Our tool for generating complete dynamic call graphs, named dyko, also constitutes an extensible platform for adding new instrumentation approaches. We tested a new technique for instrumenting object creation sites that adapts the modifications made by the instrumentation to the bytecode of each method. We also tested the impact of call site resolution on the overall performance of the profiler.
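The thesis instruments Java bytecode; purely as an illustration of the underlying idea of assembling a dynamic call graph from runtime call events, the sketch below records caller-to-callee edges in Python using sys.setprofile. It is not the dyko tool and does not model bytecode instrumentation or its overhead.

```python
# Illustration of the general idea only (the thesis instruments Java bytecode;
# this is not the dyko tool): record caller -> callee edges from runtime call
# events to build a dynamic call graph. In Python the events come from
# sys.setprofile rather than from bytecode instrumentation.
import sys
from collections import defaultdict

call_graph = defaultdict(set)  # caller name -> set of callee names

def _profiler(frame, event, arg):
    if event == "call" and frame.f_back is not None:
        caller = frame.f_back.f_code.co_name
        callee = frame.f_code.co_name
        call_graph[caller].add(callee)

def helper():
    return sum(range(10))

def work():
    return helper() + helper()

if __name__ == "__main__":
    sys.setprofile(_profiler)
    work()
    sys.setprofile(None)
    for caller, callees in call_graph.items():
        print(f"{caller} -> {sorted(callees)}")
```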