932 results for HIGH-QUALITY-FACTOR


Relevance: 90.00%

Publisher:

Abstract:

BACKGROUND: The Advisa MRI system is designed to safely undergo magnetic resonance imaging (MRI). Its influence on image quality is not well known. OBJECTIVE: To evaluate cardiac magnetic resonance (CMR) image quality and to characterize myocardial contraction patterns by using the Advisa MRI system. METHODS: In this international trial with 35 participating centers, an Advisa MRI system was implanted in 263 patients. Of those, 177 were randomized to the MRI group and 150 underwent MRI scans at the 9-12-week visit. Left ventricular (LV) and right ventricular (RV) cine long-axis steady-state free precession MR images were graded for quality. Signal loss along the implantable pulse generator and leads was measured. The tagging CMR data quality was assessed as the percentage of trackable tagging points on complementary spatial modulation of magnetization acquisitions (n = 16) and segmental circumferential fiber shortening was quantified. RESULTS: Of all cine long-axis steady-state free precession acquisitions, 95% of LV and 98% of RV acquisitions were of diagnostic quality, with 84% and 93%, respectively, being of good or excellent quality. Tagging points were trackable from systole into early diastole (360-648 ms after the R-wave) in all segments. During RV pacing, tagging demonstrated a dyssynchronous contraction pattern, which was not observed in nonpaced (n = 4) and right atrial-paced (n = 8) patients. CONCLUSIONS: In the Advisa MRI study, high-quality CMR images for the assessment of cardiac anatomy and function were obtained in most patients with an implantable pacing system. In addition, this study demonstrated the feasibility of acquiring tagging data to study the LV function during pacing.

Relevance: 90.00%

Publisher:

Abstract:

Summary This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22), with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies incline a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examining the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005) and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, being oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as a leader are ultimately able to tap the innovation potential of stakeholder dialogue.

Relevance: 90.00%

Publisher:

Abstract:

This paper presents a probabilistic approach to modelling power supply voltage fluctuations. Error probability calculations are shown for some 90-nm technology digital circuits. The analysis treats the timing-violation error probability as a new design quality factor, in contrast with conventional techniques that assume a fully error-free circuit. The evaluation of the error bound can be useful for new design paradigms where retry and self-recovering techniques are applied to the design of high-performance processors. The method described here makes it possible to evaluate the performance of these techniques by calculating the expected error probability in terms of power supply distribution quality.
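As a rough illustration of the kind of calculation the abstract describes, the sketch below estimates a timing-violation probability by Monte Carlo, assuming a simplified alpha-power delay law and a Gaussian supply-voltage distribution; all parameter values (V_NOM, SIGMA_V, V_TH, ALPHA, T_CLK) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative parameters for a 90-nm-class circuit (assumed, not from the paper).
V_NOM = 1.2      # nominal supply voltage [V]
SIGMA_V = 0.05   # std. dev. of supply-voltage fluctuations [V]
V_TH = 0.35      # device threshold voltage [V]
ALPHA = 1.3      # alpha-power-law exponent
T_CLK = 1.05     # clock period, normalised: 5% margin over the nominal path delay

rng = np.random.default_rng(0)

def path_delay(v):
    """Critical-path delay vs. supply voltage (alpha-power law),
    normalised so that path_delay(V_NOM) == 1."""
    return (v / (v - V_TH) ** ALPHA) / (V_NOM / (V_NOM - V_TH) ** ALPHA)

# Monte Carlo estimate of P(path delay exceeds the clock period).
v = rng.normal(V_NOM, SIGMA_V, size=1_000_000)
v = np.clip(v, V_TH + 0.05, None)  # keep the delay model valid
p_error = np.mean(path_delay(v) > T_CLK)
print(f"estimated timing-violation probability: {p_error:.2e}")
```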

Relevance: 90.00%

Publisher:

Abstract:

High reflectivity to laser light and evaporation of alloying elements during high-power laser welding make aluminium alloys highly susceptible to weld defects such as porosity, cracking and undercutting. The dynamic behaviour of the keyhole, caused by the fluctuating plasma above the keyhole and the vaporization of the alloying elements within the keyhole, is the key problem to be solved for the improvement of weld quality, and stabilization of the keyhole dynamics is perhaps the single most important development that could broaden the application of laser welding of aluminium alloys. In laser welding, a shielding gas is commonly used to stabilize the welding process, to improve the welded joint features and to protect the weld seam from oxidation. The chemical composition of the shielding gas is a key factor in achieving the final quality of the welded joints. A wide range of shielding gases, from pure gases to complex mixtures based on helium, argon, nitrogen and carbon dioxide, is commercially available, and these gas mixtures should be assessed for their suitability for producing quality welds in laser welding of aluminium alloys. The main objective of the present work is to study the effect of the shielding gas composition during laser welding of aluminium alloys. Aluminium alloy Al 5754 was welded using a 3 kW Nd:YAG laser (continuous wave mode). The alloy samples were butt welded with different shielding gases (pure gases and gas mixtures) so that high-quality welds with high joint efficiencies could be produced. It was observed that the chemical composition of the gases influenced the final weld quality and properties. In general, gas mixtures in the correct proportions made better use of the properties of their constituent gases, stabilized the welding process and produced better weld quality than the pure shielding gases.

Relevance: 90.00%

Publisher:

Abstract:

The study examines current risk-management reporting practice among publicly listed banks in Poland. It is divided into two parts. The first part introduces banking, banking risks and risk management. Banking risks comprise credit and market risks, as well as operational and environmental risks. The second part investigates and describes the risk management and risk-management reporting of the target companies; the study also aims to compare the banks' risk-management reporting with one another. The subjects are 13 banks listed on the Warsaw Stock Exchange, and the research material consists of these banks' annual reports for 2005. This is a qualitative case study with a descriptive and explanatory approach. The material was analysed using pattern matching, in which the elements/indicators of risk-management reporting found in the material are compared with expected models. Based on the risk-management reports examined, the core banking risks - credit, interest-rate, currency and liquidity risks - are reported well. By contrast, there are shortcomings in the reporting of operational and environmental risks. Most banks report on operational risks, but the reporting is superficial and lacks analysis. Reporting on environmental risks is uncommon. The scope and informativeness of reporting vary between banks: large international banking groups report on their risks broadly and informatively, whereas for several of the smaller national banks reporting remains narrow. There are many reasons for these differences. One is the unestablished use of IFRS standards at the smaller national banks compared with the international banking groups: international groups are better prepared to report on their risk management than the smaller banks, which published their financial statements in accordance with IFRS for the first time in 2005. Ownership base can also be mentioned as an explanatory factor: the importance of investor information is emphasised particularly in organisations with a broad international ownership base, whereas in a state-owned company it plays a smaller role. Corporate culture likewise affects the extent and content of the information a company discloses. Banks are also careful about their reputation - about what information can be published and how its publication affects the corporate image. Bank size, by contrast, does not necessarily affect the scope and informativeness of risk-management reporting.

Relevance: 90.00%

Publisher:

Abstract:

An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I. air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines a penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks are different for the opposite directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that format the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
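For illustration, here is a minimal sketch of the common-midpoint binning step with the anisotropic bin size quoted above (1.25 m in-line by 3.75 m cross-line). It assumes coordinates have already been rotated into the survey's in-line/cross-line frame; the function name is hypothetical.

```python
# Bin dimensions from the survey design [m].
BIN_INLINE, BIN_XLINE = 1.25, 3.75

def cmp_bin(source_xy, receiver_xy):
    """Assign a source-receiver pair to an (in-line, cross-line) bin
    via its common midpoint."""
    mid_x = 0.5 * (source_xy[0] + receiver_xy[0])  # in-line coordinate
    mid_y = 0.5 * (source_xy[1] + receiver_xy[1])  # cross-line coordinate
    return int(mid_x // BIN_INLINE), int(mid_y // BIN_XLINE)

# Example: a shot at the origin recorded 50 m down-line on an outer streamer.
print(cmp_bin((0.0, 0.0), (50.0, 7.5)))  # -> (20, 1)
```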
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that reflect off geological discontinuities at different depths and travel back up to the surface, where they are recorded. The signals collected in this way provide information not only on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection profiles were acquired along lines that provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not account for the three-dimensional character of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought new impetus to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only rare studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum exploration, but adapted to lakes: smaller in size, lighter to deploy and, above all, delivering final images of much higher resolution. Whereas the petroleum industry often works at a resolution on the order of ten metres, the instrument developed in this work resolves details on the order of one metre. The new system rests on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was developed specifically to control navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This allows the instruments to be positioned with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the 'La Paudèze' fault zone, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images, applying a 3-D processing sequence specially adapted to our data, in particular as regards positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glacio-lacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent quality of the data and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in the fields of the environment and civil engineering.

Relevance: 90.00%

Publisher:

Abstract:

Pulsewidth-modulated (PWM) rectifier technology is increasingly used in industrial applications like variable-speed motor drives, since it offers several desired features such as sinusoidal input currents, controllable power factor, bidirectional power flow and high-quality DC output voltage. To achieve these features, however, an effective control system with fast and accurate current and DC voltage responses is required. Of the various control strategies proposed to meet these control objectives, in most cases the commonly known principle of synchronous-frame current vector control along with some space-vector PWM scheme has been applied. Recently, however, new control approaches analogous to the well-established direct torque control (DTC) method for electrical machines have also emerged to implement a high-performance PWM rectifier. In this thesis the concepts of classical synchronous-frame current control and DTC-based PWM rectifier control are combined and a new converter-flux-based current control (CFCC) scheme is introduced. To achieve sufficient dynamic performance and to ensure stable operation, the proposed control system is thoroughly analysed and simple rules for the controller design are suggested. Special attention is paid to the estimation of the converter flux, which is the key element of converter-flux-based control. Discrete-time implementation is also discussed. Line-voltage-sensorless reactive power control methods for the L- and LCL-type line filters are presented. For the L-filter, an open-loop control law for the d-axis current reference is proposed. In the case of the LCL-filter, combined open-loop and feedback control is proposed. The influence of erroneous filter parameter estimates on the accuracy of the developed control schemes is also discussed. A new zero-vector selection rule for suppressing the zero-sequence current in parallel-connected PWM rectifiers is proposed. This method allows truly standalone and independent control of the converter units, avoiding traditional transformer isolation and synchronised-control-based solutions; the implementation requires only one additional current sensor. The proposed schemes are evaluated by simulations and laboratory experiments. Satisfactory performance and good agreement between theory and practice are demonstrated.
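As a minimal sketch of the estimation problem named above: the converter flux is the time integral of the converter voltage, and a common practical variant replaces the pure integrator with a first-order low-pass filter so that measurement offsets do not make the estimate drift. The discretisation and all numeric values below are illustrative assumptions, not the design from the thesis.

```python
import numpy as np

TS = 100e-6            # sampling period [s] (assumed)
W_C = 2 * np.pi * 5.0  # filter corner frequency [rad/s], well below 50 Hz

def update_flux(psi, u_conv):
    """One forward-Euler step of the drift-compensated integrator:
    dpsi/dt = u_conv - W_C * psi."""
    return psi + TS * (u_conv - W_C * psi)

# Example: estimate the alpha-axis flux for a 50 Hz, 325 V-peak converter voltage.
t = np.arange(0.0, 0.1, TS)
u_alpha = 325.0 * np.cos(2 * np.pi * 50.0 * t)
psi = 0.0
for u_k in u_alpha:
    psi = update_flux(psi, u_k)
print(f"flux estimate after 0.1 s: {psi:.3f} Vs")
```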

Relevance: 90.00%

Publisher:

Abstract:

OBJECTIVES: Given recent progress, musculoskeletal ultrasound (US) will probably soon be integrated into the standard care of patients with rheumatoid arthritis (RA). In daily care, however, the quality of US machines and the level of experience of sonographers vary. We conducted a study to assess the reproducibility and feasibility of a US score for RA, including US devices of different quality and rheumatologists with various levels of expertise in US, as would be the case in daily care. METHODS: The Swiss Sonography in Arthritis and Rheumatism (SONAR) group has developed a semi-quantitative score using OMERACT criteria for synovitis and erosion in RA. The score was taught to 108 rheumatologists trained in US. One year after the last workshop, 19 rheumatologists participated in the study. Scans were performed on 6 US machines ranging from low to high quality, each with a different patient. Weighted kappa was calculated for each pair of readers. RESULTS: Overall, the agreement was fair to moderate. Quality of device, experience of the sonographers and practice of the score before the study substantially improved the agreement. Agreement assessed on a higher-quality machine, among sonographers with good experience in US, increased to substantial (median kappa for B-mode and Doppler: 0.64, and 0.41 for erosion). CONCLUSIONS: This study demonstrated the feasibility and reproducibility of the Swiss US SONAR score for RA. Our results confirmed the importance of the quality of the US machine and the training of sonographers for the implementation of US scoring in the routine daily care of RA.
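For readers unfamiliar with the statistic, weighted kappa penalises large disagreements on an ordinal scale more than small ones, which suits semi-quantitative grades such as these. A minimal sketch with invented grades, using scikit-learn's cohen_kappa_score:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical semi-quantitative grades (0-3) assigned by two readers to the
# same ten joints; the data are invented for illustration only.
reader_a = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
reader_b = [0, 1, 1, 3, 2, 2, 0, 2, 2, 1]

# Linear weights penalise a 0-vs-3 disagreement three times as much as 0-vs-1.
kappa = cohen_kappa_score(reader_a, reader_b, weights="linear")
print(f"linear weighted kappa = {kappa:.2f}")
```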

Relevance: 90.00%

Publisher:

Abstract:

Rapid development in recent years has accelerated the process of developing new drugs. Combinatorial chemistry has made it possible to synthesize large collections of structurally diverse molecules, so-called combinatorial libraries, for biological screening, in which the structure-related activity of the molecules is examined in a range of different biological assays to find potential "hits", some of which may later be developed into new drug substances. For the results of biological studies to be reliable, the synthesized compounds must be as pure as possible. High-throughput (HTP) purification is therefore needed to guarantee high-quality compounds and reliable biological data. Continuously growing throughput requirements have led to the automation and parallelization of these purification techniques. Preparative LC/MS is well suited to the fast and efficient purification of combinatorial libraries. Many factors, such as the properties of the separation column and the flow gradient, affect the efficiency of the preparative LC/MS purification process, and these parameters must be optimized for the best result. In this work, basic compounds were studied under different flow conditions. A method for determining the purity level of combinatorial libraries after LC/MS purification was optimized, and the purity of some compounds from different libraries was determined before purification.

Relevance: 90.00%

Publisher:

Abstract:

The aim of the thesis was to identify the criteria by which external auditors assess the quality of internal audit before deciding whether to rely on the work performed by internal audit. This was investigated by means of a survey. The study also sought to characterize the quality of internal auditing in Finland. The purpose of mapping the factors affecting the quality assessment, and the quality itself, is to clarify the process of assessing internal audit quality and to support the continuous improvement of that quality. The factors affecting the quality assessment were analysed in the light of three criteria derived from professional standards: objectivity, competence and quality of work. According to the auditors, these criteria in practice carry equal weight in the quality assessment. The criteria were therefore broken down into components, and the true effects of the criteria were analysed through the importance of these components. Auditors appear to place the greatest weight on factors related to practical operation, such as unrestricted rights of access to information and to audit, clear reporting, and knowledge of the business and professional experience. Objectivity and competence thus appear to influence the quality assessment more than the quality of work. The auditors assessed the quality of internal auditing in Finland to be at a fairly good level, especially in listed companies.

Relevance: 90.00%

Publisher:

Abstract:

The aim of the thesis is to identify the factors that municipalities must take into account when procuring welfare and support services. The thesis examines the factors underlying public service procurement and compares them with the business-economics framework of procurement strategy. In the light of the results, the timeliness of creating a procurement strategy for public service procurement is also discussed. The theoretical framework is based on earlier literature and academic research on procurement strategy and the make-or-buy decision. The empirical part of the study consists of two cases, investigated through personal interviews: the procurement and competitive tendering of welfare services in Espoo and of support services in Kotka. Given the starting points and framework of the study, both welfare and support services were chosen as subjects; further perspective was gained by selecting two different municipalities, which also suited the objectives of the study's commissioner. The results show that the municipalities in the case study have no procurement strategy guiding the procurement of welfare and support services. The business-economics factors influencing procurement strategy are considered applicable to public service procurement as well. In the comparison between the two cases, procurement planning stood out as a factor affecting the success of the procurement process. The changing role of municipalities, from producer of services to their organiser and developer, requires closer cooperation with private-sector service providers. Combining the competences of both service producers provides a framework for developing high-quality welfare and support services.

Relevance: 90.00%

Publisher:

Abstract:

BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, use of SDM in clinical practice remains low. High-impact-factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high-impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor that publish original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with a logarithmic link function was used to assess the evolution of the number of SDM publications across the period according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase in research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119, respectively). CONCLUSION: This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community.
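To make the modelling step concrete, here is a minimal sketch of a polynomial Poisson regression with a log link on annual publication counts, using statsmodels. Only the 1996 and 2011 totals (46 and 165) come from the abstract; the intermediate counts are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(1996, 2012)
# Annual SDM publication counts: endpoints from the abstract, the rest invented.
counts = np.array([46, 50, 55, 60, 66, 72, 79, 86,
                   94, 102, 111, 121, 131, 142, 153, 165])

x = years - years.mean()                           # centre the year covariate
X = sm.add_constant(np.column_stack([x, x ** 2]))  # linear + quadratic terms

# The Poisson family uses the log link by default, so a significant linear
# term corresponds to exponential growth on the original count scale.
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```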

Relevance: 90.00%

Publisher:

Abstract:

OBJECTIVE: This study analyzes symptom perception by parents and healthcare professionals and the quality of symptom management in a pediatric palliative home care setting and identifies which factors contribute to a high quality of palliative and end-of-life care for children. METHODS: In this retrospective, cross-sectional study, parents were surveyed at the earliest three months after their child's death. All children were cared for by a specialized home pediatric palliative care team that provides a 24/7 medical on-call service. Questionnaires assessed symptom prevalence and intensity during the child's last month of life as perceived by parents, symptom perception, and treatment by medical staff. The responses were correlated with essential palliative care outcome measures (e.g., satisfaction with the care provided, quality-of-life of affected children and parents, and peacefulness of the dying phase). RESULTS: Thirty-eight parent dyads participated (return rate 84%; 35% oncological disorders). According to parental report, dyspnea (61%) and pain (58%) were the dominant symptoms with an overall high symptom load (83%). Pain, agitation, and seizures could be treated more successfully than other symptoms. Successful symptom perception was achieved in most cases and predicted the quality of symptom treatment (R² = 0.612). Concordant assessment of symptom severity between parents and healthcare professionals (HCPs) improved the satisfaction with the care provided (p = 0.037) as well as the parental quality-of-life (p = 0.041). Even in cases with unsuccessful symptom control, parents were very satisfied with the SHPPC team's care (median 10; numeric rating scale 0-10) and rated the child's death as highly peaceful (median 9). SIGNIFICANCE OF THE RESULTS: The quality and the concordance of symptom perception between parents and HCPs essentially influence parental quality-of-life as well as parental satisfaction and constitute a predictive factor for the quality of symptom treatment and palliative care.

Relevance: 90.00%

Publisher:

Abstract:

Coastal birds are an integral part of coastal ecosystems, which nowadays are subject to severe environmental pressures. Effective measures for the management and conservation of seabirds and their habitats call for insight into their population processes and the factors affecting their distribution and abundance. Central to national and international management and conservation measures is the availability of accurate data and information on bird populations, as well as on environmental trends and on measures taken to solve environmental problems. In this thesis I address different aspects of the occurrence, abundance, population trends and breeding success of waterbirds breeding on the Finnish coast of the Baltic Sea, and discuss the implications of the results for seabird monitoring, management and conservation. In addition, I assess the position and prospects of coastal bird monitoring data, in the processing and dissemination of biodiversity data and information in accordance with the Convention on Biological Diversity (CBD) and other national and international commitments. I show that important factors for seabird habitat selection are island area and elevation, water depth, shore openness, and the composition of island cover habitats. Habitat preferences are species-specific, with certain similarities within species groups. The occurrence of the colonial Arctic Tern (Sterna paradisaea) is partly affected by different habitat characteristics than its abundance. Using long-term bird monitoring data, I show that eutrophication and winter severity have reduced the populations of several Finnish seabird species. A major demographic factor through which environmental changes influence bird populations is breeding success. Breeding success can function as a more rapid indicator of sublethal environmental impacts than population trends, particularly for long-lived and slow-breeding species, and should therefore be included in coastal bird monitoring schemes. Among my target species, local breeding success can be shown to affect the populations of the Mallard (Anas platyrhynchos), the Eider (Somateria mollissima) and the Goosander (Mergus merganser) after a time lag corresponding to their species-specific recruitment age. For some of the target species, the number of individuals in late summer can be used as an easier and more cost-effective indicator of breeding success than brood counts. My results highlight that the interpretation and application of habitat and population studies require solid background knowledge of the ecology of the target species. In addition, the special characteristics of coastal birds, their habitats, and coastal bird monitoring data have to be considered in the assessment of their distribution and population trends. According to the results, the relationships between the occurrence, abundance and population trends of coastal birds and environmental factors can be quantitatively assessed using multivariate modelling and model selection. Spatial data sets widely available in Finland can be utilised in the calculation of several variables that are relevant to the habitat selection of Finnish coastal species. Concerning some habitat characteristics field work is still required, due to a lack of remotely sensed data or the low resolution of readily available data in relation to the fine scale of the habitat patches in the archipelago.
While long-term data sets exist for water quality and weather, the lack of data concerning, for instance, the food resources of birds hampers more detailed studies of environmental effects on bird populations. Intensive studies of coastal bird species in different archipelago areas should be encouraged. The provision and free delivery of high-quality coastal data concerning bird populations and their habitats would greatly increase the capability of ecological modelling, as well as the management and conservation of coastal environments and communities. International initiatives that promote open spatial data infrastructures and sharing are therefore highly regarded. To function effectively, international information networks, such as the biodiversity Clearing House Mechanism (CHM) under the CBD, need to be rooted at regional and local levels. Attention should also be paid to the processing of data for higher levels of the information hierarchy, so that data are synthesized and developed into high-quality knowledge applicable to management and conservation.
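As an illustration of the multivariate modelling and model selection mentioned above, the sketch below fits Poisson regressions of breeding-pair counts on simulated island covariates and ranks candidate models by AIC. The variable names merely echo the habitat factors discussed; all data are simulated, and nothing here reproduces the thesis analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
islands = pd.DataFrame({
    "area": rng.lognormal(0.0, 1.0, n),    # island area
    "elevation": rng.gamma(2.0, 1.5, n),   # island elevation
    "depth": rng.gamma(2.0, 2.0, n),       # surrounding water depth
    "openness": rng.uniform(0.0, 1.0, n),  # shore openness
})
# Simulated truth: abundance depends on area and elevation only.
lam = np.exp(0.3 + 0.4 * np.log(islands["area"]) + 0.2 * islands["elevation"])
islands["pairs"] = rng.poisson(lam)

candidates = [
    "pairs ~ np.log(area)",
    "pairs ~ np.log(area) + elevation",
    "pairs ~ np.log(area) + elevation + depth + openness",
]
fits = [(f, smf.poisson(f, data=islands).fit(disp=0)) for f in candidates]
for formula, res in sorted(fits, key=lambda t: t[1].aic):
    print(f"AIC = {res.aic:9.1f}  {formula}")
```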

Relevance: 90.00%

Publisher:

Abstract:

The possibility and usefulness of applying plasma keyhole welding to structural steels with different compositions and material thicknesses, and in various welding positions, have been examined. Single-pass butt welding with an I groove in the flat, horizontal-vertical and vertical positions, and root welding with V, Y and U grooves of thick plate material in the flat position, have been studied, and welds of high quality have been obtained. The technological conditions for successful welding are presented. The single and interactive effects of the welding parameters on weld quality, especially on surface weld defects, geometrical form errors, internal defects and the mechanical properties (strength, ductility, impact toughness, hardness and bendability) of the weld joint, are presented. Welding parameter combinations providing the best-quality welds are also presented.