Abstract:
The purpose of the thesis is to examine how the learning process forms and is carried out, and the role of process management and processes in an SAP implementation project. The case organization is Kespro Oy, and the object of study is the SAP implementation project in Kespro's wholesale outlets. A quantitative questionnaire was used as the research and data collection method. The study produced a clear picture of the role of process management and processes in a change situation, and of how the learning process forms and is carried out. The importance of support from management and supervisors, together with successful communication of the strategic guidelines, showed up in practical work through the functioning of the processes and the perceived significance of one's own work. An important challenge in securing competence during the rollout was to ensure that the training and user instructions were sufficient in both level and content. During and after the rollout, mutual cooperation and assistance between key users and staff were central to knowledge sharing. Effective knowledge sharing made the rollout a success. The rollout was also essentially affected by the IT perspective: the users' ability to distinguish between their own competence and technical problems. Taking these factors into account enabled a smooth start-up of the SAP ERP system in Kespro's wholesale outlets with a minimum of operational problems.
Abstract:
This thesis examines the price dynamics of the linkage factors used in natural gas pricing and their effect on natural gas price formation. The main objective is to assess the suitability of different time series methods for forecasting the linkage factors. This was done by analyzing the properties of different models and methods and matching them to the particular features of price formation for the various energy forms. The source data used in the thesis come from the database of Gasum Oy. Natural gas pricing uses three linkage factors with the following weights: heavy fuel oil 50%, index E40 30%, and coal 20%. The price data for coal and heavy fuel oil consist of tax-free, dollar-denominated monthly averages for the period 1 January 1997 to 31 October 2004. The data for index E40, a sub-index of the domestic basic price index that describes the development of energy producer prices in Finland, consist of monthly values published by Statistics Finland for the period 1 January 2000 to 31 October 2004. The forecasting power of the models examined turned out to be weak. Nevertheless, the results indicate that over the short term the EWMA model gave the least biased forecast. The other models tested could not produce sufficiently reliable and accurate forecasts. Traditional time series analysis was able to identify the seasonal variations and trends of the series. In addition, the moving average method proved somewhat useful for identifying short-term trends in the series.
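To make the pricing mechanism concrete, the following minimal sketch applies the linkage weights quoted above and produces a one-step EWMA forecast for one of the factors. The series values and the smoothing parameter are illustrative assumptions; the abstract does not specify how the model was parameterized.

```python
# Minimal sketch of the indexation formula and a one-step EWMA forecast.
# Weights follow the abstract (HFO 50 %, E40 30 %, coal 20 %); the series
# and the smoothing parameter lam are illustrative assumptions.

def gas_price(hfo: float, e40: float, coal: float) -> float:
    """Weighted index price of natural gas from its three linkage factors."""
    return 0.5 * hfo + 0.3 * e40 + 0.2 * coal

def ewma_forecast(series: list[float], lam: float = 0.94) -> float:
    """One-step-ahead EWMA forecast: recent observations weigh more."""
    forecast = series[0]
    for x in series[1:]:
        forecast = lam * forecast + (1.0 - lam) * x
    return forecast

if __name__ == "__main__":
    hfo_monthly = [180.0, 175.5, 190.2, 185.0, 192.3]  # hypothetical USD/t averages
    print(ewma_forecast(hfo_monthly))                  # forecast for the next month
    print(gas_price(hfo=190.0, e40=108.5, coal=62.0))  # indexed gas price
```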
Abstract:
The main objective of the thesis is to create a comprehensive performance measurement system for the Teemuaho Group as part of the organization's strategic control. The model emphasizes management's strategic control, and the aim is to identify the success factors critical to strategy execution, whose performance is then measured. A comprehensive measurement system embedded in the organization's strategic control is a good financial management tool for the organization's leadership. Company managers need a clear and easy-to-use tool that supports their own work and points the direction, and at the same time a measurement system that reports how well targets have been met. With strategic control, managers can plan alternative future strategies and assess how well the current strategy has succeeded. Strategic control systems make it possible to steer operations effectively in the direction the strategy indicates.
Abstract:
The purpose of the thesis was to define and assess core competence and strategic competence. The aim was to clarify the views of the organization's top and middle management on what strategic capability is and, in connection with the competence assessment, to lay the groundwork for developing the staff's strategic competences. The case organization was Finnsteve Oy, Finland's second-largest port operator, active in the ports of Helsinki, Turku, and Kotka. Data were collected with a questionnaire combined with qualitative theme interviews. A group-work method was used to define the core competences. The research problems were examined against both the resource-based view and core competence theory. The study produced a clear picture of how the definition of core competences can be carried out in practice in a company. Because the company provides port services, speed of response, efficiency, flexibility, and timely information for customers are vitally important to every service event. If the company wants to keep its reputation as a flexible operator that reacts quickly to customer wishes, it must be able to encourage its staff to innovate. At their best, innovations lead to process innovations and thereby to improved internal efficiency and customer satisfaction.
Abstract:
Developing staff and maintaining their competence are becoming ever more important to a company's success. Coaching is one of the newest and increasingly popular staff development methods, yet it is still poorly known. The aim of this study is to clarify what coaching is and what benefits it can deliver. The case company is Suomen Posti Oyj. The study is qualitative and was conducted using data triangulation. The results were gathered through a questionnaire survey and interviews, after which the data sets were summarized and compared both with each other and with the literature. The results indicate that coaching is an individually tailored staff development method that proceeds through dialogue. Among other things, it can strengthen individuals' professional identity and make working practices more effective. The company, in turn, sees the benefits of coaching in improved quality of operations and in staff commitment and motivation.
Abstract:
The main objective of the thesis was to establish how non-current assets held for sale and discontinued operations are treated in financial statements under IFRS. The theoretical part examined the content of IFRS 5, the standard governing the matter, and discussed the adoption and basic principles of IFRS at a general level. The empirical part examined how the standard was applied in the financial statements of nine Finnish companies, and the experiences of three companies with applying the standard were surveyed through interviews. The empirical part also includes a bookkeeping example, based on the case company SOK Corporation, of the financial statement treatment of non-current assets held for sale. The study is a qualitative case study with a normative research approach. It was found that, as a rule, the information required by the standard can be found in the financial statements, but there are company-specific differences in how it is treated. Companies found applying the standard somewhat challenging, but with experience a uniform interpretation of the standard's content and presentation of information is likely to emerge.
Abstract:
The objective of the study is to analyze the processes of Neste Oil's retail sales in Finland on the basis of working-time use, and on this basis to identify activities and processes in need of streamlining. The study builds on an industry project in which activity-based costing was used to compile, by questionnaire, a database of working-time use for locating bottlenecks and wasted resources. Activity-based management and the theory of constraints serve as alternative background tools for streamlining processes and activities. According to this study, the industry comprises nine different main processes, divided into 23 activity groups to which the person-years were allocated. Of these, customer relationship management and a unified view of the customer across the company are the areas whose improvement makes it possible to compete in one of the most competitive industries in the world. In addition, the dependence of all processes on IT management forces the company to invest ever more in its efficiency and reliability in order to avoid wasting resources throughout the company.
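As an illustration of the allocation step described above, here is a minimal sketch in which the person-years of a process are simply distributed over its activities in proportion to reported time shares; the activity names and figures are hypothetical, since the abstract does not disclose the actual data.

```python
# Minimal sketch of activity-based allocation of person-years:
# total FTEs are spread over activities by their reported time shares.
# Activity names and numbers are hypothetical.

def allocate_person_years(total_fte: float,
                          time_shares: dict[str, float]) -> dict[str, float]:
    """Distribute total person-years across activities by time share."""
    total_share = sum(time_shares.values())
    return {activity: total_fte * share / total_share
            for activity, share in time_shares.items()}

shares = {"customer relationship management": 0.35,
          "site operations support": 0.45,
          "IT administration": 0.20}
print(allocate_person_years(120.0, shares))
```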
Abstract:
The thesis is a qualitative interview study whose aim is to examine the use and potential of HR professional services (consulting, outsourcing, and interim management) in the SME sector. The study also looks at the business of the commissioning case company, Virvo, in the market for HR professional services and at how Virvo can respond to these challenges. To map the market for HR professional services, four HR professionals were interviewed. On this basis, 15 SME managing directors were interviewed about their use of HR professional services. The interviews were semi-structured, using prepared question templates. The study found that the market for HR professional services is only just taking shape and that the field is quite new. The HR challenges of SMEs are best met with HR professional services that form clear packages and are sensibly priced. Practical execution and freeing up time for the core business, for example, were seen as important. Use of the services is most pronounced in various situations of change, such as a company's growth phase. It can also be concluded that the state of the business affects the use of HR professional services.
Abstract:
VALOSADE is a research project of Professor Anita Lukka's VALORE research team at Lappeenranta University of Technology. VALOSADE is part of the ELO technology programme of Tekes. SMILE is one of the four subprojects of VALOSADE. The SMILE study focuses on a company network composed of small and micro-sized mechanical maintenance service providers and forest industry companies as their large-scale customers. The basic theme of the SMILE study is communication and ebusiness in supply and demand networks. The aim of the study is to develop an ebusiness strategy, an ebusiness model, and e-processes among the SME local service providers and, on the other hand, between the local service provider network and the forest industry customers in the maintenance and operations service business. A literature review, interviews, and benchmarking are used as research methods in this qualitative case study. The first SMILE report, 'Ebusiness between Global Company and Its Local SME Supplier Network', created the background for the SMILE study by examining general trends of ebusiness in the supply chains and networks of different industries. This second phase of the study concentrates on the case network background, such as business relationships, information systems, and business objectives; the core processes in the maintenance and operations service network; development needs in communication among the network participants; and ICT solutions that respond to needs in a changing environment. In the theory part of the report, different ebusiness models and frameworks are introduced. These models and frameworks are compared with the empirical case data, and from this analysis recommendations for the development of the network information system are derived. In a process industry such as the forest industry, it is crucial to achieve a high level of operational efficiency and reliability, which places great demands on maintenance and operations. Therefore, partnerships or strategic alliances are needed between the network participants. In partnerships and alliances, deep communication is important, and therefore the information systems in the network are also critical. Communication, coordination, and collaboration will increase in the case network in the future, because network resources must be optimized to improve the competitive capability of the forest industry customers and the efficiency of their service providers. At present, ebusiness systems are not common in this maintenance network. A network information system shared by the forest industry customers and their local service providers is in fact the only genuine network information system in the whole network; however, its utilization has been quite insignificant, and the current system does not add enough value either to the customers or to the local service providers. At present the network information system is an infomediary that shares static information with the network partners. It should become a transaction intermediary that integrates the internal processes of the network companies; a network information system that provides common standardized processes for the local service providers; and an infomediary that shares static and dynamic information at the right time, with the right partner, at the right cost, in the right format, and of the right quality. This study provides recommendations on how to develop the system in the future so that it adds value to the network companies.
Ebusiness scenarios, vision, objectives, strategies, application architecture, ebusiness model, core processes, and development strategy must be considered when the network information system is developed in the next step. The core processes in the case network are demand/capacity management, customer/supplier relationship management, service delivery management, knowledge management, and cash flow management. Most benefits from ebusiness solutions come from making operational-level processes, such as service delivery management and cash flow management, electronic.
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds, and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail-line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone, perpendicular to the in-line direction. The data from Survey I show some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack, and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse, and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.
Seismic reflection is a method of investigating the subsurface with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected from geological discontinuities at different depths and then return to the surface, where they are recorded. The signals collected in this way provide information not only on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, their possible deformations or fractures, and hence their tectonic history. Seismic reflection is the principal method of oil exploration. For a long time, seismic reflection data were acquired along individual lines that provide a two-dimensional image of the subsurface. Images obtained in this way are only partly accurate, since they do not take into account the three-dimensional nature of geological structures. In recent decades, three-dimensional (3-D) seismics has breathed new life into the study of the subsurface. While it is now fully mastered for imaging large geological structures both on land and at sea, its adaptation to the scale of lakes and rivers has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore oil prospecting, but adapted to lakes: smaller, easier to deploy, and above all delivering final images of much higher resolution. While the oil industry is often limited to a resolution of the order of ten metres, the instrument developed in this work can resolve details of the order of one metre. The new system rests on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with a precision of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the 'La Paudèze' fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images, using a 3-D processing sequence specially adapted to our data, particularly as regards positioning. After processing, the data reveal several main seismic facies, corresponding in particular to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent quality of the data and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in the fields of the environment and civil engineering.
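The quoted bin dimensions and vertical resolution follow directly from the acquisition geometry; the short sketch below reproduces the arithmetic. The quarter-wavelength resolution criterion and the ~330 Hz dominant frequency are assumptions for illustration; the spacings come from the text.

```python
# Back-of-the-envelope check of the acquisition geometry quoted above.
# The quarter-wavelength criterion and the 330 Hz dominant frequency are
# illustrative assumptions; the spacings are from the text.

RECEIVER_SPACING = 2.5     # m, along each streamer
STREAMER_SEPARATION = 7.5  # m, held by the two retractable booms

inline_bin = RECEIVER_SPACING / 2        # CMP spacing halves the receiver spacing
crossline_bin = STREAMER_SEPARATION / 2  # likewise across the three streamers

def vertical_resolution(velocity_ms: float, dominant_freq_hz: float) -> float:
    """Quarter-wavelength estimate of the thinnest resolvable layer."""
    return velocity_ms / (4.0 * dominant_freq_hz)

print(inline_bin, crossline_bin)                     # 1.25 m and 3.75 m, as quoted
print(round(vertical_resolution(1450.0, 330.0), 2))  # ~1.1 m in the soft sediments
```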
Abstract:
The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the 1990s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when the only active operator is selection) induced by a regular bi-dimensional structure of the population, proposing a logistic model of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in mono- and bi-dimensional regular structures, respectively. These models are extended to describe the process when asynchronous evolutions are employed. Different population dynamics imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and population update policies. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs and showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection, so how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world: the majority classification and synchronisation problems.
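The claim that growth on a bi-dimensional lattice must be quadratic rather than exponential can be illustrated with a toy takeover experiment: under selection alone, a single best individual spreads to its neighbours each generation, so the occupied region is a growing diamond whose area scales with the square of time. This is a minimal sketch under assumed parameters (grid size, von Neumann neighbourhood, synchronous updates), not the models proposed in the thesis.

```python
# Toy takeover-time experiment on a 2-D regular lattice: with selection
# only, each cell adopts the best individual as soon as a neighbour holds
# it. The occupied region grows roughly quadratically in time, not
# exponentially as the logistic model assumes. Grid size, neighbourhood
# and synchronous update are illustrative assumptions.

import random

def takeover_curve(n: int = 32, steps: int = 40) -> list[float]:
    """Fraction of cells holding the best individual, per generation."""
    grid = [[0] * n for _ in range(n)]
    grid[random.randrange(n)][random.randrange(n)] = 1  # seed one best individual
    curve = []
    for _ in range(steps):
        nxt = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                # von Neumann neighbourhood with wrap-around borders
                if any(grid[(i + di) % n][(j + dj) % n]
                       for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
                    nxt[i][j] = 1
        grid = nxt
        curve.append(sum(map(sum, grid)) / (n * n))
    return curve

print(takeover_curve()[:10])  # early growth ~ quadratic: the area of a diamond
```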
Abstract:
This thesis examines the history and evolution of information system process innovation (ISPI) processes (adoption, adaptation, and unlearning) within information system development (ISD) work in an internal information system (IS) department and in two IS software house organisations in Finland over a 43-year period. The study offers insights into the influential actors and their dependencies in deciding on ISPIs. The research uses a qualitative research approach, and the methodology involves describing the ISPI processes, how the actors searched for ISPIs, and how the relationships between the actors changed over time. The existing theories were evaluated using conceptual models of the ISPI processes based on the innovation literature in the IS field. The main focus of the study was to observe changes in the main ISPI processes over time. The main contribution of the thesis is a new theory, where 'theory' should be understood as (1) a new conceptual framework of the ISPI processes and (2) new ISPI concepts and categories, together with the relationships between the ISPI concepts inside the ISPI processes. The thesis gives a comprehensive and systematic account of the history and evolution of the ISPI processes; reveals the factors that affected ISPI adoption; examines ISPI knowledge acquisition, information transfer, and adaptation mechanisms; and uncovers the mechanisms affecting ISPI unlearning, the changes in the ISPI processes, and the diverse actors involved in the processes. The results show that both the internal IS department and the two IS software houses sought opportunities to improve their technical skills and career paths, and this created an innovative culture. When new technology generations come to the market, the platform systems need to be renewed, and therefore the organisations invest in ISPIs in cycles. The extent of internal learning and experimentation was greater than that of external knowledge acquisition. Until the outsourcing event (1984), decision-making was centralised and the internal IS department was very influential over ISPIs. After outsourcing, decision-making became distributed between the two IS software houses, the IS client, and its internal IT department. The IS client wanted to ensure that the information systems would serve the business of the company and thus wanted to cooperate closely with the software organisations.
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transformative technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be used for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, and for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions considered prior to the selection of R&D performance measures (e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D, or phases of the R&D process), or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims to integrate the consideration of the essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures that have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable to the development of measurement systems and the selection of measures in R&D activities. However, the special aspects of measuring R&D performance must be emphasized in a way that makes new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D, such as the long time lag between inputs and outcomes and the overall complexity and difficult coordination of activities, give rise to particular R&D performance analysis problems: the need for more systematic, objective, balanced, and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, these characteristics and challenges underline the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in its R&D decision-making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, in most sub-areas of the research, the emphasis has been on supporting the selection and development process of R&D indicators with different tools and decision support systems; that is, the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the case studies conducted in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped to form a comprehensive picture of the main challenges of R&D performance analysis in different organizations. This is essential, as recognition of the most important problem areas is a crucial element of the constructive research approach used in this study. The various approaches constructed in this dissertation offer several practical benefits with respect to the defined problem areas: (1) the selection of R&D measures became more systematic compared with the empirical analysis, as the studied organizations commonly had no systematic approaches in use before; (2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be used more directly in decision-making, because the purpose of measurement and the other dimensions of measurement are considered thoroughly; (3) more balance in the set of R&D measures was desired and achieved through the holistic approaches to the selection processes; and (4) more objectivity was achieved by organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge on R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, and of the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the novel approaches, methods, and tools developed for the selection processes of R&D measures and applied in real-world organizations. Throughout the research, these aspects are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
In the era of fast product development and customized product requirements, the concept of the product platform has proven its power in practice. The product platform approach has enabled companies to increase the speed of product introductions while simultaneously benefiting from efficiency and effectiveness in development and production activities. Product platforms are technological bases on which several derivative products can be developed, so differentiation can be pushed closer to product introduction. Product platform development has some specific features that differ somewhat from the development of single products. The time horizon is longer, since a product platform's life cycle is longer than an individual product's. The long time horizon also implies higher market risks, and the use of new technologies increases the technological risks involved. The end-customer interface may be far away, but there is no lack of needs aimed at product platforms; in fact, product platform development is very much a balancing act between the varying needs set by the derivative products. This dissertation concentrated on product platform development from the perspective of the internal product lines of a single case. Altogether six product platform development factors were identified: 'Strategic and business fit of product platform', 'Project communication and deliverables', 'Cooperation with product platform development', 'Innovativeness of product platform architecture and features', 'Reliability and quality of product platform', and 'Promised schedules and final product platform meeting the needs'. Of the six factors, three were found to influence overall satisfaction quite strongly: 'Strategic and business fit of product platform', 'Reliability and quality of product platform', and 'Promised schedules and final product platform meeting the needs'. Hence, these three factors might be the ones a new product platform development unit should concentrate on first in order to satisfy its closest customers, the product lines. 'Project communication and deliverables' and 'Innovativeness of product platform architecture and features' were weaker contributors to overall satisfaction. Overall, the factors explained the product lines' satisfaction with product platform development quite well. Along the way, several interesting aspects of the very nature of product platform development emerged. Its long time horizon caused challenges in the area of strategic fit: a conflict between short-term requirements and long-term needs. The fact that a product platform is used as the basis of several derivative products resulted in varying needs, and hence in varying matches between the needs and the strategies. The opinion that the releases of the larger product lines were given higher priority makes an interesting contribution to the strategy theory of power and politics. The varying needs of the product lines, their relative strengths, and the large number of concurrent releases set requirements for prioritization. The research thus showed the complicated nature of product platform development in the case unit: the very nature of product platform development may be its strength (gaining efficiency and effectiveness in product development and product launches) but also its biggest challenge (developing products to meet several needs).
As a single case study, the results of this research are not directly generalizable to all product platform development activities. Instead, the research serves best as a starting point for further research and gives some insights into the factors and challenges of one product development unit.
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that unsteady-state operation can produce favorable temperature and composition distributions that cannot be achieved in any steady-state regime. In normal operation, the low exothermicity of the SCR reaction (usually carried out in the range of 280-350 °C) is not enough to sustain the chemical reaction by itself; supplementary heat is usually required, which increases the overall operating cost of the process. The main advantage obtainable through forced unsteady-state operation of exothermic reactions is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation exploits the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour even when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model, and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The reverse flow reactor (RFR), a forced unsteady-state reactor, has these characteristics and may serve as an efficient device for treating dilute pollutant mixtures. Besides its advantages, however, the RFR suffers from the 'wash out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. Consequently, our attention focused on finding an alternative reactor configuration that is not affected by such uncontrollable emissions of unconverted reactants. In this respect the reactor network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is always maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network allows only a small range of switching times within which an ignited state can be reached and maintained. Even so, a proper study of the complex behavior of the RN can provide the information needed to overcome the difficulties that may appear in its operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior, characterized by spatio-temporal patterns, chaotic changes in concentration, and traveling waves of heat or chemical reactivity. Current research efforts mainly concern improving the contact between reactants, storing the thermal wave inside the reactor, and improving the kinetic activity of the catalyst used.
Attention to these aspects is important when high activity even at low feed temperatures and low emissions of unconverted reactants are the main operating concerns. In addition, predicting the pseudo-steady or steady-state performance of the reactor (conversion, selectivity, and thermal behavior) and its dynamic response during operation are important for finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of a suitable reactor requires knowledge of how its operating conditions influence overall process performance and a precise evaluation of the range of operating parameters for which sustained dynamic behavior is obtained. An a priori estimation of the system parameters reduces the computational effort: the convergence of unsteady-state reactor simulations typically requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and heat-transfer strategies provides reliable means of obtaining recuperative and regenerative devices capable of maintaining auto-thermal behaviour for low-exothermic reactions. In the present work, a step-by-step analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the general background on the environmental effects of noxious emissions, an analysis of the catalyst types suitable for the process, the mathematical approach to modeling and solving the system, and the experimental investigation of the device found to be most suitable for the process. To obtain, quickly and easily, information about forced unsteady-state reactor design, operation, important system parameters and their values, the mathematical description, the mathematical methods for solving systems of partial differential equations, and other specific aspects, a case-based reasoning (CBR) approach was used. This approach, which exploits the experience of similar past problems and their adapted solutions, can provide information and solutions for new problems related to forced unsteady-state reactor technology. Accordingly, a CBR system was implemented and a corresponding tool was developed. Then, relaxing the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. Non-isothermal operation was considered because, in our opinion, if a commercial catalyst is used it is not possible to modify its chemical activity or adsorptive capacity to improve the operation, but it is possible to change the operating regime. To identify the most suitable device for the unsteady-state reduction of NOx with ammonia from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices was carried out. Assuming isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possible use of the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior for the low-exothermic SCR of NOx with ammonia with low-temperature gas feeding. Besides the thermal effect, the influence of the principal operating parameters (switching time, inlet flow rate, and initial catalyst temperature) was examined. This analysis is important not only because it allows a comparison of the two devices and optimisation of their operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the process constraints to be met. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation, and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in normal operation and with a view to implementing a control strategy. Simplified theoretical models were also proposed to describe the performance of the forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that had not yet been analyzed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
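To illustrate the feeding-position idea behind the reactor network, here is a minimal conceptual sketch: the feed rotates through a closed loop of beds after each switching period, so the flow direction never reverses and the wash-out of unconverted reactant at flow reversal cannot occur. The reactor count, the switching schedule, and the representation are illustrative assumptions, not the models used in the thesis.

```python
# Conceptual sketch of the reactor-network (simulated moving bed) idea:
# the feed position rotates through a closed sequence of reactors after
# each switching time, keeping the flow direction unchanged. Reactor
# count and number of switches are illustrative assumptions.

from collections import deque

def feed_sequence(n_reactors: int, n_switches: int) -> list[tuple[int, ...]]:
    """Order in which the gas passes the beds, one tuple per switch period."""
    loop = deque(range(n_reactors))
    sequence = []
    for _ in range(n_switches):
        sequence.append(tuple(loop))  # current pass order: feed enters loop[0]
        loop.rotate(-1)               # advance the feed to the next bed
    return sequence

for order in feed_sequence(3, 4):
    print(" -> ".join(f"reactor {i}" for i in order))
```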