Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in a parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ between opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, the Subalpine Molasse within the thrust fault zone, and the Subalpine Molasse south of it. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a method for investigating the subsurface with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected off geological discontinuities at different depths and travel back up to the surface, where they are recorded. The signals collected in this way not only give information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition and their possible deformations or fractures, and hence their tectonic history. Seismic reflection is the principal method of oil exploration. For a long time, seismic reflection data were acquired along individual profiles, which provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not account for the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought fresh impetus to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore oil prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, yields final images of much higher resolution. Whereas the oil industry is often limited to a resolution on the order of ten metres, the instrument developed in this work makes it possible to see details on the order of one metre. The new system is based on the ability to record seismic reflections simultaneously on three seismic cables (or streamers) of 24 channels each.
To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control the navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with an accuracy on the order of 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal the main seismic facies, corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in environmental and civil engineering studies.
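A concrete way to see the survey geometry described above is the CMP binning step: each trace is assigned to the bin containing its source-receiver midpoint, on a 1.25 m (in-line) by 3.75 m (cross-line) grid for Survey II. The following is a minimal sketch under the assumption that coordinates are already rotated into the in-line/cross-line frame; the function and variable names are illustrative, not those of the processing software used.

```python
# Minimal sketch of 3-D CMP binning: a trace is assigned to the bin that
# contains its source-receiver midpoint. Bin sizes follow the abstract
# (1.25 m in-line, 3.75 m cross-line); everything else is illustrative.
import math

BIN_INLINE = 1.25  # m, half the 2.5-m receiver spacing
BIN_XLINE = 3.75   # m, half the 7.5-m streamer separation

def cmp_bin(src_x, src_y, rcv_x, rcv_y, origin_x=0.0, origin_y=0.0):
    """Return (in-line, cross-line) bin indices of the source-receiver midpoint."""
    mid_x = 0.5 * (src_x + rcv_x)  # in-line midpoint coordinate
    mid_y = 0.5 * (src_y + rcv_y)  # cross-line midpoint coordinate
    i = math.floor((mid_x - origin_x) / BIN_INLINE)
    j = math.floor((mid_y - origin_y) / BIN_XLINE)
    return i, j

# A shot at (100.0, 0.0) recorded on a receiver at (62.5, 7.5):
# midpoint (81.25, 3.75) -> bin indices (65, 1)
print(cmp_bin(100.0, 0.0, 62.5, 7.5))
```

The halving from 2.5 m receiver spacing to 1.25 m bins (and from 7.5 m streamer separation to 3.75 m bins) follows directly from the midpoint lying halfway between source and receiver.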
Abstract:
Even though research on innovation in services has expanded remarkably, especially during the past two decades, there is still a need to better understand the special characteristics of service innovation. In addition to studying innovation in service companies and industries, research has recently also focused on services in innovation, as the significance of so-called knowledge-intensive business services (KIBS) for the competitive edge of their clients, other companies, regions and even nations has been demonstrated in several previous studies. This study focuses on technology-based KIBS firms, and on the technology and engineering consulting (TEC) sector in particular. These firms have multiple roles in innovation systems, and thus there is a need for in-depth studies that increase knowledge about the types and dimensions of service innovations as well as the underlying mechanisms and procedures which make innovations successful. The main aim of this study is to generate new knowledge in the fragmented research field of service innovation management by recognizing the different types of innovations in TEC services and some of the enablers of and barriers to innovation capacity in the field, especially from the knowledge management perspective. The study also aims to shed light on some of the existing routines and new constructions needed for enhancing service innovation and knowledge processing activities in KIBS companies of the TEC sector. The main data sources in this research include literature reviews and public data sources, together with a qualitative research approach based on exploratory case studies, conducted through interviews at technology consulting companies in Singapore in 2006. These complement the qualitative interview data gathered previously in Finland during a larger research project in 2004-2005. The data are also supplemented by a survey conducted in Singapore; the respondents of the survey by Tan (2007) were technology consulting companies operating in the Singapore region. The purpose of the quantitative part of the study was to validate and further examine specific aspects, such as the influence of knowledge management activities on innovativeness and the different types of service innovations in which the technology consultancies are involved. Singapore is known as a Southeast Asian knowledge hub and is thus a significant research area where several multinational knowledge-intensive service firms operate. Typically, the service innovations identified in the studied TEC firms were formed by several dimensions of innovation. In addition to technological aspects, innovations were, for instance, related to new client interfaces and service delivery processes. The main enablers of and barriers to innovation seem to be partly similar in Singaporean firms as in the earlier study of Finnish TEC firms. The empirical studies also brought forth the significance of various sources of knowledge and knowledge processing activities as the main driving forces of service innovation in technology-related KIBS firms. A framework was also developed to study the effect of knowledge processing capabilities, as well as some moderators, on the innovativeness of TEC firms. Especially efficient knowledge acquisition and environmental dynamism seem to influence the innovativeness of TEC firms positively.
The results of the study also contribute to the present service innovation literature by focusing more on 'innovation within KIBS' rather than 'innovation through KIBS', which has been the typical viewpoint stressed in the previous literature. Additionally, the study provides several possibilities for further research.
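As a sketch of the kind of analysis such a framework implies, a moderated regression of innovativeness on knowledge acquisition, with environmental dynamism as a moderator, might look as follows; the variable names and synthetic data are assumptions for illustration, not the study's published model.

```python
# Hedged sketch of a moderated-regression test of the kind the framework
# implies: innovativeness regressed on knowledge acquisition with
# environmental dynamism as a moderator. Variables and data are
# illustrative, not taken from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
knowledge_acq = rng.normal(size=n)   # knowledge acquisition score
env_dynamism = rng.normal(size=n)    # environmental dynamism score
innovativeness = (0.5 * knowledge_acq + 0.3 * env_dynamism
                  + 0.2 * knowledge_acq * env_dynamism
                  + rng.normal(scale=0.5, size=n))

X = np.column_stack([knowledge_acq, env_dynamism,
                     knowledge_acq * env_dynamism])
model = sm.OLS(innovativeness, sm.add_constant(X)).fit()
print(model.params)  # a positive interaction coefficient indicates moderation
```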
Abstract:
Process development will be largely driven by the main equipment suppliers. The reason for this development is their ambition to supply complete plants or process systems instead of single pieces of equipment. The pulp and paper companies' interest lies in product development, as their main goal is to create winning brands and effective brand management. Design engineering companies will find their niche in detail engineering based on approved process solutions. Their development work will focus on increasing the efficiency of engineering work. Process design is a content-producing profession, which requires certain special characteristics: creativity, carefulness, the ability to work as a member of a design team according to time schedules and fluency in oral as well as written presentation. In the future, process engineers will increasingly need knowledge of chemistry as well as information and automation technology. Process engineering tools are developing rapidly. At the moment, these tools are good enough for static sizing and balancing, but dynamic simulation tools are not yet good enough for the complicated chemical reactions of pulp and paper chemistry. Dynamic simulation and virtual mill models are used as tools for training the operators. Computational fluid dynamics will certainly gain ground in process design.
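The contrast drawn above can be made concrete: static sizing and balancing reduces to solving linear conservation equations, whereas dynamic simulation must integrate reaction kinetics over time. A minimal sketch of the static case, with invented streams and figures:

```python
# Minimal illustration of static balancing: steady-state mass balances
# around a mixer and a splitter reduce to linear equations. Streams and
# numbers are invented; real tools solve far larger systems.
import numpy as np

# Unknowns: x = [F3, F4, F5] (kg/s). Known feeds: F1 = 10, F2 = 5.
# Mixer:    F3 = F1 + F2
# Splitter: F4 = 0.6 * F3 (split fraction), F3 - F4 - F5 = 0
A = np.array([[1.0, 0.0, 0.0],     # F3 = 15
              [-0.6, 1.0, 0.0],    # F4 - 0.6*F3 = 0
              [1.0, -1.0, -1.0]])  # F3 - F4 - F5 = 0
b = np.array([15.0, 0.0, 0.0])
print(np.linalg.solve(A, b))       # F3=15, F4=9, F5=6 kg/s
```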
Abstract:
Requirements-related issues have been found to be the third most important risk factor in software projects and the biggest single reason for software project failures. This is not a surprise, since requirements engineering (RE) practices have been reported deficient in more than 75% of all enterprises. A problem analysis of small and low-maturity software organizations revealed two central reasons for not starting process improvement efforts: lack of resources and uncertainty about the paybacks of process improvement efforts. In the constructive part of the study, a basic RE method, BaRE, was developed to provide an easy-to-adopt way of introducing basic systematic RE practices in small and low-maturity organizations. Based on the diffusion-of-innovations literature, thirteen desirable characteristics were identified for the solution, and the method was implemented in five key components: a requirements document template, requirements development practices, requirements management practices, tool support for requirements management, and training. The empirical evaluation of the BaRE method was conducted in three industrial case studies. In this evaluation, two companies established a completely new RE infrastructure following the suggested practices, while the third company continued requirements document template development based on the provided template and used it extensively in practice. The real benefits of adopting the method became visible in the companies within four to six months from the start of the evaluation project, and the two small companies in the project completed their improvement efforts with an input of about one person-month. The data collected in the case studies indicate that the companies implemented the new practices with few adaptations and little effort. It can thus be concluded that the constructed BaRE method is indeed easy to adopt and can help introduce basic systematic RE practices in small organizations.
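As a hypothetical illustration of the "tool support for requirements management" component, a lightweight requirements record with a change log might look as follows; the field names are invented, not the published BaRE template.

```python
# Hypothetical sketch of lightweight tool support for requirements
# management of the kind BaRE prescribes; fields are invented, not
# taken from the published template.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    IMPLEMENTED = "implemented"

@dataclass
class Requirement:
    req_id: str
    description: str
    rationale: str
    priority: int            # e.g. 1 (must have) .. 3 (nice to have)
    status: Status = Status.PROPOSED
    change_log: list[str] = field(default_factory=list)

    def update_status(self, new_status: Status, note: str) -> None:
        # Record every status transition so changes stay traceable.
        self.change_log.append(f"{self.status.value} -> {new_status.value}: {note}")
        self.status = new_status

r = Requirement("REQ-001", "Export reports as PDF", "Customer audit need", 1)
r.update_status(Status.APPROVED, "accepted in review meeting")
print(r.status, r.change_log)
```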
Abstract:
The purpose of this thesis is to analyse activity-based costing (ABC) and possible modified versions of it in the engineering design context. Design engineers need cost information at their decision-making level, and the cost information should also have a strong future orientation. These demands are high because traditional management accounting has concentrated on the direct actual costs of products. However, cost accounting has progressed: ABC was introduced in the late 1980s and adopted widely by companies in the 1990s. ABC has been a success, but it has also drawn criticism. In some cases ambitious ABC systems have become too complex to build, use and update. This study can be called an action-oriented case study with some normative features. In this thesis theoretical concepts are assessed and allowed to unfold gradually through interaction with data from three cases. The theoretical starting points are ABC and the theory of the engineering design process (chapter 2). Concepts and research results from these theoretical approaches are summarized in two hypotheses (chapter 2.3). The hypotheses are analysed with two cases (chapter 3). After the two case analyses, the ABC part is extended to cover other modern cost accounting methods as well, e.g. process costing and feature costing (chapter 4.1). The ideas from this second theoretical part are operationalized with the third case (chapter 4.2). The knowledge from the theory and the three cases is summarized in the created framework (chapter 4.3). With the created framework it is possible to analyse ABC and its modifications in the engineering design context. The framework collects the factors that guide the choice of the costing method to be used in engineering design. It also illuminates the contents of various ABC-related costing methods. However, the framework needs further testing. On the basis of the three cases it can be said that ABC should be used cautiously when producing cost information for engineering design. It is suitable when manufacturing can be considered simple, or when the design engineers are not cost conscious, and at the beginning of the design process when doing adaptive or variant design. If the design engineers need cost information for embodiment or detailed design, or if manufacturing is complex, or when design engineers are cost conscious, ABC always has to be evaluated critically.
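The arithmetic at the core of ABC is simple: overhead is pooled per activity, each pool is divided by its cost-driver volume to obtain a driver rate, and costs are assigned to products according to driver consumption. A minimal sketch with invented figures:

```python
# Minimal worked sketch of activity-based costing (ABC): overhead pools
# are converted to cost-driver rates and assigned to a product batch by
# its driver consumption. All figures are invented for illustration.
activities = {
    # activity: (annual overhead cost, annual driver volume, driver unit)
    "machining":  (200_000.0, 4_000.0, "machine hours"),
    "setups":     (50_000.0,  500.0,   "setups"),
    "inspection": (30_000.0,  1_200.0, "inspection hours"),
}

# Driver consumption of one product batch
consumption = {"machining": 12.0, "setups": 2.0, "inspection": 3.0}

batch_overhead = 0.0
for name, (cost, volume, unit) in activities.items():
    rate = cost / volume                      # cost per driver unit
    batch_overhead += rate * consumption[name]
    print(f"{name}: {rate:.2f} per {unit}")

print(f"overhead assigned to batch: {batch_overhead:.2f}")
# machining 50.00/h * 12 + setups 100.00 * 2 + inspection 25.00/h * 3 = 875.00
```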
Abstract:
The introduction of affordable, consumer-oriented 3-D printers is a milestone in the current "maker movement," which has been heralded as the next industrial revolution. Combined with free and open sharing of detailed design blueprints and accessible development tools, rapid prototypes of complex products can now be assembled in one's own garage--a game-changer reminiscent of the early days of personal computing. At the same time, 3-D printing has also allowed the scientific and engineering community to build the "little things" that help a lab get up and running much faster and easier than ever before.
Abstract:
A patent foramen ovale (PFO), present in ∼40% of the general population, is a potential source of right-to-left shunt that can impair pulmonary gas exchange efficiency [i.e., increase the alveolar-to-arterial PO2 difference (A-aDO2)]. Prior studies investigating human acclimatization to high altitude with A-aDO2 as a key parameter have not investigated differences between subjects with (PFO+) or without a PFO (PFO-). We hypothesized that in PFO+ subjects A-aDO2 would not improve (i.e., decrease) after acclimatization to high altitude compared with PFO- subjects. Twenty-one (11 PFO+) healthy sea-level residents were studied at rest and during cycle ergometer exercise at the highest iso-workload achieved at sea level (SL), after acute transport to 5,260 m (ALT1), and again at 5,260 m after 16 days of high-altitude acclimatization (ALT16). In contrast to PFO- subjects, PFO+ subjects had 1) no improvement in A-aDO2 at rest and during exercise at ALT16 compared with ALT1, 2) no significant increase in resting alveolar ventilation, or alveolar PO2, at ALT16 compared with ALT1, and consequently 3) an increased arterial PCO2 and decreased arterial PO2 and arterial O2 saturation at rest at ALT16. Furthermore, PFO+ subjects had an increased incidence of acute mountain sickness (AMS) at ALT1, concomitant with significantly lower peripheral O2 saturation (SpO2). These data suggest that PFO+ subjects have increased susceptibility to AMS when not taking prophylactic treatments, that right-to-left shunt through a PFO impairs pulmonary gas exchange efficiency even after acclimatization to high altitude, and that PFO+ subjects have blunted ventilatory acclimatization after 16 days at altitude compared with PFO- subjects.
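The A-aDO2 referred to above is obtained from the alveolar gas equation, PAO2 = FiO2 (Pb - PH2O) - PaCO2/R, minus the measured arterial PO2. A minimal sketch with standard constants and illustrative blood-gas values (not the study's data):

```python
# Sketch of how the A-aDO2 used above is computed via the alveolar gas
# equation: PAO2 = FiO2*(Pb - PH2O) - PaCO2/R. Constants are standard;
# the example blood-gas values are illustrative, not the study's data.
def a_a_do2(pb_mmHg, pa_co2, pa_o2, fi_o2=0.2093, r=0.8, ph2o=47.0):
    """Alveolar-to-arterial O2 difference (mmHg)."""
    alveolar_po2 = fi_o2 * (pb_mmHg - ph2o) - pa_co2 / r
    return alveolar_po2 - pa_o2

# Sea level example: Pb ~760 mmHg, PaCO2 40, PaO2 95
print(round(a_a_do2(760, 40, 95), 1))   # -> 4.2 mmHg
# ~5,260 m example (Pb ~410 mmHg), illustrative PaCO2 28, PaO2 35
print(round(a_a_do2(410, 28, 35), 1))   # -> 6.0 mmHg
```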
Abstract:
Requirements definition is an important phase in software engineering, because erroneous and incomplete customer requirements contribute considerably to customer dissatisfaction with the software product. Software engineers use many different methods and techniques for eliciting customer requirements, and a vast number of such techniques exist. The goal of this Master's thesis was to improve the customer requirements elicitation process in software projects. Based on an evaluation of the techniques used for eliciting customer requirements, an improved elicitation process was developed. To test and refine the developed process, group work sessions were organized in connection with real software development projects. As a result, gathering requirements from the different stakeholders became faster and more efficient. The process helped form an overall picture of the software under development, generated many ideas, and made the analysis and prioritization of ideas more effective. The main area for improvement in the process was the advance preparation of the facilitator and the participants for the group work sessions.
Abstract:
The purpose of this work was to study how organizational capabilities can be measured in the engineering and consulting industry using a so-called capability audit. The main motives for measuring intangible assets were identified on the basis of a literature review. The advantages and disadvantages of different methods were examined in order to identify the challenges and requirements involved in conducting a capability audit. Building the capability audit required identifying the special characteristics of the industry, which were found to be knowledge intensity and project orientation. The audit implementation process consisted of four parts, to the first three of which the case company contributed significantly. After the critical success factors had been determined, the organizational capabilities affecting them could be identified and the assessment carried out. The assessments were collected from internal and external evaluators, and they formed the basis for an analysis of the company's development needs. The benefits of the capability audit included increased knowledge of the company's strengths and weaknesses, as well as the possibility to regularly monitor its overall performance and improve it.
Abstract:
The purpose of this Master's thesis was to create and develop two customer satisfaction models for starting and carrying out customer satisfaction measurement in the target company. The work is based on an analysis of the current satisfaction processes and on the theoretical part of the thesis, which discusses in detail the issues that should be taken into account in customer satisfaction measurement and in the measurement process. The models created in this work are intended to help the target company make better use of the results of customer satisfaction measurement, both in its business and among its customers. One goal of the work was also to find and recommend a suitable measurement tool for the target company. Based on the theory and the analysis, both customer satisfaction models were created to meet the needs of the target units. Once the external aspects, such as measurement methods, measurement instruments, questionnaires and respondent groups, had been defined, the focus shifted to analysing and exploiting the results, which is emphasized in a customer-oriented organization. The thesis also considered the significance and advantages of a unified customer satisfaction process in the target company.
Abstract:
The goal of requirements definition is to produce a complete and consistent list of requirements for the desired system at a conceptual level. Business process modelling is quite useful in the early phases of requirements definition. This work studies business process modelling for the development of information systems. Various techniques for business process modelling exist today. This work reviews the principles and aspects of business process modelling as well as different modelling techniques. A new method, designed especially for small and medium-sized software projects, has been developed on the basis of process aspects and UML diagrams.
Abstract:
The concentration and ratio of terpenoids in the headspace volatile blend of plants have a fundamental role in the communication between plants and insects. The sesquiterpene (E)-nerolidol is one of the important volatiles with an effect on beneficial carnivores, relevant to biological pest management in the field. To optimize de novo biosynthesis and reliable, uniform emission of (E)-nerolidol, we engineered different steps of the (E)-nerolidol biosynthesis pathway in Arabidopsis thaliana. Introduction of a mitochondrial nerolidol synthase gene mediates de novo emission of (E)-nerolidol and linalool. Co-expression of the mitochondrial FPS1 and cytosolic HMGR1 increased the number of emitting transgenic plants (incidence rate) and the emission rate of both volatiles. No association between the emission rate of the transgenic volatiles and their growth-inhibitory effect could be established. (E)-Nerolidol was to a large extent metabolized to non-volatile conjugates.
Abstract:
This Master's thesis aims to survey the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods which imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, where the individuals are solution candidates. The next population of candidate solutions is formed from the good properties of the current population by applying different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were sought in the literature, then classified and described. The necessary basics of evolutionary algorithms were also presented. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling related to parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering (CASE) tools based on evolutionary algorithms.
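The evaluate-select-recombine loop described above is compact enough to show directly. A minimal genetic algorithm on a toy bit-counting fitness function; all parameters are illustrative:

```python
# Minimal genetic algorithm of the kind described above: evaluate fitness,
# select good individuals, and form the next population with crossover and
# mutation. The bit-counting fitness and all parameters are illustrative.
import random

random.seed(1)
GENES, POP, GENERATIONS, MUT_RATE = 20, 30, 40, 0.02

def fitness(ind):                 # toy objective: number of 1-bits
    return sum(ind)

def crossover(a, b):              # single-point crossover
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind):                  # flip each bit with small probability
    return [g ^ 1 if random.random() < MUT_RATE else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]      # truncation selection of the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print(max(fitness(ind) for ind in pop))   # approaches the optimum, GENES = 20
```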
Abstract:
The goal of this work was to develop the project cost estimation process of the engineering unit under study, so that in the future the unit's management would have more accurate cost information at its disposal. To make this possible, the unit's working practices, the cost structures of its projects and the cost attributes first had to be determined, which was made possible by examining the cost history data of past projects and by interviewing experts. The result of the work was a cost estimation process and model compatible with the target unit's other processes. The cost estimation method and model are based on cost attributes, which are defined separately for the environment under study. The cost attributes are found by examining historical data, i.e. by analysing completed projects, their cost structures and the factors that have influenced the costs. After this, weights and ranges of variation for the weights must be defined for the cost attributes. The accuracy of the estimation model can be improved by calibrating the model. I have used the Goal-Question-Metric (GQM) method as the framework of the study.
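A weighted cost-attribute model of the kind described can be sketched as follows; the attributes, weights, rating ranges and calibration factor are all invented for illustration:

```python
# Hypothetical sketch of a cost estimate built from weighted cost
# attributes: each attribute is rated for a new project, weighted, and
# scaled by a baseline cost plus a calibration factor derived from
# completed projects. All names and numbers are invented.
BASELINE_COST = 100_000.0   # nominal project cost
CALIBRATION = 1.10          # correction factor fitted on historical projects

# attribute: (weight, allowed rating range)
attributes = {
    "technical_complexity": (0.4, (0.8, 1.5)),
    "customer_novelty":     (0.3, (0.9, 1.3)),
    "schedule_pressure":    (0.3, (0.9, 1.4)),
}

def estimate(ratings):
    """Weighted multiplicative adjustment of the baseline cost."""
    factor = 0.0
    for name, (weight, (lo, hi)) in attributes.items():
        r = min(max(ratings[name], lo), hi)   # clamp rating to its range
        factor += weight * r
    return BASELINE_COST * factor * CALIBRATION

print(estimate({"technical_complexity": 1.2,
                "customer_novelty": 1.0,
                "schedule_pressure": 1.1}))   # ~122100
```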
Abstract:
Taking maximum advantage of technological innovations, and of the investment in them, is of key importance for businesses. The IT industry offers a wide range of innovative high-technology solutions for managing information processing and distribution. However, for end-user businesses, making informed decisions in this area is challenging. The aim of this research is to identify the key differences between the principal solutions and what the selection criteria should be for those involved. Existing methodologies for software development are classified, and some key criteria are described to help IT system developers and users determine the most important factors in system selection, development and deployment. Statistical data are researched and analysed, a theoretical basis is developed and reviewed, and key issues from case studies are identified and generalized, to be presented along with the conclusions of the study. The results give a good basis for corporate consideration and provide overall support for the key decisions in developing web-based software. The conclusion is that stakeholders should consider new web developments as an evolution of existing business systems, but should then pay particular attention to the new advantages that web-based software offers in terms of standardised interfaces and procedures, universal deployment opportunities, and a range of other benefits the study highlights.