991 results for Standard Oil Company.


Relevance: 20.00%

Abstract:

The main objective of this thesis was to examine how non-current assets held for sale and discontinued operations are treated in financial statements under IFRS. The theoretical part reviews the content of the relevant standard, IFRS 5, and discusses the adoption and basic principles of IFRS standards at a general level. The empirical part examines how the standard is applied in the financial statements of nine Finnish companies, and three companies were interviewed about their experiences of applying it. The empirical part also includes a bookkeeping example, based on the case company SOK Corporation, of the financial statement treatment of non-current assets held for sale. The study is a qualitative case study with a normative research approach. It found that, as a rule, the disclosures required by the standard can be found in the financial statements, but there are company-specific differences in how they are handled. The companies found applying the standard somewhat challenging, but with experience they are likely to converge on a uniform interpretation of its content and of how the information should be presented.
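For context, the measurement rule at the heart of IFRS 5 (stated here from the standard itself, not from the thesis abstract) is that an asset classified as held for sale is measured at the lower of its carrying amount and fair value less costs to sell. A minimal sketch with hypothetical figures:

```python
def held_for_sale_value(carrying_amount: float,
                        fair_value: float,
                        costs_to_sell: float) -> float:
    """IFRS 5 measurement: the lower of carrying amount and
    fair value less costs to sell."""
    return min(carrying_amount, fair_value - costs_to_sell)

# Hypothetical figures: carrying amount 100, fair value 95, costs to sell 5
# -> measured at 90, i.e. an impairment loss of 10 on reclassification.
print(held_for_sale_value(100.0, 95.0, 5.0))  # 90.0
```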

Relevance: 20.00%

Abstract:

INTRODUCTION: A significant proportion of prematurely born children encounter behavioral difficulties, such as attention deficit or hyperactivity, which could be due to executive function disorders. AIMS: To examine whether the standard neurodevelopmental assessment offered to premature children in Switzerland recognizes executive function disorders. METHODS: The study population consisted of 49 children born before 29 weeks of gestation who were examined between 5 and 6 years of age with a standard assessment, supplemented by additional items to assess executive functioning. Children with severe neurodevelopmental impairment (mental retardation, cerebral palsy, autism) were excluded. The standard assessment consisted of the Kaufman Assessment Battery for Children (K-ABC), which comprises three subscales: sequential processes (analysis of sequential information), simultaneous processes (global analysis of visual information), and composite mental processes (CMP, the result of the other two scales), as well as a behavioral evaluation using the standardized Strengths and Difficulties Questionnaire (SDQ). Executive functioning was assessed with tasks evaluating visual attention, divided attention, and digit memory, as well as with a specialized questionnaire, the Behavior Rating Inventory of Executive Function (BRIEF), which evaluates several aspects of executive function (regulation, attention, flexibility, working memory, etc.). RESULTS: Children were divided into groups according to their results on the three K-ABC scales (below or at/above 85), and performance on the neuropsychological tasks assessing executive function was compared between the groups. The CMP did not differentiate children with executive difficulties, whereas a score below 85 on the sequential processes scale was significantly associated with worse visual and divided attention. There was a strong correlation between the SDQ and BRIEF questionnaires. On both questionnaires, children receiving psychotherapy had significantly higher scores, and children who presented behavioral problems on the SDQ also scored significantly higher on the BRIEF. CONCLUSION: A detailed analysis of the standard neurodevelopmental assessment allows the identification of executive function disorders in premature children. Children who scored below 85 on the sequential processes scale of the K-ABC had significantly more attentional difficulties on the neuropsychological tasks and should therefore be recognized and carefully followed. Emotional regulation correlated strongly with behavioral difficulties, which were suitably assessed with the SDQ, recognized by the families, and treated.
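A minimal sketch of the kind of group comparison described above, using synthetic illustrative data and a generic nonparametric two-group test (the abstract does not name the statistical test used, and the variable names here are ours, not the study's):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Synthetic illustrative scores, NOT the study's data: a visual-attention
# measure for children scoring below vs. at/above 85 on the K-ABC
# sequential-processes scale (group sizes are arbitrary).
below_85 = rng.normal(loc=80.0, scale=10.0, size=20)
at_or_above_85 = rng.normal(loc=95.0, scale=10.0, size=29)

# Nonparametric comparison of the two groups.
stat, p = mannwhitneyu(below_85, at_or_above_85)
print(f"U = {stat:.1f}, p = {p:.4g}")
```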

Relevance: 20.00%

Abstract:

Bakery products such as biscuits, cookies, and pastries are a good vehicle for iron fortification, since they are consumed by a large proportion of the population at risk of developing iron deficiency anemia, mainly children. The drawback, however, is that iron fortification can promote oxidation. To assess the extent of this, palm oil supplemented with heme iron and different antioxidants was used as a model for evaluating the oxidative stability of bakery products such as baked goods containing chocolate. The palm oil samples were heated at 220°C for 10 min to mimic the conditions of a typical baking process. The selected antioxidants were a free radical scavenger (tocopherol extract, TE; 0 and 500 mg/kg), an oxygen scavenger (ascorbyl palmitate, AP; 0 and 500 mg/kg), and a chelating agent (citric acid, CA; 0 and 300 mg/kg). These antioxidants were combined using a factorial design and compared to a control sample without added antioxidants. Primary (peroxide value and lipid hydroperoxide content) and secondary (p-anisidine value, p-AnV) oxidation parameters were monitored over 200 days of storage at room temperature. The combination of AP and CA was the most effective treatment in delaying the onset of oxidation, whereas TE was not effective in preventing oxidation. The p-AnV did not increase during the storage period, indicating that this marker was not suitable for monitoring oxidation in this model.
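Since the abstract describes a full factorial combination of the three antioxidants at two levels each, the design can be made concrete with a short sketch (variable names are ours; the all-zero row corresponds to the unsupplemented control):

```python
from itertools import product

# Antioxidant levels (mg/kg) as given in the abstract.
levels = {
    "TE": (0, 500),  # tocopherol extract, free radical scavenger
    "AP": (0, 500),  # ascorbyl palmitate, oxygen scavenger
    "CA": (0, 300),  # citric acid, chelating agent
}

# Full 2 x 2 x 2 factorial design: eight treatment combinations.
treatments = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for t in treatments:
    print(t)
```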


Relevance: 20.00%

Abstract:

VALOSADE (Value Added Logistics in Supply and Demand Chains) is a research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is part of the ELO (Ebusiness logistics) technology program of Tekes (the Finnish Technology Agency). SMILE (SME-sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. SMILE research focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood-processing customers. The basic theme of the SMILE study is communication and ebusiness in the supply and demand network. This first phase of the research creates the background for the SMILE study and for the ebusiness solutions of the maintenance case network. The focus is on general trends of ebusiness in the supply chains and networks of different industries; the total ebusiness system architecture of company networks; the ebusiness strategy of a company network; the information value chain; the different factors that influence the ebusiness solution of a company network; and the correlation between ebusiness and competitive advantage. Literature, interviews, and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organizational structures that can add value for the end customer. Information is one of the key factors in these decentralized structures. Because business is decentralized, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows in company networks between different systems. Furthermore, new solutions such as network information systems are used to optimize network performance and to standardize common network processes. Some cases have indicated, however, that utilizing ebusiness in a decentralized business model is not always a necessity; the added value of ICT must be defined case by case. In the theory part of the report, different ebusiness and architecture models are introduced. These models are compared with the empirical case data in the research results. The biggest difference between theory and empirical data is that the models have mainly been developed for large companies, not for SMEs, because implemented network ebusiness solutions have mainly been large-company centered. Genuinely SME-network-centered ebusiness models are quite rare, and studies in that area have been few in number. Business relationships between customers and their SME suppliers nowadays concentrate on collaborative tactical and strategic initiatives in addition to transaction-based operational initiatives. Ebusiness systems, however, are still mainly based on the exchange of operational transactional data. Collaborative ebusiness solutions are in the planning or pilot phase in most case companies. Furthermore, many ebusiness solutions today involve only two participants; network-wide and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of information exchanged, information criticality, the type and duration of the business relationship, the partners' internal information systems, and their processes and operating models (e.g. different ordering models) differ among network companies, and the companies are at different stages of networking and ebusiness readiness. Because of these factors, different customer-supplier combinations in the network must use entirely different ebusiness architectures, technologies, systems, and standards.

Relevance: 20.00%

Abstract:

An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the subsequent binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry-standard formats. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking, and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse, and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a method of investigating the subsurface with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected from geological discontinuities at different depths and travel back to the surface, where they are recorded. The signals collected in this way not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles can reveal their mode of deposition, their possible deformations or fractures, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles that provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought new life to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes: smaller in size, lighter to deploy, and above all producing final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution on the order of ten meters, the instrument developed in this work can resolve details on the order of one meter. The new system is based on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with a precision on the order of 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to turn them into interpretable images, using a 3-D processing sequence specially adapted to our data, in particular regarding positioning. After processing, the data reveal several main seismic facies corresponding in particular to the lacustrine sediments (Holocene), the glacio-lacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in environmental and civil engineering.
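As a rough arithmetic check of the acquisition geometry quoted above, the bin dimensions and nominal fold follow directly from the stated receiver spacing, streamer separation, and shot interval (the half-spacing rule and the fold formula below are standard marine-seismic conventions, not taken from the thesis):

```python
# Rule of thumb: CMP bin size is half the receiver spacing in-line and
# half the sail-line (here, streamer) separation cross-line.
receiver_spacing_m = 2.5       # along each 24-channel streamer
streamer_separation_m = 7.5    # held by the retractable booms
channels = 24
shot_interval_m = 5.0

inline_bin_m = receiver_spacing_m / 2        # -> 1.25 m
crossline_bin_m = streamer_separation_m / 2  # -> 3.75 m

# Nominal fold for a single-source, end-on geometry:
# fold = (channels * receiver spacing) / (2 * shot interval)
fold = channels * receiver_spacing_m / (2 * shot_interval_m)  # -> 6.0
print(inline_bin_m, crossline_bin_m, fold)
```

The same formula with the 48-channel streamer of Survey I gives the quoted nominal fold of 12.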

Relevance: 20.00%

Abstract:

The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not in fact fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential to be used to protect products and processes from imitation, to limit competitors' freedom to operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how that invention is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits. Moreover, instead of making all inventions proprietary and seeking to appropriate returns on investment as high as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries such as ICT that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with a focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal, and business developments concerning software and business-method patents are investigated and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e. patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change the patent system is facing, and of how these challenges are reflected in standard setting.

Relevance: 20.00%

Abstract:

Information technology (IT) outsourcing has traditionally been seen as a means to acquire new resources and competencies to perform standard tasks at lower cost. This dissertation challenges the view that outsourcing should be limited to non-strategic systems and components, and presents ways to maximize the benefits enabled by outsourcing while minimizing the associated risks. In this dissertation, IT outsourcing is approached as an efficiency improvement and value-creation process rather than a sourcing decision. The study focuses on when and how to outsource information technology, and presents a new set of critical success factors for outsourcing project management. In a case study, it re-validates the theory-based proposition that in certain cases and situations it is beneficial to partly outsource even strategic IT systems. The main contribution of this dissertation is the validation of the proposal that in companies where the level of IT competency is high, managerial support is established, and planning processes are well defined, it is possible to safely outsource even business-critical IT systems. A model describing the critical success factors in such cases is presented, based on existing knowledge in the field and the results of the empirical study. This model further highlights the importance of aligning IT and business strategies, assuming a long-term focus on partnering, and the overall aim of outsourcing to add to the strengths of the company rather than to eliminate weaknesses.

Relevance: 20.00%

Abstract:

Achieving sustainable competitive advantage is a central goal of every company's strategic planning. Many well-known researchers have shaped strategic management with their theories and findings. Research has shown, however, that traditional theories of strategic management cannot explain the success of companies operating in today's rapidly changing world. The factors behind such success are called a company's dynamic factors, and company processes have emerged as one significant dynamic factor among them. This study combines two things in a single analytical framework: on the one hand, company processes that follow industry best practices, and on the other, the theory of dynamic capabilities, which are assumed to manifest themselves in those processes. Of a company's dynamic capabilities, this work examines processes only. The purpose of the work is to answer the following research questions: Which dynamic capabilities can be achieved through a company's processes? Can a company gain and sustain competitive advantage by implementing its processes according to industry best practices? Can a model be built for measuring the identified dynamic capabilities? Answers were sought through the process development project of an example company, with the aim of identifying the dynamic capabilities underlying the process development and thereby naming them as sources of the company's competitive advantage. The ICT-sector company under study faces ever greater demands on productivity and quality, and one way companies have responded to such demands is by developing their processes. ITIL is a framework that supports the operations of ICT service companies; it collects the experiences of several companies and organizations on producing services efficiently. The ITIL framework also formed the basis of the service management standard ISO/IEC 20000-1:2005, published in 2005. Based on the study, it can be said that formalized processes increase companies' agility in adapting to change, and their dynamic capabilities have thereby also increased.

Relevance: 20.00%

Abstract:

Major oil spills can have long-term impacts, since oil pollution not only causes acute mortality of marine organisms but also affects productivity levels and predator-prey dynamics and damages the habitats that support marine communities. However, despite the conservation implications of oil accidents, monitoring and assessing their lasting impacts remains a difficult and daunting task. Here, we used European shags to evaluate the overall, lasting effects of the Prestige oil spill (2002) on the affected marine ecosystem. Using δ15N and Hg analyses, we traced temporal changes in feeding ecology potentially related to alterations of the food web caused by the spill. Using climatic and oceanic data, we also investigated the influence of the North Atlantic Oscillation (NAO) index, sea surface temperature (SST), and chlorophyll a (Chl a) on the observed changes. Analysis of δ15N and Hg concentrations revealed that after the Prestige oil spill, shag chicks abruptly switched trophic level, from a diet based largely on demersal-benthic fish to one with a higher proportion of pelagic/semi-pelagic species. There was no evidence that Chl a, SST, or NAO reflected any particular change or severity in environmental conditions in any year or season that could explain this sudden shift. The study thus highlights an impact on the marine food web that lasted at least three years. Our results provide the best evidence to date of the long-term consequences of the Prestige oil spill. They also show how, regardless of wider oceanographic variability, lasting impacts on predator-prey dynamics can be assessed using biochemical markers. This is particularly useful when larger-scale and longer-term monitoring of all trophic levels is unfeasible owing to limited funding or high ecosystem complexity.
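A hypothetical sketch of how δ15N values translate into trophic level, which underlies the chick diet shift described above. The abstract gives no equations, so this uses the widely cited convention of roughly 3.4‰ enrichment in δ15N per trophic level; all numbers are illustrative, not the study's measurements:

```python
# Assumed convention: ~3.4 per-mil delta-15N enrichment per trophic level.
TROPHIC_ENRICHMENT = 3.4

def trophic_level(d15n_consumer: float, d15n_baseline: float,
                  baseline_level: float = 2.0) -> float:
    """Estimate trophic level relative to a baseline organism of known
    trophic level (e.g. a primary consumer at level 2)."""
    return baseline_level + (d15n_consumer - d15n_baseline) / TROPHIC_ENRICHMENT

# Illustrative (not measured) values: a post-spill drop in chick delta-15N
# reads as a shift toward lower-trophic, more pelagic prey.
print(trophic_level(13.5, 8.0))  # pre-spill  -> ~3.62
print(trophic_level(12.0, 8.0))  # post-spill -> ~3.18
```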