Abstract:
Problems in the design of active magnetic bearing control are addressed. An estimation controller is designed and applied to a rigid rotor. A mathematical model of the active magnetic bearing controller is developed and implemented on a DSP. The results of this implementation are analyzed, and conclusions about the digital signal processing are drawn.
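The abstract does not give the controller equations, so as a hedged illustration only, the sketch below shows the generic structure such a DSP realization typically has: a discrete-time Luenberger-style estimator feeding a state-feedback law. All matrices and gains are placeholder assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical discretized rotor/bearing model x[k+1] = A x[k] + B u[k], y = C x
# (placeholder numbers; the thesis does not publish its matrices)
A = np.array([[1.0, 0.001], [0.5, 1.0]])
B = np.array([[0.0], [0.002]])
C = np.array([[1.0, 0.0]])

K = np.array([[800.0, 40.0]])   # assumed state-feedback gain
L = np.array([[0.6], [90.0]])   # assumed estimator gain

x_hat = np.zeros((2, 1))        # estimated state

def control_step(y_measured: float) -> float:
    """One sample of the estimator + state-feedback loop, as run per DSP interrupt."""
    global x_hat
    u = (-K @ x_hat).item()                       # control command from estimated state
    y_err = y_measured - (C @ x_hat).item()       # innovation (measurement residual)
    x_hat = A @ x_hat + B * u + L * y_err         # estimator update
    return u
```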
Abstract:
Software development tools use information derived from the source code the developer produces. This information is exploited in different phases of a software project and for different purposes. In modern software projects the amount of information used can grow very large. Software tools have their own information models and access mechanisms. The amount of information, together with the separate per-tool information models, makes it very difficult to build a flexible tool environment, especially for a domain-specific software development process. This thesis analyzes the basic information metamodels of the Unified Modeling Language, the Python programming language and the C++ programming language. The level of meta-information is restricted to the structural level; executable constructs are left out. The ModelBase metamodel is composed from the analyzed existing metamodels. This metamodel can be used in the future for the development of software tools.
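As a minimal sketch (the names below are illustrative assumptions, not the thesis's ModelBase schema), a merged structural metamodel can be represented as plain data classes describing types, their attributes and operation signatures, with the executable level deliberately omitted:

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    type_name: str             # e.g. "int", "std::string", "MyClass"

@dataclass
class Operation:
    name: str
    parameter_types: list[str] = field(default_factory=list)
    return_type: str = "void"  # structural signature only; no body

@dataclass
class Classifier:
    """A language-neutral class/struct/UML classifier in the merged metamodel."""
    name: str
    source_language: str       # "UML", "Python", or "C++"
    attributes: list[Attribute] = field(default_factory=list)
    operations: list[Operation] = field(default_factory=list)
    bases: list[str] = field(default_factory=list)

# Example: the same structural fact recorded from two source languages
point_cpp = Classifier("Point", "C++",
                       attributes=[Attribute("x", "double"), Attribute("y", "double")])
point_uml = Classifier("Point", "UML",
                       attributes=[Attribute("x", "Real"), Attribute("y", "Real")])
```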
Abstract:
The goal of the study was to determine the benefits that careful management of an intellectual property rights portfolio creates for a company in the software industry. The research material was collected by interviewing people in different positions at three Finnish software product and service companies. The study shows that the IPR portfolios of software companies consist of trade secrets, copyright, trademarks, domain names and a few patents. Interest in patents in the software industry has grown, in particular because they provide stronger protection than copyright. At the moment, however, attitudes toward software patents in Europe are still in flux. If software patents are accepted, the strategic importance of the IPR portfolio will grow. Portfolio management will then support the company's goals, for example securing its own freedom to operate, by assisting in the application process, monitoring the market, and assessing the different ways the company's own IPR portfolio can be exploited.
Abstract:
The goal of the thesis was to study the international partnerships of small and medium-sized (SME) software companies. The main objective was to find means by which SME software companies could become strategic partners in the partnership programs of large international companies. In addition, the thesis aimed to clarify how commitment between partners could be strengthened so that SME software companies could achieve real added value and international growth through partnership programs. The thesis is divided into a theoretical and an empirical part. The theoretical part focuses on high-technology marketing in the software industry and on international partnerships. The partnership programs of large companies have not been studied from the perspective of Finnish SME software companies, which justifies the empirical study. The empirical study was carried out as a qualitative case study, with semi-structured interviews as the research method. The results show that achieving the position of a strategic partner is a long and demanding journey for SMEs. The partnership programs of large international companies are usually complex, and achieving real added value through a partnership program requires considerable resources from SMEs. For SMEs to achieve and retain a strategic partner's position in a partnership program, active and daily interaction between the partners is required. In particular, close personal relationships with the right key individuals are a necessity. Close contacts enable SMEs to at least partly bypass the bureaucracy of the partnership program, which increases trust and commitment in the partnership and promotes international growth and business success.
Abstract:
The purpose of this master's thesis was to determine the most suitable growth strategy for an information systems integrator that produces its own software. The objective was for the results to provide concrete support for the strategic planning of the commissioning company, Informa Oy. The strategic analysis of the company is treated in terms of external and internal factors. At the same time, different schools of strategy research are compared and a synthesis between them is created. The third theoretical part examines growth strategies from the viewpoint of both directional and competitive strategies. The material for the empirical part was collected through a qualitative interview study surveying the views of people working in different positions in the commissioning company. In addition, customer surveys commissioned by the company and trade press articles on the development of the industry were analyzed. The most significant development trends of the industry were found to be internationalization, networking, consolidation, standardization of products and the increasing openness of software interfaces. Of the proposals presented to Informa Oy, the most important concern directing more resources to the development of core competencies and core products. Consolidation of the industry means that a small company delivering shop-floor-level information systems must get into the value chains that emerge from the competition as winners. Among customer segments, SMEs and the public and service sectors were identified as the most promising growth areas.
Abstract:
This final report embeds the research carried out to produce a web application that makes it possible to record the processes involved in milk production in the Cayambe canton of Pichincha province, Ecuador; the project was brought to completion thanks to the help of CILEC. The first chapter of this document gives a brief introduction that explores the problem addressed by the project and states the objectives that set the guidelines for the project; this chapter also briefly covers the state of the art, summarizing the work done to date. The second chapter presents the analysis carried out during the collection of functional requirements and the decisions on their automation; then, by applying the XP methodology, the diagrams describing the flow of the system were produced. The structure of the database to be used within the application is also described here. Following the system design, the application is developed as described in the third chapter, which briefly covers the packages created and the relevant configurations, and also sets out the functional tests of the system. The fourth chapter shows the results of applying the system in terms of its defined modules. Finally, the conclusions are presented, together with the bibliographic references used for this document.
Abstract:
We propose a new approach and related indicators for globally distributed software support and development, based on a three-year process improvement project in a globally distributed engineering company. The company develops, delivers and supports a complex software system with tailored hardware components and unique end-customer installations. Applying domain knowledge from operations management on lead time reduction and its multiple benefits to process performance, the workflows of globally distributed software development and multi-tier support processes were measured and monitored throughout the company. The results show that global end-to-end process visibility and centrally managed reporting at all levels of the organization catalyzed a change process toward significantly better performance. With the new performance indicators, based on lead times and their variation together with fixed control procedures, the case company was able to report faster bug-fixing cycle times, improved response times and generally better customer satisfaction in its global operations. In all, lead times to implement new features and to respond to customer issues and requests were reduced by 50%.
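As a hedged illustration (the paper's exact indicator definitions are not given in this abstract), lead-time indicators of this kind are typically the elapsed time from ticket creation to resolution, tracked as a mean and its variation against fixed control limits:

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical (opened, closed) timestamps for support tickets
tickets = [
    (datetime(2024, 1, 2), datetime(2024, 1, 9)),
    (datetime(2024, 1, 3), datetime(2024, 1, 5)),
    (datetime(2024, 1, 4), datetime(2024, 1, 18)),
]

lead_times = [(closed - opened).days for opened, closed in tickets]

mu, sigma = mean(lead_times), stdev(lead_times)
upper_control_limit = mu + 3 * sigma   # fixed control procedure: flag outliers

print(f"mean lead time: {mu:.1f} days, UCL: {upper_control_limit:.1f} days")
for lt in lead_times:
    if lt > upper_control_limit:
        print(f"ticket with {lt}-day lead time exceeds the control limit")
```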
Abstract:
Capercaillie (Tetrao urogallus) is a large grouse that is continuously distributed across the tundra and the mid-altitude mountains of Western Europe. However, the populations in Western Europe have shown a constant decline during the last decades. The causes of this decline are probably related to human activities, such as cattle breeding and tourism, that have led to habitat modification and fragmentation. Unfortunately, populations that have undergone drastic demographic bottlenecks often go through genetic processes of inbreeding and loss of diversity that decrease their fitness and eventually lead to extinction. This thesis presents the investigations conducted to estimate the impact of the demographic decline of capercaillie populations on the extent and distribution of their genetic variability in the Jura and the Pyrenees mountains. Because grouse are protected by wildlife legislation, and also because of the cryptic behaviour of capercaillie, all DNA material used in this study was extracted from faeces (non-invasive genetic sampling). In the first part of the thesis, I detail the protocols of DNA extraction and PCR amplification, adapted from classical methods that use conventional DNA-rich samples. The use of faecal DNA imposes specific constraints due to the low quantity and the highly degraded state of the genetic material available. These constraints were partially overcome by performing multiple genotyping repetitions to obtain sufficient reliability. I also investigated the causes of DNA degradation in faeces. Among the main degraders, namely bacterial activity, spontaneous hydrolysis and free-DNase activity, the last was identified as the most important in our experiments. These enzymes degrade DNA very rapidly; as a consequence, faeces sampling should preferably be planned in cold and dry weather conditions, which inhibit enzyme activity. The second part of the thesis is a simulation study assessing the capacity of the software Structure to detect population structure in hierarchical models relevant to situations encountered in wild populations, using several types of genetic markers. The methods implemented in Structure proved efficient in detecting the highest level of hierarchical structure. The third and fourth parts of the thesis describe the population genetics status of the remaining Jura and Pyrenees populations using 11 microsatellite loci. In neither population was inbreeding or reduced genetic diversity detected. Furthermore, the genetic differentiation between patches defined by habitat suitability remains moderate and correlated with geographical distance, suggesting that significant dispersal between patches was at work at least until the last generations. The comparison of diversity indicators with other species, and with other populations of capercaillie, indicates that the Jura population has retained a large part of its original genetic diversity. These results suggest that the recent decline has so far had a moderate impact on genetic factors and that these populations might have retained the potential for long-term survival, if the decline is stopped. Finally, in the fifth part, the analysis of relatedness between males participating in the courtship display, or lek, indicates that capercaillie males, as has been shown for some other grouse species, gather on leks among individuals that are more related than the population average. This pattern appears to be due to both population structure and kin association. In conclusion, this first study relying exclusively on nuclear DNA extracted from faeces has provided novel information that was not available through field observation or classical genetic sampling. No bird was captured or disturbed, and the results are consistent with other studies of closely related species. However, the size of these populations is approaching thresholds below which long-term survival is unlikely. The persistence of genetic diversity for the forthcoming generations therefore remains bound to adult survival and to an increase in reproductive success.
Abstract:
VALOSADE (Value Added Logistics in Supply and Demand Chains) is a research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is included in the ELO (E-business Logistics) technology program of Tekes (the Finnish Technology Agency). SMILE (SME sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. SMILE research focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood-processing customers. The basic theme of the SMILE study is communication and e-business in the supply and demand network. This first phase of the research concentrates on creating the background for the SMILE study and for the e-business solutions of the maintenance case network. The focus is on general trends of e-business in supply chains and networks of different industries; the total e-business system architecture of company networks; the e-business strategy of a company network; the information value chain; the different factors that influence the e-business solution of a company network; and the correlation between e-business and competitive advantage. Literature, interviews and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organizational structures that can add value for the end customer. Information is one of the key factors in these decentralized structures. Because of the decentralization of business, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows in company networks between different systems. Furthermore, some new solutions, such as network information systems, are utilized in optimizing network performance and in standardizing common network processes. Some cases have, however, indicated that utilization of e-business in a decentralized business model is not always a necessity; the added value of ICT must be defined case by case. In the theory part of the report, different e-business and architecture models are introduced. These models are compared with the empirical case data in the research results. The biggest difference between theory and empirical data is that the models are mainly developed for large-scale companies, not for SMEs, because implemented network e-business solutions are mainly large-company centred. Genuine SME-network-centred e-business models are quite rare, and studies in that area have been few in number. Business relationships between customers and their SME suppliers nowadays concentrate more on collaborative tactical and strategic initiatives in addition to transaction-based operational initiatives. However, e-business systems are still mainly based on the exchange of operational transactional data. Collaborative e-business solutions are in the planning or pilot phase in most case companies. Furthermore, many e-business solutions today involve only two participants, whereas network and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of exchanged information, information criticality, the type and duration of the business relationship, the internal information systems of partners, and processes and operating models (e.g. different ordering models) differ among the network companies, which are, furthermore, at different stages of networking and e-business readiness. Because of these factors, different customer-supplier combinations in the network must utilize quite different e-business architectures, technologies, systems and standards.
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climate studies. As historians of the Earth, "reconstructers" try to decipher the past. Since they have known that continents move, geologists have been trying to retrieve the distribution of the continents through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but are included in a larger ensemble comprising, all at once, oceanic and continental crust: the tectonic plates. Unfortunately, mainly due to technical and historical issues, this idea still does not receive sufficient echo among the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue. Using assemblies of continents (referred to as "key assemblies") as anchors distributed all along the scope of our study (ranging from Eocene to Cambrian time), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolized by synthetic isochrons) to the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved through time, follow a consistent geodynamic evolution, and always form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model. This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become an almost insurmountable obstacle. GIS (Geographic Information Systems) and geodatabases are modern informatics tools specifically devoted to storing, analyzing and managing spatially referenced data and their attributes. By developing the PaleoDyn database in the ArcGIS software we converted the mass of scattered data offered by the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, by programming specific tools we both facilitated the reconstruction work (task automation) and enhanced the model, greatly increasing the kinematic control of plate motions thanks to plate velocity models. Based on the 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology, such as global sea-level variations and climate change. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, all along the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a linear distribution along a band going from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Basically, this signifies that plates tend to escape this median plane. In the absence of an unidentified methodological bias, we interpret this as the potential secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model, and we took particular care to reconstruct it in great detail. In this model, the oceanic crust is preserved from one reconstruction to the next. The crustal material is symbolized by synthetic isochrons whose ages are known. We also reconstruct the margins (active or passive), the mid-ocean ridges and the intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than all previously existing ones.
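As a hedged aside (not from the thesis itself), a plate rotation pole is an Euler pole: a rigid plate's motion on the sphere is a rotation of its points about an axis through the Earth's centre. A minimal sketch of applying such a rotation with Rodrigues' formula, using a made-up pole and angle:

```python
import numpy as np

def to_cartesian(lat_deg: float, lon_deg: float) -> np.ndarray:
    """Unit vector on the sphere from latitude/longitude in degrees."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotate_about_pole(point: np.ndarray, pole: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rigid rotation of a point about an Euler pole (Rodrigues' rotation formula)."""
    theta = np.radians(angle_deg)
    return (point * np.cos(theta)
            + np.cross(pole, point) * np.sin(theta)
            + pole * np.dot(pole, point) * (1 - np.cos(theta)))

# Hypothetical example: move a terrane point 12 degrees about a made-up pole
pole = to_cartesian(45.0, -30.0)
point = to_cartesian(10.0, 20.0)
moved = rotate_about_pole(point, pole, 12.0)
lat = np.degrees(np.arcsin(moved[2]))
lon = np.degrees(np.arctan2(moved[1], moved[0]))
print(f"rotated position: lat={lat:.2f}, lon={lon:.2f}")
```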
Abstract:
This thesis investigates factors that affect software testing practice. The thesis consists of empirical studies in which the affecting factors were analyzed and interpreted using quantitative and qualitative methods. First, the Delphi method was used to specify the scope of the thesis. Secondly, for the quantitative analysis, 40 industry experts from 30 organizational units (OUs) were interviewed. The survey method was used to explore factors that affect software testing practice, and conclusions were derived using correlation and regression analysis. Thirdly, from these 30 OUs, five were selected for an in-depth case study. The data was collected through 41 semi-structured interviews. The affecting factors and their relationships were interpreted with qualitative analysis using grounded theory as the research method. The practice of software testing was analyzed from the process improvement and knowledge management viewpoints. The qualitative and quantitative results were triangulated to increase the validity of the thesis. The results suggest that testing ought to be adjusted according to the business orientation of the OU: the business orientation affects the testing organization and the knowledge management strategy, and the business orientation and the knowledge management strategy together affect outsourcing. As a special case, the complex relationship between testing schedules and knowledge transfer is discussed. The results of this thesis can be used in improving testing processes and knowledge management in software testing.
Abstract:
The patent system was created for the purpose of promoting innovation by granting inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack a one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential to be used to protect products and processes from imitation, to limit competitors' freedom to operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own, or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how that invention is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits. Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investment as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries such as ICT that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with a focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments concerning software and business-method patents are investigated and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e. patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented, and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change that the patent system is facing, and of how these challenges are reflected in standard setting.
Abstract:
This thesis examines coordination of the systems development process in a contemporary software-producing organization. The thesis consists of a series of empirical studies in which the actions, conceptions and artifacts of practitioners are analyzed using a theory-building case study research approach. The three phases of the thesis provide empirical observations on different aspects of systems development. The first phase examines the role of architecture in coordination and cost estimation in a multi-site environment. The second phase involves two studies on the evolving requirement understanding process and how to measure this process. The third phase summarizes the first two phases and concentrates on the role of methods and how practitioners work with them. All phases provide evidence that current systems development method approaches are too naïve in addressing the complexity of the real world. In practice, development is influenced by opportunity and other contingent factors. The systems development process is not coordinated using the phases and tasks defined in methods as a universal mechanism for managing the process, as most method approaches assume. Instead, the studies suggest that managing the systems development process happens through coordinating development activities, with methods used as tools. These studies contribute to systems development methods by emphasizing support for communication and collaboration between systems development participants. Methods should not describe development activities and phases at a detailed level, but should include higher-level guidance for practitioners on how to act in different systems development environments.
Abstract:
Software engineering is criticized as not being engineering, or even a well-developed science, at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards: the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes, such as cost and schedule, and on product attributes, such as size and quality. Effort estimation can be used for several purposes; in this thesis, only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It uses the basic concepts of Function Point Analysis but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn, and effort estimation accuracy has improved significantly since the model was taken into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but the newly proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design proceed, and thus 'grows up' along with the software project. Developing an effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process; without a proper framework, the estimation capability of an organization declines, since it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and they clearly show that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
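As a hedged sketch only (the thesis's own model is not reproduced in this abstract), a function-point-style effort estimate is typically a weighted size count multiplied by a productivity rate calibrated from history data; all element types, weights and rates below are illustrative assumptions:

```python
# Illustrative weights per counted element type (assumed, not the thesis's values)
WEIGHTS = {"input": 4, "output": 5, "inquiry": 4,
           "internal_file": 10, "interface_file": 7}

def size_in_points(counts: dict[str, int]) -> int:
    """Unadjusted size as a weighted sum of counted elements."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

def estimate_effort_hours(counts: dict[str, int], hours_per_point: float) -> float:
    """Effort = size * productivity rate, where the rate is calibrated from history data."""
    return size_in_points(counts) * hours_per_point

# Example: an early, coarse-grained count; a dynamic size metric would re-estimate
# later as specification and design work refines these counts
early = {"input": 10, "output": 8, "inquiry": 5,
         "internal_file": 4, "interface_file": 2}
print(estimate_effort_hours(early, hours_per_point=8.5))  # rate from past releases
```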