997 results for "brake even point"


Relevance:

20.00%

Abstract:

1. Summaries

1.1. Preamble and extended abstract

The present thesis dissertation addresses the question of antiviral immunity from the particular standpoint of the adaptive T cell-mediated immune response. The experimental work is presented in the form of three published articles (two experimental articles and one review article, see sections 4.1, 4.2 and 4.3 on pages 73, 81 and 91, respectively), describing advances both in our understanding of viral control by CD8 T lymphocytes and in vaccine development against the Human Immunodeficiency Virus type 1 (HIV-1). Because the articles focus on rather specialized areas of antiviral immunity, the article sections are preceded by a general introduction (section 3) on the immune system in general and on the four viruses addressed in the experimental work, namely HIV-1, Cytomegalovirus (CMV), Epstein-Barr virus (EBV) and Influenza virus (Flu). This introduction is aimed at providing a glimpse of viral molecular biology and immunity, to help the non-expert reader proceed into the experimental part. For this reason, each section is presented as an individual entity and can be consulted separately. The four viruses described are of particular relevance to immunity because they induce a spectrum of contrasting host responses. Flu causes a self-limiting disease after which the virus is eradicated. CMV and EBV cause pauci-symptomatic or asymptomatic diseases after which the viruses establish lifelong latency in the host cells, but are kept in check by immunity. Finally, HIV-1 both establishes latency - by inserting its genome into the host cell chromosome - and proceeds to destroy the immune system in a poorly controlled fashion. Hence, understanding the fundamental differences between these kinds of virus-host interactions might help develop new strategies to curb progressive diseases caused by viruses such as HIV-1.

Publication #1: The first article (section 4.1, page 73) represents the main frame of my laboratory work. It analyses the ability of CD8 T lymphocytes recovered from virus-infected patients to secrete interferon γ (IFN-γ) alone or in conjunction with interleukin 2 (IL-2) when exposed in vitro to their cognate viral antigens. CD8 T cells are instrumental in controlling viral infection. They can identify infected cells by detecting viral antigens presented at the surface of the infected cells, and eliminate both the cell and its infecting virus by triggering apoptosis and/or lysis of the infected cell. Recognition of these antigens triggers the cognate CD8 cells to produce cytokines, including IFN-γ and IL-2, which in turn attract and activate other pro-inflammatory cells. IFN-γ triggers both intrinsic antiviral activity of the infected cells and distant activation of pro-inflammatory cells, which are important for the eradication of infection. IL-2 is essential for clonal expansion of the antigen (Ag)-specific CD8 T cell. Hence the existence of Ag-specific CD8 cells secreting both IFN-γ and IL-2 should be beneficial for controlling infection. In this first work we determined the percentages of IFN-γ/IL-2 double positive and single IFN-γ-secreting CD8 T cells directed against HIV-1, CMV, EBV and Flu antigens in three groups of subjects: (i) HIV-1-infected patients progressing to disease (progressors), (ii) HIV-1-infected subjects not progressing to disease (long-term non-progressors, or LTNP), and (iii) HIV-negative blood donors.
The results disclosed a specific IFN-γ/IL-2 double positive CD8 response in all subjects able to control infection. In other words, IFN-γ/IL-2 double positive cells were present among virus-specific CD8 T cells against Flu, CMV and EBV, as well as against HIV-1 in LTNP. In contrast, progressors only had single IFN-γ-secreting CD8 T cells. Hence, the ability to develop an IFN-γ/IL-2 double positive response might be critical for controlling infection, independently of the nature of the virus. Additional experiments helped identify the developmental stage of the missing cells (using markers such as CD45RA and CCR7) and showed a correlation between the absence of IL-2-secreting CD8 T cells and a failure in the proliferative capacity of virus-specific CD8 T cells. Addition of exogenous IL-2 could restore clonal expansion of HIV-1-specific CD8 T cells, at least in vitro. It could further be shown that IL-2-secreting CD8 T cells are sufficient to support proliferation even in the absence of CD4 help. However, the reason for the missing IFN-γ/IL-2 double positive CD8 T cell response in HIV-1 progressors has yet to be determined.

Publication #2: The second article (section 4.2, page 81) explores new strategies to trigger CD8 T cell immunity against specific HIV-1 proteins believed to be processed and exposed as an "infection signal" at the surface of infected cells. Such signals consist of peptide fragments (8-13 amino acids) originating from viral proteins and presented to CD8 T cells in the frame of particular cell surface molecules of the major histocompatibility complex class I (MHC I). To mimic "natural" viral infection, the HIV-1 polyprotein Gagpolnef was inserted and expressed in either of two attenuated viruses, i.e. vaccinia virus (MVA) or poxvirus (NYVAC). Mice were infected with these recombinant viruses and the specific CD8 T cell response to Gagpolnef peptides was sought. Mice could indeed mount a CD8 T cell response against the HIV-1 antigens, indicating that the system worked, at least in this animal model. To further test whether peptides from Gagpolnef could also be presented in the frame of human MHC class I proteins, a second round of experiments was performed in "humanized" transgenic mice expressing human MHC molecules. The transgenic mice were also able to load Gagpolnef peptides on their human MHC molecules, and these could be detected and destroyed by Ag-specific CD8 T cells isolated from HIV-1-infected patients. Therefore, expressing Gagpolnef on attenuated recombinant viruses might represent a valid strategy for anti-HIV-1 immunization in humans.

Publication #3: This is a review paper (section 4.3, page 91) describing the immune response to CMV and newly developed methods to detect this cellular immune response. Part of it focuses on the detection of T cells using in vitro manufactured tetramers. These consist of four MHC class I molecules linked together and loaded with the appropriate antigenic peptide. The tetramer can be labeled with a fluorochrome and analyzed with a fluorescence-activated cell sorter. Taken together, the work presented indicates that (i) an appropriate CD8 T cell response, consisting of IFN-γ/IL-2 double positive effectors, can potentially control viral infection, including HIV-1 infection, (ii) such a response might be triggered by recombinant viral vaccines, and (iii) the CD8 T cell response can be monitored by a variety of techniques, including recently developed MHC class I tetramers.
1.2. Preamble and extended summary

The present thesis work addresses antiviral immunity from the particular standpoint of the adaptive T cell response. The experimental work is presented in the form of three published articles (two experimental articles and one review, see sections 4.1, 4.2 and 4.3, pages 58, 66 and 77, respectively), describing advances in our understanding of the control of viral infection by CD8 T lymphocytes, as well as in the development of new vaccines against the Human Immunodeficiency Virus type 1 (HIV-1). Because cellular antiviral immunity is a specialized topic, the articles are preceded by a general introduction (section 3), whose purpose is to give the non-expert reader the background needed to better grasp the experimental work. This introduction outlines the immune system and describes in general terms the four viruses used in the experimental work: HIV-1, Cytomegalovirus (CMV), Epstein-Barr virus (EBV) and Influenza A virus (Flu). All sections are presented as self-contained units and can be consulted separately. The description of the four viruses is of particular relevance to their interaction with the immune system, as they induce a panoply of immune responses spanning the extremes of the host reaction. Influenza A causes an acute cytopathic disease, at the end of which the virus is eradicated by the host. CMV and EBV classically cause pauci-symptomatic or asymptomatic infections, after which the viruses persist latently in the host cell; they nevertheless remain under the control of the immune system, which can prevent eventual reactivation. Finally, HIV-1 establishes both a latent infection - by inserting its genome into the chromosome of the host cells - and a productive, cytopathic infection that escapes immune control and destroys its target cells. Understanding the fundamental differences between these types of virus-host interactions should facilitate the development of new antiviral strategies.

Article 1: The first article (section 4.1, page 58) represents the main focus of my laboratory work. It analyses the capacity of CD8 T lymphocytes specific for different viruses to secrete interferon gamma (IFN-γ) and/or interleukin 2 (IL-2) after stimulation with their specific antigen. CD8 T cells play a crucial role in the control of viral infections. They identify infected cells by detecting viral antigens presented at the surface of those same cells, and eliminate both the infected cells and the viruses they contain by inducing apoptosis and/or lysis of the target cells. In parallel, antigen recognition stimulates the CD8 T cell to secrete cytokines. IFN-γ is one example: it stimulates infected cells to develop intrinsic antiviral activity, attracts other inflammatory cells to the site, and activates their pathogen-eradicating functions. IL-2 is another: it is essential for the clonal expansion of CD8 T cells specific for a given virus, and thus for enlarging the pool of antiviral lymphocytes. Consequently, the dual capacity to secrete IFN-γ and IL-2 could be an advantage for antiviral control by CD8 T cells. In this work we compared the proportions of double positive (IFN-γ/IL-2) and single positive (IFN-γ) CD8 T lymphocytes in three groups of subjects: (i) HIV-1-infected patients who do not control the infection (progressors), (ii) HIV-1-infected patients who control the infection despite the absence of treatment ("long-term non-progressors" [LTNP]), and (iii) HIV-1-negative blood donors. The results showed that individuals able to control an infection possessed double positive (IFN-γ/IL-2) CD8 T cells, whereas patients not controlling the infection had predominantly single positive (IFN-γ) CD8 cells. Specifically, the T lymphocytes specific for Flu, CMV, EBV and, in LTNP, HIV-1 were all IFN-γ/IL-2 double positive. In contrast, the HIV-1-specific CD8 T lymphocytes of progressors were IFN-γ single positive. The capacity to develop an IFN-γ/IL-2 response could therefore be crucial for the control of infection, independently of the nature of the virus. Indeed, the absence of IL-2 secretion by CD8 T lymphocytes was shown to correlate with their inability to proliferate. In our hands, this proliferation could be restored in vitro by the addition of exogenous IL-2; however, the feasibility of this type of complementation in vivo is not clear. Additional experiments allowed us to define the developmental stage of the double positive and single positive lymphocytes using the markers CD45RA and CCR7. It remains to be understood why certain specific CD8 T lymphocytes are unable to secrete IL-2.

Article 2: The second article explores new strategies to induce CD8 T cell immunity against HIV-1 proteins that are processed and exposed at the surface of infected cells. These signals consist of peptide fragments of 8-13 amino acids derived from viral proteins and exposed at the surface of infected cells in the frame of the specialized class I histocompatibility molecules (major histocompatibility complex class I, MHC I). To mimic a viral infection, the HIV-1 polyprotein Gagpolnef was inserted and expressed in two attenuated viral vectors, MVA (derived from vaccinia virus) or NYVAC (derived from a poxvirus). Mice were then infected with these recombinant viruses and the CD8 T cell response to peptides derived from Gagpolnef was studied. The mice were able to develop a CD8 T cell response against these HIV-1 antigens. To test whether these antigens could also be presented in the frame of human MHC molecules, additional experiments were carried out in mice expressing a human MHC. These experiments showed that CD8 T cells specific for HIV proteins could be detected. This work opens new options for the use of recombinant viruses expressing Gagpolnef as a vaccine strategy against HIV-1 in humans.

Article 3: This review describes the immune response to CMV, as well as new methods that can serve for its detection. Part of the manuscript describes the detection of T cells with tetramers: chimeric proteins composed of four MHC molecules linked together. They are then "loaded" with the appropriate antigenic peptide and used to detect the CD8 T cells specific for this assembly. They are also labeled with a fluorochrome, allowing analysis by flow cytometry and, ultimately, the isolation of the CD8 cells of interest. In summary, the work presented in this thesis indicates that (i) an appropriate CD8 T cell response - defined by the presence of effector cells doubly positive for IFN-γ and IL-2 - appears indispensable for the control of viral infections, including HIV-1, (ii) such a response can be induced by recombinant viral vaccines, and (iii) the CD8 T cell response can be analyzed and monitored by several techniques, including MHC class I tetramers.

1.3. Summary for the general public

The human immune system is composed of different elements (cells, tissues and organs) that participate in the defense of the organism against pathogens (bacteria, viruses). Among these cells, CD8 T lymphocytes, also called killer cells, play an important role in the immune response and in the control of viral infections. CD8 T cells specifically recognize fragments of viral proteins that are exposed at the surface of virus-infected cells. Upon this recognition, the CD8 T cells are able to destroy and eliminate the infected cells, as well as the viruses they contain. In the context of infection with the human immunodeficiency virus (HIV), the virus responsible for AIDS, the presence of CD8 T cells has been shown to be crucial: in the absence of these cells, HIV-infected individuals progress more rapidly to AIDS. Over a lifetime, humans are exposed to several viruses. In contrast to HIV, some of them do not cause serious disease: for example the influenza virus, the cytomegalovirus or the Epstein-Barr virus. Some of these viruses can be controlled and eliminated from the organism (e.g. the influenza virus), whereas others are merely kept in check by our immune system and remain present in small quantities in the body without affecting our health. My thesis work addresses the mechanism by which the immune system controls viral infections: why some viruses can be controlled or even eliminated from the organism while others, notably HIV, cannot. This work demonstrated that HIV-specific CD8 T cells do not secrete the same substances, necessary for the development of an effective antiviral response, as the CD8 T cells specific for the controlled viruses (influenza virus, cytomegalovirus and Epstein-Barr virus). In parallel, we also observed that HIV-specific CD8 T lymphocytes lack the capacity to divide. They are thus unable to be present in sufficient quantity to fight the AIDS virus effectively. The difference(s) between the CD8 T cells specific for the controlled viruses (influenza, cytomegalovirus, Epstein-Barr) and those specific for HIV may perhaps lead us to understand how to restore effective immunity against the latter.

Relevance:

20.00%

Abstract:

Freezing point depressions (ΔTf) of dilute solutions of several alkali metal chlorides and bromides were calculated by means of the best activity coefficient equations. In the calculations, the Hückel, Hamer and Pitzer equations were used for the activity coefficients. The experimental ΔTf values available in the literature for dilute LiCl, NaCl and KBr solutions can be predicted within experimental error by the Hückel equations used. The experimental ΔTf values for dilute LiCl and KBr solutions can also be accurately calculated by the corresponding Pitzer equations, and those for dilute NaCl solutions by the Hamer equation for this salt. Neither the Hamer nor the Pitzer equations accurately predict the freezing points reported in the literature for LiBr and NaBr solutions. The ΔTf values for dilute solutions of RbCl, CsCl or CsBr are currently not accurately known, because the existing data for these solutions are not precise. The freezing point depressions for LiCl, NaCl and KBr solutions are tabulated in the present study at several rounded molalities. The ΔTf values in this table can be highly recommended. The activity coefficient equations used in the calculation of these values have been tested with almost all high-precision electrochemical data measured at 298.15 K.
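For orientation, the quantity tabulated above follows, to first order, the standard cryoscopic relation (a sketch using textbook values, not an equation taken from the paper; the study's own equations include higher-order corrections):

```latex
% First-order freezing point depression of a 1:1 electrolyte (e.g. NaCl)
% in water; m is the molality and \phi the molal osmotic coefficient
% obtained from the activity coefficient model (Hückel, Hamer or Pitzer).
\[
  \Delta T_{\mathrm{f}} \approx \nu\, K_{\mathrm{f}}\, m\, \phi,
  \qquad \nu = 2,\quad K_{\mathrm{f}}(\mathrm{H_2O}) \approx 1.86\ \mathrm{K\,kg\,mol^{-1}}
\]
```

For example, a 0.1 mol/kg NaCl solution with φ ≈ 0.93 gives ΔTf ≈ 2 × 1.86 × 0.1 × 0.93 ≈ 0.35 K.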

Relevance:

20.00%

Abstract:

Summary: In the context of an increasingly warm climate, locating permafrost in steep sedimentary terrain and evaluating the terrain movements occurring there are of prime importance. Within this problem area, this thesis is organized around two different research axes. From a static point of view, the research proposes a study of the distribution and characteristics of permafrost in the talus slopes of the alpine periglacial belt. From a dynamic point of view, it analyses the influence of permafrost characteristics (ice content, permafrost temperature, etc.) and of variations in air and ground temperatures on the creep velocities of frozen sedimentary bodies. To meet this double objective, a field-based approach was favoured. To determine the distribution and characteristics of the permafrost, the traditional permafrost prospecting methods were used, namely measurement of the ground temperature at the base of the snow cover (BTS), continuous ground temperature measurement, and geoelectrical surveying. Terrain movements were measured with a differential GPS. The permafrost distribution study was carried out in some fifteen talus slopes located mainly in the Mont Gelé (Verbier-Nendaz) and Arolla regions. In most cases, permafrost could be detected in the lower part of the sedimentary accumulations, whereas the middle part of the talus is usually not frozen. While this absence of permafrost sometimes extends into the uppermost portions of the slopes, the measurements show that in other cases frozen sediments are present there again. The electrical resistivities measured in the frozen portions of the studied talus slopes are in most cases clearly lower than those measured on rock glaciers. Previous studies have shown that internal air circulation is responsible for the negative thermal anomaly and, where it exists, for the permafrost found in the lower part of talus slopes located more than 1000 m below the regional lower limit of discontinuous permafrost. The study of four low-altitude sites (1400-1900 m), and in particular the instrumentation of the Dreveneuse site (Valais Prealps) with two boreholes, surface temperature sensors and an anemometer, made it possible to verify and refine the ventilation mechanism active within cold low-altitude talus slopes. This mechanism works as follows: in winter, the air contained in the talus, warmer and lighter than the outside air, rises inside the sedimentary accumulation and is expelled in its uppermost parts. This chimney effect draws cold air into the lower part of the talus, causing a marked overcooling of the ground. In summer the mechanism reverses, the talus being colder than the surrounding air, and cold air is then expelled at the bottom of the slope. Ascending winter ventilation could be demonstrated in some of the high-altitude talus slopes studied; it is probably largely responsible for the particular configuration of the observed frozen zones. Even if the existence of a chimney effect could not be demonstrated in all cases, notably because interstitial ice obstructs the air pathways, indications of its possible operation exist in almost all the talus slopes studied. The absence of permafrost at altitudes favourable to it could in any case be explained by ground warming linked to expulsions of relatively warm air. The study of terrain movements was carried out at about ten sites, mainly on rock glaciers, but also on a push moraine and a few talus slopes. Several rock glaciers display recent destabilization features (landslide scars, tilted blocks, appearance of the fine matrix at the surface, etc.), which testifies to a recent acceleration of displacement velocities. This phenomenon, which seems general at the alpine scale, is probably attributable to the warming of permafrost over the last twenty years or so. The velocities measured on these landforms are often higher than the values usually proposed in the literature. A strong inter-annual variability of the velocities is also noted, which seems to depend on the variation of the mean annual surface temperature.

Abstract: In the context of a warmer climate, the localisation of permafrost in steep sedimentary terrain and the measurement of terrain movements that occur in these areas is of great importance. With respect to these problems, this PhD thesis follows two different research axes. From a static point of view, the research presents a study of the permafrost distribution and characteristics in the talus slopes of the alpine periglacial belt. From a dynamic point of view, an analysis of the influence of the permafrost characteristics (ice content, permafrost temperature, etc.) and air and soil temperature variations on the creep velocities of frozen sedimentary bodies is carried out. In order to attain this double objective, the "field" approach was favoured. To determine the distribution and the characteristics of permafrost, the traditional methods of permafrost prospecting were used, i.e. ground surface temperature measurements at the base of the snow cover (BTS), year-round ground temperature measurements and DC-resistivity prospecting. The terrain movements were measured using a differential GPS. The permafrost distribution study was carried out on 15 talus slopes located mainly in the Mont Gelé (Verbier-Nendaz) and Arolla areas (Swiss Alps). In most cases, permafrost was found in the lower part of the talus slope, whereas the middle part was free of ice. In some cases, the upper part of the talus is also free of permafrost, whereas in other cases permafrost is present. Electrical resistivities measured in the frozen parts of the studied talus are in most cases clearly lower than those measured on rock glaciers. Former studies have shown that internal air circulation is responsible for the negative thermal anomaly and, when it exists, the permafrost present in the lower part of talus slopes located more than 1000 m below the regional lower limit of discontinuous permafrost. The study of four low-altitude talus slopes (1400-1900 m), and notably the equipment of the Dreveneuse field site (Valais Prealps) with two boreholes, surface temperature sensors and an anemometer, made it possible to verify and detail the ventilation mechanism active in low-altitude talus slopes.
This mechanism works in the following way: in winter, the air contained in the block accumulation is warmer and lighter than the surrounding air and therefore moves upward in the talus and is expelled in its upper part. This chimney effect induces an aspiration of cold air into the lower part of the talus, which causes a strong overcooling of the ground. In summer, the mechanism is reversed because the talus slope is colder than the surrounding air, and cold air is then expelled in the lower part of the slope. Evidence of ascending ventilation in wintertime could also be found in some of the studied high-altitude talus slopes. It is probably mainly responsible for the particular configuration of the observed frozen areas. Even if the existence of a chimney effect could not be demonstrated in all cases, notably because of interstitial ice that obstructs the air circulation, indications of its presence exist in nearly all the studied talus. The absence of permafrost at altitudes favourable to its presence could be explained, for example, by the terrain warming caused by the expulsion of relatively warm air. Terrain movements were measured at about ten sites, mainly on rock glaciers, but also on a push moraine and some talus slopes. Field observations reveal that many rock glaciers display recent destabilization features (landslide scars, tilted blocks, presence of fine-grained sediments at the surface, etc.) that indicate a probable recent acceleration of the creep velocities. This phenomenon, which seems to be widespread at the alpine scale, is probably linked to the permafrost warming during the last decades. The measured velocities are often higher than the values usually proposed in the literature. In addition, strong inter-annual variations of the velocities were observed, which seem to depend on the mean annual ground temperature variations.
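As a rough quantitative aid to the chimney effect described above, the buoyancy driving the winter ventilation can be approximated with the standard stack-effect relation (a sketch under idealized assumptions - ideal gas, uniform internal air temperature - not a formula used in the thesis):

```latex
% Stack-effect driving pressure over a talus of height h, with internal
% air at T_i warmer than outside air at T_o in winter (temperatures in K).
\[
  \Delta P \approx \rho_{o}\, g\, h\, \frac{T_i - T_o}{T_i}
\]
```

With, say, h = 50 m, T_o = 263 K, T_i = 273 K and ρ_o ≈ 1.34 kg/m³, this gives ΔP ≈ 24 Pa: small, but sufficient to drive air through coarse, highly permeable blocks.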

Relevance:

20.00%

Abstract:

This study examines management accounting in business networks. Its objective is to increase understanding of the management accounting needs of networked companies and of the potential for applying network-level accounting systems. Management accounting and business networks have long been studied as separate fields, but despite the spread of the networked mode of operation, management accounting research specifically in a network environment is still at an early stage. The study was carried out in networked Finnish metal industry companies. The networks studied consist of a core company - the hub company - and the small supplier companies grouped around it. The study was conducted using thematic interviews in four different business networks. The research approach is qualitative and mainly descriptive. In research on network accounting, the views and attitudes of small supplier companies toward the development of accounting have remained in the background. The central perspective of this study is precisely the needs and attitudes of networked SMEs and their possibilities to participate in accounting development work. The study examines management accounting at a general level rather than focusing on individual methods. The study revealed certain information needs at the network level. In production comprising several processing stages in different companies, it is difficult to know exactly where an order is progressing and when it will arrive at a company for processing. Supplier companies also want more precise information, covering a longer period, about the order book of the network's hub company. These information deficiencies cause unnecessary work and complicate the management of resources. In the networks studied there would be a need for budgeting to support planning, but joint budgeting has not been implemented in practice. In network accounting, the open disclosure of costs to partners - and, more generally, openness regarding other performance measures - is the key issue around which the implementation of network accounting crystallizes. In two of the networks studied, openness about costs was realized; in the other two, the hub companies did not consider the suppliers' cost information significant for the competitiveness of the network. The background factors for the open disclosure of costs were the support of the hub company, pricing problems caused by complex constructions, and the securing of a profitable customer relationship. In developing network accounting, the role of the hub company is emphasized. If the supplier companies do not recognize the opportunities that network accounting offers for jointly developing and steering operations, the hub company must be able to explain convincingly to the suppliers how accounting systems benefit the whole network and especially the supplier companies. The companies studied must be able to turn the discussion that now takes place at a general level into routine, systematic planning and control work. Good experiences with even simple accounting and monitoring systems improve the chances of also applying more comprehensive accounting systems involving more confidential information.

Relevance:

20.00%

Abstract:

In order that the radius, and thus the non-uniform structure of the teeth and the other electrical and magnetic parts of the machine, may be taken into consideration, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computer resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, the calculation is carried out for each section individually, and the partial results are then combined into the final result. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electric circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its ripple. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of different sections of the magnet are selectable, but the calculation time depends strongly on these choices. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the shape of the magnet is a square and has no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections; clearly, as the number of sections increases, the result becomes more accurate. In a radial flux motor all sections of the magnets create force at the same radius. In the case of an axial flux motor, each radial section creates force at a different radius and the torque is the sum of these. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet and slot, is modelled with a reluctance net which considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed in order to get the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces in the rotor. This phenomenon is called cogging.
The flux in the slot opening area on the different sides of the opening and in the different slot openings is not equal, and so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component will also be spread to occur evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways: all the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. There are some factors that limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its width relative to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs - in the calculation model from one edge to the other - is not strictly correct. If this fact were to be considered in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this assumption is, nevertheless, negligible.
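The slice-and-sum procedure described above can be sketched in a few lines (an illustrative sketch only: the function and the field callables are hypothetical stand-ins, and the real program obtains the air-gap flux density from the saturable reluctance network rather than taking it as an input):

```python
import numpy as np

def axial_flux_torque(r_in, r_out, n_sections, b_gap, k_stator):
    """Approximate the torque of an axial flux machine by dividing the
    magnet into radial slices: each slice acts like a thin radial flux
    machine with its own lever arm, and the torque is the sum over slices.

    b_gap(r):    air-gap flux density [T] at radius r (hypothetical input;
                 in the real program it comes from the reluctance net)
    k_stator(r): linear current density [A/m] at radius r
    """
    radii = np.linspace(r_in, r_out, n_sections + 1)
    torque = 0.0
    for r0, r1 in zip(radii[:-1], radii[1:]):
        r_mid = 0.5 * (r0 + r1)      # mean radius (lever arm) of the slice
        dr = r1 - r0                 # radial width of the slice
        # The tangential (Lorentz) force density B*K acts on the annular
        # area 2*pi*r*dr; multiplying by the lever arm gives the torque.
        torque += b_gap(r_mid) * k_stator(r_mid) * 2.0 * np.pi * r_mid * dr * r_mid
    return torque

# Example: uniform 0.8 T air-gap field and 30 kA/m current loading
# between inner radius 50 mm and outer radius 100 mm, 20 slices.
print(axial_flux_torque(0.05, 0.10, 20, lambda r: 0.8, lambda r: 3.0e4))
```

With these illustrative numbers the sum approaches the closed-form integral 2πBK(r_out³ - r_in³)/3 ≈ 44 N·m, showing why each radial section must carry its own lever arm.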

Relevance:

20.00%

Abstract:

The increasing use of the Balanced Scorecard (BSC) prompted an interest in investigating what the BSC is really about. Failures in BSC projects motivated a study of what was wrong with the existing project models. Several project models for BSC implementation have been presented in the literature, the best known being the model developed by Kaplan and Norton. Originally the BSC was a rather operational performance measurement system whose aim was to raise non-financial measures alongside financial ones. Since then the approach has broadened into a strategy-based management system, but the construction of the model still has shortcomings. The deficiencies observed in BSC project models created a need to develop a new model. The objective of this study was to develop, with the help of Finnish business executives and experts in the field, a BSC project model with which companies could carry out their BSC projects more successfully. A further objective was to establish the current state of BSC use in the 500 largest companies in Finland. The study also sought information on why companies had embarked on a BSC project, which factors contributed to the success of a BSC project, and what changes companies had made on the basis of practical experience with the BSC. The theoretical part of the study examined strategic planning and management, the control systems used in managing companies, operational development, and the implementation of BSC projects. In the empirical part, a ten-phase BSC project implementation model was developed. This was done by studying how 15 consulting companies carry out BSC projects and by examining the experiences of 50 companies with their BSC projects. The developed model was tested in the Tulikivi case and evaluated in a survey and a workshop. According to the survey, the first Finnish companies began using the BSC in 1995. In 1996 its use became somewhat more common, and 1997 and 1998 can be called the breakthrough years of the BSC in Finland. Of the responding companies, 23.2% reported using the BSC, 14.8% were introducing it, and 19.2% were considering its introduction. Companies had embarked on a BSC project in the hope of, among other things, a better management system, more efficient operations and the achievement of change. The most important factors in the success of a BSC project were considered to be the commitment of management to the project, the linkage of the scorecard to strategy, and the clarity of the measures. The BSC was seen to have most influenced the understanding of the business as a whole, the implementation of strategy, and the monitoring of non-financial matters. Companies had changed their operations, for example, to be more customer- and future-oriented. In the future, the BSC was believed to have its greatest impact on holistic and strategic management and on monitoring the implementation of strategy. The survey showed that large companies use the BSC more than small ones. There are also regional differences: the BSC is used more in the Helsinki metropolitan area than elsewhere in the country. The more profitable a respondent judged its company to be, the more favourably it rated the project model developed in this study compared with the BSC project model developed by Kaplan and Norton. A BSC project is so comprehensive that its success requires the participation of the entire personnel. The genuine commitment of top management is necessary for the BSC project to receive sufficient resources. In the project, the vision and strategies are broken down into practical actions, so without the involvement of top management the project has no customer. Middle management and the personnel implement the strategies drawn up, so their contribution is very significant for the success of the project. The personnel must be involved in the scorecard work so that they commit to the targets set. If the personnel cannot be engaged, the scorecard easily remains a tool of top management alone, in which case implementing the strategy throughout the organization is very laborious, even impossible. The scorecard must be strategy-driven and must not be too complicated. The lower the level in the organization, the simpler the scorecard must be: at the top levels there can be eight to twelve measures, but at lower levels there must be somewhat fewer. Rapid completion of the project must not be an end in itself, but quickly obtained concrete results help the project secure resources and enable feedback and learning. The BSC cannot be made to work perfectly at once; it is a learning process that enables a deeper implementation of the strategy. The BSC project implementation model developed in this study is based on the experience and views of dozens of experts on carrying out BSC projects. The results obtained from the survey, the workshop and the Tulikivi case show that with the developed model companies have better chances than before of making their BSC project succeed. Thus the main objective of the study was achieved. The other objectives were achieved through the results of the survey.

Relevance:

20.00%

Abstract:

Membrane filtration has become increasingly attractive in the processing of both food and biotechnological products. However, the poor selectivity of the membranes and fouling are the critical factors limiting the development of UF systems for the specific fractionation of protein mixtures. This thesis gives an overview of the fractionation of proteins from model protein solutions and from biological solutions. An attempt was made to improve the selectivity of the available membranes by modifying the membranes and by exploiting the different electrostatic interactions between the proteins and the membrane pore surfaces. The fractionation and UF behavior of proteins in the model solutions and in the corresponding biological solutions were compared. Characterization of the membranes and protein adsorption to the membrane were investigated with combined flux and streaming potential studies. It has been shown that fouling of the membranes can be reduced using "self-rejecting" membranes at pH values where electrostatic repulsion is achieved between the membrane and the proteins in solution. This effect is best shown in UF of dilute single-protein solutions at low ionic strengths and low pressures. Fractionation of model proteins in single, binary and ternary solutions has been carried out, and the results have been compared to those obtained from the fractionation of biological solutions. It was generally observed that the fractionation of proteins from biological solutions is more difficult to carry out, owing to the presence of unstudied protein components with different properties. It can be generally concluded that it is easier to enrich the smaller protein in the permeate, but it is also possible to enrich the larger protein in the permeate at pH values close to the isoelectric point of that protein. It should be possible to find an optimal flux and modification to effectively improve the fractionation of proteins even with very similar molar masses.
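For reference, the selectivity discussed here is conventionally quantified through observed sieving coefficients (standard ultrafiltration definitions, not notation taken from the thesis):

```latex
% Observed sieving coefficient of protein i (C_p: permeate concentration,
% C_f: feed concentration) and selectivity between proteins 1 and 2.
\[
  S_{o,i} = \frac{C_{p,i}}{C_{f,i}},
  \qquad
  \psi_{12} = \frac{S_{o,1}}{S_{o,2}}
\]
```

A fractionation is effective when ψ₁₂ is far from unity, i.e. one protein passes the membrane freely while the other is retained; adjusting the pH toward one protein's isoelectric point is one way to push ψ₁₂ in the desired direction.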

Relevance:

20.00%

Abstract:

The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is sustained by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the SCR reaction (usually carried out in the range 280-350°C) is not enough to maintain the chemical reaction by itself, so a normal mode of operation usually requires a supply of supplementary heat, increasing the overall operating cost of the process. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the main disadvantage of the RFR is the 'wash-out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration, not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash-out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of the contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Attention to the above-mentioned aspects is important when higher activity, even at low feeding temperatures, and low emissions of unconverted reactants are the main operating concerns. Also, the prediction of the pseudo-steady or steady-state performance of the reactor (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a diminution of the computational effort; usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behavior in the case of low-exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the general problems related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach to modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects in a fast and easy way, a case-based reasoning (CBR) approach was used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving a sustained auto-thermal behavior in the case of the low-exothermic SCR reaction of NOx with ammonia and low-temperature gas feeding. Beside the influence of the thermal effect, the influence of the principal operating parameters - switching time, inlet flow rate and initial catalyst temperature - has been stressed. This analysis is important not only because it allows a comparison between the two devices and an optimization of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as a much more suitable device for the SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics, taking into account perspectives that have not yet been analyzed. The experimental investigation of the RN revealed a good agreement between the data obtained by model simulation and those obtained experimentally.
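The feed-position rotation that distinguishes the RN from the RFR can be illustrated with a minimal scheduling sketch (hypothetical code, not from the thesis; the actual work solves the coupled heat and mass balances, which this sketch deliberately omits):

```python
import itertools

def feed_schedule(n_reactors, switch_time, t_end):
    """Yield (time, feed_index, pass_order) for a reactor network in which
    the feed position advances by one reactor every switch_time while the
    flow direction never reverses, so there is no 'wash-out' emission of
    unconverted reactants at a flow reversal (unlike the RFR)."""
    t = 0.0
    for feed in itertools.cycle(range(n_reactors)):
        if t >= t_end:
            break
        # The gas passes through the reactors in a fixed closed sequence
        # starting at the current feed position (simulated moving bed).
        order = [(feed + k) % n_reactors for k in range(n_reactors)]
        yield t, feed, order
        t += switch_time

# Example: 3 reactors in a loop, feed position moved every 300 s, one hour.
for t, feed, order in feed_schedule(3, 300.0, 3600.0):
    print(f"t = {t:6.0f} s  feed at R{feed}  pass order {order}")
```

The switching time passed to such a scheduler is exactly the parameter the thesis identifies as critical: too long and the heat wave escapes the bed, too short and the trapped ammonia front is pushed out before reacting.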

Relevance:

20.00%

Abstract:

1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1 These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases,2 the EDD, into Finnish copyright legislation in 1998. Now, in the year 2005, after more than half a decade of domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular, and currently peculiarly European, new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD, and second, to realise the potential and risks inherent in the new legislation in its economic, cultural and societal dimensions.

2. Subject-matter of the study: basic issues The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches, either qualitatively or quantitatively, the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can give the consumer markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form, or even free where public lending libraries provide access to the information online. This also makes it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while production and distribution costs can be kept to a minimum thanks to the new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, thus weakening the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy.

3. Particular issues in the Digital Economy and Information Networks All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the Mobile Internet, peer-to-peer networks, and Local and Wide Area Networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables,3 previously protected partially by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, importantly, numerous databases are collected in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not immediately obvious example of this is a database consisting of the physical coordinates of a certain selected group of customers, collected for marketing purposes through cellular phones, laptops and several handheld or vehicle-based devices connected online. These practical needs call for answers to the plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are the materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill, or lead to interpretations that are at variance with the analogue domain as regards the lawful and unlawful uses of information. This may well interfere with, or rework, the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services.

4. International sphere After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, disputes arising on account of the implementation and interpretation of the Directive at the European level attract significance domestically.
Consequently, guidelines on the correct interpretation of the Directive, importing practical, business-oriented solutions, may well have application at the European level. This underlines the exigency of a thorough analysis of the meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, with a direct negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least yet, have a sui generis database regime or its kin, while both the political and academic discourse on the matter abounds.

5. The objectives of the study

The background outlined above, with its several open issues, calls for a detailed study of the following questions:
- What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation?
- How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context?
- What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications?
- The difficult question of the relation between database protection and the protection of factual information as such.

6. Disposition

The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part introduces the political and rational background and the subsequent legislative evolution of the European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and sui generis right facets in detail, together with the emergent application of the machinery in real-life societal and, particularly, commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright-protected databases, is provided. For purposes of further comparison, a chapter on the Nordic catalogue rule, the precursor of the sui generis database right, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinise its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue of the IPR protection of information per se, a new tenet in the domain of copyright and related rights.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Globalization and new information technologies mean that organizations have to face world-wide competition in rapidly transforming, unpredictable environments, and thus the ability to constantly generate novel and improved products, services and processes has become essential for organizational success. Performance in turbulent environments is, above all, influenced by the organization's capability for renewal. Renewal capability consists of the ability of the organization to replicate, adapt, develop and change its assets, capabilities and strategies. An organization with a high renewal capability can sustain its current success factors while at the same time building new strengths for the future. This capability does not only mean that the organization is able to respond to today's challenges and to keep up with the changes in its environment, but also that it can act as a forerunner by creating innovations at both the tactical and strategic levels of operation and thereby change the rules of the market. However, even though it is widely agreed that the dynamic capability for continuous learning, development and renewal is a major source of competitive advantage, there is no widely shared view on how organizational renewal capability should be defined, and the field is characterized by a plethora of concepts and definitions. Furthermore, there is a lack of methods for systematically assessing organizational renewal capability. The dissertation aims to bridge these gaps in the existing research by constructing an integrative theoretical framework for organizational renewal capability and by presenting a method for modeling and measuring this capability. The viability of the measurement tool is demonstrated in several contexts, and the framework is also applied to assess renewal in inter-organizational networks. In this dissertation, organizational renewal capability is examined by drawing on three complementary theoretical perspectives: knowledge management, strategic management and intellectual capital. The knowledge management perspective considers knowledge as inherently social and activity-based, and focuses on the organizational processes associated with its application and development. Within this framework, organizational renewal capability is understood as the capacity for flexible knowledge integration and creation. The strategic management perspective, on the other hand, approaches knowledge in organizations from the standpoint of its implications for the creation of competitive advantage. In this approach, organizational renewal is framed as a dynamic capability of firms. The intellectual capital perspective is focused on exploring how intangible assets can be measured, reported and communicated. From this vantage point, renewal capability is comprehended as the dynamic dimension of intellectual capital, which consists of the capability to maintain, modify and create knowledge assets. Each of the perspectives significantly contributes to the understanding of organizational renewal capability, and the integrative approach presented in this dissertation contributes to the individual perspectives as well as to the understanding of organizational renewal capability as a whole.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Software engineering is criticized as not being engineering, or a 'well-developed' science, at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to collect metrics only afterwards: the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis, only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly since the model was taken into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution within the estimation model. All currently used size metrics are static in nature, but the newly proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design work proceeds, and thus 'grows up' along with the software project. Developing the effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly; its purpose is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines; it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and on the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
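The abstract stops short of giving the model's actual formulas. As a rough illustration of the general pattern it describes, a size-driven estimate calibrated against history data from completed projects, the following Python sketch may help; the power-law regression form, the function names and the example figures are assumptions made here for demonstration, not the thesis's published model.

```python
# Minimal sketch of a size-based effort estimation model calibrated on
# history data. The power-law form effort = a * size^b, fitted in
# log-log space, is an illustrative assumption, not the thesis's model.
import math

def calibrate(history):
    """Fit effort = a * size^b by least squares in log-log space.

    history: list of (size, actual_effort) pairs from finished projects.
    Returns the coefficients (a, b).
    """
    xs = [math.log(size) for size, _ in history]
    ys = [math.log(effort) for _, effort in history]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - b * mean_x)
    return a, b

def estimate_effort(size, a, b):
    """Predict effort (person-hours) for a project of the given size."""
    return a * size ** b

# Hypothetical history: size in function-point-like units, effort in
# person-hours. Calibrate once, then estimate a new project.
history = [(120, 950), (200, 1700), (340, 3100)]
a, b = calibrate(history)
print(f"size 250 -> {estimate_effort(250, a, b):.0f} person-hours")
```

Re-running such a calibration as counting moves to a finer level of the size hierarchy would mirror the 'growing', dynamic size metric the abstract describes: the size input, and hence the estimate, is revised as specification and design work proceeds.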

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This study investigates the persistence of subjective point estimates of annual crop yields made by a large group of farmers. Persistence over time is a necessary condition for the coherence and reliability of subjective estimates of random variables. The interviewed subjects estimated point values of annual crop yields (mean, maximum, minimum and most frequent yields). Relatively minor differences were found for all variables except minimum yields, where dispersion was high. The results are relevant for assessing the suitability of subjective probability elicitation techniques for use in agricultural decision-support systems.
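The abstract does not state its statistical procedure in detail. As a minimal sketch of how the persistence of repeated point estimates might be checked, the following Python fragment computes per-farmer relative differences between two elicitation rounds; the two-round design, the statistic and the figures are assumptions for illustration, not the study's published method.

```python
# Minimal sketch of a persistence check for subjective point estimates.
# The relative-difference statistic and the example data are
# illustrative assumptions, not taken from the study.

def relative_differences(round1, round2):
    """Per-farmer relative difference between two elicitation rounds.

    round1, round2: yield estimates (e.g. t/ha) from the same farmers,
    in the same order. Assumes all first-round estimates are positive.
    """
    return [abs(a - b) / a for a, b in zip(round1, round2)]

# Example: 'minimum yield' estimates from five farmers, elicited twice.
first = [1.0, 0.8, 1.2, 0.5, 0.9]
second = [1.1, 0.5, 1.2, 0.9, 0.8]
diffs = relative_differences(first, second)
print(f"mean relative difference: {sum(diffs) / len(diffs):.2f}")
```

A small mean relative difference would indicate persistent, and hence potentially reliable, estimates; the high dispersion the study reports for minimum yields would show up here as large per-farmer differences.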

Relevância:

20.00% 20.00%

Publicador:

Resumo:

To date, published studies of alluvial bar architecture in large rivers have been restricted mostly to case studies of individual bars and single locations. Relatively little is known about how the depositional processes and sedimentary architecture of kilometre-scale bars vary within a multi-kilometre reach or over several hundreds of kilometres downstream. This study presents Ground Penetrating Radar and core data from 11 kilometre-scale bars in the Rio Parana, Argentina. The investigated bars are located between 30 km upstream and 540 km downstream of the Rio Parana - Rio Paraguay confluence, where a significant volume of fine-grained suspended sediment is introduced into the network. Bar-scale cross-stratified sets, with lengths and widths up to 600 m and thicknesses up to 12 m, enable large river deposits to be distinguished from stacked deposits of smaller rivers, but they are present in only half the surface area of the bars. Up to 90% of bar-scale sets are found on top of finer-grained, ripple-laminated bar-trough deposits. Bar-scale sets make up as much as 58% of the volume of the deposits in small, incipient mid-channel bars, but this proportion decreases significantly with increasing age and size of the bars. Contrary to what might be expected, a significant proportion of the sedimentary structures found in the Rio Parana are similar in scale to those found in much smaller rivers. In other words, large river deposits are not always characterized by big structures that allow a simple interpretation of river scale. However, the large scale of the depositional units in big rivers causes small-scale structures, such as ripple sets, to be grouped into thicker cosets, which indicate river scale even when no obvious large-scale sets are present. The results also show that the composition of bars differs between the studied reaches upstream and downstream of the confluence with the Rio Paraguay. Relative to other controls on downstream fining, the tributary input of fine-grained suspended material from the Rio Paraguay causes a marked change in the composition of the bar deposits. Compared to the upstream reaches, the sedimentary architecture of the top ca 5 m of mid-channel bars in the downstream reaches shows: (i) an increase in the abundance and thickness (up to metre-scale) of laterally extensive (hundreds of metres) fine-grained layers; (ii) an increase in the percentage of deposits comprising ripple sets (to >40% in the upper bar deposits); and (iii) an increase in bar-trough deposits and a corresponding decrease in bar-scale cross-strata (<10%). The thalweg deposits of the Rio Parana are composed of dune sets, even directly downstream of the Rio Paraguay, where the upper channel deposits are dominantly fine-grained. Thus, the change in sedimentary facies due to a tributary point source of fine-grained sediment is expressed primarily in the composition of the upper bar deposits.