910 results for dependency


Relevance:

10.00%

Publisher:

Abstract:

The Finnish Air Force command and control (C2) system is one part of the Air Force's overall system of systems, whose two other parts are the combat system and the support system. The Air Force's materiel capability is built on the basis of this systems thinking. In this study, the Air Force C2 system is examined from the perspective of three entities: the air surveillance system, the air picture compilation system and the fire control system. Owing to the breadth of the Air Force C2 system, the scope of the research has had to be limited. The research is based on the evolutionary paradigm, according to which everything that exists is evolutionary. No phenomenon of the present day is without a history; in addition to its present, every phenomenon has a past and a future. The evolutionary paradigm is used to broaden the understanding of the present state of the Air Force C2 system by describing and analysing its evolution. The research material is analysed using path dependency as an evolutionary model. This model has been used in new institutional and evolutionary economics and in economic history to study the persistence of companies, industries or products in the market, as well as the impact of various innovations on success in different market situations. The starting point of the research design is to describe the evolution of the Air Force C2 system in relation to an equilibrium of three factors: institutions, air warfare theory and the international development of air force C2 systems. The aim of the study is to find an institutional logic for the evolution of the Air Force C2 system, as well as a possible path dependency logic associated with its various development processes. The institutions examined are national political decision-making, which manifests itself in various committee reports, reports and government statements, and the military institution, represented by the operational instructions, field manuals and doctrines that have guided C2 system development. For the analysis of the influence of air warfare theory, seven significant air warfare theorists have been selected as objects of study. Major General Giulio Douhet, Air Marshal Hugh Trenchard and Major General William Mitchell represent the early period of air warfare theory. The national development of air warfare theory is represented by Colonel Richard Lorentz and Major General Gustaf Erik Magnusson. The American colonels John Boyd and John Warden III are air warfare theorists of the modern era. The theories produced by these individuals make it possible to draw a picture of the change that has taken place in theoretical thinking on air warfare. The evolution of the Air Force C2 system is compared with the developments that have taken place in the United States, Great Britain and Germany. The Air Force has received influences from other countries as well, but the developments in these three countries can be used to explain the development that took place in Finland. The study shows that international C2 system evolution has had a significant impact on the Finnish development. This study extends processual research theory and the use of the path dependency model to the field of military science. The study links together, in an entirely new way, the institutional factors of a military organization along a long evolutionary chain. It lays a foundation for processual, observation-based evolutionary thinking in which the explanatory models and causality of different factors during different periods can be described. As a result of the study, significant features were revealed in the evolution of the Air Force C2 system.
Technology has been a powerful catalyst in the evolution of air defence. The appearance of new technological innovations on the battlefield has substantially changed the picture of battle. Despite the revolution in warfare, no fundamental change has taken place in the basic principles of warfare or of operational art and tactics. The development of the Air Force C2 system has been strongly linked to foreign C2 system development, in which technology implementations are based on the discovery and exploitation of several different phenomena. Military and civilian institutions have significantly influenced the national development of the Air Force C2 system: through political guidance, financial resources and strategic-operational orders and plans, they have provided the foundations on which the C2 system has been developed. The study shows that the limited nature of Finland's financial resources has been the most significant institutional constraint in the development of the Air Force C2 system. Several political guidance documents have emphasised that Finland, as a small nation, does not have the financial resources to keep pace with international military technology development. In addition, the freedom of manoeuvre in foreign and security policy has affected the development possibilities. The evolution of air warfare theory has created the conceptual framework necessary for C2 system development, so that air warfare could be taken to the practical level. Theory, doctrine and institutions operate in an interplay in which they interactively influence one another. The study revealed six significant shocks that brought about radical changes on the evolutionary path of the C2 system. Based on the study, the most influential shocks were radical changes in security policy, such as war, and strong changes in the national economy, such as depression. The six points in time that caused the shocks were: (1) the beginning of the building of the Defence Forces after the War of Liberation in 1918; (2) the global depression of 1929–1933 and the European peace movement of 1928–1933; (3) the Winter War and the Continuation War of 1939–1944; (4) a new beginning in the shadow of the Paris Peace Treaty of 1947 and the YYA Treaty of 1948; (5) the end of the Cold War and the Finnish recession of 1990–1993; and (6) the global recession beginning in 2008. Based on the study, it can be stated that the development of the Finnish Air Force C2 system has been based on rational decisions that have been influenced by foreign developments in air warfare theory and doctrine as well as by international C2 system development. The evolution of the C2 system has been shaped by a global convergence, within which national-level solutions have been made in connection with the adaptation and implementation of systems.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents studies on the environments of active galaxies. Paper I is a case study of a cluster of galaxies containing the BL Lac object RGB 1745+398. We measured the velocity dispersion, mass, and richness of the cluster. This was one of the most thorough studies of the environments of a BL Lac object. The methods used in the paper could be used in the future for studying other clusters as well. In Paper II we studied the environments of nearby quasars in the Sloan Digital Sky Survey (SDSS). We found that quasars have fewer neighboring galaxies than luminous inactive galaxies. In the large-scale structure, quasars are usually located at the edges of superclusters or even in void regions. We concluded that these low-redshift quasars may have become active only recently, because galaxies in low-density environments evolve later to the phase where quasar activity can be triggered. In Paper III we extended the analysis of Paper II to other types of AGN besides quasars. We found that different types of AGN have different large-scale environments. Radio galaxies are more concentrated in superclusters, while quasars and Seyfert galaxies prefer low-density environments. The different environments indicate that AGN have different roles in galaxy evolution. Our results suggest that the activity of galaxies may depend on their environment on the large scale. Our results in Paper III raised questions about the cause of the environment dependency in the evolution of galaxies. Because high-density large-scale environments contain richer groups and clusters than the underdense environments, our results could reflect smaller-scale effects. In Paper IV we addressed this problem by studying the group-scale and supercluster-scale environments of galaxies together. We compared the galaxy populations in groups of different richnesses in different large-scale environments. We found that the large-scale environment affects the galaxies independently of the group richness. Galaxies in low-density environments on the large scale are more likely to be star-forming than those in superclusters, even if they are in groups with the same richness. Based on these studies, the conclusion of this dissertation is that the large-scale environment affects the evolution of galaxies. This may be caused by a different “speed” of galaxy evolution in low- and high-density environments: galaxies in dense environments reach certain phases of evolution earlier than galaxies in underdense environments. As a result, the low-density regions at low redshifts are populated by galaxies in earlier phases of evolution than galaxies in high-density regions.
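The abstract does not state which estimators were used for the cluster's velocity dispersion and mass; as a hedged illustration only, the standard line-of-sight dispersion and a virial-type mass estimate take the following form (the constant α and the characteristic radius R_c are assumptions, not values from Paper I):

```latex
% Hedged sketch: generic estimators, not necessarily those used in Paper I.
\sigma_{\mathrm{los}} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(v_i-\bar{v}\right)^{2}},
\qquad
M_{\mathrm{vir}} \approx \alpha\,\frac{\sigma_{\mathrm{los}}^{2}\,R_{c}}{G},
```

where the v_i are the line-of-sight velocities of the N member galaxies, R_c is a characteristic cluster radius, and α is a geometry-dependent constant of order a few.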

Relevance:

10.00%

Publisher:

Abstract:

Unsuccessful mergers are unfortunately the rule rather than the exception. Therefore it is necessary to gain an enhanced understanding of mergers and post-merger integrations (PMI), as well as to learn more about how mergers and PMIs of information systems (IS) and people can be facilitated. Studies on PMI of IS are scarce, and public sector mergers are even less studied. There is nothing, however, to indicate that public sector mergers are any more successful than those in the private sector. This thesis covers five studies carried out between 2008 and 2011 in two organizations in higher education that merged in January 2010. The most recent study was carried out two years after the new university was established. The longitudinal case study focused on the administrators and their opinions of the IS, the work situation and the merger in general. These issues were investigated before, during and after the merger. Both surveys and interviews were used to collect data, to which were added documents that both describe and guide the merger process; in this way we aimed at a triangulation of findings. Administrators were chosen as the focus of the study since public organizations are highly dependent on this staff category, which forms the backbone of the organization and whose performance is a key success factor for the organization. Reliable and effective IS are also critical for maintaining a functional and effective organization, and this makes administrators highly dependent on their organizations’ IS for the ability to carry out their duties as intended. The case study has confirmed the administrators’ dependency on IS that work well. A merger is likely to lead to changes in the IS and in the routines associated with the administrators’ work. Hence it was especially interesting to study how the administrators viewed the merger and its consequences for IS and the work situation. The overall research objective is to find key issues for successful mergers and PMIs. The first explorative study in 2008 showed that the administrators were confident of their skills and knowledge of IS and had no fear of having to learn new IS due to the merger. Most administrators had an academic background and were not anxious about whether IS training would be given or not. Before the merger the administrators were positive and enthusiastic towards the merger and also towards the changes that they expected. The studies carried out before the merger showed that these administrators were very satisfied with the information provided about the merger. This information was disseminated through various channels, and even negative information and postponed decisions were quickly distributed. The study conflicts with the theories that have found that resistance to change is inevitable in a merger. Shortly after the merger, the (third) study showed disappointment with the fact that fewer changes than expected had been implemented, even if the changes that actually were carried out sometimes led to a more problematic work situation. This was more prominent for routine changes than for IS changes. Still, the administrators showed a clear willingness to change and to share their knowledge with new colleagues. This knowledge sharing (also tacit) worked well in the merger and the PMI. The majority reported that the most common way to learn to use new IS and to apply new routines was by asking colleagues for help. They also needed to take responsibility for their own training and development.
Five months after the merger (the fourth study), the administrators had become worried about the changes in communication strategy that had been implemented in the new university. Communication was perceived as being more anonymous, and it was harder to find out what was happening and to contact the new decision makers. The administrators found that decisions, and the authority to make decisions, had been moved to a higher administrative level than they were accustomed to. A directive management style is recommended in mergers in order to achieve a quick transition without distracting from the core business. A merger process may be tiresome and require considerable effort from the participants. In addition, not everyone can make their voice heard during a merger, and consensus is not possible in every question. It is important to find out what is best for the new organization instead of simply claiming that the tried and tested ways of doing things should be implemented. A major problem turned out to be the lack of management continuity during the merger process. The situation was especially problematic in the IS department, which had many substitute managers throughout the merger process (even after the merger was carried out). This meant that no one was in charge of IS issues and the PMI of IS. Moreover, the top managers were appointed very late in the process, in some cases after the merger was carried out. This led to missed opportunities for building trust, and management credibility was heavily affected. The administrators felt neglected and felt that their competences and knowledge no longer counted. This, together with a reduced and altered information flow, led to rumours and distrust. Before the merger the administrators were convinced that their achievements contributed value to their organizations and that they worked effectively. After the merger they were less sure of their value contribution and effectiveness, even if these factors were not totally discounted. The fifth study, in November 2011, found that the administrators were still satisfied with their IS, as they had been throughout the whole study. Furthermore, they believed that the IS department had done a good job despite challenging circumstances. Both of the former organizations lacked IS strategies, which badly affected the IS strategizing during the merger and the PMI. IS strategies deal with issues like system ownership: who should pay and who is responsible for maintenance and system development, for organizing training for new IS, and for running IS effectively even under changing circumstances (e.g. more users). A proactive approach is recommended for IS strategizing to work. This is particularly true during a merger and PMI for handling issues about which IS should be adopted and implemented in the new organization, as well as issues of integration and the reengineering of IS-related processes. In the new university, an IT strategy had still not been decided on 26 months after the new university was established. The study shows the importance of decisive management of IS in a merger, requiring that IS issues be addressed in the merger process and that IS decisions be made early. Moreover, the new management needs to be appointed early in order to work actively with the IS strategizing. It is also necessary to build trust and to plan and make decisions about the integration of IS and people.

Relevance:

10.00%

Publisher:

Abstract:

The dewatering of iron ore concentrates requires large capacity in addition to producing a cake with low moisture content. Such large processes are commonly energy intensive, and means to lower the specific energy consumption are needed. Ceramic capillary action disc filters incorporate a novel filter medium enabling the harnessing of capillary action, which results in decreased energy consumption in comparison to traditional filtration technologies. As another benefit, the filter medium is mechanically and chemically more durable than, for example, filter cloths and can thus withstand harsh operating conditions and possible regeneration better than other types of filter media. In iron ore dewatering, the regeneration of the filter medium is done through a combination of several techniques: (1) backwashing, (2) ultrasonic cleaning, and (3) acid regeneration. Although it is commonly acknowledged that the filter medium is affected by slurry particles and extraneous compounds, published research, especially in the field of dewatering of mineral concentrates, is scarce. Whereas the regenerative effects of backwashing and ultrasound are more or less mechanical, regeneration with acids is based on chemistry; the chemistry behind acid regeneration is, naturally, dissolution. The dissolution of iron oxide particles has been extensively studied over several decades, but those studies may not necessarily be directly applicable to the regeneration of a filter medium which has undergone interactions with the slurry components. The aim of this thesis was to investigate whether free particle dissolution indeed correlates with the regeneration of the filter medium. For this purpose, both free particle dissolution and the dissolution of surface-adhered particles were studied. The focus was on the acidic dissolution of iron oxide particles and on the study of the ceramic filter medium used in the dewatering of iron ore concentrates. The free particle dissolution experiments show that the solubility of synthetic fine-grained iron oxide particles in oxalic acid could be explained through linear models accounting for the effects of temperature and acid concentration, whereas the dissolution of a natural magnetite is not so easily explained by such models. In addition, the kinetic experiments performed both support and contradict the work of previous authors: the kinetic model found suitable here supports previous research suggesting solid state reduction as the reaction mechanism of hematite dissolution, but the formation of a stable iron oxalate is not supported by the results of this research. Several other mechanisms have also been suggested for iron oxide dissolution in oxalic acid, indicating that the details of oxalate-promoted reductive dissolution are not yet agreed upon, and in this respect this research offers added value to the community. The results of the regeneration experiments with the ceramic filter media show that oxalic acid is highly effective in removing iron oxide particles from the surface of the filter medium. The dissolution of those particles did not, however, exhibit the expected behaviour, i.e. complete dissolution. The results of this thesis show that although the regeneration of the ceramic filter medium with acids incorporates the dissolution of slurry particles from the surface of the filter medium, the regeneration cannot be assessed purely on the basis of free particle dissolution.
A steady state, dependent on the temperature and on the acid concentration, was observed in the dissolution of particles from the surface even though the solubility limit of free iron oxide particles had not been reached. Both the regeneration capacity and the regeneration efficiency, with regard to the removal of iron oxide particles, were found to be temperature dependent but were not affected by the acid concentration. This observation further suggests that the removal of surface-adhered particles does not follow the dissolution of free particles, which does exhibit a dependency on the acid concentration. In addition, changes in the permeability and in the pore structure of the filter medium were still observed after the bulk concentration of dissolved iron had reached a steady state. Consequently, the regeneration of the filter medium continued after the dissolution of particles from the surface had ceased, which suggests that internal changes take place at the final stages of regeneration. The regeneration process could, in theory, be divided into two possibly overlapping stages: (1) dissolution of surface-adhered particles, and (2) dissolution of extraneous compounds from within the pore structure. In addition to the fundamental knowledge generated during this thesis, tools to assess the effects of parameters on the regeneration of the ceramic filter medium are needed. It has become clear that the same tools used to estimate the dissolution of free particles cannot be used to estimate the regeneration of a filter medium, unless only a robust characterisation of the order of regeneration efficiency is needed.
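As a rough illustration of the "linear models accounting for the effects of temperature and acid concentration" mentioned above, a minimal sketch is given below; the functional form and symbols are assumptions made for illustration, not the fitted models reported in the thesis:

```latex
% Hedged sketch of a linear solubility model; the coefficients are not given in the abstract.
S(T, c_{\mathrm{ox}}) \approx \beta_{0} + \beta_{1}\,T + \beta_{2}\,c_{\mathrm{ox}},
```

where S is the dissolved amount of iron oxide, T the temperature, and c_ox the oxalic acid concentration. The contrast noted above is that regeneration of the filter medium depended on T but not on c_ox, unlike free particle dissolution.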

Relevance:

10.00%

Publisher:

Abstract:

Recurrent castration-resistant prostate cancer remains a challenge for cancer therapies, and novel treatment options in addition to the current anti-androgens and mitosis inhibitors are needed. Aberrations in epigenetic enzymes and chromatin-binding proteins have been linked to prostate cancer, and they may form a novel class of drug targets in the future. In this thesis we systematically evaluated the epigenome as a prostate cancer drug target. We functionally silenced 615 known and putative epigenetically active protein-coding genes in prostate cancer cell lines using high-throughput RNAi screening and evaluated the effects on cell proliferation, androgen receptor (AR) expression and histone patterns. Histone deacetylases (HDACs) were found to regulate AR expression. Furthermore, HDAC inhibitors reduced AR signaling and, in synergy with androgen deprivation, inhibited prostate cancer cell proliferation. In particular, TMPRSS2-ERG fusion gene positive prostate cancer cell lines were sensitive to combined HDAC and AR inhibition, which may partly be related to dependency on a fusion gene-induced epigenetic pathway. Histone demethylases (HDMs) were identified as regulators of prostate cancer cell line proliferation. We discovered a novel JmjC-domain histone demethylase, PHF8, to be highly expressed in high-grade prostate cancers and to mediate cell proliferation, migration and invasion in in vitro models. Additionally, we explored novel HDM inhibitor chemical structures using virtual screening methods. The structures best fitting the active pocket of KDM4A were tested for enzyme inhibition and prostate cancer cell proliferation activity in vitro. In conclusion, our results show that prostate cancer may efficiently be targeted with combined AR and HDAC inhibition, which is also currently being tested in clinical trials. HDMs were identified as another feasible novel drug target class. Future studies in representative animal models and the development of specific inhibitors may reveal the full potential of HDMs in prostate cancer therapy.

Relevance:

10.00%

Publisher:

Abstract:

The corpus luteum is a temporary endocrine gland that regulates both the estrous cycle and pregnancy, and it is extremely dependent on an adequate blood supply. This work aims to evaluate goat corpus luteum (CL) vascular density (VD) over the estrous cycle. For that purpose, 20 females were submitted to an estrus synchronization/ovulation treatment using a medroxyprogesterone intravaginal sponge as well as intramuscular (IM) application of cloprostenol and equine chorionic gonadotrophin (eCG). After sponge removal, estrus was identified at about 72 h. Once treatment was over, the goats were subdivided into 4 groups (n=5 each) and slaughtered on days 2, 12, 16 and 22 after ovulation (p.o.). Ovaries were collected and weighed, and the size and area of the CL and ovaries were recorded. Blood samples were collected and plasma progesterone (P4) was measured with commercial RIA kits. The VD was 24.42±6.66, 36.26±5.61, 8.59±2.2 and 3.97±1.12 vessels/mm² on days 2, 12, 16 and 22 p.o., respectively. Plasma progesterone concentrations were 0.49±0.08, 2.63±0.66, 0.61±0.14 and 0.22±0.04 ng/ml on days 2, 12, 16 and 22 p.o., respectively. The studied parameters were affected by the phase of the estrous cycle, with the highest values observed on day 12 p.o. In the present work we observed that ovulation occurred predominantly in the right ovary (70% of the animals), which in turn was larger than the contralateral one. There is a meaningful relationship between the weight and size of the ovary and those of the CL (r=0.87 and r=0.70, respectively, p<0.05). It is possible to conclude that the morphology of goat ovaries and the plasma progesterone concentration change according to the stage of the estrous cycle. We propose that these parameters can be used as indicators of CL functional activity.

Relevance:

10.00%

Publisher:

Abstract:

When modeling machines in their natural working environment, collisions become a very important feature in terms of simulation accuracy. By expanding the simulation to include the operating environment, the need for a general collision model that is able to handle a wide variety of cases has become central in the development of simulation environments. With the addition of the operating environment, the challenges for the collision modeling method also change. More simultaneous contacts with more objects occur in more complicated situations. This means that the real-time requirement becomes more difficult to meet. Common problems in current collision modeling methods include, for example, dependency on the geometry shape or mesh density, computational cost that increases exponentially with the number of contacts, the lack of a proper friction model, and failures due to certain configurations such as closed kinematic loops. All these problems mean that the current modeling methods will fail in certain situations. A method that would not fail in any situation is not very realistic, but improvements can be made over the current methods.

Relevance:

10.00%

Publisher:

Abstract:

The European transport market has confronted several changes during the last decade. Due to European Union legislative mandates, the railway freight market was deregulated in 2007. The market followed the trend started by other transport modes as well as other previously regulated industries such as banking, telecommunications and energy. Globally, the first country to deregulate the railway freight market was the United States, with the introduction of the Staggers Rail Act in 1980. Some European countries decided to follow suit even before deregulation was mandated; among the forerunners were the United Kingdom, Sweden and Germany. Previous research has concentrated only on these countries, which has provided an interesting research gap for this thesis. The Baltic Sea Region consists of countries with different kinds of liberalization paths, including Sweden and Germany, which have been on the front line, whereas Lithuania and Finland have only one active railway undertaking, the incumbent. The transport market of the European Union is facing further challenges in the near future due to the Sulphur Directive, oil dependency and the changing structure of European rail networks. In order to improve the accessibility of this peripheral area, further action is required. This research focuses on topics such as the progression of deregulation, barriers to entry, country-specific features, cooperation and internationalization. Based on the research results, it can be stated that the Baltic Sea Region’s railway freight market is expected to change in the future. Further private railway undertakings are anticipated, and these would change the market structure. The realization of the European Union’s plans to extend the improved rail network to cover the Baltic States is strongly hoped for, and railway freight market counterparts within and among countries are starting to enhance their level of cooperation. The Baltic Sea Region countries have several special national characteristics which influence the market and should be taken into account when companies evaluate possible market entry actions. According to the thesis interviews, the Swedish market has a strong level of cooperation in the form of an old-boy network, supported by a positive attitude of the incumbent towards the private railway undertakings. This has facilitated the entry process of newcomers, and currently the market has numerous operating railway undertakings. A contrary example was found in Poland, where the incumbent sent old rolling stock to the scrap yard rather than sell it to private railway undertakings. The importance of personal relations is highlighted in Russia, where the railway market also has a strong bond with politics. Nonetheless, some barriers to entry are shared across the Baltic Sea Region, the main ones being the acquisition of rolling stock, bureaucracy and the required investments. The railway freight market is internationalizing, which can be perceived via several alliances as well as the increased number of mergers and acquisitions. After deregulation, markets seem to increase the number of railway undertakings at a rather fast pace, but with the passage of time, the larger operators tend to acquire smaller ones. Therefore, it is expected that in a decade’s time, the number of railway undertakings will start to decrease in the deregulation pioneer countries, while the ones coming from behind might still experience an increase.
The Russian market is expected to be fully liberalized, and further alliances between Russian Railways and European railway undertakings are expected to occur. The Baltic Sea Region’s railway freight market is anticipated to improve and, based on the interviewees’ comments, to attract more cargo from road to rail.

Relevance:

10.00%

Publisher:

Abstract:

Traditionally biologists have often considered individual differences in behaviour or physiology as a nuisance when investigating a population of individuals. These differences have mostly been dismissed as measurement errors or as non-adaptive variation around an adaptive population mean. Recent research, however, challenges this view. While long acknowledged in human personality studies, the importance of individual variation has recently entered into ecological and evolutionary studies in the form of animal personality. The concept of animal personality focuses on consistent differences within and between individuals in behavioural and physiological traits across time and contexts and its ecological and evolutionary consequences. Nevertheless, a satisfactory explanation for the existence of personality is still lacking. Although there is a growing number of explanatory theoretical models, there is still a lack of empirical studies on wild populations showing how traditional life-history tradeoffs can explain the maintenance of variation in personality traits. In this thesis, I first investigate the validity of variation in allostatic load or baseline corticosterone (CORT) concentrations as a measure for differences in individual quality. The association between CORT and quality has recently been summarised under the “CORT-fitness hypothesis”, which states that a general negative relationship between baseline CORT and fitness exists. I then continue to apply the concept of animal personality to depict how the life-history trade-off between survival and fecundity is mediated in incubating female eiders (Somateria mollissima), thereby maintaining variation in behaviour and physiology. To this end, I investigated breeding female eiders from a wild population that breeds in the archipelago around Tvärminne Zoological Station, SW Finland. The field data used was collected from 2008 to 2012. The overall aim of the thesis was to show how differences in personality and stress responsiveness are linked to a life-history context. In the four chapters I examine how the life-history trade-off between survival and fecundity could be resolved depending on consistent individual differences in escape behaviour, stress physiology, individual quality and nest-site selection. First, I corroborated the validity of the “CORT-fitness hypothesis”, by showing that reproductive success is generally negatively correlated with serum and faecal baseline CORT levels. The association between individual quality and baseline CORT is, however, context dependent. Poor body condition was associated with elevated serum baseline CORT only in older breeders, while a larger reproductive investment (clutch mass) was associated with elevated serum baseline CORT among females breeding late in the season. Interestingly, good body condition was associated with elevated faecal baseline CORT levels in late breeders. High faecal baseline CORT levels were positively related to high baseline body temperature, and breeders in poor condition showed an elevated baseline body temperature, but only on open islands. The relationship between stress physiology and individual quality is modulated by breeding experience and breeding phenology. Consequently, the context dependency highlights that this relationship has to be interpreted cautiously. Additionally, I verified if stress responsiveness is related to risk-taking behaviour. 
Females who took fewer risks (longer flight initiation distance) showed a stronger stress response (measured as an increase in CORT concentration after capture and handling of the bird). However, this association was modulated by breeding experience and body condition, with young breeders and those in poor body condition showing the strongest relationship between risk-taking and stress responsiveness. Shy females (longer flight initiation distance) also incubated their clutch for a shorter time. Additionally, I demonstrated that stress responsiveness and predation risk interact with maternal investment and reproductive success. Under high risk of predation, females that incubated a larger clutch showed a stronger stress response. Surprisingly, these females also exhibited higher reproductive success than females with a weaker stress response. Again, these context-dependent results suggest that the relationship between stress responsiveness and risk-taking behaviour should not be studied in isolation from individual quality, and that stress responsiveness may show adaptive plasticity when individuals are exposed to different predation regimes. Finally, female risk-taking behaviour and stress coping styles were also related to nest-site choice. Less stress-responsive females more frequently occupied nests with greater coverage that were farther away from the shoreline. Females nesting in nests with medium cover and farther from the shoreline had higher reproductive success. These results suggest that different personality types are distributed non-randomly in space. In this thesis I was able to demonstrate that personalities and stress coping strategies are persistent individual characteristics, which have measurable effects on fitness. This suggests that those traits are exposed to natural selection and can thereby evolve. Furthermore, individual variation in personality and stress coping strategy is linked to the alternative ways in which animals resolve essential life-history trade-offs.

Relevance:

10.00%

Publisher:

Abstract:

Maritime safety is an issue that has gained a lot of attention in the Baltic Sea area due to the dense maritime traffic and the transportation of oil in the area. A great deal of effort has been devoted to enhancing maritime safety in the area. The risk exists that excessive legislation and other requirements mean more costs for limited benefit. In order to utilize both public and private resources efficiently, awareness is required of what kind of costs maritime safety policy instruments cause and whether the costs are in relation to the benefits. The aim of this report is to present an overview of the cost-effectiveness of maritime safety policy instruments, focusing on the cost aspect: what kind of costs maritime safety policy causes, to whom, what affects the cost-effectiveness and how cost-effectiveness is studied. The study is based on a literature review and on interviews with Finnish maritime experts. The results of this study imply that cost-effectiveness is a complicated issue to evaluate. There are no uniform practices for which costs and benefits should be included in the evaluation and how they should be valued. One of the challenges is how to measure costs and benefits over the course of a longer time period. Often a lack of data erodes the reliability of the evaluation. In the prevention of maritime accidents, costs typically include investments in ship structures or equipment, as well as maintenance and labor costs. Even large investments may be justifiable if they provide correspondingly significant improvements to maritime safety. Measures are cost-effective only if they are implemented properly. Cost-effectiveness is decreased if a measure causes overlapping or repetitious work. Cost-effectiveness is also decreased if the technology isn’t user-friendly or if it is soon replaced with a new technology or another new appliance. In future studies on the cost-effectiveness of maritime safety policy, it is important to acknowledge the dependency between different policy instruments and the uncertainty of the factors affecting cost-effectiveness. The costs of a single measure are rarely significant in relative terms, and the effect of each measure on safety tends to be positive. The challenge is to rank the measures and to find the most effective combination of different policy instruments. The greatest potential for the analysis of the cost-effectiveness of individual measures lies in their implementation in clearly defined risk situations, in which different measures are truly alternatives to each other. Overall, maritime safety measures do not seem to be considered burdensome for the shipping industry in Finland at the moment. Generally, actors in the Finnish shipping industry seem to find maintaining a high safety level important and act accordingly.

Relevance:

10.00%

Publisher:

Abstract:

Uncertainties in demand and supply are nowadays commonplace in many industries. With respect to uncertainty we are living in unprecedented times, and this situation is expected to continue in the future. Companies' order books are short, and orders are delayed or cancelled altogether. Supply-side uncertainties, in turn, cause challenges for customer companies, for example in the form of delayed deliveries. With production dispersed across networks, the actions and decisions of individual companies affect the operations of the other companies in the network. For this reason, partner companies should be informed of the changes and deviations caused by uncertainty so that everyone stays in step. Sharing operational and tactical information is already commonplace in today's supply chains, but the interfaces between companies still leave room for improvement. Too little attention is paid to the recipient's ability to utilize the information and the way it is used, especially in situations of change. With time and speed becoming ever more important competitive factors, the timing of information is of critical importance to the overall performance of demand-supply chains: at what moment should information be shared so that the partner can make the best possible use of it? In this doctoral study, demand-supply chain synchronization refers specifically to focusing on the time factor in inter-company decision-making and information sharing in order to improve the overall performance of the supply chain. The research is connected to the scientific discussion on supply chain coordination. A central part of coordination theory is dependencies, which are managed by means of coordination mechanisms. Demand-supply chain synchronization has previously been modelled with the VOP-OPP model (Value Offering Point – Order Penetration Point) and its derivatives. In these models the customer company's demand chain and the supplier company's supply chain are mutually dependent, and this dependency is managed with the coordination mechanisms of decision-making synchronization and information sharing. However, the VOP-OPP model and its derivatives do not take into account the effects of an uncertain operating environment on synchronization. In these models the time factor, treated as the only quality dimension of information, is too narrow a perspective on synchronization in an uncertain environment. In addition, these models focus only on one-way, demand-driven synchronization and ignore supply-driven synchronization. Nevertheless, because of its emphasis on the time factor and on overall performance, the VOP-OPP model offered a good starting philosophy for developing new synchronization models. The doctoral study was carried out as a hypothetico-deductive case study in which new theoretical synchronization model proposals were first created on the basis of the literature, after which the viability of the proposals was evaluated in real-world demand-supply chains. The novelty of the research lies in the systemic modelling of the key features of demand-supply chain synchronization in an uncertain operating environment. As its contribution, the study presents a multidimensional overall model of demand-supply chain synchronization, which includes as coordination mechanisms the synchronization of decision-making, information transparency, and flexibility at both the customer and the supplier end. Information exchange in the model is examined in two directions, demand-driven and supply-driven, and the quality dimensions of information in the model are timing, reliability and accuracy.
The overall model contains three sub-models: the Demand Visibility Point – Demand Penetration Point (DVP-DPP) model for demand-driven synchronization, the Supply Visibility Point – Supply Information Penetration Point (SVP-SIPP) model for supply-driven synchronization, and the integrated DVP-DPP - SVP-SIPP model, which links the two together. In these sub-models the classes of information are pre-order, order-related, post-order, and post-agreed-delivery-date demand and supply information. From the viewpoint of practical application, the models act as coordination mechanisms at the mental level, intended to prompt supply chain partners to pursue improvements in overall performance instead of pursuing their own interests. The main limitation of the study is its focus solely on bilateral collaboration relationships, which in today's networked operating environment offers a rather narrow view of practical synchronization challenges.

Relevance:

10.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

10.00%

Publisher:

Abstract:

Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative for simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system has good performance, reaching the first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as shown competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing how the developed approach not only shows good performance, but is generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task are covered in four publications, and the sixth one demonstrates the application of the system to PubMed-scale text mining.
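To make the nested event representation concrete, the sketch below shows one hypothetical way to hold an annotation such as CAUSE(A, BIND(B, C)) as a typed, trigger-anchored structure; the class and field names are illustrative assumptions and are not the actual TEES data model or its unified graph format:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Entity:
    """A named entity mention, e.g. a protein, anchored to text offsets."""
    name: str
    start: int
    end: int

@dataclass
class Event:
    """An event with a type, a trigger word, and typed arguments.

    Arguments may be entities or other (nested) events, which is what
    allows structures like CAUSE(A, BIND(B, C)).
    """
    event_type: str
    trigger: str
    args: List[Union["Entity", "Event"]] = field(default_factory=list)

    def __str__(self) -> str:
        return f"{self.event_type}(" + ", ".join(
            a.name if isinstance(a, Entity) else str(a) for a in self.args
        ) + ")"

# "Protein A causes protein B to bind protein C"  (offsets are illustrative)
a = Entity("A", 8, 9)
b = Entity("B", 25, 26)
c = Entity("C", 43, 44)
bind = Event("BIND", trigger="bind", args=[b, c])
cause = Event("CAUSE", trigger="causes", args=[a, bind])
print(cause)  # CAUSE(A, BIND(B, C))
```

Because an argument may itself be an event, arbitrarily deep nesting, and hence the graph representation described above, falls out naturally.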

Relevance:

10.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes, and thereby also the parallelism, explicit. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which are able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
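As a toy illustration of the dataflow model described above (nodes connected only by FIFO queues, each node firing when its firing rule is satisfied), the sketch below uses assumed names and a naive fully dynamic scheduler; it is not RVC-CAL and not the quasi-static, model-checking-based scheduler of the thesis:

```python
from collections import deque

class Actor:
    """A dataflow node: fires only when every input queue holds enough tokens."""
    def __init__(self, name, inputs, outputs, consumption, production, kernel):
        self.name = name
        self.inputs = inputs            # list of input deques (FIFO edges)
        self.outputs = outputs          # list of output deques
        self.consumption = consumption  # tokens consumed per input per firing
        self.production = production    # tokens produced per output per firing
        self.kernel = kernel            # pure function: consumed tokens -> produced tokens

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.consumption))

    def fire(self):
        consumed = [[q.popleft() for _ in range(n)]
                    for q, n in zip(self.inputs, self.consumption)]
        produced = self.kernel(consumed)
        for q, tokens in zip(self.outputs, produced):
            q.extend(tokens)

# A two-node pipeline: double each sample, then sum pairs of samples.
src_to_dbl = deque([1, 2, 3, 4]); dbl_to_sum = deque(); sink = deque()
double = Actor("double", [src_to_dbl], [dbl_to_sum], [1], [1],
               lambda ins: [[2 * ins[0][0]]])
pair_sum = Actor("pair_sum", [dbl_to_sum], [sink], [2], [1],
                 lambda ins: [[sum(ins[0])]])

# A naive dynamic scheduler: keep firing any actor whose firing rule is satisfied.
actors = [double, pair_sum]
while any(a.can_fire() for a in actors):
    for a in actors:
        while a.can_fire():
            a.fire()
print(list(sink))  # [6, 14]  ->  (2*1 + 2*2), (2*3 + 2*4)
```

The scheduler here evaluates every firing rule at run-time; quasi-static scheduling, as described above, would replace most of these checks with pre-calculated static schedules.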

Relevance:

10.00%

Publisher:

Abstract:

Systemic innovation has emerged as an important topic due to the interconnected technological and sociotechnical change of our current complex world. This study approaches the phenomenon from an organizing perspective, by analyzing the various actors, collaborative activities and resources available in innovation systems. It presents knowledge production for innovation and discusses the organizational challenges of shared innovation activities from a dynamic perspective. Knowledge, interaction, and organizational interdependencies are seen as the core elements of organizing for systemic innovations. This dissertation is divided into two parts. The first part introduces the focus of the study and the relevant literature and summarizes conclusions. The second part includes seven publications, each reporting on an important aspect of the phenomenon studied. Each of the in-depth single-case studies takes a distinct and complementary systems approach to innovation activities – linking the refining of knowledge to the enabling of organizations to participate in shared innovation processes. These aspects are summarized as theoretical and practical implications for recognizing innovation opportunities and turning ideas into innovations by means of using information and organizing activities in an efficient manner. Through its investigation of the existing literature and empirical case studies, this study makes three main contributions. First, it describes the challenges inherent in utilizing information and transforming it into innovation knowledge. Secondly, it presents the role of interaction and organizational interdependencies in innovation activities from various novel perspectives. Third, it highlights the interconnection between innovations and organizations, and the related path dependency and anticipatory aspects in innovation activities. In general, the thesis adds to our knowledge of how different aspects of systems form innovations through interaction and organizational interdependencies. It highlights the continuous need to redefine information and adjust organizations and networks based on ongoing activities – stressing the emergent, systemic nature of innovation.