17 results for: network metabolismo flux analysis markov recon
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite equals the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints of the steady state alone. The field of research that estimates the fluxes using measured constraints on the relative abundances of the different labelling patterns induced by 13C labelled nutrients is called 13C metabolic flux analysis. There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of the different labelling patterns of metabolites. Mathematically involved non-linear optimization methods that can get stuck in local optima are thus avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of the external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network.
New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects the sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools for processing raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
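To make the balance constraints concrete: in matrix form the steady state requires S v = 0, where S is the stoichiometric matrix and v the flux vector, and the direct approach augments this system with further linear constraints obtained from labelling data. The sketch below only illustrates that idea; the matrices, the measurement-derived constraints and the numbers are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch (not the thesis implementation): the steady-state balance
# constraints S v = 0 are stacked with extra linear constraints A v = b
# derived from labelling measurements, and the flux vector v is estimated
# by linear least squares. All matrices and values are illustrative toys.
import numpy as np

# Stoichiometric matrix: rows = internal metabolites, columns = reactions.
S = np.array([
    [1, -1, -1,  0],   # metabolite A: produced by v0, consumed by v1 and v2
    [0,  1,  1, -1],   # metabolite B: produced by v1 and v2, consumed by v3
], dtype=float)

# Hypothetical labelling-derived constraints, e.g. a measured uptake flux
# and a split ratio between two alternative pathways.
A = np.array([
    [1, 0,  0, 0],     # v0 = 10 (measured uptake)
    [0, 1, -2, 0],     # v1 = 2 * v2 (ratio inferred from labelling patterns)
], dtype=float)
b = np.array([10.0, 0.0])

# Stack balance and measurement constraints into one linear system.
M = np.vstack([S, A])
rhs = np.concatenate([np.zeros(S.shape[0]), b])

v, residuals, rank, _ = np.linalg.lstsq(M, rhs, rcond=None)
print("estimated fluxes:", v)   # expect v = [10, 20/3, 10/3, 10]
```

With the extra constraint rows, fluxes through the parallel pathways (here v1 and v2) that the balance equations alone leave undetermined become identifiable.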
Abstract:
Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it for real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software tool intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
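As a rough illustration of the connectivity idea behind gapless modelling (this is not the algorithm developed in the thesis), the toy sketch below treats a reaction as operable only when all of its substrates are producible from seed nutrients, directly or via other operable reactions; reactions that never become operable indicate gaps. All reaction and metabolite names are made up.

```python
# Illustrative reachability check for the gapless idea (not the thesis
# algorithm): expand the set of producible metabolites until no further
# reaction can fire; unfired reactions point to connectivity gaps.
def producible_metabolites(reactions, seeds):
    """reactions: dict name -> (substrates, products); seeds: set of metabolite names."""
    available = set(seeds)
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, (substrates, products) in reactions.items():
            if name not in fired and set(substrates) <= available:
                available |= set(products)
                fired.add(name)
                changed = True
    return available, fired

toy_network = {
    "r1": (["glc"], ["g6p"]),
    "r2": (["g6p"], ["f6p"]),
    "r3": (["f6p", "atp"], ["fbp"]),   # gapped if atp is never produced
}
avail, ok = producible_metabolites(toy_network, seeds={"glc"})
print("producible:", avail, "gapped reactions:", set(toy_network) - ok)
```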
Abstract:
Increasing concern about global climate warming has accelerated research into renewable energy sources that could replace fossil petroleum-based fuels and materials. Bioethanol production from cellulosic biomass by fermentation with baker's yeast Saccharomyces cerevisiae is one of the most studied areas in this field. The focus has been on metabolic engineering of S. cerevisiae for utilisation of the pentose sugars, in particular D-xylose, which is abundant in the hemicellulose fraction of biomass. Introduction of a heterologous xylose-utilisation pathway into S. cerevisiae enables xylose fermentation, but ethanol yield and productivity do not reach the theoretical level. In the present study, transcription, proteome and metabolic flux analyses of recombinant xylose-utilising S. cerevisiae expressing the genes encoding xylose reductase (XR) and xylitol dehydrogenase (XDH) from Pichia stipitis and the endogenous xylulokinase were carried out to characterise the global cellular responses to the metabolism of xylose. The aim of these studies was to find novel ways to engineer cells for improved xylose fermentation. The analyses were carried out on cells grown on xylose and glucose in both batch and chemostat cultures. A particularly interesting observation was that several proteins had post-translationally modified forms with different abundance in cells grown on xylose and glucose. Hexokinase 2, glucokinase and both enolase isoenzymes 1 and 2 were phosphorylated differently on the two different carbon sources studied. This suggests that phosphorylation of glycolytic enzymes may be a yet poorly understood means of modulating their activity or function. The results also showed that metabolism of xylose affected the gene expression and abundance of proteins in pathways leading to acetyl-CoA synthesis and altered the metabolic fluxes in these pathways. Additionally, the analyses showed increased expression and abundance of several other genes and proteins involved in cellular redox reactions (e.g. aldo-ketoreductase Gcy1p and 6-phosphogluconate dehydrogenase) in cells grown on xylose. Metabolic flux analysis indicated increased NADPH-generating flux through the oxidative part of the pentose phosphate pathway in cells grown on xylose. Most importantly, the results indicated that xylose was not able to repress, to the same extent as glucose, the genes of the tricarboxylic acid and glyoxylate cycles, gluconeogenesis and some other genes involved in the metabolism of respiratory carbon sources. This suggests that xylose is not recognised as a fully fermentative carbon source by the recombinant S. cerevisiae, which may be one of the major reasons for the suboptimal fermentation of xylose. The regulatory network for carbon source recognition and catabolite repression is complex and its functions are only partly known. Consequently, multiple genetic modifications and also random approaches would probably be required if these pathways were to be modified for further improvement of xylose fermentation by recombinant S. cerevisiae strains.
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of the end users, and fairness of the protocol for the end users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols among the various available ones, based on trade-offs between the optimality criteria. On the other hand, predictions about the future Internet dictate new rules for the optimality we should take into account, and new properties of the networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, where an optimization is applied to a general-form backoff function. It leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study a reputation scheme for overlay and peer-to-peer networks, where the resource is not placed at a common station but is spread across the network. The theoretical analysis suggests what behavior will be selected by an end station under such a reputation mechanism.
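For readers unfamiliar with the protocol family, the following toy sketch shows plain binary exponential backoff, the special case that the general backoff functions analysed in the thesis generalise; the slot counts and limits are arbitrary, and the thesis's own optimized backoff functions are not reproduced here.

```python
# Toy illustration of the protocol family analysed (binary exponential
# backoff), not the generalised or optimised backoff scheme of the thesis.
import random

def exponential_backoff_delay(attempt, base_slots=16, max_exponent=10):
    """Pick a uniform random slot from a window that doubles with each failed attempt."""
    window = base_slots * 2 ** min(attempt, max_exponent)
    return random.randrange(window)

# Example: delays chosen by one station over five consecutive collisions.
print([exponential_backoff_delay(a) for a in range(5)])
```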
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data was divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was evaluated for its ability to detect lameness in the validation dataset, which contained 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as coming from sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
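The classification step can be illustrated with a minimal probabilistic neural network (Parzen-window) classifier of the general kind named above; the features, training samples and kernel width below are invented and do not correspond to the thesis data.

```python
# Minimal sketch of a probabilistic neural network (Parzen-window) classifier:
# each class score is a Gaussian-kernel density estimate over that class's
# training samples, and the highest-scoring class is returned. The toy
# features stand in for per-leg load statistics and kick counts.
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    scores = {}
    for label in np.unique(train_y):
        samples = train_X[train_y == label]
        sq_dist = np.sum((samples - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-sq_dist / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy data: two invented features (e.g. load asymmetry, kicks per milking).
train_X = np.array([[0.1, 0.0], [0.2, 1.0], [0.8, 4.0], [0.9, 3.0]])
train_y = np.array(["sound", "sound", "lame", "lame"])
print(pnn_predict(np.array([0.85, 2.0]), train_X, train_y))   # -> "lame"
```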
Abstract:
Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data is monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
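As a simple illustration of summarising log data with frequent sets (not the Comprehensive or Queryable Log Compression methods themselves), the sketch below counts co-occurring field-value combinations in a toy log and keeps those above a support threshold as a compressed summary. Field names and log entries are invented.

```python
# Frequent-set style summary of a toy event log: count every combination of
# field-value pairs and report combinations seen at least min_support times.
from collections import Counter
from itertools import combinations

log_entries = [
    {"host": "bs1", "alarm": "link_down", "severity": "major"},
    {"host": "bs1", "alarm": "link_down", "severity": "major"},
    {"host": "bs2", "alarm": "link_down", "severity": "minor"},
    {"host": "bs1", "alarm": "power",     "severity": "major"},
]

counter = Counter()
for entry in log_entries:
    items = sorted(entry.items())
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            counter[combo] += 1

min_support = 2
frequent = {combo: n for combo, n in counter.items() if n >= min_support}
for combo, n in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(n, dict(combo))   # e.g. 3 {'alarm': 'link_down'}
```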
Abstract:
This study deals with language change and variation in the correspondence of the eighteenth-century Bluestocking circle, a social network which provided learned men and women with an informal environment for the pursuit of scholarly entertainment. Elizabeth Montagu (1718–1800), a notable social hostess and a Shakespearean scholar, was one of their key figures. The study presents the reconstruction of Elizabeth Montagu's social networks from her youth to her later years with a special focus on the Bluestocking circle, and linguistic research on private correspondence between Montagu and her Bluestocking friends and family members between the years 1738 and 1778. The epistolary language use is investigated using the methods and frameworks of corpus linguistics, historical sociolinguistics, and social network analysis. The approach is diachronic and concerns real-time language change. The research is based on a selection of manuscript letters which I have edited and compiled into an electronic corpus (Bluestocking Corpus). I have also devised a network strength scale in order to quantify the strength of network ties and to compare the results of the linguistic research with the network analysis. The studies range from the reconstruction and analysis of Elizabeth Montagu's most prominent social networks to the analysis of changing morphosyntactic features and spelling variation in Montagu's and her network members' correspondence. The linguistic studies look at the use of the progressive construction, preposition stranding and pied piping, and spelling variation in terms of preterite and past participle endings in the regular paradigm (-ed, -'d, -d, -'t, -t) and full/contracted spellings of auxiliary verbs. The results are analysed in terms of social network membership, sociolinguistic variables of the correspondents, and, when relevant, aspects of eighteenth-century linguistic prescriptivism. The studies showed a slight diachronic increase in the use of the progressive, a significant decrease of the stigmatised preposition stranding and increase of pied piping, and relatively informal but socially controlled epistolary spelling. Certain significant changes in Elizabeth Montagu's language use over the years could be attributed to her increasingly prominent social standing and the changes in her social networks, and the strength of ties correlated strongly with the use of the progressive in the Bluestocking Corpus. Gender, social rank, and register in terms of kinship/friendship had a significant influence on language use, and an effect of prescriptivism could also be detected. Elizabeth Montagu's network ties resulted in language variation in terms of network membership, her own position in a given network, and the social factors that controlled eighteenth-century interaction. When all the network ties are strong, linguistic variation seems to be essentially linked to the social variables of the informants.
Abstract:
Elucidating the mechanisms responsible for the patterns of species abundance, diversity, and distribution within and across ecological systems is a fundamental research focus in ecology. Species abundance patterns are shaped in a convoluted way by interplays between inter-/intra-specific interactions, environmental forcing, demographic stochasticity, and dispersal. Comprehensive models and suitable inferential and computational tools for teasing out these different factors are quite limited, even though such tools are critically needed to guide the implementation of management and conservation strategies, the efficacy of which rests on a realistic evaluation of the underlying mechanisms. This is even more so in the prevailing context of concern over the progress of climate change and its potential impacts on ecosystems. This thesis utilized the flexible hierarchical Bayesian modelling framework, in combination with the computer-intensive methods known as Markov chain Monte Carlo, to develop methodologies for identifying and evaluating the factors that control the structure and dynamics of ecological communities. These methodologies were used to analyze data from a range of taxa: macro-moths (Lepidoptera), fish, crustaceans, birds, and rodents. Environmental stochasticity emerged as the most important driver of community dynamics, followed by density-dependent regulation; the influence of inter-specific interactions on community-level variances was broadly minor. This thesis contributes to the understanding of the mechanisms underlying the structure and dynamics of ecological communities by showing directly that environmental fluctuations, rather than inter-specific competition, dominate the dynamics of several systems. This finding emphasizes the need to better understand how species are affected by the environment and to acknowledge species differences in their responses to environmental heterogeneity, if we are to effectively model and predict their dynamics (e.g. for management and conservation purposes). The thesis also proposes a model-based approach to integrating the niche and neutral perspectives on community structure and dynamics, making it possible for the relative importance of each category of factors to be evaluated in light of field data.
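A minimal example of the Markov chain Monte Carlo machinery referred to above is a random-walk Metropolis sampler; the toy target below (a normal mean with a normal prior) merely stands in for the far richer hierarchical community-dynamics models of the thesis, and all numbers are illustrative.

```python
# Generic random-walk Metropolis sampler over a toy posterior: a normal mean
# with a normal prior and unit observation noise, fitted to fake data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=50)   # fake observations

def log_posterior(mu):
    log_prior = -0.5 * mu ** 2 / 10.0             # mu ~ N(0, 10)
    log_lik = -0.5 * np.sum((data - mu) ** 2)     # sigma fixed at 1
    return log_prior + log_lik

def metropolis(log_post, start, n_steps=5000, step=0.3):
    samples, current, current_lp = [], start, log_post(start)
    for _ in range(n_steps):
        proposal = current + rng.normal(scale=step)
        proposal_lp = log_post(proposal)
        if np.log(rng.uniform()) < proposal_lp - current_lp:   # accept/reject
            current, current_lp = proposal, proposal_lp
        samples.append(current)
    return np.array(samples)

draws = metropolis(log_posterior, start=0.0)
print("posterior mean of mu approx.", draws[1000:].mean())   # near 1.5
```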
Abstract:
Bacteria play an important role in many ecological systems. The molecular characterization of bacteria using either cultivation-dependent or cultivation-independent methods reveals the large scale of bacterial diversity in natural communities, and the vastness of subpopulations within a species or genus. Understanding how bacterial diversity varies across different environments and also within populations should provide insights into many important questions of bacterial evolution and population dynamics. This thesis presents novel statistical methods for analyzing bacterial diversity using widely employed molecular fingerprinting techniques. The first objective of this thesis was to develop Bayesian clustering models to identify bacterial population structures. Bacterial isolates were identified using multilocus sequence typing (MLST), and Bayesian clustering models were used to explore the evolutionary relationships among isolates. Our method involves the inference of genetic population structures via an unsupervised clustering framework where the dependence between loci is represented using graphical models. The population dynamics that generate such a population stratification were investigated using a stochastic model, in which homologous recombination between subpopulations can be quantified within a gene flow network. The second part of the thesis focuses on cluster analysis of community compositional data produced by two different cultivation-independent analyses: terminal restriction fragment length polymorphism (T-RFLP) analysis, and fatty acid methyl ester (FAME) analysis. The cluster analysis aims to group bacterial communities that are similar in composition, which is an important step for understanding the overall influences of environmental and ecological perturbations on bacterial diversity. A common feature of T-RFLP and FAME data is zero-inflation, which indicates that the observation of a zero value is much more frequent than would be expected, for example, from a Poisson distribution in the discrete case, or a Gaussian distribution in the continuous case. We provide two strategies for modeling zero-inflation in the clustering framework, which were validated on both synthetic and complex empirical data sets. We show in the thesis that our model, which takes into account dependencies between loci in MLST data, can produce better clustering results than methods that assume independent loci. Furthermore, computer algorithms that are efficient in analyzing large-scale data were adopted to meet the increasing computational need. Our method for detecting homologous recombination in subpopulations may provide a theoretical criterion for defining bacterial species. The clustering of bacterial community data, including T-RFLP and FAME data, provides an initial step towards discovering the evolutionary dynamics that structure and maintain bacterial diversity in the natural environment.
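The zero-inflation idea can be written down directly: with probability pi an observation is a structural zero, otherwise it follows an ordinary count distribution. The sketch below evaluates a generic zero-inflated Poisson log-likelihood; it is illustrative only and is not the clustering model proposed in the thesis, and the counts and parameter values are invented.

```python
# Generic zero-inflated Poisson log-likelihood for count data such as
# fingerprinting profiles: P(0) = pi + (1 - pi) * exp(-lam),
# P(y > 0) = (1 - pi) * Poisson(y; lam).
import math

def zip_log_likelihood(counts, pi, lam):
    """Log-likelihood of the counts under a zero-inflated Poisson(pi, lam)."""
    ll = 0.0
    for y in counts:
        if y == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + y * math.log(lam) - math.lgamma(y + 1)
    return ll

print(zip_log_likelihood([0, 0, 3, 0, 1, 0, 7], pi=0.4, lam=2.5))
```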
Abstract:
In this Thesis, we develop theory and methods for computational data analysis. The problems in data analysis are approached from three perspectives: statistical learning theory, the Bayesian framework, and the information-theoretic minimum description length (MDL) principle. Contributions in statistical learning theory address the possibility of generalization to unseen cases, and regression analysis with partially observed data, with an application to mobile device positioning. In the second part of the Thesis, we discuss so-called Bayesian network classifiers and show that they are closely related to logistic regression models. In the final part, we apply the MDL principle to tracing the history of old manuscripts and to noise reduction in digital signals.
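The stated link between Bayesian network classifiers and logistic regression can be seen in the simplest case: for naive Bayes with Gaussian class conditionals of shared variance, the class posterior is exactly a sigmoid of a linear function of the input. The parameters below are arbitrary, and the example is only a numerical check of that textbook relationship, not material from the thesis.

```python
# Numerical check: a Gaussian naive Bayes posterior equals a logistic
# function of a linear score when both classes share the same variance.
import numpy as np

mu0, mu1, sigma, prior1 = 0.0, 2.0, 1.0, 0.5

def naive_bayes_posterior(x):
    p1 = prior1 * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
    p0 = (1 - prior1) * np.exp(-0.5 * ((x - mu0) / sigma) ** 2)
    return p1 / (p0 + p1)

def logistic_posterior(x):
    w = (mu1 - mu0) / sigma ** 2                                        # slope
    b = (mu0 ** 2 - mu1 ** 2) / (2 * sigma ** 2) + np.log(prior1 / (1 - prior1))
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

for x in (-1.0, 0.5, 3.0):
    print(x, naive_bayes_posterior(x), logistic_posterior(x))   # identical values
```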
Abstract:
Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services that take advantage of the advances in many areas of technology are being invented. These services include the delivery of mass media like television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services to the end user with acceptable performance and quality. This thesis presents an experimental study measuring the performance of bulk data TCP transfers, streaming audio flows, and HTTP transfers which compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modeled with a wireless network emulator. We analyze how different competing workload types behave with regular TCP, and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect the performance and the quality of service. We test four link types, including an error-free link and links with different Automatic Repeat reQuest (ARQ) persistency. The analysis consists of comparing the resulting performance of the different configurations based on defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because long queuing delays and congestion-related packet losses cause problems without DiffServ and RED. However, we also observed situations where there is still room for significant improvement if the link level is aware of the quality of service. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements improves performance. These enhancements include an initial window of four segments, Control Block Interdependence (CBI) and Forward RTO recovery (F-RTO). The initial window of four helps a later-starting TCP flow to start faster but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
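For reference, the RED rule mentioned above makes the drop (or, with ECN, marking) probability grow linearly with the average queue length between a minimum and a maximum threshold, where the average is an exponentially weighted moving average of the instantaneous queue. The sketch below uses illustrative parameter values, not those of the thesis experiments.

```python
# Sketch of the classic RED marking rule: zero below min_th, linear ramp up
# to max_p between the thresholds, and certain drop/mark above max_th.
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def update_average(avg_queue, current_queue, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - weight) * avg_queue + weight * current_queue

for q in (3, 8, 12, 20):
    print(q, red_drop_probability(q))
```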
Abstract:
In this thesis, the solar wind-magnetosphere-ionosphere coupling is studied observationally, with the main focus on the ionospheric currents in the auroral region. The thesis consists of five research articles and an introductory part that summarises the most important results reached in the articles and places them in a wider context within the field of space physics. Ionospheric measurements are provided by the International Monitor for Auroral Geomagnetic Effects (IMAGE) magnetometer network, by the low-orbit CHAllenging Minisatellite Payload (CHAMP) satellite, by the European Incoherent SCATter (EISCAT) radar, and by the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Magnetospheric observations, on the other hand, are acquired from the four spacecraft of the Cluster mission, and solar wind observations from the Advanced Composition Explorer (ACE) and Wind spacecraft. Within the framework of this study, a new method for determining the ionospheric currents from low-orbit satellite-based magnetic field data is developed. In contrast to previous techniques, all three current density components can be determined on a matching spatial scale, and the validity of the necessary one-dimensionality approximation, and thus, the quality of the results, can be estimated directly from the data. The new method is applied to derive an empirical model for estimating the Hall-to-Pedersen conductance ratio from ground-based magnetic field data, and to investigate the statistical dependence of the large-scale ionospheric currents on solar wind and geomagnetic parameters. Equations describing the amount of field-aligned current in the auroral region, as well as the location of the auroral electrojets, as a function of these parameters are derived. Moreover, the mesoscale (10-1000 km) ionospheric equivalent currents related to two magnetotail plasma sheet phenomena, bursty bulk flows and flux ropes, are studied. Based on the analysis of 22 events, the typical equivalent current pattern related to bursty bulk flows is established. For the flux ropes, on the other hand, only two conjugate events are found. As the equivalent current patterns during these two events are not similar, it is suggested that the ionospheric signatures of a flux rope depend on the orientation and the length of the structure, but analysis of additional events is required to determine the possible ionospheric connection of flux ropes.
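The thesis's own current-determination method is not reproduced here, but the textbook relation behind the one-dimensionality approximation is easy to state: an infinite current sheet produces a horizontal magnetic disturbance of mu0*J/2 just above it, so a height-integrated sheet current density can be read off an along-track magnetic perturbation. The sketch below uses this simple relation with invented numbers, purely as an illustration.

```python
# Infinite-sheet estimate (illustrative only): for a 1-D east-west current
# sheet, |delta B_horizontal| = mu0 * J / 2 above the sheet, so
# J = 2 * delta B / mu0 gives the sheet current density in A/m.
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def sheet_current_density(delta_b_horizontal_nT):
    """Sheet current density (A/m) implied by a horizontal perturbation (nT)."""
    delta_b_T = delta_b_horizontal_nT * 1e-9
    return 2.0 * delta_b_T / MU0

print(sheet_current_density(200.0))   # roughly 0.32 A/m for a 200 nT disturbance
```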
Abstract:
This study analyses personal relationships linking research to sociological theory on the questions of the social bond and on the self as social. From the viewpoint of disruptive life events and experiences, such as loss, divorce and illness, it aims at understanding how selves are bound to their significant others as those specific people ‘close or otherwise important’ to them. Who form the configurations of significant others? How do different bonds respond in disruptions and how do relational processes unfold? How is the embeddedness of selves manifested in the processes of bonding, on the one hand, and in the relational formation of the self, on the other? The bonds are analyzed from an anti-categorical viewpoint based on personal citations of significance as opposed to given relationship categories, such as ‘family’ or ‘friendship’ – the two kinds of relationships that in fact are most frequently significant. The study draws from analysis of the personal narratives of 37 Finnish women and men (in all 80 interviews) and their entire configurations of those specific people who they cite as ‘close or otherwise important’. The analysis stresses the subjective experiences, while also investigating the actualized relational processes and configurations of all personal relationships with certain relationship histories embedded in micro-level structures. The research is based on four empirical sub-studies of personal relationships and a summary discussing the questions of the self and social bond. Discussion draws from G. H. Mead, C. Cooley, N. Elias, T. Scheff, G. Simmel and the contributors of ‘relational sociology’. Sub-studies analyse bonds to others from the viewpoint of biographical disruption and re-configuration of significant others, estranged family bonds, peer support and the formation of the most intimate relationships into exclusive and inclusive configurations. All analyses examine the dialectics of the social and the personal, asking how different structuring mechanisms and personal experiences and negotiations together contribute to the unfolding of the bonds. The summary elaborates personal relationships as social bonds embedded in wider webs of interdependent people and social settings that are laden with cultural expectations. Regarding the question of the relational self, the study proposes both bonding and individuality as significant. They are seen as interdependent phases of the relationality of the self. Bonding anchors the self to its significant relationships, in which individuality is manifested, for example, in contrasting and differentiating dynamics, but also in active attempts to connect with others. Individuality is not a fixed quality of the self, but a fluid and interdependent phase of the relational self. More specifically, it appears in three formats in the flux of relational processes: as a sense of unique self (via cultivation of subjective experiences), as agency and as (a search for) relative autonomy. The study includes an epilogue addressing the ambivalence between the social expectation of individuality in society and the bonded reality of selves.
Abstract:
Despite thirty years of research in interorganizational networks and project business within the industrial networks approach and relationship marketing, collective capability of networks of business and other interorganizational actors has not been explicitly conceptualized and studied within the above-named approaches. This is despite the fact that the two approaches maintain that networking is one of the core strategies for the long-term survival of market actors. Recently, many scholars within the above-named approaches have emphasized that the survival of market actors is based on the strength of their networks and that inter-firm competition is being replaced by inter-network competition. Furthermore, project business is characterized by the building of goal-oriented, temporary networks whose aims, structures, and procedures are clarified and that are governed by processes of interaction as well as recurrent contracts. This study develops frameworks for studying and analysing collective network capability, i.e. collective capability created for the network of firms. The concept is first justified and positioned within the industrial networks, project business, and relationship marketing schools. An eclectic source of conceptual input is based on four major approaches to interorganizational business relationships. The study uses qualitative research and analysis, and the case report analyses the empirical phenomenon using a large number of qualitative techniques: tables, diagrams, network models, matrices etc. The study shows the high level of uniqueness and complexity of international project business. While perceived psychic distance between the parties may be small due to previous project experiences and the benefit of existing relationships, a varied number of critical events develop due to the economic and local context of the recipient country as well as the coordination demands of the large number of involved actors. The study shows that the successful creation of collective network capability led to the success of the network for the studied project. The processes and structures for creating collective network capability are encapsulated in a model of governance factors for interorganizational networks. The theoretical and management implications are summarized in seven propositions. The core implication is that project business success in unique and complex environments is achieved by accessing the capabilities of a network of actors, and project management in such environments should be built on both contractual and cooperative procedures with local recipient country parties.