15 results for event based

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

B2B document handling is moving from paper to electronic networks very rapidly. Moving, handling and transforming large electronic business documents places heavy demands on the systems that process them. This paper explores new technologies such as SOA, event-driven systems and the ESB, and a scalable, event-driven enterprise service bus is created to demonstrate these new approaches to message handling. As an end result, we have a small but fully functional messaging system with several different components. This was the first larger Java project done in-house, so along the way we also developed our own set of best practices for Java development, covering configuration, tooling, code repositories, class naming and much more.
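The thesis system itself was implemented in Java; purely as a language-neutral illustration, the sketch below shows in Python the publish/subscribe core that an event-driven service bus builds on, with all component and message names invented for the example.

```python
from collections import defaultdict

class MiniBus:
    """Minimal in-process event bus: components subscribe to message
    types and are invoked whenever a matching message is published."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, msg_type, handler):
        self.handlers[msg_type].append(handler)

    def publish(self, msg_type, payload):
        # Every subscriber reacts to the event independently, which is
        # what decouples message producers from consumers on a bus.
        for handler in self.handlers[msg_type]:
            handler(payload)

bus = MiniBus()
bus.subscribe("invoice.received", lambda doc: print("validating", doc))
bus.subscribe("invoice.received", lambda doc: print("transforming", doc))
bus.publish("invoice.received", {"id": 42})
```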

Relevance:

70.00%

Publisher:

Abstract:

Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle for using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel, using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
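The default execution semantics described above — repeatedly and nondeterministically picking any action whose guard holds, stopping when none is enabled — can be sketched in a few lines. This is a toy interpreter in Python under my own simplifications, not Event-B or Action Systems notation:

```python
import random

def run(state, commands, rng=random.Random(0)):
    """Execute guarded commands: nondeterministically choose an enabled
    (guard, action) pair until no guard holds, then return the state."""
    while True:
        enabled = [(g, a) for g, a in commands if g(state)]
        if not enabled:                       # no guard holds: terminate
            return state
        _, action = rng.choice(enabled)       # nondeterministic choice
        action(state)

# Toy model: two counters incremented in an arbitrary interleaving.
commands = [
    (lambda s: s["x"] < 3, lambda s: s.update(x=s["x"] + 1)),
    (lambda s: s["y"] < 3, lambda s: s.update(y=s["y"] + 1)),
]
print(run({"x": 0, "y": 0}, commands))   # {'x': 3, 'y': 3}
```

A scheduler, in the sense studied in the thesis, replaces the blind `rng.choice` with an explicit control-flow policy while preserving the correctness of the model.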

Relevance:

70.00%

Publisher:

Abstract:

This bachelor’s thesis, written for Lappeenranta University of Technology and implemented in a medium-sized enterprise (SME), examines a distributed document migration system. The system was created to migrate a large number of electronic documents, along with their metadata, from one document management system to another, so as to enable a rapid switchover of an enterprise resource planning system inside the company. The paper examines, through theoretical analysis, messaging as a possible enabler for distributing applications and how it naturally fits an event-based model, whereby system transitions and states are expressed through recorded behaviours. This is put into practice by analysing the implemented migration system and how its core components, MassTransit, RabbitMQ and MongoDB, were orchestrated together to realize such a system. As a result, the paper presents an architecture for a scalable and distributed system that could migrate hundreds of thousands of documents over a weekend, serving its goal of enabling a rapid system switchover.
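The stack named above is .NET-based (MassTransit over RabbitMQ); purely as an illustration of the competing-consumers pattern such a migration relies on, here is a hedged Python sketch using the pika RabbitMQ client, with the queue name and message format invented:

```python
import pika  # RabbitMQ client; queue name and payload format are hypothetical

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="migrate.documents", durable=True)
channel.basic_qos(prefetch_count=1)  # spread documents across parallel workers

def on_message(ch, method, properties, body):
    document_id = body.decode()
    # Here the worker would fetch the document and its metadata from the
    # source system and write them to the target system (omitted).
    print("migrated", document_id)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # let the broker hand out the next

channel.basic_consume(queue="migrate.documents", on_message_callback=on_message)
channel.start_consuming()
```

Because each worker acknowledges one document at a time, adding workers scales the migration horizontally, which is what makes a weekend-sized switchover window plausible.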

Relevance:

60.00%

Publisher:

Abstract:

Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This enables making software distributed across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis is focused on the D-Bus system, which has recently gained a lot of users, but is still scantily researched. D-Bus has support for asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. Later, we present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity, but maintains the other qualities at a good level. Finally, we present currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
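The state-caching proxy pattern summarized above can be shown generically: the proxy takes one snapshot of the remote object's properties, subscribes to change signals, and thereafter answers reads locally without ever blocking on the bus. A minimal Python sketch with an in-process stand-in for the remote object (names are illustrative, not the D-Bus API):

```python
class StateCachingProxy:
    """Answers property reads from a local cache that is kept fresh
    by a subscription to the remote object's change signals."""

    def __init__(self, remote):
        self.cache = dict(remote.get_all_properties())   # initial snapshot
        remote.on_properties_changed(self.cache.update)  # stay up to date

    def get(self, name):
        return self.cache[name]   # never blocks on a remote call

class FakeRemote:
    """Stand-in for a remote object that emits change signals."""
    def __init__(self):
        self._props, self._subs = {"Status": "idle"}, []
    def get_all_properties(self):
        return dict(self._props)
    def on_properties_changed(self, callback):
        self._subs.append(callback)
    def set(self, name, value):   # remote-side change + signal emission
        self._props[name] = value
        for cb in self._subs:
            cb({name: value})

remote = FakeRemote()
proxy = StateCachingProxy(remote)
remote.set("Status", "busy")
print(proxy.get("Status"))        # 'busy', served from the cache
```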

Relevance:

40.00%

Publisher:

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias caused by them in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
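The IPCW idea evaluated in the simulation study can be sketched compactly: estimate the survival function of the censoring process and give each observed event a weight inversely proportional to its probability of having remained uncensored. A toy numpy sketch under strong simplifications (no covariates, ties broken by sort order), not the estimator actually used in the study:

```python
import numpy as np

# Toy data: follow-up time and event indicator (1 = event, 0 = censored).
t = np.array([2., 3., 4., 5., 6., 7., 9., 10.])
d = np.array([1,  0,  1,  1,  0,  1,  0,  1 ])

def km_at_own_time(times, events):
    """Kaplan-Meier survival estimate, evaluated at each subject's time."""
    surv = np.empty(len(times))
    s, at_risk = 1.0, len(times)
    for i in np.argsort(times):
        s *= 1.0 - events[i] / at_risk   # survival drops only at 'event' times
        surv[i] = s
        at_risk -= 1
    return surv

# Censoring distribution G: treat censorings as the events of interest.
G = km_at_own_time(t, 1 - d)
weights = np.where(d == 1, 1.0 / np.maximum(G, 1e-12), 0.0)
print(np.round(weights, 2))   # late, 'hard to retain' events get upweighted
```

Applied to the design weights of a survey, this correction compensates for attrition that depends on observed history, which is why it reduces bias in the weighted Kaplan-Meier and Cox estimators.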

Relevance:

30.00%

Publisher:

Abstract:

Photosystem II (PSII) of oxygenic photosynthesis is susceptible to photoinhibition. Photoinhibition is defined as light-induced damage resulting in turnover of the D1 protein subunit of the reaction center of PSII. Both visible and ultraviolet (UV) light cause photoinhibition. Photoinhibition induced by UV light damages the oxygen evolving complex (OEC) via absorption of UV photons by the Mn ion(s) of the OEC. Under visible light, most of the earlier hypotheses assume that photoinhibition occurs when the rate of photon absorption by the PSII antenna exceeds the use of the absorbed energy in photosynthesis. However, photoinhibition occurs at all light intensities with the same efficiency per photon. The aim of my thesis work was to build a model of photoinhibition that fits the experimental features of photoinhibition. I studied the role of electron transfer reactions of PSII in photoinhibition and found that changing the electron transfer rate had only a minor influence on photoinhibition if light intensity was kept constant. Furthermore, quenching of antenna excitations protected less efficiently than it would if antenna chlorophylls were the only photoreceptors of photoinhibition. To identify the photoreceptors of photoinhibition, I measured the action spectrum of photoinhibition. The action spectrum showed resemblance to the absorption spectra of Mn model compounds, suggesting that the Mn cluster of the OEC acts as a photoreceptor of photoinhibition under visible light, too. The role of Mn in photoinhibition was further supported by experiments showing that during photoinhibition the OEC is damaged before electron transfer activity at the acceptor side of PSII is lost. Mn enzymes were found to be photosensitive under visible and UV light, indicating that Mn-containing compounds, including the OEC, are capable of functioning as photosensitizers both in visible and in UV light. The experimental results above led to the Mn hypothesis of the mechanism of continuous-light-induced photoinhibition. According to the Mn hypothesis, excitation of the Mn of the OEC results in inhibition of electron donation from the OEC to the oxidized primary donor P680+ both under UV and under visible light. P680 is oxidized by photons absorbed by chlorophyll, and if not reduced by the OEC, P680+ may cause harmful oxidation of other PSII components. Photoinhibition was also induced with intense laser pulses, and it was found that the photoinhibitory efficiency increased in proportion to the square of pulse intensity, suggesting that laser-pulse-induced photoinhibition is a two-photon reaction. I further developed the Mn hypothesis, suggesting that the initial event in photoinhibition under both continuous and pulsed light is the same: Mn excitation that leads to the inhibition of electron donation from the OEC to P680+. Under laser pulse illumination, another Mn-mediated inhibitory photoreaction occurs within the duration of the same pulse, whereas under continuous light, the secondary damage is chlorophyll mediated. A mathematical model based on the Mn hypothesis was found to explain photoinhibition under continuous light, under flash illumination and under the combination of these two.
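To make the contrast between the two regimes explicit, the rate laws implied by the Mn hypothesis can be written down. The following is a hedged sketch in my own notation of the form such a model can take, not the thesis's actual equations: under continuous light the inhibition rate is first order in photon flux density I (one Mn-mediated photoreaction per inhibition event, constant efficiency per photon), whereas a single short pulse must drive two photoreactions within its duration, giving a quadratic dependence on pulse intensity.

```latex
% Continuous light: first-order loss of active PSII (A), rate linear in I
\frac{dA}{dt} = -k_{\mathrm{PI}}\, I\, A
% Single short pulse: two inhibitory photoreactions within one pulse,
% so the damage per pulse scales with the square of the pulse intensity
\Delta A \propto -\, I_{\mathrm{pulse}}^{2}\, A
```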

Relevance:

30.00%

Publisher:

Abstract:

Transportation and warehousing are large and growing sectors in society, and their efficiency is of high importance. Transportation also has a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions according to the Kyoto protocol. Transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. However, individual simulation approaches have weaknesses of their own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows using a different method to overcome the weakness of one method. It is important to choose the correct approach (or a combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (this can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can further escalate, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model can be based on a poorly chosen structure. In this research it is argued that simulation-based decision support systems need to take various issues into account in order to function well. The actual simulation model can be constructed using any (or multiple) approaches, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework which simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model and the user, but at the same time, the user needs to be able to run the appropriate simulation runs in order to analyze the problems correctly. This study recommends that simulation modelers start to transfer their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and integrating simulations with Geographic Information Systems.
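Of the three approaches named above, discrete-event simulation is the simplest to show in miniature: the clock jumps from scheduled event to scheduled event via a priority queue. A minimal Python sketch with an invented warehouse scenario (not a model from the research):

```python
import heapq

def run_des(initial_events, horizon):
    """Core discrete-event loop: pop the earliest (time, name) event,
    handle it, and let handlers schedule further events."""
    queue = list(initial_events)
    heapq.heapify(queue)
    while queue:
        time, name = heapq.heappop(queue)
        if time > horizon:
            break
        print(f"t={time:5.1f}  {name}")
        if name == "truck_arrives":
            heapq.heappush(queue, (time + 1.5, "unload_done"))     # service time
            heapq.heappush(queue, (time + 8.0, "truck_arrives"))   # next arrival

run_des([(0.0, "truck_arrives")], horizon=24.0)
```

Hybridization in the sense discussed above would, for example, replace the fixed arrival schedule with decisions made by individual agents, or drive the arrival rates from a system dynamics stock-and-flow model.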

Relevance:

30.00%

Publisher:

Abstract:

Combating climate change is one of the key tasks of humanity in the 21st century. One of its leading causes is carbon dioxide emissions from the use of fossil fuels. Renewable energy sources should be used instead of relying on oil, gas, and coal. In Finland a significant amount of energy is produced using wood, and the usage of wood chips is expected to increase significantly in the future, by over 60%. The aim of this research is to improve understanding of the costs of wood chip supply chains. This is done by utilizing simulation as the main research method. The simulation model combines agent-based modelling and discrete-event simulation to imitate the wood chip supply chain. This thesis concentrates on the usage of simulation-based decision support systems in strategic decision-making. The simulation model is part of a decision support system, which connects the simulation model to databases and also provides a graphical user interface for the decision-maker. The main analysis conducted with the decision support system compares a traditional supply chain to a supply chain utilizing specialized containers. According to the analysis, the container supply chain achieves lower costs than the traditional supply chain. A container supply chain can also be scaled up more easily due to faster emptying operations. Initially the container operations would supply only part of the fuel needs of a power plant and would complement the current supply chain. The model can be expanded to include intermodal supply chains, since with increased future demand there will not be enough wood chips located close to current and future power plants.
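At its simplest, the scalability argument about emptying operations is throughput arithmetic: faster unloading raises the number of vehicles a receiving station can absorb per day. A back-of-the-envelope Python sketch with invented parameters, not figures from the thesis:

```python
# Illustrative parameters only, not data from the thesis.
HOURS_OPEN = 14                                      # receiving hours per day
UNLOAD_MIN = {"traditional": 45, "container": 15}    # minutes per vehicle
LOAD_M3 = 120                                        # wood chips per vehicle, m^3

for chain, minutes in UNLOAD_MIN.items():
    vehicles = HOURS_OPEN * 60 // minutes            # vehicles emptied per day
    print(f"{chain:12s}: {vehicles:3d} vehicles/day = {vehicles * LOAD_M3:6d} m^3/day")
```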

Relevance:

30.00%

Publisher:

Abstract:

Despite the fact that the literature on mergers and acquisitions is extensive, relatively little effort has been made to examine the relationship between acquiring firms’ financial slack and short-term abnormal stock returns following takeover announcements. In this study, the case is made that the financial slack of a firm is not only an outcome of past business and financing activities but may also affect the quality of acquisition decisions. We hypothesize that the level of financial slack in a firm is negatively associated with the abnormal returns following acquisition announcements, because slack reduces managerial discipline over the use of corporate funds and because it may give rise to managerial self-serving behavior. In this study, financial slack is measured in terms of three financial statement ratios: the leverage ratio, the cash and equivalents to total assets ratio and the free cash flow to total assets ratio. The data used in this paper are collected from two main sources. A list comprising 90 European acquisition announcements was retrieved from the Thomson One Banker database. The stock price data and financial statements information for the respective firms were collected using Datastream. Our empirical analysis is two-fold. First, we conduct a two-sample t-test, finding that the most slack-rich firms experience lower abnormal returns than the most slack-poor firms in the event window [-1, +1], significant at the 5% risk level. Second, we perform a cross-sectional regression for the sample firms using the three financial statement ratios to explain cumulative abnormal returns (CAR). We find that leverage shows a statistically significant positive relationship with cumulative abnormal returns in the event window [-1, +1] (significant at 5%). Moreover, the cash to total assets ratio shows a weak negative relationship with CAR (significant at 10%) in the event window [-1, +1]. We conclude that our hypothesis of an inverse relationship between slack and abnormal returns receives empirical support. Based on the results of the event study, we find empirical support for the hypothesis that capital markets expect the acquisitions undertaken by slack-rich firms to be more likely driven by managerial self-serving behavior and hubris than those undertaken by slack-poor firms, signaling possible agency problems and behavioral biases.
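The event-study mechanics behind these results are standard: fit a market model over a pre-event estimation window, take abnormal returns as the residuals around the announcement, and cumulate them over the [-1, +1] window. A compact numpy sketch on simulated returns (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)                    # daily market returns
stock = 0.001 + 1.2 * market + rng.normal(0, 0.01, 250)   # market-model stock returns
stock[246] += 0.03                                        # announcement effect at t = 0

est, event = slice(0, 200), slice(245, 248)               # estimation window, [-1, +1]
beta, alpha = np.polyfit(market[est], stock[est], 1)      # market model: R = a + b*Rm

abnormal = stock[event] - (alpha + beta * market[event])  # AR_t = R_t - E[R_t]
print(f"CAR[-1,+1] = {abnormal.sum():.4f}")               # cumulative abnormal return
```

The cross-sectional regression in the study then explains these firm-level CARs with the three slack ratios.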

Relevance:

30.00%

Publisher:

Abstract:

Vision affords us the ability to consciously see and to use this information in our behavior. While research has produced a detailed account of the function of the visual system, the neural processes that underlie conscious vision are still debated. One of the aims of the present thesis was to examine the time-course of the neuroelectrical processes that correlate with conscious vision. The second aim was to study the neural basis of unconscious vision, that is, situations where a stimulus that is not consciously perceived nevertheless influences behavior. According to current prevalent models of conscious vision, the activation of visual cortical areas is not, as such, sufficient for consciousness to emerge, although it might be sufficient for unconscious vision. Conscious vision is assumed to require reciprocal communication between cortical areas, but views differ substantially on the extent of this recurrent communication. Visual consciousness has been proposed to emerge from recurrent neural interactions within the visual system, while other models claim that more widespread cortical activation is needed for consciousness. Studies I-III compared models of conscious vision by studying event-related potentials (ERPs). ERPs represent the brain’s average electrical response to stimulation. The results support the model that associates conscious vision with activity localized in the ventral visual cortex. The timing of this activity corresponds to an intermediate stage in visual processing. Earlier stages of visual processing may influence what becomes conscious, although these processes do not directly enable visual consciousness. Late processing stages, when more widespread cortical areas are activated, reflect the access to and manipulation of the contents of consciousness. Studies IV and V concentrated on unconscious vision. Using transcranial magnetic stimulation (TMS), we show that when early visual cortical processing is disturbed so that subjects fail to consciously perceive visual stimuli, they may nevertheless guess (above chance level) the location where the visual stimuli were presented. However, the results also suggest that in a similar situation, the early visual cortex is necessary for both conscious and unconscious perception of chromatic information (i.e., color). Chromatic information that remains unconscious may influence behavioral responses when activity in the visual cortex is not disturbed by TMS. Our results support the view that early stimulus-driven (feedforward) activation may be sufficient for unconscious processing. In conclusion, the results of this thesis support the view that conscious vision is enabled by a series of processing stages. The processes that most closely correlate with conscious vision take place in the ventral visual cortex ~200 ms after stimulus presentation, although preceding time-periods and contributions from other cortical areas such as the parietal cortex are also indispensable. Unconscious vision relies on intact early visual activation, although the location of a visual stimulus may be unconsciously resolved even when activity in the early visual cortex is interfered with.
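The ERP technique used in Studies I-III is, at its core, simple averaging: time-lock EEG epochs to stimulus onset and average them, so that activity unrelated to the stimulus cancels out. A toy numpy sketch with a synthetic evoked response (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials = 1000, 60                        # sampling rate (Hz), repetitions
t = np.arange(-0.1, 0.5, 1 / fs)               # epoch from -100 ms to +500 ms

# Synthetic evoked component ~200 ms post-stimulus, buried in EEG noise.
evoked = 5e-6 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
epochs = evoked + rng.normal(0.0, 20e-6, (n_trials, t.size))

erp = epochs.mean(axis=0)                      # noise shrinks ~ 1/sqrt(n_trials)
print(f"ERP peak at {t[np.abs(erp).argmax()] * 1000:.0f} ms")
```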

Relevance:

30.00%

Publisher:

Abstract:

Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system has good performance, reaching first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as shown competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is also generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth one demonstrates the application of the system to PubMed-scale text mining.
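The unified graph format can be made concrete with the example sentence above: entities and trigger words become nodes, event arguments become typed, directed edges, and nesting is simply an argument edge that points at another event's trigger. A small Python sketch of CAUSE(A, BIND(B, C)) (role names simplified, not the exact TEES format):

```python
# Nodes: (type, text); "T" = entity, "E" = event trigger.
nodes = {
    "T1": ("Protein", "A"), "T2": ("Protein", "B"), "T3": ("Protein", "C"),
    "E1": ("Cause", "causes"), "E2": ("Bind", "bind"),
}
# Edges: (source event, target node, argument role).
edges = [
    ("E1", "T1", "Cause"),   # CAUSE(A, ...)
    ("E1", "E2", "Theme"),   # ...whose theme is another event (nesting)
    ("E2", "T2", "Theme"),   # BIND(B, C)
    ("E2", "T3", "Theme2"),
]

def render(node):
    """Recursively unfold nested events from the graph."""
    if node.startswith("T"):
        return nodes[node][1]
    args = ", ".join(render(dst) for src, dst, _ in edges if src == node)
    return f"{nodes[node][0].upper()}({args})"

print(render("E1"))   # CAUSE(A, BIND(B, C))
```

Extraction then decomposes into classification over this graph: first detect trigger nodes, then classify candidate argument edges between them.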

Relevance:

30.00%

Publisher:

Abstract:

This master’s thesis was done for a small company, Vipetec Oy, which offers specialized technological services for companies mainly in the forest industry. The study was initiated partly because the company wants to expand its customer base to a new industry. There were two goals connected to each other. The first was to find out how much and what kind of value current customers have realized from the ATA Process Event Library, one of the products that the company offers. The second was to determine the best way to present this value, and its implications for future value potential, to both current and potential customers. ATA helps customers to make grade and product changes, start-up after machine downtime, and recovery from production breaks faster. All three events occur from time to time on a production line. The faster operation results in savings in time and material. In addition to ATA, Vipetec also offers other services related to the development of automation and the optimization of controls. The theoretical part concentrates on the concept of value, how it can be delivered to customers, and what kind of risk the customer faces in industrial purchasing. The function of reference marketing towards customers is also discussed. In the empirical part, the realized value for existing customers is evaluated based on both numerical data and interviews. There is also a brief case study of one customer. After that, value-based reference marketing for a target industry is examined through interviews with these potential customers. Finally, answers to the research questions are stated and compared to the theoretical knowledge about the subject. Results show that those customers’ machines which use the full service concept of ATA are usually able to save more time and material than the machines which use only some features of the product. Interviews indicated that sales arguments which focus on improved competitive status are not as effective as the current arguments which focus on numerical improvements. In the case of potential customers in the new industry, the current sales arguments likely work best for those whose irregular production situations are caused mainly by faults. When the actions of Vipetec were compared to ten key elements of creating customer references, it was seen that the company has either already included many of them in its strategy or has good chances of including them with the help of the results of this study.

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person, and to transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas belonging to the smart region, where people have limited access to regular healthcare services. The same system can be beneficial in urban areas, where hospitals can be overcrowded and where it may take substantial time to access healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposes an architecture called IReHMo - an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, such as HTTP, MQTT and CoAP, have been evaluated and compared against each other. Results showed that CoAP is the most efficient protocol for transmitting small-size healthcare data to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at large scale in regions of northern Sweden.
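The protocol comparison comes down largely to per-message overhead: CoAP's fixed binary header is 4 bytes (plus token and options), whereas a typical HTTP request carries hundreds of bytes of textual headers. A rough Python estimate of aggregate uplink bandwidth, with the payload size and deployment figures invented for illustration:

```python
# Rough per-message overheads (bytes); deployment figures are invented.
OVERHEAD = {"HTTP": 300, "MQTT": 2 + 20, "CoAP": 4 + 10}  # header + topic/options
PAYLOAD = 50          # one sensor reading, bytes
PATIENTS = 10_000     # monitored persons in the region
INTERVAL_S = 30       # seconds between readings

for proto, oh in OVERHEAD.items():
    mbit_s = PATIENTS * (oh + PAYLOAD) * 8 / INTERVAL_S / 1e6
    print(f"{proto:4s}: {mbit_s:6.2f} Mbit/s aggregate uplink")
```

With small payloads the header overhead dominates, which is consistent with CoAP coming out as the most efficient of the three.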

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this qualitative research is to study the impact of event marketing on brand awareness in the context of the electronic sports (eSports) industry. The theoretical framework is developed based on the research questions. The research analyzes earlier theories and also draws on more recent literature to explain the current phenomenon in the eSports industry. In the empirical part, a total of five case companies were interviewed. The context of this research is eSports, which has its own chapter. The theoretical part of the thesis focuses on event marketing and brand awareness. In this research, event marketing is analyzed from the event organizer’s perspective; on some occasions, the event exhibitor’s perspective is also analyzed. In brand awareness, the focus is on how to make a brand recognized, recalled and, from there, top of mind in consumers’ minds. The results of this research revealed that many companies struggle to make their brand recognizable. Some of the case companies lack a strategy and do not know exactly what their customers’ core values are, while others are the opposite. One reason behind this is that some of them have experience in the field and resources that cover their needs. An already strong brand also has a clearly positive effect on their business.

Relevance:

30.00%

Publisher:

Abstract:

The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information that would be relevant for their research. As a response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among these tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction constitutes the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at a large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by the end-users. This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes and gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as the types and directions of these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at The 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
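The hypothesis-generation step casts link prediction as ordinary supervised classification over gene-pair feature vectors. A minimal scikit-learn sketch on synthetic data, with the features invented for illustration rather than taken from the thesis's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per gene pair: invented network features (e.g. shared neighbours,
# event-path counts); label = whether an interaction edge exists.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # well above chance: learnable
```

Pairs the classifier scores highly but that are absent from the event network are the generated hypotheses: candidate novel interactions with predicted type and direction.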