Abstract:
The portfolio as a means of demonstrating personal skills has lately been gaining prominence among technology students, partly owing to the introduction of electronic portfolios, or e-portfolios. As platforms for e-portfolio management with different approaches have been introduced, the learning cycle, traditional portfolio pedagogy, and learner-centricity have sometimes been forgotten, and as a result the tools have largely been used as data repositories. The purpose of this thesis is to show how institutions can support IT students' construction of e-portfolios through the use of different tools related to study advising, teaching, and learning. The construction process is presented as a cycle based on learning theories. Actions related to the various phases of the e-portfolio construction process are supported by the implementation of software applications. To maximize learner-centricity and minimize institutional intervention, the evaluated and controlled actions for these practices can be separated from the e-portfolios, leaving the construction of the e-portfolio to the students. The main contributions of this thesis are the implemented applications, which support e-portfolio construction by assisting in planning, organizing, and reflection activities. Ultimately, this helps students construct better and more extensive e-portfolios. The implemented tools include 1) JobSkillSearcher, which helps students recognize the skill demands of the ICT industry; 2) WebTUTOR, which supports students' personal study planning; 3) Learning Styles, which determines students' learning styles; and 4) MyPeerReview, which provides a platform for carrying out anonymous peer review processes in courses.
The most visible outcome concerning the e-portfolio is its representation: it can be used to demonstrate personal achievements when seeking and gaining employment. Testing the tools together with the selected open-source e-portfolio application indicates that the richness of e-portfolio content can be increased by using the implemented applications.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative for simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. 
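The nested event structure described above can be illustrated with a small data model. The following is a minimal sketch, not TEES's actual annotation format; the class and role names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A typed event with a trigger word and typed arguments.

    An argument is a (role, value) pair, where the value is either an
    entity name or another Event, which allows nesting such as
    CAUSE(A, BIND(B, C)).
    """
    type: str
    trigger: str
    args: list = field(default_factory=list)

    def flatten(self) -> str:
        """Render the event as a nested functional expression."""
        parts = [a.flatten() if isinstance(a, Event) else a
                 for _, a in self.args]
        return f"{self.type}({', '.join(parts)})"

# "Protein A causes protein B to bind protein C"
bind = Event("BIND", trigger="bind", args=[("Theme", "B"), ("Theme2", "C")])
cause = Event("CAUSE", trigger="causes", args=[("Cause", "A"), ("Theme", bind)])
print(cause.flatten())  # CAUSE(A, BIND(B, C))
```

Converting sentences into such formal structures is what makes the extracted information usable by downstream computational applications.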
We show that this event extraction system performs well, achieving first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, and has shown competitive performance in the binary-relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail and making the method available for practical applications. In particular, this thesis describes the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is also generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work, and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first introduces the analysis of dependency parses that led to the development of TEES. The entries in the three BioNLP Shared Tasks and in the DDIExtraction 2011 task are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
Abstract:
This study examines supply chain management problems in practice and the reduction of demand information distortion (the bullwhip effect) with an interfirm information system, delivered as a cloud service to a company operating in the telecommunications industry. The purpose is to shed light, in practice, on whether the interfirm information system has an impact on supply chain performance and, in particular, on the reduction of the bullwhip effect. In addition, a holistic case study of the global telecommunications company's supply chain and the challenges it faces is presented, and some measures to improve the situation are proposed. The theoretical part covers the supply chain and its management, as well as ways of increasing its efficiency, and introduces the related theories and previous research. In addition, the study presents performance metrics for detecting and tracking the bullwhip effect. The theoretical part ends by presenting the cloud-based business intelligence framework used as the background of this study. The research strategy is a qualitative case study, supported by quantitative data collected from the telecommunications company's databases. Qualitative data were gathered mainly through two open interviews and e-mail exchanges during the development project. Other materials from the company were also collected during the project, and the company's web site was used as an additional source. The data were collected into a dedicated case study database in order to increase reliability. The results show that the bullwhip effect can be reduced with the interfirm information system and with the use of the CPFR and S&OP models, in particular by combining them into integrated business planning.
According to this study, however, the interfirm information system does not solve all supply chain and effectiveness-related problems, because the company's processes and human activities also have a major impact.
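A common way to quantify the bullwhip effect is the ratio of the variance of orders placed upstream to the variance of customer demand; a ratio above 1 indicates amplification. This is one plausible metric, sketched here for illustration; the thesis's exact metrics are not reproduced:

```python
import statistics

def bullwhip_ratio(orders, demand):
    """Variance amplification of upstream orders relative to demand.

    A value > 1 means demand variability is amplified along the
    supply chain, i.e. the bullwhip effect is present.
    """
    return statistics.pvariance(orders) / statistics.pvariance(demand)

demand = [100, 102, 98, 101, 99, 103, 97, 100]       # fairly stable demand
orders = [100, 110, 85, 108, 92, 115, 80, 105]       # overreacting replenishment
print(bullwhip_ratio(orders, demand))                # well above 1: amplification
```

Tracking this ratio over time at each supply chain tier is one way an interfirm information system can make the distortion visible.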
Abstract:
Innovative gas-cooled reactors, such as the pebble bed reactor (PBR) and the gas-cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas-cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open-source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Calculations were performed for the Russian ASTRA criticality experiments: pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air-cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared with the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k–ε turbulence model used. An additional calculation with a v2–f turbulence model showed a significant improvement in the heat transfer results, most likely due to the better performance of that model in separated-flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries.
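The cell-porosity computation can be illustrated with a simple Monte Carlo variant: sample random points in a mesh cell and count the fraction falling outside all pebbles. This is a sketch only; the thesis's MATLAB code uses its own geometric method, which is not reproduced here:

```python
import random

def cell_porosity(cell_min, cell_max, spheres, n=20000, seed=0):
    """Estimate the void fraction of an axis-aligned box cell.

    spheres: list of (center, radius) pairs, center an (x, y, z) tuple.
    Returns the fraction of sample points not inside any sphere.
    """
    rng = random.Random(seed)
    void = 0
    for _ in range(n):
        p = [rng.uniform(lo, hi) for lo, hi in zip(cell_min, cell_max)]
        inside = any(
            sum((pi - ci) ** 2 for pi, ci in zip(p, c)) <= r * r
            for c, r in spheres
        )
        void += not inside
    return void / n

# One pebble of radius 0.5 centred in a unit cell:
# analytic porosity = 1 - (4/3)*pi*0.5**3 ~ 0.476
spheres = [((0.5, 0.5, 0.5), 0.5)]
print(cell_porosity((0, 0, 0), (1, 1, 1), spheres))
```

Summing solid volume over all cells of the mesh and comparing it with the known total pebble volume gives a basic consistency check for such a porosity map.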
It is suggested that the viewpoints of numerical modelling be included in the planning of experiments, to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of experiments should also be considered and documented in reasonable detail.
Abstract:
Identification of low-dimensional structures and the main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
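The ridge and mode computations above rest on the gradient and Hessian of a Gaussian kernel density estimate. The following is a minimal one-dimensional sketch (a plain damped Newton iteration toward a density maximum, not the thesis's trust-region ridge-projection method):

```python
import math

def kde_terms(x, data, h):
    """Gaussian KDE value, first and second derivative at x (1-D)."""
    f = g = H = 0.0
    for xi in data:
        u = (x - xi) / h
        k = math.exp(-0.5 * u * u) / (h * math.sqrt(2 * math.pi))
        f += k
        g += -u / h * k                  # d/dx of one kernel term
        H += (u * u - 1) / (h * h) * k   # d2/dx2 of one kernel term
    n = len(data)
    return f / n, g / n, H / n

def find_mode(x0, data, h=0.5, steps=50):
    """Damped Newton iteration toward a local maximum of the KDE."""
    x = x0
    for _ in range(steps):
        _, g, H = kde_terms(x, data, h)
        step = 0.1 * g if H >= 0 else -g / H   # fall back to ascent if not concave
        step = max(-h, min(h, step))           # damp long steps far from the mode
        x += step
    return x

data = [0.9, 1.0, 1.1, 1.05, 0.95]
print(find_mode(0.5, data))  # converges to the cluster centre near 1.0
```

A ridge point of a multivariate density generalizes this: the gradient must vanish only in the directions of the Hessian's most negative eigenvalues, which is what makes the full problem a generalized maximum search.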
Abstract:
This Master's thesis investigated how to implement a replicable server system for distributing open public transport data. The study examined whether similar systems had been designed before, or whether the system would have to be designed from scratch. The project used the open-source OneBusAway software suite. The project demonstrated that this software worked well in test use at the university. The software can distribute both static and real-time data, and it can be replicated from one municipality to another worldwide. In the future, however, it would be worth investigating how the trip-planning feature missing from the software should be implemented, and whether the REST API could be modified to comply with public transport standards.