8 results for repository
in Helda - Digital Repository of University of Helsinki
Abstract:
A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements of the host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability in the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated to be suitable at the site scale. By using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters to be included in the HRC-system is based on an extensive study of the rock mass properties and their various influences on the long-term safety, the constructability and the layout and location of the repository. The parameters proposed for the classification at the repository scale include fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed at the tunnel scale include hydraulic conductivity, Q´ and fracture zones, and the parameters proposed at the canister scale include hydraulic conductivity, Q´, fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values are used to determine the suitability classes for the volumes of rock to be classified. The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is thereby one possible design tool that aids in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock. The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto.
Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
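The abstract gives only the classification parameters per scale and the number of suitability classes; the following minimal Python sketch simply organizes that structure. The class labels and the classify() stub are hypothetical placeholders, since the actual classification criteria are defined in the thesis itself.

    # Sketch of the HRC-system structure as described in the abstract.
    # Parameter lists per scale are taken from the abstract; the suitability
    # class labels and the classify() stub are illustrative placeholders.

    HRC_PARAMETERS = {
        "repository": ["fracture zones", "strength/stress ratio",
                       "hydraulic conductivity", "Groundwater Chemistry Index"],
        "tunnel":     ["hydraulic conductivity", "Q'", "fracture zones"],
        "canister":   ["hydraulic conductivity", "Q'", "fracture zones",
                       "fracture width (aperture + filling)",
                       "fracture trace length"],
    }

    # Four suitability classes at the repository and tunnel scales,
    # three at the canister scale (labels here are illustrative only).
    SUITABILITY_CLASSES = {
        "repository": ["I", "II", "III", "IV"],
        "tunnel":     ["I", "II", "III", "IV"],
        "canister":   ["I", "II", "III"],
    }

    def classify(scale, parameter_values):
        """Hypothetical stub: map measured parameter values to a suitability
        class at the given scale. The real, site-specific criteria are part of
        the HRC-system and are not reproduced here."""
        raise NotImplementedError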
Abstract:
Olkiluoto Island is situated in the northern Baltic Sea, near the southwestern coast of Finland, and is the proposed location of a spent nuclear fuel repository. This study examined Holocene palaeoseismicity in the Olkiluoto area and in the surrounding sea areas using computer simulations together with acoustic-seismic, sedimentological and dating methods. The most abundant rock type on the island is migmatitic mica gneiss, intruded by tonalites, granodiorites and granites. The surrounding Baltic Sea seabed consists of Palaeoproterozoic crystalline bedrock, which is to a great extent covered by younger Mesoproterozoic sedimentary rocks. The area contains several ancient deep-seated fracture zones that divide it into bedrock blocks. The response of the bedrock at the Olkiluoto site was modelled for four future ice-age scenarios. Each scenario produced shear displacements of fractures with different times of occurrence and varying recovery rates. Generally, the larger the maximum ice load, the larger were the permanent shear displacements. For a basic case, the maximum shear displacements were a few centimetres at the proposed nuclear waste repository level, at approximately 500 m b.s.l. High-resolution, low-frequency echo-sounding was used to examine the Holocene submarine sedimentary structures and possible direct and indirect indicators of palaeoseismic activity in the northern Baltic Sea. Echo-sounding profiles of Holocene submarine sediments revealed slides and slumps, normal faults, debris flows and turbidite-type structures. The profiles also showed pockmarks and other structures related to gas or groundwater seepages, which might be related to fracture zone activation. Evidence of postglacial reactivation in the study area was derived from the spatial occurrence of some of the structures, especially the faults and the seepages, in the vicinity of some old bedrock fracture zones. Palaeoseismic event(s) (a single event or several events) in the Olkiluoto area were dated and the palaeoenvironment was characterized using palaeomagnetic, biostratigraphical and lithostratigraphical methods, enhancing the reliability of the chronology. Combined lithostratigraphy, biostratigraphy and palaeomagnetic stratigraphy yielded an age estimate of 10 650 to 10 200 cal. years BP for the palaeoseismic event(s). All Holocene sediment faults in the northern Baltic Sea occur at the same stratigraphical level, the age of which is estimated at 10 700 cal. years BP (9500 radiocarbon years BP). Their movement is suggested to have been triggered by palaeoseismic event(s) when the Late Weichselian ice sheet was retreating from the site and bedrock stresses were released along the bedrock fracture zones. The absence of younger or repeated traces of seismic events corroborates the suggestion that the major seismic activity occurred within a short time during and after the last deglaciation. The origin of the gas/groundwater seepages remains unclear. Their reflections in the echo-sounding profiles imply that part of the gas is derived from the organic-bearing Litorina and modern gyttja clays. However, at least some of the gas is derived from the bedrock. Additional information could be gained by pore water analysis from the pockmarks. Information on postglacial fault activation and possible gas and/or fluid discharges under high hydraulic heads is relevant to the safety assessment of the planned spent nuclear fuel repository in the region.
Abstract:
Mass spectrometry (MS) has become a standard tool for identifying metabolites in biological tissues, and metabolomics is slowly being acknowledged as a legitimate research discipline for characterizing biological conditions. The computational analysis of metabolomics, however, lags behind the rapid advances in the analytical methods for two reasons. The first is the lack of a standardized data repository for mass spectra: each research institution is flooded with gigabytes of mass-spectral data from its own analytical groups and cannot host a world-class repository for mass spectra. The second is the lack of informatics experts who are fully experienced with spectral analyses. These two barriers must be overcome to establish a publicly free data server for MS analysis in metabolomics, as GenBank does for genomics and UniProt for proteomics. The workshop brought together bioinformaticians working on mass-spectral analyses in Finland and Japan with the goal of establishing a consortium to freely exchange and publicize (i) mass spectra of metabolites measured on various platforms, (ii) computational tools to analyze spectra, and (iii) spectral knowledge that is computationally predicted from standardized data. This book contains the abstracts of the presentations given at the workshop. The programme of the workshop consisted of oral presentations from Japan and Finland, invited lectures from Steffen Neumann (Leibniz Institute of Plant Biochemistry), Matej Oresic (VTT), Merja Penttila (VTT) and Nicola Zamboni (ETH Zurich), as well as free-form discussion among the participants. The event was funded by the Academy of Finland (grants 139203 and 118653), the Japan Society for the Promotion of Science (JSPS Japan-Finland Bilateral Seminar Program 2010) and the Department of Computer Science, University of Helsinki. We would like to thank all the people contributing to the technical programme and the sponsors for making the workshop possible. Helsinki, October 2010. Masanori Arita, Markus Heinonen and Juho Rousu
Abstract:
Introduction. We estimate the total yearly volume of peer-reviewed scientific journal articles published world-wide, as well as the share of these articles available openly on the Web, either directly or as copies in e-print repositories. Method. We rely on data from two commercial databases (ISI and Ulrich's Periodicals Directory) supplemented by sampling and Google searches. Analysis. A central issue is the finding that ISI-indexed journals publish far more articles per year (111) than non-ISI-indexed journals (26), which means that the total figure we obtain is much lower than many earlier estimates. Our method of analysing the number of repository copies (green open access) differs from several earlier studies, which have studied the number of copies in identified repositories, since we start from a random sample of articles and then test whether copies can be found by a Web search engine. Results. We estimate that in 2006 the total number of articles published was approximately 1,350,000. Of this number, 4.6% became immediately openly available and an additional 3.5% after an embargo period of, typically, one year. Furthermore, usable copies of 11.3% could be found in subject-specific or institutional repositories or on the home pages of the authors. Conclusions. We believe our results are the most reliable so far published and, therefore, should be useful in the ongoing debate about Open Access among both academics and science policy makers. The method is replicable and also lends itself to longitudinal studies in the future.
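As a quick illustration of the reported shares, the short Python snippet below (not from the article) converts the percentages in the abstract into approximate article counts for 2006, treating the three categories as non-overlapping, as the abstract's wording suggests.

    # Approximate 2006 article counts implied by the abstract's figures.
    total_articles = 1_350_000

    oa_immediate = total_articles * 0.046   # immediately openly available   ~62,100
    oa_delayed   = total_articles * 0.035   # open after an embargo period   ~47,250
    green_copies = total_articles * 0.113   # usable copies in repositories
                                            # or on authors' home pages      ~152,550

    open_share = 0.046 + 0.035 + 0.113      # ~19.4% of all articles in some open form
    print(round(oa_immediate), round(oa_delayed), round(green_copies), open_share)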
Abstract:
The tension created when companies collaborate with competitors – sometimes termed co-opetition – has been the subject of research within the network approach. As companies collaborate with competitors, they need simultaneously to share and protect knowledge. The opportunistic behavior and learning intent of the partner may be underestimated, and collaboration may involve significant risks of loss of competitive edge. Contrastingly, the central tenet within the Intellectual Capital approach is that knowledge grows as it flows: the person sharing does not lose the knowledge, and therefore the knowledge has doubled from a company's point of view. Value is created through the interplay of knowledge flows between and within three forms of intellectual capital: human, structural and relational capital. These are the points of departure for the research conducted in this thesis. The thesis investigates the tension between collaboration and competition through an Intellectual Capital lens, by identifying the actions taken to share and protect knowledge in interorganizational collaborative relationships. More specifically, it explores the tension in knowledge flows aimed at protecting and sharing knowledge, and their effect on the value creation of a company. It is assumed that, as two companies work closely together, the collaborative relationship becomes intertwined between the two partners and the intellectual capital flows of both companies are affected. The research finds that companies commonly protect knowledge even in close and long-term collaborative relationships. The knowledge flows identified are both collaborative and protective, with the result that they sometimes counteract and neutralize each other. The thesis contributes to the intellectual capital approach by expanding the understanding of knowledge protection in interorganizational relationships in three ways. First, departing from the research on co-opetition, it shifts the focus from the internal view of the company as a repository of intellectual capital onto the collaborative relationships between competing companies. Second, instead of the traditional collaborative and sharing point of departure, it takes a competitive and protective perspective. Third, it identifies the intellectual capital flows as assets or liabilities depending on their effect on the value creation of the company. The actions taken to protect knowledge in an interorganizational relationship may decrease the value created in the company, which would make them liabilities.
Abstract:
We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised so as to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve accuracy at least as good as that of the best of the compared classifiers, while using significantly fewer computational resources.
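For context (standard background on Bayesian network classifiers, not taken from the abstract): for a network B over a class variable C and attributes X_1,...,X_n, the log-likelihood decomposes into per-node terms, whereas the conditional log-likelihood does not, which is why a decomposable approximation such as f̂CLL is attractive. In LaTeX notation:

    % Log-likelihood of network B on data D decomposes over the nodes Z_0 = C, Z_i = X_i:
    LL(B \mid D) = \sum_{d \in D} \log P_B\bigl(c^{(d)}, x_1^{(d)}, \dots, x_n^{(d)}\bigr)
                 = \sum_{i} \sum_{d \in D} \log P_B\bigl(z_i^{(d)} \mid \mathrm{pa}(Z_i)^{(d)}\bigr).
    %
    % The conditional log-likelihood targets classification directly but does not decompose,
    % because the attribute marginal \log P_B(x^{(d)}) couples all nodes:
    CLL(B \mid D) = \sum_{d \in D} \log P_B\bigl(c^{(d)} \mid x^{(d)}\bigr)
                  = \sum_{d \in D} \Bigl[\log P_B\bigl(c^{(d)}, x^{(d)}\bigr) - \log P_B\bigl(x^{(d)}\bigr)\Bigr].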
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
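To make the decompression rule concrete, here is a minimal Python sketch (mine, not from the paper) that reconstructs approximate edge weights from a supernode mapping and superedge weights, and measures the approximation error on the original edges. The mean-absolute-difference error used here is an illustrative choice, not necessarily the objective the paper optimizes, and the compression step itself (how supernodes are chosen) is not sketched.

    # Decompression of a compressed weighted graph, following the abstract:
    # two original nodes are connected iff their supernodes are connected by a
    # superedge, and the edge weight is approximated by the superedge weight.

    def decompress_weight(u, v, supernode_of, superedge_weight):
        """Approximated weight between original nodes u and v,
        or None if their supernodes are not connected."""
        su, sv = supernode_of[u], supernode_of[v]
        return superedge_weight.get(frozenset((su, sv)))

    def approximation_error(original_edges, supernode_of, superedge_weight):
        """Mean absolute difference between original and reconstructed weights
        (illustrative error measure; missing superedges count as weight 0)."""
        total = 0.0
        for (u, v), w in original_edges.items():
            approx = decompress_weight(u, v, supernode_of, superedge_weight) or 0.0
            total += abs(w - approx)
        return total / len(original_edges)

    # Tiny usage example with hypothetical data:
    original_edges = {("a", "b"): 1.0, ("a", "c"): 0.8, ("b", "c"): 0.9}
    supernode_of = {"a": "S1", "b": "S2", "c": "S2"}           # group b and c into S2
    superedge_weight = {frozenset(("S1", "S2")): 0.9,          # approximates a-b and a-c
                        frozenset(("S2",)): 0.9}               # self-superedge: edges inside S2
    print(approximation_error(original_edges, supernode_of, superedge_weight))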