986 results for Metadata repository


Relevance: 10.00%

Abstract:

Marja-Liisa Seppälä's presentation at the Kirjastoverkkopäivät (Library Network Days) in Helsinki on 26 October 2011.

Relevance: 10.00%

Abstract:

The purpose of this Master's thesis was to identify areas for improvement in the conventional, i.e. non-nuclear, waste management of Fortum's Loviisa nuclear power plant. The particular goals were to find ways to reduce the amount of landfill waste and to make waste sorting more effective. The effects of the overall reform of Finnish waste legislation on waste management operations also played a central role in the work. The work followed the structure of a waste management plan, which consists of an initial survey followed by the preparation and implementation of the plan. The methods used to identify development needs were a review of regulatory requirements, an examination of current operations, a waste management survey for power plant employees, benchmarking, and a cost comparison of selected recovery and final disposal methods. The results showed that waste sorting could be improved above all by increasing employee training. To make sorting easier, instructions should be made more readily available both to the plant's own personnel and to contractors. In hazardous waste management, most problems occurred in the labelling of hazardous waste packages at the points where the waste is generated. As a solution, the thesis proposed trialling waste cards placed at the points of waste generation, from which packers could easily check the required markings. The collection of black waste oil could also be improved, so that a larger share of it could be recovered as material. To reduce the amount of landfill waste, the thesis proposed sending mixed waste to incineration instead of landfill. As a result of this change, the plant's waste management costs may increase, but the change would be positive for the environment.

Relevance: 10.00%

Abstract:

The large biodiversity of cyanobacteria, together with the increasing genomics and proteomics metadata, provides novel information for finding new commercially valuable metabolites. With the advent of global warming, there is growing interest in processes that result in efficient CO2 capture through the use of photosynthetic microorganisms such as cyanobacteria. This requires detailed knowledge of how cyanobacteria respond to ambient CO2. My study was aimed at understanding the changes in the protein profile of the model organism Synechocystis PCC 6803 in response to varying CO2 levels. In order to achieve this goal I employed modern proteomics tools such as iTRAQ and DIGE, recombinant DNA techniques to construct different mutants in cyanobacteria, and biophysical methods to study the photosynthetic properties. The proteomics study revealed several novel proteins, apart from the well-characterized proteins involved in carbon concentrating mechanisms (CCMs), that were upregulated upon shift of the cells from a high CO2 concentration (3%) to air level (0.039%). The unknown proteins Slr0006 and the flavodiiron proteins (FDPs) Sll0217-Flv4 and Sll0219-Flv2 were selected for further characterization. Although slr0006 was substantially upregulated under Ci-limiting conditions, inactivation of the gene did not result in any visible phenotype under various environmental conditions, indicating that this protein is not essential for cell survival. However, quantitative proteomics showed the induction of novel plasmid- and chromosome-encoded proteins in the Δslr0006 mutant under air-level CO2 conditions. The expression of the slr0006 gene was found to be strictly dependent on active photosynthetic electron transfer. Slr0006 contains a conserved dsRNA-binding domain that belongs to the Sua5/YrdC/YciO protein family. Structural modelling of Slr0006 showed an alpha/beta twisted open-sheet structure and a positively charged cavity, indicating a possible binding site for RNA.
The 3D model and the co-localization of Slr0006 with ribosomal subunits suggest that it might play a role in translation or ribosome biogenesis. On the other hand, deletions in the sll0217-sll0218-sll0219 operon resulted in enhanced photodamage of PSII and distorted energy transfer from the phycobilisome (PBS) to PSII, suggesting a dynamic photoprotective role for the operon. Constructed homology models also suggest efficient electron transfer in the heterodimeric Flv2/Flv4, apparently involved in PSII photoprotection. Both Slr0006 and the FDPs exhibited several common features, including negative regulation by NdhR and ambiguous cellular localization when subjected to different concentrations of divalent ions. This strong association with the membranes remained undisturbed even in the presence of detergent or high salt. My findings provide ample information on three novel proteins and their functions under carbon limitation. Nevertheless, many pathways and related proteins remain unexplored. A comprehensive understanding of the acclimation processes of cyanobacteria to varying environmental CO2 levels will help to uncover adaptive mechanisms in other organisms, including higher plants.

Relevance: 10.00%

Abstract:

The large and growing number of digital images is making manual image search laborious. Only a fraction of the images contain metadata that can be used to search for a particular type of image. Thus, the main research question of this thesis is whether it is possible to learn visual object categories directly from images. Computers process images as long lists of pixels that have no clear connection to the high-level semantics that could be used in image search. Various methods have been introduced in the literature to extract low-level image features, as well as approaches to connect these low-level features with high-level semantics. One of these approaches, studied in this thesis, is called Bag-of-Features. In the Bag-of-Features approach, images are described using a visual codebook. The codebook is built by clustering the descriptions of image patches. An image is then described by matching the descriptions of its patches against the visual codebook and counting the number of matches for each code. In this thesis, unsupervised visual object categorisation using the Bag-of-Features approach is studied. The goal is to find groups of similar images, e.g., images that contain an object from the same category. The standard Bag-of-Features approach is improved by using spatial information and visual saliency. It was found that the performance of visual object categorisation can be improved by using the spatial information of local features to verify the matches. However, this process is computationally heavy, and thus the number of images considered in the spatial matching must be limited, for example by using the Bag-of-Features method as in this study. Different approaches for saliency detection are studied and a new method based on the Hessian-Affine local feature detector is proposed. The new method achieves results comparable with the current state of the art.
The visual object categorisation performance was improved by using foreground segmentation based on saliency information, especially when the background could be considered clutter.
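The core Bag-of-Features step described above (matching patch descriptors against a codebook and counting hits per code) can be sketched in a few lines. This is a minimal illustration with made-up two-dimensional "descriptors" and a three-word codebook, not the thesis's actual pipeline, which would use real local feature descriptors and a clustered codebook.

```python
import math

def nearest_code(descriptor, codebook):
    """Index of the codebook vector closest to the descriptor (Euclidean)."""
    best, best_dist = 0, float("inf")
    for i, code in enumerate(codebook):
        d = math.dist(descriptor, code)
        if d < best_dist:
            best, best_dist = i, d
    return best

def bag_of_features(descriptors, codebook):
    """Describe an image as a histogram of codebook matches."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[nearest_code(desc, codebook)] += 1
    return hist

# Toy 2-D "descriptors" and a 3-word visual codebook (hypothetical values).
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
descriptors = [(0.1, 0.2), (0.9, 1.1), (5.2, 4.8), (4.9, 5.1)]
print(bag_of_features(descriptors, codebook))  # -> [1, 1, 2]
```

Two images can then be compared by comparing their histograms, which is what makes the representation suitable for unsupervised categorisation.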

Relevance: 10.00%

Abstract:

After decades of mergers and acquisitions and successive technology trends such as CRM, ERP, and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered "closed" and expensive. Data structures are complex, and the "out-of-the-box" integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to the needs of business processes. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business processes and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the "single version of the truth" MDM mantra. Adding one central repository of master data might prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details of the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM.
The most difficult piece of master data to manage is the "local" part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with one possible tool to evaluate its product from a vendor-agnostic perspective.
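The "single version of the truth" idea can be illustrated with a toy survivorship rule: when the same business entity exists in several systems, build one golden record per key, letting the most recently updated source win per attribute. This is a minimal sketch with hypothetical customer records, not the evaluated MDM product; real MDM solutions add matching, governance workflows, and per-attribute trust rules.

```python
def consolidate(records):
    """Build a 'golden record' per business key: for each attribute,
    the value from the most recently updated source record survives."""
    golden = {}
    for rec in sorted(records, key=lambda r: r["updated"]):
        entry = golden.setdefault(rec["key"], {})
        for field, value in rec.items():
            if field not in ("key", "updated") and value is not None:
                entry[field] = value  # later (newer) records overwrite older ones
    return golden

# Hypothetical customer master data coming from two systems (ERP and CRM).
records = [
    {"key": "C001", "updated": "2013-01-10", "name": "Acme Oy", "city": "Espoo"},
    {"key": "C001", "updated": "2013-06-02", "name": "Acme Corporation", "city": None},
]
print(consolidate(records))
# -> {'C001': {'name': 'Acme Corporation', 'city': 'Espoo'}}
```

Note how the older record still contributes the `city` attribute that the newer record lacks; this is exactly the kind of attribute-level merging the "local part" of master data makes hard in practice.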

Relevance: 10.00%

Abstract:

Liisa Savolainen's presentation at the Kuvailun tiedotuspäivät (description information days) in Helsinki on 20 March 2013.

Relevance: 10.00%

Abstract:

Markku Heinäsenaho's presentation at the expert seminar of the library network services of the National Library of Finland (Kansalliskirjasto) in Helsinki on 23 April 2013.

Relevance: 10.00%

Abstract:

Magaly Bascones's presentation at the Kirjastoverkkopäivät (Library Network Days) in Helsinki on 24 October 2013.

Relevance: 10.00%

Abstract:

Mirjam Kessler's presentation at the Kirjastoverkkopäivät (Library Network Days) in Helsinki on 24 October 2013.

Relevance: 10.00%

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs in a programming language as a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems the need arises for mechanisms that analyze these designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first proposes translations of UML-based designs into a language understandable by reasoners, in the form of logic facts, and second shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically checking hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
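The two steps of the approach described above, translating a design into logic facts and then inferring consequences from them, can be sketched on a tiny example. This is an illustrative sketch only: the fact syntax, the dictionary model format, and the single consistency check (cyclic inheritance) are hypothetical simplifications, not the thesis's actual translation rules or reasoner.

```python
def to_facts(model):
    """Translate a UML class diagram (given as a dict) into logic facts."""
    facts = [f"class({cls})" for cls in model["classes"]]
    for sub, sup in model["generalizations"]:
        facts.append(f"generalization({sub}, {sup})")
    return facts

def has_cyclic_inheritance(model):
    """A reasoner-style consequence check: is any class its own ancestor?"""
    parents = {}
    for sub, sup in model["generalizations"]:
        parents.setdefault(sub, set()).add(sup)

    def reaches_self(cls, seen):
        for parent in parents.get(cls, ()):
            if parent == cls or parent in seen:
                return True
            if reaches_self(parent, seen + (cls,)):
                return True
        return False

    return any(reaches_self(cls, ()) for cls in model["classes"])

# A hypothetical two-class design: Car specializes Vehicle.
model = {"classes": ["Vehicle", "Car"], "generalizations": [("Car", "Vehicle")]}
print(to_facts(model))
# -> ['class(Vehicle)', 'class(Car)', 'generalization(Car, Vehicle)']
print(has_cyclic_inheritance(model))  # -> False
```

A real reasoner would take the emitted facts plus general rules (e.g., transitivity of generalization) and derive such inconsistencies automatically; the point here is only the shape of the translation.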

Relevance: 10.00%

Abstract:

Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see if the current computing resources are adequate for the video requests. Therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them.
This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at group of pictures level.
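The computation/storage trade-off described above amounts to a simple break-even rule: keep a transcoded video cached while the expected cost of re-transcoding it exceeds the cost of storing it. The following sketch uses hypothetical prices and a per-month horizon; the thesis's actual strategy and cost model may differ.

```python
def keep_in_storage(size_gb, monthly_requests,
                    storage_cost_per_gb_month, transcode_cost_per_video):
    """Keep a transcoded video cached while the expected monthly cost of
    re-transcoding it on demand exceeds the cost of storing it for a month."""
    storage_cost = size_gb * storage_cost_per_gb_month
    retranscode_cost = monthly_requests * transcode_cost_per_video
    return retranscode_cost > storage_cost

# Hypothetical prices: storage $0.10/GB-month, one transcoding run $0.05 of compute.
print(keep_in_storage(2.0, 10, 0.10, 0.05))  # popular video -> True (keep it)
print(keep_in_storage(2.0, 1, 0.10, 0.05))   # rarely accessed -> False (evict)
```

The same comparison, evaluated periodically per video with an estimated request rate, yields the behaviour described in the abstract: popular videos stay in the repository, unpopular ones are evicted and re-transcoded on demand.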

Relevance: 10.00%

Abstract:

This bachelor's thesis, written for Lappeenranta University of Technology and implemented in a medium-sized enterprise (SME), examines a distributed document migration system. The system was created to migrate a large number of electronic documents, along with their metadata, from one document management system to another, so as to enable a rapid switchover of the enterprise resource planning system inside the company. The thesis examines, through theoretical analysis, messaging as a possible enabler of distributed applications and how it naturally fits an event-based model, whereby system transitions and states are expressed through recorded behaviours. This is put into practice by analysing the implemented migration system and how its core components, MassTransit, RabbitMQ and MongoDB, were orchestrated together to realize such a system. As a result, the thesis presents an architecture for a scalable and distributed system that can migrate hundreds of thousands of documents over a weekend, serving its goal of enabling a rapid system switchover.
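The message-driven distribution pattern described above (a producer publishes one "migrate this document" message per document, and a pool of consumers processes them in parallel) can be sketched with the standard library. This is a stand-in for the MassTransit/RabbitMQ setup, using an in-process queue and threads; the names and the trivial "migration" step are hypothetical.

```python
import queue
import threading

def migrate(documents, workers=4):
    """Distribute document migration over a message queue: one message per
    document, processed by a pool of consumer workers."""
    bus = queue.Queue()
    migrated = []
    lock = threading.Lock()

    def consumer():
        while True:
            doc = bus.get()
            if doc is None:           # shutdown message: stop this worker
                bus.task_done()
                return
            with lock:
                migrated.append(doc)  # stand-in for copying doc + metadata
            bus.task_done()

    threads = [threading.Thread(target=consumer) for _ in range(workers)]
    for t in threads:
        t.start()
    for doc in documents:             # producer publishes one event per document
        bus.put(doc)
    for _ in threads:                 # one shutdown message per worker
        bus.put(None)
    for t in threads:
        t.join()
    return sorted(migrated)

print(migrate([f"doc-{i}" for i in range(10)], workers=3))
```

With a real broker such as RabbitMQ, the queue survives process restarts and the consumers can run on separate machines, which is what makes the pattern scale to hundreds of thousands of documents.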

Relevance: 10.00%

Abstract:

Lassi Lager's presentation at the ARTIVA seminar in Helsinki on 5 February 2014.