938 results for Optimistic data replication system
Abstract:
Especially in global enterprises, key data is fragmented across multiple Enterprise Resource Planning (ERP) systems, leaving it inconsistent, fragmented, and redundant across the various systems. Master Data Management (MDM) is a concept that creates cross-references between customers, suppliers, and business units, and enables corporate hierarchies and structures. The overall goal of MDM is the ability to create an enterprise-wide consistent data model, which enables analyzing and reporting customer and supplier data. The goal of the study was to define the properties and success factors of a master data system. The theoretical background was based on the literature, and the case consisted of enterprise-specific needs and demands. The theoretical part presents the concept, background, and principles of MDM, followed by the phases of a system planning and implementation project. The case part consists of the background, a definition of the as-is situation, a definition of the project, evaluation criteria, and the key results of the thesis. The concluding chapter combines common principles with the results of the case. The case part divided the important factors of the system into success factors, technical requirements, and business benefits. To justify and fund the project, business benefits have to be defined and their realization has to be monitored. The thesis identified six success factors for the MDM system: a well-defined business case; data management and monitoring; data models and structures defined and maintained; customer and supplier data governance, delivery, and quality; commitment; and continuous communication with the business. Technical requirements emerged several times during the thesis and therefore cannot be ignored in the project. The conclusions chapter goes through these factors on a general level. The success factors and technical requirements relate to the essentials of MDM: governance, action, and quality. This chapter could be used as guidance in a master data management project.
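As a hedged illustration of the cross-referencing idea, the minimal Python sketch below links one customer's records from two hypothetical ERP systems under a single enterprise-wide master record; the system names, fields, and survivorship rule are illustrative assumptions, not taken from the thesis.

```python
# A minimal sketch of an MDM cross-reference ("golden record"). The ERP
# system names, fields, and survivorship rule are illustrative assumptions,
# not taken from the thesis.
from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    system: str    # source ERP, e.g. "ERP-EU"
    local_id: str  # the customer's key inside that ERP
    name: str

@dataclass
class GoldenRecord:
    master_id: str                              # enterprise-wide key
    sources: list = field(default_factory=list) # cross-referenced records

    def surviving_name(self) -> str:
        # trivial survivorship rule for illustration: longest name wins
        return max((s.name for s in self.sources), key=len, default="")

customer = GoldenRecord("CUST-0001", [
    SourceRecord("ERP-EU", "4711", "ACME Oy"),
    SourceRecord("ERP-US", "C-93", "ACME Corporation"),
])
print(customer.surviving_name())  # -> ACME Corporation
```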
Abstract:
OBJECTIVE: to evaluate the accuracy of mammography for the diagnosis of suspicious breast microcalcifications, using the Breast Imaging Reporting and Data System (BI-RADS™) and Le Gal classifications, compared against the histopathological result used as the gold standard. METHODS: 130 operated cases with mammograms containing only breast microcalcifications, initially classified as suspicious and without lesions detectable on clinical examination, were selected from the surgical specimen archives. These were reclassified by two examiners using the Le Gal and BI-RADS™ classifications, and a consensus diagnosis was obtained. The biopsies were reviewed by two pathologists and a consensus diagnosis was obtained. The mammogram readings and slide reviews were performed double-blind. The statistical analyses used in this study were the chi-square test, the Fleiss quadratic model for PPV, and the Epi-Info 6.0 software. RESULTS: the correlation between the histopathological and mammographic analyses, using BI-RADS™ and Le Gal, showed the same sensitivity of 96.4%, specificity of 55.9% and 30.3%, positive predictive value (PPV) of 37.5% and 27.5%, and accuracy of 64.6% and 44.6%, respectively. Broken down by BI-RADS™ category, the PPVs were: category 2, 0%; category 3, 1.8%; category 4, 31.6%; and category 5, 60%. The PPVs for the Le Gal classification were: category 2, 3.1%; category 3, 18.1%; category 4, 26.4%; category 5, 66.7%; and unclassifiable, 5.2%. CONCLUSIONS: greater precision was observed with the BI-RADS™ classification, but it did not reduce the ambiguity in the evaluation of breast microcalcifications.
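The reported figures follow from standard 2x2-table definitions; the Python sketch below computes them. The counts are illustrative, back-derived to reproduce the BI-RADS figures above for n = 130 cases, not copied from the study's tables.

```python
# Standard diagnostic metrics from a 2x2 confusion table. The counts below
# are back-derived from the BI-RADS results reported above (n = 130), not
# taken directly from the study's tables.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # positives among the malignant
        "specificity": tn / (tn + fp),  # negatives among the benign
        "ppv":         tp / (tp + fp),  # malignant among the positives
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# 96.4% sensitivity, 55.9% specificity, 37.5% PPV, 64.6% accuracy
print(diagnostic_metrics(tp=27, fp=45, fn=1, tn=57))
```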
Abstract:
This thesis describes the creation of a pipework data structure for design system integration. The work was carried out in a pulp and paper plant delivery company, with global engineering network operations in mind. A use case from process design to 3D pipework design is introduced, including the influence of subcontracted engineering offices. The company's data element list was gathered through key-person interviews, and the results were processed into a pipework data element list. Inter-company co-operation was carried out in a standardization association, and a common standard for pipework data elements was found. As a result, the inter-company pipework data element list is introduced. Further usage of the list, its development, and its relations to design software vendors are evaluated.
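To make the notion of a pipework data element concrete, here is a minimal sketch of one record in such a list; the attribute set (line identifier, nominal diameter, pressure class, material, insulation) is a plausible assumption, not the standardized element list produced in the thesis.

```python
# A minimal sketch of a single pipework data element record. The attribute
# set below is an illustrative assumption, not the standardized element
# list produced in the thesis.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PipeworkElement:
    line_id: str                      # pipe line identifier, e.g. "10-PL-001"
    nominal_diameter: str             # e.g. "DN100"
    pressure_class: str               # e.g. "PN16"
    material: str                     # e.g. "EN 1.4404" stainless steel
    insulation: Optional[str] = None  # insulation class, if any

line = PipeworkElement("10-PL-001", "DN100", "PN16", "EN 1.4404")
print(line)
```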
Abstract:
Open data refers to publishing data on the web in machine-readable formats for public access. Using open data, innovative applications can be developed to facilitate people's lives. In this thesis, based on the open data cases discussed in the literature review, Open Data Lappeenranta is suggested, which publishes open data related to the opening hours of shops and stores in the city of Lappeenranta. To prove the feasibility of Open Data Lappeenranta, this thesis presents the implementation of an open data system that publishes specific data about shops and stores (including their opening hours) on the web in a standard format (JSON). The published open data is used to develop web and mobile applications that demonstrate the benefits of open data in practice. The open data system also provides manual and automatic interfaces that make it possible for shops and stores to maintain their own data in the system. Finally, the thesis proposes a completed version of Open Data Lappeenranta that publishes open data related to other fields and businesses in Lappeenranta beyond store data alone.
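To make the published format concrete, here is a minimal Python sketch that serializes one shop record (with opening hours) to JSON; the field names and values are illustrative assumptions, not the schema actually used by Open Data Lappeenranta.

```python
# A minimal sketch of one machine-readable shop record with opening hours,
# serialized to JSON. Field names and values are illustrative assumptions,
# not the thesis's actual schema.
import json

shop = {
    "name": "Example Shop",
    "city": "Lappeenranta",
    "opening_hours": [
        {"days": "Mon-Fri", "open": "09:00", "close": "18:00"},
        {"days": "Sat",     "open": "10:00", "close": "16:00"},
    ],
}

print(json.dumps(shop, indent=2))  # consumed by the web and mobile clients
```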
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in recent years, though mainly from a technical stance, and there is a void in business-related cases. This thesis fills that gap by addressing big data challenges and failure cases. The Technology-Organization-Environment framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information system failures. A review of the extant literature was carried out using a collection of leading information systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both failure and success perspectives in order to build a model of big data failure. I then delineate the contribution of the information system failure literature, as it supplies the principal dynamics behind the Technology-Organization-Environment framework. The gathered literature was categorized, and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table are designed to act as a comprehensive starting point and as general guidance for academics, CIOs, or other system stakeholders, facilitating decision-making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction, and discontinued use.
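As a rough sketch only, the shape of such a failure model can be tabulated in code; the factor names below are invented placeholders, since the abstract does not list the thesis's actual failure constructs.

```python
# A rough sketch of the Technology-Organization-Environment tabulation.
# The factor names are invented placeholders; the abstract does not list
# the thesis's actual failure constructs.
failure_model = {
    "technological": ["data quality issues", "integration complexity"],
    "organizational": ["lack of analytics skills", "weak executive sponsorship"],
    "environmental": ["regulatory pressure", "vendor lock-in"],
}
outcomes = ["perceived benefits", "dissatisfaction", "discontinued use"]

for context, factors in failure_model.items():
    print(f"{context}: {', '.join(factors)}")
print("measured against:", ", ".join(outcomes))
```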
Abstract:
Product Data Management (PDM) systems have been utilized within companies since the 1980s, mainly by large companies. This thesis proceeds from the premise that small and medium-sized enterprises (SMEs) can also benefit from Product Data Management systems, and from the observation that existing PDM systems are either too expensive or do not properly meet the requirements SMEs have. The aim of this study is to investigate what kinds of requirements and special features SMEs operating in the Finnish manufacturing industry have towards Product Data Management, and to create a conceptual model that could fulfill the specified requirements. The research was carried out as a qualitative case study in which the research data was collected from ten Finnish manufacturing companies by interviewing their key personnel. The interview data was then processed into a generic set of information system requirements and the information system concept supporting it. The commercialization of the concept is studied from the perspective of system development. The aim was to create a conceptual model that would be economically feasible both for a company utilizing the system and for a company developing it. For this reason, the thesis sought ways to scale the system development effort across multiple simultaneous cases; the main methods found were platform-based thinking and a way to generalize, or abstract, the requirements of an information system. The results highlight the special features Finnish manufacturing SMEs have towards PDM, the most significant being the use of a project model to manage the order-to-delivery process. This differs significantly from the traditional concepts of Product Data Management presented in the literature. As a research result, this thesis presents a conceptual model of a PDM system that would be viable for the case companies interviewed during the research. As a by-product, it presents a model synthesized from the literature for abstracting information system requirements. In addition, the strategic importance and categorization of information systems within companies is discussed from the perspective of information system customizations.
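To illustrate the project-model finding, the sketch below keys product data by delivery project rather than by product alone; the class and field names are hypothetical, not the thesis's conceptual model.

```python
# A minimal sketch of product data keyed by delivery project, reflecting
# the order-to-delivery project model described above. Class and field
# names are hypothetical, not the thesis's conceptual model.
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    revision: str
    description: str

@dataclass
class DeliveryProject:
    project_id: str
    customer: str
    items: list = field(default_factory=list)  # items scoped to this delivery

project = DeliveryProject("P-2024-001", "Customer A")
project.items.append(Item("100234", "B", "Gearbox assembly"))
print(project)
```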
Abstract:
Human immunodeficiency virus type 1 (HIV-1), the etiological agent of AIDS, is a complex retrovirus carrying several accessory proteins: Nef, Vif, Vpr, and Vpu. These are involved in modulating viral replication, in immune evasion, and in the progression of AIDS pathogenesis. In this context, the viral protein R (Vpr) has been shown to induce cell cycle arrest in the G2 phase. The mechanism by which Vpr exerts this function is the ATR (Ataxia telangiectasia and Rad3 related)-dependent activation of the DNA damage checkpoint, but the factors and molecular mechanisms directly involved in this activity remain unknown. To identify new cellular factors interacting with Vpr, we used tandem affinity purification (TAP) to isolate native protein complexes containing Vpr. We found that Vpr associates with CRL4A(VprBP), a cellular E3 ubiquitin ligase complex comprising the proteins Cullin 4A, DDB1 (DNA damage-binding protein 1), and VprBP (Vpr-binding protein). Our studies showed that recruitment of the E3 ligase by Vpr was necessary but not sufficient for inducing G2 cell cycle arrest, suggesting that additional events are involved in this process. In this respect, we provide direct evidence that Vpr hijacks the functions of CRL4A(VprBP) to induce K48-linked polyubiquitination and proteasomal degradation of as-yet-unknown cellular proteins. These Vpr-induced ubiquitination events were shown to be necessary for ATR activation. We also show that Vpr forms chromatin-anchored foci co-localizing with VprBP as well as with factors involved in DNA repair; the formation of these foci represents an essential and early event in the induction of G2 cell cycle arrest. Finally, we demonstrate that Vpr is able to recruit CRL4A(VprBP) to chromatin, and we provide evidence indicating that the unknown substrate targeted by Vpr is a chromatin-associated protein. Overall, our results reveal some of the mechanisms by which Vpr induces cell cycle perturbations. Furthermore, this study contributes to our understanding of the modulation of the ubiquitin-proteasome system by HIV-1 and its functional role in manipulating the host cell environment.
Abstract:
Data assimilation – the set of techniques whereby information from observing systems and models is combined optimally – is rapidly becoming prominent in endeavours to exploit Earth Observation for Earth sciences, including climate prediction. This paper explains the broad principles of data assimilation, outlining different approaches (optimal interpolation, three-dimensional and four-dimensional variational methods, the Kalman Filter), together with the approximations that are often necessary to make them practicable. After pointing out a variety of benefits of data assimilation, the paper then outlines some practical applications of the exploitation of Earth Observation by data assimilation in the areas of operational oceanography, chemical weather forecasting and carbon cycle modelling. Finally, some challenges for the future are noted.
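For concreteness, the variational methods mentioned above minimize a cost function that balances a background (model) state against the observations; in the standard 3D-Var formulation:

```latex
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
  + \tfrac{1}{2}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

where x_b is the background state, y the observations, H the observation operator, and B and R the background and observation error covariance matrices. 4D-Var extends the observation term over a time window, while the Kalman Filter additionally evolves B forward in time.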
Abstract:
The long-term stability, high accuracy, all-weather capability, high vertical resolution, and global coverage of Global Navigation Satellite System (GNSS) radio occultation (RO) suggest it as a promising tool for global monitoring of atmospheric temperature change. With the aim of investigating and quantifying how well a GNSS RO observing system is able to detect climate trends, we are currently performing a (climate) observing system simulation experiment over the 25-year period 2001 to 2025, which involves quasi-realistic modeling of the neutral atmosphere and the ionosphere. We carried out two climate simulations with the general circulation model MAECHAM5 (Middle Atmosphere European Centre/Hamburg Model Version 5) of the MPI-M Hamburg, covering the period 2001–2025: one control run with natural variability only, and one run also including anthropogenic forcings due to greenhouse gases, sulfate aerosols, and tropospheric ozone. On this basis, we perform quasi-realistic simulations of RO observables for a small GNSS receiver constellation (six satellites), state-of-the-art data processing for atmospheric profile retrieval, and a statistical analysis of temperature trends in both the “observed” climatology and the “true” climatology. Here we describe the setup of the experiment and results from a test bed study conducted to obtain a basic set of realistic estimates of observational errors (instrument- and retrieval-processing-related errors) and sampling errors (due to spatial-temporal undersampling). The test bed results, obtained for a typical summer season and compared to the climatic 2001–2025 trends from the MAECHAM5 simulation including anthropogenic forcing, were found encouraging for performing the full 25-year experiment. They indicated that observational and sampling errors (each contributing about 0.2 K) are consistent with recent estimates of these errors from real RO data, and that they should be sufficiently small for monitoring expected temperature trends in the global atmosphere over the next 10 to 20 years in most regions of the upper troposphere and lower stratosphere (UTLS). Inspection of the MAECHAM5 trends in different RO-accessible atmospheric parameters (microwave refractivity and pressure/geopotential height in addition to temperature) indicates complementary climate change sensitivity in different regions of the UTLS, so that optimized climate monitoring should combine information from all climatic key variables retrievable from GNSS RO data.
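As a back-of-envelope check, if the two ~0.2 K contributions are assumed independent (an assumption the abstract does not state explicitly), they combine in quadrature:

```latex
\sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{obs}}^{2} + \sigma_{\mathrm{samp}}^{2}}
  \approx \sqrt{0.2^{2} + 0.2^{2}}\ \mathrm{K} \approx 0.28\ \mathrm{K}
```

which is consistent with the conclusion that the combined error budget stays small relative to the expected UTLS trends.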
Abstract:
We have designed and implemented a low-cost digital system that couples closed-circuit television cameras to a digital acquisition system for recording in vivo behavioral data in rodents, allowing more than 10 animals to be observed and recorded simultaneously at a reduced cost compared with commercially available solutions. This system has been validated using two experimental rodent models: one involving chemically induced seizures and one assessing appetite and feeding. We present observational results showing comparable or improved accuracy and observer consistency between this new system and traditional methods in these experimental models, discuss the advantages of the presented system over conventional analog systems and commercially available digital systems, and propose possible extensions to the system and applications to non-rodent studies.
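As a rough sketch of the acquisition side (the abstract does not name the original software stack, so OpenCV, the device indices, the resolution, and the frame rate below are all assumptions), simultaneous multi-camera recording could look like this:

```python
# A rough sketch of simultaneous multi-camera acquisition. OpenCV, the
# device indices, the resolution, and the frame rate are all assumptions;
# the abstract does not specify the original system's software stack.
import cv2

camera_indices = [0, 1, 2]  # one entry per CCTV camera attached to the PC
captures = [cv2.VideoCapture(i) for i in camera_indices]
writers = [
    cv2.VideoWriter(f"cage_{i}.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                    15.0, (640, 480))
    for i in camera_indices
]

try:
    for _ in range(15 * 60):  # about one minute of recording at 15 fps
        for cap, writer in zip(captures, writers):
            ok, frame = cap.read()
            if ok:
                writer.write(cv2.resize(frame, (640, 480)))
finally:
    for cap in captures:
        cap.release()
    for writer in writers:
        writer.release()
```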
Abstract:
We present a general Multi-Agent System framework for distributed data mining based on a peer-to-peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load-balancing policy that is particularly suitable for irregular search algorithms, and its modular design separates the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation, carried out on a parallel frequent subgraph mining algorithm, showed good scalability.
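A minimal sketch of the message-based asynchronous pattern described above, using Python's asyncio: idle agents pull the next task from a shared queue, which yields the dynamic load balancing that irregular search workloads need. The framework's actual protocols and the subgraph mining algorithm are not reproduced; the task costs are placeholders.

```python
# Message-based asynchronous agents sharing an irregular workload through a
# queue. Task costs are placeholder sleep durations standing in for mining
# subtasks; the framework's own protocols are not reproduced here.
import asyncio

async def agent(name: str, tasks: asyncio.Queue, results: list) -> None:
    # Each idle agent pulls the next task immediately, so faster peers
    # naturally take on more of an irregular workload.
    while True:
        task = await tasks.get()
        if task is None:             # shutdown message
            tasks.task_done()
            return
        await asyncio.sleep(task)    # stand-in for one mining subtask
        results.append((name, task))
        tasks.task_done()

async def main() -> None:
    tasks: asyncio.Queue = asyncio.Queue()
    results: list = []
    for cost in [0.03, 0.01, 0.05, 0.02, 0.04, 0.01]:
        tasks.put_nowait(cost)       # irregular task sizes
    peers = [asyncio.create_task(agent(f"peer-{i}", tasks, results))
             for i in range(3)]
    await tasks.join()               # wait until all real tasks are done
    for _ in peers:
        tasks.put_nowait(None)       # one shutdown message per peer
    await asyncio.gather(*peers)
    print(results)

asyncio.run(main())
```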