686 results for Impala, Hadoop, Big Data, HDFS, Social Business Intelligence, SBI, cloudera


Relevance: 100.00%

Abstract:

Backup and information security in Finnish micro-enterprises are matters that often do not receive sufficient attention because of missing expertise, lack of time, or too few resources. Information security was chosen as one research topic of this work because it is a current and much-discussed subject. Backup was chosen as the second research topic, since it is very closely tied to information security and is a mandatory measure for guaranteeing the continuity of a company's business. This work studies the information security of a micro-enterprise and considers how it can be improved with simple methods. In addition, the backup practices of a micro-enterprise and the related issues and problems are examined. The goal of the work is to study information security and backup on a general level and to create several alternative backup solutions based on the literature and theory. The company's information security and backup are examined using a fictitious model company as the research environment, because in this way the research environment can be precisely defined and delimited. Since these subject areas are quite broad, the scope of the work is limited mainly to backup, possible security threats, and the study of information security on a general level. Based on the study, two possible local backup solution alternatives and one remote backup alternative have been developed. The local backup alternatives are backing up to an external hard drive and backing up to a NAS (Network Attached Storage) server. The remote backup alternative is backing up to a remote server, such as a cloud service. Although the NAS server is a local backup solution, it can also be used for remote backup depending on where the device is located. The work briefly compares and evaluates the alternatives using evaluation criteria created on the basis of the study. A scoring model is also presented to make it easier to evaluate the solutions and to select a suitable alternative. Each alternative has its own advantages and disadvantages, so choosing the right one is not always easy. The suitability of an alternative for a particular company always depends on the company's own needs and requirements. Because different companies often have different requirements and needs for backup, finding the backup solution that best fits a company can be difficult and time-consuming. The alternatives presented in this work serve as a guideline and basis for planning, selecting, deciding on, and building a micro-enterprise's backup system.
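The abstract mentions a scoring model for comparing the backup alternatives but gives no formulas; the following Python sketch is a hypothetical minimal weighted-criteria scoring model (the criteria, weights and scores are illustrative assumptions, not values from the thesis):

```python
# Hypothetical weighted-criteria scoring of backup alternatives (illustrative only;
# the criteria, weights, and per-alternative scores below are NOT from the thesis).

CRITERIA_WEIGHTS = {      # weight of each evaluation criterion, summing to 1.0
    "cost": 0.30,
    "ease_of_use": 0.25,
    "data_safety": 0.30,
    "scalability": 0.15,
}

ALTERNATIVES = {          # score per criterion on a 1-5 scale (assumed values)
    "external_hdd": {"cost": 5, "ease_of_use": 4, "data_safety": 2, "scalability": 2},
    "nas_server":   {"cost": 3, "ease_of_use": 3, "data_safety": 4, "scalability": 4},
    "cloud_remote": {"cost": 2, "ease_of_use": 4, "data_safety": 5, "scalability": 5},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

if __name__ == "__main__":
    ranked = sorted(ALTERNATIVES.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name}: {weighted_score(scores):.2f}")
```

A micro-enterprise would of course substitute its own criteria and weights; the point of such a model is only to make the trade-offs between alternatives explicit and comparable.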

Relevance: 100.00%

Abstract:

Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
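The paper itself is not reproduced here; purely as a hypothetical illustration of the black-box inspection step it describes, the Python sketch below invokes the CUDA toolkit's cuobjdump utility to dump the embedded PTX and SASS sections from a compiled binary (the binary path is invented, and cuobjdump must be installed and on PATH):

```python
# Minimal sketch of a black-box inspection step for a CUDA binary (assumes the
# CUDA toolkit's cuobjdump utility is on PATH; the file name is hypothetical).
import subprocess

def dump_cuda_sections(binary_path: str) -> dict:
    """Extract embedded PTX and SASS listings from a CUDA fat binary."""
    out = {}
    for label, flag in (("ptx", "--dump-ptx"), ("sass", "--dump-sass")):
        result = subprocess.run(["cuobjdump", flag, binary_path],
                                capture_output=True, text=True)
        out[label] = result.stdout if result.returncode == 0 else ""
    return out

if __name__ == "__main__":
    sections = dump_cuda_sections("./example_cuda_app")
    # Embedded PTX (if present) is close to the original source and leaks the most detail.
    print(sections["ptx"][:500] or "no PTX section found")
```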

Relevance: 100.00%

Abstract:

Securing e-health applications in the context of the Internet of Things (IoT) is challenging. Indeed, resource scarcity in such environments hinders the implementation of existing standards-based protocols. Among these protocols, MIKEY (Multimedia Internet KEYing) aims at establishing security credentials between two communicating entities. However, the existing MIKEY modes do not fit the specific constraints of the IoT. In particular, the pre-shared key mode is energy efficient, but suffers from severe scalability issues. On the other hand, asymmetric modes such as the public key mode are scalable, but highly resource consuming. To address this issue, we combine two previously proposed approaches to introduce a new hybrid MIKEY mode. Relying on a cooperative approach, a set of third parties is used to discharge the constrained nodes from heavy computational operations. In this way, the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Preliminary results show that the proposed mode preserves energy while keeping its security properties intact.
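The abstract describes the hybrid idea only at a high level; the following Python sketch is a loose conceptual illustration (not the paper's protocol) of a constrained node doing only cheap symmetric work with a pre-shared key while an assisting third party performs the expensive public-key step on its behalf. All names, message formats, and the placeholder key derivation are invented:

```python
# Conceptual sketch of delegating the expensive asymmetric step to an assisting
# node (illustrative only; NOT the MIKEY message format or the paper's protocol).
import hmac, hashlib, os

PRE_SHARED_KEY = os.urandom(32)   # shared between constrained node and assistant

def constrained_node_request(session_nonce: bytes) -> tuple:
    """Constrained side: symmetric-only work (cheap)."""
    tag = hmac.new(PRE_SHARED_KEY, session_nonce, hashlib.sha256).digest()
    return session_nonce, tag

def assistant_relay(session_nonce: bytes, tag: bytes) -> bytes:
    """Assisting third party: verifies the symmetric tag, then performs the
    costly public-key exchange with the unconstrained peer on the node's behalf."""
    expected = hmac.new(PRE_SHARED_KEY, session_nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad pre-shared-key tag")
    # Placeholder standing in for the public-key MIKEY exchange (hypothetical).
    derived_secret = hashlib.sha256(b"public-key-exchange" + session_nonce).digest()
    return derived_secret

nonce, tag = constrained_node_request(os.urandom(16))
print(assistant_relay(nonce, tag).hex())
```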

Relevance: 100.00%

Abstract:

This article discusses the application of Information and Communication Technologies as good-practice strategies that lead to the retention and recruitment of higher-education students. It is based on an eight-year case study of the use of a complete information system. Besides constituting an ERP that supports academic management activities, this system also has a strong SRM component that supports administrative and teaching activities. The article describes the extent to which the system facilitates interaction and communication between members of the academic community over the Internet, with services available on the Web complemented by e-mail, SMS and CTI. Through a perception supported by empirical analysis and survey results, it is shown how this type of good practice can raise the community's level of satisfaction. In particular, it is possible to combat academic failure, to prevent students from abandoning their courses before completion, and to have them recommend the courses to prospective students. In addition, this type of strategy also allows substantial savings in the management of the institution, increasing its value. As future work, the new phase of the project is presented, which moves towards applying Business Intelligence to optimize the management process, making it proactive. The technological vision guiding the new developments, towards an architecture based on Web services and process definition languages, is also presented.

Relevance: 100.00%

Abstract:

This article discusses the application of Information and Communication Technologies and best-practice strategies to capture and retain higher-education students. It is based on a ten-year case study using a complete information system. In addition to being an ERP that supports academic management activities, this system also has a strong SRM component that supports academic and administrative activities. It describes the extent to which the presented system facilitates interaction and communication between members of the academic community, using the Internet, with services available on the Web complemented by email, SMS and CTI. Through a perception backed by empirical analysis and survey results, it demonstrates how this type of practice can raise the community's level of satisfaction. In particular, it is possible to combat failure at school, to prevent students from leaving their courses before completion, and to have them recommend the courses to potential students. In addition, such a strategy also allows strong economies in the management of the institution, increasing its value. As future work, we present the new phase of the project towards the implementation of Business Intelligence to optimize the management process, making it proactive. The technological vision that guides new developments, towards an architecture based on Web services and process definition languages, is also presented.

Relevance: 100.00%

Abstract:

In recent years the technological world has grown by incorporating billions of small sensing devices, collecting and sharing real-world information. As the number of such devices grows, it becomes increasingly difficult to manage all these new information sources. There is no uniform way to share, process and understand context information. In previous publications we discussed efficient ways to organize context information that are independent of structure and representation. However, our previous solution suffers from semantic sensitivity. In this paper we review semantic methods that can be used to minimize this issue, and propose an unsupervised semantic similarity solution that combines distributional profiles with public web services. Our solution was evaluated against the Miller-Charles dataset, achieving a correlation of 0.6.
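The abstract does not give the similarity formula; one common way to compare distributional profiles is cosine similarity over context co-occurrence vectors, sketched below with a toy corpus (the corpus, window size and word pairs are assumptions for illustration, not the paper's data or method):

```python
# Minimal sketch: cosine similarity between distributional (co-occurrence) profiles.
# The tiny corpus below is a toy assumption, not data from the paper.
import math
from collections import Counter

CORPUS = [
    "the car drove down the road",
    "the automobile drove along the road",
    "the boy ate the fruit",
]

def profile(word: str, window: int = 2) -> Counter:
    """Count context words appearing within +/- `window` positions of `word`."""
    counts = Counter()
    for sentence in CORPUS:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(profile("car"), profile("automobile")))  # higher: similar contexts
print(cosine(profile("car"), profile("fruit")))       # lower: dissimilar contexts
```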

Relevance: 100.00%

Abstract:

The book organized by Kira Tarapanoff presents the theme of organizational and competitive intelligence in the context of Web 2.0 from four distinct perspectives: I. Web 2.0: new opportunities for intelligence activities and Big Data; II. New information architectures; III. Strategy development through Web 2.0; IV. Methodologies. The book gathers nine chapters written by fifteen Brazilian authors and one Finnish author, the latter translated into Portuguese, all aligned with the book's main theme.

Relevance: 100.00%

Abstract:

In recent years, several debates and studies have arisen about the importance of knowledge. Organizations consider Knowledge Management (KM) a competitive advantage, capable of generating wealth and power. To achieve this, they need to develop mechanisms and to have people with the capacity to create, share and disseminate knowledge within the organization. Information Technologies and Information Systems are tools with great impact on KM, since they play an important role in the success and renewal of knowledge. Starting from the Knowledge Management model developed by Nonaka and Takeuchi, the Metavision model, and Business Intelligence techniques, a Knowledge Management model was elaborated for a Health Unit. With this model we aim to obtain benefits such as the development of internal communication mechanisms, training plans, and improvements in the decision-making process.

Relevance: 100.00%

Abstract:

The Business Intelligence Department of the South Carolina Department of Employment and Workforce publishes Insights monthly, in conjunction with the U.S. Department of Labor's Bureau of Labor Statistics. The newsletter provides economic indicators, employment rates and changes by county, nonfarm employment trends, and other statistics.

Relevance: 100.00%

Abstract:

Background: Rheumatoid arthritis (RA) is a chronic inflammatory arthritis that causes significant morbidity and mortality and has no cure. Although early treatment strategies and biologic therapies such as TNFα-blocking antibodies have revolutionised treatment, considerable unmet need remains. JAK kinase inhibitors, which target multiple inflammatory cytokines, have shown efficacy in treating RA, although their exact mechanism of action remains to be determined. Stratified medicine promises to deliver the right drug to the right patient at the right time by using predictive 'omic biomarkers discovered using bioinformatic and "Big Data" techniques. Therefore, knowledge across the realms of clinical rheumatology, applied immunology, bioinformatics and data science is required to realise this goal. Aim: To use bioinformatic tools to analyse the transcriptome of CD14 macrophages derived from patients with inflammatory arthritis and define a JAK/STAT signature; thereafter to investigate the role of JAK inhibition on inflammatory cytokine production in a macrophage cell-contact activation assay; and finally to investigate JAK inhibition following RA synovial fluid stimulation of monocytes. Methods and Results: Using bioinformatic software such as limma from the Bioconductor repository, I determined that there was a JAK/STAT signature in synovial CD14 macrophages from patients with RA, and that this differed from psoriatic arthritis samples. JAK inhibition using the JAK1/3 inhibitor tofacitinib reduced TNFα production when macrophages were contact-activated by cytokine-stimulated CD4 T-cells. Other pro-inflammatory cytokines such as IL-6 and chemokines such as IP-10 were also reduced. RA synovial fluid failed to stimulate monocytes to phosphorylate STAT1, 3 or 6, but CD4 T-cells activated STAT3 with this stimulus. RNA sequencing of synovial fluid-stimulated CD4 T-cells showed an upregulation of SOCS3, BCL6 and SBNO2 (a gene associated with RA but with unknown function), and tofacitinib reversed this. Conclusion: These studies demonstrate that tofacitinib is effective at reducing inflammatory mediator production in a macrophage cell-contact assay and also affects soluble-factor-mediated stimulation of CD4 T-cells. This suggests that the effectiveness of JAK inhibition is due to inhibition of multiple cytokine pathways such as IL-6, IL-15 and interferon. RNA sequencing is a useful tool to identify non-coding RNA transcripts associated with synovial fluid stimulation and JAK inhibition, but these require further validation. SBNO2, a gene associated with RA, may be a biomarker of tofacitinib treatment but requires further investigation and validation in wider disease cohorts.
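The differential expression analysis in the thesis is done with limma in R; purely as a language-neutral illustration of the general idea of scoring a gene signature per sample, here is a hypothetical Python sketch (the gene names and expression values are invented, and this is not the thesis' analysis):

```python
# Hypothetical illustration: score a gene signature per sample as the mean z-score
# of the signature genes. Gene names and the expression matrix are invented.
import statistics

EXPRESSION = {  # gene -> expression value per sample (toy numbers)
    "STAT1": [5.1, 7.8, 6.0],
    "SOCS3": [2.0, 4.5, 2.2],
    "IRF1":  [3.3, 6.1, 3.0],
    "ACTB":  [9.0, 9.1, 8.9],   # housekeeping gene, not in the signature
}
SIGNATURE = ["STAT1", "SOCS3", "IRF1"]   # assumed JAK/STAT-responsive genes

def zscores(values):
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def signature_scores(expression, signature):
    """Per-sample score = mean z-score across the signature genes."""
    per_gene = [zscores(expression[g]) for g in signature]
    n_samples = len(next(iter(expression.values())))
    return [statistics.mean(gene[i] for gene in per_gene) for i in range(n_samples)]

print(signature_scores(EXPRESSION, SIGNATURE))  # higher score = more active signature
```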

Relevance: 100.00%

Abstract:

Computer science began with the era of tabulating machines and then moved on to programmable ones. Today's world, however, is undergoing a radical transformation of information. On the one hand, the massive avalanche of data, so-called Big Data, means that systems require additional intelligence to extract valid knowledge from the data. On the other hand, we increasingly demand computers that understand us and communicate with us better. Cognitive computing, the new era of computing, responds to these needs: systems that use biological intelligence as a model to establish a more satisfactory relationship with human beings. Natural language, the ability to operate in an ambiguous world, and learning are characteristics of cognitive systems; IBM Watson is currently the most eloquent example of this new paradigm.

Relevance: 100.00%

Abstract:

Worldwide air traffic tends to increase, and for many airports it is no longer an option to expand terminals and runways, so airports are trying to maximize their operational efficiency. Many airports already operate near their maximal capacity. Peak hours imply operational bottlenecks and cause chained delays across flights, impacting passengers, airlines and airports. Therefore there is a need for the optimization of ground movements at airports. The ground movement problem consists of routing the departing planes from the gate to the runway for take-off, and the arriving planes from the runway to the gate, and of scheduling their movements. The main goal is to minimize the time spent by the planes during their ground movements while respecting all the rules established by the Advanced Surface Movement, Guidance and Control Systems of the International Civil Aviation Organization. Each aircraft event (arrival or departure authorization) generates a new environment and therefore a new instance of the Ground Movement Problem. The proposed optimization approach is based on an Iterated Local Search and provides a fast heuristic solution for each real-time event-generated instance while guaranteeing all safety regulations. Preliminary computational results are reported for real data, comparing the heuristic solutions with the solutions obtained using a mixed-integer programming approach.
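The abstract names Iterated Local Search without giving pseudocode; the sketch below is a generic ILS skeleton in Python applied to a stand-in toy problem (the cost function, perturbation and neighbourhood are illustrative assumptions, not the paper's ground-movement solver):

```python
# Generic Iterated Local Search skeleton (illustrative only; the real solver's
# routing/scheduling model is not in the abstract, so a toy ordering problem is used).
import random

def cost(solution):
    # Stub objective: position-weighted sum of toy "taxi durations".
    return sum((i + 1) * t for i, t in enumerate(solution))

def local_search(solution):
    """First-improvement search over adjacent swaps."""
    improved = True
    while improved:
        improved = False
        for i in range(len(solution) - 1):
            cand = solution[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if cost(cand) < cost(solution):
                solution, improved = cand, True
    return solution

def perturb(solution, strength=2):
    cand = solution[:]
    for _ in range(strength):          # random swaps to escape local optima
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
    return cand

def iterated_local_search(initial, iterations=100):
    best = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(best))
        if cost(candidate) < cost(best):   # accept only improving solutions
            best = candidate
    return best

taxi_times = [7, 3, 9, 1, 5]               # toy "taxi durations" per aircraft
best = iterated_local_search(taxi_times)
print(best, cost(best))
```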

Relevance: 100.00%

Abstract:

The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g., photos, identities, genomes) at risk. To protect users' data privacy, there is a growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g., secure multiparty computation) and trusted hardware (e.g., secure processors) to instantiate a "secure" abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g., memory accesses, timing, and termination) in such a "secure" abstract machine may potentially leak highly sensitive information, including cryptographic keys that form the root of trust for the secure systems. This thesis broadly expands the investigation of a research direction called trace-oblivious computation, where programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace-oblivious computation by formalizing and building several systems, including GhostRider, a hardware-software co-design that provides a hardware-based trace-oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that helps programmers develop such applications. All of these systems enjoy formal security guarantees while demonstrating performance better than prior systems by one to several orders of magnitude.
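No code from GhostRider, SCVM or ObliVM appears in the abstract; as a toy illustration of the kind of secret-dependent control flow that trace-oblivious techniques aim to eliminate, the following hypothetical Python sketch contrasts a leaky branch with a branchless select (conceptual only, since CPython gives no constant-time guarantees):

```python
# Toy illustration of removing a secret-dependent branch (not code from GhostRider,
# SCVM or ObliVM). The "secret" values and bit width are invented for the example.

def leaky_max(secret_a: int, secret_b: int) -> int:
    # The branch taken depends on secret data: a timing/trace side channel.
    if secret_a > secret_b:
        return secret_a
    return secret_b

def oblivious_max(secret_a: int, secret_b: int, bits: int = 32) -> int:
    # Branchless selection: the same operations execute regardless of the secrets.
    full = (1 << bits) - 1
    mask = -(int(secret_a > secret_b)) & full    # all-ones if a > b, else zero
    return (secret_a & mask) | (secret_b & ~mask & full)

print(leaky_max(7, 42), oblivious_max(7, 42))    # both print 42
print(leaky_max(42, 7), oblivious_max(42, 7))    # both print 42
```

The design point is that the memory accesses and instruction trace of the oblivious version do not depend on which input is larger, which is the property the thesis' systems enforce at the language and hardware levels.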

Relevance: 100.00%

Abstract:

One of the great challenges of HPC (High Performance Computing) is optimizing the Input/Output (I/O) subsystem. Ken Batcher summarizes this fact in the following sentence: "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." In other words, the bottleneck no longer lies so much in processing the data as in their availability. Moreover, this problem will be exacerbated by the arrival of Exascale and the popularization of Big Data applications. In this context, this thesis contributes to improving the performance and usability of the I/O subsystem of supercomputing systems. Two main contributions are proposed: i) an I/O interface developed for the Chapel language that improves programmer productivity when coding I/O operations; and ii) an optimized implementation of the storage of genomic sequence data. In more detail, the first contribution studies and analyses different I/O optimizations in Chapel, while providing users with a simple interface for parallel and distributed access to the data contained in files. We thus contribute both to increasing developer productivity and to making the implementation as efficient as possible. The second contribution also falls within the scope of I/O problems, but in this case it focuses on improving the storage of genomic sequence data, including its compression, and on allowing existing applications to use those data efficiently, enabling efficient retrieval both sequentially and at random. Additionally, we propose a parallel implementation based on Chapel.
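The abstract does not detail the storage format; a common building block for compact, randomly accessible sequence storage is fixed-width 2-bit packing of bases, sketched below in Python (the format is a generic assumption for illustration, not the thesis' actual design, which is Chapel-based):

```python
# Hypothetical sketch of 2-bit packing of DNA bases with O(1) random access to any
# base offset. This is a generic technique, not the storage format from the thesis.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte."""
    out = bytearray((len(seq) + 3) // 4)
    for i, b in enumerate(seq):
        out[i // 4] |= CODE[b] << (2 * (i % 4))
    return bytes(out)

def get_base(packed: bytes, i: int) -> str:
    """Random access: read base i without unpacking the whole sequence."""
    return BASE[(packed[i // 4] >> (2 * (i % 4))) & 0b11]

seq = "GATTACAGATTACA"
blob = pack(seq)
assert all(get_base(blob, i) == b for i, b in enumerate(seq))
print(len(seq), "bases packed into", len(blob), "bytes")
```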

Relevance: 100.00%

Abstract:

Technologies related to the analysis of massive data are beginning to revolutionize the way we live, whether we realize it or not: from large companies, which use big data to improve their results, to our phones, which use it to measure our physical activity. Medicine is no stranger to this technology, which can be used to improve diagnoses and to establish personalized follow-up plans for patients. In particular, bipolar disorder requires constant attention from medical professionals. With the aim of contributing to this work, a platform called bip4cast is presented, which seeks to predict these patients' crises in advance. One of its components is a web application created to follow up patients and to represent the available data graphically, so that the physician can assess the patient's state and analyse the risk of relapse. In addition, the different visualizations implemented in the application are studied in order to check whether they are well suited to the goals they are intended to achieve. To do this, we generate random data and represent them graphically, examining the conclusions that could be drawn from them.
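The abstract mentions generating random data and plotting it to evaluate the visualizations; the following is a minimal hypothetical sketch of that idea using numpy and matplotlib (the simulated variable, its scale and the threshold are invented, and this is not bip4cast code):

```python
# Minimal sketch: synthesize follow-up data and plot it to exercise a visualization.
# The "daily mood score", its 0-10 scale, and the threshold are invented assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
days = np.arange(90)
# Random walk clipped to a 0-10 scale, standing in for a patient-reported score.
mood = np.clip(np.cumsum(rng.normal(0, 0.4, size=days.size)) + 5, 0, 10)

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(days, mood, label="simulated daily mood score")
ax.axhline(2.5, color="red", linestyle="--", label="hypothetical relapse-risk threshold")
ax.set_xlabel("day of follow-up")
ax.set_ylabel("score (0-10)")
ax.legend()
fig.savefig("followup_example.png", dpi=150)
```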