922 results for data storage concept
Abstract:
Nowadays, the Oceanographic and Geospatial communities are closely related worlds. The problem is that they follow parallel paths in data storage, distribution, modelling and data analysis. This situation produces different data model implementations for the same features. While Geospatial information systems have 2 or 3 dimensions, Oceanographic models use multidimensional parameters such as temperature, salinity, currents and ocean colour. This implies significant differences between the data models of the two communities and leads to difficulties in dataset analysis for both sciences. These problems directly affect the Mediterranean Institute for Advanced Studies (IMEDEA (CSIC-UIB)). Researchers at this Institute perform intensive processing of data from oceanographic instruments such as CTDs, moorings and gliders, together with geospatial data collected for the integrated management of coastal zones. In this paper, we present a solution based on THREDDS (Thematic Real-time Environmental Distributed Data Services). THREDDS allows data access through the standard geospatial data protocol Web Coverage Service, within the European project ECOOP (European Coastal Sea Operational Observing and Forecasting system). The goal of ECOOP is to consolidate, integrate and further develop existing European coastal and regional seas operational observing and forecasting systems into an integrated pan-European system targeted at detecting environmental and climate changes.
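As an illustration of the kind of access THREDDS enables, the sketch below queries a Web Coverage Service endpoint with the OWSLib client and saves one coverage subset to disk. The server URL, coverage identifier, bounding box and output format are hypothetical placeholders, not values taken from the ECOOP deployment described above.

```python
# Minimal WCS access sketch (hypothetical THREDDS endpoint and coverage name).
from owslib.wcs import WebCoverageService

# A THREDDS server typically exposes WCS under .../thredds/wcs/<dataset>.
wcs = WebCoverageService(
    "http://example.org/thredds/wcs/ocean_model.nc",  # placeholder URL
    version="1.0.0",
)

# List the coverages (model variables) the server advertises.
print(list(wcs.contents))

# Request a geographic subset of one coverage and save it locally.
response = wcs.getCoverage(
    identifier="sea_water_temperature",   # placeholder variable name
    bbox=(0.0, 38.0, 5.0, 41.0),          # lon/lat box, western Mediterranean
    crs="EPSG:4326",
    format="NetCDF3",                     # THREDDS WCS also offers GeoTIFF
    width=200,
    height=150,
)
with open("subset.nc", "wb") as f:
    f.write(response.read())
```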
Abstract:
This work provides a general description of the multi-sensor data fusion concept, along with a new classification of currently used sensor fusion techniques for unmanned underwater vehicles (UUVs). Unlike previous proposals that base the classification on the sensors involved in the fusion, we propose a synthetic approach focused on the techniques involved in the fusion and their applications in UUV navigation. We believe that our approach is better oriented towards the development of sensor fusion systems, since a sensor fusion architecture should first of all be focused on its goals and only then on the sensors being fused.
Abstract:
Building software for Web 2.0 and the Social Media world is non-trivial. It requires understanding how to create infrastructure that will survive at Web scale, meaning that it may have to deal with tens of millions of individual items of data, and cope with hits from hundreds of thousands of users every minute. It also requires you to build tools that will be part of a much larger ecosystem of software and application families. In this lecture we will look at how traditional relational database systems have tried to cope with the scale of Web 2.0, and explore the NoSQL movement that seeks to simplify data-storage and create ultra-swift data systems at the expense of immediate consistency. We will also look at the range of APIs, libraries and interoperability standards that are trying to make sense of the Social Media world, and ask what trends we might be seeing emerge.
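To make the consistency trade-off concrete, here is a toy sketch, not modelled on any particular NoSQL product, of a key-value store that acknowledges writes immediately and copies them to replicas in the background, so a read served by a replica can briefly return stale data.

```python
# Toy illustration of eventual consistency; all names and delays are invented.
import threading
import time


class EventuallyConsistentStore:
    def __init__(self, replicas=2, lag=0.5):
        self.primary = {}
        self.replicas = [dict() for _ in range(replicas)]
        self.lag = lag  # artificial replication delay in seconds

    def put(self, key, value):
        """Acknowledge the write as soon as the primary has it."""
        self.primary[key] = value
        # Replication happens in the background, after the acknowledgement.
        threading.Timer(self.lag, self._replicate, args=(key, value)).start()
        return "ack"

    def _replicate(self, key, value):
        for replica in self.replicas:
            replica[key] = value

    def get(self, key, replica=0):
        """Reads served by a replica may be stale until replication catches up."""
        return self.replicas[replica].get(key)


store = EventuallyConsistentStore()
store.put("user:42", "alice")
print(store.get("user:42"))   # likely None: the replica has not caught up yet
time.sleep(1)
print(store.get("user:42"))   # 'alice' once replication has completed
```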
Abstract:
The accurate prediction of storms is vital to the oil and gas sector for the management of its operations. An overview of research exploring the prediction of storms by ensemble prediction systems is presented, and its application to the oil and gas sector is discussed. The analysis method used requires larger amounts of data storage and computer processing time than other, more conventional analysis methods. To overcome these difficulties, eScience techniques have been utilised. These techniques potentially have applications in the oil and gas sector to help incorporate environmental data into its information systems.
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define, from the literature, the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
Abstract:
The discourse surrounding the virtual has moved away from the utopian thinking accompanying the rise of the Internet in the 1990s. The cyber-gurus of the last decades promised a technotopia removed from materiality and the confines of the flesh and the built environment, a liberation from old institutions and power structures. But since then, the virtual has grown into a distinct yet related sphere of cultural and political production that both parallels and occasionally flows over into the old world of material objects. The strict dichotomy of matter and digital purity has more recently been replaced with a more complex model in which the world of stuff and the world of knowledge support, resist and at the same time contain each other. Online social networks amplify and extend existing ones; other cultural interfaces like YouTube have not replaced the communal experience of watching moving images in a semi-public space (the cinema) or the semi-private space (the family living room). Rather, the experience of viewing is very much about sharing and communicating, offering interpretations and comments. Many of the web's strongest entities (Amazon, eBay, Gumtree, etc.) sit exactly at this juncture, applying tools taken from the knowledge management industry to organize the chaos of the material world along (post-)Fordist rationality. Since the early 1990s there have been many artistic and curatorial attempts to use the Internet as a platform for producing and exhibiting art, but a lot of these were reluctant to let go of the fantasy of digital freedom. Storage Room collapses the binary opposition of real and virtual space by using online data storage as a conduit for IRL art production. The artworks here will not be available for viewing online in a 'screen' environment but only as part of a downloadable package, with the intention that the exhibition could be displayed (in a physical space) by any interested party and realised as ambitiously or minimally as the downloader wishes, based on their means. The artists will therefore also supply a set of instructions for the physical installation of the work alongside the digital files. In response to this curatorial initiative, File Transfer Protocol invites seven UK-based artists to produce digital art for a physical environment, addressing the intersection between the virtual and the material. The files range from sound, video, digital prints and net art to blueprints for an action to take place, something to be made, a conceptual text piece, etc. About the works and artists: Polly Fibre is the pseudonym of London-based artist Christine Ellison. Ellison creates live music using domestic devices such as sewing machines, irons and slide projectors. Her costumes and stage sets propose a physical manifestation of the virtual space that is created inside software like Photoshop. For this exhibition, Polly Fibre invites the audience to create a musical composition using a pair of amplified scissors and a turntable. http://www.pollyfibre.com John Russell, a founding member of 1990s art group Bank, is an artist, curator and writer who explores in his work the contemporary political conditions of the work of art. In his digital print, Russell collages together visual representations of abstract philosophical ideas and transforms them into a post-apocalyptic landscape that is complex and banal at the same time.
www.john-russell.org The work of Bristol-based artist Jem Nobel opens up a dialogue between the contemporary and the legacy of 20th-century conceptual art around questions of collectivism and participation, authorship and individualism. His print SPACE concretizes the representation of the most common piece of Unicode: the vacant space between words. In this way, the gap itself turns from invisible cipher to sign. www.jemnoble.com Annabel Frearson is rewriting Mary Shelley's Frankenstein using all and only the words from the original text. Frankenstein 2, or the Monster of Main Stream, is read in parts by different performers, embodying the psychotic character of the protagonist, a mongrel hybrid of used language. www.annabelfrearson.com Darren Banks uses fragments of effect-laden Hollywood films to create an impossible space. The fictitious parts don't add up to a convincing material reality, leaving the viewer with a failed amalgamation of simulations of sophisticated technologies. www.darrenbanks.co.uk FIELDCLUB is a collaboration between artist Paul Chaney and researcher Kenna Hernly. Chaney and Hernly together developed a project that critically examines various proposals for the management of sustainable ecological systems. Their FIELDMACHINE invites the public to design an ideal agricultural field. By playing with different types of crops found in the south west of England, it is possible for the user, for example, to create a balanced but protein-poor diet, or simply to decide to 'get rid' of half the population. The meeting point of the Platonic field and its physical consequences generates a geometric abstraction that investigates the relationship between modernist utopianism and contemporary actuality. www.fieldclub.co.uk Pil and Galia Kollectiv, who have also curated the exhibition, are London-based artists and run the xero, kline & coma gallery. Here they present a dialogue between two computers. The conversation opens with a simple textbook problem in business studies, but gradually the language, mimicking the application of game theory in the business sector, becomes more abstract. The two interlocutors become adversaries trapped forever in a competition without winners. www.kollectiv.co.uk
An improved estimate of leaf area index based on the histogram analysis of hemispherical photographs
Abstract:
Leaf area index (LAI) is a key parameter that affects the surface fluxes of energy, mass, and momentum over vegetated lands, but observational measurements are scarce, especially in remote areas with complex canopy structure. In this paper we present an indirect method to calculate LAI based on the analysis of histograms of hemispherical photographs. The optimal threshold value (OTV), the gray level required to separate the background (sky) and the foreground (leaves), was analytically calculated using the entropy crossover method (Sahoo, P.K., Slaaf, D.W., Albert, T.A., 1997. Threshold selection using a minimal histogram entropy difference. Optical Engineering 36(7), 1976-1981). The OTV was used to calculate the LAI using the well-known gap fraction method. This methodology was tested in two different ecosystems, Amazon forest and pasturelands in Brazil. In general, the error between observed and calculated LAI was approximately 6%. The methodology presented is suitable for the calculation of LAI since it is responsive to sky conditions, automatic, easy to implement, faster than commercially available software, and requires less data storage. (C) 2008 Elsevier B.V. All rights reserved.
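For illustration only, the sketch below follows the same general recipe: pick a gray-level threshold from the image histogram, classify pixels as sky or canopy, and invert the gap fraction into an LAI. The threshold uses a Kapur-style maximum-entropy criterion as a stand-in for the entropy crossover method of Sahoo et al., and the extinction coefficient k = 0.5 is an assumed value, so this is not the authors' exact algorithm.

```python
import numpy as np


def entropy_threshold(gray):
    """Choose a gray-level threshold by maximising the sum of background and
    foreground histogram entropies (a Kapur-style criterion, used here as a
    stand-in for the entropy crossover method of Sahoo et al., 1997)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t][p[:t] > 0] / p0, p[t:][p[t:] > 0] / p1
        score = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
        if score > best_score:
            best_t, best_score = t, score
    return best_t


def lai_from_photo(gray, k=0.5):
    """Estimate LAI from a grayscale hemispherical photograph via the
    Beer-Lambert gap-fraction relation LAI = -ln(P) / k (k is assumed)."""
    t = entropy_threshold(gray)
    gap = np.mean(gray >= t)   # bright pixels taken as sky
    gap = max(gap, 1e-6)       # guard against a fully closed canopy
    return -np.log(gap) / k
```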
Abstract:
Two-photon absorption induced polymerization provides a powerful method for the fabrication of intricate three-dimensional microstructures. Recently, Lucirin TPO-L was shown to be a photoinitiator with several advantageous properties for two-photon induced polymerization. Here we measure the two-photon absorption cross-section spectrum of Lucirin TPO-L, which presents a maximum of 1.2 GM at 610 nm. Despite this small two-photon absorption cross-section, excellent microstructures can be fabricated by two-photon polymerization owing to the high polymerization quantum yield of Lucirin TPO-L. These results indicate that the two-photon absorption cross-section is not the only material parameter to be considered when searching for new photoinitiators for microfabrication via two-photon absorption.
Abstract:
The synthesis and self-assembly of tetragonal-phase-containing L1₀ Fe₅₅Pt₄₅ nanorods with a high coercive field is described. The experimental procedure resulted in a tetragonal/cubic phase ratio close to 1:1 for the as-synthesized nanoparticles. Using different surfactant/solvent proportions in the process allowed control of particle morphology from nanospheres to nanowires. Monodisperse nanorods with lengths of 60 ± 5 nm and diameters of 2-3 nm were self-assembled into a perpendicularly oriented array on a substrate surface using hexadecylamine as an organic spacer. The magnetic alignment and properties, assigned respectively to the shape anisotropy and the tetragonal phase, suggest that the self-assembled materials are a strong candidate to solve the problem of random magnetic alignment observed in FePt nanospheres, leading to applications in ultrahigh magnetic recording (UHMR) systems capable of achieving a performance of the order of terabits/in².
Abstract:
A problem-oriented language for the structural design of buildings and its corresponding data storage structure are presented as the core of the PROADE system. The aim is to allow the structural engineer to describe the problem in everyday engineering terms, organising the data received for later analysis and sizing of the structure. The PROADE problem and its corresponding data are discussed, followed by a description of the system's data storage structures. The PROADE language is then defined and, finally, the organisation of the PROADE system is presented.
Abstract:
Electronic commerce is a subject that involves multiple aspects related to the use of digital infrastructure to support business transactions. There are several ways to exemplify the application of electronic commerce, for example: the use of self-service kiosks to purchase soft drinks, phone cards and coffee, to obtain bank balances and to pay bills; the storage of customer data for the implementation of a relationship marketing programme; and the use of networks that interconnect different organisations, internally or externally, to optimise logistics and allow the flow of data needed to manage the organisations. Despite the many possible ways of adopting electronic commerce practices, one should not expect these practices to be generically replicable by all organisations, since organisations differ in their composition with respect to culture, structure, strategy and other components. Given the broad character of electronic commerce, this work provides a conceptual approach to the subject and some of its applications in the areas of marketing, logistics and electronic government; it presents some comments on information systems and the communication technologies that support them; it characterises the differences that exist between organisations, using an organisational model that portrays organisations as a set of forces (culture and structure, strategy, people and their roles, and technology) in dynamic equilibrium with one another and embedded in the social, technological, economic and political environment; and, as an example, it draws inferences about the possible results that can be expected from the adoption of the pregão (reverse auction) bidding modality, electronic or in person, within military organisations of the Brazilian Army, with respect to culture, people and their roles, and technology.
Abstract:
The quantification of country risk, and of political risk in particular, raises several difficulties for companies, institutions and investors. Because economic indicators are updated far less frequently than Facebook, understanding, and more precisely measuring, what is happening on the ground in real time can be a challenge for political risk analysts. However, with the growing availability of "big data" from social tools such as Twitter, now is an opportune moment to examine the types of social-media metrics that are available and the limitations of applying them to country risk analysis, especially during episodes of political violence. Using the qualitative method of bibliographic research, this study identifies the current landscape of data available from Twitter, analyses current and potential methods of analysis, and discusses their possible application in the field of political risk analysis. After a thorough review of the field to date, and taking into account the technological advances expected in the short and medium term, this study concludes that, despite obstacles such as the cost of data storage, the limitations of real-time analysis and the potential for data manipulation, the potential benefits of applying social-media metrics to the field of political risk analysis, particularly for structured-qualitative and quantitative models, clearly outweigh the challenges.
Abstract:
Nowadays, several electronic devices support digital video. Examples of these devices are cellphones, digital cameras, video cameras and digital televisions. However, raw video contains a huge amount of data, millions of bits, when represented in the form in which it was captured. Storing it in this raw form would require a huge amount of disk space, and transmitting it would require a huge bandwidth. Video compression therefore becomes essential to make storage and transmission of this information possible. Motion estimation is a technique used in the video coder that exploits the temporal redundancy present in video sequences to reduce the amount of data necessary to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution video according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was developed to provide a high degree of data reuse. The data reuse scheme adopted reduces the bandwidth required to execute motion estimation. Motion estimation is responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the final coder's performance. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
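As a software point of reference for what such hardware implements, the sketch below is a plain full-search block-matching routine with a sum-of-absolute-differences (SAD) cost, written for readability rather than speed. The 16x16 block size and ±8 search range are illustrative choices; the paper's architecture and its data reuse scheme are not reproduced here.

```python
import numpy as np


def full_search_motion_estimation(ref, cur, block=16, search=8):
    """Exhaustive block-matching motion estimation with a SAD cost.
    `ref` and `cur` are 2-D uint8 frames of the same size; the function
    returns one (dy, dx) motion vector per block of the current frame."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue   # candidate block falls outside the frame
                    ref_blk = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(cur_blk - ref_blk).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors
```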
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Antimony-based glasses have been investigated for the first time with regard to the possibility of holographic data storage using visible laser sources. Changes in both the refractive index and the absorption coefficient were measured using a holographic setup. The modulation of the optical constants is reversible by heat treatment. Bragg gratings were written with visible light from an Ar laser and erased thermally.
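For context, holographic measurements of this kind are commonly interpreted with Kogelnik's coupled-wave expression for a volume transmission phase grating, which links the measured diffraction efficiency to the induced refractive index modulation. The abstract does not state which relation the authors used, so the formula below is offered only as the standard reading, with η the diffraction efficiency, Δn the index modulation, d the grating thickness, λ the readout wavelength and θ_B the Bragg angle inside the medium.

```latex
\[
  \eta = \sin^{2}\!\left(\frac{\pi\,\Delta n\, d}{\lambda \cos\theta_{B}}\right)
  \qquad\Longrightarrow\qquad
  \Delta n = \frac{\lambda \cos\theta_{B}}{\pi d}\,\arcsin\!\sqrt{\eta}
\]
```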