901 results for Data dissemination and sharing
Abstract:
This study investigates the convergences and divergences between primary and secondary communication about breast cancer. We used an interpretive framework drawn from Framing Analysis, Agenda Setting, Social Learning Theory, Diffusion of Innovations, Semiotics, and the concept of Novelty in Science and Journalism to argue that scientists and journalists communicate scientific news in different ways. As a secondary aim, we also traced a historical overview of Health Communication and its evolution, considering that Communication has made an effort to legitimize a meeting ground with Health, establishing a field for the application of communication theories, principles, and techniques with the precise goal of disseminating and sharing information, knowledge, and practices that help improve health systems and the well-being of populations. Through the analysis of data from scientific and journalistic periodicals covering breast cancer, we found significant support for our predictions. The implications of these differences between primary communication (among peers) and secondary communication (lay public) for health communication are discussed, sometimes appearing as convergences, sometimes as divergences. When well clarified and understood, they advance Health Communication, yielding positive results for the well-being of populations, considering that the origin of diseases lies, fundamentally, where the biological and the social intertwine.
Abstract:
We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized “eigengenes” × “eigenarrays” space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively.
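As a concrete illustration of the decomposition described above, the following Python sketch applies NumPy's SVD to a randomly generated matrix standing in for genes × arrays expression data; the matrix size, the choice of which eigengene to filter out, and the sorting criterion are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

# Toy expression matrix: rows = genes, columns = arrays (made-up data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 14))

# SVD: X = U @ diag(s) @ Vt.
# Rows of Vt are the eigengenes (patterns across arrays);
# columns of U are the eigenarrays (patterns across genes).
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Fraction of the overall expression captured by each eigengene.
frac = s**2 / np.sum(s**2)

# Suppose the first eigengene is inferred to represent an
# experimental artifact: normalize by filtering it out.
keep = np.ones_like(s, dtype=bool)
keep[0] = False
X_norm = (U[:, keep] * s[keep]) @ Vt[keep, :]

# Sorting genes by their loading on a significant eigengene groups
# genes of similar regulation together (a simplified sort criterion).
order = np.argsort(U[:, 1])
X_sorted = X_norm[order, :]
```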
Abstract:
In data assimilation, one prepares gridded data as the best possible estimate of the true initial state of a system of interest by merging various measurements, irregularly distributed in space and time, with prior knowledge of the state given by a numerical model. Because it can improve forecasting or modeling and increase physical understanding of the systems considered, data assimilation now plays a very important role in studies of atmospheric and oceanic problems. Here, three examples are presented to illustrate the use of new types of observations and the ability to improve forecasting or modeling.
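In many assimilation schemes, the merging step described above is an analysis update that weighs observations against the model background according to their error covariances. The following Python sketch shows this standard update on a toy three-variable state; all matrices and values are illustrative assumptions, not taken from the examples in the text.

```python
import numpy as np

x_b = np.array([1.0, 2.0, 3.0])   # background (model) state
B = np.diag([0.5, 0.5, 0.5])      # background error covariance

H = np.array([[1.0, 0.0, 0.0],    # observation operator: we observe
              [0.0, 0.0, 1.0]])   # state entries 0 and 2 only
y = np.array([1.4, 2.5])          # irregularly available observations
R = np.diag([0.2, 0.2])           # observation error covariance

# Kalman gain: how strongly observations pull the state estimate.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Analysis: the best estimate of the true state given both sources.
x_a = x_b + K @ (y - H @ x_b)
print(x_a)
```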
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to treat it as a true software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario in which a time series analysis is required.
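To make the model-driven idea concrete, the sketch below shows a platform-independent mining model being transformed into platform-specific analysis code for a time-series scenario. All class, field, and function names are invented for illustration; the authors' actual metamodel and transformation language are not specified here.

```python
from dataclasses import dataclass

@dataclass
class TimeSeriesMiningModel:
    """Conceptual (platform-independent) description of the analysis."""
    source_table: str   # data deployed via the warehouse
    value_column: str   # measure to analyse
    window: int         # smoothing window for the time series

def to_platform_code(model: TimeSeriesMiningModel) -> str:
    """Toy model-to-text transformation targeting a pandas script."""
    return (
        f"import pandas as pd\n"
        f"df = pd.read_sql('SELECT * FROM {model.source_table}', conn)\n"
        f"df['trend'] = df['{model.value_column}']"
        f".rolling({model.window}).mean()\n"
    )

# The analyst works at the conceptual level; code is generated.
print(to_platform_code(TimeSeriesMiningModel('sales_fact', 'amount', 12)))
```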
Abstract:
Subsidence is a natural hazard that affects wide areas of the world, causing significant economic losses annually. This phenomenon has occurred in the metropolitan area of Murcia City (SE Spain) as a result of groundwater overexploitation. In this work, aquifer-system subsidence is investigated using an advanced differential SAR interferometry remote sensing technique (A-DInSAR) called Stable Point Network (SPN). The SPN-derived displacement results, mainly the velocity displacement maps and the displacement time series, reveal that in the period 2004–2008 the rate of subsidence in the Murcia metropolitan area doubled with respect to the previous period from 1995 to 2005. The acceleration of the deformation phenomenon is explained by the drought period that started in 2006. The comparison of the temporal evolution of the displacements measured with the extensometers and with the SPN technique shows an average absolute error of 3.9±3.8 mm. Finally, results from a finite element model developed to simulate the recorded time history of subsidence from known water table height changes compare well with the SPN displacement time series estimates. This result demonstrates the potential of A-DInSAR techniques for validating subsidence prediction models as an alternative to instrumental ground-based techniques.
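The agreement statistic quoted above (average absolute error between extensometer and SPN displacement series) can be computed as in this short sketch; the displacement values are made up for illustration only.

```python
import numpy as np

# Two displacement time series at the same site (mm); toy values.
extenso = np.array([-1.0, -3.2, -5.8, -9.1, -12.4])
spn     = np.array([-0.5, -4.0, -6.5, -8.0, -13.0])

# Mean and spread of the absolute differences between techniques.
abs_err = np.abs(spn - extenso)
print(f"{abs_err.mean():.1f} ± {abs_err.std():.1f} mm")
```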
Abstract:
Robotics is one of the most active research areas, and building robots requires bringing together a large number of disciplines. With these premises, one problem is the management of information from multiple heterogeneous sources. Each component, hardware or software, produces data of a different nature: temporal frequency, processing needs, size, type, etc. Nowadays, technologies and software-engineering paradigms such as service-oriented architectures are applied to solve this problem in other areas. This paper proposes the use of these technologies to implement a robotic control system based on services. Such a system allows the integration and collaborative work of the different elements that make up a robotic system.
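A minimal sketch of what such a service-based control layer could look like is given below: each heterogeneous component (sensor, actuator, planner) is wrapped as a service behind a uniform interface. The bus, service names, and data formats are illustrative assumptions, not the system proposed in the paper.

```python
from typing import Callable, Dict

class ServiceBus:
    """Registry that lets robot components discover and call services."""
    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, handler: Callable[..., object]) -> None:
        self._services[name] = handler

    def call(self, name: str, **kwargs: object) -> object:
        return self._services[name](**kwargs)

bus = ServiceBus()
# A sensor service producing data at its own rate and format...
bus.register("lidar.scan", lambda: [0.9, 1.2, 3.4])        # metres
# ...and an actuator service consuming velocity commands.
bus.register("base.move", lambda v, w: f"cmd v={v} w={w}")

# A consumer composes services without knowing their internals.
scan = bus.call("lidar.scan")
if min(scan) > 0.5:                      # trivial obstacle check
    print(bus.call("base.move", v=0.2, w=0.0))
```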
Abstract:
This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today’s online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such.
Abstract:
The present data set includes 268,127 vertical in situ fluorescence profiles obtained from several available online databases and from published and unpublished individual sources. Metadata about each profile are given in further detail in the file provided here. The majority of profiles come from the National Oceanographic Data Center (NODC) and from the fluorescence profiles acquired by Bio-Argo floats available on the Oceanographic Autonomous Observations (OAO) platform (63.7% and 12.5%, respectively).
Different modes of acquisition were used to collect the data presented in this study: (1) CTD profiles are acquired using a fluorometer mounted on a CTD rosette; (2) OSD (Ocean Station Data) profiles are derived from water samples and are defined as low-resolution profiles; (3) UOR (Undulating Oceanographic Recorder) profiles are acquired by a fluorometer mounted on a towed undulating vehicle.