938 results for Open Data, Bologna


Relevance:

30.00%

Publisher:

Abstract:

Sea-ice concentrations in the Laptev Sea simulated by the coupled North Atlantic-Arctic Ocean-Sea-Ice Model and Finite Element Sea-Ice Ocean Model are evaluated using sea-ice concentrations from Advanced Microwave Scanning Radiometer-Earth Observing System satellite data and a polynya classification method for winter 2007/08. While developed to simulate large-scale sea-ice conditions, both models are analysed here in terms of polynya simulation. The main modification of both models in this study is the implementation of a landfast-ice mask. Simulated sea-ice fields from different model runs are compared with emphasis placed on the impact of this prescribed landfast-ice mask. We demonstrate that sea-ice models are not able to simulate flaw polynyas realistically when used without a fast-ice description. Our investigations indicate that without landfast ice and with coarse horizontal resolution the models overestimate the fraction of open water in the polynya. This is not because a realistic polynya appears, but because of a larger-scale reduction of ice concentrations and smoothed ice-concentration fields. After implementation of a landfast-ice mask, the polynya location is realistically simulated but the total open-water area is still overestimated in most cases. The study shows that the fast-ice parameterization is essential for model improvements. However, further improvements are necessary in order to progress from the simulation of large-scale features in the Arctic towards a more detailed simulation of smaller-scale features (here polynyas) in an Arctic shelf sea.
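The effect of a prescribed landfast-ice mask can be pictured as pinning the simulated ice field to full concentration wherever fast ice is prescribed. A minimal NumPy sketch of that idea (array shapes and the choice of 100% concentration are illustrative, not the models' actual implementation):

```python
import numpy as np

def apply_landfast_mask(ice_conc, fast_ice_mask):
    """Hold ice concentration at 100% wherever landfast ice is prescribed.

    ice_conc      : 2-D array of fractional sea-ice concentrations (0..1)
    fast_ice_mask : 2-D boolean array, True where landfast ice is prescribed
    """
    out = ice_conc.copy()
    out[fast_ice_mask] = 1.0  # fast ice is immobile and fully ice-covered
    return out

conc = np.array([[0.2, 0.9], [0.5, 0.1]])
mask = np.array([[False, True], [True, False]])
print(apply_landfast_mask(conc, mask))
```

Without such a mask, ice in these cells would be free to drift away, which is one reason flaw polynyas cannot form at realistic locations.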

Relevance:

30.00%

Publisher:

Abstract:

Background: Massive open online courses (MOOCs) have become commonplace in the e-learning landscape. Thousands of elderly learners are participating in courses offered by various institutions on a multitude of platforms in many different languages. However, there is very little research into understanding elderly learners in MOOCs. Objective: We aim to show that a considerable proportion of elderly learners are participating in MOOCs and that there is a lack of research in this area. We hope this assertion of the wide gap in research on elderly learners in MOOCs will pave the way for more research in this area. Methods: Pre-course survey data for 10 University of Reading courses on the FutureLearn platform were analyzed to show the level of participation of elderly learners in MOOCs. Two MOOC aggregator sites (Class Central and MOOC List) were consulted to gather data on MOOC offerings that include topics relating to aging. In parallel, a selected set of MOOC platform catalogues, along with a recently published review on health and medicine-related MOOCs, were searched to find courses relating to aging. A systematic literature search was then employed to identify research articles on elderly learners in MOOCs. Results: The 10 courses reviewed had a considerable proportion of elderly learners participating in them. For the over-66 age group, this varied from 0.5% (on the course “Managing people”) to 16.3% (on the course “Our changing climate”), while for the over-56 age group it ranged from 3.0% (on “A beginners guide to writing in English”) to 39.5% (on “Heart health”). Only six MOOCs were found to include topics related to aging: three were on the Coursera platform, two on the FutureLearn platform, and one on the Open2Study platform. Just three scholarly articles relating to MOOCs and elderly learners were retrieved from the literature search. Conclusions: This review presents evidence to suggest that elderly learners are already participating in MOOCs. 
Despite this, there has been very little research into their engagement with MOOCs. Similarly, there has been little research into exploiting the scope of MOOCs for delivering topics that would be of interest to elderly learners. We believe there is potential to use MOOCs as a way of tackling the issue of loneliness among older adults by engaging them as either resource personnel or learners.

Relevance:

30.00%

Publisher:

Abstract:

A new method to measure the epicycle frequency kappa in the Galactic disc is presented. We make use of the large database on open clusters compiled by our group to derive the observed velocity vector (amplitude and direction) of the clusters in the Galactic plane. In the epicycle approximation, this velocity is equal to the circular velocity given by the rotation curve, plus a residual or perturbation velocity whose direction rotates as a function of time with the frequency kappa. Because the direction of the perturbation velocity at the birth time of the clusters is not random, a plot of the present-day direction angle of this velocity as a function of cluster age reveals systematic trends from which the epicycle frequency can be obtained. Our analysis assumes that the Galactic potential is mainly axisymmetric, or in other words, that the effect of the spiral arms on the Galactic orbits is small; in this sense, our results do not depend on any specific model of the spiral structure. The values of kappa that we obtain provide constraints on the rotation velocity of the Galaxy: in particular, V(0) is found to be 230 +/- 15 km s(-1) even if the short distance scale (R(0) = 7.5 kpc) of the Galaxy is adopted. The measured kappa at the solar radius is 43 +/- 5 km s(-1) kpc(-1). The distribution of initial velocities of open clusters is discussed.
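The core of the method can be sketched numerically: if clusters are born with a roughly common perturbation-velocity direction, the present-day direction angle decreases linearly with age at rate kappa, so a straight-line fit of angle versus age recovers kappa. A noise-free toy illustration in Python (the initial angle and sample of ages are invented; 43 km s⁻¹ kpc⁻¹ is roughly 44 rad Gyr⁻¹):

```python
import numpy as np

KAPPA = 44.0   # epicycle frequency in rad/Gyr (~43 km s^-1 kpc^-1)
PHI0 = 1.0     # assumed common initial direction angle, rad

ages = np.linspace(0.0, 0.25, 50)             # cluster ages in Gyr
angles = (PHI0 - KAPPA * ages) % (2 * np.pi)  # observed (wrapped) angles

# Recover kappa: unwrap the observed angles, then fit a line against age.
slope, _ = np.polyfit(ages, np.unwrap(angles), 1)
kappa_est = -slope
print(f"recovered kappa = {kappa_est:.1f} rad/Gyr")
```

In the real analysis the angles are scattered by measurement errors and by the spread of initial directions, so the trend is fitted statistically rather than exactly.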

Relevance:

30.00%

Publisher:

Abstract:

We employ the recently installed near-infrared Multi-Conjugate Adaptive Optics Demonstrator (MAD) to determine the basic properties of a newly identified, old and distant Galactic open cluster (FSR 1415). The MAD facility remarkably approaches the diffraction limit, reaching a resolution of 0.07 arcsec (in K) that is uniform over a field of ~1.8 arcmin in diameter. The MAD facility provides photometry that is 50 per cent complete at K ~ 19. This corresponds to about 2.5 mag below the cluster main-sequence turn-off. This high-quality data set allows us to derive an accurate heliocentric distance of 8.6 kpc, a metallicity close to solar and an age of ~2.5 Gyr. In addition, the depth of the data allows us to reconstruct (completeness-corrected) mass functions (MFs) indicating a relatively massive cluster, with a flat core MF. The Very Large Telescope/MAD capabilities will therefore provide fundamental data for identifying and analysing other faint and distant open clusters in the third and fourth Galactic quadrants.
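The quoted heliocentric distance translates into a distance modulus through the standard relation m − M = 5 log10(d / 10 pc); a one-line check in Python:

```python
import math

def distance_modulus(d_pc):
    """Distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 5 * math.log10(d_pc / 10.0)

# Heliocentric distance of FSR 1415 quoted in the abstract: 8.6 kpc
mu = distance_modulus(8600.0)
print(f"(m - M) = {mu:.2f} mag")
```

This is the true (extinction-free) modulus; the apparent modulus used in isochrone fitting would also include the K-band extinction toward the cluster.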

Relevance:

30.00%

Publisher:

Abstract:

Over the past 35 years, more than two thirds of the Cerrado's original expanse has been taken by agriculture. Even though some attempts have been made to conserve closed cerrado physiognomies, open cerrado physiognomies, which are richer in species and more fragile, have been systematically ignored. These open physiognomies are used by almost half of the Cerrado bird species, many of which are endemic. Using data from 11 surveys carried out in Cerrado landscapes, we asked what would happen to bird functional diversity if open cerrado species became extinct. Open cerrado birds would be able to keep, on average, 59% of the functional diversity. If they became extinct, on average 27% of the functional diversity would be lost. In this case, the remaining functional diversity would be lower than expected by chance in five sites. Although many functions were shared by both open cerrado and forest species, there was some degree of complementarity between them, highlighted by the decrease in functional diversity when the former became extinct. Destruction of open cerrado physiognomies would lead to habitat simplification, a decrease in bird functional diversity and, ultimately, a considerable impact on community functioning. Thus, open cerrado physiognomies must receive much more conservation attention than they currently do, because they maintain a high bird functional diversity that would otherwise be considerably diminished were open cerrado species to become extinct.
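The kind of comparison described above can be illustrated with a toy functional-diversity calculation. This sketch uses mean pairwise trait distance as a simplified FD proxy, and the trait matrix is invented for illustration; the study's actual FD metric may differ:

```python
import numpy as np
from itertools import combinations

def functional_diversity(traits):
    """Simplified FD proxy: mean pairwise Euclidean distance in trait space."""
    if len(traits) < 2:
        return 0.0
    d = [np.linalg.norm(a - b) for a, b in combinations(traits, 2)]
    return float(np.mean(d))

# Hypothetical trait matrix (rows = species, columns = standardized traits)
traits = np.array([
    [0.1, 0.9],   # open-cerrado specialist
    [0.2, 0.8],   # open-cerrado specialist
    [0.9, 0.1],   # forest species
    [0.8, 0.2],   # forest species
])
open_cerrado = np.array([True, True, False, False])

fd_all = functional_diversity(traits)
fd_without = functional_diversity(traits[~open_cerrado])
print(f"FD lost if open-cerrado species go extinct: {1 - fd_without / fd_all:.0%}")
```

Because the two guilds occupy different regions of trait space in this toy example, removing the open-cerrado species collapses most of the pairwise trait spread, mirroring the complementarity the abstract describes.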

Relevance:

30.00%

Publisher:

Abstract:

Felsic microgranular enclaves with structures indicating that they interacted in a plastic state with their chemically similar host granite are abundant in the Maua Pluton, SE Brazil. Larger plagioclase xenocrysts are in textural disequilibrium with the enclave groundmass and show complex zoning patterns with partially resorbed An-rich cores (locally with patchy textures) surrounded by more sodic rims. In situ laser ablation-(multi-collector) inductively coupled plasma mass spectrometry trace element and Sr isotopic analyses performed on the plagioclase xenocrysts indicate open-system crystallization; however, no evidence of derivation from more primitive basic melts is observed. The An-rich cores have more radiogenic initial Sr isotopic ratios that decrease towards the outermost part of the rims, which are in isotopic equilibrium with the matrix plagioclase. These profiles may have been produced by either (1) diffusional re-equilibration after rim crystallization from the enclave-forming magma, as indicated by relatively short calculated residence times, or (2) episodic contamination with a decrease of the contaminant ratio proportional to the extent to which the country rocks were isolated by the crystallization front. Profiles of trace elements with high diffusion coefficients would require unrealistically long residence times, and can be modeled in terms of fractional crystallization. A combination of trace element and Sr isotope data suggests that the felsic microgranular enclaves from the Maua Pluton are the products of interaction between end-member magmas that had similar compositions, thus recording 'self-mixing' events.
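Residence-time estimates of this kind rest on the characteristic diffusion relation t ≈ x²/D: the time needed for diffusion to relax a compositional gradient of length x. A sketch with purely illustrative numbers (the gradient length and diffusion coefficient below are hypothetical, not values from the study):

```python
import math

def diffusion_timescale(length_m, D_m2_per_s):
    """Characteristic diffusion time t ~ x^2 / D (order-of-magnitude only)."""
    return length_m ** 2 / D_m2_per_s

# Hypothetical inputs: a 50-micrometre compositional gradient in plagioclase
# and an illustrative cation diffusion coefficient of 1e-19 m^2/s.
t_s = diffusion_timescale(50e-6, 1e-19)
years = t_s / (365.25 * 24 * 3600)
print(f"residence time ~ {years:.0f} yr")
```

The strong contrast in D between elements is what makes the argument in the abstract work: slow-diffusing elements preserve growth zoning while fast-diffusing ones would have re-equilibrated unless residence times were short.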

Relevance:

30.00%

Publisher:

Abstract:

Background: The home-management of malaria (HMM) strategy improves early access to anti-malarial medicines for high-risk groups in remote areas of sub-Saharan Africa. However, limited data are available on the effectiveness of using artemisinin-based combination therapy (ACT) within the HMM strategy. The aim of this study was to assess the effectiveness of artemether-lumefantrine (AL), presently the most favoured ACT in Africa, in under-five children with uncomplicated Plasmodium falciparum malaria in Tanzania, when provided by community health workers (CHWs) and administered unsupervised by parents or guardians at home. Methods: An open-label, single-arm prospective study was conducted in two rural villages with high malaria transmission in Kibaha District, Tanzania. Children presenting to CHWs with uncomplicated fever and a positive rapid malaria diagnostic test (RDT) were provisionally enrolled and provided AL for unsupervised treatment at home. Patients with microscopy-confirmed P. falciparum parasitaemia were definitively enrolled and reviewed weekly by the CHWs over 42 days. The primary outcome measure was the PCR-corrected parasitological cure rate by day 42, as estimated by Kaplan-Meier survival analysis. This trial is registered with ClinicalTrials.gov, number NCT00454961. Results: A total of 244 febrile children were enrolled between March and August 2007. Two patients were lost to follow-up on day 14, and one patient withdrew consent on day 21. Some 141/241 (58.5%) patients had recurrent infection during follow-up, of whom 14 had recrudescence. The PCR-corrected cure rate by day 42 was 93.0% (95% CI 88.3%-95.9%). The median lumefantrine concentration was statistically significantly lower in patients with recrudescence (97 ng/mL [IQR 0-234]; n = 10) compared with reinfections (205 ng/mL [114-390]; n = 92) or no parasite reappearance (217 [121-374] ng/mL; n = 70; p <= 0.046).
Conclusions: Provision of AL by CHWs for unsupervised malaria treatment at home was highly effective, which provides an evidence base for scaling up implementation of HMM with AL in Tanzania.
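The day-42 cure rate in such studies is a Kaplan-Meier survival estimate, which discounts censored patients (lost to follow-up or withdrawn) rather than counting them as failures. A minimal product-limit sketch in Python (toy data, and tie-handling conventions are ignored):

```python
def kaplan_meier(times, events, horizon):
    """Kaplan-Meier survival probability at `horizon`.

    times  : follow-up time for each patient (days)
    events : 1 if the failure (e.g. recrudescence) occurred then, 0 if censored
    """
    at_risk = len(times)
    surv = 1.0
    for t, e in sorted(zip(times, events)):
        if t > horizon:
            break
        if e:
            surv *= (at_risk - 1) / at_risk  # product-limit step at each failure
        at_risk -= 1
    return surv

# Toy cohort: 10 patients, one censored at day 14, one recrudescence at day 28
times = [14, 28] + [42] * 8
events = [0, 1] + [0] * 8
print(f"cure rate by day 42: {kaplan_meier(times, events, 42):.1%}")
```

The censored patient at day 14 simply leaves the risk set, so the single failure at day 28 is divided over 9 patients at risk, not 10.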

Relevance:

30.00%

Publisher:

Abstract:

Background: qtl.outbred is an extendible interface in the statistical environment R for combining quantitative trait loci (QTL) mapping tools. It is built as an umbrella package that enables outbred genotype probabilities to be calculated and/or imported into the software package R/qtl. Findings: Using qtl.outbred, the genotype probabilities from outbred line cross data can be calculated by interfacing with a new and efficient algorithm developed for analysing arbitrarily large datasets (included in the package), or imported from other sources such as the web-based tool GridQTL. Conclusion: qtl.outbred will improve the speed of calculating probabilities and the ability to analyse large future datasets. The package enables the user to analyse outbred line cross data accurately, with effort similar to that required for inbred line cross data.

Relevance:

30.00%

Publisher:

Abstract:

Delineation of commuting regions has always been based on statistical units, often municipalities or wards. However, using these units has certain disadvantages, as their land areas differ considerably. Much information is lost in larger spatial base units, and distortions occur in self-containment values, the main criterion in rule-based delineation procedures. Alternatively, one can start from relatively small standard-size units such as hexagons, which yield much greater detail in spatial patterns. In this paper, regions are built by means of intrazonal maximization (Intramax) on the basis of hexagons. The use of geoprocessing tools specifically developed for the processing of commuting data speeds up processing time considerably. The results of the Intramax analysis are evaluated against travel-to-work area constraints, and comparisons are made with commuting fields, accessibility to employment, commuting flow density and network commuting flow size. From selected steps in the regionalization process, a hierarchy of nested commuting regions emerges, revealing the complexity of commuting patterns.
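One merge step of the Intramax procedure can be sketched as follows: every candidate pair of zones is scored by their mutual flows normalized by the zones' total outflows and inflows, and the best-scoring pair is fused into one region. This is a simplified sketch (the toy flow matrix and the handling of intrazonal flows are illustrative, and no stopping rule is shown):

```python
import numpy as np

def intramax_step(flows):
    """One Intramax merge: join the pair (i, j) maximizing
    f_ij/(O_i*D_j) + f_ji/(O_j*D_i), where O/D are row/column totals."""
    n = flows.shape[0]
    O = flows.sum(axis=1)   # total outflow per zone
    D = flows.sum(axis=0)   # total inflow per zone
    best, pair = -1.0, None
    for i in range(n):
        for j in range(i + 1, n):
            if O[i] * D[j] == 0 or O[j] * D[i] == 0:
                continue
            score = flows[i, j] / (O[i] * D[j]) + flows[j, i] / (O[j] * D[i])
            if score > best:
                best, pair = score, (i, j)
    i, j = pair
    # Merge zone j into zone i and drop row/column j (intrazonal flow
    # accumulates on the diagonal of the merged zone).
    flows[i, :] += flows[j, :]
    flows[:, i] += flows[:, j]
    merged = np.delete(np.delete(flows, j, axis=0), j, axis=1)
    return merged, pair

# Toy commuting matrix between four zones (rows = origin, cols = destination)
f = np.array([[0, 30, 2, 1],
              [25, 0, 1, 2],
              [2, 1, 0, 20],
              [1, 2, 18, 0]], float)
f2, pair = intramax_step(f)
print("merged pair:", pair)
```

Repeating the step until one region remains yields the nested hierarchy the abstract mentions; cutting the merge sequence at selected steps gives regionalizations at different scales.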

Relevance:

30.00%

Publisher:

Abstract:

A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise an agreed vocabulary by which resources can be described and, in particular, their attribution asserted (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process's execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
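The flavour of such a mapping can be sketched as a small lookup from Dublin Core terms to OPM-style causal relations. This is purely illustrative and far simpler than the paper's actual mapping (in OPM proper, for instance, dc:creator relates an agent to the implied creation process rather than to the artifact directly; the sketch collapses that process):

```python
# Illustrative, simplified mapping of a few Dublin Core terms onto
# Open Provenance Model concepts (not the paper's actual mapping).
DC_TO_OPM = {
    "dc:creator": "wasControlledBy",   # agent behind the (implied) creation
    "dc:source": "wasDerivedFrom",     # artifact the resource derives from
    "dc:date": "time-annotation",      # an annotation, not a causal edge
}

def dc_record_to_opm_edges(resource, record):
    """Turn a flat DC metadata record into (effect, relation, cause) triples."""
    edges = []
    for term, value in record.items():
        relation = DC_TO_OPM.get(term)
        if relation and relation != "time-annotation":
            edges.append((resource, relation, value))
    return edges

edges = dc_record_to_opm_edges(
    "report.pdf",
    {"dc:creator": "alice", "dc:source": "survey.csv", "dc:date": "2013-01-01"},
)
print(edges)
```

The asymmetry the abstract describes shows up even here: attribution-style terms map naturally onto edges involving agents, while descriptive terms such as dc:date survive only as annotations on the graph.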

Relevance:

30.00%

Publisher:

Abstract:

HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. 
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
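The separation of system metadata from science metadata in a Resource, with type-specific elements layered on common ones, can be sketched with plain dataclasses. All names below are illustrative, not HydroShare's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    """Elements managed by the platform, common to all resources."""
    resource_id: str
    owner: str
    resource_type: str            # e.g. "time-series", "model-instance"

@dataclass
class ScienceMetadata:
    """Domain description supplied by the scientist."""
    title: str
    keywords: list = field(default_factory=list)
    extra: dict = field(default_factory=dict)   # type-specific elements

@dataclass
class Resource:
    system: SystemMetadata
    science: ScienceMetadata

r = Resource(
    SystemMetadata("res-001", "jdoe", "model-instance"),
    ScienceMetadata("Streamflow model run", ["hydrology", "runoff"],
                    {"execution_engine": "grid"}),
)
print(r.system.resource_type, "|", r.science.title)
```

Keeping the two blocks separate lets the platform evolve system-level bookkeeping independently of the science description, while the `extra` slot stands in for the per-type elements (e.g. model execution metadata) the abstract mentions.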

Relevance:

30.00%

Publisher:

Abstract:

Recently, two international standards organizations, ISO and OGC, have carried out standardization work for GIS. Current standardization work for providing interoperability among GIS databases focuses on the design of open interfaces, but it has not considered procedures and methods for designing river geospatial data. River geospatial data has its own model, and when data are shared through open interfaces among heterogeneous GIS databases, differences between models result in the loss of information. In this study, a plan was proposed both to respond to these changes in the information environment and to support a future Smart River-based river information service, by assessing the current state of the river geospatial data model and improving and redesigning the database. Primary and foreign keys, which distinguish attribute information and entity linkages, were redefined to increase usability. The attribute-information tables and the entity relationship diagram were newly redefined to redesign the linkages among tables from the perspective of a river standard database. In addition, this study sought to expand the current supplier-oriented operating system into a demand-oriented one by establishing efficient management of river-related information and a utilization system capable of adapting to changes in the river management paradigm.
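The role of explicit primary and foreign keys in linking entity and attribute tables can be sketched with SQLite from Python. Table and column names here are illustrative, not the actual river standard database schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # enforce referential integrity

# Entity table: each river row is identified by an explicit primary key.
con.execute("""
    CREATE TABLE river (
        river_id TEXT PRIMARY KEY,
        name     TEXT NOT NULL
    )""")

# Attribute table: the foreign key ties each facility to exactly one river.
con.execute("""
    CREATE TABLE river_facility (
        facility_id TEXT PRIMARY KEY,
        river_id    TEXT NOT NULL REFERENCES river(river_id),
        kind        TEXT NOT NULL   -- e.g. levee, weir, gauge station
    )""")

con.execute("INSERT INTO river VALUES ('R001', 'Han River')")
con.execute("INSERT INTO river_facility VALUES ('F001', 'R001', 'gauge station')")

rows = con.execute("""
    SELECT r.name, f.kind FROM river_facility f
    JOIN river r ON r.river_id = f.river_id""").fetchall()
print(rows)
```

With the foreign key enforced, an attribute row referencing a nonexistent river is rejected, which is exactly the kind of entity-linkage guarantee the redesigned keys are meant to provide.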

Relevance:

30.00%

Publisher:

Abstract:

The current economic environment requires companies to know, interact with, differentiate and increasingly personalize products and services for their customers. This scenario calls for management tools and models to handle customer relationships, with the goal of enabling the company to perceive and respond quickly to consumer demands. This work reviews CRM (Customer Relationship Management) concepts and describes the implementation of a customer relationship management tool in a consortium company. The work reflects a need identified in the company's strategic planning, and information technology tools and database software were used to support the purposes of business management. As a result, the company now operates a Data Base Marketing system, created to assist the professionals involved in customer service and relationship management. The Data Base Marketing system is used to collect customer service data such as service histories, registration data, demographic profile, psychographic profile and customer value category. During interactions with customers, the system eases the specialists' work and improves the quality of customer service, meeting the needs of the company's various specialists in areas such as sales, service quality, finance and business management. The process began with the formation of an internal working group to discuss strategies and an implementation schedule. The group's first decision was to develop the software in-house in order to fully serve the company's core business. The project drew on the business knowledge of the company's professionals and on the help of external specialists and consultants. The details of the project, as well as the steps of the action research, are described in the body of the dissertation.